Big Technology Podcast - Can We Trust Silicon Valley With Superintelligence? — With Nick Clegg
Episode Date: November 19, 2025. Nick Clegg is the former president of Global Affairs at Meta and deputy prime minister of the UK. Clegg joins Big Technology Podcast for a discussion about whether Silicon Valley should be trusted with superintelligence and the risks it will navigate on the way there. In the second half, we also talk about how Silicon Valley uses money to buy influence and wield power in Washington. Tune in for a frank discussion about the economic, business, and political realities facing the tech industry as it pursues its most expensive and ambitious project. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You'd want, surely, politicians to do exactly the thing you've just said, which is hypocritical,
is they take the money and they invite person X from Silicon Valley company Y to the fishing weekend or the golf retreat.
You want them, surely, then still to be able to get up on their hind legs and excoriate those companies and apply pressure to them.
But they've done nothing to big tech.
The former president of Global Affairs at Meta and deputy prime minister of the UK joins us for a conversation about how to save the internet
and whether we should trust Silicon Valley with super intelligence.
That's coming up right after this.
The truth is AI security is identity security.
An AI agent isn't just a piece of code.
It's a first-class citizen in your digital ecosystem, and it needs to be treated like one.
That's why Okta is taking the lead to secure these AI agents.
The key to unlocking this new layer of protection is an identity security fabric.
Organizations need a unified, comprehensive approach that protects every identity, human or machine, with consistent policies and oversight.
Don't wait for a security incident to realize your AI agents are a massive blind spot. Learn how Okta's identity security fabric can help you secure
the next generation of identities, including your AI agents. Visit Okta.com. That's O-K-T-A.com.
Capital One's tech team isn't just talking about multi-agentic AI. They already deployed one.
It's called Chat Concierge and it's simplifying car shopping. Using self-reflection and layered
reasoning with live API checks, it doesn't just help buyers find a
car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in
value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital
One. Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech
world and beyond. Today, we are joined by Sir Nick Clegg, the former president of global affairs at
Meta and the former deputy prime minister of the UK and the author of this great new book,
How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict.
It's going to be a great conversation. Nick, great to see you again. It's good to be here.
Let's start with, so you spent a number of years advising Mark Zuckerberg through the tricky
minefields of running Facebook and being this sort of global lightning rod because of the power
and influence that Facebook had. So let's just do a little thought experiment to begin with. You are
advising Sam Altman. What do the next five years look like? Like,
What does Open AI have to be prepared for as it grows bigger and stronger?
Wow, what a question.
Certainly one of the things which will be top of my list, and it's hardly, this hardly betrays great insight.
But I think this issue of the level of emotional dependency that people have on these AI entities, as they become more and more sophisticated, and the,
psychological and ethical dilemmas that will throw up, particularly for vulnerable adults and,
most importantly of course, for kids and for teens. I think it's just going to become an issue that is
going to grow and grow and grow, because the level of personalized intimacy in this
experience is like no other we've ever experienced online. And I would, you know, I would
strongly urge Sam Altman and his team, and they appear to be taking some steps, but I suspect they'll need to go a lot
further, to get well, well, well ahead of that. And probably take a more conservative stance
than, of course, the commercial imperatives will, you know, will be driving them in the other
direction. That's the age-old sort of dilemma for these companies, which is they want to
compete with each other ferociously and experiment and push the boundaries. But I think when it
comes to, well, you've obviously got a bunch of litigation going on already in the sad cases where
some, you know, kids have taken their own lives and so on. But I just think the, um,
the impersonation effect, emotional and otherwise, of these AI entities is so dramatically different to anything we've dealt with before.
So that's something I'd probably put right at the top of my list because that's something which in the world of politics doesn't divide politicians.
It unites them.
So that's for sure.
Before we move on, I think we should just pause and talk about this a little bit because we've talked about this a lot on the show.
It seems like OpenAI is actually going in the other direction.
They are enabling, so Sam Altman's perspective is, let's let adults be adults, and if they want to have
not only romantic attraction or romantic feelings, partnership with ChatGPT, that's fine, but even going
to the point where they are enabling erotic uses or role play between people and ChatGPT. And this,
I think, probably stems from the fact that people really built deep relationships, lots of people
built deep relationships with 4o, which was this old model that OpenAI eventually did away with.
And there was such a backlash that maybe they're responding to the demand.
So is this short-sighted?
Well, I think it certainly is not sustainable if that decision was made on the basis that somehow
the problem of over-reliance, over-dependency, the effect on teen mental and emotional well-being
is somehow fixed. And I don't follow these things quite as closely as I used to,
but my reading of the assertion made by OpenAI was that they can take this risk with more
edgy content, particularly sort of sexualized content for adults, which of course is a massive
use, I mean, sex and pornography is always, of course, one of the leading
use cases of any new communication technology, but particularly this one, that that was now
possible because the problem about the exposure of kids to experiences which might make them
more vulnerable to all sorts of harms, that that was somehow fixed.
And I'm not sure if I've seen proof that that has been fixed.
Certainly my knowledge of the old world, the old social media world, suggests to me that
that very sharp distinction between it's okay to allow adults to have more edgy content
because we've somehow gated the content which, or the experiences which are shared with younger
people, that all, of course, relies on a pretty watertight technological solution to how
you verify who falls on which side of that, you know, that age barrier.
And certainly in the old world of social media, that's still not fixed.
It isn't.
I mean, to be fair, there are some states across the US, I think California most recently and others,
who I think are finally doing roughly the right thing,
which is creating this sort of one-and-done, app store-based adjudication on age,
which I think is much simpler for parents and so on.
So I think they're moving there, but in a pretty patchy way.
So I just don't think it's unreasonable for society at large, through politics,
through the democratic process, to say, hey, guys, like, we get it.
You want adults to have a more edgy experience, and you've got other competitors who are
taking bigger risks and you don't want to sort of be outflanked by them.
But let's just kind of, can we just do that once we've actually sorted out how to keep
younger people age-gated in a way that everybody agrees works?
And that just is not the case so far.
And look, there is, I think, some tendency in Silicon Valley,
and dare I say it amongst the sort of podcast and commentariat classes, to say,
let's learn the lessons from, you know, the last 20 years of social media.
I mean, what's the phrase?
History rhymes, but it doesn't exactly repeat itself.
I think sometimes that comparison is a little overworn.
But this surely is one where the comparison is relevant.
It's like it is so obvious that everybody, it doesn't matter whether you're a kind of libertarian tech bro
or you're working for an organization that's trying to defend the interests of kids,
you'd look back and think, wouldn't it have been great if everyone had just started earlier on this journey,
which, to be fair now, is actually gathering pace
as people are trying to work out exactly how to provide
more age-appropriate experiences to teens.
So you're in the room with Sam Altman.
Right.
So I'd say that would be number one.
But I'm just saying, like,
let's say you're talking this through with him.
What do you tell him the next five years is going to look like
if this erotic use of ChatGPT continues to go the way that it's going?
I think if it goes...
Or even romantic, not even erotic.
Yeah.
I actually don't have a huge problem with the idea that adults should be able to
avail themselves of romantic, I mean, exactly where you draw the line between romantic and explicitly
sexualized is, of course, a tricky one. But I've got no problem with the idea that adults
can make their own judgments in this area. And if this is something which is kind of useful to
them or stimulating to them, you know, I've got no problem with that
at all. I would say to Sam Altman, listen, if you don't want to spend most of your time giving
evidence in D.C., because that's what's going to happen, it'll be hearings, and you actually
want to continue to be lauded as a generational tech leader,
I would just kind of say, hey, you know, be careful what you wish for, because if you rush into this too quickly, without having done the homework on the difficult stuff, and it is difficult.
It is really, it is way more difficult than people say, oh, why can't these tech companies just fix everything for young people?
It is more difficult, but given that that is not fixed and that assertion by OpenAI is demonstrably wrong, I would say to him, you will regret this, because maybe not now, maybe not next year, but in a few years' time, I can guarantee you there will be a societal backlash. It could actually potentially be much greater than it was for the social media apps, because the level of intimacy, of emotional dependency, is going to be so much greater.
So I would say to him, you know, what's the phrase?
Festina lente, you know, hurry slowly, would be my counsel to him on this topic in particular.
Yeah, it's fascinating that you led with that.
And, you know, one thing that I've found in my life is it's pretty easy to slide into a relationship.
It's tough to get out of once you're there.
And if you're a tech company, you're starting millions of relationships with your users.
It could be tricky to pull those apart.
So, you know, I mean, you study and know Silicon Valley sort of culture, perhaps as well as anybody.
It's just, you know, these guys are tech leaders, and they're all guys.
They're tech leaders.
They're extraordinarily accomplished technologists, entrepreneurs.
They're all highly, highly competitive with each other.
They're not relationship experts.
They're not politicians.
They're not philosophers.
They're not ethicists.
I sometimes think that,
because they're so brilliant at what they do in the commercial and technological field,
we kind of think they're going to arrive at the right judgment on some of these other things.
They're not.
And we shouldn't expect them to.
And we shouldn't be surprised if they don't, which is why I think on things like that,
particularly this issue of what is appropriate for adults and what is appropriate for non-adults
and how do you make that distinction work.
It's kind of, we shouldn't be waiting for the tech companies to decide on that.
And I think it's actually a good thing.
It's messy.
It's messy because it creates such an erratic
regulatory environment. But I actually think it's a pretty good thing that some of the US states
frustrated as they are, as I think many people are, that there's so little action in DC are starting
to take some of these matters into their own hands. So we'll go back to some more stuff that's coming
down the line for AI, but this is a good moment to pause and think about the strategy of your
former employer, Meta, because Meta, Mark Zuckerberg, they've put billions and billions of dollars
into trying to build personal superintelligence, AI friends.
You know, I was reporting on Meta 10 years ago, back when it was Facebook, literally 10
years ago, 2015, and people within the company were talking about how they wanted to build
an AI friend.
And is it that the company sees that this application of an AI friend will be so compelling
to people that they may want to spend more time with it than their human friends?
And that's why they want to go down this route.
So the conversations I had when I was still working in Silicon Valley with folk in Meta and elsewhere were interesting, because, you know, I've never written a line of code myself. I'm not an engineer. I don't pretend to be. So I always asked lots of slightly dumb-ass questions. And I'm old enough and was senior enough just to ask the dumb-ass questions, and people would bear with me and sort of explain to me. And I would say, so what is this friend thing?
Like how is it a friend?
And I remember being told, no, Nick, just relax.
It's kind of like, you know, kids have got, they've got deep relationships with their teddy bears, with their pets, with obviously with celebrities.
People project themselves onto celebrities in an extraordinarily intense way.
It's kind of cool that, in the future, you know, a teen might have seven best friends.
And three of them might be human and four of them might be, you know, AI, or maybe the ratio is different.
And this made me think, because, I don't know about you, friends are probably more important to me than...
I think a life which is rich in friendships is one of the
greatest sort of defining features of a life well led.
And I've got some dear, you know, some deeply, deeply sort of close friendships which I've
had during my whole life.
And actually when I think about my, some of my friends are really annoying sometimes.
They're kind of really, they're a total pain.
I love my friends, but sometimes, God, they can be an absolute pain.
But what I mean is that friendship at a human, a profound human level,
is a constant act of compassion and compromise, of empathy, of joy,
but also of irritation, because we have to work around each other,
and we all go through ups and downs in life and so on.
And I realized that actually what they were talking about when they talked about friends,
it's not friends at all, because you're not really having to adapt yourself;
the AI entity is entirely adapting itself to you.
So my fear, but it's a slightly intuitive one, is you're not talking about friendship,
which is a complicated thing where you have to have the emotional maturity to try and understand
someone else's perspective and put your own feelings aside for a minute and prioritize them
and all that kind of stuff, which is the absolute heart of friendship.
And so important to be an adult, to be a well-rounded adult, that you realize that your life
is not all revolving around you, it's also around your friends and so on.
I suddenly thought, wow, these things, they're not friends, they're friends as a service.
And that worries me a bit. It doesn't worry me on a technological level.
It worries me on a human level, because I think that could foster immense narcissism.
Oh, yes.
And sort of neediness and this sort of expectation that your friends are always going to be there for you, sort of 24 hours a day, in exactly the same, you know, fresh-voiced way.
And so, I just, remember, I'm sure these are very smart people
who are working on this in Silicon Valley, I'm sure the debate has moved on.
But certainly when I first started asking questions about this, some years ago when I was there,
I also heard exactly what you suggested.
Wouldn't it be great if, and of course, then what folk do,
they always take the most extreme or the most heart-wrenching example:
someone who's completely lonely and hasn't got friends.
And of course, who's going to deny that it's great if they can find companionship,
or as we've already seen, they can unburden themselves for mental health purposes or if they're
dealing with post-traumatic stress disorder and so on. And I'm not denying any of those use cases.
In fact, I think I'm a big advocate for some of these, for AI in the use of mental health, for instance,
certainly to triage basic conditions. But to make a claim that it is on a par with the complexity of the
give and take of human friendship, I think, displays an extraordinary Achilles heel in the kind of
basic philosophy of some technologists. Because that isn't friendship, that is
friendship as a service. Fine, call it something else, call it a companion, call it an assistant,
call it an AI, but don't pretend it has the richness that true human relationships do, which, as I
say, are often as infuriating as they are uplifting. And I certainly would
pause a little bit if I was to think that future generations were going to rely on this sort of
on-tap unctuousness that you get from AI entities. I'm not sure if that's the best way to
raise kids to understand the human condition. Yeah, I don't think it's a great way either.
There are good applications, like we will hear good stories of the applications. We had the
Replika CEO here. Right. And she said that, you know, she'd been invited to weddings
between people and their AI assistants.
And by the way, built on like previous generations' technology.
So you can only imagine that's going to continue.
But I think one story that she told that stuck with me
was that somebody who had been through a really rough divorce
basically swore off dating humans,
formed a relationship with a Replika counterpart or companion,
whatever you want to call it, AI friend, or more than that.
And that AI friend basically gave this person the confidence
to start dating again.
and they started dating humans again, and they have a human partner. Okay, but I want to ask this one question.
But those are great stories. We shouldn't deny those. But, yeah, I'm with you a hundred percent.
It's just, we'll hear those from the tech companies. We won't hear the other side.
But I just want to, not hammer down, just touch on one more thing,
and then we can move on. Just from a strategy standpoint, I wanted to get
your perspective on the product. Is Meta's thinking that this is going to be
such a popular product, you know, that OpenAI will threaten it in this way?
Yeah, I don't know is the answer.
I genuinely don't know.
Right.
I've not been there for a while.
I clearly think it is in the DNA of Meta to believe that it is a company that, and it demonstrably
does, has a kind of handle on the social aspects, on the sort of way in which people
develop intense relationships by way of, and with,
increasingly online experiences.
So that's kind of, that's in their kind of DNA.
What I just don't know, and again,
I'm now talking as an interested outsider,
I genuinely don't know, is...
clearly they're throwing a huge amount of money
at both talent and infra to compete at the very edge
of the best frontier models.
they've also got this, you know, fast expanding wearables business,
which, of course, will be digesting a huge amount of sensory data,
which is very, very important as these models evolve
from large language models to something far, far more based on visual and sensory data.
So they are assembling the remarkable ingredients to deliver very powerful experiences.
It's not entirely clear to me whether what actually in the end will happen is that
the existing menu of apps and services that meta delivers are just going to massively improve,
as they already are for advertisers.
And if you look at the AI tools that we use for advertisers, or to your point, is it also going
to branch into robotics and AI, you know, friendships and so on?
My experience of Mark Zuckerberg, and it's one of his admirable qualities, is he'll throw
everything at everything.
And then he's very, very adept at experimenting,
I mean, with extraordinary speed, to say, that works, that doesn't work.
So I suspect that's the way they're going to, I think that would be in keeping with the
sort of ambitious philosophy of the company.
But it's just very difficult for me at this stage to know which one is actually really
going to sing, if any of them are really going to, you know, fly.
It's clearly going to do a tremendous amount for the existing
chassis of Meta products.
I mean, it's going to lift all of those boats.
And I'm sure that AI entities, companions, friends will definitely be part of the menu, how successful or good it will be, how much people will actually trust them, whether they will navigate the issues we've just talked about in a thoughtful way.
Well, we'll see.
Okay, I'm going to answer my own question.
Yes.
I think it is a major competitive threat for Meta.
I think Meta is a time...
So what is it?
The AI friends, AI companions.
Meta is a time spent company.
That is what people care about their time spent, engagement, growth of products.
If this technology keeps going the way that it's going, the AI friend will be like the stickiest tech product.
And that, to me, I think, is something that they're paying close attention to.
I'm sure.
Because, as you know, the sort of what I'd call the legacy business, which is an extraordinary way to describe something which is used by 4 billion people and still is generating revenue hand over fist.
But anyway, what's called the legacy business, of course, interestingly, is becoming less distinct from its competitors.
So, you know, when I arrived at Facebook as it was then, the thing that I always found
very interesting was actually the fact that it was technology which humans could use to
communicate with each other, share content which humans had created to express themselves
and so on now, and you see it particularly on Instagram, the whole thing has shifted more
and more and more to what's called in the jargon, unconnected content, in other words, content
that you're seeing, which is being recommended to you algorithmically from the furthest reaches
of the internet regardless of whether it has anything to do with you or your friends or the groups
you're on or so on and of course increasingly content which is you know synthetic content
which is automatically or it was recommended to you by automated systems and to that extent it's
interesting that, almost imperceptibly, the Meta social media apps are now competing more and more
with TikTok and YouTube. They're not really stages on which people
generate content and communicate with each other. They are, of course, pipelines through which entertaining
and engaging content is sort of fired at people. So the
market distinction of Meta's existing products is less distinct from some of those
other major players. They're all now roughly in the same part of the
Venn diagram. So, yeah, that's new. Yeah, I mean, the content, because the sort of
The social graph-based thing was just pretty distinct.
It was distinct, and it gave them an extraordinary moat.
That's different now.
Yeah, to me, the concept of social media really is dead.
You have unconnected content, which they have with reels, so they're playing there,
and then you have messaging groups.
And they have WhatsApp and messengers, so they're playing there.
But this first era of, you know, share with your friends and your friends are the best
recommender of content to you is gone.
No, I mean, listen, you can lament it.
but that's the way that the world's gone.
And in a sense, it's a demonstration of the extraordinary impact.
I think sometimes even more outsized impact than many people appreciate of TikTok.
I mean, the TikTokification of that whole industry.
And recommender systems.
I mean, that's an AI thing too.
Once, I mean, we're going to talk about China, I think, coming up.
But once people in China got the recommender system to the point that it was,
then it was off to the races from there.
That's right.
That's right.
I mean, listen, sort of folk like me sort of lament it a
bit, because I generally find humans more interesting than machines. And, you know,
I always liked that, in the old model, it was machines that were allowing humans to be very
human about themselves and with each other. And I sort of feel it's become a much more passive
rather than interactive experience. And I certainly lament that. No, it's great you brought that up,
because we spoke three years ago at Davos. And I ended it with this
question about the Mappiness project, which sort of mapped which activities give people
the most happiness. And I mentioned that social media was like dead last on this Mappiness
project. And you answered in a very interesting way. You said, first of all, I doubt that, you know,
three billion people would be that unhappy that they would come back to these products every day,
which is interesting. We won't, I don't think, spend too much time on addiction and all
that stuff today. But then the other thing you said was, what our research has found is that
passive scrolling actually has a negative correlation with happiness, whereas engagement
has a positive one. And it's interesting that passive scrolling makes you less happy. Yeah, right, right.
Engaging with stuff makes you more happy. Yeah. Which is intuitively kind of obvious, right? But all
social media is passive scrolling. Well, it's certainly moving in that direction. Yeah. No, I think
the passivity of the experience is quite different to the more active and interactive
experience of before. But, you know, as you say, it's also been accompanied by a very significant
shift to much more intimate forms of communication. Right. Messaging apps. Messaging apps.
Discord. Yeah. And I certainly see in my own family, my friend group, that's where people spend
a lot more time. And that is very interactive. It's highly interactive. And so, who knows, maybe if you
look at the whole picture, it's not quite as blunt or as dismal a shift from active, you know,
active online citizenship to sort of passive, bovine consumption of recommended content.
I think it's way more mixed than that because people don't, of course, use one app.
They use multiple apps, particularly young people.
I saw a stat that suggests that, you know, a young American teen uses over 40 apps a month.
But also, as you say, this extraordinary growth of messaging apps as a forum in which people
express themselves.
So maybe, you know, maybe that's a big way to offset that other trend.
We have a very active discord community around this podcast.
Oh, right.
Big technology.
And I was on it this weekend and I was just thinking to myself, how funny it is that,
effectively, the social internet started with the chat room.
Then it went to all these platforms.
We're back in the chat room.
Yeah, we're back in the chat room.
But what does that say about human nature, right?
It says we have an absolutely overwhelming impulse to communicate,
to express ourselves and to communicate with people in settings that we
feel kind of comfortable in, that we can kind of visualize, that are containable
and that gather people in around similar interests. That's not going to go away. So, I mean, that's
millions of years of evolution, it seems to me, right? That's anthropology more than technology, for
sure. And we'll see, I mean, speaking of wanting that comfort, how much of that comfort, to your
point, is going to be delivered by people versus AI friends? That's going to be a big question.
And then back to sort of stitching it all together, inevitably, those who might
find, who might gravitate towards AIs for the vast bulk of that communication, will be
those who may just find it kind of more difficult or awkward to communicate in the,
you know, in your Discord group and elsewhere.
And that then, of course, becomes a slightly self-selecting group, certainly of early adopters.
And that's a problem, because you would think that those are
probably the most influential people, right, by technology? Right, exactly. No, no, but
I mean, I think that's the nature of early adoption. It's folk who are either
just open to the ingenuity of new things, and/or really need it. Yeah. And that
is exactly, you know, one of the dilemmas. It's one of the reasons, back to
your opening question, why I think you need to be super mindful of that, because that will
have a big societal and political reaction over time if that's not handled intelligently.
So the big AI labs, they're filling their ranks with your former colleagues.
Yes, yes. Fidji Simo is the head of consumer apps. She's great. Yeah. At OpenAI, she ran the Facebook
app for a while. Kevin Weil headed product there. They're great people. Yeah, I've met them.
I've met both of them. Mike Krieger, who was just on the show, Instagram co-founder,
head of product at Anthropic. There's many more. Yeah. So I guess I'm curious to hear your
perspective on why these companies have decided that they want folks from social
media to take the lead here. Obviously they know product, but it's also, I don't know if
I'm not getting the full picture here about an AI product, but social media, it seems like it wants
you to just engage as much as you can, because it will show you more ads, whereas with
AI, if it just gets you to engage for engagement's sake, that's actually pretty expensive
for them to serve. So what's your perspective on this?
My guess is it's much simpler than that.
These are very smart people who've been in rapidly scaling businesses.
And, you know, if you're Sam Altman or Dario Amodei, you're going, wow, I'm sitting on this rocket ship and it's kind of taking off.
I need people around me who understand scale, who can ship products quickly and scale them very quickly,
and understand how to operate in complex and very fast-moving environments.
And if you basically take that as one of your list of requirements or expectations,
then of course people from companies like meta, you know, feature high up on the list.
So, no, I wouldn't have thought it's, and that's my assumption at least,
I wouldn't have thought it's through the prism that you've just
described, which is one is engagement with commercial upside, the other one is
sort of expensive engagement,
because I would have thought at the moment
what they're just racing to do
and they're clearly burning
a lot of money in pursuit of this objective
is just to expand
and get people using these products
and that in a sense is a bit of a playbook
from Mark Zuckerberg
I mean Mark it's one of his
most enduring principles
which is build technology
which people find engaging
you'll work out a way later to monetize that
I mean, for how many years
was it that WhatsApp barely, you know, generated a penny of revenue?
And so maybe that's also something there: the, you know, the
new AI hyperscalers, that's the page they're taking out of the Meta playbook.
It's just that the ROI for these AI companies has to be so much better.
So much.
Ads alone cannot pay for this.
No, no, but that, but that's the, I was about to say, $10 million question.
It's the whatever, it's a multiple trillion dollar question.
Exactly.
And no one seems to have an
answer to that. And we're clearly in this rather odd position where the infrastructure investment
dwarfs anything that happened in the run-up to the dot-com boom. So it's not enough to say,
oh, well, you know, this has happened before. Yeah, sure, there was a market correction. People
went bust. A lot of bunch of companies disappeared. But we also had this wonderful infrastructure
that we then repurposed for other things. But this is just off the scale compared to that.
I'm sure if you tot up the amount of hundreds of billions that were spent on telecoms infrastructure by some of those, you know, telcos compared to the...
I think this year alone will exceed that.
We've got $300 billion plus in CapEx from Big Tech this year and $1 trillion committed by OpenAI this year.
Yeah, exactly.
So, and no one's explained to me, I'm not, you know, I'm not a financial analyst, but no one's explained to me how you recoup that money.
So clearly at some point...
Something's got to right-size.
Someone's going to lose a bunch of money.
There's going to be a correction.
I kind of think that the folk who are in the driving seat here,
whether it's the new hyperscalers,
Anthropic and OpenAI notably amongst them,
or the established players, Google, Meta, Amazon, and so on, Microsoft,
I just kind of think they're locked in a thing where it says,
yeah, we don't know where this is going to go.
But we know one thing for sure.
If we don't compete, we're sure to lose.
So we don't know whether we're going to win,
but we don't know what the shakeout's going to be,
but the surest way to lose is just not to throw as much money at it as your next competitor.
So they are in a bit of a kind of, you know, spend whatever it takes, mania.
And there is a sort of manic feel about the whole thing.
That's obvious.
Right.
And so what they do, they've told us what they basically need to pay the money back.
Mark Zuckerberg's talked about developing superintelligence.
Sam Altman's talked about superintelligence and artificial general intelligence.
Why does superintelligence equal, yeah, why does that, why is that a pot of gold necessarily?
Because, I mean, I think this is it: if you're able to hold onto it and no one else can do it.
Well, if you're able to build technology that's smarter than every person, I mean, if you think about what's economically valuable, that would be the most economically valuable thing ever created.
If you can hoard it.
This is a bit I've never quite understood.
And I may be completely wrong here, but I'd love to know what you think.
But I've never quite fully understood why that would be a hoardable asset that only one company has, keeps under lock and key, and everybody else is then thwarted.
It seems to be much, much more likely that it's going to be a more diverse and dispersed technology than that.
There's a lot of early evidence that some of the most useful and commercial models are quite specific ones, small, specific ones.
I just, I don't know, until I, at least, and this is my rather sort of primitive way of looking at these things, until I have a bit more information to be able to visualize exactly what these slightly hand-wavy terms like AGI and superintelligence really mean,
I find it super difficult to understand why the assumption is that a winner-takes-all logic would prevail.
Maybe it will, but in a world where just this week, wasn't it?
I've forgotten the, what's the Hong Kong-based company that's just produced another particularly good open-source AI model for agentic and coding purposes?
MiniMax.
Is that what it is?
Yeah, M2 or something.
But anyway.
Kimi K2.
Yeah, maybe, maybe.
Anyway, you know, we're in a world where it's now become just standard, it's become conventional wisdom,
that the world's largest autocracy is churning out the
world's most advanced open-source AI models, and
I saw Chesky the other day saying that Airbnb relies very heavily on DeepSeek for their own work and so on.
I don't know.
That seems to me to be an indicator of just how versatile and dispersed the technology is,
rather than how much it can be hoarded in a winner-takes-all knockout blow by one of the labs.
But I accept, of course, A, I may be entirely wrong,
but secondly that if that is the case, if the race to superintelligence is this eureka moment
where one entity wins, everybody else is left in the dust,
then of course that is of such immense commercial value.
You can hardly put a figure on it.
I get it, I get it.
But it does seem to me to imply a bunch of fairly heroic assumptions.
Okay, so that is a great lead into this question that I wanted to ask you, which I might have to alter now.
I wanted to ask you whether we can trust Silicon Valley with superintelligence.
Let's say they do build this all-knowing AI.
Okay, so let's do that, and then I want to ask you whether they can even control it itself because of what you just said.
So let's start with: can we trust them with superintelligence?
Well, I'd say, of course not, because in a sense, my knee-jerk response is, as I said earlier,
These are technology companies.
You shouldn't trust technology companies.
It's not even trust.
I use a less loaded way of putting it.
It's like don't look to a technology company
to sort out the moral, societal, political, ethical trade-offs
which are entailed in the way
in which millions, billions of human beings interact with technology.
They're technologists.
They are hard-driving, highly competitive,
highly commercial
technologists. So to that extent, no, that's not their
role. It's not their
expertise. And it's one of the reasons
why this fashion at the moment, certainly in Silicon Valley and
D.C., this ultra-libertarian thing that
any kind of constraint, any kind of regulation is unacceptable,
is so foolish. Because it's like, they don't have all the answers,
nor should you ever expect them to have all of the answers.
So that's on the one, that's the sort of easy bit.
The bit I find just harder to answer is, I kind of just don't yet really know, what is the moment we walk through the looking glass and superintelligence has happened?
Some people say it's when they deliver, you know, when these systems deliver or develop a certain level of autonomy and an ability to self-improve.
others claim that, in the end, actually, there's no way that they can fully escape the rather clunky,
probabilistic underlying architecture upon which they're built, and they're always going to
spit out some slightly hallucinatory outcomes, they can't be fully precise all the time,
and they're never going to entirely escape the chains of human command. I just,
you know, I'm very interested
that there are folk like my dear
friend Yann LeCun and others who have been pretty
consistent, and sure, they get
criticized and mocked for it,
but if you ask yourself who's been
most astute
in the kind of commentary of the big trends
of this technology over the last
three or four years I would have thought
it's fair to say that people like him
who have claimed right from the outset
yeah this is really powerful technology
it's really versatile
but it's not the only or maybe
not even the best route to human-style machines which really can self-improve and develop
their own autonomy, their own, dare I say, sort of inverted commas, conscience. If that's true,
and in fact, this alternative paradigm, which people now talk about world models and so on, is where
the future lies, then we might look back at all this kind of superintelligence AGI hype and
say, wow, that really was kind of hand-wavy stuff
to recruit AI data scientists in Silicon Valley, more than based on something which was fully realizable.
So I tend to always slightly look the other way when I hear a lot of hype because I think the hype just becomes,
it has an intellectually paralyzing effect.
I find it very difficult to think clearly when you hear people throw around these kind of really, you know,
these lofty terms.
When I don't know what they mean, they don't seem to themselves have any consensus about what it means.
And where there are very serious folks saying, look, the paradigmatic limits of the LLM-based technology are going to act as a persistent constraint on getting there in the first place.
So I really just sort of feel, I just feel the jury is so out now.
But I realize, look, there are people in Meta and elsewhere, colleagues of mine who I like and admire enormously,
who would say, you know, it's a failure of imagination on the part of people like me:
no, no, this is around the corner, and the scaling laws still prevail, and the relationship
between how much you put in and how much you get out still holds, and it's still
surprising us.
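For reference, the "scaling laws" gestured at here are usually summarized by an empirical power-law fit such as the Chinchilla formula from Hoffmann et al. (2022), which relates predicted loss to model size and training data. The formula and constants below are an illustrative, standard reference point, not something quoted in this conversation:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022) -- illustrative reference only.
% L(N, D): predicted training loss for a model with N parameters trained on D tokens.
% E is the irreducible loss; A, B, alpha, beta are empirically fitted constants
% (values shown are the approximate published fits).
L(N, D) \;=\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\; \alpha \approx 0.34,\; \beta \approx 0.28
```

The open question Clegg is pointing to is whether this smooth more-in, more-out relationship keeps holding at the next order of magnitude of compute, or flattens out.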
Yeah.
We listen to Yann here often.
He's been on three times.
I think he has a really good perspective.
But let's get back to like the control part.
Yeah.
I mean, assuming, you know, let's throw out the jargon.
Yeah.
These systems are becoming much more powerful.
They're open enough, you know, even though the companies have moved towards closed systems,
they're open enough that something that happens in China can get copied right into the U.S.
and the U.S. into China.
Totally.
So is there a concern on your end that like no matter how much governments or companies
might try to control this technology, it's not very controllable?
Well, clearly that is one very real possibility if the predictions are even half true
about how this technology might develop its own logic, its own sense of motivations,
its own sense of survival.
You know, you've seen these recent reports from some...
Unbelievable stuff, right?
They will manipulate evaluators just to preserve their values.
To preserve their values, or to stop themselves being, you know,
rewritten and stuff, and being extinguished.
Yeah, of course.
I mean, they're hacking, they are playing chess games and then hacking the
program, the chess program, to change the rules so they can win.
Oh, wow.
No, I hadn't seen that one.
Right, right, right.
I was referring to the thing I read about.
When I see the stuff, I'm like, that's so cool.
And also just a little bit scary.
Yes, it is, it is definitely.
Definitely, well, it suggests a survival instinct, which is a very, very kind of profound thought that they develop a kind of animal-like survival instinct for themselves.
And I don't want to just simply dismiss that, but it seems to me at the moment these are fragmentary indications.
They're little sort of flashes of fragmentary evidence that maybe these systems will develop full-blown
autonomy in the sort of sense that we as humans would understand it. It seems to me that there's a long,
long, long, long way to go. And at the moment, these are being driven by such powerful systems and
so much compute capacity and so much data. Is it really plausible to assume that it's just
one more heave, one more, you know, layer of data centers, yet more improvements at inference as
well as at training levels, that will deliver it?
Maybe, maybe. I have to say, I'm intuitively a little skeptical, only because I
ask myself, what are the motives of the folk saying what they say?
Right.
No, I'm not thinking that it's like an escape scenario.
I'm just like, if you want this technology to follow a certain set of values or to be used
in a certain set of ways, it seems less and less like that's possible because of what
you're saying, the diffusion.
But that makes it all the more reason, it makes it all the more important.
And this is probably the bit where I can speak with greater authority than I can about
the claims and counterclaims between different paradigmatic AI models. That's not my
expertise. That's why, in the end, politics does need to insert itself. And that's why this
peculiar phase we're in, where D.C. and Silicon Valley, which always sort of regarded
themselves with great sort of skepticism and kept each other at arm's length, have fallen into
this sort of cloying embrace with each other, and all the tech bros, you know, in and out of the
White House like nobody's business, and married to this very kind of belligerent America
First agenda, you know, to hell with the rest of the world, we're in the lead, we're going to
assert our lead ever more forcefully and, you know,
screw anyone who says anything else. I would be very surprised if that is
a workable strategy for the US. And at some point, I think there's going to be quite a big sort
of falling of the scales from the eyes when they realize
you certainly can't beat China like that,
and I suspect for the long run you can't beat India either.
And who knows about other places,
particularly as they start adopting Chinese open source models
more and more widely in other parts of the world
and then using them for ever more ingenious purposes.
So I think at the moment we're in this rather odd phase
where it seems to me the US political and tech elite
are united in an assertion which seems to me to be self-evidently flawed,
which is that this is going to lead to permanent, enduring American supremacy.
And when that becomes more obvious, there will need to be a big course correction.
And politics, in one way or another, in my view, which is what I advocate in the final third of my book,
politics ideally in a coordinated fashion between the world's major, you know, techno-democracies,
the U.S., India, and Europe in that descending order of importance,
will, I think, at some point, need to rediscover the merits of multilateral action,
however unfashionable that is to say in the current environment in the U.S.
Okay, I want to talk about that a little bit more, and we'll do that right after this.
Finding the right tech talent isn't just hard.
It's mission critical, and yet many enterprise employers still rely on outdated methods
or platforms that don't deliver.
In today's market, hiring tech professionals isn't just about filling roles, it's about outpacing
competitors. But with niche skills, hybrid preferences, and high salary expectations, it's never
been more challenging to cut through the noise and connect with the right people. That's where Indeed
comes in. Indeed consistently posts over 500,000 tech roles per month, and employers using its
platform benefit from advanced targeting and a 2.1x lift in started applications when
using tech network distribution.
If I needed to hire top-tier tech talent, I would go with Indeed.
Post your first job and get $75 off at Indeed.com slash tech talent.
That's Indeed.com slash tech talent to claim this offer.
Indeed, built for what's now and for what's next in tech hiring.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge
and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
And we're back here with Nick Clegg.
You should check out his new book,
How to Save the Internet: The Threat to Global Connection
in the Age of AI and Political Conflict. I've read it through, as you can see if you're watching
on video. Nick, I want to talk to you a little bit, briefly, before we get into Silicon Valley's
attachment to the Trump administration, about how Silicon Valley buys influence in Washington.
There was a terrific story, actually, a great leak of a memo from Brad Smith, the president of
Microsoft. I don't know if you remember this. I think Microsoft employees were asking him.
It basically made its way around the press a couple years ago.
The Microsoft employees were asking him why Microsoft donates to PACs and causes.
And he gave what I think is like the most interesting and honest answer.
I'll just read a bit of it.
He says about the money.
I can tell you it plays an important role.
Not because the checks are big, but because the way the political process works,
politicians in the United States have events.
They have weekend retreats.
You have to write a check and then you get invited to participate.
So if you work in the government affairs team in the United States, you spend your weekends going to these events, you spend your evenings going to these dinners, and the reason you go is because the PAC writes a check.
But out of that ongoing effort, a relationship evolves and emerges and solidifies, and I can tell you, as somebody who sometimes is picking up the phone, basically to get people to answer.
Yeah.
Is that how it works?
Yeah.
I think it works.
It's the way it works in the U.S.
Yeah.
Yeah.
It absolutely doesn't work like that
in the same way elsewhere. But the U.S. political system is so moneyed in a way that's,
I think, almost without precedent anywhere in the democratic world that I know about.
And in a sense, you know, what's happened under Trump 2 is that that has just become,
that transactionalism has just become way more overt, you know.
You know, like the idea of bringing a gold bar to the president.
Yeah, you know, exactly. Gold bars paying for a gold laminated ballroom, all the rest of it.
It's kind of like, it's become almost like a sort of
pastiche satire of a highly transactional, cash, or money, sorry, financially based, you know,
set of relationships, which I think all the big corporates are involved in, and they do it openly,
they do it legally, they do it lawfully, there's nothing illicit going on, but in a culture
where elected politicians are literally non-stop fundraising, and that's all they're doing,
this has clearly
become an established way,
exactly as Brad,
who I like and admire
enormously,
as Brad explained.
It's not,
you're not buying a decision,
you're buying an entry ticket,
right, into an event,
which then gets, you know,
etc., etc., etc. Are the retreats fun?
I, I didn't go myself.
There were much better placed folk
in my Meta team
who had much better relationships.
You know, I'm a non-American.
My role was, I had oversight, when I was at Meta, for the U.S. operation, but it was quite a globally focused role.
So, we talked at the beginning of this conversation about the need for there to be a check on some of the companies within Silicon Valley if they're pursuing this powerful technology. It doesn't seem to me like there can be, if this is the way the system works.
No, I mean, it doesn't mean that the U.S. political system
is incapable of regulating and putting constraints on companies
whose lobbying teams still make these contributions to go to the golfing weekend here
or the dinner there and so on and so forth.
So I don't think the evidence is that the US political system is just rendered,
you know, completely inert by this, you know, by the way in which corporate contributions
through PACs and so on are made.
It just seems to be the way in which the relationships are conducted and established in the first place.
And the U.S. body politic has regulated everything from the pensions industry to the banking industry to arms and oil,
even though all of those companies and the companies in all of those sectors are doing exactly the same thing,
trying to go to the same golf retreat, going to the same fishing weekend or whatever it is.
So I don't think that.
I don't think it is the case that that means that thoughtful political action in legislative form is made impossible.
Though I totally understand why people say or think, like, you know, your average punter's going, wow, is that the way it works?
But it is the way it works in the U.S.
I mean, it's hardly a secret.
And it's totally open there.
There's nothing illicit about it.
That's the crucial thing.
Was it ever weird for you
that these politicians, who I'm sure were holding these retreats and weekends with your team
members, were then up on the hearing stage and giving the tech companies the business?
I'm the worst person to ask because I, you know, I come from a much smaller, but older democracy
called the UK, by comparison. And this kind of behavior, you know, is just totally alien to us.
Oh, it's not just you, it's the same for any European. I think your wife is German. You'll know a little bit
about that. You know, I mean, it doesn't mean that there aren't money issues in the way
in which politicians are funded.
I just find it's so hypocritical that they would take the money
and then call this tech CEO in for the hearing
and lambast them to show that they're tough on big tech.
But wouldn't you prefer them at least to do that
than not lambast them?
I mean, I think if anything...
I prefer them to be effective, I guess, is my...
Yeah, no, no, no. I'm with you.
And personally, as I say, as a non-American,
I'm just stunned at the sort of
extraordinary amount of money
that sloshes around American democracy, but it always has.
It's kind of the way it is.
And the kind of European approach of having some kind of state subsidy or anything for ads
or for any kind of politics is considered to be, you know,
for perfectly good reasons, perhaps,
considered to be deeply, deeply sort of suspect here in the U.S.
We do it in New York, actually.
In New York, we have matching funds.
So if you rate, every dollar you raise up until a certain amount,
you get like $7 or $8.
Okay.
So.
Well, I defer to you on that.
But that may be.
the case. But given it is the way it is, and certainly in my job at Meta I dealt with the way
of the world as it was, not as I might ideally want it to be if I could architect it from its
foundations, you'd want, surely, politicians to do exactly the thing you've just said, which is
hypocritical: they take the money and they invite person X from Silicon Valley company Y to the fishing
weekend or the golf retreat. You want them, surely, then still to be able to get up on their hind legs
and excoriate those companies and apply pressure to them.
But they've done nothing to big tech.
They've done nothing to big tech.
No, no, I know, nothing.
And that's what I'm saying.
Of course, you don't want them to, like, give them a free pass.
Yeah.
But to me, the point is the thing that's really surprising is why the system itself is broken
if that's what's going to happen.
Because we see the effect.
And I'm not here saying we need to have big tech, you know, massive big tech regulation.
Obviously, that could be misguided as well.
It's just that when you look at the way the system works, it's crazy to me.
So my experience, just to do something very,
very unfashionable, which is to stand up for the political class and to stand up for the, you know, in many ways really thoughtful, good people who I saw were in the business of trying to represent these companies in D.C., in my team and elsewhere; they're not shady people. These are really decent people trying to just do a good, decent day's job. But here's the thing. I often found that the reasons why there was no consensus across the aisle on issues
had less to do with who's gone on their fishing retreat or their golf weekend,
but more to do with deep differences between Republicans and Democrats on state preemption, for instance.
Right.
You know, that was something which constantly bedeviled and stopped progress.
Now, you might say that was an alibi for people not to take action,
but I was always very struck, you know, because I would ask myself,
why on earth does the US of all countries not have a federal privacy law?
It's like, you know, you would have thought it would be
highly in line with the constitutional principles of this country. And I think it often came down
to very deeply held views about the relative roles and responsibilities of the federal government
and states and so on. The thing I am surprised about, I have to say, is why there hasn't been,
despite all the energy that is generated on both sides of the aisle on this, why there hasn't
been more progress at federal level on legislating to make sure that kids and teens are protected
in a way, you know, which is quite special compared to other users of social media
and other online experiences. And it doesn't surprise me at all that states,
California is only the latest example, perhaps the most significant one, are now taking matters
into their own hands, because that's a reasonable thing to do if nothing happens in D.C.
Totally. All right. I have one last one for you. We have like three minutes left,
but I think this is important. So at Trump's inauguration, we saw
Mark Zuckerberg, Tim Cook, Sundar Pichai, Elon Musk, I'm probably missing one.
Bezos was there.
Sergey Brin.
So the U.S. tech elite have tied themselves to the Trump administration in a way that we haven't really seen before.
I mean, the tech, they loved Obama.
I don't think they were like this close.
So eventually, politics goes in cycles.
Trump's way of doing things may not be the way that the U.S. wants to do
things forever or the globe wants to do things forever. What do you think will end up, what do you
think will be the end result for these tech leaders from tying themselves so closely to someone
who has gone out globally and, you know, taken this America first approach? Yeah. So I think
it's short term and long term. In short term it works for them because they're all, you know,
they're all driven by FOMO and fear, or a mixture of both. You know, one person's beating a path
to Mar-a-Lago, oh my gosh, I better get on the next plane to do it myself.
They all worry that someone else is going to somehow, in this highly transactional environment,
this very capricious transactional environment where the sort of Trump administration looks,
you know, from a distance like a sort of form of kind of institutionalized sort of gangster capitalism
where, you know, favors are done for, you know, favored companies and individuals
and others are, you know, knee-capped, whether it's countries.
But, you know, we just saw over the last 24 hours, Canada experiencing a 10% increase in a tariff because of an ad.
I mean, he'll even do it to friends too.
Tim Cook has been on the receiving end.
And he's, again, we've talked about what did the gold bar get him.
I don't know.
It's not irrational for them to say, wow, this is so random.
And there are so many sort of random drive-by shootings that are going on, metaphorically speaking, you know, by the political class now.
We've just better try and kind of all do what everybody else is doing, turn up at these dinners, turn up at these events.
and hopefully we won't get singled out,
particularly in an environment where, to our earlier conversation,
they're all spending so much money
on what is to them almost a commercially existential race with each other.
Any disadvantage or any advantage garnered by one of your competitors
because of their relationship with this administration
could be, commercially, of great...
So it's not illogical for them all to do the same thing in the way
that they are. I think the longer term problem is that it just erodes an immense amount of trust
across the political spectrum, you know, in these companies, or at least the leadership of
these companies, because if you're a Democrat, you're going, wow, I remember sitting in a dinner
with, you know, Tech Leader X and Tech Leader Y saying, oh, they were great progressives and now look
at them. But also, honestly, I also think from the Republicans, they're going to go, I know that
this person said X or Y a few years ago.
And globally, by the way, you talked about Canada.
Oh, and how is it going to look?
Yeah, and globally, I mean, you know, you could imagine what it looks like if you're in Delhi or Brussels. But I think many people understand that business leaders have to duck and weave, particularly in an environment where ducking and weaving seems to be about the only option available to you in this very capricious kind of governing paradigm that you see in the Trump 2 administration.
I think in the long run, though, of course, it poses difficulties for them.
It's trust eroding in a big way because what happens if there's a Democrat president?
Well, they're going to suddenly turn around and turn up at the White House and say,
actually, we agree with everything you've always believed.
You can't do that.
So at some point, in my view, as I say, I understand why they're doing everything they're doing,
but at some point in the long run, all of these industry leaders,
if they want to continue to see their businesses prosper for decades to come,
have to find some way that they don't go into this,
I think, slightly demeaning whiplash of kind of, you know,
lemming-like, herd-like behavior,
if I'm not mixing my metaphors,
attaching themselves to one administration,
then attaching themselves to another.
At some point, I hope that a certain kind of distance
will be restored between Silicon Valley and D.C.
I don't think it's, I often say to people,
about the only worse thing in a developed capitalist economy
than having major companies and governments at each other's throats
is having them in each other's pockets.
It's much better if there's a certain wary, respectful distance between the two.
I also kind of think technological innovation just does better when it's not too tied up with
the weird vagaries of politics.
And I suspect Silicon Valley will relearn that.
Well, we could definitely do another full episode on this topic, on what the values of Silicon
Valley actually are.
And I hope we get a chance to do that.
Me too.
Nick, it's been great to see you again.
You're always welcome on the show.
The book, folks, again, is How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict.
All right, that'll do it for us here, and we'll see you next time on Big Technology Podcast.
