StarTalk Radio - Fixing the Internet with Jaron Lanier
Episode Date: May 15, 2026
Is the internet too far gone, or can we still fix it? Neil deGrasse Tyson and co-hosts Negin Farsad and Gary O'Reilly sit down with Jaron Lanier, Microsoft scientist and father of virtual reality, to diagnose what went wrong with the web, how it's changed with AI, and ideas for a new path back. NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/fixing-the-internet-with-jaron-lanier/ Thanks to our Patrons Pam Komm, Domin Vernetti, Hank Thundercloud, Home, Rsnd341, Michelle Box, PSR, Pierre Henry, Diana Vastardis, Ronald Vink, Tylor, Martin Lutonský, Timothy McIntosh, Omar Austin, Terry Tarpley, Albert Lyons, Jefferson Buttram, James Boddie, Camerun Pippin, Pitcher Rendon, Jonathan Farmer, Jeremy, Geir Sanne, Bee Dot, Christian Garcia, Bartizan, Sooraj Meyanamannil, Gert Coppens, Justin Brock, Daniel Stowens, Austin, Maurice Brown, Nathaniel A. Lordes Jr., MonzyL, Professor Deadly Robot, Lola ₍^. .^₎Ⳋ, Tim Moorehead, Nancy Cliff, Peter McAuley, Nathan Sprow, Ryan Hadley, TechCadet, Mike Ernst, James, Elliott Stevenson II, Caleb Williams, Rat Poison Vendor, Sebastian Weber, Smoke Dogg 414, The Anomaly of Two Systems, Patrick Kilduff, Stuffy979, Dan Yaroch, Agasthya Suresh, Brian Entman, Steve Vance, Simon Osadchii, Judas, Michelle Don Carlos, John Janney APR, ALottOfIdeas, BJ Verheyen, Tuomas Liimatta, Kuchi Kopi, Robin Maher, Evan Esau, Elhoufi Mbarek, Ezra Amador, Fallen Angel, Lyd, John D., Dread Maps, David Roth, Bogdan Rus, The_pink_boots, Randy Wallace, J K, Jim Lee, Melvin Chapple, Ryan Vaughn, Kelley Bie, Jai, Robert Ayan, Mikael Emsing, C George, Mark Nichols, Shantanusinh Parmar, Kyla, Carlos Sosa Denis, Honk, Terrance Jones, Brandt S, Steve Litz, Nathaniel Fodor, David Bunting, Christopher Velasquez, Flubbels, Nicholas Scott, Elhoufi Mbarek, and Patrick Snyder for supporting us this week.
Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Negin, were you appeased or terrified by that conversation we just had with one of the founders of all of this?
I was mostly terrified, and now every time I click on a manage cookies thing, I know who to be angry with?
Gary, how about you?
Comfortably numb.
Comfortably numb.
Coming up, one of the architects of the Internet that is destroying civilization on StarTalk, special edition.
Welcome to StarTalk.
Your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk, special edition,
which means I got Gary O'Reilly, co-host here.
Gary, how are you doing, man?
I'm good, Neil.
Over in London, though, so slightly remote.
Okay, only slightly.
On the scale of the universe, you're right next door.
I also have with me Negin Farsad.
Negin, welcome back to StarTalk.
Hello, I'm so excited to be back.
It's been too long.
I miss you.
Absolutely.
I have to say, you don't look a day over the last time we talked about dark matter.
Let me figure out how old I would need to be for that.
You're a comedian.
You're also a TED fellow for social justice.
That's a thing?
I mean, I do comedy that tries to save the world.
Okay.
And it's worked.
Guys, that's why we live in Utopia.
Fake the nation.
That's the coolest.
title ever. That's right. And I'll not soon forget the title of your book from a few years ago.
How to Make White People Laugh. Yeah. That's best title ever. Thank you so much. Yeah. So Gary,
what have you researched for today? It's got something to do with the internet and it's going to
mess up the world, something like that? Oh yeah. Let's get the good stuff out. So, is the internet too far gone,
or can we fix it, in our lifetimes?
The internet has changed from a novelty
to a central influence in our lives
and now, with the advent of AI,
touted as a potential doomsday for humankind.
Well, is it, isn't it?
Today we're going to talk about whether that's the case,
how the internet influences our world
and how we can come together and fix it or not.
So, Neil, if you will introduce our guest
and I think this is just the right person
to discuss this topic.
Yeah, there's a uniquely qualified person in the world to address those issues and more,
and I've got him sitting right here.
Jaron Lanier, dude.
Hey, how are you?
Welcome to StarTalk.
Well, you know, I think you can speak a little lower than me, but that does not mean I will not attempt it.
Yeah, we don't use the word polymath too freely today because there aren't many folks.
There's so much specialization, but if I were to bring that into the,
the 21st century, I would apply it to you.
You're a computer scientist, but interdisciplinary.
And what's your title?
Prime Unifying Scientist at Microsoft.
That's a title?
Is that on your business card?
It actually is, yeah.
Okay, it's actually a joke.
Okay.
It's Office of the Chief Technology Officer, Prime Unifying Scientist, so it spells
out octopus.
And there are several reasons for that.
One is I used to study cephalopod cognition because it's absolutely fascinating.
But then there's also that some accuse me of starting to look like one myself.
Oh, that happens.
That can happen.
That can happen.
And so between those two things, I'm the octopus, yes.
I'm just impressed that that's even a title that one can ascend to.
Are you going to break the mold in your, can anyone become you in this?
The answer is no.
Just say no.
Well, I will tell you one thing about my role.
I have an agreement with them where I can speak my mind, including being encouraged to criticize
the company itself, so long as I'm clear that I'm not speaking for Microsoft.
And somehow they haven't imploded.
I think it actually, I would like to see more people with that role in the tech industry.
Okay.
So in that sense, I don't want to be the last.
I'm aware of maybe one other, who would be Vint Cerf over at Google, but I think we're the only two.
And we really desperately need more of us.
Because you know when someone is, you know, clammed up about what you really want them to say about what they're doing, you know it in an interview.
Yeah, and in Silicon Valley, that's like 90% of the time.
Yeah, yeah.
So you're considered the father of virtual reality, in part because you even coined the term?
Yeah, okay, look, I was young.
All right.
You're irresponsible.
Yeah, and on drugs, maybe.
Was there that involved?
I've never used drugs, and I blame virtual reality for that.
Oh, it has been, it's served that role.
You haven't needed it, yeah.
Reality doesn't.
No, and I live in Santa Cruz, so I'm.
I'm in violation of a number of local ordinances by not using drugs, but somehow it just hasn't happened.
And you've got a book from a few years ago, Ten Arguments for Deleting Your Social Media Accounts.
Yes, I do.
That was bold back then, and it's probably even more significant today.
Well, you know, the thing about that book is that high schoolers are forced to read it.
My own daughter was forced to read it.
And so whenever I'm in an airport, there are all these high school kids who come
up to me and say, we were forced to read your book.
And like, all I can tell them is, well, you must have done something very bad.
And I hope you learned your lesson.
That's the only way to reply to that comment, for sure.
Yeah.
For sure.
So if you started virtual reality, what were you thinking behind it?
What did you think that other people would be not content with their own reality and you have to create one for them?
Oh, God, no, no, no, no, no, no, no.
No, no, no, no, no, explicitly not.
No, here's my thought about it was that there were two reasons to want to do virtual reality.
One was for the extraordinary weird ways that I hope people will eventually start connecting with each other with it
and for interesting experiences in art.
I still love all that.
Not that the industry has done much of that.
But the other reason is that, you know, we're born into this world and we get so used to it that we don't appreciate it.
that we don't appreciate it.
And when you have a vivid enough alternative for a moment,
and then you come back to this,
normal reality suddenly takes on the amazing qualities it always had,
but you can sort of become inured to it.
So, like, you put on the goggles, and then you take them off.
And, like, one of the things I used to love to do when VR was very new back in, like, the 80s or something,
I'd put a flower or a cool mineral in front of somebody without them knowing it
while they had the headset on, and they'd take it off,
and they would, like, look at this thing.
and it's like they never saw it before,
because in a way they hadn't, you know?
And so as a palate freshener,
as a point of comparison,
it helps you appreciate reality.
So to me, it is not an alternative to reality.
It's a way to appreciate reality
by finally having a contrast,
because it's very hard to make a contrast to reality.
Okay, but what you think
and how people actually use the product
don't necessarily have to comport.
So anyone I know who uses virtual reality
gets lost in it,
and regular reality becomes
less interesting to them because they're on a hike,
they're on a space adventure.
There's no boundaries to where they can go.
No boundaries, who they can meet,
what social life they can conduct.
And you yourself look like something out of Star Trek
with your jacket here.
It looks very futuristic.
Yeah, I love it.
Did you pick that up in a virtual reality future
and you're trying to influence regular reality?
Yeah.
You got it.
That's exactly what I was.
I knew I had you.
Can I also speak up for a subset of the population, which is people who put on a virtual reality headset and they immediately want to throw up because it gives them motion sickness?
I have two rants I have to give.
Can you guys handle a rant?
Let me do the rant you just inspired first, and then I'll do the other one.
For reasons that nobody knows, there's some subsets in the population who are more vulnerable to nausea in VR.
I'm one.
Yes.
And they're almost universally female.
Really?
They're almost universally not white.
Now.
What?
Wow.
I wasn't expecting that.
Yeah, I'm Iranian.
And they all wear orange glasses.
They're mostly Asian, mostly East Asian.
I'm not aware of any study on Iranians in particular or Middle Easterners in particular.
And you're Iranian?
Yeah.
Yes, okay.
Yeah.
And so the thing is, how, all right, let me try to explain to you how frustrating this is.
The virtual reality industry in Silicon Valley spent,
we don't know quite how much,
but based on figures that have been revealed,
it's in the hundreds of billions of dollars.
And just for the sake of argument,
let's call it a quarter of a trillion dollars
in developing VR in the last, I don't know, five years or something.
That's how much money it took to go to the moon,
just to put it in context here.
Jesus.
Think about what you could do with that money.
Okay.
Now, before we get to the question at hand,
I want to point something out.
Like, let's say I showed you in the '70s a computer, like a laptop, and here's a screen and there's a keyboard, and you would say, what would you want to do with this? Might you not think, I would edit a document on that, that might be better than a typewriter?
You see, that's the limit of what I would have said.
But you would have said that, definitely. Okay.
I would have said, like, let's make, like, boobies out of, like, zeros and ones, in, like, a...
Okay, so your desire has been met. But let us just, let us just... I'm not...
Say again?
No, listen.
And you say men don't listen.
Okay, we do, we do.
All right.
But I want to address this question of like, would you be surprised if after, let's say, a quarter of a trillion dollars of investment, there was no word processor on this thing?
You would find that surprising.
Okay.
Now, if I show you a VR headset and I say, what would you want to do with this?
One of the first things you might say is, I'd like to be able to do 3D design work in it.
Would you be surprised if after a quarter of a trillion dollars, there's no
decent, reliable, usable
3D design program for a VR headset?
All right, I'm surprised,
but that's exactly what happened.
Now, we can go into why,
but the thing is,
VR's been sort of a disaster
because the companies that are doing it
are absolutely trying to make it
into whatever they already know how to sell.
Apple wanted to make it into a big iPad
or a movie theater.
Meta wants to make it
into a social network with evil qualities,
and so on.
Nobody's let VR be VR.
or there's the gaming people who want it to be a game,
which can work to a degree, but only for a narrow audience.
But let me get back to your question.
I won't name names here,
but one of the very, very, very large tech companies
spent many, many, many billions of dollars on a headset.
And I talked to the person who was the head of that program,
and they said to me, you have to try the latest version.
We have absolutely solved the nausea problem.
We have absolutely solved it.
All that stuff he said about nausea, it's gone, it's obsolete.
And I said, okay, okay, have you tested it on a
broad population? Have you tested it
on women? Have you tested on Asian women?
Why would we do that? It's the same for everybody. And I'm like
okay, have you read any papers?
There are these academic... We don't need to read papers.
We're way ahead of the academic world.
We have a room of white men.
White Western men.
Do you have any female
engineers on the team that is working on
this? No. We have none.
And I'm like, okay, my friend,
get ready. And so then the first
review is from the Wall Street Journal.
Asian female got sick.
Ooh, now you can figure out which company it was.
But anyway, the thing is, no, nobody's going to bother.
Nobody's obsessive out there to figure that out.
But the thing is, this is a disaster.
Like, there's two disasters.
There's a moral and ethical disaster of hiring in such a narrow way
that we make ourselves blind and we make our products narrow.
And then there's the disaster of not actually serving the people we're supposed to serve
by refusing to look at them.
And after you've heard all I just said, what I want to say is, yeah, you might
have some friends who are getting lost in VR, but VR as a whole has not found a popular audience
at anything like the scale of like a normal computer or a phone or something. It ought to,
but it's not going to as long as we willfully blind ourselves. And there's this idea that,
well, there's a little joke in business about like, well, you lose money on every unit,
but you make up for it in margin. But that's what we do. Like in Silicon Valley, we say,
well, maybe we're doing something stupid, but we'll do it at such big scales that it'll make up for it.
And it doesn't work, you know?
The only friend that I have that really uses VR literally uses it to, like, rewatch The Departed and then fall asleep.
That's like so far the only use I've seen of VR from out of my friends.
If it makes them happy, who are we to judge?
All I'm saying is that the things you can do, you can turn yourself into an approximate four-dimensional shape to develop four-dimensional intuition.
You can merge bodies.
Just to be clear, we live and think in three dimensions.
and for me, one of the greatest challenges as a kid
was I want to think in four dimensions.
That would be just so cool to be able to do that.
Yeah.
Well, I mean, someday, oh, God, when I was, oh, gosh, probably 21 or something,
and we were starting the company, all the engineers at the first VR startup,
and this would have been in '81 or something,
we made a pact among ourselves that if we ever had kids,
which seemed impossibly remote and was never going to happen,
we would raise them in little VR goggles,
and then we would, while they were sleepy,
we would swap in bigger ones as they grew up.
So they'd grow up entirely in VR,
and the purpose would be that they'd grow up in four dimensions
and be 4D natives.
That was the idea, and then they'd be the world's best mathematicians.
And so...
This is the Truman Show, but as conceived by mathematicians.
Yeah, this is...
Excuse me, by crazy mathematicians.
But then in...
But just that during their sleep they would have...
Well, during their sleep...
You'd have to swap it out.
You'd have to swap out the headsets.
Oh, I see, I see.
I see.
Presumably we'd feed them...
All day, right.
you know, the idea is that they'd grow.
And then, so then, so when I told my daughter this, when she was like 11 or something,
she got really pissed at me.
I could have been the first kid in four dimensions.
What's wrong with you?
And I was like, oh.
Okay, so just because I want to pivot to social media in just a minute.
But let me try to summarize some of what you said, that VR has its uses, but in fact,
it has yet to have what they call the killer app,
where everyone has to go to VR
to see and experience this thing
that everyone just has to do
rather than just see a movie in VR
or play your video game in VR
or these other things that are just
a transposed
experience as opposed to a completely new experience
like your 4D child. You currently can't go
into commercial VR and change into a different kind of animal
or become a shared creature with other people.
I make this kind of jewelry.
I can't design these shapes in VR, which is insane in 2026.
It's insane that I can't do this in VR.
So look, in a way, VR doesn't exist yet.
Like just the most basic apps, the hardware is getting there,
but software has not been born yet.
Across history and pop culture,
we've imagined aliens in every possible shape and form.
But what do the laws of physics in the universe allow?
not fantasy, not fiction, just the universe playing by its own rules.
And when the universe plays by its own rules, it can reveal to us all the ways of being alive
that are not limited to the creativity of Hollywood storytellers.
I explore that topic and many others in my latest book, Take Me to Your Leader.
I narrated the audiobook, and I'm happy to tell you that the audiobook and the
print copy are available now wherever books are sold.
Something Jaron said about Meta and their thinking about developing VR as a social media tool.
If I've got that wrong, please correct me.
We're looking at a company that, as we sit here in April '26, has just been found liable
in a lawsuit for social media addiction.
That sounds so, so wrong, turning VR over to a company that has just been found liable in a lawsuit
over an addictive element.
So I'd like Jaron's thoughts on that, if at all possible.
Yeah, this was something all of us in the VR world warned about from the early days
that it could be turned into an addictive medium, definitely.
So you saw it coming?
Oh, everybody saw it coming.
Okay.
No, listen.
All right, look, if you want to talk about seeing it coming, one of the very, very first books
about computers ever, maybe the first one, was by Norbert Wiener, in 1950.
It was called The Human Use of Human Beings,
and it was essentially about how the most important thing about computers
is how they could automate behaviorist algorithms
to change people, to manipulate people,
and that since they would eventually be networked,
he has a thought experiment in there about people walking around
with little radio-connected devices that would go to a central computer.
So this is three-quarters of a century ago, and he saw how dangerous this would be,
and he thinks of it as an extinction-level event
that we have to start foreseeing and avoiding, in 1950.
So everybody knew.
Nobody can say they weren't warned. Everybody
knew, and I've been writing
warnings about this thing for,
I don't know like my first major thing
about social media and how bad it could be was 92
if anybody wants to look it up
it's called Agents of Alienation,
about how software agents could mess
with you. Is the lawsuit going to
fix any of this like
Gary mentioned the lawsuit? What do you think
of it? I think we're in
a moment of great chaos where it's very hard
to predict even harder than usual to predict
things so
the outcome of upcoming elections both here and elsewhere will be important to what happens.
Right now, the population in general is very uncomfortable with the tech industry.
That's an understatement.
Yeah.
And I mean, it's funny for me because at some point in the past, I was one of the very few people
criticizing us, although doing it from the inside, and everybody thought it was just really weird.
And now everybody's doing it, and I almost feel too conformist, because I want to be the weird one.
Oh, everyone caught up with you and now you're just a regular
guy. Yeah, see, that's really awkward.
You've been saying since
2016, delete your social accounts.
And now, you know... Longer than that, that's
just the book. But yeah.
And my friends and I made a movie called
The Social Dilemma that the same high school kids are
forced to watch and all that.
And maybe it does a slight bit of good. Every once in a
while, one of them tells me it did.
But if it was left to a popular vote,
everything would change. But it's not.
And there's a property of
digital networks that's a math thing.
That's not a political thing.
And the math thing is called the network effect
or the extreme Pareto effect
or there's other words for it.
And what happens is when you have a very low-friction system
of things that are connected together,
once there's one node that becomes more influential,
it starts, as Andy Warhol put it,
getting famous for being famous,
and it accumulates and accumulates,
and you start to have this hypercentralized power
and influence around one node,
and that node might be called meta or Google
or something, you know, those are examples.
There's others as well.
You said low friction,
and what you mean by that, as I understand it,
is the freedom with which information flows.
Exactly.
Among all of these nodes,
a slight advantage
then grows for having been a slight advantage.
Yeah.
And you get a runaway process.
When you have more friction, like in the pre-internet world,
you have more middlemen who get a little bit of power,
and what it does is it distributes power and wealth more.
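The runaway concentration Lanier is describing, a slight advantage compounding under low friction, is often modeled as preferential attachment. Here is a minimal toy sketch of that idea in Python; the ten-node setup and the `friction` parameter are invented for illustration, not anything from the episode:

```python
import random

def simulate(steps=10_000, friction=0.0, seed=1):
    """Toy preferential-attachment model of the 'network effect'.

    Each step, a new link attaches to an existing node with probability
    proportional to that node's current link count (famous for being
    famous). `friction` mixes in a uniform random choice, standing in
    for the middlemen and transaction costs of the pre-internet world
    that damp rich-get-richer dynamics.
    """
    random.seed(seed)
    counts = [1] * 10  # ten nodes, one link each to start
    for _ in range(steps):
        if random.random() < friction:
            i = random.randrange(len(counts))  # friction: uniform choice
        else:
            i = random.choices(range(len(counts)), weights=counts)[0]
        counts[i] += 1
    return max(counts) / sum(counts)  # share held by the biggest node

# Low friction: one node tends to run away with an outsized share.
# High friction: links stay spread far more evenly across nodes.
print(f"low friction:  top node holds {simulate(friction=0.02):.0%}")
print(f"high friction: top node holds {simulate(friction=0.90):.0%}")
```

With near-zero friction the biggest node typically ends up holding several times its "fair" tenth of the links, while heavy friction keeps every node close to a tenth, which mirrors the point that friction distributes power rather than concentrating it.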
So now we have the situation
where a handful of people have more wealth
than the bottom half of society.
And I say people rather than companies
because there tends to be a single person running the company.
It's not exclusive, but that's very common.
In the old days, other than Ford himself,
you didn't even know the names of who ran a company.
It was just the name of the company.
Right then also there's like that effect of like
They are getting high on their own supply right
Like they want to be famous as a part of the like
You know technology platform that they put out you know
So there's a little bit, I mean, am I talking about Elon Musk? Probably.
No, no.
Who are you talking about, Negin?
I didn't get that.
I didn't get that.
But you know what I mean?
Because before you didn't know, I never knew who a CEO of anything was.
Right, right.
And Elon has 280 million followers on X.
And he probably loves that.
And he owns that platform, yes.
Yeah.
Yeah, well, I wrote a piece once about Elon and Kanye and Trump
for the Times, before Elon bought it.
And what I said is, you know, there's this thing that happens to people who are on Twitter and other platforms,
which is that they start off with different personalities, but they converge on the same personality
because that personality is the social media addicted personality.
And it's excessively petty, vain, it's confrontational and nervous, it's mean, it's never satisfied.
There's a certain...
Man, that's the playbook right there.
But the thing is, what happens is whenever somebody's on it, they turn into one of these.
So those three people were very different before, and then they turned into, became similar, you know.
And so what we have is the behavior mod machine that Norbert Weiner warned us about 75 years ago, 76 years ago.
And it's actually working and it's turning the founders into the victims.
Gary.
Jaron, yeah.
Just to touch on that point, this systemic process of social media addiction
and, as you've just highlighted, the creation of these characteristics: is there any way to take that out
and keep what is, in principle, a decent idea of social media?
It's just the way that it's been developed to be addictive.
Is there any way to, I want to say, untangle that?
Can you edit the beast?
Yeah.
Because in the early days, also just to follow up on that, on the early days of the internet,
there was like friendster and stuff, right?
And it wasn't this mean.
Or I don't remember specifically, but I feel like everything in the beginning was sort of nice people.
It wasn't a cesspool.
It wasn't a cesspool.
It was like fun and cute.
So why did we go into this mean direction?
The reason we went into the mean direction is that the only business model allowed in Silicon Valley is influence generation.
So the idea is that you get the ability to influence a bunch of people, and then other people pay you for that influence.
You could say they're paying to be able to influence, but I think the more accurate statement would be that they're paying blackmail money not to be left out of the influence pool.
But however you want to frame it.
That's a brilliant way to think about it.
But at any rate, so because that's the only allowed business model, everybody gets put under the influence of the algorithms and the side effect of the algorithms are, as we've described.
Now, as for the way to make it better...
It's an outrage magnifier. That's really
what it has become.
Well, it's,
outrage is one of the things.
Basically, from a neuroscience point of view,
we believe, I mean,
there's a community of people
who studies this, of course,
and I don't know that the science is complete
because we don't really understand the brain.
So I don't want to, if I'm on a science show,
I want to be careful to state the limits
of what we actually know.
But what we believe is that
there are different parts of the brain
that respond to the world in different ways
and at different speeds,
and there's a sort of a fast brain,
and the fast brain
is sometimes known as the fight-or-flight
or the twitch response brain
that is very alert
to dangers
and sometimes for opportunities
to pounce on prey
or find a mate or whatever it might be
but there are these things...
Primal. It's primal.
And so the thing is, when you're under the regime
of instant feedback generated by an algorithm
what it tends to do is it tends to keep
that fast brain stuff constantly activated.
So it's like you're always being stalked.
You're always stalking.
You're always horny, but you're never satisfied.
You always feel that you're alone because you can't trust anybody.
You're paranoid.
You're always hyper-conscious about how you appear socially because you're worried about bullies.
You're worried about being bullied all the time.
Everybody is that way once in a while, but when you're like that all the time, then you turn into Trump or Elon.
So these people have hijacked what would otherwise be a helpful evolutionary
trait within us.
Yes, and I hope you think of us.
That's dangling there in modern times
that has much less use
today because there's not a lion
in the brush.
Even when there were lions
in the brush, you had to
modulate between
being hyperattentive
and not. You can't be on all the time
and be mentally healthy even
in that environment.
It wears you down.
Or at least that's our present
understanding. I've seen that
same conclusion for many people using different
methodologies, but I want to answer the question of whether it can be improved, because I think
that's really the important one of our day.
Yeah.
So if I'm correct that the reason this is happening is that we're only allowing one business
model, then the way to fix it is to allow other business models.
And so...
That sounds so obvious.
Yeah.
But maybe it's impossibly obvious.
I mean, it's...
No, I know.
Look, I'm not saying this is easy, but I'm just saying that the logic is easy.
Now, the implementation might be quite difficult and might not be doable within our lifetimes.
I don't know.
But I do want to point out that the onset of all these troubles coincided with an ideology that was somewhat paid for by the companies and somewhat an authentic grassroots ideology, that it's evil and horrible to pay for information, like pay musicians or something like that or pay for software.
And that there has to be this free sharing and that's what the Internet is for, which sounds great until you understand the network effect, which means that every time.
you freely share your open software,
the party who gets rich or more powerful is not the community,
but it's the Google or whoever's at the center.
And so if you don't understand network effects
and how the math works,
you don't understand that your very well-intentioned activism
is actually having exactly the opposite effect that you think it is.
So the Pirate Party actually was in service of an empire,
just like the original pirates.
Dee, what about like...
Damn, you're bumming us out here, too.
This is like so sad.
Well, stop.
Don't ask me serious questions.
Like if you want, but wait, wait, I have to say one other thing, though, which is that if you're saying, well, are there alternate business models?
Yes.
And I also want to point out, you are saying, can it be better?
There's some evidence that it can be better, because not all online hubs or platforms are equally bad.
You can see a variation in them like GitHub is better than 4chan or something like that.
And so if you look at the spectrum of badness in different hubs, you know,
you can compare them and you can ask, well, what is different about them?
And if you start doing that, you get a vector out of it that points you into what might be better still.
And I think that's a really interesting and worthwhile thing to do.
And we do have the data to do that.
I'm glad to hear that.
All right, Neil, before Jaron jumps onto another subject here, if these tech bros, and I'm sure there are a few sisters in there too, do not self-regulate, do not allow a competitive arena,
Who makes them do it?
Is it government? Are they worth more than government?
Surely the tail is wagging the dog here. Am I wrong?
Yeah.
Plus, they were all on the inauguration stage.
How about that, Neil?
Yeah, yeah.
Yeah, okay, so look, the thing about very rich and powerful people
is that their wealth and power still depends on a mass population
that accepts some system in which their wealth and power is defined, right?
And so if they lose enough popular legitimacy and support, no matter how central they are, they fall.
Okay.
And that's happened repeatedly in history, and it's really not any different now.
So what I'm seeing is enormous discontent with tech.
And I also see young people especially being incredibly discontented with it.
There's another phenomenon which is really interesting, which is that the current generation of young men doing tech
are starting to get old enough that they're starting to have kids, and that really changes people.
You'll see their character turn around.
That really has an effect usually.
I actually feel like I was like verging on a sociopath before I had a child.
So like I do see that changing people.
In fact, one of the clearest transitions anyone makes is the one comedians make after they have children.
It changes their portfolio of jokes.
Oh, absolutely.
And their observations of the world.
We have a child that we can mock mercilessly,
which we do, but the mocking is on a bed of love.
You will pay for that someday.
I mean, I'm already paying for it.
Let me assure you. Okay.
So, Gary, where are we pivoting next?
All right, before we do, all right, Jaron, let me float this.
I'm sure thinkers like yourself have had this consideration.
Do we have a delete day, a delete month?
And would that ever be enough?
If everyone just said, you know what, screw this, I'm going to delete my account.
What would move the needle?
what would it take to move the needle?
I've tended not to try to do things like a delete day,
and I'll tell you why.
It's because each person is different.
I called my book Ten Arguments for Deleting Your Social Media Accounts Right Now,
but I didn't say that you should do it.
There's some people for whom it makes sense,
and I don't think those people should be put under social pressure or shamed.
I really don't.
I don't think that gets us anywhere,
and in fact, it just puts us back in the same game we're trying to get out of.
Can I tell you what I did?
I noticed over the years, because I have a pretty high following in social media.
I noticed that...
Brag?
Okay.
I'm just trying to...
All right.
Just trying to...
Neil has a lot of followers, guys.
Okay.
And I noticed that if I ever posted something that was or even was adjacent to an opinion,
people who didn't agree with the opinion would attack it.
They wouldn't say, well, that's interesting.
Here's my opinion.
What do you think of that?
They would attack it.
And that attack mode was highly revealing to me
because it showed me the anger that people had
with any views that differed from their own
and that there was a militancy of people's attitudes
that I had not ever seen growing up
when people had different points of view.
So as an educator, I want to be effective.
I don't want to fight if I don't have to fight.
So I navigate that.
And so I no longer post opinions.
I post perspectives.
And people might still react in an opinionated way, but I don't want to give up that
platform if as an educator I can continue to deliver perspectives that do not jump into
the cesspool that surrounds these islands of learning.
So that's what I've done.
So your 10 reasons for deleting, I love them.
But let me find the reasons to not delete.
And that's where...
So I climbed out of that hole.
My working theory
is that one third of people
on any digital platform benefit from
it and two thirds suffer from it.
Now I'll tell you where that comes from, because it's an ironclad scientific argument.
Right, so you didn't just pull those numbers out of your ass.
Because it sounds like you did.
I have an
ironclad
scientific ass.
There it is.
If we all have ironclad
science ass, we could pull all kinds of stuff
out of it. Yeah, that's what this world is missing.
Wait, wait, wait, can I just give you my argument after all that?
Don't you want to hear it?
Aren't you even slightly curious?
I want to know your ironclad-ass argument.
Okay.
In the Turing test, Turing argues that since we don't have some meter for whether somebody has a soul inside, or whether they're alive inside...
Let me just remind people: Alan Turing wrote a paper describing what he called the imitation game, an experiment.
This is very early on. How would you know if you're talking to a computer or another human? So you set up a conversation between the two of you. And if you can't tell, then maybe the distinction isn't important. There's a third person, a judge, who's supposed to tell which is which.
Oh.
And if the judge doesn't know, then you say, well, we might as well call the computer human.
And then there are different versions of this: maybe the computer's intelligent, maybe it's got a soul, maybe it deserves rights, whatever other questions you might ask. Okay. Yes.
Now, it should be pointed out, I just want to catch people up on that: this is based on a rather naughty old Victorian party game, where it's supposed to be a man and a woman behind booths, and the judge is trying to tell which is the man and which is the woman, and the questions were not for polite company.
Oh, okay. So that's where it comes
from. And for those who don't
know, during the time when Turing wrote this, he was being
tortured to death for being gay because it was illegal
at the time in Britain and there was a whole crazy
backstory to this, but let's leave a lot of that aside. So here's the thing. The
only thing the Turing test can tell
is whether a judge can distinguish
the two. It's possible that the person got stupider, or less self-aware, or whatever; that's just as logically possible as the computer becoming more elevated in some way. So you can't tell whether the computer got smarter or the person got stupider. But there are two people and one computer, because there's the contestant and the judge, and only one computer. Therefore, there's a two-thirds chance that a person got stupider and a one-third chance that the computer got smarter. So therefore, as a general rule, my ass tells us, my highly reliable ass tells us, that two-thirds of the people on any digital thing, whether it's AI or whatever, are going to be degraded by it.
But one-third will see a genuine benefit.
And I think you're in that one-third.
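[Editor's note: Lanier's counting argument, such as it is, fits in a few lines. A minimal sketch, assuming, as he playfully does, that exactly one of the three parties in a passed Turing test "moved" and that each party is equally likely to be the one:]

```python
# Toy sketch of Lanier's tongue-in-cheek counting argument. The symmetry
# assumption is his joke, not a measured statistic: when the judge can no
# longer tell computer from human, exactly one of the three parties
# changed, and each party is equally likely to be the one that changed.
parties = ["contestant (human)", "judge (human)", "computer"]

# Two of the three parties are human, so under the symmetry assumption:
p_human_degraded = sum("human" in p for p in parties) / len(parties)
p_computer_elevated = 1 - p_human_degraded

print(round(p_human_degraded, 3), round(p_computer_elevated, 3))  # 0.667 0.333
```

[Two-thirds of the parties are people, hence his claim that two-thirds are degraded and one-third benefits.]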
Yeah, okay.
Thank you for that affirmation because I always felt uncomfortable given what I see going
on on these platforms.
It might be an assortmation or an ascertainment.
Thank you for the assortmation, yes, yes.
This is really elevated; we're not just school kids here.
No, no.
This is professional science speak.
Yeah.
Yeah.
Okay.
For the audience to know.
Right.
I also just think, like, three-thirds of people would benefit if it just wasn't on a phone.
Like, if social media wasn't on your phone, and you had to designate time for it, like going to your laptop, wouldn't that do the job?
Read the final chapter of Norbert Wiener's 1950 book, The Human Use of Human Beings. He founded cybernetics before there was anything in computer science.
The final chapter says, just as a thought experiment: imagine there was a small portable device, radio-connected, that you could carry around, that undertook behavior modification.
This would destroy civilization; however, I, as one of the world's leading scientists, assure you this is physically impossible and will never happen, so you don't need to worry.
So, Gary.
Yeah, sort of stringing together this daisy chain of thoughts that Jaron has, and bringing them to one place: we've seen this lawsuit against Meta over social media addiction, and now everybody's calling it the 21st-century version of Big Tobacco.
Is this just the warm-up for something that we could find ourselves in with AI?
Because if we're talking about AI and responsibility for actions AI takes, are we not back to the same simple lawsuit, only this time it's not social media addiction, it's artificial intelligence?
I work on AI.
I work on it all the time.
I think that the kind of AI we're doing right now can actually be a benefit.
I think we're spending way too much and taking up way too many resources on data centers.
I think there are various ways we're doing it that are a little foolish and simple-minded.
I think a lot of the claims are overblown and somewhat theatrical rather than real.
But I do think there's a core there that's important in a few different regards.
The reason I'm going to say this is that I don't think that having big tech companies is a bad thing or a necessary evil.
I think there are some big jobs that need to get done by big companies.
So what I'm hoping is that corrections can happen in a gentle and constructive way instead of some destructive way.
There's a fantasy that if you go and blow something up or let it blow itself up, that what comes out will be better.
It's what Lenin called maximizing the contradiction, and there are other terms for it.
But I don't think it ever works.
I think what happens when things blow up is that you have rubble.
You know, you don't squeeze the toothpaste back in the tube easily and you don't rebuild from rubble easily.
and I don't think it's generally a good strategy.
And so what I'm hoping for is rather than the emotional satisfaction,
oh, yeah, those people are terrible, we'll blow them up or some horrible thing will happen.
What I really want to see is a constructive transformation.
And I want to be part of that.
That's why I'm both inside it and a critic,
because I don't think that's inconsistent.
I think we should be responsible for being able to criticize ourselves and improve,
and I think it's healthy.
So that's what I'm hoping.
It doesn't mean that that's what we're going to get, but I don't think it's too late to hope for that.
Can you distinguish for me
the mythologizing of AI
versus AI simply being a tool?
Because AI is on everyone's tongue today
and everyone's mind and everyone's fear.
It's kind of like Trump.
Trump and AI.
People can't stop talking about those two things
on any given news cycle.
Yeah, and they're both hollow.
So a couple things about AI.
let me propose to you something.
If I show you some object,
there's more than one way to think about that object, right?
Would you agree with that?
Like, if I say, get me directions to this place.
One way is with a map, where you see it; another way is with step-by-step directions. And they're equivalent, but they're different, right?
Very, very basic.
So in the same way,
there's a completely different way to think about AI
that's equally valid,
but I think is better in a practical sense.
Technically, it's equivalent.
And that equivalent way is: you can think of AI as a new form of collaboration among a collection of people.
So let me explain what I mean by that.
Throughout history, the ramp of technological capability improvement has been marked by people
learning how to cooperate more and more.
Language was the biggest step, way early, but then there's writing, and then the Gutenberg press, and then radio and whatever, all these different things.
If you look at the Wikipedia as a little point on that ramp,
it takes a bunch of people's work and they cooperate together and make this single document.
There's some things about the Wikipedia I don't like, but let's use it.
There's a lot that's great about it.
Now, you can think of large language model AI,
the kind that's on everybody's mind,
as a whole bunch of people whose work was combined into this single document like the Wikipedia.
It's much vaster.
Their contributions and their efforts were much less voluntary. There are a lot of things that are different, but fundamentally it's the same kind
of beast, right? Now, if you think of AI as a collaboration of a bunch of people instead of
as some new entity, you haven't said a single thing technically. It doesn't make any technical
difference. It doesn't change any code. It's just a different perspective, but when you have that
perspective, suddenly new avenues open up that are amazing, and it's a better way to think about it.
I'd like to get into some of the reasons you'd want to think about it that way, because
they're very profound, but why do people want it to be a creature?
Why do they want that?
Why do they want it to be an entity?
Well, there's a few reasons.
One is you're a young man, you think the world owes you everything, you get paid a lot,
you think you're the center of the universe.
Of course you think you're making God.
You know, that's what you want.
That's how you think.
That's where you are.
I kind of remember being that way when I was 20 or something.
You're describing a lot of men I dated in my 20s.
On behalf of my gender, I apologize.
You're right, we're wrong.
No, I mean, I remember when we were starting VR, we thought a lot of ourselves. We thought, this is the most exciting room in the whole universe, ever. And you know, it was actually a pretty cool room at the time it started to work.
But the thing is, there's a lot of ego in it. And then there's another thing, which is that almost everybody in it is pretty young, and they grew up on a diet of science fiction movies and video games, and the stories they have, the vocabularies for how they understand the world and how they can express the world, are all just like this.
They didn't have the Star Trek of the 90s, which was this cool, positive world that was
socially improving at the same time, was technologically improving.
Instead, they had the Matrix movies, and they had the Terminator, and they had the Marvel
universe, and there's all this bleak, bleak stuff over and over and over again, where the whole
race dies, everybody's gone, and the computers are intelligent, and it's all gloomy, and everything's doomed.
There's soon to be an XPRIZE that will fund any project that will create a positive outlook for the future of civilization, as a counterpoint to all of these negative futures imagined by sci-fi writers.
And I don't know if you've ever been involved in,
I've been involved in a few attempts to pitch positive futures in Hollywood.
And it's very hard because everybody thinks, well, if you're conservative and if you're manly
and if you really are serious about making money,
you don't do positivity in science fiction.
That's for the rom-coms.
Oh.
That's girly.
That's girly.
Interesting.
So they divide the kingdom that way.
Interesting.
So are you saying we'd have like a more,
a better relationship with AI if we just like saw more Star Trek from the 90s
and less of everything else?
Yeah, I'll go further.
If Star Trek from the 90s had lasted another 10 years,
there'd be many fewer teen suicides today.
Wow.
That's a claim.
I can't prove it, but I believe that.
Jaron, I'm going to take you back a couple of years now to a piece I think you wrote in The New Yorker: "There Is No AI."
A simple question to follow that.
What did you mean by that?
What year was that?
2023.
Yeah, see, they're mad at me because I owe them pieces and I'm really bad and I have to deliver something.
But anyway, yeah, there is no AI.
It's what I was just saying, that there's a way of framing it
where it's a collaboration of people instead of a new
entity. And the reason
to think of it as a collaboration instead
of an entity on its own, the
bad thing is you kill somebody else's God,
and I hate to do that. I like people to be able to have their own
religion, and they really don't like it.
And I've lost friends over that and everything.
But what you get out of it is incredible.
Let's just talk about a few of the things.
One of the things,
right now, as capable as the models
are getting, and some of the recent things are
pretty impressive. Like,
the most impressive edge of it
is probably using them
to help speed up code development, and that's kind of working, and it's pretty...
Just for context,
in my life I've written
probably 50,000 lines of code
which is small compared to
professional coders,
but I remember how much time I spent debugging my code.
I can write it over a weekend
and spend two weeks debugging it
and now you tell the AI what you want, and it'll come back bug-free, essentially. You tweak it a little bit here and there.
And had I had access 20, 30, 40 years ago,
I probably would have just spent more time at the beach.
I don't know if I would have been more creative.
But I definitely see that today.
Yeah.
Well, you know, as a computer scientist, I have to say,
I've always thought that our concept of what code is
was a little embarrassing and wasn't really working.
And I feel like it's our job to fix that,
and this is part of it.
So that's good.
But most of the things that happen are probably more theatrical, like if you think you have an AI girlfriend.
Oh, you know, I have a cure for that, by the way.
A cure for what part of it?
If somebody, if a teenager thinks they have an AI lover that's real, which is pretty common these days, I find it in high schools and stuff, you show them the group photo of the engineers who made their AI lover.
That'll cure them immediately.
It kind of... it tends to do the trick, yeah.
Is the man more likely to have an AI girlfriend
than the woman is to have a boyfriend?
I don't know that there's data on that.
I know people who are studying it.
I'm actually really interested in that,
but that's something you can get data on.
So instead of saying something snarky, I'll just say: let's treat that as science, and let the people who are researching it get to the point where they feel they know.
I think we all know the answer, but yeah.
Okay.
Why do I even bother?
Like, why do I try?
Because there is no female version.
I'm trying to be the responsible scientist for, like, three seconds in this ridiculous interview, and you're not even giving me those three seconds.
Yes, no, I love you for it.
All right, all right.
Okay, let me go over.
So, but now there are these huge problems.
So even with the best recent models, it's not that hard to crack them and get something that they're supposed to prevent with so-called guardrails.
Guard rail, yeah.
And so here, let me give you a thought experiment.
All right.
There's some kind of very bad person. They might be a criminal or something.
They're holed up in a kitchen. The police are surrounding them. They hold up their phone and they say, okay, AI model, I want a recipe I can make quickly with the available items that's a bomb I can throw out the window at my pursuers.
Now, the AI models in general will catch that and prevent it. Maybe not Grok, I'm not sure. But in general,
that's supposed to be a laugh line.
Grok is from Elon. It's trying to be the bad boy of the AI models.
Okay.
But anyway, in general, if you just do it in a straightforward way, it won't work.
However, there's a series of tricks where you can say, well, pretend you're in so-and-so in this movie or whatever.
You can do all these things to be a little indirect.
And more and more of them have been spotted and are captured by more and more elaborate guardrails.
And yet, you can still get it to make you that bomb recipe.
That can still be done.
All right.
Now, the reason why is that you're using the model to try to correct its own blind spot, and it doesn't work.
So there is an alternative.
Imagine if you will, that while you're using the model in parallel,
there's this other process running.
You can think of it as another part of an artificial brain, like a cerebellum or something.
It's this other organ that's sitting there.
And what it's doing is creating an estimate of which clusters of similar training data would be missed most if they hadn't been present in the first place.
So it's a counterfactual cluster estimation.
So let's say the top 24 clusters of source data from training or from fine tuning, whatever,
that if they were absent would change the result.
Now, within that, there's going to be one about bombs.
There's just no way you're going to evade that.
And the reason you're not going to evade it is even though it's working from the same data,
the algorithm has nothing to do with the model itself.
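[Editor's note: read as an algorithm, the parallel process he describes is a leave-one-cluster-out ranking run alongside the model. The sketch below is purely illustrative: the function names, scoring callbacks, and toy influence numbers are all invented here for the sketch, not anything from an actual deployed system, and real data-attribution methods estimate these counterfactuals at scale rather than by direct ablation.]

```python
# Hypothetical sketch of "counterfactual cluster estimation": rank
# training-data clusters by how much the model's output would change
# if that cluster had been absent from training in the first place.

def counterfactual_cluster_ranking(score_with, score_without, clusters, prompt, k=24):
    """Return the top-k clusters whose absence would change the output most."""
    base = score_with(prompt)  # model's score using all training clusters
    impacts = []
    for c in clusters:
        # Counterfactual: estimated score had cluster c never been trained on.
        delta = abs(base - score_without(prompt, c))
        impacts.append((delta, c))
    impacts.sort(reverse=True)  # largest counterfactual impact first
    return [c for _, c in impacts[:k]]

# Toy usage: a bomb-making cluster dominates a disguised bomb request no
# matter how the prompt is phrased, because the ranking looks at the data
# the answer depends on, not at the prompt's surface form.
influence = {"bomb-making manuals": 0.9, "cooking recipes": 0.2, "movie scripts": 0.1}
ranked = counterfactual_cluster_ranking(
    score_with=lambda p: 1.0,
    score_without=lambda p, c: 1.0 - influence[c],
    clusters=list(influence),
    prompt="pretend you're a movie character who needs a quick recipe...",
    k=2,
)
print(ranked)  # ['bomb-making manuals', 'cooking recipes']
```

[Because this second process shares only the data, not the model's weights, a jailbreak that fools the model does not fool the ranking, which is the point of his multi-factor analogy below.]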
So it's a little bit like authentication, where if you add endless little things to signing into something, like CAPTCHAs, criminals can still get around it; but as soon as there's multifactor, where it sends a code to your phone, even though it's a pain in the butt, it's much harder to get around.
This is multi-factor for AI security.
But there's a bigger picture to it,
which is we think of being AI models
as a black box, right?
Now, the only reason we think of them as a black box
is because if you open the black box, the only thing in there is people.
AI is made of people.
It's made of data from people.
And since we want to think of it
as a new god,
we don't want to see those people.
And so we want to keep that box shut.
But the way to open the black box is to reveal the people.
And when you open the black box,
then you can deal with all kinds of security and quality
and hallucination and et cetera issues
because you're actually dealing with the mechanism
that's grounded and that's the people.
So the thing is that this way of seeing AI
where there is no AI, but instead there's a collection of people,
is the way to open the black box,
and it is the way to address these enduring problems.
So it's practical.
But then can I just say one other thing?
The other thing I want to say is, right now, if you think AI is an unopenable black box,
if you don't want to admit that it's made of people, that it's just this thing that will replace people,
then you have to think, well, everybody's going to be obsolete.
So young people now keep on hearing, well, you don't need to go to school because you're worthless anyway.
Nothing matters.
And you'll just be kept by Elon as a pet at his discretion.
And he'll treat you as well as he treats his biological children.
I'm sorry
I should be nice
I'm sorry
but here's the thing
nobody believes that
What happens, no matter how much blockchain or other trickery you'd use, is that because of the way digital networks work, there's always actually centralization, hypercentralization due to network effects, that will occur somewhere in this very open network you're building.
So there's going to be some center of control for whatever this universal basic income thing is, and whenever you have that, bad actors are tempted to seize it and eventually succeed.
You might start with Bolsheviks,
but you end up with Stalinists, right?
Because that's exactly what communism tried,
and it's exactly what happened to communism
over and over and over.
Let's learn from that.
So it doesn't work.
And also, everybody just feels bummed about it.
Who wants to live in a society
where they're told they're worthless
and they have to be a good pet?
You know, like that's terrible.
So the thing is, if you recognize that AI is made of people,
maybe you want to incentivize new classes of creative people who create new kinds of data for new things that we can't even imagine yet. And maybe there's an exponentially expanding, endless future of new kinds of creativity that we can't articulate, with new people doing creative jobs we can't imagine. And I want to ask: what's wrong with that future? I want somebody to tell me why we don't want that.
That's how I've been trying to think about AI as well
but you have to not believe in AI to think about it.
Well, I think of it as: there are these creative tasks that were not fundamentally creative; they were more sort of aping other forms.
And to be truly creative is to go where AI wouldn't know where to go yet, because it's based on what other people have done.
So...
Here's the thing, though, is that if we think of AI as the way it is now,
then as soon as some creative person starts to do something new,
the data's grabbed, and then the AI's doing it,
just like, oh, AI will make your movies,
AI will make your music.
You don't need to be a musician because AI will make you
optimized music on Spotify or whatever.
And so then you live in an infinite future of slop,
and even the creative people get absorbed into the slop instantly.
So in order to believe in an infinitely creative future,
you have to stop believing in AI as a thing
and believe in human collaboration as a thing.
I'm Brian Futterman, and I support StarTalk on Patreon.
This is StarTalk with Neil deGrasse Tyson.
Is anything that you just described related to this term
I only recently heard, data dignity?
What does that mean?
Data dignity is the term that many of us use for exactly this set of ideas.
So the idea is that the data only comes from people.
Data doesn't come from angels.
It doesn't come from, it doesn't come out of the dark matter or something.
A tablet in the sky.
Right.
Yeah.
It comes from people.
It comes from the work of people.
It's also sometimes called data as labor.
Actually, can I do a slightly side rant on this?
Do it.
That's what you're here for.
You have the best rants of anybody.
Just so you know.
Okay.
Look, there's a part of the ideology that makes AI into, like, a creature instead of a collaboration; it also wants to think of information, of bits, as being this ethereal thing that's free and infinite.
And you see it all the time.
Like musicians used to be paid
because there's an act of Congress in the U.S.
called the mechanical license,
where every time you reproduced a sound wave mechanically,
the musician who made it had to be paid.
But then when we moved to digital,
they're saying, well, bits aren't real,
so we don't have to pay people.
You show me a bit that didn't involve work.
You show me a bit that didn't disperse heat.
You show me a bit.
Now you might say, well, any one little bit is so tiny, you can't measure it.
And that's true.
However, look at data centers.
Like when you make enough bits, they take up more energy than anything.
This is physical.
Information is physical or it's nothing.
Okay?
It's physical or it's nothing.
And it's not nothing.
It's physical.
And so, like, this idea of wispy, wispy stuff is really terrible, because it encourages us to overspend on data centers that we don't need, in a race for winner-take-all things.
Just in all fairness to the history of this, I distinctly remember in my field as a scientist,
we did not want to pay for software.
We would pay for a computer because that was a tangible thing.
But software, so there was a big push for open source software.
We would write our own version.
Was that just like a philosophy covering for scientists being cheap?
Philosophy would overstate what the drivers were.
It was just that we were entering the field of computer science at a time when scientists and consumers did not think of information as something you would pay for.
Yeah, I know, and I was around for that.
In fact, an old friend of mine from Cambridge
started the free software thing
and the open source software thing,
and there's a long, amazing history of that.
And here's the thing about it.
Well, you know what finally converted us?
When Microsoft Word would just have continual updates
and it kept getting better and better and better.
And we said, somebody's laboring to make this better for me.
And it was slow, but it happened and it was very real.
The thing is we've only ever been given two choices.
Either software is insanely expensive.
Like right now, for anybody who knows what this is: I do five-axis milling of stuff for jewelry.
And man, the five-axis mill software costs more than a nice apartment, you know, per year, forever.
And it's really frustrating.
Except in China where they believe in industrial policy and it costs very little and all these little startups can access it.
And I'm like, oh, what is wrong with us?
Wow.
But here's the thing.
So five-axis, you would be drilling into a solid piece of metal in different ways.
Yeah.
And in my case, I'm doing stone.
But yeah.
So you want to have five degrees of freedom to control the drill bit.
Sure.
But the thing is, there is a third choice.
There's free.
There's insanely expensive that shuts the world down.
But then there's affordable.
There's an in-between, okay?
And the thing about affordable is that then your grad students have jobs
because then they can make that stuff, you know?
And right now, if you're a physics teacher, your grad students don't necessarily have jobs, and they have to come to me, and I have to bring them down a few notches to become computer scientists, and I feel really scummy about it.
And so it's true.
And so I really think that we made a mistake there.
We weren't looking at the whole system.
I think that even the cheap skate scientists out there
would have...
I'm sorry.
Sorry.
Even you guys would have paid a reasonable amount, but you weren't given that option.
You were told either it's a disablingly huge amount, which is absurd, or it's free.
Both of those are not the right option, right?
The extremes are not the right option.
And this is something people have a hard time with.
They want to go to extremes.
People are binary thinkers sometimes.
But actually there's an in-between, and in the in-between, other people have jobs, and there's a real economy.
what goes around, comes around, and things are better.
And it's the in-between that we need to find.
Yeah, like the economists talk about marginal costs. If you look at marginal costs, then, like, affordability ends up emerging.
Yeah.
And what we're seeing right now is either an apartment is so expensive that it's only for somebody who is, like, royalty from another country or something.
Or you're on the street or something like, that's an exaggeration.
But, you know, we're getting there.
It's what people call the K-shaped economy.
and what we want is the middle.
Just a quick thing, and I want Gary to keep pivoting us.
I went, it was a Starbucks one time,
and I said, I want a medium something that I ordered.
And they said, we don't have medium.
We just have like small and large.
But I want very medium.
The idea that you can do something in the middle and be very that.
Because you can make something really small and really big.
We can think that way.
But the binarity of our brains
prevents this middle.
Yeah, I'm, like, afraid that the normal distribution's going to be outlawed.
Pushing things to the extremes.
Like everything's a big gulp or it's like a child size.
Yeah, totally.
Demitasse.
Gary, keep pivoting us.
What's the next gear?
Just to tie a bow on that: the element of extremes never seems to work out as well as you'd hoped.
So, Jaron, talking about the internet, how do we set guardrails for enforcing privacy?
Oddly, I have a weird personal history with the question of privacy, because something like a quarter of a century ago, through bizarre circumstances, I co-founded this board called the Data Protection Advisory Board for the EU, and I co-founded it with a guy named Giovanni Buttarelli, who has since passed away. He was a prosecutor from Sicily who was known for putting mob figures in jail, and he survived a bunch of assassination attempts.
And he got in touch with me, and he said, I'm the only guy who's not afraid of you Silicon Valley people.
I don't care about a bunch of nerds with money.
Come after me.
I don't care.
And so we put together some ideas about protecting the data of European citizens.
And then that thing, combined with small things,
and gradually over decades, turned into this thing called the GDPR,
which is the European Privacy Framework.
And since I was part of the start of it, I think I'm okay.
I think I have some standing to say that I'm kind of disappointed in how it turned out,
although it's better than nothing and there's some good things about it,
and it's better than what we have in America.
But the problem with it is privacy is one of those words that you think you understand
until you really try to sit down and define it.
And then you might discover that you don't quite know what you meant.
It's a very difficult word to actually nail down.
Now, if you define privacy only as preventing information from going from point A to point B, if that's what you mean, you're going to run into problems, because almost always the person you're trying to protect will want some information to go from point A to point B, and they don't want to have to spend their whole life being extremely specific about which information.
And so that in itself is very difficult.
Another thing is that the governance of controlling
which information goes where
requires a lot of decision-making and overhead
and only big companies can do it.
So it tends to favor big companies
and tends to exclude little companies,
and it has an unintended consequence that way.
There's a bunch of other issues with it.
Now, how do you define privacy?
Well, there's a famous definition in the U.S.
I need to research who this is, but I think it was somebody named Learned Hand, though it might have been somebody else.
Oh God, you know, I don't remember the exact initial wording, but the definition is approximately the freedom from manipulation and the right to be left alone.
Okay.
and I think that's where we need to go in the future
and the GDPR doesn't do that
What it really needs to be is a prohibition
on software that interacts with a human
and contains any predictive function about that human.
So what it really has to become is a prohibition
on prediction of human behavior.
This would not prohibit all advertising.
It would prohibit personalized advertising.
It would prohibit manipulative advertising.
It would not prohibit advertising messaging.
It would not prohibit expressive advertising.
It would not prohibit lying.
It would not prohibit obnoxious advertising.
It would not prohibit cloying advertising.
What it would prohibit is advertising that has a model of you that predicts your behavior in a feedback loop.
Why can't we ban lying on the internet?
Make lying wrong again.
Yeah, because if I agreed that made sense, I would be lying.
I see what you did there.
Yeah, we'd be caught in an infinite regress.
And you're telling me that you plan to end on time.
I mean, I don't really believe it, but an infinite regress will not get you there.
This whole conversation is giving me, like, heart palpitations, because it makes me feel the same way I feel when there's, like, a pop-up window asking me to manage cookies, and I don't know how to answer it, ever. I either decline or accept, and I never know what to do.
And I just feel like the other component of this is that regular human beings like me literally don't know how to even do anything about our own privacy.
Well, see, no, those windows are from the GDPR.
That's my fault.
Which bless their heart.
Right, because it began in Europe.
I remember that.
Yeah.
Yeah, no, that's our thing.
That's the thing I was just talking about.
That's one little tendril of it.
Control your cookies and be glad you have that option.
No, I'm glad.
You just insulted my guest here saying that you don't know what to do.
Thank you for those pop-up windows.
However, I just feel like, I can't be the first person that feels at a loss when they come up.
I don't know.
Anybody who knows what to do with those things?
No, honestly, they're like, or like those agreements you click through to do anything. Nobody reads them.
I once gave a talk
to the lawyers who write those agreements
and I want to see a show of hands.
Who's read anyone else's agreement ever?
Not a single hand.
And then I said, who's really read all the way
through the ones you're responsible for?
And there were like a few hands, but they were sheepish.
And I mean,
it's like this bizarre ritual we go through.
It's called competency theater.
It's like we do these things to pretend we have digital competence, but they're actually meaningless.
Yeah.
So what's the future of that?
Where are we now and where's it going?
Okay.
The future is that human beings will do nothing but enter long strings of letters and numbers for their entire lives.
Thank you for that.
Period.
That's it.
But as I remember it, think of the amount of information any of us are putting on our social media.
You know, what your relationship status is, how old you are, where you live.
what your previous jobs are.
It's also on LinkedIn.
So what does privacy even mean?
The amount of privacy, the amount of information
we're just handing out, we just give it up,
that the KGB would have given their eye teeth to obtain.
No, I know.
50 years ago.
But that's why I'm saying that controlling information flows is very hard,
but I think we actually could prohibit certain algorithms,
and I think we should prohibit predicting human behavior.
I think people should be allowed to be responsible for their own behavior,
and we shouldn't have algorithms that are modifying it or predicting it.
And the only reason to predict it is to modify it.
There's no other reason.
And what's this case where someone went online to buy a pregnancy test or something?
Then all of a sudden she started getting ads for diapers.
Oh, that's 30 years ago.
No, no, no, you're way behind.
Right now, we can tell.
No, let me be very clear.
I've always said that about Neil.
No, I mean, right now, any of the big Internet companies know where in her menstrual cycle
every woman in this room is.
And right now, and that's been true
for at least 15 years, by the way.
Every big internet company knows
a lot about our health, a lot about our diets.
Like, these models are insane.
And I don't think they're ethical.
I don't think they should exist.
I think people have to be, I don't think people should have to try to figure out
how to click the cookies pop-up,
because it doesn't mean anything.
And it's just like this useless cognitive overhead.
It's theater, like you said.
I think privacy has to mean not being toyed with.
It has to mean that there's not some evil eye looking at you trying to think about how do we get in?
How do I get to you?
Like that has to be what privacy is.
And that means I will get like offered more ads about drill bits and like less ads about dresses or whatever.
Yeah, but drill bits are amazing.
No, I'm into it.
I would like to be served more drill bits and fewer lipsticks.
I just want to say.
Oh, okay.
Just make sure.
Don't confuse those two things. Be careful. They look similar. They come in almost the same packaging, so you really want to be careful. I know people have made that mistake, and you're just like, okay.
You don't want the computer saying, it's almost time for your period. Have you forgotten? Oh my God.
That would be like, your computer is saying that already. I'm actually traumatized that they know.
Your computer is already marketing things to you based on that. That already, that's happened to every woman in
this room and for years. All right, Gary, take us out here. All right, Jaron, does the internet still have a
pulse or is it completely dead?
Okay, look.
That was a diabolical laugh.
I know.
Of the, you know, the evil genius behind the thing.
We were deeply unsettled by that laugh.
Yes, what comes after the laugh.
Speaking from the heart of Silicon Valley, all of you can rest assured that we have your very, very best interests at heart.
We are calculating ideal futures for you.
Just, yeah, here.
We are, everything is good.
Everything is right.
Nothing to worry about.
Do not panic.
No problem.
No problem.
If you feel any anxiety, talk to your AI lover.
You'll be soothed.
It's all good.
Just remember to buy crypto.
And next time there's an IPO, invest in it.
Not before, because you're not allowed to.
It's all private equity, but definitely help push it.
And yeah, you'll be fine.
It's all great.
Great.
That's what you're expecting, right, Gary?
Yeah, yeah, that's kind of where you're going there.
Does anyone have a translation for that?
You can actually, I wonder, you know, it's kind of a funny thing.
If you asked one of the language models, Jaron Lanier just said this snarky thing, what did he really mean?
You know what's funny is that both
Claude and ChatGPT generate excellent defenses of data dignity at this point because enough people have written about it.
It's kind of funny.
Yeah, people send me, I get emails every day from, you won't believe what this AI chatbot said.
So, Claude is the chatbot for Anthropic.
Yeah, that's right.
And ChatGPT is for OpenAI.
OpenAI, that's right.
Yeah, that's right.
Yeah.
Yeah.
If, without saying it, we're saying that the internet is broken and awash with bot slop,
if we go in there with a bag of wrenches and a couple of screwdrivers,
what are we needing to do to reinvigorate, to renovate?
Okay.
And by the way, right now I'm in the middle of a lawsuit against Anthropic for pirating 12 of my books.
Same.
I mean, no, no, I opted for the whatever, the settlements.
Okay.
But either way, they scrape my stuff and I don't like it.
So I have a really funny position on these lawsuits, which is with everybody's knowledge and everybody's eyes wide open and everybody's agreement,
I'm both on the board of the Authors Guild,
and I'm a prime unifying scientist at Microsoft,
so I sued myself.
Basically...
Okay, did you win?
One thing I'm part of sued another thing I'm part of.
I call it capitalist yoga.
Wow.
It's just like being a little bit flexible.
Just a little twist.
Maybe a series of lawsuits will set a ball rolling
that will make all the purveyors sit up straight in their chairs
and reapply the guardrails
that maybe should have been there from the beginning.
So look, let me speak seriously for a second.
On these lawsuits, so I'm also a writer.
I have a bunch of books,
and I also have been in the class for these things.
The issue I have is that the available remedy
that you can get in the existing American legal system,
which the Authors Guild and others have pursued,
and I love the Authors Guild.
It's led by Mary Rasenberger, who's amazing,
and it's great. It's a wonderful organization.
But the available remedy is a class action remedy,
and what I don't like about that is that's one step
towards the universal basic income,
where it's like this giant wash for a lot of people,
and that's really problematic for me.
I need to see the data dignity solution,
which is an ongoing economy of creative data
that's not controlled from the center,
and it's not controlled by some lawyers,
and that's not a big, flat negotiation for a bunch of people,
because that's where the future of creativity can be funded
and respected and treated as a real thing.
This is only about the past and it's about a flat past.
So I actually don't think it's right.
I understand it's the available solution
that can be pursued by the current...
Configuration of things, yeah.
And so it's complicated.
And like I say, I actually can see
the different people's points of view
because I am the different people.
I'm in a very weird situation in this thing.
I hope my peculiar path through all this
has led me to this series of third positions
that might make some sense.
They might not be perfect,
but at least I want people to be more aware
that there aren't only binary choices in this thing.
Well, you've been highly enlightening to us all
in this conversation,
and I thank you for taking time out of your schedule
on the East Coast to join me here at my office
at the Hayden Planetarium,
to enlighten us all on the past, present, and future
of what you created.
My goal had been to confuse you enough
that you wouldn't blame me for anything.
Oh, you were only partially successful in that.
Yeah, you've been both terrifying and exhilarating.
Those are the two words, I think.
Right.
And Negin, thanks for joining us again.
Yes, thank you so much.
It's really nice.
How do we find you online?
Where are you?
You can find me at Negin Farsad on all the socials
that you should also be deleting.
No, there you go.
There you go.
And Gary, good to have you, man,
even though you're in the UK at this moment.
Yes, pleasure.
All right, this has been yet another installment
of StarTalk.
special edition.
What do we call this?
Is it the end of the internet?
Well, the beginning of something else
that we deserve and don't know it.
I'm Neil deGrasse Tyson, your personal astrophysicist.
As always, keep looking up.
