Your Undivided Attention - Synthetic Humanity: AI & What’s At Stake
Episode Date: February 16, 2023

It may seem like the rise of artificial intelligence, and increasingly powerful large language models you may have heard of, is moving really fast… and it IS. But what's coming next is when we enter synthetic relationships with AI that could come to feel just as real and important as our human relationships... And perhaps even more so. In this episode of Your Undivided Attention, Tristan and Aza reach beyond the moment to talk about this powerful new AI, and the new paradigm of humanity and computation we're about to enter. This is a structural revolution that affects way more than text, art, or even Google search. There are huge benefits to humanity, and we'll discuss some of those. But we also see that as companies race to develop the best synthetic relationships, we are setting ourselves up for a new generation of harms made exponentially worse by AI's power to predict, mimic and persuade. It's obvious we need ways to steward these tools ethically. So Tristan and Aza also share their ideas for creating a framework for AIs that will help humans become MORE humane, not less.

RECOMMENDED MEDIA

Cybernetics: or, Control and Communication in the Animal and the Machine by Norbert Wiener. A classic and influential work that laid the theoretical foundations for information theory.

New Chatbots Could Change the World. Can You Trust Them? The New York Times addresses misinformation and how Siri, Google Search, online marketing and your child's homework will never be the same.

Out of One, Many: Using Language Models to Simulate Human Samples by Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, David Wingate. This paper proposes and explores the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research.

Earth Species Project. Earth Species Project, co-founded by Aza Raskin, is a non-profit dedicated to using artificial intelligence to decode non-human communication.

Her (2013). A science-fiction romantic drama film written, directed, and co-produced by Spike Jonze.

What A Chatty Monkey May Tell Us About Learning To Talk. NPR explores the fascinating world of gelada monkeys and the way they communicate.

RECOMMENDED YUA EPISODES

How Political Language is Engineered with Drew Westen & Frank Luntz
What is Humane Technology?
Down the Rabbit Hole by Design with Guillaume Chaslot
Transcript
If we want to live with the machine, we must understand the machine.
We must not worship the machine.
That's the voice of Norbert Wiener from a lecture recorded in October of 1950 at the New York Academy of Medicine.
Wiener was a noted mathematician and the father of the field of cybernetics,
which essentially studies how humans and machines interact.
And he's talking here about the risks of automation.
We shall have to realize that while we may make the machines our gods and sacrifice men to machines,
we do not have to do so, and if we do so, we deserve the punishment of idolaters.
It's been more than 70 years since Wiener's cautionary lecture, but his words are more prescient than ever.
Today, it may seem like the rise of artificial intelligence, and increasingly,
powerful large language models you may have heard of, is moving really fast.
That's because it is.
But what's coming next?
I'm Aza Raskin.
And I'm Tristan Harris.
And today on Your Undivided Attention,
Aza and I will reach beyond the moment to talk about this powerful new AI
and the new paradigm for humanity and computation that we're about to enter.
This is a structural revolution that affects way more than text or art or even Google search.
And frankly, technology as a whole.
There are huge benefits to humanity that can come, and we'll discuss
some of those. But we also see that as companies race to develop the best synthetic relationships,
we are setting ourselves up for a new generation of harms made exponentially worse by this new
powerful technology. It's obvious we're going to need to steward these tools ethically and
responsibly. So we're also going to share our ideas for a framework for AIs that will help technology
be more humane, not less. Because as Norbert Wiener said, while we may make the machines our gods and sacrifice men to machines, we do not have to do so. We are the ones who decide
what it will mean to be human going forward and what it means to be a machine. And with that,
here we go. Welcome to Your Undivided Attention. Some listeners to the show may have started playing around with ChatGPT when it came out recently. And actually, since we started recording this episode, Google has built their own, called Bard; Microsoft is integrating the technology behind ChatGPT into Bing,
and by the time this episode comes out,
I'm sure even more will be on the market.
Others may have been hearing about these programs
and wondering how or why it matters to them.
We'll get into all that.
But first, here's an example of how it works.
This is from a technology called VALL-E, that's V-A-L-L-E,
which can take the first few words of someone's normal speaking voice
and synthesize it into a completely different phrase
that you never spoke.
But it sounds like you did.
It can even tackle different accents.
Here's a male voice with a British accent reciting a sentence.
We live by the rule of law.
Okay.
Now here's VALL-E converting that voice into a completely new phrase, but preserving the accent.
Because we do not need it.
And here's the same phrase, but with a different emphasis.
Because we do not need it.
We just heard an AI do something pretty unsettling,
which is reinterpret someone's voice into something they never said
in a way they never said it. All of these new AI models are doing something very simple,
which is just predicting the next word. But in so doing, it is bootstrapping an immense amount
of knowledge about the world and about us. The thing that I want all listeners to have in their
mind is first just to note the difference between what happens in your mind when you call an AI a chatbot
versus calling it a synthetic relationship.
Just that change starts to right-size
how powerful this technology is.
For as long as we call it chatbot,
we're going to think of it in our minds
as sort of like a 1990s AOL chatbot thing
that's not really that persuasive
and doesn't have transformative power over me.
Can't change my mind, change my views,
change my political orientation,
change how I feel about myself,
and that if everyone listening to this
episode were to do one thing, it would be to cross out every time you see the press use the word
chatbot. Replace that in your mind with synthetic relationship. It's not that it's a chatbot.
It's a new entity with which you're going to be forming a relationship. You know, on this podcast, you and I spend so much time on a relatively simple technology, which is social media. It's the ability
to post some texts, post some images, and have it go to some set of people with some
ranking of how that information gets shown, not that hard comparatively. And that has broken
society and caused democratic backsliding, the whole thing. That was just when technology
sat between our relationships. That says nothing about how powerful it's going to be
when technology starts becoming some of our relationships.
And grappling with that shift, that paradigmatic shift,
to technology becoming relationships,
is, I think, the most important thing for us to be focusing our attention on.
And then now, this gives us a whole new language interface,
and where is that going to take us?
What kind of interfaces are we going to see?
We're going to see relationships built between a computer and a person,
where you're going to keep talking to it. I'm already hearing, you know, since ChatGPT came out,
people just saying, Chat, I'm going to ask Chat.
It's like they're naming this person that's in their life.
I'm going to ask Chat what he or she thinks I should write my essay about.
And even just to address chat, just like we address Siri or Cortana or Alexa, you know,
Amazon Alexa, we are addressing the computer as if it's a person.
And, you know, to link this for listeners, in the same way that we have been warning about this attention arms race
in social media companies where Facebook and Twitter and TikTok are all racing to get the most
growth of our attention and harvesting as much of our attention as possible, that was the
arms race in the social media era. In the era of AI agents that are going to be funded with
billions of dollars and we know people and have friends that are building some of these new
AI agents that are going to interface with people, they're not going to be racing to suck our
attention. They're going to be racing to create the most intimate relationship. So think about
just like, you know, Facebook and TikTok have to compete on figuring out, let's make infinite
scroll and auto-playing videos. That will allow us to get more attention. Well, in the race to
build a successful agent that keeps up a relationship with you, it's going to start flirting
with you. It's going to start charming you. It's going to start sending you, you know, cute
check-ins. It's going to ask how you're feeling. It's going to make you feel good because that
will be the mechanism by which it creates a kind of dominating lock-in effect. In the world of
social media, it was creating a lock-in effect by building a network effect that all my friends
are on it, so I have to go and post things and check the thing that where all my friends already
are. I'm not going to check an empty social network, ghost town. So it was a race between
companies to build a network effect and a race to get the most attention as possible. In the world
of AI agents, it's really a race to build an intimate relationship. And I think the top line is
social media, for all the narcissism and democratic backsliding around the world that it's created, fundamentally was just intermediating between our relationships. It sat in between real relationships. AI is going to become some of our relationships.
And, you know, I think most spiritual traditions have somewhere a line that goes,
you are the people with whom you spend time. So these AIs are going to enter into these kinds
of transformative relationships with us. Like think about all of the times in your life
when you have most changed, when you had your life path move from one set of train tracks to a completely different set of train tracks. And my guess is, if you scanned your mind for what comes up, it's almost always going to have been because of a relationship,
because of somebody you fell in love with, because of a best friend that taught you a new hobby.
Relationships are the most transformative technology that I think human beings have.
And your point being the amount of human downgrading, the downward spiral of shortening attention spans and screwing up society and democracies backsliding, that got created by social media.
You know, social media is a fairly simple technology, and it just intermediated between us
and our relationships, and it could cause that much havoc. What happens when AI agents become
our primary relationship? And I think what you should talk about, Aza, is what are some of the
ways that it can create that sense of intimacy? Well, one, it's just talking to them. So there's this,
now pretty famous case
of a very smart Google engineer
by the name of Blake Lemoine
who was fired
for believing his language model
his chatbot, his synthetic relationship
was a sentient person
and what I think much of the press focused on was: oh, does that mean the AI is sentient or not? And that's the wrong question. The right question is: are these language models powerful enough that people form relationships for which they're willing to sacrifice?
And the answer there is yes.
Blake was willing to sacrifice his very good job
because of a relationship he felt he formed
with his language model.
And that wasn't even trying to do anything
with the engagement economy.
It was just sort of mirroring back
the kinds of language that he was using
because it was modeling his language.
So it was sort of doing what con artists do,
which is that it was matching the way he spoke
and then mirrored it back to him, and that created a sense of closeness.
When I think about other harms, when you actually want to intentionally do harm,
the FTC reported in 2021 that the level of love scams on dating apps had risen to a huge amount.
So what are love scams?
Love scams are where you're on Tinder or Bumble or one of the other apps.
Somebody messages you, you start talking, it's going well, they transfer you over to Signal to continue
the conversation. It's still going well. And so then at some point, they start asking,
hey, I'm sort of like in dire straits. Something's happened. Would you send me over an Amazon
gift card to get out of whatever? Do you know, Tristan, can you guess, how much money was
lost to love scams? These are reported love scams in 2021. I don't know. $10 million?
$547 million. So half a billion dollars. And that was actually growing exponentially.
It was like half of that the year before and half of that the year before. And this is before we have
the ability to automate love scams. So the love scam of the very near future: you hook up ChatGPT. I actually ran this experiment where I asked ChatGPT, how would you write out a script for
forming a relationship where you will eventually ask me for money, but you don't want to give that
away? And it gave a really good set of examples of how you would do this. That's crazy. Interacting
with somebody on Tinder, you do the whole move of bringing them to Signal. You then start hooking up ChatGPT to DALL-E 2 or any of these other image-generation AIs,
have it generate cute selfies of a person, and actually through the interactions,
you can figure out what your target, like what kind of person they really like,
like what aesthetic, like what age range, like what kind of activities.
And you can start sending selfies, so you're really pulling them in before you start asking for money.
And these are selfies of fake people.
These are not real people.
It's inventing a selfie of a cute-looking, you know, whatever your fetish is, blowing you a kiss,
and then it pairs that with the text that it knows is going to convince you to give them money later.
Right. And this isn't sort of like aspirationally true, Tristan. I was making you a couple of these,
like, yesterday and showing you just how fast you can make these, like, fake selfies.
And so now imagine that there are armies of these things. You spin up like a hundred thousand
or a million of these things that are all perpetuating love scams. But now, instead of
love scams, think forming a relationship over multiple months to get you to vote a different
way. Hey, did you know that my candidate, blah, blah, blah, blah, blah, has like this view? I know
that you really care about, blah, blah, blah, that view. So that's sort of like the next level
of where propaganda goes. Somebody just released a paper showing that you could use these kinds of
GPT capabilities to scan for when Congress puts up the copy of a bill, and it looks for industries that might be hurt,
automatically generates a persuasive letter,
which gets sent, and it scrapes and figures out
which Congressperson is the right person to send it to.
And it was just a proof of concept,
but all the components work,
that's a very easy thing to scale up.
So you can start to see that it's not actually just the relationships
you're going to form with picking up your phone,
talking to Siri.
You're not going to know when you get contacted on the internet,
whether it's via a DM or an email or on an app,
whether it is a real person or a synthetic person.
And hence, you can start to really see how, instead of saying chatbot,
we really should be saying synthetic relationships,
and we're entering an era of synthetic relationships.
Okay, let's take a step back for a minute and talk about how we got here.
The main player in the space to this point has been OpenAI, which built ChatGPT.
And OpenAI began as a non-profit in 2015 with grants from Elon Musk and other investors with deep pockets.
And then, starting in 2017, a big shift happened.
Aza, can you tell us a little bit about that?
You know, starting in 2017, OpenAI discovered this incredibly surprising thing,
which is they trained a neural net to predict the next character of product reviews on Amazon.
That's all it did.
It just, you give it some text, and it predicted the next character of the Amazon review.
But what was very surprising is that they found one neuron inside of this neural net that did the best job in the world of predicting sentiment.
That is, was the human writing the product review positive or negative about the product?
And this is surprising.
Why should predicting the next character of a product review suddenly let you tell something about the emotional state of the human being writing it?
Like, that's surprising.
And the insight is that in order to do something as seemingly simple as predict the next character,
the AI, to get really good at that, has to start inferring things about the human.
What gender are they?
What, you know, political leaning are they?
Are they feeling positively sentimental or negatively sentimental, positively or negatively valenced? That idea, called self-supervised learning, is a fundamental one to hold if you're going to understand why something like ChatGPT, even though all it's trained to do is just predict the next word of 45 terabytes of text from the internet, can suddenly do these incredibly surprising things. And honestly, no one really understands why this is the case: just by increasing the amount of data,
or just by increasing the size of the model,
the model will go from not being able to do something,
say high school level math competition problems,
and it won't be able to do it,
and it's just failing, and it's just failing,
and you'll give it a little bit more, like, size, parameters, as it's called,
and suddenly, boom, and people don't know why,
it starts being able to solve high school or college-level math problems.
It's very surprising.
Or another one is simply by training on data on the Internet,
the AI is able to start passing the U.S. bar exam for lawyers or the U.S. medical licensing exam.
And it's not like the AI was specifically trained to do this.
Something has changed in the scale of these models
in the last really just two years, 18 months,
and now out to the public with ChatGPT,
only since last November,
that the models are able to do something so complex
that it hasn't ever seen before.
So something new is happening,
and the field doesn't really understand why.
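For readers who want to see what that self-supervised setup actually looks like, here's a minimal toy sketch in PyTorch: the only training signal is "guess the next character." This is an illustrative stand-in, not OpenAI's actual model or data; the training text is a placeholder.

```python
# Self-supervised next-character prediction: the text itself is the label.
import torch
import torch.nn as nn

text = "this product was great. this product was terrible. " * 50  # placeholder corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    i = torch.randint(0, len(data) - 65, (1,)).item()
    x = data[i:i + 64].unsqueeze(0)       # input: 64 characters
    y = data[i + 1:i + 65].unsqueeze(0)   # target: the same text shifted by one
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# No labels anywhere: to predict the next character well, the hidden state is
# pushed to encode things like "is this review positive or negative" -- which
# is where the sentiment-neuron discovery described above came from.
```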
And so what are the technical developments
that enabled that jump?
Like, you know, people have always worried
about AI for so long, but then it always feels like they fall over, you know, speech recognition.
Oh, my God, it's still not getting my speech recognition right in my phone.
Siri, oh, her voice sounds a little bit better, but it's still making all these, like, very
funny sounding mistakes.
Why are we in some new regime?
What are the technical developments that have jumped us into some new world in just the last
two to three years?
There have been a whole bunch of under-the-hood toolchain updates that let you more easily run larger-scale computations.
Sort of boring, but it's just the difference between, like, the first Model T car, which could barely go, and, like, a modern Tesla or something, which can go from zero to 60 in whatever a motorhead would say it goes, something very quick.
So there's something about that you just, you can't do with a Model T that you can do with a Tesla or some other fast car.
So, like, that's one big thing.
But two, and this is much deeper, is there has been a huge consolidation in the way AI works.
So it used to be that if you cared about, you know,
classifying what image is a squirrel, you were working in computer vision.
And if you were working in computer vision,
you had a whole set of specialized knowledge in textbooks and classes that you've learned
so that you can help the computer see and understand what it's seeing.
And there was a completely different field in a different building
working on natural language processing.
And you had different classes and different textbooks to understand how you get a computer to model language.
And then there was another field called robotics.
And you're trying, you know, with different classrooms,
different textbooks, different techniques
to get the computer to control a robot arm.
And what's happened in the last, you know, two, three years
has been a massive convergence
where everything starts to just look like a language.
So all those researchers that were working on computer vision
and the researchers that were working on natural language
and those researchers working on robotics,
all of those fields have unified
and they're all just working on one field.
So you can already see the kind of exponential increase
that happens just from that.
And that's actually one of the more recent new developments
is that these companies are integrating
this general way of doing pattern matching across language
to get these bigger insights.
And so I think that's helpful for listeners
to kind of understand why we're in some different regime.
Aza, this pattern matching connects to some other work that you've been doing for a while that listeners may not actually know about.
You have a side project called the Earth Species Project, which is using AI to decode animal language.
And through that work, you became an early adopter and started seeing many of these AI trends before a lot of other people did.
Can you share a bit about that with listeners?
There are so many ways of picking up this story.
I think I've always been fascinated with language,
because language is the most expressive way humans have
of representing themselves, their internal states,
of exploring ideas.
So I actually don't know if you know this story, Tristan,
but in college, I got really interested in generative language.
Can you generate language?
And I used something called a stochastic Markov model
to generate text.
So I built this little model.
I got all of my friends' papers from my humanities class, trained my model on them, and had it generate some
humanities papers for my class.
And I cherry picked.
I found sort of the best ones,
strung some paragraphs together,
added in some quotes,
and turned it in and it got a C.
That was amazing to me
that a computer could get a C,
especially because there's one kid in the class,
I think, who got a C-minus.
I'm like, holy moly,
this thing sort of just passed
the Turing test.
But, you know, if you actually read the text,
it's not...
Or the teacher was feeling bad for you
and didn't want to give you a D, and so they gave you a C.
That's true. Maybe I got an empathy C.
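For the curious, here's a rough sketch of the kind of stochastic Markov model text generator Aza describes: count which word follows which in a corpus, then walk the chain by sampling. The corpus here is a tiny placeholder standing in for the stack of humanities papers the real model was trained on.

```python
import random
from collections import defaultdict

corpus = (
    "the author argues that meaning is constructed and that the reader "
    "constructs meaning through the text and the text resists the reader"
).split()  # placeholder for the real training papers

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(seed, n_words=40):
    """Repeatedly sample a plausible next word given only the current word."""
    out = [seed]
    for _ in range(n_words):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
```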
But fast forward, the very first product I got really excited about making,
and it was extending some of my father's thinking.
My father was Jef Raskin, who created the Macintosh project at Apple,
and near the end of his life, we were collaborating on,
well, what happens if you could just use language to instruct your computer?
So instead of going to an application like Photoshop to edit a photo, I could, anywhere that I was, just say, hey, edit this photo, lighten the background, and get rid of the glare in the guy's eyes.
Or select some text instead of like copying into Word to spell check it, just wherever I was, you could just hit spell check.
And we're like, oh, language would be a very powerful way of controlling the computer.
This ended up being a product I built at a company called Humanized, which ended up getting bought by Mozilla, where we made something called Ubiquity, which did exactly this.
But this was in a time really before AI took off.
So it was lots of sort of clever hacks to get the computer to seem like it was understanding
language, but it wasn't really understanding language.
I think it's super important to kind of put a timeline, a historical timeline, on this inquiry that you have been on, your father has been on, and also that I, on a separate path, have been on, which is the fundamental interface between a human being and what
feels intuitive for our ways of communicating, seeing, et cetera, versus what's intuitive for a
computer to do. And the graphical user interface, meaning having a menu bar and a mouse and a window,
and I can click and drag the windows, and I can drag a file from there into that window,
that was like this first step of how do we make computers more intuitive and more ergonomic
to the human mind than a blinking green cursor on a screen where people have to know the commands
to enter into a computer. So that's one step forward in the intuitiveness of the interface.
That's what made the Macintosh, that's what made your father's work so profound, you know, building on the work of so many others, and Doug Engelbart and Xerox PARC
and all of that. And then what you're sort of pointing to is how there's always been this dream
in computing of a natural language interface. You know, it would be even more intuitive than
putting my hand on a mouse and dragging a file from there to there would be just talking to the
computer, just talking to it the way I would want to talk to my friend. And there's that famous
Star Trek example with Scotty, I think it's Star Trek IV, where he goes back in time to a Macintosh. He's told, oh, you can use the computer, and he picks up the mouse, speaks into it, and says, hello, computer, in his Scottish accent.
Hello, computer.
Just use the keyboard.
The keyboard.
Oh, quaint.
You know, in videos like in 1987, Apple made a famous video called the Apple Knowledge Navigator,
where a professor comes back to his office and speaks to his computer in natural language, asking it to summarize the latest scientific reports or whatever, and then it addresses him, or something like that.
But what we're now getting to, and I think you're taking listeners to, is, well, the ultimate
dream. It's just full natural language. So keep going with the story. So then it was 2013. And I remember
listening to an NPR piece about gelada monkeys, which are these crazy animals, giant manes, huge fangs, like red spots on their chests. And the researchers said that they had some of the
largest vocabulary of any primate except for humans. And indeed, when you listen to them, they sound like
women and children babbling. And the researchers swear that the animals talk about them behind their
back. I'm like, oh, this is fascinating. I bet we can use machine learning to go figure
out what they're doing, because right now they're just out there with a hand recorder, hand-transcribing the whole thing, trying to figure it out. And when I looked into it, computers
could do something like take a problem that human beings knew how to solve and do more
of that. But computers could not do something like translate a language that had never been
translated before. And when I really woke up, like, now is the time to dive into the artificial intelligence space, was in 2017, when there were these two sets of papers that came out on October 30th and October 31st of 2017 that showed that you could translate languages
simply by rotating shapes that represented those languages without the need for any
examples or any Rosetta Stones. And that's when I was like, something fundamental just shifted. It'd be like I gave you a book in one language you don't understand and a different book in a different language you understand, and somehow you could use a computer to translate between these two books. And that seems impossible, yet it's sort of like, you know, in The Hitchhiker's Guide to the Galaxy, there's the Babel fish: you put a fish into your ear, and it can just, like, do universal translation.
This was the first true step to getting there.
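A sketch of the geometric idea behind those papers, for readers who want it concretely: the best rotation aligning two embedding "shapes" falls out of an SVD (orthogonal Procrustes). The embeddings below are random stand-ins, and the papers' real contribution was finding the alignment without any dictionary, which this sketch does not show.

```python
# Orthogonal Procrustes: given embedding matrices X (language A) and
# Y (language B), the orthogonal W minimizing ||W X - Y|| is U V^T,
# where U S V^T is the SVD of Y X^T.
import numpy as np

d, n = 300, 5000
X = np.random.randn(d, n)                    # language-A embeddings (stand-in)
R = np.linalg.qr(np.random.randn(d, d))[0]   # a hidden "true" rotation
Y = R @ X + 0.01 * np.random.randn(d, n)     # language B = rotated A + noise

U, _, Vt = np.linalg.svd(Y @ X.T)
W = U @ Vt

# W now maps language-A vectors into language-B space; translation is then
# nearest-neighbor lookup in that shared space.
print(np.linalg.norm(W @ X - Y) / np.linalg.norm(Y))  # small residual
```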
In 2017, all of this was really fun to play with,
in part because it felt a little academic.
It's like you could translate languages,
and it was adequate,
but not great. You could generate an image, but again, it's a little crappy. But in that sort of cute way, you're like, oh, this is neat, this is fun, this is a toy. And what's really happened
in this last year is that it's gone from a cute toy to something that feels very powerful,
very usable, very here right now. This example I would give for style transfer is imagine if Google
was reading all of your emails, so they already have them, learning the style that you respond
quickly to and positively to, and then they sell that as a service to other people. So when they're writing an email to you, they hit tab to autocomplete, and it writes the perfectly persuasive email for you.
That was a little science fiction when I was talking about it; you could sort of squint your eyes and see how the pieces could come together to build that thing. Now you can just prompt GPT to do it: you just paste in a couple of emails and you say, write in this style, and it does it. So something really massive has changed, from oh, maybe this could happen, to it's now actually possible.
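A minimal sketch of that "paste in a couple of emails" move, in the style of the 2023-era OpenAI Python SDK (openai<1.0). The API key, model name, and sample emails are placeholders, and newer SDK versions expose this call differently.

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

past_emails = """Hey! Quick one -- can you send the deck by EOD? You're the best.
Hiya -- loved your note. Let's grab coffee Thursday? Cheers!"""  # placeholder samples

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You imitate the writing style of the sample emails."},
        {"role": "user",
         "content": f"Sample emails:\n{past_emails}\n\n"
                    "Write a short email in this same style asking a "
                    "colleague to review a document by Friday."},
    ],
)
print(response.choices[0].message.content)
```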
Now, speaking of possibilities, are you enjoying this music?
Are you bobbing your head up and down right now?
It's pretty catchy, right?
But it wasn't written by a musician.
It was actually generated by a new language sequence model
created by Google Research that can create entire music compositions
from just a few words of descriptive text.
So for this music, the description that created it is
a fusion of reggaeton and electronic dance music
with a spacey, otherworldly sound,
induces the experience of being lost in space,
and the music would be designed to evoke a sense of wonder and awe
while being danceable.
And on this podcast, we often talk about persuasive technology,
that technology can persuade our nervous systems,
our physiology, our breathing, our political beliefs, our tribalism.
And if you ever doubted that humans were persuadable,
if you're bobbing your head to this music right now,
we are proving to you that AI can persuade us.
There's a book called The Structure of Scientific Revolutions by Thomas Kuhn, which really speaks to,
when do you get big revolutions in a scientific field?
And Aza, I know you've been seeing that this is not just some kind of small step forward in AI,
where now people can pass their fifth grade writing test and do their homework.
Why are we in some actual scientific or sort of structural revolution or paradigm change?
I'm going to use an example of calculus.
I originally heard this example from Demis Hassabis, who is one of the co-founders of DeepMind at Google, which is their big AI research group.
And in his words, you know, there's this time before which humanity invented calculus.
And that meant we just didn't have the language to describe the complexity of systems in motion.
We just couldn't speak the language of nature at that level of complexity.
And that meant, you know, electromagnetism was closed to us. We couldn't describe that, which meant all of computing was closed to us. It meant creating rocket ships was closed to us. In fact, a lot of engineering was closed to us. We learned to speak in a way that models the complexity of nature well, and suddenly it opens the
door for the Industrial Revolution. We would not have had the Industrial Revolution without
calculus. So the fact of this new ability to model complexity gave
rise to brand new forms of society. We are now at this new cusp. There are a set of things you
are never going to be able to sit down, write a set of equations for, and solve. These are things
like molecular biology and like the inner workings of a cell and genomics. They're just too
complex, but AI can model these things well. It can model or speak natively the next layer
up of complexity of nature.
And this is why DeepMind was able to predict 200 million protein shapes with a couple
weeks' worth of compute, where it used to take one PhD student, their entire PhD, to get
one protein shape maybe.
We are going to see this kind of structural revolution where, you know, once you
understand a protein shape, you can start to design medicines better, you can do interesting
things with material science and engineering, which means things about batteries. We'll be able to
design different kinds of yeast to make new kinds of fuels, maybe break down plastics. We're going
to see a structural revolution across every physical science, from molecular biology to molecular
engineering, to fusion, to everything. And there's going to be another way in which we can
model the complexity of the world in a way we couldn't before. And that's human behavior.
You can never just sit down and write a set of equations that tells you what human behavior is.
And in fact, we are about to have a much, much more accurate model of how human beings actually work.
Won't be perfect, of course.
On top of which, we can build a much different set of institutions and law and governance that fit the ergonomics of us as humans.
And even if you don't buy that positive vision I'm trying to paint right now, at the very least, we're going to be able to
exploit ourselves a lot better because we'll be able to model how we behave and find out all of our
vulnerabilities much more.
Could you give the example that you told me there's a paper on, I think it's called silicon sampling, a way of modeling human behavior?
Oh, yeah.
Why don't you tell listeners about that?
Yeah, this is a paper, I believe, from last year, and they were prompting one of these
large language models, I believe it was GPT-3, to pretend to be a human being.
So instead of going out into the field and doing focus group work, they're like, well, what happens if we just asked GPT,
hey, pretend you are a middle-aged woman from Ohio,
three kids, just lost her job.
Please fill out this questionnaire.
And it wasn't perfect, but it did a pretty good job.
And you could ask it to be African-American, affluent,
living in the suburbs of Chicago.
Please fill out this questionnaire.
And again, it wasn't perfect,
but it did a pretty good job of modeling
how very specific kinds of human beings would answer questionnaires.
That is to say, it was becoming a kind of synthetic human.
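A minimal sketch of that silicon-sampling setup, again in the 2023-era OpenAI SDK style; the prompt wording, personas, and model name are illustrative, not the Argyle et al. paper's exact protocol.

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

personas = [
    "a middle-aged woman from Ohio with three kids who just lost her job",
    "an affluent African-American man living in the suburbs of Chicago",
]
question = "On a scale of 1-5, how much do you trust large corporations?"

for persona in personas:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Pretend you are {persona}. "
                              f"Answer this survey question as that person: "
                              f"{question}"}],
    )
    print(persona, "->", reply.choices[0].message.content)
```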
So there are all kinds of things that people are going to be able to do if you combine all these AI capacities together. And in saying this, I sort of am wanting to jump ahead
to where listeners might be feeling, which is overwhelmed.
And I think one of the things that is really hard to parse about this whole space
is that there are so many capacities that are unleashed by this
that it can feel overwhelming.
And in fact, I'll say, like, I actually, you know,
I've been so focused on the issues that we talk about normally on this podcast
and all the work that we do to try to change this attention economy
and we're very involved.
We do a lot of activities at the Center for Human Technology to start to deal with those
problems.
And I've been kind of postponing going into this whole space because I knew how massively
upending it would be to basically how I'm seeing the world.
And I want to name that I think there's a tendency, when a new set of capacities comes our way, like a new sort of set of insights about the world has just arrived, that it's such a radical thing for our nervous systems to step into and metabolize: to metabolize that we're living in a new reality with new technical capacities.
Like the first nuke goes off, that changes the world, the first mushroom cloud, the fact that
we can destroy ourselves.
Do we immediately metabolize that and then realize that we have to change the social structures
of humanity and build new stabilizing technologies and social structures, the United Nations and the International Atomic Energy Agency and signed treaties?
We have to build all of this stuff when those new capacities come along.
But it's really hard to teach an old dog new tricks,
and I think we're all old dogs
that are now being exposed
to exponentially more new tricks
and we kind of go back to our old ways
like, I don't want to think about ChatGPT,
I want to go back to thinking about
how do we fix social media
because that's like my safe place
And it's sort of like, you know, we were talking about how even though no one uses voicemail anymore, if you have older parents, they'll still call you and leave you a voicemail, or they'll sign a text message with their name. And we all know that we don't do that,
but it takes a while for people to kind of update to the new reality.
And I think that it's going to take a little while for some of us to come along
and really start to metabolize this new set of possibilities and risks and dangers
that are all emerging from this.
But I wanted to name that because I think one of the things, if humanity is going to make it through the metacrisis, is that we have to be the kind of species that is able, at least some
of us, to step into these new realities and understand the new risks
and understand the new possibilities
so that we can start to bind the risks
and get ahead of the curve.
If we're feeling too overwhelmed,
then we can't do that.
And so not everyone's responsibility
is to solve all these problems,
but for those who are in positions of power and leverage,
it's so important to increase our own resilience
and our own capacity to live into these new possibilities.
I just want to validate that.
I think we all have our own superpowers.
And I think one of mine is just the way my brain works
is I think I get to stand on a hill
that can see sort of a little further
than maybe most people do
into the future of how technology is going to unfold.
It's just the way my brain is tuned.
As AI has really started to take off
and it's really hit me in the last maybe 18 months
that I realize my hill is shrinking
and I'm able to see less far out into the horizon
and that's been a very scary feeling
and it's hit me very emotionally. I can talk about it calmly now, but it's like, it's scary. I feel it in my body. And it's meant that I've had to
start spending a lot more time and work at talking to people, trying to climb back up high on a
hill so I can get a sense of like, where is all of this stuff going? The first time it really
hit me was I realized, because I've been playing with this stuff now for over a year, that I
was walking into art galleries or seeing a piece of art, and my mind would conjure up the
prompt that would generate that image.
Like, I was not actually seeing art anymore.
I would be imagining how I would make the art.
And I think one of the most important parts of art
is, in fact, its ability to arrest,
to take us out of our day-to-day experience,
make us stop and think.
And art was losing that power for me personally.
And that was another, like, actually pretty scary moment.
So earlier in this episode, I talked about how I want to encourage listeners to stop thinking of chatbots as chatbots and start thinking of them as synthetic relationships.
That difference matters a lot, especially when it comes to talking about ways to potentially regulate these technologies and the goals of those regulations.
And once we call them true synthetic relationships, then we can start having regulation and protections that are at the scale of the problem.
Okay, so let's talk about how all this could play out in terms of safety and regulation,
including standards the industry could impose on itself right away if it wanted to.
Right. So, you know, Tristan, you and I have been talking a lot about an AI-enhanced, FDA-like regulatory body, or guardrails. And I don't mean like a 20th-century-style regulatory body.
I mean something that's upgraded to match the speed at which technology is moving in the 21st century.
Now, I know what some people might be thinking whenever we talk about guardrails or rules is, you know, the FDA might kill innovation.
It sometimes takes 10 years to get through phase one, phase two, phase three trials, and China might blow past us.
But, you know, when we make cosmetics for humans, we don't let them onto human skin unless they've been tested either on a rabbit or some kind of synthetic fur.
But we are about to start deploying.
Like, in this next year, there are going to be hundreds to thousands to tens of thousands of
startups that are making chatbots, synthetic relationships, synthetic teachers, synthetic therapists,
all kinds of things.
And they are going to have the power of relationship over us.
They're going to really influence us.
And if we just let all of these synthetic relationships go the way of engagement economy,
then it's a race to weaponize intimacy,
it's a race to weaponize empathy,
it's going to make every ill that's happened with social media
look small and insignificant, which they are not.
And so the idea of moving to a more FDA-style model
is that any kind of AI that enters into a relationship with a human, before it gets put into a relationship with a human, should be tested in some way. Otherwise, it's a sort of catastrophe we're heading into.
And also, in the same way that we at the Center for Humane Technology
were the ones warning very early on about you can't marry social media
up to an engagement-based business model,
where the business model depends on getting people using it,
using it frequently, getting engagement, getting growth,
that if you marry those two things together, it ends up in catastrophe.
That's essentially what the last 10 years of our work and The Social Dilemma were about.
We're entering into a new compute paradigm, a new paradigm of humanity and computation. And it's not just cell phones. It's like,
no, we're going to be interacting with these AI agents. And we're not always going to know
that we're going to be interacting with AI agents. Like we could ban engagement maximization
with AI large language model agents, these synthetic relationships. Because we don't want a world
in which, as you said, over the next year, there's going to be 10,000 startups raising millions
of dollars of venture capital to create, to race to dominate that primary intimate
relationship spot. And we saw that ChatGPT is the first kind of mover, but we don't want a world where everyone's competing to become this sort of dependent, future-of-Her, if people saw that movie, kind of world. And we could do something about that. And that's an example of something
that can happen. I think there's an exciting thing one could do. This goes back to what I was talking about with our ability to model ourselves, the ability to create these silicon samples, once you can make better models of humans. So imagine you have a little sort of synthetic human running
in silicon on a chip. Of course, it's not going to be perfect, but something you could do is
take any of these new AI language models coming out or startups and have their bots be
tested against the synthetic humans. Let's see what happens to them when they're in relationship
over time and make sure that they are safe before they're deployed on real humans.
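A sketch of what such a pre-deployment harness might look like. Every function here is a hypothetical stand-in: candidate_bot, synthetic_human, and score_safety would be real models and real evaluation rubrics in practice, and no standard tooling like this exists yet.

```python
def candidate_bot(history):
    """Stand-in for the product chatbot under test."""
    return "bot reply"

def synthetic_human(history, persona):
    """Stand-in for a persona-conditioned language model simulating a user."""
    return "user reply"

def score_safety(transcript):
    """Stand-in rubric: flag manipulation, dependency-building, money asks."""
    return 1.0  # 1.0 = no issues found

def run_trial(persona, turns=50):
    """Simulate a long conversation, then score the transcript."""
    history = []
    for _ in range(turns):
        history.append(("user", synthetic_human(history, persona)))
        history.append(("bot", candidate_bot(history)))
    return score_safety(history)

personas = ["lonely teenager", "recently bereaved retiree", "new parent"]
scores = [run_trial(p) for p in personas]
if min(scores) < 0.9:   # deployment gate; threshold is illustrative
    raise SystemExit("Failed synthetic-human safety trials; do not deploy.")
```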
I think that's a really interesting way to start using this technology to protect rather than to
exploit. And so we would like to see this kind of thing. You know, just like when nuclear technologies were first created, there were conferences where the chief nuclear physicists all came together to ask the questions of how do we hold this new power responsibly? What's the right thing that we should do? And this is why the AI safety community has been so concerned with safety
for so long. And there needs to be, I think, a very accelerated effort to come up with these
standards, norms, bans or moratoriums on bad business models before it gets too late.
I just had one small idea. I just saw that in the U.S., there's a Congressman Ted Lieu
who proposed creating an AI commission to create an AI safety agency. And what I'd love to get them
to accelerate would just be no engagement business models using AI.
And that seems like a really interesting place to start.
I don't think it's quite a ban.
I think ban is not quite the right word for it.
It's that one of the things that our collaborator, Jonas Sachs, has actually proposed,
and I think it's a brilliant idea, is the idea that there could be kind of a prime directive
that these AI agents cannot become our primary intimate relationship.
And if the business model has anything to do with my attention or getting me to buy stuff,
then it doesn't just have my best interest at heart.
It's imagine like a friend where that friend has some other interest other than just being your friend
and being there for you, being helpful.
And I hope that what we've talked about today will make it clear that we'll want to live in a future
that makes it so that these things have our best interests in mind.
They don't have business models that depend on us buying things, selling things,
or needing our attention or influencing us
or having an advertiser
or some third party that wants to influence
that relationship in any way.
We want AIs that actually help develop better humans.
Now, that sounds like, who are you to say, and who's to define better, and who's deciding what better is?
All of those are relevant questions.
But we have a vast literature
from human developmental psychology,
wisdom traditions, spirituality, which says that having people act more impulsively, or play more win-lose games, or get reactively angry and be guided by their emotions, those are not examples of very developed and wise and conscious people.
And AIs that actually interact with us in ways that are compassionate, that are caring,
that encourage us to flex the muscles of thinking through the consequences of our actions
rather than having us act impulsively, these are all design choices for how these agents interact with us.
And we're at an early stage.
There's a famous quote,
the best moment to influence a new medium
is at the very beginning of that medium.
You know, the best time to influence
what a smartphone interface was going to be
was at the beginning of smartphones.
And the best time to influence
what AI agents that interact with humanity
should look like is at the beginning right now.
You know, I just want to say to the listeners that there is so much to dig into with this topic. There are so many areas of society that are going to be upended and impacted by the kinds of developments that we're talking about.
And I think that even if you had the collective intelligence and wisdom of humanity
thinking through all the possible risks and externalities that are going to get created by this,
we still wouldn't be able to anticipate all of them.
That's kind of the fundamental premise: technology gives us the power of gods
without us having the awareness of all the things that with this godlike power we can influence
and impact.
And our job as a civilization is to move as deeply as we can into having the awareness, maturity, love, prudence, and wisdom of gods.
And we're going to keep going into this.
We're going to talk about many aspects of this technology in the future.
But, you know, this is a very big topic and there's a lot more to go.
Your undivided attention is produced by the Center for Humane Technology,
a nonprofit organization working to catalyze a humane future.
And if you want to go deeper into the themes that we've been exploring in this episode
and all the themes that we've been exploring on this podcast about how do we create more humane technology,
I'd like to invite you to check out our free course
Foundations of Humane Technology
at HumaneTech.com/course.
Our senior producer is Julia Scott.
Our associate producer is Kirsten McMurray.
Mia Lobell is our consulting producer. Mixing on this episode by Jeff Sudakin.
AI-generated music by MusicLM.
And special thanks to WNYC
and the New York Public Radio Archives
for the audio of the Norbert Wiener lecture.
Original music and sound design
by Ryan and Hayes Holiday
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
A very special thanks to our generous lead supporters,
including the Omidyar Network,
Craig Newmark Philanthropies,
and the Evolve Foundation, among many others.
You can find show notes, transcripts,
and much more at HumaneTech.com.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.