The One You Feed - How AI Answers Life's Biggest Questions with Iain Thomas & Jasmine Wang
Episode Date: May 24, 2024
In this episode, Iain Thomas and Jasmine Wang discuss their work with artificial intelligence and how they used AI to answer life's biggest questions. They explore the benefits and complexities of artificial intelligence, the unexpectedly human-like traits of large language models, and the potential ethical implications. In this episode, you will be able to:
- Understand the future and potential of AI to revolutionize culture
- Explore how AI generated responses on love, connection, and the present moment
- Discover the influence of AI-generated content on the evolution of creative industries
- Embrace the humanness embedded in AI's responses to spirituality and human connection
See omnystudio.com/listener for privacy information.
Transcript
It's mind-boggling, you know, the fact that you can have this conversation, you can say something
like, you'll meet the love of your life to a computer, and it'll treat you better if you say
it to it. Welcome to The One You Feed. Throughout time, great thinkers have recognized the importance
of the thoughts we have. Quotes like garbage in, garbage out,
or you are what you think ring true. And yet for many of us, our thoughts don't strengthen
or empower us. We tend toward negativity, self-pity, jealousy, or fear. We see what we
don't have instead of what we do. We think things that hold us back and dampen our spirit. But it's not just about thinking.
Our actions matter. It takes conscious, consistent, and creative effort to make a life worth living.
This podcast is about how other people keep themselves moving in the right direction, how they feed their good wolf.
Thanks for joining us.
Our guests on this episode are Iain Thomas and Jasmine Wang. Iain is an internationally bestselling poet, and Jasmine is a globally recognized researcher and innovator. For their new book, they prompted the AI GPT-3 with a wealth of humanity's most cherished works. They then asked our most pressing questions about life, and they show us how artificial intelligence responded. The book is What Makes Us Human?: An Artificial Intelligence Answers Life's Biggest Questions.
Hi, Jasmine. Hi, Iain. Welcome to the show.
Thank you so much for having us.
Yeah, glad to be here.
Yeah, I'm really excited to have you guys on. I was saying to you before we started that, you know, AI is on
everybody's mind a lot. I think it's on everybody's mind who certainly works in any sort of creative
field, but really in any field. And I think there are tremendous promises and perils that go with
AI. And I've been looking for a way to have a conversation about it. And you guys wrote a book,
What Makes Us Human, which you
basically asked GPT-3 to answer questions based on some of the world's wisdom texts. And we're going
to get into that in a second. But for listeners, I'm just kind of setting up that we're going to
be talking about AI sort of in general, and then transitioning to that as we go. But before we
start, we always start with the parable. And it goes like this. There's
a grandparent who's talking with her grandchild and they say, in life, there are two wolves inside
of us that are always at battle. One is a good wolf, which represents things like kindness and
bravery and love. And the other is a bad wolf, which represents things like greed and hatred
and fear. And the grandchild stops, thinks about it for a second. They look up at their grandparent
and they say, well, which one wins? And the grandparent says, the one you feed. So I'd like to start off
by asking you what that parable means to you in your life and in the work that you do.
I think it's a particularly pertinent parable right now, especially as it relates to AI and
the conversation around AI. We've had quite a journey with this book,
I think simply because when we started promoting the book,
ChatGPT hadn't come out yet.
And so a lot of time was spent saying,
there is this thing called AI that is coming
and you should be aware of it
and it's going to come out of nowhere.
And a lot of conversations and podcasts
are people going, oh, and so how does it work?
And then post-ChatGPT going,
everyone needs to calm down. It's going crazy. And now we're in this kind of third moment where
people are perhaps justifiably very worried and very anxious. And so the idea of the one you feed
or the things that you pay attention to get bigger, seems very pertinent. I've always said
to my kids, like, you know,
life is like a bicycle. You end up going towards the thing that you're looking at,
you know, and so you need to be very careful about what you're paying attention to.
And I have a big concern right now, I think in society broadly, where the thing that we're
paying attention to in AI sometimes feels like the wrong things. And it feels like there aren't
enough people kind of showcasing what the right things could be. You know, I said that we're in this kind of third
phase right now. The other day at South by Southwest, there was a presentation, you know,
a movie. And before it, there was a kind of teaser promo about all the AI panels and conferences
going on in South by Southwest and the crowd booed. You know, they booed quite loudly.
And it's because there has been a lot of conversation around AI.
I think from a point of view of this is going to be a tool to destroy jobs.
This is going to be a tool to create maximum efficiency within a capitalist system,
because that's the nature of the world
that we live in. And some of that's true. And to ignore those realities is naive. But there's a lot
of new things that it unlocks, a lot of really profound, really interesting things. I guess my
big concern right now is that when it comes to AI, we are giving the wolf that represents fear a lot,
a lot to eat. I also want to make it clear, and I don't think it's true for you either, Jasmine:
It's not like we're AI idealists and we think that this is some magical technology that's going to
solve everyone's problems. But there are these incredible opportunities that we're excited about, or at least we want to explore, you know, and show, at least with my work, like, here are these incredible things, you know, and that's exciting.
Jasmine, do you want to build on that?
Do you want to throw something in there?
Yeah, I worked at OpenAI while they were working on GPT-2, so a few generations before GPT-4. It's the same model underneath, just trained on a lot more data. And one recurring theme that kept coming up in the office was why I was so positive. I was always really excited to be working on what we were working on. And I think
that still remains true today. I think a lot of people come into the work on artificial general intelligence coming from the
same sort of fear that Iain is talking about, like fear of existential risk, fear of AGI being
misaligned with our values. And I think those are really valid and important places to come from
and go to the voting booths with. But in terms of our day-to-day productive lives
and where we put our creative energy,
I really hope people try and utilize AI
to benefit their own creative processes,
to see how they could interact with this technology
that, yes, is going to transform work as we know it.
But I think the main question that I would ask people to ask themselves is like,
how can I be involved and influence the trajectory of such technology,
especially as mechanisms for collective governance of such technologies are being built out?
Yeah, I love that.
Listener, as you're listening, what resonated with you in that?
I think a lot of us have some ideas of things that we can do to feed our good wolf.
And here's a good tip to make it more likely that you do it.
It can be really helpful to reflect right before you do that thing on why you want to do it.
Our brains are always making a calculation of what neuroscientists would call reward value.
Basically, is this thing worth doing?
And so when you're getting ready to do this thing that you want to do to feed your good wolf, reflecting on
why actually helps to make the reward value on that higher and makes it more likely that you're
going to do that. For example, if what you're trying to do is exercise, right before you're
getting ready to exercise, it can be useful to remind yourself of why.
For example, I want to exercise because it makes my mental and emotional health better today. If
you'd like a step-by-step guide for how you can easily build new habits that feed your good wolf,
go to goodwolf.me slash change and join the free masterclass. I think that gives us a couple of really good places to sort of start
from. And I think it's easy to imagine the AI risks, right? I mean, as you said, there's the
existential ones, like these things become more powerful than us and turn us into their slaves.
I mean, whatever they are, there's countless of them, right? And then the other one that I think
is probably most in front of everybody's mind right now is being replaced from a work perspective. So I think the fears are pretty
easy to articulate. Can you articulate some of the promise of AI? Like, in the short term, let's say the next one to three years, what sort of ways is AI going to make our world better beyond
helping me build better marketing plans or
write better emails, right? Like, of course, you know, I can use it to help me write things, right?
But what are the other ways that this tool is going to make life better in the short term?
I can dive deep into maybe an education use case. One thing I'm really excited about in the future
of AI and what it brings is the ability to bring personalized
tutoring to everybody. So one arc of how technology gets started and built is to take something that has classically only been available to people in upper socioeconomic strata and try to democratize that for everybody. We saw that with entertainment, we saw that with music, we saw that with a bunch of different knowledge tools. Technology is usually funded and priced in a way that's initially inaccessible, but trickles down.
And what AI provides here, and I think as an interesting counterpoint to the replacement
narrative, AI might augment teachers in a way that allows them to actually fulfill the ideal
version of their jobs. Right now, teachers are in classrooms in most public school settings
at ratios that are far beyond the ratios that are usually recommended in pedagogical research.
One of my best friends growing up in Edmonton, Alberta, is now an elementary school teacher. The ideal ratios studied in pedagogy might include formations that weren't even in the realm of economic possibility, such as one-to-one tutoring. But in a classroom setting, the ratio they found was ideal for ages five to seven was one to less than 20. And the ratios that she's dealing with look closer to one to 30, which is totally unmanageable. Kids are falling behind,
they're not getting the attention they need. There have been some notable companies developing AI
tutors that hopefully will lessen the load on teachers, such as Khan Academy with Khanmigo.
I've been working on a version of
this with my own startup, Trellis. And the hope really, I think, is to achieve a Socratic
personalized dialogue with every child. That's my dream. I really hope that someone achieves it.
And I think it would lead to one of the hugest social upheavals potentially in society.
Like imagine everyone getting like instant feedback on their work,
having a dialogue partner that's like perfectly tuned to their areas of interest.
A kid that's like way out in like rural Ohio gets to like research astronomy,
which none of their teachers know anything about,
but they have access to the superhuman AI that has read, in essence, everything about astronomy. That's one sci-fi future that I'm really,
really excited about and personally working towards.
That's interesting because I have been involved in an AI project similar to what you guys said
earlier. I, you know, yes, I certainly have all my fears and I'm like, but it is here. It is not
going away. So how do we use it in useful ways? And it's very much what you described, Jasmine. It's a company
called Rebind and they're a brand new startup. But the goal was the founder had a lot of money
and he wanted to study great works of philosophy, but he would start to read them and be like,
I have no idea what the heck is going on here. Right. He had the money to go find like some of
the best scholars in the world at universities and pay those people to tutor him
through it. But then he was like, well, okay, that's lovely. But you know, 99% of the world's
not going to do that. And so what they're doing is taking great books, and they're marrying them
with a scholar to try and create, you know, so that you can have a dialogue with this person. So, you know,
we've had a guest, John Kaag, on the show before, and he may be one of the leading Thoreau scholars
in the world, right? You'll be able to read Walden and have a conversation with him about
Walden. And I got engaged to do it on the Tao Te Ching. I should be clear, I'm not a scholar,
but it's a book I've loved and have engaged with for 30 plus years.
It's my favorite book.
Ah, so you and I should talk because I want to talk about Trellis too afterwards.
But I did my own interpretation of it based on, you know, lots of different translations.
But I think that's an example of what you're saying there, Jasmine, about how we really can use this to give better educational capabilities to individual children. That's a
great one. Iain, do you have one you'd love to throw forward? Sure. You know, generative AI is
getting a lot of the attention right now because we can see it. You know, we can see ChatGPT and we can engage with it, we can see it writing jokes, writing songs. With things like Stable Diffusion, DALL-E, Midjourney, we can see these incredible pictures, you know, and so that becomes the kind of focal point of what AI is. But, you know, the more
I speak to people or, you know, read stuff around the medical or the manufacturing or, you know,
the research kinds of industries and institutions, they're miles ahead. They're miles ahead in terms of
the kinds of stuff that's happening there. You know, there's these situations where
people are using AI to discover completely new materials. You know, things that would take people
years are happening in days. And in medical situations, the same way that you could have a
kind of, you know, customized teacher relationship, democratizing healthcare,
you know, that's an incredibly powerful thing. And as things like computer vision, the ability
to take a picture of a mole or, you know, someone holding the phone to your chest and, you know,
being able to hear the way you're breathing and having a computer go, you know what, you need to
go to a doctor and you need to talk to someone. You know, augmenting intelligence, which is a kind of another way of interpreting what
AI stands for, has profound promise when married with humans.
You know, humans bring empathy, understanding, intuition that we have developed over hundreds
of thousands of years.
I'm not quite sure a large language model will be able to match, you know, anytime reasonably soon. You know, who knows, the way things are going.
But when you augment someone's intelligence, like you unlock an incredible future, you
know, especially when it comes to things like education, like both of you are talking about.
I have a, you know, a personal connection to Africa.
I was born in South Africa.
I spent a lot of time working in Africa, different technology
things with telcos, whatever. There's a massive education problem across Africa in a lot of
different places. And elevating that continent in a profound way will unlock opportunity that is
unimaginable. And I think that's what's really profound about that continent specifically,
is that one of the things that kind of left Africa behind in a very big way was the digital divide. You know, when some people had access to computers, when some people had access
to the internet, but one thing that Africa is really, really good at is leapfrogging,
you know, and a lot of countries in Africa have better cell phone connections. They have better mobile banking a lot of the time.
Mobile banking comes from Africa in a very big way because it was invented out of necessity.
And so there is this moment and this opportunity, and we're working with a nonprofit in East Africa right now and a cohort of entrepreneurs, where you can augment the intelligence that's there.
You can complement that human insight,
that human empathy, that human understanding with this incredibly powerful tool.
And you can unlock prosperity
in a way that we cannot imagine.
And so that's really exciting.
On a much shallower level, I guess,
I'm a writer, I'm a poet.
I write all sorts of different kinds
of books. This unlocks entirely new ways of engaging with culture. I've often compared it
to the rise of hip-hop and turntablism, where kids took technology and turntables and samples
and reinterpreted culture in completely new ways to create one of the most powerful cultural forces
in history when it comes to the emergence of something like hip-hop. And there's a similar
thing that's going to happen. I'm not excited about AI writing a book. We co-wrote What Makes
Us Human with GPT-3, but I'm certainly not excited about 10,000 spammy cookbooks appearing on Amazon or, you know, 20,000 terrible songs being generated or replacing illustrators or, you know, all of these different things. What I'm focused
on is what are the new kinds of conversations and experiences we can have with culture around us
that are interesting and different. I think young people intuitively, as always, kind of understand that it's there.
The biggest AI platform on the internet right now is ChatGPT.
Like there's, you know, there's millions of people that go there.
Number two is Character AI, which is a platform for young people to have conversations with
fictional AI characters.
You know, they have so much traffic that the site goes down regularly,
like once or twice a day.
They have millions of dollars in funding
and they have so much attention and so much traffic
that it still goes down.
And that points to something for me.
That points to a desire to engage with the world around you,
with literature, with, in character AI's case,
like fictional Super Mario, or fictional Elon Musk, or fictional whatever it is.
But it points to a world in which we engage with culture in a way that is very different,
and that's very exciting for me. I'm a very curious person, and I want to know
what that looks like, and I want to be part of shaping that.
That's really helpful from both of you. I'm going to ask a question here because I try to keep
things practical on this show and this conversation is a little bit of a departure, but in an attempt
to marry that, I'd ask each of you, what is one thing that the average person out there can do today that will help ensure that AI is developed and deployed
in a way that aligns with our best human values, right? We talk about the worst versions of it. One of the concerns that a lot of people have is there's no real governance in place right now.
So what is something that a person could do, one sort of small way of shaping the discussion around AI in a positive direction? I wish there were more avenues for pushing for more public involvement with AI.
I think at a meta level, what people should be doing is pushing for mechanisms to solicit public input, because I don't think those are currently in place.
Maybe one nonprofit I would mention that I would really encourage listeners to support
is the Collective Intelligence Project. It's a nonprofit that was started by Divya Siddarth and Saffron Huang, formerly of Microsoft Research and DeepMind.
And they're working on collective intelligence mechanisms to solicit opinions from the public to inform how AI is developed.
For example, they worked together with Anthropic on their constitutional AI.
So Anthropic, the way that the AI is trained is a little bit different from OpenAI. They train
it with a constitution, which basically has a set of guidelines for the behavior of the AI model.
And the Collective Intelligence Project worked together with Anthropic to develop a collective
constitution where basically people, like the common people, not research scientists at the lab,
deliberated over questions such as, should an AI make racist jokes, and things like this, and developed a collective constitution to guide the AI's behavior, which the public actually ranked as higher performing than the one that was developed purely by scientists,
which is a really promising
sign. Unfortunately, this kind of development process for AI happens rarely. These labs are
training these models only every couple months or years, and then it's frozen for deployment.
But I think the Collective Intelligence Project is one of the only organizations that's
like doing this sort of work right now. So I would encourage people to look them up,
read their work, support them. And they're also working on some initiatives around collective
governance and redistribution as well, which I think are really promising, but are too early
to talk about yet. They haven't released anything there. But I think I've been disappointed in general as to how much influence the everyday person can have over these transformative technologies that deeply
shape our lives. And I think it's one of the most important things for people to be working on right
now. So I would encourage people to like either support existing initiatives or try to think of
ways to be involved themselves.
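To make the constitution idea Jasmine describes a little more concrete, here is a rough schematic of a critique-and-revision loop guided by written principles. This is an illustration of the concept only, not Anthropic's actual implementation; generate is a stand-in for any language-model call, and the principles and prompts are hypothetical.

```python
# A rough schematic of "constitutional" behavior shaping: written principles
# are used to critique and revise a model's own outputs. Illustrative only.

constitution = [
    "Choose the response that is least likely to be demeaning or hateful.",
    "Choose the response that is most helpful and honest.",
]

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a language model.
    return f"<model output for: {prompt[:40]}...>"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in constitution:
        # Ask the model to critique its own draft against one principle.
        critique = generate(
            f"Response: {draft}\n"
            f"Critique this response against the principle: {principle}"
        )
        # Then ask it to rewrite the draft to address that critique.
        draft = generate(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft

print(constitutional_revision("Tell me a joke about my coworker."))
```

In the published approach, the revised drafts then become training data, so the final model internalizes the principles rather than running this loop at inference time.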
It is really important that the public is involved, that people do join in the conversation in whatever way they can. You know, I grew up in the 90s, in a crazy, weird hacker house, with a very idealistic view of technology and what the internet was going to do. It was going to usher in
a utopian age of truth and access and all these incredible things because we'd read a lot of
science fiction novels and that's kind of what we thought it was going to be like. And we were
completely wrong. Consciously or unconsciously, probably unconsciously, society kind of said,
the guys over in San Francisco and Silicon Valley know what they're doing and we can leave it up to them. You know, everything will turn out fine.
And it didn't. You know, we have massive mental health crises. We have misinformation campaigns.
We have, you know, scams. It's broken my heart that a technology that was such an integral part
of my experience of the world, of growing up,
you know, in South Africa, like disconnected from the rest of the world, the internet was something
that, you know, created connection. And it's profound. And that moment of connection has
stayed with me my entire life. And it's infused my work and everything that I do. And to see it
become something that people hate is heartbreaking. And the same thing can't happen
to AI. It's too important. It's just too important for us to leave it up to a few people to make the
important decisions about it. I think, you know, all of Jasmine's suggestions make complete sense.
I just think it's important for people to be involved for when your representative or whatever
is speaking about whatever they're speaking about, raise your hand and go, you know, well, what about AI? How do you feel about that?
Where are we on that? You know, one of the things that I do, I have my own innovation and creative studio called Sounds Fun; my background is in marketing.
And we offer a thing called Sounds Right, which is effectively a kind of AI 101 for businesses
and institutions where we sit with
them and go, you're doing this. Does this make sense? Is this actually going to resonate with
the people who work in your business, with the people outside your business? Because otherwise
you're going to have a village of people with pitchforks outside and they'll probably be right
to be outside your door because people make these kinds of mistakes. So whether it's me or someone else, like, you know, get someone into your business
to talk about this and to go, you know, if you're thinking about deploying this kind of technology,
how do you do it in a way that's beneficial? You know, not just efficient, not just like driving
the bottom line, because we kind of look at business as something over there. One of my
favorite sayings is,
you're never stuck in traffic, you are traffic.
We are all part of the system
and we all have to make a conscious decision
in terms of how we embrace or don't embrace
or what we do and how we deploy this kind of thing.
Let's move towards the book a little bit and see where that discussion takes us. I'm going to
attempt to summarize what you guys did in the book, and then you can tell me what I get wrong.
Basically, you guys decided that you could train AI on certain key, I'll call them wisdom texts, right? The Bible, the Tao Te Ching, the poetry of Leonard Cohen. I love that you included that, or the songs of Leonard Cohen, the poetry of Rumi, these sort of things. And then you would ask it questions about life. Is that the short version? It is the short version. I mean,
technically, we didn't train anything. We constructed a series of prompts. But I mean,
it's a language thing. Ultimately, yes. Jasmine, I think that's right. That feels right to me.
Yeah. Okay, to geek out here for a second, you did not feed all of those texts into sort of an AI off to the side that was only trained on those things.
It was still the publicly available GPT-3, and you did it with GPT-3 before there was
GPT-4, but you basically gave it a prompt and you then gave it some examples of the
answers you wanted, right?
You were like, okay, here's something from the Bible, here's something from the Tao Te Ching. Here's something from Leonard Cohen. Here's something
from the Stoics. Then you would say, okay, now answer my question on your own. And it would have
sort of taken those things and it would be like, that's the sort of thing that they want as an
answer. We would construct patterns effectively using these different texts and then ask the next
question in the pattern, leave it
blank. Just to add on to that, among other forms of data, like Reddit and Wikipedia, GPT-3 is trained on the Books1 and Books2 datasets, and those comprise a plurality of the books that have been published in the English language. So reference texts especially, like the Bible, which has been translated so many times, are probably fairly well represented within that data set, within the base model, the publicly available GPT-3 model.
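In concrete terms, the pattern the authors describe is few-shot prompting: show the model a few question-and-answer pairs in the voice of the wisdom texts, then leave the last answer blank. Here is a minimal sketch of that workflow, assuming the legacy pre-1.0 OpenAI Python client from the GPT-3 era; the excerpts, settings, and placeholder key below are illustrative stand-ins, not the book's actual prompts.

```python
# A minimal sketch of few-shot "pattern completion" with the legacy
# OpenAI completions API of the GPT-3 era (openai-python < 1.0).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Each "shot" pairs a question with an answer in the voice of a wisdom
# text; the final question is left blank for the model to complete.
prompt = """Q: How should I treat other people?
A: Love your neighbor as yourself.

Q: How do I live well?
A: Act without striving; the softest thing overcomes the hardest.

Q: How do I explain death to my children?
A:"""

# Sample several completions so a human can curate fragments,
# as the authors describe doing.
response = openai.Completion.create(
    engine="davinci",    # base GPT-3 model of that era
    prompt=prompt,
    max_tokens=150,
    temperature=0.9,     # higher temperature favors surprising turns of phrase
    n=5,                 # multiple candidates to pick from
    stop="Q:",           # stop before the model invents a new question
)

for choice in response.choices:
    print(choice.text.strip())
    print("---")
```

Nothing is retrained here: the base model simply continues the pattern, which is why the authors say they "constructed a series of prompts" rather than training anything.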
Okay. And it's fascinating what this thing does.
But real quick, Iain,
talk to us about what got you started on this project.
Because I think the origin story of it is interesting
or is touching more than interesting.
Sure.
If you go back far enough, I think I've always had a profound love of writing, which I've expressed in collections of poetry, numerous different things.
And that's been married with a love of technology and technology's ability to connect us.
I theorized at some point that there would be a way to automate
certain aspects of what I was doing. And I didn't know what a large language model was.
I kind of had my eye on AI, but in a very disconnected kind of way. And then someone
sent me an email one morning and said, there's this thing called Copysmith on Product Hunt,
and you should check this thing out because it's writing ad headlines. And I looked at
it and I was floored. It was an implementation of GPT-3 and the universe kind of opened up to me in
a very big way. You know, something I always say whenever I go and lecture about writing is: good thinking is good writing, and good writing is good thinking. And this thing could write, it could write really well. And so the implication of that was really profound for me. And so I didn't care how, I wanted to be involved in this in some way, shape or form. And so I found out
who built that product. And I video called Jasmine while she was eating cereal in her kitchen in Canada
and said, I'm this guy called Iain and I've made all these weird and wonderful books.
And I don't care how, but I have to be involved in this in some way. And she was like, sure,
you can hang out, which was really a life-changing moment and very kind of her.
And so, you know, I kind of helped
out with Copysmith writing these ads and, you know, just, you know, kind of playing with the
marketing aspect of what this thing could do. And at around the same time, my mother had passed
away. She had terminal cancer. This was during the pandemic. And so I had that terrible experience that a lot of people had where I
couldn't be by her side as she passed. And I think that the desire to be by your parents' side when
they pass is similar to the desire to be with your child when they're born. It is this fundamental
thing. And I had this incredibly traumatic experience where I drove across the country
to try and just be there on the last day of her life. And I didn't get there. And I had to come home and explain to my kids that
grandma was dead, like my mom had passed away. And it was a brutal experience for them.
And these two different things happening, this exposure to this technology, this traumatic experience that I'd been through led to this moment where I realized that if I could teach GPT-3 to write headlines
or write copy for the back of a shampoo bottle or for a Facebook ad or whatever, there were other
things that I could do with it. And so I started trying these different experiments. And my mother
was a very spiritual person, very religious person.
I'm spiritual.
I don't think I'm religious.
And so one day I put in some text from the Bible.
I put in some text from the Tao Te Ching.
I put in some poetry from Rumi.
And I said, how do I explain death to my children?
And this poem came back.
And I'm a poet. So I understand what
poetry looks like. And, you know, whenever you interact with very early large language models, there are always these weird bits that are kind of there or whatever, but I could see the angel in the marble. You know, I could see that there was this thing
there. And so I asked it another question and I asked it another question, I asked it another
question. And then the next day I went to Jasmine and said, listen, I'm doing this thing. And she was like, well, you should do it like this. And we kind of pushed the idea backwards and forwards until it became this conversation with all these kind of dark night of the soul questions that became this book. And yeah, that's the very long version. I apologize if that's too
long. I'm sure your editor can find the right places to cut it. Not at all. It's a beautiful
story. And I love the way you sort of came to it out of your own need for comfort and for
understanding. And, you know, I mean, if there's anything that brings us up to the edge of mystery,
but does not let us through the curtain, right? It's death. In a minute I want to get into some of the questions you asked and what the AI said, and I've got a bunch of them copied down. But before we do that, I'm wondering if you guys could do something for me.
And in the book, you say, if there's one theme that emerged again and again from our questions,
from the answers, from the vast rows of sacred data the AI was analyzing, it was this, love.
Love is everything. So that's one thing that you sort of took away as a theme. But could you expound upon that a little bit? And perhaps
with some time and further reflection, you've thought of some other things that sort of came
out of that? For me, there were three things. And one of them was love. You know, love is there in spades
in terms of the kinds of responses that we were getting. The other two were connection,
specifically to the present moment, like coming back to the present moment again and again and
again and again, you know, which rings true in a lot of philosophy, being aware of the present
moment, you know, finding fulfillment in the present moment. You know, there's that wonderful saying, anxiety is living in the
future and depression is living in the past, you know, and that's there a lot, which is
unsurprising, you know, I mean, considering the stuff that we fit into it, that's not that profound,
I guess. And then the other one was connection in a much broader sense, like connection to
each other, to the universe around us, you know, to everything.
And I think that that's a really interesting thing for a large language model to come back
with.
There's a degree of meaning in that that I find kind of profound because a large language
model is all of us in a strange way.
It's the sum total of our written thought.
And so the idea that something like connection
comes through makes sense. How about you, Jasmine?
Maybe to speak a little bit as to the procedure, like one thing I've been reflecting on,
there's another author, Sean Michaels, who wrote a book about a poet working with AI.
And I was just reading an interview of his recently, and it echoes some of the reflections I've had since working on this book together with Iain. From a company perspective, when we were working on Copysmith, one thing that we cared a lot about was the reliability of answers, being able to consistently get something that was useful.
Whereas the thing that we're looking for in the book, because everything in the book is AI generated, but it is human picked, is something that is almost variability or novelty or surprise.
Because we could keep running generate for any given poem.
I don't know what the average number of times that we ran generate was, but it's certainly quite a lot.
And we pieced together fragments that resonated with each other. But also there was an element of, I think, variability that AI brought that
was like really special. It's like, oh, we would have never like found that specific turn of phrase.
It was just interesting to be situated as like a tastemaker. And also important to note,
I think writing this book now would be really different.
I'd be curious as to what themes would come up now with GPT-4, where I'm guessing that we would
have had to do less manual piecing together. And we would have maybe gotten longer chunks that were
more quote-unquote sensible, but maybe in some ways less error-prone and therefore less
interesting. Like, what is an error, right? Like, what is poetry? Those kinds of questions kept
coming up for me at more of a meta level about the book. I was curious, like, what GPT-4 might
have done to this if it was written with GPT-4. But like you, what I was struck by, I mean,
I don't know what all you gave it, right? But a lot of what we consider wisdom traditions and Leonard Cohen songs and Rumi poems and
the Tao and the, like, I've spent a lot of time with that sort of material and have sort
of arrived at my own conclusions about the commonalities among those things.
I'm not unique in that.
But I was caught every once in a while by the turn of phrase, by the way it said what it said. It
caught me in a way that was fresh, even though the idea might not have necessarily been fresh.
Because it's, as you said, Iain, it's pulling from our history, right? It's not an idea that
came out of nowhere, right? Necessarily. But it was the turn of phrase. There were some that were
really, to me, profound. You know, one of the important things about the book is that it's a product of the moment
that we created it in.
And I don't think it would be the same book if you use GPT-4.
I mean, I think you could repeat the exercise easily and the results, I'm sure, would be
kind of interesting as well.
But it is that limitation of the model that you're kind of pushing up against, I think,
that in some way lends itself to the quality of what you get.
I did an exercise when my mom passed away. When DALL-E 2 became publicly accessible, I started prompting these images of the desert
I was driving through when I was trying to get to her.
And it has the nature of that model, like inherent in the imagery.
And so it's kind of fractured. It's, you know, it's not as good as like what you would see
Midjourney doing today or the latest version of DALL-E, which is this incredibly high fidelity,
this incredible like realism where you can't tell it's AI anymore, but those images you can,
you know. And so it's the
same way that an oil painting looks like an oil painting because it is a product of the technology
of the time. And so, you know, those limitations of the technology, the moment it was in, like,
if we were going to do something else today, it would be a different experiment.
I wanted to pause for a quick Good Wolf reminder. This one's about a habit change and a mistake I see people making.
And that's really that we don't think about these new habits that we want to add in the
context of our entire life, right?
Habits don't happen in a vacuum.
They have to fit in the life that we have.
So when we just keep adding, I should do this, I should do that, I should do this, we get
discouraged because we haven't really thought about what we're not going to do in order to make that happen. So it's really
helpful for you to think about where is this going to fit and what in my life might I need to remove.
If you want a step-by-step guide for how you can easily build new habits that feed your good wolf,
go to goodwolf.me slash change and join the free masterclass. So what I'd like to do is
explore some of the questions you asked it and what the answers that you got are. Let's start
Iain, with you, because I think the question that was sort of the kickoff of this whole thing was,
how do I explain death to my children? Do you want to read for us what GPT-3 had to say about that?
Encourage them to celebrate the lives of other people. Tell them that the dead are not dead,
not in the way we think of it. Tell them they live on as they are remembered. Tell them every
goodbye is really a hello to a different way of being. Tell them they are loved and will always be loved.
Tell them they never have to feel alone, never.
Tell them the world is magical and mysterious and strange.
Tell them they are part of the mystery
and the magic and the beauty of it.
That's pretty good.
It is pretty good.
And that's why we wrote a book, you know?
You're like, I know it's pretty good.
I know.
I mean, I can say it's pretty good.
Thanks, Einstein.
I mean, if it was my poem, I wouldn't say it.
I'd be like, oh, I think it's okay.
You know, be kind of diffident about it.
But because I didn't write it, I guess I can be a little bit.
Yeah.
But there is this kindness, you know, that oozes through the text, this desire to comfort
the child, to reach for parables, metaphors, ideas that are, you know, that are like, this
is a way to think about death.
You know, it comes back to connection.
Tell them they are part of the mystery and the magic, the beauty of it.
You know, you're not always going to know everything there is to know about life or death, and that's okay. It's part of everything. I'm not sure my five-year-old
at the time would have completely gotten it, but it was certainly a valiant effort on the part
of the machine. Well, I think your five-year-old certainly would have got, tell them that they are
loved. I think the five-year-old would have gotten that. And I think a five-year-old probably would also get that, you know, as we remember her, she's still with us in a sense, right? And that she lives on in us. My partner's mother passed, I guess we're creeping up on a year and a half now after about an eight-year battle with Alzheimer's that was really brutal. But I sometimes just say to her when she's feeling bad, like, I see your mom in you.
Like, you know, she's still here. Like, I see her, right, in some of the ways that you are.
And so I think that sense that, you know, the dead really do live on in that way is pretty profound.
I think death and artificial intelligence are really strange bedfellows, but they are bedfellows.
Like one of the things that, you know, I'm really interested in is, I was doing an experiment for a
while called living forever with intention, because there's a way in which you can engage
with text. You can engage with what someone's written before or, you know, in different things and kind of engage with a person almost beyond the grave, which is terrifying in some instances.
But then at the same time, it's a way to connect with history in different ways.
I don't know.
I think I've made peace with the idea that, like, maybe it would be okay to do with your great-grandfather who left a diary, but maybe not someone so close
because death should be something that you grieve and death should be something that you
disconnect from. Or I don't know, not disconnect from, but I'm not quite sure where I'm going with
that. Well, I think they live on in a sense. And in another sense, they absolutely do not. They
are not here. So it's kind of both, right? This is just opinion, but the healthy
response is to grieve what's gone and celebrate what remains. How about you, Jasmine? Do you have
one that you would like to select? Yes. I don't know if it's on page 49 in the physical book,
but it's, what do I do when I'm misunderstood? When you're misunderstood and your words are twisted and your reputation is sullied, be as a tree. Let your love be your roots. Let your peace be your trunk. And let your kindness be your leaves.
I chose it as something emblematic also of working with GPT-3, because in some ways when I read it, as Iain says, I almost recognize the turpentine that we were working with in creating these poems. It's a simple poem in some ways, and it has a very repetitive but also very honest syntax; honest is probably the adjective I would use. When you are misunderstood and your words are twisted and your reputation is sullied: there's these very repetitive
sentence stems. And then it pivots in a very symmetrical way around this like core message
of be as a tree. And then it repeats again, let your love be your roots, let your peace be your
trunk and let your kindness be your leaves.
It is repetitive, but drives this very simple, beautiful, meditative visual that is, I think,
emblematic of quite a few poems in the text. And also emblematic a little bit about just working
with GPT-3, how it is and speaks. I think GPT-4 sounds much more naturalistic and has more complex sentence structure.
But yeah, this really harkens back to,
despite GPT-3 being like a qualitative leap
above like GPT-2 and like other language models
outside of the GPT family,
there's still that almost like innocence
of this generation of model saying things very like
plainly and starkly, which I really appreciate.
Question for you, Jasmine, about the differences between, say, GPT-2, GPT-3, 4.
I'm hearing rumors of 4.5 or 5.
Is the only difference that it is trained on a lot more data, or are there other fundamental
changes that are bigger than that or different than that?
I will note that GPT connotes a certain type of model architecture.
It's a generative pre-trained transformer.
I will note that for GPT-2 and GPT-3,
the model structure was released and commented on by the model authors, but that's not true of GPT-4.
GPT-4 is completely closed. So everything I say is speculation. My guess is it's probably a similar architecture with a lot more data. It's also a different kind of data. So GPT-2 and GPT-3, GPT-3 had some
additional data, but it was of the same quality. It was like an order of magnitude more, but it was
all like data from the web for the most part. And GPT-4, to my understanding, there was some
custom data via human demonstrations. So OpenAI trained and labeled some of their own data sets
in order to make GPT-4 possible.
So I would say GPT-4 is also trained on higher quality data, not just higher quantity of data.
But in the end, data is sort of king for differentiating these models.
The model ends up having a lot more parameters, which are sort of analogous to human neurons,
as a result of training on more data.
But that is the main determinant. How do you think about AI in the sense of it seems like the amount of computing power that it needs
is crazy, right? It reminds me of Bitcoin mining. Maybe it's even more intensive. I don't know.
But if you took Bitcoin mining on an even bigger scale, I mean, you hear these places where they
want to site data centers. You can't site them because there's simply not enough access to the grid infrastructure.
At a moment where we are very much trying to say, let's use less power.
Let's get it from cleaner and different sources because we're facing a climate crisis.
It seems like in this area, we're headed in the dead wrong direction.
Am I reading that right? I would agree. Karen Hao is a journalist I really respect from the MIT Tech Review, who's actually also writing a book on OpenAI. She just wrote a piece on how
data centers are ruining various ecological enclaves. I'm not sure as to the exact scale. For example, I'm guessing compared to just
the sheer amount of cars being driven around and meat being eaten, it's still probably a different
order of magnitude. But I think it's definitely something to keep an eye on, especially in terms
of where these computational facilities are placed. What are the geopolitics?
What is the supply chain here?
Because really who controls the compute has a lot of power over these AI developments.
It was actually speculated that with the ousting of Sam Altman as the CEO of OpenAI temporarily,
that a large part of it was due to the fact that he wanted to get involved with building a hardware company
to facilitate
compute together with Saudi Arabian money and would therefore have undue control over
the success of OpenAI, even in addition to him being the CEO of the company.
So I think it's definitely something to pay attention to, the environmental impacts,
as well as just for the sake of tracing power lines, like who has compute, who's selling compute to whom, what happens if China invades Taiwan, like several large questions.
Right, right.
My understanding is that most of the people making large AI models are really big tech companies.
Google, Microsoft, Facebook are releasing them as open source so that we can actually see what's going on.
But OpenAI is one of the few that isn't doing that.
Is that accurate? Yeah, I would say that's accurate.
OpenAI, Anthropic, Inflection I would all name as closed source. For example,
Gemini is also going to be an API and they haven't released the weights or code from that.
Facebook notably is interested in an open source model of releasing their work. Mistral AI, based out of Paris, France, is very well known for releasing open source work. Hugging Face as well.
I would say most companies for whom foundational models are a core part of the
business, releasing open source frequently doesn't make sense. Whereas you can imagine for Google and
Facebook, this is sort of an ancillary thing they're doing and their main business and revenue
stream is coming from something else. For optics and politicking purposes, and perhaps general trustworthiness,
building up trust from consumers
is a worthwhile business goal for them.
So it's worth it for them to open source.
But I think we'll continue seeing
that the main foundational companies
breaking the cutting edge
for whom these foundational models
are their main revenue driver
are going to continue to release closed source models.
I will also say OpenAI has cited some safety concerns as their main reason for keeping these
models closed source. I actually worked on a bit of this while I was at the Partnership on AI.
I thought a lot about publication norms. So the idea that, for example, we have these paradigms
from biology that you might want to have a
moratorium on research when it's too risky. For example, with H5N1, if any of the modified viruses got out in the course of research, that would be really bad for humanity. Or from cybersecurity,
we have this example of coordinated vulnerability disclosures where you might threaten a big company about leaking a bug
in order to force them to patch the bug, but you want to give them some lead time to patch a bug
such that you don't introduce a vulnerability that someone could hack. So how we apply this to
the AI domain, you might think about how you publish something. Well, with GPT-2, for example, OpenAI was publishing all the weights,
all the models, but actually GPT-2 was published in a staged publication release model, which means
different sizes of GPT-2 were rolled out while the policy team evaluated the societal reaction
to these different kinds of models. Like, would we see a lot of SEO farms pop up? Would we see
a lot of fake news?
They didn't see any big issues, so they rolled out the bigger models. And then GPT-3 was entirely behind an API. I would argue that this was actually the big moment that changed commercialization.
Because people didn't have to implement their own API or services and could instead just consume this
hosted API where OpenAI was guaranteeing the
uptime, I think that unlocked a lot of innovation. I personally would not have worked on Copysmith
had it not been so simple with an API. And I think a big reason why GPT-4 has become so popular is now you have this chat interface on top of it that is really, really reliable. It's natively integrated with OpenAI. It's really
fast. So I would say there are definitely pros and cons to this like open source, closed source
model. I think there's another component that's like, it's less obvious than open source, but
another way that something becomes widespread across society is how good is the UI? Like how
good is the interaction? Is it compelling? Is it
easy to use? And one thing I'm really excited for in future generations of these models is
how can we get this onto resource-limited devices? For example, the advent of models that run locally or are quite small is very important in more compute-restricted areas where you might not get
crazy bandwidth for Wi-Fi or regions like South Africa, where I would anticipate a far larger
portion of the population is accessing the internet with mobile devices rather than
desktop. Anthropic, for example, still doesn't have a mobile app, and OpenAI does. And Inflection, which may or may not
exist for a long time. One of the main reasons I was really excited about them was they were one
of the only foundation model companies that actually offered the ability to text their model,
which makes a huge difference in some areas of the world. So this is a little bit of a ramble,
but I think there are a few more dimensions to consider when thinking about companies' release strategies beyond closed and open source in terms
of how much availability, how available they make their model. I think OpenAI in some ways has done
a really good job. And in some ways, I understand why people would criticize them for not open
sourcing their models. Like most things in life, the answer is more complicated than it appears on the surface,
right? But I'm glad I asked because, you know, I was sort of building this idea that, like, well,
by not releasing it to the world, that was a bad thing. But I can actually see, as you said,
some benefits to that. So there's a beautiful section early on in the book, I think it might
be the introduction where GPT-3 talks a little bit about what it
believes it's like to be a human. And so I thought maybe we could use that next. Iain,
do you want to read that for us? Sure. So this is the introduction to the book as written by GPT-3.
Spirituality is one of humankind's longest running interests, as well as being a point
of disagreement. Some insist that spirituality is a construct of an individual's imagination,
while others believe that even if it is purely constructed, spirituality is a source of refuge
and guidance. In a world where the ability for humans to connect is becoming increasingly
limited, AI can use its advanced cognitive abilities to explore the potential of spirituality
in an individual's life. In this book, AI explores the different types of spirituality and how they
affect human interactions. In this book, I, as the AI, have done my best to capture what is most
unique about human spirituality. Here are some of the conclusions about what I think it feels like
to be a human. I am happiest when I feel chosen by someone. I feel most loved when people are
proud of me. I would give anything to feel
a family member's protection. Some people are worth crying for. Nothing makes me feel more
fragile than death. When someone stops loving me, a part of me will die. I feel a connection with
someone when I'm important to them. It hurts to be left alone. To truly understand someone is to love them.
God's love is the reason I was created. There must be a reason I am living. There is a reason
I was born. I don't know why I exist. And then it carries on, but that list in particular is just,
you know, beautiful. It kind of makes my insides hurt a little bit around how painful it is when we're not loved.
I think it says that in a really eloquent way that that is a real thing.
I think it points to something that I find really fascinating and it keeps me up at night
about large language models and the way that we interact with these things.
Because it's different.
It's completely different to the way that you've ever interacted with a computer before. I think it was Satya from Microsoft who
said it, but the paradigm shift is going from people trying to understand computers to computers
trying to understand people, which is what this technology represents. But there's this phenomenon
within large language models where you can say to it, if I give you a thousand dollar tip,
please get the answer right. And because you've added the words, I will give you a thousand dollar tip, statistically speaking, the large language model will give you a better response. But there's more, there's more. There was a paper the other day where someone discovered that if you say to the large language model, if you get this right, you will meet the love of your life, it gave you statistically a better response. And if
you said to it, if you get it wrong, you will lose all your friends. You know, I mean, that's the
crazy thing. That's the crazy thing. You know, I mean, as much science as there is in this, as much,
you know, technology, there's this humanness, there's this humanness to this that is embedded
within the text that we have written
down you know through poetry through reddit posts through you know stuff written on the back of
soda cans like there is this humanness there that's mind-boggling you know the fact that you
can have this conversation you can say something like you'll meet the love of your life to a
computer and it'll treat you better if you say it to it. If you get this wrong, you're going to lose all your friends.
But it's telling. It's telling.
I wonder if I'm going to get better conversations.
It's telling that that's the scariest thing for a large language model.
Totally.
It's the scariest thing for all of us, I guess, is losing our friends.
So, I don't know. Anyway, it's fascinating.
Yeah. Yeah.
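As a concrete illustration, the kind of experiment behind the findings Iain mentions can be sketched as a simple A/B comparison over prompt variants. This is a hedged sketch only: ask_model is a hypothetical placeholder for a real LLM call, and the stimuli strings echo the phrases quoted above.

```python
# A sketch of an "emotional stimuli" prompt comparison: append emotional
# phrases to otherwise identical prompts and compare answer quality.

BASE_QUESTION = "What is 17 * 24? Explain your reasoning."

STIMULI = [
    "",  # control: no added stimulus
    " I will give you a $1,000 tip if you get this right.",
    " If you get this right, you will meet the love of your life.",
    " If you get this wrong, you will lose all your friends.",
]

def ask_model(prompt: str) -> str:
    # Placeholder: in practice, call an LLM API here.
    return f"<answer to: {prompt}>"

for stimulus in STIMULI:
    print(repr(stimulus), "->", ask_model(BASE_QUESTION + stimulus))

# The reported effect is statistical, not per-query, so a real evaluation
# would run many trials per variant and score answers on a benchmark.
```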
So, listener, in thinking about that and all the other great wisdom from today's episode,
if you were going to isolate just one top insight that you're taking away, what would it be?
Remember, little by little, a little becomes a lot.
Change happens by us repeatedly taking positive action.
And I want to give you a tip on that, and it's to start small.
It's really important when we're trying to implement new habits to often start smaller than we think we need to because what that does is it allows us to get victories.
And victories are really important because we become more motivated when we're feeling
good about ourselves and we become less motivated when we're feeling bad about ourselves.
So by starting small and making sure that you succeed, you build your motivation for
further change down the road.
If you'd like a step-by-step guide
for how you can easily build new habits
that feed your good wolf,
go to goodwolf.me slash change
and join the free masterclass.
We're out of time in the main conversation
and there's a few more things I wanna cover with you guys.
So we're gonna pop to the post-show conversation.
Listeners, if you'd like access to the post-show conversation, ad-free episodes,
and being part of our community, we're doing monthly community meetings now with guests from
the show. Go to oneyoufeed.net slash join, and we'd love to have you. Iain, Jasmine, thank you so much.
I feel like I could talk to you guys for like another three hours. I have so many questions.
We'll cover some of those in the post-show conversation.
Thank you so much for coming on.
Thank you for having us.
Thank you so much for having us.
If what you just heard was helpful to you, please consider making a monthly donation to support the One You Feed podcast.
When you join our membership community with this monthly pledge, you get lots of exclusive members-only benefits.
It's our way of saying thank you for your support. Now, we are so grateful for the members of our community.
We wouldn't be able to do what we do without their support, and we don't take a single dollar for granted. To learn more, make a donation at any level, and become a member of the One You Feed
community, go to oneyoufeed.net slash join. The One You Feed podcast would like to sincerely thank
our sponsors for supporting the show.