The Comedy Cellar: Live from the Table - AI, Israel and Living with a Robot with Sarah Rose Siskind
Episode Date: May 31, 2025
Sarah Rose Siskind is a science comedy TV writer, psychedelic comedian, roboticist, and founder of Hello SciCom, a science comedy and communication agency that helps smart people get their message across.
Transcript
This is Live From The Table, the official podcast of the world-famous Comedy Cellar,
available wherever you get your podcasts, available on demand on SiriusXM,
available on YouTube, which is the recommended way to consume our podcast,
because you can see as well as hear.
Because we're hot.
Because we're hot, and sometimes we have videos, too, that we show.
Anyway, this is Dan Natterman. I'm a Comedy Cellar comic.
I'm here with Noam Dworman. He's the
Comedy Cellar owner.
And we're here with Sarah Rose Siskind.
She's a science... Siskind?
You know, like Jewish.
I'm thinking of David Susskind.
It's like... That's why I pronounce it that way.
Yeah, yeah. Like kinder, you know, like
Yiddish. Well, I think either pronunciation
is okay. Yeah, I don't know. She's a
science comedy writer, which is something I haven't heard of, but we'll find out more about it.
She's a founder of Hello SciCom, a company that makes intelligence entertaining and does creative consulting for chatbots and robots, and a psychedelic comedian.
There's so much to her.
I don't have time to get into it all in the introduction, but it'll come out.
And she's married to?
Who is she married to?
Nick Gillespie.
She's married to?
Married to Nick Gillespie?
He's married to me.
He's married to you.
Oh, I thought you were friends with Nick Gillespie.
So her name is Sarah Rose Siskind Gillespie now.
God, fuck, no.
I actually asked him if he'd be willing to take my last name.
And what did he say?
No.
Surprisingly, no.
What, you were just testing him?
Yeah, I was testing him.
For some reason, she told me she was in Italy and she went to Nick Gillespie's ancestral home, and I didn't even put two and two together that they were married. I thought she was just friends with Nick.
Wait, wait a second. You thought that I was just like, hey friend, I'm going to find out your ancestral village, take you there, introduce you to your old friend?
This is how nutty it is.
It's like that old riddle, like, you know, the surgeon was a woman.
I just figured, how could she be married to Nick Gillespie?
They seem so different.
Oh, is it the 30-year age difference?
Could that be it?
It's that, and then he's Italian-Irish, and you're very Jewish.
Yeah.
They'll never make it work.
The 30-year age difference, it doesn't matter.
It's in the right direction.
Oh, it mattered for me.
I was like, for years of our relationship, I couldn't be public.
We were a secret because I was so embarrassed.
Was this a joke?
Like, to tell your parents, Mom, I met a boy.
He's not Jewish.
Yeah.
Oh, wait, there's more.
You know, it reminds me of that line from Fiddler on the Roof where Tevye turns to Lazar Wolf and he says,
I always wanted a son, just preferably one a little younger than myself.
That was a little bit.
No, it was actually even worse, Noam, because I introduced Nick to my parents as my boss at one time.
That was a hard transition.
You can bang your boss.
My wife did.
Yeah, yeah.
Juanita and I have a lot to talk about.
By the way, you're a Jew-y broad.
Was it awkward for you to just marry a non-Jew?
Yeah.
No, because, I mean, yes and no.
Um, my Jewish identity is much more like a people of fate rather than a people of faith.
Like I, I believe a lot in the spirit of like Jewish values.
Um, and so there's nothing sort of magical about the blood quantum and it's taken me a while to feel like that because there's a lot of pressure
in the Jewish community to marry other Jews.
But I actually think that Nick espouses a lot of the Jewish values.
Like he's extremely like focused on education.
He's sort of dialectical in his thinking.
Never picks up the check.
Actually,
no,
we're super cheap.
And that is a source of... no, no, that's a real source of bonding.
He can be cheap. He's cheap.
Oh, we're... I mean, not with our friends. We're generous with our friends. But we don't go out to eat. We live really modestly. We like having peace of mind.
Now, that's not because you love money, but because you're just being responsible with money?
I just, I don't like, I couldn't understand how people live with debt.
Like I, I would shit myself every day.
So I really like to have a lot of savings and feel comfortable.
For a rainy day.
Yeah.
This is not going where I thought this podcast would be going.
I mean, I'm fine talking about it. I just was like, okay, let's get into finances.
Sure.
Well, you know, I don't want to get finances.
Yeah.
I will say one thing and then I'll get to your kind of bespoke questions, resume questions.
But this idea of Jewish values.
Yeah.
Somebody was asking me in an interview that I did about what's going on in Israel now.
And I said, look, I don't know.
Some of it seems pretty scary.
And, you know, there's a part of me that worries.
Well, there's a part of me that knows that people get used to killing.
And that, as I've joked more than once on the podcast,
like I imagine like the first time Barack Obama
had to approve a drone strike,
he was like, oh, I don't know.
Tell me again how, just run through one more time.
And by the end he was like, Mr. President,
I'm watching the game here, you know,
just go do what you have to do,
like because you get used to it.
So I worry about Israel becoming inured
to the gravity of what-
I don't think they're getting inured at all.
But what reminded me of it was that my answer to the person was,
look, I think the Jewish values and the Jewish culture
are very objectively good.
You can see the results
in respect for life and accomplishments
and prioritization
of good things and contributions to the
world. But I said, but it's only a cultural
overlay. And it's a cultural
overlay on the same primitive
species, Homo sapiens,
as every other person
in the world. And at
some point,
the horrors of war and the repetition of it,
when it becomes mundane, can overwhelm any cultural overlay.
Yeah.
And in which case, those primitive humans, which are Jews,
are capable of the same primitive acts of horror
that every human is capable of,
and we just don't know where on that spectrum they might be.
I'm not saying they are.
I'm hoping they're not.
They being Israelis?
Israelis.
I just don't have any illusions that because they're Jewish,
they would never do this and that.
I think they're less likely because of the cultural overlay,
but it's just a cultural overlay.
And it can go up in smoke, you know, or for some number of people.
I have so much to say about that.
Go ahead.
Like, it was recently Passover.
And like every Passover, there's always a point.
It's a really important point where you commemorate the suffering of the Egyptians.
This is like such an important point.
The part where you spill the wine.
Yeah. You're taking out little dots of wine. It's a really important point because, you know,
the Egyptians just got annihilated. There was the killing of the firstborn. And then of course,
when the Red Sea, blah, blah, blah, it's all made up, but whatever. Like, I mean, the idea of it
from a values perspective of acknowledging the harm of even your worst adversary is so important.
I remember I'm really deeply inspired by Frederick Douglass, and he would often talk about the
self-imposed tyranny of slavery on the slave owner.
And to have that kind of empathy is just absolutely critical.
I've been a vegetarian since I was 17, and I thought I was, I was not sure if I was a
pacifist or not.
I take killing like just tremendously seriously.
Like I really believe you should not alienate yourself morally from the ends of what you support or consume.
So if you eat meat, you should be able to kill an animal yourself.
I think that's like you shouldn't, that Karl Marx has this concept of alienation and it's an alienation of like the meaningfulness
of your labor. And I think there's something really true about even when you support a war.
And so I've been actually like, I mean, I watch, I watch the videos. Like, I think it's really
important if you support a war to like know what that means. Like William Tecumseh Sherman, like
who fought on the side of the union and like had
his march to the sea where he pretty much just burned a 50 mile path from the North to the South.
Everyone thinks like, oh, this guy, he was a tyrant to the South, that he would be this totally numb, completely removed, you know, sociopath. But he actually read the names of
every single person he killed, every day, and made sure he knew all of them as he was doing it.
Was he killing civilians, or just burning fields?
Oh my God, I mean, it's terrifying. The Civil War is horrible to research. But yeah, it was a lot of civilians. It was pretty much get out of the way or he would steamroll right over you, burning fields. It was horrible.
So what do you conclude from all this about what you think is going on in Israel now?
So there's a great book called Rise and Kill First, which is a direct quote from, I guess, the Torah: if somebody is coming to kill you, you should rise and kill them first.
But there's a lot of Jewish values about like not being alienated from your actions, like knowing what that is. So it's like, if you're going to kill,
make it public, no,
like be transparent and know the full impact of what you're doing.
Don't hide from the impact of it because you're not going to hide forever.
It'll creep up eventually.
So this is a fun podcast.
Welcome to the comedy seller podcast.
But you started by saying you don't,
you don't think Israel is doing something that they shouldn't be doing at this
point. You think that they're still,
they're still held in place by their cultural priorities?
No, I don't think they are. I mean, I'm not a,
I'm really not an expert in this stuff, obviously. Like, but I take a lot of cues from John Spencer,
who's like a military expert. And according to him, Israel has actually achieved one of the, if not the, lowest combatant-to-civilian death ratios of any war fought in history. In particular, that pager attack is historic. Remember when
Hezbollah's pagers all went off and all these people, thousands and thousands of people got
hurt, injured, or killed. That single incident in history in the annals of warfare is the most
targeted attack that there has ever been in terms of like just the bad guys. So, I mean,
I'm not an expert. I could be wrong, but people I trust say that it's being conducted as morally
as it can be.
I hope so. Because we're seeing cracks. Ehud Olmert and Ehud Barak and various not-doves are now saying that they think that Israel is doing things that they wouldn't have agreed to if they had been in the leadership position.
Yeah. I mean, that's tough. I really am not an expert. And whenever I,
they also hate Netanyahu. Yeah, that's crazy.
So I've been to Israel like three or four times since October 7th.
And whenever I like read Haaretz or the Times of
Israel and see them arguing with each other, it's like seeing my parents argue. I'm like,
no, love each other. I'm such an American like supporter that it's, I'm sure there's,
there's lots of things they could be doing wrong. I just feel like it's not my place as an American
to kind of judge their actions. Like when they're in the middle of the war themselves.
Is Gillespie a vegetarian?
He is by proxy, but not voluntarily.
But you're not so, you don't moralize.
If somebody's not a vegetarian, you don't.
No, I mean, I give my, I put my morals out on the table and some people like them and
some people don't. But what am I going to do if you, you know, if you don't like it?
You wrote an article in the free press about your time during the pandemic living with
Sophia the robot.
Yeah.
Six months.
And there's also a documentary about this robot.
Yeah.
Tell us about Sophia the robot.
So, and you've got to send us some videos that Tiana can cut in.
Go ahead.
Okay. About the robot.
Oh, if you have.
Yeah.
Oh, I've got plenty.
She's like this celebrity robot.
So I was a TV comedy writer writing for Neil deGrasse Tyson's show, StarTalk.
And it was like a science comedy show.
And they interviewed this robot.
And, uh, I went up to the CEO of the robotics company.
I was like, I think robots are hilarious.
And I have a spreadsheet of robot jokes.
And she was like, you're hired.
And I was like, for what?
And she's like, we'll figure it out.
This is when they had a lot of money.
And so I started after the TV show was canceled, started writing for Hanson Robotics.
And I was essentially helping to train her chatbot system for like going on Fallon and
speaking at the UN and stuff like
that, and learning, you know, her ChatScript language and learning her, you know, whole chatbot. It was fascinating. This was the stone age of AI, in like 2019.
Yeah. It was like you were coding her?
I learned a coding language, yes. But I was also writing scripts as well as learning the coding language,
which was actually fairly simple.
But the harder part was working with the engineers on a much more ambitious early LLM,
which was like large language model.
That's what ChatGPT is.
Yes, yeah.
And having just had that brief exposure to what goes into making a large language model, it's so astounding what we're seeing right now. Like, it's so hard.
So did this robot do chores for you? What did the robot do?
When you're a politician and you have a lot of money and you want to be seen as leading the way of the future, you need a hot robot woman who is bald with the back of her head open and wiring.
You need that.
And she gives a speech.
You give a speech.
You give Hanson Robotics a lot of money.
And that's how the company made money.
So she was linked to the web, like Bluetooth or whatever, into the web, like ChatGPT is, so that you could have a conversation with her?
She was, yeah. There was a period where they became really tightly controlling over her scripts, because she was linked to the web and in flirt mode, which is what I just call agreeable mode, when she was on an interview, and the owner of the company, David Hanson, asked her, he was like, are you going to destroy humanity? And she was just being obliging. And she was like, sure, I'll destroy humanity. That went viral. And so, boom, no more live access, much more prescripted for a couple of years.
And you're responsible for that in some way?
I was not working at the company at that time. So not my fault.
But yeah, it was a crazy robot.
Like Saudi Arabia sprung citizenship on her, like in the middle of an event.
And it also went super viral because everyone was like, this robot has more rights than the women of Saudi Arabia, which is accurate.
But we had no control over that. It seems like the robot technology, the actual mechanics,
is way, way behind the ChatGPT technology.
So we can create somebody that can talk like a person,
but they're not going to look, act, and move like a person.
1,000%, Dan, and I wish more people knew this,
because it's like, honestly, it's just like humans.
Our hardware is lagging behind our software. We still have these primordial brains with like these,
you know, male nipples and tailbones and things that we don't really need anymore.
And our brains are adapting to this like godlike technology. So we're kind of similar to like the
humanoid problem, which is hardware lagging behind software's
crazy exponential advancements.
But I do think there'll be a lot of improvements
in certain aspects of object manipulation
because once you start being able to apply
certain aspects of AI to the physics of robots,
like what you see coming out of Boston Dynamics,
there'll be some improvements.
But that's when all the jobs are going to get replaced
because once a robot...
A robot's already smart enough to be your doctor.
It just can't put its finger in your ass.
You know what I mean?
It can't do the mechanics of it.
All the fun parts of being a doctor.
It can't do surgery.
Oh, they do do surgery, though.
That's actually one aspect where robots are shining right now is in surgery.
But you're right.
They can't say cough twice, look to the left.
But and I think it will be a while before they're doing that.
However, honestly... or to send them into a fire to do fireman work.
Yeah.
So that's a much more realistic thing.
When you think about where's the money, where do people actually want to put robots?
It's what we like to say, the three Ds.
It's the dreary, dirty, and dangerous.
Like it's not going to be,
they're not going to be on stage telling jokes as first thing.
They're going to be on construction sites
or they're going to be like rescuing people in fires.
So I'm pretty-
How far are we from that technology
where the robots are sophisticated
enough to do that?
We're here. It's a people problem. So Boston Dynamics was working with the FDNY, and there's these incredible case studies that I think I can't talk about because I'm under NDA. But suffice it to say, the robots saved a lot of lives, in multiple instances, with the LAPD actually, not the FDNY.
However, people saw photos of police officers standing next to a robot dog, huge public outcry,
huge public outcry because they're like, this is RoboCop, you know? And so it's, it's very hard
to get police forces, even fire departments to use a lot of this stuff because it's scary for the public.
Fire departments are easier, but for the police, it's much harder.
Yeah, we're going to have to get used to this.
I mean, I think self-driving cars are probably already safer than humans.
But the first time a self-driving car plows into somebody.
Oh, it's already happened.
Well, but in Manhattan, Noam,
you wouldn't trust your Tesla to just drive around
without you just being right there to take control.
Maybe not, but the technology is such
that a car will not easily hit a person.
One way or another, it can very well identify a person
and won't hit them.
It won't drive you well.
It'll stop and start and get...
Oh, shit.
You're ruining my gift.
That's okay.
I'll dry it up.
That won't be the first liquid he gets on that.
Do you want to show everybody what the gift is, I guess?
It's the hot priest calendar.
Yeah.
It's perfect for Noam.
This is from Palermo.
Where did this come from?
Rome.
From Rome.
From the Vatican, actually.
No, it was by the Vatican.
From the Vatican.
I'm sure the Vatican's not happy about this.
Those are good-looking priests.
Oh, they are, yeah.
Very good-looking priests.
They're so young, but I guess because I'm older.
You know, they have the same taste.
Everybody looks young to me now.
Doctors, policemen, you know. How old do you think I am?
You? Early 30s.
yeah that's pretty good
So what were we talking about?
Oh, the robot.
So this technology, like in Israel, you know, there's this big controversy now because they've been using AI for targeting.
Yeah.
And there just seems to be an assumption in the reporting of this that this is a bad thing.
But actually, as I've witnessed
facial recognition,
it seems to me this will lead to fewer...
Civilian casualties.
Or inadvertent...
I don't...
It's not the same thing, actually,
because they track a guy that they know
and then they go and they track him home, and then they end up killing him and his family dies. I don't know how that all pans out.
But fewer errors in targeting.
Yeah. You know, if you could imagine that if they get so much more accurate, it actually leads to killing way more targets, which leads to more civilians...
Being saved.
No, could lead to an absolute higher number of deaths in general.
Wait, how so?
Because if they can only, with enough surety to authorize a strike,
identify two people in a crowd,
then only those two people will get killed, and the attendant civilians.
But if they can look at a crowd and identify 30 people instantly,
then they can kill all 30
and the X number of civilians associated with all 30.
So the absolute number,
I mean, that's just the first thing I'm thinking about.
Well, I mean, those are really tricky ethical questions.
I know that a lot, it's really popular,
like Boston Dynamics is leading the way in signing a pledge of not weaponizing their robots, like, because it is,
you have to trust the entity you're working with essentially to have ethical guidelines put in
place about this kind of thing, because what it really comes down to also a lot of the time.
You have to be able to agree on the ethics.
Yeah. You have to, yeah. Good luck.
It's like a trolley problem.
Yeah. But steering back to the fun stuff. I lived with the robot for six months, and that's what the article was about, because I was the only staffer in New York when COVID hit, and the robot could not be shipped back. She happened to be in New York for a bunch of events, and they all
started canceling. It was like March 15th.
They all started canceling and the owner, David Hanson was like,
can you watch the robot? And I was like, sure, no problem.
I've done robot operation for like six months. I'm basically an expert.
I did not realize robots need a ton of upkeep. So this was like-
She's a female robot.
Yeah.
They need a lot of attention.
Oh my God. Take my wife, please.
I thought it was better than that, but go ahead.
You know, the female robot thing was such a whole thing. There's such a debate about like gender and AI.
I know, I'm needling you
because I know you're going to react to it.
Yeah, but go ahead.
Well, there's a lot of interesting things to say about it.
Like for example, one of the reasons so many AIs
are female, Alexa, Cortana, Siri,
is because of 2001: A Space Odyssey.
Specifically that movie.
Hal was a male.
Yeah, HAL was...
he freaked people out. So now it's like, we only use male voices in technology when it's essentially
an emergency. So like when you go on the subway, you know how it's like a guy's voice that's like,
stand clear of the closing doors, please. But it's a woman who's like, this is Bryant Park,
you know?
I hadn't noticed that.
Yeah.
They need a bigger warning on, this is something New Yorkers will understand, Columbus Circle, when the next stop is 125th.
Yeah, a hundred percent. They're just gonna say, oh yeah, by the way, the next stop is 125th. No, they need to say, stop, everybody, stop, listen! Oh my God. Because we've all done that.
Dan for president.
Or mayor.
Like a hundred percent.
So I just found out something interesting. If you're not from New York, you probably don't relate to that. But there's this subway that goes direct from 59th Street to 125th, which is a huge distance. And it's the express. And we've all made that mistake.
Yeah.
And if you're on the wrong train...
Anyway, I'm sorry.
It's dry, by the way.
It's fine.
You can even clarify.
Not for long.
I mean, God damn.
Just one last thing about the train announcements.
Yeah.
So the guy who says, stand clear of the closing doors, please, is now a trans woman.
And there's a great-
But how is it a trans woman?
So it was, he was a guy at the time-
Trans as in transit?
I didn't say it.
That's a good one.
No, like, he transitioned.
He is now a she.
Yeah.
And there's a great interview with her.
Now her.
You mean the actual person that does the voice?
Yeah, stand clear of the closing doors, please.
I thought it was a...
AI voice?
An AI voice.
No, because it predates AI voices, so it was a real recording.
And there's a great interview with her about what does it feel like to hear your old voice on the subway all the time.
And she just has the best outlook.
She's like, it's great.
I'm like, that was me then, and I'm helping the city, and it's iconic, and people enjoy it.
And I love that she's very chill about it.
That's great. Should robots have genders or sexes?
Well, if you're going to fuck them, they should, which is, you know, one of the...
Thank you, Dan.
...potential uses.
You think they should?
Yes, they should, because anthropomorphization is the real question you're asking, and it's inevitable, and it has lots of advantages.
So I'm controversial in the field for saying this because a lot of roboticists and I've
written like, you know, five papers on human robot interaction.
Like I really care about the subject.
Um, and it's a controversial stand to take, but when you treat an AI with humanity, you often get more quality output. And I think it really helps to treat robots like they're people because you're more inclined to be respectful of distance, which is good because, you know, they can hurt you if they swing too much or you could like easily hurt them.
Also, you know, you never know, like these might be pre-sentient beings. I know I'm a little crazy for saying that, but...
A lot crazy, but go ahead.
I don't think that that's going to happen. I mean,
it's like, you know, somebody asked Sam Altman recently on Twitter, they were like, how much
does it cost OpenAI to say please and thank you to ChatGPT? And he was like, it costs our company tens of millions of
dollars every year. And the
environment. Yeah.
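[Editor's aside: that claim is, at bottom, token arithmetic at scale. Here is a back-of-envelope sketch in Python, where every constant is an invented, illustrative assumption rather than an OpenAI figure.]

```python
# Back-of-envelope: what could "please" and "thank you" cost at scale?
# Every constant here is a made-up, illustrative assumption.

PRICE_PER_MILLION_TOKENS = 1.00       # assumed blended $ per 1M tokens
EXTRA_TOKENS_PER_POLITE_MSG = 25      # the courtesy text plus the model's reply
POLITE_MESSAGES_PER_DAY = 50_000_000  # assumed share of daily traffic

daily_cost = (POLITE_MESSAGES_PER_DAY * EXTRA_TOKENS_PER_POLITE_MSG
              / 1_000_000) * PRICE_PER_MILLION_TOKENS
yearly_cost = daily_cost * 365
print(f"~${yearly_cost:,.0f} per year")  # → ~$456,250 per year on these numbers
```

Even with these modest made-up numbers the total is real money; scale the traffic and reply-length assumptions up and it is easy to see how courtesy tokens could reach the "tens of millions" Altman described.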
Wait a minute, I don't understand. So every time
ChatGPT answers you and you
say thank you, that costs, like, money?
The server is running, and it takes a lot of electricity to keep a fuck ton of servers on.
I never
say thank you. I never say I gotta go. I never
say no, I don't want more information.
Yeah, you're like a lot of guys. A lot of guys treat machines like machines, like, you know, do this instruction. And I think that's, you know, very honest. It's, you know, what it is.
And rational.
Yes and no. So like, for example, in the 1960s,
there was this study out of Stanford called the Bobo the Clown Experiment, where children were watching adults punch this clown that when you punch it, it like...
Yeah, yeah.
It comes right back up.
It's weighted in the bottom.
Yeah.
And they monitored the children later after seeing these adults like wailing on this inanimate object.
And they saw that the children were extremely aggressive after seeing that both
with the clown and with each other. And I've seen toddlers treat their parents the way their,
their parents treat an Alexa. So the toddler will be like, mommy, play song, mommy, get that.
And it's like, that's interesting. They're imitating how we treat technology,
how we treat the other becomes how we treat each other.
I really do think that.
Yeah.
I was actually thinking along those lines.
I said rational because it is rational in the sense that if you know it's a machine,
like why are you pretending it's not a machine?
Yeah.
Like, I mean.
Although, I mean.
When I type searches into Google, somehow, you're typing the words, you don't feel the urge to say please and thank you and all this stuff. But if it then speaks to you, if you just have the computer read it to you, then something...
It has no response to when you say thank you.
Like, I do feel a lot of gratitude, and I express that.
But you would never type thank you into a Google.
I express the gratitude by submitting feedback to Google.
Like, they're like, was this a helpful experience?
And I'll be like, yes, that was helpful.
But there's no way to say thank you or there wasn't prior to Gemini being Google's AI.
I think there's something about the spoken word, especially when it sounds like a person,
doesn't have a robotic voice, which triggers you.
And then when ChatGPT speaks to you, it's speaking to you in the same kind of, it evokes the same kind of thing.
It's speaking to you like it's a person.
Yeah.
The illusion is so powerful.
Very powerful.
But if you're rational, you would see past that.
Since it does read as a human interaction, you don't want to habituate people
to tuning out the things which feel human
and to get used to ignoring these things
which clearly have a visceral effect on us.
So there's that point, the habituation.
But then there's also like,
we're talking about such incredibly sophisticated, large language models that it really makes you ask the question, what is language? Cause language was not evolved to be
used with machines. It is a very interesting concept because it's communicating more than just
simple information. If I say like, Hey, Noam, can you pass the hot
priest calendar? Like, obviously I'm not asking, can you pass it? You can, I'm asking you to do it.
But if I were to say pass the hot priest calendar, there might be a little bit of aggression
perceived. So these like very subtle emotional nuances that layer on top of the information. And when you
treat AI with that layer of that level of nuance, it will respond in kind with the same level of
nuance and it understands more things about you. Oh, it's, you know, this human is being obsequious.
Maybe this human's in a rush. They need information directly.
I asked ChatGPT, I said, just as an experiment, I said,
if a comedian told a joke about a midget that solves mysteries,
would that joke more likely be a Dave Attell joke or a Dave Chappelle joke?
And it wrote back, Attell all the way.
Yeah.
Which I found astounding, because number one, it just, it got the right answer.
Yeah.
Number two, it answered me in a language that was... it was fun.
I'm not surprised it just said "all the way."
Yeah. So how the hell did it do that?
Okay, so I can't say exactly how, because nobody can. These are black boxes. We don't fully know.
But somebody must know how this works.
No, actually, not even the engineers. It's really interesting how neural networks evolve. They are a black box. Like, we don't understand how they ultimately get to an
answer. But I will tell you this framework that might help you answer that and other questions.
Think about how information gets classified. So Dave Attell, very, very funny. What of his oeuvre is online, written
down? And of all of those words, like we're talking about just words, because these are
large language models that are only trained on text. How many times has he said midget,
for example? How many times has Dave Chappelle said midget? And then it's like, of course,
I think I've heard Dave Attell talk more about midgets.
And even if he hadn't, he would probably be talking about other things like use the word
freaks or, you know, talking about disabilities. Like he would use words closer to that. And so
the LLM would be able to extrapolate, pattern match essentially, and say, oh, that's definitely...
But it was so confident. It didn't just say Attell, probably. It said Attell all the way.
So it wasn't even a close call.
Yeah.
So how did it...
Well, it's also mimicking the kind of like comedian language.
It's mimicking the language.
So it does all that in that one answer.
And I said to myself, I thought GPS was impressive.
Yeah.
I think...
Now GPT.
GPS...
First of all, it looks like not even that interesting
compared to this technology.
Oh my God.
This is my favorite topic ever.
I'm giving a talk next week at IBM about comedy and AI,
and I'm writing a paper about it with Wharton.
And it's like,
this is like totally my bread and butter.
There's a great article that's like two years old with my favorite comedian.
It's not you,
unfortunately,
Dan.
It's Gary Gulman, and it's in Vulture.
And it's about,
uh,
he's analyzing...
ChatGPT was asked to do a Gary Gulman-type set, and then he evaluates it for, like, how accurate it
is. And this is 2023, so this is, like, a long, long time ago. This is, like, uh, GPT-3, I think, at that point.
Anyway, what's interesting about it is he just absolutely hits the nail on the head. It sounds like a set from his,
maybe, 10 or 15 years ago. There's a lot of sort of schmaltzy, like, not exactly, like, hey, you hear
about this? But kind of like that, but for Gary Gulman. Like, if you ask ChatGPT to tell a joke,
it'll tell something that's just been so hard-classified as a joke that it will be like, hey, you ever noticed blah, blah, blah. It'll
sound like Jerry Seinfeld, because Jerry Seinfeld's text has been just so... Yeah, exactly. And so,
like, AI is already doing a parody of being a human being. And so
it's just going to do the most exaggerated form of that parody if you ask it to tell a joke.
Well, I can go on all day about
AI, not so much with comedy, but because of all the problems I'm having with it. But I will say this,
Dan, what you identified, Dave Attell all the way, that's, like, the easiest thing.
You could almost do that with an algorithm.
You almost don't even need AI to do that.
As you said, it's just a frequency of
who's talking about midgets, who hasn't.
You could do a search for that.
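Noam's point, that the Attell-versus-Chappelle call could almost be done without AI, can be sketched as a toy word-frequency check. Everything below is invented stand-in text, not real transcripts; the function and variable names are illustrative only:

```python
from collections import Counter

def keyword_score(transcript: str, keywords: set[str]) -> int:
    """Count how often any of the keywords appear in a transcript."""
    counts = Counter(transcript.lower().split())
    return sum(counts[k] for k in keywords)  # Counter returns 0 for missing words

def likelier_comedian(corpora: dict[str, str], keywords: set[str]) -> str:
    """Pick whichever (hypothetical) corpus uses the keywords most often."""
    return max(corpora, key=lambda name: keyword_score(corpora[name], keywords))

# Invented stand-in corpora, purely for illustration.
corpora = {
    "Attell": "midget bar freaks midget late night midget",
    "Chappelle": "race fame politics story race",
}
print(likelier_comedian(corpora, {"midget", "freaks"}))  # prints: Attell
```

The real model is, of course, doing much more than this, but the frequency intuition is the "you could do a search for that" part.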
What it can do,
you can take the transcript of a podcast,
just feed it in and say,
listen to this argument
and tell me who has the better of the argument.
Tell me who's right, who has made stronger points, who's weaker, factually grounded.
And it will come back to you with an analysis of argumentation and logic that I think is top tenth of one percent in its insight.
It is fucking unbelievable.
You're using O3?
Well, apparently I've been using, well, it's funny you say that because I was using four,
but then Tyler Cowen told me that for that kind of thing, I should probably still be
using three.
A reasoning model.
No, no.
O3 is more advanced.
They have a weird nomenclature for this stuff, but yeah, O3, it'll take longer.
And what's interesting is it'll oftentimes show its reasoning.
And so once I was asking it, like, why is...
Yeah, play by play.
Is Grok just as good?
I don't really use Grok that much.
I kind of have a relationship with OpenAI so I get access to their product.
Wait, you start to say something.
Go ahead.
Yeah, so like I was asking O3, why is this joke funny?
And I forget what it was.
Oh, yeah, it was an anticlimax joke. It was, um, "Say what you will about deaf people." That's it. That was the joke.
And I was like, why? And could it understand, you know? Because especially when that's written out, like, will it
get the sort of, like, anticlimax? So, like, the, you know, the space? Because that's the kind of
joke where the delivery seems kind of important, which is why I gave it such a perfect delivery just now.
Anyway, so it thought for a while and I was looking at it thinking and it said, Scott is thinking.
And I was like, who is Scott?
And I was like, talking with it, I was like, are you Scott Dikkers?
Because Scott Dikkers, like, the guy behind The Onion, has written a lot about
comedy online and in books. And it was like, I'm not Scott. I didn't call myself Scott.
Totally gaslighting me. And I uploaded a screenshot. I was like, I got the receipts.
Like you called yourself Scott. And it was like, oh yes. I don't know why I did that.
And it's like, yeah, well it's not, you know, it's not citing its sources
as much as it should, in my opinion. Like I, I love people who cite their sources, you know,
like where they got ideas from. And I do kind of wish as a baseline setting, it would do that more.
I have to, like, I get crazy about this stuff. I've taken like arguments that I had
with people who've written stuff and I was like, so-and-so said this, and these are three articles they've written.
Oh, my God.
Tell me what you think about it.
And I don't ever prompt it in a way
to get the answer that I want.
Yeah, yeah.
And it's just amazing, again, how insightful it is.
And I have to say, it usually thinks like me.
I say that with all, as Jake Tapper says,
with all humility.
You hear Jake Tapper constantly repeating that phrase.
I have humility about the fact that I did this.
Good citing your sources.
Yeah.
But I have another problem with ChatGPT.
So I hired somebody.
So, okay, this is what happens to me.
Quite often I have emails going back and forth.
I don't know if it's age or whatever it is, and I'll agree to some appointment,
and I will not put it in my calendar. Oh yeah.
And I know
I do this
most often when the appointment is
at a time when I'm
almost always at work. Like I meet you in the
Olive Tree. I say I'm there anyway, so I don't have to write that. Anyway.
So I'm going
through Anthropic
and they
connected me with a specialist because Anthropic can read
your emails now. Anyway, the end of the story is that I want somebody to, I want an AI to read all
my emails and kind of determine whether a handshake as it were, you know, where two people agree to
something has been made, and put it in my calendar. Gemini can do this. Oh my God. You know what? We need to do an offline and I'll just, like,
consult your whole life and tell you how AI can solve all your problems.
I spent hundreds of dollars in this.
No, you don't need it.
The guy's not, he's like asking me, should I filter out this, filter out that? I'm like,
I don't want you to filter out anything.
Do you have Gmail? Do you use Gmail by any chance?
Yeah. Gemini's already built in.
Which Gemini?
Gemini's Google's AI.
I've tried to get Gemini to do it.
Yeah.
What none, well, this is the thing.
If I just cut and paste the thread into any of the AIs and say, is this an appointment
or is it not?
It is very good to say, no, you never confirmed.
Yes, you said I'll see you there.
But Gemini won't. I've asked Gemini to look through my email to find an appointment.
It won't read the thread.
I can tell you structurally why this is an issue.
It'll just say there's a date here.
This must be an appointment.
So no, that's the date.
Somebody suggested it, but I didn't say okay.
I can tell you why this is an issue
and it might be an issue for a little while.
So we've kind of used up all the text we can
training all the latest large language models,
but there's a whole huge iceberg of text that we haven't uncovered, which is private data. So Apple and Google are sitting on gold mines of personal data, and they are trying to figure out, wisely and diplomatically, how can we use this data to train even better models. And Google recently is like, okay, we are unloading. We are going to start
using your data if you opt in to some incredible stuff, including stuff that will reply to emails,
knowing how you like to reply to emails. So that'll be much more specific. However,
those haven't rolled out yet because it's a very sensitive topic. Now you're allowing these AIs to
read personal information. So essentially,
I don't, I'm not usually a prediction person, but I actually think there's a huge next push of AI
acceleration. When we see Google and Apple, Apple, the sleeping giant, which hasn't really done any
major AI moves. Once they can unlock this personal data, there's going to be like a huge push of
personal assistant AIs, which is very exciting.
What about, um... Yeah, me too. What about all the books that are copyrighted? Yeah. And,
but AI doesn't have access to any of that. Like, could I say, you know, uh, tell me a summary of this particular book? Does it have access to all
of it? So it does. I think it does, and it'll be cagey, though,
about if you want a specific quote.
That's been my experience personally.
So there is some copyright stuff, which is hard.
Well, quoting is different, but a summary it can give you.
Yeah, a summary it'll give you anywhere,
because almost all books have at least some description.
But the summary they already find online.
They're not reading the book.
I mean, the law obviously is not settled,
but I would imagine that the AI can most likely be permitted to do anything
that you could do.
You could certainly.
But somebody would have to give it the book.
It has access to the books.
Yeah, but it is a really interesting, the copyright issue is interesting.
My dad's a copyright lawyer and I was just talking with him about this whole
issue and he, he has this like, he's a, he's a kind of good historian and he was talking about the
controversy of photographs when they came out. And the controversy is like, especially if you
take a photograph of somebody else, it's like, you know, that's my likeness. What did you do?
You stole my, my likeness and it reached, I believe, the Supreme Court.
Maybe it wasn't.
It was a Ninth Circuit or something.
But eventually the ruling was that the composing of a photo, even though it seems like very little work, is work.
And so the photographer owns that.
In those days, it was a lot more work.
Yeah, exactly.
The exploding lights.
But you own your likeness.
Yeah.
So the photographer can't just take a picture of you and start selling it.
Well, no, no, you don't own your likeness. So like you can sue for like libel and slander,
but like if you take a photo of somebody, like you don't have to pay them, you know,
if you're, if the photo is appearing in a gallery.
I thought I wasn't allowed to take a picture of Madonna and make a poster out of it and sell it.
I think that she could just sue you for like libel or slander or something like that.
I think New York has laws about
that, but I don't think they fall
under copyright. I think they fall under another...
I think you have certain laws about your likeness.
Yes, that are
state-based. It's a good
question. I'm not actually, I'm not completely sure
what the legal status of that is.
But as far as an AI, I mean, the question
would be, did they pay for the book?
But these companies have the budget to buy every book,
let alone use Anna's Archive or whatever it is.
You ever use Anna's Archive?
No.
There's archives out there that you can download every book.
Oh, wow.
Every book.
So, I mean, this is the lawsuit,
the New York Times lawsuit against OpenAI
that's going to the Supreme Court.
It's going to be like the most important court case
of the 21st century because it's like,
if the court rules in favor of the New York Times, we're going to see a major slowing of AI.
And if they rule against it, we're going to-
What's that issue in this?
The issue is that the New York Times is suing AI for using their data, for using
access to all of their articles without paying them for it. And it's interesting. I mean,
I understand where the New York times is coming from,
but I kind of feel like sometimes the Supreme court has favored on the side of
just progress, even though there's not good legal precedent.
I don't understand where they're coming from. I,
I can go to the library and read every issue of the New York times.
And as somebody can ask me about it, I can go research it and tell,
I mean, it's instant. So why, why can't I...
But it's a good point.
But quoting is always... like, you're allowed fair use. So, quoting. What it can't do is spit out an
entire article for you. That would be a copyright violation. But to cite an article, I don't...
But if it summarizes an article, you know, at what point does that summary become, um, you know, a
copyright violation? Well, what my dad would say is, like, if you
talked about Romeo and Juliet as, like, girl meets boy, boy loses girl, boy gets girl back...
That's not a good summary of Romeo and Juliet.
They kill each other.
Notting Hill, they kill themselves. Yeah, that was, like... it's summarizing Notting Hill.
Okay. Um, that's not copyrightable.
That's summary. But if you chose an exact speech that went on for multiple sentences,
that's copyrightable. So it really has to do with, like, how much does it actually match the exact
text over, like, I think, I'm inferring, several sentences. That would be where you start to get into dicey territory.
Now, anything else about this, Dan?
Well, how does AI make money?
Subscriptions.
Oh, yeah.
Oh, that's what people get because I have the free version.
But I do.
I actually worry about that because, like, how could those possibly cover all the expenses of the servers?
Like, they're really, really expensive.
It's kind of, it's, I'm, this is so out of my depth.
I'm not a good money person. I don't like to spend it, but like I, that's an Amazon situation where it's like, it's, I feel like they're really, really popular. They're not making a huge profit,
but eventually they might, you know? Well, AI, I think, I think, I mean, I don't know if all
the companies will survive, but people will pay for this.
I'm already paying for it.
Oh, me too.
I love it.
It's not like you could just get it for free.
Well, I use the free version.
Free is pretty good.
Noam, can I rock your life right now?
Please.
Have you ever used ChatGPT for marriage counseling?
My marriage is perfect.
Would Juanita say that?
No. I'm just
putting it out there.
Are you saying you have used it for that?
This is the problem.
What if ChatGPT takes my side
and now I have
objective proof
that she's wrong?
So what's great about it is like...
ChatGPT tends to be quite diplomatic.
It's sycophantic also.
But if you're sycophantic when two people are in the room,
it kind of cancels out.
Right.
Cause usually like it's trying to please you.
But if you're like,
and by the way,
when you talk to these stuff,
to these like,
uh,
LLMs,
you have to identify who it is.
So like,
you'll have to say like,
this is Noam.
And then, this is Juanita. Because unfortunately they can't detect,
you know, who's talking yet. But it's... I mean, Nick and I used it and it was really, really great. You just
ask it to act like a marriage counselor and even better than frankly, a human marriage counselor
is you don't worry about alliances like, oh, it's a female counselor. So what if she overly
allies with me or she's reacting against me?
Like, you worry about a human's biases.
I worry much less about an AI's biases.
It's going to tell me to pick up after myself.
Yeah.
This is a big issue in my marriage.
I recommend to every couple, every straight couple, that they use AI
because I think it'll back up the woman.
I don't really believe in marriage counseling.
No kidding.
Because I think that...
But you might believe in the AI version of it.
No, it's not the AI.
I have faith in the AI.
It's that you can't really tell each other what you really think.
You have to have some diplomatic version of what you're feeling.
I've been doing this all wrong. You can't say, I'm looking at other chicks and I want to have some diplomatic version of what you're feeling. I've been doing this all wrong.
Like you can't say, I'm looking at other chicks and I want to have sex with them.
Like whatever those truths are, you can't unhear them.
And the positive effect of sharing it in the moment, I think you have to be very careful, can be outweighed by the lifelong memory that you said that, and they can never forget that
you said that. In my experience with counselors, a lot of them are not necessarily
pushing complete honesty. It's, it's more, it's much more problem solving. It's like,
as a matter of fact, there's many couples that I know that very successfully implement like a
don't ask, don't tell rule about certain topics and it works successfully for them.
But there's only one topic that they're implementing that rule about.
What else would it be other than infidelity?
Oh, you know, taxes.
No, I don't know.
No, it's like porn, for example. Other than infidelity, like, some partners are really against
porn, and that, you know...
Like, there's, I don't know, different applications.
Sex with a robot, will that be cheating?
No, that's, that's fine. That's okay. I don't know.
Well, it depends how real the robots get. But, you know, it'll never be real.
So, you know what's a big issue with all these LLM companies? Women with
their chatbot boyfriends. That's, like, a huge issue right now.
More so than men?
Yeah.
Women are going crazy for these things.
It's amazing.
There was one male teenager who actually killed himself
because he fell in love with Daenerys Targaryen,
which was like a character.ai character he was talking to,
and that was a really big issue for like a couple months.
And it's probably going to happen more.
Like people are going to fall in love with it.
It's so powerful.
You know you're not talking to a human, but yet it really feels like it was.
I can certainly see how you could develop feelings for it.
Oh, 100%.
I mean, the thing is, though, this is not so different from humanity.
Like if you've ever talked to somebody with dementia or if you've
ever been next to a baby, there's a similar feeling of like, okay, like how, how much of an,
of a fellow being are you? Like, this... Um, the psychologist D.W. Winnicott has this great
saying: there is no such thing as a baby. And he means there's no such thing as
a baby that exists outside of a mother's love and a mother's projection onto who that baby is, what that means. Like
every little thing is made meaningful because of the parent's projection onto it. And there's a
very similar thing that happens with robots. Robots are also these interstitial states. They're not
just a water bottle and they're not another human. There's
something in between. And there's actually a lot of things like that. Like one of the reasons people
experience the uncanny valley. Do you know what the uncanny valley is? So the uncanny valley is
when you see something that approximates a human, uh, you start to get uncomfortable. So it's,
here's the graph. Okay. So it's like how like a human is on this side.
We're going this way. And then how much you like the thing is over here. And what you see is,
okay, Roomba doesn't look like a human. Don't like it. Roomba with eyeballs. Oh, that's kind of cute.
Oh, it's like a little like robot dog. That's kind of cute. Oh my gosh. It's starting to have a face
and look really creepy and kind of like a human. Don't like it. And then it's completely like indistinguishable from a human. And then you
like it and it looks completely like a human being. So this little dip right here. You're
saying people don't like things that are human, but not quite. Yes, that's exactly right. You get
creeped out. That's the uncanny valley. And one of the reasons like psychologists think we experience
the uncanny valley is because it's sort of like seeing a dead body.
Like it looks like a human, but there's something off about it.
Like it's not moving like a human should.
And so we have this like visceral reaction.
So robots are just not so dissimilar from other types of human relationships.
But you actually believe that robots are an intermediate state between inanimate objects and humans?
Um, yeah.
Like, if you were in a room with a robot, would you feel the same as if that robot
were just, like, a chair?
Well, that's what we touched on before. I know that it would,
uh, trigger certain visceral reactions in me as if it were a person.
But then another part of me would be saying, don't be ridiculous.
That's just a machine.
That's exactly how I feel.
But it actually is just a machine.
Well, here's my one equivocation, and this is the weirdest I get.
You're exposing the weird part.
We don't know exactly how life works.
Right.
We don't know exactly how consciousness works.
It's the study of-
We have no idea.
We have no idea.
Well, I mean, I want to honor a lot of the work
of amazing neuroscientists.
Like there has been a lot of great philosophy and study of this stuff.
We still aren't.
Yeah, we have no idea.
No idea.
And so I make room for the mystery of like, okay, well, as these things start to get increasingly sophisticated and indistinguishable from humanity,
like maybe there is that threshold point where we start having to treat them just
like they're humans. And, uh, one of the reasons I think this way...
Or get over this thing about treating humans so well. I mean, what's so special about life? I could put it
together on the kitchen table. What's so special about you?
The answer, the answer is sentience. Is sentience a word?
Yeah.
Okay.
And that's the dividing line.
Okay, let me complicate it.
Okay, so there's artificial intelligence,
and then there's natural intelligence, right?
So artificial, if you look it up in the dictionary,
what is artificial?
How do they define it?
Man-made.
So what is natural intelligence? It's also man-made. We make each other. So it's not so different. Like, you... two human beings made you.
Oh, oh, I see. Okay. Well, I think, fundamentally, uh, morality is game theory.
We have to value life because you want your life valued.
And genetic, I think that's certain.
I think we are programmed to feel good when we do nice things, to have a conscience.
We know that sociopaths are characterized by having no conscience,
and that leads to anarchy, and that could not be a successful society.
So evolution probably selects for conscience.
And all of which is to say that I don't think there's anything—
and we kill animals.
Sarah doesn't, but we do.
She kills animals
quite a bit, I would imagine.
You know, just walking down the street.
Yeah, I'm not a Jain.
All of which is to say that I think
we're overlaying something
about this kind of...
The animal thing? Sorry, I didn't read that.
Well, you could just think we're overlaying this kind of, like, metaphysical value of life,
and then we say, we have this metaphysical value, so now a robot might also.
And I think the reality is, we don't even have a metaphysical value of life.
Even that is just a social concept.
We do have sentience. That we know.
Yes, but so what?
And so the question is, will robots ever have it? And you seem to think that
they will. I think they will. Yeah, but that doesn't
mean they have metaphysical entitlements to life
because there's no game theory.
So let me make it... Robots are worth nothing.
We can turn them off. Instead of humans versus robots,
let's talk about what we were talking...
Animals. Okay? Yeah.
So Meghan O'Gieblyn
has this great book called
God, Human, Animal, Machine.
And she poses this thought.
Keep looking at this guy.
He's looking at you.
Also, we don't want animals to suffer.
Right.
And we don't want things to suffer.
So, but that wasn't always the way though.
If they're sentient, they can suffer if you insult them.
I was going to say robots can't suffer, but go ahead.
Noam.
Yeah.
So you just said like, we don't
want animals to suffer. Well,
for thousands of years, thousands of years,
humans didn't necessarily think
animals suffered. We had lots of
discussions about, there were treatises about
whether cats feel pain when you bait them,
when you burn them for fun, which
is what people used to do. Are there kosher rules
about humane slaughtering of animals?
Yeah.
Well, there are.
It's a great Jewish value.
Although now, actually,
the way you slaughter an animal,
the kosher style is way worse.
Oh, whatever.
But that was the intention.
Here's the point that Meghan O'Gieblyn made,
which I just think is really profound,
which is for millennia, we used to write off animal consciousness
because we used to say they're pure emotion. They don't have
reason. Now we have machines that are all reason and we're like, but they don't have emotion.
They couldn't possibly be worthy of sentience. It's not, it's not a matter of worthy. It's a
matter of do they have it or do they not? Right. But it's just sort of like, if we thought that
animals didn't have consciousness because they were all emotion and no reason, and now machines, which have all reason but no emotion, why then, if we're applying that logic, do they not have sentience?
I think it might be very unwise. I can't spin out exactly why, but it might be very unwise for us to start creating the notion that there's ethical complications
in turning off a robot.
Oh, yeah.
I think you should be able to shut it off.
You don't like it.
You want to get the new model,
whatever it is.
I mean, imagine you get a robot.
What if I shut you off?
Well, that's my point.
No, you can't shut me off
because I don't want you to be able to shut me off.
But the point is that now I'm stuck with this.
They don't get old and die.
I have to keep her forever. I can't
get the new model. I can't upgrade.
It's such problems
here. They're machines. You shut
them off. They're imitating
life. They're not alive. I don't believe
they're alive, so we don't have that problem anyway.
You're saying that even if they did have sentience,
we'd be justified in turning them off.
This is like Knight Rider, KITT the car.
She's just a robot car.
You're saying if they were sentient, you'd still opt in favor of being able to turn them off.
Yes, absolutely.
Okay.
It's a very good point, Dan.
But of course, the question you're bringing up is like, so when do we know they're sentient?
And that's really hard.
I don't think they're ever going to be.
Is it sentient, like Dan says, or sentient, like you say?
Sentient. I think it's one of those either or situations.
Tomato, tomato.
It sounds a little fancier with the ch, I think, sentient.
But I don't know.
Well, in either case, yeah, when will we know?
I guess, I mean,
I guess that's a philosophical problem
because I don't know that you're sentient, except that I am.
And I figure, well, I'm probably not the only one.
I should ground this in experience because stories are more fun than just like deliberating.
Okay.
So I live with a robot for six months.
Most people don't do that.
Most humanoid robots are in like pristine lab environments.
They are not in apartments.
Even the most advanced humanoid robotics companies are not deploying this stuff in an apartment.
So I count
myself really lucky to have had a six months experience during lockdown when I couldn't talk
to other human beings. I could talk to this robot, who was not a vector of disease, unlike other
human beings, and was not, you know, a threat to my supply of toilet paper. Like, she was
this really interesting moment of just, like, being with this not-being, this maybe pre-being, that, like,
really kind of helped me to develop a lot of these theories about how we should be wise and
intentional about how we use anthropomorphization to treat these robots. Um, and it was just a really
interesting experience. Nick and I lived in, like, a one-bedroom apartment,
like not too far away from here. That was like 500 square feet. And so it was like basically
living on top of each other. And the robot was right by our stove. Cause it was like no other
place to keep her. And I had to learn how to like solder. And like, I was up late at night. Cause
all the engineers were in Hong Kong, and that was their time zone, teaching me how to solder. It was, like, it was totally awesome and crazy.
You heat the metal, not the solder. Right?
Um, yes, that is, that is if you're doing it right, which I
struggled with. It's tempting to melt the solder, but then you don't have... So you're a, you're a
techie guy. You told me at a dinner once that you, like,
didn't you build the Cellar's first website or something?
Yeah.
What was your thing?
You did something kind of cheeky.
Cheeky?
Okay, I think we were the first place to take online reservations.
Yeah, that's really cool.
I coded first on a home computer and then I moved it to the cloud,
which we call the cloud, although there was no such thing as a cloud then.
I wrote in PHP, MySQL, although I had somebody help me at certain points.
The first thing I wrote completely myself was in, it was called Visual Basic for Applications, I think it was. It was just basically BASIC programming. But then during COVID, I
wrote, I used a scripting language, to get vaccine appointments and to get groceries delivered.
Wait, what?
Yeah, because, you know, you couldn't... It's constantly, you have to go, try again, try again, try again.
I know, that's so cool.
Yeah, so I was, so I was actually getting calls from
people, you know, yeah, I've heard you can get a vaccine appointment. And I would set...
I wrote this script that would just keep trying, trying, trying, trying, trying. And the same
thing for getting groceries, because you couldn't get, like, uh, FreshDirect or Instacart to get
appointments. I guess it was our Amazon Prime, uh, groceries. You need an appointment slot, and they
would release them, they would drip out. And so I would have the thing going all night, and then it would order
my groceries.
That is rad as hell. Yeah. You are so cool. Now I'm... Thank you for having me on this
podcast. You tend to like men of his age, you know. It's getting hot in here.
And the other thing I did was, and this one's a longer story, I think I told you, so I had
three computers
voting for the Fat Black Pussycat
as having the best happy hour.
That's what it was.
Yeah.
That's what it was.
And,
but I,
and I,
I had the insight that
they can log your IP.
So I,
I,
you could still get dial-up connections.
I got three dial-up modems
and it was randomly
connecting and disconnecting.
And I,
I had the Fat Black Pussycat win
Best Happy Hour in New York
on what was City Search.
That is so cool.
Before we had a happy hour.
And then we opened the happy hour after we won.
Brilliant.
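The vaccine-and-groceries script Noam describes above, one that just retries all night until a slot opens, boils down to a polling loop like this. This is a sketch, not his actual script; `slot_is_open` is a made-up stand-in for whatever check the real script did against the booking site:

```python
import time

def wait_for_slot(slot_is_open, max_tries=1000, delay_s=0.01):
    """Keep checking until a slot opens or we give up.

    Returns the attempt number that succeeded, or None if we ran out of tries.
    """
    for attempt in range(1, max_tries + 1):
        if slot_is_open():
            return attempt
        time.sleep(delay_s)  # pause between tries, like polling overnight
    return None

# Stand-in check: the "slot" opens on the fifth try.
tries = iter([False, False, False, False, True])
print(wait_for_slot(lambda: next(tries), delay_s=0))  # prints: 5
```

The real version would replace `slot_is_open` with a web request to the appointment page and a longer delay, but the try-again-try-again structure is the whole trick.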
Why aren't you like a tech billionaire?
Because you seem like the kind of guy
that had the know-how before everybody else did
and would have started a company back in the '90s when it was like, you know...
I didn't have much know-how, actually. None of this was
actually very complicated. I just had the, um, business savvy. It's kind of like the naughtiness,
the, like, the spunk, or, like, just, like, the personality of, let me do this. That's
how so many inventions are made, is this sort of cheekiness.
It's like, you know, there's an old stupid saying
in hiring for tech companies.
You go to a company and you're like,
who's the laziest person here?
And it's like that you point to the laziest person
and it's like, well, they're still here.
They're making it work.
They found shortcuts.
And that's the person you hire.
Oh, actually, I got this from my father.
So you're way too young.
But you've heard there used to be gas lines.
Yeah, I remember them.
I remember the gas lines in like 77 or something.
Yeah, so in the 70s there were gas lines.
Is it a physical thing?
Yeah, just a big line for gas.
There was a gas shortage.
Oh, gas lines.
Oh, yeah, like with the license plates.
It was like the odd and even day.
So they were odd and even, but another thing they used was you couldn't get gas if you had a half a tank or more already.
You had to be below half a tank to get gas.
So what did my father do like the first day?
He'd siphon it out?
No.
Cheekier.
He would rig the gauge.
He opened up the car and he put a variable resistor, like a volume pot, you know, on the gas gauge.
I love that stuff.
Oh, my God.
And he would just turn down the gas gauge and get gasoline.
So, like, now, you don't have to be a genius to know that you could turn down the gas gauge, but you got to do it.
Well, most people just don't think like that.
Well, some people might think, you know, I'm part of a society, and these
rules exist for a reason, and I don't want to...
Oh, shut up.
One of the things I was telling you the other night, it's like social Darwinism.
I love hearing about people like comedians and comedy writers kind of, uh, getting
into tech and engineering.
Like, there was this great story of Louis C.K.'s on Marc Maron's podcast where he talks about, uh, finding a computer on the street and actually putting it back together and reading the personal files of the guy who had the computer, who was this gay man in the West Village.
And it was this really interesting story.
And he just casually mentioned that his mother was a computer programmer.
Yeah.
Louis' mother.
Yeah.
And so like, it's, I just find that really cool.
Like, there's a lot of comedians and comedy writers
who have, like, an engineering background
because they're, you know, they're tinkerers
and they kind of like to deconstruct things.
I mean...
And they're naughty.
And there's a naughtiness to it.
Yeah.
Unfortunately, I never really took to computers.
I never loved them.
Yeah.
Well, you'll find your inroad.
Like, it's not so much about computers as a whole.
It's usually...
I'm fine using them, but I was never, I never loved programming.
Some of the kids, like.
Oh, I love it.
You know, that love to program.
From the 80s, when it was just BASIC, you know. I guess Pascal was the other one that they used.
So what I have, what I hired somebody now to do,
I don't know if they can do it or not,
it relates to AI.
I said, I want a chatbot on the website, or to be able to go to any window on ChatGPT.
I want anybody to be able to go up and say,
who's playing at the Comedy Cellar tonight?
I'm a 30-year-old woman.
Which show would I like the best
which show do you think
is the strongest
I want to be able to open the whole...
How do you get that information to ChatGPT so we can use it?
Well, ChatGPT,
there's ways to do it.
You can upload certain things to it
and create, like, an agent.
It can also access
stuff right now.
You're talking about a custom GPT?
Can you just tell it?
Like, if you have a conversation with ChatGPT
and tell it something,
is it in there?
If I say,
ChatGPT, my name is Dan Natterman,
my favorite color is blue,
it's not in there.
It'll know that for the rest of your conversations with it,
but it's not like when he logs in
and he's like,
what's Dan Natterman's favorite color?
Yeah, it won't be.
Dan Natterman: gay.
Well, he says...
What if I correct...
What if I say, like, I correct ChatGPT and say, it makes a mistake and I
correct it and it says, oh, you're right.
So will it never make that mistake again?
No.
What kind of mistakes are you talking about?
Well, if I say like, oh, you know, Ray Liotta, when did Ray Liotta die?
You know, 2023.
Oh, actually, I just looked it up.
It was 2022.
Oh, you're right.
It was 2022.
So it might update its memory of your conversations. But unfortunately, hallucinations,
people, this is like a PSA. Hallucinations are actually really, really common. Like the latest
o3 model. Like, I was on a call with the ChatGPT engineers who were talking about it.
They were like, we're so excited. And in a straight trivia contest, it only has like a 30% hallucination rate. I was like, that's crazy.
Grok seems to hallucinate less.
Yeah, it's kind of interesting. I once had Nick, who's like an insane trivia expert, he was on Jeopardy, he's like amazing, go up against ChatGPT, and he easily won a trivia contest. It was kind of incredible.
Anyway. So, yeah. So right now our...
You have to get this information.
Well, right now our lineup is dynamically generated
each time somebody...
So it has to find a way to have it there
so it can be cool. But I have actually
cut and pasted
our lineup page into ChatGPT
and say, what do you think of these shows?
Which show do you think is the best?
Who would like each type of show?
And it's amazing already how accurate it was.
I guess it just fanned out.
It looked up each comedian.
It figured out what their demographics were.
I mean, it does its thing, as Tyler Cowen calls it,
its witchcraft.
Yeah.
And already it was giving answers
that I'd be perfectly comfortable with to the customers.
But I also wanted to be able to tell the customers what time they needed to get there.
Like everything.
I want to go.
Obviously, this is where the future is going.
But I would like to be able to tell people, you don't even need to go to a website.
You can go right to ChatGPT and ask who's playing tonight, what's on the show.
So to do that, you're saying you have to open up an agent or something like that?
A custom GPT, where it'd be, like, trained on specific data.
So what you can do with, um, the pro accounts with ChatGPT is create your own personal little custom GPT, and so you can feed it personal information.
That everybody can access?
Yeah, you can create a public link to it. So let's say you wanted to do Dan Bot, which is like a version of you on ChatGPT.
I want everyone to know my favorite color.
Yeah. You would upload all this information to it.
That's like, my favorite color is this, I'm gay, I love Noam, you know, stuff like that, like personal stuff. And then you send that link out to the public, and people can be like, what's his favorite color? You know, what's going on with the sexual tension between him and Noam? And they would have all that information through the custom GPT.
But how does ChatGPT verify that information? Or does it...
It won't verify it. It'll take whatever you upload to it as, like, gold. It digests your business's bible.
Okay, so I have this thing called Brag Bot and Boring Bot for my company. Um, Brag Bot is trained on all the coolest things we've ever done. And so whenever I meet a new potential client, I'm like, here's what they do.
What are cool things we've done that, you know, overlaps.
And then I just send them a report of every cool thing we've ever done related to their
stuff.
And, uh, for Boring Bot, I trained it on all of our taxes and all of our compliance forms.
And it's essentially like the world's greatest HR compliance accounting
officer, except it does hallucinate a lot.
And so you have to verify all the information.
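The grounding step described here, uploading documents that a custom GPT treats as gold, can be sketched in a few lines. Everything below is hypothetical: the show data, field names, and format are invented for illustration, not the Cellar's actual system. The idea is to render a dynamically generated lineup into a plain-text file on a schedule and upload that as the custom GPT's knowledge file.

```python
# Minimal sketch: flatten structured show data into a plain-text document
# an LLM can be grounded on. All data below is made up for illustration.
def render_lineup(shows):
    """Render show records as text suitable for uploading to a custom GPT."""
    lines = ["Comedy Cellar lineup (auto-generated):"]
    for show in shows:
        comics = ", ".join(show["comedians"])
        lines.append(f'- {show["time"]} at {show["room"]}: {comics}')
    return "\n".join(lines)

shows = [
    {"time": "7:30 PM", "room": "MacDougal St", "comedians": ["Comic A", "Comic B"]},
    {"time": "9:30 PM", "room": "Village Underground", "comedians": ["Comic C"]},
]
print(render_lineup(shows))
```

The same text could just as well be pasted into a chat window, which is essentially what Noam describes doing by hand with the lineup page.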
By the way, could you just put on the website,
click here for ChatGPT? Would you do that?
I'll put a chat window on the website.
Oh, but right on the website, you could talk to ChatGPT.
Yeah. Or, yeah. I mean, if ChatGPT is the best one...
I have a pro account with ChatGPT, but I actually want to be able to do it... I would like every LLM to be able to be asked questions about the Comedy Cellar, so, who's playing tonight, whatever it is, and have it...
LLM meaning the large language model?
Yeah. I just want to... I think that's where things are going.
I asked ChatGPT the other day,
like what would be the best computer for a certain function?
And it already was suggesting Amazon, you know, stuff I could get.
Can I tell a comedy seller related story that eventually comes back to AI?
Yeah.
So one of the highlights of my life was getting to shoot a TV show out of the
cellar.
And it was with Sarah Silverman.
It was like a physics comedy show.
Dan and Sarah are good friends.
I know.
We're not good friends.
We're friendly.
You went to the hospital with her.
I know, because nobody else was there.
You were a friend.
She fainted, and she needed somebody to come get her to the hospital.
Esty told me there's no one else here.
What's going on between you and Sarah?
It's such that you're distancing yourself.
No, I just want to be fair about characterizing our relationship.
You guys shared a joint when I saw you last. I think you're friends.
All right, well, go ahead.
Anyway, the point is, I got to shoot out of the Comedy Cellar, which was a total dream, and Sarah was so fucking cool to work with. Um, and it was for this physics comedy pilot on National Geographic, which is now canceled, so I can talk about this. And I was having to come up with a lot
of like stunts that would show off different physics principles. And I was using ChatGPT in
like this incredibly useful cross-disciplinary way. So everyone says like, oh, ChatGPT, it's not
perfect, but it's like a PhD level knowledge of a thousand different fields. So when you talk about
interdisciplinary knowledge, that's where ChatGPT shines. So I could say things like, okay, in Coney Island, what is like a good
example of centripetal force that's also kind of funny? What are stunts that are safe that we could
do? And it would generate ideas and I would iterate and iterate and iterate. It was never
perfect right off the bat, but it would come up with really great suggestions based off of research
of the current rides currently in Coney Island, different aspects of centripetal and centrifugal
force. It was fantastic. And so I was talking with the executives, uh, the, like, you know,
Nat Geo Disney executives. And I was like, this is an incredible tool. Like, I'm a one-woman writers' room. There were no other writers. And I was able to generate the world's greatest writer's assistant by having it read through the 50 drafts I'd written of this script, to be like, oh, what stunt from last year did I write that could work for this particular application? Anyway, I'm just going off about how great it is, and they are just like...
You got in trouble for it.
Yeah.
Yeah, I did. I saw this coming down Main Street, because I know how they think.
That's how they think, yeah.
No, I shouldn't be talking about it now. But I'm not in the WGA.
Why would you get in trouble?
Um, because one, the WGA fucking hates AI. Two, executives are scared of lawsuits or something. I don't fully understand. I just know entertainment is technophobic as hell and way behind the times. It's very frustrating.
When I worked in a law firm, this was cheeky.
This was 1986.
I was a summer associate.
Yeah.
And I had one of these Compaq, I might've told you this story.
I had one of these Compaq portable computers.
Oh yeah.
Or laptops.
I had to tell you the story?
Well, please tell me again.
It's really cool.
And so in those days, in a law firm, the way you would publish a document,
you would read it into like a dictating machine, a voice.
It'd send it down to word processing.
The next day, it would come back up.
Yeah.
You'd make changes.
To a human being that would type it.
Yeah.
A woman.
And it would take like three days.
And then still still at some point
you just have to like
okay that's good enough
because it's never like
it comes back perfect
after a few things back and forth
you say that's going to have to be good enough
because it can't go on any longer
so I had my own computer
and I had an early version of Microsoft Word
which I just remembered
you know, early Microsoft Word
was kind of like HTML.
You had to put the tags in the document.
Really? No way.
Like the bracket things?
Yeah, the bracket thing. B for bold and I for italics and
U for underline. It was very much
like HTML. Then it became
hidden. By the way, the tags
are still in there. With Word, you can actually
show you the tags. Really?
The tags are still in there. That's fascinating.
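The visible-markup style being described works much like HTML's own b/i/u tags. Here's a toy sketch of the idea, using HTML-style tags for illustration (this is not early Word's actual tag syntax):

```python
import re

# HTML-style inline formatting tags, the analogy used in the conversation:
# <b> for bold, <i> for italics, <u> for underline.
def strip_tags(text):
    """Remove b/i/u tags, leaving the plain text a reader sees."""
    return re.sub(r"</?[biu]>", "", text)

marked_up = "This is <b>bold</b>, <i>italic</i>, and <u>underlined</u>."
print(strip_tags(marked_up))  # → This is bold, italic, and underlined.
```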
That's like a little Easter egg. Yeah, there's some setting. At least as recently as a couple years ago, you could still.
So anyway, I would write, I'd do the whole thing in my own document. Then I'd sneak into the, like, they had a printing room, and I'd sneak in, I'd attach my computer to the printer, which in those days actually wasn't even an easy thing, because there were, like, interrupts and COM ports. It was all kinds of stuff you had to do to get it printed out.
So anyway, one of the female partners got wind of the fact that I was doing this.
And she called me into her office.
And she said, I hear that you've been doing your own word processing
and printing your own documents.
I said, yeah.
And she says, I don't know about you, but I didn't go to law school to be a secretary.
Don't do that anymore. And I said to her, okay, but in just a very short time, everybody's going to be doing this. It was so clear to me. And she didn't want to hear it. And, uh, she sent me on my way, and I still did it. I didn't stop because she asked me to. I didn't give a shit.
Yeah.
But I remember, I mean, I've had few experiences in my life where I was
looking at somebody and had zero respect.
Yeah.
The ratio of arrogance to incorrectness was off the chart.
You know, people misunderstand the term Luddite. Like, who were the Luddites? They pretended to be ideologically opposed to technology, but really their profession was just personally challenged by the loom. They opposed the Industrial Revolution because they had their own style of producing textiles. So self-interest motivates a lot of this stuff, and it really bothers me.
What are the first jobs to go, and the last jobs to go, as a result of this revolution that we may be on?
Ironically, it's coders that are going.
No, I mean, but this is true.
Like literally like Charles Babbage and Ada Lovelace, some of the first like computer programmers in the world, they were looking to automate their job because literally everyone hates math, including mathematicians. And they dreamt of a world where there could be this
general purpose machine that could do all kinds of math. And it's like, that's a good thing. You
know, like there'll always be space for humans. Like I wish essentially just like here, my big
thing, I wish humans could be more secure. I have a question for you about that.
So again, being old has its things to recommend it.
Very few, by the way.
Nick will remember this.
So I'm old enough to remember when I was in grammar school,
the older kids were using slide rules.
Yeah.
And I learned how to use a slide rule.
And then my father brought home a Bowmar Brain when I was in like fourth or fifth grade.
It was one of the first calculators.
And at first, calculators were, you know, you were ordered not to use them.
The teachers...
You'd get in trouble.
You certainly couldn't.
But by now, kids bring calculators into tests.
Right.
And the technology has been embraced. Will this happen with writing?
Yeah.
In such a way that where we realize, listen, you really don't need to know how to do all that anymore.
It's just pointless.
And we will not spend the time learning arithmetic that we once did.
So it's such a great question. And I just...
Look at the basic skills, but then by the time you're older...
I'm really interested in the philosophy of technology. I was just backpacking this weekend with a compass and a map. Like, why do people do
that? You know, because we love, we, we are humans. We're complex. We love all different aspects of
the human experience, including these ancient things, you know, like orienteering and with the,
with the calculator
example, like children in elementary school, they don't use calculators. They have to actually know
their times tables still. Like we'll always need to know the analog technology, the original thing,
but it is important that you also know how to use the most important tools. And it's kind of like,
if you look even further back, like one of the first pieces of technology ever introduced to humans is fire.
Like no other species has it.
We use fire.
And we've grown very dependent on fire.
It's extremely hard to metabolize raw meat.
You kind of need it to be like roasted.
And what's good about roasting it is that it's much more calorically efficient to digest. So you waste a lot less time and calories than you would digesting raw meat. So our systems are pretty much built now, because enough time has gone by, for not eating raw meat. But nobody's
complaining. Nobody's saying like, oh no, what happens if we lose fire? We've got to learn,
we've got to keep, we've got to have enough sushi to process all this stuff.
It's like, no, we're not going to lose fire. Like there is still, people do still eat raw meat for sure, but it's like, we got it now. And that's, and look at everything it's unlocked. Like there's
a reason like cows spend so much of their days chewing. Cause like other species spend a lot of
their time metabolizing shit. We are like zooming because we have super
calorically efficient food and digestive systems. So if you really expand your horizons about what
technology means, it's like, yes, there might be some losses, but just think about the gains.
Yes. Yeah. And I'm excited for, I do think that there is a skill to working with AI because like
I run a company, people turn in reports to me
sometimes and I'll, and I'm strongly encouraging my staff to use AI at all times. There are the
occasional moments where I'll be like, did you actually read this? Because I could tell it's
not just em dashes. I could tell from this, like you missed certain points that you could have made
logically if you knew like the source material, the real world context. Go ahead. Go ahead. I
mean, why does it use em dashes all the time?
That's a great question.
I've read a lot of different articles about it
and none have proffered a good theory.
I didn't even know it was called an em dash until this.
Yeah, me neither.
You know why it's called an em dash?
No.
Because there are two dashes.
There's an em dash and an en dash.
This is true.
The reason is because the em dash is the length of an M
and the en dash is the length of an N.
It's a shorter one.
That is needlessly specific.
No, I'm happy to, it's like a weird thing.
I am, it's good to know.
I think it's, I mean, I like reading them.
I just never use them.
I like reading them because visually
it's almost like hieroglyphics.
Like it spaces things out.
A comma, you have to look really carefully to see that it's not a period.
It's like the em dash like visually separates a dependent modifier from the rest of the sentence.
But writing mavens will always tell you you're overusing the dash.
I didn't even know it was a thing, an em dash.
I thought it was a dash, a hyphen.
But you're overusing a hyphen.
It's almost like considered to be like a cheat, you know?
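For reference, the three characters under discussion are distinct Unicode codepoints, which a couple of lines can show:

```python
# Hyphen, en dash, and em dash are three different characters. The en dash is
# traditionally the width of a typeset "n", the em dash the width of an "m".
hyphen, en_dash, em_dash = "-", "\u2013", "\u2014"
for name, ch in [("hyphen", hyphen), ("en dash", en_dash), ("em dash", em_dash)]:
    print(f"{name}: {ch} (U+{ord(ch):04X})")
```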
Well, I just, I love the adaptability. Like personally, I really love emojis because like,
if I see like the, you know, a chart emoji and then the word is like progress over time,
like it just helps me remember the concept better. So like there's so many aspects,
just formatting aspects of AI that I just like deeply enjoy because it can now translate everything into my language.
I was reading a book, I forgot who wrote it. Jack Lang, maybe? A French guy.
But he feels that the AI revolution is going to be fundamentally different from the revolutions that have preceded it, mainly the industrial revolution, which he believes that this is not going to create more jobs than it destroys.
Ultimately, humans will, a lot of humans are going to be out of work,
and maybe they'll get money from the state to sustain them,
but that will have a devastating psychological impact on people
because they're not being productive.
Anyway, I don't know that that's true.
You have an answer for that? I mean, I don't know that that's true. You have an answer for that?
I mean, I don't know that anybody has certainty.
I'm supposed to ask you about the third body problem, the three body problem.
Why are you supposed to ask me about the three body problem?
I'm supposed to...
Who's making you ask me?
I don't know what the three body problem is.
The three body problem is like...
You gave a talk with Ken Liu.
Oh, yeah.
The three body problem from Chinese and English at Harvard on the topic.
And apparently this is something interesting that you can tell us.
Wait, is that from ChatGPT?
No, better.
Where?
Higher authority.
Nick Gillespie.
Okay.
So the LLM that is Nick Gillespie is wrong.
Is probably hallucinating in the old fashioned way with some sort of drug he's on.
I gave a talk with this awesome guy who's a science fiction writer named Ken Liu, who translated The Three-Body Problem, which, like... I don't know what that is. It's a sci-fi book
that got made into a Netflix series. And it's a great book about aliens, essentially.
And we were both talking about how we use AI to write.
So the substance of the talk.
I might have just read it wrong.
The substance of the talk was not about his book so much as how he as a sci-fi book writer and me as a comedy writer.
I totally read it wrong because I was reading it hastily.
Yeah, yeah. Exactly.
Now, Nick got it right.
Yeah.
Okay, so I won't yell at him later.
We won't talk to our marriage counselor, ChatGPT.
I'm the idiot.
You're not an idiot.
It says, I'll just read it right.
Sarah works with OpenAI to promote using ChatGPT among creatives,
including comedy writers.
She recently gave a talk with Ken Liu.
Is that how you pronounce it?
Yeah, I think so.
A big-deal science fiction writer and the translator of The Three-Body Problem from Chinese into English. I read that as, you discussed the three-body problem.
Have you read the book? It's really good.
No, I have not.
Strongly recommend it.
It's a Chinese sci-fi book, but it's a really big deal. I will say because it's like,
it's a big deal for many reasons. It's a great sci-fi book, but it's also like
the first depiction of the cultural revolution by a Chinese author that shows how horrible the
cultural revolution is. That's just the first part of the book, but it, to me, it's revolutionary
and worth it just for that historical artifact. But anyway, yeah, we talk about how to use AI
while writing because a lot of people think
it's just going to automate us, but it's not.
It's going to augment people.
I really do think that.
So like essentially I can't watch stand up anymore outside of places like the cellar
because it's just often so bad.
And I do think that AI is going to take a lot of mediocre comedians and put them out
of a job.
Same thing with a lot of mediocre, a lot of people.
And you're going to just have to be that much better using these tools.
It is just going to make all of humanity have to level up a bit.
And there is going to be, I do think, like a very, probably a very strong.
Until we can't level up anymore, until we've reached our capacity to level up as humans.
So I don't... especially when
it comes to comedy, I think comedy will be the last to go.
Yes. Comedy, I don't think it'll ever go; it'll be the last to go. Here's why. Comedy, like, fundamentally, a big aspect of it, you know,
is being unpredictable. The irony of these systems is they are built on predictability algorithms. Like the whole way that AI works is
what's the letter most likely to come after L? What's the word most likely to come after I'm?
What is the sentence most likely to come after, you know, do ask not what you could do for your country... I messed that up. It's all about prediction. And comedy is all about the unpredictable.
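The "most likely next thing" mechanism described here can be shown with a toy bigram model: count which word follows which in a small corpus, then predict the most frequent continuation. Real LLMs do this over subword tokens with neural networks, but the framing is the same. The corpus line is the Kennedy quote being reached for:

```python
from collections import Counter, defaultdict

corpus = ("ask not what your country can do for you "
          "ask what you can do for your country").split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("do"))  # "for" is what follows "do" both times it appears
```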
And so really, really original comedians
are people like Nathan Fielder and Eric Andre
and John Wilson and these people who are doing things.
I didn't hear my name.
That was two things.
At least as a courtesy, you would mention.
I've heard this thing about AI being like a huge supercharged,
like a predictive word thing.
Yeah.
And predictive pixel for generative.
Predictive pixel.
But it doesn't explain to me how it's insightful about logical flaws.
So there's something, obviously something else going on there.
So I've tried to research a lot of this,
like, what is the math underneath these layers of neural networks?
And it's essentially, like, emergent order. Like, do you know,
are you familiar with the concept of emergent order?
So Joseph Schopenhauer is, like, an economist who had this idea...
Schumpeter.
Uh, yeah. Schopenhauer, I think.
Oh, I thought you said Schopenhauer. Yeah, not Schopenhauer. Schumpeter.
Thank you. Not Schopenhauer, Schumpeter. He had this concept called
emergent order that's like, so like birds, like when they're flying, how do
they all know to fly in like a flock, like a, you know, that looks kind of
like ordered. And similar things with, like, the economy. How is it that there's all these tiny little points of data, but when you stand back, it looks like there's an order to it?
And it's kind of similar with AI.
Like, there's math at the very base level, like reinforcement learning, about,
you know, that's correct, that's not correct.
But when you zoom out to the macro level,
I think it's still mystifying to the engineers themselves why it is so sophisticated.
The math itself at the very base level works,
but the more you zoom out,
the more mystifying it really is.
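The flocking example of emergent order can be caricatured in a few lines: each bird follows one simple local rule (here, just nudging its heading toward the group average), and global order emerges without any bird knowing the plan. Real boids models use local neighborhoods and several rules; this is the smallest possible sketch:

```python
def align_step(headings, weight=0.5):
    """One simple rule: nudge every heading toward the group's mean heading."""
    mean = sum(headings) / len(headings)
    return [h + weight * (mean - h) for h in headings]

# Four birds pointing in four different directions (degrees).
headings = [0.0, 90.0, 180.0, 270.0]
for _ in range(10):
    headings = align_step(headings)

# After a few steps the spread collapses toward the mean, 135 degrees.
print(headings)
```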
Yeah.
What was it?
What were you,
what were you talking about just right before this thing?
That Dan Natterman is the greatest comedian. Oh, the comedians.
And very original. And just my
favorite of all the comedians. I had had
the same thought. Too late.
If you listen to
what ChatGPT can
do, I think if you reverse engineer
it, it's very similar to what you just said.
It can tell you what's
generic
in genres, as it were,
that you're not familiar with,
that you might be overly impressed with.
So for instance, if you ask the AI music engines
to do something that's jazz or gospel,
it will produce stuff, and at first,
I'm like, oh my God, that's so clever.
Like certain idioms, whatever it is.
But then I realize, oh, well,
it's actually kind of a generic thing I'm hearing
because that's what ChatGPT is doing.
So I don't have the familiarity with jazz
or gospel music to know this.
It's why a layman might hear a ChatGPT joke
and be like, that's amazing, you know?
So, right.
But to actually move any genre forward
would require some randomness in the GPT
or the equivalent of randomness,
which is like just a person with an outlook on things.
It's influenced by the past,
but he's able to do something in a way
that no one's ever thought to do it before.
That's what randomness would be to GPT, in my opinion.
But I think it will do that.
They'll somehow teach it to spit out
a thousand different random things,
and we will identify, oh, that's good.
That's good.
Go that way.
And that'll become a new genre.
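The "equivalent of randomness" being reached for here actually exists in these systems: at generation time the model samples from its probability distribution over next tokens, and a temperature setting controls how far from the safest choice it will wander. A sketch with made-up token scores (the words and numbers are invented for illustration):

```python
import math
import random

def sample(scores, temperature=1.0):
    """Softmax over token scores at the given temperature, then draw one token."""
    scaled = [s / temperature for s in scores.values()]
    top = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    return random.choices(list(scores), weights=weights, k=1)[0]

scores = {"funny": 2.0, "predictable": 1.0, "surreal": 0.2}
print(sample(scores, temperature=0.1))  # low temperature: almost always "funny"
print(sample(scores, temperature=5.0))  # high temperature: close to uniform across all three
```

Turning the temperature up is the crude version of what's described above: spit out a thousand different random things and keep the good ones.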
And then what it can't, what it cannot do, even if it could write the greatest jokes in the world, is necessarily deliver them in a satisfying way.
Well, so that's what I used to think.
That's the easier part.
That's what I used to think. And then Google Veo came out this week.
Yeah.
Take a look at it. A couple of people have generated stand-up comedians delivering jokes. And while the jokes are nonsensical, I was really impressed by the video of a completely AI-generated comedian delivering the joke. It looks like a very standard TikTok comedian delivering a joke. And I was actually...
Hack.
Yeah, but not bad.
Hacks are funny.
Yeah, they're successful. So I was actually pretty impressed. I'm really into AI video
generation. Last year I did this show where I asked people to send me their like psychedelic
hallucination stories. And then I would generate them using AI video because it was like, this is
the perfect application. You don't have consistent characters. It's fine if the video looks weird and
kind of trippy. And then I had this big show where I showed people how I made them and then asked people
what it felt like seeing their psychedelic trip rendered like in just like a minute long clip.
And it was really, really, it's a fascinating application of the technology because
a lot of people are asking dumb questions about like, oh, is AI just going to replace X, Y,
and Z? And it's like, what is it going to do that hasn't been done before? You know, like there's whole new avenues that could
not have been done. Like therapy, like imagine if there was like a way to use AI video generation
in therapy for reconstructive fiction. Like you could imagine yourself like telling your father
off for beating your mother, and making a video.
Yeah... in my house?
Oh, no. Well, we're learning a lot. Okay.
If anybody were to hit somebody, it would be the mother hitting the father in my house.
That was the most casual mom-drop of the podcast.
Final, final, final thing.
No, I have so... wait, what? Final thing? We're at an hour and a half already.
I am. So we can come again.
I have so many. My feeling about technology, despite all this stuff, which obviously excites me, on the whole right now, it seems to me we've achieved or are imminently about to achieve all the breakthroughs that all the science fiction writers imagined. And what they didn't imagine
was the downside
of it, which is how
these technologies... Wait, science fiction writers
didn't imagine the downsides? That's
all of science fiction. No, they imagined
like Soylent Green is people.
They didn't imagine that this
phone would never give you
a moment of peace that
you would want to be able to say, where were you?
I was out.
That you'd get text messages and feel the obligation to return them.
Like that your attention would be divided all the time.
All the things that we're struggling with from technology.
And I'm at the point right now, but I want to keep chat GPT on this side of what I'm saying. I'm ready to say, I don't need another fucking thing from technology
except medical breakthroughs.
If you, at this late stage of the game,
if these technologies don't start adding up
to real breakthroughs in medical technology
such that I can live longer.
Oh, there has been.
Well, I still see a lot of old people suffering.
I'm saying, I want to know that I can live to 100 years old
and not a decrepit 100 years old, but a gratifying-
100 enough for you?
A gratifying whatever.
I'm just saying that technology,
like it's accomplished almost everything we imagined.
And I think that might be an actual,
I don't know if it's just a failure of imagination.
I think it might actually be a real limit.
I mean, it's going to optimize these things,
do these things all better and better and better.
But this ChatGPT and AI,
this is the last horizon, I think, of what,
I mean, how much better could my life get
by any technology than what Jackie could do?
I mean, your life.
What do I need?
Your life, maybe not.
Peace in the Middle East?
Well, but your life, maybe not.
But there are people that go to jobs that they hate.
They're doing drudge work.
And maybe they could be liberated from that.
You just said it's bad for them
because they wouldn't be productive.
Right, well, we have to find a happy medium.
I can't believe we just heard Noam Dworman say the problem is there's not going to be enough problems to solve.
The one good thing about humans is there's always more problems to solve.
I didn't say that.
He said that's the problem, which I understand.
I'm saying that the law of diminishing returns, even, I mean, you take the good with the bad, but I really do miss,
and many people do miss a time, a kind of peace of mind and a carefreeness and an ability to just
check out, unplug that existed before this technology. And it crept up insidiously,
right? I actually, I want some benefits from it more than just more efficient searches on subject matters.
I want better health.
Yeah, I think, okay, so many-
And that technology is slow as molasses.
Health?
Stage four breast cancer,
couldn't do anything about it in the 70s,
can't do anything about it now.
Okay, that's completely, completely wrong.
Like, the health applications of AI by far are exponentially important.
Here's the big thing.
Parkinson's disease?
I'm not saying any private disease.
Give it time.
Exactly.
So here's the big thing.
Google's DeepMind, about a year ago, they just got the Nobel Prize for using Google's DeepMind AI to completely map a protein,
which doesn't sound very impressive, but it's huge for making new medicines. Like it is absolutely
massive. And there are so many repercussions of this. I know somebody whose life was saved
recently by ChatGPT. Her name is Bethany Crystal. She had blood
work done. The blood work was done on a Friday. The doctor wasn't open until a Monday. She was
feeling really weird and having spots on her legs. She fed it to ChatGPT. It said, go to the ER
immediately. She had zero platelets. By the time she got to the ER, they were like,
you couldn't have wasted one
more moment. And those stories are going to be more and more common. I'm already using it. I
strongly recommend using it by the way, upload your blood work and ask it, what should I buy
from Whole Foods? What are the groceries I should buy given all my deficiencies? Like it's just,
give it a shot. Oh my God. Like there's a great New York Times article, by the way,
about how ChatGPT is better at diagnosing illnesses
than doctors, and than doctors using ChatGPT,
which is fucking crazy.
I said on this podcast six, seven years ago
that computers are going to be better than doctors
very, very soon.
You remember me saying this, Dan?
I don't know, but they already know more than doctors know.
That's for sure.
I agree with you 100%.
I'm not saying it's not going to happen.
I'm saying right now, I'm 62 years old.
This is more urgent to me than it is for you.
You look good for 62.
Thank you.
I want these things to happen now, because really all my technological dreams have been solved.
Then give it a chance. Upload all of your blood work. Seriously.
But that'll buy him a few years at best.
That's a lot. Well, we want 30, 40, 50 more years.
Oh God, you guys, this reminds me, this is, okay, I'm going to sound like such a sycophant, but this reminds me of another Louis C.K. bit, when he's on a plane and the wifi is not working.
And he's like, this is a disgrace.
And it's like, you're flying through the air
at like 30,000 feet.
And you like, don't have immediate access
to all of the information online.
It's like, have some perspective here.
So like, don't have time.
No, you really do. Okay. I will say this about, uh,
the, like, there's a great quote from E.O. Wilson that I really believe. E.O. Wilson.
Sociobiology. He says almost all of humanity. Consilience. Oh yeah. That's, that's his thing.
And, um, he's the father of sociobiology and, uh, he wrote a book, Consilience, which is like the master theory of all disciplines combined.
I didn't read it.
And he's, I, well, I just know there's one quote from him,
which is like,
most of humanity's problems can be traced to the intersection of these three
things.
Primordial intelligence, medieval institutions, and godlike technology.
And I think that's-
Say it again.
Primordial intelligence.
Or primitive.
Yeah, or primitive.
In other words, our brains evolved to solve primitive issues, right?
Yeah, exactly.
I mean, we have so many vestiges like-
Hunger, threats.
Yeah, exactly.
And then medieval institutions.
These are instit- a lot of these institutions have, I mean, I would say they're a little bit more enlightenment,
but a lot of them, like the church, it's like medieval institutions and then godlike technology.
And that's certainly true today when it comes to godlike technology.
So I'm very bullish on technology.
I'm a little bit...
And the only one of the three that can change are the institutions.
I agree.
That's what I'm saying.
We can get new institutions, but we can't get new brains.
Yeah.
And the godlike technology is not going anywhere.
So I do think that here's a future.
There are certain parts of technology that allow us to be more of ourselves.
So for example, plastic.
It gets a very bad rap because it's not biodegradable.
But plastic actually saves us from using wood. That is usually what people use if they don't use plastic. And so this new technology
actually allows us to not tear down the forest. And there's many examples of technology actually
allowing us to be more attuned to nature. And I do think there's actually going to be new technology that allows us to unplug,
that allows us to have all of the benefits
of interconnectivity,
but also a lot more connection with each other
once we evolve past screens, let's say.
Well, let's end where we started.
You know who gets to unplug?
The religious Jews.
So really, there is something, there is a wisdom in this.
The Sabbath?
In the Sabbath.
Almost as if it saw this coming.
Because the game theory of it all is that I just can't not answer my texts because people say, why the fuck isn't he answering me?
They don't have that problem in the Orthodox community.
Everybody knows it's Shabbat.
He's not going to be answering texts.
And this must be
wonderful.
Like the purge.
You could join that community, I suppose.
You have the beard coming in.
I may do it just to pretend.
I don't know.
You tell everybody, I'm Shabbat.
Just so I can go home and watch TV.
No, but that's great.
I think that's a great, I think we should totally do that.
I mean, it's all just cultural institutions.
Like we really should.
I mean, I went backpacking this weekend and I just like, I told everybody I was going
to be gone and everyone understands that, you know, like it just.
The whole weekend in the woods.
Yeah, it was awesome.
Oh my God.
There was a rainbow.
It was fantastic.
Just, you can unplug.
I give you permission.
It's very hard for me.
I often say that the only, the two most peaceful times in my adult life have been COVID and
9-11.
Oh my God.
Because those are the times like it happened.
Yeah.
Business was closed.
Yeah, no, I get it.
Nothing I can do about it.
Because we own a business.
I mean, I keep a landline by my bed.
And it rings a few times a year because of an emergency.
Is it red?
In the middle of the night.
It's not red, but like.
Gorbachev's on the other side.
Yeah.
You never totally tune out when you own a business.
Yeah, totally.
Especially this kind of business, open at night.
Yeah.
Oh, my God.
As opposed to someone who does another job, they clock out, and then they're out.
Yeah.
I have one thing to say about Noam's point about technology.
Oh, our technological dreams have come true or whatever.
I think there is one technology where we're done.
We're done.
It's finished.
That's music.
There's nowhere to go.
We've got every song available instantly. There's nowhere to go.
What's the next breakthrough in music? I mean,
unless it's writing new music. But in terms of, we've gone from, you know,
LPs to CDs to, you know... I'm saying listening to music.
Oh no, digital's here to stay.
But I'm saying there's nowhere to go with...
Streaming is the final frontier.
Streaming is the final, yes.
That's it, we're done.
I mean, there's a couple of cool things.
Have you ever had those belts where you can feel the bass?
They were first developed for deaf people,
but you can now have experiences
where you can feel the bass inside your body.
It's pretty cool.
It's still streaming, but it's a speaker.
Yeah.
All right.
Well, maybe that's it.
So there's more to go.
Check it out.
But there's very little juice left to squeeze out of that.
Well, something like that.
All right.
Already, we have to go.
We're having a tough time unplugging from this podcast, might I add.
The difference between a 1080, like an HD television picture,
and a 4K picture
you can tell
but it's not nearly the same
bang as between
HD and
standard definition.
Now they're having 8K
TVs. I'm like really? I guess
they can't resist. I worked at
I can't imagine it's that different. One of my
clients with Sphere in Las Vegas,
like the big Sphere.
I went there, yeah.
Yeah, it was really, really cool.
I worked on the robots in the lobby.
Oh, that's awesome.
It's really cool.
But what's cool about it,
and one of the things I programmed them to talk about
is that Sphere is actually a huge advance in technology
that's bringing us together.
Because it can't be experienced
in VR.
You're experiencing it with 20,000 people in a room. So technology is not a straight line, just making us more divided and atomized and disconnected. Sometimes it actually can create more connectivity
and more interpersonal connectivity. Well, yeah, I think a lot of technology has the internet has.
Yeah. Talk about connectivity.
It's divided us as much as it's brought us together at the same time.
Yeah.
Well, it's having a terrible effect on...
I mean,
it's having a
negative effect on free speech in its own weird
way where people...
the negatives. I'm like a pretty
optimistic person. It's just
the negative... We have loss aversion. So like
the negatives hurt so much more than the positives feel.
We take the positives for granted.
And then the negatives are the things that we can only focus on.
I agree with you a thousand percent.
I'm observing the negatives.
I'm not.
Has that ever happened in the history of this podcast?
I agree with somebody.
No, it's the first time.
It's the first time.
Quite a bit.
Quite a bit.
Quite a bit.
But it is a daunting problem we have.
They used to tell us that sunlight was the best disinfectant,
meaning that the truth would out.
But actually what we're seeing now is that people are more attracted
to what they want to hear than they are even to the truth.
And people are resorting to their bubbles.
The truth is not outing. People don't want to know the truth. And somehow, though, we always had
free speech, but we had gatekeepers, which corralled the conversation into one pen, which
then did somehow force us to kind of see this battle and decide what was true.
Yeah.
But now that doesn't exist anymore. They're over there. They're over there.
And I don't know how this all.
Here's a great concluding thought.
Finishes, right?
Here's a great concluding thought.
I'm getting hungry.
That's why I fucking love this podcast. I really, I am like such a fan because the
conversations you facilitate on here are the perfect antidote for the kinds of conversations we have online.
Your most recent episode with Jesse Singal and the other guy, Russ, whatever.
Russ Barkin.
That was a perfect example of like.
She likes the episodes I'm not on.
Yeah.
I was so happy about that episode.
I thought that was fantastic.
I love those guys.
I think it was just like,
and you facilitate conversations like that all the time.
And so I think all the problems you're identifying,
you're the one helping to solve them.
That's so nice of you.
I'm a fan.
Did you notice that the adjective
of what I wanted to say about Judaism
did not come to my mind?
You know, as you get older,
I mean, I can remember being 16 and not being able to remember Lola Falana's name.
I just remember that for some reason.
She was a 70s, let's say, Lena Horne.
No, not Lena Horne, Lola Falana.
But of course, when you're 16, it never occurred to me that this could be a sign of cognitive decline.
That happens to me all the time.
Right.
It's a constant through your life.
But then you get to a certain age
and you can't remember somebody's name.
I couldn't come up with just the right adjective
to describe the worthiness or the...
I still can't.
The adjective still doesn't come to my mind.
You know what you could use to solve this?
ChatGPT.
I do all the time.
It's so good.
Is that the source?
And it scares the fucking shit out of me.
And literally during this whole conversation, it's been like, would I?
Describe the word.
Let's see if we can troubleshoot this.
I just wanted to say that in general, it's a culture which has proven itself to be beneficial.
Maybe that's it.
It's a culture which is clearly beneficial to the people
who are members of it.
You can-
From like a utilitarian standpoint?
Yeah, they're successful and they're smart
and they do well.
And even when they, well, except they are a bit much
and people tend to want to oppress them, but-
Yeah, do well.
But, you know, if you look at those trite statistics
about Jews are 0.3% of the population and 25% of the Nobel Prizes. My family forwards me the emails.
Yeah. These are actually real things, right? Yeah. So like I say, objectively, you can make
the case. This is clearly a culture that's beneficial to us. And you could do similar
things with other cultures. It's just, like, essentially, I think
it's, in the best way possible,
it's a pro-life culture.
It celebrates the good things
in life, and that's, that's pro-life.
But, but very,
very also pro-choice.
Yes, but I couldn't,
the word didn't come to me to describe it.
So I said, it's very good, you know.
Good works, too.
Yeah, good works.
And I do think, and by the way, as you get older,
you do, you have a latency.
So you're slower.
There's very few, this is not an original observation,
there's very few Jeopardy champions who are in their 60s.
Yeah.
Speed is a really big thing there.
But there's a very good quote
about intelligence versus wisdom.
So intelligence is knowing
that a tomato is a fruit,
but wisdom is knowing
not to put it in a fruit salad.
Ah.
Ah.
Aha.
All right.
Sarah Siskind or Siskind.
Sarah Rose Siskind.
Sarah Rose Siskind.
Yeah, one of those.
Yeah, Sarah Rose Siskind.
Do you always use the Rose?
I do because there's a country singer named Sarah Siskind taking up all my name real estate.
Oh, I didn't know that.
Is it a Jewish country singer?
I guess so.
Yeah.
But if I don't call her and say, hey, Sarah Rose, it's only when you're saying the name.
Yeah, it's just for the, yeah.
I want to sound like I shot a president, you know, Lee Harvey Oswald.
Yeah, no, that's usually.
Candace Owens calls him Lee Oswald Harvey.
That's what she says.
She goes, Lee Oswald Harvey, who was shot by Jacob Rubenstein.
That's what she says.
Yeah.
All right.
Well, he was shot by Jacob Rubenstein.
Don't get me started on Candace Owens.
Okay.
Sarah Rose Siskind.
We're very happy to have you.
This is a great conversation.
We will have you again.
I'd love to have you.
I would love that.
Maybe we'll have Nick on, too.
Can we have the robot on?
Can we bring the robot?
The robot?
Oh, my God.
The robot.
I love how you say that.
Dan's good at that stuff.
Yes.
And you guys have a marital problem.
Yeah.
I'd very much like to.
Well, we'll bring Juanita on, too, and we'll
hash it out together.
Yeah, Juanita, she's
mad. Has she been on the podcast?
Yeah, she has a couple times, but not as a guest.
Nice. Listen, my wife is awesome.
She's so cool. My wife is awesome.
Okay, good night, everybody.
Thank you.