The Comedy Cellar: Live from the Table - Jason Crawford
Episode Date: February 24, 2023
A former software engineering manager, Jason Crawford is the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. In this episode we talk self-driving cars, Amazon and the endless possibilities of AI.
Transcript
Good evening, everybody. Welcome to Live from the Table, here in beautiful Ixtapa, Mexico.
My name is Noam Dworman. I'm the owner of The Comedy Cellar.
Dan Natterman has canceled at the last minute, which brings me ever closer to throwing in the towel on this podcast once and for all.
So maybe this guy will be our last guest. I don't know. I'm joined by Periel Aschenbrand.
Periel, would you like to introduce yourself?
Yes, hello.
Welcome to the show.
I don't know if he canceled at the last minute or if there was a miscommunication.
I mean, he didn't tell me he wasn't making it, or that wasn't clear to me until this morning.
Would you agree with the following?
What?
Would you agree with the following, that these are not the habits of successful organizations?
Well, I mean, to be fair, Noam.
We don't need to be fair.
Well, you didn't say, oh, by the way, I'm in Mexico, and so I'll be joining via satellite.
That has nothing to do with what I'm talking about.
I'm here.
Well, that's true.
And you know who else is here?
Jason Crawford.
Jason Crawford is here.
Hi, Jason.
Hello.
Thank you so much for joining. Will you do the honors of giving him a good introduction up to Dan's standards?
I absolutely will.
Okay, go ahead.
Jason Crawford is the founder of The Roots of Progress,
a nonprofit dedicated to establishing a new philosophy of progress for the 21st century.
He writes and speaks about the history and philosophy of progress,
especially in technology and industry.
He's also the creator of Progress Studies for Young Scholars,
which is an online learning program about the history of technology for high schoolers.
And he was formerly a software engineering manager and tech startup founder.
He's also a writer.
So welcome to the show, Jason Crawford.
Thank you so much for joining us.
Thanks so much for having me on.
Great to be here.
How are you doing? So you first came across my brain when Coleman Hughes, you know who Coleman Hughes is?
I know of Coleman, yes.
Yeah. He sent me a tweet of yours, which I'm looking at now. It has 1.3 million views. I wonder if you had any inkling that you would get that kind of attention for this short, direct tweet. But it said, did any sci-fi predict that when AI arrived, it would be unreliable, often illogical, and frequently bullshitting? And I think you hit the nail exactly on the head there, for anybody who's, you know, taken some time with ChatGPT and, kind of like Captain Kirk, questioned it and tried to get it to contradict itself. It says ridiculous things. It says things about me that weren't true at all. But anyway, let's just start there. You don't have to limit yourself to that tweet. What's your current take on all this ChatGPT AI stuff?
So I just published an essay on this this afternoon,
and the title of the essay was Can Submarines Swim?
So there's this famous quote from a computer scientist from decades ago
where he said that the question of whether machines can think
is about as relevant as whether submarines can swim.
And so you think about it, right? A submarine gets through the water, but it doesn't literally swim in the sense of flapping some fins or something. And there's a lot of other examples like this, right?
So an automobile does not gallop like a horse. An LED does not burn like a candle. A camera doesn't
draw or paint. A telephone doesn't speak. You know, an airplane flies, but not by flapping its wings.
Even a washing machine or a dishwasher, right, gets things clean, but not by scrubbing with arms
the way a human would do it. So we build all these machines. And very often they do things
that previously no machine was doing that maybe only a human or an animal was doing. But they
often do them in a very different way, right? It doesn't just look like repeating the exact same kind of motions. And so I think that's what's going on with AI right
now, that, you know, it's doing things that previously we thought that only humans could do,
like coming up with poetry or fiction, or, you know, or just more generally, you know, writing
prose. But I think it's doing it in a way
that's very different from the way that a human does it.
And so I think there's kind of two basic mistakes
you can make in thinking about this.
One is to like anthropomorphize it
and to think that, wow, it talked to me,
it made a sentence and therefore it's thinking,
it has feelings, it has wants and desires,
it has a hidden agenda,
it's gonna take over the world or whatever, right?
And then there's another mistake you can make, which is kind of the opposite mistake you hear
in the debate going on right now, which is that, well, if it's not a human, if it's just a program,
or, you know, somebody said, oh, it's like taking the internet and putting it in a blender.
And then it just spits out these random, oh, it's just a statistical, you know, word processor or something like that. And then you can think, well, if it's
just one of those things, then it'll never do X, Y, Z things that people can do. And it used to be
that people thought machines would never play chess. And then they thought, you know, well,
maybe machines will never, you know, write fiction or create works of visual art. And those things are, well, computers have been
playing chess for decades. And now we're seeing that they can write prose and they can create art
and there's even AI creating music right now. And so I think we just have to continually be
questioning ourselves about like, okay, well, here's a thing that we thought that only a human
could do through our wetware.
Can this thing actually be reduced to math and logic in some way that maybe we didn't expect and it took some genius to invent? But maybe it can actually be done. Maybe computers can do it,
even if they do it in a totally different way. I'm muted because my kids are making a lot of
noise. They're decidedly human. They don't listen. So I'll do the second one first. So
you talk about AI writing prose, writing music. I've heard some of this music. I think it was a
Google AI that did. It's pretty remarkable. I suggest everybody look it up and listen to it.
But I'm still trying to understand whether AI is really writing prose and really
writing music, or is it just disassembling all the prose that's already been written
and reassembling it in some novel way that makes it look like it's writing. I mean, I suppose it's writing, but, you know, is it really creating prose?
Similarly with music,
you know, it's one thing to be able to learn music
and then spit it back in some way.
It's another thing to be able to like hear a sound
or a rhythm of raindrops
and then be inspired to create a new musical genre,
basically. Like, would it ever get to that? So that's not a direct question, but you see where I'm going with all this.
Yeah, totally. I mean, so you ask, is it really writing prose, or is it just sort of spitting words out? Well, you know, again...
Spitting words out, that is, spitting out new forms of pre-written words.
Right. No, it's not like it's just spitting words out randomly, but it's sort of learning from other things that were written, and probabilities of what word follows what word, and just speaking that out.
Sorry to interrupt you.
Go ahead.
Yeah, yeah.
I mean, what's funny,
you know, there's that meme,
I think it's from the movie I, Robot, right?
Where the human says,
you know, to the robot,
like, can you write a symphony?
Can you, you know,
can you do all these things that only humans can do? And the robot says, can you? Right? So maybe the robot is only spitting out, you know, words; it sort of ran a bunch of stuff from the internet and then kind of mishmashed it in a different way. I mean, you could argue a lot of human writing is like that. Yeah, I think the important thing to realize is, like, yeah, it might be a different process.
And there's some things that are absolutely very different about what it's doing.
But the outcomes it's generating are really quite remarkable, right?
I mean, if you had asked me a few years ago whether this kind of thing could ever be done by any kind of computer, ever, I would have told you, I don't know, maybe not. And now here it is, right in front of us. So here are some ways that it's obviously not the same, though. These chatbots have no sensory experience of the world, right? Their entire, quote-unquote, understanding, and I think it does anthropomorphize it a bit to call it understanding, but I'll just use that word. Their entire understanding of the
world is through words. So they understand associations between words. They don't understand
associations between words and any sight, sound, or anything that we would consider to be sensory
experience, right? In fact, they have no direct contact with reality, right?
They've never gotten out and seen anything for themselves.
They've never had any experience, right?
They've literally just been processing words.
It's really amazing what you can do just processing words, right?
And I think we've all, the world has learned a lesson about what can be,
what outcomes can be accomplished through just processing words and finding statistical correlations.
That is an amazing discovery in itself.
But yeah, it obviously doesn't understand what an apple is the way that you do, or a sunrise, or, you know, a symphony or anything like that, because it hasn't had the sensory experience. I mean, the other thing is, of course, in no way is there any attempt to make the words match reality, right? So it has no way to be truthful. In fact, one thing I said in this essay is that these chatbots are really bullshitters, in the technical philosophical sense of that term.
There's an essay by Harry Frankfurt called On Bullshit, where he says, like, the key thing about bullshit is that it's different from a lie, because a liar knows what the truth is.
And he's trying to convey something that he knows to be false.
He wants you to believe a false thing.
The bullshitter doesn't care what's true or false.
He just wants to say things that you'll believe or accept. So he's just sort of
picking things out to suit his purpose and it doesn't matter to him whether they're true or
false. And so I think that's kind of how these language models and these chatbots are acting.
There's nothing built into them that gets at truth or falsehood, or has any kind of, you know, direction at that. What they do is they come up with kind of statistically likely words to follow other words. But again, I think we could go wrong by saying, well, if that's all they're doing, then they will never be able to... I think all bets are off at this point. Will an AI of this ilk be able to create a best-selling novel that everyone thinks is totally original and mind-blowing and an emotionally moving experience? I would not want to bet against that at this point. I mean, maybe not. But our minds have already been blown so much in just the past couple of years. I think it would be prudent to not make any hard calls and just sit back and watch and see what's about to happen.
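To make the "statistically likely words to follow other words" idea a little more concrete, here is a minimal toy sketch in Python. It only counts which word follows which in a tiny sample text and then samples the next word in proportion to those counts; real systems like ChatGPT learn vastly richer statistics with neural networks, so treat this purely as an intuition pump, not a description of how those systems are actually built.

# Toy illustration of "statistically likely words to follow other words":
# count word-to-word transitions in a sample text, then generate by sampling
# each next word in proportion to how often it followed the previous one.
# A bigram sketch for intuition only; it is not how ChatGPT is implemented.

import random
from collections import defaultdict, Counter

sample_text = (
    "the submarine moves through the water but the submarine does not swim "
    "the airplane flies but the airplane does not flap its wings"
)

# Build transition counts: which word tends to follow which.
transitions = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word][next_word] += 1

def generate(start, length=10):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        choices, counts = zip(*followers.items())
        out.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(out)

print(generate("the"))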
So a couple of things come to mind when you're saying all this stuff.
So part of it I'm just wondering is that the technology is so beyond our ability to comprehend.
I wonder if that distorts it to us.
So like, you know, you hear these stories about some,
they bring a Polaroid camera to some place
that has never seen modern technology
and people are afraid it steals their souls
and they can't believe how it works.
Or even something as like,
people who understand how an airplane works,
and I know the laws of aerodynamics are not that complicated, apparently. They get it, and it's almost like, yeah, of course it flies. But for me, who really doesn't understand that, I still am in awe every time I see an airplane lift off. And this is so many orders more sophisticated than flight. I'm just wondering if, to the people who understand how this works, they're like, yeah, of course. These people are in awe of it, but it's not what they think it is. They're attributing a mystical quality to it. And I'll add to that, there are these articles recently about this Bing chatbot that was saying things like, or analogous to, I'm depressed. And people are troubled by it. I'm like, but it's just words. It's just a computer spitting out words. And now, because it spits out the words, I'm sad or I'm bored, you really believe that somebody is sad or bored. But they do believe it. It's somehow primitive. So something to that analogy is correct, right?
Part of what we're doing here is just that it's so beyond our ability to understand this.
We are those isolated people on an island somewhere seeing a Polaroid camera, you know, and we just can't believe it, right?
Yeah.
I mean, I think it's an extremely human tendency to anthropomorphize things, right? And when you see something doing something, literally talking to you, I mean, it's just very natural. But yeah, I think you have to sort of resist that temptation.
And I'll come at it from the other side. I'm sorry, I'll let you finish. From the other side: creativity, to me anyway, this is my theory of it, is somehow somebody injecting some randomness into what's come before and coming out with his own take on it. And computers can generate random things. I could imagine, in music or in literature, with some interjection of something random, it could spit out all sorts of types of music we've never heard before, and then maybe one out of a thousand, or one out of ten thousand, or one out of a million of them would be very, very pleasing to humans. I don't know that the computer could predict what would interest a human listener.
I mean, maybe with enough feedback, right?
If you started putting things out on the internet and people started rating them, and then it could build a model of which ones, right?
I mean, so that's one of the things.
These systems can be built where, you know, you build one artificial intelligence to generate things. You can build a second artificial intelligence to rank them,
and then you can run them in a loop where the generating one keeps sort of putting things out there,
getting scored by the evaluator, and then it just learns to get through many, many iterations of that,
which, of course, the programs can do way faster than we can intervene.
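As a rough sketch of the generate-and-score loop Jason describes, here is a toy outline in Python. The generator and the evaluator below are invented stand-ins (random little melodies and a made-up "pleasingness" score); in a real system each would be a separately trained model, with the evaluator itself learned from human ratings.

# Toy sketch of the loop described above: one piece generates candidates,
# a second piece scores them, and the best candidate found so far is kept.
# Both functions are invented stand-ins; in practice each would be its own
# trained AI model, and the scorer would be trained on human feedback.

import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_candidate(length=8):
    """Stand-in generator: propose a random little melody."""
    return [random.choice(NOTES) for _ in range(length)]

def score(melody):
    """Stand-in evaluator: pretend listeners prefer small steps between notes."""
    jumps = [abs(NOTES.index(a) - NOTES.index(b)) for a, b in zip(melody, melody[1:])]
    return -sum(jumps)  # higher (less negative) counts as more pleasing

best = generate_candidate()
for _ in range(10_000):  # the loop runs far faster than any human could curate
    candidate = generate_candidate()
    if score(candidate) > score(best):
        best = candidate

print("best melody found:", " ".join(best), "score:", score(best))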
Right now, of course, things are not quite that good. And so a lot of the best outputs from AI are really being created together with humans in a loop, where the human is doing a lot of selecting and steering and, you know, choosing the best outputs out of a bunch of different things.
And there's a lot of back and forth.
I don't know if it will always be that way.
For a while in chess, it was like that as well,
where a human and a computer working together
could beat any human,
and they could also beat any computer alone.
I think we've gotten past that point,
and now the computers are so good,
they don't need us, essentially.
Well, did you read about this? An average, kind of professional-grade Go player beat the supposedly unbeatable AI that plays the game Go by doing something really dumb that the AI would never predict anybody would do.
So that to me was very Star Trek.
Anyway, Periel, go ahead.
Well, I was just going to say that I think, you know, with the great novels of humankind, part of what makes them so moving is that they're nuanced and they're rooted in human experience, right? I mean, it seems that a bot, for lack of a more sophisticated term perhaps, could never achieve that, by the sheer fact that that's the one experience that it can't have. I mean, something like chess, it seems like you can understand how, with all of those sort of infinite or almost infinite possibilities, they're within a framework.
Right, you're saying that emotion is different than simply, you know, a certain number of moves you can make. And like you said about music, it's like, well, maybe out of a million or ten million it would accidentally hit all of those notes. But Jason made a good point, and then it would begin to learn from its accidents, what worked, and then refine that. Maybe at the same time there are advances in neurology. I mean, who knows. I can only imagine 500 years from now, if the planet is still functioning.
Yeah, I mean, even 50 years from now. I think we've hit a point where AI is moving extremely rapidly.
The progress of the last just four or five years has been amazing.
And I think it's very, very hard to predict what the next few years are going to hold.
And Periel, you're right that a machine doesn't have that experience, but it has something else, right?
So again, I just wouldn't assume that because the machine doesn't have or can't do what we do,
that it couldn't create some outcome that we think is necessary for that, right?
Maybe it doesn't have the experience, but maybe somehow by reading everything humans have ever written, which no human can do, right?
Then maybe it will come up with something amazing, right?
So it can do things that we can't, just like we can do things that it can't.
And who knows which things are necessary and which are sufficient to create some amazing output.
I already think, and I had arguments five, six years ago with friends of mine who got mad, that I'd already prefer to be diagnosed by computers rather than humans. And I mean, my theory is that it used to be that the doctor was the guy who had basically the best memory in the town. He could hold the most things at once. And you're maybe a little too young, but there was a time they would examine you, they'd go into the back, and they'd pull out these books that only they had access to, you know, which were very, very expensive, and they would diagnose you to the best of their ability. But now, obviously, it's completely different. The worst computer exceeds the doctor's memory infinitely. The computer can spit out a probability table of every single thing that these symptoms could mean, and never forget a single one, and then, you know, run through them one by one to narrow it down. This is how doctors go wrong all the time. The doctors forget, or they get older. I mean, there's a million different ways. I'd prefer to be diagnosed, or at least never be diagnosed without the use of a computer, with somebody
putting every single piece of data
they have about me into a computer
and saying to the doctor,
did you think of this?
Did you think of that?
We had experiences in our own family
where we had to go doctor to doctor to doctor
before they were able to diagnose
something relatively simple.
You know, they just didn't think of it.
And when we told them,
like slapping their forehead,
oh, yeah, of course, you know.
I had something one time called Ramsay Hunt syndrome,
which looks like Bell's palsy.
And the doctor told me it was an ear infection.
And the next day I was in the hospital and I called the doctor.
And I said, you know, and I told him I didn't think it was an ear infection.
I said, listen, Dr. Kaufman, I don't have an ear infection.
It's like I told you, it's something else. And my whole face is paralyzed. And he says, oh,
fascinating. That was his answer. Fascinating. I'll never forget that. You never want to hear
that from a doctor. Fascinating. You never want to hear that you're an interesting case.
But of course, if he had a computer, he would have simply said, well, it could be an ear infection, but there's a chance it could be these other three things, so let me ask the questions. He didn't do that. Anyway, I'm talking a lot. What's your take on all that?
No, that's a good point. I mean, so we're far from the point where the computer can diagnose you all by itself. That actually is something that I would expect to require high-resolution sensory experience in multiple modalities. But I do think that what you just suggested, which is coming up with ideas... So the other reason I wouldn't want to be diagnosed by computer right now, without any human intervention, is that these chatbots, and, you know, the state of the art of these language models, they can actually be creative, but they're not always logical, and they're definitely not reliable with facts. This is the great irony. Sci-fi for decades kind of portrayed machine intelligence, like, think of Data from Star Trek, right? Supremely logical, extremely intelligent, but he doesn't have any emotions. And now we actually get the AI, and it's, like, messing up basic facts, and it can't make basic logical inferences. But it has this wild imagination.
In fact, you might have thought that the order things would get automated in would be, we'd start with, like, basic manual labor.
And then we would automate the more sophisticated white collar jobs, office jobs. And then finally, the last thing to get automated
would be art and poetry
and music, if that could ever be automated by
computers. And it seems to be happening in reverse.
It turns out that the poetry
and the art are some of the
first things that we can do.
And robots still seem very far
away.
Because poetry and art can't be proven to be a mistake or wrong, you know?
Right, there's no strictly wrong answers, right? But what we can do, with humans and computers working together, is exactly the kind of thing you suggested, where the computer comes up with a bunch of possibilities and uses a sort of more active imagination, or, you know, it's read more of the literature. So the computer can read all of the
medical literature, and then it can come up with a very creative list of possibilities for what
matches these symptoms. And then the doctor can use a sort of filtering human intelligence to say,
no, not that, not that. Oh, I didn't think of that. Let's run this test. That could actually
be a very profitable way for a human and computer to work together
for something like diagnosis
at the current stage of AI development.
Why aren't they already doing that?
That seems like so...
Ego, ego.
Ego, right.
That's what I was going to say
when you said that to the doctor.
Like the last thing a doctor ever wants to hear
is a patient being like, what about this?
By the way, Periel, studies have shown that male doctors are more reluctant to do it than
female doctors. I'm sure that's true. I know you're sure. I made that up, but I knew you would.
Go ahead, Jason. Yeah, no, I mean, you say, why aren't they doing it already? Well, I mean,
these things take a while to build and develop and get deployed, right?
We've just seen in the last few months, right, ChatGPT was released and kind of the wider world
woke up to how amazing these things can be. And so now everybody's imagination is going, but
it takes years for someone to then say, hey, I think this could get turned into this kind of
product and figure out how to do it and then go quit their job and then do a startup and then
raise some money and then hire some engineers and then build a thing
and then get it tested, and then get it through regulations, right? Because anything that's going to do any kind of medical diagnosis would probably get regulated by the FDA as a medical device, and you'd have to go through years of testing. So, you know, these things always... it's like, you know, why didn't... We figured out some of the basic laws of electricity in, like, the 1830s.
And we didn't have light bulbs and generators and an electrical system until the 1880s.
So that took like 50 years, right?
Like, why does it take so long for these things?
Well, it's just a natural process, you know, from kind of the science to the invention to the demonstration, the prototype, the viable business, the deployment.
Unless it's a COVID vaccine.
Yeah, that's true. Well, so COVID is a great example, I'd say, of how fast the world can solve a problem when it becomes the world's number one problem, and when you don't just have, like, one random inventor or a small number of inventors working out of their garage, and nobody believes it's ever going to come to anything, but you have literally hundreds of parallel efforts. Every lab that could possibly be working on the problem is working on the problem. You know, sure, we got like two vaccines within the first year, maybe four or five, but there were literally over 200, something like 250, I think, vaccine efforts all going on in
parallel around the world. And then another 300 efforts to do therapies, you know, drugs that
cure it. And so, yeah, when you have 500 different projects all going on at once,
a few of them will succeed very quickly. But it turned out it was ivermectin all along.
Staring us right in the face the entire time.
The one that we have still doesn't work that well.
It works pretty well.
I mean... ish.
Well, let's not get sidetracked by that. But I mean, we're back to normal, aren't we? I mean, would we be back to normal without it? That's pretty good. Pretty effectual.
So... now I lost my train of thought.
Oh, so one of the things about ChatGPT and all the bots right now: they are locked down. I imagine it's very, very good, what's the word, preemptive measures. They are locked down. They will not answer any question that might have uncomfortable truths to it regarding religion, race, sexuality, all that stuff. But I'm pretty sure that there are some things people say which will turn out to be true, that will disappoint us, in terms of the fact that the universe was unkind and unfair. And at some point, I suppose, AI will have to be unchained to answer truthfully all the questions put to it. What are your thoughts on all that? Do you think people want to hear the truth?
If it tells them that they're smarter and more beautiful, yes.
No, no, no. That was a quip, but I think you're fundamentally right. So one thing to understand about all these chatbots is that they are liable to say stupid things, false things, harmful things, toxic things, definitely things that you don't want, you know, the reporters printing in the newspaper. So once the core language model has been created, there is another process to refine them
and to train them and nudge them in the direction
of being nice and friendly and helpful and truthful
and not rude and not offensive and so forth.
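As a very loose sketch of that two-stage idea, a core model first and then a refining pass that nudges its outputs, here is a toy Python illustration. The word-count "model" and the penalty on disliked words are invented for illustration only; the real refinement process uses human feedback and reinforcement learning at enormous scale.

# Toy sketch: build a "core model" from raw word-follow-word counts, then apply
# a refining pass that downweights continuations flagged by (pretend) feedback.
# An invented stand-in for the real refinement techniques, for intuition only.

from collections import defaultdict, Counter

text = "the bot is rude the bot is helpful the bot is helpful and friendly"
words = text.split()

core_model = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    core_model[a][b] += 1

disliked = {"rude"}  # pretend human feedback about undesirable continuations

def refined_weights(prev_word):
    """Keep the core model's statistics, but shrink the disliked options."""
    weights = {word: float(count) for word, count in core_model[prev_word].items()}
    for word in weights:
        if word in disliked:
            weights[word] *= 0.1  # discourage, rather than erase, the option
    return weights

print("core model after 'is':", dict(core_model["is"]))
print("refined weights after 'is':", refined_weights("is"))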
And so, in fact, one of the hypotheses that I've heard,
the most compelling hypothesis I've heard for, why is Bing's AI, you know, Sydney,
why is Sydney so relatively unhinged in saying these weird, crazy things,
like compared to ChatGPT, which is better,
or there's other ones that are still in beta.
But, like, one of the hypotheses is that its refining process was sort of done in an inferior and maybe hasty way.
And so, you know, and so we're getting something that's a little closer to just kind of like a core raw language model.
But, you know, OpenAI has already announced that they intend to make something closer to the raw model available to people who want to do their own refining. They've basically already said, yeah, there's going to be a bunch of different language models out there, or AIs or chatbots or whatever you want to say, and different people will refine them for different reasons, right? And I think some of these will be, like, somebody wants to just train it for a specialized purpose. Like, a large company might want to train an AI on all of its internal documents so that people could ask the AI questions about what's going on within the company. That's a mundane kind of business purpose. But other people will want to train it to have their own ideology or religion, or, you know, espouse their own ideas or proselytize for them. Okay, so they don't like that ChatGPT, you know, has a little bit of a left-wing bias, and it won't say anything nice about Trump, but maybe you can get it to praise Obama. And, you know, somebody's going to say, oh, I want something right-wing. Somebody's already trained, like, a right-wing version of ChatGPT.
So, yeah, you can do this. Right. And people will.
And I think I think the future is not one enormous brain that we all talk to.
It's rather a profusion of, you know, I don't know, thousands, millions, billions of AIs, each one trained with a different personality. And you pick the one that has the personality that is most interesting or useful to you at any given time. Is it a matter of time before they start getting
canceled? Sort of, right? I mean, Bing already got canceled, right? And they say it's been lobotomized. I'm not in the Bing beta yet myself,
so I haven't gotten to play with it,
but I just read the headlines.
I mean, there's all sorts of fascinating issues related to, like, you know, self-driving cars. At some point, a self-driving car is going to have to make a split-second decision about what to do.
Essentially, when it comes upon
one of these philosophy hypotheticals about,
you know, there's three kids here and an old man there, whatever these things are.
And I don't know what, as a result of the democratic process, we're going to have to decide what ethics these AIs are going to use when they have to make split-second life-and-death decisions.
That's going to be a hell of a national conversation, right?
Maybe.
I sort of suspect
that that kind of problem is overblown.
But, yeah, I mean,
it might exist.
I'm sorry. It's overblown in the sense that these things will happen very rarely. But it won't be overblown in the way humans react to them. One person on planet Earth that gets killed, rather than the kid or whatever it is, that's going to be all we talk about.
yeah, that's for sure, that's absolutely
that's the nature of the news, that's the nature of the
headlines and the
nature of press and the way people
talk about things.
I am optimistic that self-driving cars will greatly reduce road fatalities, right?
Wow.
Oh, absolutely, right?
I mean, first off, these self-driving cars are going through an enormous amount of testing before they ever get put on the road, most of it in simulation, right?
I've driven in one of them. I mean, I've been in a car with no human driver that took me through the streets of San Francisco, and frankly, it was almost boring, because it was just driving. It was just going forward, and it stopped at the stop signs, and it signaled its turns, and it did all these... You know, I've had Uber and Lyft rides that were much more harrowing
than anything you're going to have
in a self-driving car.
And look, there's...
Go ahead. Sorry. Finish.
I mean, there's 50,000...
I've never seen one of these.
You sit in the back.
And what's in the front? Nothing?
Nobody. I think there was a steering wheel
that was moving itself.
Oh, my God.
People would prefer to see a robot there with his hands on the steering wheel.
No, really, she would.
So, yeah, I mean, there's no question that self-driving cars, I mean, first of all, they won't miscalculate anything in terms of the trajectory to get someplace before somebody else.
They won't be distracted.
They don't get nervous. They don't get tired. They don't drink.
They don't get angry.
They don't have ego. They don't have road rage.
There's so many
failure modes
of humans.
Oh, Periel, look how fast I can make this Tesla go.
Yeah, they won't be trying to get laid.
But here is the interesting question.
How much safer do they need to be in order for people to accept them?
Meaning like if they were just like 1% safer than humans,
the stories about tens of thousands of people dying in self-driving cars would make them totally untenable, even if everybody could internalize the fact that, on the whole, they were actually safer. You follow me? We wouldn't accept it. What would the ratio of human deaths to automated-driving deaths need to be before you think society will accept them?
Yeah, I don't know. I mean, look, they're already rolling out, right? There are cars in, I think, at least three major cities that are, you know, taking people around without drivers. And I think the companies that are building them, like Waymo and Cruise, have done a pretty good job of PR, and they've done a pretty good job of just, like, the actual safety testing. I mean, part of the challenge of proving that a self-driving car is safer than humans is that the fatality rate for driving in the U.S. is something like one per hundred million vehicle miles. So if your cars have driven 99 million vehicle miles with zero deaths, you still haven't proved that they're safer than humans. So we're just going to have to, you know, put them on the road, be as careful as we can about it, and test them out and try them out.
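To put a number on that statistical point, here is a small back-of-the-envelope sketch in Python using the standard "rule of three": with zero events observed over some exposure, the 95% upper confidence bound on the underlying rate is roughly 3 divided by that exposure. The mileage figures below are illustrative assumptions; the only number taken from the conversation is the roughly one-fatality-per-hundred-million-mile human baseline.

# Back-of-the-envelope check of the "99 million miles isn't proof" point.
# Rule of three: after m fatality-free miles, the 95% upper confidence bound
# on the fatality rate is about 3/m. The mileages below are just examples.

HUMAN_RATE = 1 / 100_000_000  # ~1 fatality per 100 million vehicle miles

def zero_event_upper_bound(miles):
    """95% upper bound on the per-mile fatality rate after zero fatalities."""
    return 3.0 / miles

for miles in (99e6, 300e6, 1e9):
    bound = zero_event_upper_bound(miles)
    verdict = "bound below human rate" if bound < HUMAN_RATE else "not yet proven safer"
    print(f"{miles:,.0f} fatality-free miles -> upper bound {bound:.2e} per mile ({verdict})")

# It takes on the order of 300 million fatality-free miles before the bound
# even reaches the human baseline, which is why real-world deployment and
# careful testing are the only way to settle the question.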
And then the next step is, we will be forbidden from driving ourselves anymore.
That's going to be a hell of a thing, too.
I love to drive.
Yeah, and well, you'll still be able to drive.
You'll just do it on a closed track, right?
Yeah, like Rain Man, slow on the driveway.
But it'll be a hobby, right? We still ride horses, right?
And we still listen to vinyl, and we still light candles,
and we still knit socks and scarves and things.
We still do all these things that machines now do better.
We just do them.
We do them for fun and for hobbies.
So that's what it looks like?
Is that like that's like in a hundred years?
Remember when we had to drive ourselves places?
Totally. And then not only that, but yeah, we drove to places ourselves, and we used the GPS, you know, to take us right to the destination, and then we would drive around for 20 minutes looking for parking, right? Our grandkids are never going to believe that. They're going to be like, you did what? The car didn't just go park itself? You didn't just get out and the car drove off without you? Oh my God, that must've been so freaking annoying.
All right, let's hit a couple other things. You're kind of a futurist, obviously. Do you have a take on what year I would need to be born before I would likely live to be 200 years old?
Oh, man. I mean, I'm hoping there's someone alive today who will live to be 200. But I mean, these things are really unpredictable.
So I'm actually less of a futurist than I am a historian.
And I like to look to historical, you know, analogies.
But the big-picture view of history is that progress is not just growing exponentially. It's faster than exponential; the very speed of the exponential curve itself speeds up over time. And I think it's just extremely hard to predict the coming decades. The faster things go, the harder it is to see around the corner, or to see through the fog of the future.
Do we not
expect to hit some kind of law of diminishing
returns even on progress or no?
It's not infinite, right?
Or maybe it is.
I don't know.
It's not infinite.
But I think the diminishing returns, the limits, the only limits are the laws of physics.
The speed of light, the amount of matter and energy available to us within the galaxy.
And we are nowhere near those limits.
We still have many, many orders of magnitude to go.
But I was born in 1962.
I'm probably a little bit early for the rapid extension of life.
But you never know.
Maybe you'll be the first one.
I don't know.
All right.
On a total another matter.
First of all, he has jasoncrawford.org where he has a lot of interesting posts.
One of them, I don't want to put you on the spot because I don't know when you wrote these things.
You may not remember them.
But one of the things that speaks to me because I own a business was about Amazon and the culture at Amazon.
And I know people hate Amazon.
I don't know if you think that's fair or unfair.
Amazon's become like a bugaboo.
But I don't hate Amazon, although I don't like the idea
that people were peeing in bottles,
if that was actually true.
But in general, I think Amazon
is a remarkable, positive force in the world.
It certainly helped us during COVID and helps us in so many different ways. And of course,
as humans, we just take these things and put them in our pocket and that becomes a new baseline.
That's what we think. But, well, first of all, you feel the same way I do about Amazon? You're an Amazon supporter?
Yeah, totally. So for context, I used to work there.
That's why I wrote an essay about the culture there. So, you mentioned my personal site. These days I do most of my writing at The Roots of Progress, which is my progress blog. But that personal site is where I used to write. Before I was full time writing about history and technology and progress, I used to be in the tech industry. I spent almost 20 years in the tech industry. I was a startup founder, I was a software engineering manager, and so forth. And, yeah, at one point, a long, long time ago, I worked for Amazon for a few years. And so I got to see the culture there. So I love Amazon. And actually, I think they're one of the best-loved brands. I think they rate extremely highly on trust and, you know, brand
loyalty and things like that. So I think actually most people like Amazon, but sure. I mean,
they're a big successful company and they're the most successful, you know, online retailer.
And so of course there are some people who love to hate them and they're going to get
attacked in the press and so forth.
Anytime anybody sort of gets sufficiently big and successful, they're going to attract haters.
That always happens.
Well, the left hates Amazon.
I mean, AOC chased Amazon out of New York. One of the dumbest things I've ever seen a politician do.
Almost, you know... I can't even come up with a logic to defend that.
But they did it, so that has to speak for logic.
But anyway, one of the things that Amazon says, which speaks to me, it encourages everybody there to think as an owner.
And I'll read from you.
Young man, I'm on the podcast. Can you please...
That's my son.
And where is it? It says, essentially, that everybody that works for Amazon is supposed to think like an owner and make decisions for the customer the way an owner would. And I just add to that:
I've noticed as an owner of a company that unfortunately, the only person who really,
truly cares about the customers in his heart and soul is the owner. I'll even digress more.
So much of capitalism is analyzed in terms of having skin in the game financially, people risking money, whatever it is. But if you speak to entrepreneurs, it's emotional skin in the game that is often the first thing they talk about. I don't even follow how much money I make week to week. I need to know that, I need to follow it at some point, but if one customer writes me about being treated rudely, or was unhappy, or didn't like something, it cuts me like a knife. I can't explain to somebody how it affects me. I mean,
the closest you can analogize it to is when somebody says something horrible about one of
your kids behind their back, even. It would never affect them, but you just can't bear to hear it.
So this concept of thinking like an owner, well, it might be fanciful. I don't know if you can
really get employees to think like an owner.
That really is the key to having a good organization. And I've told all my employees
at various meetings when we give them all the rules, I said, but you know, you can break any
rule you want at any time if you can say you were doing it to make the customer happy. And as a
matter of fact, you're responsible to break any rule there is. The last thing I want to hear is like I was just following orders, you know, because these rules are guesses.
They're rules of thumb guesses at what the world is going to throw at you.
And likely we try to think of everything, but there's always going to be scenarios we didn't think of.
And you're going to have to analyze those situations from the customer's point of view.
So that's a mouthful. Tell us about Amazon and how they managed to do that.
Yeah. Ownership is strong in the culture there. I mean, when I was there, which was years ago, I was there like 2004 to 2007, so it's been a long time. But everything I've heard is that that ownership culture is still strong.
But ownership kind of meant two things to me when I was there.
One is that you act like an owner about your own tasks and responsibilities, which means there's basically no excuse for not getting something done.
If a team has a goal and they're dependent on some other team for the data they need, or the platform, or the access, or whatever, it's kind of not an accepted excuse to say, well, the other team didn't deliver, so we couldn't deliver. You were supposed to figure out some way to do it, whether that's get them to deliver the thing, work around them, go get it yourself, whatever. That was kind of the only thing that was respected.
But then the other thing is this thing you're talking about, which is so difficult to create in employees: really think about and act like an owner of the business, right?
And think about what's best for the business as a whole.
And ultimately, yeah, I mean, for the most part, that means what is best for the customer.
How do you foster that?
I mean, part of it is that you have to hire for it.
Part of it is that you advertise that that's who you are
so people who want that kind of environment
will come join you
and people who don't want that kind of responsibility
will shy away. Part of it
is to just recognize it in people when they do
it and reward them for it and hold them up
as examples.
It's ultimately the same as
any other question about company culture.
It's kind of like, how do you instill an idea?
But if you're the kind of person who does care about the customer,
then I think working at a place where you are encouraged to do that is extremely rewarding.
Right, but you care about the customer.
Go ahead, Periel. I'm sorry.
No, I was going to say, you care about the customer because you are the owner.
I mean, how do you get somebody
who has to pee in a bottle
and is getting paid 14 cents an hour
to care about...
Yeah, I mean, I can't really speak
to the fulfillment center employees.
I never worked in a fulfillment center.
I worked in headquarters.
I worked on the software, so...
You weren't peeing in bottles.
No. Well, they did not give us free soda, which a lot of other tech companies were getting. That was about the worst of it, so we sort of felt deprived. And there was one time when one very ill-advised executive at Amazon decided to save money.
They were going to take the aspirin out of the kitchen or the storeroom.
So then if you had a headache, you just had to go home.
That decision was very quickly reversed.
You hit on something that jibes with my business wisdom, which is that it's very difficult to motivate people to do these things. You really need to hire people who are like that already. And then your task is not to alienate them. And you can alienate employees, because basically, if you don't appreciate them, or if in some way the people who like to be nice to customers feel like they're criticized for doing so or unappreciated for doing so, then they become alienated and they don't want to work there anymore, or they might just disengage emotionally somehow, but then they're very unhappy.
And it's hard to screen for that. So probably the best thing to do, and I'm not good at this,
is to let people go early when you see right away that they don't do that. Another thing I've
noticed is that it's easier in your initial phases of the business when the owner is seen working very hard.
He's there all the time.
He's struggling.
He's emotional about it.
And that's inspiring in some kind of way.
It becomes much more difficult once a place like the Comedy Cellar is a mature, successful business with tons of customers.
And we're trying to maintain rather than build.
The psychology of a business that's maintaining
is completely different than the psychology
of a business that's doing stuff, trial and error,
try this, try that, excited, high-fiving
when new sales goals are met. There's just so many things. And these were the happiest times of my life as an entrepreneur, not necessarily the times we were making the most money, just the growing periods of this stuff. And I know
all my employees were very, very loyal at that time because we were all kind of in it together.
Now, I mean, after all, they don't make more money. Well, I guess the waiters and waitresses do in some way. They make more money when we're busier, but in general,
they don't. And so these are very difficult psychological puzzles for, I think, business
owners to figure out. And even the greatest of them, if you read about Bed Bath & Beyond
and how they've just fallen apart,
these Harvard business school types,
they can't figure it out either, right?
Business is hard.
Yeah.
Yeah, that growing versus maintaining mentality
you talked about,
I mean, Bezos kind of had a term for that, right?
The growing mentality is the day one mentality. There was this phrase at Amazon: it's still day one. And Bezos kept repeating that over and over. People kept asking him, is it day two yet? Nope, it's still day one. Well, what time is it? And then he would say, well, I think we haven't even hit the snooze button yet. He just drilled this into you, like, we need to think about it like we're still at the very beginning of a long, long journey.
Well, I mean, I have two quick stories.
One time, this is in the 90s maybe, I bought some really expensive Bose earbuds or something.
And when earbuds first came out, like $400 or $500 at the time.
And I opened the box and they were empty.
So I called up Amazon, and they said,
no problem, Mr. Dorman, we'll send you another box.
So they sent me them again,
and I opened the box, and they were empty.
Now I have $1,000 worth of missing earbuds,
and I have to call Amazon again.
I'm like, they're never going to believe me, right?
So I called them up, and they said,
no problem, Mr. Dworman, we'll send you another pair, but if you wouldn't mind signing for them this time. I'm like, no problem. They didn't question me at all. They sent me another pair.
Another time I ordered an air hockey set for my kids, and I had
six months, I think, to assemble it. That was the terms of the thing. And a year later, I hadn't
assembled it. And then I went to assemble it a year later and it was missing a key component
and wouldn't work. And I had to call up Amazon and tell them, listen, you sent me this a year ago. And I'm thinking, they're never going to... like, what am I going to do? I'll just buy another one, whatever. And they're like, no problem, Mr. Dworman, we'll send you one. They didn't even mention the fact that I was six months past the deadline. Now,
maybe they look you up and they see how much business you do with them. Do they do that? Do you know?
I think they do,
yeah. And I think they're much more
inclined to just sort of, you know, go with
customers always right if you're obviously a
loyal customer.
Who never pulled a fast one before.
Yeah, yeah. Or just, like, you've bought some stuff, you know, as opposed to, oh, what do you know, by coincidence, the very first time you ordered anything, it was an expensive item that somehow didn't arrive. But I think, yeah, Amazon's great. They're the only company that I have seen that notices themselves when they owe you a refund and gives the refund to you practically without you even asking. So, like, you know, they ship something for you internationally, and then you get an email from them. It's like,
oh, we noticed that we overestimated the VAT tax and we owe you a dollar and 38 cents. It's already been refunded to you. I mean, just like that. Or, you were streaming a video from us last night, and we noticed you had some trouble with playback. We're so sorry about that, so here's a refund.
Just no other company that I've seen will... Every other company...
I mean, you go to an airline, right?
You have a flight, and your flight's three, four, five hours late, right?
And are they just going to call you up and say, hey...
I mean, they know the flight was late.
Are they going to call you and say, hey, we're so sorry.
We're going to practically give you...
No.
But if you call them, then they'll say, oh, I'm so sorry.
Here's a refund, right? But they make you do it.
Amazon doesn't make you do it.
If you're lucky.
If you're lucky, right.
If you're lucky, they'll give you a voucher or something, right?
But Amazon doesn't even make you do it.
They notice when they screw up and they want to be the first to notice
and they're going to practically give it to you because it's the right thing to do.
And it seems pretty simple logic to know that that's the right thing for business to do.
You would think.
But it's painful, right?
Yeah, that's exactly what I was going to say.
That's exactly what I was going to say.
There's a psychological thing in people.
I'm not going to give that money back.
But it's painful, yeah.
300 people were on a flight that was four hours late, and you want me to give them all $100 vouchers? That's $30,000 in vouchers right there. You know, you want me to just drop that?
It's worth every penny. It's worth it. I mean, I do stuff like this all the time in my business, and everybody thinks I'm crazy, and I'm convinced it's the right thing to do, because there's no limiting principle otherwise. Either you've got to decide you're
going to treat people the way you'd want to be treated yourself.
You can afford it.
Most of these companies can afford it.
Nobody's going bankrupt because they do this.
Or you have to really embrace really creative logic on a day-to-day basis in real time.
Create standards as you go along to rationalize
why you did this, but you didn't do that, whatever it is. So, you want to say hi? That's my son, Manny. All right. Okay. So, listen, you're quite an interesting guest. You're not located in New York, are you?
No, we just moved to Boston,
actually after many years in California. Well, if you ever get to New York, I hope you'll visit us. Why don't you leave us with whatever other than AI is the hot topic on
your mind these days that would be interesting to people? Yeah, sure. Well, I am, so I'm currently
writing a book. Book is going to be about the history of technological and industrial progress,
kind of what were the key discoveries and inventions that, you know, created the modern world and gave us our standard of living? And sort of what should
we make of all this? Like, is progress actually good? Can progress continue? That kind of thing.
And yeah, one of the topics I've been researching lately that I've been really interested in is like,
why did it take us so long in historical terms to create machines to automate so much of labor. When we had some kinds
of machines for thousands of years, windmills and water mills and all kinds of things, it's amazing
what they had in the Middle Ages, you know, in terms of machines that could do work. And yet
most work was still done by hand. So obviously they had the idea to build machines to lessen our load of labor, but they weren't able to do it somehow, or they didn't do it. Why did it take until the 1700s and 1800s for us to, you know, automate so much of manual labor? That's a topic I've been researching and will hopefully write about soon.
Do you have a short answer?
Yeah, I think it turns out it's actually a lot harder than it looks to build a machine that does this work. So the machines they had in the Middle Ages did very brute-force motions.
So like grinding grain, pounding hammers, you know, kind of very just like crude high force, you know, sawing, grinding, pounding motions.
And it turns out a lot of human labor was fairly dexterous.
So if you think of spinning thread,
very delicate motions of the fingers that are needed,
and it was just much harder to make machines
that were precise enough
and built to high engineering tolerances.
We needed metal, we needed gear-cutting machines,
we needed machine tools,
we needed a whole manufacturing substrate and technology
to build the precision machinery that could do these kind of much more delicate human tasks.
Yeah, we're not sufficiently in awe of what it is we have around us.
This will be the last thing I'll say.
At the Comedy Cellar before everybody enters, they put their phones in one of these kind of plastic bubble-wrappy envelopes that Amazon uses to send stuff out.
And we're able to give these out by the hundreds every night.
They cost, I think, less than a cent.
And you look at them, and there's fashion, there's bubble wrap, there's printing on it.
There's so much manufacturing, so much technology.
There's chemical technology that goes into this.
There's practical technology.
There's fine tolerances of assembly.
Like I said, there's printing.
Just the printing alone you'd think would cost more than a quarter of a penny to do.
And all of this comes to us at a price which is so cheap.
It's basically free.
The shipping, it probably comes from China. Just the shipping alone, you would imagine, would bring the price point above where it is, even if they were shipping just rocks.
No, it doesn't. Somehow, this system of capitalism is able to do this and it doesn't seem possible and we don't sufficiently
appreciate it, in my opinion.
We see the people pissing in bottles, yeah, I get it. Right, absolutely.
Worth every penny, though, because on the other side there's somebody taking a pee break once every 23 and a half hours.
It's worth it. Periel? Go ahead, Jason.
I mean, I was just going to say,
yeah, I mean,
you were singing my song,
I think we take
the modern world for granted
and we take technology
and industry
and our standard of living
for granted.
And we absolutely
should not do that.
And a big part of my mission
is to help people
look around
at industrial civilization
with awe and wonder
and gratitude
for all of the
problems that people used to face that are so far behind us now, and the unprecedented life that we get to live.
Yeah, well, I could not think of a better way to end. It's been a pleasure. I hope you enjoyed your stay here in Ixtapa with me. Okay, I guess that's it. Periel, please give him my information,
and of course,
everybody comes to New York at some point.
When you come to New York,
please stop by the Comedy Cellar,
say hello, come see a show,
whatever you want.
Absolutely.
I'll introduce you to some interesting people here.
Pleasure to meet you.
Yeah, you too.
Thanks a lot.
It was a fun conversation.
Thanks so much for coming.