Lex Fridman Podcast - #132 – George Hotz: Hacking the Simulation & Learning to Drive with Neural Nets
Episode Date: October 22, 2020. George Hotz (geohot) is a programmer, hacker, and the founder of Comma.ai. Please support this podcast by checking out our sponsors: - Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 40% & free shipping - Decoding Digital: https://appdirect.com/decoding-digital - ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free EPISODE LINKS: Comma.ai's Twitter: https://twitter.com/comma_ai Comma.ai's Website: https://comma.ai/ George's Instagram: https://www.instagram.com/georgehotz George's Twitch: https://www.twitch.tv/georgehotz George's Twitter: https://twitter.com/realgeorgehotz Comma.ai YouTube (unofficial): https://www.youtube.com/channel/UCwgKmJM4ZJQRJ-U5NjvR2dg PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/LexFridmanPage - Medium: https://medium.com/@lexfridman OUTLINE: Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 07:02 - Will human civilization destroy itself? 09:49 - Where are the aliens?
14:36 - Tic Tac UFO and Bob Lazar 17:04 - Conspiracy theories 19:07 - The programming language of life 23:28 - The games that humans play 31:58 - Memory leaks in the simulation 34:29 - Theories of everything 36:14 - Ethereum startup story 44:02 - Cryptocurrency 53:28 - Self-help advice 57:08 - Comma.ai 59:02 - Comma two 1:07:50 - Tesla vs Comma.ai 1:16:53 - Driver monitoring 1:30:34 - Communicating uncertainty 1:32:22 - Tesla Dojo 1:38:50 - Tesla Autopilot big rewrite 1:45:09 - How to install the Comma Two 1:49:44 - Openpilot is Android & Autopilot is iOS 1:58:59 - Waymo 2:10:12 - Autonomous driving and society 2:12:24 - Moving 2:15:29 - Advice to Startups 2:28:32 - Programming setup 2:31:32 - Ideas that changed my life 2:39:37 - GPT-3 2:42:57 - AGI 2:47:00 - Programming languages that everyone should learn 2:53:33 - How to learn anything 2:56:05 - Book recommendations 3:04:28 - Love 3:06:17 - Psychedelics 3:08:38 - Crazy
Transcript
The following is a conversation with George Hotz, aka geohot, his second time on the podcast.
He's the founder of Comma.ai, an autonomous and semi-autonomous vehicle technology company
that seeks to be to Tesla Autopilot what Android is to iOS.
They sell the Comma two device for $1,000 that, when installed in many of their supported cars, can keep the
vehicle centered in the lane even when there are no lane markings.
It includes driver sensing that ensures that the driver's eyes are on the road.
As you may know, I'm a big fan of driver sensing.
I do believe Tesla Autopilot and others should definitely include it in their sensor suite.
Also, I'm a fan of Android and a big fan of George,
for many reasons, including his non-linear out of the box
brilliance, and the fact that he's a superstar programmer
of a very different style than myself.
Styles make fights, and styles make conversations.
So I really enjoyed this chat, and I'm sure we'll talk
many more times on this podcast.
Quick mention of each sponsor, followed by some thoughts related to the episode. First is
Four Sigmatic, the maker of delicious mushroom coffee. Second is Decoding Digital, a podcast
on tech and entrepreneurship that I listen to and enjoy. And finally, ExpressVPN, the VPN I've used for many years to protect
my privacy on the internet. Please check out the sponsors in the description to get a
discount and to support this podcast. As a side note, let me say that my work at MIT on
autonomous and semi-autonomous vehicles led me to study the human side of autonomy enough
to understand that it's a beautifully complicated and interesting problem space,
much richer than what can be studied in the lab.
In that sense, the data that Comma.ai,
Tesla Autopilot, and perhaps others like Cadillac Super Cruise are collecting gives us a chance to understand how we can design safe,
semi-autonomous vehicles for real human beings in real-world
conditions.
I think this requires bold innovation and a serious exploration of the first principles
of the driving task itself.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts,
follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman.
As usual, I'll do a few minutes of ads now and no ads in the middle.
I'll try to make these interesting, but I give you time stamps, so if you skip, please
still check out the sponsors by clicking the links in the description.
It's the best way to support this podcast.
This show is sponsored by Four Sigmatic, the maker of delicious mushroom coffee. It has lion's mane mushroom for productivity and chaga mushroom for immune support.
I don't even know how to pronounce that.
I don't know what the heck it is, but it balances the kick of the caffeine nicely to give
me a steady boost to focus in long deep work sessions.
Apparently, it doesn't leave you with the, quote, awfully jittery feeling of caffeine,
though I don't think I get that jittery feeling from any kind of supplement, caffeine,
red bull or otherwise.
I'm pretty sure my body is mostly made up of caffeine at this point.
To be honest, I drink coffee and tea more for the comforting warmth and the ritual of
it.
I don't think the caffeine even works anymore.
I find that little rituals like these help calm the mind enough to settle in for the deep,
distraction-free thinking. Anyway, exclusively for you, the listener of this podcast,
get up to 40% off and free shipping on mushroom coffee bundles. To claim this deal, go to foursigmatic.com slash
Lex or use the code LexPod at checkout.
Again, you'll save up to 40% and get free shipping when you go right now to foursigmatic.com
slash Lex or enter code LexPod at checkout and fuel your productivity and creativity
with some delicious mushroom coffee.
This show is also sponsored by the Decoding Digital podcast, hosted by AppDirect
co-CEO Dan Saks. It's a relatively new show
I started listening to, where every episode is an interview with an entrepreneur or expert on a particular topic in the tech space.
I liked the recent interview with Michelle Zatlyn of Cloudflare about security.
I've been progressively getting more and more interested in hacking culture, both on
the attack and the defense side, so this conversation was a fun, educational 45 minutes to listen
to.
I think this podcast has a nice balance between tech
and business that many of you might enjoy,
especially if you're thinking of starting a business
yourself or work at a startup.
As you may or may not know,
I'm thinking through this process myself,
finding the balance between careful planning
and throwing caution to the wind
and just going with the heart or the gut.
Anyway, check out Decoding Digital on Apple Podcasts, or wherever you get your podcasts.
Give them some love and encouragement to help make sure that the podcast keeps going.
This show is also sponsored by ExpressVPN.
It looks like the social dilemma documentary on Netflix has gotten people to talk about
surveillance capitalism and the value of your data.
As you may know, I see our social media systems as somewhere between totally broken and needing
improvement. But one of the key aspects is for people to get more control over their data.
ExpressVPN is one mechanism for doing that because it hides your IP address which websites can use to personally identify you.
Using ExpressVPN makes your activity more difficult to trace and sell to advertisers.
Obviously, at least from my perspective, the responsibility should be on the social networks themselves,
but for now, a good VPN like ExpressVPN can help.
I'm also working on a bunch of different technological solutions to this problem.
So if you don't like the idea of tech companies exploiting your personal
information, then visit expressvpn.com/lexpod right now,
and you can get three months extra of ExpressVPN for free.
That's expressvpn.com/lexpod to protect your data.
Again, go to expressvpn.com/lexpod to learn more. I think they wanted me to say that
like 20 times, but I'll stick to just three. And now, here's my conversation with George Hotz.
So last time we started talking about the simulation, this time let me ask you, do you think there's intelligent life out there in the universe?
I've always maintained my answer to the Fermi paradox.
I think there's been intelligent life elsewhere in the universe.
So intelligent civilizations existed, but they've blown themselves up.
So your general intuition is that intelligent civilizations quickly, like there's that parameter
in the Drake equation, your sense is they don't last very long.
Yeah.
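The Drake-equation point being made here can be sketched numerically. The factor values below are purely illustrative assumptions, not measurements; the point is only how the civilization-lifetime parameter L dominates the answer:

```python
# Back-of-envelope Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# Every factor value here is an illustrative guess, not an established number.
def drake(R_star=1.5, f_p=0.9, n_e=0.5, f_l=0.5, f_i=0.1, f_c=0.1, L=1000):
    """Rough count of currently-communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# N is directly proportional to L, the civilization lifetime, so
# "they blow themselves up quickly" (small L) means a near-empty galaxy.
print(drake(L=100))        # short-lived civilizations: well under 1
print(drake(L=1_000_000))  # long-lived civilizations: thousands
```

Because N is linear in L, the "they don't last very long" intuition predicts an empty-looking sky even when the other factors are generous.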
How are we doing on that?
Like, have we lasted pretty good?
Oh, no, we're doing good.
Oh, yeah.
I mean, not quite yet.
Well, it's like that Eliezer Yudkowsky line, the IQ required to destroy the world falls
by one point every year.
Okay, so technology democratizes the destruction of the world.
When can a meme destroy the world?
It kind of is already, right?
Somewhat. I don't think we've seen anywhere near the worst of it yet.
World's gonna get weird. Well, it's going to get weird.
Well, maybe a meme can save the world.
We thought about that.
The meme lord Elon Musk fighting on the side of good versus the meme lord of the darkness,
which is not saying anything bad about Donald Trump, but he is the lord of the meme on the dark side.
He's a Darth Vader of memes.
I think in every fairy tale, they always end it with
and they lived happily ever after.
And I'm like, please tell me more about this happily ever after.
I've heard 50% of marriages end in divorce,
why doesn't your marriage end up there?
You can't just say happily ever after.
So the thing about destruction is it's over after the destruction.
We have to do everything right in order to avoid it.
And one thing wrong, I mean, actually this is what I really like about cryptography.
Cryptography, it seems like we live in a world where the defense wins.
Versus like nuclear weapons, the opposite is true.
It is much easier to build a warhead that splits into a hundred little warheads than to build something that can, you know, take out a hundred
little warheads. The offense has the advantage there. So maybe our future is in crypto,
but...
So in cryptography, right, the Goliath is the defense. And then all the different hackers
are the Davids. And that equation is flipped for nuclear war.
Because there's so many like one nuclear weapon destroys everything essentially.
Yeah, and it is much easier to attack with a nuclear weapon than it is to, like, the technology required to intercept and destroy a rocket is much more complicated than the technology required to just, you know, send a rocket on an orbital trajectory to somebody. So, okay, your intuition is that there
were intelligent civilizations out there, but it's very possible that they're no longer there.
It's kind of a sad picture. They enter some steady state. They all wirehead themselves.
What's wirehead? Um, stimulate their pleasure centers.
And just, you know, live forever in this kind of stasis.
They become, well, I mean, I think the reason I believe this is because where are they?
If there's some reason they stopped expanding.
So otherwise they would have taken over the universe.
The universe isn't that big.
Or at least, you know, let's just talk about the galaxy, right?
70,000 light years across.
I took that number from Star Trek Voyager.
I don't know how true it is.
But, yeah, that's not big.
70,000 light years is nothing.
For some possible technology, you can imagine that they leverage, like, wormholes or something
like that.
You don't even need wormholes.
Just a von Neumann probe is enough.
A von Neumann probe and a million years of sublight travel,
and you'd have taken over the whole universe.
That clearly didn't happen, so something stopped it.
So you mean if you, for like a few million years,
if you sent out probes that travel close,
what's sublight, meaning close to the speed of light?
At, like, point one c, say.
And it just spreads.
Interesting.
Actually, that's an interesting calculation.
Huh.
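The interesting calculation is a one-liner, taking the 70,000-light-year figure and the 0.1c cruise speed from the conversation as given and ignoring replication and refueling stops:

```python
# Rough galaxy-crossing time for a von Neumann probe fleet.
# Both inputs are the figures quoted in the conversation, not measured values.
GALAXY_DIAMETER_LY = 70_000  # light years, the Star Trek Voyager number
PROBE_SPEED_C = 0.1          # fraction of light speed ("point one c")

# Distance in light years divided by speed in c gives time in years.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"{crossing_time_years:,.0f} years")  # prints "700,000 years"
```

So even at a tenth of light speed, a single self-replicating probe wavefront crosses the whole galaxy in well under a million years, which is why "a million years of sublight travel" is enough for the argument.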
So what makes you think that we'd be able to communicate with them?
Like, yeah, why do you think we would be able to comprehend
intelligent lives that are out there?
Like, even if they were among us kind of thing, like,
or even just flying around. Well, I mean, that's possible. It's possible that there is some sort of
prime directive that'd be a really cool universe to live in. And there's some reason they're not
making themselves visible to us. But it makes sense that they would use the same, well, at least the same entropy.
Well, you're implying the same laws of physics. I don't know what you mean by entropy in this case.
Oh, yeah. I mean, if entropy is the scarce resource in the universe,
so what do you think about like Stephen Wolf from and everything is a computation?
And then what if they are traveling through this world of computations? So if you think of the universe as just information processing, then what you're referring to
with entropy, and then these pockets of interesting complex computations swimming around, how do
we know they're not already here?
How do we know that this, like all the different amazing things that are full of mystery on Earth
are just little footprints of intelligence
from light years away?
Maybe.
I mean, I tend to think that as civilizations expand,
they use more and more energy.
And you can never overcome the problem of waste heat.
So where is their waste heat?
So we'd be able to, with our crude methods,
be able to see, like,
there's a whole lot of energy here.
But it could be something we're not, I mean, we don't understand dark
energy, right? Dark matter.
It could be just stuff we don't understand at all.
But they could have a fundamentally different physics, you know,
that we just don't even comprehend.
I think, okay, I mean, it depends how far out you want to go.
I don't think physics is very different on the other side of the galaxy.
I would suspect that they have, I mean, if they're in our universe,
they have the same physics.
Well, yeah, that's the assumption we have, but there could be like super
trippy things like, like our cognition only gets to a slice and all the possible instruments
that we can design only get to a particular slice of the universe.
And there's something much like weirder.
Maybe we can try a thought experiment.
Would people from the past be able to detect the remnants of our,
would they be able to detect our modern civilization?
And I think the answer is obviously yes.
You mean the past from, like, a hundred years ago?
Or you can go back further.
Let's go to a million years ago.
Right, the humans who were lying around in the desert
probably didn't even have,
maybe they just barely had fire.
They would understand if a 747 flew overhead.
Oh, in this vicinity, but not if a 747 flew on Mars,
because they wouldn't be able to see far,
because we're not actually communicating that well
with the rest of the universe.
We're doing okay, just sending out random like 50s tracks
and music. True, and yeah, I mean, they'd have to, you know, we've only been broadcasting radio waves
for, um, 150 years, and there's your light cone. So yeah, okay.
What do you make of all the, I recently came across this,
having talked to David Fravor, I don't know if you caught the videos that the Pentagon released and the New York Times reporting of the UFO sightings.
So I kind of looked into it, quote unquote, and there's actually been like hundreds of thousands
of UFO sightings, right? And a lot of it, you can explain it in different kinds of ways.
So one is that could be interesting physical phenomena, two,
it could be people wanting to believe.
And therefore they conjure up a lot of different things that just, you know,
when you see different kinds of lights, some basic physics phenomena, and then you
just conjure up ideas of possible out there mysterious
worlds. But, you know, it's also possible, like you have a case of David Fraver, who is
a Navy pilot, who's, you know, as legit as it gets in terms of humans who are able to perceive
things in the environment and make conclusions about whether those things
are a threat or not.
And he and several other pilots saw a thing.
I don't know if you follow this,
but they saw a thing that they've since then called the Tic Tac
that moved in all kinds of weird ways.
They don't know what it is.
It could be technology developed by the United States, and they're just not aware of it
at the surface level of the Navy, right?
It could be different kind of lighting technology or drone technology, all that kind of stuff.
It could be the Russians and the Chinese, all that kind of stuff.
And of course, their mind, our mind can also venture into the possibility
that it's from another world.
Have you looked into this at all?
What do you think about it?
I think all the news is a psyop.
I think that's the most plausible.
Nothing is real.
Yeah, I listened to the, I think it was Bob Lazar
on Joe Rogan.
And like I believe everything this guy is saying. And
then I think that it's probably just some like MK Ultra kind of thing, you know.
What do you mean? Like, they, you know, they made some weird thing and they
called it an alien spaceship. You know, maybe it was just to, like, stimulate young physicists'
minds, tell them it's alien technology, and we'll see what they come up with, right?
Do you find any conspiracy theories compelling?
Like, have you pulled at the string of the rich complex world of conspiracy theories that's
out there?
I think that I've heard a conspiracy theory that conspiracy theories were invented by the
CIA in the 60s to discredit true things. Yeah. So, you know, you can go to ridiculous conspiracy theories,
like Flat Earth and Pizzagate, and,
you know, these things are almost to hide, like,
conspiracy theories that like, you know,
remember when the Chinese locked up the doctors
who discovered coronavirus?
Like I tell people to have said,
I'm like, no, no, that's not a conspiracy theory.
That actually happened.
Do you remember the time that the money used to be
backed by gold and now it's backed by nothing?
This is not a conspiracy theory.
This actually happened.
Well, that's one of my worries today with the idea of fake news
is that when nothing is real, then you dilute the possibility of anything being true by conjuring
up all kinds of conspiracy theories.
And then you don't know what to believe.
And then the idea of truth of objectivity is lost completely.
Everybody has their own truth.
So you used to be able to control information by censoring it.
Then the internet happened and governments were like,
oh shit, we can't censor things anymore.
I know what we'll do.
You know, it's the old story of, like,
tying a flag where the leprechaun tells you his gold is buried.
And you tie one flag, and you make the leprechaun swear to not remove the flag.
And you come back to the field later with a shovel, and there are flags everywhere.
That's one way to maintain privacy, right? It's like, in order to protect the contents of this conversation, for example, we could just generate like millions of deep-fake conversations where
you and I talk and say random things. So this is just one of them and nobody knows which one was
the real one. This could be fake right now.
Classic steganography technique.
Okay, another absurd question about intelligent life.
Because you're an incredible programmer outside of everything else we'll talk about just
as a programmer.
Do you think intelligent beings out there, the civilizations that were out there,
had computers and programming?
Did they, too, naturally have to develop something like what we do, where we engineer machines
and are able to encode both knowledge into those machines and instructions that process that knowledge,
process that information to make decisions
and actions and so on.
And would those programming languages,
if you think they exist, be at all similar
to anything we've developed?
So I don't see that much of a difference
between quote unquote natural languages
and programming languages.
I think there's so many similarities.
So when I ask the question,
what do alien languages look like?
I imagine they're not all that dissimilar from ours.
And I think translating in and out of them
wouldn't be that crazy.
It would be difficult to compile DNA to Python and then to C.
There is a little bit of a gap in the kind of language
we use for Turing machines and the kind of languages
nature seems to use a little bit.
Maybe that's just, we just haven't understood the kind of language that nature
uses as well yet.
DNA is a CAD model. It's not quite a programming language. It has no sort of serial execution.
It's not quite a, yeah, it's a CAD model. So I think in that sense, we actually completely
understand it. The problem is, you know, simulating these CAD models.
I played with it a bit this year.
It is super computationally intensive.
If you want to go down to like the molecular level,
where you need to go to see a lot of these phenomena, like protein folding.
So yeah, it's not that we don't understand it.
It just requires a whole lot of compute to kind of compile it
for human minds. It's inefficient both for the data representation and for the programming.
Yeah, it runs well on raw nature.
It runs well on raw nature, and when we try to build emulators or simulators for that...
Well, it's mad slow. I've tried it.
Yeah, you've commented elsewhere,
I don't remember where, that one of the problems is that simulating nature is tough.
And if you want to sort of deploy a prototype, I forgot how you put it, but it made me
laugh.
But animals or humans would need to be involved in order to try to run some prototype code
like if we're talking about COVID and viruses and so on,
if you were trying to engineer
some kind of defense mechanisms like a vaccine
against COVID or all that kind of stuff
that doing any kind of experimentation
like you can with like autonomous vehicles
would be very technically and ethically costly.
I'm not sure about that.
I think you can do tons of crazy biology and test tubes.
I think my bigger complaint is more, all the tools are so bad.
Like literally, you mean, like, libraries and...
I'm not pipetting shit.
Like, you're handing me a pipette? I gotta... no.
No, no, there has to be some.
Like automating stuff and like the human biology is messy.
Like it seems like...
Look at those Theranos videos, they were a joke.
It's like a little gantry, it's like a little XY gantry, high school science project
with the pipette.
Like, really?
There's gotta be something better.
Can't you build, like, nice microfluidics
where I can program the, you know, computation-to-bio interface?
I mean, this is gonna happen.
But like, right now, if you're asking me
to pipette 50 milliliters of solution,
this is so crude.
Yeah.
Okay. Let's get all the crazy out of the way.
A bunch of people ask me since we talked about the simulation last time,
we talked about hacking the simulation. Do you have any updates,
any insights about how we might be able to go about
hacking simulation if we indeed do live in a simulation.
I think a lot of people misinterpreted the point of that South by Southwest talk.
The point of the talk was not literally
to hack the simulation.
The idea is literally just,
I think, theoretical physics.
I think that's the whole goal.
You want your grand unified theory, but then, okay, build a grand unified theory search
for exploits.
I think we're nowhere near actually there yet.
My hope with that was just more to like, are you people kidding me with the things you
spend time thinking about? Do you understand, like, kind of how small you are? You are
bytes in God's computer, really? And the things that people get worked up about.
And, you know, so basically it was more a message of, we should humble ourselves. Like, what are we humans in this bytecode?
Yeah.
And not just humble ourselves, but like, I'm not trying to make people guilty or anything like
that.
I'm trying to say, like, literally, look at what you are spending time on.
Right?
What are you referring to?
You're referring to the Kardashians?
Um, the Kardashians, whatever, that's kind of fun. I'm referring more to, like,
the economy. You know, this idea that we gotta up our stock price. Like, what is the
goal function of humanity?
You don't like the game of capitalism.
Like you don't like the games we've constructed
for ourselves as humans?
I'm a big fan of capitalism.
I don't think that's really the game we're playing right now.
I think we're playing a different game
where the rules are rigged.
Okay, which games are interesting to you
that we humans have constructed, and which aren't? Which are
productive and which are not? Actually, maybe that's the real point of the talk. It's like,
stop playing these fake human games. There's a real game here. We can play the real game.
The real game is, you know, nature wrote the rules. This is the real game. There still is a game to play.
But, sorry to interrupt, I don't know if you've seen the Instagram account Nature is Metal,
the game that nature seems to be playing is a lot
more cruel than we humans want to put up with. Or at least we see it as cruel. It's like the bigger thing eats the smaller thing and does it to impress another big
thing so it can mate with that thing and that's it. That seems to be the entirety of it.
Well, there's no art, there's no music, there's no Comma.ai, there's no Comma one, no Comma
two, no George Hotz with his brilliant talks at South by Southwest.
I disagree though. I disagree that this is what nature is. I think nature just provided
basically an open world MMORPG. And you know, here, it's open world. I mean, if that's
the game you want to play, you can play that game. Isn't that beautiful? I don't know if you've played Diablo, they used to have, I think, a cow level where, so everybody
would go, they figured out this, like, the best way to gain, like, experience points
is to just slaughter cows over and over and over.
And so they figured out this little sub-game within the bigger game that this is the most
efficient way to get experience points.
And everybody somehow agreed that getting experience points in the RPG context, where you
always want to be getting more stuff, more skills, more levels, keep advancing,
that seems to be good.
So you might as well sacrifice actual enjoyment of playing a game, exploring a world, and spend, like,
hundreds of hours of your time in cow level. I mean, the number of hours I spent in cow level,
I'm not, like, the most impressive person, because people have probably spent thousands of hours there, which is ridiculous.
So that's a little absurd game that brought me
joy in some dopamine-drip kind of way. So you don't like those games? You don't
think that's us humans feeling the nature? And that was the point of the talk. Yeah. So how do we
hack it then?
Well, I want to live forever.
And this is like the goal.
Well, that's a game against nature.
Yeah.
Immortality is the good objective function to you.
I mean, start there and then you can do whatever else you want, because you've got a long
time.
What if immortality makes the game just totally not fun?
I mean, like, why do you assume immortality is somehow, it's not a good
objective function? It's not immortality that I want. A true immortality, where I could not die,
I would prefer what we have right now, but I want to choose my own death, of course.
I don't want nature to decide when I die. I'm going to win. I'm going to beat it.
And then at some point, if you choose to commit suicide, like, how long do you think you'd live?
Until I get bored.
See, I don't think people, like brilliant people like you, that really ponder living a long
time, are really considering how meaningless this life becomes.
Well, I want to know everything and then I'm ready to die.
As long as it's...
Why do you want, isn't it possible that you want to know
everything because it's finite?
Like the reason you want to know, quote unquote,
everything is because you don't have enough time
to know everything.
And once you have unlimited time, then you realize, like, why do anything?
Like why learn anything?
I don't want to know everything.
I'm ready to die.
So you have, yeah, it's a terminal value. It's not in service of anything else.
I'm conscious of the possibility, this is not a certainty, but the possibility of,
of that engine of curiosity that you're speaking to is actually a symptom of,
uh, the finiteness of life.
Like, without that finiteness, your curiosity would vanish, like a morning fog.
All right, cool.
Then let me solve immortality, and we'll change the thing in my brain that reminds me of the fact
that I'm immortal, that tells me that life is finite. Shit, maybe I'll have it tell me that life ends
next week, right? I'm okay with some self-manipulation like that. I'm okay with deceiving myself.
Oh, I like it, changing the code.
Yeah, well, if that's the problem, right? If the problem is that I will
no longer have that curiosity, I'd like to have backup copies of myself,
which I check in with occasionally to make sure they're okay with the trajectory,
and they can kind of override it. Maybe a nice, like, I think of, like, those WaveNets, those, like,
logarithmic backups, go back to the copy.
But sometimes it's not reversible.
Like, I've done this with video games
once you figure out the cheat code,
or like you look up how to cheat old school,
like single player, it ruins the game for you.
Absolutely, I know that feeling, but again,
that just means our brain manipulation technology's not good enough yet.
Remove that cheat code from your brain here.
What if we, so it's also possible that if we figure out immortality, that all of us
will kill ourselves before we advance far enough to be able to revert the change.
I'm not killing myself till I know everything, so.
That's what you say now, because your life is finite.
I think self-modifying systems comes up with all these hairy complexities.
Can I promise that I'll do it perfectly?
No, but I think I can put good safety structures in place.
So that talk, and your thinking here, is not literally referring to a simulation in that our universe is a
kind of computer program running in a computer.
It's more of a thought experiment.
Do you also think of the potential of the sort of Bostrom, Elon Musk, and others that talk about an actual program that simulates our
universe?
Oh, I don't doubt that we're in a simulation.
I just think that it's not quite that important.
I mean, I'm interested only in simulation theory as far as it gives me power over nature.
If it's totally unfalsifiable, then who cares?
I mean, what do you think that experiment would look like? Somebody on Twitter asked, "George, what signs would we look for to know whether or not we're in a simulation?" Which is exactly what you're asking: the step that precedes the step of knowing how to get more power from this knowledge is to get an indication that there's some power to be gained.
So, get an indication that you can discover and exploit cracks in the simulation, or, if not there, in the physics of the universe.
Yeah. Show me. I mean, like, a memory leak could be cool.
Like some scrying technology, you know?
What kind of technology?
Scrying.
What's that?
Oh, that's a weird one. Scrying is the paranormal ability, like remote viewing, being able to see somewhere where you're not.
So, you know, I don't think you can do it by chanting in a room, but if we could find...
It's a memory leak, basically.
It's a memory leak. Yeah, you're able to access parts you're not supposed to.
Yeah, yeah. And thereby discover shortcuts.
Yeah, maybe memory leak means the other thing as well, but I mean like yeah,
like an ability to read arbitrary memory.
Right. And that one's not that horrifying. The write ones start to be horrifying.
Read and write. So the reading is not the problem.
Yeah, it's like Heartbleed for the universe.
Oh boy, the writing is a big, big problem.
It's a big problem.
It's the moment you can write anything,
even if it's just random noise.
That's terrifying.
I mean, even without that,
like even some of the nanotech stuff that's coming, I think.
I don't know if you're paying attention, but Eric Weinstein came out with a theory of everything. He's been working on a theory of everything in the physics world, called Geometric Unity. And then, for me, as a computer science person, Stephen Wolfram's theory of everything, of hypergraphs, is super interesting and beautiful, not from a physics perspective, but from a computational perspective. I don't know, have you paid attention to any of that?
So again, what would make me pay attention, and why I hate string theory: okay, make a testable prediction, right? I'm not interested in theories for their intrinsic beauty.
I'm interested in theories that give me power over the universe.
So, if these theories do, I'm very interested.
Can I just say how beautiful that is? Because a lot of physicists say, "I'm interested in experimental validation," and they skip out the part where they say, "to give me more power over the universe." I just love the...
No, I want the clarity of that.
I want 100 gigahertz processors.
I want transistors that are smaller than atoms.
I want power.
That's true, and that's where, from aliens to this kind of technology, people are worried about who owns that power.
Is it George Hotz?
Is it thousands of distributed hackers across the world?
Is it governments?
You know, is it Mark Zuckerberg?
There's a lot of people that... I don't know if anyone trusts any one individual with power, so they're always worried.
It's the beauty of blockchains.
That's the beauty of blockchains, which we'll talk about.
On Twitter, somebody pointed me to a story,
a bunch of people pointed me to a story a few months ago,
where you went into a restaurant in New York
and you can correct me if this is wrong, and ran into a bunch of folks from a crypto company
who are trying to scale up Ethereum.
And they had a technical deadline related to a Solidity-to-OVM compiler.
So these are all Ethereum technologies.
They recognized you, pulled you aside, explained their problem, and you stepped in and helped them solve it, thereby creating a legend-status story.
Can you tell me the story in a little more detail? It seems kind of incredible. Did this happen?
Did this happen? Yeah, it's a true story. It's a true story.
I mean, they wrote a very flattering account of it.
So, Optimism, the company's called Optimism. It's a spinoff of Plasma.
They're trying to build L2 solutions on Ethereum.
So right now, every Ethereum node
has to run every transaction on the Ethereum network.
And this kind of doesn't scale, right? Because if you have N computers, well, you know, if that becomes 2N computers, you actually still get the same amount of compute. Right, this is like O(1) scaling, because they all have to run it. Okay, fine, you get more blockchain security, but the blockchain is already so secure. Can we trade some of that off for speed?
So that's kind of what these L2 solutions are. They built this thing which is kind of a sandbox for Ethereum contracts, so they can run in this L2 world, and they can't do certain things in the L1 world.
I'm going to ask you for some definitions. What's L2?
Oh, L2 is layer 2.
So L1 is like the base Ethereum chain.
And then layer 2 is like a computational layer
that runs elsewhere, but still is kind of secured by layer 1.
And I'm sure a lot of people know, but Ethereum is a cryptocurrency, probably one of the most popular cryptocurrencies, second to Bitcoin. And a lot of interesting technological innovations there.
Maybe you can also slip in whenever you talk about this and things that are exciting to
you in the Ethereum space.
And why Ethereum?
Well, I mean, Bitcoin is not Turing complete. Ethereum is not technically Turing complete with the gas limit, but it's close enough.
With the gas limit, what's the gas limit?
Resources?
Yeah, I mean, no computer is actually Turing complete.
Right, right.
You're gonna run out of RAM, you know?
Finite RAM, yeah.
The word gas limit, you just have so many brilliant words. I'm not even gonna ask.
No, no, that's not my word, that's Ethereum's word.
Gas limit.
In Ethereum, you have to spend gas per instruction. Different opcodes use different amounts of gas, and you buy gas with Ether, to prevent people from basically DDoSing the network.
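The gas mechanism he's describing can be sketched in a few lines of Python. The opcode costs below are invented placeholders (the real schedule lives in the Ethereum Yellow Paper and changes across hard forks); the point is just that each instruction burns gas until the limit is hit:

```python
# Toy sketch of per-opcode gas metering; costs are invented placeholders,
# not real Ethereum gas schedule values.
GAS_COST = {"ADD": 3, "MUL": 5, "SLOAD": 800, "SSTORE": 20000}

class OutOfGas(Exception):
    """Raised when a program exceeds its gas limit."""

def execute(program, gas_limit):
    """Charge gas for each opcode; abort the run if gas runs out."""
    remaining = gas_limit
    for op in program:
        cost = GAS_COST[op]
        if cost > remaining:
            raise OutOfGas(f"{op} needs {cost} gas, only {remaining} left")
        remaining -= cost
    return gas_limit - remaining  # total gas consumed
```

A cheap arithmetic program runs fine on a small limit, while a storage write blows right through it, which is what makes spamming the network expensive.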
So Bitcoin is proof of work, and then what's Ethereum?
It's also proof of work.
They're working on some proof of stake, Ethereum 2.0 stuff.
But right now it's proof of work.
It uses a different hash function from Bitcoin that's more ASIC resistant, because it stresses the RAM.
So we're all talking about Ethereum 1.0 at this point.
So what were they trying to do to scale this whole process?
So they were like, well, if we could run contracts elsewhere,
and then only save the results of that computation,
but we don't actually have to do the compute on the chain.
We can do the compute off chain
and just post what the results are.
Now, the problem with that is,
well, somebody could lie about what the results are.
So you need a resolution mechanism
and the resolution mechanism can be really expensive
because you just have to make sure
that like the person who is saying,
look, I swear that this is the real computation.
I'm staking $10,000 on that fact.
And if you prove it wrong, yeah, it may cost you $3,000 in gas fees to prove it wrong, but you'll get the $10,000 bounty.
So you can secure it using those kinds of systems.
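The incentive structure he's describing can be sketched with just the payoffs. The mechanism is heavily simplified (real fraud proofs involve on-chain re-execution); the dollar figures are the ones from the example above:

```python
# Sketch of the optimistic dispute incentive: a prover stakes a bond on a
# claimed result; a challenger who pays gas to prove fraud wins the bond.
# Only the economics are modeled, not the proof mechanism itself.
def challenger_payoff(bond, proof_gas_cost, claim_was_fraudulent):
    """Net payoff to someone who disputes a staked claim."""
    if claim_was_fraudulent:
        return bond - proof_gas_cost  # collect the bounty, minus gas spent
    return -proof_gas_cost            # honest claim: challenger just burns gas
```

With the $10,000 stake and $3,000 of gas from the example, catching fraud nets $7,000, while challenging honest results only loses money, which is what keeps the system honest.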
So it's effectively a sandbox, which runs contracts,
and like, it's like any kind of normal sandbox,
you have to replace syscalls with calls
into the hypervisor.
Sandbox, syscalls, hypervisor.
What do these things mean?
As long as it's interesting to talk about.
Yeah, I mean, the Chrome sandbox is maybe the one to think about, right? So the Chrome process that's doing the rendering can't, for example, read a file from the file system.
Yeah.
If it tries to make an open syscall in Linux, it can't. You can't make an open syscall, no, no, no. You have to request it from the kind of hypervisor process, or whatever it's called in Chrome: you know, hey, could you open this file for me? And then it does all these checks, and then it passes the file handle back if it's approved.
Yeah.
So that's yeah.
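The brokered-open pattern from the Chrome example looks roughly like this in Python. The policy prefix and the returned handle are made-up stand-ins; a real broker passes an actual file descriptor over IPC rather than a dict:

```python
# Sketch of a sandbox broker: the sandboxed process can't open files
# directly, it must ask a privileged broker that applies policy checks.
ALLOWED_PREFIXES = ("/tmp/render/",)  # hypothetical policy

def broker_open(path):
    """Privileged side: validate the request, then hand back a handle."""
    if not any(path.startswith(p) for p in ALLOWED_PREFIXES):
        raise PermissionError(f"broker denied open({path!r})")
    return {"fd": 3, "path": path}  # stand-in for a passed file descriptor

def sandboxed_open(path):
    """Sandboxed side: no direct open() syscall, everything via the broker."""
    return broker_open(path)
```

The sandboxed side never touches the file system itself; every request goes through the checks, which is the same shape as replacing syscalls with hypervisor calls in the OVM.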
So what's the, in the context of Ethereum,
what are the boundaries of the sandbox that we're talking about?
Well, like, the calls that are actually reading and writing any state to the Ethereum blockchain. Writing state is one of those calls that you're going to have to sandbox in layer 2, because you can't let layer 2 just arbitrarily write to the Ethereum blockchain.
So layer two is really sitting on top of layer one.
So you're gonna have a lot of different kinds of ideas
that you can play with.
Yeah.
And they're not fundamentally changing
the source code level of Ethereum.
Well, you have to replace a bunch of calls with calls into the hypervisor. So instead of doing the syscall directly, you replace it with a call to the hypervisor.
So originally they were doing this by first compiling Solidity, the language that most Ethereum contracts are written in, to bytecode. And then they wrote this thing they called the transpiler. The transpiler took the bytecode and transpiled it into OVM-safe bytecode: basically, bytecode that didn't make any of those restricted syscalls, and added the calls to the hypervisor. This transpiler was a 3,000-line mess. And it's hard to do. It's hard to do if you're trying to do it like that, because you have to kind of deconstruct the bytecode, change things about it, and then reconstruct it.
And I mean, as soon as I hear this, I'm like, why not just change the compiler? The compiler is the first place you build the bytecode, so just do it in the compiler. So yeah, I asked them how much they wanted it, measured in dollars, of course, and I'm going, okay. And yeah.
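As a rough illustration of the transpiler approach: walk already-compiled bytecode and swap each restricted state-access op for a hypervisor call. The opcode names here are invented stand-ins; real EVM rewriting also has to fix up jump targets, which is part of why the post-hoc approach was such a mess compared to emitting the hypervisor calls in the compiler to begin with:

```python
# Toy model of the bytecode transpiler: replace each restricted op with a
# call into the hypervisor contract. Opcode names are invented; real EVM
# rewriting is far messier (jump fixups, stack effects, etc.).
RESTRICTED = {"SSTORE": "HV_SSTORE", "SLOAD": "HV_SLOAD"}

def transpile(bytecode):
    """Return OVM-safe bytecode with restricted ops replaced."""
    return [RESTRICTED.get(op, op) for op in bytecode]
```

In this toy form it's one line; the 3,000-line reality is the argument for doing it at the compiler level instead.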
You wrote the compiler.
Yeah.
I modified it. I wrote a 300-line diff to the compiler.
It's open source. You can look at it.
Yeah.
Yeah, I looked at the code last night.
Yeah.
It's... yeah, exactly. It's a good word for it. And it's C++.
C++, yeah. So when asked how you were able to do it, you said, "You just gotta think, and then do it right." So can you break that apart a little bit? What's your process of, one, thinking, and two, doing it right?
You know, the people I was working for were amused that I said that.
It doesn't really mean anything.
Okay.
I mean, is there some deep profound insights to draw from like how you problem solve from
that?
This is always what I say.
I'm like, do you want to be a good programmer?
Do it for 20 years.
Yeah.
There's no shortcuts.
No.
What are your thoughts on crypto in general? What parts, technically or philosophically, do you find especially beautiful, maybe?
Oh, I'm extremely bullish on crypto long-term. Not any specific crypto project, but two ideas. One, the Nakamoto consensus algorithm is, I think, one of the greatest innovations of the 21st century. This idea that people can reach consensus, you can reach a group consensus, using a relatively straightforward algorithm, is wild. And, like, you know, Satoshi Nakamoto. People always ask me who I look up to. It's like, whoever that is.
Who do you think it is?
I mean, is it Elon Musk?
Is it you?
It is definitely not me, and I do not think it's Elon Musk.
But yeah, this idea of groups reaching consensus in a decentralized yet formulaic way is one extremely powerful idea from crypto.
Maybe the second idea is this idea of smart contracts.
When you have a contract between two parties, any contract, if there are disputes, it's interpreted by lawyers. Lawyers are just really shitty, overpaid interpreters. Let's compare a lawyer to Python, right? So, lawyers, well, okay.
That's really... I never thought of it that way. It's hilarious.
So, Python: I'm paying, what, 10 cents an hour? I'll use the nice Azure machine. I can run Python for 10 cents an hour.
Lawyers cost $1,000 an hour.
So Python is 10,000 X better on that axis.
Lawyers don't always return the same answer.
Python almost always does.
Cost, reliability, everything about Python is so much better than lawyers. So if you can make smart contracts, there's this whole concept of "code is law." I love it, and I would love to live in a world where everybody accepted that fact.
So maybe you can talk about what smart contracts are.
So let's say, let's say, you know, we have a, even something as simple as a safety deposit
box, right?
Safety deposit box that holds a million dollars. I have a contract with
the bank that says two out of these three parties must be present to open the safety deposit
box and get the money out. So that's a contract with the bank and it's only as good as the
bank and the lawyers, right? Let's say, you know, somebody dies. And now we're going to
go through a big legal dispute
about whether, oh, was it in the will?
Was it not in the will?
Well, it's just so messy.
And the cost to determine truth is so expensive.
Versus a smart contract, which just uses cryptography
to check if two out of three keys are present.
Well, I can look at that.
And I can have certainty in the answer that it's going to return.
And that's what all businesses want: certainty. You know, they say businesses don't care. Viacom v. YouTube: YouTube's like, look, we don't care which way this lawsuit goes. Just please tell us, so we can have certainty.
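The two-of-three logic from the safe deposit box example is deterministic enough to sketch directly. A real contract would verify cryptographic signatures rather than check set membership, and these key names are obviously made up:

```python
# Sketch of a smart-contract-style 2-of-3 check. Set membership stands in
# for real signature verification; key names are hypothetical.
AUTHORIZED = {"key_alice", "key_bob", "key_carol"}

def can_open(presented_keys, threshold=2):
    """Deterministic rule: open iff at least `threshold` valid keys present."""
    return len(set(presented_keys) & AUTHORIZED) >= threshold
```

Anyone can read this rule and know with certainty what it will return for any input, which is exactly the property being contrasted with lawyer-interpreted contracts.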
I wonder how many agreements in this world... because we're talking about financial transactions only in this case, correct? The smart contracts?
Oh, you can do anything. You can put a prenup in the Ethereum blockchain.
A marriage smart contract. Sorry, divorce lawyers, sorry. You're going to be replaced by Python.
Okay, so that's, so that's another beautiful idea.
Do you think there's something that's appealing to you
about any one specific implementation?
So if you look 10, 20, 50 years down the line,
do you see any like Bitcoin, Ethereum,
any of the other hundreds of cryptocurrencies
winning out?
Is there like what's your intuition about the space?
Are you just sitting back and watching the chaos
and who cares what emerges?
Oh, I don't really care. I don't really care which one of these projects wins.
I'm kind of in the Bitcoin as a meme coin camp.
I mean, why does Bitcoin have value?
It's technically kind of, you know.
What?
Yeah.
Not great.
Like, the block size debate. When I found out what the block size debate was, I'm like, are you guys kidding?
What's the block size debate?
You know, it's almost too stupid to even talk about. People can look it up, but I'm like, wow.
You know, Ethereum seems, the governance of Ethereum
seems much better.
I've come around a bit on proof-of-stake ideas. You know, very smart people thinking about some things.
Yeah, governance is interesting. It does feel like, even in these distributed systems, leaders are helpful, because they kind of help you drive the mission and the vision, and they put a face to a project. It's a weird thing about us humans.
Geniuses are helpful, like Vitalik. Leaders are not necessary.
Yeah. So you think the reason he's the face of Ethereum is because he's a genius.
That's interesting.
I mean, that was, it's interesting to think about that we need to create systems
in which the quote unquote leaders that emerge are the geniuses in the system.
I mean, that's arguably why the current
state of democracy is broken is the people who are emerging as the leaders are not the most
competent, are not the superstars of the system. And it seems like at least for now, in the crypto
world, oftentimes the leaders are the superstars. Imagine at the debate they asked, what's the sixth
amendment? What are the four fundamental forces in the universe? What's the integral of two to the X?
Yeah, I would have loved to see those questions asked, and that's what I want in our leader. It's a little bit of a low bar.
Yeah, I mean, oh wow, you're hurting my brain.
My standard was even lower,
but I would have loved to see just this basic brilliance,
like I've talked to historians.
There are just these people, they don't even have a PhD or an education in history. They're just, like, a Dan Carlin-type character, where you're like, holy shit, how did all this information get into your head?
They're able to just connect Genghis Khan to the entirety of the history of the 20th
century.
They know everything about every single battle that happened and they know the game of
thrones of the different power plays and all that happened there.
And they know like the individuals and all the documents involved.
And they integrate that into their regular life.
It's not like they're ultra history nerds.
They're just, they know this information.
That's what competence looks like.
Yeah.
Because I've seen that with programmers too.
That's what great programmers do.
But yeah, it would be, it's really unfortunate
that those kinds of people aren't emerging as our leaders.
But for now, at least in the crypto world,
that seems to be the case.
I don't know if that always, you could imagine
that in a hundred years, it's not the case.
The crypto world has one very powerful idea going for it, and that's the idea of forks. I mean, you know, imagine, we'll use a less controversial example.
This was actually in my joke app in 2012.
I was like Barack Obama, Mitt Romney, let's let him both be president.
All right, like imagine we could fork America and just let him both be president.
And then the Americas could compete.
And you know, people could invest in one, pull their liquidity out of one, put it in the
other.
You have this in the crypto world, Ethereum forks into Ethereum and Ethereum classic.
And you can pull your liquidity out of one and put it in another.
And people vote with their dollars, which forks companies
should be able to fork.
I'd love to fork Nvidia, you know?
Yeah, like different business strategies.
And then try them out and see what works.
Like, even take, yeah, take a Comma that closes its source, and then take one that's open source, and see what works. Take one that's purchased by GM, and one that remains renegade, in all these different versions, and see.
The beauty of Comma AI is someone can actually do that.
Please take comma AI and fork it.
That's right.
That's the beauty of open source.
So you're, I mean, we'll talk about autonomous
vehicle space, but it does seem that you're really knowledgeable about a lot of different topics.
So the natural question a bunch of people ask is, how do you keep learning new things? Do you have, like, practical advice, if you were to introspect? Like taking notes, allocating time? Or do you just mess around and allow your curiosity to drive?
I'll write these people a self-help book, and I'll charge $67 for it. And I will write on the cover of the self-help book, "All of this advice is completely meaningless. You're going to be a sucker and buy this book anyway."
And the one lesson that I hope they take away from the book is that I can't give you
a meaningful answer to that.
That's interesting.
Let me translate that: you haven't really thought about what it is you do systematically, because you could reduce it. And some people... I mean, I've met brilliant people. This is really clear with athletes. Some are just, you know, the best in the world at something, and they have zero interest in writing, like, a self-help book, or how to master this game. And then there's some athletes who become great coaches,
and they love the analysis, perhaps the over analysis.
And you, right now, at least at your age, which is interesting, you're in the middle of the battle. You're like the warriors that have zero interest in writing books. You're in the middle of the battle.
Yeah, this is a fair point.
I do think I have a certain aversion to this
kind of deliberate intentional way of living life.
The hilarity of this, especially since this is recorded, is it will reveal beautifully the absurdity when you finally do publish this book. And I guarantee you, you will.
The story of comma AI,
it'll be a biography written about you.
That'll be better, I guess.
And you might be able to learn some cute lessons
if you're starting a company like comma AI
from that book.
But if you're asking generic questions
like how do I be good at things?
Dude, I don't know.
Well, I mean, the interesting thing is...
Do them a lot.
But the interesting thing here is learning things outside of your current trajectory,
which is what it feels like from an outsider's perspective.
I mean, that, I don't know if there's a device on that, but it is an interesting curiosity.
When you become really busy, you're running
a company.
Hard time.
Yeah. But like, there's a natural inclination and trend, like just the momentum of life
carries you into a particular direction of wanting to focus and this kind of dispersion that curiosity can lead to
gets harder and harder with time.
Cause you get really good at certain things
and it sucks trying things that you're not good at,
like trying to figure them out.
When you do this with your life streams,
you're on the fly figuring stuff out.
You don't mind looking dumb.
No.
You just figure it out pretty quickly.
Sometimes I try things and I don't figure them out.
My chess rating is, like, 1400, despite putting a couple hundred hours in. It's pathetic.
I mean, to be fair, I know that I could do it better.
If I did it better, like don't play,
you know, don't play five minute games,
play 15 minute games at least.
Like, I know these things, but it just doesn't stick nicely in my knowledge tree.
All right, let's talk about Comma AI. What's the mission of the company? Let's look at the biggest picture.
Oh, I have an exact statement: solve self-driving cars while delivering shippable intermediaries.
So long term vision is have fully autonomous vehicles
and make sure you make money along the way.
I think that doesn't really speak to money,
but I can talk about what
solve self-driving cars means.
Solve self-driving cars, of course, means
you're not building a new car,
you're building a person replacement,
that person can sit in the driver's seat
and drive you anywhere a person can drive
with a human or better level of safety, speed, quality, comfort.
And what's the second part of that?
Delivering shippable intermediaries is, well, it's a way to fund the company, that's true.
But it's also a way to keep us honest.
If you don't have that, it is very easy with this technology to think you're
making progress when you're not. I've heard it best described on hacker news as you can
set any arbitrary milestone. Meet that milestone and still be infinitely far away from solving
self-driving cars. So it's hard to have like real deadlines when you're like crews or waymo when you don't
have revenue.
Is that, I mean, is revenue essentially the thing we're talking about here?
Revenue is... I'm in the real-capitalism camp. There's definitely scams out there, but real capitalism is based around consent. It's based around this idea that if we're getting revenue, it's because we're providing at least that much value to another person. When someone buys a thousand-dollar Comma two from us, we're providing them at least a thousand dollars of value, or they wouldn't buy it.
Brilliant. So can you give a whirlwind overview of the products that Comma AI provides, throughout its history and today?
I mean, yeah, the past ones aren't really that interesting.
It's kind of just been refinement of the same idea.
The only real product we sell today is the Comma two, which is a piece of hardware with cameras.
So the Comma 2, I mean, you can think about it kind of like a person.
You know, in future hardware, it will probably be even more and more person-like.
So it has, you know, eyes, ears, a mouth, a brain, and a way to interface with the car.
Does it have consciousness?
Just kidding. That was a trick question.
I don't have consciousness either.
Me and the Comma two are the same?
You're the same. I have a little more compute than it. It only has, like, the compute of a bee.
Interesting. But, you know, you're more efficient energy-wise for the compute you're doing.
Far more efficient energy-wise. 20 petaflops, 20 watts.
Do you lack consciousness?
Sure. Do you fear death?
I do.
You want immortality?
Of course I fear death.
Does Comma AI fear death?
I don't think so.
Of course it does.
It very much fears, well, negative loss.
Oh yeah.
Oh.
Okay.
So the Comma two, when did that come out? That was a year ago?
No. Early this year.
Wow, time, it feels like, yeah.
2020 feels like it's taken 10 years to get to the end of it.
It's a long year.
It's a long year.
So what's the sexiest thing about the Comma two, feature-wise?
So I mean, maybe you can also linger on like,
what is it?
Like what's its purpose?
Cause there's a hardware, there's a software component.
You've mentioned the sensors,
but also like what are the features and capabilities?
I think our slogan summarizes it well. Comma's slogan is "Make driving chill."
I love it.
Okay.
Yeah, I mean, it is, you know, if you like cruise control,
imagine cruise control, but much, much more.
So it can do adaptive cruise control things,
which is like slow down for cars in front of it,
maintain a certain speed,
and it can also do lane keeping, so stay in the lane
and do it better and better and better over time.
That's very much machine learning-based.
So there's cameras, there's a driver-facing camera too. What else is there? What am I thinking? So, the hardware versus the software: OpenPilot versus the actual hardware, the device. Can you draw that distinction?
What's one versus the other?
I mean, the hardware is pretty much a cell phone with a few additions: a cell phone with a cooling system and with a car interface.
Okay.
And by cell phone, you mean a Qualcomm Snapdragon?
Yeah, the current hardware is a Snapdragon 821.
It has a Wi-Fi radio, it has a cellular radio, it has a screen.
We use every part of the cell phone.
And then the interface with the car is specific to the car, so you keep supporting more and
more cars.
Yeah, so the interface to the car... I mean, the device itself just has four CAN buses, four CAN interfaces on it, that are connected through the USB port to the phone. And then, yeah, on those four CAN buses, you connect it to the car, and there's a little harness to do this.
Cars are actually surprisingly similar.
So CAN is the protocol by which the car communicates, and then you're able to read stuff and write stuff, to be able to control the car, depending on the car.
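As a rough sketch of what "read stuff" on a CAN bus looks like: a frame is an arbitration ID plus a small payload, and each car defines its own signal layout (openpilot keeps those per-car definitions in DBC files). The ID, byte layout, and scaling below are invented for illustration and match no actual vehicle:

```python
# Toy parse of a CAN frame: a hypothetical 2-byte ID plus a 2-byte speed
# signal scaled to 0.01 km/h units. Real layouts are per-car, defined in
# DBC files; nothing here matches an actual vehicle.
import struct

def parse_speed_frame(raw):
    """Unpack big-endian (arbitration_id, raw_speed) and apply scaling."""
    arb_id, speed_raw = struct.unpack(">HH", raw)
    return {"id": arb_id, "speed_kmh": speed_raw / 100.0}
```

Writing control commands works the same way in reverse: pack a payload per the car's layout and send it on the right bus.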
So what's the software side? What's OpenPilot?
So, I mean, the hardware is pretty simple compared to OpenPilot. OpenPilot is...
Well, so you have a machine learning model, which is in OpenPilot. It's a blob. It's just a blob of weights.
It's not like people are like, oh, it's closed source.
I'm like, it's a blob of weights.
What do you expect?
You know, how could I-
It's just primarily neural-network-based?
Yeah, well. OpenPilot is all the software kind of around that neural network. If you have a neural network that says, here's where you want to send the car, OpenPilot actually goes and executes all of that.
It cleans up the input to the neural network, it cleans up the output and executes on it.
So it's the glue that connects everything together.
It runs the sensors, does a bunch of calibration for the neural network, deals with, like, you know, if the car is on a banked road, you have to countersteer against that, and the neural network can't necessarily know that by looking at the picture. So you do that with other sensors and fusion and a localizer. OpenPilot is also responsible
for sending the data up to our servers so we can learn from it,
logging it, recording it, running the cameras, thermally managing the device,
managing the disk space and the device, managing all the resources and the device.
So, since we last spoke, I don't remember when, maybe a year ago, maybe a little bit longer, how has OpenPilot improved?
We did exactly what I promised you.
I promised you that by the end of the year,
we would be able to remove the lanes.
The lateral policy is now almost completely end to end.
You can turn the lanes off and it will drive. It drives slightly worse on the highway if you turn the lanes off, but it will drive. It's trained completely end-to-end on user data.
And this year we hope to do the same for the longitudinal policy.
So that's an interesting thing.
You're not doing, you can correct me, you don't appear to be doing lane detection or lane-marking detection, or the segmentation task, or any kind of object detection task. You're doing what's traditionally called end-to-end learning, trained on the actual behavior of drivers when they're driving the car manually.
And this is hard to do.
It's not supervised learning.
Yeah, but the nice thing is there's a lot of data.
So it's hard and easy, right?
It's... we have a lot of high-quality data, right?
Like, more than you need in the dataset?
Well, we have way more data than we need.
I mean, it's an interesting question actually because in terms of
amount, you have more than you need, but the, you know,
driving is full of edge cases.
So how do you select the data you train on? I think this is an interesting open question. Like, what's the cleverest
way to select data? That's the question Tesla is probably working on. I mean, the entirety of the machine learning community doesn't seem to really care. They just kind of select data. But I feel like, if you want to create intelligent systems, you have to pick data well.
All right. And so would you have any hints, ideas of how to do it well?
So in some ways, that is the definition I like of reinforcement learning versus supervised learning.
In supervised learning, the weights depend on the data. Right?
And this is obviously true, but in reinforcement learning,
the data depends on the weights.
Yeah.
And actually, it goes both ways.
That's poetry.
So how does it know what data to train on? Do you let it pick?
We're not there yet, but that's the eventual goal.
She's thinking this almost like a reinforcement learning framework.
We're going to do RL on the world.
Every time a car makes a mistake, user disengages, we train on that and do RL on the world.
Ship out a new model, that's an epoch, right?
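The loop being described here, ship a model, collect the disengagements, train on the corrections, ship again, can be sketched as a toy simulation. Everything in it is a stand-in I invented for illustration, not Comma's actual pipeline: a one-parameter "policy", a simulated human driver, made-up thresholds.

```python
import random

random.seed(0)

# Toy sketch of the "RL on the world" loop: deploy a policy, log the moments
# where the simulated user disengages, train on those corrections, redeploy.
# Every number and function here is invented for illustration.

TRUE_STEER = 0.7  # the steering the simulated human actually wants

def drive(model_steer, miles=1000):
    """One deployment: return the human's corrections at each disengagement."""
    corrections = []
    for _ in range(miles):
        wanted = TRUE_STEER + random.gauss(0, 0.05)
        if abs(model_steer - wanted) > 0.1:   # user grabs the wheel
            corrections.append(wanted)
    return corrections

def train(model_steer, corrections, lr=0.1):
    """Nudge the policy toward what the human did at each disengagement."""
    for wanted in corrections:
        model_steer += lr * (wanted - model_steer)
    return model_steer

model = 0.0                     # badly initialized policy
counts = []
for epoch in range(5):          # each shipped model is one "epoch"
    logged = drive(model)
    counts.append(len(logged))
    model = train(model, logged)

print(counts)                   # disengagements per deployment, falling over time
```

The point of the sketch is the epoch structure: the data the policy trains on next round depends on the policy you just shipped.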
And for now, you're not doing the Elon-style promising that it's going to be fully autonomous.
You really are sticking to level two and like it's supposed to be supervised.
It is definitely supposed to be supervised
and we enforce the fact that it's supervised.
We look at our rate of improvement in disengagement.
Open pilot now has an unplanned disengagement
about every 100 miles.
This is up from 10 miles maybe a year ago. So we've seen maybe a 10x improvement in a year,
but a hundred miles is still a far cry from the 100,000 you're going to need. So you're going to
somehow need to get three more 10x's in there. What's your intuition? You're basically
hoping that there's exponential improvement baked into the cake somewhere.
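Lex's back-of-the-envelope: going from 100 miles per disengagement to 100,000 is a factor of 1,000, which is three more 10x improvements; at the one-10x-per-year rate just mentioned, that's roughly three more years. As arithmetic:

```python
import math

# Miles per unplanned disengagement, per the conversation.
current = 100        # openpilot today
target = 100_000     # the bar Lex suggests

tenx_needed = math.log10(target / current)
print(tenx_needed)   # 3.0, i.e. three more 10x improvements
```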
Well, that's even better. I mean, 10x improvement.
That's already assuming exponential, right?
There's definitely exponential improvement.
And I think when Elon talks about exponential,
these things, these systems are going to exponentially improve.
Just exponential doesn't mean you're getting 100 gigahertz processors tomorrow.
It's going to still take a while because the gap between even our best system and humans
is still large.
That's an interesting distinction to draw out.
If you look at the way Tesla is approaching the problem and the way you're approaching
the problem, which is very different than the rest of the self-driving car world, so
let's put them aside, you're treating most of the driving tasks as a machine learning problem. And the way
Tesla is approaching it is with a multitask learning. We break the task of
driving into hundreds of different tasks. And you have this multi-headed network
that's very good at performing each task. And there's presumably something on top
that's stitching stuff together in order
to make control decisions, policy decisions
about how you move the car.
But there's a brilliance to this,
because it allows you to master each task,
like lane detection, stop sign detection, traffic light
detection, drivable area segmentation, vehicle bicycle pedestrian
detection, there's some localization tasks in there,
also predicting how the entities in the scene are going to move.
Like everything is basically a machine learning task.
Well, there's a classification, segmentation, prediction.
And it's nice because you can have this entire engine,
data engine that's mining for edge cases for each one of these tasks.
And you can have people, like engineers,
that are basically masters of that task.
They become the best person in the world at, as you talked about, the cone guy for Waymo.
Yeah, the good old cone guy.
They become the best person in the world at cone detection.
So that's a compelling notion from a supervised learning perspective.
Automating much of the process of edge case discovery and
retraining neural network for each of the individual perception tasks.
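The multi-headed architecture Lex describes, one shared backbone feeding a small head per task, can be sketched in a few lines of NumPy. This is a toy illustration; the task names, layer sizes, and random weights are mine, not Tesla's actual HydraNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

class MultiHeadNet:
    """Toy multi-headed ("HydraNet"-style) network: shared backbone, per-task heads."""
    def __init__(self, in_dim=128, feat_dim=64, tasks=None):
        tasks = tasks or {"lanes": 4, "stop_signs": 2, "traffic_lights": 3}
        self.backbone = rng.standard_normal((in_dim, feat_dim)) * 0.1
        # One small output head per task, all sharing the backbone features.
        self.heads = {name: rng.standard_normal((feat_dim, out)) * 0.1
                      for name, out in tasks.items()}

    def forward(self, x):
        feat = relu(x @ self.backbone)           # shared computation
        return {name: feat @ w for name, w in self.heads.items()}

net = MultiHeadNet()
outputs = net.forward(rng.standard_normal(128))
print({k: v.shape for k, v in outputs.items()})
```

Because every head reads the same backbone features, each task team can iterate on its own head while sharing the expensive common computation.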
And then you're looking at the machine learning in a more holistic way, basically doing
end-to-end learning on the driving task, supervised, trained on the data of the actual driving
of people that use Comma AI.
Like, actual human drivers under manual control, plus the moments of
disengagement that, maybe with some labeling, could indicate a failure of the
system. So you have a huge amount of data for positive control of the vehicle,
like successful control of the vehicle, both maintaining the lane and, I think you're also working on, longitudinal control of the vehicle, and then failure cases where the vehicle does something wrong
that needs disengagement.
So why do you think you're right and Tesla is wrong on this?
And do you think you'll come around to the Tesla way?
Or do you think Tesla will come around to your way?
If you were to start a chess engine company,
would you hire a bishop guy?
See, this is Monday morning quarterbacking, but... yes, probably.
Oh, our rook guy. Oh, we stole the rook guy from that company. Oh, we're going to have real good rooks. Well, there's not many pieces, right? You know, there's not
many guys and gals to hire. You just have a few that work on the bishop, a few that work on
the rook. But is that not ludicrous today to think about, in a world of AlphaZero?
But AlphaZero is the chess case, so the fundamental question is how hard is driving compared to chess?
Because long term, end-to-end will be the right solution.
The question is how many years away that is.
End-to-end is going to be the only solution for level five.
It's the only way we get there.
Of course, of course
Tesla's gonna come around to my way.
And if you're a rook guy out there, I'm sorry.
The cone guy.
I don't know.
We're gonna specialize in each task.
We're gonna really understand rook placement.
Yeah.
I understand the intuition you have.
I mean, it's a very compelling notion that we can learn the task end to end, like the same compelling notion you might
have for natural language conversation. But I'm not sure, because one thing you sneaked in
there is this assertion that it's impossible to get to level five without this kind of approach.
I don't know if that's obvious.
I don't know if that's obvious either.
I don't actually mean that.
I think that it is much easier to get to level five
with an end-to-end approach.
I think that the other approach is doable,
but the magnitude of the engineering challenge
may exceed what humanity is capable of. So, what do you think of the Tesla data engine approach,
which to me is an active learning task
that's kind of fascinating,
is breaking it down into these multiple tasks
and mining their data constantly for like edge cases
for these different tasks?
Yeah, but the tasks themselves are not being learned.
This is feature engineering.
Yeah, I mean, it's a higher abstraction level
of feature engineering for the different tasks.
It's task engineering, in a sense.
It's slightly better feature engineering,
but it's still fundamentally feature engineering.
And if the history of AI has taught us
anything, it's that feature engineering approaches will always be replaced by, and lose to, end-to-end.
Now, to be fair, I cannot really make promises on timelines, but I can say that when you look
at the code for Stockfish and the code for AlphaZero, one is a lot shorter than the other,
a lot more elegant, and required a lot fewer programmer hours to write.
Yeah, but there's a lot more murder of bad agents on the AlphaZero side.
By murder, I mean agents that played a game and failed miserably.
Yeah. Oh, and in simulation, that failure is less costly.
Yeah.
In the real world, it's...
Wait, do you mean in practice?
Like, AlphaZero has lost games miserably?
No, I haven't seen that.
No, but the requirement
for AlphaZero is
a simulator.
To be able to... like, evolution, human evolution,
not human evolution,
biological evolution of life on earth,
from the origin of life, has murdered trillions upon trillions
of organisms on the path to humans.
Yeah.
So the question is, can we stitch together a human-like object
without having to go through the entirety process of evolution?
Well, no, but you do the evolution in simulation.
Yeah, and that's the question: can we simulate?
So do you have a sense that it's possible to simulate some aspects of that?
MuZero is exactly this.
MuZero is the solution to this.
MuZero, I think, is going to be looked back on as the canonical paper.
And I don't think deep learning is everything.
I think that there's still a bunch of things missing to get there.
But MuZero, I think, is going to be looked back on as the kind of cornerstone paper of this
whole deep learning era.
And MuZero is the solution to self-driving cars.
You have to make a few tweaks to it, but MuZero does effectively that.
It does those rollouts, and that murdering, in a learned simulator, in a learned dynamics
model.
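The MuZero idea George is pointing at, doing the rollouts (and the "murdering" of bad plans) inside a learned dynamics model instead of the real world, in miniature. This is a toy sketch: real MuZero adds MCTS plus learned representation, value, and policy networks; here the "learned" dynamics is a stand-in function and the task is just steering a 1-D position toward zero.

```python
import itertools

def learned_dynamics(state, action):
    """Stand-in for a learned model: predicts next state and reward.
    The 'car' wants its 1-D position driven toward 0."""
    next_state = state + action
    reward = -abs(next_state)
    return next_state, reward

def plan(state, horizon=3, actions=(-1, 0, 1)):
    """Evaluate every action sequence inside the learned model; keep the best
    plan, discard (murder) the rest. Return the first action of the best plan."""
    best_return, best_first = float("-inf"), 0
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:                     # rollout entirely in the model
            s, r = learned_dynamics(s, a)
            total += r
        if total > best_return:
            best_return, best_first = total, seq[0]
    return best_first

print(plan(2))    # → -1: steer back toward the target
```

Nothing here touches a real environment: every candidate plan lives and dies inside the model, which is the property George is highlighting.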
It's interesting.
It doesn't get enough love.
I was blown away when I read that paper.
I'm like, okay, I've always said at Comma,
I'm going to sit and I'm going to wait for the solution to self-driving cars to come along.
This year I saw it.
It's MuZero.
Yeah.
So sit back and let the winnings roll in.
So your sense, just to elaborate a little bit, to linger on the topic: your sense is neural networks
will solve driving. Like, we don't need anything else.
I think the same way chess was, and maybe Google search is, the pinnacle of search algorithms, things that look kind of
like A*, the pinnacle of this era is going to be self-driving cars.
But on the path, you have to deliver products, and it's possible that the path to full
self-driving cars will take decades.
How long would you put on it?
Look, you're chasing it, Tesla's chasing it.
What are we talking, five years, 10 years?
Five years.
In the 2020s.
In the 2020s.
Yeah.
The later part of the 2020s.
Well, that would be nice to see.
And on the path to that, you're delivering products,
which is a nice L2 system.
That's what Tesla's doing, a nice L2 system.
We'll discuss that.
It gets better every time.
L2, the only difference between L2 and the other levels is who takes liability, and
I'm not a liability guy. I don't want to take liability.
I'm going to be level 2 forever.
Now, on that little transition, I mean, how do you make the transition work?
Is this where driver sensing comes in?
Like, how do you make the, because you said 100 miles,
like, is there some sort of human factors psychology thing
where people start to over trust the system,
all those kinds of effects, once it gets better,
and better, and better, and better,
or they get lazier and lazier and lazier?
Is that, like, how do you get that transition right?
First off, our monitoring is already adaptive.
Our monitoring is already scene-adaptive.
Driver monitoring.
Is this the camera that's looking at the driver?
You have an infrared camera in the...
Our policy for how we enforce the driver monitoring is scene-adaptive.
What's that mean?
Well, for example, in one of the extreme cases, if the car is not moving, we do not actively
enforce driver monitoring.
If you are going through a 45 mile an hour road with lights
and stop signs and potentially pedestrians,
we enforce a very tight driver monitoring policy.
If you are alone on a perfectly straight highway,
and this is... it's all machine learning, none of that is encoded.
Well, actually, the stopped case is encoded, but...
So there's some kind of machine learning estimation of risk.
Yes.
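The scene-adaptive enforcement George just described, nothing when stopped, tight on a busy 45 mph road, relaxed on an empty highway, amounts to mapping an estimated risk to an alert policy. A hypothetical sketch: every threshold and timeout below is invented for illustration, not openpilot's actual policy.

```python
def alert_timeout_seconds(speed_mph, estimated_risk):
    """Map driving context to how long the driver may look away before an
    alert fires. All numbers are invented, not openpilot's real policy."""
    if speed_mph == 0:
        return None                    # car not moving: no active enforcement
    if estimated_risk > 0.7:           # e.g. 45 mph road, lights, pedestrians
        return 2.0                     # tight policy
    if estimated_risk < 0.2:           # e.g. alone on a straight highway
        return 10.0                    # relaxed policy
    return 6.0                         # everything in between

print(alert_timeout_seconds(45, 0.8))  # → 2.0
print(alert_timeout_seconds(70, 0.1))  # → 10.0
```

In the real system the risk estimate itself would come from a learned model, per the conversation; here it is just an input.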
Yeah, I mean, I've always been a fan of that.
Because it's difficult to do,
every step in that direction
is a worthwhile step to take.
It might be difficult to really do what, like, us humans are able to do: estimate risk pretty
damn well, whatever the hell that is. That feels like one of the nice features of us humans,
because we humans are really good drivers when we're really tuned in, and we're good
at estimating risk, like, when are we supposed to be tuned in?
Yeah.
And people are like, oh, well, why would you ever make
the driver monitoring policy less aggressive?
Why would you always not keep it at its most aggressive?
Because then people are just going to get fatigue from it.
Yes.
When they get annoyed, you want them.
Yeah.
You want the experience to be pleasant.
Obviously, I want the experience to be pleasant,
but even just from a straight up safety perspective,
if you alert people when they look around and they're like, why is this thing alerting me?
There's nothing I could possibly hit right now. People will just learn to tune it out.
People will just learn to tune it out, to put weights on the steering wheel, to do whatever,
to overcome it. And remember that you're always part of this adaptive system. So all I can really say about how this scales going forward is, yeah, it's something we have
to monitor for.
We don't know.
This is a great psychology experiment at scale, like we'll see.
Yeah, it's fascinating.
Track it.
And making sure you have a good understanding of attention is a very key part of that psychology
problem.
Yeah.
I think you and I probably come to it differently, but to me,
it's a fascinating psychology problem to explore, something much deeper than just driving.
It's such a nice way to explore human attention and human behavior, which is why, again,
we've probably both criticized Mr. Elon Musk on this one topic, from different
avenues. So both offline and online, I had little chats with Elon,
and like, I love human beings as a computer vision problem, as an AI problem. It's fascinating.
He wasn't so much interested in that problem.
It's like, in order to solve driving,
the whole point is you want to remove the human
from the picture.
And it seems like you can't do that quite yet.
Eventually yes, but you can't quite do that yet.
So this is the moment where you can't yet say
"I told you so" to Tesla.
But it's getting there, because I don't know if you've seen this:
there's some reporting that they're in fact starting to do driver monitoring.
Yeah, they shipped the model in shadow mode.
Though with, I believe, only a visible light camera. It might even be fisheye.
It's like a low resolution.
A low resolution visible light.
I mean, to be fair, that's what we have in the EON as well.
Our last generation product.
This is the one area where I can say our hardware
is ahead of Tesla.
The rest of our hardware is way, way behind,
but our driver monitoring camera.
So you think... I think on the Third Row Tesla podcast,
or somewhere else, I've heard you say that
obviously eventually they're
going to have driver monitoring.
I think what I've said is Elon will definitely ship driver monitoring before he ships level
five.
Before level five. And I'm willing to bet 10 grand on that.
And you bet 10 grand on that.
I mean, now no one will take the bet. But before, maybe someone would have. I should have gotten
my money.
Yeah.
It's an interesting bet.
I think, I think you're right, actually, on a human level, because he's
made the decision. Like, he said that driver monitoring is the wrong way to go.
But, like, you have to think of him as a human, as a CEO. I think that's the right thing to say when, like, sometimes you have
to say things publicly that are different from what you actually believe, because when you're
producing a large number of vehicles and the decision was made not to include the camera,
like, what are you supposed to say? Like, our cars don't have the thing that I think is right to have?
Yep, like, our cars don't have the thing that I think is right to have.
It's an interesting thing. But, like, on the other side, as a CEO, I mean, it's something you could probably speak to as a leader. I think about him as a human:
to publicly change your mind on something, how hard is that? Well, especially when assholes like George Hotz say, "I told you so."
All I will say is I am not a leader, and I am happy to change my mind.
And I will... yeah, I do.
I think Elon will come up with a good way to make it psychologically
okay for him.
Well, it's such an important thing, man. Especially for a first-principles
thinker, because he made a decision that driver monitoring
is not the right way to go, and I could see that decision. And I could even make that
decision. I was on the fence too. Like, it's not as if driver monitoring is such an obvious,
simple solution to the problem of attention. It's not obvious to me that just by putting
a camera there, you solve things.
You have to create an incredible compelling experience, just like you're talking about.
I don't know if it's easy to do that.
It's not at all easy to do that, in fact, I think.
So as a creator of a car that's trying to create a product that people love, which is what
Tesla tries to do, right?
It's not obvious to me that, you know,
as a design decision, whether adding a camera
is a good idea.
From a safety perspective either. Like,
in the human factors community,
everybody says that you should obviously have
driver sensing, driver monitoring.
But that's like saying it's obvious that as parents you shouldn't let your kids go
out at night. But, okay, they're still going to find ways to do drugs. Yeah, you have to
also be good parents. So it's much more complicated than just "you need to have driver monitoring."
I totally disagree. Okay, if you have a camera there and the camera is watching the
person, but it never throws an alert, they'll never think about it.
Right?
The driver monitoring policy that you choose to, how you choose to communicate with the
user is entirely separate from the data collection perspective.
Right. So, you know, it's one thing to,
like, tell your teenager they can't do something.
It's another thing to, like, gather the data
so you can make informed decisions.
That's really interesting.
But you have to make that... that's the interesting thing about cars.
It's even true with Comma AI: you don't have to manufacture the thing into the car.
You have to make a decision that anticipates the right strategy long term.
So you have to start collecting the data and start making decisions.
I started it three years ago.
I believe that we have the best driver monitoring solution in the world.
I think that when you compare it to Super Cruise's,
the only other one that I really know of that's shipped,
ours is better.
What do you like and not like about Super Cruise?
I mean, I had a few.
With Super Cruise, the sun would be shining through the window,
would blind the camera, and it would say
I wasn't paying attention
when I was looking completely straight.
I couldn't reset the attention with a steering wheel
touch, and Super Cruise would disengage.
Like I was communicating to the car,
I'm like, look, I am here, I am paying attention,
why are you really gonna force me to disengage?
And it did.
So it's a constant conversation with the user,
and yeah, there's no way to ship a system
like this if you can't OTA.
We're shipping a new one every month.
Sometimes we test it with our users on Discord.
Like, sometimes we make the driver monitoring
a little more aggressive and people complain,
sometimes they don't.
We want it to be as aggressive as possible
where people don't complain and it doesn't feel intrusive.
So being able to update the system over the air
is an essential component.
I mean, to me, that is the biggest innovation of Tesla:
it made people realize that over-the-air updates are essential.
Yeah.
I mean, was that not obvious from the iPhone?
The iPhone was the first real product that OTA'd, I think.
Was it actually, that's brilliant, you're right.
I mean, the game consoles used to not, right?
The game consoles were maybe the second thing that did.
Well, I didn't really think about it: one of the amazing features
of a smartphone isn't, like... the touchscreen
isn't the thing.
It's the ability to constantly update.
Yeah, it gets better.
It gets better.
I love my iOS 14.
Well, one thing that I probably disagree with you on, on driver monitoring, is you said that it's easy. I mean,
you tend to say stuff is easy. I guess you said it's easy relative to the external perception problem.
So can you elaborate why you think it's easy?
Feature engineering works for driver monitoring.
Feature engineering does not work for the external.
So human faces, and the movement of human faces and head and body,
are not as variable as the
external environment.
Yes, and there's another big difference as well.
The reliability of a driver monitoring system doesn't actually need to be that high.
The uncertainty, if you have something that's detecting whether the human's paying attention
and it only works 92% of the time, you're still getting almost all the benefit of that because
the human, like you're training the human, right?
You're dealing with a system that's really helping you out.
It's a conversation.
It's not like the external thing where, guess what?
If you swerve into a tree, you swerve into a tree, right?
Like, you get no margin for error.
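George's 92% point can be made concrete with a toy expected-value calculation. The 10% base inattention rate below is a number I invented for illustration; only the 92% comes from the conversation.

```python
# A monitor that catches inattention 92% of the time removes 92% of the
# driver's tuned-out exposure, because it nudges a human rather than
# steering the car. The 10% base rate is invented for illustration.

p_inattentive = 0.10               # assumed fraction of time the driver is tuned out
p_detect = 0.92                    # monitor reliability, per the conversation

residual = p_inattentive * (1 - p_detect)   # time still spent tuned out
print(round(residual, 3))          # 0.008: from 10% of the drive down to 0.8%
```

Contrast with perception for control, where a single 8% failure at the wrong moment is the tree.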
Yeah, I think that's really well put.
I think that's exactly right: compared to the external
perception and the control problem, driver monitoring is easier because the bar for success
is much lower. Yeah, but I still think the human face is more complicated, actually,
than the external environment. But for driving, you don't give a damn.
I don't need something that complicated.
It just has to communicate the idea to the human that I want to communicate, which is: your system might mess up here, you've got to pay attention.
Yeah. That's my love and fascination, the human face. It feels like this is a nice place to create products that create an experience in the car.
It feels like there should be more, richer experiences in the car.
That's an opportunity for something like Comma AI, or just any kind of system, like
a Tesla or any of the autonomous vehicle companies,
is because software is, there's much more sensors
and so much is running on software
and you're doing machine learning anyway.
There's an opportunity to create
totally new experiences that we're not even anticipating.
You don't think so.
Now.
You think it's a box that gets you from A to B
and you want to do it chill?
Yeah, I mean, I think as soon as we get to level three on highways, okay, enjoy your
Candy Crush, enjoy your Hulu, enjoy your, you know, whatever, whatever.
Sure, you can look at screens, basically.
Versus right now, what do you have?
Music and audio books.
So level three is where you can kind of disengage for stretches of time. You think level three is possible?
Like on the highway, going for a hundred miles and you can just go to sleep.
Oh yeah.
Sleep.
So again, I think it's really all on a spectrum.
I think that being able to use your phone while you're on the highway, and this all
being okay, and being aware that
the car might alert you when you have five seconds to basically take over.
So the five second thing is you think it's possible?
Yeah, I think it is.
Oh yeah.
Not in all scenarios.
Right.
Some scenarios is not.
It's the whole risk thing that you mentioned is nice.
It's to be able to estimate like how risky is this situation?
That's really important to understand. One other thing you mentioned comparing comma
and autopilot is that something about the haptic feel
of the way comma controls the car,
when things are uncertain,
like it behaves a little bit more uncertain
when things are uncertain.
That's an exciting point.
And Autopilot is much more confident, always, even when it's uncertain, until it runs into trouble.
That's a funny thing. I actually mentioned that to Elon, I think, the first time we talked.
He was intrigued... it's like, communicating uncertainty.
I guess Comma doesn't really communicate uncertainty explicitly; it communicates it through haptic feel.
Like, what's the role of communicating uncertainty?
Do you think?
We do some stuff explicitly.
Like, we do detect the lanes when you're on the highway,
and we'll show you how many lanes we're using to drive with.
You can look at where it thinks the lanes are.
You can look at the path.
And we want to be better about this.
We're actually hiring, want to hire some new UI people.
UI people, you mentioned that. Because it's such a... it's a UI problem too, right?
It is. We have a great designer now, but, you know, we need people who are just going to,
like, build and debug these UIs. Qt people.
Qt. Is that what the UI is done with, Qt?
We're moving the new UI to Qt. C++ Qt.
Which Tesla is on too.
Yeah. We had some React stuff in there.
React JS or just React? React has its own language, right? React Native?
React Native. React is a JavaScript framework.
Yeah, it's all based on JavaScript. But, you know, I like C++.
What do you think about
Dojo with Tesla,
their foray into what appears to be
specialized hardware for training neural networks?
I guess it's something, maybe you can correct me
from my shallow looking at it,
it seems like something like what Google did with TPUs,
but specialized for
driving data.
I don't think it's specialized for driving data. It's just legit, just a TPU. They want to go the Apple way: basically everything required in the chain is done in-house. Well, so you have a problem right now, and this is
one of my concerns, I really would like to see somebody deal with this if anyone out there is doing it.
I'd like to help them if I can.
You basically have two options right now to train.
Your options are Nvidia or Google.
So Google is not even an option.
Their TPUs are only available in Google Cloud.
Google has absolutely onerous terms of service restrictions.
They may have changed it, but back then, Google's terms of service
said explicitly you are not allowed to use Google Cloud ML
for training autonomous vehicles or for doing anything that competes with Google
without Google's prior written permission.
Well, okay.
I mean Google is not a platform company.
I wouldn't touch TPUs with a 10-foot pole. So that leaves you with the monopoly,
Nvidia.
So, I mean, you're not a fan of...
Well, look, I was a huge fan of Nvidia in 2016. Jensen came, sat in the car.
Cool guy, when the stock was $30 a share. Nvidia's stock has skyrocketed.
I witnessed a real change in who was in management over there in 2018. And now they are,
let's exploit, let's take every dollar we possibly can out of this ecosystem.
Let's charge $10,000 for A100s because we know we got the best shit in the game.
And let's charge $10,000 for an A100 when it's really not that different from a 3080, which
is $699. The margins that they are making off of those high-end chips are so high that,
I mean, I think they're shooting themselves in the foot just from a business perspective,
because there's a lot of people talking like me now, who are like,
somebody's got to take Nvidia down.
Yeah, whereas they could dominate. Nvidia could be the new Intel.
Yeah, to be inside everything, essentially.
And yet, the winners in certain spaces, like, you know,
autonomous driving, the winners,
Only the people who are like
desperately falling back and trying to catch up and have a ton of money, like the big automakers
are the ones interested in partnering with Nvidia. Oh, and I think a lot of those things are
going to fall through. If I were Nvidia, sell chips. Sell chips at a reasonable markup.
To everybody. To everybody without any restrictions.
Without any restrictions. Intel did this.
Look at Intel.
They had a great long run.
Nvidia is trying to turn their, they're like trying to productize their chips way too much.
They're trying to extract way more value than they can sustainably.
Sure, you can do it tomorrow. Is it going to up your share price?
Sure, if you're one of those CEOs, it's like, how much can I strip-mine this company?
And that's what's weird about it, too.
Like the CEO is the founder.
It's the same guy.
I mean, I still think Jensen's a great guy.
It is great.
Why do this?
You have a choice.
You have a choice right now.
Are you trying to cash out?
Are you trying to buy a yacht?
If you are fine.
But if you're trying to be
the next huge semiconductor company, cell chips.
Well, the interesting thing about Jensen
is he is a big vision guy.
So he has a plan for 50 years down the road.
So it makes me wonder like,
how does price gouging fit into it?
Yeah, how does that,
like it doesn't seem to make sense to plan.
I worry that he's listening to the wrong people.
Yeah, that's the sense I have to sometimes because I
despite everything I think Nvidia
is an incredible company. Well, one, so I'm deeply grateful to Nvidia for the products they've created.
Me too.
Right?
And so, the 1080 Ti was a great GPU.
I still have a lot of them.
Still is.
Yeah.
But at the same time, it just feels like, it feels like you don't want to put all your stock
in Nvidia.
And so what Elon is doing, what Tesla is doing with Autopilot and Dojo, is the Apple way,
because they're not going to share Dojo with George Hots.
I know.
They should sell that chip.
Oh, they should sell, even their accelerator.
Their accelerator.
It's in all the cars.
The 30 watt one.
Sell it.
Why not?
So open it up.
Make money. Why are they supposed to be a car company?
Well, if you sell the chip, here's what you get.
Yeah.
You make money on the chips.
It doesn't take away from your cars.
You're going to make some money, free money.
And also, the world is going to build an ecosystem of tooling for you.
You're not going to have to fix the bug in your tanh layer.
Someone else already did.
Well, that's an interesting question.
I mean, that's the question Steve Jobs asked.
That's the question Elon Musk is perhaps asking: do you want Tesla stuff
inside other vehicles, potentially inside, like, an iRobot vacuum cleaner?
Yeah. I think you should decide where your advantages are.
I'm not saying Tesla should start selling battery packs to automakers, because with battery
packs, they are straight up in competition with you.
If I were Tesla, I'd keep the battery technology totally in-house. It's ours, we make batteries.
But the thing about the Tesla TPU is anybody can build that.
It's just a question of, you know,
are you willing to spend the, you know, the money?
It could be a huge source of revenue potentially.
Are you willing to spend the $100 million?
Anyone can build it.
And someone will.
And a bunch of companies now are starting
trying to build AI accelerators.
Somebody's going to get the idea right.
And yeah, hopefully they don't get greedy
because they'll just lose to the next guy who
finally does. And then eventually the Chinese are going to make knockoff Nvidia chips, and that's...
From your perspective, I don't know if you're also paying attention... to stay on Tesla for a moment,
Elon Musk has talked about a complete rewrite of the neural net that they're using,
which seems to, again, I'm paying half attention, but it seems to involve basically
a kind of integration of all the sensors to where
it's a four dimensional view, you have a 3D model
of the world over time, and then you can,
I think it's done both, actually,
so the neural network is able to,
you know, in a more holistic way, deal with the world and make predictions and so on, but also
to make the annotation task, you know, easier, like you can annotate the world in one
place and it kind of distributes itself across the sensors and across the different, like,
the hundreds of tasks that are involved
in the HydraNet. What are your thoughts about this rewrite? Is it just like some details that are
kind of obvious, that are steps that should be taken, or is there something fundamental that could
challenge your idea that end-to-end is the right solution? We're in the middle of a big rewrite
now as well. We haven't shipped a new model in a bit. What kind? We're going from 2D to 3D. Right now, all our stuff, like, for example, when the car pitches back, the lane lines also pitch back, because we're assuming the flat-world hypothesis. The new models do not do this. The new models output everything in 3D. But there's still no annotation. So the 3D is more about the output?
Yeah.
We have Zs in everything.
Yeah.
We added Zs.
We unified a lot of stuff as well.
We switched from TensorFlow to PyTorch.
My understanding of what Tesla's thing is, is that their annotator now annotates
across the time dimension.
Uh, I mean, cute.
Am I building an annotator?
I find their entire pipeline.
I find your vision, I mean, the vision of end-to-end, very compelling,
but I also like the engineering
of the data engine that they've created. In terms of supervised learning pipelines,
that thing is damn impressive. Basically, the idea is that you have hundreds of thousands of people doing data collection for you just by driving. So that's kind of similar to the Comma.ai model.
And you're able to mine that data based on the kind of
edge cases you need.
I think it's harder to do with end-to-end learning, the mining of the right edge cases. That's where feature engineering is actually really powerful, because we humans are able to do this kind of mining a little better. But as we know, there's obvious constraints and limitations to that idea.
Karpathy just tweeted, he's like: you get really interesting insights if you sort your validation set by loss and look at the highest-loss examples.
Yeah. So yeah, I mean, you can do that. We have a little data-engine-like thing for training our segnet. It's not fancy, it's just like: okay, train the new segnet, run it on 100,000 images, and now take the thousand with the highest loss, select 100 of those by human, get those ones labeled, retrain, do it again. So it's a much less well-engineered data engine.
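The loop described here — train, score a big unlabeled pool, surface the highest-loss images, have a human pick a subset to label, retrain — is a classic active-learning pattern. A minimal sketch (all function names and the toy demo are hypothetical illustrations, not comma's actual code):

```python
import random

def data_engine_round(model, labeled, pool, train, loss_fn, select, label,
                      pool_sample=100_000, top_k=1000, pick_k=100):
    """One round of the loop: train, score a sample of the unlabeled pool,
    keep the top_k highest-loss examples, let a human pick pick_k of those
    to label, then retrain. Call repeatedly to 'do it again'."""
    train(model, labeled)                                   # train the new net
    sample = random.sample(pool, min(pool_sample, len(pool)))
    hardest = sorted(sample, key=lambda x: loss_fn(model, x), reverse=True)[:top_k]
    picked = select(hardest, pick_k)                        # human-in-the-loop step
    labeled.extend(label(x) for x in picked)                # get those ones labeled
    train(model, labeled)                                   # retrain
    return labeled

# Toy demo: "loss" is just the value itself, so the round should pull
# the largest values out of the pool first.
labeled = data_engine_round(
    model=None, labeled=[], pool=list(range(1000)),
    train=lambda m, d: None,          # stand-in for a real training step
    loss_fn=lambda m, x: x,           # stand-in for a real per-example loss
    select=lambda xs, k: xs[:k],      # stand-in for human selection
    label=lambda x: x,                # stand-in for human labeling
    pool_sample=1000, top_k=50, pick_k=10,
)
```

In the real setting, `train`, `loss_fn`, `select`, and `label` would be a segmentation network's training step, its per-image loss, a human review UI, and a labeling tool.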
Yeah, you can take these things really far, and it is impressive engineering. And if you truly need supervised data for a problem, yeah, things like data engine are the high end of that. What is attention? Is a human paying attention? I mean, we're going to probably build something that looks like data engine to push our driver monitoring further. But for driving itself, you have it all annotated beautifully by what the human does.
So, yeah, this, I mean, applies to driver attention as well. Do you want to detect the eyes? Do you want to detect blinking and pupil movement? Do you want to detect all the, like, face alignment, landmark detection, and so on, and then do kind of reasoning based on that? Or do you want to take the entirety of the face over time and do end-to-end? I mean, it's obvious that eventually you have to do end-to-end, with some calibration and some fixes and so on. But it's like, I don't know when that's the right move.
Even if it's end-to-end, there actually is no way around it: you have to supervise that with humans.
Whether a human is paying attention or not
is a completely subjective judgment.
Like, you can try to, like, automatically do it with some stuff, but if I record a video of a human, I don't have true annotations anywhere in that video. The only way to get them is with, you know, other humans labeling it, really.
Well, I don't know. So if you think deeply about it,
you might be able to, depending on the task, discover self-annotating things. Like, you know, you can look at, like, the steering wheel or something like that. You can discover little moments of lapses of attention.
Yeah.
I mean, that's where psychology comes in.
Is there an indicator?
Because you have so much data to look at.
So you might be able to find moments when there's, like, just inattention. And even with smartphone use, if you want to detect smartphone use, you can start to zoom in.
That's the gold mine, sort of, for Comma.ai. And Tesla is doing this too, right? They're doing annotation based on self-supervised learning too.
It's just a small part of the entire picture.
That's kind of the challenge of solving a problem in machine learning: if you can discover self-annotating parts of the problem. Our driver monitoring team is half a person right now. Once we have two people on that team, I definitely want to look at self-annotating stuff for, yeah, for attention.
Let's go back for a sec to Comma. And, you know, for people who are curious to try it out, how do you install a Comma in, say, a 2020 Toyota Corolla? Or, like, what are the cars that are supported, what are the cars that you recommend, and what does it take? You have a few videos out, but maybe through words, can you explain what it takes to actually install the thing? So, we support the 2020 Corolla, great choice; the 2020 Sonata, it's using the stock
longitudinal, it's using just our lateral control, but it's a very refined car. Their
longitudinal control is not bad at all. So yeah, Corolla, Sonata, or if you're willing
to get your hands a little dirty and look in the right places on the internet, the Honda Civic is great, but you're going to have to install a modified EPS firmware in order to get a little bit more torque.
And I can't help you with that. Comma does not officially endorse that.
But we have been doing it. We didn't ever release it.
We waited for someone else to discover it. And then, you know...
And you have a Discord server where there's a very active developer community.
Yeah. So, depending on the level of experimentation you're willing to do, that's the community. If you just want to buy it and you have a supported car, yeah, it's
Ten minutes to install.
There's YouTube videos, it's IKEA furniture level.
If you can set up a table from IKEA,
you can install a Comma 2 in your supported car,
and it will just work.
Now you're like, oh, but I want this high-end feature,
or I want to fix this bug, okay?
Welcome to the developer community.
So what, if I wanted to, this is something I asked you
offline, like a few months ago.
If I wanted to run my own code,
so use comma as a platform and try to run something like OpenPilot,
what does it take to do that?
So there's a toggle in the settings called enable SSH.
And if you toggle that, you can SSH into your device,
you can modify the code, you can upload whatever code
you want to it.
There's a whole lot of people.
So about 60% of people are running stock comma,
about 40% of people are running forks.
And there's a community of, there's
a bunch of people who maintain these forks
and these forks support different cars
or they have different toggles.
We try to keep away from the forks with toggles that disable driver monitoring, but some people might want that kind of thing. And, like, yeah, it's your car. I'm not here to tell you what to do. Well, we do ban some: if you're trying to subvert safety features, you're banned from our Discord. I don't want anything to do with you. But there's some forks doing that.
Yeah, got it.
So you encourage responsible forking.
Yeah. Some people, like... there's forks that will do... some people just like having a lot of readouts on the UI, like a lot of flashing numbers, so there's forks that do that. Some people don't like the fact that it disengages when you press the gas pedal, so there's forks that disable that.
Got it.
Now, the stock experience is what? Like, so it does both lane keeping and longitudinal control altogether, so it's not separate like it is in Autopilot?
No, so, okay.
Some cars, we use the stock longitudinal control. We don't do the longitudinal control in all the cars. In some cars, the ACCs are pretty good. It's the lane keep that's atrocious in anything except for Autopilot and Super Cruise.
But you know, you just turn it on and it works.
What does this engagement look like?
Yeah. So, I mean, I'm very concerned about mode confusion. I've experienced it on Super Cruise and Autopilot, where, like, Autopilot disengages, I don't realize that the ACC is still on, the lead car moves slightly over, and then the Tesla accelerates to, like, whatever my set speed is, super fast, and I'm like, what's going on here?
We have engaged and disengaged. And this is similar to, my understanding, a plane: either the pilot is in control or the copilot is in control. We have the same kind of transition system: either openpilot is engaged or openpilot is disengaged. Engaged with cruise control; disengaged with either gas, brake, or cancel.
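The single engaged/disengaged bit described here, with no half-on modes, can be sketched as a two-state machine (an illustration of the idea only, not openpilot's actual code):

```python
class Engagement:
    """Two-state sketch: engage only on the cruise-control button;
    any of gas, brake, or cancel disengages everything at once,
    so no half-on ACC mode is left running to confuse the driver."""

    def __init__(self):
        self.engaged = False

    def on_event(self, event):
        if event == "cruise":
            self.engaged = True
        elif event in ("gas", "brake", "cancel"):
            self.engaged = False
        return self.engaged

state = Engagement()
assert state.on_event("cruise") is True    # engaged with cruise control
assert state.on_event("gas") is False      # gas disengages everything
```

The design point is that there is exactly one bit of mode, unlike the Autopilot-plus-ACC layering that caused the confusion described above.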
Let's talk about money. What's the business strategy for Comma?
Profitable.
Oh, nice. You're good.
Congratulations. What it's basically selling, we should say: the Comma two costs a thousand bucks.
Two fifty for the interface to the car as well. It's twelve hundred, all said and done.
Nobody is usually up front like this.
You gotta add the tack on, right?
Yeah. I'm not gonna lie to you. Trust me, it will add $1,200 of value to your life.
Because it's still super cheap.
30 days, no questions asked, money back, guarantee,
and prices are only going up.
You know, if there ever is future hardware, it'll cost a lot more than $1,200.
So a Comma three is in the works?
It could be. All I will say is future hardware is going to cost a lot more than the current hardware.
Yeah, the people that use, the people I've spoken with that use comma, that use open
pilot, they, first of all, they use it a lot.
So people that use it, they fall in love with it.
Oh, our retention rate is insane.
It's a good sign.
It's a really good sign.
70% of Comma 2 buyers are daily active users.
Yeah, it's amazing.
Oh, also, we don't plan on stopping selling the Comma 2.
Like, it's, you know...
So whatever you create that's beyond the Comma 2, it would potentially be a phase shift? Like, it's so much better that, like, you could use the Comma 2 and you can use Comma whatever.
It depends what you want.
It's 3.41, 42.
Yeah.
You know, autopilot hardware 1 versus hardware 2.
The Comma 2 is kind of like hardware 1.
Got it, got it. You can still use it. I think I heard you talk about retention rate with the VR headsets, that the average number of uses is just once.
Yeah. That's sad. I mean, it's such a fascinating way to think about technology. And this is a really, really good sign. And the other thing that people say about Comma is, like, they can't believe they're getting this for $1,000, right? It seems like some kind of steal.
So, but in terms of, like, long-term business strategy... so it's currently in, like, a thousand-plus cars? Twelve hundred?
More.
So, yeah, dailies is about 2,000, weeklies is about 2,500, monthlies is over 3,000.
Wow.
We've grown a lot since we last talked.
Is the goal, like can we talk crazy for a second?
I mean, what's the goal to overtake Tesla?
Let's talk.
Okay, so.
I mean, Android did overtake iOS.
Yeah, that's exactly it, right? So they did it. I actually don't know the timeline of that one. But let's talk, because everything is in alpha now. The Autopilot, you could argue, is in alpha in terms of the big mission of autonomous driving, right? So is your goal to overtake, to be in millions of cars, essentially?
Of course.
Where would it stop?
Like, it's open source software.
It might not be millions of cars with a piece of comma hardware,
but yeah, I think openpilot at some point will cross over Autopilot in users, just like Android crossed over iOS.
How does Google make money from Android?
It's complicated. Their own devices make money.
Google makes money by just kind of having you on the internet.
Yes, Google searches built-in, Gmail is built-in. Android is just a shell for the rest of Google's ecosystem kind of.
Yeah, but the problem is... I mean, Android is a brilliant thing. Android arguably changed the world, so there you go. You can feel good, ethically speaking, but as a business strategy, it's questionable.
So hardware.
So hardware.
I mean, it took Google a long time to come around to it,
but they are now making money on the pixel.
You're not about money, you're more about winning.
Yeah, of course. But if only 10% of openpilot devices come from Comma.ai, that still makes a lot.
That is still, yes, that is a ton of money for our company.
But can't somebody create a better Comma using openpilot? Or are you basically saying you'll outcompete them?
Well, can they? Can you create a better Android phone than the Google Pixel? I mean, you can, but, like, you know.
I love that.
So you're confident, like, you know what the hell you're doing.
Yeah.
It's, uh, competence and merit.
I mean, our money comes from... we're a consumer electronics company.
Yeah.
And put it this way.
So we've sold, like, 3,000 Comma twos; um, 2,500 right now.
And like, okay, we're probably going to sell 10,000 units next year, right?
10,000 units, at even just $1,000 a unit, okay, we're at 10 million in revenue. Get that up to 100,000, maybe double the price of the unit, now we're talking, like, 200 million revenue. That's actually, like, real money.
One of the rare semi-autonomous or autonomous vehicle companies that's actually making money.
Yeah.
Yeah.
You know, if you look at a model... we were just talking about this yesterday. If you look at a model and you're testing, like, you're A/B testing your model, and in one branch of the A/B test the losses go down very fast in the first five epochs, that model is probably going to converge to something considerably better than the one with the losses going down slower. Why do people think this is going to stop? Why do people think one day there's going to be a crossover, like, "Waymo's eventually going to surpass you guys"? Well, they're not.
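The heuristic in this anecdote — the A/B branch whose loss falls faster in the first five epochs usually converges to the better model — can be sketched like this (the epoch cutoff comes from the anecdote; everything else is a hypothetical illustration):

```python
def faster_early_branch(losses_a, losses_b, early_epochs=5):
    """Return 'A' or 'B' depending on which branch's training loss
    dropped more within the first `early_epochs` epochs."""
    def early_drop(losses):
        window = losses[:early_epochs]
        return window[0] - min(window)
    return "A" if early_drop(losses_a) >= early_drop(losses_b) else "B"

# Branch A's loss plunges early; branch B's creeps down slowly.
assert faster_early_branch(
    [1.0, 0.5, 0.3, 0.25, 0.22, 0.21],
    [1.0, 0.9, 0.8, 0.75, 0.70, 0.65],
) == "A"
```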
Do you see, like, a world where a Tesla, or a car like a Tesla, would be able to basically press a button and switch to openpilot? You know, like, load it in?
No, so I think so.
First off, I think that we may surpass Tesla in terms of users.
I do not think we're going to surpass Tesla ever in terms of revenue.
I think Tesla can capture a lot more revenue per user than we can, but this mimics the Android
iOS model exactly.
There may be more Android devices, but there's a lot more iPhones than Google Pixels.
So I think there'll be a lot more Tesla cars sold than pieces of Comma hardware.
And then as far as a Tesla owner being able to switch to openpilot: do iPhones run Android?
No, but you can if you really want to do it, but it doesn't really make sense.
Like it's not.
It doesn't make sense.
Who cares?
What about if a large company, in the automaker space, Ford, GM, Toyota, came to George Hotz, or, in the tech space, Amazon, Facebook, Google, came with a large pile of cash? Would you consider being purchased? Do you see that as at all possible?
Not seriously, no.
I would probably see how much shit they'd entertain from me.
And if they're willing to like jump through a bunch of my hoops, then maybe, but like,
no, not the way that M&A works today.
I mean, we've been approached.
And I laugh in these people's faces.
I'm like, are you kidding?
Yeah.
You know, because it's so, it's so demeaning.
The M&A people are so demeaning to companies.
They treat the startup world
as their innovation ecosystem.
And they think that I'm cool with going along with that.
So I can have some of their scam fake fed dollars.
You know, fedcoin.
What am I gonna do with more fedcoin?
Fedcoin, man.
I love that.
So that's the cool thing about podcasting, actually. People criticize, I don't know if you're familiar with it, Spotify giving Joe Rogan $100 million. I talked about that. And, you know, despite all the shit that people are talking about Spotify, people understand that podcasters like Joe Rogan know what the hell they're doing.
Yeah.
So they give them money and say, just do what you do.
And like the equivalent for you would be like, George, do what the hell you do because you're
good at it.
Try not to murder too many people.
Like try, like there's some kind of common sense things like just don't go on a weird rampage of...
Yeah, it comes down to what companies I could respect, right?
Right.
You know, could I respect GM? Never. No, I couldn't.
I mean, could I respect like a Hyundai?
More so, right?
That's a lot closer.
Toyota? What's yours?
Nah, no. It's, like, the Koreans are the way. I think that, you know, the Japanese, the Germans, the US, they all think they're too great to be able to.
What about the tech companies? Apple?
Of the tech companies that I could respect, Apple is the closest. Yeah, I mean, I could never... It would be ironic.
It would be ironic if Comma.ai is acquired by Apple.
I mean, Facebook, look, I quit Facebook 10 years ago because I didn't respect the business
model.
Um, Google has declined so fast in the last five years.
What are your thoughts about Waymo, its present and its future?
Let me start by saying something nice, which is: I've visited them a few times and I have ridden in their cars, and the engineering that they're doing, both the research and the actual development and engineering, and the scale they're actually achieving by doing it all themselves, is really impressive.
And the balance of safety and innovation and like the cars work really well for the routes
they drive.
Like, they drive fast, which was very surprising to me. Like, it drives the speed limit or faster than the speed limit; it goes. And it works really damn well, and the interface is nice.
In Chandler, yeah.
Yeah, in Chandler, a very specific environment.
So, you know, it gives me enough material in my mind to push back against the madmen of the world, like a George Hotz, to be like... because you kind of imply there's zero probability they're going to win. And after having ridden in it, to me it's not zero.
Oh, it's not for technology reasons.
Bureaucracy?
No, it's worse than that.
It's actually for product reasons, I think.
Oh, you think they're just not capable of creating an amazing product?
No, I think the product that they're building doesn't make sense.
So a few things.
You say the Waymos are fast. Benchmark a Waymo against a competent Uber driver.
Right.
The Uber driver's faster?
It's not even about speed.
It's the thing you said, it's about the experience of being stuck at a stop sign because pedestrians are crossing nonstop.
I like when my Uber driver doesn't come to a full stop at the stop sign.
You know? And so let's say the Waymos are 20% slower than an Uber.
You can argue that they're going to be cheaper.
And I argue that users already have the choice to trade off money for speed.
It's called Uber Pool.
I think it's like 15% of rides at Uber are Uber Pool, right?
Users are not willing to trade off money for speed.
So the whole product that they're building is not going to be competitive with traditional
ride sharing networks.
Like and also whether there's profit to be made depends entirely on one company having
a monopoly.
I think that the level four autonomous ride-sharing vehicle market is going to look a lot like the scooter market.
If even the technology does come to exist, which I question,
who's doing well in that market? It's a race to the bottom.
Well, it could be closer to, like, an Uber and a Lyft, where it's just one or two players.
Well, the scooter people have given up trying to market scooters as a practical means of transportation, and now they're just like: they're super fun to ride. Look at Wheels, I love those things, and they're great on that front. But from an actual transportation-product perspective, I do not think scooters are viable, and I do not think level four autonomous cars are viable.
Let's play a fun experiment. Let's do Tesla first, and then do Waymo. All right: if Elon Musk took a vacation for a year, he just said, screw it, I'm gonna go live on an island, no electronics, and the board decides that we need to find somebody to run the company, and they decide that you should run the company for a year. How do you run Tesla differently?
I wouldn't change much.
Do you think they're on the right track?
I wouldn't change... I mean, I'd make some minor changes, but even my debate with Tesla about end-to-end versus segnets, like, that's just software. Who cares? Right? It's not like you're doing something terrible with segnets; you're probably building something that's at least gonna help you debug the end-to-end system a lot.
Right.
It's very easy to transition from what they have to, like, an end-to-end kind of thing.
And then I presume you would, in the Model Y, or maybe in the Model 3, start adding driver sensing with infrared?
Yes, I would add an infrared camera and infrared lights right away to those cars.
And start collecting that data and all that kind of stuff?
Yeah, very much. I think they're already kind of doing it. It's an incredibly minor change. If I actually were in charge of Tesla, first off, I'd be horrified that I wouldn't be able to do as good a job as Elon, and then I would try to understand the way he's done things before.
You would also have to take over his Twitter.
So.
I don't tweet.
Yeah, what's your Twitter situation?
Why are you so quiet on Twitter?
I mean, like what's your social network presence like?
Because on Instagram, you do live streams, you understand the music
of the internet, but you don't always fully engage into it.
You're part time.
Why do you stay off Twitter?
Yeah, I mean, Instagram is a pretty place.
Instagram is a beautiful place.
It glorifies beauty.
I like Instagram's values as a network.
Twitter glorifies conflict, glorifies, like, taking shots at people. And it's like, Twitter and Donald Trump: they're perfect for each other.
So Tesla's on the right track, in your view.
Yeah.
Okay, so let's, like, really try this experiment.
If you ran Waymo... let's say, I don't know if you agree, but they seem to be at the head of the pack of the, what would you call that approach? It's not necessarily lidar-based, because it's not about the lidar.
Level four robot taxi.
Level four robot taxi, all in, before making any revenue. So they're probably at the head of the pack.
If you were asked: hey, George, can you please run this company for a year? How would you change it?
I would go get Anthony Levandowski out of jail, and I would put him in charge of the company.
Let's start break that apart.
Why do you want to destroy the company by doing that?
Or do you mean you like renegade style thinking that pushes,
that like throws away bureaucracy and goes to first principle thinking,
what do you mean by that?
I think Anthony Levandowski is a genius, and I think he would come up with a much better idea of what to do with Waymo than me.
So you mean that unironically, he is a genius?
Oh, yes, absolutely.
Without a doubt.
I mean, I'm not saying there's no shortcomings, but in the interactions I've had with him, yeah.
What?
He's also willing to take, like,
who knows what he would do with Waymo?
I mean, he's also out there, like,
far more out there than I am.
Yeah, his big risks.
What do you make of him?
I was going to talk to him on this podcast, and I was going back and forth. I'm such a gullible, naive human; like, I see the best in people. And I slowly started to realize that there might be some people out there that have multiple faces to the world.
They're deceiving and dishonest.
I still refuse to, I trust people,
and I don't care if I get hurt by it, but like,
you know, sometimes you have to be a little bit careful, especially platform-wise and podcast-wise.
What am I supposed to think? So you think he's a good person?
Oh, I don't know. I don't really make moral judgments.
It's difficult to...
I mean this about Waymo... actually, I mean that whole idea very non-ironically, about what I would do. The problem with putting me in charge of Waymo is that Waymo is already $10 billion in the hole.
Whatever Waymo does... look, Comma's profitable. Comma's raised $8.1 million.
That's small money.
I can build a reasonable consumer electronics company and succeed wildly at that
and still never be able to pay back Waymo's 10 billion.
So I think the basic idea with Waymo... well, forget the 10 billion, because they have some backing, but your basic thing is, like: what can we do to start making some money?
Well, no, I mean, my bigger idea is like, whatever the idea is that's going to save Waymo, I don't have it.
It's going to have to be a big-risk idea, and I cannot think of a better person than Anthony Levandowski to do it.
So that is literally what I would do as CEO of Waymo. I'd call myself a transitionary CEO, do everything I can to fix that situation up.
I can't see it.
Yeah.
Yeah, just, like, I can't do it. I mean, I can talk about how what I really want to do is just apologize for all those corny, you know, ad campaigns and be like: here's the real state of the technology.
Yeah. Like, I have several criticisms; I'm a little bit more bullish on Waymo than you seem to be. But one criticism I have is that it went into corny mode too early. Like, it's still a startup. It hasn't delivered anything. So it should be more renegade
and show off the engineering that they're doing,
which can be impressive as opposed
to doing these weird commercials of like
your friendly car company.
I mean, my biggest snipe at Waymo was always: that guy's a paid actor. That guy's not a Waymo user. He's a paid actor. Look here, I found his call sheet.
Do kind of what SpaceX is doing with the rocket launches: put the nerds up front, put the engineers up front, and, like, show failures too.
I just... I love SpaceX's approach. Yeah, the thing that they're doing is right, and it just feels right. We're all so excited to see them succeed.
Yeah.
I can't wait for when it fails, you know? Like, you lied to me? I want you to fail. You told me the truth? You were honest with me?
I want you to succeed.
Yeah.
Uh, yeah.
And that requires the, uh, renegade CEO, right?
I'm with you.
I'm with you.
I still have a little bit of faith in Waymo, for the renegade CEO to step forward. But it's not John Krafcik. It's not Chris Urmson. Those people may be very good at certain things, but they're not renegades.
Yeah. Because these companies are fundamentally, even though we're talking about billions of dollars and all these crazy numbers, they're still, like, early-stage startups.
I mean, if you are pre-revenue and you've raised 10 billion dollars, I have no idea. Like, this just doesn't work. It's against everything Silicon Valley. Where's your minimum viable product? Where's your users? Where's your growth numbers? This is traditional Silicon Valley. Why do you not apply it? Do you think you're too big to fail already? Like...
How do you think autonomous driving will change society?
So the mission is for comma to solve self-driving.
Do you have like a vision of the world of how it'll be different?
Is it as simple as A-to-B transportation? Or is there something more, because these are robots?
It's not about autonomous driving in and of itself; it's what the technology enables.
I think it's the coolest applied AI problem. I like it because it has a clear path to monetary value,
but as far as that being the thing that changes the world.
I mean, no. Like, there's cute things we're doing at Comma, like, who would have thought you could stick a phone on the windshield and it'll drive? But really, the product that we're building is not something that people were not capable of imagining 50 years ago. So no, it doesn't change the world on that front.
Could people have imagined the internet 50 years ago? Only true genius visionaries.
Yeah. Everyone could have imagined autonomous cars 50 years ago. It's like a car, but I don't drive it.
See, I have the sense, and I told you, my long-term dream is robots with whom you have deep connections. And there's different trajectories towards that. I've been thinking of launching a startup; I see autonomous vehicles as a potential trajectory to that. That's not the direction I would like to go, but I also see Tesla, or even Comma.ai, pivoting into robotics, broadly defined, at some stage, in the way that, as you were mentioning, nobody expected of the internet.
Let's solve... you know what I say at Comma about this? We can talk about this, but let's solve self-driving cars first.
We gotta stay focused on the mission.
Don't... you're not too big to fail.
However much I think Comma's winning... like, no, no, no, you're winning when you solve level five self-driving cars. And until then, you haven't won.
And, you know, again, you wanna be arrogant
in the face of other people?
Great.
You wanna be arrogant in the face of nature?
You're an idiot.
Right.
Stay mission focused.
Brilliantly put. Like I mentioned, thinking of launching a startup, I've been considering, actually before COVID I'd been thinking of, moving to San Francisco.
Oh, I wouldn't go there.
So why... well, and now I'm thinking about potentially Austin. And you're in San Diego now.
San Diego, come here.
So why, what, I mean, you're such an interesting human,
you've launched so many successful things.
What, why San Diego, what do you recommend?
Why not San Francisco?
Have you thought, so in your case,
San Diego with Qualcomm and Snapdragon,
I mean, that's an amazing combination.
But that wasn't really the why. No, Qualcomm was an afterthought. Qualcomm was a nice thing to think about. It's like, you can have a tech company here.
Yeah, good one. I mean, you know, I like Qualcomm, but no.
Um, why is San Diego better? Why does San Francisco suck?
Well, okay, so first off, we all kind of said, like, we want to stay in California.
People like the ocean, you know. California, for its flaws... it's like, a lot of the flaws of California are not flaws of California as a whole; they're much more San Francisco specific.
Yeah.
San Francisco.
So I think, first off, first-tier cities in general have stopped wanting growth.
Well, you have, like, in San Francisco, you know, the voting class always votes to not build more houses, because they own all the houses. And, like, well, you know, once people have figured out how to vote themselves more money, they're going to do it. It is so insanely corrupt. It is not balanced at all, political-party-wise. You know, it's a one-party city. And for all the discussion of diversity,
it completely lacks real diversity: of thought, of background, of approaches, of strategies, of ideas.
It's kind of a strange place.
It's the loudest people about diversity with the biggest lack of diversity.
Well, I mean, that's what they say, right?
It's the projection.
Projection?
Yeah.
Yeah, it's interesting.
And even people in Silicon Valley tell me that's like high up people, everybody is like,
this is a terrible place.
It doesn't make sense.
I mean, coronavirus is really what killed it.
San Francisco was the number one exodus during coronavirus.
We still think San Diego is a good place to be.
Yeah.
Yeah, I mean, we'll see.
We'll see what happens with California a bit longer term.
Like Austin's an interesting choice.
I don't really have anything bad to say about Austin, either, except for the extreme heat in the summer, which, you know, is very on the surface, right?
I think as far as like an ecosystem goes, it's cool.
I personally love Colorado.
I love how that's great.
Yeah, I mean, you have these states that are, you know, like, just way better run.
California is, you know, especially San Francisco, it's a lot of high horse and, like, yeah.
Can I ask you for advice to me and to others about
what it takes to build a successful startup?
Oh, I don't know. I haven't done that.
Talk to someone who did that.
Well, you know,
this is like another book of yours that I'll buy for $67, I suppose.
So there's, um,
one of these days, we'll sell out. Yeah, that's right.
Jail breaks are going to be a dollar and books are going to be 67.
How I Jailbroke the iPhone, by George Hotz.
That's right.
How I Jailbroke the iPhone, and You Can Too, in 21 Days.
God, okay. But wait, you are quite introspective. You have built a very unique company. I mean, not just you, but you and others. But I don't know, there's nothing? You have an intuition, but you haven't really sat down and thought about, like, well,
like if you and I were having some beers and you're seeing that I'm depressed and whatever, I'm struggling, there's no advice you can give? Oh, I mean, more beer.
More beer.
Yeah, I think it's all very like situation dependent.
Here's, okay, if I can give a generic piece of advice,
it's the technology always wins.
The better technology always wins,
and lying always loses.
Build technology and don't lie.
I'm with you, I agree very much.
Long run, long run, sure.
That's the long run, yeah.
And you know what, the market can remain irrational
longer than you can remain solvent, true fact.
Well, this is an interesting point
because I ethically and just as a human believe that
like hype and smoke and mirrors at any stage of the company is not a good strategy.
I mean, there's some like, you know, PR magic kind of like, you know, you know, I don't know new product
Yeah, if there's a call to action, if there's like a call to action like buy my new GPU
Look at it. It takes up three slots,
and it's this big, it's huge, buy my GPU. Yeah, that's great. But like if you look at, you know,
especially in that in AI space broadly, but autonomous vehicles, like you can raise a huge amount of
money on nothing. And the question to me is like, I'm against that. I'll never be part of that. I don't think I hope not.
Willingly not.
But like, is there something to be said to essentially lying to raise money,
like fake it till you make it kind of thing?
I mean, this is Billy McFarlane,
the Fire Festival, like we all experienced,
what happens with that?
No, no, don't fake it till you make it.
Be honest and hope you make it the whole way.
The technology wins.
Right, the technology wins.
And like, there is, I'm not,
yeah, you just like the anti-hype,
you know, that's a Slava KPSS reference,
but hype isn't necessarily bad.
I loved camping out for the iPhones.
And as long as the hype is backed by substance,
as long as it's backed by something I can actually buy,
and it's real, then hype is great,
and it's a great feeling.
It's when the hype is backed by lies
that it's a bad feeling.
I mean, a lot of people call Elon Musk a fraud.
How could he be a fraud?
I've noticed this kind of interesting effect, which is, he does tend to over-promise and, what's the better way to phrase it, promise a timeline that he doesn't deliver on; he delivers much later.
What do you think about that?
Because I do that, I think that's a programmer thing.
Yeah. I do that as well. You think that's a really bad thing to do? Or is that okay?
I think that's fine, again, as long as you're working toward it and you're going to deliver on it and it's not too far off. Right. Right. Like the whole autonomous vehicle thing.
It's like, I mean, I still think Tesla's on track to beat us.
I still think, even with their missteps, they have advantages we don't have. You know, Elon is better than me at, like, marshaling massive amounts
of resources.
So, you know, I still think, given the fact that they maybe make some wrong decisions, they'll end up winning.
And like, it's fine to hype it if you're actually gonna win,
right? Like if Elon says,
look, we're gonna be landing rockets back on Earth
in a year and it takes four.
Like, you know, he landed a rocket back on Earth, and he was working toward it the whole time. I think what it becomes wrong is if you know you're not going to meet that deadline. If you are lying. Yeah, that's brilliantly put.
Like, this is what people don't understand, I think. Elon believes everything he says.
He does. As far as I can tell, he does. And I detected that in myself too.
Like if I, it's only bullshit
if you're conscious of yourself lying.
Yeah, I think so.
Yeah.
No, you can't take that to such an extreme, right?
Like in a way, I think maybe Billy McFarlane
believed everything he said too.
Right, that's how you start a cult and everybody kills themselves, yeah. Yeah, like, if there's some check on it, it's fine.
And you need some people to like, you know, keep you in check.
But like, if you deliver on most of the things you say and just the timelines are off,
it does piss people off though. I wonder, but who cares? And in a long
long history,
the people, everybody gets pissed off
at the people who succeed.
Which is one of the things that frustrates me
about this world is, they don't celebrate
the success of others.
Like, there's so many people that want Elon to fail.
It's so fascinating to me.
Like, what is wrong with you?
So Elon Musk talks about the people who short the stock, like, they talk about the financials, but I think it's much bigger than the financials.
I've seen the human factors community.
They want other people to fail.
Why?
Like, even people, the harshest thing is like,
you know, even people that like seem to really hate Donald Trump,
they want him to fail.
Yeah, I know.
Or like the other president, or they want Barack Obama to fail.
It's like, you're wrong either way. It's weird, but I would love to inspire that part of the world to change, because, well, dammit, if the human species is gonna survive, we should celebrate success. It seems like the efficient thing to do in this objective function that we're all striving for: celebrate the ones that figure out how to do better at that objective function, as opposed to dragging them back down into the mud.
I think this is the speech I was given about the commenters on Hacker News.
So first off, something to remember about the internet in general is commenters are not
representative of the population. I don't comment on anything.
Commenters are representative of a certain sliver of the population.
On Hacker News, a common thing I'll say is when you'll see something that's like promises
to be wild out there and innovative.
There is some amount of checking them back down to Earth, but there's also some amount of: if this thing succeeds, well, I'm 36 and I've worked at large tech companies my whole life. They can't succeed, because if they succeed, that would mean that I could have done something different with my life, but we know that I couldn't have, and that's why they're going to fail. And they have to root for them to fail, to kind of maintain their world image. And they comment. Well, so one of the things I'm considering, startup-wise, is to change that, because I think it's also a technology problem. It's a platform problem. I agree. Because, the thing you said, most people don't comment. I think most people want to comment; they just don't, because it's only the assholes who are commenting.
Exactly.
I don't want to be grouped in with that lot.
You don't want to be at a party where everyone is an asshole.
But that's a platform problem. I can't believe what Reddit's become. I can't believe the groupthink in Reddit comments. Reddit is an interesting one, because there are subreddits. You can still see, especially
small subreddits, that are little havens of joy and positivity and deep,
even disagreement, but nuanced discussion.
But it's only small little pockets.
But that's emergent.
The platform is not helping that or hurting that.
So I guess naturally,
something about the internet,
if you don't put in a lot of effort to encourage
nuance and positive good vibes, it's naturally going to decline and decay. I would love to see someone do this well. Yeah. I think it's very doable.
I think actually, so I feel like Twitter could be overthrown.
Twitter could be overthrown. Yashobak talked about how, like, if you have like and retweet, like, that's only positive
wiring, right?
The only way to do anything like negative there is with a comment, and that's like that
asymmetry is what gives Twitter its particular toxicness.
Whereas I find YouTube comments to be much better because YouTube comments have an up
and a down and they don't show the down votes.
Without getting into depth of this particular discussion, the point is to explore possibilities
and get a lot of data on it.
Because I mean, I could disagree with what you just said.
The point is, it's unclear. It's to explore, in a really rich way, these questions of how to create platforms that encourage positivity.
Yeah, I think it's a technology problem,
and I think we'll look back at Twitter as it is now.
Maybe it'll happen within Twitter,
but most likely somebody overthrows them. We'll look back at Twitter and say, we can't believe we put up with this level of toxicity.
You need a different business model too.
Any social network that fundamentally
has advertising as a business model,
this was in the social dilemma,
which I didn't watch, but I liked it.
It's like, you know, there's always the "you're the product, not the customer" thing, but they had a nuanced take on it that I really liked. They said, the product being sold is influence over you. The product being sold is literally, you know, influence over you.
Like, if that's your business model, okay, well, guess what?
It can't not be toxic.
Yeah.
Maybe there's ways to spin it, like giving a lot more control to the user, and transparency to see what is happening to them, as opposed to it happening in the shadows. But that can't be the primary source of revenue.
But the user's not, no one's going to use that.
It depends. It depends. I think that you can't depend on the self-awareness of the users.
It's another, it's a longer discussion because you can't depend on it, but you can reward
self-awareness.
Like, if for the ones who are willing to put in the work of self awareness, you can reward them and incentivize and perhaps be pleasantly surprised how many people
are willing to be self-aware on the internet.
Like we are in real life.
Like I'm putting in a lot of effort with you right now, being self-aware about
if I say something stupid or mean, I'll, like, look at your body language.
Like I'm putting in that effort, it's costly.
It's for an introvert.
Very costly.
But on the internet, fuck it.
Like most people are like, I don't care if this hurts somebody. I don't care if this is not interesting or if this is mean or whatever.
I think so much of the engagement today on the internet is so disingenuous too.
Yeah.
You're not doing this out of a genuine belief that this is what you think. You're doing this just straight up to manipulate others. You just became an ad.
Okay, let's talk about a fun topic which is programming.
Here's another book idea for you.
Let me pitch.
What's your perfect programming setup?
So, like, this is by George Hotz. So, like, what? Listen, give me a MacBook Air sitting in a corner of a hotel room, and, you know, I'll still crush it. So you really don't care? You don't fetishize, like, multiple monitors, keyboards? Those things are nice, and I'm not going to say no to them, but do they automatically unlock tons of productivity? No, not at all. I have definitely been more productive on a MacBook Air in a corner of a hotel room.
What about IDEs? So which operating system do you love? What text editor do you use, what IDE? Is there something that is, like, the perfect productivity setup for George Hotz?
It doesn't matter.
It doesn't matter.
It doesn't matter.
I guess I code most of the time in Vim.
Like literally I'm using an editor from the 70s.
You know, we didn't make anything better.
Okay, VS code is nice for reading code.
There's a few things that are nice about it.
I think that you can build much better tools.
Like, IDA's xrefs work way better than VS Code's.
Why?
Yeah, actually, that's a good question.
Why?
I still use, sorry, Emacs, for most things.
I've actually, no, I have to confess something dark.
I've never used Vim. I think maybe I'm just afraid that my life has been a waste. I'm not even evangelical about Emacs, I think.
This is how I feel about TensorFlow versus PyTorch.
Yeah.
Having just, like, we've switched everything to PyTorch now, a few months into the switch, I have felt like I've wasted years on TensorFlow. I can't believe it. I can't believe how much better PyTorch is.
Yeah.
I've used Emacs and Vim, doesn't matter. Yeah, still, my heart somehow, I fell in love with Lisp. I don't know why.
I don't know why.
You can't, the heart wants what the heart wants.
I don't understand it, but it's just connected to me.
Maybe it's the functional language
that first I connected with.
Maybe it's because so many of the AI courses before the deep learning revolution were taught with Lisp in mind. I don't know what it is, but I'm stuck with it. But at the same time, why am I not using a modern IDE for some of this programming? I don't know. They're not that much better. I've used modern IDEs too. But at the same time, not to disagree with you, but I like
multiple monitors, like I have to do work on a laptop and it's a pain in the ass.
And I'm also addicted to this weird Kinesis keyboard.
You could see there.
Yeah.
So you don't have any of that.
You can just be in a Macbook.
I mean, look, at work I have three 24-inch monitors, I have a Happy Hacking keyboard, I have a Razer Deathadder mouse.
Like, but it's not essential for you.
No.
Let's go to a day in the life of George Hots.
What is the perfect day productivity wise?
So we're not talking about like,
hundreds of Thompson drugs and let's look a productivity. Like what's the
day look like? I'm like, hour by hour. Is there any regularities that create a magical George
Haught's experience? I can remember three days in my life and I remember these days vividly when
I've gone through kind of radical transformations to the way I think.
And what I would give, I would pay $100,000 if I could have one of these days tomorrow,
the days have been so impactful.
And one was first discovering Eliezer Yudkowsky and the singularity, and reading that stuff.
And like, my mind was blown.
The next was discovering the Hutter Prize, and that AI is just compression, like finally understanding AIXI and what all of that was. I'd read about it when I was 18, 19, and I didn't understand it.
And then the fact that like lossless compression
implies intelligence, the day that I was shown that.
And then the third one is controversial. The day I found a blog called Unqualified Reservations.
And I read that and I was like,
Wait, which one is it?
That's what's the guy's name?
Curtis Yarvin?
Yeah.
So many people tell me I'm supposed to talk to him.
Yeah. He looks, he sounds insane. Brilliant but insane, or both, I don't know.
The day I found that blog was another one. This was during, like, Gamergate and kind of the run-up to the 2016 election, and I'm like, wow, okay, the world makes sense
now.
This, this, like, I had a framework now to interpret this.
Just like I got the framework for AI and a framework to interpret technological progress.
Like, those days when I discovered these new frameworks were... Oh, interesting. So it's not about... but what was special about those days?
How do those days come to be?
Is it just you got lucky?
Like, you just encountered the Hutter Prize on Hacker News, and that's it? Like what?
But see, I don't think it's just that, like, I could have gotten lucky at any point.
I think that in a way you were ready at that moment. Yeah, exactly, to receive the information. But is there some magic to the day-to-day, like eating breakfast and the mundane things?
Nothing. No, I drift I drift through life without structure.
I drift through life hoping and praying that I will get another day like those days.
And there's nothing in particular you do to be a receptacle for another for day number four.
No, I didn't do anything to get the other ones. So I don't think I have to really do anything now.
I took a month-long trip to New York, and the Ethereum thing was the highlight of it,
but the rest of it was pretty terrible.
I did a two-week road trip, and I had to turn around.
I had to turn around driving in Gunnison, Colorado. Passed through Gunnison, and the snow starts coming down. There's a pass up there called Monarch Pass, and I had to get through to Denver.
You've got to get over the Rockies, and I had to turn my car around. I watched an F-150 go off the road. I had to go back, and that day was meaningful. It was real. I actually had to turn my car around. It's rare that anything even real happens in my life, even as mundane as the fact that, yeah, there was snow. I had to turn around, stay in Gunnison, leave the next day. Something about that moment was real.
Okay, so it's interesting to break apart the three moments you mentioned.
If it's okay, so I always have trouble pronouncing his name, but Eliezer Yudkowsky: how did your worldview change when you started to consider the exponential growth of AI and AGI that he thinks about, and the threats of artificial intelligence, and all those kinds of ideas? Can you maybe break apart what exactly was so magical to you? Was it a transformational experience?
Today everyone knows him for the AI safety stuff. This was pre all that. There was, I don't think, a mention of AI safety on the page. This is old Yudkowsky stuff. He'd probably denounce it all now. He'd probably be like, that's exactly what I didn't want to happen. Sorry, man. Is there something specific you took from his work that you can remember?
Yeah.
It was this realization that computers double in power every 18 months and humans do not.
And they haven't crossed yet.
But if you have one thing that's doubling every 18 months and one thing that's staying
like this, you know, here's your log graph, here's your line, you know,
you calculate that.
Okay.
And that opened the door to exponential thinking, like thinking that, you know, technology can actually transform the world.
It opened the door to human obsolescence.
It opened the door to realize that in my lifetime, humans are going to be replaced.
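The back-of-the-envelope math behind that exponential argument can be made concrete. A minimal sketch, with a made-up starting gap, of how fast a quantity doubling every 18 months closes any fixed ratio:

```python
import math

def years_to_overtake(ratio, doubling_months=18):
    """Years until a quantity doubling every `doubling_months`
    grows by a factor of `ratio`, i.e. closes a gap of `ratio`x."""
    doublings = math.log2(ratio)             # doublings needed
    return doublings * doubling_months / 12  # months -> years

# If machines start a million times less capable (an invented
# number, purely for illustration), that's ~20 doublings:
print(round(years_to_overtake(1_000_000), 1))  # → 29.9 years
```

This is the log graph he describes: on a log scale the doubling curve is a straight line, and any flat human line gets crossed within a few decades no matter where it starts.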
And then the matching idea to that of artificial intelligence with the Hutter Prize, you know,
I'm torn.
I go back and forth on what I think about it.
But the basic thesis is it's a nice compelling notion
that we can reduce the task of creating an intelligent system,
a generally intelligent system, into the task of compression.
So you can think of all of intelligence
in the universe, in fact, as a kind of compression.
Do you find that, just at the time you
found that as a compelling idea, do you
still find that a compelling idea?
I still find that a compelling idea.
I think that it's not that useful day to day, but actually one of maybe my quests before
that was a search for the definition of the word intelligence.
And I never had one.
And I definitely have a definition of the word compression.
It's a very simple, straightforward one. And you know what compression is. You know what lossless is. Lossless compression, not lossy; lossless compression. And that is equivalent to intelligence, which I believe. I'm not sure how useful that definition is day to day, but, like, I now have a framework to understand what it is.
And he just 10x'd the prize for that competition recently, a few months ago.
Ever thought of taking a crack at that?
Oh, I did.
Oh, I did.
I spent the next, after I found the prize,
I spent the next six months of my life trying it.
And well, that's when I started learning everything about AI.
And I worked my way through it all, and then I read all the deep learning stuff, and I'm like, okay, now I'm caught up to modern AI.
And I had a really good framework to put it all in
from the compression stuff.
Right, like some of the first deep learning models
I played with were GPT, basically, but before Transformers, when it was still RNNs doing character prediction.
But by the way, on the compression side,
I mean, especially with neural networks, what do you make of the lossless requirement of the Hutter Prize? So human intelligence and neural networks
can probably compress stuff pretty well,
but it'll be lossy.
It's imperfect.
You can turn a lossy compressor into a lossless compressor pretty easily using an arithmetic encoder. You can take an arithmetic encoder and you can just encode the noise with maximum efficiency. So even if you can't predict exactly what the next character is, the better a probability distribution you can put over the next character, the better you can use an arithmetic encoder. Right, you don't have to know whether it's an E or an I; you just have to put good probabilities on them and then, you know, code those. It's a bits-of-entropy thing, right?
So on that topic, it'd be interesting as a little side tour: what are your thoughts about GPT-3 and these language models and these Transformers? Is there something interesting to you as an AI researcher, or as an autonomous vehicle developer?
No, I think it's cool for what it is, but no, we're not just going to be able to scale
up to GPT-12 and get general purpose intelligence.
Like, your loss function is literally just, you know, cross-entropy loss on the characters. Right? Like, that's not the loss function of general intelligence.
Is that obvious to you?
Yes.
Can you imagine, to play devil's advocate on yourself, is it possible that GPT-12 will achieve general intelligence with something as dumb as this kind of loss function?
I guess it depends what you mean by general intelligence.
So there's another problem with the GPTs, and that's that they don't have long-term memory.
Right, so, like, just GPT-12, a scaled-up version of GPT-2 or 3? I find it hard to believe. Well, you can scale it. Yeah, the context length is hard-coded, but you can make it wider and wider.
Yeah.
You're going to get cool things from those systems, but I don't think you're ever going
to get something that can build me a rocket ship.
What about self-driving?
So, you can use Transformer with video, for example. You think, is there something in there?
No, because we use a GRU. We could change that GRU out to a Transformer.
I think driving is much more Markovian than language.
So by Markovian, you mean, like, the memory? Which aspect of Markovian? By Markovian, I mean that most of the information in the state at t minus 1 is also in the state at t.
I see.
Yeah.
Right.
And it kind of like drops off nicely like this, whereas sometimes with language, you have
to refer back to the third paragraph on the second page.
I feel like there's not many... like, you can say speed limit signs, but there's really not many things in autonomous driving that look like that.
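The contrast he's drawing can be sketched with two toy tasks, both invented here: a kinematic state that fully determines its successor (Markovian), versus a "language-like" task whose answer depends on a token arbitrarily far back:

```python
def drive_step(state, accel):
    """Next kinematic state depends only on the current one:
    (position, speed) plus the current action -- Markovian."""
    pos, speed = state
    return (pos + speed, speed + accel)

state = (0.0, 10.0)
for _ in range(3):
    state = drive_step(state, 1.0)  # no history needed
print(state)  # → (33.0, 13.0)

def answer(tokens):
    """Long-range dependency: the answer is the very first
    token, no matter how much filler follows."""
    return tokens[0]

print(answer(["paris"] + ["filler"] * 100))  # → paris
```

A GRU rolled forward one step at a time handles the first case naturally; it's the second kind of dependency where a Transformer's attention over the whole context earns its keep.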
But to play devil's advocate, the risk estimation thing that you've talked about is kind of interesting. It feels like there might be some longer-term aggregation of context necessary to be able to figure out the context. I'm not even sure I'm believing my own devil's advocate here. We have a nice vision model which outputs, like, a 1024-dimensional perception space. Can I try Transformers on it? Sure, I probably will. At some point we'll try Transformers, and we'll just see, do they do better? Sure. It might not be a game changer. Like, might Transformers work better than GRUs for autonomous driving? Sure. Might we switch? Sure. Is this some radical change? No. Okay, we switched from RNNs to GRUs. Like, okay, maybe it's GRUs to Transformers, but no. Yeah.
Well, on the topic of general intelligence, I don't know how much I've talked to you about it. Like, what do you think it will actually take to build an AGI? If you look at Ray Kurzweil with the singularity, do you have, like, an intuition about...
you're kind of saying driving is easy?
And I tend to personally believe that solving driving will have really deep, important impacts on
our ability to solve general intelligence.
I think driving doesn't require general intelligence, but I think they're going to be neighbors
in a way that it's deeply tied, because driving is so deeply connected to the human experience that I think solving one will
help solve the other.
So I don't see driving as easy and almost like separate than general intelligence.
But what's your vision of a future with a singularity? Do you think there will be a single moment, like a singularity, where there will be a phase shift?
Are we in the singularity now?
Like what do you have crazy ideas about the future in terms of AGI?
Definitely in the singularity now.
We are.
Of course.
Look at the bandwidth between people.
The bandwidth between people goes up.
Right?
The singularity is just, you know, when the bandwidth...
What do you mean by the bandwidth?
Communications tools.
The whole world is networked.
The whole world is networked and we raise the speed of that network, right?
Oh, so you think the communication of information
in a distributed way is an empowering thing
for collective intelligence?
Oh, I didn't say it's necessarily a good thing,
but I think that's like,
when I think of the definition of the singularity,
yeah, it seems kind of right.
I see. Like, it's a change in the world beyond which the world will be transformed in ways that we can't possibly imagine. No, I mean, I think we're in the singularity now in the sense that there's, like, you know, one world and a monoculture, and it's all linked.
Yeah.
I mean, I kind of share the intuition that the singularity will originate from the collective intelligence of us, versus some single-system AGI type thing.
Oh, I totally agree with that.
Yeah, I don't really believe in like a hard take off AGI kind of thing.
Yeah, I don't think, I don't even think AI
is all that different in kind from what we've already been building.
With respect to driving, I think driving is a subset of general intelligence, and I think
it's a pretty complete subset.
I think the tools we develop at Comma will also be extremely helpful to solving general intelligence, and that's, I think, the real reason why I'm doing it.
I don't care about self-driving cars.
It's a cool problem to beat people at.
But, yeah, I mean, you're of two minds. So one, you do have to have a mission; you want to focus and make sure you get there, you can't forget that. But at the same time, there is a thread that's much bigger than the entirety of your effort, that's much bigger than just driving. With AI and with general intelligence, it is so easy to delude yourself into thinking you figured something out when you haven't. If we build a level 5 self-driving car, we have indisputably built something. Is it general intelligence? I'm not going to debate that. I will say we've built something that provides huge financial value.
Yeah, beautifully put. That's the engineering credo: just build the thing. That's why I'm with Elon on the go-to-Mars thing. Yeah, it's a great one. You can argue, like, who the hell cares about going to Mars? But the reality is, set that as a mission, get it done, and then you're going to crack some problem that you never even expected in
the process of doing that. Yeah. I mean, I think if I had a choice between humanity going to Mars and solving self-driving
cars, I think going to Mars is better, but I don't know, I'm more suited for self-driving
cars.
I'm an information guy.
I'm not a modernist.
I'm a postmodernist.
Postmodernist.
All right.
Beautifully put.
Let me drag you back to programming for a sec.
What three, maybe three to five, programming languages should people learn, do you think? Like, if you look at yourself, what did you get the most out of from learning?
Well, so everybody should learn C and assembly.
We'll start with those two, right?
Assembly.
Yeah.
If you can't code in assembly, you don't know what the computer's doing. You don't have to be great at assembly, but you have to have coded in it.
And then you have to appreciate assembly in order to appreciate all the great things C
gets you.
And then you have to code in C in order to appreciate all the great things Python gets you.
So I'll just say assembly, C, and Python. We'll start with those three.
The memory allocation of C, and the fact that... some of these give you a sense of just how many levels of abstraction you get to work on in modern-day programming. Yeah, yeah, graph coloring for register assignment in compilers. Yeah. You know, the computer only has a certain number of registers, yet you can have all the variables you want in a C function.
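The register-assignment point can be sketched: treat variables as nodes, add an edge when two are live at the same time, and color the graph so no edge shares a color, with each color being a register. A toy greedy version (the interference graph is invented for illustration):

```python
def color_registers(interference, registers):
    """Greedy graph coloring: give each variable the first
    register not already taken by a colored neighbor."""
    assignment = {}
    for var, neighbors in interference.items():
        taken = {assignment[n] for n in neighbors if n in assignment}
        free = [r for r in registers if r not in taken]
        if not free:
            raise RuntimeError(f"{var} must be spilled to memory")
        assignment[var] = free[0]
    return assignment

# a is live alongside b and c; c alongside a and d.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
print(color_registers(graph, ["r0", "r1"]))  # four variables, two registers
```

Real compilers add liveness analysis, spill heuristics, and coalescing on top, but this is the core trick that lets C pretend you have unlimited variables.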
So you get to start to build intuition about compilation, like what a compiler gets you. What else? Well, then there's kind of, so those are all very
imperative programming languages. Then there's two other paradigms for
programming that everybody should be familiar with. One of them is
functional. You should learn Haskell and take that all the way through, learn a language with dependent types like Coq.
Learn that whole space, like the very PL theory heavy languages.
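For a small taste of that functional paradigm, here's a sketch in plain Python rather than Haskell (Haskell takes these ideas much further: no mutation anywhere, and the type system enforces it; the function names here are made up for illustration):

```python
from functools import reduce

def compose(f, g):
    """Function composition: (f . g)(x) = f(g(x)) -- functions are values
    you build new functions out of, instead of writing step-by-step loops."""
    return lambda x: f(g(x))

square = lambda x: x * x
inc = lambda x: x + 1

# A new function assembled from old ones, no statements executed yet.
square_then_inc = compose(inc, square)

# A fold replaces an accumulation loop: sum of squares with no mutable counter.
sum_of_squares = reduce(lambda acc, x: acc + x * x, range(5), 0)
# square_then_inc(3) -> 10; sum_of_squares -> 0+1+4+9+16 = 30
```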
And Haskell is your favorite functional?
What is that the go-to, you'd say?
Yeah, I'm not a great Haskell programmer. I wrote a compiler in Haskell once.
There's another paradigm, and actually there's one more paradigm after that that I'll even talk about, that I never used to talk about when I would think about this. But the next paradigm is: learn Verilog or VHDL.
Understand this idea of all of the instructions executing at once.
If I have a block in Verilog and I write stuff in it, it's not sequential. They all execute at once.
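That everything-at-once semantics can be mimicked in a few lines of Python. This is just an illustrative sketch with invented signal names, not real Verilog: the "hardware" version computes every next value from the current state and commits them together, the way flip-flops all update on a clock edge.

```python
def sequential_swap(state):
    """Software thinking: statements run one after another."""
    state = dict(state)
    state["a"] = state["b"]   # a becomes b...
    state["b"] = state["a"]   # ...but this reads the NEW a, so the swap is lost
    return state

def hardware_swap(state):
    """Hardware thinking: compute all next values from the *current* state,
    then commit them together, like registers on a clock edge."""
    return {"a": state["b"], "b": state["a"]}

start = {"a": 1, "b": 2}
# sequential_swap(start) -> {"a": 2, "b": 2}  (no swap)
# hardware_swap(start)   -> {"a": 2, "b": 1}  (a real swap, no temp needed)
```

In Verilog this is why `a <= b; b <= a;` in a clocked block genuinely swaps two registers, which looks like a bug if you're thinking sequentially.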
And then, like, think like that. That's how hardware works.
So I guess assembly doesn't quite get you that. Assembly is more about compilation, and Verilog is more about the hardware, like what the hardware is actually doing. Assembly, C, Python are straight, like, they sit right on top of each other. In fact, C is, well, C is kind of coded in C now, but you could imagine the first C was coded in assembly, and Python is actually coded in C. So, you know, you can straight up go up that stack.
Got it. And then Verilog gives you... that's brilliant. Okay.
And then I think there's another one now. Andrej Karpathy calls it programming 2.0, which is, learn a, I'm not even going to, don't learn TensorFlow, learn PyTorch.
So machine learning, we've got to come up with a better term than "programming 2.0," um, but yeah, it's a programming language for learning.
I wonder if it can be formalized a little bit better; it feels like we're in the early days of what that actually entails.
Data-driven programming.
Data-driven programming, yeah.
But it's so fundamentally different as a paradigm than the others.
It almost requires a different skill set.
But you think it's still, yeah.
And PyTorch versus TensorFlow? PyTorch wins.
It's the fourth paradigm.
It's the fourth paradigm that I've kind of seen.
There's like this, you know, imperative, functional hardware.
I don't know a better word for it.
And then ML.
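The ML paradigm the two of them are circling, "programming 2.0" or data-driven programming, can be shown in miniature without any framework. A hypothetical toy, not Karpathy's formulation and not PyTorch code: instead of writing the rule y = 2x by hand, you write a loss and let gradient descent learn the parameter from examples. (PyTorch automates exactly this loop with autograd.)

```python
# Examples of the behavior we want, instead of a hand-written rule.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0              # the "program" is just this learned parameter
lr = 0.01            # learning rate
for _ in range(200):
    # gradient of mean squared error for the model y_hat = w * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# w converges to ~2.0: the program was learned from data, not written
```

That is the sense in which it's a fourth paradigm: the imperative/functional/hardware code is fixed by the author, while here the interesting part of the program is fit to data.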
Do you have advice for people that want to, you know, get into programming, want to learn programming?
You have a video, "What is programming? (noob lessons!)"
That's the mission point.
And I think the top comment is like, warning, this is not for noobs.
Do you have a, like, TL;DW for that video, but also noob-friendly advice on how to get into programming?
You're never going to learn programming by watching a video called "Learn Programming."
The only way to learn programming, I think, the only way, everyone I've ever met who can program well learned it all in the same way.
They had something they wanted to do,
and then they tried to do it.
Then they were like, oh, well, okay, this is kind of,
it'll be nice if the computer could do this.
Then that's how you learn.
You just keep pushing on a project.
So the only advice I have for learning programming is go program.
Somebody wrote me a question, like, they're looking to learn about recurrent neural networks. Saying, like, my company's thinking of using recurrent neural networks for time series data, but we don't really have an idea of where to use it yet. We just want to, like, do you have any advice on how to learn about these, these kind of general machine learning questions?
And I think the answer is, like, actually have a problem that you're trying to solve. And, ugh, I see that stuff. Oh my God, when people talk like that, they're like, I heard machine learning is important, could you help us integrate machine learning with macaroni and cheese production? You just, I don't even, you can't help these people.
Like, who lets you run anything?
Who lets that kind of person run anything?
I think we're all beginners at some point, so.
It's not like they're a beginner.
It's like, my problem is not that they don't know about machine learning.
My problem is that they think that machine learning has something to say about macaroni
and cheese production.
We're like, I heard about this new technology. How can I use it for why?
Like, I don't know what it is, but how can I use it for why?
That's true. You have to build up an intuition of how, because you might be able to figure out a way, but, like, the prerequisite is you should have a macaroni and cheese problem to solve first.
Exactly. And then, too, the learning process should involve more traditionally applicable problems in the space of whatever that is, of machine learning, and then see if it could be applied to macaroni and cheese.
And you start with, tell me about a problem.
Like if you have a problem, you're like, you know, some of my boxes aren't getting enough
macaroni in them. Can we use machine learning to solve this problem?
That's much, much better than how do I apply machine learning to macaroni and cheese?
One big thing, maybe this is me, uh, talking to the audience a little bit, because these days I get so many messages asking for advice on how to, like, learn stuff. Okay.
This is not me being mean.
I think this is quite profound, actually, is you should Google it.
Oh, yeah.
Like, one of the skills that you should really acquire as an engineer, as a researcher, as a thinker, there's two complementary skills: one is, with a blank sheet of paper and no internet, to think deeply. And then the other is to Google the crap out of the questions you have. Like, that's actually a skill I don't know people talk about enough. But, like, doing research, like pulling
at the thread, like looking up different words,
going into like GitHub repositories with two stars and like looking how they did stuff,
like looking at the code or going on Twitter, seeing like there's little pockets of brilliant
people that are like having discussions.
Like if you're a neuroscientist, go into the signal processing community; if you're an AI person, go into the psychology community. Like, switch communities, and keep searching, searching, searching,
because it's so much better to invest in like finding somebody else who already solved
your problem than to try to solve the problem. And because they've often invested years
of their life,
like entire communities are probably already out there who've tried to solve your problem.
I think they're the same thing. I think you go try to solve the problem. And then in
trying to solve the problem, if you're good at solving problems, you'll stumble upon
the person who solved it already.
Yeah, but the stumbling is really important. I think that's the skill that people should really develop, especially in undergrad, like, search. If you ask me a question, how should I get started
in deep learning, especially, like that is just so Googleable. Like the whole point is you Google
that and you get a million pages and just start looking at them. Yeah. Start
pulling at the thread, start exploring, start taking notes, start getting advice from
a million people that have already spent their life answering that question actually.
Oh, well, I mean, that's definitely also, when people ask me things like that, trust me, the top answer on Google is much, much better than anything I'm going to tell you, right? Yeah.
People ask, it's an interesting question.
Let me know if you have any recommendations.
What three books, technical or fiction or philosophical had an impact on your life or you
would recommend perhaps?
Maybe we'll start with the least controversial: Infinite Jest.
Infinite Jest is a...
David Foster Wallace? Yeah, it's a book about wireheading, really.
Very enjoyable to read, very well written. You know, you will grow as a person reading this book.
It takes effort, and I'll use that to set up the second book, which is pornography.
It's called Atlas Shrugged, which...
Atlas Shrugged is pornography.
Yeah, I mean, it is.
I will not defend the, I will not say,
Atlas Shrugged is a well written book.
It is entertaining to read, certainly, just like pornography.
The production value isn't great.
There's a 60-page monologue in there
that Ayn Rand's editor really wanted to take out.
And she paid out of her pocket
to keep that 60-page monologue in the book.
But it is a great book for a kind of framework
of human relations.
And I know a lot of people are like,
yeah, but it's a terrible framework.
Yeah, but it's a framework.
Just for context, in a couple of days,
I'm speaking with four, probably four plus hours
with Yaron Brook, who's the main living, remaining Objectivist.
Interesting.
So, I've always found this philosophy quite interesting on many levels: how repulsive some large percent of the population find it, which is always funny to me, when people are unable to even read a philosophy because of some... I think that's more about their psychological perspective on it. But there is something about Objectivism and Ayn Rand's philosophy that's deeply connected to this idea of capitalism, of the ethical life is the productive life, that was always compelling to me. I didn't seem to interpret it in the negative sense that some people do.
To be fair, I read that book when I was 19.
So it had an impact on you at that point, yeah.
Yeah, and the bad guys in the book have this slogan, "from each according to their ability, to each according to their need." And I'm looking at this and I'm like, this is Team Rocket-level cartoonishness, right? And then when I realized that was actually the slogan of the Communist Party, I'm like, wait a second. Wait, no, no, no, you're telling me this really happened?
Yeah, that's interesting.
I mean, one of the criticisms of her work
is she has a cartoonish view of good and evil
like that there's like the reality,
as Jordan Peterson says, is that each of us
have the capacity for good and evil in us
as opposed to like, there's some character
who are purely evil and some characters
that are purely good.
And that's in a way why it's pornographic.
Yeah.
The production value.
I love it.
Well, evil is punished, and that's very clearly, like, you know, just like porn doesn't have, you know, the character growth, you know, neither does Atlas Shrugged.
Really?
Well put.
But for 19-year-old George Hotz, it was good enough. Yeah. Yeah.
What's the third? You have something?
I'll just throw out these two. They're sci-fi: Permutation City. Great thing to start thinking about copies of yourself. And that one is Greg Egan. He's a... that might not be his real name, some Australian guy. It might not even be Australian. I don't know.
And then this one's online.
It's called The Metamorphosis of Prime Intellect.
It's a story set in a post singularity world.
It's interesting.
Is there, in either of those worlds, something close to your own thinking that you can comment on?
I mean, it is clear to me that The Metamorphosis of Prime Intellect is, like, written by an engineer. It's very, it's almost a pragmatic take on a utopia, in a way.
Positive or negative?
That's up to you to decide reading the book.
And the ending of it is very interesting as well,
and I didn't realize what it was.
I first read that when I was 15.
I've read that book several times in my life.
And it's short, it's 50 pages, I want you to go read it.
What's, sorry, a little tangent.
I've been working through Foundation.
I've been, I haven't read much sci-fi in my whole life
and I'm trying to fix that in the last few months.
That's been a little side project.
What to you is the greatest sci-fi novel that people should read? Or is that, or...
I mean, I would, yeah, I would say, like, Permutation City, Metamorphosis, or something like that.
I don't know, I don't know.
I didn't like Foundation.
I thought it was way too modernist.
I feel like Dune and like all of those.
I've never read Dune.
I have to read it.
A Fire Upon the Deep is interesting.
Okay, I mean, look, everyone should read Neuromancer. Everyone should read Snow Crash. If you haven't read those, like, start there.
Um, yeah, I haven't read Snow Crash.
Oh, it's very interesting.
Gödel, Escher, Bach, and if you want the controversial one, Bronze Age Mindset.
All right, I'll look into that one
Those aren't sci-fi, but just great all-around books.
So a bunch of people ask me on Twitter and Reddit and so on for advice.
So what advice would you give a young person today about life?
Yeah, I mean, looking back, especially when you were younger, you did, and you continue to, you've accomplished a lot of interesting things.
Is there some advice from those?
From that life of yours that you can pass on?
If college ever opens again,
I would love to give a graduation speech.
At that point, I will put a lot of
somewhat theoretical effort into this question.
Yeah. You haven't written anything at this point?
Yeah, you know what, always wear sunscreen,
this is water, like...
I think you're plagiarizing.
I mean, you know, but that's the, that's the, like, clean your room, you know? Yeah, you can plagiarize from all of this stuff.
And it's,
there is no...
Self-help books aren't designed to help you, they're designed to make you feel good.
Like whatever advice I could give, you already know.
Everyone already knows.
Sorry, it doesn't feel good.
Right?
Like, you know?
You know, I wonder, if I tell you that you should eat well and read more, it's like, it doesn't do anything.
I think the whole genre of those kind of questions
is meaningless.
I don't know.
If anything, it's don't worry so much about that stuff.
Don't be so caught up in your head.
Right.
I mean, in a sense that your whole life,
if your whole existence is like a moving version of that advice,
I don't know.
Yeah.
There's something.
I mean, there's something in you that resists that kind of thinking, and that in itself is just illustrative of who you are.
And there's something to learn from that,
I think you're clearly not overthinking stuff.
Yeah, and you know what?
It's a gut thing.
Even when I talk about my advice, I'm like, my advice is only relevant to me.
It's not relevant to anybody else.
I'm not saying you should go out
if you're the kind of person who overthinks things
to stop overthinking things.
It's not bad. It doesn't work for me. Maybe it works for you. I don't
know.
Let me ask you about love.
Yeah.
I think the last time we talked about the meaning of life, and it was kind of about winning.
Of course. I don't think I've talked to you about love much, whether romantic or just love for the common humanity amongst us all.
What role has love played in your life?
In this quest for winning, where does love fit in?
Well, word love, I think, means several different things.
There's love in the sense of, maybe I could just say, there's like love in the sense of opiates, and love in the sense of oxytocin, and then love in the sense of maybe, like, a love for math, which I don't think fits into either of those first two paradigms.
So each of those, have they given something to you in your life?
I'm not that big of a fan of the first two.
Why?
For the same reason I don't do opiates and don't take ecstasy. And look, I've tried both. I liked opiates way more than I liked ecstasy, but the ethical life is the productive life, so maybe that's my problem with those. And then, like, yeah, a sense of, I don't know, like, abstract love for humanity? I'm like, yeah, I've always felt that, and I guess it's hard for me to imagine not feeling it, and maybe for people who don't...
I don't know. Yeah, that's just like a background thing that's there.
I mean, since we've brought up drugs, let me ask you,
this is becoming more and more part of my life
because I'm talking to a few researchers that are working on psychedelics. I've eaten shrooms a couple of times, and it was fascinating to me that, like, the mind can go to places I didn't imagine it could go. It was very friendly and positive and exciting, and everything was kind of hilarious in that place. Wherever my mind went, that's where I went. What do you think about psychedelics?
Do you think they have, where do you think the mind goes? Have you done psychedelics? Is there something useful to learn about the places it goes, once you come back?
You know, I find it interesting that this idea
that psychedelics have something to teach
is almost unique to psychedelics, right?
People don't argue this about amphetamines.
And I'm not really sure why.
I think all of the drugs have lessons to teach. I think there's things to learn from opiates. I think there's things to learn from amphetamines. I think there's things to learn from psychedelics, things to learn from marijuana. But also, at the same time, recognize that I don't
think you're learning things about the world. I think you're learning things about yourself.
Yeah. And, you know, it might have even been a Timothy Leary quote, I don't know what I'd say about him, but the idea is basically, like, you know, everybody should look behind the door, but then once you've seen behind the door, you don't need to keep going back.
Um, so, I mean, that's my thoughts on all real drug use, too. Except maybe caffeine. It's a, it's a little experience that it's good to have, but...
Oh, yeah, I guess psychedelics have definitely...
So you're a fan of new experiences, I suppose?
Yes.
Because they all contain a little, especially the first few times, some lessons that can be picked up.
Yeah, and I'll revisit psychedelics maybe once a year, usually smaller doses.
Maybe they turn up the learning rate of your brain.
I've heard that. I like that.
Yeah, that's cool.
Big learning rates have pros and cons.
Last question. This is a little weird one.
But you've called yourself crazy in the past.
First of all, on a scale of 1 to 10, how crazy would you say you are?
Oh, I mean, it depends, you know, when you compare me to Elon Musk, and I think Elon's an 11, that's so crazy.
So, like, a 7?
Let's go with 6.
6? What? I, like, seven's a good number. Seven?
All right.
Well, I'm sure it changes day by day, right? So, but you're in that area.
In thinking about that, what do you think is the role of madness?
Is that a feature or a bug if you were to dissect your brain?
So okay.
From, like, a mental health lens on crazy,
I'm not sure I really believe in that.
I'm not sure I really believe in like a lot of that stuff,
right, this concept of, okay, you know,
when you get over to, like, hardcore bipolar
and schizophrenia, these things are clearly
real somewhat biological.
And then over here on the spectrum,
you have like ADD and oppositional defiance disorder
and these things that are like, wait,
this is normal spectrum human behavior.
Like this isn't, you know, where's the line here
and why is this like a problem?
So there's this whole, you know, the neurodiversity of humanity is huge. Like, people think I'm always on drugs. People are saying this to me on my streams, and, like, guys, you know, I'm real open with my drug use. I'd tell you if I was on drugs. I had, like, a couple coffees this morning, but other than that, this is just me. You're witnessing my brain in action.
So the word madness doesn't even make sense, then, given the rich neurodiversity of humans.
I think it makes sense, but only for, like, some insane
extremes. Like if you are actually like visibly hallucinating. All right. Um, you know, that's okay.
But there is the kind of spectrum
on which you stand out.
Like, that's like, if I were to look, you know, at decorations on a Christmas tree, something like that.
Like if you were a decoration,
that would catch my eye.
Like that thing is sparkly.
Like, whatever the hell that thing is,
there's something to that.
Just like refusing to be boring,
or maybe boring is the wrong word,
but to, yeah, I mean,
be willing to sparkle, you know?
It's like somewhat constructed.
I mean, I am who I choose to be.
I'm gonna say things as true as I can see them.
I'm not gonna, I'm not gonna lie.
But that's a really important feature in itself.
So, like, whatever the neurodiversity of your brain is, not putting constraints on it that force it to fit into the mold of what society defines you're supposed to be.
So you're one of the specimens that,
that doesn't mind being yourself.
Being right is super important, except at the expense of being wrong.
Without breaking that apart, I think it's a beautiful way to end it.
George, you're one of the most special humans I know.
It's truly an honor to talk to you.
Thanks so much for doing it.
Thank you for having me.
Thanks for listening to this conversation with George Hotz, and thank you to our sponsors: Four Sigmatic, which is the maker of delicious mushroom coffee,
decoding digital, which is a tech podcast that I listen to and enjoy, and ExpressVPN, which
is the VPN I've used for many years.
Please check out these sponsors in the description to get a discount and to support this podcast.
If you enjoy this thing, subscribe on YouTube, review it with 5 stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman.
And now, let me leave you with some words from the great and powerful Linus Torvalds.
Talk is cheap, show me the code.
Thank you for listening and hope to see you next time.