Lex Fridman Podcast - #131 – Chris Lattner: The Future of Computing and Programming Languages
Episode Date: October 19, 2020Chris Lattner is a world-class software & hardware engineer, leading projects at Apple, Tesla, Google, and SiFive. Please support this podcast by checking out our sponsors: - Blinkist: https://blinkis...t.com/lex and use code LEX to get a free week of premium - Neuro: https://www.getneuro.com and use code LEX to get 15% off - MasterClass: https://masterclass.com/lex to get 15% off annual sub - Cash App: https://cash.app/ and use code LexPodcast to get $10 EPISODE LINKS: Chris's Twitter: https://twitter.com/clattner_llvm Chris's Website: http://nondot.org/sabre/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/LexFridmanPage - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 07:12 - Working with Elon Musk, Steve Jobs, Jeff Dean 12:42 - Why do programming languages matter? 
18:42 - Python vs Swift 29:35 - Design decisions 34:53 - Types 38:41 - Programming languages are a bicycle for the mind 41:13 - Picking what language to learn 47:12 - Most beautiful feature of a programming language 56:36 - Walrus operator 1:06:03 - LLVM 1:11:15 - MLIR compiler framework 1:15:21 - SiFive semiconductor design 1:27:56 - Moore's Law 1:31:09 - Parallelization 1:35:37 - Swift concurrency manifesto 1:46:26 - Running a neural network fast 1:52:03 - Is the universe a quantum computer? 1:57:44 - Effects of the pandemic on society 2:14:56 - GPT-3 2:19:15 - Software 2.0 2:32:41 - Advice for young people 2:37:24 - Meaning of life
Transcript
The following is a conversation with Chris Lattner, his second time on the podcast.
He's one of the most brilliant engineers in modern computing, having created the LLVM compiler
infrastructure project, the Clang compiler, and the Swift programming language, and having made key
contributions to TensorFlow and TPUs as part of Google.
He served as vice president of autopilot software at Tesla, was a software innovator and leader at Apple,
and now is at SiFive as senior vice president of platform engineering, looking to revolutionize
chip design to make it faster, better, and cheaper. Quick mention of each sponsor,
followed by some thoughts related to the episode. First sponsor is Blinkist, an app that summarizes
key ideas from thousands of books, I use it
almost every day to learn new things, or to pick which books I want to read, or listen
to next.
Second is Neuro, the maker of functional sugar-free gum and mints that I use to supercharge
my mind with caffeine, L-theanine, and B vitamins.
Third is MasterClass, online courses from the best people in the world on
each of the topics covered from rockets to game design to poker to writing and to guitar.
And finally, cash app. The app I use to send money to friends for food, drinks and unfortunately
lost bets. Please check out the sponsors in the description to get a discount and to support
this podcast. As a side note, let me say that Chris has been an inspiration to me on a
human level, because he is so damn good as an engineer and leader of engineers, and yet
he is able to stay humble, especially humble enough to hear the voices of disagreement
and to learn from them. He was supportive of me and this podcast from the early days, and for that I'm forever grateful.
To be honest, for most of my life no one really believed that I would amount to much.
So when another human being looks at me in a way that makes me feel like I might be someone special,
it can be truly inspiring. That's the lesson for educators. The weird kid in the corner with a dream is someone who might need your love and support
in order for that dream to flourish.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow
on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman.
As usual, I'll do a few minutes of ads now and no ads in the middle.
I try to make these
interesting, but I give you time stamps, so if you skip please still check out the sponsors by
clicking the links in the description, it's the best way to support this podcast.
This episode is supported by Blinkist, my favorite app for learning new things.
Get it at Blinkist.com slash Lex for a 7 day free trial and 25% off after.
Blinkist takes the key ideas from thousands of nonfiction books and condenses them down
into just 15 minutes that you can read or listen to.
I'm a big believer of reading at least an hour every day.
As part of that, I use Blinkist almost every day to try out a book I may otherwise never
have a chance to read.
And in general, it's a great way to broaden your view of the idea landscape
out there and find books that you may want to read more deeply. With Blinkist,
you get unlimited access to read or listen to a massive library of condensed
nonfiction books. Right now, for a limited time, Blinkist has a special offer
just for you, the listener of this podcast.
Go to Blinkist.com slash Lex to try it free for seven days and save 25% off your new
subscription.
That's Blinkist.com slash Lex, Blinkist spelled B-L-I-N-K-I-S-T.
I'm not very good at spelling.
Okay.
This show is also sponsored by Neuro, a company that makes functional gum and mints that
supercharge your mind with a sugar-free blend of caffeine, L-theanine, and B6/B12 vitamins.
It's loved by Olympians and engineers alike.
I personally love the mint gum.
It helps me focus during times when I can use a boost.
My favorite is to chew it for like 10 minutes at the start of a deep work session, behind
a standing desk, typing frantically.
That's when I need the energy most I think to get the ball rolling.
By the way, Cal Newport, author of Deep Work, a book I highly recommend, will eventually
be on the podcast.
I talk to him often.
He's a friend, he's an inspiration,
he has his own podcast that you should also check out called Deep Questions.
Anyway, each piece of Neuro gum has about one half cup of coffee's worth of caffeine. I love
caffeine. I also just love coffee, and tea makes me feel like home.
Anyway, Neuro is offering 15% off when you use code Lex at checkout. Go to getneuro.com and use code Lex.
This show is also sponsored by Masterclass.
$100 a year for an all-access pass to watch courses from literally the best people in
the world on a bunch of different topics, like Chris Hadfield on space exploration, Neil
deGrasse Tyson on scientific thinking and communication,
Will Wright, creator of SimCity and The Sims, two of my favorite games, on game design, Carlos Santana, one
of my favorite musicians, on guitar, Garry Kasparov on chess, you know, I won't say more about who
my favorite chess player is, and Daniel Negreanu on poker, and many more.
Maybe one day I'll do a masterclass on how to drink vodka and
ask overly philosophical questions of world class engineers who are too busy to bother
with my nonsense. By the way, you can watch it on basically any device, sign up at masterclass.com
slash Lex to get 15% off the first year of an annual subscription. That's masterclass.com
slash Lex. Finally, this
show is presented by Cash App, the number one finance app in the App Store. When you
get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest
in the stock market with as little as one dollar. I'm thinking of doing more conversations
with folks who work in and around the cryptocurrency space.
Similar to AI, I think, but even more so, there are a lot of charlatans in this space,
but there are also a lot of free thinkers and technical geniuses whose ideas are worth
exploring in depth and with care.
If I make mistakes in guest selection and in details of the conversations themselves, I'll keep trying to improve,
correct where I can, and also keep following my curiosity wherever the heck it takes me.
So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you
get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance
robotics and STEM
education for young people around the world.
And now, here's my conversation with Chris Lattner. What are the strongest qualities of Steve Jobs, Elon Musk, and the great and powerful
Jeff Dean since you've gotten a chance to work with each?
You're starting with an easy question there.
These are three very different people.
I guess you could do maybe a pairwise comparison between them instead of a group comparison.
So if you look at Steve Jobs and Elon, I worked a lot more with Elon than I did with Steve.
They have a lot of commonality.
They're both visionary in their own way.
They're both very demanding in their own way. My sense is Steve is much more human-factors focused, where Elon is more technology focused.
What does human factor mean? Steve's trying to build things that feel good, that people love, that affect people's lives, how they live.
He's looking into the future a little bit in terms of what people want. I think that Elon focuses more on learning how exponentials work and predicting the development of those.
Steve worked with a lot of engineers.
That was one of the things that stood out, reading the biography:
how can a designer essentially talk to engineers and get their respect?
I think so I did not work very closely with Steve.
I'm not an expert at all.
My sense is that he pushed people really hard,
but then when he got an explanation that made sense to him,
then he would let go.
He did actually have a lot of respect for engineering,
but he also knew when to push.
When you can read people well,
you can know when they're holding back,
and when you can get a little bit more out of them. I think he's very good at that. If you compare the other folks, Jeff Dean,
Jeff Dean's an amazing guy. He's super smart, as are the other guys. Jeff is a really, really
nice guy, well-meaning. He's a classic Googler. He wants people to be happy. He combines it with
brilliance so he can pull people together in a really great way. He's definitely not
a CEO type. I don't think he'd even want to be that.
Do you know if he still programs? Oh, yeah, he definitely programs. Jeff is an amazing
engineer today, right? And that has never changed. So it's really hard to compare Jeff to either of those two. I think that Jeff
leads through technology and building it himself and then pulling people in and inspiring them.
And so I think that that's one of the amazing things about Jeff. But each of these people,
you know, with their pros and cons, all are really inspirational and have achieved amazing things.
So I've been very fortunate to get to work with these guys.
For yourself, you've led large teams, you've done so many incredible, difficult technical
challenges.
Is there something you've picked up from them about how to lead?
Yeah, so I mean, I think leadership is really hard.
It really depends on what you're looking for there.
I think you really need to know what you're talking about.
So being grounded on the product,
on the technology, on the business, on the mission
is really important.
Understanding what people are looking for,
why they're there.
One of the most amazing things about Tesla
is the unifying vision, right?
People are there because they believe in clean energy
and electrification, all these kinds of things.
The other is to understand what really motivates people,
how to get the best people,
how to build a plan that actually can be executed, right?
There's so many different aspects of leadership
and it really depends on the time of the place,
the problems, you know, there's a lot of issues
that don't need to be solved.
And so if you focus on the right things and prioritize well, that can really help move things.
Two interesting things you mentioned.
One is you really have to know what you're talking about.
You've worked on a lot of very challenging technical things.
Sure. So I kind of assume you were born technically savvy, but assuming that's not the case,
how did you develop a technical expertise?
Like even at Google, you worked on,
I don't know how many projects,
but really challenging, very varied.
Compilers, TPUs, hardware, cloud stuff,
a bunch of different things.
The thing that I've become comfortable with as I've gained experience is being
okay with not knowing. And so a major part of leadership is actually it's not
about having the right answer, it's about getting the right answer. And so if
you're working in a team of amazing people, right? Many of these places,
many of these companies, all have amazing people.
It's the question of how do you get people together?
How do you get, how do you build trust?
How do you get people to open up?
How do you get people to, you know,
be vulnerable sometimes with an idea
that maybe isn't good enough,
but it's the start of something beautiful.
How do you, how do you provide an environment
where you're not just like top
down, that shall do the thing that I tell you to do, right? But you're encouraging people
to be part of the solution and providing a safe space where if you're not doing the right
thing, they're willing to tell you about it. So you're asking dumb questions? Oh, yeah,
dumb questions are my specialty. Yeah. Well, I've been in the hardware realm recently, and
I don't know much at all about how chips are designed.
I know a lot about using them. I know some of the principles at the technical level of this.
But it turns out that if you ask a lot of dumb questions, you get smarter really, really quick.
And when you're surrounded by people that want to teach and learn
themselves, it can be a beautiful thing.
So let's talk about programming languages if it's okay.
Sure. At the highest, absurd, philosophical level. Because, don't get romantic on
me. I will forever get romantic and torture you, I apologize. Why do programming
languages even matter? Okay. Well, thank you very much. You mean, why should
you care about any one programming
language, or why do we care about programming computers, or?
No, why do we care about programming language design, creating effective programming languages,
choosing one programming language versus another, and why we keep
struggling and improving through the evolution of these programming languages.
Sure, sure.
OK, so I mean, I think you have to come back
to what are we trying to do here?
So we have these beasts called computers
that are very good at specific kinds of things.
And we think it's useful to have them do it for us.
Now, you have this question of how best to express that,
because you have a human brain still that has an idea
in its head, and you want to achieve something.
So, well, there's lots of ways of doing this. You can go directly to the machine and speak assembly language, and then you can express directly what the computer understands. That's fine.
You can then have higher and higher and higher levels of abstraction up until machine learning and you're designing it in the neural net to do the work for you. The question is where, where along this way do you want to stop
and what benefits do you get out of doing so?
And so programming languages in general, you have C,
you have Fortran, Java, and Ada, Pascal, Swift,
you have lots of different things.
They'll have different trade-offs, and they're tackling
different parts of the problems.
Now, one of the things that most programming languages do is they're trying to make it so that you
have pretty basic things like portability across different hardware. So you've got, I'm going to run
on an Intel PC, I'm going to run on a RISC-V PC, I'm going to run on an ARM phone or something like that,
fine. I want to write one program and have it portable, and this is something that assembly
doesn't do. Now, when you start looking at the space
of programming languages, this is where I think it's fun,
because programming languages all have trade-offs,
and most people will walk up to them
and they look at the surface level of syntax
and say, oh, I like curly braces, or I like tabs,
or I like, you know, semicolons or not, or whatever, right?
Subjective, fairly subjective, very shallow things.
But programming languages, when done right, can actually be very powerful.
And the benefit they bring is expression.
Okay.
And if you look at programming languages, there's really kind of two different levels to them.
One is the down in the dirt, nuts and bolts of how do you get the computer to be efficient,
stuff like that, how they work, type systems,
compiler stuff, things like that.
The other is the UI.
And the UI for programming language
is really a design problem.
And a lot of people don't think about it that way.
And by the UI, you mean all that stuff with the braces
and the syntax? That stuff's the UI.
And UI means user interface.
And so what's really going on is it's the interface between the guts and the human.
And humans are hard, right?
Humans have feelings, they have things they like, they have things they don't like.
And a lot of people treat programming languages as though humans are just kind of abstract
creatures that cannot be predicted.
But it turns out that actually there is better and worse.
People can tell when a programming language is good or when it was an accident, right?
And one of the things with Swift in particular is that a tremendous amount of time by a
tremendous number of people have been put into really polishing and making it feel good.
But it also has really good nuts and bolts underneath it.
You said that Swift makes a lot of people feel good.
How do you get to that point?
So how do you predict that tens of thousands, hundreds of thousands of people are going
to enjoy using this user experience of this programming language?
Well, you can look at it in terms of better and worse.
So if you have to write lots of boilerplate or something like that,
you will feel unproductive.
And so that's a bad thing.
You can look at it in terms of safety.
If, like, C, for example, this is what's
called a memory unsafe language.
And so you get dangling pointers.
And you get all these kind of bugs
that then you have spent tons of time debugging.
And it's a real pain in the butt.
And you feel unproductive.
And so by subtracting these things from the experience
you get, you know, happier people.
But again, I keep interrupting.
I'm sorry, but it's so hard to deal with.
If you look at the people
that are most productive on Stack Overflow,
they have a set of priorities.
Yeah.
They may not always correlate perfectly with the experience
of the majority of users.
If you look at the most upvoted, quote-unquote correct answers on Stack Overflow, they usually
really sort of prioritize things like safe code, proper code, stable code, you know, that kind of stuff.
As opposed to, like, if I want to use goto statements in my BASIC, right, I want to use goto
statements.
Like, what if 99% of people want to use goto statements,
or just completely improper, you know, unsafe syntax?
I don't think that people actually, like if you boil it down,
you get below the surface level.
People don't actually care about go-to's
or if statements are things like this.
They care about achieving a goal.
So the real question is, I want to set up a web server
and I want to do a thing, whatever.
Like how quickly can I achieve that?
And so from a programming language perspective,
there's really two things that matter there. One is what libraries exist and then how quickly can you put it together
and what are the tools around that look like. When you want to build a library that's missing,
what do you do? This is where you see a huge divergence between worlds.
You look at Python, for example. Python is really good at assembling things, but it's not so great at building all the libraries.
And so you get, because of performance reasons, other things like this, is you get Python
layered on top of C, for example.
And that means that doing certain kinds of things, well, it doesn't really make sense to
do in Python.
And instead you do it in C. And then you wrap it, and then you're living in two
worlds, and two worlds
never is really great, because the tooling and the debugger don't work right,
and all these kinds of things.
Can you clarify a little bit what you mean by Python is not good at building libraries,
meaning it doesn't make any kind of...
Certain kinds of libraries.
No, but it's just the actual meaning of the sentence.
Yeah.
Meaning like it's not conducive to developers to come in and add libraries,
or is it the duality of the dance between Python and C?
Python's amazing. Python's great language. I did not mean to say that Python is bad for libraries.
What I meant to say is that Python is really good at assembling things
that you can write in Python, but there are other things, like if you want to build a machine learning framework,
you're not going to build a machine learning framework in Python because of performance, for example,
or you want GPU acceleration or things like this.
Instead, what you do is you write a bunch of C or C++ code or something like that,
and then you talk to it from Python.
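A minimal sketch of that "write it in C, talk to it from Python" layer, using the standard-library ctypes module to call a C function (here sqrt from the C math library). The library lookup is platform-dependent, and the libm.so.6 fallback assumes a typical glibc Linux system:

```python
import ctypes
import ctypes.util

# Find the C math library. find_library is platform-dependent;
# the "libm.so.6" fallback assumes a glibc Linux system.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# ctypes assumes int arguments/returns by default, so we must
# declare the real C signature: double sqrt(double).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```

This is exactly the two-worlds situation being described: the fast code lives in C, Python assembles it, and the debugger only sees one side of the boundary.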
And so this is because of decisions that were made in the Python design
and those decisions have other counterbalancing forces.
But the trick when you start looking at this from a programming language perspective,
did you start to say, okay, cool.
How do I build this catalog of libraries that are really powerful?
And how do I make it so that then they can be assembled into ways that feel good
and they generally work the first time because when you're talking about building a thing,
you have to include the debugging, the fixing, the turnaround cycle,
the development cycle, all that kind of stuff into the process of building the thing.
It's not just about pounding out the code.
And so this is where things like catching bugs
at compile time is valuable, for example.
But if you dive into the details in this,
Swift, for example, has certain things like value semantics,
which is this fancy way of saying that when you treat
a variable like a value,
it acts like a mathematical object would.
Okay, so you have used PyTorch a little bit.
In PyTorch, you have tensors.
Tensors are an N-dimensional grid of numbers.
Very simple, you can do plus and other operators on them.
It's all totally fine.
But why do you need to clone a tensor sometimes? Have you ever run into that? Yeah. Okay. And so why is that? Why
do you need to clone a tensor? It's the usual object thing that's in Python. So
in Python and just like with Java and many other languages, this isn't unique to
Python. In Python, it has a thing called reference semantics, which is the
nerdy way of explaining this. And what that means is you actually have a pointer to a thing instead of the thing. Now this is due to a bunch of implementation
details that you don't want to go into. But in Swift, you have this thing called value
semantics. And so when you have a tensor in Swift, it is a value. If you copy it, it looks
like you have a unique copy. And if you go change one of those copies, then it doesn't update the other one because you
just made a copy of this thing.
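The reference semantics being described can be sketched in a few lines of Python using plain lists; a PyTorch tensor behaves analogously, which is why .clone() exists there. The variable names are just illustrative:

```python
import copy

a = [1, 2, 3]      # like a tensor, a Python list is a reference type
b = a              # b points at the *same* object; no data is copied
b.append(4)
print(a)           # [1, 2, 3, 4] -- the "copy" changed a underneath us

# To get value-like behavior you must copy explicitly,
# just as you call .clone() on a PyTorch tensor:
c = copy.copy(a)   # a real (shallow) copy
c.append(5)
print(a)           # [1, 2, 3, 4] -- unchanged this time
print(c)           # [1, 2, 3, 4, 5]
```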
Right.
So that's, like, highly error-prone, in at least computer science and math-centric disciplines,
about Python: the thing you would expect to behave like math
doesn't behave like math. Like math? It doesn't behave like math, and in fact, quietly,
it doesn't behave like math, and then
can ruin the entirety of your math.
Exactly.
Well, and then it puts you in debugging land again.
Yeah.
Now, you just want to get something done,
and you're like, wait a second, where do I need to put clone?
In what level of the stack, which is very complicated,
which I thought I was using somebody's library,
and now I need to understand it to know where to clone a thing.
And hard to debug, by the way.
Exactly.
And so this is where programming languages really matter.
So in Swift, having value semantics,
so that both you get the benefit of math working like math,
but also the efficiency that comes with certain advantages
there, certain implementation details
that really benefit you as a programmer.
So once you've identified value semantics,
how do you know that a thing should be treated like a value?
Yeah, so Swift has a pretty strong culture and good language
support for defining values.
And so if you have an array, so tensors are one example
that the machine learning folks are very used to.
Just think about arrays.
Same thing, where you have an array, you put, you create an array, you put two or three or four things into it, and then you pass it off to another function.
What happens if that function adds some more things to it? Well, you'll see it on the side that you passed it in, right? This is called reference semantics. Now, what if you pass an array off to a function,
and it squirrels it away in some dictionary
or some other data structure somewhere, right?
Well, it thought that you just handed it that array.
Then you return back, and that reference to that array
still exists in the caller,
and they go and put more stuff in it, right?
The person you handed it off to
may have thought they had
the only reference to that, and so they didn't know this was going to change underneath
the covers. And so this is where you end up having to do clone. Like, I was passed a thing,
I'm not sure if I have the only version of it, so now I have to clone it. So what value semantics
does is it allows you to say... so in Swift, it defaults to value semantics.
It defaults to value semantics.
And then, because most things end up being like values, it makes sense for that to
be the default.
And one of the important things about that is that arrays and dictionaries and all these
other collections that are aggregations of other things also have value semantics.
And so when you pass this around to different parts of your program, you don't have to do
these defensive copies.
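The defensive-copy pattern being contrasted here is easy to show in Python, where reference semantics make it necessary. The register function and registry list are purely illustrative names:

```python
# A function that "squirrels away" the list it is given:
registry = []

def register(xs):
    registry.append(xs)      # keeps a reference, not a copy

nums = [1, 2]
register(nums)
nums.append(3)               # caller keeps mutating "its" list...
print(registry[0])           # [1, 2, 3] -- the stashed list changed too

# The defensive fix in a reference-semantics world:
registry.clear()
register(list(nums))         # copy at the boundary, whether needed or not
nums.append(4)
print(registry[0])           # [1, 2, 3] -- isolated now, at the cost of a copy
```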
And so this is great for two sides, right?
This is great because you define away the bug, which is a big deal for productivity,
the number one thing most people care about.
But it's also good for performance, because when you're doing a clone, so you pass the
array down to the thing, it's like, I don't know if anybody else has it, I have to clone it.
Well, you just did a copy of a bunch of data.
It could be big.
And then it could be the thing that called you
is not keeping track of the old thing.
So you just made a copy of it,
and you may not have had to.
And so the way the value semantics work in Swift
is that it uses this thing called copy-on-write.
Which means that you get the benefit of safety
and performance.
And it has another special trick, because if you think about certain languages, like Java, for example,
they have immutable strings, and so what they are trying to do is provide value semantics by having pure immutability.
Functional languages have pure immutability in lots of different places, and this provides a much safer model, and it provides value semantics.
The problem with this is if you have immutability,
everything is expensive. Everything requires a copy. For example, in Java, if you have a string
X and a string Y and you append them together, you have to allocate a new string to hold XY
if they're immutable. Well, strings in Java are immutable. And there are optimizations for short ones,
and it's complicated, but generally, think about them
as a separate allocation.
And so when you append them together,
you have to go allocate a third thing
because somebody might have a pointer
to either of the other ones, right?
And you can't go change them.
So you have to go allocate a third thing.
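Python's strings happen to behave like Java's here, so the allocation cost of immutability is easy to observe directly. This is a small sketch of the general idea, not of either language's specific optimizations:

```python
x = "hello"
y = x               # safe to share: strings can never change in place
z = x + " world"    # appending must allocate a third string
print(y is x)       # True  -- sharing, no copy was made
print(z is x)       # False -- a brand-new allocation
print(z)            # hello world
```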
Because of the beauty of how the Swift value semantics system works out,
if you have a string
in Swift and you say, hey, put in X, right, and then you say, append on Y, Z, W, W, it knows
that there's only one reference to that. And so it can do an in-place update. And so you're
not allocating tons of stuff on the side; you don't have all these problems.
When you pass it off, you can know you have the only reference. If you pass it off to multiple different people,
but nobody changes it, they can all share the same thing.
So you get a lot of the benefit of a purely immutable design,
and so you get a really nice sweet spot.
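The copy-on-write trick can be sketched in Python. This is a toy stand-in for what Swift actually does with isKnownUniquelyReferenced, and the uniqueness check here relies on CPython's reference counting, so it is illustrative rather than portable:

```python
import sys

class CowArray:
    """Toy copy-on-write container, loosely mimicking how Swift's
    Array gives value semantics without eager copies."""

    def __init__(self, items=()):
        self._storage = list(items)

    def copy(self):
        # O(1) "copy": the new value shares storage with the old one.
        other = CowArray()
        other._storage = self._storage
        return other

    def append(self, x):
        # Analogous to Swift's isKnownUniquelyReferenced check.
        # Refcount 2 means unique: our attribute + getrefcount's argument.
        if sys.getrefcount(self._storage) > 2:
            self._storage = list(self._storage)   # real copy, only now
        self._storage.append(x)

    def items(self):
        return list(self._storage)

a = CowArray([1, 2, 3])
b = a.copy()              # instant: storage is shared
b.append(4)               # b detaches here; a is unaffected
print(a.items())          # [1, 2, 3]
print(b.items())          # [1, 2, 3, 4]
```

Copies are free until someone writes, and a writer that holds the only reference mutates in place, which is the sweet spot described above.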
That I haven't seen in other languages.
Yeah, that's interesting.
I thought there was going to be a philosophical narrative
here that you're going to have to pay a cost for it.
It sounds like value semantics is beneficial for ease of debugging, for minimizing
the risk of errors, like bringing the symptom
of the error closer
to the source of the error, however you say that.
But you're saying there's not a performance cost either
if you implement it correctly.
Yeah, well, so there's trade-offs with everything.
And so if you are doing very low-level stuff,
then sometimes you can notice a cost,
but then what you're doing is you're saying,
what is the right default?
So coming back to user interface,
when you talk about programming languages,
one of the major things that Swift does
that makes people love it, that is not obvious
when it comes to designing a language,
is this UI principle of progressive disclosure of complexity?
Okay, so Swift, like many languages, is very powerful.
The question is, when do you have to learn the power as a user?
So Swift, like Python, allows you to start with like
print hello world, right?
Certain other languages start with like public static void main
class, like all the ceremony, right?
And so you go to teach a new person,
hey, welcome to this new thing.
Let's talk about public access control, classes.
Wait, what's that?
String, System.out.println, like packages, like,
gah!
Right.
And so instead, you take this and you say,
hey, we need packages, modules.
We need powerful things like classes.
We need data structures.
We need like all these things.
The question is, how do you factor the complexity, and how do you make it so that the normal-case scenario is
that you're dealing with things that work the right way and give you good
performance by default? But then as a power user, if you want to dive down to it, you have full
C performance, full control over low-level pointers. You can call malloc if you want to call malloc.
This is not recommended on the first page of every tutorial,
but it's actually really important
when you want to get work done.
And so being able to have that is really the design
in programming language design.
And design is really, really hard. It's something that, I think, a lot of people kind of outside of UI, again, just think is subjective. Like there's nothing, you know, it's just curly braces or whatever; it's just somebody's preference. But actually, good design is something that you can feel.
And how many people are involved with good design? So if we look at Swift, but look at it historically, I mean, this might touch, like, there's almost a Steve Jobs question: how much dictatorial decision-making is required versus collaborative? And we'll talk about how all that can go wrong or right.
Yeah, well, I can't speak to all design everywhere in general. So the way it works with Swift is that there's a core team. And the core team is six or seven people, ish, something like that, people that have been working with Swift since the very early days.
And by early days, it's not that long ago.
Okay, yeah. So it became public in 2014, so it's been six years public now. But that's enough time that there's a story arc there.
Okay. And mistakes have been made that then get fixed, and you learn something, you know. And so what the core team does is it provides continuity. And so you want to have, okay, well, there's a big hole that we want to fill. We know we want to fill it, so don't do other things that invade that space until we fill the hole. Right? There's a boulder that's missing here. We will do that boulder, even though it's not today. Keep out of that space.
And the whole team remembers the myth of the boulder that's there.
Yeah. Yeah. There's a general sense of what the future looks like in broad strokes and
a shared understanding of that combined with a shared understanding of what has
happened in the past that worked out well and didn't work out well. The next level out is you have
the what's called the Swift Evolution community and you've got in that case hundreds of people that
really care passionately about the way Swift evolves and that's like an amazing thing to again
the core team doesn't necessarily need to come up with all the good ideas. You've got hundreds of people out there that care about something, and they come up with really good ideas too. That provides this, like, rock tumbler for ideas.
And so the evolution process is a lot of people in a Discourse forum that are hashing it out and trying to talk about, okay, well, should we go left or right, or if we did this, it would be good. And, you know, here you're talking about hundreds of people, so you're not going to get consensus, necessarily. Not obvious consensus.
And so there's a proposal process that then allows
the core team and the community to work this out.
And what the core team does is it aims to get consensus
out of the community and provide guardrails,
but also provide long-term, make sure
we're going the right direction,
kind of things.
So does that group represent, like, how much people will love the user interface? Like, do you think they're able to capture that?
Well, I mean, it's something we talk about a lot, something we care about. How well we do that is up for debate, but I think that we've done pretty well so far.
Is the beginner in mind?
Yeah, like you said, the progressive disclosure.
Yeah. So we care a lot about that, a lot about power, a lot about efficiency, a lot about the things that matter to good design.
And you have to figure out a way to kind of work your way through that.
And so if you think about, like, a language I love is Lisp. Probably still, because I use Emacs, but I haven't done any serious work in Lisp. But it has a ridiculous amount of parentheses.
Yeah.
I've also, you know, with Java and C++, the braces. I enjoyed the comfort of being between braces.
Yeah.
And Python, which has really decided to drop all that. And the last thing to me, as a design, if I was a language designer, God forbid, is I would be very surprised that Python with no braces would nevertheless somehow be comforting also.
So like, I can see arguments for all of these.
So look at this, this is evidence
that it's not about braces versus tabs.
Right.
Exactly.
You're good.
It's a good point.
Right.
So, like, you know, there's evidence there. But, see, it's one of the most argued-about things.
Oh, yeah.
Of course, just like tabs and spaces, which, I mean, there's one obvious right answer, but it doesn't actually matter.
What's that?
Let's not, let's not. Come on, we're friends.
Come on, don't make me try to do it in here. People are going to tune out.
So are you able to identify things that don't really matter for the experience?
Well, it's always really hard. So the easy decisions are easy, right? I mean, fine, those are not the interesting ones.
The hard ones are the ones that are most interesting, right?
The hard ones are the places where, hey, we want to do a thing. Everybody agrees we should do it.
There's one proposal on the table, but it has all these bad things associated with it.
Well, okay, what are we going to do about that?
Do we just take it?
Do we delay it?
Do we say, hey, well, maybe there's this other feature that if we do that first, this will work out better? If we do this, are we painting ourselves into a corner? And so this is where, again, having that core team of people that has some continuity, has perspective, has some of the historical understanding, is really valuable. Because it's not just, like, one brain; you get the power of multiple people coming together to make good decisions, and then you get the best out of all these people, and you also can harness the community around it.
What about the decision of whether in Python having one type or having strict typing?
Yeah, let's talk about this.
I like how you put that, by the way. So many people would say that Python doesn't have types.
Yeah, but you're...
I've listened to you enough to where, I'm a fan of yours, and I've listened to way too many podcasts.
Yeah. So I would argue that Python has one type.
So like when you import Python into Swift,
which by the way, works really well,
you have everything comes in as a Python object.
Now, there are trade-offs here, because it depends on what you're optimizing for.
And Python is this super successful language
for a really good reason, because it has one type.
You get duck typing for free and things like this.
But also, you're making it very easy to pound out code on one hand, but you're also making it very easy to introduce complicated bugs that are hard to debug. You pass a string into something that expects an integer, and it doesn't immediately die. It goes all the way down the stack trace, and you find yourself in the middle of some code that you really didn't want to know anything about, and it blows up, and you're just saying, well, what did I do wrong? Right? And so types are good and bad, and they have trade-offs. They're good for performance and certain other things, depending on where you're coming from.
But it's all about trade-offs.
And so this is what design is about weighing trade-offs and trying to understand the
ramifications of the things that you're weighing, like types or not, or one type or many types.
But also within many types, how powerful do you make that type system
is another very complicated question
with lots of trade-offs.
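A minimal Python sketch of that trade-off: duck typing costs nothing up front, but a type mistake surfaces far from where it was made. The function name here is invented purely for illustration.

```python
def total_length(items):
    # Duck typing: anything whose elements support len() works --
    # lists of strings, tuples of lists, and so on, with no declarations.
    return sum(len(x) for x in items)

print(total_length(["ab", "cde"]))   # 5

# But pass an int where a string was expected, and the failure shows up
# deep inside the call, not at the line where the mistake was made:
try:
    total_length(["ab", 7])
except TypeError as err:
    print("blew up later:", err)
```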
It's very interesting, by the way.
But that's like one dimension.
And there's a bunch of other dimensions,
JIT-compiled versus statically compiled, garbage-collected versus reference-counted versus manual memory management, like, all these different trade-offs, and how you balance them is what makes a programming language good.
Okay, and concurrency.
Yep.
So in all those things, I guess, when you're designing the language, you also think of how that's going to get compiled down, if you care about performance. Yeah.
Well, and go back to Lisp, right? So Lisp, also, I would say JavaScript is another example of a very simple language.
Right.
And so, I also love Lisp. I don't use it as much as maybe you do.
Yeah. No, I think we're both, like everyone who loves Lisp. But it's like, I don't know, you know, I love Frank Sinatra, but how often do I seriously listen to Frank Sinatra?
Sure.
Sure.
But you look at that or you look at JavaScript, which is another very different, but relatively
simple language.
And there are certain things that don't exist in the language, but there is inherent complexity
to the problems that we're trying to model.
And so what happens to the complexity?
In the case of both of them, for example, you say, well, what about large scale software development?
Well, you need something like packages.
Neither language has a language affordance for packages. And so what you get is patterns. You get things like NPM. You get things like these ecosystems that get built around them.
And I'm a believer that if you don't model at least the most important inherent complexity in the language, then what ends up happening is that complexity gets pushed elsewhere. And when it gets pushed elsewhere,
Sometimes that's great because often building things as libraries is very flexible and very
powerful and allows you to evolve and things like that.
But often it leads to a lot of unnecessary divergence and fragmentation.
And when that happens, you just get kind of a mess.
And so the question is how do you balance that? Don't put too much stuff in the language because
that's really expensive and makes things complicated, but how do you model enough of the inherent
complexity of the problem that you provide the framework and the structure for people to think about?
Well, so the key thing to think about with programming languages, when you think about what a programming language is, therefore, is that it's about making a human more productive.
Right. And so, like, there's an old, I think it's a Steve Jobs quote, about the bicycle for the mind.
Right.
You can definitely walk, but you'll get there a lot faster if you can bicycle on your way.
Yeah.
It's basically, wow, that's really interesting way to think about it.
By raising the level of abstraction, now you can fit more things in your head.
By being able to just directly leverage somebody's library, you can now get something done
quickly.
In the case of SwiftUI, this is a new framework that Apple has released recently for doing UI programming, and it has this declarative programming model, which defines away entire classes of bugs. It builds on value semantics and many other nice Swift things. And what this does is it allows you to just get way more done with way less code, and now your productivity as a developer is much higher.
And so that's really what programming languages should be about,
is it's not about tabs versus spaces or curly braces or whatever.
It's about how productive do you make the person.
And you can only see that when you have libraries that were built
with the right intention that the language was designed for.
And with Swift, I think we're still a little bit early.
But Swift UI and many other things that are coming out now are really showing that, and I think that they're opening people's
eyes.
It's kind of interesting to think about how the knowledge of how good the bicycle is,
how people learn about that.
So I've used C++. Now, this is not going to be a trash-talking session about C++, but I've used C++ for a really long time.
I'm going to go there if you want.
I have the scars.
I feel like I spent many years without realizing, like, there are languages that could, for my particular lifestyle, brain style, thinking style, there's a language that could make me a lot more productive in the debugging stage, in the development stage, and thinking, like the bicycle for the mind, that I could fit more stuff into my head.
Yeah, and Python is a great example of that, right? I mean, a machine learning framework in Python is a great example of that. It's just a very high abstraction level.
And so you can be thinking about things on a very high level, algorithmic level,
instead of thinking about, okay, well, am I copying this tensor to a GPU or not?
Right? It's not what you want to be thinking about.
And so, as I was telling you, I guess the question I had is, how does a person like me, or people in general, discover more productive languages? As I've been telling you offline, I've been looking for, like, a project to work on in Swift, so I can really try it out. I mean, my intuition was, like, doing a hello world is not going to get me there, to get me to experience
the power of the language.
You need a few weeks to change your metabolism.
Exactly. That's beautifully put. That's one of the problems with people with diets.
I'm actually currently, to go on a small tangent here, I've been recently eating only meat.
Okay.
Okay, so most people would say that's, like, horribly unhealthy or whatever. You have, like, a million reasons why, whatever the science is, it just doesn't sound right.
Well, so back when I was in college, we did the Atkins diet.
That was a thing.
Similar.
But you have to always give these things a chance.
I mean, I was not dieting, but it's just the thing that I like. Personally, if I eat meat, just that and nothing else, I can be super focused, more focused than usual.
I just feel great.
I've been running a lot,
doing pushups and pull ups and so on.
And Python is similar in that sense for me.
Where are you going with this?
No, no.
I mean, literally, I just, I had, like, a stupid smile on my face when I first started using Python.
I could code up really quick things. Like, I would see the world, I'd be empowered to write a script to, you know, do some basic data processing, to rename files on my computer.
Yeah, right. And, like, Perl didn't do that for me.
Yeah.
I mean, kind of a little bit.
Well, and again, like none of these are about which is best or something like that, but
there's definitely better and worse here.
But it clicks.
Right.
Well, yeah.
And if you look at Perl, for example, you get bogged down in scalars versus arrays versus hashes versus typeglobs, and all that kind of stuff.
And Python's like, yeah, let's not do this.
And some of it is debugging.
Everyone has different priorities.
But for me, it's, could I create systems for myself
that empower me to debug quickly?
Like I've always been a big fan, even just crude asserts.
Like always stating things that should be true.
Which, in Python, I found myself doing more, because of types, all these kinds of stuff.
Well, you could think of types in a programming language as being kind of an assert.
Yeah.
That gets checked at compile time, right?
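One way to see that point, in Python terms: a type annotation and a runtime assert state the same kind of invariant, but a static checker such as mypy would verify the annotation before the program ever runs. The function here is a made-up example.

```python
def area(width: float, height: float) -> float:
    # The annotations act like machine-checkable asserts: a static checker
    # such as mypy would flag area("3", 4) before the program runs.
    # The assert below is the runtime cousin of the same idea.
    assert width >= 0 and height >= 0
    return width * height

print(area(3.0, 4.0))  # 12.0
```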
So how do you learn a new language? How do people learn new things, right? This is hard. People don't like to change. People generally don't like the change around them either. And so we're all very slow to adapt and change, and usually there's a catalyst that's required to force yourself over this. So for learning a programming language, it really comes down to finding an excuse, like, build a thing that the language is actually good for, that the ecosystem is ready for.
And so if you were to write an iOS app, for example, that would be the easy case. Obviously, you would use Swift for that, right?
Android?
So, Swift runs on Android.
Oh, does it?
Oh yeah, yeah, Swift runs in lots of places. So, okay, Swift is built on top of LLVM, and LLVM runs everywhere. LLVM, for example, builds the Android kernel.
Oh, I didn't realize that.
Yeah. So Swift is very portable. It runs on Windows; it runs on lots of different things.
And Swift, aside from SwiftUI, there's a thing called UIKit. So can I build an app with Swift?
Well, so that's the thing. The ecosystem is what matters there.
So, SwiftUI and UIKit are Apple technologies.
Okay, got it.
And so, they happen to, like, SwiftUI happens to be written
in Swift, but it's an Apple proprietary framework
that Apple loves and wants to keep on its platform,
which makes total sense.
You go to Android and you don't have that library.
Yeah.
Android has a different ecosystem of things that hasn't been built out and doesn't work as
well with Swift.
So, you can totally use Swift to do arithmetic and things like this, but building a UI with
Swift on Android is not a great experience right now.
So if I wanted to learn Swift, what's the... I mean, one practical version of that is Swift for TensorFlow, for example.
And one of the inspiring things for me
with both TensorFlow and PyTorch,
is how quickly the community can, like, switch between different libraries. Like, you could see some of the community switching to PyTorch, but it's very easy to see it switching back.
And then TensorFlow is really stepping up its game.
And then there's no reason why.
I think the way it works is, basically, there has to be one GitHub repo, like, one paper steps up. It gets people excited. And they're like, ah, I have to learn this, Swift for TensorFlow, say, and then they learn, and they fall in love with it. That's what happened with PyTorch.
There has to be a reason, a catalyst.
And so, and there, I mean, people don't like change,
but it turns out that once you've worked with one
or two program languages, the basics are pretty similar.
And so, one of the fun things about learning programming languages,
even maybe Lisp, I don't know if you
agree with this, is that when you start doing that, you start
learning new things.
You have a new way to do things and you're forced to do them. And
that forces you to explore and to put you in learning mode. And
when you get in learning mode, your mind kind of opens a little
bit. And you can you can see things in a new way, even when you
go back to the old place.
Right. Yeah, so Lisp was functional.
Yeah.
But I wish there was a kind of window, maybe you can tell me if there is. There you go, this is a question to ask: what is the most beautiful feature in a programming language? Before I ask it, let me say, like, with Python, I remember I saw list comprehension, and it was, like, when I really took it in,
I don't know, I just loved it.
It was like fun to do that kind of...
Something about it, to be able to filter through a list and to create a new list, all in a single line, was elegant. It could all fit into my head, and it just made me fall in love with the language.
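For instance, filtering and transforming a list in a single readable expression, versus the loop it replaces:

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Filter the even numbers and square them, all in one line:
evens_squared = [n * n for n in nums if n % 2 == 0]
print(evens_squared)  # [16, 4, 36]

# The equivalent loop needs four lines and a mutable accumulator:
result = []
for n in nums:
    if n % 2 == 0:
        result.append(n * n)
```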
So let me ask you a question. What is the most beautiful feature in a programming language that you've ever encountered, in Swift maybe, and then outside of Swift?
I think the thing that I like the most from a programming language...
So I think the thing you have to think about
with a programming language, again, what is the goal?
You're trying to get people to get things done quickly.
And so you need libraries, you need high quality libraries,
and then you need a user base around them
that can assemble them and do cool things with them.
And so to me, the question is, what enables high quality
libraries?
Okay.
Yeah.
And there's a huge divide in the world between languages that enable high-quality libraries versus the ones that put special stuff in the language.
So, programming languages that enable high-quality libraries.
Right.
So what I mean by that is expressive libraries
that then feel like a natural integrated part
of the language itself.
So an example of this in Swift is that Int and Float and Array and String, things like this.
These are all part of the library.
Int is not hard coded into Swift.
And so what that means is that, because Int is just a library thing defined in the standard library, along with String and Array and all the other things that come with the standard library, well, hopefully you do like Int, but any language features that you needed to define Int, you can also use in your own types.
So if you wanted to define a quaternion or something like this, right? Well, it doesn't come in the standard library. There's a special set of people that care a lot about this, but those people are also important. It's not about classism, right? It's not about the people who care about Int and Float being more important than the people who care about quaternions. And so to me, the beautiful thing about programming languages is when you allow those communities
to build high quality libraries that feel native,
that feel like they're built into the compiler
without having to be.
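Swift's version of this lives in its standard library, but the idea can be hedged into a Python analogue: a user-defined quaternion type, deliberately minimal and incomplete here (only addition), adopts the same `+` operator that built-in numbers use, so the library type feels native.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    # An illustrative sketch -- a real quaternion type would also need
    # multiplication, conjugation, a norm, and so on.
    w: float
    x: float
    y: float
    z: float

    def __add__(self, other):
        # Defining __add__ gives this library type the same + syntax
        # that the built-in int and float enjoy.
        return Quaternion(self.w + other.w, self.x + other.x,
                          self.y + other.y, self.z + other.z)

q = Quaternion(1, 0, 0, 0) + Quaternion(0, 1, 0, 0)
print(q)  # Quaternion(w=1, x=1, y=0, z=0)
```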
What does it mean for Int to not be hard-coded in? Like, what is an Int?
Okay, Int is just an integer, in this case, like a 64-bit integer or something like this.
But, so, like, the 64-bit is hard-coded, or no?
No, none of that's hard-coded. So Int, if you go look at how it's implemented, it's just a struct in Swift. And so it's a struct, and then how do you add two structs? Well, you define plus. And so you can define plus on Int. Well, you can define plus on your thing too. You can define, on Int, like, an isOdd method or something like that. And so, yeah, you can add methods on the things.
Yeah. So you can define operators, like, how it behaves. That's useful when there's something about the language which enables others to create libraries which are not hacky.
Yeah, they feel native. And so one of the best examples of this is Lisp, right? Because in Lisp, like, all the libraries are basically part of the language, right? You write term-rewriting systems and things like this.
And so can you, as a counter-example,
provide what makes it difficult to write a library that's native?
Is it, like, the Python C interface?
Well, so one example, I'll give you two examples: Java and C++, or Java and C#. They both allow you to define your own types, but int is hard-coded in the language.
Okay.
Well, why?
Well, in Java, for example, coming back to this whole reference-semantics-versus-value-semantics thing, int gets passed around by value.
Yeah.
But if you make, like, a pair or something like that, a complex number, right? It's a class in Java, and now it gets passed around by reference, by pointer. And so now you lose value semantics, right? You lost math. Okay, well, that's not great, right? If you can do something with int, why can't I do it with my type?
Yeah.
Right. So that's the negative side. The thing I find beautiful is when you can solve that, when you can have full expressivity,
where you as a user of the language
have as much or almost as much power as the people
who implemented all the standard built-in stuff,
because what that enables is that enables
truly beautiful libraries.
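A hedged Python illustration of the Java point above: immutable ints behave like values, while instances of a user-defined class are shared by reference, so a callee can mutate the caller's object. The class and function names are invented for the example.

```python
class Pair:
    # A mutable user-defined type, analogous to a class in Java.
    def __init__(self, a, b):
        self.a, self.b = a, b

def zero_first(p):
    p.a = 0  # mutates the caller's object through the shared reference

x = 5
y = x
y += 1
print(x)     # 5 -- the int behaves like a value; x is untouched

p = Pair(1, 2)
zero_first(p)
print(p.a)   # 0 -- the caller's Pair was changed out from under it
```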
You know, it's kind of weird, because I've gotten used to that. That's, I guess, another aspect of programming language design. You have to think, you know, the old first-principles thinking: like, why are we doing it this way?
By the way, I mean, I remember, because I was thinking about the walrus operator, and I'll ask you about it later, but it hit me that, like, the equal sign for assignment...
Yeah.
Like, why are we using the equal sign for assignment?
It's wrong.
And that's not the only solution.
Right.
So if you look at Pascal, they use colon-equals for assignment and equals for equality, and they use less-than-greater-than instead of not-equals.
Yeah.
Like, there are other answers here.
So, but, like, yeah, I'll ask you about it. But how do you then decide to break convention, to say, you know what, everybody's doing it wrong, we're gonna do it right?
We're gonna do it right.
Yeah.
So it's like an ROI, like return on investment trade off, right?
So if you do something weird, let's just say, like, colon-equals instead of equals for assignment,
That would be weird with today's aesthetic.
And so you'd say, cool, this is theoretically better.
But is it better in which ways?
What do I get out of that?
Do I define away a class of bugs?
Well, one of the classes of bugs that C has is that you can say if x equals, without equal-equals, if x equals y, right? Well, it turns out you can solve that problem in lots of ways. Clang, for example, GCC, all these compilers will detect that as a likely bug and produce a warning.
Do they?
Yeah.
I feel like they didn't, or Clang does, but GCC didn't.
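Python's answer to that C bug class can be checked directly: plain assignment is a statement, not an expression, so the classic typo is rejected outright rather than merely warned about. The snippet strings below are illustrative.

```python
# In C, `if (x = y)` compiles and silently assigns.  In Python, plain
# assignment is a statement, not an expression, so the same typo is a
# SyntaxError -- the whole bug class is defined away.
try:
    compile("if x = 1: pass", "<example>", "exec")
    ok = False
except SyntaxError:
    ok = True
print("rejected as a syntax error:", ok)  # True
```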
It's, like, one of the important things about programming language design is, like, you're literally creating suffering in the world. Okay? Like, I feel like, I mean, one way to see it is the bicycle for the mind, but the other way is, like, minimizing suffering.
Well, you have to decide if it's worth it, right?
And so let's come back to that.
But if you look at this, and again, this is where there's a lot of detail that goes into each of these things, equals in C returns a value. That's messed up. That allows you to say x equals y equals z, like, that works.
Is it messed up?
Well, most people think it's messed up, I think.
It is. By messed up, what I mean is, it is very rarely used for good, and it's often used for bugs.
Yeah.
Right. And so that's a good definition of messed up. You could say, you know, in hindsight, this was not such a great idea, right?
Now, one of the things with Swift that is really powerful, and one of the reasons it's actually good, versus it just being full of good ideas, is that when we launched Swift 1, we announced that it was public, people could use it, people could build apps, but it was going to change and break. Okay? When Swift 2 came out, we said, hey, it's open source, and there's this open process which
people can help evolve and direct the language. So the community at large, like Swift users can now help shape
the language as it is.
And what happened is that part of that process
is a lot of really bad mistakes got taken out.
So for example, Swift used to have the C-style plus plus
and minus minus operators.
Like what does it mean when you put it before or versus after?
Right. Well, that got cargo-culted from C into Swift, or really, I might...
What's cargo-culted?
Cargo-culted means brought forward without really considering it.
Okay.
This is maybe not the most PC term, but...
I'll have to look it up in Urban Dictionary.
Yeah. So it got pulled into Swift without very good consideration.
And we went through this process, and one of the first things that got ripped out was plus-plus and minus-minus, because they lead to confusion. They have very low value over saying, you know, x plus-equals one, and x plus-equals one is way more clear. And so when you're optimizing for teachability and clarity and bugs, in this multidimensional space that you're looking at, things like that really matter. And so being first-principles on where you're coming from and what you're trying to achieve, and being anchored on the objective, is really important.
Well, let me ask you about, sort of... this podcast isn't about information, it's about drama.
Let me talk to you about some drama. So you mentioned Pascal and colon-equals. There's something called the walrus operator, and Python 3.8 added the walrus operator. And the reason I think it's interesting is not just because of the feature. It has the same kind of expression feature you mentioned in C, that it returns the value of the assignment, and then you can comment on that in general. But on the other side of it, it's also the thing that toppled the dictator. It finally drove Guido to step down as the BDFL, with all the toxicity of the community.
So maybe, what do you think about the walrus operator in Python? Is there an equivalent thing in Swift that really stress-tested the community? And then, on the flip side, what do you think about Guido stepping down over it?
Yeah, well, if I look past the details of the walrus operator, one of the things that makes it most polarizing is that it's syntactic sugar.
What do you mean by syntactic sugar?
It means you can take something that already exists
in language and you can express it in a more concise way.
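Concretely, the walrus operator (Python 3.8+) is sugar for a bind-then-test pattern that was already expressible in two statements. A small sketch of both spellings of the same logic:

```python
import re

text = "order 42 shipped"

# Without the walrus operator: bind, then test, in two statements.
m = re.search(r"\d+", text)
if m:
    print("found:", m.group())   # found: 42

# With it: the same logic, with the binding tucked inside the condition.
if (m := re.search(r"\d+", text)):
    print("found:", m.group())   # found: 42
```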
So, okay, I'm gonna play devil's advocate. So, this is great. Is that an objective or subjective statement? Like, can you argue that basically anything is syntactic sugar or not?
No, not everything is syntactic sugar. So, for example, the type system: like, do you have classes, do you have types or not, right? One type versus many types is not a syntactic sugar question. And so if you say, I want to have the ability to define types, I have to have all this language mechanics to define classes, and, oh, now I have to have inheritance, and I have all this stuff. That's just making my language more complicated. That's not about sugaring it.
Swift has sugar. So, like, Swift has this thing called if-let, and it has various operators that are used to concisify specific use cases.
So the problem with syntactic sugar,
when you're talking about,
hey, I have a thing that takes a lot to write
and I have a new way to write it.
You have this horrible trade-off, which becomes almost completely subjective, which is, how often does this happen, and does it matter? One of the things that is true about human psychology, particularly when you talk about introducing a new thing, is that people overestimate the burden of learning something, and so it looks foreign when you haven't gotten used to it.
But if it was there from the beginning, of course, it's just part of Python.
Like, unquestionably, like, this is just the thing I know. And it's not a new thing that you're worried about learning.
It's just part of the deal. Now, with Guido, I don't know Guido well.
Have you crossed paths much?
Yeah, I've met him a couple of times, but I don't know him well.
But the sense that I got out of that whole dynamic was that he had put not just the decision-maker weight on his shoulders, but it was so tied to his personal identity that he took it personally, and he felt the need, and he kind of put himself in the situation of being the person, instead of building a base of support around him. I mean, this is probably not quite literally true, but there was too much concentrated on him, right?
And so, and that can wear you down.
Well, yeah. Particularly because people then say, Guido, you're a horrible person, I hate this thing, blah, blah, blah. And sure, it's, like, maybe 1% of the community that's doing that, but Python's got a big community, and 1% of millions of people is a lot of hate mail. And that, just from the human factor, will just wear on you.
To clarify, it looked, from just what I saw in the messaging, that, let's not look at the million Python users, but at the Python core developers, it feels like the majority, the big majority, on a vote were opposed to it.
Okay, I'm not that close to it.
So I don't know.
So, okay, so the situation is, like, literally, yeah, I mean, the majority of the core developers were against it.
Okay.
And they weren't even, like, against it. Well, they were against it, but the against-it wasn't, this is a bad idea. They were more like, we don't see why this is a good idea.
And what that results in is there's a stalling feeling.
Like, you just slow things down.
Now, from my perspective, you could argue this,
and I think it's very interesting
if you look at politics today
and the way Congress works, it's slowed down everything.
It's a dampener.
Yeah, it's a dampener, but that's a dangerous thing too, because if you think the dampening results are awesome, what are you talking about? It's a low-pass filter. But if you need billions of dollars injected into the economy, or trillions of dollars, then suddenly stuff happens. Right. And so, for sure. So you're talking
about, I'm not defending our political situation just to be
clear. But you're talking about like a global pandemic, I was
hoping we could fix like the healthcare system and the
education system.
Like, you know, I'm not a politics person.
I don't know.
When it comes to languages, the community's kind of right in
terms of it's a very high burden to add something to a language.
So as soon as you add something, you have a community of
people building on it and you can't remove it.
Okay.
And if there's a community of people that feel really uncomfortable with it, then taking it slow, I think, is an important thing to do.
And there's no rush, particularly with something that's 25 years old and very established. You know, it's not like it's coming into its own.
What about features?
Well, so I think that the issue with Guido is that maybe this is a case where he realized it, or the language, had outgrown him.
So Python, I mean, Guido is amazing, but Python isn't about Guido anymore.
It's about the users.
And to a certain extent, the users own it.
And, you know, Guido spent years of his life, a significant fraction of his career, on Python.
And from his perspective, I imagine he's like,
well, this is my thing, I should be able to do the thing
I think is right.
But you can also understand the users
where they feel like, you know, this is my thing.
I use this, and, I don't know, it's a hard thing.
But if we could talk about leadership in this, because it's so interesting to me: I'm going to make, hopefully somebody makes it, if not, I'll make it, a walrus operator shirt,
because I think it represents to me,
maybe it's my Russian roots or something.
It's the burden of leadership.
Like, I feel like, to push back: progress, like most difficult decisions, just like you said, will be divisive, especially in a passionate community.
It just feels like leaders need to take those risky decisions, decisions that, with some non-zero probability, maybe even a high probability, would be the wrong decision, but they have to use their gut and make that decision.
Well, this is like one of the things where you see amazing founders.
The founders understand exactly what's happened and how the company got there and are willing
to say, we have been doing thing X for the last 20 years.
But today, we're going to do thing Y, and they make a major pivot for the whole company,
the company lines up behind them, they move, and it's the right thing.
But then when the founder dies, the successor doesn't always feel that agency to be able
to make those kinds of decisions.
Even though they're CEO, they could theoretically do whatever.
There's two reasons for that, in my opinion, or in many cases, it's always different.
But one of which is they weren't there for all the decisions that were made.
And so they don't know the principles in which those decisions were made.
And once the principles change, you should be obligated to change what you're doing and change direction.
And so, if you don't know how you got to where you are, it just seems like gospel. And, you know, you're not going to question it, and you may not understand that it really is the right thing to do, so you just may not see it. That's so brilliant. I never thought of it that way.
Like, it's so much higher a burden when, as the leader, you step into a thing that's already worked for a long time.
Yeah. Well, and if you change it and it doesn't work out, now you're the person who screwed it up. People always second-guess that. Yeah. And the second thing is that even if you
decide to make a change, even if you're theoretically in charge, you're just a person that
thinks they're in charge. Meanwhile, you have to motivate the troops. You have to explain it to them in terms they understand. You have to get them to buy into it and believe in it, because if they don't, then they're not going to be able to make the turn, even if you tell them, you know, their bonuses are going to be curtailed. They're just not going to buy into it, you know? And so there's only so much power you have as a leader, and you have to understand what those limitations are. Are you still BDFL? You've been BDFL of some stuff.
You're very heavy on the B, the benevolent dictator for life. I guess LLVM, you're still in the LLVM world.
What's the role of... so then on Swift, you said that there's a group of people.
Yeah.
So if you contrast Python with Swift, right, one of the reasons is everybody in the core team takes the role really seriously.
And I think we all really care about where Swift goes.
But you're almost delegating the final decision making to the wisdom of the
group. And so it doesn't become personal.
And also when you're talking with the community,
so yeah, some people are very annoyed
as certain decisions get made.
There's a certain faith in the process
because it's a very transparent process.
And when a decision gets made,
a full rationale is provided, things like this.
These are almost defense mechanisms
to help both guide future discussions
and provide case law, like the Supreme Court does: this decision was made for this reason, and here's the rationale, and what we want to see more of or less of.
But it's also a way to provide a defense mechanism so that when somebody's griping about it,
they're not saying that person did the wrong thing.
They're saying, well, this thing sucks, and later they move on and they get over it.
Yeah, the analogy is Supreme Court, I think, is really good.
But then, okay, not to get personal on the Swift team, but like, it just seems like it's impossible for division not to emerge.
Well, each of the humans on the Swift core team, for example, are different.
And the membership of the Swift core team changes slowly over time, which is, I think, a healthy thing.
And so each of these different humans have different opinions.
Trust me, it's not a singular consciousness by any stretch of the imagination.
You've got three major organizations, including Apple, Google, and SiFive, all working together.
And it's a small group of people but you need
high trust. Again, it comes back to the principles of what you're trying to achieve and understanding
what you're optimizing for. I think that starting with strong principles and working
towards decisions is always a good way to both make wise decisions in general but then be able
to communicate them to people so that they can buy into them.
That is hard.
And so you mentioned LLVM. LLVM is going to be 20 years old this December.
So it's showing its own age.
Do you have like a dragon cake plan?
Oh, I should definitely do that.
Yeah, if we can have a pandemic cake and everybody gets a slice of cake and it gets, you know, sent through email. But LLVM's had tons of its own challenges over time too, right? And one of the challenges the LLVM community has, in my opinion, is that it has a whole bunch of people that have been working on LLVM for 10 years, right?
Because this happens somehow. And LLVM has always been one way, but it needs to be a different way.
Right. And they've worked on it for like 10 years. That's a long time to work on something. And, you know, you suddenly can't see the faults in the thing that you're working on. And LLVM has lots of problems, and we need to address them, and we can make it better. And if we don't make it better,
then somebody else will come up with a better idea.
And so it's just kind of of that age where the community is like in danger of getting
too calcified.
And so I'm happy to see new projects joining and new things mixing it up.
Fortran is now a new thing in the LLVM community, which is very, very good.
I've been trying, on a little tangent, to find people who program in COBOL or Fortran, Fortran especially, to talk to. They're hard to find.
Yeah, look to the scientific community.
They still use Fortran quite a bit.
An interesting thing you kind of mentioned with LLVM, or just in general: that if something has evolved, you're not able to see the faults.
So do you fall in love with the thing over time, or do you start hating everything about the thing over time? Well, so my personal folly is that I see, maybe not all, but many of the faults, and they grate on me, and I don't have time to go fix them. Yeah, and they can magnify over time.
Well, they may not get magnified, but they never get fixed. It's just, you know, grating against you. And it's like sand underneath your fingernails or something. It's just like, you know it's there, you can't get rid of it.
And so the problem is, other people don't see it, right? And I don't have time to go write the code and fix it anymore, but then people are resistant to change.
And so you say, hey, we should go fix this thing.
Like, oh, yeah, that sounds risky.
Well, is it the right thing or not?
Are the challenges, the group dynamics, or is it also just technical?
I mean, some of these features... like, I think as an observer, almost like a fan, you know, a spectator of the whole thing,
I don't often think about, you know, some things might actually be technically difficult to
implement. An example of this is we built this new compiler framework called MLIR. MLIR is a whole new framework. It's not... many people think it's about machine learning. The ML stands for multi-level, because compiler people can't name things very well, I guess.
Can we dig into what MLIR is?
Yeah, so when you look at compilers, compilers have historically been solutions for a given space. So LLVM is really good for dealing with CPUs, let's just say, at a high level. You look at Java.
Java has a JVM.
The JVM is very good for garbage collected languages
that need dynamic compilation.
And it's very optimized for a specific space.
And so hotspot is one of the compilers
that gets used in that space.
And that compiler is really good at that kind of stuff.
Usually when you build these domain-specific compilers,
you end up building whole thing from scratch.
For each domain.
What's the domain?
So what's the scope of a domain?
Well, so here I would say, like, if you look at Swift, there's several different parts to the Swift compiler, one of which is covered by the LLVM part of it.
There's also a high-level piece that's specific to Swift,
and there's a huge amount of redundancy
between those two different infrastructures,
and a lot of re-implemented stuff
that is similar but different.
How would you define LLVM?
LLVM is effectively an infrastructure,
so you can mix and match it in different ways.
It's built out of libraries.
You can use it for different things,
but it's really good at CPUs and GPUs.
CPUs, and the tip of the iceberg on GPUs. It's not really great at GPUs.
Okay.
But it turns out it's used by languages that then use it to talk to CPUs.
Got it.
Got it.
And so it turns out there's a lot of hardware out there
that is custom accelerators.
So machine learning, for example,
there are a lot of matrix multiply accelerators
and things like this.
There's a whole world of hardware synthesis.
So we're using MLIR to build circuits.
Okay, and so you're compiling for a domain of transistors.
And so what MLR does is it provides a tremendous amount
of compiler infrastructure that allows you
to build these domain-specific compilers
in a much faster way and have the result be good.
If we're thinking about the future, now we're talking about, like, ASICs, or anything?
Yeah, yeah. So if we project into the future, it's very possible that the number of these kinds of ASICs, these very specific architecture things, multiplies exponentially.
I hope so.
So that's MLIR. So what MLIR does is allow you to build these compilers very efficiently.
Right now, one of the things, coming back to the LLVM thing, and then we'll go to hardware, is: LLVM is a specific compiler for a specific domain.
MLIR is now this very general, very flexible thing that can solve lots of different kinds of problems.
So LLVM is a subset of what MLIR does.
So MLIR is, I mean, quite an ambitious project then.
Yeah, it's a very ambitious project.
Yeah. And so to make it even more confusing, MLIR has joined the LLVM umbrella project, so it's part of the LLVM family.
Right.
But where this comes full circle is, now folks that work on the LLVM part, the classic part that's 20 years old, aren't aware of all the cool new things that have been done in the new thing. But MLIR was built by me and many other people that knew a lot about LLVM, and so we fixed a lot of the mistakes that live in LLVM.
I mean, we have this community dynamic where it's like,
well, there's this new thing, but it's not familiar.
Nobody knows it, it feels like it's new,
and so let's not trust it.
And so it's just really interesting
to see the cultural, social dynamic that comes out of that.
And, you know, I think it's super healthy because we're seeing the ideas percolate and we're seeing the
technology diffusion happen as people get more comfortable that they start to understand
things in their own terms.
This just gets to the, it takes a while for ideas to propagate, even though they may be
very different than what people are used to.
Maybe let's talk about that a little bit, the world of ASICs.
Well, actually, you have a new role at SiFive.
What's that place about?
What is the vision?
The vision, I would say, is the future of computing.
So, I lead the engineering and product teams at SiFive.
SiFive is a company that was founded around this architecture called RISC-V.
RISC-V is a new instruction set. Instruction sets are the things inside of your computer that run things.
x86 from Intel, and ARM from the ARM company, and things like this, are other instruction sets.
I've talked to Dave Patterson, who's super excited about RISC-V.
Dave is awesome.
He's brilliant.
RISC-V is distinguished by not being proprietary.
And so x86 can only be made by Intel and AMD.
ARM can only be made by ARM.
They sell licenses to build ARM chips to other companies, things like this.
MIPS is another instruction set that is owned by the MIPS company, now Wave, and it gets licensed out, things like that.
And so RISC-V is an open standard that anybody can build chips for. And so SiFive was founded by three of the founders of RISC-V, who designed and built it in Berkeley, working with Dave.
And so that was the genesis of the company.
SiFive today has some of the world's best RISC-V cores, and we're selling them.
And that's really great.
They're going into tons of products.
It's very exciting.
They're taking this thing that's open source and just trying to be, or are, the best in the world at building these things.
Yeah.
So here it's the specification that's open source.
It's like saying TCP/IP is an open standard, or C is an open standard, but then you have to build an implementation of the standard.
And so SiFive, on the one hand, helps define and push forward the standard.
On the other hand, we have implementations that are best in class for different points in the space, depending on if you want a really tiny CPU, or if you want a really big beefy one that is faster but uses more area, and things like this.
What about the actual manufacturing?
So, like, where does that all fit?
I'm gonna ask a bunch of dumb questions.
That's okay.
That's how we learn, right?
Right.
And so the way this works is that there's generally
a separation of the people who designed the circuits
and then the people who manufacture them.
And so you'll hear about fabs, like TSMC and Samsung
and things like this, that actually produce the chips,
but they take a design coming in, and that design specifies how you turn code for the chip into little rectangles that then use photolithography to make mask sets and then burn transistors onto a chip, or onto silicon, rather.
So we're talking about mass manufacturing.
Yeah, they're talking about making hundreds of millions of parts and things like that.
And so the fab handles the volume production, things like that.
But when you look at this problem, the interesting thing about the space is that the steps you go through from designing a chip, and writing the quote-unquote code for it in things like Verilog and languages like that, down to what you hand off to the fab, is a really well-studied, really old problem.
Okay, tons of people have worked on it, lots of smart people have built systems and tools.
These tools then have generally gone through acquisitions and so they've ended up at three different major companies that build and sell these tools.
They're called EDA tools like for electronic design automation.
The problem with this is you have huge amounts of fragmentation, you have loose standards, and the tools don't really work together. So you have tons of duct tape and tons of lost productivity.
Now, these are tools for designing.
So RISC-V is an instruction set.
Like, what is RISC-V?
Like, how deep does it go?
How much does it touch the hardware?
How much of the hardware does it define?
Yeah, so RISC-V is all about: given a CPU,
so the processor in your computer,
how does the compiler, like the Swift compiler,
the C compiler, things like this,
how does it make it work?
So it's, what is the assembly code?
And so you write RISC-V assembly
instead of x86 assembly, for example.
But it's a set of instructions.
You're saying it tells you how the compiler works?
Well, sorry, it's what the compiler talks to.
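As a loose analogy (not RISC-V itself): a compiler targets a fixed instruction set in the same way that CPython lowers source code to its own bytecode instructions, which you can inspect from Python:

```python
# A compiler "talks to" an instruction set: it emits instructions that
# the machine defines. CPython does the analogous thing with bytecode.
import dis

def add_one(x):
    return x + 1

# Show the instruction stream the CPython compiler produced.
dis.dis(add_one)

# The addition shows up as a BINARY_* instruction (the exact opcode
# name varies across Python versions).
ops = [ins.opname for ins in dis.get_instructions(add_one)]
print(ops)
```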
Okay.
And then the tooling you mentioned,
the disparate tools are for what?
For when you're building a specific chip.
So RISC-V...
In hardware.
In hardware.
Yeah.
So RISC-V, you can buy a RISC-V core from SiFive and say, hey, I want it to run at a certain number of gigahertz, I want it to be this big, I want it to have these features, I want floating point or not, for example. And then what you get is a description of a CPU with those characteristics.
Now, if you want to make a chip, you want to build like an iPhone chip
or something like that.
You have to take both the CPU,
but then you have to talk to memory,
you have to have timers, IOs, GPU, other components.
And so you need to pull all those things together
into what's called an ASIC,
an application specific integrated circuit.
So a custom chip.
And then you take that design, and then you have to transform it into something that the
fabs, like TSMC, for example, know how to take to production.
Got it.
So, but yeah, good. And so that process, I can't help but see it as a big compiler.
Yeah, yeah.
It's a whole bunch of compilers written without thinking about it through that lens.
And it's in the universe of compilers.
And it's like, compilers do two things: they represent things and transform them.
So a lot of things end up being compilers.
But this is a space where we're talking about design and usability.
And the way you think about things, the way
things compose correctly, it matters a lot.
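The "represent things and transform them" view can be sketched with a toy Python example; the tuple-based IR and the `constant_fold` pass here are invented for illustration, not any real compiler's API:

```python
# Toy illustration of the "represent, then transform" view of compilers.
# The IR is just nested tuples, e.g. ("add", ("const", 2), ("const", 3)).
def constant_fold(node):
    """Recursively fold constant arithmetic, a classic compiler transform."""
    if node[0] == "const":
        return node
    op, lhs, rhs = node
    lhs, rhs = constant_fold(lhs), constant_fold(rhs)
    if lhs[0] == "const" and rhs[0] == "const":
        if op == "add":
            return ("const", lhs[1] + rhs[1])
        if op == "mul":
            return ("const", lhs[1] * rhs[1])
    return (op, lhs, rhs)

expr = ("mul", ("add", ("const", 2), ("const", 3)), ("const", 4))
print(constant_fold(expr))  # the whole tree folds to ("const", 20)
```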
And so, SiFive is investing a lot into that space, and we think there's a lot of benefit that can be had by allowing people to design chips faster, get them to market quicker, and scale out. Because, you know, at the end of Moore's Law, you've got this problem of you're not getting free performance just by waiting another year for a faster CPU.
And so you have to find performance in other ways.
And one of the ways to do that is with custom accelerators
and other things in hardware.
So we'll talk a little more about ASICs, but do you see that a lot of companies will try to have different sets of requirements for this whole process to go through? Like, different car companies might use different ones, and different PC manufacturers. So is RISC-V, in this whole process, potentially the future of all computing devices?
Yeah, I think that, so if you look at RISC-V
and step back from the silicon side of things,
RISC-V is an open standard.
And one of the things that has happened
over the course of decades,
if you look over the long arc of computing,
which somehow became decades old.
Yeah.
So you have companies that come and go, and you have instruction sets that come and go. One example of this, out of many, is Sun with SPARC.
Yeah.
Sun went away. SPARC still lives on at Fujitsu. But we have HP, which had this instruction set called PA-RISC.
So PA-RISC was its big server business, and it had tons of customers.
They decided to move to this architecture called Itanium from Intel.
Yeah.
It didn't work out so well.
Yeah.
Right.
And so you have this issue of you're making many billion dollar investments on instruction
sets that are owned by a company.
And even companies as big as Intel
don't always execute as well as they could.
They may have their own issues.
HP, for example, decided that it wasn't
in their best interest to continue investing
in this space because it was very expensive
and so they make technology decisions
or they make their own business decisions.
And this means that, as a customer, what do you do? You've sunk all this time, all this engineering, all this software work; you've built other products around them, and now you're stuck. Right. What RISC-V does is provide you more optionality in the space, because if you buy an implementation of RISC-V from SiFive, and you should, they're the best ones, but if something happens to SiFive in 20 years, right? Well, great. You can turn around and buy a RISC-V core from somebody else. And there's an ecosystem of people that are all making different RISC-V cores with different trade-offs, which means that if you have more than one requirement, if you have a family of products, you can probably find something in the RISC-V space that fits your needs. Whereas if you're talking about x86, for example, Intel's only going to bother to make certain classes of devices.
Right. I see. So, maybe a weird question, but, like, if SiFive is infinitely successful in the next 20, 30 years, what does the world look like? Like, how does the world of computing change? So, too much diversity in hardware instruction sets, I think, is bad. Like, we have a lot of people that are using lots of different instruction sets, particularly in the embedded space, the very tiny microcontroller space, the thing in your toaster, that are just weird and different for historical reasons.
So the compilers and the toolchains and the languages on top of them aren't there.
So the developers for that software have to use really weird tools, because the ecosystem that supports them is not big enough.
So I expect that will change.
People will have better tools and better languages, better features everywhere, that can serve many different points in the space.
And I think RISC-V will progressively eat more of the ecosystem, because it can scale up, it can scale down, sideways, left, right. It's a very flexible and very well-considered and well-designed instruction set. When you look at SiFive tackling silicon and how people build chips, which is a very different space, that's where I think we'll see a lot more custom chips. And that means that you get much more battery life, you get better-tuned solutions for your IoT thingy.
So you get people that move faster.
You get the ability to have faster time to market, for example.
So how many customers?
So first of all, on the IoT side of things,
do you see the number of smart toasters increasing
exponentially?
And if you do, how much customization per toaster is there?
Do all toasters in the world run the same silicon, like the same design, or do different companies have different designs?
How much customization is possible here?
Well, a lot of it comes down to cost.
And so the way that chips work is you end up paying
by one of the factors is the size of the chip.
And so what ends up happening just from an economic
perspective is there's only so many chips that get made
in any year of a given design.
And so often what customers end up having to do
is they end up having to pick up a chip that exists
that was built for somebody else
so that they can then ship their product.
And the reason for that is they don't have the volume of the iPhone.
They can't afford to build a custom chip.
However, what that means is they're now buying an off-the-shelf chip that isn't a perfect fit
for their needs and so they're paying a lot of money for it because they're buying silicon
that they're not using.
Well, if you now reduce the cost of designing the chip, now you get a lot more chips and
the more you reduce it, the easier it is to design chips.
The more the world keeps evolving,
and we get more AI accelerators,
we get more other things, we get more standards to talk to,
we get 6G, right?
You get changes in the world that you want to be able
to talk to these different things.
There's more diversity in the cross-product of features that people want.
And that drives differentiated chips
in another direction.
And so nobody really knows what the future looks like,
but I think that there's a lot of silicon in the future.
Speaking of the future, you said Moore's Law, allegedly, is dead.
So do you think, do you agree with Dave Patterson and many folks that Moore's Law is dead?
Or do you agree with Jim Keller, who's standing at the helm of the pirate ship, saying it's
still alive.
Still alive.
Yeah.
Well, so I agree with what they're saying and different people are interpreting the end of Moore's Law
in different ways.
Yeah.
So Jim would say, you know, there's another thousand x left in physics, and we can continue to squeeze the stone and make it faster, and smaller and smaller geometries, and all that kind of stuff.
He's right.
So Jim is absolutely right that there's a ton of progress left, and we're not at the limit of physics yet.
That's not really what Moore's Law is, though.
If you look at what Moore's Law is, it's a very simple evaluation of, okay, well, you look at the cost per, I think it was cost per area, and the most economic point in that space.
And if you go look at the now quite old paper that describes this, Moore's Law has a specific economic aspect to it, and I think this is something that Dave and others often point out. And so, on a technicality, that's right.
I can acknowledge both of those viewpoints. They're both right. I'll give you a third one that's maybe wrong, a point that may be right in its own way, which is: single-threaded performance doesn't improve like it used to.
And it used to be, back when you got, you know, a Pentium 66 or something, and the year before you had a Pentium 33, and now it's twice as fast, right?
Well, it was twice as fast at doing exactly the same thing.
Like, literally the same program ran twice as fast. You just wrote a check and waited a year, a year and a half.
Well, so that's what a lot of people think about Moore's Law, and I think that is dead.
And so what we're seeing instead is we're pushing people to write software in different ways. We're pushing people to write CUDA so they can get GPU compute and the thousands of cores on GPUs.
We're talking about C programmers having to use pthreads because they now have 100 threads or 50 cores in a machine or something like that.
You're now talking about machine learning accelerators that are domain-specific.
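The shift described here, from free single-threaded speedups to explicitly parallel code, looks roughly like this in Python (a sketch; the four-way split is arbitrary, and CPython's GIL means threads show the programming model rather than peak CPU throughput):

```python
# Yesterday: one core got faster every year, so serial code sped up for free.
# Today: you split the work across workers yourself. Python's rough analogue
# of pthreads is a thread pool (for real CPU scaling you'd use processes or
# native code, since CPython's GIL serializes pure-Python compute).
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    return sum(x * x for x in chunk)

data = list(range(10_000))
chunks = [data[i::4] for i in range(4)]  # split the work four ways

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(work, chunks))

assert total == sum(x * x for x in data)  # same answer as the serial version
print(total)
```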
When you look at these kinds of use cases,
you can still get performance.
And Jim will come up with cool things that utilize the silicon in new ways, for sure, but you're also going to change the programming model.
Right.
And now when you start talking about changing the programming model,
that's when you come back to languages and things like this too,
because often what you see is,
like, you take the C programming language, right?
The C programming language is designed for CPUs.
And so if you want to talk to a GPU,
now you're talking to, it's cousin Kuda.
Okay, Kuda's a different thing
with a different set of tools, a different world,
a different way of thinking.
And we don't have one world that scales,
and I think that we can get there. We can have one world that scales and I think that we can get that.
We can have one world that scales in a much better way.
On a small tangent then: I think most programming languages are designed for CPUs, for single core, even just in their spirit, even if they allow for parallelization.
So what does it look like for a programming language to have parallelization, or massive parallelization, as its first principle?
So the canonical example of this is the hardware design world.
So Verilog, VHDL, these kinds of languages, they're what's called hardware description languages.
This is the thing people design chips in.
And when you're designing a chip, it's kind of like a brain, where you have infinite parallelism.
Like, you're laying down transistors. Transistors are always running.
Okay, so you're not saying, run this transistor, then this transistor, then this transistor.
It's like your brain: your neurons are always just doing something. They're not clocked, right? They're doing their thing.
When you design a chip, or when you design a CPU, when you design a GPU, when you're laying down
the transistors, similarly, you're talking about, well, okay, how do these things communicate?
And so these languages exist. Verilog is a mixed example of that. Now, these languages are really gnarly. You have to be very low level.
Yeah, they're very low level.
And abstraction is necessary here,
and there's different approaches at that.
And it's itself a very complicated world.
But it's implicitly parallel.
And so having that as a, as the domain
that you program towards makes it so that by default,
you get parallel systems.
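That implicitly parallel model can be mimicked in a toy Python sketch (not real Verilog semantics): every gate is conceptually running at once, so the simulator re-evaluates all of them each tick until the signals settle, instead of executing them in program order.

```python
# Toy sketch of implicit hardware parallelism: gates are not called in
# sequence; all of them are re-evaluated each "tick" until wires settle.
def simulate(gates, wires):
    """gates: list of (output_wire, fn, input_wires); wires: dict of values."""
    while True:
        changed = False
        for out, fn, ins in gates:  # conceptually simultaneous
            value = fn(*(wires[w] for w in ins))
            if wires.get(out) != value:
                wires[out] = value
                changed = True
        if not changed:  # the circuit has settled
            return wires

# A half adder: sum = a XOR b, carry = a AND b.
gates = [
    ("sum",   lambda a, b: a ^ b, ("a", "b")),
    ("carry", lambda a, b: a & b, ("a", "b")),
]
print(simulate(gates, {"a": 1, "b": 1}))  # sum=0, carry=1
```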
If you look at CUDA, CUDA is a point halfway in the space where in CUDA, when you write a CUDA
kernel for your GPU, it feels like you're writing a scalar program. So you're like, you have ifs,
you have for loops, stuff like this, you're just writing normal code. But what happens outside of
that in your driver is that it actually is running on, like, a thousand things at once. Right. And so it's parallel, but it has pulled it out of the programming model.
And so now you as a programmer are working in a simpler world, and it's solved that for you.
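The CUDA-style model described here can be sketched in plain Python: a scalar-looking kernel function plus a small "driver" that fans it out across many logical threads (`saxpy_kernel` and `launch` are illustrative names, not CUDA API; a real GPU driver maps the kernel onto thousands of hardware threads):

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y):
    # Scalar-looking code: written as if it handled one element,
    # with ordinary arithmetic, ifs, and loops available.
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # The "driver": fans the scalar kernel out across many logical
    # threads, one per element, like launching a GPU grid.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda i: kernel(i, *args), range(n)))

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
result = launch(saxpy_kernel, len(x), 2.0, x, y)
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

The parallelism lives entirely in `launch`; the kernel author never mentions it, which is the point Lattner is making about pulling parallelism out of the programming model.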
Right. How do you take a language like Swift? You know, if we think about GPUs, but also ASICs, maybe if we can dance back and
forth between hardware and software: how do you design for these features, make it a first-class citizen, to be able to, like Swift for TensorFlow,
do machine learning on current hardware, but also future hardware, like TPUs and all
kinds of ASICs that I'm sure will be popping up more and more?
Yeah.
Well, so a lot of this comes down to this whole idea of having the nuts and bolts underneath
the covers that work really well.
So if you're talking to TPUs, you need MLIR, XLA, or one of these compilers that
talks to TPUs to build on top of.
And if you're talking to circuits,
you need to figure out how to lay down the transistors
and how to organize it and how to set up clocking
and like all the domain problems that you get with circuits.
Then you have to decide how to explain it to a human.
What is the UI?
And if you do it right, that's a library problem,
not a language problem.
And that works if you have a language which allows your library to write things
that feel native in the language by implementing libraries, because then you can innovate in
programming models without having to change your syntax again,
where you have to invent new code formatting tools and all the other things that
languages come with.
And this gets really interesting.
And so if you look at this space, the interesting thing once you separate out syntax becomes
what is that programming model?
And so do you want the CUDA style, where I write one program and it runs many places?
Do you want the implicitly parallel model?
How do you reason about that? How do you give developers, chip architects,
the ability to express their intent?
That comes into this whole design question
of how do you detect bugs quickly?
So you don't have to tape out a chip to find out it's wrong.
Ideally, right?
How do you, and this is a spectrum,
how do you make it so that people feel productive?
So their turnaround time is very quick.
All these things are really hard problems.
And in this world, I think that not a lot of effort
has been put into that design problem
and thinking about the layering in other pieces.
Well, on the topic of concurrency,
you've written the Swift concurrency manifesto.
I think it's kind of interesting.
Anything that has the word manifesto in it is very interesting.
Can you summarize the key ideas of
each of the five parts you've written about?
Yes. How about we start there? So in the Swift community, we have this problem, which is on the
one hand, you want to have relatively small proposals
that you can kind of fit in your head, you can understand the details at a very fine-grained
level that move the world forward. But then you also have these big arcs, okay? And often,
when you're working on something that is a big arc, but you're tackling in small pieces,
you have this question of, how do I know I'm not doing a random walk? Where are we going?
How does this add up?
Furthermore, when you start that first small step,
what terminology do you use?
How do we think about it?
What is better and worse in the space?
What are the principles?
What are we trying to achieve?
And so what a manifesto in the Swift community does
is it starts to say, hey, well, let's step back
from the details of everything.
Let's paint a broad picture to talk about how,
what we're trying to achieve.
Let's give an example design point.
Let's try to paint the big picture
so that then we can zero in on the individual steps
and make sure that we're making good progress.
And so the Swift concurrency manifesto
is something I wrote three years ago.
It's been a while, maybe more, trying to do that for Swift and concurrency.
It starts with some fairly simple things like making the observation that
when you have multiple different computers or multiple different threads that are communicating,
it's best for them to be asynchronous. And so you need things to be able to run separately and
then communicate with each other. And this means asynchronous. And this means that you need a way
of modeling asynchronous communication.
Many languages have features like this.
Async await is a popular one.
And so that's what I think is very likely in Swift.
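The async/await feature mentioned here can be sketched with Python's `asyncio` as an illustration of asynchronous communication: things run separately, suspend instead of blocking, and communicate results back (`fetch_value` is a hypothetical stand-in for talking to another thread or machine):

```python
import asyncio

async def fetch_value(name, delay):
    # Simulate talking to another thread or machine: the caller
    # suspends at the await instead of blocking a thread.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main():
    # Two asynchronous tasks run concurrently, then communicate
    # their results back to the awaiting caller.
    a, b = await asyncio.gather(
        fetch_value("worker-1", 0.01),
        fetch_value("worker-2", 0.01),
    )
    return [a, b]

results = asyncio.run(main())
print(results)  # ['worker-1: done', 'worker-2: done']
```

Swift's eventual async/await design reads very similarly at this level, though the runtime underneath differs.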
But as you start building this tower of abstractions,
it's not just about how do you write this.
You then reach into the, how do you get memory safety?
Because you want correctness,
you want debuggability and sanity for developers, and how do you get that memory safety into
the language. So if you take a language like Go or C or any of these languages, you get
what's called a race condition when two different threads or Go routines or whatever touch
the same point in memory, right? This is, like, a maddening problem to debug,
because it's not reproducible generally.
There's tools, there's a whole ecosystem of solutions
that built up around this,
but it's a huge problem when you're writing concurrent code.
So with Swift, this whole value semantics thing
is really powerful there,
because it turns out that math on copies
actually works even in concurrent worlds.
And so, you get a lot of safety just out of the box, but there are also some hard problems
and it talks about some of that.
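The point about copies being safe in concurrent worlds can be shown with a small Python sketch: each task mutates only its own copy, so there is no shared point in memory to race on (`double_all` is a hypothetical helper, imitating value-semantics style rather than Swift's actual copy-on-write machinery):

```python
from concurrent.futures import ThreadPoolExecutor

def double_all(values):
    # Value-semantics style: take a copy, mutate only the copy.
    # No thread ever touches another thread's data, so there is
    # no shared memory location to race on.
    own = list(values)          # explicit copy
    for i in range(len(own)):
        own[i] *= 2
    return own

base = [1, 2, 3]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda _: double_all(base), range(4)))

assert base == [1, 2, 3]                    # original never mutated
assert all(r == [2, 4, 6] for r in results)
```

With reference semantics, four threads mutating `base` in place would be exactly the non-reproducible race condition described above.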
When you start building up to the next level up and you start talking beyond memory safety,
you have to talk about what is a programmer model.
How does a human think about this?
How does a developer that's trying to build a program think about this? And it proposes a really old model with a new spin, called actors. Actors are about saying, we have
islands of single-threadedness, logically. So you write something that feels like it's
one program running in a unit, and then it communicates asynchronously with other things.
And so making that expressive and natural, feel good,
be the first thing you reach for, and be safe by default,
is a big part of the design of that proposal.
When you start going beyond that,
now you start to say,
cool, well, these things that communicate asynchronously,
they don't have to share memory.
Well, if they don't have to share memory
and they're sending messages to each other,
why do they have to be in the same process?
These things should be able to be in different processes
on your machine, and why just processes,
well, why not different machines?
And so now you have a very nice gradual transition
towards distributed programming.
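A minimal illustration of the actor idea, sketched in Python with a mailbox queue and a single worker thread (`CounterActor` is a hypothetical example, not Swift's actual actor runtime): state is only ever touched by one thread, and callers communicate by message, which is exactly why the model generalizes from threads to processes to machines.

```python
import queue
import threading

class CounterActor:
    # An "island of single-threadedness": one thread drains the
    # mailbox, so the actor's state is only ever mutated serially.
    def __init__(self):
        self._mailbox = queue.Queue()
        self.count = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:          # shutdown sentinel
                return
            self.count += msg        # safe: only this thread mutates

    def send(self, msg):
        # Callers communicate asynchronously by message, never by
        # reaching into the actor's memory directly.
        self._mailbox.put(msg)

    def join(self):
        self._mailbox.put(None)
        self._thread.join()

actor = CounterActor()
for _ in range(1000):
    actor.send(1)
actor.join()
print(actor.count)  # 1000
```

Because nothing here depends on shared memory, the mailbox could just as well be a socket to another process or another machine.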
And of course, when you start talking about the big future,
the manifesto doesn't go into it, but accelerators are things you talk to
asynchronously by sending messages to them.
And how do you program those?
Well, that gets very interesting.
That's not in the proposal.
So, how much do you want to make that explicit,
like the control of that whole process,
explicit to the programmer?
Yeah, good question.
So when you're designing any of these kinds of features
or language features or even libraries,
you have this really hard trade-off you have to make,
which is, how much of it is magic, and how much of it is in the human's control?
How much can they predict and control it?
What do you do when the default case is the wrong case?
Yeah, and so when you're designing a system, I won't name names, but there are systems
where it's really easy to get started, and then you hit a wall. So let's pick something like Logo.
Okay, so it's really easy to get started. It's really designed for
teaching kids, but as you get into it, you hit a ceiling, and then you can't go any higher. And then what do you do? Well, you have to switch to a different world and rewrite
all your code. And Logo is a silly example here. This exists in many other languages.
With Python, you would say, like, concurrency, right? So Python has the global interpreter
lock, so threading is challenging in Python.
And so if you start writing a large-scale application
in Python, and then suddenly you need concurrency,
you're kind of stuck with a series of bad trade-offs.
There's other ways to go, where you say,
like, foist all the complexity on the user all at once.
And that's also bad in a different way.
And so what I prefer is building a simple model
that you can explain that then has an escape hatch.
So you get in, you have guardrails.
Memory safety works like this in Swift,
where, by default,
if you use all the standard things,
it's memory safe; you're not going to shoot your foot off.
But if you want to get a C level pointer to something, you can explicitly do that.
But by default, there's guard rails.
There's guard rails.
Okay.
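The safe-by-default-plus-escape-hatch pattern can be illustrated in Python: ordinary code is memory safe, while `ctypes` is an explicit opt-in to raw, C-level pointers. This is only analogous to Swift's unsafe pointer APIs, not identical to them:

```python
import ctypes

# Guardrails by default: plain Python containers are memory safe,
# and out-of-range access raises instead of corrupting memory.
safe = [1, 2, 3]
try:
    safe[10]
except IndexError:
    pass  # the guardrail: an error, not a scribble over memory

# The explicit escape hatch: opt in to a raw, C-level pointer.
buf = (ctypes.c_int * 3)(1, 2, 3)
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_int))
ptr[1] = 99          # direct memory write, no bounds checking
print(list(buf))     # [1, 99, 3]
```

The design point is that the unsafe path exists but is loud and deliberate; you never fall into it by accident.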
But, you know, whose job is it to figure out which part of the code is parallelizable?
So in the case of the proposal, it is the human's job.
So, they decide how to architect their application,
and then the runtime in the compiler is very predictable.
So, this is in contrast to,
like, there's a long body of work, including on Fortran,
on auto-parallelizing compilers.
And this is an example of a bad thing.
And so as a compiler person, I can rag on compiler people.
Often, compiler people will say, cool, since I can't change the code, I'm going to write
my compiler that then takes this unmodified code and makes it go way faster on this machine.
Okay, so the application developer doesn't have to do anything.
And so it does pattern matching. It does, like, really deep analysis.
Compiler people are really smart, and so they want to do something really clever and tricky,
and you get a 10x speedup by taking an array of structures and turning it into a structure of arrays
or something, because it's so much better for memory. There are bodies of tricks, tons of tricks.
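The array-of-structures to structure-of-arrays transformation mentioned here, as a tiny Python sketch with made-up data (real compilers do this for cache and SIMD friendliness; at this scale the speedup itself is of course not visible, only the layout change):

```python
# Array of structures: each record is a tuple, so summing one
# field strides over everything else interleaved in memory.
aos = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]
total_aos = sum(x for x, _ in aos)

# Structure of arrays: each field is contiguous on its own, which
# is what makes the layout friendlier to caches and SIMD units.
soa = {
    "x": [1.0, 2.0, 3.0],
    "y": [10.0, 20.0, 30.0],
}
total_soa = sum(soa["x"])

assert total_aos == total_soa == 6.0  # same answer, different layout
```

The fragility Lattner describes next comes from this transformation being applied heuristically: a small source change can silently stop it from firing.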
They love optimization.
Yeah, everyone loves optimization. And it's this promise of
build with my compiler and your thing goes fast. Yeah. Right. But
here's the problem, Lex. You write a program, you
run it with my compiler, it goes fast, you're very happy. Wow,
it's so much faster than the other compiler. Then you go and you
add a feature to your program, you refactor some code, and
suddenly you got a 10x loss in performance. Well, why? What
just happened there? What happened there is
the heuristic, the pattern matching, the compiler, whatever analysis it was doing,
just got defeated because you didn't inline a function or something, right? As a user,
you don't know, you don't want to know. That was a whole point. You don't want to know
how the compiler works. You don't want to know how the memory hierarchy works. You don't
want to know how it got parallelized across all these things. You wanted that abstracted away from you. But then the magic is lost as
soon as you did something and you fall off a performance cliff. And now you're in this
funny position where, what do I do? Do I not change my code, not fix that bug, because it
costs 10x in performance? Now what do I do? Well, this is the problem with unpredictable
performance. If you care about performance, predictability is a very important thing.
And so what the proposal does is it provides
architectural patterns for being able to lay out your code,
gives you full control over that, makes it really simple
so you can explain it.
And then if you want to scale out in different ways,
you have full control over that.
So your sense is, the intuition is, it's too hard for a compiler to do automated parallelization.
Like, you know, because compilers do stuff automatically
that's incredibly impressive for other things.
But for parallelization, we're not even close to there.
Well, it depends on the programming model.
So there's many different kinds of compilers.
And so if you talk about like a C compiler,
a Swift compiler, or something like that,
where you're writing imperative code,
parallelizing that and reasoning about all the pointers
and stuff like that is a very difficult problem.
Now, if you switch domains,
so there's this cool thing called machine learning,
right?
So machine learning nerds, among other endearing things like, you
know, building cat detectors and things like that, have done this amazing breakthrough
of producing a programming model, operations that you can compose together, that has raised
the levels of abstraction high enough that suddenly you can have auto-parallelizing compilers. You can write a model using TensorFlow
and have it run on 1,024 nodes of a TPU.
Yeah, that's true.
I didn't even think about, like, you know,
because there's so much flexibility
in the design of architectures that ultimately
boil down to a graph that's parallelized for you.
And if you think about it, that's pretty cool.
That's pretty cool, yeah.
And you think about batching, for example,
as a way of being able to exploit more parallelism.
Like that's a very simple thing that now is very powerful.
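Batching as described here, in a toy Python sketch: one call processes N independent examples, and that per-example independence is exactly the parallelism an accelerator or compiler can exploit (`predict_one` and `predict_batch` are illustrative names, and the "model" is just a dot product):

```python
def predict_one(weights, features):
    # A toy "model": a dot product for a single example.
    return sum(w * f for w, f in zip(weights, features))

def predict_batch(weights, batch):
    # Batching: one call over N examples instead of N calls.
    # Each iteration is independent of the others, so a real
    # runtime is free to run them all in parallel on a GPU/TPU.
    return [predict_one(weights, features) for features in batch]

weights = [0.5, -1.0]
batch = [[2.0, 1.0], [4.0, 1.0]]
print(predict_batch(weights, batch))  # [0.0, 1.0]
```

Frameworks like TensorFlow express this as one tensor operation over a batch dimension, which is what lets the same user code scale from one GPU to a TPU pod.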
That didn't come out of the programming language nerds, those people.
That came out of people that are just looking to solve
a problem and use a few GPUs, and it was organically developed
by the community of people focusing on machine learning.
And it's an incredibly powerful abstraction layer that enables the compiler people to go
and exploit that.
And now you can drive supercomputers from Python.
Well, that's pretty cool.
And so just to pause on that, because I'm not sufficiently low level, I forget to admire
the beauty and power of that.
But maybe just to linger on it,
like what does it take to run a neural network fast?
Like how hard is that compilation?
It's really hard.
So we just skipped,
you said like it's amazing that that's a thing,
but how hard is that of a thing?
It's hard.
And I would say that not all of the systems
are really great, including the ones I helped build.
So there's a lot of work left to be done there.
Is it compiler nerds working on that or is it a whole new group of people?
Well, it's a full-stack problem, including compiler people, including APIs, like Keras
and the Module API in PyTorch, and JAX.
And there's a bunch of people pushing on all the different parts of these things.
Because when you look at it, it's both, how do I express the computation?
Do I stack up layers?
Well, cool.
Like setting up a linear sequence of layers is great for the simple case, but how do I do
the hard case?
How do I do reinforcement learning?
Well, now I need to integrate my application logic in this.
Right?
Then it's, you know, the next level down of, how do you represent that for the runtime?
How do you get hardware abstraction?
And then you get the next level down of saying, forget about abstraction, how do I get peak performance out of my TPU or my iPhone accelerator or whatever, right? And all these different things.
And so this is a layered problem with a lot of really interesting design work going
on in the space and a lot of really smart people working on it. Machine learning is a very well-funded area of investment right now, and so
there's a lot of progress being made.
So how much innovation is there on the lower level, so closer to the ASIC? Redesigning the hardware, or redesigning compilers concurrently with that hardware? If you were to predict the biggest, you know, the equivalent of Moore's Law improvements
in the inference and the training of neural networks
and just all of that,
where is that gonna come from, do you think?
Sure, you get scalability, you have different things.
And so you get, you know, Jim Keller
shrinking process technology,
you get three nanometer instead of five or seven
or 10 or 28 or whatever.
And so that marches forward and that provides improvements.
You get architectural level performance.
And so the, you know, a TPU with a matrix multiply unit and a systolic array is much more efficient than having a scalar core doing multiplies and adds and things like that.
You then get system-level improvements.
So how you talk to memory, how you talk across a cluster
of machines, how you scale out, how you have fast interconnects
between machines, you then get system level programming
models.
So now you have all this hardware; how do you utilize it?
You then have algorithmic breakthroughs where you say,
hey, wow, cool.
Instead of training a ResNet-50 in a week,
I'm now training it in 25 seconds.
And it's a combination of new optimizers
and new just training regimens and different approaches
to train.
And all of these things come together to push the world forward.
That was a beautiful exposition.
But if you were forced to bet all your money on one of these?
Why do we have to? Fortunately, we have people working on all of this.
It's an exciting time, right?
So, I mean, you know, OpenAI did this little paper showing the algorithmic improvement you can get. It's been, you know, improving exponentially. I haven't quite seen the same kind of analysis
on other layers of the stack. I'm sure it's also improving significantly. It's just
a nice intuition builder. I mean, there's a reason why Moore's Law, that's the beauty of Moore's Law, is somebody writes a paper that makes
a ridiculous prediction.
And it, you know, becomes reality in a sense.
There's something about these narratives, when Chris Lattner, on a silly little
podcast, bets all his money on a particular thing, somehow it can have a ripple effect of actually becoming real.
That's an interesting aspect of it, because, like, with Moore's Law,
most of the computing industry really,
really focused on the hardware.
I mean, software innovation,
I don't know how much software innovation there was in terms of...
What Andy giveth, Bill taketh away.
Yeah, I mean compilers improve significantly also.
Well, not really, so actually, I mean, so I'm joking about how software's gotten slower,
pretty much as fast as hardware got better, at least in the 90s.
There's another joke, another law in compilers, which is called, I think it's called Proebsting's Law, which is, compilers double the performance of any given code every 18 years.
So they move slowly. Yeah. Well, it's exponential also.
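Taken at face value, Proebsting's Law versus hardware doubling roughly every 18 months works out like this (illustrative compound-growth arithmetic only):

```python
# Proebsting's Law vs. Moore's Law as compound growth rates:
# compilers double performance every 18 years, while delivered
# hardware performance historically doubled every ~18 months.
years = 18
compiler_speedup = 2 ** (years / 18)    # one doubling in 18 years
hardware_speedup = 2 ** (years / 1.5)   # twelve doublings in 18 years
print(compiler_speedup, hardware_speedup)  # 2.0 4096.0
```

That gap, roughly 4% a year from the compiler, is why Lattner's next point matters: the value of compilers is mostly in unlocking new hardware and new programming models, not in shaving cycles off old code.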
Yeah, but it's making progress. But there again, the power of compilers
is not just about how you make the same thing go faster.
It's how do you unlock the new hardware?
A new chip came out.
How do you utilize it?
You say, oh, the programming model, how do we make people more productive?
How do we have better error messages?
Even such mundane things, how do I generate a very specific error message about your
code?
It actually makes people happy.
They know how to fix it.
It comes back to how do you help people get their job done.
And then in this world of exponentially increasing smart toasters, how do you expand computing
to all these kinds of devices?
Do you see this world where just everything's a computing
surface?
You see that possibility?
Just everything's a computer?
Yeah, I don't see any reason that that couldn't be achieved.
Turns out that sand goes into glass, and glass is
pretty useful too.
And you know, like, why not?
Why not?
So the very important question then, if we're living in a simulation and the simulation
is running a computer, like, what's the architecture of that computer, do you think?
So you're saying, is it a quantum system?
Yeah, like, is this whole quantum discussion needed,
or can we run it on, I don't know, a RISC-V architecture,
a bunch of CPUs?
I think it comes down to the right tool for the job.
Yeah, and so.
And what's the compiler?
Yeah, exactly, that's my question.
Who gets that job?
Being the universe's compiler.
And so there, as far as we know, quantum systems
are the bottom of the pile of turtles so far.
Yeah.
And so we don't know efficient ways
to implement quantum systems without using quantum computers.
Yeah, and that's totally outside
of everything we've talked about.
But who runs that quantum computer?
Yeah.
Right. So if we really are living in a simulation,
then is it bigger quantum computers?
Is it different ones?
Like how does that work out?
How does that scale?
Well, it's the same size.
It's the same size.
But then the thought of the simulation
is that you don't have to run the whole thing,
that, you know, we humans are cognitively very limited.
You do checkpoints.
You do checkpoints, yeah.
And so you basically do a minimal amount of,
what is it that Swift does, copy-on-write.
Copy-on-write, yeah.
So you only, yeah, you only adjust the simulation. The parallel universe theories, right?
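Copy-on-write, as mentioned here, can be sketched in a few lines of Python: forks share one buffer for free, and the real copy is paid only when somebody writes (`CowList` is a hypothetical illustration, not Swift's actual implementation, which uses reference counting on the shared buffer):

```python
class CowList:
    # Copy-on-write sketch: readers share one buffer, and the
    # physical copy is deferred until somebody actually writes.
    def __init__(self, items):
        self._buf = items
        self._owned = False   # do we own a private copy yet?

    def fork(self):
        # "Copying" is free: the child just shares the buffer.
        return CowList(self._buf)

    def __getitem__(self, i):
        return self._buf[i]

    def __setitem__(self, i, value):
        if not self._owned:
            self._buf = list(self._buf)   # pay for the copy now
            self._owned = True
        self._buf[i] = value

base = CowList([1, 2, 3])
branch = base.fork()       # cheap: shares storage with base
branch[0] = 42             # first write triggers the real copy
print(base[0], branch[0])  # 1 42
```

The simulation joke maps on directly: forking a universe is free until a decision diverges, and only the diverged part gets its own storage.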
And so every time a decision is made, somebody opens the Schrödinger box, then there's a fork.
And then this could happen.
And then thank you for considering the possibility.
But yeah, so it may not require the entirety of the universe to simulate it.
But it's interesting to think about as we create these higher- and higher-fidelity systems. But I do want to ask, on the quantum computer side, because everything we've
talked about with your work with SiFive, with everything with compilers, none of that includes
quantum computers, right?
That's true.
So do you ever think about what this whole serious engineering work
of quantum computers looks like? Of compilers,
of architectures, all that kind of stuff?
So I've looked at a little bit,
I'd know almost nothing about it,
which means that at some point,
I will have to find an excuse to get involved.
Because that's how it works.
But do you think that's a thing to be like, is it, what's your little tingly sense of the
timing of when to be involved?
Is it not yet?
Well, so the thing I do really well is I jump into messy systems and figure out how
to make them, figure out what the truth of the situation is, try to figure out what the unifying theory
is, how to, like, factor the complexity, how to find a beautiful answer to a problem that
has been well studied and that lots of people have bashed their heads against.
I don't know the quantum computers are mature enough and accessible enough to be figured
out yet, right?
And I think the open question with quantum computers is, is there a useful problem
that gets solved with a quantum computer that makes it worth the economic cost of, like, having
one of these things and having legions of people that set it up? You go back to the '50s,
right? And there were the projections that the world will only need seven computers.
Right.
Well, and part of that was that people hadn't figured out
what it's useful for.
What are the algorithms we want to run?
What are the problems that get solved?
And this comes back to how do we make the world better,
either economically or making some ways life better
or solving a problem that wasn't solved before,
things like this.
And I think that we're just a little bit too early
in that development cycle, because it's still literally
a science project. Not in a negative connotation, right?
It's literally a science project.
And the progress there is amazing.
And so I don't know if it's 10 years away or two years away, exactly where that breakthrough happens.
But you look at machine learning:
we went through a few winters before the
AlexNet transition, and then suddenly it had its breakout
moment. And that was the catalyst that then drove the talent
flocking into it. That's what drove the economic applications
of it. That's what drove the technology to go faster, because
you now have more minds thrown at the problem. This is what
caused, like, a serious knee in deep learning
and the algorithms that we're using.
And so I think that's what quantum needs to go through.
And so right now it's in that formative stage of finding itself,
getting literally the physics figured out.
And then it has to figure out the application that makes this useful.
But I'm not skeptical that I think that will happen.
I think it's just, you know, 10 years away, something like that.
I forgot to ask, what programming language do you think the simulation is written in?
Oh, probably Lisp.
So not Swift.
Like if you're at the best, I'll just leave it at that.
So, I mean, we've mentioned that you worked
with all these companies, we've talked about all these projects.
If we just step back and zoom out
about the way you did that work,
and we look at COVID times, this pandemic we're living through:
if I look at the way Silicon Valley folks
are talking about it, the way MIT is talking
about it, this might last for a long time. Not just the virus, but the remote nature.
The economic impact. I mean, it's going to be a mess.
What's your prediction? I mean, from SiFive to Google to
just all the places you worked in, Silicon Valley, you're in the middle of it. What do you think, how is this whole place gonna change?
Yeah, so, I mean, I really can only speak to the tech perspective. I am in that bubble.
I think it's gonna be really interesting because the Zoom culture of being remote
and on video chat all the time
has really interesting effects on people.
So on the one hand, it's a great normalizer.
It's a normalizer that I think will help communities
of people that have traditionally been underrepresented,
because now you're taking, in some cases, a face off,
because you don't have to have a camera going.
And so you can have conversations
without physical appearance being part of the dynamic,
which is pretty powerful.
You're taking remote employees that have already been remote
and you're saying you're now on the same level
and footing as everybody else.
Nobody gets whiteboards.
You're not gonna be the one person
that doesn't get to participate in the whiteboard conversation,
and that's pretty powerful.
You're forcing people to think asynchronously in some cases, because it's
hard to just get people physically together, and the lack of bumping into each other forces people
to find new ways to solve those problems.
And I think that that leads to more inclusive behavior, which is good.
On the other hand, it also just sucks, right?
The actual communication?
It just sucks being not with people, like, on a daily basis and collaborating with them.
Yeah, all of that, right? I mean, everything, this whole situation is terrible. What I meant primarily was, I think that
most humans like working physically with humans. I think this is something that not everybody,
but many people are programmed to do. And I think that we get something out of that that is very
hard to express, at least for me. And so maybe this isn't true of everybody. But, and so the question
to me is, you know, when you get through that time of adaptation,
right, you get out of March and April
and you get into December and you get into next March
if it's not changed, right?
Sorry, that's terrifying.
Well, you think about that and you think about
what is the nature of work?
And how do we adapt?
And humans are very adaptable species, right?
We can learn things and when we're forced to, and there's a catalyst
to make that happen. And so what is it that comes out of this and are we better or worse off?
Right. I think that, you know, you look at the Bay Area, housing prices are insane. Well, why?
Well, there's a high incentive to be physically located because if you don't have proximity,
you end up paying for it in commute, right? And there has been huge
social pressure in terms of, like, you will be there for the meeting, right? Or whatever scenario
it is. And I think that's going to be way better. I think it's going to be much more the
norm to have remote employees. And I think this is going to be really great.
Do you have friends, or do you hear of people moving?
I know one family friend that moved.
They moved back to Michigan and they were a family
with three kids living in a small apartment,
and, like, they were going insane.
Right? And they're in tech; the husband worked for Google.
So first of all, friends of mine are in the process of, or have already, lost
their business. The thing that represents their passion, their dream. It could be small
entrepreneurial projects, but it could be large businesses, like people that run gyms, that
run restaurants, like tons of things. But also people, like, look at themselves in the
mirror and ask the question of, like, what do I wanna do in life? For some reason,
they haven't done it until COVID.
Like, they really ask that question,
and that often results in moving,
or leaving the company they're with, or starting their own business,
or transitioning to a different company.
Do you think we're gonna see that a lot?
Well, I can't speak to that.
I mean, we're definitely gonna see it at a higher frequency than we did before, just
because I think what you're trying to say is there are decisions that you make yourself
and big life decisions that you make yourself and I'm going to quit my job and start a new
thing.
There's also decisions that get made for you.
I got fired for my job.
What am I going to do?
That's not a decision that you make yourself;
you're forced to act. And I think those forced-to-act moments, where a global pandemic
comes and wipes out the economy and now your business doesn't exist, I think that does lead
to more reflection, because you're less anchored on what you have. And it's not a what-do-I-have-to-lose versus what-do-I-have-to-gain A/B comparison.
It's more of a fresh slate.
Cool, I could do anything now.
Do I want to do the same thing I was doing?
Did that make me happy?
Is this now time to go back to college
and take a class and learn a new skill?
Is this a time to spend time with family?
If you can afford to do that,
is it time to literally move in with your parents?
I mean, all these things that were not normative before
suddenly become okay, I think; the value systems change.
And I think that's actually a good thing
in the short term at least, because it leads to,
there's been an over-optimization along one set of priorities
for the world, and now maybe we'll get to a more balanced and more interesting world where
people are doing different things. I think it could be good.
I think there could be more innovation that comes out of it, for example.
What do you think about all the social chaos we're in the middle of?
It sucks.
Let me ask you, do you think it's all gonna be okay?
Well, I think humanity will survive.
From, like, an extinction perspective, we're not gonna kill ourselves?
Yeah, well, I don't think the virus is gonna kill all the humans.
I don't think all the humans are gonna kill all the humans.
I think that's unlikely, but I look at it as progress requires a catalyst.
So you need a reason for people to be willing to do things that are uncomfortable.
I think the US at least, but I think the world in general, is a pretty unoptimal place to live in for a lot of people.
And I think that what we're seeing right now
is we're seeing a lot of unhappiness.
And because of all the pressure,
because of all the badness in the world that's coming together,
it's really kind of igniting some of that debate
that should have happened a long time ago.
Right? I mean, I think that we'll see more progress.
You were asking offline about politics,
and wouldn't it be great if politics moved faster,
because there's all these problems in the world
and we can't move on them.
Well, people are inherently conservative.
And so if you're talking about conservative people, particularly if they have heavy
burdens on their shoulders because they represent literally thousands of people, it makes
sense to be conservative.
But on the other hand, when you need change, how do you get it?
The global pandemic will probably lead to some change.
And it's not a directed, it's not a directed plan.
But I think that it leads to people asking really interesting questions
and some of those questions should have been asked a long time ago.
Well, let me know if you've observed this as well.
Something that's bothering me in the machine learning community,
I'm guessing it might be prevalent in other places
is something that feels like, in 2020,
an increased level of toxicity.
Like people are just quicker to pile on.
They're just harsh on each other,
quick to mob, to pick a person that screwed up and make it a big thing.
Have you observed that in other places?
Is there some way out of this?
I think there's an inherent thing in humanity that's kind of an us versus them thing,
which is that you want to succeed, and how do you succeed? Well, it's relative to somebody else. And so what's happening, at least in some part, is that with the internet
and with online communication, the world's getting smaller. Right? And so some
of the social ties of, like, my town versus your town's football team have turned into much larger and yet shallower problems.
And people don't have time, and the incentives
of clickbait and all these things
really feed into this machine.
And I don't know where that goes.
Yeah, I mean, the reason I think about that,
I mentioned to you this offline a little bit,
but you know,
I've got a few difficult conversations scheduled, some of them political, some of them
within the community, difficult personalities that went through some stuff.
I mean, one of them I've talked to before and will talk to again is Yann LeCun.
He got a little crap on Twitter for talking about a particular paper and the bias
within a data set. And then there's been a huge, in my view, and I'm comfortable
saying it, irrational, overexaggerated pile-on on his comments, because he made pretty
basic comments about the fact that if there's bias in the data,
there's going to be bias in the results, so we should not have bias in the data. But people
piled on to him because, they said, he trivialized the problem of bias, like it's a lot more than just
bias in the data. And like, yes, that's a very good point, but that's not what he was saying.
It's not what he was saying.
And the response, like the implied response
that he's basically sexist and racist
is something that completely drives away
the possibility of nuanced discussion.
It's one nice thing about, like, a podcast long-form
conversation: you can talk it out, you can
lay your reasoning out, and even if you're wrong, you can still show that you're a good human
being underneath it.
You know, your point about you can't have a productive discussion.
Well, how do you get to the point where people can learn, they can listen,
they can think, they can engage, versus just making a shallow
comment and then moving on? And I don't think that progress really comes from that.
And I don't think that one should expect that. I think you'd see that as reinforcing
individual circles and the us-versus-them thing. And I think that's fairly divisive.
Yeah, I think there's a big role in...
like, the people that bother me most on Twitter,
when I observe things,
is not the people who get very emotional, angry,
like, over the top.
It's the people who, like, prop them up.
I think what we should teach each other is to be
sort of empathetic.
The thing that it's really easy to forget, particularly on like Twitter or the internet
or email, is that sometimes people just have a bad day. Right? You have a bad day,
or you're like, I've been in the situation where, between meetings, I fire
off a quick response to an email because I want to help get something unblocked, and phrase it really, objectively
wrong.
I screwed up.
And suddenly this is now something that sticks with people.
And it's not because they're bad.
It's not because you're bad.
Just psychology of like you said a thing.
It sticks with you.
You didn't mean it that way, but it really impacted somebody
because the way they interpreted it,
and this is just an aspect of working together as humans.
And I have a lot of optimism in the long term,
the very long term, about what we as humanity can do,
but I think that's going to be,
it's just always a rough ride.
And you came into this by saying,
like, what is COVID and all the social strife
that's happening right now mean?
And I think that it's really bad in the short term,
but I think it will lead to progress.
And for that, I'm very thankful.
Yeah, it's painful in the short term though.
Well, yeah, I mean, people are out of jobs,
like some people can't eat, like it's horrible.
And, but, you know, it's progress.
So we'll see what happens.
I mean, the real question is when you look back 10 years, 20 years, 100 years from now,
how do we evaluate the decisions that are being made right now?
I think that's really the way you can frame it and look at it: you integrate
across all the short-term turmoil that's happening, and you look at what that means,
and is the, you know,
improvement across the world, or the regression across the world, significant enough to make
it a good or a bad thing? I think that's the question.
Yeah. And for that, it's good to study history. I mean, one of the big problems for me
right now is I'm reading The Rise and Fall of the Third Reich.
Light reading.
So I just see parallels in everything.
It means you have to be really careful not to overstate it,
but the thing that worries me the most is the pain
that people feel when a few things combine,
which is like economic depression,
which is quite possible in this country,
and then just being disrespected in some kind of way, which the German people really were,
disrespected by most of the world, in a way that's over the top. That something can build up,
and then all you need is a charismatic leader to go either positive or negative and both work, as long as
they're charismatic.
Yeah.
And it's taking advantage of, again, that inflection point that the world's in and what
they do with it could be good or bad.
And so it's a good way to think about times now, like on the individual level, what we
decide to do is when history is written, you know, 30 years from now, what happened in 2020, probably history is going to remember 2020.
Yeah, I think so.
Either for good or bad, and it's like up to us throughout it, so it's good.
Well, one of the things I've observed that I find fascinating is most people act as though the world doesn't change. You make decisions accordingly, right? You make a decision
where you're predicting the future based on what you've seen in the recent past. And so if it's
always rained every single day, then of course you expect it to rain today too, right?
On the other hand, the world changes all the time, constantly, for better and for worse,
right? And so the question is, if you're interested in changing something that's not right, what is the
inflection point that leads to change?
And you can look to history for this.
Like, what is the catalyst that led to that explosion, that led to that bill, that led to
that change? You can kind of work your way back from that.
And maybe if you pull together the right people and you get the right ideas together, you
can actually start driving that change, and doing it in a way that's productive and hurts
fewer people.
Yeah.
Like, a single person, a single event, can turn all of it.
Absolutely.
Everything starts somewhere, and often it's a combination of multiple factors, but yeah,
these things can be engineered.
That's actually the optimistic view that I'm a long term optimist on pretty much everything
and human nature, you know,
we can look at all the negative things that humanity has, all the pettiness, all the
self-servingness, the cruelty, right? The biases. Humans can be
very horrible. But on the other hand, we're capable of amazing things. And the progress across 100-year chunks is striking.
And even across decades, we've come a long ways.
And there's still a long ways to go,
but that doesn't mean that we've stopped.
Yeah, that kind of stuff we did in the last 100 years
is unbelievable.
It's kind of scary to think what's going to happen.
Scary like exciting?
Yeah.
Scary in the sense that it's kind of sad that the kind of technology that's gonna come out in 10,
20, 30 years, we'll probably be too old to really appreciate, because you don't grow up with
it. It'll be like kids these days with their virtual reality and their, uh,
their TikToks and stuff like this. Like, God, what is this thing? Come on, give me my,
you know, static photo.
My Commodore 64, yeah. Exactly. Okay. Sorry, we kind of skipped over. Let me ask, um,
you know, the machine learning world has been kind of inspired, their imagination
captivated, with GPT-3 and these language models. I thought it'd be cool to
get your opinion on it. What's your thoughts on this exciting world,
it connects to computation actually,
of language models that are huge, and take
many, many computers, not just to train, but to also do inference on?
Sure. Well, I mean, it depends on what you're speaking to there.
But I mean, I think that there's been a pretty well-understood maxim
in deep learning that if you make the model bigger and you shove more data into it,
assuming you train it right and you have a good model architecture,
you'll get a better model out. And so on one hand, GPT-3 was not that surprising.
On the other hand, a tremendous amount of engineering
went into making it possible.
The implications of it are pretty huge.
I think that when GPT-2 came out,
there was a very provocative blog post from OpenAI
talking about, and we're not going to release it
because of the social damage it could cause
if it's misused. I think that's still a concern. I think that we need to look at how technology is applied,
and well-meaning tools can be applied in very horrible ways and they can have very profound
impact. I think that GPT-3 is a huge technical achievement, and what will GPT-4 be?
It will probably be bigger and more expensive to train, with really cool architectural tricks.
Do you think, I don't know how much thought you've done on distributed computing,
are there some technical challenges that are interesting, that you're hopeful about
exploring, in terms of a system, like a piece of code, that, you know,
with GPT-4, might have, I don't know, hundreds of trillions of parameters, which
have to run on thousands of computers?
Is there some hope that we can make that happen?
Yeah, well, I mean, today, you can write a check and get access to a thousand
TPU cores and do really interesting large scale training and inference and things like that
in Google Cloud, for example, right? And so I don't think it's a question about
scale, it's a question of utility. And when I look at the Transformer series of architectures
that the GPT series is based on, it's really interesting
to look at, because they're actually very simple
designs. They're not recurrent.
The training regimens are pretty simple.
And so they don't really reflect like human brains, right?
But they're really good at learning language models,
and they're unrolled enough that you can simulate
some recurrence.
And so the question I think about is, where does this take us?
So we can just keep scaling it, have more parameters, more data, more things, we'll get a better
result for sure.
But are there architectural techniques that can lead to progress at a faster pace?
This is, how do you get, instead of just making it a constant factor bigger,
how do you get an algorithmic improvement out of this? Whether it be a new training regimen,
or if it becomes sparse networks, for example; the human brain is sparse, all these networks are dense;
the connectivity patterns can be very different. I think this is where I get very interested,
and I'm way out of my league on the deep learning side of this,
but I think that could lead to big breakthroughs.
When you talk about large scale networks,
one of the things that Jeff Dean likes to talk about,
and he's given a few talks on is this idea
of having a sparsely gated mixture of experts kind of a model
where you have different nets that are trained and are
really good at certain kinds of tasks.
And so you have this distributed across a cluster, and so you have a lot of different computers
that end up being kind of locally specialized in different domains.
And then when a query comes in, you gate it and you use learned techniques to route it to different
parts of the network.
And then you utilize the compute resources of the entire cluster by having specialization within it. And I don't know where that goes,
or if it starts to, when it starts to work, but I think things like that could be really interesting as well.
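The sparsely gated mixture-of-experts idea described here can be sketched in a few lines of NumPy. This is a toy illustration under loose assumptions, not the actual system Jeff Dean describes: the "experts" are single linear maps, the gating network is one matrix, and only the top-k scoring experts run for a given input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, top_k = 4, 8, 8, 2

# Each "expert" is just a small linear layer in this sketch.
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
# The gating network scores every expert for a given input.
gate_w = rng.standard_normal((d_in, n_experts))

def moe_forward(x):
    scores = x @ gate_w                   # one score per expert
    top = np.argsort(scores)[-top_k:]     # route only to the top-k experts
    # Softmax over the selected experts' scores; the rest get weight 0.
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # Only the chosen experts run, so compute scales with k, not n_experts.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(d_in))
print(y.shape)  # (8,)
```

The point of the sketch is the routing: the cluster-level version distributes `experts` across machines, so each query only touches the machines its gate selects.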
And then on the data side, too, you can think of data selection as a kind of programming.
Yeah, I mean, in a sense, if you look at, like, Karpathy talked about Software 2.0; in a sense,
data is the programming.
Yeah.
So, let me try to summarize Andrej's position really quick before I disagree with him.
Yeah.
So, Andrej Karpathy is amazing.
So, this is nothing personal with him.
He's an amazing engineer.
And also a good blog post writer. Yeah. Yeah, he's a great communicator
I mean, he's just an amazing person. He's also really sweet
So his basic premise is that
Software is suboptimal. I think we can all agree to that
He also points out that deep learning and other learning-based techniques are really great, because you can solve problems in more structured ways,
with less, like, ad-hoc code that people write out and don't write test cases for, in some cases, and so they don't even know if it works in the first place.
And so if you start replacing systems of
imperative code with deep learning models, then you get a better result.
And I think that he argues that Software 2.0 is a pervasively learned set of models, and you get away from writing code. And he's
given talks where he talks about, you know, swapping over more and more parts of the code
to being learned and driven that way. I think that works. And if you're
predisposed to liking machine learning, then I think that that's definitely a good thing.
I think this is also good for accessibility in many ways because certain people are not
going to write C code or something. So having a data-driven approach to do this kind of stuff,
I think, can be very valuable. On the other hand, they're huge trade-offs. It's not clear to me
that Software 2.0 is the answer. Probably
Andrej wouldn't argue that it's the answer for every problem either. I look at machine learning
as not a replacement for software 1.0. I look at it as a new programming paradigm.
Programming paradigms, when you look across domains: you have structured programming,
where you go from go-to's to if-then-else; or functional programming from LISP,
where you start talking about higher-order functions and values and things like this;
or you talk about object-oriented programming, where you're talking about
encapsulation, subclassing, inheritance; or you start talking about generic
programming, where you start talking about code reuse through specialization and different
type instantiations.
When you start talking about differentiable programming,
something that I am very excited about
in the context of machine learning,
you're talking about taking functions and generating variants,
like the derivative of another function.
Like that's a programming paradigm.
It's very useful for solving certain classes of problems.
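The idea of generating the derivative of a function can be illustrated with forward-mode automatic differentiation via dual numbers. This is a minimal Python sketch of the paradigm, not how Swift's differentiable programming support is actually implemented:

```python
class Dual:
    """A number carrying its value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# f(x) = 3x^2 + 2x: value 16, derivative 6x + 2 = 14 at x = 2.
val, dot = derivative(lambda x: 3 * x * x + 2 * x, 2.0)
print(val, dot)  # 16.0 14.0
```

A language with first-class differentiability does this transformation in the compiler rather than through operator overloading, but the paradigm is the same: functions that produce variants of other functions.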
Machine learning is amazing at solving certain classes of problems.
Like, you're not going to write a cat detector
or even a language translation system by writing C code.
That's not a very productive way to do things anymore.
And so machine learning is absolutely the right way to do that.
In fact, I would say that learned models are really
one of the best ways to work with the human world in general.
So anytime you're talking about sensory input of different modalities, anytime that you're
talking about generating things in a way that makes sense to a human, I think that learned
models are really, really useful.
And that's because humans are very difficult to characterize.
And so this is a very powerful paradigm for solving classes of problems.
But on the other hand, imperative code is too.
You're not going to write a bootloader for your computer
with a deep learning model.
Deep learning models are very hardware-intensive.
They're very energy-intensive because you have a lot of parameters.
And you can provably implement any function with a learned model,
like this has been shown, but that doesn't make it efficient.
And so if you're talking about caring about a few orders of magnitude worth of energy usage,
then it's useful to have other tools in the toolbox.
There's also robustness, too.
I mean, yeah, exactly. All the problems of dealing with data and
bias in data, all the problems of, you know, Software 2.0. And one of the great things that Andrej is arguing towards,
which I completely agree with him on,
is that when you start implementing things with deep learning,
you need to learn from software 1.0
in terms of testing, continuous integration,
how you deploy, how do you validate all these things
and building systems around that,
so that you're not just saying,
like, ooh, it seems like it's good, ship it.
Right?
Well, what happens when I regress something?
What happens when I make a classification that's wrong?
And now I hurt somebody, right?
I mean, these are things you have to reason about.
Yeah, but at the same time, the bootloader
that runs us humans
looks an awful lot like a neural network, right?
So it's messy and you can cut out different parts of the brain.
There's a lot of this neuroplasticity work that shows
that it's gonna adjust.
It's a really interesting question.
How much of the world programming could be replaced
by software 2.0?
Like with...
Oh, well, I mean, it's provably true
that you could replace all of it.
Right.
So anything that's a function, you can.
It's not a question about if; I think it's an economic question.
It's, what kind of talent can you get?
What kind of trade-offs in terms of maintenance?
Those kinds of questions, I think. What kind of data can you collect?
I think one of the reasons that I'm most interested in machine learning
as a programming paradigm is that one of the things that we've seen across computing in general is that being laser
focused on one paradigm often puts you in a box. It's not super great. And so you look
at object-oriented programming. It was all the rage in the early 80s. And everything has
to be objects. And people forgot about functional programming, even though it came first.
And then people rediscovered that,
hey, if you mix functional and object-oriented
and structure, like mixing things together,
you can provide very interesting tools
that are good at solving different problems.
And so the question there is,
how do you get the best way to solve the problems?
It's not about whose tribe should win, right?
It's not about, you know, that shouldn't be the question.
The question is how do you make it so that people can solve those problems, the fastest,
and they have the right tools in their box to build good libraries, and they can solve these
problems. And when you look at that, you look at, like, reinforcement learning
as one really interesting subdomain of this. In reinforcement learning, you often have to have the
integration of a learned model combined with your Atari,
or whatever the other scenario is that you're working in.
You have to combine that thing with the robot control for the arm.
And so now it's not just about that one paradigm.
It's about integrating that with all the other systems that you have, including often legacy
systems and things like this. And so to me, I think that the interesting thing to say
is like how do you get the best out of this domain
and how do you enable people to achieve things
that the other ways couldn't do
without excluding all the good things we already know
how to do.
Right, but okay, this is a crazy question,
but we talked about GPT-3: do you think it's possible
that these language models, in essence, in the language domain, Software 2.0 could
replace some aspect of compilation, for example, or do program synthesis, replace some aspect
of programming?
Yeah, absolutely.
So I think that the learned models in general
are extremely powerful.
And I think the people underestimate them.
Maybe you can suggest what I should do.
If I have access to the GPT-3 API, would I
be able to generate Swift code, for example?
Do you think that could do something interesting?
And we'll go from there.
So GPT-3 is probably not trained on the right corpus.
So it probably has the ability to generate some Swift.
I bet it does.
It's probably not going to generate a large enough body of Swift to be useful.
But like, take it a next step further.
Like, if you had the goal of training something like GPT-3, and you wanted to train it to
generate source code, right, it could definitely do that.
Now the question is, how do you express the intent
of what you want filled in?
You can definitely write scaffolding of code
and say fill in the hole and sort of put in some
four loops or put in some classes or whatever
and the power of these models is impressive.
But there's an unsolved question, at least unsolved to me,
which is how do I express the intent of what to fill in?
Right?
And kind of what you'd really want to have, and I don't know that these models are up
to the task, is: you want to be able to say, here's a scaffolding, and here are the
assertions at the end, and the assertions always pass.
And so you want a generative model on the one hand, yes?
That's fascinating, yeah.
Right. But you also want some loop back,
some reinforcement learning system or something where you're actually
saying, like, I need to hill climb towards something that is more correct.
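The scaffolding-plus-assertions loop being described can be sketched concretely. Here the language model is faked with a hardcoded candidate list (`fake_model_candidates` is a hypothetical stand-in; no real model is called), but the structure is the one discussed: the human writes the assertions as the spec, the generator proposes fill-ins, and a checking loop keeps the first candidate that passes.

```python
def fake_model_candidates():
    # Stand-in for a code model proposing bodies for the hole.
    # A real system would sample these from something like GPT-3.
    return [
        "def add(a, b): return a - b",   # plausible-looking but wrong
        "def add(a, b): return a + b",   # correct
    ]

def check(candidate_src):
    """The assertions are the human-written spec of intent."""
    ns = {}
    try:
        exec(candidate_src, ns)
        assert ns["add"](2, 3) == 5
        assert ns["add"](-1, 1) == 0
        return True
    except Exception:
        return False

def synthesize():
    # Generate-and-test: keep the first fill-in satisfying the spec.
    for src in fake_model_candidates():
        if check(src):
            return src
    return None

winner = synthesize()
print(winner)
```

The missing piece the conversation points at is the feedback: instead of filtering finished samples, a hill-climbing system would feed the failures back into the generator.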
And I don't know that we have that.
So it would generate not only a bunch of
the code, but, like, the checks that do the testing? It would generate the tests?
I think the humans would generate the tests, right?
Because the tests would be fascinating.
Well, the tests are the requirements.
Yes, but... okay.
So, because you have to express to the model what you want. You don't just want gibberish
code.
Look at how compelling this code looks!
You'd get a story about four-horned unicorns or something.
Well, okay.
So exactly.
But those are human requirements. But then I thought it's a compelling idea
that the GPT-4 model could generate checks,
like, that are more high-fidelity,
that check for correctness.
Because the code it generates,
like say I ask it to generate a function that gives me the Fibonacci
sequence.
Sure.
I don't like.
So decompose the problem, right?
So you have two things.
You have, you need the ability to generate syntactically correct Swift code that's interesting,
right?
I think GPT series of model architectures can do that.
But then you need the ability to add the requirements.
So generate Fibonacci.
The human needs to express that goal.
We don't have that language that I know of.
No, I mean, it can generate stuff.
Have you seen what GPT-3 can generate?
You can say... there's interface stuff.
It can generate HTML, it can generate basic for loops
that give you, like, pieces of HTML.
How do I say I want Google.com?
Well, no, or not literally Google.com.
How do I say, I want a web page that's got a shopping cart and this and that?
Yeah, it does.
I mean, so, okay, I don't know if you've seen these demonstrations, but you type
in, I want a red button with text that says hello, you type that in natural language,
and it generates the correct HTML. They've done this demo.
It's kind of compelling, so you have to prompt it with similar kinds of mappings.
Of course, it's probably handpicked.
I've got to experiment with it, probably. But the fact that
you could do that once, even out of, like, 20, is quite impressive. Again, that's very basic.
Like, the HTML was kind of messy and bad. But yes, the idea is the intent is
specified in natural language. And so, I have not seen it. That's really cool. Yeah. Yeah, but the
question is the correctness of that. Like, visually you can check: oh, the button is red.
For more complicated
functions, where the intent is harder to check, this goes into, like,
NP-completeness kinds of things. Like, I want to know that this code is correct in general; a giant thing that does some kind of calculation,
it seems to be working.
It's interesting to think like should the system also try
to generate checks for itself for correctness?
Yeah, I don't know.
And this is way beyond my experience.
The thing that I think about is that there doesn't seem to be a lot of
equational reasoning going on.
There's a lot of pattern matching and filling in, and kind of propagating patterns that
have been seen before into the future and into the generated result.
And so if you want to get correctness, you kind of need theorem-proving kinds of things
and, like, higher-level logic.
And I don't know... you could talk to Yann about that and see what the bright minds
are thinking about right now,
but I don't think GPT is in that vein.
It's still really cool.
Yeah.
And so, probably, who knows, you know, maybe reasoning is overrated.
Yeah, it's overrated.
I mean, do we reason?
Yeah.
How do you tell? Right?
Are we just pattern matching based on what we have, and then reverse-justifying it to ourselves?
Exactly, the reverse. So, like, I think what the neural networks are missing, and I think GPT-4
might have, is the ability to tell stories to itself about what it did.
Well, that's what humans do. Right? I mean, you talk about, like, network explainability,
right?
And we give neural nets a hard time about this.
But humans don't know why we make decisions.
We have this thing called intuition.
And then we try to say this feels like the right thing,
but why?
And you wrestle with that when you're making hard decisions.
And is that science?
Not really.
Let me ask a few high-level questions, I guess.
You've done a million things in
your life and been very successful. A bunch of young folks listen to this, ask for advice
from successful people like you. If you were to give advice to somebody, you know, an undergraduate student or some high school student, about pursuing
a career in computing, or just advice about life in general,
is there some words of wisdom you can give them?
So I think you come back to change and, you know, profound leaps happen because people
are willing to believe that change is possible and that the world does change
and are willing to do the hard thing that it takes to make change happen. And whether it be
implementing a new programming language or implementing a new system or implementing a new research
paper, designing a new thing, moving the world forward in science and philosophy, whatever,
it really comes down to somebody who's willing to put in the work.
Right. And the work is hard for a whole bunch of different reasons.
One of which is, it's work, right?
And so you have to have the space in your life in which you can do that work, which is
why going to grad school can be a beautiful thing for certain people.
But also there's the self doubt that happens.
Like you're two years into a project,
is it going anywhere?
Well, what do you do?
Do you just give up?
Because it's hard?
Oh, no.
I mean, some people like suffering.
And so you plow through it.
The secret to me is that you have to love what you're doing.
And follow that passion, because when you get to the hard times,
that's when, you know,
if you love what you're doing, you're willing to kind of push through.
And this is really hard because it's hard to know what you will love doing until you start
doing a lot of things.
And so that's why I think that particularly early in your career, it's good to experiment.
Do it a little bit of everything.
Go take the survey class, or, you know,
the first half of every class in your upper-division
lessons, and just get exposure to things, because certain things will resonate with you, and you'll find out, wow,
I'm really good at this, I'm really smart at this. Well, it's just because of the way
your brain is wired. And when something jumps out... I mean, that's one of the things that people often
ask about, is, like,
well, I think there's a bunch of cool stuff out there. Like, how do I pick the thing?
Yeah, how did you, in your life, hook yourself in and stick with it?
Well, I got lucky. I mean, I think that many people forget that a huge amount of it or most of it is luck, right? So let's not forget
that. So for me, I fell in love with computers early on because they spoke to me, I guess.
What language did they speak? BASIC. BASIC. But then it was just kind of following
a set of logical progressions,
but also deciding that something that was hard
was worth doing and a lot of fun.
And so I think that that is also something
that's true for many other domains,
which is: if you find something you love doing
that's also hard,
If you invest yourself in it and add value to the world,
then it will mean something, generally.
And again, that can be a research paper,
that can be a software system, that can be a new robot,
that can be, there's many things that that can be,
but a lot of it is like real value comes from doing things
that are hard.
And that doesn't mean you have to suffer,
but it's hard.
I mean, you don't often hear that message.
We talked about it the last time a little,
but not enough people talk about this.
It's beautiful to hear a successful person.
Well, and self-doubt and imposter syndrome,
and these are all things that successful people suffer with as well,
particularly when they put themselves in a point
of being uncomfortable, which I like to do now and then just because it puts you in learning mode.
Like if you want to, if you want to grow as a person, put yourself in a room with a bunch of people that know way more about whatever you're talking about than you do and ask dumb questions.
And guess what? Smart people love to teach often, not always, but often. If you listen, if you're prepared to listen, if you're prepared to grow, if you're prepared
to make connections, you can do some really interesting things.
I think a lot of progress is made by people who kind of hop between domains now and then,
because they bring a perspective into a field that nobody else has, if people have only
been working in that field themselves.
We mentioned that the universe is kind of like a compiler.
The entirety of it, the whole evolution is kind of a compilation.
Maybe our human beings are kind of compilers.
Let me ask the old sort of question
that I didn't ask you last time,
which is what's the meaning of it all?
Is there a meaning? Like, if you asked a compiler why, what would a compiler say? What's the meaning
of life? What's the meaning of life? You know, I'm prepared for it not to mean anything. Here
we are, all biological things program to survive and propagate our DNA.
And maybe the universe is just a computer and you just go until entropy takes over the universe and then you're done. I don't think that's a very productive
way to live your life, if so. And so I prefer to bias the other way, which is saying the world
has a lot of value.
And I take, I take happiness out of other people and a lot of times part of that is having
kids, but also the relationships you build with other people.
And so the way I try to live my life is like, what can I do that has value?
How can I move the world forward?
How can I take what I'm good at and bring it into the world? And I'm one of these people that likes to work really hard
and be very focused on the things that I do.
And so if I'm gonna do that,
how can it be in a domain that actually will matter?
Because a lot of things that we do,
we find ourselves in the cycle of like,
okay, I'm doing a thing, I'm very familiar with it.
I've done it for a long time.
I've never done anything else, but I'm not really learning. Yeah, I'm keeping things going, but there's
a younger generation that can do the same thing, maybe even better than me, right? And
maybe if I actually step out of this and jump into something I'm less comfortable with,
it's scary, but on the other hand, it gives somebody else a new opportunity and also then
puts you back in learning mode, and that can be really interesting.
And one of the things I've learned is that
when you go through that, that first,
you're deep into imposter syndrome.
But when you start working your way out,
you start to realize, hey, well, there's actually
a method to this.
And now I'm able to add new things,
because I bring different perspective.
And this is one of the good things
about bringing different kinds of people together.
Diversity of thought is really important.
And if you can pull together people that are coming
at things from different directions,
you often get innovation.
And I love to see that aha moment, where you're like,
wow, we've really cracked this. This is something
that nobody's ever done before.
And then if you can do it in the context where it adds value,
other people can build on it, it helps move the world forward. That's what really excites me.
So that kind of description of the magic of the human experience, you think we'll ever create
that in like an AGI system, you think we'll be able to create, give AI systems the sense of
meaning where they operate in this kind of world
exactly in the way you've described,
which is they interact with each other,
they interact with us humans.
Sure, sure.
Well, so I mean, why are you being so speciesist?
Right?
All right, so AGIs versus bio nets.
Bio nets?
Bio nets.
So first of all,
You know, what are we but machines, right?
We're just programmed to run our, we have our objective function that we are optimized for.
Right. And so we're doing our thing. We think we have purpose, but do we really?
Yeah. Right. I'm not prepared to say that those newfangled AGIs have no soul, just because we don't understand them.
Right. And I think, when they exist,
it would be very premature to look at a new thing through your own lens without fully understanding it.
You might be just saying that because AI systems and the future would be listening to this.
Oh, yeah, exactly. You don't want to say, please be nice to me, you know, when SkyNet kills everybody, please spare me. Wise, wise. Look-ahead thinking. Yeah, but I mean, I think that people will spend a lot of time worrying about this kind of stuff,
And I think that what we should be worrying about is how do we make the world better? And the thing that I'm most scared about with
AGI is not that SkyNet will necessarily start shooting everybody with lasers and stuff like that, or use us for our calories. The thing that I'm worried about is that humanity, I think, needs a challenge.
And if we get into mode of not having a personal challenge, not having a personal contribution,
whether that be your kids and seeing what they grow into and helping guide them, whether it be
your community that you're engaged in and driving forward, whether it be your work and the things that you're doing,
the people you're working with and the products you're building and the contribution there.
If people don't have an objective, I'm afraid of what that means,
and I think that it would lead to a rise of the worst part of people, right?
Instead of people striving together and trying to make the world better,
it could degrade into a very unpleasant world.
But I don't know.
I mean, we hopefully have a long ways to go before we discover that.
And fortunately, we have pretty on-the-ground problems with the pandemic
right now. And so I think we should be focused on that as well. Yeah, ultimately just as you said,
you're optimistic. I think it helps for us to be optimistic. So that's, uh, fake it till you make it.
Yeah, well, and why not? What's the other side, right? So I mean, uh, I'm not
a very religious person, but I've heard people say, like,
oh yeah, of course I believe in God.
Of course I go to church because if God's real,
you know, I want to be on the right side of that.
And if it's not real, it doesn't matter.
Yeah, it doesn't matter.
And so, you know, that's a fair way to do it.
Yeah, I mean, the same thing with nuclear deterrence,
global warming, all these threats,
natural and engineered pandemics, all these threats we face,
I think it's paralyzing to be terrified of all the possible ways
we could destroy ourselves, I think it's much better,
or at least productive to be hopeful and to engineer defenses against these things
to engineer a future where like, you know,
see like a positive future and engineer that future.
Yeah, well, and I think that's another thing
to think about as, you know, a human,
particularly if you're young and trying to figure out
what it is that you wanna be when you grow up,
like I am, I'm always looking for that. The question then is, how do you want to spend your time?
And right now, there seems to be a norm of a consumption culture. Like, I'm going
to watch the news and revel in how horrible everything is right now. I'm going to go find
out about the latest atrocity and find out all the details of the terrible thing that happened and be outraged by it.
You can spend a lot of time watching TV and watching the news, or whatever people watch these days, I don't know.
But that's a lot of hours, right? And those are hours that, if you turn them into being productive,
learning, growing, experiencing,
you know, when the pandemic's over, going exploring,
right, it leads to more growth,
and I think it leads to more optimism and happiness,
because you're building, right,
you're building yourself, you're building your capabilities,
you're building your viewpoints,
you're building your perspective
and I think that a lot of the consuming of other people's messages leads to a negative viewpoint.
You need to be aware of what's happening, because that's also important,
but there's a balance that I think focusing on creation is a very valuable thing to do.
Yes, well, you're saying people should focus on working on the sexiest field of all,
which is compiler design.
Exactly.
Hey, you can go work on machine learning and be crowded out by the thousands of graduates
popping out of school that all want to do the same thing, or you could work in the place
that people overpay you because there's not enough smart people working in it.
And here at the end of Moore's Law, according to some people, actually, the software is a
hard part too. Optimization is truly beautiful.
Also on the YouTube side or education side, it'd be nice to have some material that shows
the beauty of compilers.
That's a call for people to create that kind of content as well. Chris,
you're one of my favorite people to talk to. It's such a huge honor that you would waste your time talking
to me. I always appreciate it. Thanks for talking today. The truth of it is, you spend a
lot of time talking to me just on walks and other things like that. So it's great to catch up. Thanks.
Thanks for listening to this conversation with Chris Lattner. And thank you to our sponsors: Blinkist, an app that summarizes
key ideas from thousands of books, Neuro, which is a maker of functional gum and mints that's
supercharged my mind, MasterClass, which are online courses from world experts, and finally Cash App, which is an app for sending money
to friends.
Please check out these sponsors in the description to get a discount and to support this podcast.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow
on Spotify, support it on Patreon, or connect with me on Twitter @lexfridman.
And now, let me leave you with some words from Chris Lattner:
So much of language design is about trade-offs,
and you can't see those trade-offs unless you have a community
of people that really represent those different points.