Future of Coding - Propositions as Types by Philip Wadler
Episode Date: November 19, 2023
The subject of this episode's paper — Propositions as Types by Philip Wadler — is one of those grand ideas that makes you want to go stargazing. To stare out into space and just disassociate from ...your body and become one with the heavens. Everything — life, space, time, existence — all of it is a joke! A cosmic ribbing delivered by the laws of the universe or some higher power or, perhaps, higher order. Humanity waited two thousand years, from the time of the ancient Greeks through until the 1930s, for a means to answer questions of calculability, when three suddenly arrived all at once: General recursive functions by Gödel in 1934, with functions of sets of natural numbers. Lambda calculus by Alonzo Church in 1936, with anonymous single-variable functions. Turing machines by Alan Turing in 1937, with a process for evaluating symbols on a tape. Then it was discovered that these three models of computation were, in fact, perfectly equivalent. That any statement made in one could be made in the others. A striking coincidence, sure, but not without precedent. But then it was quietly determined (in 1934, again in 1969, and finally published in 1980) that computation itself is in a direct correspondence with logic. That every proposition in a given logic corresponds with a type in a given programming language, every proof corresponds with a program, and the simplification of the proof corresponds with the evaluation of the program. The implications boggle the mind. How could this be so? Well, how could it be any other way? Why did it take so long to discover? What other discoveries like this are perched on the precipice of revelation? Philip Wadler is here to walk us through this bit of history, suggest answers to some of these questions, and point us in a direction to search for more.
And we are here, dear listener, to level with you that a lot of this stuff is miserably hard to approach, presented with the symbols and language of formal logic that is so often inscrutable to outsiders. By walking you through Wadler's paper (and the much more approachable Strange Loop talk), and tying it in with the cultural context of modern functional programming, we hope you'll gain an appreciation for this remarkable, divine pun that sits beneath all of computation. Links => patreon.com/futureofcoding — but only if you back the Visual Programming tier!! I'm warning you! Wadler's Strange Loop talk Propositions as Types Cocoon is good. It's not, like, Inside or Limbo good, but it's good. Actually, just play Inside. Do that ASAP. Hollow Knight, also extremely good. Can't wait for Silksong. But seriously, if you're reading this and haven't played Inside, just skip this episode of the podcast and go play Inside. It's like 3 hours long and it's, like, transformatively great. Chris Martens has done some cool work (eg) bringing together linear logic and games. Meh: Gödel, Escher, Bach by Douglas Hofstadter Yeh: Infinity and the Mind by Rudy Rucker Heh: To Mock a Mockingbird by Raymond Smullyan. The hierarchy of automata Games: Agency as Art The Incredible Proof Machine is what some would call a "visual programming language" because proofs are programs. But it's actually really cool and fun to play with. Approach it like a puzzle game, and give it 10 minutes or so to get its hooks into you. "Stop Doing Logic" is part of the Stop Doing Math meme. Unrelated: Ivan's song Don't Do Math.
Bidirectional Type Checking, a talk by David Christiansen List Out of Lambda, a blog post by Steve Losh Nobody noticed that these links were silly last time, so this time I'm drawing more attention to it: Ivan: Mastodon • Email Jimmy: Mastodon • Twitter This link is legit: DM us in the FoC Slack https://futureofcoding.org/episodes/068
Support us on Patreon: https://www.patreon.com/futureofcoding
See omnystudio.com/listener for privacy information.
Transcript
Discussion (0)
Hello, Jimmy.
Hello. Thanks for greeting me. That's very polite of you.
How was your Strange Loop?
It was pretty good, except for there was some weirdo who wanted to hang out with me.
Didn't really enjoy that.
Do you mean me?
Yeah, no, we got to meet in person for the first time, which was great.
It was a wild experience with Strange Loop.
We had a dinner for Future of
Coding, and so many people came out. I feel so
bad that not everyone got their food,
because the restaurant struggled
a little bit with the Strange Loop traffic.
Ivan being one of those people
who did not get his food.
Yeah, but that's the price you pay
for getting to see so many people
in person.
I have food at home.
But at Strange Loop, I have people.
Yes, you don't get it at conferences.
They've gotten rid of food.
No.
Yes.
I think my highlight of Strange Loop was at every social occasion possible,
going up to any random person and saying,
hey, do you want to come outside and record something for a sec? Also, you can't tell that guy Jimmy what we're doing. Yes, you have to come outside and say
"Dijkstra" like it's a swear word, but don't tell him. That was very fun. I got to do that for a couple of days, and got probably 40 people or something like that, all told.
So I had no idea what was going on, just that I wasn't allowed to know what was going on. And it took me a while to even know that I was not allowed to know what was going on. Yeah, I did follow
Ivan out once or twice, but sadly never caught the Dijkstra conversations. I mean, it's good. You know, I
think people might not realize that I get to hear the shenanigans on the episode for the first time, when they get to hear them, right? Like, Ivan just is like, we must stop using textual programming languages! No syntax, no grammar, no parser, no compile time, only live visual programming. Only live-ish world programming environments.
I'm like, well, all right.
What happened to the conversation we had?
I never know, right?
Like Intercal, I mean, I had no idea.
I had no idea either.
Yeah, there were just like little bits and parts
that I had, you know, a part in, and that was about it.
But most episodes, I'm like,
is this going to be a new song about my dog in here?
Or is there going to be just, you know, us having a chat?
Yes, I have previously described it as
you hear the episode the way the canvas hears the paintbrush.
That Intercal one, that was actually fun, because your experience... and, well, there was nobody else on that episode, but if there was somebody else on that episode, their experience was very different, in that we structured it kind of like a game. It was sort of a little bit scripted, like, here are some rules and we're all gonna try and follow them. At times.
Oh yeah. And so there's a version of that episode that is very different from the one that went out on the feed. That was also interesting. And I think that idea of, hey, what can we do in the recording, being totally divorced from, hey, what can we do when the episode comes out,
that separation is really interesting to me. Not just in a like,
oh, I'm going to go and do a bunch of weird things
and ha ha, you had no idea.
Surprise.
But also like,
what structure can we have in the recording process
that makes it fun and energizing for us
in a way that might not be,
the audience may not be directly conscious of that,
but they might get like an interesting feeling result out of it.
I like thinking of that kind of stuff.
Yeah, I think it's the same sort of thing
that I've thought about for the conference.
When you're at a conference and you're just an attendee,
you have one experience.
And then when you're a speaker,
I guess when I was an attendee,
I thought, oh, the speakers must have,
sometimes there's a lounge or whatever.
I was like, it must feel so different being a speaker. And the answer
is it does. You're just nervous the whole time and don't get to enjoy the conference until after
your talk is done. But I do think it would be interesting. Like, how can you make it so that,
you know, there's a difference in experience here between speakers and not. And then for,
you know, most of these recordings, the experience is not that different from the episode. But Intercal was definitely one of those where it was more like auditioning for a play than it was recording a normal podcast episode. It was like, all right, director Ivan over here is gonna give you some lines and tell you to read them.
Yeah, like I had, not a script, but a, here's a sequence of activities that we're gonna play together. Yeah. And then I took Jimmy, and nobody else, through those activities, and then, uh, yeah. Well, good episode, thanks everybody, I'm out, I'll see you later. That was great, another fine episode of the Future of Coding podcast.
Another very normal episode.
And it's time for Jimmy's Philosophy Corner.
It's time for Jimmy's Philosophy Corner.
Yeah, as you might be able to tell from this awkwardness here, Ivan wasn't as big of a fan of this paper. I don't know if that's the right word. He was hesitant, before we read it, to do this paper. That would maybe be more accurate.
Okay, so opening thoughts. This is pretty cool. Like, you know, props as types, that idea, and Wadler's paper. And I also watched Wadler's Strange Loop talk, just for additionally trying to drill this into some part of my brain that would be willing to receive it. It's very cool, and I think I get it, and I think I care, but it is hard for me. This is not my strong suit. This is not my area.
And so, yeah, it's another great Jimmy episode. We have some Ivan episodes; this one's going to be a great Jimmy episode. It's going to be a lot of Jimmy this time, and I'm just going to be here like, yeah, Gödel's pretty cool.
Kleene's pretty cool. I don't know. So what about you, Jimmy? How does this paper strike you?
So I remember when I read this paper for the first time, I had just met my now wife. I was
driving up on the weekends always because I lived about two hours away.
So I would drive up on the weekends, and we were hanging out,
and she had fallen asleep after a long day,
and I was just up because I don't sleep super early.
She did.
I just started reading this paper on this big format e-reader that I had,
and this paper just struck me. To me, like, I am not, you know, I have
some, I've done some self-study in philosophy. I have no formal background. I'm not a big, like,
formal logic kind of person, to be completely honest with you. But I loved the way this paper
connected so much of the history of how all of this programming came to be, and then this larger element, this bigger vision of what programming ought to look like, what good programming is. That's maybe a little bit implicit in the paper, but definitely in the talk that Philip Wadler gave, which I also saw in person, not that long after or whatever, at Strange Loop.
I don't know.
This paper really has always stuck with me.
I think it's a very well-written paper.
It's very clear, despite being a very intense subject
in some ways, a very formal subject.
I think of all the papers that are kind of
in the technical weeds in some sense,
this is probably one of the few that I felt like we could actually do a podcast on. You know, there's almost nothing else in this tradition of really caring about types in a deep sense that we could do as a podcast and do any service to it.
Yeah. Like, so many of them are about, all right, let's go through this theorem and prove that thing, and, okay, let me introduce this notation. Whereas this paper is not discovering anything new. Its contribution is not, I am giving something new that hasn't been said before. Its contribution is re-explaining things that have already been out there in a way that makes it more accessible for more people.
And on that note,
so you read the paper first
and then watched the talk, right?
Yes.
The talk didn't exist when I read the paper.
And I watched the talk first
and then still haven't read the paper.
You just watched the movie, didn't read the book. So is that really who you are, Ivan, watching the movie instead of reading the book?
Yeah, except I don't usually watch the movie either. I'll just watch the trailer and then go, well, yeah, that's not for me. But in all seriousness, watching the talk first, and reading the paper, and watching the talk again, I feel like, and maybe you'll disagree with this, but I feel like they are covering the same material in basically the same way, aside from the differences in medium, and I find the talk much more approachable than the paper.
I think that's true.
And so if you're somebody who hasn't already read this paper
and maybe isn't super game to go and absorb a whole bunch of formal logic notation and follow induction rules or whatever the hell all these Philosophy Corner people do, I would recommend watching the talk. Because the talk has some spots where, for me as a non-logician, it's a little bit tough. But if you realize, oh, when it's showing this big slide full of notation, and some of it's red and some of it's blue, what it's doing is saying, look, this stuff on the blue side and this stuff on the red side are the same. Not, hey, these both exist at the same time, let me just overlap these two images. There's some graphic design work that I think could have made the slides and the talk a lot clearer and more helpful. But in general, just listening to Wadler talk about this stuff: he's such a charming, absolutely cheerful, warm person, who says some things that I very emphatically disagree with to the core of my being, but he says them in such a way where I'm like, ah, yeah, okay. So I would advocate: if this is of any interest to you at all, but not very much interest, watch the talk, the Strange Loop 2015 talk called Propositions as Types. If you are the kind of person who's like, no, you know what, I like my category theory, I like my Haskell, I am already in that camp, you've probably already read this paper.
So I don't think you have to be in that camp to read the paper. I view reading the paper versus watching the talk by the author, in this case Philip Wadler's talk, as: the talk is the paper in talk format, right? It is very direct. I mean, the same jokes appear in both. Yes, that's the level at which we're translating here.
I feel it like I feel audiobooks versus reading, right? I have nothing against audiobooks, and I do listen to audiobooks on occasion, and there are certain books that I would rather listen to than sit and read. But the way in which, at least for me personally, when I consume an audiobook, I don't get the time to sit and contemplate. The flow continues; there's never a point where I get to go, did I understand that? Let me go back, let me think, let me contemplate. And so if you're just trying to get an overview of what this is about, I would agree: the talk is a really good talk, it makes it easier to understand. But I do think that there's a certain sense in which taking the time to sit and read, and stopping when you don't understand and thinking about it, really can make your understanding deeper. That would be why I would say, if you think that there's something really interesting here, sit with the paper and read it. I think that you'll get more. Now, this is again just me being nerdy and weird, right?
But there's a different experience to sitting and reading versus listening.
Definitely.
I'm going to take the exact opposite position
and say that when you are watching the talk,
you can, of course, pause and rewind at any time,
and I do, because I struggle with this stuff. And so I do sit with things, and I do go back, and I do re-watch some parts several times. But I also get out of it Wadler's inflection and his body language, the smirk on his face, or the way he tilts his head back and lifts his chin up when he says, and you can tell, can't you? All of these things that he does, to me, bring a depth that is not there as much in the writing. Because, well, Wadler is a good writer, but he is writing a paper that is meant to go into a journal.
Is this ACM? Where is this going?
It says it's submitted for publication. It doesn't say where.
But this is a paper that is meant to conform, and that conformity bleeds out a lot of expression that you do get when you see somebody actually deliver the thing on a stage, with lighting, as a minor work of theater.
I never have, and this might be the difference. Like, for me, when I watch a talk,
I almost always don't actually watch the talk
unless I'm in person.
If it's a YouTube video or whatever, it's a talk.
Either I have it on my phone and I'm listening to it in the background,
just listening to the talk,
or if I try to pull it off on the computer,
I get sufficiently distracted enough
that I'm not going to just sit and watch the person.
Maybe that's part of the difference.
I end up almost always just defaulting to it being audio. But, I mean, I think this is something our audience can figure out for themselves. Yeah, clearly they at least like some audio content, or they wouldn't listen to our, you know, incredibly long podcast. Or maybe they only get through the first 10 minutes. They're just here for the game reviews up front, and then they leave.
Oh shit, we gotta sneak one of those in. Cocoon is good, y'all. It's not, like, Inside good, but it's good. Play Cocoon.
Have not played it.
Yeah, it's by some of the people who made Limbo and Inside.
Oh, I liked Limbo. I haven't played Inside.
Oh my god, Jimmy. Inside is one of the best games I have ever played.
I've been, I bought a Steam
deck and have been kind of getting back
into games. The problem is I realized
Hollow Knight works way better on the
Steam deck than on the Switch.
And I'm like way
better at the game because there's way less input
lag and I never realized how much input
lag there was on the Switch.
But now I have to repeat this game for the third time, because I'm good at it now.
I also did that earlier this year, expecting that Silksong was just around the corner.
Yep, same. And I also did that, like, two years ago, expecting that Silksong was just around the corner.
Yeah, it's definitely just around the corner.
Oh, it's definitely just around the corner. Oh, definitely. And Unity doing their nonsense is definitely not gonna have any impact on the gaming industry, or on their release of this game, which is, I think, built in
Unity. Yes.
Inside is like a three-hour game, and it is very much a turn-off-the-lights experience. Sound is incredibly important for this game. Of any of the games I have played, I think this one does stuff sound-wise more interestingly than any other game I can name off the top of my head. So have some good headphones, play it in the dark, it's like three hours. Cannot recommend it highly enough.
Okay. Cocoon: not as good as one of the best games ever made, but still, you know, a little bite-sized snack of some more of that kind of goodness.
And the great thing about games is that they bring connections.
Deep connections
between various bits
together, just like propositions
as types does.
Uh-huh.
In fact, you might say that
powerful insights arise
from linking two fields of study
previously thought separate.
I might think that. You're right.
Yep.
So we should get into this paper.
We must.
Yeah, we really should.
All right, so I read the first sentence, right?
Examples include Descartes' coordinates,
which link geometry to algebra,
Planck's quantum theory, which links particles to waves,
and Shannon's information theory,
which links thermodynamics to communication.
Such a synthesis is offered by the principle of propositions as types,
which links logic to computation.
At first sight, it appears to be a simple coincidence,
almost a pun,
but it turns out to be remarkably robust,
inspiring the design of automated proof assistants and programming languages, and continuing to influence the forefronts of computing.
Yeah. Come on, that is a good starting paragraph.
It is good. And it's
like, what it is doing, right? We'll get into concretely what it does, but just broadly, setting this up as, look, we're gonna find there's a thing over here and a thing over there, and they have this rich correspondence between them. I love doing that. There's so much of that that you get to do in programming and in the sciences and in math, and this is just, look, here's a bunch more, and here's a cool recipe for finding even more of them. New ones that haven't been discovered yet. Or whether it's like, ah, we've got a thing over here, and we can expect that there's going to be a thing over there, but we haven't found it yet. Or there might be two things that already exist, and we don't know that they have a correspondence between them, so let's go find that. This is fun stuff. This is very cool.
Yeah. That's it. That's all.
It seemed like there was a "but" coming with this.
Okay, okay. Well, but that's literally the whole paper. We've done it. We have summarized this great work. I've been Ivan Reese, thank you very much for your time, and I'll see you next month, when we don't know what we're doing next month either.
Either. Either. Either or either. It's fine. Either is fine. Or either is fine. Either or either is fine.
Okay, so Jimmy, bring us into this morass.
Okay. So, I mean, yes, this paper starts off with the bang of what it's gonna slowly get you to appreciate over the rest of the paper: that there is this deep correspondence. And he points out that there are kind of three correspondences here. There's propositions as types, then there's proofs as programs, and then there's simplification of proofs as evaluation of programs. So there are these three things that are a correspondence between logic and programming. Logic has propositions, proofs, and simplification of proofs. Programming has types, programs, and evaluation of programs.
And already we are in trouble.
Already? Yeah.
Well, we have... four or five of these six things are on Ivan's shit list.
So the things I like
are evaluation of programs.
That's good because we need
to have our programs have an effect in the world.
Programs, now they're, I'm not so
sure, are they spatial? Do they
exist? I don't know. A little bit
iffy. Types, get out.
And then we have all this logic
stuff that's like,
it doesn't mean anything to anybody.
So now we're in trouble.
Okay, so ignoring that.
So the as here, the interesting thing is like,
we're thinking of propositions as types.
Really, you can think of it as they're the same thing.
They're equal, right?
When we say as, we're not thinking,
oh, we can kind of think about these
things as types. It's no, propositions are types, proofs are programs. Simplification of proofs
is the evaluation of a program. That's the interesting bit here is this directly connected.
Now, Ivan says that types aren't great. I know a lot of our listeners would very much disagree.
But I think even Ivan doesn't really mean the statement he's saying.
Because in order for us to really talk about types,
here I think it's important that, like, yes,
in some ways the types we're talking about in this paper
are the static types of programming languages.
But in other ways, they're not.
Yeah. I kid,
and they're definitely a different thing.
In all seriousness.
There are some programming languages,
of course, like Haskell, that's really
influenced by this tradition. And even
Wadler was involved in getting
generics into Java. He helped implement
all of that. And there's some direct
connections here. But there are a few papers that trace the history of types in programming languages and show how
they're complicated, and we might do those at some point. In this specific context, I think what is
meant by types is types as you would find in the simply typed lambda calculus. Yes. Not types as in like, you know, the syntax of declaring a type for an argument to a function
or something like that in a particular programming language.
Like there's the fundamental concept of types from a very pure like logic perspective versus
the actual interface that you encounter when you do real programming
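To make "types as you would find in the simply typed lambda calculus" concrete for readers following along, here is a toy checker. This is our own minimal sketch, not notation from Wadler's paper: three term forms (variable, lambda, application), arrow types, and the standard inference rules.

```python
# A toy type checker for the simply typed lambda calculus: the pure sense of
# "types" being discussed, stripped of any real language's surface syntax.
from dataclasses import dataclass


# Types: base types like "A", and function (arrow) types like A -> B.
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object  # domain type
    cod: object  # codomain type


# Terms: variables, lambdas with an annotated argument type, and application.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    arg: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object


def type_of(ctx, term):
    """Infer the type of `term` under context `ctx` (a dict of name -> type).
    Returns None if the term is ill-typed. These three cases are exactly the
    inference rules of the simply typed lambda calculus."""
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, Lam):
        body_ty = type_of({**ctx, term.arg: term.ty}, term.body)
        return Arrow(term.ty, body_ty) if body_ty is not None else None
    if isinstance(term, App):
        fn_ty = type_of(ctx, term.fn)
        arg_ty = type_of(ctx, term.arg)
        if isinstance(fn_ty, Arrow) and fn_ty.dom == arg_ty:
            return fn_ty.cod
        return None
    return None


# The identity function \x:A. x has type A -> A. Read as a proposition, that
# type is the tautology "A implies A", and the term itself is its proof.
identity = Lam("x", Base("A"), Var("x"))
```

The identity example is the smallest instance of the correspondence the episode is about: the checker assigns it the type A -> A, which is also the simplest theorem it proves.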
and i do have a substantial critique not just a jokey jokey critique along these lines but i will
save that for later okay yeah and i think it'll be interesting to kind of dive in to part of what
this paper touches on that i think will be really interesting is looking at like why did these types
arrive and yeah in mathematics and in logic?
And what do they actually do for us? And we'll get to there.
I don't want to jump ahead to that yet.
But I think I will put out there just for people who might not know,
propositions.
I think that's a term that oftentimes is a little maybe thrown around,
but it's really just statements in logic.
So it's something that makes a claim.
So the cat is on
the mat is a classic example that's given in philosophy of a proposition, right? So it's a
declarative statement. And then of course, proofs are just proving something. And then simplification
of proofs, it's taking big complicated proofs and making them into the smallest possible form. And I think already,
if you start thinking about this fact that when we are writing a program, according to this,
we're making a claim, right? There is some proposition going on here, like in our types,
and then there's these proofs. So our whole program is a proof of something. And then when we run our program,
we run that proof,
we simplify that proof
and we get out an answer.
I think that is a really weird
and counterintuitive way
to think about programming.
And this is what,
when I first read this paper,
struck me so much.
Maybe, you know,
this was, I mean, almost a decade ago at this point
It was something I had never encountered before. And I think, encountering it for the first time: this doesn't just apply to proof assistants, you know. I had heard about these mechanical things or whatever. But this is any program you have written. It corresponds to some proof or other.
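A minimal sketch of that claim, using Python's type-hint notation (the function names here are ours, purely for illustration): read a tuple type as "and", a Callable as "implies". A total function with a given type is then a proof of the corresponding proposition.

```python
# Proofs as programs: each function below proves the proposition written in
# its docstring, simply by having the corresponding type.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")


def proj1(p: tuple[A, B]) -> A:
    """Proves A ∧ B → A: from evidence for both, extract evidence for A."""
    return p[0]


def modus_ponens(f: Callable[[A], B], a: A) -> B:
    """Proves (A → B) ∧ A → B: apply the implication to the evidence."""
    return f(a)


def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Proves (A → B) ∧ (B → C) → (A → C): the hypothetical syllogism."""
    return lambda a: g(f(a))
```

Python doesn't enforce these types, so the hints are documentation here; in a language like Haskell the checker itself would verify each "proof". But the shape of the idea survives: writing a program of a given type is constructing a proof of the proposition that type spells out.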
i i find what helped me and may help you, listener, wrap your head around this correspondence,
is thinking of not just evaluation of any program, not just, you know, writing code in any programming language,
but specifically something Lisp-like.
When the code that you write is some sort of structure of self-similar things that's like nested, not like a series of statements to perform one after the other, like an imperative kind of thing, but a more nested, you know, built up structure, you can take some bit of code and replace that little bit of code with whatever it would evaluate to.
So if you have some function call, you just run that function call, and you have simplified the program a little bit by evaluating that little sub-part of it. And the ultimate result of your whole program is when you do that all the way down, when you evaluate all of the code and replace all of the function calls with whatever they return, and collapse that big structure of code down to a single value at the end. Assuming, you know, you're writing in a Lisp that works in that style, because there are different flavors of Lisp. But to me, visualizing it that way really helped me see the relationship, where it's like, oh yeah, when looked at in that light, evaluation of this program is taking this large structure and gradually reducing it down and reducing it down until you're left with the simplest structure that results.
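The picture Ivan describes can be sketched directly: a toy expression language of our own (not any real Lisp), where one `step` replaces a single reducible subexpression with its value, and repeating that collapses the whole structure down to a single value.

```python
# Evaluation as simplification: repeatedly rewrite one reducible
# subexpression until only a value remains.
from dataclasses import dataclass


@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object


def step(expr):
    """Perform one simplification step on the leftmost reducible
    subexpression, returning a slightly simpler expression."""
    if isinstance(expr, Lit):
        return expr  # already a value; nothing to do
    op = (lambda a, b: a + b) if isinstance(expr, Add) else (lambda a, b: a * b)
    make = Add if isinstance(expr, Add) else Mul
    if isinstance(expr.left, Lit) and isinstance(expr.right, Lit):
        return Lit(op(expr.left.value, expr.right.value))
    if not isinstance(expr.left, Lit):
        return make(step(expr.left), expr.right)
    return make(expr.left, step(expr.right))


def normalize(expr):
    """Keep stepping until nothing changes: the 'simplest structure that
    results'. In proof terms, this is reaching the proof's normal form."""
    nxt = step(expr)
    return expr if nxt == expr else normalize(nxt)
```

So `Add(Lit(1), Mul(Lit(2), Lit(3)))` steps to `Add(Lit(1), Lit(6))` and then to `Lit(7)`, the same way a nested Lisp expression collapses inward, and, under the correspondence, the same way a proof is simplified.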
Yeah, I do think that's a very intuitive way of looking at it. One of the things I want to,
I think we have to emphasize up front, though, is this isn't, this isn't just a simplistic
idea here, right? When, when you first mentioned this to people, like there's a direct connection
between logic and programming.
One thing people might think is, yeah, duh.
Of course, we do Boolean logic, we do all of this all the time.
Of course it's logical.
We use logic in our programs, but that's not it.
That's not what's being said here.
And it's not that there's just logic. I think one of the other things people don't realize is how many different forms of logic there are here.
So we get a list that's propositional, predicate, second order, intuitionistic, classical, modal, and linear.
There's temporal.
There's all sorts of different kinds of logic. And for every single one of these, there is some way of taking that form of logic and
translating it into something computational. I think of Chris Martens's work, where they take
linear logic and create stories or analyze games, things like that, right? Where there's this, there is this very
deep connection between these logic and these computational elements. This is what this paper
starts with is that this, this notion is deep. It has a breadth. It is a much deeper connection
than you might think. And then the question becomes why? Why is this even the case?
And we're not going to answer the question yet,
but why would it be that there is this connection
between all of these things?
What makes it so that logic on one hand
and programming on the other
are somehow so intricately related?
So intricately related as to be the same thing, just manifested in different ways, but that to their core they share an identity.
Yes, absolutely. And so we get some history. I mean, most of this paper, I think, is pulling in a bunch of different historical things. It actually is a little hard to think about what's the right way to go through this paper, because so many of the papers we read are kind of structured as arguments.
Yeah.
Whereas this is just a lot of facts. This is a lot of things that happened. So, you know, we first discovered this kind of stuff with a number
of different people like there's all these names
being dropped in this paper, like, here's kind of the history. So we get Gentzen, we get Church, we get Turing, we get Hindley, Milner, Curry, Howard. But it all kind of starts with... the first name, actually, here is Aristotle.
The origins of logic lie with Aristotle and the Stoics in classical Greece,
Occam and the Scholastics in the Middle Ages,
and Leibniz's vision of the calculus ratiocinator.
I did not plan on how to say that word.
Radiocinator?
I've got one of those in my sink.
Radiocinator?
Yeah, yeah, radiocinator.
At the dawn of the Enlightenment.
I should have planned on how to say that.
Yeah, and then we get Boole and De Morgan and Frege.
I'll let you pronounce the next name because I know.
Pierce and Piano.
Nope, it's Purse.
Oh, ha ha.
No.
I got you.
I tricked you.
And is it Pano?
Is it Piano?
Is it Pano?
Piano, I think is right.
The one who has the arithmetic.
Yeah, but it is famously Purse.
And everyone mispronounces it, because it looks like "Pierce" and it should be "Pierce," but it's "Purse."
Great. So I just tricked you. Great, cool.
He's an American pragmatist.
oh so i should know better aha he's the founder of american pragmatism oh well i'll genuflect in
his direction as you throw me under this oncoming bus.
And then we also get, at the dawn of the 20th century, Russell and Whitehead, who wrote the Principia Mathematica.
Famously Russell and Whitehead.
Yes, Russell and Whitehead. And
they showed this this idea that like we really have a way of possibly grounding mathematics in logic.
And we'll find out later that that doesn't work. But at the time, this was a real big
accomplishment. I mean, Principia Mathematica is this huge book. It's a masterwork. And it goes
through all sorts of things, kind of grounding arithmetic and sets and doing all sorts of things, grounding arithmetic in sets
and doing all sorts of stuff that hadn't been done before.
This inspired Hilbert.
This is really where the start of computing happens,
is Hilbert's program.
He didn't write a program.
He made a program, which was kind of confusing in this context.
Yeah, like a research program.
Yeah, his project.
Yes, yes. Hilbert wrote the first program? No, he definitely did not. Ada Lovelace did. But yeah: Hilbert's Open Collective. Support us on Hilbert's Patreon.
Hilbert launched a podcast in which he... yes, he did. He wanted to solve what was called the decision problem, which in German is the Entscheidungsproblem.
The Entschei-dungs-problem... problem. I have no idea.
So: to develop an effectively calculable procedure to determine the truth or falsity of any statement.
So the idea here is, can we solve all of mathematics?
Given some mathematical statement, can we have some mechanistic way of knowing whether the statement is true or false?
Now, I have a question for you, historian Jimmy.
So I know that the goal of Principia Mathematica was to reconstruct all of mathematics so that it would be built out of a single sort of seed element. That at the very bottom of mathematics was set theory, and if you could get set theory, then you could get all the rest of mathematics out of that.
Yeah.
And if memory serves, they made Principia Mathematica, but my recollection is that they weren't satisfied with it for some reason. Or maybe they were, and it was later discovered: oh, it is actually not correct. I'm spoiling a little bit for anybody who doesn't know the story, but ultimately it's like: oh yeah, Principia did not actually do that.
It did not succeed.
Yes.
So one of the things that came out of Principia is that naive set theory didn't work.
Right.
So the idea of naive set theory is, like: really, this is supposed to come from some intuitive idea, that if there's a collection of objects, we could collect together that collection and call that a set. So one of the things Russell was actually trying to do is ground mathematics in logic and these very logically intuitive principles, but also ground mathematics in the physical world. That was also part of Russell's plan here. So when we talk about sets of objects: he wanted to say, what are numbers? Well, there's these fancy ways of doing numbers with iterative sets, but that's not really what Russell looked at. It was like: what is the number two? Well, it's the set of all pairs, right? So if you pair up every object in the world and you put each of those pairs into a set, that set of all pairs is the number two.
Biblical.
So one of the things that happened, though, as part of this, is that Russell discovered that naive set theory doesn't work, because of Russell's paradox, which is probably something people have heard of before. It's the set of all sets that do not contain themselves, is one way of putting it. Or: there's a barber who shaves all and only the men who don't shave themselves.
Yeah, yeah, yeah. Principia kind of left a bunch of things undetermined, and we'll get to Russell's solution for that in just a second.
And so Hilbert comes along and is like: hey, can I build some kind of algorithm, some kind of process, to take any statement in logic, or I guess by extension in math, and determine whether that statement is true or false?
And this is something that maybe ideally would have been settled by Principia Mathematica, but hasn't been. This question, can you find some way to calculate the truth or falsity of a statement, is also at the root of what is incomplete about Principia Mathematica.
Yeah, yeah. It laid a bunch of foundations, but there were still a lot of questions left unasked.
And, you know, in 20th century fashion, there was kind of this arrogance that
we now can solve all of mathematics once and for all, right?
We will not only set mathematics on its firm foundation,
but we will solve all the problems.
And then who comes along but...
Kurt Gödel, who showed, at the very same conference, the day before Hilbert made this proclamation, that you actually can't do that.
So we call them the incompleteness theorems, right? You'll hear of Gödel's incompleteness. But what he really showed was that you have one of two choices. You can make a system that is complete, that can express all of the things you want to express, but it's inconsistent; it's going to have some inconsistency there. Or you can make something that is completely consistent but incomplete: there are things that you want to express with it that you can't.
Just to get in a plug while we're here: my favorite book that has explored this space is not Gödel, Escher, Bach. It is a book called Infinity and the Mind by Rudy Rucker. This book covers the busy beaver and a whole bunch of other just delightful examples. So that is an easy plug for me to make. The whole thing is online; you can go read the whole thing in a web browser if you want, but I recommend getting a little paperback and going and sitting by a river.
All right. So we've got Gödel. Gödel comes along and goes: boom. You wanted this Principia Mathematica that explained all of math in terms of, you know, set theory? No, you don't get that. Also, you don't get any other formulation. You cannot explain all of mathematics built out of any single seed. They are always going to be either incomplete or inconsistent. Now what?
Well, as part of this program,
one of the interesting things here is
Gödel's explanation wasn't fully appreciated,
wasn't fully accepted right away.
There was still a lot of work here.
Hilbert clearly wasn't aware of it when it first happened, but it also didn't have these vast implications.
In fact, today it's kind of ignored in a lot of ways.
Set theory still is the underlying foundation of mathematics, even though we know it's incomplete.
So that's interesting. But also, Hilbert's project, despite being actually impossible, spun out a bunch of people to search for an answer to this decision problem. And of course, they weren't able to actually solve the decision problem. But what they were able to do is figure out a nice way of showing that it's impossible, of showing once and for all that there is no way to make this decision problem happen.
So you might think, okay, so why would that even matter if Gödel has already shown mathematics is incomplete?
Well, it might be the case that our system has these problems, or there's some way around it, or Gödel's proof wasn't quite right. For a lot of people, it's a fairly complicated setup. And so even if Gödel might have shown some of this, you still have this problem of: what exactly would a decision-problem algorithm look like, if we could do it? What would be the elements involved? And maybe we can't do it for all things, but maybe the things we can do it for are useful enough. And so we get that there
were three independent definitions by three different logicians. And I'll spoil the joke
that's in both the paper and the talk. Like buses, you wait 2,000 years for a definition of
effectively calculable,
and then three come along all at once.
So the
three were Lambda Calculus,
published in 1936
by Alonzo Church,
Recursive functions, proposed by Gödel at lectures in Princeton in 1934 and published in 1936 by Stephen Kleene.
And Turing machines, published in 1937 by Alan Turing.
And, go ahead.
No, I mean, it's like, I have already finished the podcast.
I'm off, like, eating lunch at this point.
Uh-huh.
Yeah.
It's rather late for you to be eating lunch.
Usually that would have been acceptable, but.
Podcast days throw my schedule to hell.
Uh-huh.
Oh, all right.
So is hell one of the words I should be censoring?
No.
I have this debate. It's like, which one of the words deserves the censor? Like, which words are deserving of that, et cetera? And which ones can just slip the hell on by and it's no big deal?
I don't know.
I mean, that's definitely not a word
that needs to be censored.
I think you can go on, you know, television standards, right?
Yeah. I don't know if those are different in Canada, though.
So in America,
what are the words that are censored on television?
You're just trying to get me to say words.
I mean...
Because I've only had one.
It's a podcast.
You can say a lot of words.
It's okay.
Oh, yeah, yeah.
But I just...
I like the idea
that I've only had to be censored once.
Now, Ivan could, of course, just randomly choose to censor me.
But I like the idea that I've made this very conscious choice.
Probably not forever.
But like the next time I get censored, it will be because I decided it was worth it.
Yeah.
Not because Ivan goaded me into saying so.
So.
Church and Turing, right?
I think this is a classic tale.
Everyone knows the love story of Church and Turing.
The tale as old as time.
This is fairly rehearsed.
I mean, if you haven't heard about kind of Church and Turing
and the difference here,
I don't think we're going to go into all of the details here.
Needless to say, Church came up with the lambda calculus.
Turing came up with Turing machines.
And these are very, very different ways
of talking about what is effectively calculable.
To put that in English,
they're very different ways of modeling computation.
We're all programmers listening to this. If you're not a programmer, you're not listening to this.
Unless, um... that statement... you're, like, a family member or friend?
No, you took my complete, consistent statement and introduced doubt.
Uh-huh.
Um, I can't believe you Gödeled me into that.
So it's like Lambda calculus is the approach to computation that underlies a lot of beloved functional languages
like most LISPs and Haskell and the ML family and all that.
And then Turing machines is the model of computation
that underlies what, like pretty much nothing at this point?
I mean, you could think of it
as kind of like the machine way
of looking at computing.
Well, it's one of the machines.
Yeah, I mean, of course the
simplified approach was not.
The distinction here I'd want to draw is: lambda calculus is like, oh right, denotational semantics. What does this mean? You define it in terms of math.
Whereas Turing machines, operational semantics,
what does it do?
I can understand what this is by thinking of a machine
and it working and what it does.
So state machines are in that tradition of Turing
more than lambda calculus.
These are loose connections here.
Yeah, these are loose connections.
Lou drew these connections.
Okay.
But I think one of the things that is remarkable
is just how incredibly different these two things are,
and yet they're fundamentally the same.
Yes.
And the reason that these came, that they were both discovered, to use a word that I shouldn't use till later, and you can tell, or invented, one of the two, at the same time, yes, at the same time, is that what we're trying to do is figure out a way that we can compute all of these proofs.
So one thing you could do, you could imagine a way of solving the decision problem is someone feeds you a statement and you just say whether it's true or not.
But another way is just to list all the true statements.
And this is something that you can kind of, I think, more intuitively think about for a Turing machine is the Turing machine might have been able to generate all of the true statements. And it turns out there's a statement it can't actually prove.
There's something about Turing machines that make them undecidable. And this is the halting
problem. This is Gödel's incompleteness theorem, but in a physical machine. That's what's so
fascinating to me about Turing's way of doing it.
And the halting problem is a question of, well, if I give a Turing machine some arbitrary program,
can it determine whether it halts or not? Of course, there are some programs that will be
able to figure out, yes, that halts. Some people hear this and are like, of course I can figure
out if a program halts. Yeah, of course. There's some programs, it's very easy to figure out do they halt or not. Some programs you can
say yes, some programs you can say no, but can you do this for every program? Can you always know?
And the answer is no. One example, I guess, of a Turing machine program that you can't know
is where you feed the program to itself. Let's assume we have this halting detector.
One of the things we can do is we can write a program that takes that halting detector,
and we just run it in our program. We figure out if it thinks it's going to terminate or not,
and then we do the opposite. And so we kind of got this self-referential problem here. And so in the same way that Turing machines have this halting problem,
so does lambda calculus.
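That construction can be sketched in code. Here's a toy Python rendering (my illustration, not from the episode): the hypothetical oracle is passed in as a parameter, and the "infinite loop" is simulated with an exception so the sketch itself terminates.

```python
# Sketch of the diagonal argument against a halting detector.
# halts(program, arg) is a hypothetical oracle claiming to predict
# whether program(arg) halts. No correct implementation can exist.

def make_troublemaker(halts):
    def troublemaker(program):
        if halts(program, program):
            # Oracle predicts halting, so do the opposite: "loop forever"
            # (simulated here by raising, so the example terminates).
            raise RuntimeError("looping forever")
        # Oracle predicts looping, so do the opposite: halt immediately.
        return "halted"
    return troublemaker

# A candidate oracle that always answers "halts" is contradicted:
t = make_troublemaker(lambda program, arg: True)
try:
    t(t)
    oracle_was_right = True
except RuntimeError:
    oracle_was_right = False

print(oracle_was_right)
```

An oracle that always answers "loops" fails symmetrically, since the troublemaker then halts. Whatever `halts` answers on this input, it is wrong, which is the contradiction.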
What I like is that the way that we broke the Turing machine
is by introducing recursion.
And that's also how we break set theory.
Is that when you want to break set theory, you just have to introduce the idea that sets can contain themselves, or sets can contain descriptions of themselves, depending on what mechanisms you want to bring in.
But it's this self-reference that is frequently the thing that introduces inconsistency into what would otherwise be a complete system. Right, the incompleteness theorem: you can't have both complete and consistent. So you can make a really big complete system, but to be complete it needs to include self-reference,
and that is the thing that introduces the inconsistency in many cases. And if you take away the self-reference,
then you can have a big, encompassing, consistent system,
but it's incomplete because it doesn't have self-reference.
Yeah, and it's the same exact thing with lambda calculus, right?
The way in which you can see the halting problem,
I think the halting problem in Turing machines is so much cooler
than the halting problem in lambda calculus because it's a machine and it feels like, oh, we should be able to say anything about a machine in the physical world.
That just feels wrong that we can't.
Whereas what happens in Lambda calculus is if you have the untyped Lambda calculus, you can take a function and apply it to itself.
That's the idea. And if you do this, I think the most fun one is you do it twice, and you start getting this just growing, gigantic lambda term, because each time you apply the function, it's returning itself twice, and you get this big, huge, growing term that will never end. Yeah, so it will never halt. And so if you think about this again, in this propositions-as-types thing, this is saying: as we try to simplify our proof, our proof is getting larger and larger and larger and larger, forever. And so there is no simplifying of it, but we know it's not in simplest terms, because we still need to apply these operations.
And so this podcast is complete, but it is inconsistent, because I'm going to reference this podcast. Earlier, when I recommended the book Infinity and the Mind by Rudy Rucker, I meant to reference it here, because it's about this kind of stuff. Like, oh, if you're doing a lambda calculus simplification, and you, you know, just return the same program, but two of them, each time you try to simplify it, it actually returns two of itself, and it grows and grows and grows without bound. This is a book full of those kinds of fun things that you can do to break these constructs.
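The self-application they're describing is, at its simplest, the classic term Ω = (λx. x x)(λx. x x). As a toy Python illustration (mine, not the episode's), eager evaluation turns the never-ending reduction into a stack overflow:

```python
# omega is λx. x x: a function that applies its argument to itself.
omega = lambda x: x(x)

def diverges(term):
    """Apply term to itself; report whether the reduction 'ran away'.

    In Python, the endless chain of self-applications surfaces as a
    RecursionError rather than a silent infinite loop.
    """
    try:
        term(term)
        return False
    except RecursionError:
        return True

print(diverges(omega))
```

The "growing" variant in the episode corresponds to something like λx. x x x applied to itself, where every simplification step yields a strictly bigger term.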
So I just thought I'd reference that,
self-reference that.
You were saying?
There's also To Mock a Mockingbird, is that it?
Yeah, yeah. Which is like a kind of a series of puzzles, kind of teaching you logic through some puzzles here. And I think it's a pretty good book. I haven't, like, sat and read it all the way through, but I enjoy looking at To Mock a Mockingbird.
I haven't read it, but a lot of smart people say it's a good book. So when some smart person says it's a good book, I can go: yeah, yeah, yeah, a lot of smart people I know say it's a good book.
So the thing is, actually, this halting problem seems completely intractable, but we actually know how to solve it. We know how to solve it by just adding types.
What?
Okay, so, yes, we still get this problem where it's incomplete or not, right? But all well-typed programs in the simply typed lambda calculus will halt.
Hmm.
Because you don't have recursive types? Because it's not... what?
Yeah, because the typing rules make it so it is a type error to apply a function to itself. And so these type rules don't, in some ad hoc way, say you can't do recursion. It's: as soon as you add the types, that stuff you were doing to do recursion just doesn't type check.
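To spell out the standard argument (my summary, not a quote from the episode): for the self-application $x\,x$ to type check, $x$ must have both a function type and that function's argument type at once:

```latex
x : A \to B \quad\text{and}\quad x : A
\quad\Longrightarrow\quad A = A \to B
```

No finite simple type satisfies $A = A \to B$, so $x\,x$, and with it looping terms like $\Omega$, is rejected by the type checker.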
And that's, I think, really fascinating, because if we go back to Principia Mathematica. Principia, Principia, I'm going to say it both ways so that I'm right one of the times. Russell and Whitehead's Principia Mathematica. Yes. And so if you go back to that, this is exactly how Russell solved his paradox. He invented, or discovered, one of the two, the idea of types, and you can tell. He put this idea of types onto sets. He said that sets have a certain type, and there's kind of this hierarchy, and these rules around how you're able to have sets inside of each other or not. Now, this was not the solution the mathematics community ended up actually going with. If you go look at, like, ZFC and the way that set theory is defined now, it's not a theory of types. But this was Russell's idea.
And so the exact same idea is applied now to lambda calculus. Now I have no idea. Is there
a Turing machine? Types? Like a typed Turing machine?
Yeah. Have you heard of such a thing? No, but I'm
like, you know, arm's reach away from Google.
Typed Turing machine.
Also, we need to figure out when we're going to
start the second part,
the second segment, because we're an hour in.
I don't want a three-hour podcast.
So when do we reboot and switch to the second segment?
Did you know? Did you have an advance
planned out, like, ah, here's first segment,
here's second segment, like where in the paper we split it up?
My plan was to go right around here.
Okay, cool. So I'll let you bring us to that transition.
Simply typed Turing machine. I mean... no. All right, so somebody, somebody in the community, go out and do a simply typed Turing machine. Yeah, how do you change a Turing machine so that it just has types, so it has rules, so that the halting problem is trivial?
There's got to be a way.
I guess pushdown automata might be the answer.
Something like that.
I'm going to take the under on that.
I'm going to say pushdown automata plus one cent.
Because they're not Turing complete, right?
It's been a while.
No, they're not.
Yes.
Is that the most powerful before you get to Turing machines?
It's not Turing complete?
I'm sure it's not the most powerful in the hierarchy,
but I think it's of the popular ones,
of the different finite automata and regexes
and all that kind of stuff in that stack, yeah, they're near the top. Yeah, yeah. So maybe it's something in there. So yeah, people, please tell me, because I'm interested.
Yep. So I feel like at this point, you know, we just have to take a break, because I can see Ivan just, like, glazing over. Usually at this point in episodes, Ivan's all energized and about to go on his every-podcast rant about visual programming.
Don't you have to go let out your dog? Isn't that usually the thing that happens at around this time?
That is actually the thing that happens. It's actually really funny, because we're recording this on a different day than what we usually do, at a different time, and so Janice is home taking care of Lemon, and so there is no natural break point, and so we just have to make a break awkwardly.
I don't... you don't have to do that if you don't want to.
I do not like this.
No, it's perfect.
Yeah, it's so forced. It gives up the bit prematurely. People will know that we're doing a bit, as opposed to what normally happens.
Where people don't know that having Dijkstra being yelled out is a bit.
No, no, no, no. I'm not joking here. I'm actually being serious.
But I'm usually not in on the bits. That's why it's different.
No, well, I don't know about that. I don't know if that's true. You're rarely not in on the bits.
I don't know of any bit I've been in on.
Other than Intercal?
I wasn't in on that.
I had no idea what was happening.
I had no idea either.
Oh, but I mean, you knew that there was a bit happening.
The only bit I knew was that it was weird.
That's what I mean.
But I haven't been in on the bit.
Right, but I'm not in on the bit at the time we record either.
It's fine.
Mm-hmm.
No.
Yeah, yeah, no, I know.
That's what I'm saying.
That's why this is different.
It's because I'm in on the bit at the time that we're recording?
Because this is a pre-planned bit that we're both in on.
I guess.
And that is ruining the purity for you.
No, the problem is I don't want the audience to be in on the bit.
But they're not in on the bit.
They are if we have to force a transition.
If we start up the next segment and you're like,
like if I play the like lemon stinger,
if you're like, oh, hold on, I got to go let lemon out.
And I play the Lemon stinger, and then we start up again, then the audience is not in on the bit yet. I mean, if you really... I think that... I don't know. Well, what do you want? The bit... you designed this bit. This is your responsibility.
I did not design the bit. It was your idea.
No, it wasn't. That totally was... splitting the episode into chunks, that was you.
No, this was a back and forth. Like, you had said you want to do something, and you said, okay, we'll put it into chunks, because the paper kind of cleanly... and we can just, each time...
I think it was back and forth.
Speaking of back and forth.
Hello, Ivan. Hello, Jimmy.
Well, yeah. What's new since last time?
So I know this paper is not your cup of tea, but I was actually kind of surprised at how many people on our Patreon voted for types.
How many contrarians out there knew that by picking the type tier, they would be intentionally undermining my visual programming hegemony?
Yeah, I don't know if that's true. In fact, visual programming is losing.
No. Yeah, yeah. Is it actually? Now it's... If I go to patreon.com
slash future of coding
and I look
at the three and only three
membership tiers,
three and only three, type systems
with 13 members, visual
programming with 10 members, and end user programming
with 13 members, this is a
conspiracy.
This will not stand.
Where are my visual programming cronies?
Where are my flying monkeys? Where are my
little nodes? Where are my
little nodes?
Where are all my little nodes out there?
Come out with your wallets and show what
the prevailing
approach to programming ought to be over
the next several centuries.
Yeah, I was kind of surprised.
I figured end-user programming would be high up there,
but types, people are digging the types.
And I think it makes sense.
As much as I'm not a type theory maximalist,
I'm not somebody who really thinks
everything ought to be typed
and any untyped languages are bad
or whatever, dynamically typed, however you want to call them, I think types are having their moment right now.
Yeah, types are really on the ascent. There are so many people who care about types, and I don't think it's an accident. I don't think it's just purely a fad. I think types have gotten a lot better.
It has happened to me. I have switched. And this past... yesterday, even, as we record, I was writing code in TypeScript, and I got a type error, and I knew that the code was correctly typed, but I couldn't figure out how to hint to TypeScript: no, like, trust me, if these conditions are met, that type is definitely going to be narrowed to this thing, and you think it's not. I couldn't figure out how to do that, so I just refactored my whole system so it was a lot more confusing, and then it satisfied the type checker.
I actually hopped on a call with a friend who was insistent that TypeScript was wrong
about his type, and that we should go add a feature or fix the bug.
And so we went and hacked on the TypeScript type checker
and made his code type check
and then realized why TypeScript was implemented that way.
I still hold it was a better way of implementing it anyways.
But people like Wadler who have all this theory behind it,
I'm sure, would have a good reason why it shouldn't be.
But if you already have an unsound type system, why not?
There you go. Here's an idea for you, right? So you need to make a musical programming language.
Okay. And then I need to, like... it's in my blood.
Yeah, you need to. I know that this is a desire, a need that you have. It's on your Maslow's hierarchy, you know, the most important one. And so you need to have it have a type system that's an unsound type system, because it doesn't make any sounds.
So I'm reading this book that I actually think you might like, and I think we've got to find some stuff off of this. This is completely irrelevant, but I don't care. So it's called Games... Art as Agency... Agency as Art. I think it's Agency as Art. Games: Agency as Art.
It's a philosophy...
Games, colon, Agency as Art.
Yes, colon. Games, colon, Agency as Art. It's a philosophy book about games.
Colon, Agency as Art.
Yes, I butchered it. I'm sorry. So it's a philosophy book about games, trying to
defend them as a unique art form. But one of the games it mentioned that just sounds so interesting to me is called Sign. This is not a video game; this is an actual game. It was created in the 70s, I'm pretty sure, this avant-garde game. Have you ever heard of this?
Never.
Okay. So the idea is, every person... you usually play this together, physically. It's kind of, I guess, a board game? I don't know what you call games that aren't a board game, that aren't a tabletop RPG.
Yeah, just a game. Like, um, chess, right? What do you call chess?
Well, chess is a board game.
Oh yeah, I guess so, because it's boring.
That joke has never been made before.
Oh, never.
So anyways, every person is given a kind of inner secret.
I guess it's a role-playing game in some ways.
They're supposed to play this character,
and their character has this inner secret
that they want to share with people,
like something about a childhood trauma, or something they deeply believe. But when you play the game, you have to be completely silent. You cannot speak at all, and you have to now learn how to communicate your deep secret that you want to share with people. And that's the whole game. Nobody can talk, nobody can write,
and you have to communicate your secret with everybody.
And I just think that sounds like such a fun game.
Yeah, yeah.
And the author has a house rule
that I actually think is really interesting.
I think I would only want to do it
after I've already played the game once.
But their house rule is, after the game is over,
you can't say what your inner secret was.
So you leave the game having communicated it,
but never having the confirmation in English
that that's really what happened.
I just love that idea.
I don't know how... it's just, the unsound and all of that, you know, popped into my head. But I think that sounds like such an interesting game. Yeah, so I definitely want to get a group of people. You have to be in person; there's no way... it would be so impossible to do this over Zoom, right? But yeah, I really want to play that now. It sounds really fun.
Yeah. Also, a game about World War One, but you are, like, investors. This is actually a board game, but you're investors in the countries, and you get to take over the countries by investing in them rather than attacking. So it's like Risk, but you don't care about the countries. It's called Imperial, I think. That also sounds really fun.
Segue off that, Jimmy. So, speaking of things that happened in and around a world war...
No, I mean, as part of, you know, the stuff that had happened in World War One, and then we get to World War Two, a lot of this computing stuff became very relevant. So much of this was funded by the military. And so many of these people ended up at Princeton. In fact, Church and Turing were both at Princeton, which did so much in this computational realm.
See, perfect.
Clear, great segue.
Yeah.
So, okay.
So now we've gone through some of this.
We have Church.
We have Turing.
There's two different ways of doing effectively calculable. And in some ways, this is very relevant. In some ways, it's not.
The reason we really got here was because we care about church. We care about lambda calculus. And
so this was kind of a history lesson of how lambda calculus came to be. And it was supposed to be this logic, and it ran into the same problem
that we see with Gödel, with the incompleteness theorem,
with halting problems.
But then a bunch of other stuff's happening at the same time.
In 1935, there's Gerhard Gentzen. And Gentzen is a logician. This has nothing to do with the mathematics, nothing to do with this decision problem. He just comes up with this really great way of writing down logical formulas.
And his big thing here was having these kind of two parts: these introduction rules and, the other thing that I did not remember, elimination rules. That's the other one.
I thought there were three things. Introduction, elimination, and one other thing. Because you need the three parts, right?
Universal quantification, implication, conjunction, disjunction. Ah: proof rules should come in pairs, a feature not present in earlier systems such as Hilbert's.
Threeness of pairs.
In natural deduction, these are introduction and elimination pairs.
And so we get this really nice system
that kind of formalized in a much better way
and made much clearer what was going on with our logical systems.
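For a flavor of what those pairs look like, here are the natural-deduction rules for conjunction in standard notation (my rendering, not quoted from Wadler's paper):

```latex
\frac{A \qquad B}{A \wedge B}\ (\wedge\text{-I})
\qquad
\frac{A \wedge B}{A}\ (\wedge\text{-E}_1)
\qquad
\frac{A \wedge B}{B}\ (\wedge\text{-E}_2)
```

The introduction rule says how to build a proof of A ∧ B; the two elimination rules say how to use one. Under propositions-as-types, these become exactly pairing and the two projections.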
And this is important because this ends up bridging
to our propositions as types idea.
This system of Gentzen's is what allows Curry to notice in 1934
that there's this deep connection between implication and functions.
And this is what's so wild to me.
So we get: 1934, Curry realizes this. And then in 1969, Howard circulates a Xeroxed manuscript that isn't published until 1980. And just so that people know, Curry and Howard, like, we're throwing a lot of names around, these are our two new characters, Curry and Howard. We've got Curry, Howard, we've got Gentzen, we've got Church, we've got Turing. We have not been talking about Curry and Howard yet. They are new. They have just joined the story.
Yes. And so what we ultimately get is that Howard publishes, well, doesn't publish this paper until there's a kind of a paper in 1980, you know, the German word that I'm not going to try to pronounce, where you publish a bunch of papers in someone's honor. He published this paper in 1980, and he shows that there's this deep correspondence between
conjunction, meaning A and B, and the Cartesian product. So you can think of this as just pairs: A and B is like the tuple (A, B).
So in logic,
you have conjunction,
and in set theory, you have the
Cartesian product.
Or in programming, you have tuples.
And then there's disjunction, A or B,
and then this is the same as
a disjoint sum,
which you can also think of as like algebraic data types, enums
where you can say, oh, I have event A, event B
or I have an either or I have a maybe
or any of these things, right?
These are the same kind of structures.
And then you have implication, which is also functions.
So A implies B is the same as a function from A
that returns B.
Given an A, get a B.
And he shows that these are in a correspondence.
Yes, that the things that you have on the logical side in each of these,
conjunction, disjunction, implication,
correspond cleanly to Cartesian product, disjoint sum, and a function on the programming side.
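The three correspondences just described can be sketched in ordinary code. Here is a hedged illustration in Rust (the names `pair_intro`, `Either`, `describe`, and so on are ours, not from the paper): conjunction becomes a tuple, disjunction becomes an enum with two variants, and implication becomes a function.

```rust
// Conjunction A ∧ B ↔ the product type (A, B):
// a proof of "A and B" is a pair holding a proof of each.
fn pair_intro(a: u32, b: bool) -> (u32, bool) {
    (a, b) // ∧-introduction: build the pair
}

fn pair_elim_left(p: (u32, bool)) -> u32 {
    p.0 // ∧-elimination: project out one component
}

// Disjunction A ∨ B ↔ the sum type Either<A, B>:
// a proof records *which* side holds, and carries its evidence.
enum Either<A, B> {
    Left(A),
    Right(B),
}

// Implication A ⊃ B ↔ a function from A to B:
// given evidence of A, produce evidence of B.
fn describe(e: Either<u32, bool>) -> String {
    match e {
        Either::Left(n) => format!("a number: {}", n),
        Either::Right(b) => format!("a boolean: {}", b),
    }
}

fn main() {
    let p = pair_intro(7, true);
    assert_eq!(pair_elim_left(p), 7);
    assert_eq!(describe(Either::Left(7)), "a number: 7");
    assert_eq!(describe(Either::Right(false)), "a boolean: false");
    println!("ok");
}
```

The point of the sketch is only the shape of the dictionary: building a pair is conjunction introduction, projecting out of it is elimination, and pattern matching on the enum is how you consume a disjunction.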
Yeah, and he continues on and expands this even more, to things that are maybe a little less common, like existential quantifiers and universal quantification, and this ends up becoming dependent types.
So when I was first learning about this stuff (hey, Jimmy's been talking about philosophy for a while, here comes Ivan to ground it in reality again), my hook into this... and I don't have a formal, like, "oh yeah, this exactly maps to that and this exactly maps to that" thing prepared. I'm not a formalist, so I'm not going to have that. But the thing that I was reminded of, the thing that echoes here, if it's not a direct correspondence, is the boolean operations that you take on geometric primitives. So, when you're working in, you know, a drawing app or a 3D modeling app, you have these boolean operations, like union, subtraction, intersection. Those operations, like, to the mathematicians in the room, what I'm about to say is very obvious, but to the artists, this might be a revelation. Those operations are like the geometric (like, in physical space, you can do this with scissors and paper) version of set union, set intersection, or having disjoint sets where, you know, these two sets do not overlap. There's a clean mapping there between the geometric version of that and this set version, or this logic version, of those things. And that was a really helpful hook for me, because it gave me something I could picture in my mind. When we're talking about conjunction or the Cartesian product or something, where it's like, oh yeah, we've made a thing that has both of these. That was very intangible for me and very hard to appreciate until I was like, oh, that's the same as the union operation you can take on two shapes. That's how I climbed on board this train when I was first learning about it. So I don't know if that's going to be helpful for anybody out there, but if you're like me, coming at this from an arts background and not a philosophy background, and if you're out there: hey, I see you.
Yeah, I mean, I think that this stuff can seem very, and it is, it's like, very formal. It doesn't seem very applicable.
But what this ends up getting us is that you can write programs that end up being proofs.
And you can, if you take this view that your program is a proof, you end up getting really
interesting systems.
So if you look at, like, Idris, I think it's probably the most approachable of these kind of dependently typed languages out there. It's not the most approachable language in the world.
Technically, TypeScript has half of dependent
types at this point, but they're not
really displayed in such a way that you would think
of them as dependent types.
It's an underspecified, buggy
implementation of half of Idris.
Yeah, probably true.
So if you look at how Idris does things,
you can start writing these proofs in the type system.
You can start saying, for example,
that when you implement factorial,
you can know that your implementation
at compile time has to be correct.
And there's some more fancy ways of doing it,
but there's also some very simple ways where you can have a type,
which is factorial 5 equals 120.
That is your type.
So I say x has type fact of 5 equals 120. And at compile time, if that is not true, you will get an error, which is fascinating.
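Rust isn't dependently typed, so it can't express this the way Idris does, but its compile-time constant evaluation gives a rough analogue of the `fact 5 = 120` check just described. This is a sketch under that analogy, not Idris syntax:

```rust
// A factorial that can be evaluated at compile time.
const fn fact(n: u64) -> u64 {
    let mut acc = 1;
    let mut i = 2;
    while i <= n {
        acc *= i;
        i += 1;
    }
    acc
}

// Evaluated during compilation: if the claim were false,
// the build fails with a compile-time error, not a runtime one.
const _CHECK: () = assert!(fact(5) == 120);

fn main() {
    println!("{}", fact(5)); // prints 120
}
```

In Idris the check lives in the type itself (a type like `fact 5 = 120` whose only proof is reflexivity); here it's a constant the compiler is forced to evaluate, which captures the "wrong implementation fails to compile" flavor.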
If you start looking at all of the different things you can do with Idris, all these different proofs you can write, you can start to see how you can have a metalanguage to talk about your programs. So many times we think about types as just arbitrary rules that someone shoved on top of my program to restrict me, at least if you're coming from a dynamic typing background.
It's like, but why can't I do this?
Whereas once you kind of go into these even more powerful types, you start seeing, oh, there are these ways in which I could have some guarantees about my program, and I can write programs about my program just in this sublanguage of types. And they're really tricky, and they're really not the most fun thing to do
in all honesty, other than as puzzles. And I think this is where Idris makes it really fun
is they have this kind of interactive way of doing it that actually does make them feel like puzzles. But here, the important
bit is that like our programs we're writing today, even if we're not using these proof assistants,
they correspond to some sort of logical statement. Now that statement might be inconsistent because
our program has an infinite loop that we didn't realize was in there. It might be that, oh, well,
in order to really analyze it,
we would have to have some really complicated higher order
temporal logic or something like this.
But we actually start seeing these things being used
in real stuff, like Rust type system
is applying logic to programs
in a way that hadn't been done before
with an affine type system. This is how the borrow checker works. And, I don't know, to me, this stuff, as I keep reading this paper, yes, like, I'm looking at kind of what we have left in here, and there is a lot of kind of in-the-weeds stuff. So, like, intuitionistic logic, which Ivan, I'm sure, has no opinions on. I don't even know. I like that the first paragraph starts off in Gilbert and Sullivan's The Gondoliers, and it's like, oh, I like Gilbert and Sullivan, and then the whole rest of this section, I'm like, I have no idea what this means or why I care about it, or what it has to do with anything. So I'll tell you, the answer is: intuitionists were just wrong, and you shouldn't listen to anything they have to say. Great, cool, that section's done. They don't believe in the law of excluded middle, which you actually might like. The law of excluded middle, or not believing in the law of excluded middle? Okay, I feel like not believing is more your jam. You don't want to hold on to anything. So the law of excluded middle is: something is either true or false, basically, right? That's kind of the idea. There is no third option. So if I say, like, Ivan is wearing a green shirt, that can either be true or not true, and nothing else.
Yep.
It's got to be one of those two.
Yes.
So either A or not A.
Yep.
But if you made a statement that's like, well, Ivan's a big jerk.
Well, actually, no, that can be true or it can be not true.
It depends.
Now, of course, you would quibble and say, well, yes, but your statement wasn't very good because, you know, it left a whole bunch of ambiguity, and yada yada yada, and I would say this is where this turns from being fun into being just, like, books-level boring. But that's not what they're talking about. It's not about ambiguity. They want to say that that's just not true, that a statement is not either true or not true. It's only true if you've proven that it's true, and it's only false if you've proven that it's false. So you can't actually do that thing until after you've proved one side or the other. Yes. Yeah, so you can't use A or not A as part of your proof prior to proving A or proving not A. So the programming example, I think, actually makes this a little clearer.
Imagine we have an enum or an algebraic data type, whatever you want, that has one option A and one option B.
Actually, we'll just call them A and not A.
That's what they're called, A and not A.
I want to have one of those two, right? I need to have either A or not A. That's my disjunction here. If I say, hey, I have a variable of type (I should have given that enum a name, my enum), I have a variable of type my enum, what's the value? Well, the value either has to be A, or it has to be not A.
Otherwise, I just don't have a value. There is, I can't run this program. I don't have any value
for this variable. I have to either have one option or the other. And that's what they want
to say the world's like. You have to have it. You have to hold it in your hand. You have to
have a proof of it. And if you don't have a proof, you can't know it. So they reject all sorts of
things that normal
mathematicians accept, and they're just wrong. It might be useful, it might be fun, it might make you feel like a rebel to reject a long-honored principle, but nah, it's just wrong. Oh, you'd like this, Ivan, I'm gonna dunk all over it. Well, you would, because you like getting dunked on. You want to be a contrarian.
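The enum framing from a moment ago can be made concrete in Rust (the names `Claim` and `use_claim` are ours, chosen for the example): to do anything with "A or not A," you must already hold a constructed value, a witness for one particular side.

```rust
// A value of this type is a witness: it records which side holds.
enum Claim {
    A,
    NotA,
}

// You can only branch once you actually *have* a Claim in hand.
// In the intuitionist reading, there is no way to conjure one
// from nothing: no proof, no value.
fn use_claim(c: Claim) -> &'static str {
    match c {
        Claim::A => "proved A",
        Claim::NotA => "proved not A",
    }
}

fn main() {
    // A witness must be constructed before it can be used.
    let evidence = Claim::A;
    assert_eq!(use_claim(evidence), "proved A");
    assert_eq!(use_claim(Claim::NotA), "proved not A");
}
```

Classical logic lets you assume a `Claim` exists for any proposition; the intuitionist position, in this analogy, is that the variable is uninitialized until you have actually built one branch or the other.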
Yeah, so for example, they don't buy that there are certain kinds of irrational numbers. Hmm. So pi does not have an infinite number of digits. In fact, nothing has an infinite number of digits, because: show me that number. If you can't show me that number, if you can't hold it in your hand, for lack of a better term, right, if you can't compute it (you can only compute approximations of it), it does not have an infinite number of digits. Yeah, this feels tenuous, because the decimal expansion of a number is different from the actual number. Like, pi, the actual number, is not the same as the decimal expansion of pi. Those are different things. And pi is very tangible. Like, if you can draw a circle, you can get some pi. I baked a pie just yesterday, in fact. No, I didn't, actually. Oh, you'd like this, it's an anecdote that goes nowhere and actually was not true. Ivan, check this out. Check this out.
I'm going to bring in some nonsense.
You know, that thing you do.
How the paper structure goes, I think, actually lends itself more towards the talk or the paper than our discussion on it. So I want to kind of synopsize some of this and get to what I think is the most interesting bit about this paper, and it's what made it stick with me. Okay, so there is this great little diagram here of Gentzen's system of how you do natural deduction, where it shows a bunch of logical proofs. And the reason we get this little intuitionist thing is because that's the system he's going to be showing, and he's trying to be thorough, and this is intuitionistic logic. And so you get this direct natural deduction.
And so it's a bunch of this blue text here and it's things like A, B, and then a line
and then A and B.
Or A and B and then a line and then A.
And it's these little formulas that are going on here.
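The little blue figures being described are Gentzen-style inference rules: premises above a horizontal line, conclusion below. For conjunction, the introduction/elimination pair looks like this (a standard rendering of what's described, not copied from the paper):

```latex
\frac{A \qquad B}{A \wedge B}\;(\wedge\text{-I})
\qquad
\frac{A \wedge B}{A}\;(\wedge\text{-E}_1)
\qquad
\frac{A \wedge B}{B}\;(\wedge\text{-E}_2)
```

Read top-down: if you have proofs of the premises above the line, the rule gives you a proof of the conclusion below it.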
Ivan's laughing at me.
We're having such podcast.
And then we scroll down and we get the lambda calculus.
And now there are two colors.
There are the red and the blue.
And it turns out that lambda calculus is just this natural deduction,
but with some extra little things thrown on it.
There's some little type of descriptions to it.
This is one
where I think the talk
unambiguously does a better job,
or not unambiguously, inarguably,
indisputably does
a better job than the paper. That the talk's
presentation of this, where it does like
first let's look at the logic,
then let's look at the lambda calculus,
and now let's look at the logic and the lambda calculus at the same time, is a much clearer, more coherent, digestible, approachable, also provably better way than what this paper does. We get all of this, and I think it's really nice when you read the paper. And it is better in the talk. He presents it in a way that's a little bit more directly intuitive.
But you can actually now kind of walk through this.
For me, it was like, oh, I both understood the lambda calculus a little bit.
But I also know logic.
And I had never understood these diagrams.
These are the diagrams that you see in all these fancy type theory papers
that are so confusing to me. Like, I struggle so much to interpret them as code. But here, since it's such a simple system and not these big complicated ones, it is a lot easier to be like, oh, okay, I kind of see it now. I'll actually say that the thing that helped me the most in understanding proofs is, I think, incredible.pm.
Is that the game?
Yeah, so this is a
visual programming language.
Well, I wouldn't say that.
I would, because it's a visual
proof system, and proofs are programs.
That actually gets to my substantial
critique later on.
Thank you for reminding me that I need to do that.
Yeah, the incredible proof machine.
So if you want to go do some of these proofs,
I had a bunch of them solved, but it was on an old computer.
I need to go back through these.
You kind of get this node wire style interface
that you can do proofs in.
And you get immediate feedback if your proof is wrong
or has some
type error or whatever.
Some issue. I'm not going to pretend
it's the best interface ever.
But it's definitely way better to me
to solve these proofs here than try to do
it on pen and paper and not
find out any, did I do it right?
And then you have to go check.
It's fun. I enjoyed it.
I actually have to run upstairs for a sec. I will be right back. There, I forced a break. I knew you were just gonna pretend. Well, we're at an hour and 40 minutes. If we're going to do this, we need to... Yeah, we're wrapping up. It's great.
I think this is perfect timing.
We had like 10, 15 minutes of non-podcast banter.
Nonsense.
So I think we're shooting good time.
Yeah, because that felt like a good spot
where we had talked about the middle section
and now we can talk about the conclusion
and that sort of stuff.
And then our thoughts, which is really what I want
all of this to be. It's just us thinking about the, like the conclusion and that sort of stuff. And then our thoughts, which is really what I want, like all of this.
Exactly.
Right.
It's just like us thinking about the,
the idea here.
Um,
um,
yeah.
How do we set this up?
How do we set this up?
We didn't do a clean enough job of establishing the bit to actually set up why. So wait, at the top of the episode, I awkwardly said hello, and then when we came back after you didn't let Lemon out, I said hello, you awkwardly said hello. Now we need to begin another section, but we don't have anybody to awkwardly say hello. Aha. So I think we just have a moment of silence, which translates very well for podcast listeners who have the skip-silence feature. Yes, a moment of silence in lieu of having somebody here to say hello for the third time. This segment intentionally left silent.
It has an unsound type.
Have you refreshed the Patreon recently? Uh, no. Okay. Did you go sign up a bunch?
No, no, no, no.
Just trying to skew the numbers a little bit, you know, put my hand on the scales.
It still shows the same amount for me. Yeah, no, you have to look at the tiers. Anyways. Oh, unsound type systems, like tears in the rain. Unsound type systems are the future of programming. One more, just before we get back into it, one more thing to segue off of. Jimmy, would you mind, in our Slack, reading the following meme? Why am I reading bad logic math memes? Stop doing logic. Arguments were not supposed to be formalized. So many rules, yet no real-world use found for going beyond modus ponens. Wanted to prove things anyway, for a laugh? We had a tool for that.
It was called induction.
Hello there.
How are possibly a n plus one
divided by not a of n plus one?
Over, over.
It's like the little horizontal line,
whatever that is called.
Yeah, that's just division.
Yeah.
No, what?
In logic, that's division?
Well, they didn't write it.
Like, that's just a division sign.
But over is division.
But that...
No, I thought it was like A space B, and then a horizontal line, A and B.
Yeah, okay, yes.
But that's not what this is.
Yeah, but this is just collapsed onto one line. It's the version of that where they're one over the other, but they just flattened it out. Okay, so that would be given, given. Okay, yeah, but, like, it doesn't... anyway. Isn't there the, for all phi, necessarily box of, what's that one?
I always forget.
I don't know.
But the box and diamonds here are relevant.
Anyways, statement dreamed up by the utterly deranged.
Little logicians have demanded your respect for all this time,
for all the arguments and language they've built
This is real logic done by real logicians
And then it's a bunch of logic trees or something
And then it's you have to be logical
They have played us all for absolute fools
But the A's and the E's are replaced with existential and universal quantifiers
Right
Which I think is just rad
A joke that doesn't work, and no one... I'm not even gonna describe what you're doing. I'm not gonna give you the validation of describing it. So, uh, possibility and necessity are awesome, though, I'll just say this. So we get to the last section,
which kind of had hints throughout here (we kind of skipped them), which is: what should we think about the status of all of this? I think that's the way I'm going to put it. And we get this wonderful plaque from the Pioneer spacecraft, which was one of the things that, definitely, when you open up this paper and you just scroll through it, as you do with every paper, to figure out how complicated is this paper gonna be and do I really need to read it, because, you know, you just look for that wall of, like, types and stuff like that. And in all those papers you're like, how? And then you see this at the bottom, and it's, you know, two naked people and some weird lines and some circles and a bow and arrow.
And it's very confusing.
But why is this here?
The question ultimately is,
what would happen if we tried to communicate with aliens
by transmitting a computer program?
Would they understand what we wrote?
What do you think, Ivan?
Would aliens understand our program?
Depends how alien they are.
And that's what Wadler contends is like,
depending on how
wild you want your conception
of alien to be, it's less
and less likely that they would recognize
for instance, the line art illustration
of some human bodies or
the diagram showing the sun and
all the planets in our solar system. And maybe they would be able to figure out the way that we've indicated, you know, a bunch of pulsars around the black hole at the center of the Milky Way or whatever that is. But those aliens would still probably have logic. They'd probably still have modus ponens,
and they'd probably still have some things
that we've discovered they would have discovered too
because they are inherent to the way that stuff works.
To put it very elegantly,
there are some things that are inherent
to the way stuff works.
Let's let Wadler
answer, right? So he says that
if it was like C, probably not, but if
it was like lambda calculus,
yes.
If we tried to transmit computer
programs, then they
might not understand C because C is
all wonky and
C-like, but lambda calculus is something
more fundamental.
And so he says this.
We might be tempted to conclude that lambda calculus is universal.
But first, let's ponder the suitability
of the word universal.
These days, the multiple worlds interpretation
of quantum physics is widely accepted,
which I don't think is true, but okay.
Scientists imagine that in different universes,
one might encounter
different fundamental constants,
such as the strength of gravity
or the Planck constant,
but easy as it may be to imagine a universe
where gravity differs, it is difficult
to conceive of a universe where
fundamental rules of logic fail to
apply. Natural deduction,
and hence lambda calculus,
should not only be known by aliens throughout our
universe, but also throughout others. So we may conclude it would be a mistake to characterize lambda calculus as a universal language, because calling it universal would be too limiting. Boom. Mic drop. Yeah. Which, like, okay, cool. I mean, fun. I like that. It's cute, it's nice writing, I appreciate that. And then, you know, this is the same kind of charm that Wadler exudes on stage in the Strange Loop talk, just charm in a PDF paper. And I won't spoil how he ends the Strange Loop talk, but it's also fun. And the Q&A, the Q&A at the end of the Strange Loop talk slaps. It bumps. It is so good. It is not part of this paper. This paper has some appendices that are... yeah, we did not read them. Yeah, Wadler
emails Howard, and they have some
talk back and forth. But the Q&A
at the end of the Strange Loop talk is
super duper good. And so I would say, if you skip
the talk, watch the Q&A.
So this isn't just charm, though.
This is, I think, the
big, substantial
point of the paper. He says that
earlier, that these things
aren't invented, they're discovered, and you can tell that they're universal. That when we are programming, we are not just doing something that's a human invention, some little fun, weird activity that we've made up to preoccupy nerds so they don't, you know, ruin other things. It is something incredibly fundamental, something at the basis of reality. That is what we're playing with, that is what we're working with. What we do when we're programming is something so fundamental that not even in other universes could it not be the case. For me, that is why this paper stuck with me.
And I know for you, Ivan, you are unimpressed.
Don't tell me how I'm going to...
You're unimpressed by that idea.
I am not.
But it is amazing.
We get to work with this fundamental material, and the constraints that are placed on us are just those of reality. I love this. And I love that this paper has a bunch of citations about all the different ways in which these logics apply to our systems. So there's S5 modal logic, which applies to spatially distributed computation,
and S4, which is staged computation,
which has to do with compilers.
There's all sorts of different ways
in which all of these things,
all of these fundamental properties,
bring themselves into our programs.
And all we have to do
to find new and novel programming things is to look at this
necessary stuff, this fundamental logic of how reality even is, and we can discover anything we could imagine in programming. I find that vision of programming fascinating. And Wadler kind of
emphasizes this in the talk more than in the paper, which I did like. He thinks that ultimately,
we should focus on working in programming languages that have a lot more of this discovered stuff,
rather than this invented stuff. Now, I don't know if I agree with him, but I think this is such an interesting take on what programming is supposed to be. It's not an argument here. We don't have a grand vision of what the future of programming, the future of coding, the future of code will look like. We instead have a very quiet, factual, historical statement that's trying to get you to aesthetically appreciate something the way Wadler does. And I love that. I don't know that there is a similar paper out there for any of these other ways of looking at programming that just kind of slowly brings you, yes, academically, yes, a little dryly, but with some humor sprinkled in,
but to see things the way the author does, to see the beauty that Wadler clearly really
loves and cares about. And maybe the reason for that is that this kind of joy that Wadler is
feeling about this kind of discovery that was made by Howard and Curry
before him lends itself very well to being expressed in a paper, whereas other similar
kinds of joys that apply to other slices of our field or of other fields lend themselves well
to other kinds of expression. And so I think this paper is a beautiful marriage of that,
you know, core idea, and the medium for expressing that idea. And I think, you know,
the talk also does it well, but you've quite compellingly argued that the paper presents this
idea in an even more intriguing way than the talk, and it can kind of pull on more things. So I teased a couple of times that I have a substantial critique, and I mean substantial not as in, like, it's a big critique. I just mean it's not a joke, like there's actually some substance to it. I mean this sincerely, not like jokey-jokey. Which is that this correspondence, this, like, hey, here's this divine realization that we've made,
that these two things have this very intimate connection between them. There's a certain way of looking at this and seeing the beauty of it and basking in that and appreciating it and reflecting on it, and then incorporating that into your work and going, you know what, I should look for other correspondences like this. And there's another way of looking at it, which is: hey, that correspondence there suggests a certain
legitimacy that other ways of approaching one's work do not have. And Wadler steps into this a
little bit with the like, oh, some things are discovered and some things are invented. And
there's a little bit of sort of sneering derision directed towards the things that are invented,
as opposed to the things that are discovered. And that is like the dark path that this paper and
this realization about this divine beauty can put you onto is to say, well, if there are these grand
correspondences out there, the proper thing to do is to devote oneself to the discovery of those
correspondences and to use those as the basis for our programming. And the programming that builds upon these ideas
where there's a clear connection between logic and computer science, those programming systems
are the good ones. And the ones that don't do that are the bad ones. And that, I think, is
true to some extent. I don't just automatically reject that. I see the truth of it. I see the value of it.
But I think that it is a kind of closed-mindedness that pervades the academic strain,
the sort of category-theoretic, like Haskell supremacist type of programmer,
of whom there are a lot on Twitter. And I would instead use this as an opportunity to say,
these kind of correspondences also exist
between other aspects of computer science
and things that aren't computer science.
And they don't have the rigorous definition that you get from something like a correspondence between formal systems of logic and computer science, but they don't need that, because they're not trying to draw a correspondence in a rigorous, formal, provable domain. They're trying to do something
else. And that programming, as we all interact with it, is more than just the construction of programs that are meant to be an echo of a proof, made out of, you know, propositions and reduced down to some simplified version.
They are things that we build to make games or to run a business or to tell stories or to do
any of the things that we do with programming. The parts of programming that can benefit
from the discovery of these correspondences,
like Rust's borrow checker,
or like, what's the other one?
There's one that's like a CSP, I think.
There's like a logic that corresponds with CSP.
There's the logic that corresponds
with Idris's type system.
There's like all these correspondences
between things in logic
and things in interesting programming languages, assistants, that sort of thing. But there's also the side of programming that faces the person. We've talked about this a bunch of times, right? There's the side of programming that faces the computer and the side of programming that faces the person. There's also the side of programming that faces the universe, and that's what this paper is talking about: here's the side of programming that faces the universe, and how some parts of programming correspond to grand, cosmic, inter-universal things in a very profound way. But there are other sides of programming, and I contend that we ought not to neglect them just because this one has such a satisfying framing.
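Jimmy mentioned earlier that Rust's borrow checker rests on an affine type system, where a value may be used at most once. A minimal sketch of what that discovered-logic constraint feels like in practice:

```rust
// String is not Copy, so passing it by value *moves* it:
// the argument is consumed, and the caller can't use it again.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    let n = consume(s); // `s` is used here, exactly once
    // println!("{}", s); // uncommenting this is a compile error: use after move
    assert_eq!(n, 5);
    println!("ok");
}
```

The "at most once" discipline is exactly the affine restriction: the type system refuses any program whose corresponding logical statement would use the same resource twice.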
Yeah, I do think that you're right.
I think that this has led a lot of people to think types are the only way.
That there is no other answer besides getting more and more into these more and more advanced types.
And if only everything were dependently typed,
if only we were rigorous about all of the things that we do in programming, could it be good?
I think I totally agree with you that that's a mistake we can make. And I think there's two
ways of getting out of it. And I'm not saying that one's better than the other. I think you
can take both approaches. But one way is what you're saying, which is like, there's just these other aspects, which is just true. There are other aspects,
right? I mean, I think Elm did a really good job of kind of saying like, yeah, we really like types,
we really care about types. And in fact, you know, we have this perfectly sound language. And, you know, we have a very good type system. And yet that's not all there is to programming. We're really going to care about this human aspect of things. We're really going to make good error messages. We're really going to leverage those types in a way that helps people. And so we're going to ignore some of the more advanced stuff, like monads or whatever, and not do that, because we just don't find that good for people. But I think the other way of looking at it is to realize all of these things that maybe some type
advocates aren't as big of a fan of, actually they also have this correspondence. It's just a much
more complicated one. And there's nothing wrong with a complicated correspondence. There's nothing wrong with inconsistent systems. What's wrong with that? It's still something we can discover. It's still something we can
mathematically explain. It's still something we can do all of that. And if some people like those
and you like doing all your fancy type stuff, well, you can interpret it after the fact.
There is a way to interpret these things without imposing it on others.
Just because someone doesn't statically type check their program
doesn't mean there couldn't be a type assigned to that program.
And just because the type might not be expressible in Haskell doesn't mean it's not valid.
Or that validity is not the goal
for all people in all cases, right? Like the construction of valid programs is like there, there's a certain segment of people out there who really lose their minds whenever you propose like writing something in a way that's really sloppy right like if if i write something in
typescript or uh you know a whole bunch of people are going to feel more comfortable with that than
if i say oh i'm going to write that in coffee script because they'll go how do you know that
the code that you're writing in coffee script is actually gonna run and you'll waste all this time
No, having spent the past year in TypeScript, I now 100% know that I spend more time fighting the type checker than I ever did fighting null references and typos and that kind of stuff in my very dynamic code.
It's like a clear orders-of-magnitude difference.
That's like fine because the reason I'm writing in TypeScript is I'm like I have a different goal now.
And when I was writing in CoffeeScript, I had a different goal.
And like no solution matters if you are not looking at it in light of like the context of the problem that you're trying to solve.
So this sort of like, oh, you know, discovery is the be all end all.
It's the most important thing. We should be using programming tools that are full of things that are discovered through, you know, the grand cosmic thread of connections
that we're pulling on. That is an unhelpful statement to make because it invalidates all
these other cases where taking a different approach to coming up with a programming system
would bear more fruit, or would bear fruit for the specific problem.
I see what you're saying there. And I think the irony here is, you know, the most ardent typed advocates
are not big fans of TypeScript, because it's an unsound type system. Yeah, sure. Which is fascinating,
because it's now becoming the most popular language. And it's one of the reasons that types are coming into style again.
So many people using TypeScript.
And yet, it did something that most people thought was maybe irresponsible.
"Irresponsible" might be a word I've seen thrown around for lack of types.
I would imagine an unsound type system also might feel that way for some people.
And yet, that was TypeScript's thing.
It's like, well, we have these purposes, just like you're saying,
where their goal was to let JavaScript developers convert over,
and they thought they could get benefits even without having the soundness of the type system
and allowing more code to be written in that way, which is interesting.
I guess I'm of two minds on all of this because
I really enjoy writing in typed languages, but I also really enjoy writing in untyped languages.
And I think it's sad when the discourse becomes one versus the other.
Yeah, I think that you're right about these different purposes. But I also feel like
there's just fun to be had in both worlds. And I think what would also be a shame,
well, it would be a shame if we're just like, oh, it only counts if there's this very direct, obvious,
this-is-the-discovered-thing correspondence, instead of the complicated, oh yeah, well, this must have some
interpretation, but we don't know what it is.
The other one would be ignoring all of this because you just want to have fun with programming.
And not realizing, oh, actually, if I paid attention to some of this logic stuff and some of these discovered things, I could make my system more enjoyable for me.
I could improve some bit of it that, oh if I just tweak this design thing here, I would get these properties that I want.
So I think about being able to make concurrency better, distributed programs better.
There's ways in which, oh, if I actually pay attention to the logical properties these systems have, I might be able to let users express all of this in a better way for them. But the only way
I'm going to know that I really can do that, and it doesn't have a bunch of edge cases and weird,
you know, cruft that no one ever wants to deal with is if I like dive in and figure out these
kind of technical constraints of the logic and how things are working. I think it's sad, right?
That's why I don't like static types versus dynamic types being the whole discourse.
Because I think there's stuff we can learn from both.
Or like, I hate seeing somebody make a toy programming language just for fun.
And then people coming and dunking on it because there's some like,
you know, you didn't express it in terms of these universal foundational concepts,
so you have this disharmony at the core of the thing you've made. And it's like, that's
not what I was trying to do, you know?
Yeah, the people who believe in finding universal harmony among concepts
tend to apply a universal lens to the projects that they look at.
How's that for a big statement?
A universal statement?
Yes, exactly.
One of the things I'm reminded of here
is the attitude you take towards
not only your own work, but the work of others.
And I think it's very important to make a distinction between those two.
So when I was working on a JIT compiler for Ruby,
I talked to a bunch of people
who are one, Rubyists, but then also a bunch of people who are compiler engineers. And these are
actually an odd combination of the two sometimes, right? Because what are Ruby's values? It's about
very high level code. It's about letting people express what they want.
And it's not about efficiency. That's kind of like, yes, of course, they want to get there.
But that was not one of the starting goals. It's just like writing the code the way you want to
express it. And then you get compiler engineers who are all about efficiency, right? These are
people who want to dive into all those low
level details, who want to get everything as efficient as possible. And one example that was
brought up was an optimization for Ruby code that exists out there. You can write
this code, and it's actually very efficient in Ruby, and probably not in almost any other language,
I would bet. So
people were trying to bound a number to a range.
So you could have like a min, like one,
and a max, like 10,
and you pass in your number,
and you should either get the minimum as one,
the maximum is 10,
or your number if it's in the middle.
Now all of you right now who are programmers
can think of how you might write that code.
Maybe there's a built-in function for it, but if you have to write it yourself, you can do an if statement.
A sigmoid curve with gradient descent. Uh-huh, yeah, you train up
a neural network to make sure it gets that right, et cetera. Well, the way that Rubyists had
decided to write it, and this was like, all sorts of libraries were doing this, is you have an array.
You put one in there,
you put 10 in there, and you put your number in there, and you sort the array,
and then you pick the middle number. And that gives you the answer. And anyone who's like a
low level, thinking about performance, thinking about allocations right now is like cringing.
Right? Like there, I can think of so many people I know who'd be like,
wait, why are you allocating an array
just to do a comparison between these numbers?
But instead of having that attitude towards these Ruby developers,
the Ruby implementations went and made it efficient.
So you don't allocate an array when you write that code.
There is no allocation that happens.
The code does exactly what the lowest level version
of that code would do.
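For concreteness, the two versions being contrasted might look something like this. A sketch in TypeScript rather than Ruby, and the function names here are made up for illustration:

```typescript
// The array trick described above: put min, the value, and max into an
// array, sort it, and the middle element is the clamped result.
function clampViaSort(x: number, min: number, max: number): number {
  // Numeric comparator needed: the default sort compares as strings.
  return [min, x, max].sort((a, b) => a - b)[1];
}

// The allocation-free version a performance-minded compiler engineer
// would want that code to reduce to: two comparisons, no array.
function clampDirect(x: number, min: number, max: number): number {
  return x < min ? min : x > max ? max : x;
}
```

Both give the same answer; the optimization being described is the implementation recognizing the first form and executing it like the second, with no array allocated.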
And I think we can have the same attitude
if we really care about types,
if we really think they're important.
We don't have to say, oh, and anyone who doesn't
is dumb and has a bad perspective or whatever.
We can say, how can I take what I know about types and make the rest of what people are doing without types better?
I think this discovered attitude of looking at this logical, fundamental thing,
we can say, I know a bunch of these logical things.
How can I help people apply them to their programming, even if that might not be what they pay attention to? Just like, I know a
bunch of low-level details; how can I help other people make their programs more efficient?
Right. I think that this is something that we as a programming community just struggle with. There's
people on either side of this divide who can help people on the opposite side bridge the divide.
Yeah.
Like, the people who are fans of the very theoretically pure stuff can offer, ah, here are all of these fundamental ingredients that have correspondences with things in logic or things in math or category theory. Here's this set
of Lego pieces, these building blocks that you can put together. And they offer those things. But I, as
somebody on the other side of the divide from those people, the perceived divide, feel like those
things are inscrutable. I have tremendous difficulty understanding, like, if I want to put
monads in Hest, how would I even do
that? What would that even mean? Right? And I would have to go become a category theoretician in order
to understand that. And I, on the other side of this divide from those people, have a bunch of
thoughts about interfaces to programs and how people work and how you can make things
learnable and approachable and understandable. And I can help the people who are, you know,
over in theoretically pure land with their posters at SPLASH or whatever,
try and make their work more comprehensible to people who aren't category theoreticians.
And there's some cross-pollination there that I think is needed.
And so the thing that I would champion is: as much as you can, Ivan of the past two hours, don't
poo-poo the things that are foreign to you. Try to embrace them, and find a way to take what you have
to offer and help those people, so that they can take what they have to offer and help you.
Yeah, and I think this still holds, even though neither Ivan nor I are, you know...
Like, I've dug into a bunch of type theory stuff, but I'm not an expert by any stretch of
the imagination. Yeah, but you're a philosopher. I'm not even a philosopher. By correspondence,
you know all of type theory. I am, uh... just like I am not a real programmer, I'm not a real philosopher.
I have no credentials in anything.
But I've looked at all of this stuff. I've enjoyed it.
I've written my share of proofs in Idris,
I've probably written more proofs in Idris than proofs on pen and paper,
because the last time I ever wrote a proof was ninth grade geometry. But I
wrote a decent number of proofs in Idris. Because we never had proofs after that, which
I thought was sad; I loved proofs. Or the Incredible Proof Machine, I've probably written...
Yeah, I was going to say. But yeah, so I really do enjoy this stuff, and I do think this is a really rich research
project. I think there's so much we can get out of it. At the same time, this is the only paper,
I think, in this kind of tradition that we could ever cover. Yeah. I looked at the Howard paper,
the original one, the one presented in 1980,
and for a logic paper, it's pretty darn readable,
but it would not make for a good podcast.
There are papers on types that we could do,
but they're all kind of looking at the weirdness of types
or the places in which they don't apply or the history.
But, like, there just isn't a lot of
this work that's been translated in such a way that it's not just deep in the technical weeds
And so it's one of the things that, I think, with the Patreon, seeing how many people
signed up for type systems... I know there's, I mean, there's podcasts out there talking
about types and all of that,
but with our format, looking at papers,
I don't know what that would look like.
And it's a huge subfield of computer science
that we could spend a lot of time digging into
and playing in that space,
but it's hard.
The inscrutability of a lot of that stuff to outsiders
is a serious detriment to that subfield.
Yeah, try to take some of these...
I've done it.
I've tried to write code
based on reading a paper of these type systems,
and it's not easy.
I mean, there's really cool ones.
If you ever want to do miniKanren,
a logic programming language,
and do the Hindley-Milner type system in miniKanren,
it's like, oh, that's so elegant and simple,
and yet I still don't understand it.
I can't do it in anything other than miniKanren.
And I had this idea
forever ago of, like, a type system I really wanted to see, which is like a structural type system and
blah blah. And I feel like I have the chops to do it, but trying to convert these things from the
papers to actual working code was so hard. And I know that that's partially education. I know that people who've been in this forever,
who've done a bunch of type theory things,
just don't find it that difficult.
But I really liked, like, Guy Steele gave a talk
at a Clojure conference talking about all this notation,
and he showed how inconsistent it is,
how difficult it actually is to know
what any notation in a paper means, because there
isn't even a standard for it.
People use it very differently.
And I don't know, I think if you really feel that the future of coding is about type systems,
like if you're really that kind of person, I mean, I think there's a lot there, but I
think it's got to,
maybe the answer is just to build a nice language like TypeScript that lets people write a bunch of things.
But I think there's also some of this thought component,
there's some of this advocacy, there's some of this translating the theory,
not just the practice, to other people.
That would be really interesting, because I don't know if you've ever looked at
the implementation of TypeScript's type system,
but it's one file.
It is 40,000, maybe probably more at this point,
40,000 lines.
And wow, is it wild.
Yeah.
And if I'm over here like,
hey, I want to work on Hest.
And from experience, I can say that using a type system
as part of the interface to your visual programming tool is actually often a really good idea. Like,
typing visual programming languages feels really good, because it lets you guide the interactions
in a really satisfying way, and give a little
bit of that structured-editor, projectional-editor kind of feel to what is otherwise a
free-canvas sort of environment. There's a lot of nice synergy between those things.
But if I wanted to go and do that, and pull a really robust type system together, and design a visual interface
around that, I would be super duper lost. And the thing that I think I'm missing is, what got
me into this position of thinking about visual programming the way that I do is the
Bret Victor style essays that just did a beautiful job taking these very important ideas from the
history of computer science and software design and just graphic design and all of those
different threads, and weaving them together into a really compelling presentation that isn't just
about like oh here's an idea that I'm trying to share with you,
but like here's a meta idea
or a meta process.
Like here's how to approach your thinking.
Here's how to approach your learning.
Here's how to like scavenge these ideas
that are being lost to time
and incorporate them into your work.
And I don't know how to do that
with all this type stuff.
So if there's like somebody out there
who's like the Bret Victor
for lambda calculus or
whatever, or the Bret Victor for category theory, or whatever it is. Where it's like, yeah, read these
essays that are written for a somewhat lay audience. Because Bret's stuff is written
for a pretty lay audience; you do not need to be deep in the sauce to understand what
Bret is talking about. If somebody's doing that for category theory, like, here is the 15-minutes-at-a-time way to
completely change your worldview by learning category theory. I haven't seen it. I'm sure
some of it's out there, but it's like, is it that good? And if there was something out there that
good in that area, I think that would be a huge benefit to our field,
to our corner, to the
future of coding, as we so call it.
Yeah, I will say this is
definitely not reaching the level that you're
talking about here, but I will
plug, it's
David Christiansen has this talk
at Compose Conference four years ago.
It's bidirectional type checking.
And it's not flashy.
It's not for a layman audience at all.
But what he does in this talk that I really like,
and it helped me go implement my own bidirectional type checker,
is he shows the notation that we've been looking at in this essay,
and then he translates it to code
and talks about how he's doing the translation.
It's very, very nice to see that notation
and then the actual code.
And still, when I like,
I've tried to write a bidirectional type checker before,
like I looked at some paper that was like,
and this type system.
I just had to sit and watch that video.
I'd be like, all right, how did he translate that again?
And it's not obvious. But the bidirectional type checker, I was able to go implement it,
and it's actually like the code is really simple.
There's two sides of this bidirectional type checker.
What a surprise that it would be called that.
There's like the infer, and there's the check, and that's it.
You can infer something, and then the inference will call into check,
and check will call into infer.
It's kind of this little recursive process that's really nice.
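To give a feel for that little recursive infer/check pair, here's a minimal sketch of a bidirectional type checker for a toy lambda calculus. This is illustrative only; the toy language, names, and structure here are my own, not taken from the talk:

```typescript
// A toy type language: booleans and function types.
type Type = { tag: "Bool" } | { tag: "Arrow"; from: Type; to: Type };

// A toy term language, with an explicit annotation form.
type Term =
  | { tag: "Var"; name: string }
  | { tag: "True" }
  | { tag: "False" }
  | { tag: "Lam"; param: string; body: Term }
  | { tag: "App"; fn: Term; arg: Term }
  | { tag: "Ann"; term: Term; type: Type };

type Ctx = Map<string, Type>;

const sameType = (a: Type, b: Type): boolean =>
  (a.tag === "Bool" && b.tag === "Bool") ||
  (a.tag === "Arrow" && b.tag === "Arrow" &&
    sameType(a.from, b.from) && sameType(a.to, b.to));

// infer: synthesize a type from the term itself.
function infer(ctx: Ctx, t: Term): Type {
  switch (t.tag) {
    case "Var": {
      const ty = ctx.get(t.name);
      if (!ty) throw new Error(`unbound variable ${t.name}`);
      return ty;
    }
    case "True":
    case "False":
      return { tag: "Bool" };
    case "Ann":
      check(ctx, t.term, t.type); // inference calls into check
      return t.type;
    case "App": {
      const fnTy = infer(ctx, t.fn);
      if (fnTy.tag !== "Arrow") throw new Error("applying a non-function");
      check(ctx, t.arg, fnTy.from);
      return fnTy.to;
    }
    default:
      throw new Error(`cannot infer a bare ${t.tag}; add an annotation`);
  }
}

// check: verify a term against an expected type.
function check(ctx: Ctx, t: Term, expected: Type): void {
  if (t.tag === "Lam") {
    if (expected.tag !== "Arrow") throw new Error("lambda needs an arrow type");
    const inner = new Map(ctx);
    inner.set(t.param, expected.from); // bind the parameter, descend
    check(inner, t.body, expected.to);
    return;
  }
  const actual = infer(ctx, t); // check falls back into infer
  if (!sameType(actual, expected)) throw new Error("type mismatch");
}
```

And that really is the whole shape: a bare lambda can't be inferred, but it checks against an arrow type, and an annotated term bridges from checking back into inference.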
I think I haven't seen anyone, while this is good,
it's a very good talk, it's definitely still more academic.
It's still more difficult. I'd love to see that material be more academic. It's still more, you know, it's still more difficult. I'd love to
see that material be more accessible. Something I have thought about writing, but I'm not very
good at those style of essays. Somebody who is, I would love to see that, right?
It's sort of in an awkward in-between space, where you can't cover it with a Julia Evans
or Kate Compton style zine, and you can't
get it done in a single talk, and there's a lot of papers, but paper probably isn't the medium.
I really think the Bret Victor style explorable would be a really great fit for
this. And I've read some books, O'Reilly, I think, maybe put some out, I don't know. I've read some books that are like, let's actually code up example programs that are at every level in the hierarchy of computation, like we were talking about earlier, like push-down automata and regexes and finite state automata and all those. There's some nice ones that start from the simplest possible thing, like basically modeling a flip-flop or whatever, and work their way up. That's very
satisfying. But I don't know of anybody doing that for category theory, and for other systems of
logic or whatever it is that you want to try and make the foundations out of your programming model
in terms of. Yeah, this is not types, but I'll throw out another reference that I really liked:
List Out of Lambda by Steve Losh.
It's a great blog post that basically starts with high-level code
and turns it all into lambdas, but kind of in a weird way.
So at the end, you get your high-level code,
but the whole implementation all the way down is lambdas,
and it's in JavaScript, and it's very well organized.
I actually used it as an outline for a talk that I gave at a local conference
where I did exactly that.
We start with some lists and things.
Yeah, list out of Lambda.
So you start with a list, and then you define it all in terms of lambda, and then you build up numbers out of lists. And it's really nice. So it doesn't
get into the types side of things, but it's definitely something I'd recommend people check out if
you are looking to understand what exactly is lambda calculus, and how could I
define everything effectively computable using lambdas. So I
really enjoyed that one. I wish I gave that talk somewhere that was recorded, because I did a lot of...
I did the whole thing live coding. Literally every bit of it was live code, and that was so
nerve-wracking. But I had practiced it so many times, and other than me forgetting a macro that
I had built into Sublime Text, it went off without a hitch,
which was so exciting.
I even wrote a custom program.
It was fun.
I put so much effort in for a talk,
like so much effort, for a talk
that only like 20 people saw.
It was great.
And then they've all forgotten about it.
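To make the List Out of Lambda idea concrete, here's one way the core encoding might look, sketched in TypeScript rather than the post's untyped JavaScript. The typing and names here are my own illustration, not Losh's code:

```typescript
// A list is nothing but a function: given two continuations, it calls
// the one matching its shape (empty, or head + tail).
type List<T> = <R>(ifEmpty: () => R, ifCons: (head: T, tail: List<T>) => R) => R;

const nil: List<never> = (ifEmpty, _ifCons) => ifEmpty();

const cons = <T>(head: T, tail: List<T>): List<T> =>
  (_ifEmpty, ifCons) => ifCons(head, tail);

// The usual list operations fall out of just asking the list which shape it is.
const isEmpty = <T>(xs: List<T>): boolean => xs(() => true, () => false);

const head = <T>(xs: List<T>): T =>
  xs(() => { throw new Error("head of empty list"); }, (h, _t) => h);

// Everything else is definable on top, e.g. summing a list of numbers:
const sum = (xs: List<number>): number => xs(() => 0, (h, t) => h + sum(t));
```

From there the post keeps going in the same spirit, building numbers out of lists, so that the whole tower bottoms out in lambdas.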
Well, I have an ending if you're ready.
Yeah. I think we could say universality considered harmful because it would be too limiting.
Nice.