CoRecursive: Coding Stories - Tech Talk: Total Swift Programming
Episode Date: February 12, 2018

Tech Talks are in-depth technical discussions. In simple terms, a total function is a function that produces a well-defined output for all possible inputs. A total program is a program composed of only total functions. A non-total, or partial, function is a function that can fail on certain inputs, such as taking the head of a list, which can fail if given an empty list and is therefore non-total. Total programming can be done in any language, though many languages make it easier, some going so far as to require proof of totality. In this interview Andre Videla discusses how the Swift programming language encourages programming in a total style. He also discusses his love of Idris, proof assistants, and how his research into Haskell, Idris, and dependent types has made him a better Swift programmer.

Links: Total Programming in Swift
Transcript
Hey, welcome to CoRecursive, where we share the stories and the people behind the code.
I'm Adam Gordon-Bell.
This is an FP interview, but I think it's a really broadly applicable one.
Today we're going to talk about Swift, you know, the iOS development programming language.
But really, we're going to talk about total programming.
Total programming is a pretty cool concept.
I don't want to define it because we'll kind of get into that in the interview.
Some languages make total programming easier and some make it harder, but I think it's possible
wherever you're programming. And it really just leads to more reliable code. In my mind,
this idea of totality, it's kind of associated with functional programming, but I don't see why
it has to be. I think of it as like a technique you can use anywhere. And it's just a super
powerful guidepost. You know, it's kind of a great perspective for thinking about your program.
Andre, who we're going to talk to today, that's what he wants to talk about.
How this idea of totality and totality checking, it just led him to write better iOS apps.
Andre Videla is an iOS developer and functional programming enthusiast. Andre,
welcome to the show. Thanks. Thanks for having me. So you have a great post on Medium about how
Idris and type-driven development inspired you and changed the way that you write Swift code.
So I'd like to kind of start with some maybe background and then dive into that.
So for starters, for those who aren't familiar, what is Swift?
So Swift is a programming language that has been introduced by Apple a couple of years ago
and is mostly used in the industry as a tool
to develop iOS apps.
People use it also to start writing server code
to go along with the iOS apps.
And it has some following on Linux
also for general purpose, like command line tools.
It's a programming language that is quite interesting in the sense
that it's both very high level, but also low level in the sense that you need to think about
memory and you need to think about how you move references around.
How is Swift different from Objective-C, which was the previous language for iOS?
So when Apple introduced Swift, they have this great, very great slide, which says,
Swift is like Objective-C, but without the C. It's quite funny. I don't really agree with that,
but it's still very funny to see it that way. I think what they meant is that it's all the nice features that we like to use in Objective-C
or would like to use more of in Objective-C
without the problems of having to deal with unmanaged memory and unsafe references, for example. So the differences are mostly in the type system.
Objective-C has a very, I would say, weak type system
in the sense that it doesn't really help the programmer a lot.
It helps the machine mostly.
It helps the machine knowing what to allocate,
where, how much size it takes.
Just like C, the type system of C
is mostly there for the machine, not for the programmer.
Swift's type system is way better at helping the programmer,
helping the programmer write down better specification,
better types, better abstractions,
and therefore better code.
So the differences run deep between Swift and Objective-C.
The main way they are close to each other
is that they use the same runtime,
they run on the same platform, that is iOS,
and they have the same, quote-unquote, parent, which is Apple.
Other than that, they are very, very different.
So is Swift... Swift is garbage collected, is that right?
Kind of.
People will be very annoying with the term garbage collection
because they will say that anything that is not manually managed is garbage collected, which in the strict sense is technically true if you look it up in
Wikipedia. Swift is what is called automatically reference counted. The compiler will look at your
code. It will look at all the declarations, all the definitions, and all the constructors,
and put little counters that will say,
oh, this has been referenced once, twice, three times,
four times, et cetera.
And put code in between your code that says,
increment the reference count, decrement the reference count.
And when the reference count hits zero,
deallocate this thing, because nothing will
use this reference anymore.
It's a form of garbage collection in the strict sense
that you don't manage it manually.
But it does not have the drawback of what you see,
for example, in more classically garbage collected
programming languages like Java, where you have GC pauses, where the garbage collector,
which is mark and sweep, will stop your execution,
look at all the references that you're using,
mark the ones that you're using, and deallocate the rest.
This incurs some latency, for example, in applications
when you scroll down a table.
It's a classic example that iOS developers will tell you:
you cannot have smooth scrolling tables on Android
because of the garbage collection.
The GC will pause the execution of the program,
look at all the references, free the ones you don't use,
and then keep going, resume the execution.
Swift doesn't have pauses, since everything is dealt with by the compiler.
Everything is dealt with at compile time,
and the execution never stops.
It just keeps going, just like a program written in C or Objective-C.
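What ARC does can be sketched with a small, invented example. The `Resource` class and the event log below are illustrative, not from the episode; the point is that `deinit` fires deterministically, the moment the last strong reference goes away, with no collector pause:

```swift
// Illustrative sketch of ARC: the compiler inserts retain/release
// calls around these references, and deinit runs as soon as the
// reference count hits zero -- no garbage-collection pause.
var events: [String] = []

final class Resource {
    let name: String
    init(name: String) { self.name = name }
    deinit { events.append("\(name) deallocated") }   // count reached zero
}

func scope() {
    let r = Resource(name: "buffer")    // reference count: 1
    let alias = r                       // reference count: 2
    events.append("using \(alias.name)")
}                                       // both references released here

scope()
events.append("after scope")
// events: ["using buffer", "buffer deallocated", "after scope"]
```

With a tracing collector, the point of deallocation would be nondeterministic; under ARC it is fixed by the compiler.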
That's interesting.
So in some ways, the compile-time garbage collection, if you want to call it that, is a
competitive advantage for Apple, because their apps run more smoothly.
Yes, totally.
It's a typical
iOS versus Android argument,
to say, oh, at least our apps don't pause.
And it's very useful when you make things like games, because on the Android platform,
whenever you start allocating lots of stuff
or are very dependent on the resources of the device,
which are inconsistent across all devices, even iOS,
you really want to have consistency in performance, and the garbage collector of Java doesn't
allow this.
ARC allows somewhat more predictable performance when it comes to your memory allocation.
I would like to note that it's not because you have
mark-and-sweep garbage collection like Java
that you cannot have good performance.
It's just really hard to do.
And as someone said somewhere on the internet,
I don't remember where it was,
but you can, for example,
in a mark-and-sweep garbage collected language,
make a huge array of references
and mutate them as you go
and never allocate more resources or
deallocate resources because you always have a pointer to them. And that is very close to
just having a huge buffer in C and putting stuff in it. And someone said, whatever your programming
language, if you want performance, it will, no matter what, look like C. Because you don't have a lot of references, you put stuff in there, take stuff out, and that's just what you do in C.
That's a sad thought to me.
So, you know what you're making me think of is Rust.
Yes.
So, how does the garbage collection in Swift vary from Rust?
So, unfortunately, I'm quite unfamiliar with Rust.
I've tried to use it once, and I made a toy ray tracer.
Turns out making a ray tracer is not the best way
to learn Rust because I don't know how far.
Are you familiar with ray tracing and graphical methods?
No.
Let's go through this very quickly. So a ray tracer is a way of rendering 3D scenes where you put a point in space and you cast rays, as if they were photons, like particles of light, and see
how they behave in this 3D space filled with objects.
And the goal is to, by emulating trajectory of photons, emulate the way light bounces
off objects to go from, for example, the surface of a table to the surface of a wall to the eye of the camera
or the viewer. And that's a rendering technique that is quite useful for reflections, for example,
because as you can see, it's really natural. Like you project your point somewhere. If there
is an object that intersects this projection,
you can just bounce off it and then keep going until you do 13 or 15 bounces,
and then take the color value of this ray of light.
And this will give you one color, one pixel on your screen.
So that's basic ray tracing.
A ray tracer can be seen as basically a function from, as I said, a set of objects
and a starting point, like the position of the camera in this space.
And it gives you a 2D image.
So it's a function from a space, a set of objects, a camera, to an array, a 2D array.
And since you can see it as a pure function,
you can implement it using mostly pure function.
You don't have to use state.
You can go very easily and very naturally
towards very basic linear algebra.
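The "ray tracer as a pure function" idea can be sketched like this. Every type here is an invented placeholder, and the body is a stand-in rather than a real tracer; only the shape of the signature matters, scene and camera in, 2D array of colors out, with no mutable state anywhere:

```swift
// Placeholder types for the sketch -- not a real ray tracer.
struct Vec3 { let x, y, z: Double }
struct Sphere { let center: Vec3; let radius: Double }
struct Camera { let position: Vec3 }
typealias Color = (r: Double, g: Double, b: Double)

// Pure function: the same scene and camera always give the same image.
func render(scene: [Sphere], camera: Camera, width: Int, height: Int) -> [[Color]] {
    return (0..<height).map { y in
        (0..<width).map { x in
            // A real tracer would cast a ray per pixel and bounce it
            // through the scene; here we just compute a gradient so the
            // shape of the function is visible.
            (r: Double(x) / Double(width), g: Double(y) / Double(height), b: 0.5)
        }
    }
}
```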
And that makes Rust very not...
It doesn't make Rust shine,
doesn't make the features of Rust shine,
because you can write everything as
copies instead of referencing and mutating objects. So that's my experience with Rust,
is basically writing pure functions and writing it just like Swift. So you copy everything and
you never reference anything. So unfortunately, I cannot tell you much about how different it is from Rust. I know that Rust has the borrowing mechanic that is extremely useful for this exact purpose,
that is referencing stuff, passing references along,
and making sure you don't reference something that is already borrowed by something else
in case you mutate it and break this assumption
that your reference is safe,
where it's actually not because it's been mutated by something else.
But unfortunately, I don't have programming experience with it.
One of the features that kind of makes Rust exciting
is this way that you could share data across threads, for instance.
But if everything's immutable, then there's no
advantage there.
Exactly.
So, back to Swift. How are nulls handled in Swift?
So the goal of Swift, when it came out, was, as they said, to do Objective-C without the C, which meant to learn from the mistakes
of previous programming languages.
I say mistake, but people would say features.
What I really mean is let's make a decision about what kind
of bugs we allow in our language.
And they decided that null pointers were not
an acceptable bug to have in this day and age.
And what they did is implement what we can see in languages like Haskell or Scala, where
you use an algebraic data type called maybe or optional in Swift, which has two possible
values, or two constructors.
It has either none, or nothing, or nil, or it has a some constructor
that holds a value, an arbitrary value of any type. And Swift does not have null pointers.
It has some magic behind the scene to be smart about what is a null pointer, what is not, and
represent them as nil. But really what you interact with is this algebraic data type
that is either something which is wrapped into this type
or this nil constructor that has no information about it.
And the goal of this is to eliminate the bug of referencing something
that doesn't exist and using it even if it doesn't exist.
It makes the compiler able to really easily tell that you're making a mistake when you're
referencing something that can be gone from your control.
So for example, you have typically in iOS development, you have a label on the screen
and you're not responsible for instantiating this
label. Something else will at some point, but you don't know when, you don't know where, you don't
know at which point it will happen. And maybe that when you're trying to use it, it's not there yet,
or it's gone. And the way it's represented is using optional. So this label is either there, wrapped in a some label, or it's gone or not there
yet. And it's a value of nil. And if you try to use it directly, the compiler will tell you
you cannot use it directly, because
you cannot prove that it's instantiated yet. You have to do some case analysis to say, is it there?
If it's there, I can use it and I will do this.
And if it's not there, I will do something else.
Either ignore it or maybe do the instantiation and then use it.
Or maybe do something else.
For example, it's an error.
It's a huge mistake.
So I should move to some other view, or display an alert that will say, this has
gone wrong, because it was supposed to come from the server and it hasn't come back,
therefore something is wrong with the server. And that makes the whole
category of bug disappear, because the compiler will always tell you, this might not be there, and
therefore you have to handle the case where it's not there.
And you contrast this with, I was using the Java example, but we'll keep using it:
Java has those null references, or real pointers expressed as null, which are not different in the
type system from regular references. If you have an object and you have a reference to this object,
well, maybe it's null, maybe it's not, but you don't know.
And the compiler doesn't tell you.
You could use an optional type in Java, and we are seeing more and more of them with libraries like Guava.
Or more recently, I think it was Java 8 introduced the option type, where you can use option types in order to bypass this problem of not knowing if it's null or not.
But since you don't know, at no point can you be sure whether you are dealing with null references,
or whether you have to deal with null references,
or, if you're dealing with them, whether it makes sense, because, for example, you just instantiate an object,
and you use it three lines later. Technically, it could be null, but there is no reason to believe it's null because you can
see it. The problem gets worse when you, for example, let the code evolve for a long time
and you have multiple lines going through those two points. Even some functions are called and
some mutation happens. And then really, it becomes much harder to know
if this reference is still valid or not.
And that's what Swift does to eliminate this problem,
is make it a data type.
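In Swift, that case analysis looks like this. The `label` value below is a stand-in for a view that something else may or may not have instantiated yet:

```swift
// Optional<String> is just an enum with two constructors:
// .none (spelled nil) and .some(wrapped value).
func render(_ label: String?) -> String {
    switch label {
    case .some(let text):
        return "showing \(text)"        // proven present, safe to use
    case .none:
        return "label not ready yet"    // the compiler forces us to handle this
    }
}

// The compiler rejects any attempt to use the wrapped value directly
// without first proving which case we are in.
```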
So this ties in to the topic of totality, I believe.
Swift cannot enforce totality.
However, if you have an option type or any, I guess, sum type,
and when you deal with it, you handle all the possible cases,
then Swift is allowing you to write in a total style.
Yeah, exactly.
It really helps to have a compiler that can tell you
whenever you're forgetting something that you really shouldn't.
It also has this extremely nice autocomplete for switching over algebraic datatypes.
So if you write your own algebraic data type that is not optional,
even a complex one,
it will
automatically generate, for example,
a switch case to do case analysis
on it and not forget
any cases, which is
interestingly enough better than the default
GHC option
when you're using Haskell because
for example in Haskell, if you pattern match on a data type,
if you forget a case, it will compile just fine
and explode at runtime if you hit this case.
That's also unfortunate, and that's also not total.
And that's one way where Swift is enforcing totality, quote-unquote, accidentally, by making sure that you handle all possible cases at all times and don't forget any one of them.
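For example, with an invented `NetworkState` enum, leaving out any case below is a compile-time error rather than a runtime crash:

```swift
// A custom algebraic data type. Swift refuses to compile a switch
// over it that forgets a case -- the "accidental totality" described
// above. NetworkState is an invented example.
enum NetworkState {
    case idle
    case loading(progress: Double)
    case failed(message: String)
}

func describe(_ state: NetworkState) -> String {
    // Deleting any of these cases is a compile-time error,
    // unlike GHC's default, which only warns.
    switch state {
    case .idle:                  return "idle"
    case .loading(let progress): return "loading \(Int(progress * 100))%"
    case .failed(let message):   return "failed: \(message)"
    }
}
```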
You mentioned the halting problem.
Are you comfortable explaining what that is briefly? Briefly? So, basically what it says is you are given an arbitrary program.
It could be anything. It could be literally anything.
Can you tell if this program can terminate?
And if so, will it terminate in a good state or a bad state?
Or are there inputs for which it will not terminate?
Or are there inputs for which it will terminate?
All those questions are corollary to the halting problem,
but the original is, can you tell if it terminates at all?
And Turing has these very nice proofs using Turing machines
where programs are defined using Turing machines. So a program is a Turing machine.
And the question is, is it possible to make a Turing machine such that if you feed it
another Turing machine, the original Turing machine will tell you if the machine that was fed
terminates or not? So it's the same thing as rephrasing,
given an arbitrary program, can you write another program that can decide if any program terminates
or not? And the proof is surprisingly simple, but a bit contrived if you're not used to proofs by
contradiction. Basically, you assume that such a Turing machine exists. And by assuming
that such a Turing machine exists, you can do some neat trick like feeding it to itself and realize
that it cannot tell if it self-terminates or not. And that's pretty funny, right? The problem applies to itself.
And that makes it so you can give up on writing anything that can arbitrarily decide termination of anything.
But it does not tell you that there does not exist a program
or a class of programs that terminate.
And that's what we're betting on with languages like Idris and Agda,
is that we know there exist some structures that always terminate.
So for example, if you have a function like map on lists,
you can pretty easily prove that it always terminates
by saying that if you take the definition of a list,
it's two constructors, either nothing, or the tail of the list, like the last element,
sorry, the last empty element, or it's a concatenation between a head and another list.
So the empty list is just the first constructor. A list of one element
is the head and the empty list. A list of two elements is a head and list, which contains one
element, etc. And by doing case analysis on this constructor, you can say, well, if it's empty,
then you return the empty list when you map across it. If it's not empty, you
have the cons constructor, which has the head and the tail. If you take out the head,
apply the function on the head and concatenate this result with the recursive call of map
on the tail of the list using the same function, what you just did is define a recursive call on a constructor that is smaller than the constructor that you
were given to begin with. Because since you analyze the cons case, the list that is inside
the cons case is by definition one smaller than the list that you were given in argument.
Since you're doing recursive calls,
you cannot prove that arbitrary recursive calls always terminate. But you can prove that if you make recursive calls on constructors that are always smaller, they converge toward a base case,
which is our empty list. And that's one structure that we know always terminates. If you have a function or a program that makes
self-recursive calls using constructors that are always smaller because you were able to
destructure them using case analysis, then this function will always terminate. And that's
how Idris does it for some cases.
A recursive call and induction are kind of the same thing, right?
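The list definition and the structurally recursive map just described look like this in Swift. Each recursive call is on the tail, which is one constructor smaller, so the recursion converges on the empty case:

```swift
// The list from the discussion: either empty, or a head plus a smaller list.
indirect enum List<A> {
    case empty
    case cons(A, List<A>)
}

func map<A, B>(_ f: (A) -> B, _ list: List<A>) -> List<B> {
    switch list {
    case .empty:
        return .empty
    case .cons(let head, let tail):
        // Recursive call on `tail`, strictly smaller than `list`,
        // so it converges toward .empty -- the structure a totality
        // checker like Idris's can verify.
        return .cons(f(head), map(f, tail))
    }
}

// Helper to flatten back to a Swift array, for inspection.
func toArray<A>(_ list: List<A>) -> [A] {
    switch list {
    case .empty: return []
    case .cons(let head, let tail): return [head] + toArray(tail)
    }
}
```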
Exactly.
Hello? Yeah, sorry about this, we're having some technical difficulties.
Yes, you cannot trust technology when it's not total.
Exactly, exactly.
So, at its root, in a very simple fashion, a function being total means: of all the possible inputs that come into it, we have to handle all of them. Is that how you would characterize it?
One aspect of it, yes. It's probably the most important one when we're dealing with programming, because we rarely write programs that go on forever accidentally
unless we make, as I said, either a very trivial mistake like while true
or a very complex one with multiple recursive functions calling each other.
That's the case we see most often when programming nowadays: yes,
you have to handle all the cases, not ignore any of them, and make calls to other functions that
are also always total, because otherwise you cannot prove that your function is total if
you're calling something else that is not. And in the mathematical sense, divide by zero is an interesting case.
If you have a function that takes all integers and then divides them,
but it takes two numbers and divides one by the other,
there is a case where it's not total.
It's not defined when you divide by zero.
Math cheats because you can always say that...
So what you described is called
not very...
It's not a very original way of calling it,
but it's a partial function.
So if it's not a total function,
it's a partial function.
Why is it partial?
Because only a subset of all the possible arguments can give a result.
So the domain is partial.
There is only parts of it that work.
But you can cheat saying, well, if it's partial,
I can just say that my function is total
by saying that the domain is the same as before,
except the values that don't work.
So divide by, you could say,
is total if you say the domain of divide by is everything but zero. And that's how you would do
it. For example, in Idris, you would say, oh, I have a function divide by, it takes a natural
number, it takes another natural number. And it takes a proof that the first argument is not zero.
Sorry, the second argument is not zero.
And by giving this extra argument as a proof,
what you do is that you constrain the domain
without changing the input-output type.
And yeah, this type of number that doesn't contain zero
is something you might not find, I guess, outside of Idris or Agda or something.
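Swift cannot carry an Idris-style proof, but a small wrapper type can approximate the trick of shrinking the domain. `NonZero` here is invented, not standard library: the only way to construct one is through an initializer that rejects zero, which makes the division total:

```swift
// The "proof" that the divisor is not zero is checked exactly once,
// in the failable initializer. NonZero is an invented type.
struct NonZero {
    let value: Int
    init?(_ value: Int) {
        if value == 0 { return nil }   // zero is outside the domain
        self.value = value
    }
}

// Total: given a NonZero, this can never trap on divide-by-zero.
func divide(_ numerator: Int, by denominator: NonZero) -> Int {
    return numerator / denominator.value
}
```

The precondition is checked dynamically at construction rather than proven statically, but every call site of `divide` is safe by construction.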
Interestingly enough, I've posted a tweet maybe two days ago where I had a very similar problem where I had to use a function that takes a list that is of at least one element.
It has to have at least one element.
And such lists have
nice properties. One of them is, if you take the head of the list, you will always get an element. There is no case where you have a list of at least one element but you don't have a head, because,
well, if it's a cons, you take the head. And if it's the base case, it still has one element in it. Therefore,
you just extract it and there you go.
The head is total.
Yes, head is total. Well, more precisely, the signature, from list of A to A, is total. You can
trust it. It will always work, given that your list is not empty. Now in Swift, there is no way
of writing this in a very simple way. So in Idris, you would just say, my list, it happens that it's
a vector. But the number of elements here is the successor of n, where n is a number. And that means that if n is zero,
then your list is of size successor of zero, so it's one. And if n is more than zero, for example,
four, it means that your list is of size five. And you realize quickly that, well, you have all
the possible numbers except zero.
You can have all the lists, all the possible lists except zero by writing down this type.
And it's very easy to write down this type.
And it's very easy to handle the proof because you can just say, oh, figure this out because, you know, I used this constructor before.
Therefore, it has to be not zero.
Therefore, this function can be called at this time.
And Idris makes it easy.
But in Swift, there is no way to write this function signature down.
So what you have to do is to write down a data type that is just like lists.
So you have to write enum, non-empty list of elements is either something or cons, and
then just rewrite the whole definition of lists.
And that's annoying, right?
Because you have to, it's fine, right?
Because Swift has protocol and protocol extensions.
So you can just give it the sequence protocol,
and you get map, flat map, filter, drop, et cetera,
for free.
But still, like, I don't know, it was 50 or 60 lines of code that you have to have somewhere in your code base.
Or you could have a library for this.
Or it's also 50 lines that can contain a bug in it.
Like, I don't know, for example, your reverse.
My first implementation, the constructor took an array and returned an option of non-empty list
because the array might be empty. If the array is empty, then there is no list to construct,
then you return nil. If it's not empty, then you return some non-empty list, which makes sense,
right? But the way I constructed the list was reversed. And this is not something Swift can
tell you. And this is dangerous, right?
Because you get into a situation where you want to express some things,
but the type system is not good enough,
or the compiler does not give you enough guarantees
about what you're interested in.
And therefore, you make very unfortunate mistakes
that you could avoid with a different type system.
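A sketch of that hand-rolled non-empty list, including the failable constructor from an array. The names are illustrative, not the speaker's actual code:

```swift
// A list that structurally cannot be empty: every constructor
// carries at least one element, so `head` needs no Optional.
indirect enum NonEmptyList<A> {
    case single(A)                  // base case still holds one element
    case cons(A, NonEmptyList<A>)

    var head: A {                   // total: there is no nil case
        switch self {
        case .single(let a):  return a
        case .cons(let a, _): return a
        }
    }

    // The constructor from the transcript: an empty array has no
    // non-empty list, so the result is Optional.
    init?(_ array: [A]) {
        guard !array.isEmpty else { return nil }
        // Fold from the right so order is preserved -- the bug
        // described above was building exactly this, but reversed.
        var result: NonEmptyList<A> = .single(array[array.count - 1])
        for element in array.dropLast().reversed() {
            result = .cons(element, result)
        }
        self = result
    }
}
```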
An interesting example you brought to mind is, like, moving beyond head.
If you wanted to get the nth element of a list, so say you want to get the fifth element,
if you want it to be total, in the sense that you don't want to crash when you access the nth
element and realize it's not there, you have to return a data type like option, or maybe, or even result, and say,
in case it's not there, in case I overflow the array or overflow the list, then return the special
value that indicates that you overran the length of the list. And that's what
option is very useful for in Swift, because those cases happen very often, and there is nothing you can do to prevent them. And I shouldn't be talking down the Swift type system, saying, oh, it's so
inferior to Idris's type system.
Because in Idris, for example, you could say the function takeEnth takes a list, but it
also takes a proof that the list is of size at least n.
And then you're safe because the only way to call this function is to both have the
list and the proof that it's of a certain length. And then if you try to make a call where you don't
have the proof of the size of the length, the code will not compile and tell you,
I cannot prove that n is of at least this size. Therefore, this function does not type check
because I'm missing an argument. And it's really useful.
It really helps.
But in day-to-day programming, it's really a very nice step towards better programming
to at least have optionals in Swift.
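That optional-returning nth lookup can be written as a small Array extension; `element(at:)` is an invented name, not part of the standard library:

```swift
// Total lookup: instead of trapping past the end of the array,
// return nil and force the caller to handle the missing case.
extension Array {
    func element(at index: Int) -> Element? {
        guard indices.contains(index) else { return nil }
        return self[index]
    }
}
```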
I think it's a very good step for programming in general to have programming languages like
Swift, which merge both performance and low-level code and high-level abstraction, like algebraic data types,
protocol extensions, protocols in general, unions, et cetera.
So Swift has all this machinery to work with optionals,
and it's very useful to achieve totality,
even if it doesn't have all the proof mechanism
that Idris has.
Very true.
You mentioned protocol extensions.
What are protocol extensions?
So some people who are familiar with Haskell or Scala
will be very happy to know that Swift has enough tools to have some way of implementing type classes where you can say this type has
those properties, and therefore whatever type conforms to those properties has those
other functions. It means that you can write, for example, generic operators,
for example, map, filter, flat map,
on things like new data types that you create yourself.
It's a bad example because Swift doesn't have higher-kinded types,
so you cannot actually define map and flat map generally.
But what you can do, for example, what I very often do is that I have a file somewhere called
algebra, or I have monoid and semigroup defined somewhere with a, what is it called, the plus
operator, basically, between two values of the same type, giving a third value of the same
type. And that's already very useful because there are so many types that you can write yourself
that need this property, but they don't have it by default. So if you can just define a couple
things on this type, do a protocol extension saying, I have this type. It comes from somewhere. But it conforms to Monoid.
Then you get the plus operation and all the goodies for free.
And that's very nice.
That's also what happens with, for example,
I mentioned the non-empty data type.
You can conform it to sequence by a protocol extension.
And that gives you map and flat map and filter.
It's not the monadic flat map, etc., but it's already good enough.
That's really a way to get a lot of functionality for free
by conforming to a type that is not a type that you were either aware
or existed at the time of the definition of your data type.
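A sketch of that "algebra file": a `Monoid` protocol with a combining operator, and a protocol extension that hands every conforming type a fold for free. The names are one plausible spelling of what is described, not the speaker's exact code:

```swift
// The "plus" operator: two values of the same type, giving a third.
infix operator <>: AdditionPrecedence

protocol Monoid {
    static var empty: Self { get }
    static func <> (lhs: Self, rhs: Self) -> Self
}

extension Monoid {
    // Free functionality from the protocol extension:
    // fold any sequence of monoid values into one.
    static func concatAll<S: Sequence>(_ values: S) -> Self where S.Element == Self {
        return values.reduce(Self.empty) { $0 <> $1 }
    }
}

// Retroactive conformances: existing types pick up concatAll for free.
extension String: Monoid {
    static var empty: String { return "" }
    static func <> (lhs: String, rhs: String) -> String { return lhs + rhs }
}

extension Int: Monoid {
    static var empty: Int { return 0 }
    static func <> (lhs: Int, rhs: Int) -> Int { return lhs + rhs }
}
```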
So do you think that Swift should have higher-kinded types?
Or do you think that this...
Actually, well, so there is this very funny way of thinking
of programming language that is...
Programming language don't have features.
They just have dreams.
So, for example, the dream of Rust is to have higher-kinded types.
I don't know what the dream of Swift is,
but if we go to Haskell,
The dream of Haskell is to have dependent types.
If you go to Scala, the dream of Scala is, well,
Scala has a bit of everything.
But I guess what they really want is some sort of soundness.
I know they have problems now with soundness,
given subsumption and higher-kinded types.
So you can define a programming language
by the features it is missing.
Oh, Go.
Go is missing everything.
Go is missing generics, for example.
Many people would say, oh, Go would be so much better
if it had generics.
So higher-kinded types are the dream of Swift, maybe.
There are a lot of features that I wish were there.
Higher-kinded types are one of them.
I don't see personally myself using them too much.
I would have liked to have them this week, for example,
for one specific case.
But I dealt without it.
And that's not a feature that I feel really strongly about.
Something like dependent types, I think, has much more value, because they allow you to express...
They help you in a very strong way.
I'm saying that because I've also programmed a bit with Idris and I can see how the compiler,
how the relationship between the programmer and the compiler changes once you have dependent types.
I know this will never happen and there is no way of making it happen.
It would be another programming language.
It would not be Swift anymore.
But I think there is value in looking into other features that would implement,
try to see if they would work, try to see if they make sense,
try to see if they are giving enough value compared to the costs they have.
For example, a lot of problems that come with subsumption when you have subtyping is you
end up with cases, for example, in Scala until 2.12, or until Dotty, basically, where it's really
hard to compute the intersection between two types.
And you have very complicated cases where the compiler cannot figure out
the highest lower bound of two types.
And that's very annoying.
You lose a lot of performance.
You lose a lot of error reporting.
It's really hard to have nice error reporting to detect those cases and say,
I'm the compiler.
I accidentally went into an infinite loop trying to figure out the lowest upper bound.
And therefore, I cannot do it.
But this is my best guess.
Instead, you get a large page of types upon types upon types.
That makes no sense.
So adding features to programming languages is nice.
Higher-kinded types are nice.
Now, I don't know if it makes sense to add them in the Swift compiler,
given its state, given how useful they are,
and given how complex it would be to implement them.
There is one thing I would like to say about adding features to Swift,
which is that I would like to have something like this.
If there were, for example, I don't know, a monarchy, and I were the monarch of languages, and I had
to decide the fate of Swift tomorrow, it would be about the exclamation mark.
I would really like to have a pragma or something that says at the beginning of the file: this is strictly forbidden.
You cannot have this in this file.
And put this somewhere in documentation or in the type signature, saying: this function cannot have
optional force-unwraps or forced type casts. You cannot cast types to other types arbitrarily. Things like this,
a very basic removal of features, temporarily, because you don't use them, right? Because
either you really know what you're doing, and therefore it should not be the default, or you don't, and then you should not be using them at all.
And be safe and trust your code
that it's not doing some funny thing behind the scenes.
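As a rough sketch of what banning the exclamation mark nudges code toward, here is a minimal Swift comparison of force-unwrapping versus the safe alternatives. The `parseAge` helper is hypothetical, not something from the discussion:

```swift
// Hypothetical helper: Int(_:) already returns nil for bad input
// instead of crashing, so the function stays total.
func parseAge(_ text: String) -> Int? {
    return Int(text)
}

// Force unwrap: a runtime trap if parsing fails.
// let age = parseAge("abc")!        // would crash at runtime

// Safe alternatives keep every input handled:
let age = parseAge("abc") ?? 0       // nil-coalescing default
if let parsed = parseAge("42") {
    print("parsed:", parsed)         // runs only on success
}
```

A file-level pragma like the one described doesn't exist today; linter rules that ban force-unwrapping are the closest approximation.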
Higher-kinded types are a nice dream.
I wish I had them.
I don't know if it's reasonable.
Could you have a code linter
and give it a list of all the partial functions in the standard library and say,
I want my compilation to just fail if I hit one of these?
So what we do at work right now is we use this wonderful library made by those wonderful people of the Swift podcast.
What is it called? I have it here: Swift Unwrapped.
One of them works on SwiftLint.
SwiftLint is a linter for Swift.
So yeah, that takes you towards an approximate totality solution,
where you could ban the use of certain functions.
There are multiple designs like this: the way Swift handles nil is one of the multiple ways that
Swift makes programs total by proxy. There are others. For example,
the way you have exceptions,
rather the way you don't have exceptions is very nice.
It gives a lot of peace of mind,
let's put it that way,
to think that you can trust every line of code
as being executed
and not potentially crashing
and giving out some huge stack trace that you don't know
where it comes from.
So let's explore that.
So how do exceptions work in Swift?
So there are no traditional exceptions.
For example, if you take a well-known language like Java, which is the go-to example
because everybody knows it and it has been there for 25 years now.
Java has checked and unchecked exceptions.
When I talk about exceptions, I usually mean unchecked exceptions.
Oh, and Scala has them too.
Unchecked exceptions are exceptions
that can happen at any time at any point
during your execution.
Maybe I just skipped over what are exceptions.
I will just do a quick reminder.
So exceptions are what happens when
you reach a state that is not acceptable for your program.
So if it reaches a state where
it's literally stuck, or does not know what to do, or is not supposed to go there because, for example,
it's a page fault, it will stop execution and raise an exception. That is,
it will go to the last function that called it and say, hey, something went wrong, please deal
with it, because I cannot.
And if this function cannot deal with it,
it will go to its caller and say,
I don't know what to do with that.
Deal with it.
And it goes that way over and over
until it reaches some function that says, oh, don't worry.
I know what to do.
We'll just use this other code path, because if that went
wrong, it means, I don't know, for example, the server network is down.
Sorry, the network connection is down.
Therefore, we can use local data.
And this function knows how to deal with it.
It catches the exception, resumes execution, and goes towards another code path.
So Swift does not have this mechanism where anything can crash at any point.
But it does not have checked exceptions either.
So checked exceptions are exceptions
that you know they can happen, and you
know what kind of errors can happen.
And in Java, it means that you annotate your function
with the list or with the type of exception that you can expect.
And when you call a function that can throw an exception
that is checked, you have to catch it, and you have to take
care of all the possible failures that can happen.
Since you know what they can be, it's
quite easy to be exhaustive and choose an appropriate code
path for every one of them.
Now, it's not a solution
to everything, because checked exceptions are quite annoying to deal with, especially when you
have multiple of them stacked together, or nested. And that makes dealing with errors painful in the
sense that you have lots of boilerplate. And most of the time you don't know what to do with the error, because, well, you're not responsible if the network goes down.
What are you supposed to do? I mean, for example, you're logging in and the network goes
down. Well, you can't log in, I guess, so you just stop or crash. Or if it's an assumption that
your program makes, and this assumption is broken, then it makes sense to
crash. So there is no reason to catch this exception. So what you will do in the catch
block is just throw another exception or just plain crash right there. So it's not a solution
to everything. Swift's solution to this is to have another annotation that says: this can throw something, but we don't know what it is.
And it's a form of checked exception, except you could say it's worse than Java, because you
don't know what kind of exceptions, or rather what kind of errors, you can expect.
But they have nice utility around it. So for example, if you have multiple functions that can throw exceptions,
you can bundle them together in a block and it will all go fine. You can also do something
interesting that is, if you have an exception and you don't care at all about the error,
you can transform this into an optional and say, if it's an error, just put nil instead.
And if it's nil, I know something went wrong and I know how to deal with nil because I've been
dealing with nil my whole life with my whole program in Swift. And that's one way to deal with
it. Behind the scenes, there is no stack trace where you unwind an exception, like I said,
notifying your caller over and over again until it's handled.
So it's not an exception in the traditional sense.
It's closer to an either type because you return something
and it happens that what you return is an error.
But the syntax is very close to exceptions
because you have a do, a try block, and a catch block.
So to answer the question: Swift does not have a result type,
but Swift's exceptions are technically like result types,
because you return either of two things,
and there is no stack unwinding
like traditional exceptions.
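A small sketch of what this looks like in practice. `FetchError`, `fetchRemote`, and the local-data fallback are illustrative names, not anything from the episode:

```swift
// Hypothetical error type: conforming to Error is all Swift requires.
enum FetchError: Error {
    case networkDown
}

// The `throws` annotation says "this can throw something",
// without saying what.
func fetchRemote(ok: Bool) throws -> String {
    guard ok else { throw FetchError.networkDown }
    return "remote data"
}

func loadData(networkUp: Bool) -> String {
    do {
        // Multiple throwing calls could be bundled in this one do block.
        return try fetchRemote(ok: networkUp)
    } catch {
        // The caught value is typed only as Error: we handle the failure
        // by switching to another code path, here a local fallback.
        return "local data"
    }
}

// try? converts a thrown error into an optional, discarding the error.
let maybeData = try? fetchRemote(ok: false)   // nil on failure
```

Note that `catch` receiving a plain `Error` is the "we don't know what it is" part, and `try?` is the bridge into the optional world described above.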
Interesting.
What brought you to, so you're an iOS developer,
but you clearly have an interest in something more, like Idris and Agda.
What brings you to those languages?
So my history with programming is interesting
because I've, like many people,
started programming, making games,
because video games are very nice.
You play with them, you learn with them,
you have very good experiences.
And it's a way to exhibit your creative impulses. If you want to do something,
but for example, you're not confident in writing or not confident in singing, or it's too hard to
learn music, and you're used to video games, you might think, oh, when I'll grow up, I'll just
write video games because I love them. And that's what I did. I started writing
games and I started working on a game for a long time with a group of people. And doing that,
I discovered that there are problems that I would like to solve that my programming language cannot
solve. Or if it can, it cannot do it in a way that I
would like to do. It cannot express the things I would like to express. So I discovered languages
like Scala and Haskell and was very happy with them for a while until I discovered that, again,
they have some things that I would like to express that they don't allow to express.
And with a bit of time and a lot of work, I discovered that there exist programming languages that allow to do this.
And they are theorem provers like Coq.
And I tried Coq for a bit, but it's not a, let's say it's not very ergonomic for programming actual programs, right?
You wouldn't write games in Coq.
You could.
I don't recommend it.
And then I discovered Idris as a
programming language with dependent types
which has all the nice things I like
like the Haskell syntax
like dependent types
and like its
ability to express formal properties, mathematical
properties about your program, like simple proofs and more complex proofs.
And it's a way of writing proofs also that I really enjoy because it's very hands-on.
Some people will say that it's barbaric because you write your case analysis in a way that is not automated.
If you know about Coq, you know about the tactic system.
And the tactic system allows you to write proofs in a way that is extremely expressive, extremely simple and automated, in a way that allows you to extend your proofs very easily. And if you do it in Agda or
Idris, if you try to change something in your proofs, you will have to change a whole bunch
of stuff. So it's not very useful. Well, it's useful, but it's not very ergonomic for a couple
of things, but it allows me to express some problems that I would like to solve. And I said
this in 30 seconds, but this evolution took, I don't know, 10 years to go from basic game-making to looking at dependent types and type theory and category theory and abstract algebra.
It's a huge learning curve, but I think it's worth it. It's worth going through because you really learn to
recognize patterns, to write code that strives for those algebraic properties. And once you
realize that you can use those algebraic properties, you can write code, or you can make architectures that
don't have to be so complex and can use things like type
classes to generate code that you would not have to write.
And code that you don't write is code that is trivially correct.
And that makes your code bases nicer,
and that makes your maintenance nicer,
and that makes your day easier, et cetera, et cetera.
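In Swift, the closest everyday analogue to this "type classes generate code for you" idea is compiler-synthesized conformance. A hedged illustration, with a made-up `User` type:

```swift
import Foundation

// Declaring Equatable and Codable is enough: the compiler synthesizes
// == and the encoding logic, so that code never has to be written by hand.
struct User: Equatable, Codable {
    let name: String
    let age: Int
}

let a = User(name: "Ada", age: 36)
let b = User(name: "Ada", age: 36)

// This == was generated, not written, and so is trivially correct.
print(a == b)

// The synthesized Encodable conformance, used via try? to stay safe.
let encoded = try? JSONEncoder().encode(a)
```

Swift protocols with default implementations play a similar role to Haskell type classes here, without the higher-kinded generality discussed earlier.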
So I think there is very large value in learning
more formal programming languages, more formal ideas, more theoretical subjects like algebra,
when you're a programmer, because you can use this knowledge, maybe not directly, but it trains your
brain to detect patterns and apply them where it's useful.
And it's also a very, very nice community
and very nice field of research.
I'm glad that you were able to take learnings
from a more abstract area
and that they help you in your day-to-day.
So if everybody all of a sudden had knowledge of,
of all this information and,
and we all were writing,
you know,
in a total style,
what would software development look like?
It would be very slow.
It would be very,
very slow.
It's already really slow as is.
One drawback,
one huge drawback of, of writing in a total style
is that lots of stuff that you don't care about
has to be handled.
And most software nowadays doesn't care about those details.
So you would waste a bunch of time
defining or proving properties
about your architecture, your data types,
that really are not interesting for the end product.
We have lots of cases where you use technology and it has some unexpected behavior,
like you have an application,
you try to launch it, and then it freezes, or it's just slow,
or the input on your screen does not work as expected.
Some tiny things that are a bit annoying,
like you connect your Bluetooth speaker, and it takes a while,
and then it doesn't connect, and then you turn it off,
and you turn it on again, and since you've reset the state,
it's back in its initial state,
and therefore the state transitions
are much more predictable,
and it then goes to the correct state where it connects.
All those are tiny problems
that could be very easily solved
if we had a better way of checking
for those state transitions.
And right now we don't really have ways of dealing with state transitions.
We can write pure functions, for example, in Haskell,
and deal with state,
but we cannot prove that all our state transitions are correct or make sense.
And that's one thing that dependent types allow you to express is,
if I have a state transition from A to B, it is not possible to call this function with something
else than an A. Therefore, I will always go to B. Therefore, there's no crash between A and B.
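Swift has no dependent types, but a weaker version of that guarantee can be sketched with phantom types, so that a transition function simply cannot be called from the wrong state. All names here are illustrative:

```swift
// Phantom types: empty enums used only as compile-time state markers.
enum Disconnected {}
enum Connected {}

// The State parameter is never stored; it exists only in the type.
struct Speaker<State> {
    let name: String
}

// The signature says: you can only connect a Disconnected speaker,
// and the result is always a Connected one.
func connect(_ s: Speaker<Disconnected>) -> Speaker<Connected> {
    return Speaker<Connected>(name: s.name)
}

let off = Speaker<Disconnected>(name: "kitchen")
let on = connect(off)
// connect(on)  // compile-time error: wrong state for this transition
```

Uncommenting the last line is a compile-time error, which is a statically enforced version of the "no crash between A and B" property, though without the full expressiveness of a dependent-type proof.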
Yeah, I hope we get there. I think that dependent types will start
appearing in places, at least that's my hope, and it'll let us encode these concepts
and have them verified at compile time. So I want to be conscious of your time. I think we've
already gone over. Thanks so much for talking to me, Andre. It's been great. It's been great to talk to you too.
So that was the interview. If you liked this episode, do me a huge favor and think about
who else might like it and share it with them. For me, sharing a tech podcast that I like just
means sharing it in my company's Slack group. There's an off-topic channel and I just throw
it in there. So if it's the same at your work, yeah, share it out. Right now, the main thing I'm trying to do
is just grow the podcast listenership.
Until next time, thank you so much for listening.