The Changelog: Software Development, Open Source - The Roc programming language (Interview)
Episode Date: June 11, 2025. Jerod chats with Richard Feldman about Roc – his fast, friendly, functional language inspired by Richard's love of Elm. Roc takes many of Elm's ideas beyond the frontend and introduces some great ideas of its own. Get ready to learn about static dispatch, platforms vs applications, opportunistic mutation, purity inference, and a whole lot more.
Transcript
Welcome everyone, I'm Jerod and you are listening to The Changelog, where each week
we interview the hackers, the leaders and the innovators of the software world to pick
their brains, to learn from their failures, to get inspired by their accomplishments,
and to have a lot of fun along the way.
On this episode, I'm joined by Richard Feldman, creator of the Roc programming language and
author of Elm in Action.
Roc is a fast, friendly, functional language inspired by Richard's love of Elm.
Roc takes many of Elm's ideas beyond the front end while introducing some great ideas
of its own.
Get ready to learn about static dispatch,
platforms versus applications, opportunistic mutation,
purity inference, and a whole lot more.
But first, a big thank you to our partners at fly.io,
the public cloud built for developers who ship.
We love Fly.
You might too.
Learn more about it at fly.io.
Okay, Richard Feldman and Roc on The Changelog.
Let's do it.
Well, friends, Retool Agents is here.
Yes, Retool has launched Retool Agents.
We all know LLMs.
They're smart.
They can chat, they can reason, they can help us code,
they can even write the code for us,
but here's the thing, LLMs, they can talk,
but so far, they can't act.
To actually execute real work in your business,
they need tools, and that's exactly
what Retool Agents delivers.
Instead of building just one more chatbot out there, Retool rethought this.
They give LLMs powerful, specific, and customized tools to automate the repetitive tasks that
we're all doing.
Imagine this, you have to go into Stripe, you have to hunt down a chargeback.
You gather the evidence from your Postgres database, you package it all up and you give
it to your accountant. Now imagine an agent doing the same work, the same task in real time and
finding 50 chargebacks in those same five minutes. This is not science fiction. This
is real. This is now. That's Retool Agents working with pre-built integrations in your
systems and workflows. Whether you need to build an agent to handle daily project management by listening to standups and updating JIRA, or one that researches
sales prospects and generates personalized pitch decks, or even an executive assistant
that coordinates calendars across time zones. Retool Agents does all this. Here's what
blows my mind. Retool customers have already automated over 100 million hours using AI.
That's like having a 5,000 person company working for an entire decade.
And they're just getting started.
Retool Agents are available now.
If you're ready to move beyond chatbots and start automating real work, check out Retool
Agents today.
Learn more at Retool.com slash agents.
Again, Retool.com slash agents.
Well, today I'm joined by Richard Feldman from Roc, the programming language. That's R-O-C. Richard,
when we first met you were rocking Elm. You were knee-deep in Elm. I assume there's some sort of
lineage involvement. Roc is your new, I'll call it new, new-ish language, and is it inspired by Elm?
Is it based off of your love of Elm? Do you still love Elm? Tell me the story.
Yes, yes, yes, and yes.
So Roc is, I say it's a direct descendant of Elm.
It came out of my love for Elm,
but also wanting to do different things.
So Elm, for those who don't know,
is a language that compiles to JavaScript.
I've given a ton of Elm talks over the years.
If I were still doing front end development,
that would be the first thing that I would reach for
if I were doing like big complicated web apps. These days I'm doing Rust at Zed. So I've kind of
gotten out of the web dev game. But yeah, I mean, Elm is a wonderful programming language. It's got
a really great design. It's got a really great compiler. And the original motivation for creating
Roc was like, but all it does is front end web development. And there's so much more to programming.
And I wanted something where I could get an experience like that, where
I have a language that's very simple, very nice, really focused on ergonomics.
Like, it's funny, because now I work in Rust all day and people talk about,
oh, Rust has such great error messages.
And a lot of people don't know where that came from. But if you read the
blog post where they originally introduced, like, hey, we're redoing the
error messages.
Yeah.
They cite Elm, like, we want to try and be like Elm. And Elm for me is still the gold
standard of, like, the nicest compiler error messages, like the friendliest compiler.
And that's like a really strong value for Roc too, except that Roc is for, like, I
like to say the long tail of domains.
So like not just servers, that's like kind of the big one that people always talk about.
But also like command line apps, native GUI apps, even like really esoteric stuff,
like extensions for code editors or robotics.
People have made like a clock,
like a physical clock that like has these flaps
that go up and down and they use Roc to program
the logic for like changing the flaps
to show different numbers.
Yeah, the goal is to make a language
that's intentionally usable for lots
and lots of different things.
Right.
I think Elm will go down in history as a super-niche language.
I know it's still around.
I'm speaking as if it's a postmortem,
but I think, you know, on its Wikipedia page,
like, it changed the world in a really good way.
And I think it did break ground with regards to,
well, like compiler DX, I'm not sure what you call it,
but like really caring about the ergonomics
and the experience of using the compiler.
And it was game changing and then everyone's like,
oh, we should totally do that.
It's like, yes, please steal these ideas.
Cause Elm has a lot of stuff figured out
that many people weren't thinking about back then.
Yeah, it reminds me of,
I wanna make sure I get the right band name. I think
it's the Velvet Underground. It's this band where it's like, it's not that the Velvet Underground
is like this top-selling artist of all time, but rather that, like, a million different bands
that were really, really successful and were top-selling artists pointed to them as, like,
you know, inspiration for this or that aspect of their music. Yeah, I think we're sort of like, there was a moment where Elm's rise in popularity was like,
oh, maybe this is going to take over the world. I think at this point we can be like, no,
I think the idea that Elm was going to take over the world is definitely a past tense idea at this point.
Still lots of people very happily using it who are doing web development.
I'm not really plugged into the community anymore just because I'm not doing that type of work anymore.
But like, it's definitely like,
this is a solid like niche with its own community
of happy people.
But I think it's safe to assume that like,
it's gonna stay niche.
It's not gonna like, you know, take over.
It's not gonna replace TypeScript, you know.
Right.
Yeah, there's even a saying for that phenomenon,
the Velvet Underground phenomenon.
In sports, it's called your favorite player's favorite player.
And it's like the person who,
it's kind of like the actor's actor,
where there's like certain actors who aren't A-listers,
but they just have the respect of all the acting community
because they're so good at what they do
and they never made it into stardom,
but they're just high quality and solid.
And they deliver in all these different ways
that everybody respects them.
And there's baseball players that are the same way.
There's probably bands that are the same way,
like Velvet Underground.
And then there's programming language,
where it's like, you know what?
This is a language that your favorite language's
author really respects.
And that's cool.
And it's like, you know,
even if you're not gonna use it personally,
it's like, you should go study it
cause you should go learn like how to do this thing really, really well.
Right.
And so you were not the creator of Elm, right?
Evan.
I can never say Evan's last name.
Shoplitsky?
How do you say his last name?
So he pronounces it Choplicky.
We can go on a micro tangent about this.
Like when I first met him, we like sat down at a cafe in San Francisco and I was like, hey, so how do you say your last name? And he was like, I'm not sure. And I was like, I didn't, I didn't expect that answer.
How old are you?
Well, and he explained that like when he grew up,
everybody said Czeplecki, but then he had very recently,
just by coincidence of timing, been to a conference.
I think it was in Poland.
And I guess it's like a Polish last name.
And they were like, apparently they would say like, Cieplicki.
And he was like, so I'm not sure anymore
because it's like, well, the way I've been saying it
my whole life is apparently not the correct way.
So, you know,
I think he still goes, yeah.
But he actually, there's a conference talk of his
where he starts off by introducing himself.
And he goes like, hi, I'm Evan Cieplicki.
You might say it differently,
but then he goes on, like, it's a conference in Europe.
And I was like, oh, I know what he's talking about there.
Brilliant guy, Evan.
Like I said, game changing programming language
and runtime or environment for front end development.
But you were like the evangelist to a certain extent.
Like you just fell in love with it.
Your business used it, you know, in production, et cetera.
And so you very much became a mouthpiece
or a promoter of Elm.
And now that you're not doing front end as much,
like Roc is your thing.
And so I guess they've lost kind of a promoter in that sense.
But was the idea like,
I'm gonna now put all my weight behind Roc
and is it gonna be the next big thing
that's gonna take over?
Or is it gonna be your favorite programmer's
favorite programming language? What was the idea there?
Well, so as far as, like, my aspirations for Roc, I mean, pretty directly, like, this is a language where
I'm like, the goal is to use it in industry and have it be a successful language, and I would
measure success by, like, people are actually using it at their jobs and are loving it and are, you
know, getting stuff done with it. As far as, like, maximum popularity,
it's like sky's the limit.
If this ends up being a top 10
or a top one most popular programming language
in the world, great.
But my focus is like,
I'm not trying to get there by like hyping it,
but rather I'm trying to get there
by making something that is just really great
that people love using.
But the aspiration is like pretty clear.
It's like, this is not a hobby project.
This is not just for fun.
It's like, no, we wanna make something
that's really, really high quality
and that people really love using.
Okay, so how far are you along that path
to total world domination?
Do you have people using it in production?
I'm sure you're using it in some sense.
So I guess you could say that we're using it
like on the Roc website, like, you know.
But not working for Zed.
You haven't convinced Nathan that Roc is the way,
the future.
Hmm.
I have to be careful what I say here.
We're not using it at Zed.
I can say that.
But I think there is a distinct possibility
that either Zed or something like Zed could find a good use for Roc
when it's ready.
So you ask like, you know, how far along
are we on this journey?
There actually are people using Roc
in production right now, very, very small group
but we've sort of tried to actively discourage that
just because it's, we know that the language is just like not
at the level of robustness where I would personally be like,
oh yeah, like go out and use it.
The person who is using it, it's actually a consultant who's, like, done a number of different projects with it
and is happy with it and still using it.
But we did recently decide to undergo a rewrite of the compiler.
The compiler is about five years old.
We've learned a lot about the implementation.
There's a lot of things we want to do differently. And we basically sat down at some point and we're like,
okay, what projects do we want to do next? We have a bunch of different contributors
to the compiler. And one person was like, okay, we need to rewrite this part for this goal.
And the other person was like, oh, let's, we need to rewrite this part for this goal.
And eventually we sat down and we're like, wait, so we're going to rewrite like 90% of the compiler.
Why don't we just start fresh and just like get where we want to go.
And then finally have a numbered release. Because in order to communicate, hey,
we're still a work in progress.
Things are still changing.
We've intentionally held off.
And so far, even though we're like 30,000 commits in and a whole bunch of
GitHub stars and downloads and people trying it out, we still intentionally
do not have any numbered release.
We've never said here's version 0.1.0.
So that's the milestone that we're working towards next
is like with the rewritten compiler,
that's gonna be 0.1.0.
That'll be our first numbered release.
And we're looking to do that probably not
by the end of 2025, probably sometime in 2026.
The milestone we're working towards now
is we wanna have the new compiler able to be usable
for Advent of Code 2025,
which means, like, it's not feature complete
or at feature parity with the old compiler,
but it is at the point where people can actually try it out
and like, you know, get some value out of it.
So that's what we're working towards right now.
Very cool.
Well, one thing I do when I check out new languages
is I like to go, first of all,
I'll do like the little playground of course,
and you have Roc compiled to Wasm.
So it's just sitting there in the browser.
So that's really cool.
You can just start typing and hitting enter.
You got a REPL right there on the homepage.
And then I go to the FAQs because I love to see
like kind of the hot takes or the spiciness
because a lot of times you're answering
what people are asking, like what they consider to be bugs,
but you consider features or whatever it is, right?
Why doesn't it do this?
Why did you make this design decision?
And you have a really nice FAQ right there on the website.
And so many things that you say no to,
and like this really intentional way,
like we have no plans of ever doing this.
For instance, a maybe type or an optional type,
no plans of Roc self-hosting, you know,
it's written in two languages to a certain extent
and like that's the plan for now.
And you're just very straightforward with like,
no, we're not gonna do that.
And I'm curious, maybe share some of those strong opinions
and where they came from.
Cause you seem like you really know what you want
in this programming language.
Yeah, for sure.
And I think, you know, the design of the language
has evolved, like it started off being so similar to Elm
that actually instead of writing a tutorial, like, here's how to program Roc, there was this document, which actually I think
is still in the repo maybe, that was just called Roc for Elm Programmers. And it was like, look,
here's what's different. Like, that was it back then. But now it's evolved so much that Roc
very much has a very distinct identity. And I think if you look at Roc, like Roc code and Elm code side by side,
I think you would see some structural similarities.
Like for example, the standard library,
like the API design and the standard library,
I think Evan did an outstanding job with API design.
That's like one of the things he's really, really great at.
I guess he's underrated there; people talk about, like, the compiler DX
and nice error messages and stuff, but yeah,
he's really, really good when it comes to designing simple APIs that
are also like very good at being reliable and like help you out with edge cases
and stuff like that.
I think the API design, you would see a lot of Elm, like very, very strong Elm
influence in those like simple, but reliable APIs.
But if you look at the code on an individual level, one of the most striking
things that's different is that Roc now has a syntax that
looks a lot more mainstream. There's a variety of reasons for this, but like really, really
simple example of this is like, if you look at the old homepage and actually, I guess
the current homepage still has a little bit of this. In Elm, if you're like, I want to,
let's see, what's a good example. I want to do like a, this is a functional programming language.
So you would do like a map over a list and say like,
I've got a bunch of names
and I want to uppercase all the names.
So in Elm, you would say capital-L List.map.
That means like go to the list module,
get the map function space.
Then the transformation function you want to put in there,
like, you know, capitalize them, whatever,
like a little like anonymous lambda or something like that.
Sure.
And then space.
And then the last argument would be the actual list
that you want to map over.
In Roc, it looks pretty much identical
to what you would write in like TypeScript or something,
except the lambda syntax is slightly different.
But it's like, you'd say lowercase-l list,
'cause that's, like, the name of the variable,
dot map, where it looks kind of like a method call. It's not, Roc is not object-oriented,
but the syntax in the new compiler does give you that sort of visual appearance of like,
like method style calls with parentheses, you know, around the arguments, just like you would
see in most languages, commas, separating the arguments. So it visually now looks a lot different from Elm,
even though to me, the much more important thing
is sort of the semantics under the hood.
And that part feels a lot more like Elm.
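To make the contrast concrete, here's a rough sketch of the two styles. The new Roc lambda syntax and the Str.to_uppercase name here are my approximations for illustration, not verbatim from the docs:

```roc
# Elm style: module-qualified call, transformation function first, list last
#   List.map String.toUpper names

# New Roc style (approximate): reads like a method call, but it's just
# sugar for List.map(names, fn), resolved at compile time
names.map(|name| Str.to_uppercase(name))
```

Same operation either way; only the call-site syntax differs.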
So yeah, I mean, depending on your perspective,
like one thing I've learned
when we made that syntax transition
is that syntax is obviously a very polarizing thing.
Some people look at that and they say,
oh no, this doesn't feel like Elm, which I love.
Like, ah, this, and like, obviously I'm a huge fan of Elm.
I'm not trying to do something different
for the sake of like wanting to move away
from something that I love.
But at the same time, there's also a lot of other people
who are like, oh, I will actually consider this language now.
Like for a lot of people, syntax being a lot different
from what they're used to is just an absolute deal breaker.
And actually syntax was not the main motivation behind this.
It actually was a semantic thing,
which I'm happy to nerd out about
if you're interested in that.
But it was pretty clear that if we wanted to get
the semantics that we wanted,
we had to make the syntactic change too.
It just would not work with the old syntax.
So from my perspective, it's sort of a moot point.
I'm just not that attached to the syntax
so much as I am the semantics.
And in order to get that,
we sort of needed to make the change.
But you know, if someone like is,
is getting their first impression of the language,
it's now gonna be quite different
than the first impression they would get from Elm
just because the syntax is so much more mainstream looking.
So hovering on that,
and I'm fine with getting into the semantics
because it sounds like you want to
and let's go there.
When I'm thinking about the difference there,
the one being like some sort of type or module dot map,
and then you're passing in the actual list
as well as a function to map over it.
That's one thing, but then you mentioned that now Roc
has this, you know, lowercase list dot map
where the list is a variable that holds the list itself.
Now you're calling map and you're passing it what?
Just the function?
Do you still have to pass a collection?
Is the collection, how does it work?
Nope, just pass the function, yeah.
It's just like in JavaScript.
So that does feel object oriented to me,
where it's like, do this map on me.
I'm a list, do it on me.
I'm an object of collection type or something.
Yeah, how does that work?
Yeah, so the feature name is static dispatch.
And the basic idea is this.
So in JavaScript, what's actually happening there?
So JavaScript is object-oriented with prototypal inheritance.
And so what that means is you have an object,
let's say it's named list.
Maybe I should choose a different name for that,
for example.
Let's say it's called numbers.
So it's clear that we're talking about a variable here.
So you have numbers.
Well, that's not good if they're strings.
Let's call it names.
Okay.
You have names and they're all strings
and you want to uppercase the names.
Okay.
So names is the name of our object in JavaScript.
When you call names.map,
what's happening there is it's going up at runtime,
up the prototype chain.
It's looking at like,
there's some runtime information inside names.
And it's like, oh, names has a prototype.
And it keeps looking at the prototype until one of those classes in the prototype has an actual
map function on it or a map method on it. And then it says, oh, I found one, great, I'm going to call
that one, passing in the one argument, which is the function that is going to do the mapping
operation and then inside that method there will be a sort of this that's automatically bound
based on the variable itself.
And then it's going to sort of do its magic.
That's how it works in JavaScript.
In other object-oriented languages,
usually you don't have prototypal inheritance.
You have like, you know, classical inheritance with,
you know, formally defined upfront classes
and stuff like that.
And JavaScript does support that now
and so on and so forth.
And Roc is totally different.
It looks the same,
but what's going on under the hood is way, way simpler.
So Roc is a functional language.
And the way that we,
like one of the most important values
of functional languages is it's like,
everything's done with functions.
We have plain functions.
We don't have a first-class concept of classes or methods
or any of that.
So the way it works in Roc is super simple. It's like, okay, I have this thing called names.
Names has a particular type. The compiler knows what that type is, because we're a statically
type-checked language and we have type inference. So you don't need to annotate any of your
types actually, if you don't want to. You can, we have what I like to call a hundred
percent type inference where all type annotations are completely optional. You can just never
annotate anything if you want. It'll be totally fine. Everything will still work.
But the compiler can always figure it out.
So it figures out, oh, I see that names is a list.
That's like what we call the data type, just like Python,
where if you have the square brackets,
that's a list in Roc.
Like, JavaScript calls it an array.
Python calls it a list.
Roc calls it a list.
Right.
So, okay, the compiler has figured out this is a list.
Well, that list type is defined in some module somewhere.
Some module declares this is what a list is.
The compiler says, okay, names is a list,
let me go look up the list module.
They say, okay, does that list module
expose a function named map?
If so, great, call it.
Passing the names as the first argument to that function,
and then whatever other arguments you gave to it
get passed as the other arguments.
So inside the list module, you have a function named map.
The first argument is the list to map over,
and the second argument is the function, and that's it.
So it's a really, really simple way to say,
like, I want to just take this thing
and just call some function on it
without actually having to declare
what module that function lives in.
That's all sort of inferred by the compiler,
but there's no inheritance.
There's no like prototypes, there's no classes,
there's no any of that.
It's just like plain old,
completely ordinary functions in modules.
And then you just like call them in a syntax
that looks similar to like OO methods,
but it's really just the compiler being like,
here's how I decide what function to call.
Right.
And that's static dispatch.
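As a hedged sketch of that resolution rule (the Point type, its module, and the scale function are invented for illustration, and the syntax approximates the in-progress new compiler):

```roc
# Point.roc: an ordinary module exposing ordinary functions
Point : { x : F64, y : F64 }

scale : Point, F64 -> Point
scale = |p, factor| { x: p.x * factor, y: p.y * factor }

# At a call site elsewhere, these two lines are the same call:
#   p.scale(2.0)        # static dispatch: the compiler infers p's type,
#                       # then looks up `scale` in the module defining it
#   Point.scale(p, 2.0) # fully qualified; identical behavior
```

No classes, no inheritance: the dot form is resolved entirely at compile time from the inferred type.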
Gotcha.
So in layman's terms, it's really,
it's just kind of like reorganizing the order of calls.
And it's like, here comes a list.
Does the list module have a map function?
Yes.
Call the list module's map function
with this list as the first argument,
and then take whatever they passed
and make it the second argument.
And if you want, you can literally call it
in like Elm style if you want.
You can say, like, capital-L List,
for the list module, dot map,
passing names as the first argument,
and the function as the second argument
does exactly the same thing.
The first way is just a shortcut for that.
Now the semantic thing that we wanted to get out of that,
here's the really cool thing that that gets you, is that now let's say we have a convention of having a
function in your module named equals, and you want to define that to be, you know, equals, like it does
the equals thing. So now we can make double equals, the operator, just desugar: if you have A double equals B, that just desugars to A dot equals, like written out
e-q-u-a-l-s, parentheses B, and that's it.
And so now we have the concept of just custom equality
for everything.
And how do you make custom equality?
It's like literally when you're defining your new type
in your module, just write a function named equals
that takes, you know, like two of these things
and then that's it.
And there's no restrictions on like, oh, equals means it has to be like, you know, like returns a boolean and this and that.
It's like, no, no, it just, it just sort of works by the fact that, you know, the way that you want equals to work is like everybody is going to sort of use equals in the obvious way.
But there doesn't have to be this like really formal declaration of like, oh, equals is a trait. And you have like, I mean, if you want, you can do like a type alias, you
don't have to write it out all the time. But all of this is just like that one simple semantics
for the dot means that you get stuff like custom equals. You can also do operator overloading
where, like, A plus B desugars to A dot plus, parentheses B. And now, great, if you ever want to do custom
plus, just implement a function named plus,
expose it, and you're done.
So, yeah, it's a super simple design
that unlocks all these things that in most languages
require a lot more formality and a lot more
like different language concepts.
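A hedged sketch of what that looks like for a user-defined type (Money, equals, and plus are invented for illustration; the desugaring is as described above, but the exact syntax may differ):

```roc
Money : { cents : I64, currency : Str }

# `a == b` desugars to a.equals(b), which resolves to this function
equals : Money, Money -> Bool
equals = |a, b| a.cents == b.cents and a.currency == b.currency

# `a + b` desugars to a.plus(b); exposing `plus` is all the
# "operator overloading" machinery there is
plus : Money, Money -> Money
plus = |a, b| { cents: a.cents + b.cents, currency: a.currency }
```

They're just two more entries in the module's flat list of functions.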
And it's really easy to teach.
Like you just learned 100% of what there is to that feature.
Right. Like, that's all there is to it, but it just gets you all these things. And yet it's still just
all functions. Like one of the things that always annoyed me a little bit about, I'm going to use
Rust as an example, because I like Rust. I use it all day at Zed. In Rust, you have like this sort
of trait hierarchy, like equals, you know, you can also customize equals in Rust, but it's like,
okay, you're gonna have to say, my type implements equals, and then maybe there's like different implementations
of particular traits for like different
like trait type parameters and stuff like this
and yada yada.
And if you look at the documentation
for a particular module in Rust,
you'll see all these, like, trait implementations listed.
And it's all feels much more complicated
than just the Roc style where it's like,
hey, there's just a flat list of functions in this module.
That's it.
They're totally ordinary functions. And like, yeah, one of them happens to be
named equals and one of them happens to be named plus or whatever. But they're not special.
They're not magical. They're just like, they're just there. That is your API. That's what's
available in this module. It's all in one place and it's all just plain functions. And
yet you can still use them in more interesting ways just by virtue of like whether they're sort of designed
to work with dot or not from the API designer's perspective.
That is interesting.
So we're down here on this tangent,
but the way that we got here was because you were saying
how you've allowed user feedback
and other people's interests move rock in certain directions
because you weren't so stuck on being exactly like Elm in every case, but you want to do it people's interests, move rock in certain directions
because you weren't so stuck on being exactly like Elm in every case, but you wanna do it a different way.
So you were showing how you haven't had strong opinions,
I guess, on everything,
but you do have some strong opinions as well.
There's things like, you know, Roc is not self-hosted.
That's kind of like a milestone for nerdery.
It's like, when a language self-hosts, now I can finally use it.
But you're like, no, we're not gonna do that.
So that's just an example of one of the things
that you're opinionated about.
You wanna talk about that one?
You wanna open up broader about your other opinions?
Yeah, I mean, I'm happy to talk about that and others.
So the self-hosting thing,
for those who aren't familiar with that term,
self-hosting is where you basically take
a programming language that you're working on
and you rewrite the compiler in that language. So for example, with Java: before Java existed, the compiler was necessarily written in something else.
Eventually it got to a level of maturity where they decided to rewrite Java's compiler in Java.
Most languages do this. It is super common to self-host.
We have a FAQ entry about why we are intending to never self-host Roc's compiler, and the bottom line is just performance.
It's really, really important to me that Roc have an extremely fast compiler.
In order to get a compiler that is maximally fast,
you need to have certain language features available that are memory unsafe, to be blunt.
You just need more control over memory than what is possible if you want to have,
like Roc is, a language
with automatic memory management as the only way of doing things.
So if we want a maximally fast compiler and we want to self-host, we need to add features
to Roc that would make it less safe, which I don't want to do.
That's like a non-goal for the language.
Actually, if anything, we're going in the opposite direction. We can go on a tangent about
that too.
But like Roc has a lot of
safety features that are unique. Like I don't know of any language that gives you as strong safety guarantees as Roc can give you about like trusting third-party code and stuff like that.
And yeah, we can go on a tangent about that. But basically, like, if we wanted to self-host and
be as fast as possible, we'd have to contort the language in ways that I think would be bad for the language. And if we wanted to self-host and be okay with
a slower compiler as a result of that, then we're also working against a different goal,
which is like, we really, really, really want you to have a great experience with a compiler.
And a really big component of compiler experience is how fast is the feedback loop? We recently,
in the rewrite of the compiler,
which is actually in Zig instead of Rust,
but these numbers are not because of Zig versus Rust.
We were seeing something along the lines of,
it's about three to five times as fast as the old compiler
in some benchmarks of like,
we just took some like module in the standard library
and ported it to the new compiler
and like looked at the before and after.
And it was something like five or six million lines
of code per second that it was able to parse.
And I think, yeah, parse maybe and also parse and format.
And that was counting a lot of lines of comments
because this is a standard library thing.
So it's pretty heavily documented.
And if you take out the comments,
it's still parsing two to four million lines
of code per second.
And that's a real world, real module.
It's not like we just ginned up some fake thing. It was like, this is just how it was
written. We ported it to the new compiler, and those are the numbers we got.
Now that's parsing. Parsing is a pretty small percentage of like overall compilation time.
But I guess it gives you an idea of like, this is a really, really strong value for
us. And I should mention, by the way, that if anyone listening is interested in like
getting involved in a compiler, we're really contributor friendly.
Like, Roc has always been a very, like, I don't know, community-built project.
And because we're doing this big rewrite, there's actually kind of a unique opportunity
right now to like get involved if people are interested.
I'm happy to have anyone who wants to get involved in an exciting, high
quality, high performance compiler.
Awesome.
Check the show notes, figure out how to get involved,
everything will be in there.
So is it safe to say that, well, do you describe
Roc as a general purpose programming language?
Is that fair or no?
I think that's fair, but also it's not how I describe it,
just because I don't describe anything
as a general purpose language.
I think like Evan kind of, I forget how he said it,
but he made a really good point about this,
which just kind of stuck with me, which is basically like,
like every language is good at some things
and worse at others.
Like you can be like,
Python's a general purpose programming language.
It's like, oh cool,
so you're going to make an operating system in that?
It's like, well, no, not like that.
Well, but like C you would make an operating system.
It's like, yeah, sure.
It's like, oh, okay.
So C is a good choice for scripting.
No, no, no, not like that, right?
So it's like, what does general purpose really mean?
It's like every language is good at some things
and worse at others.
But I do think it's fair to say
that we wanna cast a very wide net.
Like I do want Roc to be good for scripting.
That's one of the reasons
that you'd never have to write type annotations.
I have an intention of writing like a blog post
at some point comparing, like, Roc and Python
and maybe like TypeScript and Go, I don't know.
TypeScript is probably more reasonable just to be like,
here, if you write this script in Roc versus Python versus TypeScript, here's how they compare in terms of conciseness,
which is important for scripting, but also in terms of what's the developer experience? What
happens if there's an error? How likely is it to go off the rails and blow things up? All things
that you potentially care about. And actually, for scripting in particular, there is a really
cool thing that we can do that's uniquely Roc, which is: if I get a script from the internet, and this is always like
the big concern with downloading scripts from the internet, everyone's like, never download
and pipe to bash because you don't know what's going to blow up your computer.
We actually can prevent that. Like there is a way: you can download a Roc script that
you got off the internet, and by changing one line of code in the script from what the author
gave you, or maybe the author did this for you, one line of code change, and you can be like,
I do not care what's in the rest of this script.
I know for sure, 100 percent,
guaranteed, no exceptions.
When this script runs,
it will prompt me every time it's going to do something to
my file system or do something that I might not want the script to do.
It has to prompt me and
the script author has no way to work around that.
I, as the runner of the script,
all I have to do is change one line,
I don't even need to read the rest of the file,
I'll run whatever you gave me
and I don't have any fear that it's gonna break my computer
because there's this way of basically
sandboxing the whole script,
even if the author did not want it to be sandboxed
and was trying to do something malicious.
Interesting, so let's say you have a .roc file out there on your server
and I go ahead and download that with my Roc program.
Yes.
Do I then like prepend this line into the file
and then execute it or like,
how do you actually get that done?
Something like that.
You know, we can go on a big technical tangent about this,
but so basically Roc has this concept of,
we call it, platforms and applications. And the basic idea is this: every Roc program is designed to be sort
of embedded in, you could think of it as kind of like a framework, although a platform is
not really a framework because it's got more responsibilities than that. It's got sort
of a bigger scope than a framework, but it kind of comes from this observation of like,
if you're building a web app, let's say in Ruby,
you're probably building it on like Rails
or Sinatra or something.
You're not really starting from scratch
and, like, an empty .rb file, no dependencies, you know.
You're gonna build on top of something.
If you're building a game,
you're probably gonna build it on a game engine.
You might like, I know some game programmers
are really hardcore and start from scratch,
but like usually you're building on top of something
that was already there.
So we sort of formalize that.
And because we formalize that into a first class language
concept, there's always a thing you're building on top of.
There's a bunch of really cool benefits that we get out of this.
So the platform is basically like, here is a Roc API.
So this is like the equivalent of your Rails API.
Here's what Rails exposes.
Here's the concept of model, view, and controller
that Rails has and so forth.
And then under the hood, there's a lower-level language that's implementing a whole bunch
of stuff, including all your IO primitives.
So for example, one of the things that's cool about this is that it means that you can get
sort of this domain specific experience, which I loved from Elm, but it's now applied to
more different domains.
So really simple example of this is if I'm doing a CLI app, I really want that CLI app
to have, like, write to standard out.
That's an obvious IO thing that I want to do.
And I also want like read from standard in.
That's also a really common thing to want to do in a CLI.
But if you're doing like a GUI app, like a desktop app, like, you know, like a music
player or something like that, do you want standard in?
Is that like a thing?
Like, no.
Why would you want that as an IO primitive?
That's if anything, kind of a foot gun that like,
if I have some dependency that's blocking on standard in,
that's just gonna be a bug.
On a web server, do you want standard in?
Like, do you care about that?
So the point is that like there's different IO use cases
for different domains.
A really clear example of this is in the browser,
like you mentioned that Roc compiles to WebAssembly.
If you're in the browser,
do you want like a standard library
in your language that has like file system IO in it?
Like Rust does. Like, Rust also compiles to WebAssembly,
but Rust's standard library is like, cool,
here's how you open a file and write to a file.
And it's like, I can't do that in the browser.
So if I have like any dependencies that I'm building on
that are doing that, it's like,
I don't actually know what they're gonna do in the browser,
but it's not gonna work
because the browser doesn't have a file system.
So what's cool about this is, because when you're building your Roc application,
you have to pick a platform and the platform not only gives you your domain
specific stuff, like if you're building a web app, you use a web app platform.
And it's like, here's how to do, you know, requests and responses.
And if you're building a CLI app, it's like, okay, here's main,
and here's how to do your IO for that.
But everything is sort of tailored to, like, the exact use case that you're building for.
If I'm building a web app, I use a web app platform.
And there are already several of those out there in the wild and in the Roc ecosystem.
I'm building a CLI app, I do this.
If I'm building a graphical app, I do this.
You know, game app, right?
Use this like game engine platform.
Getting back to the scripting use case.
So what you're doing there is you're swapping out the platform.
So what you can do is, if somebody gives you a Roc app, like a .roc file,
that's a script that they want you to run, it has to say, here's my platform. And
let's say they use some generic scripting platform. The one line of code that I change is I swap it
out for something called, like, SafeScript, which is an API-compatible platform with the one that
they used. So everything's still gonna work,
except that SafeScript has for its IO implementation
under the hood, all of these like,
yeah, if you ever tried to read from, like, /etc/passwd,
I'm gonna stop the script and prompt
and there's no way the script author
could do anything about that.
Like we intentionally do not have like an arbitrary
like FFI foreign function interface
where like the script author can be like,
ah ha ha, I'm gonna sneak in some C code in here
and get around your sandbox. It's like, no, no, no,
literally all you get is what the platform provides you.
And so if I, as the consumer of that,
just swap out the platform for one of these API-compatible ones,
there's absolutely nothing the script author can do
to get around that.
And there's no escape hatch that they can use
to circumvent that.
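The swap-in-a-sandboxing-platform guarantee can be sketched with a small analogy in Rust (no Roc code appears in this conversation, and every name here, ScriptPlatform, SafeScriptPlatform, read_file, is made up for illustration). The point is that the script only ever sees the platform's API, so an API-compatible substitute changes the IO behavior without the script being able to tell:

```rust
// Hypothetical sketch of Roc's platform-swap idea, written in Rust.
// The script can only do IO through whatever platform it was handed,
// so the runner can substitute a sandboxing, API-compatible platform.
trait ScriptPlatform {
    fn read_file(&self, path: &str) -> String;
}

struct TrustingPlatform;
impl ScriptPlatform for TrustingPlatform {
    fn read_file(&self, path: &str) -> String {
        // Stand-in for real, unrestricted file IO.
        format!("contents of {path}")
    }
}

struct SafeScriptPlatform;
impl ScriptPlatform for SafeScriptPlatform {
    fn read_file(&self, path: &str) -> String {
        // A real sandbox would prompt the user before every IO call;
        // here we just record that the access was intercepted.
        format!("[prompted before reading {path}]")
    }
}

// The "script": all of its IO goes through the platform it was given,
// and there is no escape hatch around that.
fn script(platform: &dyn ScriptPlatform) -> String {
    platform.read_file("/etc/passwd")
}

fn main() {
    println!("{}", script(&TrustingPlatform));
    println!("{}", script(&SafeScriptPlatform));
}
```

In Roc the swap is done by changing the platform line in the .roc file itself rather than by passing a trait object, but the isolation property being described is the same.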
That's pretty cool.
So these platforms, are they effectively like
if I was gonna implement a platform,
am I just like, it's like an interface or an API
or is it like another Roc program that I run,
or how does that whole deal fit together?
So there's two pieces to every platform.
There's the public API,
which is what the application authors like consume.
So one of the hard design constraints of platform authoring
is that application authors need to not care what's going on
under the hood with the platform.
They need to just be like, I don't know.
Like, I only know Roc.
I don't know any other programming languages.
I only know what Roc code is.
And so I don't care what you're using behind the scenes, but if you are doing a
platform, like you are a platform author, you do need to know not only how to make
your public-facing Roc API that application authors are going to see, but also the lower-level piece, which has to be
written in some other language. So people have written these; we call the
lower-level part the host. People have written these in Rust and Zig. And I think maybe there
was a C++ one, but you can use any language you want. We also have some like, these are
really, like, hello-world-level things, but, like, Swift, if you want to do an iOS
app: in the Roc repo you can see these examples. Java, people have done, like, a bunch of different JVM languages
where you can just, like, write a .roc file and, like, hey, here it is running on the JVM.
One of the cool things about this is it means that Roc is also quite good at getting embedded
into existing code bases, because you can basically say, like, I'm just going to write a bespoke
one-off platform that's just, like, my whole code base, and I'm just exposing entry points for the Roc code. And now you can be like, oh, cool.
I have this big Java code base and I want to, like, write part of it in Roc because I think the
ergonomics are better or the performance is better. We also have a really strong value of
runtime performance. So we actually compete with, we want to be faster than Go, but not quite as
fast as Rust or, like, Zig or C++, because that would
require introducing memory unsafety, but faster than all the, like, I think Go is the
fastest currently garbage-collected language.
But yeah, so you can just sort of insert yourself into existing code bases.
And then, like, that code base can just kind of import a .roc file.
And one of the other things that's really cool about Roc is that we don't have, like,
a heavyweight runtime.
There's no like virtual machine or anything like that.
It basically compiles down to the equivalent of
if you're writing Rust code
and you're opting into Rust's automatic reference counting
all the time, that's very, very close
to what you're getting with Roc.
Rust programs can run faster than Roc programs
because in Rust, you don't have to reference count everything.
You can just like, in many cases, choose not to do that.
But in Roc, that's just sort of done for you automatically
behind the scenes, which makes Roc code a lot more concise
and a lot simpler than Rust code among other things.
But it does mean that like, you don't need to worry about
like, oh, am I going to have like, you know,
a JavaScript VM that's, like, running a Roc VM inside?
It's like, there is no Roc VM.
It's just like, you just call functions
as if they'd been written in C or Rust or whatever else.
And they just sort of slot in there.
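That "no runtime VM" embedding story can be illustrated in Rust. Here a plain Rust function stands in for what the Roc compiler would emit (the name roc_double is made up; this is a sketch, not Roc's actual ABI): the host just calls an ordinary C-ABI function, with no interpreter or marshalling layer in between.

```rust
// Sketch of embedding without a VM: compiled Roc functions are called
// like plain C-ABI functions. `roc_double` is a hypothetical stand-in
// for a function the Roc compiler might have produced.
pub extern "C" fn roc_double(x: i64) -> i64 {
    x * 2
}

fn main() {
    // The host calls the function directly, as if it had been
    // written in C or Rust -- no interpreter, no VM.
    println!("{}", roc_double(21)); // prints 42
}
```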
What's up nerds.
I'm here with Kurt Mackey, co-founder and CEO of Fly.
You know we love Fly.
So Kurt, I want to talk to you about the magic of the cloud.
You have thoughts on this, right? Right. I think it's valuable to understand the magic behind a cloud
because you can build better features for users, basically, if you understand that. You can do a lot of stuff,
particularly now that people are doing LLM stuff, but you can do a lot of stuff if you get that and can be creative with it.
So when you say clouds aren't magic because you're building a public cloud for developers
and you go on to explain exactly how it works,
what does that mean to you?
In some ways it means these all came from somewhere.
Like there was a simpler time before clouds
where we'd get a server at Rackshack
and we'd SSH or Telnet into it even
and put files somewhere and run the web servers ourselves
to serve them up to users.
Clouds are not magic on top of that.
They're just more complicated ways
of doing those same things in a way that meets the needs
of a lot of people instead of just one.
One of the things I think that people miss out on,
and a lot of this is actually because AWS and GCP
have created such big black box abstractions.
Like Lambda's really black boxy.
You can't like pick apart Lambda
and see how it works from the outside.
You have to sort of just use what's there.
But the reality is like Lambda's not all that complicated.
It's just a modern way to launch little VMs
and serve some requests from them
and let them like kind of pause and resume
and free up like physical compute time.
The interesting thing about understanding how clouds work
is it lets you build kind of features for your users you never would expect. And our
canonical version of this for us is that like when we looked at how we wanted to
isolate user code we decided to just expose this machines concept which is a
much lower level abstraction of lambda that you could use to build lambda on
top of. And what machines are is just these VMs that are designed to start
really fast, are designed to stop and then restart really fast,
or designed to suspend, sort of like your laptop does when it closes, and resume really fast when
you tell them to. And what we found is that giving people those parameters actually there's like new
apps being built that couldn't be built before specifically because we went so low level and
made such a minimal abstraction on top of generally like Linux kernel features.
A lot of our platform is actually just exposing a nice UX around Linux kernel features, which I
think is kind of interesting. But like you still need to understand what they're doing to get the
most use out of them. Very cool. Okay. So experience the magic of Fly and get told the secrets of Fly
because that's what they want you to do. They want to share all the secrets behind the magic
of the Fly cloud, the cloud for productive developers,
the cloud for developers who ship.
Learn more and get started for free at fly.io.
Again, fly.io.
So how close are you to catching Go
and when and how are you gonna catch it?
Well in terms of popularity or performance? No, in terms of performance.
So arguably we already have. So this was back in, like, 2021, we did a proof of
concept benchmark. I actually gave a talk about this at Strange Loop
called Outperforming Imperative with Pure Functional Programming, and
basically we did this benchmark where it was a quicksort on,
I think it was like a million 64 bit floating point numbers.
And we compared Roc, Go, Rust,
no, it might've been C++.
And yeah, C++ I think.
Roc, Go, C++, Java, and JavaScript.
And for that task, we just barely edged out Go.
We were slightly faster than Go,
and Rust, or C++, whichever it was, was faster than us.
Obviously, I do think in the new compiler we will probably be slightly slower than
Go at that one, like just on the other side of it, because one of the
optimizations that we used to get there, we decided was not worth it anymore,
like, given our experiences with it, and we're planning on dropping it
from the new compiler.
So I wouldn't be surprised if that one like put us
like slightly below instead of slightly ahead
on that one particular benchmark.
But that was always kind of a silly benchmark
in the first place because it's like, who cares about
like handwritten textbook quicksort with no optimizations,
you know, like no handwritten optimizations.
But it was, it was kind of a good sign.
I mean, basically the short answer for, like, how we can be
faster than Go is that, like Go, we do what's called unboxing data structures a lot. So if you have a construct in
Go which would be kind of the equivalent of, like, an object, like a plain, you know, unadorned object
in JavaScript, just some data, no, you know, runtime information or anything on it,
in Go, that's not a heap allocation. It's just plain old stack memory. It's just
really straightforward. Same thing in Rust, same thing in C++, same thing in Roc.
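Since the comparison is to Rust's behavior, here's what "unboxed" looks like there (a minimal sketch; the Point type is made up): a plain record has no heap allocation and no per-object header, and a vector of them is one flat allocation rather than an array of pointers.

```rust
// An unboxed record: just two f64s wherever its owner lives
// (stack slot, register, or inline in an array) -- no heap
// allocation, no runtime metadata attached to each value.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: f64,
    y: f64,
}

fn midpoint(a: Point, b: Point) -> Point {
    // a and b are passed by value as raw stack/register data.
    Point { x: (a.x + b.x) / 2.0, y: (a.y + b.y) / 2.0 }
}

fn main() {
    // A Vec<Point> stores the points flat in a single allocation,
    // not as pointers to separately heap-allocated objects.
    let pts = vec![Point { x: 0.0, y: 0.0 }, Point { x: 2.0, y: 4.0 }];
    println!("{:?}", midpoint(pts[0], pts[1])); // Point { x: 1.0, y: 2.0 }
}
```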
We also don't heap allocate our closures, which is very unusual for a functional language.
We have spent a lot of time making that happen. And that was one of the reasons for the rewrite
was that the approach that we used in the current compiler turned out to have some bugs that we
thought were just like, oh, we just need to squash these bugs and then we'll be fine.
It actually turned out the very last sliver of those bugs, which do come up, actually
are not fixable incrementally.
They require a really serious re-architecting of multiple parts of the compiler.
We do have a proof of concept of like, if you do that, that does fix it.
But the proof of concept is like a really tiny subset of Roc.
So it's like, okay, now we need to productionalize that for real.
But yeah, so basically like similarly to Rust,
we don't like box things by default.
We do automatic reference counting for memory management.
And we use LLVM much like Rust and C++ for compiler optimizations.
And LLVM is just really, really good at optimizing your code.
Go does not use LLVM.
So the fact that we are sort of like at parity in terms of like,
generally speaking, how we do memory management, like which things are heap
allocated and which ones are not, but we get LLVM on the other side, generally means that we can
just kind of be faster than Go. The one counterbalancing fact is that we do have more
functional APIs. And again, without going on another long tangent, which we can go on, I guess,
a quick one, if you're interested about our opportunistic mutation. There are API differences
that might mean that Go is faster than us at some things and also vice versa. I guess the brief
tangent about that is: opportunistic mutation is also something that we are unique among, like, industry-
focused languages for doing. There's some research languages that do it. So we're not the first to
come up with this. But the basic idea is this,
in a lot of functional languages,
you have like immutable APIs.
You're like immutable data is awesome.
It has all these really nice properties.
It's great.
But there's a downside that like,
if you try to do this in JavaScript, for example,
you're gonna be like defensively cloning stuff all the time.
Or automatically, right?
It's just like cloning, cloning, cloning,
like all these things like, oh, it's immutable,
but I need to get a new one.
So I'm gonna like clone the whole thing
except one thing is gonna be different.
That's extremely slow.
So what we do is I call it opportunistic mutation.
There's other names for it in like compiler literature,
like "functional but in place," but that's so long to say.
But the basic thing-
I like opportunistic mutation.
That's, it's long, it's impressive,
and it sounds like it rolls off the tongue.
I'm already saying it, so I like that name.
Yeah, so the basic idea is that it's like, if you...
So we do automatic reference counting.
Let's say that I'm like, for example,
I want to append something onto the end of a list.
First of all, in most functional languages,
list refers to like a linked list.
But for us, it's more like JavaScript or Python,
where it's actually like an array, like a flat array in memory.
Normally in JavaScript, like you're like,
I wanna append something onto the end of the list
and get back a new list at the end of the day,
that requires cloning the entire existing thing
and then like sticking something on the end
and now you have the new one.
Well, that's terrible for performance, obviously.
Much faster is to mutate it in place and just say like,
yeah, just stick something on the end, we're done.
So the CPU likes that, but as a programmer,
if you're trying to do everything with immutable data because you like the semantics of that and how things sort of
change together, and you don't have to like worry about like, wait, did it change at this point?
Or that point? It's just like, no, everything just kind of flows from one step to another.
How do you get the performance and the semantics? And our answer to that is this
opportunistic mutation thing. Really, really simple idea. It's basically like, at runtime,
we look at the reference count. And we're like, okay, if the reference count is one, guess what?
Nobody else is observing this thing.
So if I just go ahead and stick it on the end,
nobody's gonna know, like it's gonna be fine.
If the reference count is higher than one,
then I can't do that and I do need to clone it first,
but then the clone now has a reference count of one
cause I just made it.
So from then on, like the rest of the pipeline, it's all going to trigger the opportunistic mutation.
So basically you can think of it as like, it is potentially doing the like, you know,
clone your thing, but kind of our hypothesis, and so far this is proven to work out in practice,
was that like, in practice, the way you write your program tends to be you're only passing
around things that only are being used in one place most of the time,
if you're gonna be modifying them anyway.
So the amount of cloning that your program needs to do
actually should be extremely, extremely minimal.
And that's exactly what we've seen in practice.
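Rust happens to expose this exact refcount-is-one check through Rc::make_mut, so the behavior Richard describes can be sketched there (a minimal analogy; Roc does this automatically for every value, whereas here it's opt-in per call):

```rust
use std::rc::Rc;

// Opportunistic mutation via Rc::make_mut: mutate in place when the
// reference count is 1, clone first when someone else is still watching.
fn append(list: &mut Rc<Vec<i32>>, x: i32) {
    Rc::make_mut(list).push(x);
}

fn main() {
    let mut a = Rc::new(vec![1, 2]);
    append(&mut a, 3); // refcount is 1: mutated in place, no clone

    let b = Rc::clone(&a); // refcount is now 2
    let mut c = Rc::clone(&a); // refcount is now 3
    append(&mut c, 4); // refcount > 1: clones first, then mutates the clone

    // a and b still see [1, 2, 3]; only c sees [1, 2, 3, 4]
    println!("{:?} {:?} {:?}", a, b, c);
}
```

The semantics stay immutable from the caller's point of view (a and b are never affected), while the common refcount-of-one case pays no cloning cost.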
There are sometimes cases where it's like,
well, I wrote my program in this way,
but it was doing unintentional cloning.
And then I got it to be much faster
by like kind of refactoring it to just do things a little bit differently. So far that's
gone fine as far as, like, educating people when they're like, hey, my Roc program is
slower than I expected. Like, you know, why? And then we look at it like, oh, just write
it in this slightly different way instead of this way and it'll be totally fine. It
kind of remains to be seen like, I don't know, like, what does that look like if you have
a giant Roc code base? Is it like a problem or not?
But given the fact that a lot of people write
like JavaScript in like a functional style
where it's just cloning all the time,
clearly it's not like a deal breaker for some people.
But my hope is that what we've seen so far will continue to scale.
Where it's just like, yeah, you just get in the habit of writing things in
like a style that's totally reasonable and looks nice and is easy to understand
and just has these really nice properties.
And what you get is the performance of a language
that's mutating all the time,
but the semantics of a language that doesn't even have
a formal concept of mutation,
except behind the scenes as a performance optimization.
So when I, way back in the day,
I was writing some Objective-C
because I was trying to be on Apple's platforms.
I remember they added ARC, automatic reference counting,
at a certain point and either I was using it wrong
and which is probably the case or it would get screwed up
and maybe it's because you could do it manually
or automatically but like sometimes the count would get off
and things would get funky.
And I'm wondering if ARK is,
was that just a circumstance of like Apple platform
changing over time?
Is ARC solid? Or it's like, it's not gonna be,
if it says one, it's correct.
Like it's counting correctly no matter what
or could it be off and it thinks you as one
but there's actually something else referenced in it
and like the count got off somehow.
In our case, we actually have never seen that be a problem.
I mean, I guess there might have been like at some point
we had like a bug in like, you know,
our standard library or something.
Reference counting, one of the things that I've learned, like, the hard way, is that reference
counting if done manually is super error prone.
It's like, if you're trying to do the increments and decrements, that
was one of the hardest things in the compiler to get right in the first place: like, where
to insert the increments and decrements.
And on top of that, we also have optimizations that are like, oh, in this case,
we can see that it's gonna increment it.
And then like later on the same function decrement it
before it gets passed anywhere.
So just eliminate both of those, don't bother.
But like, for example, early on,
we were trying to do some of this by hand
in like some of the early platforms.
I just remember it was like, it's so easy to get it off
if you're doing any part of it by hand,
even if most of it is automatic.
So I'm guessing that that was really a case of like Objective-C letting you do it either
by hand or automatic and then the by hand parts like messing up the automatic parts.
I totally remember that.
I think you're absolutely right.
It was like I had some by hand and then I'm like switching over to the automatic and it's
like in that migration there was just no way that was going to be right because I had some
manual things that would screw up the automatic and I was like, this is a total mess.
I'm better off just managing myself,
but that was in that circumstance.
I imagine if it's happening at the compiler level for you
and you're never doing manual in the programming language
itself, it's probably way more reliable.
So if you're just using Roc and, like, you're doing
application development, it just feels like a garbage-collected language,
except that, like, there's no concept of a GC pause,
because, like, a traditional mark-and-sweep garbage collector,
or, like, tracing garbage collector, that category of those things,
right, there is some moment
where they're, like, you know, traversing the heap
and, like, finding what can be freed. Modern tracing garbage
collectors are much better than they used to be in
But reference counting,
one of the reasons that reference counting is pretty cool on like applications
like UI development is just that it's totally incremental.
It's like you're just constantly like freeing little like bits here and there as
the program is running. So there is no,
like you don't need to do something super fancy to avoid like pauses because
you just naturally don't have pauses. It's all sort of like spread out over the, you know, all the different
allocations in the program.
That's really cool.
I mean, take that, Go. Come on.
Yeah.
But I mean, in all seriousness, like, part of the reason that
we like to be competitive with Go is just that I think Go does a really good job
of delivering really good runtime performance and really good compile time performance. But, you know, I would rather use a language
with Roc semantics than a language with Go semantics, but I still want those things.
I still want it, like, we want to compile faster than Go and run faster than Go, because why
settle, you know, like we have the potential to design things in a way where we can pull
that off and we want to pull it off. So, you know,
realizing those dreams is like, I don't know, it's been exciting every time we've gotten a
result where we're like, nice, it's actually like, it wasn't just possible in theory, we pulled it
off in practice. And I'm really excited to like ship 0.1.0 and like, let people actually try out
the whole experience. Yeah, that's gonna be rad. Well, you mentioned Go and, or maybe I did, we both did.
And it made me think of if err != nil,
because that's what I think of with Go,
which makes me think about error handling,
makes me think about undefined and nulls,
because oftentimes that's what you're checking out at the time.
And Roc does not have a null or an error or a nil
or an undefined.
It also does not have an optional maybe thing,
but there's all, but still,
sometimes you have a thing, sometimes you don't.
So how do you handle the circumstance
where you might be getting some back something
and you might not?
Yeah, so I'm biased,
but I think Roc has the best story on this
of any programming language, but I'm biased.
So let me explain how it works.
So although we don't have something like Option or Maybe,
we do have Result, and Result works pretty much the way
that it works in, like, Rust or OCaml or something like that,
which is pretty much, it's like Maybe, except that you have
the success, but also you have an error type.
So Maybe is like,
I either have this thing or I don't, so much like null.
Result is like, I either have this thing
or I have something else that's like an error
that explains like what went wrong.
So like oftentimes that could just be a string
that's like, you know, oh, I couldn't find this thing
because of, you know, whatever reason.
And a special case of that is you can just have like
nothing for the error type,
which is like an empty error type,
but there's no extra info.
And that kind of works the same way as maybe.
The reason that we have that particular design is like,
centralizing everything around result means that you can
improve a lot of ergonomics and say like,
this is just for saying I either have this thing or I don't
like when you're returning it from something,
as opposed to using it for like data modeling as well.
So it would be really weird to have, for example,
in your, I don't know, data model, you're like representing like here's a user or something
and like maybe they do or do not have an email address. It would be weird to put a result
in there and say like, oh, like they have an email address or like error, no email address
in my like data model. That's more of like a functions return result. Cause it's like
this operation succeeded or it failed. It's not really like about data.
That's, like, the FAQ entry of, like, you know,
why we chose not to have option.
So we do still have result.
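For readers who want the shape of this in code, here's the same idea sketched in Rust, whose `Result` works the way Richard describes. The function names and data are invented for illustration; the key point is that a unit error type (`Err(())`) plays the role of Maybe, while a string error carries a reason:

```rust
// Result with a unit error behaves like Maybe/Option:
// Ok(value) = "I have the thing", Err(()) = "I don't", no extra info.
fn lookup_email(user_id: u32) -> Result<String, ()> {
    if user_id == 1 {
        Ok("richard@example.com".to_string()) // hypothetical data
    } else {
        Err(()) // empty error type: like None, nothing attached
    }
}

// An operation-style Result instead explains what went wrong:
fn read_config(path: &str) -> Result<String, String> {
    if path.is_empty() {
        Err("path was empty".to_string())
    } else {
        Ok(format!("config loaded from {path}"))
    }
}
```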
Now, the reason I say I think we're the best
at error handling is actually something that's more to do
with what we call anonymous sum types.
So a lot of programming languages have a concept of,
I guess the most common term I've heard
for this is algebraic data type.
So this is something that a lot of people have said they want in Go.
And the low level version of this is what's called a tagged union.
That's what Zig calls them, for example, or C. And basically what it is, it's just like
you have some series of alternatives.
You can say, let's use like traffic light colors.
You have like red, green and blue.
A lot of languages have that concept
and it's called like an enum.
So it's like an enumeration of like three different options,
red, green and blue.
It's exhaustive, meaning that like,
if you have one of these stoplight color values,
you only have red or green or blue.
There's no such thing as like other,
that's not really a thing.
Now, if you want to introduce a concept of other,
what algebraic data types let you do is you can say red,
I said blue, what color traffic lights?
Red, green and blue.
I don't know, I was just rolling with it.
No, I'm just, I'm often like graphics land, right?
RGB, all right.
In this hypothetical world,
we have traffic lights that have blue in them.
Now, let's go red, green, yellow,
like actual traffic lights. Okay, fair. Different order. So let's say that you want to expand that
and you do want to introduce a concept of other. What you can do is you can say red, green, yellow,
other, and then other has what we call a payload, which means it has some additional data associated
with it. So you could have other, and then, like, a string payload that basically says, okay,
either it's red, it's green, or it's yellow,
or if it's other, then I have
this string that describes what the other color was.
The critical distinction there is that if I have red,
green, or yellow, I don't have that payload.
Those are just like, nope, it's just the value and nothing in there.
Only if I'm in the other case,
do I have this extra piece of data.
Now, in a lot of cases,
what you end up doing with
algebraic data types like this is you end up
having payloads in all of the different options,
or most of them, or all of them,
but one or something like that.
What's really nice about them is it allows you to
really specifically say like,
okay, under this circumstance,
I need to provide this extra data,
under this circumstance, I provide
this other totally different shape of data.
It just basically unlocks this really nice experience
of modeling your data in a way where you can say,
okay, under this scenario,
I have this to work with.
Under this scenario, I have this to work with.
Also, when you're constructing that data, it's like,
okay, if I'm in this scenario,
I have to provide these pieces of
information or else I cannot get one of these values.
And so it lines up really nicely between like,
here's what I need to provide if I'm in this situation.
And I'm saying like, hey, we're in this situation.
And also like, here's what I have access to
when I'm in this situation.
And then the exhaustiveness piece of that is still there
where like when you're extracting a value out of this,
it's like, this is where pattern matching comes up,
is you can say like, okay, if I'm in the red scenario,
do this logic, like in a, like a switch statement
or something like that.
If I'm in green, do this, if I'm in yellow, do this.
And if I'm in other, do this,
but also only if I'm in other,
do I have access to that string.
And in none of the other branches,
do I have access to that.
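Sketched in Rust, which has the same kind of algebraic data types. The enum is just the traffic-light example from the conversation: only the `Other` case carries a payload, the match is exhaustive, and the payload is only in scope in the branch that has it:

```rust
// "Other" carries a payload; the named variants carry nothing.
enum TrafficLight {
    Red,
    Green,
    Yellow,
    Other(String), // only this case has extra data attached
}

fn describe(light: &TrafficLight) -> String {
    // Exhaustive: drop a variant from this match and the compiler complains.
    match light {
        TrafficLight::Red => "stop".to_string(),
        TrafficLight::Green => "go".to_string(),
        TrafficLight::Yellow => "slow down".to_string(),
        // Only in this branch does the String payload exist at all.
        TrafficLight::Other(color) => format!("unusual light: {color}"),
    }
}
```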
So you can do this in, like, TypeScript,
but it's pretty clunky. In languages like Roc
and Rust and stuff where this is a first-class thing,
it's super ergonomic to do that.
Okay, that's algebraic data types,
but what Roc has is the anonymous version of that.
So, everything I just told you is true
and you can do all that stuff in Roc,
but we have one extra thing, which is that if you want,
you can also build these up on the fly.
So for example, instead of defining upfront,
we have traffic light colors which are red, green, yellow, and other.
I can just write a totally normal conditional that's like,
okay, if this is true, blah, blah, blah,
set this variable equal to red.
In another branch, set this variable equal to green.
In another branch, set this variable equal to yellow.
Roc's compiler will just infer based on my having used that anonymously,
pretty much exactly the same way that it'll infer things based on,
like if you just use curly braces to make an anonymous,
we call them records, but anonymous object in JavaScript or something like that.
You don't define anything upfront,
you just use them and it just says, oh cool.
I will infer that the type of that variable that you've been setting to red,
green, or yellow, or other with a string in it
is just like, yeah, it's like either it's red
or it's green or it's yellow
or it's other with a string in it.
No problem.
I just infer that and you don't need to like
define anything upfront.
Now what's really cool about this
is how it applies to error handling.
Because now what you can do is you can say, okay,
let's say I'm like trying to read from a file.
This is a classic example I like to give
of like why this error handling is really nice.
I'm reading from a file, that file contains a URL.
Inside that URL, I take that URL,
I do an HTTP request to that URL.
I get back the response and the response says like,
here's some information I wanna write to a different file.
So this is a file read, HTTP request, file write.
All of those can fail in different ways.
What's really nice about Roc is that
when you just call those, you write,
it just looks like the most straightforward,
like, just TypeScript, Go, whatever.
You just like call the functions and just do the IO
and it just happens.
But behind the scenes, the compiler is just tracking
automatically using this feature.
Here's all the errors that could happen.
And it's just unioning them together.
And at the end of the day,
what that function will return that does like those,
those three calls is just like, okay, it's a result,
which either has I succeeded in which case,
here's the answer of all those things.
If any one of them failed,
you have an error type that is just an algebraic data type
of the union of all the things that could go wrong.
So it could be like network failure from the HTTP request.
It could be like, you know, file system was read only
from the file, right?
Or it could be like file not found from the read.
All of those things just get put together
and you can just do a pattern match on them.
And it's just like, here are all the possibilities.
The compiler will tell you if you forgot one
because it's like, hey, there was this error
that you didn't handle that was in the union
of these things. But at no point did you have to specify anything. Like
you can write no type annotations anywhere in that program. You just call the three functions
just totally normally. And then you're like, yeah, it just accumulates the errors that
can happen. I guess one detail I did leave off is that we do something that Rust does,
where the sort of, like, error propagation is explicit. Rust does this with a question mark operator at the end, and we
do the same thing. So basically it's like, if you want to say, run this IO operation,
and if it failed, do an early return with the error, you put a question mark
at the end of that function call, like after the close paren. And this is nice because
it means that all of your control flow is explicit. So like unlike exceptions where
you have to be like
defensively like try catch, it's kind of like inverting
that where it's like, hey, if this might cause an early
return with an error, you can see where those happen
because there are question marks there.
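As a rough sketch of the file-read, HTTP-request, file-write pipeline: Rust needs the error union declared by hand, where Roc infers it automatically, but the `?` propagation works the same way. All names and the stubbed IO below are invented for illustration (no real files or network involved):

```rust
// In Roc the compiler infers this union of possible errors
// automatically; in Rust we spell it out by hand.
#[derive(Debug, PartialEq)]
enum AppError {
    FileNotFound,
    NetworkFailure,
    FileSystemReadOnly,
}

fn read_url_from_file() -> Result<String, AppError> {
    Ok("https://example.com".to_string()) // stub standing in for file IO
}

fn http_get(_url: &str) -> Result<String, AppError> {
    Ok("response body".to_string()) // stub standing in for a request
}

fn write_file(_contents: &str) -> Result<(), AppError> {
    Err(AppError::FileSystemReadOnly) // stub: pretend the disk is read-only
}

fn pipeline() -> Result<(), AppError> {
    let url = read_url_from_file()?; // `?` = early return on Err,
    let body = http_get(&url)?;      // so the control flow stays visible
    write_file(&body)?;              // this step fails in the stub
    Ok(())
}
```

A caller can then pattern match on the error exhaustively, which is the "compiler tells you if you forgot one" property from the conversation.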
You also use bangs in a certain way that I couldn't tell
if it was idiom or enforced.
I'm assuming it's enforced; the exclamation mark at the end
of a call means something.
I don't know what it means. And is it real, or is it like Ruby, where it's like it should mean that, but you could use it
whenever you want? You know, it's funny, it's like Ruby, but it is enforced with a compiler warning.
So this is what we call purity inference. So essentially the way that effects work in Roc.
This is also different from I don't know of any other functional language or imperative language for that matter that does this.
But the basic idea is super simple.
Because Roc's APIs are designed to be based around immutable data, you can very, very
easily write a function that is doing quite a lot of stuff, and yet it's a pure function.
Pure function for those who don't know, short definition is like, a pure function is one
where if you call it, passing the same arguments, you are guaranteed 100% guaranteed
to get the same result back,
like the same answer every single time,
same argument, same result.
And also it doesn't do any side effects.
Like it doesn't affect other parts of the program
that are in an observable way.
So we actually have a concept in the type system
of which functions are pure and which functions are
effectful.
So all the IO functions I just mentioned, those are effectful. The way you can tell the difference is
there's two ways. From a type's perspective, it's a really, really subtle distinction.
If you have the list of arguments for the function and then you have an arrow and then the return
type, that's the type signature for a function. If it's a thin arrow, like dash greater than,
that's a pure function. If it's a thick arrow, that is, equals greater than, that's an
effectful function. And so we have this convention, which again is enforced by the compiler, that
effectful functions have an exclamation point at the end of their name. Actually, we did take that
from Ruby. Ruby has that for destructive updates and stuff like that. So basically, if you're,
for example, reading from a file, it'll say file.read!.
And that tells you that when you call this function,
it's going to be effectful.
Now the compiler also enforces that only effectful functions
can call other effectful functions.
Pure functions are not allowed to call effectful functions.
And that's not because we're trying to be mean,
that's just like the definition of a pure function.
Right, otherwise you're not pure, dude.
Right.
And one of the ways that we actually make use of this is that like, this isn't just like,
you know, we're trying to organize things for the sake of organizing them.
One of the things that the new compiler is doing is that if you're writing a top level
constant, like you're saying, like, foo equals, just at the top of the file, not inside
a function or anything like that, we actually will—
so first of all, it enforces
that you're only allowed to call pure functions
in those constants.
Like you can't be like foo equals
like at the top of your file outside of any function
and just like run effects in it.
But because you're only calling pure functions,
we actually will evaluate that entire thing at compile time.
And you can make that as complicated as you want.
So you can do, like, really complicated transformations
and, like, you know, not have to, like, hard code
so many things. You can call whatever functions you want,
as long as they're all pure.
And the compiler can evaluate them all at compile time
because it knows like, yeah, they're pure functions.
Like they don't have any side effects, it's fine.
Like you just do that.
One of the things that's cool about this from like going back
to earlier on, we were talking about like one
of Roc's goals is to be really good
at getting embedded in other things.
Not only do we not have a virtual machine, but also we don't even have a concept of, like,
init. Like, you don't have to, like, boot up a Roc program. You can just, like, call the
entry points, and just whatever happens happens, because all of our
constants get compiled all the way down to just plain flat values in the compiled
Roc binary, which means that, like, there's no initialization.
You can write, you know, whatever,
like foo equals and then have a bunch
of initialization logic,
but that initialization happens at compile time
rather than at runtime,
just using your ordinary plain Roc code,
as long as they're all pure functions.
And so it's a concrete example
of a way that the Roc compiler itself
is using purity for benefits.
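Rust's `const fn` is a reasonable analogue for readers: a function the compiler may evaluate entirely at build time precisely because it can't perform effects. This is an illustration of the idea, not Roc's actual mechanism; the Fibonacci function is just made-up "initialization logic":

```rust
// A `const fn` has no side effects, so the compiler can run it
// during compilation.
const fn fib(n: u64) -> u64 {
    let (mut a, mut b, mut i) = (0u64, 1u64, 0u64);
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

// This "initialization" happens at compile time; the binary just
// contains the flat value, much like Roc's top-level constants.
const FIB_10: u64 = fib(10);
```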
But also pure functions just have all these really nice
properties, such as if you're calling a pure function
and you're like, oh, I want to cache this thing
because it's kind of expensive.
It's like, no problem.
You just use the arguments as the cache key.
It's definitely going to work out.
So there's a bunch of stuff like that.
And, like, you know, with concurrency, you can run pure functions concurrently,
and there's no problem.
A lot of stuff like that.
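A tiny sketch of that caching idea in Rust: because the function is pure, its arguments alone are a safe cache key, with no risk of a stale entry. All names here are invented for illustration:

```rust
use std::collections::HashMap;

// Pure: same arguments always produce the same result, so caching
// keyed only on the arguments is "definitely going to work out."
fn expensive_pure(x: u64, y: u64) -> u64 {
    x * x + y // pretend this is costly
}

struct Memo {
    cache: HashMap<(u64, u64), u64>,
}

impl Memo {
    fn new() -> Self {
        Memo { cache: HashMap::new() }
    }

    fn call(&mut self, x: u64, y: u64) -> u64 {
        // Computed once per distinct argument pair, then reused.
        *self.cache.entry((x, y)).or_insert_with(|| expensive_pure(x, y))
    }
}
```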
That's awesome.
So in a world where I'm doing operator overloading,
as you described earlier,
and I'm overloading the plus pure function,
I can't go shoving some effectful stuff in there.
I can't call file.read or file.write
inside of there, could I?
So great question.
So, I guess the short answer is it depends.
So, first of all, because of the naming convention, if you tried to name it plus without the exclamation point,
and it was effectful, you would get a compiler warning because it's like,
hey, you're supposed to put an exclamation point there.
And then if you add the exclamation point, then the sugar is not going to work anymore.
So, you could do that.
So, compiler warning is the level of enforcement.
Like I could still get away with it,
but the compiler is gonna tell me,
hey, this is a bad idea.
Ah, okay.
So that is, yes, you're right.
But also we do take it a step further,
which is that, so one of the things that's certainly true
is that oftentimes, even if it's not something
I want to ship to production,
I do, like, want to use, like, IO in the middle
of some pure function, just for debugging purposes.
Like, for example, I'm really lost in the middle of this thing. I just want to write to a
file like what's happening right now so I can debug it. So we also make it so that that is a
compiler warning. I should give the caveat that that doesn't work for the constants use case.
Like, when you're doing that in the middle of a constant, that's happening at compile time.
So then we actually just, like, don't know what IO is,
because that's a platform thing.
So that's not available.
So that is like a hard warning, sorry, a hard error.
But in general though,
this is actually part of Roc's design philosophy.
I've been calling it informed, but don't block.
And what I mean by that is basically like,
we try to make everything to the extent possible,
like non-blocking in terms of like compilation problems.
So we will give you a warning.
And also like we are kind of hardcore about warnings
in that if there are any warnings,
when you build the compiler exits with a non-zero error code.
So that will like fail your CI if there's any warnings.
So like warnings are taken seriously.
It's not like, ah, it's warnings.
Don't worry about it.
It's like, no, like you need to fix all your warnings.
Having said that, it's also true that like, for example, if you have a
compile error at build time, one of my pet peeves about compiled languages
in general is that they will pretty much all, with one exception,
Haskell has a flag where you can kind of turn off part of this.
Um, it's like, if, if I have anything wrong with any part of my entire code
base, I can't run any of my tests until I fix every single one of those.
That always really bugged me, because I spent a lot of my career working in dynamic
languages, where it's like you could always run any test you want.
And that really unlocked all these really nice workflows where I could be like, oh,
I'm going to try out this thing and experiment with it.
And if I like how it's going, then I'll commit to continuing further with it.
But oftentimes I'd try it out.
And even though I know there was a bunch of broken stuff around it,
just by being able to try it out
or like write some tests around it led me to realize,
oh, you know what?
I actually want to go in a different direction.
And I'm really glad that I didn't overcommit to this
and have to fix absolutely everything.
And I really want to preserve that in Roc.
So what we do is basically like we,
as much as possible, we'll say like,
hey, I'm going to tell you about this error.
If you actually run this code
and we get to this code path
at runtime, it's going to crash.
In some cases, there's nothing we can do about, like, a naming error.
If you like reference a variable that doesn't exist, it's like, look, I'll tell you about it.
Compile time. And if you run the code, as long as you never hit that variable,
like we can run the rest of your program, no problem. But if you get there,
we're going to have to crash because it's like, we can't proceed.
We don't know what variable you're talking about.
But the idea is like informed, but don't block.
Like always tell you like about all the problems we know
about at compile time, give warnings, give errors
but don't block you.
Let you run the program anyway.
Let you run your tests anyway,
because oftentimes that's just the better workflow.
So this is another area where like,
I roll that into like developer experience
of like just giving you the flexibility of like working in different ways depending on what you're
trying to do. On the topic of going to production, what does the deployment or the the shipping,
the sharing, the distribution story look like? Yeah, by default very similar to Go in that like
it just spits out a compiled binary that you can just run on a machine. We also do cross compilation. So this is something that a number of languages have. So basically,
you can just say, like, roc build, and then you can say dash dash target equals, like, x64 Linux.
And even if you're on a Mac, it'll spit out a binary that you can just go hand off to x64 Linux.
In practice, I guess that means that you don't need to have like a Windows CI and a Mac CI and a Linux CI. You can just have one CI of whatever you want. And it can
build for Mac and Windows and Linux and all those things. So you don't, nobody needs to
have the Roc compiler on their machine to run something that Roc built separately.
You can also build to, like, a dynamically linked library. So this would be like, if you want
to use Roc for, like, plugins, like editor extensions or something like that—anything that can load, like, a C-based plugin
or something like that,
that includes like programming languages.
I had a, actually you can see this in the repo.
You can try this out.
Ruby actually will let you import compiled like C modules,
just like straight up.
So you can actually, like, if you go on GitHub,
like, we have this in the examples directory,
you can basically compile Roc to
a dynamically linked library that just, like, says, like,
hello world or whatever, load up IRB and just, like, import
that module. And it's like, hello from Roc, you know,
just, like, straight up, no fanciness needed. And
then of course, the third way is WebAssembly. Like, you can
compile Roc directly to WebAssembly
binaries.
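For a sense of the mechanism (these are not Roc's actual exported symbols): a host like Ruby's C-extension loader just needs an unmangled, C-ABI function in a shared library. In Rust terms, a minimal sketch of such an export looks like this; the function name and behavior are entirely made up:

```rust
// Compiled with crate-type = ["cdylib"], this produces a shared
// library (.so/.dylib/.dll) whose symbol a C-capable host can load.
// #[no_mangle] keeps the symbol name as plain "add_from_plugin".
#[no_mangle]
pub extern "C" fn add_from_plugin(a: i32, b: i32) -> i32 {
    // hypothetical plugin logic; a real Roc library exposes whatever
    // the platform defines as its C-ABI surface
    a + b
}
```

The host side then only needs a generic "load this C library and call this symbol" facility, which is why anything that can load a C plugin can load one of these.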
Sounds almost too good to be true. What are the downsides,
Richard? What are the downsides, Richard?
What are the downsides?
Oh, sure.
I mean, well, an obvious major downside today
is that the current compiler, although it works.
So first of all, the static dispatch stuff
doesn't exist in the current compiler.
That's one of the reasons for the rewrite
is that we hadn't figured that design out yet.
So what you get today is not as cool as some of the stuff
we talked about earlier.
Second, there are also, like I mentioned with the, it's called Lambda set specialization,
but the thing with the like unboxed closures, there are like known bugs with that that are
blocking certain—like, people tried to implement certain really cool projects in Roc, and they
got stuck because although if you're doing simple stuff like, or like applications, they
don't come up, but as soon as you start trying to do really fancy platforms
that'll unlock a bunch of other things,
they got stuck on these Lambda set bugs that were like,
oh, in order to fix that, we gotta do a big rewrite.
So those limitations, definitely big downsides,
but let's pretend that we've solved all those.
We have the new compiler, it's 2026.
And like, you know.
And then of code, exactly.
We're here, we've arrived.
So now I would say, so these are downsides that are sort
of like downsides we accept as like long-term downsides. Number one, I'll just get this out
of the way. First, it's different. It's like you can't be much better if you're not going to be
much different. So like there's going to be a learning curve. It looks pretty familiar, I think,
from a lot of, you know, a lot of people who are used to mainstream programming languages will look
at the code and be like, cool, I can like follow what's going on,
but there are going to be some semantic differences.
Like for example,
since we don't have a first-class concept of mutation,
there is no like array.push, you know, it's like, oh,
it's always going to be based on append.
And there are implications to that where like code gets
structured a little bit differently,
I would say in a nicer way,
but there is kind of like a ramp up there.
Another thing is that, again,
because we don't have a first-class concept of mutation,
and this is very much because the language design philosophy
is like simple language, small set of primitive,
simple type system, simple, simple, simple,
be simple but powerful and very ergonomic.
There are some algorithms, which actually the reason
I picked quicksort is because it's like,
if you try to write that in a purely functional style,
we didn't have for loops back then,
the new compiler is gonna have for loops,
but in a very limited way where they don't affect purity
and stuff like that, or can be used without purity.
That's actually a controversial thing
in like functional programming circles,
but if you're not like into functional programming,
it's like, yeah, for loops, sure.
Sometimes I want a for loop.
But if you're like, imagine Haskell having a for loop.
It's like inconceivable.
Well, I do work in Elixir, so I do know
functional yet for loops right there.
Oh, that's funny.
So by coincidence, as we're recording this,
the most recent episode of,
so I have a podcast called Software Inscripted.
We do like a lot of technical deep dives and stuff.
I just had on Jose Valim as the most recent guest.
And literally we're talking about for loops
and he was talking about like the challenges of trying to get
for loops into Elixir, which I guess he's been trying to do, but, like, they never have
been able to get a design that quite checks all the boxes. It was a fun conversation. But yeah, what did he say? It was something
like, some people have accused me of being, like, a sleeper agent, you know,
because I'm trying to get for loops into Elixir.
But this is actually, like—and, you know, that's without going on another tangent about language
design and for loops.
That is an example of something where I feel confident enough in my love for functional
programming that I'm like, yeah, no, this is a good fit for this language to have for
loops because sometimes that's the best way to write the thing is like, it's just like
an imperative for loop is just the most straightforward nicest code to read for that particular use case
I also bring that up as an example of a downside like you were saying earlier because
there are just some things where it's like the nicest way to write it is to actually have a first-class concept of mutation and
the fact that we are as a simplification not including that in the language means that
You might have to write it in a way
that is not quite as nice
as if you had that tool in your toolbox.
Now, of course, having that tool in the toolbox
opens a huge can of worms in terms of, like,
now your functions don't have the same guarantees.
You might have to do more defensive cloning, yada, yada.
But you know, it is a trade-off
that I'm willing to acknowledge, right?
Like, in the spirit of that, like, FAQ of, like,
no, we think this is worth it.
It's the right way to go.
And you can't have simplicity
without subtracting some things.
So there are some things that we've intentionally subtracted
and that means you have a simpler language,
but it means that some things I acknowledge
are going to be less ergonomic than if we had
you know, more tools for them.
I also like this kind of reminded me
that like the topic of for loops and whatnot
reminded me of another thing that is, I guess,
unusual—makes Roc unusual—compared to other functional languages, which is that we do have
that concept of like first class effects, where like, you know, you have the exclamation point
and an implication of that is that you do end up with a form of, like, what color is your
function. So like, if you have a deeply nested call stack of pure functions, and you're like,
oh, I actually wanna do like file IO in this,
you know, the leaf node here
that's got this huge stack of calls above it.
Well, guess what?
You have to convert all those to effectful functions now.
I would argue that unlike the like,
what color is your function?
This is not like, it's just a fact about pure functions.
It's like, yeah, if you do that,
it's not a pure function anymore.
And, like, this is just Roc's type system
telling you about that. But again, like, if you're
in an imperative language that doesn't track those things, it doesn't have that in there,
you don't have that downside, you can just like introduce IO at any point and not have
to convert anything, of course, then that comes with the trade off of like, do you know
whether your functions are pure or not? So those would come to mind as concrete examples.
I would also say, the thing I mentioned earlier about the, like, scripting use: we intentionally do not have arbitrary C FFI. Only the platform gets to do low
level things. The application just doesn't get to do anything, period. If you want, you could design
a platform that says, hey, here's a way where you can, I expose a function that's like, give me a
string, and I'll load a dynamic library off of that, and you do stuff with that. But it has to be done through the platform.
And that does kind of—you know, FFIs have downsides in terms of security and other things,
but they also have upsides. And so the fact that Roc doesn't have one of those, that's another downside.
So I could go on for a while about, you know, downsides of Roc, but yeah.
Well, that would be against your best interest, so I'll stop you right there.
What's the library story? Like, how do I work with other people's code?
Yeah. So, so right now it's very simple. We have plans to make it a little bit fancier,
but while still preserving the properties that I want to preserve. So the nicest thing about it,
I would say, is that right now—so I mentioned, like, Roc, we want to be good at
scripting. One of the cool things about how Roc can be nice for scripting is that if you want to
have a single .roc file,
and that's all you're distributing to people,
you don't have to give them anything else,
just one .roc file and they can run it.
That .roc file can have dependencies in it,
because there's no separate package.json file like that.
It's just like you literally put them
in your Roc code at the top.
You say, I want this dependency
and I want this dependency.
They can all just be in one file if you want.
The dependencies are based on a URL.
And one of the cool things about this is that
I don't know of any other language that's doing this, actually.
The URLs have to be in a specific format.
Right now, the specific format is that basically
at the end of the URL has to be a hash of the contents
of what's behind that URL.
So this is a security feature.
So the basic idea here is like, if I have this URL and it's like downloading this package
and I'm like, you know, putting it on my system,
what happens if that URL gets compromised?
And like now somebody puts a malicious thing there.
I don't want to download that automatically anymore.
That's really bad for me.
So the fact that we have the hash baked in means
if somebody does compromise that URL,
the worst thing they can do is like take it down and like,
you know, make it 404 or something.
They can't actually like, you know, give me something malicious, because if they do, it's
going to fail the check that Roc does after it downloads the thing,
like, oh, this doesn't match the hash that was in the URL.
And so that's a really nice security feature.
And of course, if they want to change the hash, well, now they have to change the URL.
So the worst case is you get a 404. Really basic, really simple security measure.
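The hash-in-the-URL scheme Richard describes can be sketched roughly like this (a minimal Python sketch; the URL layout, the SHA-256 choice, and the `verify_package` helper are illustrative assumptions, not Roc's actual implementation):

```python
import hashlib

def expected_hash_from_url(url: str) -> str:
    # Hypothetical URL format: the last path segment is the content hash.
    # (Roc's real scheme may embed the hash differently; this shows the idea.)
    return url.rstrip("/").rsplit("/", 1)[-1]

def verify_package(url: str, content: bytes) -> bool:
    # Hash the downloaded bytes and compare to the hash baked into the URL.
    actual = hashlib.sha256(content).hexdigest()
    return actual == expected_hash_from_url(url)

# Build a URL from known content, then try tampered content.
pkg = b"roc package tarball bytes"
url = "https://example.com/pkgs/" + hashlib.sha256(pkg).hexdigest()
print(verify_package(url, pkg))          # True: content matches the URL's hash
print(verify_package(url, b"malicious")) # False: compromised content is rejected
```

The point is that the URL itself pins the content: whoever controls the server can serve nothing other than the exact bytes the URL names, or a download failure.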
Once it's downloaded, it gets cached
on your machine. Also, we don't have the equivalent of a node modules directory in your local folder.
It's all just in a global immutable cache in your home directory, which means that,
again, thinking about scripting as a use case, you can say, here are my dependencies. There's
also no npm install step. It's just the compiler, when you do roc run and the name of a .roc file,
it just automatically downloads your dependencies into your home directory. If
they're not already there, if they are there, great. We don't need to redownload them again.
It also doesn't need to, like, contact a central package index for updates. It's just like, yeah,
they're either in the directory or there's the URL. And then basically, when it
runs, it also doesn't need to do any caching in your local directory. The design we have
for caching is also gonna be
in the home directory rather than the local directory.
So much like with, you know,
zero dependency Python, for example,
if you run a script,
it's like, it's not gonna put any garbage
in the directory it runs in.
It's just gonna do everything in like a cache directory
and like the home directory
and then just run and that's it.
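The global, content-addressed cache behavior described above can be sketched like so (a sketch only: a temp directory stands in for the real home-directory cache, the hash-segment key is an assumption about the URL format, and `fake_download` is a stand-in for the network fetch):

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical global cache, keyed by the hash from the package URL.
# A temp dir keeps the sketch self-contained and side-effect free.
CACHE_DIR = Path(tempfile.mkdtemp()) / "roc-cache"

def fetch_package(url: str, download) -> bytes:
    """Return package contents, downloading only on a cache miss."""
    key = url.rstrip("/").rsplit("/", 1)[-1]   # hash segment of the URL
    cached = CACHE_DIR / key
    if cached.exists():                        # cache hit: no network, no install step
        return cached.read_bytes()
    content = download(url)                    # cache miss: download once...
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(content)                # ...then store immutably by hash
    return content

calls = []
def fake_download(url):
    calls.append(url)
    return b"package contents"

pkg_url = "https://example.com/pkgs/" + hashlib.sha256(b"package contents").hexdigest()
fetch_package(pkg_url, fake_download)
fetch_package(pkg_url, fake_download)   # second run hits the cache
print(len(calls))                       # 1: downloaded only once
```

Because entries are keyed by content hash and never mutated, every project on the machine can safely share the same cache, which is what removes the need for a per-project node_modules-style directory.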
We also have a design for version ranges,
which is actually a little bit based on how Go does those
with their, like, minimum version selection.
Basic version of that is like in addition to the URL
having the hash, we're also gonna have a concept
of like you can put a version number in the URL
and then the compiler will automatically select a version
from those based on like, you know,
like what it finds across all the different URLs
you asked for.
They can ask for versions of one another, yada, yada.
And yeah, there's a whole design for that, but we have not implemented that yet.
But the new compiler will have it.
Yeah, versioning is what I was going to ask you next.
So you hit that one off.
Once you said a hash in the URL, I'm thinking, how do you actually
deal with what version you want?
Yeah.
Really simple answer there is that basically the thing that we're taking
from Go, and they have this whole long blog post about the design of, like,
minimum version selection.
It's a really interesting read.
As I understand it, they kind of changed it a little bit from what they wrote
there, but the basic idea is this: each of your dependencies needs to say,
like, okay, I depend on at least this version of this package.
And we treat different major versions as sort of basically different packages.
Like they might as well have a different name
because they're just like, yeah,
if you have a different major version,
they're not possibly compatible
or they're potentially incompatible.
So we don't select them.
But if you have different minor versions, that's fine.
What we will select is just like,
what is the lowest minor version that we can get away with
while still satisfying everybody's constraints?
So if package A, like I depend on package A
and it says like, oh, I need, I'm going to use Bugsnag as an example.
That was, like, the error handling thing we used at my last web dev job.
And you can say, okay, great.
I have a package A that depends on Bugsnag version 1.2.3, and a package B that I depend on that depends on Bugsnag version 3.4.5.
So no, actually, they need to be the same major version. So let's say
1.2.3 and 1.2.4. Each of those URLs has the hash in it. I know exactly what 1.2.3 is and what
1.2.4 is. And I have the hashes for both, et cetera. It's all just the same URL-based design that I
described before. What the compiler can then do is it can say, oh, well, I see that you have,
among all your dependencies, like in the 1.x.y range,
like where they're all, you know,
the major version number is one,
the highest one that we need is 1.2.4.
Great, we'll select that and use that
for both of these two things.
Like the one that needs 1.2.3 also gets 1.2.4.
And that should be fine
because they should be API compatible.
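The selection rule Richard walks through can be sketched like this (a simplified sketch: real minimum version selection also walks transitive dependencies, but the core rule, treat major versions as separate packages and take the highest minimum requested within each major, looks like):

```python
# Hypothetical constraints: each dependency states the minimum version of a
# package it needs, as (package_name, (major, minor, patch)).
def select_versions(constraints):
    """Pick, per (package, major version), the highest minimum requested."""
    selected = {}
    for name, version in constraints:
        key = (name, version[0])          # different majors never unify
        if key not in selected or version > selected[key]:
            selected[key] = version       # tuple comparison = semver order here
    return selected

constraints = [
    ("bugsnag", (1, 2, 3)),   # package A's minimum
    ("bugsnag", (1, 2, 4)),   # package B's minimum: wins within major 1
    ("bugsnag", (3, 4, 5)),   # a different major: kept as a separate package
]
print(select_versions(constraints))
# {('bugsnag', 1): (1, 2, 4), ('bugsnag', 3): (3, 4, 5)}
```

So the dependency that asked for 1.2.3 also gets 1.2.4, on the assumption that anything sharing a major version is API compatible.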
We also wanna do the thing that Elm does,
which is where you basically have the compiler awareness of what major versions mean. And you
can just tell people when they're publishing, like, hey, this needs to be a major version bump,
because you actually made a breaking change to your API compared to the old one. This is something
Elm hardcore enforces. I'm not sure if we're going to be as hardcore about the enforcement,
just because of the URL-based thing. It's a little bit more complicated if you're a little bit more like distributed
and less centralized in how you're getting dependencies. But it is really cool that like
in Elm, like if you try to publish a major breaking change where it's like, yeah, this
actually is like API incompatible with the previous version I published. Elm was like,
Nope, you need to bump the major version number. That's not optional here because like, I can see that you made a breaking change.
I looked at the diffs and the types.
Um, so we want to do the same thing, at the very least to, like, inform you.
So, like, we don't ever want it to be the case that someone publishes a new version
and is surprised that it, like, doesn't build. And the compiler is going to rely on that.
So it's like, when it's selecting these versions, it's assuming that 1.2.3
and 1.2.4 are at least API compatible and your code will still build,
even though obviously they might not be bug compatible.
Maybe you were relying on a bug in 1.2.3 or something,
but that is kind of the price we pay for like code sharing.
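The Elm-style enforcement works by diffing a package's exposed API between versions. A toy version of that check might look like this (the signature strings and the `required_bump` helper are hypothetical, just to show the rule: removed or changed exports force a major bump, added exports a minor one):

```python
# Hypothetical API summaries: exported name -> type signature string.
def required_bump(old_api, new_api):
    """Classify the semver bump a publish requires, from the API diff alone."""
    for name, sig in old_api.items():
        if name not in new_api or new_api[name] != sig:
            return "major"   # an export was removed or its type changed
    if any(name not in old_api for name in new_api):
        return "minor"       # only additions: backwards compatible
    return "patch"           # identical API surface

old = {"notify": "Str -> Task {} Err", "configure": "Config -> Task {} Err"}
print(required_bump(old, {**old, "flush": "Task {} Err"}))   # minor: added an export
print(required_bump(old, {"notify": "Str -> Task {} Err"}))  # major: removed configure
print(required_bump(old, dict(old)))                         # patch: unchanged
```

Note this only catches build-breaking changes, not behavioral ones, which is exactly the bug-compatibility caveat Richard mentions.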
Yeah, fair enough.
How good is Claude at writing Rock code?
So we actually have something at roc-lang.org
in the docs section on the built-ins.
We have a little, it's like, this is for LLMs basically.
And it's a little like markdown document
that you can just like either copy paste in the LLM
or just like put in your, you know,
.rules file or whatever.
That's basically like,
hey, here's what this language is all about.
And it's just sort of like a little,
yeah, it's a little primer for, like, a large language model.
Does that work pretty well?
Yeah, I mean, it's hard to say because there aren't really
any big Roc code bases so far, but definitely,
if you give it enough examples like that,
I've actually, even without using that thing,
I've had good experiences with being like,
I've opened a .roc file and I'm like,
hey, look at this thing, can you write some new function
in that style?
And yeah, I mean, the funniest thing that I've seen it do, though, is sometimes, because Roc is really not
super represented in the training set, surprise, when there's something that it has not seen any
examples of, it'll just make something up. And quite often it will guess something from, like, Elm
or Haskell, like some other functional language, because it's just like, this feels
kind of like a functional thing, so maybe it's this. And it'll just kind of throw it in there.
And then usually when that happens, I kind of chuckle,
and I'm like, no, no, it's actually this in Roc.
So it's obviously not going to be as good as languages
that are really heavily in its training set.
I actually view that as kind of similar
to the ecosystem thing, where historically,
whenever you make a new programming language,
people would always say, well, no one will ever use this
because it doesn't have the ecosystem of,
you know, gigantically popular language, you know, A.
It's like, yeah, okay, but new languages do come up
and exist and like, you know, they weren't there before.
And then like, you know, like Rust, for example,
people would have said like, oh, nobody will use Rust
because it doesn't have the C++ ecosystem.
It's like, yeah, but then like that happens over time.
And it's the same thing with large language models.
It's like, no one will use this
because it's not in the training set. It's like, okay, I know,
like that's a downside at first. You have to like, when you're an early adopter, you're going to
have to like deal with occasionally it hallucinates some Haskell. But in the same way that like the
ecosystem is small at first, but over time it grows. And then that stops being a downside.
Once you get a certain amount of adoption and the trade off is like as an early adopter,
you get to be a lot more of a like voice and participate like a more prominent contributor to the community.
Um, you know, because you got an early when it was, you know, not the most
polished, right?
I was speaking with Mads Torgerson recently.
He's the lead designer on C#, and we were kind of lamenting that
it's probably never been harder to break out as a new programming language than it is now
because of these tools and the selection bias
kind of like the rich-get-richer effect of an LLM
either choosing a tool or a language
because you don't care if you're vibing it,
or just not being as ergonomic or as useful to you.
And so maybe you just pick Python because it knows it better.
Whereas you're kind of interested in Roc,
but you're like, yeah, I'm not gonna get much help here.
I feel like it's gonna be harder and harder
to actually break out in the next few years.
What do you think?
I thought that too, but my opinion on that
has actually completely 180'd since I actually tried it.
So the experience that I had was like intuitively,
that makes sense, right?
But so the experience I had is like,
as I mentioned earlier, we did recently,
well, recently is like several months ago,
decided to rewrite the compiler in Zig.
I had done some Zig, but I had really not used it, like, super
in anger for like a big code base before.
So there was a lot of like learning that I had to do.
What I found was that Zig,
even though it's like a pretty niche language today,
is plenty well-represented in Claude's dataset, such that you can just kind of jam on it.
It doesn't really hallucinate.
You know, I have not seen Claude having any problems with
that.
But what's really great about it is that I know from plenty of years of experience of
trying out new languages that there is always a ramp up period when you're trying a new language
that's always been a really significant downside,
which is like, I don't know what character to type next.
I know what I want it to do,
but I don't know how to say that in this language.
And when I hit that roadblock,
I always have to go find documentation,
but maybe there's like, it's like a weird symbol.
And so I can't, like, Google for it effectively.
And I'm like, oh, what part of the tutorial
is going to talk about this thing?
Or like, I know conceptually what I want to do,
but I don't exactly know even what to search for. I'm just like,
I have this thing that I want to do. I know the language can do it.
I just don't know how to do it.
That problem is gone in the large language model world,
because I just tell the large language model, Hey, I want to do this thing.
I don't know how to do it in Zig. And it's like, here you go,
I just wrote it for you in Zig. I look at it and I'm like,
I can guess what this code does.
It looks like it does what I want.
And if I want, now I know exactly what this code is.
I can go look up the docs if I'm concerned about that.
But it feels so much less choppy and
stumbling-around-in-the-dark-y
to get ramped up on a language compared to the old days,
when we didn't have access to these tools.
The new user experience,
if it's sufficiently represented in the dataset,
but even if it's not, like I said,
you can help the model out with this thing.
It feels so much easier to get into it.
The other big downside that I mentioned,
the historical thing that everyone
always talked about was ecosystem.
You know what tool is really,
really good at taking an existing library and
porting it to a new language, and taking
out 95 percent of the time-consuming drudgery of that, so you can just kind of
review it? You know, large language models. Like, if you want to get your
favorite, you know, hashing function in Roc, for example, that's something where
porting it from, you know, whatever language A to Roc by hand is really, really
painstaking and error-prone and annoying. Now I can just be like, hey, okay, Claude, you know,
port over all the tests first.
I want to make sure all the tests are in place
and I'll hand review the tests to make sure like, okay,
these are doing the same things
as the other tests in that repo.
Great, all right, now start porting over the implementation.
Oh, tests failed, great, go fix them.
Like here's, you have all the source code on both sides.
The amount of time that it should take
to bootstrap an ecosystem and get it to a point where, it's not going to be as big as, you know, big
long-standing ecosystems, but getting to a point where the ecosystem is not really a blocker
for people, and they can kind of get into the language and reach for hashing or
whatever, you know, Bugsnag off the shelf, I think should be a lot faster.
So putting those things together, I'm like, I actually think it might be easier than ever
for like a new language to break out,
especially because if you're like,
I'm in this big code base that I have
and I want to start using a new language,
it's like easier than ever to start writing code
in that language, like from a, you know,
with the help of a large language model.
And if I'm just gonna be mostly writing in English anyway
to synthesize the new code,
like, why don't I have it synthesize code that's easier to read and to review,
and, you know, that has nicer properties, to be like, oh, this is a pure function, so
I don't have to go worry about what the implications are. There's
a lot of advantages to reviewing code that you get from certain languages. But if you're
just telling the model what code to write in the first place, like, you know, why would I tell it to generate Python
when I could tell it to generate Roc, you know?
Gotcha, that's interesting.
Yeah, I can definitely see that.
If I was gonna pick up Roc tomorrow
and I didn't have an actual use case
or care in the world,
I just wanna learn and write some Roc code,
I just want the best Roc experience.
What would I build where Roc really shines
and I could be like, oh, okay, I get this.
What kind of a thing would I build?
That's a great question.
I mean, I don't usually think about it in those terms.
So I would usually turn it around and say like,
what do you want to build?
I would say, I mean, there are three things
that I think Roc is currently the best
at, uh, would be like command line apps, uh, like server-side web apps.
And there is a platform for doing, like, native graphical applications.
Um, if you want something that's a little bit more fun, there's this
thing called WASM-4, which is really lo-fi, like retro
gaming kind of stuff in the browser.
So people have made, um, like a maze thing, and a Rocky Bird, which is like Flappy Bird.
But so if you want to like do some game stuff
I would recommend that one.
We don't really have a, there is no, like, serious
game-dev Roc platform yet.
Nobody's built it, but you know,
if someone wants to, hit me up.
There also have been some people doing Roc
in the browser for, you know, web dev, front-end
stuff. You certainly can do that, but there isn't anything really mature.
So I wouldn't recommend that.
I would say servers and CLIs are the two that are really the most
robust and, like, well-used.
So I would pick one of those two, if that's something that you want to try Roc out with.
Cool.
And where does the Roc community hang out?
Where do they...
Zulip, uh, so on the website roc-lang.org.
Yeah, we, uh, we have a Zulip instance.
It's definitely the place to go
if you want to chat about Roc things.
Very cool, we are Zulip users ourselves
and it's nice to always come across other Zulip fans
because we are somewhat few and somewhat far between,
but relatively happy.
Oh yeah.
Cool, Richard, well, this has been awesome.
I'm excited about Roc.
I generally do leave these conversations excited about what I'm talking about because that's
just the way it works.
That being said, there's a lot to be attracted to here and I do want to give it a shot.
Maybe I'll build a little app server or something and see how it goes.
Should I wait till Advent of Code?
I mean, is it worth waiting so I can get that nice syntax sugar or should I just dive right in?
Up to you. I mean, I think, you know, since this stuff's going to change between now and then,
I would probably default to saying wait till Advent of Code. But hey, I mean, nothing's stopping you from trying out, you know, what it is right now.
And certainly the current compiler is way more feature-complete. I would also note that, like, you know,
if a common thing that people talk about like
being interested in when it comes to like
developing a language is like, hey, like,
how do I get started on building a real thing?
I don't know if you personally are interested in that
because obviously you're interested in languages.
I don't know if you're interested in implementing languages,
but if anyone, you know, listening is,
we also have a lot of experience
ramping up new contributors
to the language, getting them involved in, like,
hey, I want to actually make something
in the type checker happen.
There's, like, actual opportunities to do that right now,
whereas that won't be true once we get further along;
it'll be, I don't know,
the number of beginner-friendly projects
will taper off.
Okay, so if I want to do that, is Zulip the answer there
or is there a GitHub?
Yeah, Zulip.
Yeah, we have like a beginner's channel.
It's just like, there's even like introductions in there.
And usually people will just like hop in introductions
and be like, hi, I'm, you know, your name here.
And then like, here's what I'm interested in.
You know, I'm excited about this or that.
And we can chat from there.
Super cool.
The website, roc-lang.org.
What's the website for your podcast?
Oh, softwareunscripted.com.
It's also on like all the different podcast places.
Wherever you get your podcasts.
Yeah, and we're on YouTube now too.
Oh, cool.
Well, Richard, it's always a pleasure.
It's nice to catch up with you.
It's been, I think, a year or two
we were together at the last Strange Loop.
I remember that. Yeah, in person.
Yeah.
Strange Loop, so great.
There was rumors of like things that were going to arise
and not replace strange loop,
but kind of like carry on the spirit.
Are there any events that have done that
or that you know about?
No, no, I haven't.
I remember hearing at that conference,
there were some rumblings about it,
but I don't think anyone actually did it.
Yeah, I agree.
That's what I got going on too.
I thought maybe you might know more than I do, being even more of a language nerd than
I am.
I just think they're interesting.
You build these things.
That's a shame.
Somebody needs to go out there and do something, at least in the spirit of Strange Loop, because
it was such a great time.
Really was.
Yeah.
Oh well, c'est la vie.
Some of the best things in life, you know, they have a beginning, a middle, and an end
and we just look back at them fondly and, eh, there's nothing wrong with that I guess.
Totally.
Well, thanks again, Richard. Awesome time. Looking forward to Roc.
Check it out listeners, check the show notes for all the things
and we'll talk to you on the next one. See ya.
Awesome. Looking forward to it. Thanks.
Have you heard that we're doing a live show on stage at the end of July? That's Saturday,
July 26th at the Oriental Theater in Denver, Colorado, to be exact. Why don't you join us for
the entire weekend if you want? We'll be meeting up at a local pub Friday night, recording live on stage on Saturday
morning, hiking Red Rocks, Saturday afternoon, and who knows what else.
It's gonna be a lot of fun.
Adam and I will be there, Gerhard will be there, even Breakmaster Cylinder will be there.
Get all the details at changelog.com slash live.
Thanks again to our partners at fly.io
and to our sponsors of this episode, Retool.
Check out agents at retool.com slash agents.
Well, Apple's WWDC keynote came and went,
but what does it all mean?
Justin Searls and myself wade our way
through all the nitpicky details
on Changelog & Friends on Friday.
Talk to you then. I'm out.