The Changelog: Software Development, Open Source - Keepin' up with Elm (Interview)
Episode Date: October 17, 2018
Jerod invites Richard Feldman back on the show to catch up on all things Elm. Did you hear? NoRedInk finally had a production runtime error, the community grew quite a bit (from 'obscure' to just 'niche'), and Elm 0.19 added some killer new features around asset optimization.
Transcript
Bandwidth for Changelog is provided by Fastly. Learn more at fastly.com. We move fast and fix
things here at Changelog because of Rollbar. Check them out at rollbar.com and we're hosted
on Linode servers. Head to linode.com slash changelog. This episode of the Changelog is
brought to you by Hired. One thing people hate doing is searching for a new job. It's so painful
to search through open positions on every job board under the sun. The process to find a new job is such a mess. If only there was
an easier way. Well, I'm here to tell you that there is. Our friends at Hired have made it so
companies send you offers with salary, benefits, and even equity up front. All you have to do is
answer a few questions to showcase who you are and what type of job you're looking for. Companies send you interview requests. You can accept, reject, or make changes to their offer even before you talk with anyone.
And here's the kicker, it's totally free. This isn't going to cost you a thing. It's not like
you have to go there and spend money to get this opportunity. And if you get a job through Hired,
they're even going to give you a bonus. It's normally $300, but since you're a listener of
the changelog, they're going to give you $600 instead. Even if you're not looking for a job, you can refer a friend and Hired will send
you a check for $1,337 when they accept the job. As you can see, Hired makes it way too easy.
Get started at Hired.com slash changelog.
Hello, everyone, and welcome to another episode of The Changelog.
Jerod goes solo on this one to talk with Richard Feldman.
They chat about the growing Elm community,
some pretty cool asset optimization features built into Elm 0.19,
NoRedInk's first production runtime error,
and the biggest blockers to folks adopting Elm.
So Richard, it's been almost, well, just past two years since we first had you and Evan on the show
to tell us about Elm, and now we're here to catch up, hear what's new, and learn some more.
So first of all, welcome back.
Yeah, great to be back.
So two years is a long time in internet years.
I'm assuming Elm has leaped forward.
It's still out there.
It's still popular.
People still talk about it.
I still see people retweeting things that you're saying about Elm.
So before we get into the catch-up, why don't you give the elevator pitch what Elm is and
what you use it for.
Yeah, sure.
So Elm is a programming language for building web apps.
It compiles to JavaScript.
People often consider it sort of an alternative to JavaScript frameworks, because in addition to
being a programming language, it also comes with enough tools out the box to build an entire web
app. So we don't really have frameworks in Elm. It's sort of like the language provides enough
that you don't need a framework. And so I work at NoRedInk.
We make tools for English teachers and basically our entire front end, just about,
I guess we have some legacy React stuff from back in the day, but pretty much
everything is in Elm and it's about 250,000 lines of code. Our first commit
was in 2015 so it's been somewhere between three and four years we've been
doing it in
production. Basically, everybody who works on the front end writes Elm full-time. It's
been really great. Some of the stuff that's cool about Elm is, one, it's really, really reliable
and easy to maintain. It has a really amazing, friendly, helpful compiler with really nice error
messages that kind of tell you about problems before they happen to end users. And as a consequence of that, I used to be able to say that we'd had zero runtime exceptions
in the entire time we'd had Elm deployed.
However, unfortunately, last year it happened.
No!
Yeah.
So now I have this graph that I like to show, which is like,
because we have a logging system that tells us if anything crashes.
And so now the chart is like 60,000 JavaScript runtime exceptions, like from our JS code.
And then like, it's, it's not zero, but it's zero pixels on the graph.
Right.
So it is possible.
It can, it can technically happen.
What happened?
Well, it's, it's actually a funny segue into Elm 0.19.
It's a thing that is no longer possible in Elm 0.19.
So that was the root cause there.
Basically, there was a function called Debug.crash.
And it really did what it says on the tin.
If you call it, it crashes the program.
Sounds like you shouldn't call that in production.
You know, it's funny you should mention that.
Because, yeah, you shouldn't call that in production.
But we did. Sure enough, it got run, and then it crashed. So in Elm 0.19, when you run an optimized build, there's a new compiler flag called --optimize, which I'm sure we will get into because that's one of the sort of banner features of the release. When you run that, one of the things it does is say, hey, you're still using some Debug functions, take those out before building for production. So that would have prevented us from having any runtime exceptions.
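To make that concrete, here is a minimal, hypothetical sketch of the kind of "impossible" branch that used to get filled in with Debug.crash in 0.18-era Elm; it is not NoRedInk's actual code. In 0.19 the closest equivalent is Debug.todo, and elm make --optimize refuses to build while a Debug call like this remains:

```elm
module Greeting exposing (greet)

-- Hypothetical example of a "can't happen" branch handled with a Debug function.


greet : Maybe String -> String
greet maybeName =
    case maybeName of
        Just name ->
            "Hello, " ++ name ++ "!"

        Nothing ->
            -- In Elm 0.18 this might have been `Debug.crash "..."`, which threw a
            -- runtime exception if the branch ever ran in production. In 0.19,
            -- `elm make --optimize` rejects the build while this call is present.
            Debug.todo "name should always be present"
```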
But unfortunately, that option didn't exist back then, so we blemished our previously unblemished record. But yeah, you know, it's funny, because when we found out about it and I tweeted it, that got way more retweets and likes than my previous comments that, hey, it's still been, you know, this many months or this many years without any runtime exceptions.
And I have this theory that maybe it's more credible if you say it's been a very small number instead of zero.
That's right.
I guess people kind of wonder, well,
maybe you just don't have your logging set up right.
But we did.
Well, it's like even Superman has his kryptonite, you know,
like zero is just,
it's almost unbelievable because, statistically, I mean, then you're still in a blip. But you know, it shows that even Elm and NoRedInk are humans as well. They're not perfect.
Yeah, no, we very much are not.
Well, let's talk a little bit about the company. So I mentioned No Red Ink often because there
are a handful of businesses who have done what y'all have done in hiring Evan and allowing him
to work on Elm.
And I like to just promote that activity.
Another one that comes to mind is Shopify,
which hires Sean Griffin to work full-time on Rails.
And I think they're hiring other such positions to fill out even more of their infrastructure team.
Dockyard hired Chris McCord to work on Phoenix.
And it's like his job is to work on Phoenix.
And that's something that they believe in investing in. So just curious from the business end and from your perspective,
like what it's been like having Evan working there, the push and the pull, like has he been
able to dedicate most of his time on Elm proper or does he get pulled into the business things?
Help us understand how that's going. So it's in Evan's contract that he only works on Elm.
He's never, so we hired him in January, 2016.
And he's never done anything like directly for the product.
So the product is basically, it's a web application for teachers to help teach their students
English and more specifically writing.
And Evan basically is really just 100% open source engineer. He just
works on Elm. And my boss, full credit to him, it was his idea to see if we could hire Evan.
Basically, what he said to Evan was, hey, the reason we want to hire you is that Elm's been
really great for us and we don't want to mess with the formula. We want you to keep doing what you're doing. We just want to be kind of more
plugged into it. So yeah, I mean, he basically has, you know, complete autonomy to take Elm in
whatever direction he thinks is best. And we trust his judgment because that's what's led us to
embrace it in the first place. So yeah, I don't know. I guess we're aware that that's not a common thing for a company our size.
So for context, I think it's 26 engineers now.
This is probably a good time to mention that we're hiring.
So if you want to come work with me and Evan on building stuff for teachers, we're super
remote friendly.
I work remotely from Philadelphia.
Evan works from Boston.
The headquarters are in San Francisco, but we go anywhere from West Coast Pacific to Central European time as far as time zones go. So the 26 engineers are pretty widely distributed across that. And the overall company is I think it's 67, I want to say something like that, like between 65, 70 people. And yeah, so for a company our size to hire somebody to 100% work on open source stuff,
I guess it's pretty unusual.
But like I said, we really wanted to sort of keep the good thing going and to, you know,
not mess with the formula that has brought us so many technical benefits.
Yeah, Elm has been much praised for its technical benefits.
I'm curious about the community.
So it's been two years since we've talked, and we like to keep our thumb on the pulse
of which direction things are moving.
And anytime you have a project or a piece of software which has technical prowess, you
always wonder, will it catch on?
Will it have a robust community?
Will there be people who adopt it?
And so, like I said, we're here two
years later and it's still going, but curious from your perspective, how much adoption Elm has gained
and how much the community has really built out around Evan and around your work?
Yeah, that's a great question. So, first of all, I think the biggest change that we've seen
in the past year, so we started doing this state of Elm survey. And so comparing 2017
to 2018, the biggest change that we saw was actually more people using Elm on teams like at
work, rather than as individuals as hobbyists. So in 2017, it was something like 18% of survey
respondents said that they were using Elm at work. And in 2018, it was like 40%.
So more than double, which was really fantastic, because one of the, you know, concerns with a
project like Elm is it's like, hey, this is a new programming language. Obviously, that's a bigger,
you know, barrier to a lot of teams trying to adopt it than it is to say we're a library or
we're a framework.
And so there was always that kind of question where it's like, hey, even if this is really great,
even if it has all these benefits,
is that going to be something that teams
are just unwilling to give a real shot to?
And it turns out that the answer seems to be,
actually, they are willing to give it a shot.
And that's really changing.
So as far as absolute growth numbers,
we don't really have a great way to measure that,
in part because around the time of GDPR,
we were like, you know, we could do a bunch of stuff
to make the website compliant with cookies and whatnot,
or we could just stop tracking visits.
And we just decided to stop tracking visits.
So we don't really have even like a bellwether of how many people are...
You could have just blocked all of Europe.
We've seen a few companies doing that as well.
No, I mean, Elm's really big in Europe.
The biggest Elm meetup I ever went to was in London.
It was like 100 people.
And Oslo apparently is just like a hotbed of Elm activity.
Oslo, Norway.
There's just like a bunch of,
there are multiple Elm consulting companies.
There's a bunch of companies using Elm,
you know, to build their
products. So I think the idea of just like, ah, we'll just ignore Europe, it would be a complete
non-starter for us. Yeah. It's interesting you find so much growth of Elm inside of enterprise
and inside of the workplace. Seems like small, new niche languages start off at least, many of
them I'm thinking of in the hobbyist realm,
you know, you think people tinkering, trying it out on their own time, and then maybe it starts
to get penetration as they see value or as they sneak it into their organizations often. What do
you think is the selling point for Elm that's getting so many businesses to hop on? Is it that example, the one I was going to say, the zero... the very few runtime errors? Is that what gets people to really dive in and try it, you know, on work time?
So that's a great question. I want to go back a step, though, and just point out that, you know, we've definitely gotten increasing adoption over time, but I can't say Elm's a runaway smash success. It's not on the level of a React or an Angular, something like that. I like to say we've graduated from obscure to niche, where it's something that a good number of people have heard of, but a much smaller number of people have actually tried, and an even smaller number of people are actually using it professionally. So I think that's been a really positive improvement, but I can't say that we're there yet. Evan gave this talk a couple of years ago called Let's Be Mainstream, and I don't think I can say that Elm is mainstream yet in terms of adoption. Having said that, Evan
also gave a pretty awesome talk about sort of like, what are Elm's goals? Like what does success
look like?
And one of the things he talked about is actually getting back to your point about community.
The conclusion of the talk was basically, let's try to make a really great community
where everybody wants to help each other build awesome things and not worry so much about
adoption or, you know, hacker news or stars on GitHub and just kind of let those things
fall out of having a really
happy, successful functioning community where people are happy. Um, so that's been kind of the,
the bigger focus, um, for like, there are definitely some things we could do that could,
you know, sacrifice Elm's long-term goals, um, for the sake of like driving adoption in the short
term. And we just kind of said that we don't want to do that. Um, we'd rather just let it
grow organically at whatever pace that is. And so far, you know,
we've all been pretty happy with that outcome. I have to say, it does benefit companies who do
adopt Elm for now, because the result of that strategy has been intentionally or not, which I
don't think it has been intentional since we haven't really talked about it on the core team.
But it does mean that there's actually a pretty substantial hiring benefit to companies that adopt it.
So we have seen this and other companies have seen this, that basically there's just more Elm developers out there, like people who want to use Elm at work, than there are companies who have Elm positions, which means that it's actually paradoxically easier to hire high quality Elm developers right now than it is to hire high quality JavaScript developers. Because although there are many more JavaScript developers in the world than
there are Elm developers, there is an even bigger proportion of JavaScript job openings that they
are out there choosing from. So it's sort of like you get to be a bigger fish in a smaller pond by being one of the few companies that's offering Elm jobs. That's been one of the biggest benefits to us outside the technical realm: hiring. Honestly, the number one thing that we get on our cover letters for why the person applied is that they all say the word Elm. It's a selling point for basically everybody who applies for any kind of front-end or full-stack position.
To the point where we've actually transitioned,
like when I joined the company,
which was almost five years ago now,
we had a really tough time hiring front-end engineers.
We were able to get back-end
and some full-stack applicants,
but front-end was just, the well was totally dry
until we started using Elm.
And now it's
completely reversed where we now have a much harder time finding backend people than front
end people, because we just get so many applicants who are interested in using Elm.
We even get some backend people who are interested in using Elm. They're like,
Hey, you know, I I'm a backend engineer, but I'm actually kind of curious about this Elm thing.
And that's what got me interested in your job position in the first place. I'd like to kind
of do a little bit of Elm stuff, even if I'm mainly on the backend.
It's pretty cool.
That's interesting.
It makes me think of kind of on a different angle of the same idea.
Not only is it easier to hire because there's less enterprises that are hiring in Elm,
but there's also somewhat of a relationship between a programmer who will learn a new thing on their own and who's like diving
into these niches because they see the technical merits of a language and the quality of that
programmer. So it reminds me of, actually, a Paul Graham essay from all the way back in 2004. Have you heard of this one, The Python Paradox?
Yep. Yeah.
Yeah, so, I mean, you've got to put yourself in the time of 2004, but his point was that Python programmers,
well, I mean, I just have it pulled up here.
Let me read a little bit.
So he says, in a recent talk, I said something that upset a lot of people, that you could
get smarter programmers to work on a Python project than you could to work on a Java project.
Of course, that would have upset a lot of people, right?
He said, I didn't mean by this that Java programmers are dumb.
I meant that Python programmers are smart.
It's a lot of work to learn a new programming language,
and people don't learn Python because it will get them a job.
They learn it because they genuinely like to program
and aren't satisfied with languages they already know.
And so that Python paradox that he's talking about
no longer applies to Python, right?
Because that has gone mainstream.
For sure, yeah.
You can definitely get a good job learning Python, but it does apply, I think, to niche languages and
the kind of programmers like will go out there and teach themselves or dedicate hobby time.
They're usually pretty good programmers. So it kind of works both sides. Yeah, I think it
definitely correlates with a passion for programming, right? I mean, like, this is
something my wife likes to say is, you know, she'll point out that your hobby is also your work. Like when I'm not working on my editing stuff, my hobby is like doing more programming stuff and like running the Philadelphia Elm meetup and whatnot.
Right. So, yeah. And, and I think, uh, you know, that's, I think it's important that our industry not have that as a requirement, like that people need to do the same thing in their free time that they do in their work time.
But of course, it's inescapable that it's an advantage.
I mean, if you're spending more time engaging with your craft, then you're just on average going to be better.
Otherwise, that would be kind of a waste of time.
And I mean, as as much as I don't want that to become a requirement, I also
appreciate the fact that companies benefit from that.
Yeah.
It's, it's, it's fortunate and yet unfortunate, you know, that's one of those things.
Yeah.
As long as it doesn't become a requirement, I think it's okay.
This episode is sponsored by our friends at Rollbar.
How important is it for you to catch errors before your users do?
What if you could resolve those errors in minutes and then deploy with confidence?
That's exactly what Rollbar enables for software teams.
One of the most frustrating things we all deal with is errors.
Most teams either A, rely on their users to report errors,
or B, use log files and lists of errors to debug problems.
That's such a waste of time.
Instantly know what's broken and why with Rollbar.
Reduce time wasted debugging and automatically capture errors
alongside rich diagnostic data to help you defeat impactful
errors. You can integrate Rollbar into your existing workflow. It integrates with your
source code repository and deployment system to give you deep insights into exactly what changes
caused each error. Give Rollbar a try today at no cost to you. No credit card is required.
Our listeners get access to the bootstrap plan with 100,000 events for free for 90 days.
To get started, head to rollbar.com slash changelog.
So Richard, as you said, Elm is moving from obscure to niche, and its impact has been, I would say, more than niche and more than obscure, because we've well documented it on this show.
And one of the things I love about the changelog and why we try to stay as polyglot as we can,
even though that means we dive into things that sometimes we just can't quite swim that deep. It's because the proliferation and the moving of ideas across different camps and different
languages and different communities is hugely valuable.
And I think two years ago when you and Evan were on, I asked Evan about the feeling he
gets when some of his great ideas and some of the things that Elm has really paved the
way for thinking about the Elm architecture, thinking about just the niceties of the compiler.
And these ideas have either been borrowed or from the great artists, have been stolen and taken to other languages, other projects, other frameworks.
Really a neat thing that has happened.
But that being said, somebody who knows JavaScript today and doesn't know Elm can benefit from a
lot of the stuff that Elm brought to the table.
But what are still some reasons in 2018 to give it a try,
even though a lot of the great ideas have been,
you know,
moved around to other places?
Yeah.
So it's funny that you mention that, because from my perspective, I've developed sort of a strange relationship with the idea of Elm's ideas proliferating in the world.
The big one is the Elm architecture. Redux, I think, is essentially very similar to the Elm architecture in a lot of ways.
That's sort of the biggest
way to do application state management
in the React ecosystem,
certainly.
And even in like Angular and Vue, there's sort of like ways you can opt into that, which some people do. Whereas in Elm, it's sort of a foundational concept, everything's built on top of that,
it's the only way to manage app state. And there's actually no other source of global state at all.
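For readers who have not seen it, this is the standard counter example of the Elm architecture, the shape that every Elm 0.19 app elaborates on; it is a generic sketch, not code from the episode:

```elm
module Counter exposing (main)

-- The Elm Architecture in miniature: one Model, one Msg type,
-- one update function, one view function, and no other global state.

import Browser
import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)


type alias Model =
    Int


type Msg
    = Increment
    | Decrement


update : Msg -> Model -> Model
update msg model =
    case msg of
        Increment ->
            model + 1

        Decrement ->
            model - 1


view : Model -> Html Msg
view model =
    div []
        [ button [ onClick Decrement ] [ text "-" ]
        , div [] [ text (String.fromInt model) ]
        , button [ onClick Increment ] [ text "+" ]
        ]


main : Program () Model Msg
main =
    Browser.sandbox { init = 0, update = update, view = view }
```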
And one of the things that's interesting to me is that if I talk to a JavaScript developer who's never used Elm, it's pretty common that they will say, yeah, the Elm architecture seems really cool, and I appreciate, you know, some of the simplicity that it brings to organizing your app state. And if I talk to Elm developers who have been doing Elm for even a couple of months, nobody mentions that. It's all other stuff. That's just sort of the table stakes in Elm. And
because it's this kind of foundational primitive, the stuff that people talk about are things like,
I really love the compiler error messages, or I love how everything in the ecosystem just works
well together. Or now, with Elm 0.19, two of the things that people commonly mention are, one, my project builds so fast, because there is a big speedup in the compiler. Evan basically rewrote the parser, and then he rewrote the exhaustiveness checker and the type checker, and by the time he was done, pretty much all of it had been rewritten for speed. The result is that somebody posted that they had a just shy of 50,000 line of code Elm project, with something over 100 files and so forth, and the entire thing from scratch, building it plus all of its dependencies on a fresh git checkout, was under two seconds. That's compiling it, type checking it, spitting out the compiled JavaScript; everything start to finish was under two seconds. And that's not even an incremental compile, which of course is much faster. So I think about how many people's Babel builds, which is JavaScript to JavaScript, at that scale are running in under two seconds for a fresh build, let alone for an incremental build.
And that sort of, you know, becomes a selling point, becomes something that people are excited about.
And so I think about it in terms of two things that are really exciting to me about Elm. One is the tools,
so that's like the compiler and the tooling around it, like the package manager. And the other one
is the ecosystem, where basically everything is built in terms of Elm, and I don't really have
to worry about compatibility like I did in the JavaScript world. Basically, whenever I install
a new package, I kind of expect it to just work immediately.
I'll say install it, and then I expect
to get the same experience I would get
as if I were just using a new core library
that shipped with the language.
It's sort of that level of smoothness.
And I get, well, so asset size is, I guess,
another thing we should talk about
because that's maybe the biggest selling point of Elm 0.19.
But it's relatively new,
so I don't hear a lot of people talking about it yet.
Well, it was news to me.
Just announced the blog post,
which is linked in our show notes, August 21st.
So we're recording this October 10th, a couple months back.
But this one didn't make the headlines
as much as some of the other things I've seen from the Elm community, even though it's a pretty big deal, especially nowadays where we're trying so hard to get the time to first paint down to as small as possible for our web apps so that they can reach as many people as fast as they can.
Elm 0.19 has made huge strides with regard to bundle size.
Give us the details. Yeah, so basically the comparison point
that we ended up using was the real world app.
So this is basically a project
that's sort of designed to be a bigger cousin to TodoMVC.
So the basic idea is they have
a really detailed specification
for here's how to build this.
It's like a Medium clone. It's called Conduit. Basically, you can sign in, sign up, post an article, view a feed of articles, favorite articles, follow authors, unfollow people, you can edit some settings. So pretty typical web app type stuff. They basically have a really detailed spec, and they provide all the styles for you, and they have a spec for both the front end and the back end. So if you want, you can try out,
hey, what does it look like if I'm running this application on a React front end and a
Django back end or an Angular front end or with a Laravel back end and all those different
combinations. That's a great idea, by the way. I think I remember seeing that.
It's very useful to be able to swap those in and out and just see how it reacts, right?
Yeah.
And it's a really cool project just to be able to compare.
Like if I'm trying to see, hey, how would this thing be done in this particular technology?
I want to evaluate that technology.
Just having a sort of substantial code base to look at it to say, okay, so I see how this thing maps to that
other familiar thing, you know, technology that I know. So, so we have an Elm implementation of
this. And one of the things that's kind of cool about this is that these are all projects where
the goal is to show best practices, not to like, tune to benchmarks, which is always, you know,
a concern with micro benchmarks is that it's like, well, how much of this is like actually real world versus, you know, uh, something that's
just been kind of like tuned to do the best numbers on the benchmark possible. And pretty
much all of these are like, people just built the apps to, to do a good job showing like how to do
things right. And so what we saw, basically the result of the blog post, is that the React, Ember, and Angular ones have anywhere between, I think it's like 105, something a little bit over 100 kilobytes of minified and gzipped assets for this whole application.
Which is usually like, I don't know, a couple dozen files and a bunch of dependencies and so forth. So 100k is,
and I think down to like, in the 70s, depending on which of those more popular frameworks you're
using. Whereas the Elm one, the entire compiled asset size minified gzipped is 29k, which is
actually just smaller than React by itself, which was a really cool result,
because that basically means that if you're doing a React version of this, even with the
most aggressive possible code splitting, you still couldn't get it down to as small as
the entire Elm app, you know, with no code splitting, which was really surprising.
How much of that 29 kilobytes, just if you can break it out, would be application code, and how much of it would be, you know, framework or architecture code? Do you know, even, you know, percentages?
So that's a good question, and it kind of gets into why it's hard to measure. The reason that Elm got this to be so small is basically that what 0.19 introduces is function-level dead code elimination. So the way that works is,
so ordinarily, you have your application, you install some packages that you depend on.
And by default, in the old world, you would just get absolutely everything that you install with
the package, all the code in that package gets compiled into your bundle. So then you have
module level dead code elimination, aka tree shaking, which is
kind of the target in like the JavaScript ecosystem is like, hey, if everybody uses ES6 modules,
then we can get tree shaking. And that'll be great. And so that's sort of one level of dead
code elimination, where if you don't import a module then it gets excluded
it gets stripped out of your compiled
asset bundle
which is cool
but there's one more level than that
which is function level dead code elimination
which is essentially saying
I import this module
this module exposes 100 functions
let's say
if I'm only actually calling three of those functions, that's all I'm going to get in my compiled output; the other 97 just will get stripped out. Which means that basically, it doesn't really matter how your modules are organized anymore, you can just
put your functions wherever makes the most sense organizationally. And it also doesn't matter what
you're importing, like which modules you're importing, it only matters which functions
you're actually calling. Those are the only ones that get used.
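As a hypothetical illustration of what that means in practice (the module and function names here are invented), only the helper that is actually called survives into the compiled JavaScript, even though the whole module is imported:

```elm
-- File: src/StringExtra.elm -- imagine this is a package module exposing three helpers.
module StringExtra exposing (capitalize, shout, whisper)


capitalize : String -> String
capitalize s =
    String.toUpper (String.left 1 s) ++ String.dropLeft 1 s


shout : String -> String
shout s =
    String.toUpper s ++ "!"


whisper : String -> String
whisper s =
    String.toLower s


-- File: src/Main.elm
module Main exposing (main)

import Html exposing (Html, text)
import StringExtra


main : Html msg
main =
    -- Only `capitalize` is reachable from `main`, so the compiled output
    -- (e.g. `elm make src/Main.elm --optimize --output=app.js`) includes it
    -- and strips `shout` and `whisper`, even though the whole module is imported.
    text (StringExtra.capitalize "hello from elm")
```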
That's super cool. So it does all the transitive dependencies and stuff to figure out which
functions those functions are calling and so on. So you're not going to, you're not gonna be missing
a function at the end of the day.
Exactly. Now, this is really cool. And it's one of the
big reasons that Elm 0.19 was able to get such a small bundle size is that however many dependencies we
pull in, it doesn't really matter how big they are. All that matters is how big are the things
we actually use. And the reason we're able to do this is that Elm has its own totally separate
package ecosystem from NPM. So that whole SPA example doesn't actually use NPM at all. It's just only using Elm packages. And so as a
consequence of that, it means you get this system-wide dead code elimination, which is
really great. But it also means that it's kind of hard to measure what percentage of this is
X versus Y versus Z, because it's kind of like, well, what even is Elm's baseline? Like how much of that?
And the answer is, well, it kind of depends on how much of it you're using.
So, you know, that dead code elimination applies to sort of Elm's standard libraries
just as much as any package.
So it makes it pretty tricky to measure.
I guess what you could do is you could kind of like do surgery on the compiled JS
and kind of like map things back and then like,
like sort of categorize all of them and say, Oh,
this came from here and this came from there.
But I don't think anybody's ever tried to do that.
It sounds like a bunch of work.
Yeah.
I was going to say one thing you could do from the other direction and say,
okay, how much application code do I have? Right.
How much application code have I written and assume that you're using all
those functions?
Because why would you write app code for a demo that's unused, I guess. And then say, how big is that if I just, you know, minify it or do whatever? I don't know, maybe Elm can't do that. It can't just, like, boil this part of the world without boiling the entire thing, especially with its checking and stuff, right?
Yeah, I don't think there's a way to directly say, like, just compile this application code without
its dependencies,
because they all...
Yeah, because it wouldn't compile.
Exactly, right.
It depends on those.
So...
I'm still thinking,
I'm in the minify world,
I'm still thinking just minifying all this down,
but it's actually compiling.
Okay, so...
And what's really cool about that
is that it's a benefit that actually gets bigger
the bigger your code base is.
Like, if you have, you know, an example that's, let's say, 10 times the size of this application, and you've got a bunch more dependencies, because I mean,
the bigger your project is just naturally, the more dependencies you're going to end up having
as a general rule, the more you benefit from this, because each of those additional dependencies
would otherwise represent all that code coming in. But instead, it's like, Oh, no, we're just
going to get what we actually use. And the other cool thing is that Elm shares transitive dependencies. So if I install two
packages that let's say both of them depend on the, I don't know, the JSON library, it's going
to find some version of the JSON library that works with both of those packages and only install that
once. So it can do the dead code elimination,
not only across your direct dependencies, but also across your indirect dependencies as well,
with just the one shared version between them. So you really end up with kind of the minimal set of dependencies you can get. There's some other cool stuff that it does, like automatic record
renaming, like field renaming. One kind of cool thing about that is it does stuff where basically
if you've got records, which are kind of like JavaScript objects, but simpler, they don't have
like prototypes or this or anything like that. And they're immutable. But basically, you'll say
like, maybe you'll have a user record that's got fields like username, email, stuff like that.
When you run elm make with the --optimize flag, what it'll do is it'll
actually compile those down to the smallest JavaScript field names it can come up with.
So instead of username and email, it'll compile them down to like A and B, which is ordinarily
not something that's super safe for a minifier like Uglify to do, because you might be potentially relying on those with, like, dynamic field access using a string variable, right? But in Elm we know that's not going to happen with these records, because that's just not a feature in Elm. You can't do that; you can only access them with a dot. So because of that, it's safe to rename them. And one of the cool things,
which granted probably doesn't make a big difference in practice, but which I think is really cool, um,
is that it actually goes through your whole program and counts usages, like how many times this field is used so that it can use all the single
letter ones for the most used fields.
And then when you run out of single letters,
then it can move into,
you know,
two letters or something like that,
which is just,
you know,
like how much does that actually save in practice?
Okay.
Yeah.
Probably doesn't really matter. Um, but it's a, it's a cool example of like
how much the compiler knows about your whole program.
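Here is a hedged sketch of the record shorthand being described; the type and field names are invented. Because a record field can only ever be read with a literal dot access, elm make --optimize is free to shorten the names in the emitted JavaScript:

```elm
module UserCard exposing (viewUser)

import Html exposing (Html, text)


-- A hypothetical record, similar to the user record described above.
type alias User =
    { username : String
    , email : String
    }


viewUser : User -> Html msg
viewUser user =
    -- There is no dynamic, string-based field lookup in Elm, so under
    -- `elm make --optimize` the compiler can rename `username` and `email`
    -- to short names like `a` and `b` in the compiled JavaScript, giving the
    -- single-letter names to the most frequently used fields.
    text (user.username ++ " <" ++ user.email ++ ">")
```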
We need to sit Evan down and tell him about the law of diminishing returns.
I honestly think that was one of those things where, uh, it was like he had to track it anyway.
And so it was kind of like, well, how should I distribute these things? Yeah. Might as well.
Just, just count.
That's very cool. Function-level dead code elimination, that's the first I've heard of that. You know, the next step is now line-level dead code elimination, so lay that challenge out there. Line by line.
You know, speaking of the next version, speaking of diminishing returns, I mean,
there are other potential optimizations out there. Like it could go even further by like eliminating branches of conditionals
that can't possibly get run
because of like, you're using this library,
but we know that like,
it's not possible for that branch to get run.
However, that's like another really big project.
It's kind of a whole different level of challenge.
And at this point it's like, okay,
basically Evan put something out there
about the design for code splitting. Cause right now Elm does not have like a first class code splitting mechanism. And kind of the goal was, well, let's see how much the dead code elimination does for us. And then let's see if, A, that's something that there's actually demand for. And B, if there is demand, let's see what people's, you know, code bases actually look like, so that we can kind of design the feature that,
you know, is going to make sense for how their assets end up being in practice, because this
is kind of a whole new ballgame. We don't really know what it looks like to, you know, maybe it
turns out that actually, if you try to code split along these module boundaries, that you end up
with actually more than you would have before, because you lose out on some of the code splitting
benefits. So we're gonna have to see how those things look in practice before thinking about, you
know, even further investments into asset size.
So when you when you say code splitting, you're referring to instead of having a single bundle,
you'll have multiple bundles of smaller size that are kind of loaded dynamically.
Is that what you mean by code splitting?
Yeah, exactly.
Sorry, I should probably define my terms.
All right.
Yeah.
So code splitting and lazy loading.
The basic idea is, let's say you've got a single page application.
So you're going to download one HTML file.
And then when the user transitions to different URLs, that's actually all going to happen
on the client side.
You're not actually going to get a page refresh and a flash of white on the screen.
All that's going to happen is that the JavaScript code, the compiled JavaScript code
is going to go and do HTTP requests to the server saying, hey, give me the data I need to render the
next page. And the idea behind code splitting is you're not only going to say, give me the data to
render the next page, but also you're going to say, give me the code to render the next page.
So that way you don't have to download, let's say you've got, you know, you end up with like 50 pages
on your web app.
You don't really want the end user
to have to download all of that
when they do the first page load.
You'd rather have them download
just enough compiled JavaScript
to render that first page.
And then when they transition
to a different page,
you can then say,
okay, I'll on the fly,
load the code for this new page
and then execute it.
So this is, as applications get bigger,
this is something that people commonly have demand for
in the JavaScript world.
That may very well turn out to be something
that there's also demand for in the Elm world,
just because why wouldn't there be?
But we don't really know
what the design constraints would be yet.
I mean, one of the things about performance optimization
is that the bottlenecks are always where you least expect them.
So now that we have this sort of ecosystem-wide function-level
dead code elimination, what does that mean for code splitting?
How does it impact it?
We don't really know because no one's really ever had it before.
Right.
So now that 0.19 is out there and you have this dead code elimination, which sounds like it would be a straightforward upgrade and then recompile, you could at least test it.
I mean, have you guys tried it?
NoRedInk can just see your bundle size decrease from version to version?
Or is it not that simple?
It's not that simple because we are still blocked on some of our dependencies not being updated yet.
So we're very much...
You don't have the goodness yet.
Not quite yet. And we're jealous of the companies that, you know, all of their dependencies have already been upgraded and they're already, you know, gushing about it in the Elm Slack about how awesome it is.
You're like an Android user, like three versions back on their OS.
Well, one version back. But no, I mean, we're very excited about it. Like, it's something where we actually track, you know, what our compiled asset sizes are for each of our different routes.
And so we'll be able to do a pretty cool before and after.
I mean, for us, honestly, the bigger benefit is the compile time.
Cause now, you know, we've got a quarter million lines of Elm code.
You multiply, you know, really fast compile time savings across a big enough code base
that adds up to a lot of increased developer productivity.
Absolutely.
Looking forward to that.
Let's go back to the packages real quick.
So one of the reasons why this is possible,
this function level dead code elimination, like you said,
is because all of the packages are written in Elm
on the elm-lang.org package manager.
And so NPM isn't even touched.
The gift and the curse of NPM
is there's so much out there.
I mean, every piece of code in the universe
is on NPM somehow.
So when we talk about community and advantages,
how much is Elm at a disadvantage
in terms of packages that developers need
versus NPM.
Like, I just think of that because of the limiting factor of you're waiting on some
packages haven't been updated yet.
And I wonder, how big is the package ecosystem?
That's a good question.
I don't know the exact number of packages, but I know that NPM being the biggest in the
world is a lot bigger.
There's no doubt.
So I see it in a couple of different ways.
So one is Elm does have JavaScript interop.
So if worst came to worst,
if I were starting a brand new project
and I really needed to do some,
there was some package on NPM that I was like,
I can't live without this package.
I wouldn't necessarily have to rewrite it in Elm.
I could probably just do JavaScript interop and just get by with that.
Of course, if I do that, then that chunk of code doesn't get me all of Elm's guarantees,
all of its benefits.
The function level dead code elimination, of course, is not there.
The only way to get that that I'm aware of in JavaScript is to do it with the Google
Closure Compiler. So that is like an uglify alternative
that has an advanced mode,
which as long as your code abides by certain rules,
it can do function level dead code elimination.
However, in practice,
it seems like there isn't a lot of code base.
There aren't a lot of code bases out there
that actually happen to abide by those rules
such that they can use it.
As far as I know, the only community that really makes good use of that is the ClojureScript community,
because ClojureScript was specifically designed to emit JavaScript that could be used with the Closure Compiler on advanced mode.
Smart.
Yeah.
So basically, I think ClojureScript and now Elm are the only two communities that have the function level dead code elimination. Although I think ClojureScript tends to do more in terms of wrapping JavaScript libraries as opposed to sort of rebuilding them from scratch, whereas definitely Elm leans a lot more towards let's do it in Elm, and then we get all the benefits. So I think in practice, we probably get, on a percentage basis, more benefit from it.
But I think they're both capable of it.
So hypothetically, the JavaScript ecosystem could get there, but it would require, it
would kind of be on an app-by-app basis.
It would require you to abide by specific constraints that a lot of apps aren't doing
out there in the wild.
Yeah, and I think a lot of this comes down to ergonomics.
I have kind of a whole series of thoughts I've been fleshing out about just comparing how JavaScript has evolved over the past 10 years, since 2008, when it got fast enough to build web apps. A lot of the churn people have been seeing and complaining about, with like, oh my gosh, there's so much stuff coming out all the time and things are changing so fast, really dates back to that.
That performance war that led to JavaScript being really suitable to have rich web apps that were really client-side heavy.
This episode is brought to you by Linode, our cloud server of choice.
It's so easy to get started.
Head to linode.com slash changelog, pick a plan, pick a distro, and pick a location,
and in minutes, deploy your Linode cloud server. They have drool-worthy hardware, native SSD cloud storage, 40 gigabit network, Intel E5 processors, a simple, easy control panel, 99.9% uptime guaranteed. We are never down. 24/7 customer support, 10 data centers, three regions, anywhere in the world, they got you covered.
Head to linode.com slash changelog to get $20 in hosting credit.
That's four months free.
Once again, linode.com slash changelog.
This episode is brought to you by Raygun, who just launched their APM service.
It was built with the developer and DevOps in mind.
They're leading with first class support for .NET apps, also available as an Azure app service, and have plans to support .NET Core, followed by Java and Ruby in the near future.
After doing a ton of competitive research between the current APM providers, where Raygun APM
excels is the level of detail they're surfacing. New Relic and AppDynamics, for example, are more
business-oriented, where Raygun has been built for developers and DevOps.
The level of detail provided in the traces are amazing, the flame charts are awesome, and allows you to actively solve problems and dramatically boost your team's efficiency when diagnosing problems. Deep dive into root cause with automatic links back to source for an unbeatable issue resolution workflow.
Learn more and get started at raygun.com.
Once again, raygun.com.
One question I asked, I do remember asking two years ago, and you were teasing that. I wanted the state of it, because I haven't heard: Elm on the server. Did anything ever come of that, or is it still just a spark, or what's the word? A pipe dream in your eye?
Like, what's the situation?
Is that going to happen?
That's not going to happen.
No, that's a great question.
It's not that it did happen.
It's more that I think we have a much better understanding of what that looks like now.
And basically, as was the case two years ago, and it's still the case now,
Elm does not have first class server side support.
And that's intentional.
We basically want to focus on the browser for now,
but we're sort of keeping an eye on the server.
So one of the perhaps surprising things
that has been kind of guiding this design question
of what should Elm on the server look like, if anything,
is actually WebAssembly.
So one of the things we've been sort of surprised by was
when WebAssembly came out and discovering that actually, this is a thing that all the browser
vendors were on board with, and were actually supporting, there became this question at some
point of what does WebAssembly mean for Elm? And that kind of transitioned to discussions with some
folks at Mozilla and asking about what's
the garbage collection story going to be like, and asking questions about what should Elm's interop
look like. And where we ended up was kind of discovering that actually, it seems pretty
feasible that Elm could someday compile just to WebAssembly, not to JavaScript at all, and actually
that all of the existing JavaScript interop would
still work. And the reason that's possible is that the way that Elm's JavaScript interop works
is essentially through message passing. It's kind of like a pub sub, like maybe event emitter system.
So you kind of, your Elm app sort of broadcasts events out to JavaScript and then listens for
events coming in from JavaScript. And that's the whole model. And then also you can use some web component stuff if it's just
view specific, neither here nor there. But either one of those interop methods works totally fine
if Elm is compiling to WebAssembly instead of to JavaScript. It can still talk to JavaScript just
as easily as it did before. And nobody on the other side needs to know or care
that it's compiling the WebAssembly under the hood, which could be even bigger for assets and also
even bigger for performance, not just because it gets to have lower overhead, but also because
it opens the door to really exciting concurrency stuff. So right now, Elm is actually very much intentionally designed to be a language that's potentially great at concurrency. But a lot of that potential sort of goes to waste
because JavaScript is single threaded. And web workers are, let's say, not usually great for
improving performance of typical web applications in practice, because of serialization overhead, even though in theory they might be able to. But a lot of that could potentially change if Elm compiled to WebAssembly. Now,
if Elm compiles to WebAssembly, that kind of opens the door to Elm on the server having sort of a
built in way to sort of get off the ground in a way where, or sorry, in an environment where
concurrency actually matters a lot more
and you can have a lot more potential benefits from it.
Because on the client side, concurrency is basically a performance optimization.
But on the server, it can be a pretty fundamental thing as far as throughput,
as far as how much the server can handle,
what kind of a load it's actually capable of processing.
So the potential seems to be pretty high there.
And I don't know if that actually ends up the way that we end up going with it.
But it's been pretty fascinating to sort of realize, oh, hey, this actually seems like
not only a plausible path, but actually a likely path at this point. And we've actually started
basically making design considerations. Like
anytime we talk about any kind of change that might impact the language or the core libraries,
one of the questions that always comes up is, will this still be fine if we're compiling to
WebAssembly instead? And it's basically become something of a design constraint.
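For readers unfamiliar with Elm's interop, here is a minimal port-based sketch of the message-passing model Richard describes. The port and message names are invented for illustration, and the JavaScript side appears only as comments since this is an Elm block:

```elm
port module Main exposing (main)

-- Elm talks to JavaScript by broadcasting values out through an outgoing port
-- and subscribing to values sent in through an incoming port, rather than by
-- calling JavaScript functions directly.

import Browser
import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)


port sendToJs : String -> Cmd msg


port receiveFromJs : (String -> msg) -> Sub msg


type alias Model =
    { lastMessage : String }


type Msg
    = SendPing
    | GotMessage String


init : () -> ( Model, Cmd Msg )
init _ =
    ( { lastMessage = "" }, Cmd.none )


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        SendPing ->
            -- Broadcast a value out; any subscribed JavaScript receives it.
            ( model, sendToJs "ping" )

        GotMessage str ->
            ( { model | lastMessage = str }, Cmd.none )


subscriptions : Model -> Sub Msg
subscriptions _ =
    -- Listen for anything JavaScript sends in.
    receiveFromJs GotMessage


view : Model -> Html Msg
view model =
    div []
        [ button [ onClick SendPing ] [ text "ping JS" ]
        , text model.lastMessage
        ]


main : Program () Model Msg
main =
    Browser.element
        { init = init
        , update = update
        , view = view
        , subscriptions = subscriptions
        }



-- On the JavaScript side, roughly:
--   var app = Elm.Main.init({ node: document.getElementById("app") });
--   app.ports.sendToJs.subscribe(function (msg) { console.log(msg); });
--   app.ports.receiveFromJs.send("pong");
```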
So let me make sure I'm understanding you correctly. Are you saying that the work to make Elm compile to WebAssembly
is the kind of work that you would have to do to run it on the server
and so the re-architecting will help you?
Or are you saying that you could actually, once you compile to WebAssembly,
then you just magically be able to run that compiled WASM thing on the server?
Yeah, so I guess I kind of skipped a step.
Okay, thank you.
Yeah, no, that was a total leap.
That's all right.
So basically, Evan wrote out,
like one of the FAQs is,
hey, does Elm run on the server?
And of course, I mean, Elm compiles to JavaScript.
So literally, if you wanted to,
you could compile Elm to JavaScript and run that.
Doesn't mean that you should, right?
Yeah. Well, more importantly, it doesn't mean you're going to have a good time if you do that.
Right, which means you shouldn't do it.
Well, so one of the big things that Evan points out is that basically, you know, compiling to a particular target is about five percent of the work of getting to a good experience.
The ecosystem is a huge deal. And so you have all this enormous amounts of design work and
also implementation work to say, what would a good Elm experience on the server be like?
Elm has different design constraints than... I don't think there's any other language that has
all exactly the same design constraints that Elm has.
And so, you know,
there's definitely design work to do to figure out what would a nice experience
look like.
And actually,
so ReasonML just ran into this kind of recently where,
so ReasonML is another programming language that compiles to JavaScript.
Although technically it's a syntax on top of OCaml,
so it doesn't have to compile to JavaScript, although that's kind of what its big pitch is,
because the syntax looks very JavaScripty. Anyway, a lot of people were saying, well,
if I can compile ReasonML to JavaScript, and I can also run OCaml on my server,
why not use ReasonML on the server? And what quickly turned
out to be the case is that unfortunately, that's not enough to get a good experience right out the
box. There's still a huge amount of work to do to basically build an ecosystem around that to answer
questions like, what should a web server look like? What should database access look like?
There's all these different things, you know, working with queues, working with third party
APIs, all these questions that sort of have to be addressed before you have,
you know, something that's an adequate replacement from an ergonomics perspective for something like
Rails or Sinatra or Express or any of the other alternatives that people commonly use.
So what the folks who ended up doing that in the early days in Reason were basically doing is they ended up saying, well, okay, we're going to write our business logic in Reason. And
then we're actually just going to end up compiling it to JavaScript and then doing a lot of interop
to Express, just to end up basically using Express as our application server.
So, you know, I guess technically you could do the same thing in Elm if you wanted to just use
Elm for your business logic and then use a whole ton of interop to talk to, you know, Express. But that's not really
the sort of the Elm experience that people are accustomed to. People are accustomed to things
sort of just working and being reliable and, um, really only having to use interop in very
exceptional cases, not as like a, you know, bread and butter type thing. So I think, um, that's,
that's kind of where
the big amount of work to do exists
is like what's the design of a really nice system?
And that's what brings me back to WebAssembly
is what are the design constraints of that system?
And if one of the design constraints
is we're running in this single threaded,
albeit asynchronous environment,
like because we're compiling to JS and running it on node, that really constrains the API design
space compared to if we're saying, yeah, we just have complete control over concurrency,
we have first class, you know, threads that we can work with under the hood,
we can offer a nicer API at a foundational level on which that whole ecosystem can be built,
if we're compiling to something that has a really nice notion of threading. And this also gets into other questions, like one of the things that Evan discovered in his research: Evan's a big admirer of Erlang's supervision tree model and sort of the way that they handle fault tolerance and the way that they do servers, which has a lot of really great benefits.
And one of the things that kind of came out of this exploration is that it seems like those ideas are absolutely at their most effective when they are part of the foundational
primitives, as opposed to when you try to opt into them using a third party package,
which happens in a lot of languages.
So that's also sort of necessarily part of that initial design. And the way that Erlang is able
to get really, really high throughput and really great fault tolerance is because it has
really great concurrency primitives and also supervision built in from day one. So
philosophically, I think the phrase Evan used was, you know, I built Elm
because I wanted to make something that had a credible claim of being the best experience you
could get for building front end applications. And for me as a user of that, I absolutely think
he succeeded. But he basically said, look, if I'm going to do all the work to bring it to the server,
I would want that same goalpost.
I wouldn't want to just say it's like Elm, but also on the server, but rather saying,
even if you don't use Elm on the front end, this has a legitimate claim to being,
you know, potentially if you're into the types of things that Elm does,
this would be the best choice that you would possibly have out there for servers.
And that's a much higher bar to clear and requires a lot more design.
I was going to say that's a longer field goal to kick.
Yeah, for sure.
Well, especially because in the front end,
it's basically like, who's your competition?
It's like JavaScript and TypeScript
and then several niche alternatives.
On the back end, it's like Python, Ruby, Go, Scala, Java. I mean, the list just goes on and on. There are so many different alternatives that have been around, in some cases, longer than JavaScript's even existed. And a lot of them have a lot more claims to fame, like certainly Erlang in terms of robustness, or Java in terms of the sheer scale of some of those deployments. So Elm really has a long way to go before I can say, yeah, we're a serious contender in that space.
So you're on the front line
of Elm community and adoption.
You go to the meetups,
the conference talks, all this.
Surely you hear a lot of people
that are trying Elm
or have tried to switch or adopt
and they go back to JavaScript
for one reason or the other.
I always think of myself with Sublime Text and VS Code. You know, every month or two I try out VS Code, and there's always just one or two blockers where I'm like, yeah, I'm going back to Sublime Text, and so I just don't switch. So surely you've heard some of those people where they say, yeah, this just isn't the way I like it, or that's not up to snuff, or I just can't get over this, that, or the other thing. What are some things people have been saying about why they don't adopt Elm?
That's a great question. Man, you're right, I am very plugged into that, and I can rattle off a list. So I would say they break down
into a couple of different categories. A common one is team buy-in. So there'll be one person on
the team who's really excited about Elm and everybody else on the team is just kind of like,
we don't really care.
We don't want to learn a new language.
And the idea just kind of dies on the vine.
That's sad when it happens,
but at the same time, it's like,
you know, teams got to work together.
So I don't think there's really much hope for successfully adopting any technology if only one out of N people actually wants to use it. So that's certainly a barrier.
Another one that comes to mind is basically the learning curve. Elm is a different programming language, and that's just an innately higher learning curve than learning a library or a framework. I kind of think that's the progression: a library tends to have the lowest learning curve, a framework is more than that, and a language is more than that. Especially because sometimes when you get into languages, people end up with roadblocks that aren't necessarily a matter of it being too difficult to learn, but rather that people just aren't interested in learning because there's some aesthetic turnoff. So Elm does have a different syntax than JavaScript. Quite a lot of people say they like the syntax better, but there are some people who say, actually, I don't like the syntax as much, and this just bothers me too much, I can't get through the tutorial. So that happens. That's another reason that people don't end up using Elm.
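For anyone who hasn't seen Elm, here is a tiny invented example of that syntax difference; where JavaScript would use parentheses, braces, and a return statement, Elm looks like this:

```elm
-- Elm syntax: a type annotation on its own line, no parentheses around
-- arguments, no braces, and the if/else is an expression, not a statement.
describe : String -> Int -> String
describe name score =
    if score > 9000 then
        name ++ " is over 9000!"

    else
        name ++ " scored " ++ String.fromInt score
```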
From the perspective of actual APIs and libraries, I think there's a number one thing that people cite. I don't know how many people walk away from Elm because of it, but I have heard at least one person say they did sort of a hack day project where they decided they were going to switch front-end technologies. They tried Elm, they tried Vue.js, they tried React, and they tried, I forget what the other one was. But they ended up not going with Elm because of this, which is JSON decoders.
And so basically, in order for Elm to have the level of reliability that it does, when you get some data from the server, it needs to not only say, I've got this data, now I can work with it; it actually needs to validate and translate it into a format that makes sense for Elm. So if you think about it, in the JavaScript world, if I've got a JavaScript object and I try to access a field on it and it's not there, I get back undefined, and that might very well lead to a runtime exception, you know, the good old-fashioned "undefined is not a function" type of thing.
But in Elm, we don't really have that.
That's all sort of checked by the compiler.
Now, when you get back data from the server in JavaScript,
you can sort of parse that, you know, call JSON.parse, and it'll just give you back a JavaScript object immediately, or it'll throw an exception, which you can wrap a try/catch around.
But assuming it parses, then you've got an object,
and now you're playing by the same rules as normal, which is to say, not much in the way of
rules. And TypeScript basically does this the same way. It just sort of says like, trust me,
and you say, okay, I'm going to give up type checking right at the border. I'm not going to
have the compiler's help. I'm just going to assume that this JSON sort of fits the shape that I
expected and we'll just kind of go from there. Whereas Elm is sort of more serious
about trying to maintain those guarantees
as your program runs.
And because the compiler can't possibly check
what's coming out of your server
because it's just a blob of data,
you know, it doesn't exist at compile time.
There's nothing to check.
Instead, it basically has this library for JSON decoding that will simultaneously parse the JSON, but also sort of validate it against a schema and say, if that schema doesn't match
what we expected, then it will fail. And you can do error handling, but you kind of have to
specify the error handling upfront. So it ends up resulting in a more reliable system, but it does
mean that you actually have to write out a schema for all of your JSON endpoints.
Whereas in JavaScript, you just don't.
You just say JSON.parse and it's just like, okay, good luck.
Elm's not really into the whole like, let's just pretend problems won't happen.
It's like, no, we're going to try and actually handle the problems and do our best to make
sure that if there is a bug, we know exactly where it happened and we can gracefully recover from it.
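To make that concrete, here is a minimal decoder sketch using the elm/json package; the User record and the field names are hypothetical, not taken from NoRedInk's code:

```elm
import Json.Decode as Decode exposing (Decoder)


type alias User =
    { id : Int
    , name : String
    }


-- The "schema": which fields we expect and what types they must have.
userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "id" Decode.int)
        (Decode.field "name" Decode.string)


-- Decoding either succeeds with a User or fails with a descriptive error;
-- there is no way to end up holding an undefined value at runtime.
decodeUser : String -> Result Decode.Error User
decodeUser json =
    Decode.decodeString userDecoder json
```

Compared to a bare JSON.parse, the upfront cost is writing userDecoder, and the payoff is that a mismatched payload becomes a Result you have to handle rather than an exception that surfaces somewhere else later.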
So this annoys some people because they're used to not having to do that, and now this feels cumbersome.
Exactly, yeah. And I mean, people say it's a bunch of boilerplate, right? It's stuff that I don't have to do in JavaScript and I do have to do in Elm.
So we're kind of working on this, and in typical Elm design sensibilities, the goal is not so much to say, well, how can we make this less verbose? The goal is actually to say, well, what's the best way to do this? What's the end goal here? Can we find a system where not only does it improve that, but we actually find something that solves other problems and, along the way, solves that too? And that's been something I've been doing a lot of research into recently. And the short answer turns out to be that the people who have the best experience with client-server data interaction in Elm tend to have a single
source of truth for the schema. An example of this would be, at Google, they use protocol buffers for
everything. Without going into too much detail, the relevant part here is that they have one
schema file that says, here's what my data on the wire is going to look like. Then they have a tool
that they run that generates both the client-side code that's going to decode that, and then also
the server side code that's going to encode that, and vice versa, if you're sending data from client
to server. So by having this single source of truth between the client and the server in this
schema file, and then using code generation at build time to make sure that the two sides agree,
you can actually sort of make sure that you no longer have the problem of, oh, whoops, I changed what my server
is sending, but I forgot to update my client side code to receive it. If you change the one but not
the other, something in your build is going to break. So that has like a separate, really nice
benefit, even beyond the, hey, it's a lot of boilerplate that I don't want to have to deal with. But as a nice consequence of that, it also addresses the boilerplate, because now instead of having to define it in multiple places, you only define it in one place. You just say, here's my schema file, and then it's going to generate my code on the server and generate my code on the client. So rather than having to write out, oh, here's the shape of my stuff on the client, and then also here's the separate decoder, you can just generate both of those at the same time for free from this one schema file.
And while you're at it, also get better reliability
because your build will break
if the client and server get out of sync.
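As a rough illustration of the idea, and not NoRedInk's actual tooling, the generated Elm side of such a schema might look something like this, with the type, decoder, and encoder all produced from the same definition that also generates the server code; the Assignment type and its fields are invented for illustration.

```elm
-- Hypothetical output of a schema-driven code generator: the record type,
-- its decoder, and its encoder all come from one schema definition, so the
-- client cannot silently drift away from what the server sends.

import Json.Decode as Decode exposing (Decoder)
import Json.Encode as Encode


type alias Assignment =
    { id : Int
    , title : String
    }


assignmentDecoder : Decoder Assignment
assignmentDecoder =
    Decode.map2 Assignment
        (Decode.field "id" Decode.int)
        (Decode.field "title" Decode.string)


encodeAssignment : Assignment -> Encode.Value
encodeAssignment assignment =
    Encode.object
        [ ( "id", Encode.int assignment.id )
        , ( "title", Encode.string assignment.title )
        ]
```

If the schema changes, regenerating this file and its server-side counterpart is what makes the build break when the two sides disagree, rather than the mismatch showing up at runtime.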
So we've got something like this, not literally protocol buffers, but on one internal service. And so far the people who've been working on that system are like, yeah, this is great, everything's better. So that seems likely to be the shape of a solution to that particular thing that turns some people off from the language, where it's a solution to the direct pain point while also making something else even nicer.
So tell folks who are interested in learning Elm, maybe they're JavaScript developers and they want to check it out, what's the happiest path to learning Elm?
Yeah, so the first resource I recommend to everybody is just the official guide. If you go to elm-lang.org, Elm like the tree, it's got a nice walkthrough that just sort of gets you start to finish.
But it's pretty short.
So that's, you know, kind of a pro and a con.
It'll get you up and running, but it's not super in-depth.
So I'm writing a book, Shameless Plug, Elm in Action, which goes into a lot more depth.
And it's pretty much aimed at people who know JavaScript, at least to some extent;
it doesn't expect that you're a JavaScript master by any stretch. But it uses JavaScript
as sort of a comparison point. So I think if you're coming from JavaScript, that should be
a nice introduction. If you prefer the video thing, I've also got a course on front-end masters, which I recently updated for Elm 19. And I've got two courses on there. One is intro to Elm,
which is basically a day-long course, just gets you zero knowledge of Elm at the beginning,
all the way up through building an application and kind of working on a larger Elm code base
that does single page application stuff and HTTP and all that. And then the advanced
course is for, you know, maybe come back in a couple of months if you've been digging the Elm
thing and get into some of the really cool advanced stuff. Very good. Thanks, Richard.
This has been a lot of fun. Thanks for coming on the show. All right. Thanks.
Thanks for listening to this episode of The Change Log. Assuming you're loving this show, rate, review, or recommend it wherever you listen from.
It helps us reach new, awesome people.
If this is your first time listening to The Change Log,
you can find more episodes and our other compelling shows at changelog.com slash podcasts.
The edit and mix was done by me, Tim Smith,
and the music, as always, is brought to you by the one and only
Breakmaster Cylinder.
Thanks to our sponsors,
Hired, Rollbar, Linode, and Raygun.
Bandwidth is provided by Fastly.
Learn more about them at fastly.com.
We move fast and fix things.
here at Changelog because of Rollbar.
Check them out at rollbar.com.
And we're hosted on Linode servers.
Head to linode.com slash changelog.
Thanks for tuning in.
See you next week.