The Changelog: Software Development, Open Source - The Rust Programming Language (Interview)
Episode Date: April 11, 2015. Steve Klabnik and Yehuda Katz joined the show to talk about the Rust Programming Language, a systems programming language from Mozilla Research. We covered memory safety without garbage collection, security, the Rust 1.0 Beta, getting started with Rust, and we even hypothesize about the future of Rust.
Transcript
Welcome back everyone, this is the Changelog and I'm your host Adam Stacoviak.
This is episode 151 and on today's show we're talking to Steve Klabnik and Yehuda Katz,
finally having a conversation about Rust on this podcast.
Lots of deep conversation around the underpinnings of this awesome new system language from Mozilla Research.
We have four awesome sponsors for today's show.
Codeship, App Quality Bundle, Toptal, and DigitalOcean.
We'll tell you a bit more about App Quality Bundle and Toptal, as well as DigitalOcean, later in the show. But our friends at Codeship released a brand new feature called ParallelCI, and they want to give it to you today absolutely for free: a 14-day free trial to test out 20 test pipelines with ParallelCI. It's a brand new feature. You can split your test commands into up to 10 test pipelines. This lets you run your test suite in parallel and drastically reduce the time it takes to run your builds.
But with this special offer, you're getting 20 test pipelines.
That's up to 20 times faster than you could have ever run your test suite before.
They integrate with GitHub and Bitbucket, and of course, you can deploy to cloud services like
Heroku, AWS, and many more.
Again, get started today for free, absolutely for free.
Or if you're upgrading, use our offer code when you upgrade to a paying plan.
The Changelog Podcast is that code.
Use it to get 20% off any plan you choose for three months.
Head to codeship.com/thechangelog to get started.
And now, on to the show.
All right, we're back. We've got Steve Klabnik on the call, Yehuda Katz on the call, and the awesome, infamous Jerod Santo. What's up, guys? Excited to be here, excited to talk some Rust. Did you know you were infamous, Jerod? I just got infamous. You just made me so. So guys, we've been wanting to talk about Rust for so long. Steve, I think you were helping kind of coordinate the call way back, maybe I want to say late last year sometime, but it just wasn't good timing.
So today is sort of perfect timing because we're recording this on April 3rd, 2015,
and today is the day that you guys released 1.0 beta for Rust.
So it's a big day. It's a good day, right?
Yeah, it's been great so far.
Not only has it been a long time in terms of getting a show about Rust, but I used to actually post new Rust projects to the Changelog like two years ago. Yeah, so it's a very, very long time. And I've just been so overwhelmed I haven't been doing that lately. But we've picked up your slack a little bit. We've got a weekly email we ship out now called Changelog Weekly, so we've been sprinkling Rust in there as we get a chance to.
So, Yehuda, how about you, man?
How are you today?
I'm good.
I'm also sprinting headfirst with Ember 2.0.
So I'm sort of doing Ember 2.0 and Rust 1.0
at the same time,
which gives me less time than I would like for either.
And I'm really eager for both of those to be done
so I can get back to a sane open source pace again.
Yeah, and I have JSON API 1.0, which is related to 1.0.
It's just a lot of stuff for us.
Yeah, that kind of leads right into the opening for us, which is that you're both core team members on many projects.
And I guess we can take it one at a time, sort of introduce yourself to those who may not know you, but also
sort of what role you play in the Rust project and then how that correlates to other projects
you're working on. So I guess, you know, pick who wants to go first. I guess, Steve, you can go
first. All right. So I am tremendously bad at bios, but hello, I'm Steve. If you don't know
what's up with me, I used to do a lot of Ruby work, but then I found Rust and have sort of transitioned to doing Rust full-time for Mozilla.
But also I did it as a hobby for like two years before that started.
I'm in charge of documentation on the Rust project.
And yeah, I was made a part of the core team, not only from having a large amount of contributions, but also to acknowledge that documentation is a really, really important thing.
And we should have someone involved in making decisions that affect documentation.
Yeah, I've been doing open source for a while. Also, like Steve, mostly in the Ruby,
but also JavaScript space. I got involved in Rust actually because a couple of years ago,
the product that I work on at work needed something that was significantly more performant,
but also embeddable. And I got involved in Rust pretty much at the perfect time, right after it stopped having
an identity crisis.
The identity crisis was over before much of the work had gotten done to make it the awesome
language that it is now.
And so mostly I got involved because I was a really big early user.
And I'm glad Rust involved me as a user.
It's something that I care a lot about
in my open source projects
is having people involved in the project
that are part of the decision-making process
that are just there because they're heavy users.
So I contribute a little bit,
but I'm more the voice of the practitioner.
And definitely the kind of usage that I use Rust for
is a little bit different than the kind of usage that is involved in writing the standard library or the compiler or whatever.
One of the dangers of a bootstrap compiler is you make a programming language that's really good at making compilers.
So we wanted to make sure that we had a broader set of use cases than just building the Rust compiler itself, which is why Servo is important, but also the stuff that Yehuda is doing is very important. Steve, I'm interested.
I think I first came across Rust
back when you first published Rust for Rubyists,
and I went back in time and checked; I think your first commit on that was December 22nd, 2012.
So talk about an early adopter.
What was it about Rust way back in the day
that initially got you excited?
So in college, most of my friends actually did operating systems PhDs eventually. And we had
started to work on an operating system. At that time, we knew that C and C++ had some problems,
and D was a really big thing. So we actually worked on building an operating system back in D1,
back in the college days. And I sort of found the web and went into Ruby
and sort of left the system space,
but they kind of continued doing that.
And I'd always remembered that for later.
And I've always sort of had a love for low-level programming,
even though it's not what I've done in my work
in the last couple of years.
So I was at home visiting my parents for Christmas,
and there's not a lot to do in the middle of nowhere
where I'm from.
And so I was like cruising the internet
and found this announcement about Rust 0.5 being released.
And I was like, oh, this is systems programming language.
I haven't done that in forever.
I would love to get into this.
Let me check it out.
And I found that the tutorial,
while it explains what to do,
after I read it, I didn't know how to write a Rust program.
I read it all.
I sat down at an editor.
I was like, what do I do from here?
So I just got in the IRC room
and I started asking dumb questions,
like literally, how do you hello world
and things like that,
and then wrote them all out
into what became Rust for Rubyists.
So that was sort of that Christmas break.
And I found the language really charming.
I found all the people that were involved
really fantastic.
And so I just kind of stuck with it from there. So Rust is a Mozilla project, Mozilla research. And you work
now at Mozilla on it. Can you maybe speak on their behalf of like, why Rust? What was the point?
What's the win for Mozilla? And what's the thrust of the project? Do you know about the Pwn2Own
browser competition that happens? Yeah, they don't last very long.
But what's interesting is if you look at what the vulnerabilities are.
So I'm not as familiar with this most recent one
because I've been studying the last one a lot more.
But not the one that just ended, but the one before that,
Firefox had four remote code execution vulnerabilities.
And all of those were due to errors
like iterator invalidation and use after free
and this kind of memory unsafety situation.
So Mozilla with Firefox and other projects
writes a lot of C++
and they feel the pain of C++ in many ways.
And so part of the reason to fund Rust development
was to figure out if they could write a good programming language that would make them be able to write web browsers that are safer while not sacrificing performance.
So historically, programming languages have sort of given you this tradeoff of we give you maximum control, but then you have to double check everything versus we don't give you much control, but everything is safe by default.
And so Rust is trying to break that dichotomy down
and give you a language that gives you both things.
Yeah, so I don't work for Mozilla.
You said Rust is a Mozilla project.
One thing I really like about the Mozilla research team
is how much they care about making projects
they work on at Mozilla be real community projects.
And obviously, that's pretty rough because if you have five full-time people working
on something and then you have a community, there's a natural tension between those things.
But I've really enjoyed how much the team there has looked to diversify the group and
increase the number of people involved who are not just people working
at Mozilla. Obviously, Rust has a bunch of PhDs working on it. And that ends up being important
to solve the kinds of problems that Steve was just talking about. Before Rust existed, the whole
story of what Rust is was just an academic concept. And Rust is really the first time that it became
put into use as a production language.
So that's important and does involve hiring some PhDs to do some research work.
But I've also really enjoyed how much the Mozilla team and the Mozilla research organization has,
how much time they spent getting people who are not at Mozilla being important members of the
decision-making process of governance and all that. So when did you first come across it? And how long did it take you between
finding it, it hit your radar, and being like, wow, I'm going to build something with this?
Yeah. So I knew Dave Herman from Mozilla Research. He was a friend of mine. And so I knew that the
Rust project existed conceptually. And like a lot of other people, I was looking for an excuse to
use Rust, but I never really had any good ones. And the product that I was working on at work is called
Skylight. It's a performance monitoring app for Rails apps. And so
one of the things that we do is we just have a thing that runs inside your Rails app, collects
data and sends it to our server. So that's just something that we have to write. And so the first
version of that, like you would expect,
was written in Ruby, and that version would basically go
and would monkey patch your stuff or use that to support notifications
or whatever, and then it would get the information sent to the server.
And pretty early on, we discovered that we had some bad memory usage problems.
This is something that a lot of our users reported.
In especially pathological cases, we could end up using 100 or 200 megabytes of memory. But even,
you know, 20, 30, 40, 50 megabytes of memory is a lot of memory to ask someone to give up
to monitor their application. So I was basically tasked with getting the memory management story
under control. So I went in there and I looked at the Ruby application, really evaluated it,
and I made some good progress. I was able to get the memory usage down. I was able to fix some of
the pathological cases. But the process of doing that made me realize that I simply didn't have
the control over the memory usage that I would need to keep this maintained. And then anytime
anybody ever touched the Rails app, or sorry, the Ruby app, there was a good chance that they would
have significant regression. So I had to do really black magic stuff to even get some
modest improvements. So Carl, at the time, one of our co-founders had started to do an experiment
to write the agent in C++. And he actually made some good progress. But I personally don't trust
my C++ code. And I was extremely nervous about having us as a team maintain code that could, in theory, segfault in production.
So it was fairly important to us that if we're asking people to run code, that that code not be able to explode.
So I started poking around at Rust, and I basically said, you know, Rust is still pretty new, but I'm pretty sure I can get a prototype of a small piece, an MVP, which is just the part that serialized and deserialized the data structures into protobufs and sent them to our server. I think I can get
that part done in a week or two. And so I said, if I can get it done, then we should make further
progress. So I spent a week or two and I was successful at doing that part. And actually,
the reason I did that part was that that part in Ruby was one of the worst parts of the system.
It was the one that was worst on memory. And so pretty quickly, we were able to take this
fairly memory-heavy thing in Ruby,
rewrite it in Rust,
and ship the native binary to our users.
The reason that I was really interested in Rust
and the reason why Carl was interested in C++
was that I had a lot of experience
embedding JavaScript runtimes in Ruby,
both SpiderMonkey and then later on V8.
I worked on those projects.
And embedding a GC inside of another GC is just asking for never-ending pain. So having a language
that we could use without any GC whatsoever and have it do quote-unquote manual memory management
was very attractive. So the TLDR is I had a really big problem, which was write this agent
and have it use less memory.
And I was able, even at that point, to get up and running with something that worked and gave us value in a pretty short amount of time.
And so, it was something that had good memory usage, didn't have a GC, was very fast.
And also that I could ship with very low risk of segfaults in a short period of time.
Awesome. So Steve was excited.
Yehuda was excited.
Anytime you guys are excited, the rest of us tend to get a little bit excited.
A little bit.
Maybe a lot of it.
Let's talk about the language and its defining features.
And I'm going to kind of turn to Steve, since it's your job to write the docs and to explain it to us noobs.
And then Yehuda, you can just kind of hop in and help out wherever you think he needs it.
So, Rust's defining feature is memory safety without garbage collection.
Steve, can you unpack that for us?
Sure.
So, in the beginning, there were programming languages that sort of let you do whatever you want, right?
Like assembly code.
We'll start from that level of abstraction.
Obviously, this started even before that with machine code, yada, yada. Don't want
to get in there. But things like assembly and the languages that came right after it
gave you this low-level access to memory. And the problem with giving you that low-level access is
that you can do bad things. And this is because naturally a processor just does bad things,
right? Like when you teach a person programming, one of the first things you learn is that computers are not smart.
They're actually stupid and they do exactly what you tell them to, even if what you tell them to do is just terribly wrong.
One of the innovations that came along, actually originally in the Lisp paper by John McCarthy, was this idea of a garbage collector. And so instead of you managing memory manually through pointers, you would ask the garbage collector for memory and it would give it to you. And then when you're done with it, it would automatically figure out how to get rid of that memory.
So fast forward, you know, 50 years. This is a very common thing.
Most of us work in languages that are garbage collected.
But garbage collectors, like all things in engineering, have upsides and downsides.
And there are certain domains in which a garbage collector's downside is completely unacceptable.
And there's other domains in which a garbage collector's downsides might outweigh its upsides, even if it's still possibly usable.
So in those domains where it's absolutely impossible, you pretty much need a language, like in modern days C or C++, that does not have one built into the language.
And so Rust is trying to tackle that sort of space
because when you're building a
web browser, you need a ton of performance. People expect their CSS transitions to be really snappy
and JavaScript to operate very quickly. And so performance really, really matters a lot.
And so in that context, a GC is not really an acceptable amount of latency. There's other ones
too. For example, if you're implementing a programming language and you want to write a
garbage collector, it's much nicer if you're not fighting with a host language's garbage collector. So you may want to use one. Or if you're writing an AAA game, you know, when you
need to have 60 frames a second, a GC pause is unacceptable. There's just all sorts of domains
where this kind of thing happens. I mean, I think in this case, it's actually really interesting.
So maybe you want to talk about that for a moment. Yeah, so our domain is just,
we're embedding into a language
that already has a garbage collector
and cycles between two languages
with a garbage collector pretty much cause leaks
no matter how careful you are,
especially if both languages have closures.
So if you're writing and trying to embed
JavaScript in Ruby or Go in Ruby
or interoperating between between Java 8 and Ruby,
the only way that that ends up working correctly is if both parts of the system are talking to the same memory management system.
So if you're, for example, JRuby, the correct solution is that JRuby doesn't come with its own garbage collector.
JRuby uses the host garbage collector.
That's one strategy that you can use.
And that works fine if you're embedding your language
inside of another language, right?
But in this case, Ruby is the host language,
which means that we don't,
and we probably don't want the thing that we're embedding
to use Ruby's garbage collector, right?
We're writing lower level code.
So the only real solution is to have the thing
that we're embedding use the system's memory management.
The system's memory management is malloc, right?
So that's the way to avoid
causing conflicts.
But of course now if the only option
that you have is malloc, now you're writing extremely
low level code that has the possibility
of taking down the entire process with you.
So in my case
we could have written in
a modern dialect of C++ which does
a certain amount of work to
make this plausible.
But I, as a programmer,
I just don't trust myself to write code
that never crashes.
And so I wasn't willing to write,
I wasn't willing to basically go
to NASA levels of engineering
just to write a thing that collected information
from your Rails app.
And I really wanted to use a language
that would give us guarantees about that stuff.
So if Rust didn't exist, I think we would have had a deep struggle inside of the company
because I think there was a strong pressure to use C++
because that would give us the guarantees that we needed in terms of performance.
But a bunch of the rest of us were like, you know, who's going to maintain that?
How are we going to make sure we don't crash?
Who's going to take the support tickets from the guy that's complaining
that we're segfaulting their process?
And so Rust really came just at the right time for us because
it allowed us to say, we're going to be able to have low-level
control, we're going to be able to use the systems
memory management, but we're also going to
have absolute confidence that
the program we write doesn't take down the host
with it. And we're not the only people
that are writing programs with this problem. Pretty much any
C extension in Ruby has this kind of
problem. And I would imagine that over time,
more and more cases where people are using C
effectively as a glue layer or as an embedding language,
more and more people will move to Rust
just as a strictly better C.
Yeah, and so that's like the drawbacks of the GC angles.
It sort of leads right into that.
So memory safety without garbage collection
means that we give you this degree of safety that you're not going to screw things up without needing to use the GC to do it.
I think that's probably a good spot for us to pause and hear from a sponsor when we come back.
I want to hear exactly how it gives us this memory safety without garbage collection.
So let's pause. We'll be back in a sec.
I want to share a more personal note today with you about our awesome sponsor, Toptal. That's T-O-P-T-A-L dot com. It's one of the best places to work as a freelance software developer. We've been working with Toptal, like I said, for about a year, year and a half now.
And over this year and a half, I've gotten to know their co-founder, Brendan, very, very well.
I love what they're doing for the software development community.
They care deeply about software developers having awesome engagements to work on. And they also care about having really awesome software engineers to work with them. So they really make the marriage between a business with great
opportunities and an engineer needing great opportunities to work on. They make that
marriage possible. Well, we took our relationship to the next level and went there ourselves.
We're building something very cool behind the scenes here at the Changelog to power the future of what we're becoming. You're going to love what we're doing. We hired a software engineer through Toptal. His name's Hafael. So if you're a member and you're in the members-only Slack room, say hi to Hafael, he's in there. But I wanted to tell you just how deeply we care about our relationship with Toptal and how much we trust who they are. And if you're freelancing right now as a software developer and you're looking for a way to work with top clients,
maybe even us, on projects that
are interesting to you, challenging,
and using the technologies you want to use,
I would go as far to say that
Toptal is the place for you.
Head to toptal.com/developers, that's T-O-P-T-A-L dot com slash developers, to learn more, and tell them the Changelog sent you.
Alright, Steve and Yehuda, we're talking memory safety without garbage collection.
Sounds like Rust has that as a defining feature.
You said it has it, but how does it actually work?
I'll just jump in and say one thing and then let Steve answer it in more detail.
I wrote a blog post about this called Rust Means Never Having to Close a Socket,
which I would recommend people read to get more details about this stuff beyond what we'll talk about here.
But one thing that was pretty like a pretty big aha for me when I started writing rust is that garbage collection is actually pretty awesome at managing the resource called memory.
So garbage collection is able to say when I create a new, you know, if I create 5K of memory and I no longer need it, it will get cleaned up.
But garbage collection is actually very bad at closing resources like files, locks, and things like this.
And if you ever wrote C++, which it turns out most people who write Ruby didn't, there's actually a pretty nice system in C++ and a bunch of other languages,
which basically will automatically
manage resources in the same way that memory is managed. Unfortunately, in C++, that comes at the
cost of a lack of safety, which basically makes it a non-starter for people who are trying to
write in a safe language. But it didn't really occur to me before that while I had this awesome
strategy for dealing with memory management, basically, I just did something that got new.
I got a new thing.
And when I was done with it, it got cleaned up.
But if I started to use a file or if I started to use a lock, I suddenly had to do all this manual work to make sure it got cleaned up.
And if I used a socket outside of the area where I was allowed to use it, just like if I tried to use memory outside the area I'm allowed to use it in C or C++, I would get weird errors. And it just didn't occur to me that one of the trade-offs for having a
garbage collector, which is very good at managing memory, is that suddenly I have to do all this
manual work to manage sockets and other kinds of resources, files and things like that.
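A minimal sketch of the point Yehuda is making here, illustrative only and not code from the show: in Rust, a file handle gets cleaned up by the same ownership rules that clean up memory, so there's no separate close step to forget. The file path is just a hypothetical example.

```rust
use std::fs::File;
use std::io::Write;

fn write_log(path: &str) -> std::io::Result<()> {
    // `file` owns the underlying OS handle.
    let mut file = File::create(path)?;
    file.write_all(b"hello from Rust\n")?;
    Ok(())
    // `file` goes out of scope here, its destructor runs, and the handle
    // is closed automatically -- no explicit close(), and no way to
    // accidentally use the file after it's gone.
}

fn main() -> std::io::Result<()> {
    // Hypothetical path, just for illustration.
    write_log("example.log")
}
```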
So Steve can answer the original question. Yeah, that was a good detour though. So it's important
while we're sort of characterizing C++ as this completely no-holds-barred, oh my god, what's going to happen zone,
obviously these are problems that C++ programmers have to deal with, right?
So they have this solution and some terminology they've come up with that we sort of have gone back and forth on the actual usefulness of,
but I guess I'll approach the problem in this way. So when you,
when you make a new variable, and again, we'll stick to memory, even though as you had mentioned,
this is applicable to everything, not just memory. And in some ways the non-memory stuff is cooler,
but whatever, you got to start somewhere. When you say like, I want a variable, that variable
lives for a certain amount of scope, right? So it's valid from where you declare the variable
until that variable goes out of scope. And at that point is when you either, if you're in a manual situation, you have to clean it up yourself.
Or if you're in a garbage collector, it will detect that it's dropped out of scope and then clean up the memory.
And so this is called by C and C++ programmers a lifetime.
So the amount of time, which is sort of weird because it's really based on lexical scoping, generally speaking, that a variable is like valid. So a lot of the most pernicious problems happen
in C and C++ where you have a pointer to some sort of thing, and then the thing you're pointing
to goes out of scope, and therefore the memory is freed, and now you have a pointer to invalid
memory. So what Rust actually does is it basically understands both the scope that variables go into and out of and what things are pointers to those things.
And it's able to, at compile time, tell you, oh, hey, that variable is going to go out of scope and therefore this pointer would be invalid, so this is going to be an error.
And we call that the system of ownership and borrowing, which sort of formalizes these semantics. But that's sort of the basic idea is that Rust is able to statically determine what stuff is in scope and what stuff
is out of scope. It's doing it all at compile time, correct? Yes. So there are some things,
there are some more advanced things you can do to like make those checks be at runtime. But the core
of the system is an entirely compile-time check that has no runtime overhead whatsoever.
So you get the exact same assembly code as if you had written correct C,
but you get the compiler checks to make sure that you're doing the right thing.
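A small illustrative sketch of the check Steve is describing, not code from the show: the compiler rejects a pointer that would outlive the thing it points to.

```rust
fn main() {
    let r;
    {
        let x = 5;   // `x` only lives inside this inner scope
        r = &x;      // borrow `x`
    }                // `x` is dropped here, so `r` would now dangle
    // println!("{}", r);
    //   ^ uncommenting this use of `r` is a compile-time error:
    //     error[E0597]: `x` does not live long enough
}
```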
And one thing that was pretty mind-blowing to me,
so I write Ruby code and JavaScript code,
and I'm used to closures all over the place and pointers all over the place.
And however many references that you could possibly want pointing to the same thing and aliasing and all this stuff, I was used to all that stuff.
So when I first started writing Rust and I learned that the basic model of Rust is that some pointer only has one owner at a time.
And if I want to give it to somebody else, I have to give it to someone else and stop using it.
Or I can lend it to somebody else for a fixed period of time and then they can't hold on to
it afterwards. I thought that this would be extremely restrictive. I thought that it would be very,
very difficult to program in this way. And in fact, that's effectively what the academics who
came up with this idea thought. They thought this is like a cool idea, but it's extremely
restrictive. It'd be very hard to program in it. And one of the things that I have found as I've
written a lot of Rust code now is that a shocking amount of the code that you already write,
including code with closures, including code with pointers, structures that you put stuff
into and all this stuff, actually can be described in terms of ownership and borrowing. And that when
you start thinking about things in terms of ownership and borrowing, the structure of your
code becomes a lot clearer, right?
So one thing that you may have been able to suss out from what we said is that it's almost impossible in Rust, effectively impossible, to cause a traditional kind of memory leak.
Because traditional kinds of memory leaks are caused by, let's say I have a reference to something and you have a reference to it.
I don't know when I can clean it up and you don't know when you can clean it up. So from a local perspective, nobody knows when the
right time to clean something up is. And you can get into situations where nobody is correctly
freeing the memory. And so you just get a leak. And this can happen even in a garbage collected
language with complicated enough situations. But in Rust, there's never a situation where two of us
think that we own the pointer, right? Either I own the pointer and I let you borrow it, or you own the pointer and
you let me borrow it. But only one of us is responsible for cleaning it up. And in Rust,
neither of us actually has to do the manual cleaning up. It's just an automatic reflection
of these rules, right? So whoever owns the pointer is responsible for cleaning it up,
and the compiler will do that
cleaning up automatically for you.
So I was just amazed
after I wrote, you know,
10 or 20,000 lines of Rust code,
including the big complicated program,
which is Cargo,
and the pretty complicated program,
which is Skylight,
at how infrequently it turned out
that I wanted to go use something
that let me get more dynamic rules.
Like how often the static set of rules
work perfectly for what I needed.
So you have ownership
and only the thing that owns a piece of memory
can write to it or read to it or both.
So that's actually,
Steve, you can go ahead.
So it's actually a little bit different than that.
The owner is the person who is responsible for deallocating that resource.
So whenever they're finished with it, they get rid of it.
They have like 777 permissions on it, right?
They have root permissions for that object.
They can do whatever thing they want to do to it.
And once they're finished, they're responsible for doing the cleanup.
And in this case, Rust inserts that cleanup code for you.
When you say the person,
do you mean like the variable or the thread?
The variable, sorry.
This is a discussion that's hard to have over words.
It's like, oh, makes it a lot easier.
The way I think about it is that it's the scope that created it.
So when you make a variable,
you by definition have to create it inside of some scope in code, right?
That scope in code is the thing that's responsible for cleaning it up.
So if you make a variable and then don't do anything else with that variable,
and then the scope of code that you created in is finished, that variable will get cleaned up.
But you are allowed, since you created it, to give it to somebody else.
And if you give it to someone else, the same rule applies, right? It gets cleaned up when their scope of code is
left. And it's a recursive concept, so it's sort of hard to see how effective it is.
But basically what that ends up meaning is that if you look at any piece of code, you can tell by
looking at it exactly which things that came into it will be cleaned
up when you leave it. Because either you didn't give it to somebody else or you did, right? Those
are the only options. Either you gave it, you transfer ownership to somebody else or you didn't
transfer ownership to somebody else. And that's true at every local point, every local scope in
the program. So what's the process of transferring ownership? Like what's the semantics around that?
So transferring ownership actually is pretty cool. It's just the default way that you give something to something else in Rust.
So if I write a function that says, this is a function that takes like a string, let's say,
and I call that function with a string, that is me transferring ownership. And there's also
the ampersand operator in Rust, which is how references are written in C or C++.
If you use that, then you're lending it.
So effectively, the transferring ownership is not like a complicated API,
like a channel or something like that.
Transferring ownership is just done by calling a function that tries to take ownership.
And the way you take ownership is that you take a value without the ampersand.
And if you take a value with the ampersand, then you're basically just promising that you won't hang on to it after the point at which you return.
So borrowing is kind of like borrowing with real things.
So if I transfer ownership, I'm saying, hey, you have access to it now and you can do whatever you want and I don't care anymore.
Lending is saying, hey, I'm lending you this thing, but you got to give it back to me when you return.
You can't hang on to it later,
like in a closure or something like that.
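To make that concrete, here's an illustrative sketch (not from the show) of the two styles: passing by value moves ownership, while the ampersand only lends it out.

```rust
// Takes ownership of its argument: calling this is a "move".
fn consume(s: String) {
    println!("consumed: {}", s);
}   // `s` is dropped here; the caller can no longer use the value.

// Only borrows its argument; the caller keeps ownership.
fn inspect(s: &String) {
    println!("inspecting: {}", s);
}   // the borrow ends when this returns.

fn main() {
    let owned = String::from("hello");
    inspect(&owned);   // lend it out, get it back afterwards
    consume(owned);    // give ownership away
    // println!("{}", owned);
    //   ^ compile-time error: `owned` was moved into `consume`
}
```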
There are also mutability and immutability rules
to make sure that there's no concurrency issues there too.
Just to like mention that that's part of it as well.
Yeah, so mutability is actually interesting
because mutability and rust
is actually a different concept to this.
And the whole mutability question
is just the concept of uniqueness, which is that only one thing can mutate something at a
time. So if I give you access to something to mutate, and I also give Steve access to something
to mutate, that's bad. That means that you guys can write on top of each other and can't have any
expectations about what could happen. You could crash, right? But I can give you access to
something to mutate, and then later on give it to Steve to read.
Or I can give all of you a copy of a thing to read
and that's fine, right?
I just can't give,
I can't give somebody access to something to write
and anybody else access to write or read at the same time.
And that's also totally static.
A compiler figures out whether you're doing that or not
and yells at you if you're doing the wrong thing.
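A quick illustrative sketch of the uniqueness rule Yehuda is describing, not from the show: while a mutable borrow is live, no other reads or writes are allowed, and the compiler checks it statically.

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    let writer = &mut data;  // exclusive, mutable borrow
    // let reader = &data;
    //   ^ compile-time error while `writer` is still in use below:
    //     cannot borrow `data` as immutable because it is also
    //     borrowed as mutable
    writer.push(4);          // last use of the mutable borrow

    let reader = &data;      // now shared, read-only borrows are fine
    println!("{:?}", reader);
}
```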
So this probably lends itself pretty well into the concurrency story. Steve,
could you talk about that? Yeah. So Rust actually has a bunch of really interesting and unique concurrency things just about it in general. So the first one is that the question
that's on everybody's mind with regards to concurrency today is like, what's your threading story? Do you have channels and that kind of thing?
So what I mentioned is that originally, well, I shouldn't, I'll do it that way. Fine. It's like,
you can always pick the way to tell the story, right? So I'll give you a little bit of history.
Rust used to have both 1:1 and M:N threading built in. And the problem with that
is that the abstraction layer that let you choose,
like you could basically say in your Rust program, this Rust program will use 1:1 threading, or this one will use M:N threading, and it was an abstraction, so you would just pick.
So then that overhead meant that green threads were not actually significantly more lightweight
than regular threads. And since Rust is a systems programming language, you need to have access to
system threads, but M:N threading is a runtime kind of issue.
So we made the decision to switch to just 1:1 threading. So in Rust, as of right now, by default, it's just got 1:1 threading built in.
Now, there's a whole bunch of discussion you get into around that.
For example, on Linux, threads spawn a lot faster
than you may have expected in the past.
And so it's not that M:N is inherently superior or inferior to 1:1, but we've just got 1:1 right now.
You can actually write M:N threading as a library
because Rust is a low enough level programming language
that I.O. is a library concern, not really a language concern.
So there's several people, including one of the people at Tilda,
who's writing alternate I.O. libraries that give you other concurrency models, et cetera.
But what's important about Rust concurrency
is that we have certain types built into the type system
that have certain concurrency properties.
And the standard library uses those to ensure correctness,
which means that if you write an alternate IO library,
you can also gain the same level of safety
with your concurrency that we do built into the language. So for example, Rust has a channel abstraction that's entirely
written in library code. And you can use channels if you'd like, those channels are great. But if
for some reason you don't like the way that we implemented channels, so like our channels are
multi-producer, single consumer channels. If you wanted multi-producer, multi-consumer channels,
you would need to write your own. But because the channel is a library type and not built into the language, you could
get the same safety guarantees around them that we have, which is really, really cool.
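As a small illustrative sketch (not from the show), the standard library channel Steve mentions looks like this in use; it's multi-producer, single-consumer, and it's plain library code.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Multiple producers: clone the sending end for each thread.
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("hello from thread {}", id)).unwrap();
        });
    }
    drop(tx); // drop the original sender so the channel can close

    // Single consumer: the loop ends once every sender is gone.
    for msg in rx {
        println!("{}", msg);
    }
}
```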
And the latest example of something that we did is you could actually... I've been meaning to write
a blog post about this. I don't have a good link for more explanation, but at some point,
I'll have something for you. You can actually do mutable concurrency over stack
allocated data and prove that it's safe
and not have race conditions
in it, which is, well, data races,
which is super impressive and
really hard to explain without code, so I'll just
drop that as a thing.
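For the stack-allocated case Steve says is hard to explain without code, here's a rough sketch using today's standard library. Note that std::thread::scope was stabilized years after this episode aired; at the time you would have reached for an external crate for scoped threads. Two threads mutate disjoint halves of an array on the stack, and the compiler can verify there is no data race.

```rust
use std::thread;

fn main() {
    // Stack-allocated data, mutated concurrently.
    let mut numbers = [1, 2, 3, 4, 5, 6];

    // Split into disjoint halves; each thread gets exclusive (&mut)
    // access to its own half, so no data race is possible.
    let (left, right) = numbers.split_at_mut(3);

    thread::scope(|s| {
        s.spawn(|| { for n in left.iter_mut() { *n *= 10; } });
        s.spawn(|| { for n in right.iter_mut() { *n *= 10; } });
    }); // the scope guarantees both threads finish here

    println!("{:?}", numbers); // [10, 20, 30, 40, 50, 60]
}
```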
We have very strong, very
good safety guarantees around concurrency that are
really fantastic. Yehuda, I'm sure you have more to say.
Yeah. Ultimately, the ownership story is basically exactly what you want for data, right?
So typically, I mean, everyone knows that shared mutable state is the root of all evil when it
comes to concurrency. And a lot of languages try to solve that by restricting your ability to have
shared state or mutable state. And Rust basically says shared mutable state is indeed bad,
but shared state is fast and also often very intuitive. So what we're going to do is we're
going to use the ownership system to stop you from sharing and mutating at the same time.
So the same ownership system that you already learned for doing single-threaded programs is also perfect for multi-threaded programs.
So as Steve sort of alluded to, you can write a program that does a fork-join model,
and as long as all 10 of the things that are forking only read the data, that's totally safe,
and the Rust ownership system knows how to think about that. And if you want to have a forked joint system where you have 10 things that are
each mutating something, as long as you don't give them the same thing to mutate, that's also fine.
And so basically what Rust, sort of the innovation of Rust is that Rust has this really
robust ownership story, ownership and borrowing. And ownership and borrowing is
pretty awesome for reasoning about things. It's awesome for performance. It's awesome for letting
you allocate things in the right place, either the heap or the stack, whatever's appropriate.
But it's also really awesome for letting you do things on a lot of different threads and not have
to worry that those different threads are going to be stomping all over each other because things
only own the things that they should own, right?
And Rust already guarantees that you only have a unique owner, one unique owner per thing.
So that's basically perfect. And the awesome thing is that this ownership system is not
a dynamic thing. So like in JavaScript, for example, there is also an ownership system
and you can pass things to another thread and the other thread can do something with it and pass it
back.
But in JavaScript, every single time you pass something around, you have to do all these dynamic checks. And that means that there's a lot of extra overhead to enforcing a pretty good rule,
right? And in Rust, because of the fact that the ownership system is entirely static,
the actual cost is no different than doing shared memory concurrency in C or C++,
but you have guarantees about what can happen
because of the underlying model.
So I think the TLDR is just,
when you start learning Rust,
like the ownership system feels pretty daunting,
but it turns out that it's effectively
one concept that you have to learn
and then it unlocks all these superpowers
that let you write really fast
and complicated code safely.
It's also really just generally,
everyone's terrified of writing concurrent code
because it's very difficult.
And Rust makes many concurrency errors
be compile time errors,
which is just mind-blowing the first couple times
that you see it, for sure.
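One illustrative example of a concurrency mistake turning into a compile-time error (a sketch, not from the show): Rc's reference count isn't thread-safe, so the compiler simply refuses to let it cross a thread boundary, while the atomic version, Arc, is accepted.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc's reference count is not atomic, so Rc is not `Send`.
    let local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || println!("{:?}", local));
    //   ^ uncommenting this is a compile-time error:
    //     `Rc<Vec<i32>>` cannot be sent between threads safely
    println!("on this thread only: {:?}", local);

    // Arc uses atomic reference counting and is accepted.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = thread::spawn(move || println!("elsewhere: {:?}", shared));
    handle.join().unwrap();
}
```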
Well, let's take a break here.
We'll hear from another sponsor.
And then when we get back,
I'm going to give you guys a chance to think
during the sponsor break,
what's your favorite feature besides ownership
and all that it implies. We'll give each of you a chance to answer that question when we get back.
Over 400,000 developers have deployed on DigitalOcean's cloud.
DigitalOcean is simple cloud hosting
built for developers. In 55
seconds, you'll have full root access to a cloud
server and it just doesn't get any easier than that.
Pricing plans start out affordably at $5 a month
for half a gig of RAM,
20 gigs of SSD drive space,
one CPU, and one terabyte of transfer.
All DigitalOcean servers run on blazing fast SSDs
with tier one bandwidth
and come with private networking.
Use the promo code CHANGELOGAPRIL to get a $10 hosting credit when you sign up.
Again, changelogapril, $10 when you sign up, new accounts only.
Head to digitalocean.com to get started.
And now back to the show.
All right, we are back, Steve.
We've talked about ownership.
We've talked about how that kind of spreads its way through the whole system and gives you lots of wins.
The memory safety stuff, the security stuff.
Surely there's other facets to Rust.
What's another feature that is exciting to you?
Yeah, there's tons of cool stuff.
That's the most unique one.
That tends to be the one we talk about most often.
My personal favorite pet feature, one that is in other languages but that Rust has a really interesting take on, is closures.
So Yehuda alluded to this a little bit earlier,
but ownership is still involved in closures, and the point is that because of that system, Rust's closure implementation feels just like Ruby's closures.
So for example, let's just talk about a classic example: say you have a for loop with an array and you want to add one to every element of the array, right?
So normally you're doing this
low-level i equals zero,
i plus plus, you know,
all that kind of shenanigans to deal with this loop managing
overhead because you don't want to pay the
cost of a full closure
and a function call and all that kind of stuff that's indirect.
But due to LLVM's
optimizations and the way that we've implemented
closures,
in Rust, you wouldn't write a for loop like you would in C.
You write a for loop like you would,
well, not a for loop,
but you can write a for loop in Ruby,
but you could also use an iterator
as the most important part.
So the closure as an iterator system
ends up giving it a super high-level feel,
but thanks to the implementation details,
we're actually able to, in an optimized build,
compile to the same
assembly language that you would get out of a for loop if you were doing the low-level stuff. So it
gives you this really high-level feel while still giving you low-level performance. And so, yeah,
to me, closures are a super cool way, and the way that they're implemented is amazing.
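A tiny illustrative sketch of the style Steve means (not from the show): the closure-based iterator version reads like high-level code, and an optimized build compiles it down to the same kind of loop you'd write by hand.

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4, 5];

    // High-level, closure-based style: no index bookkeeping at all.
    let bumped: Vec<i32> = numbers.iter().map(|n| n + 1).collect();
    println!("{:?}", bumped); // [2, 3, 4, 5, 6]

    // Or mutate in place by iterating mutably.
    let mut in_place = numbers.clone();
    for n in in_place.iter_mut() {
        *n += 1;
    }
    println!("{:?}", in_place); // [2, 3, 4, 5, 6]
}
```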
And it does that without having the problem of, well, I guess there's a few different kinds of closures in Rust, is the short version of what I'm saying. But the effect of that is that you can have a
closure that basically does represent synchronous stuff, and that handles the ownership story. It
handles the borrowing story basically automatically. You don't really have to think about what's
exactly happening with a closure like you might expect from a low-level memory managed language.
And it could do things like, oh, you didn't capture
any variables in this closure, so
I'm just going to implement it as a regular function with no
environment overhead and stuff like that,
which is really impressive.
I think the point I was trying to make is that in JavaScript,
a closure is sometimes used for a loop
which can inline everything and just run it
right now, and sometimes it's used for like a callback.
And those are basically the same thing in JavaScript.
So you can't, it's hard to figure out what exactly is going on.
In Rust, you can tell ahead of time, like this is a closure that's going to be used later.
It's going to be used on a thread later.
So the rules about ownership are more restrictive versus this is a closure that's running right now because it's mapping over an array. And that
the rules about that are much less restrictive.
You can basically do whatever you want in there.
So how do you know the difference? Do they just look different
or context?
So sometimes a lot of it's inferred.
Some of it is that
when you take a closure, you can say,
for example, this is a closure that will only run one time.
And if it's something that's a closure that can only run one time,
that means you can transfer ownership into the closure.
Now, the person who wrote the closure doesn't have to think about that.
The person who takes the closure has to say,
I'm only going to use this one time, right?
So there's a few different flavors of closure,
and they're mostly described by the person who's taking them.
The person who's calling them just writes a regular closure like you would in Ruby,
and you get exactly the right set of ownership rules that you would want.
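Roughly what that looks like in code, as an illustrative sketch rather than anything from the show: the function that accepts the closure declares how it will call it, and the caller just writes an ordinary closure.

```rust
// Promises to call the closure at most once (FnOnce), so the closure
// may consume values it captured.
fn run_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}

// May call the closure many times (Fn), so the closure can only read
// what it captured.
fn run_twice<F: Fn() -> usize>(f: F) -> usize {
    f() + f()
}

fn main() {
    let name = String::from("Rust");
    // `move` transfers ownership of `name` into the closure; since
    // `run_once` calls it only once, consuming `name` inside is fine.
    println!("{}", run_once(move || name));

    let count: usize = 21;
    println!("{}", run_twice(|| count)); // reads `count` twice: 42
}
```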
Awesome. So we're going to move on and talk about some security stuff, but I'll just open the
floor here. Anything else feature-wise that you guys are super excited about?
One thing we didn't talk about at all, which is kind of mind-blowing to me, is the type system.
So I wrote a lot of Ruby and JavaScript for a long time.
I pretty much didn't write a considerable amount of code with types forever.
I don't really like Java's type system at all.
The first few times I had to write Java,
it felt like there was a lot of ceremony.
That's not to say Rust doesn't have ceremony.
Of course, any language of the type system does.
But one thing that I really like about Rust's type system is that it takes from a lot of what is well known about expressiveness to get to a point where, and this is still sort of a someday thing,
but you can see a world where the expressiveness of what you can do with the Rust type system is pretty close to the expressiveness of what you can do with a dynamic language while being totally safe and fast.
And my favorite example of this is in a dynamic language, when you write code that's polymorphic,
in other words, let's say you take a function, you take something and you call toString on it.
That toString is just looked up at runtime and it calls the right toString. That's what
polymorphism is all about. In Rust, what you would do is you would say something like,
I take a function that implements toString.
So far, that's not that interesting. Java has
that. Go has interfaces.
But in Rust, the normal way that you
say, my function takes something that implements ToString,
what that does is every single time
you call it, it creates an optimized function
that is optimized for the exact
type that you
called it with. So instead of it
going and looking up at runtime and trying to find that toString function,
which has some costs and also eliminates inlining,
if you have to look something up at runtime,
you of course can't inline it.
In Rust, you're getting a specialized version of that function
for exactly the thing that you called it with,
it's called monomorphization.
And what that means is not just that you avoid the overhead
of going and finding the function, but that you can inline.
And that's actually how Steve's trick with calling .map on an iterator and having that inline all the way down to the right kind of assembly works.
The way that that works is that every step of the way, you're actually calling functions that are generic,
and they're implemented in a way that's very easy to write specialized versions.
So you write the specialized version, but now that you have the specialized version, you can apply other optimizations.
And by the time you're done running all the optimizations,
you have something that's as fast as writing it by hand,
which is pretty nice.
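A small illustrative sketch of the ToString example (not from the show): the generic function below gets a specialized copy for each concrete type it's called with, which is what monomorphization means, and each copy is a candidate for inlining.

```rust
// A generic function: "give me anything that implements ToString."
fn shout<T: ToString>(value: T) -> String {
    // Resolved at compile time for each concrete T -- no runtime lookup.
    value.to_string().to_uppercase()
}

fn main() {
    // The compiler emits one specialized version of `shout` for i32
    // and another for &str.
    println!("{}", shout(42));      // "42"
    println!("{}", shout("hello")); // "HELLO"
}
```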
Okay, one more point on security.
I know the whole point is safety plus speed.
I want to ask one question about security
and then we'll move on to some other stuff
because we're cruising right along here.
The whole point is that we can't shoot ourselves in the foot with memory management.
Is it a panacea using Rust?
Can you just feel 100% safe?
Or can you still possibly write some code that's going to be exploitable?
So not every error is a memory safety error, right?
So Rust's definition of unsafe is very careful to talk about memory safety only.
And that means that Rust applications
will inevitably still have security issues.
It's not perfect.
That said, it does address the vast majority
of significant, terrifying security errors
because the biggest ones are usually memory safety related.
And that means segfault, right?
So if you can segfault,
then you're talking about a memory safety error.
Right. So that's a very common way to get remote code execution is to have a segfault or stack overflow and shenanigans.
That's not going to happen in Rust code.
But there are other kinds of errors that can cause problems.
And we don't necessarily, although we do try to help with that, nobody's perfect.
Right. I mean, I think it's worth, I think what Steve said is basically correct,
which is that Rust eliminates memory safety issues. But I think it's easy to forget how important that ends up being.
So most people are used to writing in Ruby or JavaScript,
and in Ruby and JavaScript, you simply cannot segfault unless there's a terrifying bug in your program.
And in Rust, that is also true, except that in Rust, you're stack allocating things and
have direct control over memory and you don't have a GC. And it's honestly like the first,
until you realize like, I just wrote a really complicated thing and it's impossible for it to
segfault, and really think about that, it's really hard to get it. But I think it's saying something.
It's saying something that you can write something
that is as complicated as the program you wrote in Ruby.
You didn't have to write any malloc or free code.
And it's basically as fast as well-written C++ code,
but can't segfault, can't crash,
can't have memory vulnerabilities,
can't have an out-of-bounds error, right?
You have to meditate on it to really get it.
But just because it feels so natural when you're doing it,
it's like, oh, I'm used to writing Ruby code and I'm writing a closure.
Of course I can segfault.
It doesn't feel weird, except that the thing that you're doing is actually quite weird.
The effect is quite strange.
You said earlier in the call, you'd mentioned,
and I know I've been silent here for a bit.
I just know that a lot of this stuff is much deeper than I can go.
So I've kind of been playing backup support.
But one thing you talked about,
which was pretty important to you to mention,
was the idea of Cargo and what role that plays into crates. So you've got crates.io, a couple of different terms here for new users of Rust. What are crates, and what role does Cargo play in that?
Sure. So as people probably know,
I worked on the Bundler package manager for Ruby and the Cargo package manager for Rust. And I obviously use Node, so I'm pretty familiar with npm.
I'm familiar, pretty familiar with NPM.
And one thing that I think people may underestimate if they're not deeply involved in one of these ecosystems
is how important getting a good package management story
that makes it easy to add dependencies has been.
I think Bundler helped a lot.
People who didn't use Ruby before Bundler might forget how few dependencies there were relative to how many there are now.
And NPM also sort of opened the door.
If you use any NPM project, you probably have hundreds and hundreds of dependencies.
I think in Ruby, it's usually like 50 to 100 dependencies.
And that's actually somewhat extraordinary. And so when I went to work on Rust, one of the first things that I wanted was to make
sure that the ideas that came out of Bundler and NPM, basically ideas that would make it easy to
have a large ecosystem of packages and also to allow a lot of the innovation to happen outside
of the standard library. That's something that I cared a lot about. And this is actually a thing
that not everyone agrees with, right? There are programming languages, I think Python and Go are
good examples of this, where they think it's really, really important to have a rich batteries-included standard library.
And most of the core innovation happens in the standard library.
And one of the things I liked about Rust when I got involved early on was even at that point, there was a lot of interest in taking things that were hard-coded, like the garbage collection type, or exactly how smart pointers
work, and make them things that you could experiment with in the ecosystem, right, as
libraries.
So first, that was just making them libraries in Rust itself.
But by having Cargo and crates.io, a lot of the things that used to be in the standard
library are still maintained by the core team, but are now cargo packages. And this is sort of an idea that I think
got explored by both Bundler and NPM and a lot of other package managers that came out around
that time. And what was really awesome about working on cargo for me was that I got to say,
okay, let's take a look at sort of the effect of that. Like how did SemVer play into that?
SemVer turns out to be pretty important. npm adds the idea of having duplication, right?
Allowing you to have version 1.x of underscore and 2.x of underscore
and having them both work in the same program.
And Rust allows you to do that.
So how can we do that?
How can we do it without having massive binary sizes
where you have like 57 copies of the glob package in your NPM projects, right?
People who write Rust programs probably care about binary size.
You don't want Servo to be four gigabytes large, right?
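For readers who haven't seen one, here's a minimal sketch of what declaring semver dependencies looks like in a Cargo.toml today; the package name and the particular dependencies are just illustrative.

```toml
[package]
name = "my-agent"       # hypothetical package, just for illustration
version = "0.1.0"

[dependencies]
# "1.0" is a semver range: any compatible 1.x release at or above 1.0,
# never 2.0. Cargo resolves this and records the exact version in
# Cargo.lock so builds are reproducible.
serde = "1.0"
libc = "0.2"
```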
So what's awesome for me about Cargo
is that it was at least for me,
the first opportunity to really go start from scratch
in building a package ecosystem
that would take advantage of the fact
that Rust itself is very good
at letting people do things in user space,
but also look at how Rust, sorry,
how Bundler and NPM made community a thing.
Also GitHub, of course, right?
So NPM and Bundler both came out
around the time that GitHub was becoming popular,
but I got to work on Cargo after that was over,
after GitHub was already popular.
People know how GitHub works.
And so I think the way people should think about Cargo
is that Cargo is basically building on what we learned
from the first generation after GitHub.
So it's like attempting to be a second generation
after GitHub package manager.
That is awesome.
And I think this is like, for me,
this is like the big news about open source
is that this works.
Like you can have a package ecosystem,
you can have user land experimentation,
and you can make that work in the context of a big ecosystem.
You know, one thing that, something you said there, Yehuda,
reminds me back to 131, we had you and Tom on
to talk about the road to Ember 2.0.
Was this how you've learned from,
and I think it just seems like common knowledge,
but you've learned from things that happened elsewhere in other communities that were done well
and implemented in the current community that you're doing your work in.
So in this case, learning from GitHub, learning from NPM in terms of a
package manager in the community and the importance here in Rust
just sort of made me reference back to that. I'm just also
wondering if we could expect a Cargo Inc.
No, definitely no Cargo Inc.
But I think it's interesting that – so DHH a long time ago had a blog post that said why there is no Rails Inc.
And that's still like – I never actually printed it out and put it on my wall, but I kind of want to print it out and put it on my wall about open source.
We do too.
Yeah, we go back to that one.
So that's like really important to me.
But when Rails was first coming out, it actually wasn't entirely clear
how collaboration across the ecosystem was supposed to work.
It's one thing to have, like GitHub is awesome.
GitHub lets people collaborate, but dependencies are a real thing, right?
If you can't have a thing that depends on something
that depends on something else that depends on something else, you can't actually build that high.
And so between all the things that happened over the past five years, we've gone from when I started doing open source where it was like a huge project to add a dependency.
So certainly adding a dependency of a dependency was almost intractable. And then a dependency of a dependency of a dependency was basically like literally nobody ever did that in the open source communities that I was part of to now where it's
sort of, it's the way it works, right? You expect to be able to build large stacks of
libraries and abstractions. You expect to not need batteries included in the core. You expect the
core to stay small and nimble and focus on capabilities. This is also like the Extensible
Web Manifesto is trying to make that the way the web
works. And this is like, I think it's kind of like, to me, the singularity, right? It's like
figuring out that you can totally change the shape of iteration. Iteration is not just,
like, you can change the speed of iteration by making people work faster, but you can only
change the shape of iteration if the actual process of iteration changes. And in our case,
having dependencies of dependencies with dependencies,
making it so that anybody can work,
collaboratively work together,
the shape of iteration has changed significantly
and it's making things go much faster.
And that's awesome.
So I was happy to be able to make that a part of Rust
because I think,
for me, the most mind-blowing thing about Rust,
which we didn't talk about at all,
is the fact that you can have a browser,
like a web browser, Servo, that is built using the language's package manager. The way you build
Servo is you download, you git clone, and then you run cargo build.
And that's the way you build anything else. And what does that mean? It means that they extract
all kinds of stuff from inside of Servo. Their encoding library, their image processing,
these are all just off-the-shelf libraries
that anyone can use for their own projects.
And they're all built together, put together using the same
approach. And that's new.
C++ doesn't have that. C doesn't have that.
It's like a totally new thing.
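To give a concrete flavor of that, here is a small sketch; the specific crate and version are illustrative, with the `url` crate on crates.io being one of the libraries that grew out of Servo's work. Any project can pull an extracted piece like that in as an ordinary dependency:

```rust
// Assuming Cargo.toml declares something like:  url = "0.2"
// (crate choice and version are illustrative)
extern crate url;

use url::Url;

fn main() {
    // Parse a URL with the same kind of library Servo uses internally.
    let parsed = Url::parse("https://crates.io/crates/url");
    println!("parsed ok? {}", parsed.is_ok());
}
```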
So I guess fast-forwarding a little bit
to today, a great day today,
April 3rd. This is from the
core team, the whole entire Rust core team, so there is no
byline that says, Steve wrote this, Yehuda wrote this, or someone else wrote this. A great announcement
today, Rust 1.0 beta. What does it mean? I guess you got 172 contributors for this release. What
does it mean for the community to have 1.0 here? What does it mean when you put the label beta on
there in terms of what's out there now and how it could be used? So the big step here is that historically speaking, we've only had one release of the
compiler and that's nightly. Every night a new compiler comes out. With today's release beta,
there's now two versions of the compiler, the nightly version, which continues to be put out
every night, and then the beta version, which was released today. Tomorrow there will be a new nightly, but there will probably not be a new beta. And so the way that this works is, six weeks from now, probably (I want to hand-wave slightly, you know, if we find something catastrophic we fix it immediately or whatever), the idea is that six weeks from today there will be a release of Rust 1.0. And so what happens at that point is the beta becomes the final,
and the nightly on that night becomes the new beta.
So nightly turns along every single night,
and then every six weeks we have a new release of the pre-testing branch
and then the actual release branch.
And so that's the first thing,
is this is the first step towards those kind of train model,
which was originally pioneered by Chrome and Firefox, and it's also used by Ember. But the other thing that's
a side effect of that is the beta channel comes with stability guarantees which we have never
ever guaranteed basically any kind of stability whatsoever over the eight years of Rust's
development. And so that's like the big major change, is that we're saying we still
may change some small things,
but basically this is
representative of the actual 1.0
final release, which will have total
backwards, well, total may be a little
strong, but like drop-in replacement,
like 1.1 should be a drop-in replacement for
1.0. So we're offering very strong backwards
compatibility guarantees.
I think one way to think about it is that 1.0 beta is actually not different from 1.1 beta or 1.2 beta or 1.3 beta.
And for people who are not familiar, Steve talked about this, but this is basically how browsers work.
And in my view, this is the future.
Like Ember does it, now Rust does it.
In my view, this is how you should do it.
I'll be doing it for all my projects in the future.
It's awesome.
But the basic idea that there is,
you ship every six weeks,
but you also ship a staggered beta release.
And that beta release is extremely stable.
It's only the features that have been approved that are actually ready to go
and you're just getting some feedback.
You have nightlies that people can subscribe to.
And the really awesome thing for me
about all this stuff is that
it lets people
subscribe to a channel that is appropriate for their level of stability reliance, right? So
some people might be unwilling to ever have instability. They need to just keep rolling.
And those people should just use the release channel, right? But some people want the new
features as soon as they're basically ready. Maybe they're not stability guaranteed yet,
but they're basically ready. Those people choose the beta channel.
And some people really want to be bleeding edge.
Those people choose the nightly channel.
And the thing that's awesome about this
is that the core team itself
just does all their work on master, right?
So it used to be that this sort of trade-off
between how stable and unstable you needed to be
was a decision that you have to finely tune.
You have to finely hone as a core team
to figure out what exactly you want to do.
And the real genius of the Chrome model,
which is what started this all,
is that it lets people self-select
into a stability set that they want.
If someone uses Nightly,
they can't complain if things broke.
That's what they signed up for, right?
But if someone uses Stable,
you know that they really care about stability.
And that's something that,
as a person who's worked on a lot of open source,
like being able to know that people have signed up for the thing that they're getting is pretty mind-blowing.
It's pretty awesome.
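To make the channel split concrete, here is a minimal sketch; the feature name is just an example. Unstable features have to be opted into with a feature gate, and the gate itself is rejected on the beta and stable channels, which is how the stability guarantee is enforced mechanically rather than by convention:

```rust
// Builds on the nightly channel only: feature gates for unstable APIs are
// rejected outright on beta and stable, so code that sticks to stable APIs
// is exactly the code the backwards-compatibility promise covers.
#![feature(core_intrinsics)] // illustrative feature name

fn main() {
    // No gates needed here; this line compiles on every channel.
    println!("stable Rust needs no feature gates");
}
```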
On that note, on the six-week release cycle, I think it's been only a couple weeks, about three weeks it seems,
maybe two weeks since you tweeted about it.
And then on your Discourse for Ember, a question was posed: is the six-week
release cycle too frequent? What's been some of the feedback from the community and I guess some
of the core contributors to Ember and how it, I guess it might play here to Rust and then as Steve
said, every project he'll ever do. Yeah. So it's actually really interesting because the thing
that's kind of funny about the six week release cycle, so six weeks is not very long. The idea
behind the six-week
release cycle is that unless you've done
something catastrophically wrong,
people can just keep upgrading
every release. So every six
weeks, people can spend a few hours at
most and upgrade. I say
a few hours because in JavaScript, the
dynamism means that people accidentally
rely on private APIs all the time. But in general, that's like a short, quick update. People can just schedule it
as part of their sprint and be happy. And that's something that has actually worked out pretty well
for Ember. I would say even on that thread, most people said, it's awesome. I basically just
upgrade and it's fine. The thing that's kind of unfortunate about it is that that does mean that
if you're a person who can't upgrade every release,
there isn't really any good guidance for you about what else might be a good process.
Right. So if you can't schedule every six weeks to do an upgrade, or if you have very, very extreme stability requirements, or if you're using unstable features,
you know, you're doing private stuff or you're building an add-on that does private stuff.
Right. It may not be so obvious to you what the right story is. So I think probably what we're going to do, and this is
something we just talked about in the core team meeting today. I think probably what we're going
to do is we're going to create a release every four releases or so. So that'd be like twice a
year. And that release is a release that we say is going to be stable. Now it's a little funny
because all of our releases are stable. We follow Semver, right?
So all our releases are stable.
So really, all we're saying is
if you're, like,
this is a good release
for you to stick on,
we'll maintain
backwards compatibility.
We'll continue to do
security patches
to that release for a while.
And perhaps the most
interesting one,
and this may or may not
end up being important to Rust,
is we have a policy in Ember
that any private API that's heavily used, if it turns out that we have to change it, we don't
just change it off the bat. We do a two-step deprecation, right? So we do a deprecation in
one release, and then in the next release, we'll remove it just so that people know that we're
going to do that. And so maybe one thing that we'll do with this more long-term release process
is we'll say, we won't remove something until the deprecation has crossed over one of these kind of cycles. But sort of the funny thing is, and this is
like the conversation we had at the core team meeting today, everyone was like, I don't really see how
this is significantly different from what we're doing right now. And my point to them was, it's not
significantly different, it's just a way of telling people, a way of being clear to people, that
what we're doing right now enables this style of upgrading whenever you want. People are so used to the idea that an upgrade
is who knows how long, who knows how complicated, who knows how messy
that the idea of upgrading every six weeks seems crazy. So all we're really going to
probably be saying with this process is we'll give you a rolled up changelog, which is
pretty easy. And it is actually safe to do this,
which was already true, but we weren't saying it, right?
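As a library-level sketch of the two-step deprecation idea Yehuda describes (the names are hypothetical, and this is just one way to express the pattern): release N keeps the old entry point as a thin wrapper that is documented as deprecated, and release N+1 removes it, so nobody is broken without at least one release of warning.

```rust
// Hypothetical library illustrating a two-step deprecation: in this
// release the old name still works but forwards to the new API; the
// next release deletes it.

pub struct Config {
    pub verbose: bool,
}

/// Deprecated: use `load_config` instead. This wrapper exists for one
/// release cycle and will be removed in the next one.
pub fn read_config(verbose: bool) -> Config {
    load_config(verbose)
}

/// The replacement API that callers should migrate to.
pub fn load_config(verbose: bool) -> Config {
    Config { verbose }
}
```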
So it's really formalizing the existing processes to make them more foundational and explanatory to the community trying to prop themselves up around Ember.
And then how does this same six-week release cycle play into Rust and any other project that sort of picks this up?
I can see definitely how that's just formalizing what's already
in place. What's pretty awesome about Rust, I think Rust may have less trouble.
Because Rust has such strong typing, I suspect
that some of the kinds of issues that we've seen with Ember where people end up using private APIs
and we end up getting stuck, I suspect those will happen less. Just because
if you break something,
things don't compile,
so you find out very fast.
Unlike with the Canary build, it won't be that people limp along; they will just fail to compile.
And then it's not easy to go in
and poke in at the internals of something
somebody doesn't want you to poke in at.
So my hypothesis is that
the kinds of deprecations that we have to do
in Ember of private features
will be fewer and more far between in Rust than they were in Ember.
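A small illustration of why that's plausible, with made-up module and field names: in Rust, anything not marked `pub` is invisible outside its module, so "accidentally relying on a private API" is usually a compile error rather than something you discover at upgrade time.

```rust
// A sketch of why private internals are hard to lean on in Rust: items and
// fields are private to their module unless marked `pub`, so reaching into
// an implementation detail fails to compile instead of limping along.
mod config {
    pub struct Config {
        pub verbose: bool,
        cache_size: usize, // private implementation detail
    }

    pub fn default() -> Config {
        Config { verbose: false, cache_size: 64 }
    }

    impl Config {
        // The supported, public way to observe the detail.
        pub fn cache_size(&self) -> usize {
            self.cache_size
        }
    }
}

fn main() {
    let c = config::default();
    println!("verbose: {}", c.verbose);      // fine: public field
    println!("cache:   {}", c.cache_size()); // fine: public accessor
    // println!("{}", c.cache_size);         // error: field is private
}
```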
Well, if you're listening now, stay excited because we're going to take a quick break.
We're going to rewind a little bit and kind of go maybe to the noob level,
talking about getting started.
Those are just picking up Rust.
And then we're going to hypothesize a little bit about the future.
Steve's got something particular he wants to talk about.
Let's take a quick break.
Let's do a sponsor.
We'll come right back.
Today's show is sponsored by App Quality Bundle.
If you haven't heard of this yet, you've got to check it out.
It's a time-limited, deeply discounted bundle of web services for building better mobile and desktop apps.
This offer for this expires on April 15th, 2015.
So if it's after that date and you're listening to this, it's too late.
There's a time limit to buy, but not a time limit to use.
What do you get?
Well, first off, you're going to save 89% on a year of Sentry, RunScope, Code Climate,
CircleCI, and Ghost Inspector.
When combined together, each of those services give you complete app quality coverage from mobile to web.
And here's the best part.
What would normally cost you well over $9,000, you're going to get for $999.
That's an 89% huge savings.
Beyond the deeply discounted price, once you purchase it, it won't expire.
This is perfect for new projects, projects that are growing up and need end-to-end quality coverage from mobile web, or for development shops taking care of clients and their services.
So there's only really one caveat to mention, and that is strictly for new accounts only.
There might be some exceptions to this rule, but you'll have to check the fine print or get in touch with them if you've got a specific question. Check it out at buildbetter.software.
That's right, buildbetter.software. Now back to the show.
All right, we're back getting started. I've got some ideas on where people might get started
because I can Google, right? But Steve, where should we pick this up at?
You got a pretty neat idea on maybe where this could begin.
Yeah, so this is sort of a segue from the last chunk that we were talking about, and then I will give you an exact link.
But one of the things that Rust is doing and that I think Yehuda and I are both trying to do with Rust is to bring a lot of the concepts that web programmers are used to doing into this space systems programming that
no one has done before. And Yehuda gave this talk at GoGaRuCo, which I really thought was
really fantastic and has something that matters for this getting started aspect. So I know a lot
of the people that listen to the changelog and a lot of people that follow me on Twitter are
dynamic language programmers that have never done compiled statically typed languages before.
They've never done low-level programming before. And so there's this really interesting comparison between what Node did and what I hope Rust does for systems. So one of the
things that Node enabled was an entire generation of programmers who had only ever been front-end
devs, quote-unquote. They'd only done a little bit of jQuery, and it enabled them to write back-end
code. And that was like a new superpower for them. Like this whole group of people now have this
ability to do this brand new thing in computing.
And we've seen a ton of really fantastic things sort of fall out of that with these new people getting excited.
And so what I'm hoping is that if you've never done systems programming before, that Rust will be able to help ease you into doing this kind of low-level programming.
And so I don't have all these resources in place yet, but one of the things that's going to be important for the future of Rust
and that I hope to get done in the next six weeks
is to actually have documentation
specifically around
you've never been a systems programmer before,
let's teach you systems programming
as well as Rust. And then
not just, oh, you already are a super
hardcore C++ hacker, here's what
you need to know about how Rust works.
And so I think that's a really important thing. One thing that you can bet will happen is the exact same thing that happened
with Node, which is that there's all these people out there who are already systems programmers,
just like there were all these people who are already backend programmers, and they didn't get
the enabling power of Node. And so you'll hear people say, I don't understand why Rust is so
important. I could do all this stuff with C++. Like, look at my C++ code. I'm already doing all
the things Rust already does.
And those people will be missing the point.
They'll be missing the point that Rust is enabling people
who previously couldn't write C++ to write C++.
It's not, I mean, it will help
people who...
Or don't want to, as you did before.
Right. And it will help people who,
unlike Node, I think it actually is
genuinely an
improvement for C++ writers.
Pretty much strictly a strict improvement. But I think people will miss the point. You can expect that people will miss the
point because this is the story of enabling technologies. Anytime there's a technology
that enables a group of people who weren't good at something to do something other people are
already doing, the people who are already doing it say, I don't see the point of this. This seems
pointless. And something like, do you really want all these people coming in? And for me, the answer is always yes. I always want all these
people who felt intimidated by technology to go in and actually have the power to do the right thing
or have the power to do things with it. And that's something that I've already seen happen for myself
with Rust, and I expect to see it with a bigger group. So on that angle, the getting started
thing. So the best place to get started, and of course I have a slight amount of bias in this,
is we actually have a large amount of documentation on the Rust website that I call the book or the Rust programming language.
And so this is what my baby, it's what I work on the most of the time.
So you wrote this? This is yours?
I mean, other people have helped, but I have done the vast majority of the work.
I was trying to figure it out because that was one of the first on my list
of getting started. I was like, I found this and I found a few other things, but I was very
impressed by the organization and also the writing behind this. So thank you. So one of the things
that is, it's still, you know, maybe by the time the show gets actually published, I'll have a
little bit of these things in place, but I want you to be able to start reading this, and they'll give you a little project that you'll build together.
So right now, it sort of takes a syntactical approach of explaining the syntax of Rust,
and it'll get you started with those basics. But due to some shenanigans, I pulled the project
that used to be there, and I have a better one that's going to be a tutorial that's coming out.
And so that will hopefully be a nice way to get started if you don't know what you want to write
in Rust. So yeah, the book is the most up-to-date and
comprehensive documentation that we have. Part of the reason why it's up-to-date is that the
documentation tools we have actually run the code in the documentation as a test. So if something
in the compiler changes, it will actually break the documentation. And so it's been kept up to date sheerly because commits don't pass
unless it is also up to date.
So there's, of course,
one or two areas where that's not true,
et cetera, hand wave, yada, yada.
But it's generally speaking
the most correct
and up-to-date documentation.
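For anyone curious what "running the documentation as a test" looks like, here is a minimal sketch; the crate name `mathutils` and the function are made up. The fenced example inside the doc comment is extracted, compiled, and run by rustdoc, so a signature change breaks the build until the docs are updated too.

```rust
/// Adds two numbers.
///
/// The example below is compiled and executed as a test by rustdoc:
///
/// ```
/// assert_eq!(mathutils::add(2, 3), 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
```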
There's also another project
that we have.
It's rustbyexample.com.
This is originally written
by a community member,
and then it was sort of donated
to the Rust core team
when he decided he didn't want to work on it anymore.
And it's more of a, like, small snippets of code, like a tapas kind of, like, approach.
And I, frankly, need to give it a little more love, but it's still pretty good.
And I make sure every night I have a build that tests against nightly, and I make sure that it's been up to date.
So those two resources are the big primary ones and the ones that are most accurate.
Unfortunately, when you're trying to go towards a release,
there's always those last-minute changes you're sort of sneaking in.
The last two weeks, I've seen a bunch of breaking changes.
That means that, and also over the alpha period,
there were a bunch of changes that have made a lot of the other documentation
that exists on the web kind of obsolete.
You'll need a little bit of hand-holding to get going with those.
But another great resource for learners
is the IRC channel that we have in hashtag rust,
pound rust, I guess, in the old terms.
Oh man, wow, I just betrayed myself
by saying hashtag pound rust.
Using too much Twitter.
But the point is, is that the Rust chat room
is a wonderful, welcoming, friendly place
for people to ask even the most basic questions about Rust.
If people are jerks, I will kick them, basically.
We're encouraging people.
I want people to feel comfortable asking any question whatsoever.
And we have a ton of really great people that are around that will help if you get stuck.
So if you do use a bit of documentation or a blog post that's a little out of date,
oftentimes jumping an IRC, someone can tell you, oh, yeah, you just need to tweak the
name of that function or like, oh, this changed that type or something like that.
And so that's also a really fantastic resource for like up-to-date things.
Hopefully now that beta is released, we'll start having more broad community initiatives
that are actually accurate.
But a lot of people, understandably, have been sort of holding off on their projects until this stable thing actually happened.
So aside from IRC, do you have a Discourse?
Something else that surfaced was the subreddit for Rust.
It seemed like that was at least a place where there's a lot of interaction
and maybe even where new announcements are happening.
For example, the beta's mentioned there, which was submitted by you, Steve.
Yeah, yeah. So we have two official forums. They're both discourse instances. One is at
users.rust-lang.org, and that's intended for just general discussion for people who are using Rust.
And then there's internals.rust-lang.org, which is used to develop the language itself. So we
have those two things split out just so that, you know,
hello world questions don't interfere
with like deep type theory questions.
And, you know, you can pay attention
to however much of those two things.
We have some people
that only read the internals discussion
and some people that only read users, obviously.
Reddit does exist,
although I'm a Reddit hater.
So I try not to talk about it as much as possible.
But the Reddit, the Rust subreddit
is a shining example of all the things that Reddit is not.
It is also a nice, wonderful, friendly, welcoming place as opposed to the rest of Reddit.
It seemed nice.
I was surprised.
I was like, this is kind of cozy in here.
I like the Rust subreddit a lot.
I think people should also realize that there's a bit of a clash of cultures in the Rust community,
which there's a bunch of people who are writing Rust because they were C++ hackers and they really want Rust to be a better
C++. And then there's a bunch of people that came in because they're being enabled to be systems
programmers for the first time. And so if you come into a conversation and you say something
from the perspective of being a higher level programmer and you get a bunch of stuff thrown
at you from the perspective of being a C++ hacker, don't let that discourage you. I've definitely seen it
happen occasionally, maybe more than occasionally in some cases. I would say, assume that the person
who is talking to you is saying that because they feel passionately about wanting Rust to be a
replacement for C++, but also assume that you don't need to understand necessarily right away everything that they're saying
in order to be an effective Rust programmer.
And importantly, you might have some insights
on the ergonomics of the thing that's being discussed
that a person who is so used to the pain
and suffering of C++ might not be able to see.
When we originally pitched Cargo,
none of the hardcore C++ crowd
believed that they would be using it.
And by now, they're all basically using it.
Right. They're depending upon it.
Yeah, yeah. So both of these sort of groups,
we sort of have three camps in the Rust world.
There's the functional people, the dynamic programming people,
and the C++ people.
And all three of them have different pros and cons to offer each other
in terms of their perspective and experience.
So it's been pretty cool to see those three groups sort of coalesce.
Yeah.
So last week, we had Zach Supalla on the show.
He's the CEO of Spark.io.
It's an open-source hardware company doing dev kits for Wi-Fi and cellular.
That's episode 150 if you're interested.
But in the post-show, we told him we're talking with you guys this week,
and he was quite excited about Rust,
and he was kind of hypothesizing on embedded Rust
and getting excited about that.
In fact, he pointed us to a project called Zinc.
Yep.
Which is an experimental attempt to write an ARM stack,
according to them.
We'll link that up in the show notes as well.
We want to kind of look at the future.
Right now, we're at 1.0 beta,
and we've talked about what all that means. But I'd
like to take a chance to let you guys kind of
prognosticate what you see
Rust doing going forward.
What little niches
will it disrupt, and
where will it play well, and where won't it?
So, maybe start with you, Yehuda, and then Steve
can take a shot as well.
So I can give my wistful
hopes for the future,
which is I think Rust is pretty awesome
because the ownership system means
that most code that you write
actually only cares about
the abstract notion of reference
and not exactly how it was allocated.
That's like a core concept of Rust.
So I could definitely imagine in the future
having a world where people are able
to write application layer code
that's either reference counted or GC'd even.
But it talks to a lower, like a framework layer that's extremely performant.
So I sort of think about Rails, right?
Rails, because the application layer is written in Ruby, the framework layer is written in Ruby, but Ruby has real performance limitations.
And if you start to write Rails in C++ or C
and someone jumped in to understand,
they'd be like, oh my God,
I have no idea what's going on.
Please write this in Ruby.
But because Rust has sort of this natural layer
where it separates allocation,
the cost of allocation from the details
of how you actually work with the objects,
I can easily imagine someone writing a Rails
that was very fast, very efficient, very low level
and worked with
the ownership system.
But then the glue code
on top,
the application layer code
was very,
was much more loose,
was GC
or reference counted based.
And that sort of thing
is exciting.
There's a lot of work
that's left to be done.
That's not something
someone could start doing today.
There's language features
that are still left.
But I think,
I say this,
and I'm sure that we're going to get a bunch of Rust people
that say that's impossible.
You shouldn't get people's hopes up.
But I can imagine it happening.
And I want to see something like that happening.
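A small sketch of that point about references, with made-up names: the `shout` function only asks for a borrow, so it works identically whether the value lives on the stack, in a `Box`, or behind a reference-counted `Rc`.

```rust
use std::rc::Rc;

// This function only cares that it can borrow a String; it never learns
// how the String was allocated or who owns it.
fn shout(name: &String) -> String {
    format!("{}!", name.to_uppercase())
}

fn main() {
    let on_stack = String::from("plain ownership");
    let boxed: Box<String> = Box::new(String::from("heap box"));
    let counted: Rc<String> = Rc::new(String::from("reference counted"));

    // Deref coercion turns &Box<String> and &Rc<String> into &String,
    // so all three allocation strategies satisfy the same borrow.
    println!("{}", shout(&on_stack));
    println!("{}", shout(&boxed));
    println!("{}", shout(&counted));
}
```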
I've been sort of thinking about the release of 1.0
as like an event horizon.
Like, all of my hypothesizing about what may happen post-release is sort of not important. The most important thing is eye on the prize, heads down, ship the best possible 1.0
that I can possibly ship, because you only get one chance at a first impression. I've been joking
that I can't wait for the six-week release cycle to start kicking in for real, because the only
stressful releases that we'll have are today and six weeks from today, and every release after that is just like, oh yeah, this is just a Friday,
no big deal. So I've admittedly been thinking a little bit less about the future
because I've been so focused on, you know, the immediate present. I think that if I had to
say, overall it would definitely be much more social kinds of aims than specific
technical aims. I would love to see Rust start to be used to teach operating systems classes in colleges.
We've already had one instance of that happen.
And I would love to see Rust make a lot of more people understand that low-level programming
is not inherently harder than high-level programming.
This could be a whole other show, so I won't get into that a whole lot more.
But I think that different people have different aptitudes.
And some people think that low-level programming is easier than web programming because web programming is actually very complicated.
So I would like to see a new generation of people get interested in doing sort of systems-y stuff.
And I think that we'll be able to help them out with that.
So that's sort of my big focus, more than a specific technical thing.
I'm interested in the social good that we can do.
And also, like, you know, rewriting libraries that need to be safe in a safer language will do a lot of good in the world, too, hopefully.
Awesome, man.
Sounds like really cool stuff.
Unfortunately, we're running low on time here, so we're going to do a few of our closing questions, and we'll probably split them up.
Give Yehuda one, I'll give Steve
one. One question we asked,
maybe I'll pitch this one to Steve,
is if you had a call
to arms to the open source community
with regard to Rust,
and you wanted them to do something to help out,
to get involved, what would you say? What's the
best way? What should people be doing?
I would say, give it a try.
whether or not it's positive or negative. Although, try to be constructive, please,
for my ego and sanity. And leave a post in our users forum, which since it's a discourse,
you can sign in with GitHub or Twitter. You don't even need to make a real account or anything.
And just let us know what you think. This next six weeks is going to be largely about polish.
And so we can only polish off the sharp edges that you help us find.
So there are undoubtedly a lot of them.
I've already submitted two pull requests today to fix tiny things.
But yeah, like just straight up honest feedback and giving a good shot would be wonderful.
Awesome.
Next question.
This one's for you. You guys are kind of, uh, leaders in finding
new things, and Steve found Rust before I had any idea what the heck it was.
And so we're always interested with our guests, like, what's on your radar? Of course you've been
deeply embedded into the Ember and the Rust ecosystems, but do you have anything else that's
kind of tantalizing you,
a project that you're interested in,
or if you have a free weekend
that you'd want to hack on
that perhaps folks haven't heard of?
So mostly I do web stuff.
Okay.
And I think,
maybe I'll just answer this generically
with platitudes,
because I don't actually have any specific project.
Okay.
But I think people underestimate the web
over and over and over again. And I think we're in
the middle of another wave. I think something like 2011 was the last big wave of features that
really fundamentally shifted how people use the web. So things like web workers, typed arrays,
indexed DB, Flexbox. These are all things that I think if you look back, you can see that those
are fundamental game changers.
Some of them made Asm.js possible.
But of course, when they happen, people say, oh, those guys, they're taking a document format and cramming on random blah, blah, blah, blah, blah, whatever, whatever people say.
And I think we're in the middle of another wave: more work on Asm.js, Service Worker, the Houdini project,
which is doing some work to expose more of CSS directly to users,
a bunch of things like that that I think are going to end up being important.
And I find it interesting that it's not, when I look back,
it's not like there's any one, it's people are kind of expected
to either be totally stagnant or changing all the time.
And I kind of see waves.
So I guess keep an eye out for what's going to happen over the next year or two on the web.
And if you want to think about what's coming next on the web, you should think about how to take advantage of the things that are coming and not be so cynical about them.
Very good answer.
Well, it's definitely been fun having you guys here on the show today. I know this has gone a little longer than maybe our norm is, but for those long shows, this tried Rust to try Rust and give constructive, polite, graceful feedback.
Because that's what the world needs, right?
You can't be mean.
You've got to be nice.
There's too many people being jerks on GitHub issues in all directions, and I would like it if that didn't happen anymore.
Yes, totally agree.
Totally agree.
And we echo that.
And we ask the entire community for the same thing.
We do have a couple shows coming up.
I'm going to tease the next one.
So I guess to Yehuda's mention, back to the web.
This is going to the platform, I think, that's pretty strong out there.
It's called WordPress.
We're talking to Roots.io, Sage, a very cool starter theme, and Bedrock, which is a modern WordPress stack.
We're talking to Ben Word and Scott Walkinshaw about that.
We had some awesome sponsors for this show.
CodeShip, and App Quality Bundle, which is a time-limited, super awesome bundle.
It expires on April 15th, so take a listen to that.
TopTal and DigitalOcean, who absolutely love what we do here.
But thanks to Steve and Yehuda and Jared and all the awesome listeners,
the members.
And for now,
let's say goodbye,
everybody.
Bye.
Thanks guys. Outro Music