The Changelog: Software Development, Open Source - Servo and Rust (Interview)
Episode Date: November 18, 2016. Jack Moffitt joined the show to talk about Servo, an experimental web browser layout engine. We talked about what the Servo project aims to achieve, six areas of performance, and what makes Rust a good fit for this effort.
Transcript
Bandwidth for Changelog is provided by Fastly. Learn more at Fastly.com.
I'm Jack Moffitt, and you're listening to The Changelog.
Welcome back, everyone. This is The Changelog, and I'm your host, Adam Stachowiak.
This is episode 228, and today, Jerod went solo talking to Jack Moffitt about Servo, an experimental web browser layout engine.
We talked about what the Servo project aims to achieve, areas of performance, and what makes Rust a good fit for this effort.
We have three sponsors: CodeSchool, Hacker Paradise, and GoCD.
First sponsor of the show today is our friends at CodeSchool, and they have a free weekend happening right now.
Everything at CodeSchool, all their content is completely free. Dive in all weekend long from
November 18th to the 20th. Get started on learning that next language or that next thing you want to
learn. At All Things Open, I talked to Carlos Souza, one of the course instructors at CodeSchool. He's
been there since the beginning. We talked about what makes CodeSchool different and what they mean by learn by doing.
Take a listen.
So what makes CodeSchool different is the fact
that we focus on the practical aspects
of the technology that we teach.
So before we teach anything, we experience it, right?
We write apps on it, of course.
Then we figure out the parts, that specific technology
that are relevant for someone just learning that technology.
Instead of teaching everything there is to know
about a class in Ruby, we teach you how to use a class
or when to use a class.
Or in Go, instead of teaching every single way
that you might come across a struct,
we give you a problem that is solved by using a struct
and we show you what that looks like.
And for a course, for a CodeSchool course,
that might be enough.
Although there's dozens of other ways
that you might come across a struct in Go,
for the purposes of knowing what Go is,
we feel like showing one way of doing something is enough.
Otherwise, we're going to overwhelm the student
with information, which they're not going to recall anyway. So if they really want to further their education in Go, they're going to research on their own after they go through the course and whatnot.
But we want to give them that first experience of what it is to write an application in the real world,
and we give them all the tools that they need to do that in the browser.
So that's sort of the learn by doing part.
All right, take advantage of this free weekend from CodeSchool.
Do not miss out.
It's free all weekend long from November 18th to the 20th.
Codeschool.com.
And now on to the show.
All right.
Welcome back, everyone.
We have a big show today, a show that our listeners and our members specifically have been asking about.
They've been saying, give us more Rust, give us some Mozilla.
And what came of that is a show about the ambitious browser engine project
from Mozilla called Servo.
And you know, Servo is a huge project, 597 contributors,
lots of people involved.
And so we thought, who do you even talk to about this?
So we asked Steve Klabnik, friend of the show, who would be a great person to have on.
And he said, you got to talk to Jack Moffitt.
So that's what we're doing.
We have Jack Moffitt here today.
Jack, thanks so much for joining us on the ChangeLog.
Hi, everyone.
Happy to be here.
So we have a lot to talk about, Jack.
What we like to do to kick off the show is to get to know our guests just a little bit better.
We find hacker origin stories are inspiring and sometimes interesting and insightful. You have quite a history. I was
looking at your Wikipedia and Servo is not your first rodeo. You've been involved in XMPP, Erlang,
or at least maybe not working on the language of Erlang, but using Erlang Icecast, which I think
we might even be using for our live stream still today,
and lots of other projects.
Can you give us a little bit of your origin story?
Sure.
So it sort of starts with Icecast.
So Icecast was the first open source project I worked on.
And that came from, I was going to school at SMU.
SMU had lost their FCC license several times. So the student
radio station only played in one building. And I thought, you know, all of the dorms have Ethernet
jacks in them, and we should be able to get this radio station to everyone. But no one wanted to
pay for, you know, the real networks products at the time, they were pretty expensive. So I started
working on one, a streaming media server, along with a couple other people.
And it sort of grew from there.
So that project started collecting contributors and got more complicated.
As part of that, I joined a startup that was doing internet radio. That startup ran into issues around MP3 royalties at the time: the patent owners wanted to charge for actually streaming MP3 audio, not just for the encoders and the decoders. So I started looking at how we were going to solve this problem; we needed a royalty-free codec. And so at that point, I met Christopher Montgomery of Xiph.org, who was working on Ogg Vorbis at the time. We started paying him to finish that off full-time, and then I helped found the Xiph.org Foundation. After that work was ready to ship, it's gone from there.
So I've been quite involved in patent-free audio and video codecs. Even today, there's a project
at Mozilla called Daala, which is doing the same thing for video with many of the same people, including Monty on board.
Those people also ended up at Mozilla independently of me.
And so, yeah, so from there, I did a bunch of startups and various things, always keeping sort of an open source bent about it.
I did some front end JavaScript work with an online games company that we started to do chess online.
We pivoted that into a real-time search engine,
which is a story for a whole other podcast.
And that got me into Erlang.
And so I did the Erlang stuff for a while,
doing a lot of backend infrastructure
for massively multiplayer games, including a game similar to Pokémon Go called Shadow Cities.
And then I ended up at Mozilla working on Servo.
So I've been around the block
in terms of the kinds of projects I've worked on.
So in terms of languages that you've been involved in,
it sounds like JavaScript, Erlang, Rust,
perhaps C and C++.
Does that round it out?
Yeah, mostly C, not so much C++,
but otherwise, yes.
Okay.
Any favorites?
I really like Erlang.
Erlang is great.
I also have a really soft spot for Clojure.
I did use Clojure at a couple of places as well.
Both of those languages, I feel, hit a really nice sweet spot for certain kinds of tasks.
I did fall in love with JavaScript and fall out of love with JavaScript probably several times over the course of my career.
Where do you currently stand?
Are you in or out of love at this point?
I'm sort of ambivalent, I guess.
I love it as a deployment language.
It's supported everywhere.
And my daily goal in life
is to make it as fast as possible
for the web platform developers
to make responsive apps
and really good apps that equal the quality of native apps.
And I'm guessing that perspective probably gives you a very special kind of love-hate relationship with it.
Yeah, I mean, it's always frustrating when you want to make some performance optimization and you can't
because either the semantics of the language or the semantics of the web prevent you.
But also it's a fun challenge to figure out, you know, what areas can we make performance
improvements on and how can we achieve that?
And there's a lot of competition in this space, particularly with this project.
So it's fun to, you know, to be the underdog and try to win on performance.
Right.
And how long have you been with Mozilla?
I've been here for about three and a half years.
Okay. Now, when we talk about where the people we have on the show are coming from in terms of their background or their experience or what brought them into software, there's a lot of people that have kind of a video games interest.
There's others who have language interests or mechanical or hardware interests.
And, you know, we all kind of end up in this software space.
I read something about you that I thought maybe this has something to do with your interest
in programming, but maybe it came afterwards.
Tell me about Lousy Robot.
What's this?
So Lousy Robot is a band that I joined right after I graduated college.
I dropped out of college and did a startup in San Francisco, you know, sort of the traditional hacker thing to do, I guess. And then later on, I went back and finished. And when I finished, I remember thinking, you know, I've always wanted to be in a band, and there's no reason I shouldn't. So I started looking for band members and found these guys called Lousy Robot, an indie pop band here in Albuquerque. I really liked their music, and thankfully they really liked me. So I started hanging out with them and going to practices, and, you know, played my first show on a stage. Did that for several years, actually; did a couple small tours in the Southwest area.
Awesome. So you played keyboard for them, it says. Did you have a lot of keyboard experience prior to this, or did you just decide, I'm going to learn it, I'm going to do it?
I played piano when I was a kid, but I was always more interested in sound design type stuff. I got a MIDI keyboard when I was in high school, I think, and I started programming, you know, things for the Gravis Ultrasound, if anybody remembers those awesome sound cards; writing my own mod tracker and sound effects and stuff like that. So I always had this sort of music hobby going in the background. I've never been able to do as much with the programming side of that as I've always wanted to, but it's definitely been a fascination of mine for a long time.
I love that you just decided, you know, I'm going to find a band, and you find one called Lousy Robot, which by the way is a spectacular band name.
And, you know, you thought, I'm just going to go be part of this band. And you just kind of got that done. It seems, I don't know, ambitious.
Yeah. Well, I mean, you can't sit around waiting for things to happen. You've got to go after the things that you enjoy doing. You know, most of my career, I've worked remotely for the companies, either I've
started or when I've worked for others. And so it also fulfills sort of a social need that I have,
you know, being trapped in my house all day. Well, it's not really trapped, but, you know,
being in the house all day and not having a much, you know, in-person interaction with the outside
world means that, you know, hobbies like that are really helpful.
I can sort of get the social needs I have satisfied.
Even if I can't get them satisfied at work, I can get them satisfied through hobbies.
Actually, that's one of the reasons why I began podcasting, or got involved with the Changelog: the same reason. I work remotely. I'm kind of a hired gun, a contract developer.
So I'm, you know, I used to be in my basement coding all day
and now I'm in an office above the garage coding all day.
But I was just very isolated and I live out here
kind of in the suburbs of small town Omaha, Nebraska.
And I just wanted some social interaction
with people that had similar interests
and people that were smarter than me.
So podcasting was a natural fit.
It sounds like I also could have searched out for some electronic bands and tried that
route.
But that sounds probably harder than just getting up on the microphone and talking to people.
It could be.
I mean, it can be.
Getting up on stage and performing for a bunch of people is definitely an interesting experience.
I recommend everyone try it.
Is that something that you miss?
Yeah, I sort of miss it.
Not so much that I want to be up in front of a bunch of people necessarily, but it gets the adrenaline going very specifically.
And that's a pretty good feeling.
Like it always felt good after a show, especially if you had a decent-sized audience there and they were really into it.
There was just a lot of nice energy in the room.
It always left you feeling good.
Well, if you ever want to consider podcasting, I have a great name for a podcast all about the Rust programming language.
I won't say it here on the air because someone will steal it, but I have a great name for you.
So we can talk offline about that.
Let's talk about Servo.
So this is an ambitious project, like I said in the intro, from Mozilla.
Also has a Samsung angle, which I didn't realize before doing a little bit of background on this.
Samsung's involved.
But let's take it from your angle, Jack.
Tell me about the beginning of Servo and Jack Moffitt.
How did you start being involved with it?
Give us that from your perspective.
So I'd have to say it started with the Rust programming language.
So I've been very interested in different programming languages for a long time. And my career has several that I've managed to use professionally.
I went to the Emerging Languages Workshop at Strange Loop back in 2012.
And Dave Herman gave a talk there.
He's also at Mozilla Research on the Rust programming language.
So there was a whole bunch of people presenting, you know, their own programming languages.
And Dave Herman and Nico were both there talking about Rust.
And I had heard about Rust.
It was sort of in this pool of languages that were sort of systems-y, that were sort of emerging, and I hadn't thought that much of it at the time. But when I heard Dave describe the different kinds of memory usage in Rust (back then we used to have these sigils for shared pointers and owned pointers and things like that, and it was a lot more complicated syntactically), all of those concepts really meshed well with
sort of the Erlang knowledge that I had at the time. So Erlang uses message passing as sort of
its, you know, main concurrency primitive. And one of the downsides of using message passing is
that you're copying data all over the place. So whenever you send a message in Erlang, it's got to copy it and send it to the other Erlang process, which can manipulate it from there. Rust has this really nice thing that falls out of ownership: since you know that you're the only owner of a certain pointer, when you pass it in a message to another Rust thread, it can effectively just hand over access to the pointer and pass the ownership along with it. So no data is actually copied. You get all of the beautiful semantics of Erlang message passing, but in a wonderfully fast implementation that involves no data copying. And so that really intrigued me.
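The zero-copy message passing Jack describes can be sketched with Rust's standard-library channels. This is a minimal illustration using `std::sync::mpsc`, not Servo code: sending a `Vec` moves ownership of the heap allocation to the receiver, so no bytes are copied and the sender can no longer touch the data.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // A large buffer we want to hand to another thread.
    let data = vec![0u8; 1_000_000];

    let handle = thread::spawn(move || {
        // Ownership of `data` moves into the message: the receiver
        // gets the same heap allocation, nothing is duplicated.
        tx.send(data).unwrap();
    });

    let received = rx.recv().unwrap();
    handle.join().unwrap();

    // Erlang-style message-passing semantics, but the "copy" was
    // just a pointer handoff enforced by the ownership rules.
    println!("{}", received.len());
}
```

After the `send`, the spawned thread cannot use `data` again; the compiler rejects any such access at build time, which is what makes the pointer handoff safe.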
So then I started looking more into it
and got pretty interested.
And then I noticed they had a job opening
for basically what I claimed at the time
was the first professional Rust programmer
of leading the Servo project.
So I hopped right on that.
This just sounds like Lousy Robot all over again.
You're like, you know what? With Lousy Robot, it was: I want to be part of a band. I like electronic music. I can play the keyboard a little bit. I'm going to get involved with these guys.
And with Servo
or with Mozilla,
it was,
Rust is interesting.
Here's an opportunity
to be a Rust developer,
the first one perhaps,
first professional Rust developer,
and I'm going to go get that job.
Is that kind of the gist of it
or is that an unfair characterization? No, I think that's more or less the gist of it. I've always
sort of, you know, people talk about opportunity knocking, but I think that you can't do much when
opportunity knocks if you're not prepared. And also if you don't like, you know, build a bunch
of doors for it to knock on. Right. So I've always spent my career, you know, trying to keep my eye on what's coming,
you know,
what's happening,
what are the opportunities around
so that when something was interesting,
you know,
everything is already lined up
to sort of make it happen.
Interesting.
Let's bookmark that maybe
for the end of the show.
I would like you to perhaps
try to cast forward
and see what's,
where's opportunity going to knock
for young developers
in the next few years.
But I don't want to take us
too far upstream from the main topic, which is Servo.
I've mentioned it.
I've said it's ambitious, but I haven't said exactly what it is.
And sometimes we make this mistake of diving in too deep on our show.
And one time we got to the very end and realized, I don't think we ever clearly stated what the project is in layman's terms. So that we can all get on the same page, give us Servo in a nutshell. What is it
and what are its goals? So Servo is a project to create a new browser engine that makes a
generational leap in both performance and robustness. So there's two sides of this. One is
browsers as they exist today are largely built on this architecture developed decades ago,
where CPUs only had one core.
The memory was perhaps more constrained.
We didn't have GPUs.
So the kinds of computers that web browsers ran on back then were really different.
At the same time, the kinds of web pages that existed back then were also extremely different, right?
They were not dynamic.
They had very simple styling.
You basically had all the semantics of the styling in the tag names.
Right.
And there was some difference here by browsers.
Then we got CSS.
We got JavaScript.
We got dynamic HTTP requests and things like that. And these days, lots of web pages are basically on par with native applications
in terms of the complexity and the stuff that they're doing.
But the browser architecture is still written for basically these documents.
There have been tons of changes in, say, the JS engine,
but overall, the architecture has been slow to move.
On the other side, on the robustness side,
basically you have that browsers have become so important and so ubiquitous that they've become huge targets for security exploits. There's lots of private data going through them; pretty much everything I do online goes through my browser, so you could find a huge amount of data about me if you could get access to that.
They're also on every computer, so if you can get root access to the machine somehow through the web browser, you can effectively control armies of machines.
So they've become very important in a security context, but they also have a very poor track record here. C++, which all of the engines besides Servo are written in, just lets you do anything you want with memory at any time.
And people think they're really smart and really careful, and yet we still find new vulnerabilities in pretty much every piece of C and C++ code every day.
They're getting better, but there's only so much you can do.
And so the idea was like, how could we attack these two problems?
So we knew that in order to take advantage of modern hardware, we were going to need
to do parallelism.
And we wanted to somehow solve the safety issues with C++ for parallelism, because one
of the reasons that you don't see more parallel code written in Firefox or Chrome or these things is how incredibly difficult it is to write parallel code when you have sort of free access to memory.
So the Rust and Servo projects are sort of tightly intertwined, at least at their origin, in trying to solve this problem.
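As a tiny illustration of the safety argument above (a standard-library sketch, not Servo's actual layout code): Rust refuses to compile unsynchronized shared mutation across threads, so parallel code has to state its sharing discipline explicitly with types like `Arc` and `Mutex`, and data races are ruled out at compile time rather than debugged at runtime.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A plain `let mut counter = 0` captured by several threads
    // would not compile; the sharing must be made explicit.
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // The lock is the only way to reach the integer,
            // so the compiler knows this mutation is safe.
            *counter.lock().unwrap() += 1;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("{}", *counter.lock().unwrap());
}
```

The same discipline that makes this toy counter race-free is what makes it tractable to parallelize something as tangled as page layout.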
Right. So when I said ambitious, this thing began late 2012, early 2013 at Mozilla.
And, you know, today, which is let's just call it the end of 2016, we are in kind of a pre-alpha developer preview.
Let's just call it that.
So, I mean, you all have been working on this for a long time, and you've come a long way, but it seems like there's still a long way to go.
Is this just a huge undertaking in scope?
It is.
The web platform is very large.
There are lots of complex features
that all interact with each other,
especially in a webpage layout.
But also just the sheer number of JavaScript APIs
is staggering and more are being added all the time.
And in fact, there aren't even enough people on my team to really keep track of all the new changes to the specifications and stuff, as opposed to, you know, working on all the things that have been specified and developed over the last couple of decades.
So it is enormous in scope. And a large part of the challenge is how
do we attack this problem in such a way that it can be obvious that we're making progress
to the people with the money and also to the outside world so that they can keep interested.
Yeah, because you definitely have the interest of the developer community. The question is,
how long can you maintain that interest until people start calling things vaporware or such other things?
So real quick, we're hitting up against our first break, but let's lay out just the understanding of the team.
I keep saying almost 600 contributors.
Surely those aren't all core team members.
Give us the layout of the project in terms of like who's working on what, at least the size, so we can see the scope of you and your team and the effort, both at Mozilla and perhaps at Samsung, if you have insight on that too. Okay. So we do have a small core team. There's
four of us on there right now, Lars Bergstrom, myself, Josh Matthews, and Patrick Walton.
And then everyone else is sort of, you know, there's a number of people who have
reviewer privileges, we call it. And so those are sort of the wider team. These are people who can approve for code to be checked into the repository. And that sort of access is relatively easy to get
for anyone who's making regular contributions. And then we have sort of, you know, we just have
a ton of people showing up either with a JavaScript API that their application uses and they want
supported, or maybe they're just interested in Rust or web browsers and want to know how they work. So we just get a ton of people coming
and showing up and wanting to know how to contribute. And so we've
developed a lot of strategies to help them. So there's this big community of
hundreds of people who are hacking on Servo. In terms of its
relation to Mozilla, there's
I would say about a dozen people employed full-time
to hack on Servo.
The project itself
is sort of meant to be a community project,
not owned by Mozilla.
So we have plenty of reviewers who are unaffiliated with Mozilla, and reviewers who are affiliated with other companies.
And that probably brings us to Samsung.
So Samsung was sort of very interested in this work early on and had some engineers working on it for a while back in 2013 to 2014.
I think at the height they had,
you know,
over a dozen engineers hacking on it.
And the idea for them was basically,
you know,
modern mobile hardware like phones and
stuff have a very similar architecture to sort of modern CPU hardware. They have GPUs, they have
multiple cores, they maybe have different kinds of cores in different configurations. And, you know,
they were making a big bet or they still are making a big bet on sort of Tizen and having
application developers develop for smart TVs and mobile phones
and things like that using the platform.
And so they were very interested. They've been doing this for a while:
Tizen is a thing that already exists
and it uses Blink as its engine and WebKit before that.
And they're running into all kinds of performance problems
that also Firefox, the Gecko developers, are running into. And so they were very interested in what could be done about this problem: how can we take advantage of modern hardware, and how can we make this code safer? I think for them a large part of the argument is the access to the JavaScript development community. Like not having to support arbitrary, random,
not necessarily proprietary,
but just like not one of the standard native application toolkits
and being able to just use the web platform,
it gives you access to a huge amount of developers
that you don't have pretty much any other way.
So I think that was a lot of their motivation.
They have since sort of shifted their focus.
And so there's not very much active involvement from Samsung at the moment, although that could change any time.
Sounds like maybe time somebody goes and updates that Wikipedia article.
Could be.
I think I'm personally not allowed to touch the Wikipedia articles about the projects that I work on myself.
Right, right.
Well, we could use this as a secondary source
or something.
No.
Love Samsung.
Actually surprised.
I was at OSCON London recently
and met some people,
some folks from Samsung
doing cool open source work
and something that I was unaware of
is how much they are invested
in the open source community,
which is awesome.
We love companies that put their money
where their source is.
So that's very cool.
Shout out to Samsung for that. Let's take our first break. When we get back,
Jack, you mentioned these two big goals, performance and robustness, and how Rust
played in nicely to that. I want to dig down deeper on those two things. I know you have
kind of six areas of performance that we're going to talk about. So let's pause here and
we'll get into performance and robustness on the other side of this break.
If you normally fast forward through our ads,
don't do it for this one.
This one's pretty important to us.
We're teaming up with Hacker Paradise
to offer two open source fellowships for a month
on one of their upcoming trips
to either Argentina or Peru.
So if you're a maintainer or a core contributor
or someone looking to dive deeper into open source and you want to take a month off from work to focus solely on open source,
this is for you. For those unfamiliar with Hacker Paradise, they organize trips around the world for
developers, designers, entrepreneurs, and trips consist of 25 to 30 people who want to travel
while working remotely or hacking on their side project. It's a great way to get out, see the
world,
spend an extended period abroad.
And fellowship recipients will receive one month on the program working full-time on open source, free accommodations,
workspace, events, and even a living stipend.
And one thing we're pretty excited about with this
is we'll be following along.
We're going to produce a couple podcasts
to help tell the story of those recipients
who go on this fellowship,
the hacker story, the open source story.
It's going to be a lot of fun.
To apply, head to hackerparadise.org slash changelog.
You'll see a blog post explaining what this is all about, what the open source fellowship
is.
And down at the bottom of the post, you'll have an opportunity to apply.
If you have any questions about this whatsoever, email me, adam at changelog.com.
All right, we are back with Jack Moffitt
talking about Servo and Rust
performance and robustness.
I just had a thought while you mentioned
a few minutes back, Jack, about
Rust and Servo kind of
growing up together as technologies.
And that sounds really great,
especially if you have people on both teams
that are working together, perhaps the same person on both teams.
But it also seems like it makes Servo even more difficult a project because your underpinnings are such a moving target.
Has that been a struggle for you guys as you move along and Rust changes underneath your feet?
It certainly was a struggle back when I started.
So my first day on the job in Mozilla, Servo did not compile.
And there was no easy way to get it to compile.
They were using sort of a pinned version of Rust,
but there was no documentation or infrastructure
or automation around which Rust version
Servo was pinned to.
It just sort of happened to be the one
that was on somebody's machine.
And whenever they happened to upgrade Rust
to another version,
they would also make changes to Servo
and then commit those. So I like in this sort of chaos uh land of like rust doesn't
like servo doesn't compile and you know on top of that maybe a lot of developers haven't experienced
this but when you can't trust your compiler that is an interesting situation um yeah so so you like
try to compile it and the compiler seg faults,
like, what do you do there? So, so I spent like first probably a week and a half just, uh, you
know, updating, um, servo to, uh, the current version of rust, which was a kind of an ordeal
back then because they have this deprecation. They had a deprecation policy back then where,
you know, if they, if they weren't working on a feature and it didn't pan out they would sort of deprecate it and then in the next release and then the
release after that it would get deleted and so a lot of the work on servo happened in rust 0.4
and then i started basically right when 0.6 came out so tons of these features that servo had been
using just didn't exist anymore and then you know as coming on my first day on the job, it was kind of like, okay, so what does
this feature do?
What did it do so I know how to replace it?
And the answer was, I don't even remember.
Wow.
So that was sort of a special situation, but it sort of repeated that way until Rust 1.0
came out in that there were major breaking language changes all the time.
We built infrastructure to pin specific versions of the Rust compiler,
and then we would update it at specific times.
So we would try to keep on top of it,
but usually it would be like once a month,
or if it was a particularly bad run,
maybe it would take a couple of months for us to get an update.
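For context, the compiler-pinning infrastructure Jack describes survives in today's Rust tooling: with rustup, a repository can pin every contributor and CI job to one exact compiler via a `rust-toolchain.toml` file at the repo root. A minimal sketch (the nightly date shown is hypothetical, and this is the modern rustup mechanism, not Servo's original homegrown scripts):

```toml
# rust-toolchain.toml: everyone who builds this repo gets the same compiler.
[toolchain]
# A dated nightly, bumped deliberately when a needed feature lands,
# rather than tracking whatever nightly is current.
channel = "nightly-2016-11-01"
```

The point of the pin is exactly the workflow described above: upgrades become scheduled, reviewable events instead of whatever compiler happens to be on somebody's machine.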
And part of the reason for that churn was that when you would update the version of Rust and you would make all the changes in Servo, you would often find that some bug got fixed in the borrow checker, for example, making some code that you wrote before now invalid.
And maybe that code didn't have a trivial workaround, like just changing the syntax of some API call, right? Like you had to restructure
the function
or maybe it turned out
that what you were doing
was completely illegal
and memory unsafe.
You just,
compiler hadn't caught it before.
Right.
And now you have to go
and rethink some stuff.
And then you would make
these changes
and then you would find
new bugs in the Rust compiler.
So the compiler would segfault
or it would run into
some kind of assertion thing
that was sort of not in your application but sort of in the Rust compiler itself.
And so then you'd say, OK, so now we'll file a bug against the Rust compiler.
The Rust team is super quick and responsive, so they would fix the bug maybe the next day.
In the meantime, maybe 10 other changes have landed, each with their own bugs, and maybe those also have new breaking syntax changes or something.
And so in order to get the fix that you wanted, now you've got 10 other things that are also
going in there. And so sometimes this will go into a vicious cycle where you'd be spending two weeks
just trying to upgrade Rust and doing this. So it was kind of a mess for a while. When Rust 1.0
came out, this settled down a lot. And now we basically
pin the nightly version. We change it whenever some Rust feature comes along that we need access to.
It's generally like a partial day's worth of work for somebody and not really a big deal.
On the other side of the coin, being the, quote-unquote, first professional Rust developer and being Rust's flagship application at the time,
while it had its churn issues,
you probably were like the first-class citizens
when it came time to influencing the language design
or the needs of the language,
even bug fixes and stuff like that,
because if Servo is halted,
I'm sure the Rust team was very interested
in keeping you guys moving.
And was that the case as well?
Yeah, they gave us a lot of attention. If we found bugs, they would fix them right away. This has gradually tapered off. On the run-up to 1.0, they stopped giving us such preferential treatment. Probably the biggest example of this was the removal of green threads in favor of native threading. Green threads was something that Servo was sort of designed around at the time. There was no fallback really for it. They just sort of pulled the rug out from under us. And these days, Servo is no longer, I would say, well, maybe it's still the flagship application more or less, but we're not driving Rust development anymore the way that the needs were back in the early days of Servo.
These days, it definitely has a life of its
own. They definitely take our concerns
into account, but largely
our concerns are the same concerns that everyone
who's using Rust has. For instance,
number one on the list is compile time,
compile performance.
We get
along really well. There's core team members
on the Rust team. There are also core team members on the Servo team.
And it's very nice to have such a good relationship with the compiler.
I think this has resulted in probably more performance than we would otherwise get.
Because if there's some problem that turns out to be a code generation issue in the compiler, like we know the guys who can fix that.
So it turns out to be a pretty nice relationship, even if, I would say selfishly, not all of our needs are at the top of the priority list anymore.
So let's talk about the two aims that you laid out at the beginning for Servo as a rendering engine. Is that the fair thing to call it, a rendering engine? What do you call it, a browser engine, a layout engine?
I think we've been calling it a web engine these days.
Okay, a web engine.
Just want to use your nomenclature.
So performance and robustness,
and you touched on why Rust is such a good fit for that
in terms of the ownership model
and the memory safety guarantees and things like that,
especially with regard to robustness.
And also, you said with the performance of not having to pass around that memory and getting some things for cheap or free. But you have these kind of like six different areas, like we said,
ambitious, you know, there's subsystems upon subsystems. And you have six areas of performance
optimization or ways that you're going about it. Can you give us some insight into those?
Sure.
Let me touch on those first two things first.
I'll start with robustness
because that comes mostly from Rust
and probably was well covered
when you talked to Steve last time.
Yes.
The inspiration for this
can be sort of summed up with this one example.
So there's a JavaScript API called Web Audio
which allows you to manipulate sound from JavaScript applications. When that was implemented
in Firefox, it had 34 security-critical bugs that were filed against it. So one of the things we did
was sort of look back and see, you know, what kinds of problems could Rust have helped solve?
Instead of just saying, you know, we think Rust will solve this problem, we could go back and inspect the data
and see what it could have solved
if that component had been written in Rust.
So in the case of Web Audio,
there were 34 security critical bugs.
All of them were array-out-of-bounds or use-after-free errors,
and all of them would have been prevented
by the Rust compiler had that component
been written in Rust.
So that's sort of like the quick summary,
like 100% of the errors in that API would
have been caught by the compiler before they shipped.
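To make that concrete, here's a toy sketch (illustrative code, nothing to do with the actual Web Audio implementation) of how safe Rust shuts down exactly those two bug classes:

```rust
// A toy illustration of the two bug classes above.
//
// 1. Array out of bounds: safe Rust checks every index. `get` returns
//    None for a bad index instead of reading past the buffer, and `[]`
//    would panic rather than silently read adjacent memory.
fn safe_lookup(samples: &[f32], i: usize) -> Option<f32> {
    samples.get(i).copied()
}

// 2. Use-after-free: the borrow checker rejects this at compile time,
//    so the bug never ships. (Commented out because it does not compile:
//    "error: `buf` does not live long enough".)
//
// fn dangling() -> &'static f32 {
//     let buf = vec![1.0_f32; 128];
//     &buf[0]
// }

fn main() {
    let samples = [0.1_f32, 0.2, 0.3];
    assert_eq!(safe_lookup(&samples, 1), Some(0.2));
    assert_eq!(safe_lookup(&samples, 99), None); // out of bounds, no undefined behavior
    println!("bounds checked, no unsafe reads");
}
```

The point is that both failure modes become either a compile error or a well-defined runtime outcome, never a silent read of someone else's memory.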
And Web Audio is not a special API.
It has no security properties of its own.
It's not doing anything really crazy.
It's just sort of like your run-of-the-mill JavaScript API.
And that just sort of points out how dangerous C++ is as an implementation language, that even this thing that didn't touch anything secure had 34 vulnerabilities where somebody could root your machine.
Yeah. Dramatic change.
Yeah. On the performance side, the intuition is basically, if you look at modern web pages, Pinterest is a great example. A Pinterest page has all of these cards that are laid out in a staggered grid, and you can imagine that each of those cards could be operated on independently of the others. So that's where you can kind of see where doing layout in parallel might help, because if you look at web pages, they're highly structured. News sites are another good example.
They often have, you know, lists of articles with a blurb and a picture.
And you can just see sort of the same structure repeated over and over and over.
And it makes sense that, you know, each of those little sub pieces could be handled independently at the same time as the others.
So those were sort of the two input motivations.
And so I'll talk about some of these.
There's basically six of these branches of development that we've been pursuing.
So the first one I'll talk about is CSS.
So Servo does parallel CSS styling.
It does this in, I would say, a not novel way.
The algorithms that existing engines use
for CSS styling are largely untouched.
The only thing we bring to the table really
is using the features of the Rust language
to make parallel implementation of those algorithms very easy.
So for example, the Servo CSS engine
has all the same optimizations pretty much
that modern engines have.
Pretty much we copied those optimizations
from the Gecko and Blink engineers. But being able to use all the cores in the machine is a huge win.
So it turns out that CSS restyling is sort of the best case parallel algorithm. It scales
linearly with the number of cores. So our initial estimates after we wrote the system
showed that it was basically four times faster
on a four-core machine than
styling in Gecko or Blink.
That's restyling.
The next stage after restyling, so once
you compute all the CSS properties and figure
out how they cascade
and all that kind of thing, then you
use those properties plus
objects in the DOM,
elements from the web page, and you compute where those objects are going to be
and how tall they are and how wide they are. So for this, we actually had
to come up with a completely new algorithm based on work that came out of
Leo Meyerovich's parallel layout work. He has a couple of papers on that that I think are linked in the Servo wiki, if anyone's interested.
But basically the problem with the existing engines is that the way they work is there's, you know, you can imagine there's just like there's a document object model in JavaScript.
There is a parallel one sort of on the C++ side.
And so there's an object that's like, you know, the root of the document and there's an object under that and so on and so forth.
And so when they call layout,
they basically call a function called layout on the root of the tree.
And that's it, right?
And that function does a bunch of work.
And then it calls layout on all of its children
and so on and so forth.
And the problem here is that in each of those functions,
when it's calculating the layout information,
it can look anywhere it wants in the tree.
So, for instance, if I want to find out what the size of my neighbor is, I can just go read that data directly.
If I want to know how tall my parent is or any of my children are, I can just go read that area right out of the tree.
And it doesn't necessarily have to be like things that are right next to me.
So I can go look sort of like way far off in the tree. For instance, you know, if you're in a table, right, the things that might be affected
by the layout of the table of some interior thing might be sort of far away in the tree.
So this is really bad for parallelism, because when you design a parallel algorithm, you have
to be very careful about what data is updating when other things are reading it.
And if you don't know the pattern of data access in an algorithm, it's very hard to sort of change that into a parallel algorithm. Your best bet is to basically put locks around everything and then
try to make lock contention not a problem or to get rid of as many locks as you can.
So this didn't seem like a promising way to start. So instead, the way that it works is we
start from a thing that we know can be parallelized, which is tree traversals. It's very easy to do
parallel tree traversals. For instance, you just have, you know, the very first thread start with
the root object and then, you know, create a job for each of the children it has and they go off
on different threads. And then each of those children creates jobs for its children and they
get scheduled on whichever threads, right? So it's pretty easy to describe, and it's easy to reason about.
Yeah.
And similarly, going from the bottom up is also pretty similar, right? All of the children of a particular node get finished, and once the last child is processed, you can start processing its parent, and so on all the way up the tree. So if you use that as sort of the constraint that your algorithm has to operate in, and
when I say constraint here, I mean like the data access pattern you need to make this
work is, if I'm going top down, I'm allowed to look at any of my ancestors, but I'm not
allowed to look at my siblings or my children, right?
Because they might be happening on a, they might be getting processed on a different
thread. My parent already got processed, or I wouldn't be being processed now, but all this other stuff could be happening at the same time.
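The data-access rule Jack describes, a node may read ancestor data but never sibling or child data, is what makes spawning a thread per subtree safe. Here's a toy sketch of that top-down pass (illustrative types, not Servo's actual code):

```rust
use std::thread;

// A toy DOM-like tree. Each node computes its value purely from ancestor
// data (here, just the parent's depth), so every child subtree can be
// processed on its own thread without locks.
struct Node {
    depth: usize,
    children: Vec<Node>,
}

fn top_down(node: &mut Node, parent_depth: usize) {
    node.depth = parent_depth + 1; // computed purely from ancestor data
    let d = node.depth;
    thread::scope(|s| {
        // Each child gets its own scoped thread; the borrow checker
        // guarantees the threads touch disjoint subtrees.
        for child in &mut node.children {
            s.spawn(move || top_down(child, d));
        }
    });
}

fn demo_depths() -> (usize, usize, usize) {
    let leaf = Node { depth: 0, children: vec![] };
    let mid = Node { depth: 0, children: vec![leaf] };
    let mut root = Node { depth: 0, children: vec![mid] };
    top_down(&mut root, 0);
    (root.depth, root.children[0].depth, root.children[0].children[0].depth)
}

fn main() {
    assert_eq!(demo_depths(), (1, 2, 3));
    println!("parallel top-down traversal assigned depths 1, 2, 3");
}
```

Notice there are no locks anywhere: because each spawned thread holds a mutable borrow of a disjoint subtree, Rust proves at compile time that no thread can peek at a sibling.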
But they may have information that you need, right?
They might, and we'll talk about that in a minute. But in the base case, you basically restrict yourself to only being able to read information from things you know can't be written to. And so this means basically your ancestors and yourself, and no siblings or children.
It's like a data straitjacket.
Yeah. And so you're not able to express all of the layout calculation in just a single tree traversal, so we use several passes of them. A good way to think about it is, you go from the bottom of the tree up and you pass along how big you are.
We call it the intrinsic width.
So basically, if it's like an image with a certain size, then of course that's its intrinsic width and it gets passed up.
And so then you get to the top of the tree and now you know how wide everything is sort of requested to be.
And now you can go through and assign the actual width to everything. So now that you know what the width of the parent is, which is, say, set by the window size, I can say, okay, the thing below it must be this wide, because there's only this much space. And you can go all the way down, propagating this information to the bottom of the tree. And then once you know how wide everything is going to be, now you can go up the tree and figure out how tall everything is.
Because if you know the height of yourself, then you're done.
If you have the heights of all your children,
then you can figure out how tall you are.
And this is where things like line-breaking text would happen.
And so then when you get all the way up to the top of the tree, you're done.
Now you know how wide everything is
and how tall everything is.
So this is pretty simple to reason about.
You have to divide up sort of the layout work
into these three passes.
That's not so much of a problem.
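Those three passes can be sketched in miniature like this (illustrative field names and rules, not Servo's real layout types):

```rust
// Toy block layout in three tree passes: intrinsic widths flow up,
// actual widths flow down, heights flow back up.
struct Block {
    intrinsic_width: f32, // e.g. an image's natural width; 0.0 for containers
    own_height: f32,      // height the node contributes itself
    width: f32,
    height: f32,
    children: Vec<Block>,
}

impl Block {
    fn new(intrinsic_width: f32, own_height: f32, children: Vec<Block>) -> Block {
        Block { intrinsic_width, own_height, width: 0.0, height: 0.0, children }
    }
}

// Pass 1 (bottom-up): a node's requested width is the widest thing in it.
fn intrinsic_widths(b: &mut Block) -> f32 {
    let mut widest = b.intrinsic_width;
    for c in &mut b.children {
        widest = widest.max(intrinsic_widths(c));
    }
    b.intrinsic_width = widest;
    widest
}

// Pass 2 (top-down): the parent's actual width (the window size at the
// root) constrains every child.
fn assign_widths(b: &mut Block, available: f32) {
    b.width = b.intrinsic_width.min(available);
    for c in &mut b.children {
        assign_widths(c, b.width);
    }
}

// Pass 3 (bottom-up): with widths fixed (so text could line-break),
// height is a node's own height plus its stacked children's heights.
fn assign_heights(b: &mut Block) -> f32 {
    let kids: f32 = b.children.iter_mut().map(assign_heights).sum();
    b.height = b.own_height + kids;
    b.height
}

fn main() {
    // An 800px-wide image and a 20px-tall paragraph in a 600px window.
    let mut root = Block::new(0.0, 0.0, vec![
        Block::new(800.0, 400.0, vec![]), // image
        Block::new(0.0, 20.0, vec![]),    // paragraph
    ]);
    intrinsic_widths(&mut root);
    assign_widths(&mut root, 600.0);
    assign_heights(&mut root);
    assert_eq!(root.width, 600.0);  // clamped to the window width
    assert_eq!(root.height, 420.0); // image + paragraph stacked
    println!("layout: {} x {}", root.width, root.height);
}
```

Each pass obeys the traversal constraint: pass 1 and 3 only read children, pass 2 only reads the parent, which is exactly what makes them parallelizable per subtree.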
But then we run into this problem that you mentioned
is what if you need to know what your neighbor's doing?
And this happens with CSS floats. If you float some content in a web page, that means the layout of the thing next to you is affected by your own layout. So for example, when you try to figure out how wide a paragraph of text is going to be, you need to look at what all of the floats are that your neighbors have, to figure out how wide they are, so you know how wide your text can flow. So this
sort of breaks parallelism because the only way to do this in that sort of constrained problem space
is to defer the calculation to higher up in the tree. So basically if you need to read data from
your neighbor, then you just say, okay,
I know I need to do this. I'll delay the calculation until my parent is getting done.
And then when the parent is getting done, it can go and read, you know, in a bottom-up traversal,
it can go and read any of the children's data it wants. And so you basically have to defer
the calculation to one step later, or wherever the constraint would otherwise be violated. So that works fine, but it sort of breaks the parallelism. For that little subtree, now you can't do the things all independently on different threads, you have to do them all in one thread. So it's not linearly scalable like restyling is, but you can still get a lot of performance there.
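That deferral trick can be sketched like this (a toy model with made-up types and a hard-coded float width, purely to show the shape of the idea):

```rust
// A node whose layout needs sibling data (text flowing around a float)
// can't be finished in the constrained parallel pass, so it is marked
// deferred, and the parent's step, which IS allowed to read all of its
// children, finishes it.
struct LayoutNode {
    needs_sibling_data: bool, // e.g. text flowing around a float
    width: Option<f32>,       // None until computed
    children: Vec<LayoutNode>,
}

fn constrained_pass(node: &mut LayoutNode, available: f32) {
    // In real code the children would run on separate threads; kept
    // sequential here to keep the example short.
    for child in &mut node.children {
        constrained_pass(child, available);
    }
    if node.needs_sibling_data {
        node.width = None; // defer: the parent will fill this in
    } else {
        node.width = Some(available);
    }
}

fn parent_fixup(node: &mut LayoutNode) {
    for child in &mut node.children {
        parent_fixup(child);
    }
    // The parent may read every child, so deferred children can now be
    // resolved with full knowledge of their float siblings.
    let float_width = 100.0; // pretend a sibling float occupies 100px
    let base = node.width.unwrap_or(0.0);
    for child in &mut node.children {
        if child.width.is_none() {
            child.width = Some(base - float_width);
        }
    }
}

fn demo() -> f32 {
    let para = LayoutNode { needs_sibling_data: true, width: None, children: vec![] };
    let mut root = LayoutNode { needs_sibling_data: false, width: None, children: vec![para] };
    constrained_pass(&mut root, 600.0);
    parent_fixup(&mut root);
    root.children[0].width.unwrap()
}

fn main() {
    assert_eq!(demo(), 500.0);
    println!("deferred child resolved to 500px next to a 100px float");
}
```

The cost is visible in the structure: the fixup step runs on one thread per parent, which is exactly the loss of parallelism Jack describes for float-heavy subtrees.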
So most things turn out to be easily expressible
in those constraints.
CSS floats is an example of one that is not,
although a very popular one.
Well, can we all just agree that CSS floats are the worst?
I mean, think of every web developer on earth
and then add up all the time
that we've collectively spent
dinking with floats inside of Web Inspector
and then think about how much wasted time we have there.
And how much time it's causing you guys headaches
in terms of parallelizing the layout calculations.
Ugh, the worst.
Yeah, it's kind of interesting.
I wonder if Servo, as successful as we hope it will be,
then you'll have this sort of negative feedback loop
for using floats.
Because if you use a float in your page,
it will lay out slower
because it won't be able to use
all of the potential resources of the machine
in every case.
And so what'll happen is,
a good example here is Wikipedia.
So Wikipedia has this
floated sidebar that basically covers the whole page. And so Wikipedia layout in Servo is sort of a worst-case example. But Wikipedia mobile does not have this. It does
the navigation in a different way that doesn't use floats. And so the layout performance of
Wikipedia mobile is vastly improved compared to the normal
desktop Wikipedia case.
And so it could be that
if you like use a lot of floats,
then you'll just get
sort of negative
performance feedback.
And you'll be like,
why isn't my site as fast
as these other sites?
And, you know,
hopefully it'll be well known
that floats is one
of these problems.
And you can sort of
fix that in the code
and we can all make
every page faster.
That'd be awesome, especially if, you know, the work that you guys are putting in on Servo is also getting over to Blink and the other engines, in terms of just the cross-pollination of that effort. Because then we have even more of a chance of it being, you know, not just in Servo-driven browsers, but in lots of different browsers, you have
this exact same performance problem with floats
or with whatever happens to be
a performance-negative
tool that you're given.
It would be very influential and awesome.
So, cool.
Anything else on layout?
It sounds like y'all put a lot of work into that.
Even describing it to me
is a little bit tough
It was one of the most complicated bits. It's one of the bits we did first, because we knew how hard it was going to be, so we got that out of the way. Of course, we're still adding new layout stuff. It doesn't support every layout feature that the other browsers do yet, but it supports many of them now. One thing I should add is that after we did those two pieces, that's when we started doing some initial rough benchmarking to see how fast it was. And that's when we discovered CSS styling scales linearly and stuff. Parallel layout is also a lot faster. It's not linear, but you can expect double the performance, especially on pages that don't have parallelism hazards like floats.
But one of the other ideas we had is,
what about power usage?
It's not just performance of wall clock time.
It's like, how are we treating the battery?
Can we do better there?
And so we did some experiments for that.
We had an intern over a summer record a bunch of data
and do some experiments in this area.
And sort of the intuition here was, well, if we can get done faster
than sort of a traditional browser, even if we use all of the cores instead of just one,
like you can make a case that maybe that uses less power to only use one of the cores.
But if we get done faster, then all the CPUs can go back to idle
and therefore can be idle longer than they otherwise would be.
And so we wanted to see if that sort of intuition was correct or what other kinds of things
might affect battery performance.
And so what we did is we took like a normal MacBook Pro and we turned off the Turbo Boost
feature.
Turning Turbo Boost off basically reduces your performance by about 30%, but it affects battery usage by more than that. So you save about 40% of the battery and only lose 30% of your CPU performance.
Servo is fast enough that it can
make up all of that performance
in its parallel algorithms. So the
servo performance is
basically unchanged. So it's still as fast or faster than sort of a traditional engine, but it uses 40% less power to get there. So that was a cool finding. I don't know if this will scale forever, like how much there is to gain here, but it definitely seems like the initial experiments show that there's a lot we can do about power as well. So it's not just about using all the resources in the world. It turns out that using the architecture the way it's meant to be used can save you a bunch of power. It also means that, if you go back to the Samsung example, if they can meet the same performance goals that they have for some product, but do it on a generation-older CPU because it has multiple cores, you might be able to save some serious bucks there.
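The arithmetic behind that Turbo Boost trade-off is worth spelling out. Using the rough figures quoted in the conversation (not measurements of my own), and assuming near-linear scaling across four cores:

```rust
// Back-of-the-envelope version of the Turbo Boost experiment described
// above. The 30%/40% figures are the rough numbers from the conversation.
fn main() {
    let turbo_single_core = 1.0_f32; // baseline: one busy core, Turbo on
    let per_core_no_turbo = 0.7;     // ~30% slower per core with Turbo off
    let cores = 4.0;

    // A near-linearly parallel engine recovers the loss with headroom:
    let parallel_no_turbo = per_core_no_turbo * cores;
    assert!(parallel_no_turbo > turbo_single_core); // 2.8x vs 1.0x

    let battery_saved = 0.4; // ~40% less power with Turbo off
    println!(
        "throughput {:.1}x at {:.0}% battery savings",
        parallel_no_turbo,
        battery_saved * 100.0
    );
}
```

So even losing 30% per core, four cores leave the parallel engine well ahead of a single boosted core, which is why the battery savings come for free.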
Yeah, so that's about it
on the two sort of parallel style and layout.
Let's tee up a couple more.
We might have you pick, since there's lots of these,
and we want to talk about the current state and the future.
We're hitting our next break.
So, Jack, pick one more.
WebRender, Magic DOM, the Constellation.
What's the most interesting of all of these performance areas that you can share?
And then we'll take a break.
Probably WebRender is the one that people will be most interested in.
The idea here is basically, if you look at sort of CPU architecture diagrams from two decades ago, there's like one core and some cache and stuff like that.
And now they have multiple cores on them.
And we sort of laid that out as one of the motivations for Servo itself.
But if you look even harder, it turns out now there's GPUs on the chips as well.
And those GPUs are getting larger and larger every generation.
So now it turns out that Servo isn't even using half the CPU
or half of the chip because, you know,
while we use all of the cores,
like more than half the die area
is just graphics processing.
So we, you know, we want to be able to use the whole chip.
So how do we get stuff on the graphics processor?
And of course, since it's called the graphics processor,
it makes sense to start with graphics.
So current browsers do compositing on the GPU, which basically means they take the rendered layers, basically pixel buffers of the different layers, and just squash them all together. And they can control where the layers appear relative to each other, which is how you can do stuff like scrolling and some movement animation really fast in modern browsers.
In Servo, we wanted all of the painting to move over to the GPU as well as all of the
compositing.
And so basically, we launched this project called WebRender.
Let's try to explore how this could be done.
And the idea here was immediate mode APIs are really bad for GPUs.
So immediate mode API is like, set the pen color to black,
set my, you know, border size to five,
and then set the fill color to red,
and now draw a line from this coordinate to this coordinate.
So if you do this,
the GPU never has enough information
to be able to figure out
how to order all of the operations
such that they're done most efficiently.
So for example, if you draw a line with that state,
and then you change something
and then the next thing you draw,
you use the same sort of parameters
as the first thing you drew.
Well, if you'd done that in a different order
where you draw the first and the third thing together
and then drew the second thing,
it would be much faster.
So really you want to use
what we call retain mode graphics on GPUs.
This is what modern video games do and stuff.
So the GPU knows sort of
the full scene that it's going to draw and all of the parameters that it can figure out how best to
use its compute resources to do those things. And we realized that web pages themselves are
basically their own scene graphs, right? So once you do the layout, you get what's called a display
list, which is sort of all of the things that you need to draw. And so the idea of WebRender is like if we can come up with a set of display list items
that are expressible as GPU operations, then we can just sort of pass the display list
off to this shader and everything happens really fast.
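To make the retained-mode idea concrete, here's a toy sketch (illustrative types, nothing like WebRender's real API) of a display list being grouped into batches so items sharing GPU state can be drawn together. A real renderer also has to respect paint order and overlap, which this deliberately ignores:

```rust
// A toy display list: instead of issuing immediate-mode draw calls one
// by one, layout emits a list of items, and the renderer groups items
// that share state (here, just their kind) so the GPU can draw each
// group in one go.
#[derive(Clone, Debug)]
enum DisplayItem {
    Rect { x: f32, y: f32, w: f32, h: f32, color: u32 },
    Text { x: f32, y: f32, glyphs: usize },
    Border { x: f32, y: f32, w: f32, h: f32 },
}

fn batch(display_list: &[DisplayItem]) -> Vec<Vec<DisplayItem>> {
    let mut rects = Vec::new();
    let mut texts = Vec::new();
    let mut borders = Vec::new();
    for item in display_list {
        match item {
            DisplayItem::Rect { .. } => rects.push(item.clone()),
            DisplayItem::Text { .. } => texts.push(item.clone()),
            DisplayItem::Border { .. } => borders.push(item.clone()),
        }
    }
    // One batch per kind of GPU state; empty batches are dropped.
    [rects, texts, borders].into_iter().filter(|b| !b.is_empty()).collect()
}

fn demo_batches() -> (usize, usize) {
    let dl = vec![
        DisplayItem::Rect { x: 0.0, y: 0.0, w: 10.0, h: 10.0, color: 0xff0000 },
        DisplayItem::Text { x: 2.0, y: 2.0, glyphs: 5 },
        DisplayItem::Rect { x: 20.0, y: 0.0, w: 10.0, h: 10.0, color: 0x00ff00 },
    ];
    let batches = batch(&dl);
    (batches.len(), batches[0].len())
}

fn main() {
    // Three items collapse into two batches: both rects share one draw.
    assert_eq!(demo_batches(), (2, 2));
    println!("3 display items -> 2 GPU batches");
}
```

The win is exactly the reordering described above: the two rects that were interleaved with the text get drawn together, instead of forcing a state change between them.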
The side benefit of doing this is that anything that you move to the GPU is like free performance
on the CPU, right?
So now all of a sudden, if we're painting stuff over to the GPU, now we have even more
clock cycles on the CPU to do other work, like for instance, running JavaScript.
So while WebRender doesn't make the JavaScript engine faster, it's not like a new JIT or
anything, it has the effect of there's more CPU cycles for the JavaScript engine.
So, you know, you will see speedups in other areas due to this sort of second-order effect.
Wow.
So WebRender, we sort of prototyped this late last year, we landed it in
Servo early this year, we redesigned it to fix a couple of performance problems that we found
right around June of this year.
And now it's basically landed in Servo.
It's the only renderer that's available in Servo
and it's screaming fast.
Some of the benchmarks that we've shown
show things like we'll run a sort of benchmark page
in WebKit and in Firefox and in Blink
and you'll see something like between two and five frames per second
and a web renderer is screaming along at 60.
And actually, that's just because of vsync locking.
It's able to do it at like 250 or 300 frames per second sometimes,
but there's no point.
So it does seem to be quite fast.
So now we're just adding more and more features.
It's got enough stuff
that supports everything Servo can draw.
It doesn't have quite enough stuff to support everything
that, say, Firefox can draw.
But that will be there in due time, probably pretty shortly.
Nice.
Well, let's take this next break.
Up next, Servo, the state of the project, the future,
and how you can get involved.
Stay tuned for that.
And we'll be right back.
Our friends at ThoughtWorks have an awesome open source project to share with you.
GoCD is an on-premise, open source, continuous delivery server that lets you automate and
streamline your build test release cycle for reliable continuous delivery.
With GoCD's comprehensive pipeline modeling, you can model complex workflows for your team
with ease and the value stream map lets you track a change from commit to deploy
at a glance. The real power is in the visibility it provides over your end-to-end
workflow so you can get complete control of and visibility into your deployments
across multiple teams. To learn more about GoCD, visit go.cd slash changelog for a free download.
It is open source.
Commercial support is also available and enterprise add-ons as well, including disaster recovery.
Once again, go.cd slash changelog.
And now back to the show.
All right, we are back.
And before the break, Jack,
we were talking about
all these different ways
that you are,
your team is squeezing
all the performance
you possibly can
out of Servo,
the parallel layout,
parallel styling,
web render,
using the GPU for things.
There's other stuff
that we didn't have time
to talk about.
All of these efforts,
and it sounds like you guys have made huge strides,
especially around the parallel layout
and the work done there.
These beg the question,
how fast is it?
And so you gave us the idea with WebRender
where it was rendering it on the GPU
at, did you say 60 frames per second?
Something like that.
But what about the big picture, like the whole thing, swap out Gecko and swap in Servo,
assuming there's feature parity at some point.
What's the win?
So I'll talk a little bit about the qualitative win
and not so much the quantitative at first.
So the qualitative win is pages should get more responsive.
So by getting all of the stuff done in parallel,
we can return to running JavaScript more quickly,
which means your app,
the time between you clicking a button
or triggering an animation or something like that,
and you running the next line of code
or the next event in your event queue is much faster.
You see this already with Servo and things like animations,
where animations in Servo will be silky smooth,
where they might struggle in other browsers. And the way that you'll see this is you will get dropped frames, so the animation will sort of stutter, or scrolling performance won't feel magical. Another example is when you do touch scrolling on a mobile device, right? The time between you starting the up-swipe and the display actually moving on some browsers can be pretty slow. Whereas, you know, on iOS devices they're always showing this beautiful scrolling where it feels like the thing is moving under your finger. So that's what we're trying to get to, the really fast and responsive user interactivity stuff. The other sort of thing there, and this is a little more nebulous to describe,
but with every sort of major performance improvement, web developers have been
super creative in finding ways to make the most of it. The same way that when new GPUs come out,
of course, all of the existing games are running faster, but it takes a little while before people
figure out how to fully exploit them and do even more unique or crazy things with that hardware.
So I'm hoping that Servo will sort of enable a bunch of things that we don't quite know
what they'll be yet in this new world where apps are much faster.
On the quantitative side, this is an extremely complicated thing to measure.
So I can give you benchmarks for
individual pieces. Those are pretty easy to benchmark in isolation. It's less easy to
compare them with existing browsers, although we've done some of that as well. But in terms
of holistic system performance, what can you expect? And I will say that we do, this is sort of a qualitative way to address it, but we do want the user to feel like there is a major difference just from using the browser and how fast it is.
And sort of a similar way to when Chrome first launched, how people were sort of impressed with how different it felt and how responsive it felt.
We're hoping to have kind of another one of those moments, but maybe even a bigger one than people have seen before. There is a way that we can try to answer this question.
There is a new proposal by some people at Google
called Progressive Web Metrics. The idea here is to measure
to develop metrics that measure things that users
perceive.
So a couple of these are like time to interactivity.
So this measures like how long did it take from when I hit enter in the URL bar
to me being able to meaningfully interact with the app.
And there's sort of a crazy technical definition
of what this actually means that I'll spare you.
But this is a metric that if you
improve this will meaningfully improve the lives of users. And there's a couple others of these.
And that is how I suspect we will measure these performance improvements in Servo compared to
other engines and also how other engines will sort of try to measure their progress in a similar
direction. One nice thing about this idea of these progressive web metrics
is Google wants to make them available to the web authors.
So I think the way that it's specced currently is they fire as events.
So like, you know how there's like document on load
and document ready or DOM ready.
These would be new events that would fire.
So time to interactivity would fire when the page is interactive.
And so you as a web developer would be able to
track these metrics for your own applications and use them to make your
applications more interactive and better. But I mean, also browser developers can use
it to improve their site as well. So I think that is where we want
to get to. We want to get to sort of a meaningful set of user-relevant
metrics that all of the
browsers sort of measure and publish and can be compared by web developers. And so I don't have
any results. We don't have progressive web metrics in Servo currently, but we're expecting to add
them soon. But I don't have the numbers yet for the holistic system performance. But that is how
I think we will get them.
And we do expect to make improvements there.
Now, the quantitative metrics that we do have
are things like existing known benchmarks
like Dromaeo.
We've run Dromaeo for DOM performance.
We can run things like SunSpider
and all those JavaScript benchmarks,
although they aren't very interesting for Servo
because we're using the same JavaScript engine
as Gecko there.
But so any individual benchmark we can run,
whether or not the performance things
that we've done in Servo
affect those benchmarks enough to make a difference,
it's sort of, you don't know until you try it.
And the reason there's some discrepancies there
is that we try to tackle things like parallel layout, really hard problems that we know we're going to have to invent new technologies or algorithms or something in order to solve them.
And we haven't spent that much time on things that have known solutions that are just missing pieces.
But we know exactly how we're going to attack it, and it's going to be exactly, say, like it is in Blink or Gecko.
Like, for instance instance the network cache there's there's not really anything rust is going to add to how you design a network
cache other than the safety side of it there's not really any performance wins to really be had there
uh that are going to be really user noticeable so we like servo doesn't really have one of these
and of course that makes everything feel really slow when it's fetching stuff from the network
every time so so how sensitive some benchmarks is sort of a function of the individual benchmarks.
And sometimes they run across these things in Servo that aren't really optimized yet
because we sort of know how to do it.
It's not a high priority versus things that measure stuff that we've made direct improvements
on.
Let's talk about timing, you know, the age-old question of when things are going to ship.
Every software engineer's favorite question is, you know, when's it going to be available?
But you all have a pretty good roadmap, public roadmap.
We'll link that up in the show notes to this episode.
It's on the GitHub wiki for Servo.
So you have plans, you have a roadmap laid out, and you've been making huge progress
in many areas. But, you know, this has been a three, four-year project.
Undoubtedly, at least Jack, yourself, and your team, you guys are probably super ready to get this into the hands of users and not just developer previews.
What's the roadmap look like and the timing, and how are you guys going to roll this out over the next year or so?
Yeah, so this has been a constant struggle.
I mean, we basically started with a project that is not only a rewrite,
but in order to do that rewrite, we rewrote C++ in addition.
So like I say, if all rewrites are failures,
then surely the rabbit hole of rewrites is going to be an uber failure.
And so we want to make sure that these projects aren't failures. I think Rust has been over that hump for quite a while; Servo, I'm hoping, is over that hump, but it
depends sort of on what people think. In order to do this, we need to string together
like a sort of a series of enhancements that people can notice, you know, see for
themselves and things like that. We don't want to just sit in a room for 10 years saying we're
working on making the web two times as fast, and then you don't get to find out whether we
succeeded until 10 years from now. Right. And the whole while, you have to sort of, like, keep investing
mindshare, or in Mozilla's case money, until you get the result. But we want to get the results sort of as incrementally as we can, for all those reasons. So we've sort of struggled
with this in Servo, because the web is so big. I mean, even since we started the project, there's
probably like a year's worth of work that's been added to the platform, you know, that we haven't
even gotten to. So however many man-years of work we had when we started, there's probably, like,
you know,
N plus one
every year added to that.
So one of the ways
that we thought about doing this
is by making parts of the engine
compelling enough
that certain types of applications
might benefit from them,
even if they don't have access
to the full platform.
So one way to imagine this
is if
you're a web content author and you're making like a mobile app and you're using web technologies,
since you control the content of the site, you can avoid using features that Servo doesn't support
yet. But you can still take advantage of the performance features that we do have to offer. So we've been sort of looking around for partners or, you know,
hobbyists or whoever who has the sort of ability to do this and wants to move forward.
We haven't had a whole lot of takers yet, although that's sort of the style that our collaboration with,
for instance, Samsung was in as well.
So that's one way.
So the other way we can get this into users is just make a browser
people can use and iterate on it from there. Although the amount of stuff that you need to get
to that point is quite large. We did release a Servo nightly at the end of June, which has,
you know, a bunch of functionality that you expect from a browser, like, for example,
a URL bar and multiple tabs, and the ability to navigate history and switch between tabs and things like that.
So we're starting to get to a point
where end users,
probably web developers would be the most likely target,
can download a thing of Servo,
give it a spin, see how it works,
play with some of their content in it.
Hopefully they'll find some missing piece
and want to contribute to the project
and help make Servo better or give us feedback about things that are broken and that are important to
them. Or just, you know, keep an eye on how it's going and give us feedback on if our performance
wins are actually something that they, you know, experience meaningfully. And then the final sort
of long term goal is, you know, how do we get this shipping as a real browser to like hundreds of millions of users?
And, you know, that's sort of always been the long-term goal of the Servo project, but
it's unclear how to get there.
So tomorrow (it'll already have happened for your listeners), Mozilla is announcing
their new Quantum project, which is basically getting huge performance wins out of sort
of a next-generation browser engine.
And as you can imagine, a key part of this new project
is taking pieces of Servo and putting them into this project.
So they're going to take the Gecko engine
and basically rip out style and rendering
and put in Servo's parallel styling code
and the web render code.
And there's some other stuff they're doing on the DOM side
that isn't related to the Servo project as well in there.
But a huge piece of this is taking technology
that we've developed in Servo
and getting it into a production web engine.
Even though the whole of Servo isn't ready,
we can at least take these individual pieces
and start giving people some incremental improvements
in the existing
web engines.
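The data-parallel styling idea he describes can be sketched in plain Rust. This is a toy illustration with invented names (`Element`, `compute_style`, `style_in_parallel`), not Servo's actual code; Servo's real style system schedules work over the DOM tree with work stealing rather than fixed chunks.

```rust
// Toy sketch of data-parallel styling. Each element's computed style
// depends only on read-only inputs (the element plus the style rules),
// so per-element work can be fanned out across cores with no locking.
// All names here are invented for illustration.
use std::thread;

struct Element {
    class: Option<&'static str>,
}

struct ComputedStyle {
    color: &'static str,
}

fn compute_style(el: &Element) -> ComputedStyle {
    // Stand-in for matching CSS rules against one element.
    match el.class {
        Some("warning") => ComputedStyle { color: "red" },
        _ => ComputedStyle { color: "black" },
    }
}

fn style_in_parallel(elements: &[Element], workers: usize) -> Vec<ComputedStyle> {
    // Split the element list into one contiguous chunk per worker.
    let chunk = ((elements.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = elements
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(compute_style).collect::<Vec<_>>()))
            .collect();
        // Joining in order preserves the original element order.
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let elements: Vec<Element> = (0..8)
        .map(|i| Element {
            class: if i % 2 == 0 { Some("warning") } else { None },
        })
        .collect();
    let styles = style_in_parallel(&elements, 4);
    println!("{}", styles.iter().filter(|s| s.color == "red").count()); // prints 4
}
```

Because no element's style depends on another element's output within a pass, the chunks run fully independently, which is why this kind of work can scale with core count.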
Well, that's exciting.
Yeah, it's going to be pretty good.
Like I said, on the styling side, it scales linearly.
So the number of cores is sort of directly correlated to how much benefit you get.
With telemetry from our existing user population in Firefox, we can see that at least 50% of
the population has two cores,
which means that style performance will basically double for all of those people. And I can't
remember, I think it's 25% or something, I don't have the number right in front of me, of
people who have four cores. And so, you know, they can expect four times performance improvement in that
subsystem. So you might ask, back to your holistic performance question, is anyone going to notice
if styling performance is faster?
And I think the answer will be yes
for a couple of reasons.
One is that there are a bunch of pages on the web
that do take a long time to style.
For example,
one that might be relevant to your audience
is the HTML5 specification,
which the single page edition takes multiple seconds
to render in Firefox.
It takes about 1.2 seconds, I think,
just to do the style calculation.
In Servo, that is down now to 300 milliseconds.
So you've gone from something that takes multiple seconds
to something that takes 300 milliseconds.
And then of course, total page load time,
it's something like, I don't know, a third of the total page load time.
So we're talking about taking almost a full second off of the page load time of probably an outlier in terms of page size.
But it's a real performance improvement people will probably notice.
The second way I think people will notice this is in interactivity, where you're interacting with an application,
you know, the application's JavaScript code is making lots of changes to the DOM. And then,
you know, layout is running again. So each time that cycle happens, you have to do restyling.
And so making that faster will mean that the engine spends less time in that stage, and it gets back to running your application code. And I think people will notice that, you know,
as a responsiveness increase,
especially for, you know, interaction-heavy applications.
And if you couple this with WebRender,
which makes, you know, animations and all that stuff faster,
then you sort of even get more benefit.
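The restyle cycle just described (script mutates the DOM, the engine restyles and lays out, then hands control back to script) can be sketched as a minimal loop. The names and structure here are hypothetical, purely to illustrate the cycle, and have nothing to do with any real engine's internals.

```rust
// Toy sketch of the interaction cycle: application script mutates the
// DOM, which marks style as dirty; the engine must then restyle (and
// re-lay-out) before the next paint. The faster restyle runs, the
// sooner control returns to application code. All names are invented.
struct Document {
    dirty: bool,
    text: String,
}

impl Document {
    fn mutate(&mut self, new_text: &str) {
        self.text = new_text.to_string();
        self.dirty = true; // DOM changed, so computed styles may be stale
    }

    fn restyle_and_layout(&mut self) -> bool {
        if self.dirty {
            // Style recalculation and layout would happen here; this is
            // the stage that parallel styling shrinks.
            self.dirty = false;
            return true;
        }
        false // nothing changed, nothing to do
    }
}

fn main() {
    let mut doc = Document { dirty: false, text: String::new() };
    let mut restyles = 0;
    for frame in 0..3 {
        // Application JavaScript would run here and touch the DOM.
        doc.mutate(&format!("frame {frame}"));
        if doc.restyle_and_layout() {
            restyles += 1;
        }
    }
    println!("{restyles}"); // prints 3: one restyle per mutation
}
```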
So one of the reasons we try to parallelize everything in Servo
is because of Amdahl's law,
which says that
the limit on your performance gain through parallelization
is capped by the longest
serial piece.
If you have a piece of code that's not
parallelized, well, that's just making the performance
of the whole system worse.
You have to parallelize everything to get
everything faster.
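Amdahl's law can be written down concretely. The fractions below are made-up illustrative numbers, not Servo measurements.

```rust
// Amdahl's law: if a fraction `p` of the total work can be parallelized
// across `n` cores, the overall speedup is 1 / ((1 - p) + p / n).
// The serial remainder (1 - p) caps the gain no matter how many cores
// you add, which is why everything has to be parallelized.
fn amdahl_speedup(p: f64, n: f64) -> f64 {
    1.0 / ((1.0 - p) + p / n)
}

fn main() {
    // If only half the work is parallel, 4 cores give just 1.6x...
    println!("{:.2}", amdahl_speedup(0.5, 4.0)); // prints 1.60
    // ...and even infinitely many cores cap out at 2x.
    println!("{:.2}", amdahl_speedup(0.5, f64::INFINITY)); // prints 2.00
    // Parallelize 95% of the work and 4 cores give about 3.48x.
    println!("{:.2}", amdahl_speedup(0.95, 4.0)); // prints 3.48
}
```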
Those two pieces go really well
together that are going to ship in Quantum.
And the idea is that those will roll out to users
sometime next year.
Okay.
But they'll probably be available in nightlies and stuff
and people can play around with them before then.
And of course, if you want to,
you can play around with them in Servo right now.
Let's talk about that.
So getting started, you try to make it very easy.
Projects like these of the size and scope,
especially in a systems-level language,
a new one that many people don't know very well,
they're intimidating.
Help us here on the show.
Talk to our listeners about how they can get involved.
Help out, try it out, give it a test drive,
and help push the web forward with you guys.
It's really easy to get involved,
and we have stuff to do for people of all skill sets
and of all sort of language backgrounds,
pretty much.
Most of the code in Servo is
written in Rust, but
we do have a fair amount
of JavaScript stuff that we do
and also Python stuff.
And there's always tooling automation
and things like that
for people who are, you know,
system administrators and things.
One of the ways that we help people
try to get on board
is we have a page called Servo Starters,
which basically is a list of bugs that we have flagged
as easy for new contributors to get to.
And sort of the philosophy here is we pick bugs
that are basically so easy that the hurdle
that people are jumping through is just getting the code checked out,
getting the change made,
the sort of mechanics of getting it on GitHub
and getting review
and sort of interacting with the CI infrastructure.
So that kind of stuff.
But it means that it's pretty easy to get started.
And there's so much stuff missing in Servo.
I know this sounds like I'm talking against my own project,
but the web is huge.
The web is really huge, so don't count that against me. There's so much to do that there's
probably some feature that you have personally used that is not implemented but is actually
fairly straightforward, and you can go and try a hand at it. So we have these Servo
Starters, and we also have bugs labeled E-less-easy, although that can sometimes be a trap, because
sometimes we don't know how much work is actually there.
And it turns out it should have been E-extremely-difficult-run-screaming.
But for people who want to get started contributing, there's a good way to get started.
We have a bunch of people on the team who love mentoring new contributors.
We do this all the time. We also support things like Outreachy and the Google Summer of Code and a couple of other similar programs that are run by different universities for students in various classes.
But we just do a ton of work to try to onboard new contributors and make sure that there's work for new contributors.
We actually are sort of victims of our own success here.
Rust is sort of popular enough that we have a bunch of people sort of hanging out in the wings.
And then, of course, we do a pretty good job of identifying some of these E-easy bugs; they're usually gone within hours of us filing them.
So one of our team members calls these the E-easy piranhas, because basically, if you dangle some E-easy bugs out, like, you know, thousands of fish jump out of the water
to try to snap at them.
Yeah, I'm hanging out on your issues page as you talk
just to give some context to that.
So github.com slash servo slash servo.
There's 1,775 open issues.
Of those, 28 have the E-easy label.
And of those, there's only, well, there's four
that aren't actually assigned. So, you know, you've got 28 E-easy things,
and maybe 23 or 24 of those
have already been
taken by the, what do you call them, the E-easy
piranhas? They've already been snatched up.
Yeah, so we're constantly struggling
to keep up with demand, I guess.
But it's a job that we absolutely love.
Awesome problem.
Yeah, it is an awesome problem.
And I'm very fortunate to be the owner of this problem.
But we're constantly adding new stuff there.
So if people want to contribute and they find out there are no easy bugs left,
you can reach out to us on IRC, on the mailing list, on GitHub, or whatever.
And someone will create an easy issue custom for you based on the kinds of stuff that you're interested in working on.
We have to do this all the time because usually we don't find out they're all gone until somebody shows up going, they're all gone.
I'm so sad.
And then we'll make a new batch.
Can I ask you kind of a philosophical question to a certain degree about this?
Sure.
What's the driver behind desiring so much
contribution? What's the
goal there? We want to get
a web engine
that ships to users. We have
so much work to do that
a dozen paid people
are never going to finish.
If we don't get some other people helping,
then A, we're probably not going to finish.
And B, most of our ideas are terrible, right?
And the only reason that we've had as much success
as we have is through iteration
and sort of attacking each other's ideas
and finding better ways,
or attacking is probably a wrong word there,
but you know what I mean.
Batting around these ideas.
Batting, yeah.
Trying new things.
The more people who are involved, like the more of that that happens.
Just to give some examples, the WebRender, you know, was sort of the brainchild of Glenn
Watson, who's on our team.
And he came from the games industry.
And of course, he was a person that we hired, but he had a completely different perspective
about how all of these things work, and that was one of the reasons we hired him.
And WebRender is the direct result of his different perspective.
And so access to those different perspectives is definitely one of the things we want to get.
And there's also a large amount of people on the team who are really passionate about open source in general and just think, you know, that's how we want to spend our careers is like working with other people, making good stuff that everyone can use.
Well, that definitely resonates with us around here at the changelog for sure.
Very cool.
Well, that sounds like easy is the way to get started. Of course, you mentioned the nightly builds, which you can download and give it a test drive.
Lots to do, lots of work yet to be done,
not just by those at Mozilla or those at Samsung
or those at any specific camp,
but the whole community can get together,
build Servo together, learn some Rust.
Sounds like a great time to me.
Jack, thanks so much for joining us.
Any last thoughts or last words for you
that you want to get out there?
You have the ear of the developer community
before we close out?
Yeah, we'd love to hear feedback
from what you think you could do
with the things that we've already done
or what kinds of performance problems
you struggle with in your unique applications.
We're coming up with new project ideas all of the time.
We're currently starting a new effort
to try to significantly improve DOM API performance,
which we call Magic DOM.
And so we'd love to get feedback
from what kinds of things developers are struggling with.
We'd like people to run the nightly
and let us know what happened on their own sites.
It turns out that if you have people run your code
on the stuff that
they authored, you're much more likely to get a minimal test case that's actionable out of it,
because they know exactly how to shrink it down. So that's a lot of the kind of stuff that
we would love to get feedback on. Even if you're not interested in contributing, we'd love for you to
just take a look and let us know what you thought. Very cool. Well, thanks so much again, Jack
Moffitt. All of the links for this show will be in the show notes.
If you want to get a hold of Jack, we'll have links to him in the show notes.
Servo, of course, all the wikis.
And Jack's even going to send over some slides and some other things that he has in reference to some of these six areas of performance that we discussed.
If you're interested, I know we had to breeze through a couple of those.
So thanks again, Jack.
Thank you to all our listeners.
We really appreciate you tuning in.
Of course, our sponsors, thank you.
We love you as well.
And that is a show, so we'll see you next time.