The Changelog: Software Development, Open Source - Curl turns 20, HTTP/2, QUIC (Interview)
Episode Date: May 31, 2018. Daniel Stenberg joined the show to talk about 20 years of curl, what's new with HTTP/2, and the backstory of QUIC, a new transport designed by Jim Roskind at Google which offers reduced latency compared to that of TCP+TLS+HTTP/2.
Transcript
Bandwidth for Changelog is provided by Fastly. Learn more at fastly.com. We move fast and fix
things here at Changelog because of Rollbar. Check them out at rollbar.com. And we're hosted
on Linode servers. Head to linode.com slash changelog. This episode is sponsored by our
friends at Rollbar. How important is it for you to catch errors before your users do? What if you
could resolve those errors in minutes and then deploy with confidence?
That's exactly what Rollbar enables for software teams.
One of the most frustrating things
we all deal with is errors.
Most teams either A, rely on their users to report errors
or B, use log files and lists of errors to debug problems.
That's such a waste of time.
Instantly know what's broken and why with Rollbar.
Reduce time wasted debugging and
automatically capture errors alongside rich diagnostic data to help you defeat impactful
errors. You can integrate Rollbar into your existing workflow. It integrates with your
source code repository and deployment system to give you deep insights into exactly what changes
caused each error. For our.NET friends, adding the Rollbar.NET SDK is as easy as adding the NuGet Thank you. for free for 90 days. To get started, head to rollbar.com slash changelog.
All right, welcome back, everybody.
This is the Changelog, a podcast featuring the hackers,
leaders, and innovators of open source.
I'm Adam Stachowiak, editor-in-chief of Changelog.
Today, Jared and I are talking to Daniel Stenberg
about 20 years of Curl.
What a history that is.
What's new in HTTP2 and the backstory of QUIC,
a new transport designed by Jim Roskind at Google,
which offers reduced latency compared to that of TCP plus TLS plus HTTP2.
So Daniel, we last had you on the Changelog when curl was 17 years old. Now curl has turned 20, and a lot has changed in those three years. But I think we should start with this quote, from a tweet that you put out recently, which I loved and we retweeted: "20 years of maintaining open source and all I ever got is an awesome career, friends all over the world, and a gold medal from the Swedish king." You've got to start with the gold medal, right? Get to the important stuff first. Tell us this story.

So I was awarded an engineering prize in Sweden. It's named after a Swedish engineer; it's called the Polhem Prize. So it's an old, distinguished prize that they have been handing out for, I think, 120 years or so.
So really a prestigious prize given out to engineers and inventors of different things over the years. And yeah, in 2017, I was awarded and given this prize.
And well, it comes in the form of a gold medal
and a cash part.
Nice.
And at the award ceremony in October,
I believe it was in 2017,
I was awarded this gold medal from the Swedish king who was there and gave it to me.
So I got to shake his hand and say thanks.
That's awesome.
And in the tweet, which is linked in the notes, there is a picture of you shaking, I assume that's him, the Swedish king's hand there.
Yes.
Now, you just tweeted this a few days back, May 18th. We're
recording this on May 22nd, 2018. So on a time delay, did something bring it to your mind or
did you finally get a copy of the picture that you could share? Why the delay on the tweet?
It happened back late last year. So I brought it up for a completely different reason, actually. Previous to that tweet, I tweeted another image, one of these funny things, you know, one of these fake O'Reilly covers, from a book that says "Thanklessly Maintaining Open Source" with a sad llama on it, as if echoing the constant mantra that maintaining open source is a bit of a thankless job many times. And then someone replied to me and said, well, you got a gold medal. So I just had to sort of
show the other side of the coin really, because I think I have gotten a lot of good things
from open source and I enjoy it a lot.
So it's really not an ordeal or a struggle for me.
It's a pleasure and I do it for fun.
So I just wanted to bring out some of the goodness
that I experienced from working with open source.
Well, this is only your second time on the show,
but it's probably the umpteenth time
that your name has been mentioned
since we had you on three years back
because you impressed us so much
with the 17 years of dedication to Curl
and just this relentless pursuit
of what is such a popular, widely used tool
and so relied upon.
This is definitely web infrastructure type of a thing. And so many people burn out, fizzle out; projects change; you know, corporate interests... so many things go what we might consider wrong in terms of sustainability. But with you, it's like you're 20 and you're still rolling. Do you have a retirement date in mind, or what are you thinking?

Sometimes I think about what I would do if I wouldn't do this, but no, I haven't. I'm still enjoying this so much, and I don't see anything else that I want to do as much as this. So yeah, this is really my baby still, very much. So I keep on doing it for the fun of it.
The thing that is kind of interesting about the 20-year aspect is not so much the length of time, but the amount of time. I guess it's somewhat the same, but a slightly different side of the coin: it's been involved in your life. It's been a part of your life since you were 27; I'm assuming, since it's 20 years, you're now 47, doing some basic math here. That's a lot of your life: your 20s, your 30s, into your 40s, basically.

It is totally a part of my life. I mean, the first code I wrote was even before curl, so strictly speaking it's more like 23 years. And yeah, it's older than my kids. It's older than my house. You know, I've switched jobs like three, four times since then. So it's one of the most constant factors in my life, really. It's just been with me since forever. So yes, it's really something that I don't really consider giving up ever, really, because it's me, really.
Do you own the full copyright to Curl, or is it community?
What's the structure, maybe the legal implications to the ownership of it?
I own most copyrights, but not everything.
I haven't really been very strict about it either. So if people contribute a larger chunk
that they want to have their copyright on, that's fine. So we have a bunch of different other
copyright holders on various parts, but I would say that maybe 70, 80% of everything has my
copyrights on it.

I asked that mainly because of the question Jared asked, which is: what are you going to do? What would you do otherwise, essentially?
At some point, you have to pass it on.
Of course, yes.
By force or by desire.
Exactly.
Not being morbid or anything.
But it is open source, and it's licensed extremely liberally, so anyone is free to continue from wherever they feel like at that point, or at any point, really.

This reminds me of the conversation about what is next with regards to the Python project. When you have a BDFL, if that BDFL is really good at doing BDFL things, everything goes well, but eventually there needs to be a passing of the torch. Have you put serious thought into that, or are you far enough away? Of course, with that, you know, we always bring up the somewhat morbid conversation of the bus factor: what if something bad happens to Guido, or to yourself, Daniel?
But more likely like, you know,
an eventual retirement from software
or from open source.
Is that something that is actively in your mind
or does it just feel like it's really far away
at this point?
Both, yes.
And I would say that it is active in my mind in the regard that I've been thinking about it, and I've given it some thought: how to do it at some point in time. But it's not something that I consider doing anytime soon, sort of handing it over to someone. In my ideal situation, within the project there would be one, two, or three persons that would be sort of the natural other people that would take over if I would just get bored one day. And they would just more or less transparently shoulder the tasks that I've been doing and continue in whatever way they think they should do it.
But at the same time, with the way I do the project, I also know that I have a pretty strong presence myself, and I think I sometimes don't let others reach that level, because I sometimes do a little bit too much myself. You know, why wait for someone else to do it when I can do it myself, sort of.
And I think that sometimes isn't constructive in that regard
that it doesn't really encourage others to step forward
and show their abilities.
But it's also, in one sense, very much your life's work. And so, I mean, talk about difficult to pass on or to let go, even if you know it's constructive in the long term to let more people into the fold, or to give the ones you trust more responsibilities. When it's like, you know, curl is Daniel's, it's your project, it's hard to let go of that, right?
Right. Yeah, but of course I would like the project to be more distributed onto more people than we are right now, and I'm trying to make that happen. I think I've sort of laid the groundwork for one way to work, and it has developed into this, so it's not that easy to just say, no, no, I just want to do a little part in my corner here, you go ahead and do everything else. Because there aren't that many others who are prepared to jump in and do the other stuff.

I can recall several years ago, when we talked to you before, you mentioned how some of the income you've been able to make has been because of contract jobs that you've done for various companies to add features or specific things. And, you know, I'm just imagining that it's very difficult to piecemeal and break off some of that, when it's so kind of you-focused, in the minutiae of it. And it's not exactly... I don't want to say it's not the funnest tool to work on (I've never done it, obviously), but it doesn't have this lore like some other popular projects might have, like, hey, come and be a contributor, and you'll have this glorious open source lifestyle. I don't know. I'm not sure there's much draw. How do you draw people into this project with you?

Yeah, that's a good question. I don't know. I mean, it's the pipelines of the internet, right? It's internet plumbing. And I think that might be what attracts people, because it's a sort of a fundamental thing that has massive effect everywhere. So if you contribute to curl, you can get your little piece of code into a couple of billion devices over time. Of course, that's an interesting sort of feeling or challenge.
Well, I think it's sort of that. Okay, I'll take it back: it's one of the things I've been thinking about. I've been putting off a blog post about developers and leveraging software. I feel like software developers live at what is perhaps, right now, the height of humans' ability to leverage things. The fact that you can write one line of code, Daniel, and then do a release (eventually it has to trickle down and go through the release process), and that's going to affect billions of devices, millions of people. That is an incredible amount of leverage. And I do think that's attractive from a software developer's stance, because how can you live the most meaningful life? It's to have the most positive impact on the most people, and software really lets us do that.
Oh, absolutely.
Sort of just do something little in my corner
and it can seriously influence the entire world
in some ways at least.
So let's zoom out and talk about your community a little bit, because as I've been watching curl and your blog more closely since you were on the show, one thing I did notice is you do keep it fun. You do celebrate victories; your 20-year celebration post was awesome, with the Titanic reference. I can tell that you're still light-hearted and having fun with it, even though you've been doing this for 20 years. You have a curl conference now, you've got stickers. Tell us about some of the stuff you're doing in the community and who all is part of it with you.

Yeah, I think all these other things around the project that aren't code also make it fun. Because, I mean, some of the oldest contributors or maintainers in the project have been around for, I think the oldest guy has been here a little over 15 years now. So some of those are really my old friends by now. So, you know, setting up a little conference to meet over a weekend and just talk curl for a weekend, I can't think of many other things that are more fun to do on a weekend. So that's just awesome. And of course, one thing about becoming more known, and things like getting awards and prizes that make people open their eyes and see us in a slightly different sort of light or angle, is that suddenly people approach us with money or ideas, and they can print stickers for us and hand them over, or they can lend us their conference rooms for a weekend, or stuff like that.
So stuff also gets easier when you get known, or people realize the impact, so people get friendly and we get friends all over. That is fun. So of course I like curl and I like working with it, and I try to bring up those fun moments, like celebrating 20 years of curl, or now we have 32,000 questions on Stack Overflow, or now we have 1,700 contributors in the THANKS file, and stuff like that. Because I want to help out the other contributors and everyone, to make sure that they feel appreciated, and we all appreciate what they do. And I think it's fun.
It also goes back to this constant question. I say that, yeah, I've been working with curl for 20 years, and then the constant: whoa, I used it 10 years ago and it worked exactly the same. What have you been doing?

I think we asked you that. That's actually one of our questions: what's new with curl? What have you been doing these last three years?

What have I been doing... And that's a completely natural question. I mean, it's not a bad question. It's just that, you know, when you're working with something and the facade or the front is the same, the whole point with the tool and the library is that it should work the same way.
You know, we work really, really hard to make sure
that it keeps working the same way.
But of course, we added a little stuff
and we fixed bugs and stuff.
But the point is that it shouldn't, I mean,
you shouldn't realize that a lot of stuff underneath
actually changed and we sort of, you know,
replaced half of the engine and added a lot of other things
or documented everything again in
another way. You don't have to think about that. But sometimes I want to help people, people in the project and people around me, to realize that we are actually doing a lot of things, even if you may not think of all these changes and you used curl the same way 10 years ago. We've actually added a whole busload of things just in the last few years, and here are some of those things you can now do that you couldn't do before, and why that is good, and how it helps your application or your usage of this in the future, and so on. And also, we're having a slide... I don't know if I should call it feature creep, but we're adding a lot of features. We have a lot; you know, we have 215 command line options.

Wow.

Sometimes I feel a need to highlight parts of that, to help people actually find out about things that curl can do.

Discovering, you know, hidden features or hidden gems, so to speak, if you're not paying attention to the changelog or whatever.

Exactly. So even if the things might not be new, I can sometimes just write about it: well, imagine if you want to do this, you can actually do it like this with curl. You've been able to do it for a long time, but maybe you didn't think about it.

Has anybody ever written a curl cookbook, or some sort of thing, where it's not necessarily... like, it's just a pamphlet, or maybe it's even only digital, but it's like: here are, you know, 25 things that you can do with curl, and then the specific examples of those commands? Because that would be so useful.

Yes, there are pages like that. I try to do that sometimes,
but I'm not the right person to do it.
I'm so entrenched in the details.
So I just get lost.
I've actually written a book about curl
that I'm posting online.
That's what I call everything curl.
That is really everything curl.
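For a flavor of what a cookbook-style page can look like, here are a few recipes sketched in shell. The flags are real curl options, but everything else is made up for illustration; a file:// URL is used only so the commands run without a network:

```shell
# Make a small local file so these recipes work without a network;
# in real recipes the URL would be an http(s):// address.
tmp=$(mktemp)
printf 'hello from curl\n' > "$tmp"

# Recipe: fetch a URL and print the body to stdout
# (-s silences the progress meter).
curl -s "file://$tmp"

# Recipe: save the body to a file of your choosing instead of stdout.
curl -s -o /tmp/curl-cookbook-demo "file://$tmp"
cat /tmp/curl-cookbook-demo

# Recipe: -sS is a common pairing in scripts: silent progress meter,
# but error messages are still shown.
curl -sS "file://$tmp"
```

Each command prints the same one-line body; swapping in a real URL and real paths turns any of these into a usable recipe.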
So is it going to be like a big curl Bible kind of thing? How long is it, I guess, is what I'm trying to get at.

It's long.

That's kind of how I thought about it. You might say it's everything curl; I'm like, well, that might be too much curl.

Yeah, yeah, it is. You can find it, you can just Google it up. If you print it, it's like 250 pages or so.

Is this an ongoing thing? I mean, it sounds...

It is an ongoing thing. It started in 2015-ish, something like that. Late 2015.

Yeah, exactly. I think it was after our last podcast.

But yes, it's been going on for several years, and it's never going to end either, because it's just so much. I know curl changes all the time too, so if I want to keep up, I need to keep up with the book.

Right.
But it's an effort to describe curl and how to use curl in a way that isn't really just man pages and reference documentation, but actually helps people read up about it in a different way.

It's kind of like a "did you know" kind of thing. That's what I think would be so useful. I was thinking, you know, kind of a callback to a recent show: maybe there's a devhints out there for curl, and of course there is. So, devhints.io slash curl. This is kind of what I'm thinking, but it's light on examples; there are three examples, and I think you could probably come up with some complex use cases, where it's like, this would be super handy for this particular thing, and then here's your curl command.

One thing I have seen
a lot of, which is really neat, is different HTTP tooling. Specifically, some desktop apps for Mac will actually have an "export to curl" button. Once you've crafted a specific request, you can just get the curl export and put that in your terminal. And that's really cool.
Yeah, the copy as curl has really become
a popular feature.
And I like that too.
Since now I know, I mean, Firefox, Chrome and Safari,
now all of them have this copy as curl
if you're using their dev tools.
So you can copy from their specific, you know,
if you watch the network traffic from your browser,
you can select a particular request
and do copy as curl from that.
So yes.
Spectacular for trying to replay a very specific thing
in the terminal, you know,
and capture the output
or whatever you want to do from there.
Oh yeah, it's really handy.
And it's a great way to learn. If you want to do something with curl, and it's roughly what my browser just did, you can just get a copy and edit that command line.
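For illustration, a "copy as curl" export generally restates the request as a curl command line with one -H flag per header the browser sent. The example below is hypothetical: the header values are placeholders, and it points at a local file:// URL purely so it can run offline (curl accepts but ignores HTTP headers for file:// transfers):

```shell
# Roughly the shape of a browser's "copy as cURL" export: the URL,
# then one -H flag per request header the browser sent. Header values
# here are made-up placeholders; the file:// URL stands in for the
# real address so this runs without a network.
tmp=$(mktemp)
printf 'replayed\n' > "$tmp"

curl -s "file://$tmp" \
  -H 'Accept: text/html,application/xhtml+xml' \
  -H 'Accept-Language: en-US,en;q=0.9' \
  -H 'User-Agent: Mozilla/5.0 (demo)'
```

Against a real server, keeping those exact headers is what lets you replay precisely the request the browser made.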
I mean, the command line is usually quite long.
213.
That's a lot of features.
Right.
And they're often really repetitive, because the browsers, you know, they set a lot of headers, and you want to have the exact headers like the browsers do. So they set up, you know, very long command lines. But still, you can look at that command line and see: this is how you could do it.

So, speaking of headers, and speaking of features: I actually found on your blog recently a feature
headers and speaking of of you know features i actually found on your blog recently a feature
that i'm very much looking forward to which is a small change but you said like the core feature set changes you know has stayed the same and people say curl works
exactly like it used to uh there are some you know you're doing some some ui brush ups uh specifically
with the dash i command which is probably like if i if you go through my history with curl like in
my command prompt you're gonna find curl dash dash i capital i of course almost every time because i use it for looking at headers and you're adding like bold uh key values
on the header so the header names are bold and then the value you know the text is not bold so
that's like a very small thing but you're not ignoring the facade or the paint you're still
making small improvements to the output as well yes and and sometimes i i have a
hard time deciding what to focus on, but you know, I think it's fun to do that too. So I try to move around a little bit; I can work a little bit on how things appear on the command line. I changed one of the progress bar outputs a while ago too, just because it is somewhat important to some people.
And why not?
And it's fun to work on that sometimes
and then go back to debugging HTTP2 streams
for another day.
But I mix it up,
and that's what makes me, of course, enjoy this
since I can do various things.
I can play with the UI one day and then go back and work with protocol stuff another day
and then work on documentation the third day and then write a blog post another day.
So, yeah.
So it's actually just landed in Git. So it'll be in the next curl release: the code that outputs the headers as bold. Well, the name part is bold and the value part is not. It's actually a feature a very long time coming.

Well, I'm sure a long time coming, but you also mentioned that this was not an insignificant amount of code change. Maybe you weren't set up to do this kind of output, or why was it a bigger feature than maybe people would think it is?
No, I think it was, well, because it involves... I think it's mostly a lot of internal decisions on how to do HTTP and show headers. You know, we have this concept of headers, and curl supports a lot of different protocols, and some of them have the internal concept of headers, but I only wanted to do the bold for HTTP headers. So it was mostly because of how I had done this with curl up until now, or not done it. And also I had to change how... I don't know how to explain it, but headers come, you know, terminated with a carriage return line feed at the end of the line. So you want to make sure that you actually do this on a complete header and not on a partial header; if it would be an extremely long header, I still need code to handle that. And I only bold the left part and not the right part of it. So it was a lot of, you know, finicky internal things.

A good old-fashioned yak shave.

Yes. And, you know, I've made a lot of decisions a long time ago that were convenient because I didn't do this. And now, when I had to go back and make sure that
I could split up the headers like this, then I had to just, you know, remodel a couple of things and
shape it up. But I think it was all good. I think I improved some other tiny things in the process.
And I know that a lot of people will appreciate getting the headers bold, however small it may sound.
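As a toy sketch of the idea he's describing (curl itself is written in C; this is not its actual code), splitting a header at the first colon and bolding only the name part with ANSI escapes might look like:

```shell
# Hypothetical sketch, not curl's real implementation: split a header
# line at the first colon and bold only the name part using ANSI
# escape sequences.
bold_header() {
  name=${1%%:*}    # everything before the first colon (the header name)
  rest=${1#*:}     # everything after it (the value, colon removed)
  printf '\033[1m%s:\033[0m%s\n' "$name" "$rest"
}

bold_header 'Content-Type: text/html; charset=utf-8'
bold_header 'Server: nginx'
```

The hard part he mentions is exactly what this toy skips: making sure you only ever split a complete header, even when it arrives in pieces.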
It's one of those details that makes it look better.

This episode is brought to you by DigitalOcean.
DigitalOcean is a cloud computing platform built with simplicity at the forefront,
so managing infrastructure is easy.
Whether you're a business running one single virtual machine or 10,000,
DigitalOcean gets out of your way so teams can build, deploy, and scale cloud apps faster and more efficiently.
Join the ranks of Docker, GitLab, Slack, HashiCorp, WeWork, Fastly, and more.
Enjoy simple, predictable pricing.
Sign up, deploy your app in seconds.
Head to do.co slash changelog.
And our listeners get a free $100 credit to spend in your first 60 days.
Try it free.
Once again, head to do.co.changelog.
So I guess we came to this conversation through an embarrassing moment for me. It was early in the morning on a Sunday, and somebody in our Slack, Daniel, had said, hey, what's the state of HTTP/2 and where's it going? And I'm like, great question. We should ask Ilya. We've had him on the show a while back, and it'd be great to catch up. I sent him an email with the subject line "current state of HTTPS", question mark, not 2, and I had to quickly check that, because that was obviously not right. But I was reaching out to him to essentially get an update on TLS 1.3, QUIC, and some other stuff. So maybe help us kind of understand.
He said that you're working on this.
You got a lot of stuff going on.
What's going on?
There's a lot of stuff going on.
Well, HTTP/2, that shipped, I mean, kind of three years ago, right? Our last episode.

And the RFC was published in May 2015. So now, three years later, the work is of course no longer going on standards-wise on HTTP/2 very much. There are still things happening in HTTP/2, but the fundamentals are there, and it's good and it's working and it's being used. I could perhaps just add that if we look at traffic done by Firefox, we can see that Firefox is using HTTP/2 in, I think, about 75% of all HTTPS traffic. So I would say a significant amount of the traffic is HTTP/2 now.
That's counted by volume. Of course, if you look at it the other way, sort of how large a percentage of all the web servers in the world are providing HTTP/2, the numbers aren't as nice. It's upwards, I think we're approaching 40 percent of the top 1,000, and the top 10 million is like 25% or so. But it's still moving. And I think the numbers are still rising pretty quickly. I think they've doubled roughly in the last 12 months or so, and they've been doing that for a while.
So it's growing and it's being used and it's being understood. And I think there are areas that have been more successful
and some that have been less successful in the protocol.
And I think already when HTTP2 shipped,
there was this notion that the next protocol revision
wouldn't at all take 16 years to happen.
It would happen much sooner.
That would be nice, wouldn't it?
Yes.
And a lot of the HTTP2 work was also laying the foundation
to make sure that we could iterate protocol versions
much faster and easier and more effortless in the future.
So HTTP/2 brought a lot of that infrastructure. So what is happening? At the same time, when HTTP/2 shipped, Google had already been running their QUIC experiments in public, you know, in their Chrome browser and on their server side, since, I believe, 2013 or so, when they went public with their QUIC efforts. Anyway, Google took their efforts to the IETF and said, we should make a standard version of the QUIC protocol. They did that in late 2016. QUIC being an experimental protocol that Google invented, which is HTTP/2-like, but done over UDP. And since UDP doesn't have any of this, it's not reliable, it doesn't do retransmissions or anything, and there's no security in there, they basically implemented a transport stack, a TCP-like stack that also features security, because you want to have, not HTTPS, but HTTPS-like.
So you have UDP and TCP,
don't those operate kind of at the same level of the stack?
So why would you take UDP and then make it TCP-like?
Doesn't that sound like you should just...
Exactly.
Well, I can take one step back first: why wouldn't you invent a new protocol? If you want to make a better TCP, why not make a TCP2, right now, in parallel to TCP? But that has basically been ruled out because of all the middleboxes and NATs and firewalls and everything in the world that make it really, really hard to introduce any new transport protocols nowadays.

So we're stuck.

We're pretty much stuck. TCP, UDP, those are the ones we have to choose between. And so now the answer is, well, we can't change TCP enough to make it faster, better, more secure, but we can take UDP, which is very lightweight and doesn't have any of these things that TCP has, and make it TCP-like.

But not with some of the trappings, I guess.

Exactly. By choosing UDP and basically doing it all yourself, you can basically decide how to do it. Just do whatever you want. And in Google's case, they have a fairly large client-side implementation and a fairly large server-side implementation. They were in an excellent position to experiment with doing their own protocol over UDP, implement all this, and check it out: how does it work? And it worked really well. And they figured out that this is a protocol we should make a standard for the web and the internet.
Can either of you give a 10-second slash 60-second version of the difference between TCP and UDP?

TCP is like setting up a string between two computers, you know, a physical string, and you pass data in one end and it will arrive at the other end. Or it might get disconnected and the data won't arrive at all; but if it arrives, it will be unaltered and in the same order it was sent from the sender. So it's basically a way to transport data and make sure that it's a reliable transport, and it's in both directions.

UDP, on the other hand, is basically sending notes in the air or whatever, writing pieces of paper and throwing them over.

Message in a bottle.

Yeah, it might arrive, it might not. And it might arrive in another order, too. So it's much more lightweight, and it's traditionally been used, you know, for DNS, NTP, and also for RTP, for video. But really never on a wide-scale, high-speed internet scale like this.
So that has always been one of the biggest concerns.
Will UDP break stuff now?
Because we haven't designed things for UDP at this level.
But over time, it has proven that most of the things actually work pretty well anyway.
And over time, people have also adjusted things and improved infrastructure and routers and things.
So things are going better.
And looking at Google's numbers, I think my number is old now, but they said already, like a year or two ago, that 7% of the internet is QUIC already. And that's quite a big share of data.
So QUIC is the new version of what H2 has been, right?
The evolution of H1 to H2 is now coming to QUIC.
And QUIC is a lot of things.
Because first it was the QUIC that Google made.
Right.
And now it's evolved to something else.
Yes.
I mean, because it's a long time ago too when they started this. I think 2013 or 2012, something like that, when QUIC was begun.
Yes, exactly.
So I think they went public with it in 2013,
but then they had already been working on it
in private before that.
But then they produced what I call Google QUIC, and that is basically sending HTTP/2 over UDP with custom encryption code. So you could almost use your HTTP/2 implementation, just provide the QUIC stack underneath, and it would work.
But when they took that protocol, well, they kept up with documenting how the protocol worked, they had a website for everything, they made it all public, and they took their latest update of the drafts to the IETF and said: we should document this protocol, this is QUIC from Google, and so on. And when they brought it into the IETF and people started to look at it and decide how to move forward, they came to the conclusion that, you know, this bundled solution, one transport protocol that can only send HTTP, wasn't ideal for a transport like this. So they came to the conclusion that QUIC should be split into a transport part and an application part, so it should be able to also transport other things than HTTP. DNS was one of the first things that were discussed, and it has been the sort of second protocol in the discussion all the time. So then QUIC became QUIC the transport, and HTTP over QUIC is the new HTTP.
Is that the final version of QUIC or is that a transitionary version as well?
Well, it's not final because it's not done yet.
So they took it to the IETF and they created a QUIC working group in the IETF.
And within that group, there has been a lot of activity since then. They're now doing draft 12 of the specs, and there are multiple specs, I think four or five. The plan is to be done by November this year, 2018. But yeah, I don't think they will stick to this plan, because there are still too many loose and moving parts.
I guess the question might be, to zoom out: this is all in an effort to obviously make progress, but also to make it easier to iterate on something that has traditionally been harder to iterate on.
Yeah, but also, when HTTP/2 shipped, we were all aware of a lot of shortcomings and things that we could improve further in the transport protocols. So when we went to HTTP/2, we improved a lot of things from HTTP/1.1, but there are still a lot of other things that HTTP/2 can't really do, and where it has bottlenecks or problems that we can solve.
And we couldn't really solve them with TCP in the HTTP/2 context, but going to QUIC, we can solve even more problems, some of those problems that are still present in HTTP/2. And apart from that, just fixing things in TCP is really, really difficult in general. There are many reasons why TCP is difficult to change, but two of them are that, again, we have a lot of middleboxes over the internet. You know, you're talking through NATs and routers and everything, and they know, with air quotes, how TCP works. So if you change how TCP works slightly, you add a little thing here or there, you break X percent of those boxes, and they will refuse to send it, because they know that's not TCP anymore.
You mean if you're just tuning parameters
or if you like fundamentally change the protocol?
Because tuning parameters, they shouldn't break.
Like that would just be really bad programming
on those boxes.
Well, yes, but that's the reality.
Just as a sort of a little story into this,
one of the features they added in TCP,
I think it was like seven years ago, they added TCP Fast Open, which is a way to send data already in the first SYN packet of TCP.
You know, when you do a TCP handshake, you do a SYN, a SYN-ACK, and an ACK; there's a three-way handshake. So in order to gain a round trip, they invented this method where you could add data already in the SYN packet, in the first packet. So you would save a round trip, you would get data earlier. And you know, a lot of this struggle is to get data earlier: reduce round trips, get data earlier. So sending data already in the first packet of TCP, that's potentially saving, well, if not tens, then sometimes hundreds of milliseconds if you're far away. So that's a huge benefit. But implementing and using this TFO over the internet today turns out to be a struggle and a pain to make work, because there are so many machines out there that block that little new bit that comes saying: here's a TFO.
That's not TCP the way we want it.
Deny.
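The savings Daniel describes can be put into rough numbers. A toy latency model, not a measurement: assume each required round trip costs one RTT, classic TCP needs a dedicated handshake round trip before the request, and TCP Fast Open folds the request into the SYN so that round trip disappears.

```python
def time_to_first_byte(rtt_ms, round_trips):
    """First-byte latency under a toy model: each round trip costs one RTT."""
    return rtt_ms * round_trips

# Classic TCP: SYN / SYN-ACK handshake (1 RTT), then request/response (1 RTT).
# TCP Fast Open: the request rides in the SYN, so the handshake RTT is saved.
for rtt in (10, 100, 300):   # ms: nearby server, cross-continent, far away
    classic = time_to_first_byte(rtt, 2)
    tfo = time_to_first_byte(rtt, 1)
    print(f"RTT {rtt:3d} ms: classic {classic:3d} ms, TFO {tfo:3d} ms, "
          f"saved {classic - tfo} ms")
```

At a 300 ms round trip the saving is a full 300 ms, which matches the "hundreds of milliseconds if you're far away" point; add a TLS handshake on top and the round-trip count, and therefore the potential saving, grows further.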
Yeah, that's a tough problem.
I mean, you just have so much infrastructure out there.
It's not feasible to change the boxes in the middle
because there's just too many owners,
too many places, too many situations
that you're never going to be able to replace those.
Exactly. That's what they call ossification nowadays.
And the grand solution to that is to encrypt everything, so that none of these middleboxes can actually, you know, peek into those little bits and bytes.
Exactly. They can't figure out what you wanted, because they don't know. They just have to pass it on. So then you can add things over time. So that is one reason why QUIC is now really, really encrypted, as much as possible. But that shows how hard it is to change even TCP over the wire. But then there's also just changing the implementations of TCP.
They're kernel-based stacks.
It takes forever.
This TFO implementation, the spec came seven years ago, and it's only like a year ago or so that Windows finally implemented it widely. So it takes forever for this to be implemented widely.
So if you want to iterate fast, you can't do it like that. And then there's another technical problem, for example, that TCP has and HTTP/2 shows, and that is a problem with packet loss. You know, when HTTP/2 was introduced, the new method of doing transfers was to do a lot of streams over a single physical connection. So you would typically do 100 streams over the same TCP connection, just a lot of logical streams over it, which is a good way to do a lot of parallel transfers while only using one connection. And this is really good as long as your network is decent. And it turns out, if your network is very lossy and you start losing packets, then having just a single TCP connection is really not ideal, because losing one packet in the middle means that all 100 streams are waiting for that one packet to get resent. Whereas previously you would typically do perhaps six connections per host, and with sharding you would maybe have 20 or 30 connections with HTTP/1.1 to sites.
So it's almost like faster networks get faster, but slow networks get slower.
Slow as in unreliable, maybe, is the better word. Not slow, but unreliable.
Exactly. Slow as in bad radio.
Exactly. Because H2 is really good if you're far away. So for people really, really far away from their servers, it's excellent. It's possibly those who are really far away who gain the most from H2, because they need to make fewer TCP connections and far fewer round trips. You can, you know, fire off 100 requests at once, basically, and get the responses, instead of this ping-pong: request, response, request, response, waiting, sending, waiting.
And so, well, this TCP limitation is not there in QUIC.
In QUIC, you create connections, but they're not connections in the same way as TCP has them.
And when you're sending streams,
the streams themselves are reliable within the stream.
So you can send things and you know that the picture or image
of whatever you send, it will arrive in the other end,
unmodified and exactly as it was sent from the source.
But the streams, they are independent from each other. So if you drop a packet somewhere in the middle that belongs to stream one, stream two can still continue, because it still has all its little packets. So it's only the one that actually has lost packets that has to wait. So this makes the lossy network situation completely different, because if you lose a few packets somewhere, yeah, sure, those streams that belong to the lost packets will have to wait and, you know, resend the packets and everything. But the others, they can continue.
Sounds like, thank goodness for UDP, because it's provided us a loophole around the ossification,
right?
Like we would have been stuck if this UDP hack wasn't available to us.
Exactly.
It is exactly like that.
So yeah, so that's why it has to be UDP.
And that's why we're doing all this work, implementing TCP-like stacks in user space
in both ends.
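The head-of-line blocking difference described above can be simulated. This is a toy model, not any real protocol's framing: packets carry a stream ID, one packet is lost, and we ask which streams can still be fully delivered before the retransmit arrives.

```python
def deliverable_streams(packets, lost, tcp_like):
    """
    packets: list of (seq, stream_id); lost: set of lost sequence numbers.
    tcp_like=True  -> one ordered byte stream: everything at or after the
                      first lost packet stalls (TCP head-of-line blocking).
    tcp_like=False -> per-stream delivery, as in QUIC: only streams that
                      actually lost a packet have to wait.
    """
    delivered = set()
    stalled = set()
    for seq, stream in packets:
        if tcp_like and any(l <= seq for l in lost):
            stalled.add(stream)      # TCP can't hand over later bytes yet
        elif seq in lost:
            stalled.add(stream)      # this stream waits for a retransmit
        else:
            delivered.add(stream)
    return delivered - stalled

# Three streams multiplexed; packet 2 (belonging to stream 1) is lost.
packets = [(1, 1), (2, 1), (3, 2), (4, 3), (5, 2)]
print(deliverable_streams(packets, {2}, tcp_like=True))   # no stream completes
print(deliverable_streams(packets, {2}, tcp_like=False))  # streams 2 and 3 go on
```

Under the TCP-like rules, one lost packet stalls every stream behind it; under the QUIC-like rules, only stream 1 waits for the retransmit, which is exactly the lossy-radio scenario Daniel describes.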
So QUIC as a protocol is, I would say, far more advanced than H2 then,
because now you also have to implement the transport part and then the HTTP part on top of that. Thank you. New events like live coding sessions and the OzCon Business Summit. And don't miss the fun evening events and receptions, the open source awards, and plenty of networking opportunities for everyone.
Save 20% off with code CHANGELOG on gold, silver, and bronze passes.
Head to OzCon.com to learn more and register. Daniel what if you were just to describe Quicks mission what would it be it would be to reduce
round trips, and to work pretty much transparently the same way as H2, but better, and secure by default, always. There's no clear-text QUIC. And of course, that is HTTP over QUIC, how that will appear. There will be more QUIC after this QUIC.
There's more QUIC coming after this QUIC? Wasn't H2 supposed to be all encrypted too? Or maybe they backed off on that at the last minute.
Yeah, exactly. Well, H2 in reality, I would say, over the web or the internet, is always encrypted, but the standard allows for both. The spec is sort of, you can do it either way. But for QUIC, there's no unencrypted version. You need TLS 1.3; you can't avoid it.
You said there's more QUIC coming, and this QUIC hasn't even arrived yet. How can we look that far down the pipeline?
I think that is maybe something we don't have to care about right now. But, you know, when I said this, when Google took this into the IETF, they decided we should split it into transport and application, and the application is HTTP. And we should prepare for
another application, maybe DNS. And then they also said, oh, well, we also want QUIC to be able to handle multipath. I don't know if you know about multipath TCP, but that's setting up multiple paths between two endpoints over the internet. But then they decided that maybe we don't have time to get multipath into QUIC v1, so we postponed the multipath part. So there's already this talk about how QUIC v2 will be making sure that we can actually do DNS and do multipath and stuff like that. Basically postponed because there hasn't been enough time to cram it into version one.
So you mentioned earlier that roughly 7%, and that might be an older number, of the internet is using QUIC. Specifically, if you're using Google Chrome and you're speaking to Google services, you're most definitely using QUIC and just don't know it. What about the rest of us? What does the roadmap look like in terms of adoption or production use, and when should we start thinking about this? Many of us are still trying to get on H2, and so maybe this is a little overwhelming. But maybe you can skip H2 and go straight to QUIC, I don't know.
I would say that, I mean, yeah, I get that there's some notion of that, and maybe if you haven't gone to H2 by the end of this year, maybe you should consider just going straight to QUIC. But I don't know. Well, Google QUIC was not implemented by many others than Google alone.
Right.
Caddy server has an implementation, and there are a few other standalone implementations, but they have never been widely deployed or adopted. So the Google QUIC version is primarily used by Chrome and the Google servers. So that is basically what the 7% of internet traffic is.
But the IETF version of QUIC, which is quite different over the wire, in the sense of this divide, and they changed the crypto layer and they changed pretty much everything in the protocol. So the IETF version of QUIC is being implemented by a lot of different players, all the ones that you can expect.
I mean, the browsers and the big server vendors
and the big service vendors like Facebook is a lot on it.
And the CDNs too.
So going into the future,
we will see this getting deployed and used by all the big players
that were involved with H2 deployment.
So are you working on this on Mozilla's behalf? On curl's behalf? Both, perhaps? How does it fit into your life now?
Well, I'm actually not that involved in...
Okay.
I'm mostly sort of participating. I'm reading the traffic, I'm getting the news, and sort of following. There's a steady stream of GitHub issues and stuff like that. So yeah, I'm participating a bit, actually out of interest from both a Mozilla perspective and a curl perspective. Because of course I want to make sure to learn and know how it works and understand everything, and then, as soon as it becomes possible and I get the time and energy, implement support for it in curl.
What kind of timeline would you expect for that?
You'd wait for the draft to be formalized, right? So that November 2018 target they're shooting for, you wouldn't start any sooner than that, would you?
I would, depending on things.
Tell us more. What things?
Well, you know, it's like building a tower or building a house: when can you move in? When I implemented H2 for curl, I went in pretty early and started implementing support already in one of the drafts, I think a year before it was finalized. And that turned out to be really useful, you know, both as feedback back into the standards process, but also a lot of just trying things out and getting everything working and interoperating with all the other implementers.
So I think it's really useful to get in as early as possible. But not too early from my point of view,
because in the QUIC world, of course, there's so much transport here, and I want to have the transport part fairly done
by the time I start adding the HTTP parts
on top of that transport stuff.
And then I need to cooperate with others to do this, to do a library.
Or there are already many libraries, of course, that implement this, but I have a particular one in mind, and I want to work with those guys to make sure that we get an HTTP-over-QUIC library that works fine with curl, and that I can make sure that curl uses. I'm expecting us, or me, to start doing that soon. Actually, I would have already started by now, but I think the spec hasn't really moved on as fast as I anticipated, and the library is also not really there. So I haven't really had the time. So, well, maybe in a month or so, I would say, hopefully during the summer, I could get a start on it.
Well, we talked about the ossification
of our infrastructure,
and at least in Curl's case
and on the software side
and on the client side,
we appreciate that you are so eager to jump in
and to help beta test, if you will,
the implementation of these things
and maybe even write one of the early client side implementations of supporting
these things so that, um, we can continue moving forward because when curl, you know,
adopt something, a lot of devices around the world are now can speak that language, right?
So that's pretty cool.
You mentioned DNS as the other potential application of QUIC, QUIC underneath DNS. I'm assuming there you wouldn't gain any speed, because UDP alone has got to be faster than QUIC, right? Because QUIC has additional things. But there you're gaining that encrypted connection. Are there other gains? Am I on point there?
Yeah, and I think there's an even greater, sort of a bigger goal here too. I mean, there's this term that has been used within the IETF several times that I can drop here: they talk about the post-TCP world. If you want to go completely TCP-less, then you need to do the other protocols over QUIC, basically.
And I'm not sure why they picked DNS as the other protocol to use here, because, I mean,
DNS has its own road forward in other ways.
So I'm not sure exactly how this is going to turn out.
I can't really speak much about why they picked DNS
or what they're anticipating,
what they want from that over QUIC.
Because nowadays we see a lot of DNS going over TLS
and DNS over HTTPS coming.
So we're already sort of fixing up the security parts
and the privacy parts for DNS like this.
So I don't know.
So a post-TCP world.
I've never had this consideration.
So this is my first time even thinking about what are the implications.
Dan, you probably thought about it a little bit more.
What does that imply?
What does that change?
Seems like a simplification, but maybe not, because you've got to put so much stuff into QUIC.
One of the interesting things without TCP is what is an HTTPS URL really, right?
So, or an HTTP URL for that matter, but HTTPS URLs, they are basically implying TCP, right?
Or HTTPS is. I mean, they're not saying connect to me on UDP port 443, because you probably don't have that. So that's one of the greater challenges: how to move away from that. And I didn't mention this, but the way you bootstrap into a QUIC world from HTTP or HTTP/2 is that the server replies with an Alt-Svc header saying: you can connect to me, you can connect to this origin over on this server using this protocol, blah, blah, blah.
And then you continue from there and you cache that information.
I was actually going to ask about that.
Is that then a UDP request? Like the client sends one of those first?
Or does it have to be a TCP request?
Well, the initial one will be HTTP, or rather you'd probably upgrade to HTTP/2 first, and that response will say: the next one, you can continue over here using QUIC, this version, blah blah. So you still have the required handshaking, and you still have the setup time on that very first request, because you don't know if it's going to be a QUIC server, basically, until you do. And from then on you can assume that, and you can also cache that in the client.
Yeah, and it has a lifetime. So if you know you're going to provide that for a year, you can set a really, really long lifetime, so everyone will cache it for a long time.
But there's also, going back to ossification and stuff, UDP is also not as successful over the internet as TCP. So there's still this single-digit percentage of connections that will fail over UDP, that sort of never handshake QUIC at all. So you still have to have that fallback mechanism to go back to H2 if the QUIC connection doesn't work. At least that is what we're doing now and for the foreseeable future.
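The Alt-Svc bootstrap and its cache lifetime look roughly like this on the wire. A deliberately simplified parser for illustration: the real grammar is defined in RFC 7838 and allows multiple alternatives, quoting rules, and a persist flag, and the `hq=":443"` value here is just an illustrative protocol identifier, not necessarily what servers actually advertise.

```python
def parse_alt_svc(header):
    """Parse a (simplified) Alt-Svc header into (protocol, host, port, max_age)."""
    first = header.split(",")[0]                 # consider the first alternative only
    parts = [p.strip() for p in first.split(";")]
    proto, _, authority = parts[0].partition("=")
    authority = authority.strip('"')
    host, _, port = authority.rpartition(":")
    max_age = 86400                              # RFC 7838 default: 24 hours
    for p in parts[1:]:
        key, _, value = p.partition("=")
        if key.strip() == "ma":
            max_age = int(value)                 # server-chosen cache lifetime
    return proto, host or None, int(port), max_age

# Server says: this origin also speaks the advertised protocol on port 443,
# and the client may cache that fact for a year (ma=31536000 seconds).
print(parse_alt_svc('hq=":443"; ma=31536000'))
```

An empty host means "same origin", which is the usual case Daniel describes: the first TCP-based request pays the discovery cost, and every later request can go straight to QUIC until the `ma` lifetime expires or the UDP attempt fails.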
You're telling me there's no such thing as a post-TCP world then
because you're just going to be stuck with it forever.
Post-TCP first.
Possibly. You know how everything gets stuck.
There's always something left of the old technology somewhere,
so we'll never get rid of everything, right?
Yeah. I mean, I'm really just wondering what the implications are. Like, why would... the IETF is using this term now internally in their conversations, and I don't understand why you would want a post-TCP world, unless it's just because TCP is old and QUIC's better in every single way, eventually.
Yeah, I guess.
I guess it's because
it then solves
the ossification problem.
It allows you to keep on
developing the protocols freely.
That's right.
Much more freely.
So if you want to implement
multipath next year
or in 2020,
you can do that because you have encrypted everything
from the beginning. So there won't be any middleboxes that prevent you from implementing new cool features that you come up with in the future. So I think there's a lot of that.
Except for that first request. You've still got to get it through there.
Yeah, yeah. But then, I mean, that's the current approach. I guess there will be those who do the happy eyeballs approach, where you try both at the same time and you go with the one that responds first, and stuff like that. So, I mean, that is also a solvable problem. You can probably invent something in the future that will do it differently.
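The happy eyeballs approach mentioned here, racing both transports and keeping whichever answers first, can be sketched generically with a thread pool. A toy race, not an RFC 8305 implementation; the two attempt functions are hypothetical simulations, and a real client would also cancel the losing attempt rather than let it run out.

```python
import concurrent.futures
import time

def race(attempts):
    """Run each attempt concurrently; return the first result that succeeds."""
    with concurrent.futures.ThreadPoolExecutor(len(attempts)) as pool:
        futures = [pool.submit(fn) for fn in attempts]
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()   # first attempt to finish without error wins
            except Exception:
                continue              # e.g. UDP blocked by a middlebox: keep waiting
        raise ConnectionError("all attempts failed")

def try_quic():
    # Simulated failure: the single-digit percentage of networks where UDP is blocked.
    raise OSError("UDP blocked by a middlebox")

def try_h2():
    time.sleep(0.05)                  # simulated TCP + TLS handshake latency
    return "h2 connection"

print(race([try_quic, try_h2]))
```

Because failures just fall through to the next completed attempt, the blocked QUIC path never strands the user; the H2 fallback wins the race, which is the behavior described for clients today.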
So where should developers out there in open source land,
where should they be putting
QUIC on their radar, and thinking about it more or less? In terms of, maybe somebody's running a website like changelog.com, or maybe they're running a network service like Twitch. Is this something that we should all just be patiently waiting for? Should we be getting involved? Maybe that depends on who you are and what you're up to. But what would be your advice with regards to QUIC?
I mean, it's of course a
technology that, if you're into low-latency serving of things from either end over the internet, this is a technology that is coming. So of course, getting familiar with it, how it works, and what it means for you, that's a good start. But it will take a while until there are reliable and solid implementations of this. So if you want to work on code now, you're pretty early on, and you'll run into a lot of funny things and rough edges if you try it out. But of course, it's a chance to work on this bleeding-edge protocol stuff.
So Daniel, you have the ear of the open source community.
You're an elder statesman now, if you will,
being awarded a medal by the Swedish king.
I mean, that's something that doesn't happen every day.
So you got that going for you.
If you could give some closing advice on this time
around to our listeners and to us with regards to open source, software development, life,
whatever it is, as parting words, what would you share with the audience?
My general advice when it comes to open source and software development like this in general is to
first to make sure that you try to find what is fun for you, and work on that, because otherwise you end up not doing it at all. So finding your project or your ideas, or whatever scratches your itch. You scratch your itch, and that makes you actually do something, and that's fun. And then you can possibly become productive.
And then I think you also need patience.
Whatever you do in this area of work, you need to be aware that some things just take a lot of time. And I mean, not only time to get things done, but also time to make sure that others find your project, and that you find your users, and that you get your stuff completed.
Things take a lot of time. Speaking of patience, going back to the beginning of the conversation,
the 20 years post, you mentioned Titanic. You'd mentioned that Google wasn't even formed yet.
And here we just talked about Google leading QUIC, or at least beginning QUIC.
And now it's where it's at.
I mean, it's pretty interesting to see the patience it must have taken on your part to deliver Curl and then evolve it over years and be patient with all the change.
Oh, yeah. And so sort of just looking back over time and see
what a different world and a different society we had back then. It's only 20 years, but
most of everything we know today, it wasn't like that 20 years ago.
Well, cool, Daniel. Thank you so much for spending time with us. Thanks for coming back again. Thanks
for your super awesome service to the community in ways I'm sure
the future generations of the entire world will
truly appreciate. Maybe
less than we need them to,
but having something that's
so widely adopted and so widely used,
I'm sure it will be around forever
for as long as the internet needs it, right?
Yeah, exactly. As long as it's needed,
it's going to be there. It's 20 years
so far, so 20 years more at least for sure.
All right, that's it for today's episode. Thanks for tuning in.
If you enjoyed this show, do me a favor. Share it with a friend.
Go on Twitter, tweet about it. Go back into Overcast and favorite it.
Or whatever you're using, give it a thumbs up, tell a friend.
And of course, thank you to our sponsors: Rollbar, DigitalOcean, and also OSCON.
We're going to be there, by the way.
So if you're going to be at OSCON this year, make sure you say hi.
And bandwidth for Changelog is provided by Fastly.
Head to fastly.com to learn more.
And we catch our errors before our users do because of Rollbar.
Head to rollbar.com.
And we're hosted on Linode servers.
Check them out at linode.com slash changelog.
This show is hosted by myself, Adam Stachowiak,
and Jerod Santo. Editing is by
Tim Smith. And of course, the beats
are by Breakmaster Cylinder. You can find
more shows just like this at changelog.com.
And while you're there, make sure you
sign up for our weekly email. Thanks for tuning
in. See you next week.