The Changelog: Software Development, Open Source - HTTP/2 in Node.js Core (Interview)
Episode Date: December 6, 2016. In this special episode recorded at Node Interactive 2016 in Austin, TX, Adam talked with James Snell (IBM Technical Lead for Node and member of Node's TSC and CTC) about the work he's doing on Node's implementation of HTTP/2, the state of HTTP/2 in Node, what this new spec has to offer, and what the Node community can expect from this new protocol.
Transcript
Bandwidth for Changelog is provided by Fastly.
Learn more at fastly.com.
Welcome back, everyone.
This is the Changelog, and I'm your host, Adam Stacoviak.
This is episode 231, and today is a special episode recorded at Node Interactive 2016
in Austin, Texas.
I talked with many of the speakers of the conference for an upcoming mini-series called The Future of Node,
produced in partnership with the Node.js Foundation and sponsored by IBM.
We'll be releasing those on our new show called Spotlight.
So if you haven't subscribed to our master feed yet, which includes all of our podcasts,
now would be a good time to do so.
Head to your favorite podcast app,
click search,
and search for ChangeLogMaster
and subscribe.
But this episode,
I talk with James Snell from IBM,
the technical lead for Node.
James is also a member
of Node's technical steering committee,
as well as the core technical committee.
He's currently working on Node's
implementation of HTTP2.
I talk with James about the state of HTTP/2, what this new spec has to offer, but more importantly, what the Node
community can expect from this new protocol. We have three sponsors for the show today,
Rollbar, GoCD, and Hacker Paradise. First sponsor of the show today is our friends at Rollbar. Put
errors in their place with Rollbar. Easily get set up for your application.
npm install --save rollbar.
That'll get you set up with Rollbar's Notifier.
You also need an account, so go to rollbar.com slash changelog.
Sign up, get the bootstrap plan for free for 90 days.
With Rollbar's full stack error monitoring, you get the context, the insights,
and the control you need to find and fix bugs faster.
No more relying on users to report your errors, digging through log files to debug issues,
or dealing with a million alerts in your inbox, ruining your day.
Once again, rollbar.com slash changelog, sign up, get the bootstrap plan for free for 90
days, and now on to the show.
So what's the state of H2 in Node?
I know you're working on it now.
You've recently tweeted about a prototype server.
So the current state is just trying to figure out how it would work in Node.
There's a lot of new things within H2.
It's a brand new protocol, even though it's got the HTTP semantics with the request response headers and that kind of thing.
On the wire, it's very, very different.
So it requires a completely new implementation.
So kind of teasing the edges of what that implementation would need to look like,
how it would work, what the issues are, what the additional state management, what impact that's
going to have on Node. Trying to figure out what that impact is going to be. And then
if we were going to put it in core, if it's something that was going to land there,
what would that look like in terms of APIs and in terms of just kind of the performance profile and that kind of thing?
So that's where we're at.
We had a discussion earlier, Thomas Watson and Sam, I forget his last name from IBM.
Roberts, yeah.
Sam Roberts.
Okay, thank you for jogging my memory.
And Sam was really passionate about talking about keeping Node small.
Yeah.
And Thomas actually coined, I don't know if it's him or not, but he coined the term small core.
Right.
And so one of the discussions we had in that conversation was what should or should not be in node core.
And so as you're developing H2, you've got to be thinking about H1 being there, whether it should stay there.
If you did deprecate it, how you would do that.
So end that argument between them because they didn't really come to a conclusion of
what should happen.
Do you think H2 should be in Node Core or should it be a module?
Personally, I think it should be in Core.
And the reason for that, Node has always been a platform for web development, right?
You know, there's always been that web server.
And that is, you know, it's a primary use case.
Even though there's so many different places Node is being used, and in different use cases, a lot of it always goes back to having Node. And if you look, there is no standard library in Node, but there's HTTP, right? There's URL parsing. There's support for these fundamental web protocols that are built in, and that's the only thing that's built in right now. If HTTP/1 wasn't already there, I wouldn't be thinking that we should add HTTP/2, right?
There's other-
You'd think module at that point.
Right, right.
Okay.
There are other protocols that are becoming increasingly more important to the web.
WebSockets, for instance, right?
We don't have WebSockets support in there, and we shouldn't have it because it's not
already there.
QUIC is another one.
You know, it's a protocol that's starting to gain a lot of traction relative to TCP/IP.
It's got a long ways to go, but it's a very good protocol.
But I wouldn't support any effort
to actually get it into core unless it
became much more fundamental to the web architecture.
So with H2, the decision basically just comes to,
we already have H1.
We know H2 is going to continue in relevance,
grow in relevance.
We have a lot of people asking
for it. It just makes
a lot of sense to have it
in core and have it available.
We also talked
about, and maybe you can even end this argument
too, we talked about how you define
what should or shouldn't be in core.
And it sounded like you said, maybe I'll answer this for you and you can agree or disagree,
but it sounded like you said around web fundamentals.
Like if it's fundamental to doing web stuff, it makes sense to put in core.
But what do you think about keeping Node Core small or how to define what should or shouldn't be in Node Core?
If it's not already there, then it shouldn't be there.
It shouldn't be added.
Another example of this was URL parsing.
We have URL parse, but it's fundamentally broken in a number of important ways.
It's there, it fundamentally works, but there's quite a few use cases where URL parse just
doesn't function correctly, so we added a new WHATWG URL parser.
It's the same parsing API that you
use in the browser for new URL and that kind of thing.
So now we have two URL parsers in core.
And there was a big debate whether that should just go out
as a separate module or does it belong in core.
And that question's still not completely settled.
The only reason that would be added to core is because URL parsing is already in core. Right, right. And I think that is the key distinction: we're not adding something that's brand new that doesn't already exist as part of the platform. We're just evolving what's already there, right? So that's where I think we draw the line.
So for those who may not be as familiar as you might be with Node core, what exactly makes up Node core to make you say, don't add more to it, just keep things in modules?
So the basic protocol support: you have DNS, you have UDP, TCP, TLS, HTTP, these fundamentals of just basic web application programming.
That is what core is to me.
Now, there are things that are in support of that.
Obviously, we have to have file system I.O.
We have to have an eventing system, Buffer for just basic data management.
I view those as being more utility capabilities in support of the web platform
capabilities that are there.
To me, that is a large part of what Node is.
And if you look at all the different use cases
where Node is being used, those are still
the fundamental things that are being used the most.
Even if you look at Electron, you know, it's, you know, those are basically
web applications, right, that are bundled into a native app, right?
Right.
Yeah, you cannot get away from those fundamental pieces of that basic protocol support.
And that, to me, is what defines Node.
It's almost what you said, I said you said, but you said it.
Yeah.
Web fundamentals.
Web fundamentals, right.
If it's around that, it belongs in core.
Otherwise, module. Right, otherwise, you push it out to the ecosystem.
So you're working on H2.
What's interesting about H2 for the Node community?
That it's actually a very different protocol than H1.
Yeah, it has the same name, but that too
is really, really important.
The fact that it uses a binary framing instead of a text
framing, right, and just line delimitation.
Stateful header compression adds an interesting dimension
of there's a whole lot more state management that
has to occur over long-lived sockets that just doesn't exist currently in Node when you're dealing with H1.
With the header compression and the multiplexing and stuff that the protocol enables, you can get much more efficient use of your connections. And when we start getting into the real world benchmarks
of real applications, rather than the peak load type
benchmarks I've been doing currently,
I think we'll see much more efficient use of Node
and of the connection there.
But it does require a different way
of thinking about your web applications, your web APIs,
because you're not just pipelining individual requests
one at a time.
You can have, the protocol provides no limit
to the number of in-flight requests and responses
you can have simultaneously over a single connection.
And then you add things like push streams on top of that.
It adds a significant new thing that you just
have to consider of how you're building your applications and what the interaction is going to be in terms of performance and concurrency and all
these things that you just don't currently have to deal with.
So I think there's going to be a lot of just kind of coming to terms with the protocol
and getting experience with the protocol and kind of figuring out what those best practices
are, because it's still a very young protocol, you know, and there's not a lot of industry best practice to draw from. So, you know, it's just kind of, let's get it out there and get it in the hands of people to use, and see how it evolves from there.
I talked to Mikeal Rogers earlier about kind of the state of the union, so to speak, for Node.js, and he was coming at it from a direction and governance side,
less of a code side.
But one thing he said was a really important factor in this next year
is security.
And so how does H2 play into, or the work you're doing on H2,
support the overall mission of being more secure?
Right.
So there's two things there. With H1 in core right now, a number of design decisions
were made early on to favor performance over spec compliance, right? It turns out that there
are a number of compliance things in the spec that says, don't allow white space in headers.
And there's very good reasons for that,
because you get into request smuggling and response
splitting, and there's a lot of real specific security issues
that come if you allow invalid characters into an H1 request.
Node was like, yeah, we want things to go fast,
so we're not going to check this, we're not going to check that.
And it was a very deliberate decision
not to fully support the H1 spec.
And what we found is that that caused a number of security issues
that we've been dealing with over the past year or two years and stuff like that.
With H2, we're going to be taking an approach where we're going to be very spec compliant.
And we're not favoring performance over that.
We're not sacrificing one or the other.
It is going to be absolutely compliant to the specification without taking those
kind of performance shortcuts.
And that is something that I am emphasizing in my own development as I'm going through
this, that making sure that we're hitting all of those, you know, "you must do this" or "you must not do this" requirements that are defined in that specification.
And I think by adhering to the spec
as closely as we possibly can, we
mitigate a lot of those potential security issues.
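[Editor's note: as an illustration of the kind of rule at stake, here is a sketch of the RFC 7230 "token" grammar for header names plus a CR/LF check for values. This is not Node's actual validation code, just the shape of the check whose omission enables request smuggling and response splitting.]

```javascript
// Illustrative only -- not Node's real validator. RFC 7230 defines a header
// field name as a "token": no spaces, no control characters.
const TOKEN = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/;

function isValidHeaderName(name) {
  return TOKEN.test(name);
}

function isValidHeaderValue(value) {
  // A CR, LF, or NUL embedded in a value is how response-splitting
  // attacks are constructed, so a compliant parser must reject them.
  return !/[\r\n\0]/.test(value);
}
```

Skipping these checks is marginally faster, which is exactly the trade-off James describes the early H1 implementation making.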
The other important thing is that even though H2 does not
require TLS, per the spec, you can do plain text if you want,
the browser implementations, the primary clients of H2 right now,
Chrome, Firefox, Safari, and some of the others,
they will only talk to an H2 server over TLS.
It's just mandated.
They won't even connect to a plain text server.
So automatically out of the gate, you're using secured connections. And that alone
is going to be a significant improvement to security. The one kind of limiting factor there
is Node hasn't really had a great reputation as a TLS terminator. A lot of people, just as the
best practice, put a proxy in front of it, right? And then they'll reverse proxy back over a plaintext connection back to Node
just to ensure the performance.
A lot of that has to do with the way the crypto works with the event loop
and OpenSSL and that kind of thing.
So I think a lot of work is going to need to go in to try to improve that
if we want to improve the performance of Node as a TLS endpoint
and improve on that story.
What gets you most excited about H2 being available?
I know you're working on things like you talked about the state of things,
but what's the most exciting to you that's going to change things for it?
Just getting into the hands of developers and seeing what they do with it.
It is a very young protocol.
It is brand new, and I have my issues with it.
I was actually involved with the working group for a while
that was actually creating it,
and I was one of the co-editors on the draft.
So early on, I had some interest in where it could go.
Then I got out of it for a little while.
I had some issues with how it was designed.
And I'm not completely happy with the protocol
by any stretch.
I do have my issues with it.
But I want to see what developers do with it.
I love seeing all the different ways
that people are using Node today in ways
we didn't even imagine that they could or would or anything
else.
And I want to see that also with the protocol, just
the experimentation and just all the
different new types of applications that could be developed or all the different ways that
it could be innovated on and built on.
Any ideas?
Any pontification you could do on what could be built?
There are all kinds of opportunities for more interesting RESTful APIs.
Push streams are something that are really interesting.
And so far, they've only really been looked at as a way of pre-populating a request cache.
I'm going to push it out so you don't have to do it. But I think with REST APIs, push streams offer some really interesting
opportunities for new kinds of APIs that are providing event notifications or the servers
more proactively pushing data to the client. One person I was talking to, and one of the
ways that they were prototyping stuff and using H2 is they would create a tunnel over an H2 connection,
where they would open the connection with their client,
but then once the connection was established,
they would switch roles, right,
and allow the server to act as the client,
and the client to act as the server,
and they were doing this as a way of doing testing
over their network environment.
That kind of thing, you can't do that with H1, right?
But because of the multiplexing
and the communication model that exists in H2,
that kind of stuff is allowed, right?
It's something you can do.
H2 is going to enable new extensibility models,
kind of new possibilities for new kinds of protocols
that kind of coexist with the HTTP semantics.
And we already see some of that work already happening
within the working group.
There's proposals for other kinds of protocols
that are layered into the mix.
And, you know, you kind of wonder, well, who would do that kind of thing?
Well, look at WebSockets, right?
Look how WebSockets emerged, its relationship with H1, and kind of the difficulties that existed there. H2 is trying to allow you to more naturally experiment with those kinds of new protocols, without the pain that we had with trying to introduce WebSockets.
So there's a lot of new types of innovations, I think, that could come out of it.
But we need to build a kind of a collective experience working with it in order to be able to tease those things out.
We're going to push pause for just a moment and hear a word from one of our sponsors.
If you normally fast forward through our ads, don't do it for this one. This one's pretty
important to us. We're teaming up with Hacker Paradise to offer two open source fellowships
for a month on one of their upcoming trips to either Argentina or Peru. So if you're a
maintainer or a core contributor or someone looking to dive deeper into open source
and you want to take a month off from work
to focus solely on open source,
this is for you.
For those unfamiliar with Hacker Paradise,
they organize trips around the world
for developers, designers, entrepreneurs,
and trips consist of 25 to 30 people
who want to travel while working remotely
or hacking on their side project.
It's a great way to get out, see the world, spend an extended period abroad.
And fellowship recipients will receive one month on the program working full-time on
open source, free accommodations, workspace, events, and even a living stipend.
And one thing we're pretty excited about with this is we'll be following along.
We're going to produce a couple of podcasts to help tell the story of those recipients
who go on this fellowship, the hacker story, the open source story.
It's going to be a lot of fun.
To apply, head to hackerparadise.org slash changelog.
You'll see a blog post explaining what this is all about, what the open source fellowship is.
And down at the bottom of the post, you'll have an opportunity to apply.
If you have any questions about this whatsoever, email me, adam at changelog.com. You mentioned some things you're not happy with,
with the H2 protocol. I couldn't let you not tell me what those are. So,
what are the gotchas? What are the things that are just bugging you about this protocol?
Stateful header compression. It's very effective, right? Headers in HTTP are very repetitive, you know; you're sending the same data over and over and over again. You know, cookies, or user agent strings, all these kinds of things. And when it comes to actually what's transmitted over the wire, there's a lot of waste. Like a date, right? Right.
A date in H1 is 29 bytes because it's encoded as a string.
You know, that could be like more compactly encoded as just a couple of bytes if you're using a more efficient encoding, right?
So it's very, very wasteful as it exists today.
HPACK, which is the stateful header compression protocol in H2, uses this state table that's maintained at both ends.
There are actually two at each end, one for each direction.
So the sender has two, the receiver has two.
And the receiver gets to say how much state is actually stored.
The sender gets to say what's actually stored in that table. But for the entire life of the connection of that socket,
however long that socket is kept open,
you have to maintain the state, right?
And that doesn't exist in H1 today.
H1 is a completely stateless protocol.
So H2 switches that and makes it where you have to maintain state.
You have to maintain this server affinity, right,
over a long-lived connection.
And even though you're multiplexing multiple requests in flight at the same time,
you have to process those headers sequentially and serialize the access to those things.
Because if that state table gets out of sync at any point, you just tear down the connection.
You can't do anything else on it.
And even over multiplex requests, all of those requests and responses share the same state tables.
So it adds an additional layer of complexity that just didn't exist previously.
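[Editor's note: the mechanics can be caricatured in a few lines. This is a toy illustration of the idea only, not real HPACK, which also has a static table, size accounting, eviction, and Huffman coding.]

```javascript
// Toy sketch of the idea behind HPACK's dynamic table. The first time a
// header is sent it travels as a full literal and both ends add it to their
// table; every repeat is then just a tiny index. If the two tables ever get
// out of sync, the only recovery is tearing down the whole connection.
class TinyHeaderTable {
  constructor() {
    this.entries = [];
  }
  encode(name, value) {
    const key = `${name}: ${value}`;
    const idx = this.entries.indexOf(key);
    if (idx !== -1) {
      return { index: idx, bytes: 1 }; // repeat: roughly one byte on the wire
    }
    this.entries.push(key); // the receiver must mirror this insertion
    return { literal: key, bytes: Buffer.byteLength(key) };
  }
}

const table = new TinyHeaderTable();
const first = table.encode('cookie', 'session=abc123; theme=dark');
const repeat = table.encode('cookie', 'session=abc123; theme=dark');
// first.bytes is the full literal length; repeat.bytes is 1
```

The win on repeats is large, but the price is exactly the long-lived shared state James objects to.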
And personally, I don't think it was needed, right?
I think that there were other ways it could have been done, or done differently.
I actually, you know, like I said, I worked on the spec.
I was one of the co-authors.
And I had a proposal for just using a more efficient binary encoding, you know, of certain headers like dates, right?
Or instead of, you know, representing numbers as text, represent them as binary, right?
The compression ratios weren't as good, but you could transmit that data without incurring the cost of managing state, right? So it'd be just like what H1 has today.
We're still sending it every time, but you're sending less every time.
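[Editor's note: the date example works out roughly like this. A sketch of the trade-off being described, not the actual encoding his proposal specified.]

```javascript
// An HTTP date as text vs. the same instant as a fixed-width binary integer.
// The text form costs 29 bytes on every send; a binary form carries the same
// information in 8 (or fewer) bytes, with no shared state at either end.
const when = Date.UTC(2016, 11, 6); // December 6, 2016

const asText = new Date(when).toUTCString(); // 'Tue, 06 Dec 2016 00:00:00 GMT'
const textBytes = Buffer.byteLength(asText); // 29 bytes, resent on every message

const asBinary = Buffer.alloc(8);
asBinary.writeBigUInt64BE(BigInt(when));     // 8 bytes, still stateless like H1
```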
Makes sense to shrink it rather than adding a state.
I kind of agree with you on the state because it seems like it's adding this extra layer of like,
it's almost like somebody shakes your hand and doesn't let it go.
Yeah, in a lot of ways that's exactly what it is.
Now, Google has a ton of experience with SPDY, right?
And a lot of what's in HTTP/2 came out of the work that Google did on SPDY,
and I have a huge amount of respect for everything that they did and provided.
HPACK also came out of Google,
so they did a ton of research in terms of what would work, right?
And they had concluded that stateful header compression was the only way to get, you know, like, real benefits out of H2.
You know, I disagreed with some of those conclusions, but, you know, the working group
decided, you know what, this is what we're going to move forward with. And that's what they did.
And at this point, it's like, I don't like it, but it is what it is, and that's what we're moving forward on.
Some of the other things there, in terms of additional complexity,
is H2 has its own flow control, has its own prioritization.
You can have streams depend on other streams,
and when you set the priority on one, it sets the priority for the entire graph.
There's just a lot there that just doesn't exist in H1.
How much of that do we expose to developers?
Like in Node, we have to provide an API for all this stuff.
Do we provide an API for flow control?
That doesn't exist in Node currently.
How would we even do that in a way that's efficient?
About prioritization, what kind of APIs do we do there?
This additional complexity is something that,
as Node core looking at this,
we have to decide how much of that do we pass on to the user versus how much of that do we do ourselves.
If we do it all ourselves, we're providing fewer knobs
for the users to turn, to tune things,
and we're making it less interesting for them because we're hiding some of those features.
We're hiding those capabilities.
And is that the right thing to do?
So the additional complexity kind of, you know, it's not something we can easily deal with.
It's something we have to kind of.
It's right there in your face.
Right there in your face.
You have to do something about it.
So, stateful compression, that's one thing.
Maybe give me the flip side of that.
Like, what's...
I guess you've already kind of described it a bit
with the complexity,
but what's the worst that could happen?
The server affinity issue is actually the biggest issue here.
A lot of the proxy software vendors had some real significant problems with H2
as it was being defined, and you had a lot of criticism being put forth.
I can't remember his name, but the author of, I believe it's the Varnish proxy,
is very public in his discontent with the protocol, because of the binary framing and the way the headers are actually, you know, transmitted, right? You can't do what a lot of
the proxies do currently, which is just kind of read the first few lines, determine where you're going to route that thing to,
then stop and just forward it on, right?
Which is a super efficient way of doing it.
You have to process the entire block of headers, right?
Then make the determination of whether you're going to do anything with it or not.
At that point, you basically have to terminate that connection
and open another connection to your backend. And so that proxy is actually maintaining four state tables for compression, and a lot more stuff that they're having to do that existing proxy middleware currently doesn't have to do.
Right? So, you know.
I guess that's why you're against it.
Well, you know, it's...
It could have just gone the other way and just shrunk it, instead of sending the same thing back and forth.
Just shrink it, right. But it added, you know, it added a lot of complexity.
You know, the minus side is complexity, like you're talking about the bad side. But what's the plus side?
The performance, using that socket much more efficiently.
You know, I was doing a peak load benchmark here the other day
with just a development image
of H2 in core.
I was serving 100,000
requests at the server.
There were 50 concurrent clients
going over eight threads.
So just as much,
just throw a bunch of stuff at the server
and see what happens. See how quickly it can
respond. With the H1 implementation in core currently,
I was able to get 21,000 requests per second doing that.
But 15% of them just failed, right?
Where Node just didn't respond, right?
And a lot of that has to do with, I was running tests on OSX,
there's some issues there with assigning threads,
how quickly you can assign threads.
And when we get an extreme high load,
you can run into some issues.
With H2, I was able to get 18,000 requests per second,
so a lower transaction rate.
But 100% of them succeeded.
Wow.
Right?
And it was using fewer sockets.
Now, it was keeping them open longer.
The downside of that was it was using significantly more memory,
but it had a better success rate,
and it was using the bandwidth much more efficiently.
The header compression, for example,
we were able to save 96% of the header bytes
compared to H1, right? So, you know, actually it's 96% fewer header bytes sent over the wire. With 100,000 requests, that's a massive savings, right? And if you're looking at, you know, platform as a service, where people pay for bandwidth, saving that much is significant.
A lot of money.
Right.
They'll spend that money in memory, though.
Yeah, yeah, yeah.
They'll make up for it in other ways.
And that increase in performance is significant.
You can't discount it.
With the fact that TLS is there, it's required.
There is an improvement in security.
But there are definite tradeoffs.
And anyone looking to adopt H2 has to be aware of what those tradeoffs are.
And it's something that, as we're going through in core, trying to figure this thing out, there's also going to be trade-offs in terms of API. And one simple example is the fact that the status message in H1,
you know how you have the preamble on a response,
it's HTTP/1.1 200 OK.
That OK doesn't exist in H2.
They completely remove the status message.
So no more "404 Not Found", it's just 404.
Right, no more "500 Server Error". There's no "Server Error", right? There is no standard way, just the number.
Yeah, there's no standard way of conveying the status message. They just completely removed it from the protocol.
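[Editor's note: concretely, the difference looks like this, with illustrative values.]

```javascript
// In H1 the free-form reason phrase is part of the status line on the wire;
// in H2 the response head is just a header block, and the ':status'
// pseudo-header carries only the numeric code.
const h1StatusLine = 'HTTP/1.1 404 Not Found'; // reason phrase travels with it
const h2ResponseHeaders = { ':status': 404 };  // no reason phrase exists at all
```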
Well there are existing applications out there that use the status message, right?
And actually put content there that the clients read.
Now, it's not recommended, right?
The H1 spec, you know, doesn't assign any reliable semantics to it that anyone should use, like, say, hey, that's a thing we should use.
But as users do, they'll use whatever is available to them, right?
That's a bummer, because people will stop saying "200 OK" now. They'll just say 200.
They'll say 200. Right, right. "404 Not Found", the whole joke's, you know.
Right, right.
Nobody will get it anymore. So if you look at Node's API or things like Express, you know,
they have like, you know, here's how you set the status message. Well, that's a breaking change in those APIs when you go to H2.
So we have to make a decision of how closely does the H2 API have to match the H1 API and act the same way when we know that there are distinct differences that mean it can't.
So it makes upgrading or changing to H2 a very deliberate choice.
Yeah, it's going to have to be very deliberate. And it's only going to be in very simple, simple scenarios, which probably
aren't realistic, that somebody would be able to say, okay, it works in both, right? It's going to
be a thing where you have to design your application specifically for H2 in order to take advantage of
the capabilities. It's kind of putting a high barrier in front of it, too.
Exactly.
I mean, you can't expect adoption of what is, as you said,
a better performing protocol if you put a mountain in front of it.
Right, right.
No one's going to want to climb that.
It's less enjoyable or less likely or whatever.
People do it.
We have lots of people that say they really want this.
They really want H2. And we have a lot of people that are talking about it, not necessarily for user-facing,
setting up websites that anyone on the internet can access.
They want to put it in their data center and have server-to-server communication be much more efficient,
which is a huge use case for H2.
Absolutely.
And especially since that is, you know, within protected environments, and you have more control over what the client and the server are, there's opportunities there where you don't have to necessarily worry about TLS. You can do a plain text connection, and you'll get far greater performance out of it. But again, it has to be a very deliberate choice.
All right. Last pause of the show to hear a word from one of our sponsors.
Our friends at ThoughtWorks have an awesome open source project to share with you.
GoCD is an on-premise, open source, continuous delivery server that lets you automate and
streamline your build test release cycle for reliable continuous delivery. With GoCD's
comprehensive pipeline modeling,
you can model complex workflows for your team with ease,
and the value stream map lets you track a change
from commit to deploy at a glance.
The real power is in the visibility it provides
over your end-to-end workflow
so you can get complete control of and visibility
into your deployments across multiple teams.
To learn more about GoCD,
visit go.cd slash changelog for a free download.
It is open source.
Commercial support is also available and enterprise add-ons as well,
including disaster recovery.
Once again, go.cd slash changelog, and now back to the show. So H2, is this something that you're solely working on, or do you have a team working on it with you?
Right now, it's been primarily myself.
I'm working on kind of growing that team of contributors.
Is it in IBM, or is it open source contributors?
It's open source.
I'm doing everything out in the open out on the GitHub repo.
Is it on your user then?
We're doing it under the Node organization, so if you go github.com
slash nodejs slash http2, yeah, everything, all the work's being done there.
I saw that repo there, but I saw like Ryan Dahl in there, so this is not a new repo.
So it's a clone of the Node core. All right, so, okay.
Even though I understand.
Even though the decision hasn't been made to get it into core yet.
Right.
You're assuming it is.
Assuming it is and developing it as if it is.
I'm following, yeah.
I was wondering.
I was like, I expected it to be a module.
But then again.
It's being implemented in such a way that we could easily extract it out as a native module if we needed to, if that decision was made.
Right.
It doesn't, I think... well, I can't say it doesn't use any.
With all this change, wouldn't it make sense just to cut the cord? And, you know, one thing Thomas and Sam were talking about was verbally and documentation-wise deprecating it. Don't do anything to the way it responds or, you know, to anything within Node core. Why not just verbally deprecate it, and then...
It's way too early for us to do that. H2 is a very immature protocol, all right? It still has to be proven, and the vast majority of the web is still driven by H1. Going out there and saying, "Okay, we're going to deprecate this" when H2 has not yet been proven would be very premature.
So what do you do then? You just offer both?
Both, yeah.
And just say that Node is going to be a platform for HTTP development, one and two.
And there will be mechanisms built into the H2 specification so that you can actually run H1 and H2 on the same port. You know, you can have a server that will offer both, and the client negotiates which one they want to use, per socket. We're not quite there yet in terms of figuring out how we're going to make that work in Node, but you know, that's a key capability of H2. So if we are going to fully implement that spec, that means also implementing that upgrade path, which means we can't necessarily get rid of H1.
And the fact of the matter is,
we can't get rid of anything in core, right?
I mean, you see that, you know,
in things like the recent Buffer discussions,
whether we deprecate things.
We just can't get rid of things
that are so critical to what the Node ecosystem is doing. Even having a deprecation message in there would be problematic. And something so fundamental as H1... I don't think we would ever get to a point where we would fully deprecate it.
Yeah, I'll retract that deprecation statement and say it more like this instead, because when we were having that discussion about the options of deprecating things, it was not to put it in where it was a response, but more so in, like, documentation, where it was frowned upon, you know; it wasn't forced. And then, you're obviously so much closer; I'm just outside looking in. But I'm thinking, like, if it's so deliberate to choose it, wouldn't it make sense, or potentially make sense (and this would be a decision you all will eventually make) to offer it as a module instead? That way you can have a clean break when it is time to move over. I'm just thinking, if it's that deliberate, why not make it that deliberate, where it's actually required?
Well, I mean, it's a legitimate question, and that's actually one of the decisions that the CTC has to make. You know, I have an opinion on it, but unfortunately, it's not just all up to me, right? We have to listen to, you know, folks like Sam and Thomas and the ecosystem, and figure out what is the right approach to take. And we're not close enough yet to reaching that decision, right? So I'm being very deliberate in how I write this code, to ensure that if we need to pull it out, if that ends up being the right thing to do, we can.
You can.
It's not making breaking changes
to any existing part of Node.
It is a very distinct separate code path
from the existing H1 stuff.
It would be a native module
and all the things that come along with native modules.
So there would be some considerations there.
But if we needed to, we could.
And like I said, I have my opinion on what it ultimately should do, but it's up to the community.
It's up to the core team to make that decision for whatever reasons they want to make that decision.
Cool. Let's close with any closing thoughts you might have on this subject.
Anything I might not have asked you that you're like,
I got to put this out there before we close down.
Oh, we've really covered a lot of it.
I mean, kind of the big thing I would say is,
you know, if the folks are really passionate about this,
we need to hear from users.
We need to hear from folks that, you know,
that have ideas on how to implement it, right?
And or how to test or what kind of applications they want to build with this thing.
I've had a lot of conversations so far, but it's a big ecosystem.
There's a lot of people out there.
Right.
So we can't get enough input on that. That information, that input, is what's going to help drive the decision
of what's going to happen with this code.
What's the best way for people to reach out to you then?
If it's feedback you want, is it you personally?
Should they go to the repo?
Go to the repo, open issues.
For the folks that really want to get in there,
pull requests are great.
There's been a lot of churn in the code.
I've been getting in there and just hammering away at it for the past few weeks.
With a machete?
Yeah, pretty much.
People have been asking, "Well, where are the to-dos,
so we know where to jump in?"
I was like, "Well, I don't even know what the heck
I'm going to do tomorrow, let alone what to recommend
you jump in on."
But it's starting to stabilize more.
And there are very distinct areas that I know for sure, tests, performance, benchmarks, those kinds of things that we absolutely could use some help on.
So anyone that wants to jump in, just go to that repo, take a look at what's happening.
Testing performance, things like that.
Right.
Okay.
We'll link up the repo in the show notes for this.
And, James, thanks so much for closing down, literally closing down Node Interactive.
Oh, yeah.
So thank you so much for taking the time to speak with me.
It is important that we have this conversation.
So I know that the Node community is going to appreciate what you have to say.
Right on.
Thanks, man.
Thanks.
I want to give one more shout out to our friends at Node.js Foundation
for collaborating with us
on this project
and also to our friends at IBM
for sponsoring the Future of Node series.
To get notified
when we launch the full series,
subscribe to ChangeLog Weekly
at changelog.com slash weekly.
Everything we do
gets announced in that email.
And thanks for listening. We'll see you next time.