The Changelog: Software Development, Open Source - OAuth 2.0, Oz, Node.js, Hapi.js (Interview)
Episode Date: October 20, 2015
Eran Hammer joined the show to talk about updates to hapi.js, Node.js, OAuth, and deep discussions about Oz – Eran's replacement for OAuth 2.0.
Transcript
Welcome back everyone. This is The Changelog and I'm your host, Adam Stacoviak. This is episode 178.
And on today's show, Jared and I are joined by Eran Hammer for an awesome show on hapi, Node, OAuth, and a deep discussion around security,
specifically talking about Oz, Eran's replacement for OAuth 2.0. We had four
awesome sponsors for the show: Codeship, Toptal, Casper, and Imgix. Our first sponsor of the show
is Codeship. They're a hosted continuous delivery service focusing on speed, security, and customizability.
You can easily set up continuous integration for your application today in just a few steps
and automatically deploy your code when your tests pass.
Codeship has great support for lots of languages, test frameworks, notification services, and they even integrate with GitHub
and Bitbucket, and you can deploy to cloud services or even your own servers.
Get started today with their free plan. When you upgrade to a premium plan, use our code
TheChangelogPodcast, and with that code, you'll save 20% off any plan that you choose for three months.
Again, that code is TheChangelogPodcast.
Head to codeship.com/thechangelog to get started.
And now on to the show.
All right, we're back.
We got myself here, Jared here,
and we have Eran Hammer back on the show.
Now, Eran, it's been, wow, I don't even know how long.
It's been at least a year and a half since you've been on the show.
The last time you were on here,
we were talking about Node at Walmart and Black Friday,
what a triumph that was.
So welcome back to the show.
Hey, glad to be back.
And Eran, I guess I don't want to do your intro for you, but whenever you come on a show like this, how do you introduce yourself?
It's gotten much easier now.
Yeah.
I'm the founder of a very early stage startup called Sideway, trying to make sharing conversations more interesting and fun.
Really early stage.
And before that, I spent three and a half years at Walmart,
leading the mobile web services team,
building, among other things,
quite a big portfolio of open source projects for Node
for building server-side applications.
So that's kind of where I am.
Before that, I spent a bunch of time doing web standards and working on security-related
protocols like OAuth.
You mentioned Sideway, your very early stage
startup. Is there anything you can mention about that whatsoever? Anything you can share that's kind of secret, maybe no one knows about?
Well, I talk about it a lot. I just don't write about it a lot.
Basically,
it's trying to kind of fill the gap between the
high noise, low barrier social media
sites like Twitter and Instagram and Pinterest,
where people can express themselves, but it's very hard to find an audience.
And it's very hard to consume it because it's very, very noisy and very low quality,
low value. But there are gems here and there. And on the other side, you have
the full blogging platforms, you know, if it's WordPress and Medium and all those
where it's pretty tedious and expensive to produce content.
I mean, everybody I know has a blog post idea every day, and they rarely write it,
because it's a lot of work.
You have to write, then you have to do spell checking
and grammar, and then you have to make sure it's linked and has high value and all that.
So what I'm trying to do is help people convert conversations into content. You know,
it's kind of like podcasting, but written. And so you'll have a conversation.
And when you're done with the conversation, the transcript becomes the actual content that you're producing.
And people can follow the conversation live as it's happening.
But then when it's over, they can read the transcript. And so there's a lot of work involved in building a new kind of basically chat experience that is not optimized for your real-time communication needs, but is optimized for producing a great conversation.
Because if you look at the chat transcripts that you're having, whether it's, you know, Hangouts or Slack,
whatever it is, they're pretty terrible. So, you know, they get the job done for
communicating something in the moment. But if you're trying to read a conversation
on those tools after the fact, it's just completely useless.
And so the challenge here is to come up with the right user experience
that can basically convert these kind of conversations
into really great written transcripts.
Interesting.
Whether it's an interview or
more of a town hall style or just a casual conversation.
So is this something that you're starting yourself or is this something that
you're starting with other people?
It's something that I started myself.
I recently closed a small seed round and
hired my first employee. So we're a team of two now.
Nice.
Yeah, so that's what I'm doing, spending most of my time on that these days. I'm also doing a significant amount of open source work, keeping hapi going, and also doing a good chunk of consulting work
with a great company called nearForm.
So yeah, keeping myself busy.
Interesting.
So a seed round, a little bit of side work,
a little bit of open source work, obviously,
because you can't put that down,
especially since you did such a good job transitioning hapi when you left Walmart. What can
you share about your departure from Walmart, and just, I guess, whatever you
want to share about your personal breakup? I think you blogged about it quite well, but
specifically around the community around hapi, and how well that transition went. Is there any insight
you can share with the open source community about that? It was pretty clear internally
about two years ago
that the organization
was getting to a point
where they got what they wanted
out of the project
and that the resources being spent
weren't really sustainable at those levels moving forward for this particular project.
So basically, the feature set that hapi was providing seemed to match Walmart's requirements quite well.
And besides bug fixes and small enhancements, it was clearly moving in a direction where
we couldn't justify having four people working full-time on that piece.
And so we worked really hard in order to increase outside involvement.
And it was done on two fronts: the long tail and the super-high-quality involvement.
So we built relationships with companies that had an interest in hapi
and made sure that they were also contributing resources to the project.
And at the same time, we made sure to put a governance model in place that would really reduce Walmart's presence in terms
of control, and somewhat its association with the project, so that people outside would feel more comfortable
contributing and getting the credit that they deserve without feeling like they're
either affiliated with a company that they don't
necessarily want to be affiliated with, or just giving free work to Walmart to
then go and boast about. So this was a long process, and it was not just a community process; it was
an engineering process, because in order to get more contributions and diversify your
community, you need to have more active developers. I'm a big believer in the benevolent dictator
model of open source. I don't like other models very much. I think there should be someone in
charge, someone who can make the final decision, and if you don't like it, you can fork
and do your own thing. That's something that, actually, on our last show, and Jared, you can help me out with this, but
Ron was just lamenting about open source, how you can go and try
to find who's in charge, and, you can go back and listen to him, he's like,
nobody's in charge. Ah, how it drives him crazy. Because it's good for open source that
it is open, but it's bad that there isn't really someone in charge who can say, here's where
we're going, and drive it, like you said, with the BDFL model, right? So a few things
happened. You know, one, hapi got too big for me; I couldn't really be benevolent. It became a huge amount of work,
and I got to the point where I was slowing things down, because we had 30 to 50 modules and
I was basically responsible for all of them, and it just was too much. And hapi core got so big that we had everything inside.
The router was inside.
Cookie parsing was inside.
A bunch of security stuff was inside.
File processing, view rendering, all these features.
And what we did is we broke it up.
And everything that could be moved out was
moved out.
So it wasn't a question of what belongs or does not belong in core; it was: can we, from an engineering perspective, move this piece of software somewhere else? And everything that we could, we moved out.
So we kind of smashed the core framework into a lot of tiny little pieces.
And then basically said, hey, who wants what?
And in order to take over one of those pieces, all you have to do is just kind of show up and start working on it.
You know, they all had no documentation. So the first thing you do is you go add documentation and kind of clean up the code and add more tests and show that you're willing to put in the time on the kind of annoying pieces.
If someone is willing to do documentation, that's a good sign that they'll be willing to do the much more exciting work of writing code.
And so we did all that.
At the same time, we also wanted to make sure that we're welcoming to everybody. So we put in a code of conduct.
We put a bunch of effort into getting a more diverse core team.
So we recruited prominent community members
that are not just white men
in order to kind of change the face of the project
so the project is more welcoming to everybody.
And the goal wasn't necessarily to like, you know,
have better statistics about how many people,
you know, from different backgrounds are participating.
It's more that if someone comes in, they look at the project page and who's on the team,
they can say, oh, okay, well, I can see people like me there.
I'm not going to be the first of my underrepresented group in that environment. And I'm not sure how much actual success we had ultimately
in changing the percentages,
but at the same time,
I think that we created a very welcoming environment.
Some of the stuff succeeded,
and some of it didn't work.
We tried to do a mentoring program;
that is something that we're still trying to get working, but it's so much effort to maintain
that, you know, it's always tricky. But
yeah, so we kind of broke it into pieces,
with the governance model kind of codifying
the relationship between the different modules and the different, you know, lead
maintainers, who are their
own little benevolent dictatorships for their piece. So we did put it all in place.
I think at this point, Walmart employees account for probably 1-4% of the active maintainers on the ecosystem as a whole.
And that was great.
So it was a long process,
but it was a very planned process
in order to take something that was very much
a corporate sponsorship setup
and move it into a completely community-based environment.
And we actually recently introduced a sponsorship policy
so that companies can get their logos and their name
associated with the Happy module
so that people who are getting paid by their company to do this work
can give some benefit back to their company. I noticed in the readme that the Sideway logo is there,
actually. So there is some sort of sponsorship option for hapi. Yeah, that one is funny.
Basically, I kind of calculated the cost, kind of like my lost wages on hapi work every month; it's between $3,000 and $6,000 a month in terms of the hours I'm spending. In terms of being ready to go and get other people to do the sponsorship,
I'm almost there; I'm probably like a month or two away from that. I'm probably going to wait
for another major release, and then I'll just say, hey, if a company wants to put their name on the readme, get a few tweets from the hapi account about the sponsorship,
and basically cover my cost of doing this work,
then I'm probably going to do that.
Just kind of a two, three month arrangement.
But for now, basically, when I do hapi work that is not directly benefiting my own startup,
then basically my startup is paying for it, in a way.
And so, yep, I put that up there.
We'll see where it goes. It's kind of like, you know,
new territory. I haven't really seen projects
doing that kind of temporary sponsorship of open source.
That's interesting.
I think it's an interesting model.
If someone that was listening
was like, hey, I want to sponsor hapi for a month
or something like that.
How would they submit an issue,
get in touch with you privately?
What's the best way?
Oh, whatever they want. My email is all over the place; they can do whatever, you know.
Yeah, whatever they're comfortable with. Usually companies don't want to be public about
asking to sponsor. And so, you know, my email is everywhere. It's eran@hammer.io. I mean,
if you can't find my email, then you probably can't read. There you go.
It's always interesting to think about the copyright and the licensing permission that goes into these sponsored community projects, especially one that seemed to spawn out of Walmart and then become a community project.
Looking at your license, it looks like there's multiple copyright holders.
Can you speak to that?
Yeah, so copyright is really simple.
It's basically whoever creates something has the copyright and then
you can't technically ever waive your copyright. You can assign it and you can kind of like give
it to someone else. But copyright law is really, really tricky and also doesn't really matter much
because for the most part, you can't really sue people for the vast majority of copyright on code.
But the bottom line was we kind of asked everybody and we kind of looked at how the BSD license is set up.
And if you see the license, it just says that you need to retain the copyright statement from any previous code you've used, and you need to retain the terms.
You don't have to retain them as an atomic unit.
So originally, Happy started at Yahoo.
And a lot of the initial code was just lifted from the Postmile project that I worked on at Yahoo, and that Yahoo open-sourced before I left.
And that was copyrighted to Yahoo.
And if you look at the hapi core license, it's basically saying it's copyrighted to Yahoo, it's copyrighted to Walmart, it's copyrighted to other people.
Anybody who contributes basically has a piece of the copyright.
And then by contributing, you're basically buying into the BSD license that's sitting there.
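As a hypothetical illustration, and paraphrasing rather than quoting the actual hapi LICENSE file: a BSD-style license with multiple copyright holders simply stacks the copyright statements above the shared terms, which is what makes retaining previous holders' notices so straightforward:

```text
Copyright (c) 2012-2015, Walmart and other contributors.
Copyright (c) 2011, Yahoo Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
  * Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
  ...
```

Each new holder adds a line at the top; the terms below stay untouched, so contributors all "buy into" the same license.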
So at some point, it became just hard to maintain that list of copyrights
because every pull request, you technically need to go and add that name to the license.
So we just added a link.
We asked the lawyers, and the lawyers said, yeah, it's perfectly reasonable that you have a link to say,
here are the main copyright holders, the vast majority of the code.
But then there's all these other people.
And so copyright is held by whoever contributed a piece of code.
If they work for an employer, then their employer has whatever rights they agreed to.
And all we do is we just say,
hey, all of you are bound by this license agreement.
And so when it's time to decide on a new project,
like, who gets their name on the license,
and who goes under the other contributors, right?
That's kind of the main question people have.
And to me, it's usually whoever starts something gets their name.
So everything I started at Walmart, Walmart got their name there.
Everything I started after I've left got my name on it.
If something was started by someone else, then they can choose who they want to be the primary copyright holder on.
Ultimately, it doesn't really matter as long as it's not out of control.
So I would say that if I'm contributing significantly to something now that was started somewhere else, I might go and add my name as another copyright holder
because I'm doing significant work.
But it's kind of more of like a credit for big contributors
than anything legal.
The other contributors clause kind of covers it.
And in practice, because it's only a copyright license
and it's a very liberal one,
it doesn't really matter at the end.
You can't really do anything about it anyway.
Yeah.
Was it BSD from day one?
Yeah, it was three-clause BSD from day one.
That's the one that the Yahoo lawyers asked to use,
and since that was the one, it's tricky
once you switch to something else.
And let's face it, all the copyright licenses,
all the MIT and BSD and all those, are exactly the same. You know, one will give you a little bit of liability protection, the other one will give you a
little bit of, you know, brand protection, but it's all the old nonsense. It's basically, do whatever
you want, and, you know, if we sue you, the person who has the more money will win anyway.
So were there conversations at Walmart about licensing, or was it just that you were trusted to do what you thought was appropriate?
I had conversations with the lawyers.
The Walmart lawyers were focused more on trademark issues than copyright issues.
It's a bigger concern for them.
Trademark is, especially for a company that's basically
selling brands and is in business with
pretty much every manufactured good, electronic or
otherwise in the world, the trademark relationships are
really strict.
And so they want to make sure that they're not opening themselves up to liability, and that people can't go out after the fact and take credit for the work.
So what we've done is, I kind of worked with them, the really great legal team there.
And we basically did two things.
One, we made sure that none of the marks we're using are trademarked by anybody else worldwide.
So we're not infringing on anybody else's work.
And at the same time, we kept everything public domain.
So as a matter of policy, we did not trademark a single Happy related mark.
No logos or names.
And so they're basically in the public domain.
So in practice, everything is just covered by the BSD license.
There are no trademarks in Happy.
It's all just copyrighted stuff.
And you can do whatever you want with it.
That covers the logo.
It covers everything else.
We even removed the copyright statement from the website.
It's basically the same as all the other licenses.
So it's a little different, I think, now.
I think that hapi was the very first meaningful open source project done at Walmart.
And so the organization was catching up with us.
We were basically doing things.
And after the fact, they came and said, oh, I guess we're doing open source now.
So maybe we should have a policy about it.
And so we, the team I was part of in general, I can say this freely now because pretty much everybody that is relevant has left as well over the last year.
But basically, we had kind of an ask-forgiveness attitude, versus first, you know, going through the entire corporate ladder and making sure it's all approved. But in practice, like, I had extensive engineering IP experience.
You know, I play a lawyer on the internet.
And having done three full years at Yahoo,
working with a legal team there on exactly this kind of stuff,
all about, you know, copyright and patents and trademarks.
I have a pretty solid understanding of it.
So whenever we did something without asking permission,
the lawyers came, you know, semi freaking out about it.
And I was saying, oh yeah, we know,
here's where we did this and here's where we did this.
And here's what the policy is. And they were like, oh,
I guess you know everything about this already.
And so that was really helpful.
It really helped create the kind of trust
that they needed in order to feel comfortable
that we're not doing something particularly stupid.
And then the other thing is,
which made things really easy,
and that's kind of like an advice
for everybody who wants to game the system,
is if you fork something with a license,
you're basically making your life really easy, because you can just inherit what's going on there.
And the lawyers can't really argue anymore
because it's kind of required
for you to keep sustaining that license.
So find yourself a project with very little code,
fork it, and then change it into something else.
It's like a loophole in all these big corporations.
And I know I'm taking this to the extreme, but basically, if the day before you join a company you go and open source a tiny little piece of code, and then you continue working on it, then it's much easier to get the legal team to just agree with the terms that you set up there than if you're starting from scratch, and now you have a...
So all we need is a bunch of shell projects, with each license, available for forking, and then people can fork away the empty folders.
Yes, pretty much.
You need to have some meaningful something in there, though, so that when you go to the legal team... because the big-shot lawyers are not stupid.
So they'll say, hold on, there's no code here.
So why can't you just start from scratch?
Right.
Or basically, if you are forking, practically, as Jared said, that's sort of a shell project for the license only.
Yeah, so it's helpful if you're starting from something that has some meaning and some value to what you're doing.
But it's really a great system, because once you fork an existing piece of work, the default requirement is to just keep that license, because switching licenses is tricky;
now you have dual licenses, and some code is under this and some of it is under that, and nobody
likes that. It's also generally easier for most companies to allow you to contribute to an open
source project than to open source their own stuff. And so really, if instead of creating original work you are doing a fork or
contributing to something else, the legal stuff within big companies becomes much, much simpler
to manage. So there are a lot of ways of gaming the system. But ultimately, if you want to make your life easier in a big corporation, being versed in the area is really important.
Because the lawyers, if they feel comfortable, then they'll let you do a lot more.
I mean, the same thing with security. I got yelled at multiple times for posting code snippets on
Gist from Walmart code, like during Black Friday. And the infosec team, you know, immediately
found it and freaked out that there's Walmart code now being shared, and it has port
numbers and other security stuff, and so they freaked out. Oh, my. And I was immediately called to the principal's office,
and I was immediately able to tell them,
well, I said, you know, first of all,
you know, maybe you look up who I am.
And then they looked up, and oh, okay.
We're using all his protocols for security,
so maybe he knows something.
And then I said, what are your concerns?
And they said, like, you know,
you can't publish this and this and this and this. I said, well, all the ports have already been
changed. All the sensitive paths have all been, you know, changed. So basically what we posted
is exactly what we're running, only not. And everything that's meaningful for anybody to
understand the internal topology has already been changed in a random way, so that
it's not even real. And so they saw that, and they were like, oh, I guess it's okay then. And then the
next time it happened, they basically said like, we just want to confirm that you did that already
on all this stuff, right? It wasn't like freaking out. It's just like, we just want to cover our
ass that, you know, yes, we have told you that you can't post this kind of information
and you agree that you didn't.
So, you know, a lot of these policies are important,
but at the same time, the people who enforce them are,
sometimes they care more about protecting their jobs
than protecting the actual IP or security of the environment.
And if you just make them comfortable, then that goes a long way.
I'm glad you mentioned Black Friday because that kind of leads us into the next quick topic I wanted to mention.
But I do want to take a quick break before we do that.
So let's break and hear from a sponsor.
And when we come back, we're going to talk a bit about Node,
specifically the foundation, the formation of IO, and then a lot of stuff that's changed since then.
So we'll be right back.
Say hello to Toptal designers.
Our friends at Toptal have done something really, really awesome.
They've expanded into a new market.
They're talking designers.
Toptal has been known as a thriving network of some of the best software developers and engineers out there.
Many of the developers in their network know extremely talented designers,
and they've always had this sort of informal relationship with designer involvement in
Toptal. They've done a little bit, you know, but it hasn't been an exact, you know, product,
so to speak, or internal model. And so they've expanded, they've evolved. Today,
they're extremely excited to announce the official launch of Toptal Designers. What this means now
is the same experience
that you've had on both sides of the fence,
whether you're someone that's looking for
really awesome designers
or you're a really awesome designer
looking for really awesome opportunities.
This is the place for you,
not only if you're engineers,
but also if you're designers out there as well.
So designers, listen up.
It is time to go check out
toptal.com/designers. That's t-o-p-t-a-l
dot com slash designers, and tell them the Changelog sent you.
All right, we're back with Eran Hammer. And Eran, it's been a while since you've been
on the show, and the last time you were on the show, you were talking about Node's performance
at Walmart on Black Friday.
That was a pretty interesting conversation.
And as a matter of fact,
Andrew Thorpe and I were hosting the show then,
and Jared, you weren't on that show,
so that's a bummer.
But since then, Node was forked,
io.js was created,
the Node.js Foundation was formed,
and ultimately io.js and
Node decided to reconcile. I haven't been keeping up day to day for the past few months on exactly
what's going on there, so if you know anything, feel free to school me. But you did write a post
that had some pretty clear thoughts from you, and just to quote one thing you said: "For the sake of
full disclosure, I'm generally opposed to any foundation." And this was about why you do not support another foundation, and this was probably back in those
drama days. But what's happened since the last time we talked to you around Node that
is interesting to you, that you'd like to talk about here on the show? So I think a bunch of
interesting things have been going on. One is that contribution and participation have really skyrocketed.
I think that the drama part was somewhat necessary
given that there are corporations involved
and legal agreements and copyright and names
and all that stuff and trademarks.
And it took about a year of this path, a little tortured path,
but it took about a year to get to a point where the community could fully own the project
and kind of set course and decide on the things that mattered.
I don't like foundations in general.
I think it's just like a way for people to make a living off corporate money
without really adding value.
I'm not accusing the Node Foundation of any of that.
And right now, most of the work is done by Mikeal Rogers, who is awesome.
And in general, Mikeal gets a blank check from me in terms of trusting him to do the right thing for the project and the community.
So I'm okay with the current staffing.
I don't really care much about the foundation part.
I kind of think it's unnecessary
and I personally don't find any use for it.
I was doing just fine working with the Joyent people.
I had great collaboration with them.
And if now I have to collaborate with someone else,
that's fine.
The more interesting part for me
is the fact that the project
now has significant amount of contribution and is able to move faster.
io.js was a good phase as well, because it kind of let the project mature and grow up
and understand how to run a project
with a tremendous amount of influence
and importance in a responsible way.
When io.js was going on, I didn't really want to touch it,
because it was just crazy.
The amount of changes and the amount of just modification
and add-ons and just noise that was going on
was just unmanageable.
I can't imagine anybody with a day job
that is not full-time working on Node core
was able to keep track of anything that was going on there.
And I think that's kind of what a lot of people felt.
And the people who were using io.js were primarily the people who either just liked the latest
of anything and didn't really care much about it, or they just really needed the new
V8 features, you know, the new ES6 and so on.
And that was not available to them in the 0.10 releases.
So now things are kind of different.
And I think that version 4 represents a significant milestone for the project.
I've been using it for a couple of months now.
I'm really satisfied with it.
I'm using it for Sideway;
it was going to be all Node 4 from the start.
I'm using Node 4 for another project I'm doing as part of my consulting work.
As far as I'm concerned, that's what everybody should be on.
It will take a few more months before some big players will move their environment to Node 4 and kind of come back
and say, yes, you know, we're running it at scale and it's working really well for us.
You know, the kind of, you know, Walmart Black Friday memory leak story, we know there's
more of those in there.
And so we just need somebody to find those first or at least give us the confidence to know
that the code base is sufficiently solid
that even if there are problems,
they're not going to be devastating for you
once you reach scale.
I don't think we're far from that point.
I think we're almost there,
but that's kind of where it is,
and as far as I'm concerned,
I've moved hapi to Node 4. hapi version 10 is no longer
supporting Node 0.10. It still works with it, but we don't run any tests with it, so we are
no longer guaranteeing that it will work. And also, we have said that even within version 10, we're going to start using Node 4 features in there.
So we're going to start using const and let and arrow functions and a bunch of those features that will completely break on 0.10.
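As an illustrative aside (not from the conversation): a minimal sketch of those features, `const`, `let`, and arrow functions, which run natively on Node 4 but are syntax errors on Node 0.10 without a transpiler.

```javascript
'use strict';

// `const`: the binding cannot be reassigned after declaration.
const framework = 'hapi';

// `let` is block-scoped: each loop iteration gets its own `i`,
// so every closure below captures a different variable.
// With `var`, all three closures would share one `i` and return 3.
const fns = [];
for (let i = 0; i < 3; i++) {
  fns.push(() => i); // arrow function: shorter syntax, lexical `this`
}

console.log(framework);           // → hapi
console.log(fns.map((f) => f())); // → [ 0, 1, 2 ]
```

The per-iteration `let` binding is the "proper scoping" win mentioned above; it removes the classic closure-in-a-loop bug that `var` causes.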
So what are some features in Node 4 that have you excited?
I'm mostly excited about just the project using a newer V8.
Honestly, I'm not one of those people who are super excited about all the new language features.
I'm excited about let and const, just because they finally make sense in terms of proper scoping of variables.
But the other features I don't really care about that much.
I'll see how I like them as I use them more.
I probably just want to get access to the latest V8.
The performance improvements are significant.
The amount of bug fixes that are included,
the fact that it's running the same version as Chrome,
which makes it a lot easier for quick testing of things
and kind of having the same performance profile
across the client and server.
And I think those are significant improvements.
But I'm also glad that the project has more people working on it, so the team is more responsive now to issues, and it's kind of more democratic now.
Did you ever have any particular issue with Joyent and the way it kind of helped Node move along?
Not really.
I'm a big fan of Joyent, and
I felt that up
until the point where
the foundation discussion started,
I felt that they did a pretty
good job leading the
project and promoting the
values that were of main concern to me, which was mostly stability and security and performance.
And those things were working well.
I think that there was a lot going on, both internally at Joyent with a new CEO and some
internal changes, as well as around the community with a bunch of new startups focused on Node and
trying to make a living off Node, as well as more of the big players, you know, if it's IBM and Oracle
and others like that
who start showing interest.
And it got to a point where
the status quo was clearly not sustainable.
And I think that at that point,
Joyent, in hindsight,
could have managed that transition better.
But that said, I don't think that, you know,
they were completely unreasonable in the way that they acted.
And ultimately, you know, with their new CEO,
they came to the right conclusion and they did a pretty smooth transition to the new environment and to the foundation.
And at the same time, the foundation was set up purposely to make it really easy to merge with the io.js community, so that was all done very well.
So there you have the typical corporate flexing and kind of trying to get the most out of the situation for your shareholders and your own corporate needs. But ultimately, I think that it took time because it was a dramatic change
and people had to get comfortable with it, especially within their corporate boards.
But I don't really think, you know, I was an insider and I was privy to pretty much everything
that was going on from the very, very beginning.
I mean, I knew about the things going on even before all the other players knew about them.
Just because I kind of was in the middle and everybody was treating me as a confidant.
So I kind of was able to get a big picture a long time ago.
And everybody had a great intention. Everybody was, you know,
approaching it from really
the right motives
and primarily with the project
well-being in mind.
You know, reasonable people can disagree.
So I think that, you know,
the drama played out.
Some of it played out
because, you know, people like drama.
And so it's fun.
But ultimately, I don't really, i've never seen that as an issue and even throughout the process
i've kind of blogged about it and tweet about it it's kind of like you know everybody you know if
you want drama you can enjoy it but otherwise you can ignore this it's just noise and everything is
good and keep using note it's it's still the best platform to use so So, no, I don't have concern.
I think for the most part,
I've never seen, you know,
like companies behave like companies
when it comes to open source.
I've seen people working for companies
sometimes making bad calls
on open source policy.
You know, sometimes your legal team is a little too eager
and they don't want to take risks.
But ultimately, it's about making sure
that the company understands the value,
what they're giving up, what they're gaining.
And for the most part,
participating in open source is a huge asset
for pretty much every company out there.
I guess when it comes to companies as big as Walmart, if for some reason we had the ear of someone inside of a company like that, someone who maybe wants to do more in open source but isn't really sure how to approach it... What are some, I guess, now that you've gone through a couple of different scenarios, what kind of advice would you give to corporations out there on how they should approach open source, and what they should look at towards value back to them and value back to the community?
Open source is like any other skill.
You need to bring experienced people to the table to help you out with it.
Companies that have done a bad job have typically tried to do it on their own without learning from anybody's experience and without using other people's help.
So if a corporation has the resources and is looking to invest seriously in open source, they should go and bring in someone who is an open source expert, whether they are a policymaker, which is the approach Yahoo took.
They brought in someone when I was working there to lead open source policy, and he did a great job setting up a good balance between what the company was kind of afraid of and
what the engineers wanted to do and kind of what the balanced approach would be.
You can do something that's more like, you know, what Walmart ended up doing, maybe not
consciously, but just hiring a few people that brought in a significant amount of open
source experience, whether it's, you know, Ben and Dion or myself or other people to
the organization that can help them,
can kind of hold their hand and say, hey, look, we're going to open source this.
This is why we're doing it.
We know how to do it correctly to gain value.
So it's the same way that if you have a company that has never used Node before or JavaScript before,
you're not going to go and hire, you know,
Java engineers and buy them a JavaScript book
and say, you know, learn this
and let's build everything in JavaScript now.
That would probably be a terrible idea.
What you do is you go and you find people
who are experienced in the area and you hire them and then use them to leverage other people and grow.
So open source is the same way.
It's a really complex ecosystem between the tooling and the community and the legal part and also just managing the logistics of an open source project.
You have to understand the cost involved.
It's not cheap.
And you have to understand the pitfalls.
Open sourcing a project that gets no traction is really bad.
It can actually cost you more than if you did nothing.
So you kind of have to understand those things.
And at this point, there's plenty of experts.
And if you don't have the money to or just don't want to hire someone just to manage open source policy,
go find a really successful open source project and hire that maintainer and ask them to be your guide to do open source.
So there's all these different approaches on how you can navigate it. But it's not just a matter of taking your source code and dumping it on GitHub. That is not open source. That is just, yeah, that is just, you know, show and tell.
Well said, well said. Well, let's take another break. When we get back, I want to dive deep into the topic at hand, really: your thoughts on OAuth and your replacement, your OAuth replacement, Oz.
So let's take a break.
When we come back, we'll kick off that topic.
Guess what, everyone?
We've partnered with Casper, the online retailer of premium mattresses, to give you $50 towards your new mattress.
The mattress industry has inherently forced consumers,
myself included, into paying notoriously high markups,
and Casper has revolutionized the mattress industry
by cutting the cost of dealing with resellers and showrooms,
and they pass those savings directly onto you.
Their mattress is a one-of-a-kind.
It's a new hybrid mattress that combines premium latex foam with memory foam.
And the Casper Experience was designed with you in mind and optimized for sleep.
And this is my favorite part.
It's backed by a 100-night no-hassle return policy with full refund and a 10-year warranty.
And what's even cooler is how they ship this mattress to you.
It comes in a box that couldn't possibly fit a mattress.
And when you open it, the mattress unravels for you to lay down and catch some Zs.
Head to casper.com slash changelog and use the code changelog when you check out to get $50 towards your new mattress.
Enjoy.
All right, we're back with Eran Hammer. And Eran, this call started as a tweet, I guess in a way, right?
As they all do. As they all do.
And I thought I had my notes here perfectly, but I didn't. I went away from my tweet that I had saved from you.
But long story short, you were announcing Oz, and you were saying,
hey, I don't want to give any talks about Oz right now,
but I wouldn't mind coming on a podcast.
And so I chimed in and said, hey.
Well, technically, the changelog did, and me as the changelog, and here we are.
So this is pretty interesting.
So what is happening, I guess, with OAuth 1, 2, and then this road to hell, as you've said?
And what the heck is Oz?
So it's actually interesting, because almost all my cool stuff from the last couple of years came from this Yahoo Postmile project.
And Oz is also a byproduct of that work.
I was working on OAuth and OAuth2.
I think the story about me and the messy divorce with OAuth2 is well known.
And if you don't, there's some highly entertaining blog posts and videos online.
Enjoy.
And the way I looked at it is that
when I stopped working on OAuth 2,
I felt that once I had enough,
I just spent four or five years
on doing that kind of work
and I just couldn't take it anymore.
But also, I felt that the atmosphere wasn't conducive
for a meaningful alternative at the time.
I felt that we tilted too much to the side of convincing people that the security provided by OAuth 2.0 was just good enough.
And it was so much easier to use and so much less developer friction that, you know,
if it's good enough for Google and Facebook and Yahoo and Microsoft, then it must be just good enough.
The problem was that, like I said back then,
OAuth 2.0 is an outline.
It's not really a useful implementation.
If you took OAuth 1.0 and you did a compliant implementation to the spec,
you got pretty good security out of the box.
It's very hard to implement OAuth 1 insecurely in terms of the protocol itself. Yeah, you can always, you know, leak stuff and just do stupid things, but the message flow, the workflow, the structure of the tokens, all that stuff is pretty solid. With OAuth 2, because of all the compromises that were made,
it just became an outline,
which meant that if you are Google or Microsoft,
you can hire the best security experts
and they can write a great implementation
that will be very secure.
But if you're not, then what you have is whatever random stuff you end up understanding from it, and you just have a simple bearer token protocol where, if that token leaks out, then it's game over.
And if you look at the implementation, for example, the vast majority of OAuth 2 implementation today don't expire their tokens.
So you get a token from, I don't want to put anybody on the spot, but I'm sure if you've used OAuth 2.0, you got an OAuth 2.0 token
and you cut and paste it somewhere and you're happy.
And it's been a year now, two years now, and it's still working.
That's pretty bad.
If you think about it, you have this really long lasting credential that has no security attached to it. And if anybody gets hold of it, an employee quits
a company, they take that token with them, and now they have access to all that data. And you
can't even tell that it's them because there's no traceability. There is no binding to the identity of whoever's making the call and so on.
So I kind of look at the environment and I said, that's not for me.
Like I, I'm not going to use it.
And so I started playing with two protocols, Oz and Hawk.
If you are familiar with, like, you know, OAuth 1 terminology, there was the three-legged
and the two-legged, where basically if it's just client server and you're just using the signature
stuff, you're not really doing any of the dance of authorizing it. You're just using it as basically
a replacement for basic auth. That was the two-legged use case. And then the three-legged
was when there is an app,
a server, and a user,
and the user is authorizing third-party access.
So I kind of split those two concerns.
And Hawk is the authentication protocol.
It's basically like basic auth, or I should say digest auth.
It's just a client-server authentication
that's using holder-of-key principles,
a little bit of crypto.
If you look at the code, unlike OAuth 1, it's super simple.
It's basically taking OAuth 1 in terms of the signature
and bringing it to the modern era.
So OAuth 1 is so awful because it was designed to support PHP 4.
And PHP 4 didn't give you access to the raw request URI. So we had to reconstruct
it. This is why we're doing all this encoding and sorting and all that stuff. It's all PHP 4's fault.
And at the time, it was a requirement because PHP 4 was the only available cloud hosting
environment you could buy. And we wanted something that is accessible to everybody, and so that's kind of where OAuth 1 came from. And so I basically said, you know, all the principles around it were solid. They all came from, if you look at the people who developed OAuth 1, some of the best security experts in the world. No exploit known so far against it. So why reinvent something if we can just simplify it?
So that's what I did with Hawk.
And that has been published for a few years now.
It's pretty widely used.
And if you're using Node requests,
you already have a Hawk client available to you.
It has been bundled with requests for a few years now.
It's a very simple protocol,
and it works really well for client-server authentication.
And then what Oz does is basically takes that
and adds the whole third-party authorization on top of it.
Now, in the beginning,
both of these protocols were part of OAuth 2.
So Hawk originally was the MAC token that was supposed to ship with OAuth 2.
And when I quit,
the interest in maintaining that work disappeared
and it just died in committee, as they say.
People just felt that the bearer token
was just good enough.
And then after that,
they kind of decided that the right way to do it is with JSON Web Tokens instead of anything else. And JSON Web Tokens come with their own set of security properties, but I don't find them to be good enough, to be honest,
because they're not bound to the request at all. So I kind of looked around and I said, okay, here's my problem. I'm not going to use OAuth 1 because I'm already over it. It was great in '07, but I need something else. I'm not using OAuth 2 because I'd rather poke my eyes with needles. And so what Oz is: I basically said, you know, I'll take what was good about both of these protocols, the pieces I liked, and I'll throw away everything that's just garbage.
I'll throw away all the extensibility of OAuth 2 that I just don't care about.
I'll throw away all the stuff that is not secure enough, like bearer tokens.
And I'll combine, you know, the best of both worlds, the best of OAuth 1, the best of OAuth 2, and produce something else.
Now, Oz could easily be a fully compatible OAuth 2 implementation.
There's nothing in it that cannot just be an add-on on top of it.
But I kind of felt that would be counterproductive because the OAuth 2 mindset, the culture around it at this point is so
hostile to any meaningful security.
Anything that is a little bit inconvenient, if you have to use anything like, oh my God,
I have to use some client code now to make API calls?
No, that's no longer acceptable.
And so you go to that crowd and you're not really adding any value
because nobody will use it in that context.
So I felt that instead of trying to stay committed
to the OAuth 2 track,
I'm just going to throw it out.
And so when this was part of the original Postmile code, it was basically all OAuth 2. So what is called Oz now was just OAuth 2 with a bunch of add-ons; you know, the self-encrypted ticket with request authenticity and all those things were just add-ons. And what I did is I just kind of threw out all the OAuth 2 compliant pieces and gave it a new name.
And that thing sat there for a while.
I haven't done much work on it for about two years now.
Most of this code has been written shortly
after I left Yahoo.
And the reason why I didn't work on it much
because I didn't really have any use for it.
And I don't like working in a vacuum
where I'm developing solutions
for unknown, soon-to-be problems.
And now that I have my startup,
I needed something again.
And that's kind of why this work kind of got resuscitated.
And I decided to kind of go ahead and just finish it and properly document it and all that.
So that's kind of why it became like news a few weeks ago.
But in practice, this was kind of done a long time ago.
There's parts of the project that, like you said, go back a couple of years. So was it just perfect timing, I guess, with your departure from Walmart, and maybe some sponsored time from your current company, your startup, that this became your forefront attention? Or is this just good timing for you, like this is a good time to solve this problem?
It was mostly because I needed something. So I said, okay, I'm building this app and I need a security protocol. I need exactly what OAuth 1 and OAuth 2 provide, and I don't want to use either one of them. So now what? And it's actually kind of sad that, you know, OAuth 1 was in '07, so it's been eight years now. Eight years should be, you know, like if you look at most other protocols,
of sad that uh you know in the last you know oaf one was in 07 so it's been eight years now
eight years should be you know like if you look at most other protocols,
look at JavaScript,
look at HTML,
look at pretty much
every technology
over eight years.
Yeah.
People are kind of like,
you know,
eager to change
and fix and grow
and improve.
And this work
has kind of been stale
for a long time.
And so I needed something
and I kind of looked
at what are my options
and I said, okay, I started this thing I kind of looked at what are my options and I said, okay.
I started this thing a couple of years ago.
I liked using it when it was in its previous incarnation and I just decided to go ahead
and finish it.
And to be honest, I wrote it for me.
I do a lot of hapi work, and a lot of the work I'm doing for hapi is not for me.
You know, I'm just trying to help other people and kind of grow a community.
With Hawk and Oz, at this point, I mostly care about my use cases.
And also, it's a very tricky project to talk about and answer questions about, because you're kind of making security
recommendation guarantees, which I don't want to do because it's just the wrong thing to do to
advise people on security on unknown projects that I don't understand. So it's a really interesting
project right now, in that there's a bunch of code and stuff sitting there, and when people open issues and ask me really deep questions about how I would recommend them using it, I kind of go, well, sorry, can't help you. You really kind of have to read the code and figure it out on your own, because these are all pretty complex security issues, and the tool is really designed for people who really understand, you know, OAuth and these principles well and just want to use something cleaner with a different feature set.
Maybe speak to the security aspect a little bit, because, like you said, you know, the nice thing about a committee or a working group, at least you would think,
is that they're a group of experts working together to come to some sort of solution.
Now, in practice, that sometimes is successful
and sometimes fails miserably, but it's a group of experts.
And I think my first thought when I saw your Oz announcement
was Eran, oh yeah, Eran Hammer.
He does hapi.js,
and he's the Walmart guy that we had on the show.
Wow, he knows security?
When I see an announcement of like, oh, I'm replacing OAuth 2,
it's like, who's replacing OAuth 2?
There's this question of authority and expert, expertness,
I don't know the word.
But maybe just give a little bit of background.
After reading your code a little bit and reading your READMEs, you know, I was convinced that, okay, he actually knows what he's talking about. But do you have to give authority sometimes, or do you have people questioning your ability to create security protocols?
I mean, if you know my background, you know, and if you look at the OAuth specs, my name is all over them.
Well, there was the example you had earlier, I don't mean to interrupt, when you were at Walmart with the infosec people, you know, badgering you about the gist that you were posting. You're like, well, you know who I am. So you had to throw around kind of like your authority there too.
Yeah, I do it once in a while when
I absolutely have to.
But basically,
go on Wikipedia
and look up OAuth and
come back.
But
I'm not a security
expert. I mean, I'm very well versed in security, and I am an OAuth expert after spending a few years serving my time.
And what is really, really key here is that, one, I'm not trying to invent anything new. If you look at what this does from a protocol perspective, it's exactly the same as what OAuth and OAuth 2 are doing.
And if you look at the implementation, that's really where the scrutiny should be focused on, and nobody does that.
And so one of the complaints I always had was about people saying, oh, is this an OAuth compliant implementation?
And I said, well, you look at npm and you find an OAuth module and it says it's compliant, and you try it out and it's working well.
That doesn't make it secure because you have to look at the source code and understand how it's operating and where it's storing its information and how it's generating its randomness.
And is it actually
verifying the nonce or not?
And if you look at OAuth 2, it requires a whole bunch of server validation that if you
don't perform, the protocol will still work perfectly well.
It's not going to fail you.
And so it's very misleading to say that a spec is secure.
Implementation can be secure.
Specs are not secure.
You know, they're just paper.
It's just words.
And so that has been kind of my gripe against most of the OAuth 2 and even some of the OAuth 1 crowd is that, you know, people are saying, I'm going to pick OAuth 2 and that makes my system secure.
I'm like, no, it doesn't.
And what I want people to do is, well, there's a little bit of protocol there. You can look at Oz and Hawk and scrutinize the protocol.
If you know what you're doing,
it's very easy to do.
And I did have a bunch of the same top-level security experts that have looked at OAuth look at Oz and Hawk and give it their blessing.
You know, there's a lot of liability involved in security.
So I'm not going to put their names out and say this has been approved by, you know,
X and Y.
But I feel very confident that the protocol itself is solid and it's basically identical to OAuth.
You know, parameter names are changed and, you know, some of those things are different, but fundamentally it's exactly the same protocol.
What's much more interesting and important is the code I wrote and how it's implemented.
And I'll give you, you know, one concrete example.
OAuth 2 was created specifically to help Yahoo and Google and Microsoft scale their OAuth operations.
That was the main concern they had.
The secondary goal was to make it easier for developers to use, but the primary goal for
them is to scale it.
And what they wanted to do was to make the tokens self-encoded so that when they get
a token, they don't have to do a database lookup to find out if the token is still valid. What they want to do is to decode
the token using
some kind of crypto, and then
inside of it, they'll find the information they needed,
and that was good enough. The thing is that once
you have this kind of design, it's a very
highly scalable design because there's no data center.
You don't have to synchronize
your storage across multiple
locations and all that.
So it's great.
But now you have credentials that don't expire.
Because if the credential is self-encoded, if the credential itself includes what you
need in order to use it, then there's no lookup, then you can never invalidate it.
You can't revoke it.
And so what they wanted to do is they wanted to issue short-lived credentials.
In Yahoo's case, I think it was an hour. You can use that for up to an hour, but after an hour, you have to come back and get a new one. That's kind of where the OAuth 2 refresh token came in. So if you're using OAuth 2 and you're not using refresh tokens, you're actually doing a really big disservice to yourself, because you're issuing these long-lasting credentials.
Now, if you are doing a database lookup for every request, then, well, maybe you should
reconsider that if you have any kind of scale for your authentication.
And every API call now has a database lookup just for the token, which is challenging.
So if you kind of think about it, now you need to have some kind of self-expiring encrypted token.
So the JWT work is doing some of that.
But then there's other layers that are missing.
And I can talk about this for hours.
But basically, what I've done is I said, okay, I'm going to produce a token that is self-encrypted, that expires, that can do password rotation, that can do all those things, that is going to give you both privacy and authenticity. And I'll just do it in a way that is going to be solid. So I talked to a bunch of my crypto friends, and I sat with them and I said, okay, how do I do it in the absolutely right way? What is the right algorithm to use, the right crypto to use, how do you generate the keys, and all that stuff. And then I wrote a module called Iron, and Iron basically does that: it takes a JSON object and turns it into an opaque string that is fortified. And if you don't understand how to do that, then you can't really properly use OAuth 2. And that's kind of my whole point: to properly use OAuth 2, you have to be a pretty advanced developer and understand security and crypto really well, which most people don't.
So what Oz is trying to do is take all these great engineering principles and implementation principles and just put them together and say, you know what, let's forget about this
interop and all this standard nonsense because nobody really cares.
When was the last time you were trying to reuse code across multiple providers?
That was the grand vision of like, you know,
2005, 2006, when we were trying to kind of like,
you know, make API standards across the web
and open up the social web walls.
And at this point, nobody cares about this anymore.
You know, that's all dead.
And so since we don't care about, you know,
making sure that the Twitter API
and the Facebook API are compatible to each other,
and because there's only two of them now and we don't care (you know, when there was like a hundred of them, it was painful), why are we bothering with interop? So if you throw away interop, now you can do whatever you want. And what I wanted was a great solution for a JavaScript-based environment that will work well on the server, well on the client, and get me all the security I want.
And what I want people to do
is to take the code I wrote
and scrutinize that,
go line by line
and find where I'm doing something stupid
versus, you know,
here's the protocol documentation
and you can kind of say,
oh, here's the flow
and this is where, you know,
you send the parameter in.
That's not really interesting.
It's kind of needed
just to understand what I'm doing,
but it's not really helping you understand if this is secure or not.
So I think that's a key goal of this work,
is that I'm trying to shift the focus from an academic exercise
of writing a security specification to a very practical exercise of writing a piece of code
that you can reason about in absolute terms
because it's a piece of code.
It does one thing.
And then you can find out if that is secure
as an end product versus a theoretical secure protocol.
So you got three modules.
You have Iron, which you said was the cryptographic piece,
which basically just takes a JSON object
and does, I'm assuming it's like symmetric encryption on it.
Yep.
Just encodes that or encrypts that thing.
Then you have Hawk, which is the authentication protocol or scheme, as you call it.
And then OZ is kind of the authorization layer.
Am I breaking those three out correctly?
Yep, exactly right.
Okay.
So when we're talking about Hawk, one of the things that you say in the introduction to Hawk as a primary design goal is that it simplifies and improves HTTP auth for services that are unwilling or unable to deploy TLS for all resources.
I stopped there for a second and thought, why is this necessary?
Can't we just be willing and able to deploy TLS and use basic auth?
Would that require not having this library? Is that just a perfect world looking at it
and in the real world that's not the case?
It's part of the answer.
There is a really
important principle in any secure system,
and that's to have separation of concerns
and layering of defenses,
also known as don't put all your eggs in one basket.
And the reality is that
even if you are deploying TLS everywhere,
you don't have control over your clients.
I mean, the TLS protocol doesn't ensure that the client is doing the right thing, right?
The server can make sure that the channel is encrypted.
It doesn't know if the client is leaking stuff.
It has no way of knowing.
It doesn't know if the client is properly validating the server certificates, which in most cases it doesn't. Yeah, I think Rails still doesn't validate server certificates by default, if I'm correct. I know with Node, I had to, you know, scream and yell for Node 0.10 to change the default to throw on an invalid server cert instead of ignoring it by default.
And so you just have to assume that the developer will do stupid things, because code does stupid things a lot of the time.
And is that good enough for you?
So if you think about it, in a perfect world where the credential is guaranteed to be sent over TLS to the right server and not leak anywhere,
then yes, bearer tokens are just fine.
But it's never a perfect world.
And so you have to ask yourself, should I do anything else?
And the reality is that if you're sending a bearer token to the wrong server, right, either it's a typo or you fail to check your TLS certificate, you know, you're on an airport Wi-Fi and someone is basically giving you bad certs.
And you have code that ignores bad certs, because that's what most developers do. Because it's like, hey, look, it wasn't working, and I put this ignore and now it's working.
Awesome.
And you go on Stack Overflow,
see how many people answer questions about bad certs
by saying, oh, just add this flag of ignore.
Problem solved.
And if that's the case,
then whatever app is not validating properly is now fully exposed, because they don't know who they're talking to.
So you don't actually get TLS protection.
So I think that's a really important point to make is that it's just not enough.
And there's a lot of different ways where you can leak those credentials.
That's one thing. The other thing is that without some kind of crypto, it's very hard to know that the
request came from the right person and it's meant for the right server.
And I'm going to try to keep this as simple as I can.
But basically, if you think about a simple scenario,
let's say there's Facebook and then I have an app.
I have two apps that use Facebook to log in into them.
Because they both use the Facebook token as the authentication key.
Because what they do is you go to Facebook,
you come back to the app with a token, and then the app goes back to Facebook and says, who does this token belong to? And Facebook says, oh, it's Steve. Great, now we can log Steve in. So that token is really powerful. What happens if I trick you into logging into my app using Facebook? Then I take that token that I got for you from Facebook, and I now go back and log into another app that's using Facebook to log in.
I can now log in as you to the other app.
I can't really attack you on Facebook itself.
That doesn't work. But I can, because those tokens are not bound to anything. You know, if you remember, in OAuth 1 everything had to be signed by both the client secret and the token secret. In OAuth 2, because there are no signatures, there's nothing that binds the tokens to whoever's making the request. So I can now trick another app into
thinking that I'm you using your Facebook ticket that you gave me perfectly legitimately.
And so once you start removing these layers, you have all these outcomes.
And, for example, Facebook has a feature to solve that.
When you make the Who Am I API call, Facebook gives you an option to include with it
a hash of, I think, your client ID or something.
And then they'll check for you
if the ticket was issued for you.
And if it's not, they'll say,
oh, sorry, you're using a token
that wasn't really meant for you.
Someone is tricking you.
But it's an optional argument.
And if you look, for example, at the Node Express Passport Facebook implementation, at least the last time I looked, that feature is off by default.
So you get all these details.
And look, everything I just said, I'm sure most of the audience has never heard about it and not aware of it and doesn't even know that Facebook has this feature that allows you to protect your app from wrong logins.
But the whole point is that they shouldn't need to.
And if the protocol is written correctly,
then the protocol might be a little more difficult to use,
but at least it does the work for you
and it gives you the protection that you need.
It doesn't require you to go and invent your own extension.
So, for example, that extension is a Facebook invention.
They added it to the protocol because they had some attacks on, I don't know, Canvas apps or whatever they're calling them.
People were able to trick one and kind of log into another.
And I don't remember the specifics of the exploit that someone found,
but that's kind of what they've done to solve it.
And there was no standard for that.
So now, you know, a secure Facebook implementation
is no longer compliant
with any other OAuth 2.0 implementation
because they had to add their own parameter to the mix.
So that's kind of the reality of it,
which is why, you know, all this crypto stuff, it really matters.
The other thing is being able to invalidate these credentials and being able to validate that they haven't expired.
All those requirements are implementation details. So, in a perfect world, to your question, if you control your client code,
let's say you're running your own private client server implementation.
So you have full control of your client.
You know what you're doing.
It will never run on hostile networks.
You're fully checking the credentials.
No one will ever see those credentials outside of you.
You know, like if you put all these constraints on it,
you don't actually have third-party apps accessing your API.
It's just your own software.
Then yes, basic auth over TLS is just fine.
All right.
That's an excellent answer.
That's like, if you disconnect your server from the network,
it's completely secure.
I mean, there was a really interesting debate going on
on the Express session middleware a few months ago, where Express uses an HMAC to sign every session ID
so that it cannot be messed with.
And all the people who play Crypto Expert on the internet
showed up and basically said that if you use
a properly randomized session
ID, it's as secure, and you don't need to do any crypto to it. And that kind of brought up all the same arguments: that yes, in theory, an extremely well-randomized, impossible-to-guess session identifier doesn't need to be hashed or given any kind of crypto protection.
But that's not where the story ends.
And there are so many ways in which that can fail.
You use the wrong crypto function.
You use Math.random instead of a properly secure crypto generator.
Or you just don't know, and somewhere you fudge in a different identifier because you like to have them sequential. Or it comes from a database
and the database can be hacked and the database ID generator can be
changed to be non-random. There's all these
layers. So at the end of the day, when it comes to security, it's always, you know, eggs and baskets. Those are the two words you have to ask yourself about.
Now, I think that's a strong point, and I think the layering is a compelling argument for why you'd want to use Hawk even in the scenario described. We're going to stop here for our final break from another one of our amazing sponsors, and when we get back, we're going to close up this conversation with more on Oz. We're going to have Eran describe the protocol a little bit and maybe compare and contrast specifics with OAuth 2. So we'll be
right back.
Imgix is a real-time image processing proxy and CDN, and let me tell you, this is
way more than ImageMagick running on EC2. This is way better. It's everything your frontend developers have dreamt of. Output to PNG, JPEG, GIF, JPEG 2000, and
several other formats. And if you're like me, you've ever argued with your boss or
a teammate about serving retina images to non-retina devices,
you'll appreciate their open-source, dependency-free JavaScript library that allows you to easily use the imgix API to make your images responsive to any device.
Now, all of this takes a platform, and the imgix platform is built on three core values:
Flexibility and quality, performance, and affordability.
When it comes to flexibility and quality,
imgix has over 90 URL parameters that you can mix and match
to provide an unlimited number of transformations for your images.
And they take quality very seriously.
And because of their commitment to quality, several top 1000 websites in the world trust
them to serve their images.
Now, when it comes to performance, imgix operates out of data centers filled with top-of-the-line Mac Pros and Mac Minis, and they're set up for a completely streaming solution.
This means your images never hit the disk. Images are
served by the best SSD-based CDN, for delivery anywhere around the world, extremely fast. And while we're talking about speed, almost all the image
processing happens on GPUs. This means transformations are super fast when
compared to competing virtualized environments. And lastly, it's all about affordability.
Everyone wants to save a buck.
That's how the world works.
Because imgix processes close to a billion, with a B, images per day, they're able to make certain optimizations at scale
and pass those savings on to you.
To learn more about imgix and what they're all about,
head to imgix.com slash changelog.
Once again, imgix.com slash changelog.
And tell them Adam from the changelog sent you.
All right, we are back with Eran Hammer, discussing Oz, his web authorization protocol based on industry best standards.
Eran, you said Oz is not a spec,
it's an implementation.
You don't really care if it's ported to other environments
because what you want is an awesome JavaScript implementation.
Tell us a little bit more about Oz,
and specifically, from my perspective as an OAuth user,
I've never written a provider.
I've dealt with it as an application developer quite often. Reading through it, on the surface,
it kind of does look like OAuth 2.
So I know you touched on it at a surface level during the intro, but maybe give us a little bit of a deeper dive into Oz itself. We've talked about Hawk and Iron, so compare and contrast it with OAuth 2, perhaps from the perspective of a user.
Sure. So
OAuth 2 is focused
on two main pieces.
One is the authorization flow,
which is how do you go about redirecting the user
from one place to another to authorize
and kind of move and pass along the necessary credentials,
whether the authorization code or the grant,
or I honestly don't remember all the terminology I made up for OAuth 2 at this point.
And that's one part of what it's doing.
And the other part is once you have obtained a token,
is how do you use that token to make authenticated requests?
So those are the two pieces.
And in OAuth 1, they're kind of like all mushed together into one flow.
And in OAuth 2, we kind of separated that.
And it ended up being two specs.
One was the authorization protocol, and the other one was the bearer authentication scheme.
And then the bearer authentication scheme was enhanced later on to use JWT tokens, which are JSON Web Tokens.
It's a protocol for taking a JSON object, on principles very similar to SAML, and creating some kind of credential that is self-describing, versus just a random bearer token string that you use.
So that's kind of what OAuth 2 provides.
If you look at the three protocols that I have,
I think the parallel would be that Iron is in a way
similar to JWT.
So it's basically a token format.
The main difference is that iron tokens are opaque to the client,
but they are meaningful to the server.
It's basically taking a JSON object, stringifying it, encrypting it,
and then calculating a hash on top of that,
and then also baking into the structure additional features for expiration and password rotation,
which is really important for proper crypto hygiene.
And so that's kind of what that gives you.
It gives you a token format that you can use.
What Hawk does is take the part that is completely missing
from OAuth 2,
which is an authentication scheme that is using some kind of crypto.
Similar to how OAuth 1 was written,
it's basically requiring you to sign every request.
So every token comes with a token and a secret.
You use the secret to calculate a hash, and you send the hash with the request. So in those terms, it's very simple. It's basically competing with the OAuth bearer token scheme, only it adds a layer of cryptography and extra security on top. And then what Oz does is, Oz is more of an implementation component
versus a protocol component.
It's basically taking the elements from OAuth 2.
So if you think about OAuth 2,
it's basically in the traditional OAuth 2 flow
where you go and you send a user to a page to authorize.
They come back with the authorization code.
Then you exchange the authorization code for a ticket, for a token.
What Oz does is basically says, you know what?
We're not going to tell you how to do the flow itself.
Like, you know, we're not going to tell you how to redirect.
At the end of the day, that's how you're implementing your app. But we are going to introduce the basic building blocks.
And so the first building block is that the application itself needs to authenticate and
obtain its own HAWC credentials. So the same
with OAuth where you pre-register your client with the client secret and all
that, you do the same thing with Oz. You establish that relationship out of band, and once you do that, you never send your client credentials directly, because basically everything is always Hawk.
So if you think about OAuth 2, the first step is you're using basically either basic auth or some kind of form encoded credentials to get the initial interaction with the server.
In Oz, everything is hawk.
So it's all secure from the very beginning.
And what happens is that the only thing you can do with your Hawk credentials is exchange them for Oz credentials.
Oz credentials are called tickets.
And basically, instead of calling it a token, it's called a ticket.
You need a ticket to get in.
And a ticket can be provisioned for either the app or the user.
So you have two kinds of tickets.
The ticket itself is just an iron object.
So that's kind of where it's using that piece.
The flow is very similar. The user is being told to go to some server page to authorize access.
They go there, and when they authorize access,
they come back to the app with an RSVP.
And that RSVP is basically an authorization code.
It's a smart authorization code, so it has some stuff encoded in it.
And then they go back to the server,
and the server can take that RSVP and issue you your ticket.
It's exactly the same as the traditional OAuth 2.0 flow.
The only difference is that Oz gives you APIs to build it and doesn't force you to do any kind of redirection or which query parameters should I stick the RSVP in.
None of that stuff is really interesting.
That's up to you.
You can implement any way you want.
For example, one implementation I've done
doesn't even use those flows.
It's just using cookies.
So what it's doing is you go to a login page
and you log in with Twitter.
And when you're done,
you end up with a valid session cookie on your server.
And then what you do is you make a call and say,
hey, can you exchange this session, this cookie basically, and issue me a ticket that has the
same permissions. And internally, it's using all these
elements in order to do that. And you can see that all
in, you know, you can see how it works. There is the original
PostMile project that I was working on at Yahoo
is using a slightly older version of
Oz right now, but it's all the same principle.
And you can see exactly how the flow works there in terms of you come to the website,
you log in, and then those credentials are sent to the one-page app.
And then from there, you're just doing Oz authentication.
Now, Oz authentication is basically just Hawk authentication with two extra parameters, which gives you built-in support for delegating access. So you can have one app delegating access to another app if they're allowed to. That's already built in, and it also gives you some support for scoping, so you can scope APIs, and it's part of the ticket.
So there's all these extra features that it gives you out of the box
as part of the solution.
So that's kind of what it does.
In practice, you can very easily adapt this protocol to be exactly OAuth 2. You can use, you know, Iron, and even use Oz tickets as OAuth tokens. That will work just fine
as long as you properly sign the request. You can use
Hawk authentication as just a valid
OAuth 2.0 token type.
And in fact, it used to be that.
If you Google the OAuth 2 MAC authentication scheme,
you'll find a very old draft that I wrote
that was basically what Hawk is now
before I quit the working group.
So these are all very old principles.
I think the big change here is that I'm shifting focus from protocol to code.
And I'm focusing on this implementation instead of trying to create an ecosystem around it.
So has that been successful from the perspective of code review and criticism?
Have you drawn to your code base the eyes that you have hoped for?
Yes, for Hawk and for Iron.
Those two have been thoroughly reviewed and scrutinized.
Less for Oz, mostly because up until a couple weeks ago, it wasn't even documented.
So it wasn't at all accessible.
I think that now that it is, it might get some more scrutiny.
People have been using it.
I'm always surprised when I'm getting email from someone saying,
you know, we've been using this thing in production for the last year and a half,
and we have a question about something.
And I was like, oh, okay, that's interesting.
And so it has been getting,
you know, some traction. It's nowhere near the numbers, right? It's like, it wouldn't be even
like, you know, a full percentage, you know, in comparison to where OAuth 2 is. It's completely
insignificant. But it's there. And I think that if you look at the pieces that Oz itself brings, if you look at the areas that they focus on,
then Oz itself doesn't really change as much, because it's
just an implementation detail on top of the other two,
and the flow itself that it's using is basically OAuth2.
If you look at the code,
you can basically see it.
It's very little code.
That's always a nice thing to see for any project focused on security: a small surface area.
What does success look like for Oz?
Like, what's an end goal?
What would you look back and smile and say, I did it?
It works for me.
Wow.
So you're already there.
No, not quite.
You know, my startup needs to be successful.
And then I can say, hey, look, I have millions of users using my product and it's all powered by this.
I see.
To me, success...
There's two ways of looking at it.
There's the success of, I wrote a piece of software and it's doing what it's meant to do.
No known exploits, you know, nobody getting hacked because they're using it.
Whoever is using it is working well for them and it's primarily working well for me because I'm putting the effort into it and I need it.
And so it needs to provide me with a good solution.
At some point, Sideway will have a public API. When that happens, you know, the test is going to be whether developers are willing to learn a new kind of protocol to work with the API or not. Right? That's a big one. If I put out a public API, people might look at it like, oh, why is that not using OAuth 2? I don't want to touch this. It's another one of those stupid, custom-made security things. I'm not dealing with it. So that will be another test in the future. And just whoever is stumbling
upon this and playing with it, and if they like it and decide to participate, that's great.
Now, on the open source front, what I really want to get out of it is to kind of give people a reference implementation that they can then use for whatever they want.
It's super easy to take the pieces as modules or just as code to cut and paste and write your own awesome OAuth 2.0 implementation.
So if you're looking at OAuth 2.0 and you're saying, I want all these features that it was designed for,
I want these really strong refresh token expirations,
and I want to have all these self-encoded tokens,
and all these features that you want,
and you can use the building blocks
that these three modules provide,
or just cut and paste code from them, that's great.
It can make your OAuth implementation awesome
even if you're not a world expert on it
because you can see how it's done
and you can either reuse it or imitate it.
So the focus really is not on getting people
to stop using OAuth 2 and OAuth 1 and just using Oz from now on.
It's more of saying, hey, look, here's a bunch of code written by someone who hopefully knows what he's doing.
You know, using all these same principles.
And you can use it, you know, to learn.
You can use it to imitate.
You can reuse it as is.
And I think that's a more interesting aspect.
Because, if you missed it by now,
my attitude here is that
there is very little value in a standard in this space.
There's a lot of value in a thoroughly tested
and proven piece of code.
I mean, how many people have read the TLS spec
and can understand how TLS works?
Almost nobody.
What you do is you use an implementation.
And so my political agenda here
is really to kind of shift the focus
from writing security specs
to writing fantastic implementations that provide you actual security
where we can reason about the implementation and fix bugs in it
versus the debate of what to call the parameter
where we're sending it back and forth.
More doing.
That's kind of like that commercial. Less talk, more doing. I'm trying to place the commercial right now. It's like a Home Depot commercial.
I think it's Home Depot, yeah. It's like, more doing, you know? It's like, come on, man, let's just make this happen.
Well, in light of that, I was just going to say, can you give us the status as far as version numbers? Are these 1.0? Are they done? Are there roadmaps?
What's kind of the status of all these projects?
I know they're kind of old as far as when you first started working on them.
Yeah, that's the awesome thing.
They've all been there for a long time, and they haven't changed.
I'm sure there will be versions coming. Iron mostly needs better documentation, because I think only like 10% of the features are actually documented in the example in the readme.
The rest of them are not documented.
It didn't stop people from using them in some really smart ways, because Iron is actually bundled with hapi.
So, for example, all the hapi secure cookies
are using Iron inside.
And so that has already been widely deployed.
I mean, you can look at the NPM download numbers, right?
The Hawk numbers are misleading
because it's basically getting Request's numbers,
because it's shipping with Request.
And so it's giving a distorted picture of how many people are using it, because everyone who's using Request has a piece of Hawk code in their application.
But Iron is pretty heavily used right now.
It hasn't changed in a long time.
There is no reason to change it.
If you want stronger crypto algorithm, you just configure it with stronger crypto algorithm,
even though it ships with pretty secure settings out of the box.
Hawk has been also very stable. We had no protocol changes in Hawk.
Hawk is the one place where there is some interoperability.
And actually, if you look at the ports listed on the project,
it's kind of incredible.
It's already ported to like nine or 10 different platforms.
I never talked about the project.
I never blogged about it.
I never gave talks about it,
except for when I gave the RealtimeConf talk
about OAuth 2.
And somehow people found that module and adapted it and used it and ported it, which is kind of awesome.
Mozilla actually used Hawk as one of the security components when they were doing identity for BrowserID.
They used that for some of their security. I think they still do. I don't know how active that project is, but last I knew, they were using it. And so these two pieces are solid. They're very simple,
they have great browser support at this point and they're widely used.
The only piece that's kind of new
in terms of outside attention is Oz.
And Oz is 1.0.
And my guess is that
if it's going to have breaking changes,
it's going to be breaking changes in the Node API, not really in how it works, and not in the internal structure of the tickets.
So it's pretty stable now.
I guess now
it's about the time we wrap up the show.
Eran,
I think we got a couple closing questions we'd like to ask
our guests. You've answered some of them
in the past, so we won't ask you
the hero question
But a good staple would be if you can help the listeners who've been listening to the show know how they can step into Iron, step into Oz. How can they support these projects, or even Hawk? What are some of the needs that these projects have that the open source community can come in and step in and help out with?
I mean, I'm really looking for people to use it and play with it.
And it would be great if people who are experts can come in and look at it and say, whether publicly or privately, that they looked at it and it looks good.
Iron and Hawk don't really need much at this point.
They're used and they're pretty stable.
Only occasionally someone will post a question.
But I mean, those two, as far as I'm concerned,
are kind of finished.
On the Oz side, that's going to get more interesting.
So if people are building new applications and they're kind of trying to decide
what to use, OAuth 1, OAuth 2, or, you know, running with their own thing, take a look at it and see if it works for you.
And if it does, kind of join the conversation.
The caveat is that, because it's a security protocol, it's very hard for me to help people with their own implementation of it, or how they're going to use it, because it basically amounts to giving them security advice, which is something that I don't do on principle. But for people who feel proficient in that space: if you look at OAuth 2 and you say, I feel confident that I can go and write my own implementation, then you would be the right person to use Oz
and the right person to interact with me on the project.
And one of the directions that I want to take the project
is to have a really good story about mobile apps.
That's where my next need is.
I think OAuth 2 does a terrible job with native clients.
And nobody really has a good story about native clients. It's all kind of security theater, you know, like encrypting secrets inside the client, and people extracting them and posting them on the web, and all this nonsense. So I want to have a better story there. That's going to be the next area of focus in Oz: kind of getting the mobile experience figured out, how to use the
authorization page with two-factor auth and all those things. So it's kind of more of a usability
perspective of the space implemented through a specific implementation.
Cool. And our last question, I think this one, it kind of depends.
It's been a while since we've talked to you, so it kind of depends on how you can answer this.
I imagine you're still in the same areas of your interest, but what's something interesting out there that if you had more time or you wish you had more time to play with, like what's on your open source radar?
I wish I had time to do more Node core work.
There are two areas in particular that I find to be absolutely disgusting in Node.
One is domains.
And the other one is the HTTP implementation.
And I really wish that I had the time to go and kind of like take over one of those areas and like rewrite them and submit it back to the core team.
I think those are two areas where it's going to be interesting.
I think more of a meta area right now is how the Node community is going to adopt all the new features that are available to them. That's a really interesting question as we're migrating people from Node 0.10 to 4:
you know, how to keep the module ecosystem, you know,
going without kind of like alienating half the community because they can't upgrade just yet.
So I think that's an interesting one to solve.
And then once you have that,
like, you know, everybody will have to adapt their style guide and coding convention and everything
to use all these new features. So that's going to be, that'll be like presenting a whole new set of
challenges, especially when you're in an established community that does follow strict guidelines,
like hapi does.
For example, we have an open question right now within the community: which ES6 features are we allowing people to use in a hapi module? Because we do have a style guide and we kind of require everybody to follow it. So do we allow people to use const and symbols and let and arrow functions and promises and so on?
Because we want to make sure that the code remains readable
by the entire community around the project.
We don't want to have one module that, you know,
is using features that nobody understands yet.
And so nobody can maintain it now,
especially if it's a dependency within hapi core.
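For readers following along, the ES6 features under discussion look like this; nothing hapi-specific, just the syntax in question:

```javascript
// A sampler of the ES6 features under debate for a hapi-style guide.
const name = 'hapi';                    // const: block-scoped, no reassignment
let count = 0;                          // let: block-scoped, reassignable
count += 1;
const id = Symbol('internal');          // symbols: unique, collision-free keys
const greet = (who) => `hello ${who}`;  // arrow function + template literal

// Promises instead of callbacks:
const delayed = Promise.resolve(greet(name));
delayed.then((msg) => console.log(msg)); // logs "hello hapi" asynchronously
```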
So those are, I think,
the most interesting areas going on right now.
And if I had more time,
I would definitely be diving more into Node core.
To have more time would be just awesome.
Everybody wants more time, right?
Well, Eran, I want to thank you for joining us for such a lengthy conversation about Oz, and, more importantly, for your passion for solving these problems, and for being enough of a leader to lead us there, but also sharing it back through open source. You're such an inspiration for those in the community to aspire to be like and to lead like. So thank you for coming on the show.
I also want to thank our loyal listeners for listening to the show. Without you, the show wouldn't be possible.
And also to our members and sponsors for sponsoring the show.
The sponsors for this show actually were Codeship,
Toptal, Casper the bed maker,
which was interesting for us as a sponsor,
and also Imgix.
And next week we're talking to Matthew Holt. As we learned in the conversation with Ilya, we can shorten HTTP/2 to H2. We're talking about his H2 web server called Caddy, so stick around for that. And at this time, guys, let's say goodbye.
Goodbye.
Bye.