The Changelog: Software Development, Open Source - OAuth, "It's complicated." (Interview)
Episode Date: August 23, 2021
Today we're joined by Aaron Parecki, co-founder of IndieWebCamp and maintainer of OAuth.net, for a deep dive on the state of OAuth 2.0 and what's next in OAuth 2.1. We cover the complications of OAuth, RFCs like Proof Key for Code Exchange, also known as PKCE, OAuth for browser-based apps, and next generation specs like the Grant Negotiation and Authorization Protocol, also known as GNAP. The conversation begins with how Aaron experiments with the IndieWeb as a showcase of what's possible.
Transcript
What's up? Welcome back. I'm Adam Stachowiak, and you are listening to The Changelog.
On this show, Jared and I talk with the hackers, leaders, and the innovators from all areas of the software world.
We face our imposter syndrome, so you don't have to.
Today, we're joined by Aaron Parecki, co-founder of IndieWebCamp and maintainer of OAuth.net, for a deep dive on the state of OAuth 2.0 and what's next in OAuth 2.1. We cover the complications of OAuth,
RFCs like Proof Key for Code Exchange,
also known as PKCE (pronounced "pixie"),
OAuth for browser-based apps,
and next-generation specs like the Grant Negotiation
and Authorization Protocol, also known as GNAP.
But the conversation begins with how Aaron experiments
with the IndieWeb as a showcase of what's possible.
Big thanks to our partners,
Linode, Fastly, and LaunchDarkly.
We love Linode.
They keep it fast and simple.
Get $100 in credit at linode.com slash changelog.
Our bandwidth is provided by Fastly.
Learn more at fastly.com.
And get your feature flags powered by LaunchDarkly.
Get a demo at launchdarkly.com.
This episode is brought to you by Gitpod.
Gitpod lets you spin up fresh, ephemeral, automated dev environments
in the cloud in seconds.
And I'm here with Johannes Landgraf, co-founder of Gitpod.
Johannes, you recently opened up your free tier to every developer
with a GitLab, GitHub, or Bitbucket account.
What are your goals with that?
Thanks, Adam.
As you know, everything we do at Gitpod
centers around eliminating friction
from the workflow of developers.
We work towards a future
where ephemeral cloud-based development environments
are the standard in modern engineering teams.
Just think about it.
It's 2021 and we use automation everywhere.
We automate infrastructure,
CI/CD build pipelines,
and even writing code.
The only thing we have not automated are developer environments.
They are still brittle, tied to local machines and a constant source of friction during onboarding
and ongoing development.
With Gitpod, this stops.
Our free plan gives devs access to cloud-based developer environments for 50 hours per month.
Companies such as Google, Facebook, and most recently GitHub have internally
built solutions and moved software development to the cloud. I know I'm biased, but I can fully
relate. Once you experience the productivity boost and peace of mind that automation offers,
you never want to go back. Gitpod is open source and with our free tier, we want to make cloud-based
development available for everyone.
Very cool. All right, if this gets you excited, learn more and get started for free at gitpod.io.
Again, gitpod.io.
So Aaron, we have you here to talk about a few different things: OAuth, IndieWeb, tracking yourself. Since 2008, as it says on your website, you're, like, super into tracking your location. I thought we'd start there. Kind of interesting. I mean, we're all being tracked at this point, but you're doing it to yourself, on purpose. You want to tell us about that?
I've been doing it for a long time.
It was quite a while ago now.
I've just always been fascinated with data collection and personal data collection about myself.
And actually, technically, I started tracking myself at least 10 years before that. I dug up some logbooks from my early childhood where I'd written down when we left for school and when we got there, the times. It was about two years of this collection of notebooks that I found. And I was like, oh yeah, that explains a lot.
So you've been doing it on purpose, but via GPS, for a long time. Have you learned anything about yourself, habits, or... I mean, has that data tracking... I enjoy data tracking, but I always think, why am I... I stopped doing it. I'm like, why am I doing this? Because there's nothing actionable, there's nothing to learn. But it seems like you're getting something out of it. So have you revealed things about yourself to yourself, or what?
I know that's really the glamorous idea, learning insights about yourself and things like that.
And there's definitely some ways where that's possible, but I would not say that's my primary
motivation at this point for doing it.
But what I have done with that data is used it to remind myself of things or used it to be able to geotag or remember where I was on a certain date or tag other things with my location.
So if I have a well, I guess cameras do this automatically now.
But if I have a photo from a not smartphone camera with just a date on it and I want to be like, where was that from?
Where was I when I took that photo?
I have that data now.
So I can go back and dig it up and correlate it with the location
because it has a timestamp.
Were you a big Gowalla user by any chance?
Oh, yeah.
And I still use Foursquare.
Foursquare or Gowalla?
Which one's for you?
Well, Foursquare now, because Gowalla is gone. But I did use Gowalla briefly and then switched to Foursquare, and I've been using it since then also.
That's interesting.
Logbooks even.
Like before GPS, you're logging yourself.
Yeah, totally.
Like writing down, I got home at 3:28 p.m.
Like that kind of thing.
Yeah, I made a little spreadsheet on a little notebook
and filled in the dates and times.
I filled out who drove the car that day.
That's kind of cool. What do you think made you do that? What were some of the
early thoughts around doing it? Did you do it intentionally or was it just sort of like for fun?
I have no idea. I mean, it was definitely intentional. I have no idea why I did that.
Well, one thing you said is you're kind of obsessed with maps.
And I think actually Jack Dorsey once said that about himself.
The idea for Twitter was more like, when it started,
was like, what's your status?
Or kind of like, what are you doing right now?
And that came out of his interest in where people are
and what they're doing.
And he kind of thought of it in a train sense.
There was a mapping part of what his thought process was there. Now, obviously, he stumbled upon something quite different from that. But was your interest in maps tied into the interest of, like, where I am, when I am?
Yeah, definitely. And I also remember doing this as a child on long road trips between, like, Portland and California.
i remember taking the giant fold-out maps and the highlighter
and then tracing the route on the map,
but in real time.
So like, oh, now we made it to this off-ramp.
Let's go fill in that little trace.
Oh, we made another mile.
I can see this mile marker.
So trace that, doing that in real time
because GPS tracking hadn't really existed at that point.
Right.
That is kind of cool.
Do you also find yourself a completionist?
Yeah.
Yeah, well.
Because you're starting to hit on some things
that resonate with me and I'm a completionist.
I'm starting to think.
I would say I would probably describe myself as,
the problem I have is that if I start something,
I need to be able to continue to do it indefinitely.
So there are some tracking projects I've started that I have stopped.
And I will tend to not bother starting something unless I know I can continue it.
So one of the ones that I have, for example, not been able to do, even though I've tried a couple of times, is tracking my mood.
So I think it'd be very fascinating data.
But I've had two problems trying to collect that data. One, the amount of effort it takes is slightly too much to be able to plan on doing it indefinitely. Whereas the
amount of work it takes for me to track my location indefinitely has now reduced to almost zero, because I automated so much of it. And the other problem with tracking my mood is I have not been able to find a good rating system that I can rely on to be consistent over time. So I've tried three-point scales, five-point scales, ten-point scales, and they all have various problems and inconsistencies.
And the last problem is that as I try to track my mood,
I am either influencing it negatively,
as in if I'm thinking to myself,
I'm not in a very good mood,
it'll put me in a worse mood having just thought about that.
So with all of these problems, even though I love that data, I've just completely failed to collect it.
Yeah.
Even though I've tried several times.
It's like you're a faulty measuring stick, you know,
because your mood affects the mood
and you're trying to observe the mood.
Kind of the Heisenberg principle of observability
or something like that,
where you end up changing the thing
that you're trying to observe.
And that was actually one of the big conscious decisions I made when I started tracking my
location, which was I didn't want the fact that I was tracking my location to change
where I was going.
So at the beginning, like think back to 2008, smartphones were brand new.
The iPhone was only a couple of years old.
So that was not really like a normal thing that most people had access to at that point.
So the big worry was like, oh well, if you're tracking where you're going, aren't you going to be concerned about somebody finding out, or concerned about whatever? Right. So I just tried to make sure that the act of tracking my location was not changing where I was going, and I wasn't avoiding places, or even, like, going down other streets in order to complete a city grid, or things like that. Because I wanted it to be passive collection, just about what I do, not trying to treat it as a challenge to visit every street or something.
What do you do to go back to it? You said before you could map it back to places or whatever. How do you go back to this data and enjoy it, or make sense of it, or analyze it?
So everything that I've collected, I've now normalized all the different ways that I have been collecting into my current sort of database, which is actually just a collection of JSON files on a hard drive. And they're sorted into year/month folders with a file per day, and then it's a line of JSON per entry within the file. So what that basically means is that none of these folders are very large. At most, I have 86,400 lines in a file, one per second. That's the max resolution I can track with mine. And it becomes a very manageable data set. It's not anything fancy. It's, you know, easy to back up, easy to sync between multiple computers. And that is where everything is stored now. So my current GPS tracking app that I wrote writes into that storage format on the server. And then over the past years of using different kinds of apps, I've converted all that data into that format.
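For a sense of scale, a minimal sketch of reading one of those per-day files might look like this. The exact folder layout and field names are hypothetical placeholders; the transcript only specifies year/month folders, one file per day, and one JSON object per line:

```python
import json
from pathlib import Path

# Read one day's worth of location data: year/month folders, one file
# per day, one JSON object per line. The directory structure and any
# field names inside the objects are assumptions, not Aaron's schema.
def read_day(root: Path, year: int, month: int, day: int) -> list[dict]:
    path = root / f"{year:04d}" / f"{month:02d}" / f"{day:02d}.json"
    with path.open() as f:
        return [json.loads(line) for line in f if line.strip()]

points = read_day(Path("~/location-data").expanduser(), 2021, 8, 23)
print(len(points))  # at most 86,400 -- one point per second
```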
And then I've got some simple tools on top of that, which will load it in a web interface, for example, where I can just pull up a day and see the whole day. The other place the data gets used is on my website. When I post a photo or post a note on my website... this is getting into the IndieWeb thing, but I don't actually post on Twitter. My bot posts on Twitter. So I post on my website. My website posts up to Twitter for me. And anything I create on any social media ends up coming back to my website in some form.
So my website is the canonical version of my online presence across whatever platforms I happen to
be using this year. When I create a post on my website, then that also has a hook into my
location database. So I can tag every post on my website with where I was at the time it was posted,
even if the thing I'm using to post doesn't know about my location
Very interesting. And your website's a wealth of things. I look at the copyright, I think it goes back to 1999. So, like, I like this. You have your hub, and everything else is just distribution, or, you know, broadcasting into other spaces. But like, aaronparecki.com, that's yours.
You own it.
You can do whatever you want with it.
You have.
You've built over time.
A lot of us replace our website,
but it seems like you've been adding new portions.
And so you can tie into this, like,
lifelong database of GPS's positioning
and use it however you like.
It's pretty cool.
Over the top, is this accurate to the time of day for you,
the battery life of your phone or something?
And then your cloud, like what's the partly cloudy where you're at, 68 degrees?
Is that based on your phone or what?
It's current.
That's, again, tapped into that same location database.
So my website always knows where I currently am
and whether I'm on a bike or on a plane.
And because it knows where I am, it knows my local
time and it knows the weather. So what that means is that, so I used to travel a lot, obviously,
not anymore, but I was previously traveling a lot for work and hopping between countries and cities
and doing all these workshops and conference talks. And it meant that basically at any given
moment, nobody would know what time it was if they were trying to contact me because I could
be anywhere.
So I put that on my website as a way to be like, oh, if you're trying to get in touch with me, you go to my contact page and it says, oh, it's 3 a.m. because Aaron's in Sydney.
And then you know it's probably not a good time to expect a response.
You've made life really easy on a potential stalker. I mean, they would just be hooked up with all the tools they need just to know exactly what to do.
I have thought of that.
And I also definitely recognize
that I am extremely privileged
and that I am not likely to have a stalker
because I am not a woman on the internet.
So that is something I've been aware of
and I realized not everybody can do this.
And I like to think of it as I'm able to use this privilege
of being a straight white male on the internet
to be able to demonstrate some of the more things
that are a little bit farther fetched
about self-tracking and publicizing that information
because I'm not likely to become a target.
Yeah.
Well, one thing all people can do is the IndieWeb thing,
which you're promoting and practicing yourself, right?
So this idea of IndieWeb, which you've been a part of for a while,
is something that everybody can opt into,
that way of going about engaging with the internet.
Do you want to just touch on that briefly?
I know we're going to get to OAuth, and there's so much to talk about there that we do want to save time for it. But I think IndieWeb is important and interesting. So you obviously do too. You've been a co-founder
of IndieWebCamp and have been a part of it for a while. Yeah, I like to think of my website as
demonstrating all of the things that are possible to do with your own personal website and expressing
yourself online. And I fully realize
that not everybody will do all of the things that my website is doing, nor should they.
But I would like to have as much of that public as I can in order to demonstrate what's possible.
And then people can choose which of those things they like. Maybe you like the idea of having just
the time of day it is where you are, but not anything about where you are. Or maybe you like the idea of having all of your photos on your own website, treating Instagram
as just a copy of your account. So you can pick and choose from all of the things that all of us
in the IndieWeb community are doing. And I've just chosen to use my website as a way to demonstrate
a lot of what is possible.
What are the touch points for somebody who's like, okay, IndieWeb sounds cool, but maybe intimidating, or I'm not sure how do I... Is there a list of, like, these are IndieWeb people who are doing IndieWeb things, and you can steal some of their ideas? Or are there implementations of these things? I know there were some open source projects for a while that were trying to promote this lifestyle of posting online and syndication. Some have fallen by the wayside, but where do you send people who are interested in IndieWeb?
The main home of the IndieWeb online is IndieWeb.org. And that is a wiki, a collection of documentation of what everybody is doing with their websites, both in the past and what could be done in the future. And the community is organized mainly in an online chat. So that's available, in IndieWeb fashion, through the website chat.indieweb.org, as well as IRC, as well as Matrix, as well as Slack, and possibly Discord. We're experimenting with a Discord bridge as well. And they're all connected. So you can join via any of them and you're talking to everybody all at once. So you're not tied to one of these platforms.
Trying to make it accessible, come in using whatever is the easiest for you.
And the community also is organized around events. So it's a heavy event-based, meetup-based community. Again, it was a lot of in-person events. Every year I have been hosting a conference in Portland, the IndieWeb Summit, and we've obviously been on pause the last two years now. It was always in the summer. But we're still doing a lot of online meetups in the meantime. So these are over Zoom usually, sometimes Jitsi, and you can join any of these meetups and just come and chat and learn what other people are doing.
The main idea of the IndieWeb community is to get people to have their own presence online.
Just have your own website.
And that can mean a lot of different things to a lot of different people. And that's great that it can. So if you want your website to be just a one page thing about you and what you're doing and
links to find you elsewhere, that's great. That is your website. You control that and you can do
that. If you want your website to be a full on log of everything you've done online and offline,
that's also great. You can do that. So there's obviously a lot of range in between those two extremes. And we do see a lot of people, you know, fall into various levels of that. So you could have
a WordPress blog, which is a great easy way to get a website that you can post things to and
collect your online life on that site. I like that pragmatic approach because a lot of the IndieWeb, I don't know, blog posts
or content that I've seen historically, some of it's very purist and idealistic to the
point where like it's all or nothing.
And I like that the way you're presenting it and maybe the way that the community's
moved or whatever.
It's a little bit more like opt-in-able to different aspects of IndieWeb
because I resonate with a lot of what you're saying
and there's also bumping up against
either technical limitations or time limitations
or content that I don't care about quite as much.
I don't really care.
I guess historically with my tweets,
I have posted them on Twitter
and then I had a thing that would
suck those into my website,
like an open source thing
like a tweet archive kind of a thing
Twitter now offers that
you can log in and download a zip file
every once in a while if you wanted to
but it's not the purest
publish there and then syndicate
it's publish over here on your platform
but then make sure that I ultimately have those things
so that Twitter couldn't remove them.
And so it's kind of not full indie,
but at the same time,
I very much believe in the power of owning your own domain,
publishing your own content on your own website,
especially content that matters to you
and you want to last for a while,
and then using
the different social networks for what they're good at, as opposed to writing for free on
twitter.com, all my thoughts, for example. Exactly, yeah. And I think it's actually even
more of a problem of when you're writing these longer form things that are, you know, tutorials
or things that you want to use to build your own brand or
build your presence online. And then you go and put that on medium where it's like a hundred percent
somebody else's platform and you're just giving content to somebody else's domain. So for those,
it's like especially important, put that on your website and then use those platforms,
like you're saying, to promote the thing that you wrote on your website and drive people to your own place online.
It's really great for a source of truth too, because if you use your personal domain as the hub and you broadcast that to Twitter or somewhere else, and somehow in the middle there,
it changes. Well, this is actually the source of truth. Like, you know, reminds me of the very last
episode of Silicon Valley when he sent the message.
Stop me if you've heard this before, but he sent the message and purposely put four dots in it, not three, which is a common ellipsis. And somehow the AI in the middle there decided to compress it, turning that four-dot ellipsis into a three-dot ellipsis, which taught them how it had subverted security and all these fun things, whatever. So long story short, somewhere in the middle it can change. And by you having your hub, you can confirm truth, essentially.
Yep.
This episode is brought to you by Retool.
Retool is the low-code platform for developers to build internal tools super fast and super easy.
They have a ton of integrations
and templates to start with. With a click of a button in seconds, you can start with a new
Postgres admin panel application, kick off an admin panel for reading from and writing to your
database built on Postgres. This app lets you look through, edit, and add users, orders, and products.
It's too easy to get started with Retool. Head to retool.com slash changelog to learn more and try it for free. Again, that's retool.com slash changelog.
So Aaron, back in December of 2019, in a post titled, It's Time for OAuth 2.1,
you wrote,
Trying to understand OAuth often feels like being trapped inside a maze of specs,
trying to find your way out before you can finally do what you actually set out to do,
build your application.
That resonated with me.
And you go on, of course, to speak in depth about that and about OAuth 2 and 2.1
and where we've been and
where we're headed. But how did we get there? How did we get to that point? Because it's been a long
and windy road with OAuth and a lot of people involved. Why does it feel like that? Or at least
why did it feel like that in December of 2019? That's a good question. Yeah, I still stand by
that statement. And here we are a year and a half later and still working on 2.1.
But obviously that was in no small part due to the events of 2020 slowing that work down.
But how did we get there?
I think, honestly, I think it's a natural evolution of the space.
And I don't even think it's necessarily bad that it happened that way.
It started out in 2012 with the OAuth 2 draft being published. And that draft, the core draft, you know, it had been going through a pretty rough time in the spec world. And there were several arguments involved in creating that. And there were several people who quit in a fit of rage to go and do other things, because they were just done with the spec world, which I totally understand. And what was left in that core draft was a relatively small amount of information, a small spec. It was a core, right? It was a framework, actually, not even a spec, which means you can use it to build things, you can use it to build specs. But by itself,
it didn't necessarily describe a complete interoperable system. So there are a lot of
good parts in it, but you need more in order to actually finish building out a system.
You need more pieces. So there's that aspect of it. And then the other aspect is that,
so over the years, there were things discovered about the core spec that were maybe security
problems, or there were better ways to do things. And a lot of that stuff ended up being expressed
by new specs and new extensions. One of them being PKCE, P-K-C-E, Proof Key for Code Exchange, pronounced "pixie." That was an extension
developed originally because mobile apps couldn't do the flow the sort of normal OAuth way, and they
needed a solution for that. And then it turns out it's been discovered that that extension actually
solves a number of different attacks that weren't even really thought of when Pixie was originally
developed. So this is stuff that just sort of happens over the years of people building things with the specs and deploying these systems and getting experience with how these things work and how they evolve, and then documenting it. And that's really what specs are: the documentation of a system. And
yes, there's a lot of them because we've learned a lot over the last nearly 10 years.
And I think that's okay.
It's okay to have that evolve slowly like that.
It doesn't need to be something that you create perfectly in the first try, because realistically,
that's actually not possible to go and set out to design a spec and make it perfect on
the first try.
That is how we got there. It was a lot of filling in the gaps in the original OAuth 2. It was a lot
of patching of security features. And also there's the whole section of things that were intentionally not described by the spec. For example, how an API can validate access tokens, which was at the beginning sort of considered an internal implementation detail of a system. But it turns
out, as we've seen, people create companies around the idea of providing OAuth as a service,
then it makes sense to provide a standard way to validate access tokens so that you can interoperate
with different companies' services. So it's just a lot of slow evolution of the space, and that's how we got to December 2019. Yes, there's a lot going on. There are a lot of different pieces, a lot of moving parts. And if you actually take a look at those moving parts and all the different building blocks and all the different pieces, there is a much simpler picture that's sort of coming out the other end, which is what we're trying to capture in OAuth 2.1. There are a lot of things that are known to not be good practices anymore, so let's take those out. There are security features that we should just always be doing, like Pixie, because it solves many different attacks. And if we can consolidate all those, then that's just less stuff for people to read, because I don't want you to have to read 10 specs in order to get to the point of what the industry considers the best practice right now.
So OAuth 2.1, which sounds like it's still in the works, is not new things. It's a distillation of how OAuth 2 evolved, saying here were the good ideas, let's get rid of those bad ideas, this is how you should do it now.
Yep. It's trying to modernize OAuth 2.0 without actually changing anything, without adding new things. So we're not trying to invent
a new spec. We're not trying to say everything about OAuth 2.0 was terrible. Let's start over.
It's really just, it's trying to encapsulate what is currently regarded as the best practice
of OAuth 2.0. And the problem with OAuth 2.0 is that if you say OAuth 2.0, it actually doesn't
really mean anything because it's so many different specs in reality. OAuth 2 is a collection of several different specs and you kind of have to know
which ones are relevant. So the idea with calling it OAuth 2.1, giving it a name and a new RFC is
that that just sets a new baseline. So that's giving a name to what is the best practice and
what we do consider to be OAuth 2 today.
So are those guide rails there? I know the 2.1 isn't ratified or finished or whatever, but if you were to say, 2021 and onward, if you're doing OAuth 2, here's the flow, or maybe it's three flows, whatever it is... like, here's the simplified version now, and here's how it would work. Could you explain that to us in words, or is that like a half-hour dissertation?
Okay. I mean, I can give you a short version. I can also give you a long version.
Let's start with the short.
And give us a medium.
The short version is: disregard the password and implicit grants. Those don't exist anymore.
So the main flow in OAuth 2 and OAuth 2.1 is the authorization code flow with the mechanism
described by Pixie.
So Pixie describes a neat little trick that's been added into the authorization code flow.
It turns out there's several reasons why that's a good idea, which are way too detailed to
go into right now.
Okay.
But it is always a good idea for every kind of app to use Pixie.
I do get a lot of people confused about, because of the origins of Pixie,
whether you should use Pixie if you have a client secret, for example.
And it turns out the answer is yes, use Pixie, even if you have a client secret,
because a client secret is not solving some of the attacks that Pixie does solve. So it's not a replacement for a client secret. It's not an alternative.
It is just how the authorization code flow should work. Now, the client secret issue is,
how do you authenticate a public client like a mobile app or a single page app?
And the answer there is you can't. There is no way to do that, whether or not you're using any OAuth flow or OAuth at all. So you just don't.
And you rely on the redirect URL and the registration of that and the fact that domain
names are the other foundation of our security online. And that's enough protection of those
kinds of apps. The main OAuth flow today is authorization code flow with Pixie and use that
for everything unless you
have a very specific reason. Otherwise, the other flow to be using would be the device flow, which
is an extension. And that's what you'll be using when you're on like an Apple TV or other devices
that don't have a browser or don't have a keyboard. And then the third sort of, I don't want to call
it a main flow, but the third flow that will be commonly used is the client credentials flow, where it is a client, an OAuth client that's not acting on behalf of a user.
It's just the client acting on behalf of itself.
So there's no user involved in the flow.
So the client shows up and says, I want an access token, and then it gets one.
Which happens a lot with administrative apps or backend tooling, where you're just trying to remotely manage a service. We do a lot of that stuff around here, and you just don't need... there's no user being represented. You're just like, no, it's just us. It's just Changelog trying to update a DNS record.
Yeah. And it's kind of like using an API key to go make a request somewhere. But by including it in the OAuth world, it means you can use it alongside your OAuth flows that do have users.
that do have users.
So you end up with the same access tokens
and you end up with the same sort of ways of validating things
and you don't have to hard code as much stuff
in every different place it's being used.
Right, so there's no advantage as a provider
or as a service provider to doing the OAuth style,
except for the fact that you're probably doing it
for your client style anyways.
And so it's just one path of authentication
for your API,
regardless of which style you're doing
versus like you said, if you're like,
well, we also have this API token thing we do
for service accounts.
And that would just be like a whole nother code path for the provider.
Is that what you're saying?
Yeah.
So let me rephrase that, I guess.
So using client credentials has the advantage of using the same access token
format that you'd be using for flows that do have users involved.
So if you're building out a system and you expect a user to be logging in,
you should be using OAuth. You'll end up with access tokens and your APIs can validate those
access tokens. If you also have a situation where you're expecting clients to not have users log in
because they are just service level things, then if you fit them into the same framework,
you can have your clients go get access tokens from the same place, your OAuth server,
and your APIs can validate access tokens in the same way as they're validating the access tokens for users.
Now, the alternative would be you would have a special API key thing that your APIs know how to validate those API keys.
But now you've got a whole separate thing to manage of issuing those, provisioning those, getting your API to validate those.
Whereas using the OAuth client credentials flow
means you've consolidated that logic.
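To make that concrete, here's a minimal sketch of a client credentials request, following the shape defined in OAuth 2.0 (RFC 6749, section 4.4). The token endpoint URL, client ID, secret, and API URL are all hypothetical placeholders:

```python
import requests  # third-party: pip install requests

# Client credentials flow: no user involved, the client authenticates
# as itself and receives an access token, same as any other OAuth flow.
resp = requests.post(
    "https://auth.example.com/oauth/token",     # hypothetical endpoint
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),  # HTTP Basic client auth
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The token is then used exactly like a user-delegated token, so the
# API validates it the same way it validates everything else.
requests.get(
    "https://api.example.com/dns/records",      # hypothetical API
    headers={"Authorization": f"Bearer {access_token}"},
)
```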
Okay, I think I understood you, but I regurgitated poorly.
So you did a good job the first time
and a better job the second time, thank you for that.
The device thing is interesting.
Because back in 2012, we didn't have these devices really,
or did we?
I mean, where you have like an Apple TV
or a Chromecast thing that you're trying to sign in with,
and there's no keyboard there.
There's no there there.
It's like, well, you know,
it's going to pull up this thing onto the screen
that you're going to go like letter by letter.
There's no browser usually, right?
There's usually no browser in those,
which means you can't open a webpage
and send these
off to the to the oauth server right so what we have seen is people build password dialogues into
those devices and then yeah you've got the on-screen keyboard that you're scrolling letter
by letter switching to the symbol oh yeah a long password long password and you're entering it very
slowly in front of anybody else in the room so it's painful that's great and that's not a good
solution so the the fix for that is the oauth device flow which kind of separates the application
that's going to be getting the access token from the device you're using to log in so you will
start the flow on the tv there is no keyboard there is no browser and instead it says hey go
over to your computer or go pull up your
phone, go enter this link, and then enter these six letters. And that establishes the connection
between the TV and your phone. And then you can finish logging in on your phone where you do have
a keyboard and a browser and your password manager and things like that. And potentially even more
hardened security for the person's true identity, because Face ID and Touch ID and... Right, you can
tap into multi-factor auth,
you can tap into single sign-on to other systems
that the TV doesn't even have to be aware of.
Lots of benefits.
So OAuth can handle that now.
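As a rough illustration, here's a minimal sketch of the device flow (RFC 8628) from the TV's point of view. The endpoint paths and client_id are hypothetical placeholders, and handling for errors like slow_down is omitted:

```python
import time
import requests  # third-party: pip install requests

AUTH = "https://auth.example.com"  # hypothetical OAuth server

# Step 1: the TV asks for a device code plus a short code for the user.
d = requests.post(f"{AUTH}/device/code", data={"client_id": "my-tv-app"}).json()
print(f"On your phone, go to {d['verification_uri']} and enter {d['user_code']}")

# Step 2: poll the token endpoint until the user finishes on their phone,
# where they have a keyboard, a password manager, and multi-factor auth.
while True:
    time.sleep(d.get("interval", 5))
    r = requests.post(f"{AUTH}/oauth/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": d["device_code"],
        "client_id": "my-tv-app",
    })
    body = r.json()
    if r.ok:
        print("access token:", body["access_token"])
        break
    if body.get("error") != "authorization_pending":
        raise RuntimeError(body)  # denied or expired
```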
Another thing you mentioned in that post,
which I thought was interesting,
it was kind of an aside,
is that Justin Richer, perhaps,
has this whole other idea,
transactional authorization.
I don't know if that's still a thing.
You say maybe eventually that'll be OAuth 3.
Has that advanced or is that still a thing
or what's the situation with that?
Yeah, there's been quite a lot of movement
on that front since 2019.
So it was called transactional authorization in 2019
when Justin had originally proposed it.
And since then, there actually is a new working group formed at the IETF to take on that work.
And it's been renamed since then.
So now it's called Gnap, G-N-A-P.
And don't even get me started on the naming.
What's that stand for?
That was a whole thing.
It was a very long discussion on the mailing list
and pages of Google Docs of suggestions and voting.
It was a whole process.
But that was when the group was formed.
Had to decide on the name for it.
And anyway, whole thing.
So GNAP, Grant Negotiation and Authorization Protocol
is what that stands for.
And that was the least bad suggestion out of all of them. So that is a new IETF group, meaning it's not happening within the OAuth working group. However, there are a lot of people who participate in both. Still, just like OAuth and OpenID Connect, where OpenID Connect is actually not even in the IETF, it's in its own foundation, but there are a lot of people who participate in both.
Gnap, on the other hand, is a new IETF group, and it's a new document, and that work has continued on since then.
It has gone through a pretty extensive amount of changes and iterations and redefining the scope of the document and pulling some of it out into a new document.
The whole idea with that one is explicitly to not assume any compatibility with OAuth,
but solve similar problems. So I know a lot of people's frustrations with OAuth
beyond just the fact that it's in a bunch of documents. There are some things about how OAuth works that
you can't really change at this point without breaking a lot of assumptions of a lot of
software. So those are things that we kind of have to just deal with and live with now in the OAuth
world. And they're not, it's not broken. It's fine, but it's not ideal. And there isn't a good
way around that one we're trying
to clean up oauth with oauth 2.1 i would think of that as like housekeeping you know clean up
your house before a guest comes over kind of thing but uh gnap is more like rebuild the house
we're going to start from a new foundation do you have any for instances on things in oauth 2 that
you just described in general but are there any examples of what you're talking about? Yeah. One of the examples of something that is pretty deeply baked into the model of OAuth is
the idea of a client. This is where you would go to the developer website of a company and you would
say, I'm going to build an OAuth app against your API. I'm going to register a client. And you go
in there and you type in the client name, you upload an icon, and then you get back back a client id and a client secret or you may only get back a client id if you told that
you're building a mobile app and then you put that you use that client id and your client secret
in your applications you configure your applications with those client id with a client id with a
client secret if you have the secret and you do an awath. The reason this is potentially a problem is that there isn't really a distinction between
the concept of I'm building this app, like it has a name and it's in the app store, and
the difference between a particular instance of that app running somewhere.
So this is most obvious with mobile apps.
I publish an app into the app store and it's identified by the client ID and it has a name
and it has an icon and all that.
But when it runs on somebody's phone,
it is a unique piece of software on one person's phone.
And somebody else running that same app,
it's the same software, but it's a different instance.
And because it's a different instance,
we actually have an opportunity
to do a lot of things around the security of it
that just don't really mesh well with OAuth.
And yeah, you can shoehorn a
bunch of the stuff in it. Like one of the security features that would be really useful is to be able
to say, okay, the access tokens issued to this person's phone cannot be used by anybody else's
instance of that app. So if an access token is, you know, somehow shared with another device, you wouldn't
be able to kind of swap it out and have it be put into the other person's phone because the person's
the access token is tied to that one device. And this is something that we are trying to do,
solve in many different ways right now in the OAuth community of this idea of authenticating
individual instances of the software with specific keys on each specific instance of an app.
Again, it's not that it's impossible to solve it
within the OAuth framework.
It's that it's fighting the OAuth framework
trying to add that concept into it.
So it ends up being harder to describe
like I am struggling to describe right now.
It ends up being harder to describe that
because of the assumptions of OAuth being,
you have an OAuth client, it has an identifier, and that's just kind of the client.
Right.
That is the client, but it's not really because there's an instance of the client that isn't
really talked about in OAuth.
Yeah.
So with Gnap, it's flipping that completely on its head where there isn't really the concept
of one group of software.
Every client is an instance by default and has its own keys by default. And that is permeating the entire, you know, design of Gnap, where you start the flow with your own keys that are assumed to not be shared with any other piece of software. And then you can take advantage of the fact that there are unique keys baked in from the beginning for each instance. So yeah, it's not impossible to do these things in OAuth.
We do see people adding in those security features and bringing in those properties into OAuth clients.
You can definitely do it, but it's not the easy way to do it.
It's not the default way and it's a lot harder to describe.
You said Gnap was essentially starting over, right?
Do you feel that's the best way? What are your thoughts on the direction? Obviously you seem to be pro-OAuth 2.1, or current 2.0.
Where do you land on that? Are you for Gnap?
What do you think is good or bad about it?
I should also clarify, I am one of the editors of OAuth 2.1
meaning I'm participating in the
development of that draft, which is progressing on standards track. I am also an editor on the
Gnap spec. So I am involved in that work, and I do work with Justin and Fabien, the other editor,
on that draft as well. So I do think that within the OAuth world, the OAuth 2.1 work is extremely important.
And I do think it's worth doing that work regardless of anything else that happens
elsewhere. So I think that there's obviously a huge amount of software that's deployed with
OAuth today. And it bakes in these assumptions and it's fine and it works and it needs to be continued to be supported for a very long time.
And I think that all of that stuff does benefit greatly from having a simpler definition of OAuth, which is OAuth 2.1.
Now, totally separate from that, I think there's a lot of interesting opportunity with Gnap to make this work in ways that are easier to deploy in situations that we
haven't necessarily thought of yet. So for example, the device flow was not thought of
at the beginning of OAuth when OAuth was first created, and it's been added into it.
And it fits into that world in a way that is definitely not the sort of natural way of doing it because it has to rely on these
assumptions that maybe don't apply in the device world. And I think we're going to see more of that
happening in the future as more kinds of devices appear and technology keeps evolving. One of the
aspects of that is this whole idea of self-sovereign identity, which we're seeing as a huge community right now using digital wallets for identities and things like that.
None of that is very mature at the moment. It still feels very experimental. And a lot of it
completely does not work with the assumptions of an OAuth world. So you'll see people either completely not understanding OAuth from that world
because it doesn't match the underlying assumptions of how they're thinking about the world.
And some people will try to sort of shoehorn OAuth into that model.
So what we're hoping is that with Gnap, it can be a better fit for a lot of the
future developments of things that are maybe not even thought of yet
So you think that work for 2.1 needs to happen no matter what, because OAuth is going to be around, it's not going to go away, so we need to continue to work to stabilize things. But Gnap might be a better future?
I think there's potential for that. And at the very least, I think there is potential for Gnap to point out some of the assumptions that OAuth is
making that maybe we don't need to rely on anymore. Maybe there's ways to sort of backport some of
that work into the OAuth world, if it can be demonstrated that those assumptions were holding
back progress in other ways. And that kind of stuff is hard to do within a single working group
because of how much legacy there is, of how much deployed and running code there is. Which, again, it's not that that's bad. I'm not saying that's bad at all. It's great that there's a lot of running code, because that's what actually matters at the end of the day. What I don't like seeing is people not realizing what
assumptions exist in a system and not being willing to challenge those assumptions. That is kind of why
this has to end up happening in a new group, because it's a lot easier to just say, well,
we're just going to forget about all of those assumptions and start with a green field and
then come up with something that hopefully does result in running code and is useful in some deployed systems. But if not, maybe we can use that to point out some of the assumptions
in OAuth that don't need to be there and should change in OAuth.
What about the progression of using OAuth, moving from different spec to spec? So if,
you know, OAuth 2 to OAuth 2.1, what is it like to be a developer to have to
deal with that change or, you know, enable my application to be, you know, within that spec?
Is it a challenge for a lot of developers to go from version to version?
Yeah.
Well, so OAuth 1 to OAuth 2 was a huge breaking change.
Again, OAuth 1 had a bunch of assumptions that didn't make sense anymore.
Mobile apps weren't really a thing when OAuth 1 was created.
And it turns out OAuth 1 doesn't really work at all with mobile apps.
So that was a huge breaking change and basically completely incompatible. And there's no way to like migrate.
You have to just, it's from scratch. You write the code from scratch. And that's why Twitter,
for example, still hasn't really switched over to OAuth 2. They're still on OAuth 1.
And what we're hoping with OAuth 2 and what we've seen over the last now 10 years is that
it's a lot of incremental changes,
a lot of smaller incremental changes. So you don't need to support the device flow, for example,
unless you need to support those devices. So you don't even need to worry about that spec unless
you are building apps for a TV. But also things like Pixie, which is not a new spec, but we are hearing about it a lot recently, because it is now being recommended for every kind of application, even web server based applications.
So adding Pixie in is not a ton of work by itself, and it is something you can add incrementally
to a system.
That's all just to say that OAuth 2.1 is not supposed to be something like, oh, you're
going to have to go tear everything out and replace it.
It's really supposed to be, well, it's very possible that the code you are running right now
already is compliant with OAuth 2.1 if you followed all of the recent guidance in OAuth 2.
That's the goal, is that hopefully there will be a set of people who don't have to make any changes
and they will already be compliant with OAuth 2.1.
For our listeners out there building applications with Square, if you haven't yet, you need to check out their API Explorer.
It's an interactive interface you can use to build, view, and send HTTP requests that call Square APIs.
API Explorer lets you test your requests using actual sandbox or production resources inside your account, such as customers, orders, and catalog objects. You can use the API Explorer to quickly populate sandbox or production resources in your account.
Then, you can interact with those new resources inside the seller dashboard.
For example, if you use API Explorer to create a customer in your production or
sandbox environment, the customer is displayed in the production or sandbox seller
dashboard. This tool is so powerful and will likely become your best friend
when interacting with, testing, or playing with your applications inside Square.
Check the show notes for links to the docs, the API Explorer,
and the developer account sign-up page,
or head to developer.squareup.com slash explore slash square to jump right in.
Again, check for links in the show notes,
or head to developer.squareup.com slash explore slash square to play right now.
So OAuth 2.1 says don't use the implicit flow. Why? What is it? Why avoid it? I already implemented it. I'm not supposed to use it now? Help us out with the implicit flow
and then explain Pixie exactly what it solves
and maybe how it works.
Yeah, so the implicit flow is one of those things
that I probably would recommend replacing,
if at all possible, with the more secure flow.
The implicit flow was always a hack.
It was created as a workaround for limitations in browsers. Keep in mind,
these are limitations in browsers from 2010. So the world is quite a bit different now.
Browsers can do a lot more things. So the way the implicit flow works is the user clicks the
link to log in. They're taken from the application over to the OAuth server. They log in there like
normal. This is all the same in both flows.
But when the OAuth server is ready to go and give the application an access token,
it sends the access token in the address bar, in the URL, in the redirect back to the application.
And the application will pull it out of the URL and then start using it. So at first glance,
you're like, cool, that seems very easy. It saves a step. I don't have to worry about this weird authorization code thing.
I don't need a token endpoint.
It's just one redirect there and one redirect back.
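As a concrete illustration (with made-up URLs), the two implicit-flow redirects might look like this, with the access token exposed right in the address bar:

```
# Front channel, step 1: the app sends the user to the OAuth server
https://auth.example.com/authorize?response_type=token
    &client_id=my-app&redirect_uri=https://app.example.com/callback

# Front channel, step 2: the server redirects back with the token
# in the URL fragment -- no back channel involved anywhere
https://app.example.com/callback#access_token=SlAV32hkKG&token_type=Bearer
```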
So why is that a problem?
Well, in OAuth, we use these terms front channel and back channel.
So the idea with a back channel is it's the sort of normal or
default way that you're used to making requests on the internet. It's an HTTP client to an HTTP
server. And if you're using HTTPS, which you should be for almost everything these days,
then that connection is encrypted. You know that when you send data and what you receive,
it's all secure and encrypted in transit, and you can trust the response that comes back. I like to think of that as hand delivering a message to somebody. So you
can walk up to somebody, you can see them, they can see you, you can give them something, you can
see they took it, and you know that nobody else came in and stole it because you can see they
have it now. That's the back channel. That's great. We should use that as much as
possible. The front channel is the idea of instead of an HTTP request from a client to a server,
we're going to have two pieces of software exchange data through the user's browser.
So that means we're actually going to use the address bar as a way to move data from one thing
to another. So both OAuth flows, the authorization
code flow and the implicit flow start out in the front channel with the first request that the
client makes is saying, here's what I'm trying to do. Here's who I am. Here's what I'm trying to
access. I would like you to send the user back to this redirect URL when they're done. That is a
front channel message, meaning the application does not make that request directly
to the OAuth server.
It actually makes the request to the browser and tells the browser to go visit the OAuth
server.
That's fine.
Great.
Although I can explain some issues with that as well, which there is also solutions for.
But the important one is on the way back, where the OAuth server is trying to deliver
the access token back to the application.
Now, the secure way to do that would be in a back channel, but the OAuth server doesn't have a
way to talk to the application in a back channel. The app might be running on a mobile phone or
might be a single page app in a browser, and those are not an HTTP server, so they can't accept the
back channel request. So instead, the OAuth server uses the front channel, putting the access token
into the address bar, having the browser deliver it to the application. I like to think of this as sending a letter in the mail, where I'm trying to
send you a message. I don't have a way to go and walk up to you to give it to you. So I instead
put the message in an envelope and I put it in the mail and I trust that the mail carrier is going
to deliver it to you. And there's a lot of trust there. There's a lot of inherent trust in the mail service.
It'll probably work. It'll probably be fine. But I have no way to prove or guarantee that the message made it there. I also can't ensure that it wasn't copied in transit or stolen or tampered
with, modified. I have no guarantee once the mail has left my hand and it's in the post office.
So anytime we're using the front channel, it's that same situation.
The OAuth server wants to give an access token to the client.
Instead, it gives it to the browser to deliver to the client,
which means now the OAuth server doesn't actually ever know if it really made it there.
And think about if you get a letter in the mail,
you have a similar problem on the receiving end.
You don't actually have any guarantee that that letter is from who it says it's from.
A return address isn't any proof at all. So if you ever get anything in the front channel,
you can't be sure that it's actually from who you think it's from. Meaning if you get an access token in the front channel, you don't know if it's the access token you were expecting, or if
it's somebody trying to trick the application into accepting a different access token. So this is the problem with the implicit flow,
is that it actually sends the access token in the mail, and there is no guarantee on the sending
side that it's secure, and no guarantee on the receiving side that it's actually the right access
token. And there's not really a way around that. There's various patches you can do to solve one
half of those problems, but not the other half.
That's just inherently the problem with using the front channel.
The implicit flow was created because of old limitations in browsers,
primarily the lack of ability to do cross-origin requests.
So back in the day, cross-origin resource sharing wasn't a thing.
So we use the implicit flow to avoid any sort of HTTP request.
Instead, it's just using redirects, using the front channel.
So clever hack, but hey, guess what?
Browsers caught up, and now we have cross-origin resource sharing,
and it's not really a thing anymore,
and it's no problem to make cross-origin requests.
So we don't need the implicit flow anymore,
and we can't even solve all the security problems with the implicit flow.
So it really just doesn't have a place anymore.
That's the reason we're taking it out of the OAuth spec.
So then I've tracked all that.
I don't know about you, Adam, but that was a good explainer.
I think I'm with you.
Yeah, it was awesome.
Now go into what it's replaced with and why it fixes those problems.
So this is definitely a challenge to do without diagrams.
But...
This is hard mode podcasting.
Yeah, yeah, yeah.
So the better solution is the authorization code flow,
in particular with Pixie.
So the way that works is it starts off the same.
The app makes a front-channel request to the OAuth server to start the flow. The user logs in, like before, does two-factor auth, whatever they need to do.
And instead of the OAuth server sending the access token back in the front channel,
it still has to send something back in the front channel because it doesn't have a back channel
connection. What it sends is a temporary, one-time use, short-lived code, and that's called the authorization code. This is why it's the authorization code flow. So that's what it sends in the mail, to use this mail analogy to think about how this works. If you want to send somebody your house key and you put it in the mail, how good are you going to feel about that? Probably not very good. Instead, it would be a lot better to put something in the mail
where it doesn't matter if it's stolen
because you can protect it in other ways.
So instead of putting your actual house key in the mail,
you can put a coupon, a temporary one-time use,
go to this desk to redeem it kind of thing.
And if somebody steals it, well, we can do other things
in order to prevent it
from being used by somebody who stole it. So an authorization code by itself is solving some of
these problems, right? Where now at least the application gets this authorization code in the
front channel. So it doesn't know where it's really from. And that isn't the access token yet,
but it can go redeem it for an access token at the token endpoint.
And it can do that in the back channel.
So it can go and take that authorization code, make a back channel request over HTTPS to the OAuth server.
And now it knows where it's talking to and it knows who it's talking to.
And it can get the access token in the response from that HTTP request, meaning it's in the back channel where it is secure.
That's great.
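To see that back-channel exchange concretely, here is a minimal sketch in Python; the token endpoint, client ID, and redirect URI are hypothetical, though the form fields are the standard OAuth 2.0 parameters.

    # A minimal sketch of redeeming the authorization code in the back channel.
    # The URLs and client_id are made up; the form fields are standard OAuth 2.0.
    import requests  # third-party HTTP client

    resp = requests.post(
        "https://auth.example/oauth/token",  # hypothetical token endpoint
        data={
            "grant_type": "authorization_code",
            "code": "ONE_TIME_CODE",               # the code from the front channel
            "redirect_uri": "https://app.example/callback",
            "client_id": "my-client-id",
        },
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]  # arrives over HTTPS, in the back channel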
That's the authorization code flow. The problem is that if we can't authenticate the client, then if someone steals that authorization code, because it was in the front channel where it's possible to be stolen, how do we know that it's actually the client we thought we were sending it to, right?
If you send a coupon in the mail and someone steals it and they go to
the desk to redeem it for your house key, how do you know that you aren't giving a house key to the
wrong person? And that's where Pixie comes in. So Pixie attempts to solve this problem of not
really knowing who is going to end up coming back with this authorization code. If you imagine
you've just sent this coupon in the mail, you want to know
that the person who received it is the same person that requested it. That's the key. The problem is
that that request came in the front channel, which means you can't even actually really know who that
request is from originally. So this is the sort of brilliant part about Pixie. Pixie uses a hash mechanism to work around this limitation of not being able to, like, have pre-registered secrets. The idea with a hash, of course, is that it's a one-way operation. If I told you to think of 10 random numbers, write them all down on a piece of paper, and then add them up and tell me the sum, that is an example of a hashing algorithm. It's not a very good one, please don't use it in production, but it is a hashing algorithm. Knowing just the sum, I would not be able to tell you which 10 numbers you chose. But if you tell me the 10 numbers, I can verify they add up to the same number. So it's a one-way operation, meaning you can take the hash and share it in a front channel where it may be observed or stolen, because there's no way to reverse engineer it. So if we take that mechanism of a hash, we can add that into the flow.
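That toy hash is easy to try; here is the analogy in a few lines of Python, purely for illustration, with made-up numbers.

    # The toy "add them up" hash from the analogy; do not use in production.
    numbers = [4, 8, 15, 16, 23, 42, 3, 7, 9, 1]  # the 10 secret numbers
    digest = sum(numbers)                          # the "hash" you can safely share
    # Knowing only digest, you can't recover the numbers, but anyone holding the
    # numbers can prove they match:
    assert sum(numbers) == digest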
When the app first starts out, instead of just sending the user over to the OAuth server,
it first creates a random string, and then it calculates a hash of that string.
We actually use SHA-256 for this.
So it calculates a hash, and it puts that hash in the front
channel request to the OAuth server. So someone could observe that hash, but it's fine because
they can't reverse engineer it. The OAuth server can remember the hash, and when it issues the
authorization code, it knows what hash it saw when it issued that code. So now, for that coupon that it's sending in the mail, it knows a sum, or, you know, the hash that it saw when it issued that code. So when someone comes back with that authorization code, which it previously hadn't had a way to link up with the original request, in order to actually use the authorization code, whoever is using that code has to be able to prove that they control the hash that was used to request the code. And they can do that by providing the original secret, which the OAuth server can calculate the hash of and compare the two hashes. And the secret doesn't matter anymore, because it's one-time use. So it's one-time use, and that is over the back channel as well. Okay, fair. Yeah, yeah. So what this means is that the OAuth server now knows that the thing that made the request with the authorization code is in fact the same thing that it sent the authorization code to in the front channel. Nice. So if we think back to our house key analogy: if you sent
this coupon in the mail, you don't really know who it went to. But instead of just sending it off
to somebody in the mail, it had to be requested by somebody. So the request that came in would
include a number. It would include that sum. You could write that down, create this coupon,
send that in the mail. You've still got this number you're holding on to. And now when someone
comes back with that coupon to redeem it, they have to be able to prove they know that secret number. But it's not just a secret number, it's a hash, which means they have to actually know the actual 10 numbers, you know, that they chose to add up to that number. And then you know the person walking up to you at the desk is the person that actually made that first request on the phone, for example. Right. So that's a pretty nice move. It's a clever little trick. Yeah, because you're making the front channel, which is inherently, I guess, insecure, secure, because you're able to share the hash, which being publicly available is fine. And then when you get to the back channel, as well as it just being a one-time-use thing, the back channel is necessary because you could have intercepted that on the way or something. So yeah, you can use the back channel to provide the secret.
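Putting the pieces together, here is a minimal Pixie-style sketch in Python: the client makes a fresh secret for each flow, shares only its SHA-256 hash in the front channel, and later proves itself in the back channel. The names and the tiny stand-in "server" check are illustrative, not a full implementation.

    # A minimal sketch of the Pixie mechanism, assuming SHA-256.
    import base64
    import hashlib
    import secrets

    def b64url(raw: bytes) -> str:
        # URL-safe base64 without padding, as commonly used for these values.
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

    # Client, before the front-channel request: a fresh one-time secret per flow.
    code_verifier = b64url(secrets.token_bytes(32))
    code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())
    # code_challenge goes in the front channel; observing it is harmless,
    # because the hash can't be reversed.

    # Server, later, when the authorization code comes back in the back channel:
    def server_verify(stored_challenge: str, presented_verifier: str) -> bool:
        recomputed = b64url(hashlib.sha256(presented_verifier.encode()).digest())
        return secrets.compare_digest(recomputed, stored_challenge)

    assert server_verify(code_challenge, code_verifier)  # original requester proven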
Now, in most public key/private key operations, or where you have, like, a hash and its source, providing the secret is not something that you necessarily want to do. But because it is one-time, and you're proving who you were, all you're proving is that you made this request in the first place, right? Then you can secure it that way. Exactly. This is not the same as actual public key authentication. No, and it's not intended to be. It is a much simpler mechanism, because it's doing just one particular thing, which is tying the initial front channel request for a particular login to the particular request in the back channel for the access token. Right. So every time the app starts this flow, it makes up a new secret. It's not part of the app's identity at all. It's just unique to this one instance of this one flow. So yeah, normally in public/private key stuff, you don't share your private key at all, you sign things. But this is not that same thing. It just happens to be using a hash, which is also often used in cryptography. Right, and you're generating a new one every time, because all you're trying to say is, I was the original requester. But next time you do the request, it doesn't matter what those combos are. Right. Whereas if you had a singular private key and you give the public key out, you wouldn't send the private key, even if it was over a back channel later. Yep. And this is actually kind of getting back to what we were talking about earlier about client instances and the identity of clients, where in this model,
the client authentication doesn't really matter.
And that's not the point here.
And that's why I was saying at the beginning
that Pixie is not a replacement for a client secret
and has nothing to do with
whether or not you have a client secret.
Pixie is useful to make sure
that the thing that's requesting the authorization
code is the same thing that's going to be using that authorization code later. If you have the
ability to authenticate the client, then you absolutely should, even if you're doing Pixie.
So that request for a token is over the back channel, and that request can be authenticated for clients that have credentials, meaning web server-based apps.
Or if you are doing per-instance authentication of mobile apps,
you can do it there as well.
It would just be a per-instance authentication of some sort.
And that, again, has nothing to do with whether Pixie is being used at all.
It's a completely separate question.
What's interesting, too, is how it seems to be pretty transparent to a user.
So you mentioned before Apple TV gives you a code.
You go to somewhere.com slash activate.
You go to that URL in the browser on your phone.
You log in, so you authenticate via, say, your typical username and password, maybe.
I don't know.
Is that the scenario?
That's the kind of scenario where Pixie is playing a role. So it's
pretty transparent to that end user where
all I'm doing is typing in that six character
string that the Apple TV
told me. I went to my browser on the phone, and it's pretty transparent from a user perspective. Well, Pixie isn't even visible to the end user, right? Right, exactly. It's all
just behind the scenes stuff.
The point I'm trying to make, too, is that that's good because the more trouble you put in front
of a user to be, I suppose, secure or to use authentication, the more they're going to
write their password down on their monitor or circumvent the system or just not use it
and be insecure anyway.
So the cool thing is that this protocol, this spec allows developers to make these things where users don't get fatigued by the process of authentication.
You can still do it, and it's not a challenge.
Kind of a pain to open my phone, go to slash activate, throw in that code.
But I prefer that over, say, swiping my finger back and forth on the Apple TV, you know, to use that example. It's very fatiguing as a user to make my friends wait
or make my wife wait or whatever while I log in.
It can be done sort of quickly, because I have a lot more identity and presence on my phone, which secures me enough to know that I can give that code back to the site.
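The Apple TV scenario described here matches the device-style activation flow; below is a hedged sketch of how the TV might get its code and then poll the back channel. The endpoints and client_id are hypothetical, while the grant type string and response fields follow the OAuth device authorization grant.

    # A hedged sketch of the device-style activation flow described above.
    import time
    import requests

    # The TV asks the OAuth server for a user code and a verification URL.
    device = requests.post(
        "https://auth.example/device/code", data={"client_id": "tv-app"}
    ).json()
    print(f"On your phone, go to {device['verification_uri']} "
          f"and enter: {device['user_code']}")

    # Meanwhile the TV polls the back channel until the user approves on the phone.
    while True:
        resp = requests.post("https://auth.example/oauth/token", data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": device["device_code"],
            "client_id": "tv-app",
        })
        if resp.ok:
            access_token = resp.json()["access_token"]
            break
        time.sleep(device.get("interval", 5))  # keep waiting while pending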
So you did it.
You explained to us without diagrams.
I think you have diagrams somewhere.
So let's not make this the only resource for people.
Do you want to learn the new OAuth 2?
Not the new, but the preferred OAuth 2 things.
You have to listen to the third part of this one episode of the Change Log.
No, there's other resources.
Aaron, point us towards them.
I know you have, I think, a book, OAuth 2 Simplified. There's guides, there's cheat sheets.
How can people visualize this and learn it on their own time?
Yeah, I've got a lot of resources available.
So the book that I wrote, OAuth 2 Simplified, it is at oauth2simplified.com.
You can find links to purchase it there.
The contents of the book are actually also on OAuth.com.
That is the sort of web-based version of the book.
That website is sponsored by Okta.
And I also have a video course about OAuth.
And that's where we walk through step-by-step all the flows.
There's a whole bunch of exercises in there to actually try this stuff out yourself as well.
You can find the link to that one also at oauth2simplified.com.
The OAuth course is called the Nuts and Bolts of OAuth.
Very good. We have Developer Day coming up from Okta, which is kind of cool.
We'll be doing some talks there. What else is happening there? I think you mentioned labs in the pre-call. What other fun things? We haven't even mentioned that on this podcast yet, but...
Developer Day will be a lot of fun. That is on August 24th. And the first day is going to be
a bunch of really interesting talks, not just about OAuth, but about all sorts of web stuff and API and authentication. And I'll be doing a talk there with Vittorio from Auth0, so that'll be fun. Always a good time chatting with him. Then there are the labs, and that is a full day of hands-on activities. It is entirely free, and actually they'll be streamed to YouTube as well, so you don't even need to register for those. You can just show up. That's going to be starting at 8 a.m. Pacific, ending at 5:30 p.m. Pacific. Every 90 minutes will be a different topic. I will be kicking things off with a walkthrough of OAuth. We'll do exactly what we just talked about here, walking through Pixie
step-by-step against a real OAuth server. You'll be spinning up a little OAuth server and trying
to get an access token. And I'll be there live and helping you through it. And then the rest of
the day, we've got all sorts of fun events as well. There'll be a session from Auth0. We'll be
doing stuff with Terraform and Kong and JFrog.
So a lot of good sessions there.
Sounds cool to me.
Well, we do want to give a shout out to Bharat.
They call him All Business.
All Business Bharat over there
for introducing us to you, Aaron.
This has been an absolute joy.
You do a great job explaining these things.
I mean, it's hairy.
It's hairy technical details.
And that's kind of the impetus for this conversation.
It's like, hey, OAuth is complicated.
Why is it complicated?
There's reasons for that.
I think you did a good job explaining a little bit of the history
and how things have changed over time
and how you're not going to land on the perfect API or spec
the first time anyways.
So you have to learn as you advance. And that means that things got a little bit complicated,
but now they're becoming a little more simplified. And there's a bright future ahead for
authentication on the web. Anything else that we didn't ask you or you wanted to touch on before
we called a show? No, that sounds great. Developer Day will be fun. Oh, I do have a show that I do with Vittorio on YouTube,
the OAuth Happy Hour.
And it is approximately monthly.
And we just chat for an hour about OAuth
and talk about what's new in the OAuth world,
what's been happening with the specs.
We get into some of the details of some of the extensions
that are brand new and still being worked on.
And it's a lot of fun.
This is a live stream on YouTube.
You can check out the schedule for that at oktadev.events.
There are links to the upcoming episodes there, and we schedule them usually a few months ahead of time.
Cool.
And yeah, a lot of fun.
Come bring your questions, too.
We'll answer questions from the chat if you show up.
Yeah, there you go.
Show up and ask questions. That's cool. Is, uh, drinks required, or optional? Drinks are optional. Bring whatever you want to drink. We, uh, may even be able to hook you up with some drinks soon. I'm working on making that happen. Nice. Cool. It's been fun, Aaron. Thank you so much for your time, man. Thanks for having me. It's been really fun.
All right. That's it for this episode of the Change Log. Thank you for tuning in.
We have a bunch of podcasts for you at changelog.com that you should check out. Subscribe to the master feed. Get them all at changelog.com slash master. Get everything we ship in a single
feed. And I want to personally invite you to join the community at changelog.com slash community. It's free to join. Come hang with us in Slack. There are no imposters and everyone is
welcome. Huge thanks again to our partners, Linode, Fastly, and LaunchDarkly. Also, thanks
to Breakmaster Cylinder for making all of our awesome beats. That's it for this week. We'll see
you next week. Thank you.