The Changelog: Software Development, Open Source - Ecto 2 and Phoenix Presence (Interview)
Episode Date: June 22, 2016. José Valim and Chris McCord joined the show to talk all about how they're advancing the "state of the art" in the Elixir community with their release of Ecto 2.0 and Phoenix 1.2. We also share our journey with Elixir at The Changelog, find out what makes Phoenix's new Presence feature so special, and even find time for Chris to field a few of our support requests.
Transcript
I'm José Valim, and I'm Chris McCord, and you're listening to The Changelog.
Welcome back, everyone. This is The Changelog and I'm your host, Adam Stachowiak. This is episode 208,
and today Jerod and I are talking to José Valim and Chris McCord about Ecto 2.0 and Phoenix Presence.
It's fresh off ElixirConf Europe.
We talked about our journey with Elixir and Phoenix
because we're building our new CMS using Phoenix and Elixir.
We talked about Ecto 2.0 and what's happening there.
Phoenix 1.2, when it's coming out,
and what makes Phoenix Presence so special.
At the tail end of the show, we talked to Chris
a little bit about some random support questions
that came up along the way, so stick around for that.
Our sponsors for today's show are Linode, Rollbar, and Codeship.
Our first sponsor of the show today is Linode, our cloud server of choice.
Get up and running in seconds with your choice of Linux distro,
resources and node location, SSD storage, 40 gigabit network,
Intel E5 processors.
Use the promo code CHANGELOG20 for a $20 credit, two months free.
One of the fastest, most efficient SSD cloud servers is what we're building our new CMS on.
We love Linode. We think you'll love them too. Again, use the code CHANGELOG20 for $20 credit.
Head to linode.com/changelog to get started. Now on to the show.
All right, we're back, everybody. We've got José Valim joining us, and Chris McCord, and Jerod. This is a show we kind of teed up back in February, and
basically back last March, when Chris first came on and he influenced us around
Phoenix and Elixir, and we've drank the Kool-Aid. We got him back on, and we're talking about some cool stuff. So what's the show about? That's right. So we've had a lot of listeners
who've requested catch up shows with past guests and we had Jose on, like you said, back in
February. And at the end of that show, you could hear us running out of time to talk about even
more. And so we thought, well, we got to get you back on. And in the meantime, you know, Phoenix
1.0 has shipped since we had Chris on back in March of last year.
And 1.2 is on the cusp of coming out with cool new features.
And so we thought, let's just have a whole party of both of them together.
So thanks for joining us, guys.
Thank you.
Thanks for having me.
So we've been through your guys' origin stories.
No need to rehash on that.
If the listeners would like to hear those, check out episode 147 for Chris's and episode 194 for José's. We'll link those up in the show notes.
But it looks like you guys just got off of ElixirConf Europe. Can you tell us about it? Sure.
Well, I'll start, Chris. So it was really a great event. It was in Berlin, and we had about 330 people.
And it was really great because what was really interesting to me was to see how much the
community has matured in this one year. Because last year we had ElixirConf Europe here in
Krakow, and it was a smaller
event, and you could say there were still a lot of people coming to the language for their first
contact. Phoenix already had some traction, and people were thinking, oh, now I can write my
applications with Phoenix, right? Or I can use Ecto. But they were thinking about the tool, right?
They were thinking about Phoenix, they were thinking about Ecto.
And now it was really interesting at ElixirConf Berlin, because you could see that
we had a lot of new people coming, and they were at that stage
in the adoption where they were thinking about the tools:
oh, I can write my next project with Phoenix, right?
But we also saw a lot of people, and a lot of the talks, that were like, you know,
I learned this tool, right?
I learned Phoenix, I learned Ecto, I learned Elixir.
But now we have this amazing platform and Erlang virtual machine for building distributed systems and solving the problems that we have today differently.
So we had more talks about distributed systems, more talks about embedded. So that was really interesting to see, you know, how much the
community could grow and mature in just a one-year period. Nice. Chris, anything
to add there? No, I think that's a good overview. Just like
José said, we were hearing from people actually using Elixir and Phoenix in the large,
like people that work at large banks and other large established companies
that are actually getting Elixir in the door and using Phoenix internally.
So it was exciting to kind of see it go from this emerging hobbyist thing
that people were excited about, to now they've actually
pushed it into their companies and are having big success with it.
And related to that, there was something that was also really, really cool.
I mean, when we are on IRC, for example, Chris and I, when we are talking to people...
Chris and I also talk a lot about ideas for the future, for example what
could happen in future Phoenix versions. And we see a convergence, right? So Chris and I think about some topics, and
then someone pings us on IRC and says, hey, I have been thinking about writing my application this
way. And then we're like, oh, that's cool, because we have been discussing that.
And it was also nice to see, and we're probably going to talk about this when we talk about
Phoenix later on, but this was nice to see at the event:
people were giving talks about things that Chris and I had been thinking about for a while, but we were only talking about between us.
And then people would go and present: look, I'm already doing this, and it has worked in this way for us, and we got good results.
So that's also very interesting to see when it happens. And it has happened.
We had a couple of talks like that as well.
Very cool.
Well, like Adam said during the intro,
we have also drank the Kool-Aid, so to speak.
I think, Jose, when you were on last time in the post show,
you asked, I disclosed that we were using Elixir and Phoenix
to build our next generation CMS.
And you asked why that was.
And I gave the lamest answer of all time, which was basically because Chris told me to.
Which is to say that, you know, when we had Chris on last March, just hearing all of his thoughts
on it and why he built it the way he built it, and a lot of it matching what I've experienced as a
longtime Ruby and JavaScript developer, and as somebody who makes most of my living
building and maintaining Rails applications...
I got excited about it,
and Adam will attest that I get excited
about almost everything that we have on
and I'm always telling whoever it is
I gotta try that out.
I'm going to check it out.
I'm checking that out and then life happens
or work happens and
often times I don't get to.
I thought Elm had him pretty good.
I still have Elm teed up.
In fact, I'm looking for reasons.
But with Phoenix, I actually had opportunity to give it a shot.
And I had a very small need, which was basically it was for the changelog.
We have memberships.
And part of membership is you get access to our private Slack room.
And that was all manual, so the membership would come in.
We used Memberful for that.
And we'd get an email... or I don't even know if we'd get an email, Adam.
We'd have to just go check it every once in a while.
Or we'd get an email, and the email would get lost in the system.
And then it would be like, let me add a to-do, and then that didn't get done,
you know,
within a day or two.
And then somebody is the new member who's not getting greeted properly is
saying,
Hey,
what happened to the Slack room?
And yeah,
we feel bad.
And,
and I'm thinking there's no reason why this shouldn't be automated.
Um,
so memberful has a web hooks API and basically all we need to do is,
you know,
take a web hook, a post call, and then fire out to the Slack API and invite them into our channel.
There's nothing too tough about it.
And so because I had that really small use case, I could try Phoenix out.
In fact, I shipped it without even learning any Elixir,
I don't think. I was just kind of banging on the keyboard until things happened. And so I just got this really fast little win, because I think,
I mean, I could have got it done in probably 20 minutes with a Sinatra app, but it took me just a couple
of hours. And in the meantime, I shipped it. I felt good about it. And then it wasn't working,
of course. And so I went to find out
why it wasn't working. It turned out that the Memberful webhook wouldn't set the content type
on their POST to application/json. And so I was like, well, that's lame, because now
basically the JSON parser was failing to parse it correctly, or Phoenix wasn't picking it up as the right type. And that required me to dig into the framework
just a little bit
and realize how it's all wired together.
And it also allowed me to see some stack traces,
which was surprising because I'm used to stack traces
that are so long that you have no idea
where you are and what's going on.
And the stack trace was like six or seven calls
through the whole web stack.
Maybe it was more than that, but I felt like very few.
And I was like, wow, I can actually see everything
that's going on here.
This is very cool.
And what I needed to do was actually just...
because this is the only call we're ever going to take,
and I don't care what else happens,
I can just force the content type to always
be application/json. And so I just opened up the endpoint file and basically wrote my
own little plug, and plugged it right into the pipeline, and everything was working, and
it was kind of magical. And it was very much what Chris had been telling me about. So that
was kind of my Kool-Aid moment. I didn't dive right into it after that, but I thought, you know what?
There's something here, and I like it.
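For anyone curious what that looks like in practice, a plug along those lines might be sketched like this (the module name and exact wiring are hypothetical, not taken from the show):

```elixir
defmodule ChangelogWeb.ForceJSONPlug do
  # Hypothetical sketch: force the request content-type so that
  # Plug.Parsers runs the JSON parser even when a caller (such as
  # a webhook sender) omits the header.
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    put_req_header(conn, "content-type", "application/json")
  end
end
```

In the endpoint, such a plug would be placed ahead of `Plug.Parsers` in the pipeline, so the parser sees the corrected header.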
So I guess, Chris, thanks for selling me on it, and now we have some support requests.
Now we're here with questions.
Let me have it.
So just to frame this conversation: we're speaking with a certain level of...
well, a lot of times we bring this childlike wonder to our conversations.
And we've been criticized for that sometimes, for not having domain expertise on every topic.
And to that we would say, if we had to be experts in every topic,
it would be a very boring show, because we would just talk about two or three topics.
But in this case, we do have some experience.
And so our questions will be informed to a certain degree.
Wow, that was really long-winded. Let's talk about Ecto.
It's exciting though. I mean, that's a cool thing.
How Chris influenced, I mean,
I think just kind of rewinding back for the listeners who listen to this,
like not only does this show influence the people who listen,
but also the people who host it. So that's, that's interesting to me,
at least.
That's awesome. Actually, let's bypass that a little bit,
because you mentioned Elm, right?
And maybe Phoenix can also be a good reason
for you to pick up on Elm
because there are a lot of people doing Elixir
and they are also interested in Elm.
And I think Chris will be able to confirm,
but I think Phoenix and Elm, using Elm on the JavaScript side is probably the most popular option today with Phoenix.
I hear also a lot about Ember and those are the two I hear the most.
But a lot of people are talking about Elm.
And so you can see a lot of really good, complete blog posts, you know, that go from the beginning to the end.
And there was also some integrations between Phoenix and Elm and Phoenix channels.
Do you have news on this side, Chris?
Yeah, I have maybe just a teaser.
So, yeah, I actually gave a keynote at Erlang Factory this year with Evan Czaplicki, the creator of Elm.
And with Elm 0.17, which I think just came out last week
or very recently, there's new WebSocket support.
So now I want to see this.
And this is not a promise, but this is a kind of promise.
I want to have an official Phoenix Elm library
under the Phoenix Framework organization that's
a channels client
to Elm with the new WebSocket integration
because I think we could make something
pretty great as far as interop goes.
But we're currently exploring that.
So Elm is still on my
to-learn list, but there's a couple
of Phoenix core team members,
Jason Stiebs and Sonny Scroggin,
that have Elm experience
and are kind of exploring what that might look like.
So expect more out of that soon.
Yeah, that's very cool.
I'm still looking for a reason to check out Elm
and to give a little bit of insight
into the Phoenix app that we're building for The Changelog.
It's very boring.
In fact, that was one of the reasons
why I felt like we could tackle it in this.
We have big plans long-term,
and we have ideas that I think the channel stuff plays into for sure.
But in the meantime,
we're just kind of replicating what we currently have
so that we can, you know...
The big purpose is to have multi-tenancy
in terms of podcast support as we develop some new shows.
But it's a server-side rendered content application.
And so we're just using the old school, render the HTML with Elixir and go from there.
That being said, I've seen a lot of excitement around using it as an API for Ember and Elm applications.
And I think there's definitely some opportunities there down the road for us to check out Elm more.
Yep. And there's nothing wrong with server rendered HTML.
I love it, actually.
I'll be the first to say that. Yeah, there's nothing. It's great when that's all you need.
Yep, absolutely. And I'm actually glad that that's how they're using it,
because we have the separate Phoenix HTML library,
and we don't get bug reports for it at all.
And I know that people are using it,
because, for example, you just told me that you're using it,
and we hear other people say, no, we don't do an API, it's just server-rendered HTML. But because we got no bug reports for a long period of time, I was like,
damn, maybe nobody's using this. But no, it's actually good; it's working
for a lot of people without problems.
Well, I actually said we have support
requests; I didn't say bug reports. I actually have not hit a genuine bug
in your guys' stack yet.
I've run into all sorts of
little issues with Brunch.
And so that's a conversation that maybe
if we have time at the end,
I would like to talk about that a little bit
because I think that was an interesting decision.
So basically to tee that up,
Phoenix does not have its own asset pipeline
that's written in Elixir,
integrated tightly into the framework.
It uses the NPM community,
specifically the default is the Brunch build tool.
And it just kind of like lightly couples itself to that
and you can swap it in and out and stuff.
And I follow along on the tracker and the Phoenix
issues in the mailing list,
just silently watching.
And a lot of the requests that you guys get are mostly Brunch requests.
In fact,
sometimes I'm not sure. I have had a few issues and I'm like,
is this a Phoenix issue?
Is this a Brunch issue?
I'm not really sure.
So maybe we can talk about that later,
but let's get to the meat of the topics here.
Jose, when we had you on last time,
we just touched on Ecto a little bit,
and we referenced it in this call.
But to give the listeners a bit of information,
this is your database connection tool.
I'm not sure if you're calling it an ORM.
I know you've removed Ecto.Model and have Ecto.Schema,
so you're separating it quite a bit from what people moving from, perhaps, a Ruby and Rails background over to Elixir and Phoenix would think of in terms of ActiveRecord, or other libraries that model themselves after ActiveRecord, whether the pattern or the library.
So that's my bad way of describing it.
Why don't you describe it better?
No, no, that's a very good introduction.
So that's one of the big features coming in Ecto 2.0.
We would say Ecto 1 was more of a modeling tool,
in the sense that you would define the schemas,
which at the time were called models, right? So you would define this model, and then you would think, oh, that's where I'm going to put my
domain logic. So you would define the functions there, and then you would have callbacks
that would probably have a little bit of your domain logic as well. So we are stepping
away from that, because we are starting to see a lot of the issues we saw happening elsewhere.
With coupling, for example, with callbacks: you define a callback because you want to execute something when you're going to write something to the database.
But there are some scenarios where you don't want that callback to run.
And then you have things like, oh, we skip the callback, or suppress the callback.
You start getting into this whole weird lifecycle stuff.
And that's not the way we should write code, right?
We should not write code and then try to undo it in some places,
in some ad hoc fashion.
You know, I would prefer to write code as small functions that I can call and
compose as I need them, right?
So I don't want to build one thing and start putting patches or holes in
it.
I want to have a bunch of small things and just call the functionality I
need. So
Ecto 2.0 drives a lot in this direction. We say, you know,
we want you to consider Ecto to be a tool, and not
what you use to model your domain. Your domain is still going to be
modules and functions. And Ecto can be, for example, considered a tool that allows you to get
data from the database and put it into an Elixir data structure. So that's why we got rid of models, and now we have Ecto.Schema.
And that's all it does.
It just allows you to get data from the database
and put it into a structure.
And it's convenient, because you define it once,
and then if you need to use it in a lot of places,
you just use the schema in a bunch of different places.
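As a small sketch of what such a schema looks like in Ecto 2.0 (the table and fields here are made up for illustration):

```elixir
defmodule MyApp.Post do
  use Ecto.Schema

  # A schema only maps a data source (here, the "posts" table)
  # to an Elixir struct; domain logic stays in plain modules
  # and functions elsewhere.
  schema "posts" do
    field :title, :string
    field :public, :boolean, default: false
    timestamps()
  end
end
```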
But in order to show a little bit more
of how you should think about it as a tool,
in Ecto 2.0 we also made schemas optional.
Because if you think about it, the database is just a data source.
It's just something that you can get data from.
And so we say, well, there are a bunch of other data sources that we have in our applications.
So, for example, and I wrote a blog post, we can include a link, I think called "Ecto's insert_all and schemaless queries," that talks about a couple of new features in Ecto 2.0. One of them is,
for example, if you have an API, that API is a data source for your application, right?
You're getting some data and you're feeding it into the application; you want to parse it
and you want to handle it, and maybe put it into a data structure, the same way you would do with the database. So you can also use a schema to get,
for example, this data from the API, and validate it and cast it and handle it in a bunch of different
ways. So it's starting to look more like a collection of tools, and they work really well together, right? But they're not taking over
what we think your application should be.
So maybe it can be hard to talk about this,
but the blog post we can link has a very good example.
So if I remember correctly, while we do that,
so for example,
imagine that we want to do a sign up form, right?
And then the product owner,
he says something like,
you know what?
I think we should have first name and last name.
And then you're thinking,
no, that's a bad idea
because not everyone has a last name.
I don't want to model my database like that,
but I know that the owner, he's really decided on that.
So I think: you know, I have this requirement from the UI, but I know how I want the data to look.
And you don't want to pollute your database with UI decisions, right?
You don't want the UI to drive your database.
So you start to have a mismatch.
And then you start thinking about things like this: you want the email to go to an accounts table, but you want the name to go to some other table.
So you have a mismatch between what you want to present and what goes to the database. And the way we typically solved this, for example in Ecto 1
or in Rails, is that
you would add new attributes
to your model, and then your
model starts to be this weird
thing that has a bunch of fields for
this operation, and a bunch of other fields
for this other operation; it starts to become
this small Frankenstein.
It's just getting a bunch of different concerns.
Instead, you can just break it apart and say,
hey, I have a schema for this,
and then I can handle the sign-up logic
and then just get the data and put it into the database, right?
You can think more properly about what those different data sources are
and how you can handle them more directly.
So that's one of the things that's coming as part of Ecto 2.0.
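Under Ecto 2.0, that separation can be sketched with an embedded schema that mirrors the form rather than any table (all names here are illustrative, not from the show):

```elixir
defmodule MyApp.SignUp do
  use Ecto.Schema
  import Ecto.Changeset

  # A UI-facing schema: it matches the sign-up form,
  # not the shape of the database tables.
  embedded_schema do
    field :first_name, :string
    field :last_name, :string
    field :email, :string
  end

  def changeset(params) do
    %__MODULE__{}
    |> cast(params, [:first_name, :last_name, :email])
    |> validate_required([:first_name, :email])
  end
end
```

After validation, the sign-up data would then be mapped onto whatever schemas actually match the underlying tables.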
Yeah. I think Ecto's interesting. It's definitely a different mindset. So I'm very much coming from
the active record mindset and I've been an active record lover pretty much from the start. I know
there's a lot of haters. I know there's a lot of people that, you know, like it and see its downfalls
and I definitely see its downfalls. I've used it for many years. But one thing it does is it makes the simple things really simple.
And some of my frustration as we started to build out,
you know, in fact, that little toy,
I call it a toy, but that production toy,
Phoenix app didn't even have, you know, any database necessity.
But as we began building the CMS,
I'm starting to work with Ecto more. At first, I struggled with this, where I'm used to being able
to just hop into the console and manipulate data pretty simply, in an
ActiveRecord style. And with Ecto there are these different components.
Ecto breaks out into a repo; there's a changeset idea.
These are just concepts and, you know, modules ultimately.
So you have changesets, repos, and queries.
And you talk about composability:
you're composing, you know, this way of manipulating data through these three things.
And at first, it's difficult to know how you kind of,
you know, take the pieces of Play-Doh
and munch them together to get what you want.
But that started to subside
and I'm starting to get it, so to speak.
Can you talk through for the listeners
kind of these different components in terms of the repo,
the change set, and the query?
Yeah, yeah, that's a great question.
And there is the fourth one, which we were just
talking about, which is the schema.
So they are all involved together.
So the repository is ultimately what
represents your data storage.
So every time you want to get something from the database,
you want to write to the database,
you want to start a transaction,
you always go through the repository.
And this is very important for us
because, so I like to say that functional programming
is about making the complex parts of your code explicit.
And it's very important for me
that all this functionality lives in the repository,
because every time I call the repository, I want it to be obvious, right?
Because that's a lot of complexity if you think about what it's doing.
It's managing connections.
You need to serialize data and send that stuff to the database over a TCP connection, and you need to get the data back out, right?
And then every time you talk to the database, there's a chance that can be the
bottleneck or cause performance issues in your application. So, you know, I don't buy this idea,
for example, that all this logic should be hidden behind something like user.save, and that, you
know, you should not care. Of course you should care, right? About what is happening when you execute
that thing. Because, you know, putting data into an agent that is in memory and sending it to a database are two whole different stories.
And you need to know about that.
So that's the idea of the repository.
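To make that explicitness concrete: every database interaction is a visible call on the repository. A rough sketch, assuming a conventionally generated `MyApp.Repo` and the hypothetical `MyApp.Post` schema:

```elixir
# Reads, writes, and transactions all go through the repository,
# so every trip to the database is explicit at the call site.
post = MyApp.Repo.get(MyApp.Post, 1)

{:ok, _result} =
  MyApp.Repo.transaction(fn ->
    MyApp.Repo.insert!(%MyApp.Post{title: "Hello"})
  end)
```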
And then we have the data, which can be Elixir structs.
So it's basically a key-value thing where the keys are defined beforehand,
so it's a struct, but it can also be a map.
It can be anything.
You can interact with the repository in different ways.
And so we have the two, right?
We have the repository,
and then we have the Ecto schema,
which is ultimately just data.
It's just a struct.
And then we have the query.
Now we have all that data
in the database, right? And you want
to slice that data
in different ways and get part of the
data out. So how do we do
that? We have the Ecto query,
which is,
again, just Elixir data, and you build
the query little by little. So you're saying, look at the posts table,
and then you can call a function and say,
I want to get the posts that were created
more than a month ago.
And then, I want to get just the public posts.
So you can compose a little bit.
And then you're going to take this whole query,
this whole structure, and send it to the database,
and it's going to be interpreted as SQL,
for example, if you're using Postgres.
So that's the query, and you mostly use it to read data from the database,
to get data out.
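That little-by-little composition might be sketched like this (schema and field names are illustrative):

```elixir
import Ecto.Query

# Each step returns a new query value; nothing touches the
# database until the query is handed to the repository.
public = from p in MyApp.Post, where: p.public == true
recent = from p in public, where: p.inserted_at > ago(1, "month")

posts = MyApp.Repo.all(recent)
```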
And then we have the changeset,
which is what we use to track changes to the data.
So the idea is: we have the repository,
which is where the data is,
and we have the queries
to get data out,
and we can take the data
that comes from the database
and put it in those schemas,
in those data structures
that we were talking about, right?
So we now have the data in memory.
How can we change it, right?
How can we say,
if I have a post,
I want to update the title?
How can we do that?
The way we do that is with a changeset.
And the changeset, as the name says, just contains all the changes that you want to make when you talk to the database.
So you say, look, I want to update the title to this new thing.
And then you're going to give the repository the changeset,
and it knows how to convert that to the proper SQL and send the
commands to the database.
So those are the four main entities and how they interact with each other.
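For example, updating a post's title with a changeset could be sketched as follows (reusing the hypothetical `MyApp.Post` and `MyApp.Repo` names):

```elixir
import Ecto.Changeset

post = MyApp.Repo.get!(MyApp.Post, 1)

# The changeset records only the intended changes; the
# repository converts them to SQL when asked to update.
changeset = change(post, title: "A better title")

{:ok, _updated} = MyApp.Repo.update(changeset)
```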
And then you said something very nice at the beginning,
which was, you know, that you are used to a good experience with the simple cases. For example,
if you're creating a CRUD application, like the simplest application
there can be, that's the case where the data you
are showing in the UI is exactly the shape of the data you want to have in the database,
right?
That case should still continue to be straightforward, right?
You don't want to add a lot of complexity to that.
And in Ecto 1,
we were trying to be really like, you know,
oh, we have those concepts here,
and you should use those concepts to do those things,
because we are trying to direct developers to the proper
mindset. But at times people were trying to do stuff and they're like, ah, this is too hard.
It could be simpler. There is no reason why you put this barrier here. And sometimes there was really no
reason. It was just us saying, hey, we want you to hit this wall and then let us know
what happens. Like, are you going to be happy that you hit the wall
and it took you somewhere better, or are you going to be upset
that the wall is there?
So we were also able to take away some of those walls,
because some were good, but some we had to take out.
So Ecto 2.0 also improves this common case, right?
Where, hey, the UI maps to what I have in my database.
But as I said in the beginning, it also makes it clear, right,
that you are coupling those two different things:
you're coupling the UI to the database.
So if that's what you want to do, fine.
We're not going to force you to define a bunch of different mappers. But you should keep
in mind that as soon as you start to steer a little bit away from this, where the UI
doesn't really map to your database, we make it really easy to break it apart, and you should
break it apart and start thinking about those things separately. I think we're up against our first
break. More questions on Ecto for you on the other side.
Specifically, I want to talk about preloading
as well as a little bit more on change sets
and some of the really cool things
that I've been waiting for a database library to do,
such as taking constraints that you define
at the database level and allowing those to trickle
all the way up into human-readable error messages
without having to duplicate your work. So let's take that break and we'll talk about
those things and more on the other side. Rollbar puts errors in their place, full stack error
tracking for all applications in any language. I talked to Brian Rue, the CEO and co-founder
of Rollbar, deeply about what Rollbar is, what problem it solves, and why you should use it. Take a listen.
How do you build software faster? How do you build better software faster?
And there are tons and tons of aspects to that: you can have a better language,
you can have better frameworks that help you be more expressive and more productive.
So the flip side of that is after you've built something that works, or at least mostly works,
how do you go about getting it from working to like in production and actually
working? How do you cover the edge cases? How do you find things you missed? How do you iterate on
it quickly? And that's kind of where what we're trying to do comes in. So we're trying to say:
after you ship your software, you're not done. You know, there's still work to do, and
we want to help make that process of maintaining and polishing and
keeping things running smoothly be really, really, really easy. Developers spend roughly half their
time debugging, right? So anything we can do to make that process better is going to have a huge impact.
All right, that was Brian Rue, CEO and co-founder of Rollbar, sharing with you exactly why it fits,
why it works for you. Head to rollbar.com/changelog. You get the bootstrap plan for free for 90 days.
That's basically 300,000 errors tracked totally for free.
Give Rollbar a try today.
Again, head over to rollbar.com slash changelog.
All right, we are back with Jose Valim and Chris McCord
talking about Ecto and Phoenix.
Jose, before the break,
I mentioned preloading.
You said Ecto 1 had a lot of,
you put hurdles in the way or barriers
and some you've removed,
some you've kept.
One barrier that I hit quite often
and I've just learned to work through it
and I understand the reason for it
is you won't automatically preload associations
on the developer's behalf.
This is something that can often lead to inefficient queries,
N+1 queries and such.
And so I feel like this is one of the barriers
that you wanted to put in so that people knew exactly
and had to explicitly load the data that they want
for a particular use.
That being said, it also can be somewhat annoying sometimes.
So talk to us about preload.
Yeah, so about preloading, exactly as you said,
we don't do lazy loading, right?
So you need to specifically say,
hey, I want this data and that's a barrier.
We are not changing it, right?
Because I think it's very important. So there are a bunch of decisions that lead to this, right? So for example, first of all,
all, we don't have like mutability
in the sense that you can just
call something and then we will
load the association and cache it,
right? Elixir data structures, they are immutable.
So we have already one issue
implementation-wise, okay?
And then the other issue is exactly what you hinted.
A lot of the applications I have worked on,
they have N plus one query issues, right?
So, you know, because things can be lazy loaded automatically, you basically don't care. And then you see you are loading a huge chunk of your database dynamically, and in a non-performant way at all.
So there are a couple more decisions related to this as well.
So, for example, talking from the other side, we force you to think about it upfront and preload the data upfront.
And it has a bunch of good consequences, which are also some of the reasons that led us to this.
So, for example, if you have to load the data up front,
then you are kind of like,
look, I'm loading the data in the controller, for example,
because that's where we are loading before calling the view,
which means that, for example, we say in Phoenix that we would like your views to be pure,
in the sense that they should just do the data transformation.
It receives a bunch of data, for example, a collection from the database,
and it transforms that data into HTML,
and it should not have side effects, right?
It should not write to the database, read from the database,
and do a bunch of crazy stuff, and that makes your views
really, really straightforward because you're just thinking
about data transformation and the complexities in the controller.
So that's a pattern we also wanted to promote.
And that's why we made this decision.
And you asked about the barriers.
So that's one example where we tried to improve the barrier a little bit more, in the sense of having better error messages when you don't preload the data and then try to use it, or if you preload multiple times.
So I think in early Ecto versions,
if you had like posts
and then you call preload comments
and then you call preload comments,
again, we would preload it twice.
So now we are a little bit smarter to say,
hey, this thing's already preloaded.
Let's just use that.
So making the functionality
more convenient altogether.
And one of the nice things
we also did in this release is that,
so if you preload all the data up front,
you're going to say, hey, I have this post
and I want to preload comments, I want to preload likes,
I want to preload this, preload that.
So when you specify all the things you want to preload,
now we actually preload them in parallel
because we know all the data you want,
then we have the post.
So we just say, hey, I'm going to do four queries to the database, process that data, and then put it into the post. Because we have this idea of having the whole data up front, it helps a lot with that.
And then, going a little bit more into the Phoenix direction,
there are things that we have been discussing for a while
that we could add to Phoenix,
which is if you tell the view,
so if you tell the view,
what is the data that the view needs
instead of just going crazy
and doing queries anywhere in the view,
we can do a lot of exciting stuff.
We can automatically cache views because we know the data it depends on.
And then if we can track when the data changes,
we know that the view needs to be, you know,
the view in the template needs to be regenerated.
So it goes over this idea, right, of having like my data
and all of its dependencies in one place and not scattered through the view.
So it can do things like automatic caching,
or for example, it could automatically send updates, right?
So if you say, hey, I have this data,
you have your controller that says, I have this data,
and when this data changes,
I want to recompute this view
and send to our clients using channels,
we'll be able to do that because again
all the data dependencies, the preloads and so on, they are in one place and not scattered throughout the views. And I think the preloads play an important part in this whole thing.
Let's take a concrete example here. So we're building a CMS for podcasts and episodes and whatnot. So we have a podcast episode. And so if you think of an episode page, we're pulling in lots of different data. And this is one of the points where, first of all,
it is definitely a nice barrier in terms of as I'm writing the code, I think to myself, wow,
this is pulling in lots of different data from different places. So when I'm preloading all the
things I need for an episode, I preload the podcast that it belongs to, the hosts, the guests, the sponsors,
the different channels, which are like topics and the links. And so it's like preloading tons of
different related objects or records. Are you saying that in Ecto 2, those queries will be dispatched and then brought back together so they run in parallel? Is that what you're saying?
Yeah, exactly.
And sometimes those things are nested.
So you can think of it as a tree where the post is the root.
And then sometimes you want to get the guests and then the guests is doing more preloads that is doing more preloads.
So that's like one branch of the tree.
And then you want to bring something else, right?
Like the likes for that episode.
And that's another association.
So you can feel there is a tree
and what we preload in parallel
are exactly those branches.
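A rough sketch of such a preload tree, loosely modeled on the episode example; every schema and association name here is illustrative. Each top-level branch can be fetched with its own query, which is what Ecto 2 runs in parallel.

```elixir
episode =
  Repo.get!(Episode, id)
  |> Repo.preload([
    :podcast,
    :sponsors,
    :links,
    # Branches of the tree with their own nested preloads:
    guests: [:person],
    comments: [:author]
  ])
```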
That's very cool.
The one thing I love the most about open source
is when my code doesn't have to change at all
and I upgrade and it just gets faster and better.
I just feel like, and then you do that times to the nth degree of everybody who's using that. It's a beautiful thing. I love the impact
I love the impact
that you can have
when you have
lots of people
sharing the same
code base.
So very cool.
One last thing,
we mentioned change sets
and I think change sets
are really
the gem of Ecto.
I think it's a great idea
and I think it's well realized
this idea that
often times
you're taking
input from
different places and where do you put the information on who can do what. And in the
traditional active record style model, it all belongs to the model. And so you have these,
you know, callbacks or if statements or conditional virtual attributes and all these things.
And with a change set, you just have another change set. Perhaps you have your admin change set
and your regular user change set. And that defines what they can and cannot change about
that particular schema, which is very cool. Also, the constraints. So talk to us about
change sets and how you can take different constraints, whether they're foreign keys or uniqueness validations from your underlying database and use those with Ecto.
So this example you gave with change sets, it goes really well with what I said at the beginning.
So, for example, if we have like, if we take active record, you define the whole validations there
in one place, and maybe they are conditional. And then sometimes you don't want to execute
the validations. And as you said, you end up with a bunch of conditionals, right? Because
you add the thing, and then you need to know how to undo those things. And with change sets,
because it's just a bunch of functions, right? Like I can have the admin change set, I can have
the other change set. And if they both share a common ground, that is going to be a third function that the two change sets are going to call. And there is nothing global, right? Everything is constrained in the change set. And you can introspect the change set and see what is changing, what validations ran, and you can see everything, right? It's very kind of touchable, right?
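A hedged sketch of the pattern being described, with a hypothetical `User` schema: two changeset functions share a third, common function, and nothing is global.

```elixir
defmodule MyApp.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :name, :string
    field :email, :string
    field :role, :string
  end

  # Regular users may only change their own name and email.
  def changeset(user, params) do
    base_changeset(user, params)
  end

  # Admins may additionally change the role.
  def admin_changeset(user, params) do
    user
    |> base_changeset(params)
    |> cast(params, [:role])
  end

  # The shared "common ground": a third function both changesets call.
  defp base_changeset(user, params) do
    user
    |> cast(params, [:name, :email])
    |> validate_required([:name, :email])
  end
end
```

Because a changeset is just a struct returned by plain functions, you can inspect `changeset.changes` and `changeset.errors` to see exactly what ran.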
And one of the things that we have there is exactly the idea of constraints.
So we have two things in change sets.
We have validations and constraints.
So validation are things that you can run on the data
without needing the database, right?
So I can validate the length of a string.
I can see if it's required,
if it's effectively there or not,
if the user sent a new value.
So all those things we can validate
without the database.
But there are other things like,
does this association exist
if you are inserting a foreign key?
Or is this email unique?
You cannot actually answer this question
without asking the database.
And if you don't ask the database,
if you implement it at the application level,
you're going to have duplicated data
in your database, right?
So to solve this problem,
for example, unique,
I want this email to be unique.
You actually need to go over the database.
So the idea of constraints
in change sets is exactly
to leverage the constraints that we
have in the database. So
when we create a change set, we say, look,
I want
if by any chance the
database says that this email
is duplicated
because of a constraint, I
want to convert it to a nice user error message.
So the constraints of the change set
is a way for us to tell the change set,
like, hey, you know,
eventually you're going to execute that in the database,
and if the database says that this is wrong,
that's how we are going to tell the user
of exactly what happened and with this exact message.
So it maps those two things, right?
It maps your application and it maps your database.
So in your database, you can add all the constraints that you want,
and then we can still show them nicely to the user.
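A sketch of how validations and constraints mix in one changeset; the field names, and the existence of matching indexes in the database, are assumptions for illustration.

```elixir
def changeset(user, params) do
  user
  |> cast(params, [:email, :post_id])
  |> validate_required([:email])        # runs in the application, no DB needed
  |> validate_length(:email, max: 160)  # also DB-free
  |> unique_constraint(:email)          # relies on a unique index in the DB
  |> foreign_key_constraint(:post_id)   # relies on a foreign key in the DB
end
```

If `Repo.insert/1` then violates the unique index, the database error is converted into `{:error, changeset}` with a user-friendly message on `:email`, because the changeset declared the constraint up front.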
Yeah, very cool.
Okay, Jose, I'll give you one last chance on Ecto 2 new stuff.
Got just a couple minutes.
Give us a rundown of other cool stuff
and then we'll switch gears and talk about Phoenix.
All right.
So we were talking about performance
like parallel preloads
and that's one particular case,
but overall performance is better
because we are now relying on something
called DB connection
that was made to represent the database connection.
So there was a bunch of optimizations
of how connection pooling and all this kind of stuff works.
So I don't remember exactly the numbers,
but people are seeing like from 50% to 80% faster,
you know, in general,
just queries and encoding and decoding and so on.
And that's one nice thing.
I was telling, I said in the beginning
a little bit about the barriers, right?
Like, oh, you know, we put some barriers because we wanted to force people to do some things. One of the barriers we put in Ecto 1 was that every time you wanted to insert some data, we were forcing you to use things like change sets. But for insert, we actually did not need to track a lot of stuff. So if you want to just insert data into the database without creating a change set, you should be able to do that.
So that's also something we brought to Ecto 2, and we really built on the idea.
So what you can do today with Ecto 2
is that you can do repo insert
and you can define your whole tree of data.
You can say, look, going back to the show example: this is this episode, that is going to have these guests, that is going to have this other information, and I want to have those comments. So you can really build a very deep Elixir structure with all the data, and then when you call repo insert, we are going to traverse the whole tree, you know, the whole data, and insert it into the database.
And what is really nice about this is that now you don't need something like a factory system for your data, right, that builds associations and does this and does that, because it's very easy for us to just say, hey, this is the whole tree that I need for my test. And then the tree is really obvious. You just insert it and Ecto is going to take care of all that for you.
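A sketch of what inserting such a tree might look like; the `Episode`, `Guest`, and `Comment` schemas are hypothetical, following the show example above.

```elixir
# Build the whole tree of data as nested structs...
episode = %Episode{
  title: "Ecto 2 and Phoenix Presence",
  guests: [
    %Guest{name: "Jose Valim"},
    %Guest{name: "Chris McCord"}
  ],
  comments: [
    %Comment{body: "Great episode!"}
  ]
}

# ...and Repo.insert traverses the struct, persisting the
# associations along with the parent record.
{:ok, episode} = Repo.insert(episode)
```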
So that was one very nice
addition. And I have a
blog post ready for this and it's
going to come out soon so people can check it
out a little bit with more details.
And the last feature, and I think we actually mentioned
this in the last episode,
was the idea of concurrent tests even if the tests rely on the database.
So Elixir always had this feature where you can go to a test case, set async true, and then all cases that have async true, they run concurrently.
I like to say that it's 2016.
Everything you do should be using all of your cores, right? All the cores in your
machine. And if you're not doing that, you're literally just wasting time, right? Because it's very easy math to do. Assuming you can never parallelize a hundred percent, but assuming you can parallelize like 80% of your tests and you have four cores, that 80% of your tests could have their time divided by four, right?
And you just gain a huge amount of time.
So we got this idea and we extended it in Ecto 2. So you can now run tests concurrently, even if you're talking to the database. And the way we do this is that every time you run a test, this test gets its own connection to the database, and that connection is inside a transaction. And then you can have a bunch of tests running, and all of them are going to talk to the database with their own specific connection. And because it's all inside a transaction, whatever a test does, writes or reads, is not going to affect the other tests.
So it's a really cool feature.
And we have a lot of people using it already
with great success.
And we have recently also integrated this
with acceptance testing tools.
So we have two in the Elixir community,
like Hound and the other one is Wallaby
or something like that. And
you can kind of drive the testing
as if you were a browser, right? Like using
PhantomJS or Selenium.
And those tools now, they also
work with these concurrent
tests. So it's really great. You can have
concurrent acceptance tests
and you don't need to do much
really. It's like one line of code
that you add to your test helper and then one line
of code you add to the setup and it
works. So everything
is faster, including your tests.
So that's it.
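The "one line in the test helper, one line in the setup" wiring being described is roughly this, using Ecto's SQL sandbox; the repo and module names are illustrative.

```elixir
# test/test_helper.exs — put the sandbox in manual checkout mode:
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :manual)

# In a test case:
defmodule MyApp.SomeTest do
  use ExUnit.Case, async: true  # runs concurrently with other async cases

  setup do
    # Each test checks out its own connection, wrapped in a transaction
    # that is rolled back when the test finishes, so tests never see
    # each other's writes.
    :ok = Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo)
  end
end
```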
Love that. Yeah.
Very good. I guess one
more question, Jose. If you could just get a little bit more
excited about this stuff, we'd all really appreciate it.
I'm just messing with you.
Well, we have a lot to talk about with Phoenix.
Chris, you're still there, right?
I am still here.
All right.
Chris is still here.
Awesome.
We're going to take a quick break.
And Phoenix 1.2, I guess we should probably even catch up with what's happened in Phoenix
because we were pre 1.0 in our previous call.
So we'll catch up on Phoenix and talk about Phoenix presence, which looks to be quite an innovative thing coming to Phoenix 1.2 on the other side of the break.
Our friends at CodeShip are sponsors of this show.
And I talked to Ethan Jones about the high performance and security of their new Docker platform.
Take a listen.
So we built CodeShip with Docker with security very much in mind.
So at the start of every build, we spin up a fresh EC2 instance.
At the end of that build, we spin that instance down.
An instance is never reused even between your own builds,
much less between builds of any other customers.
We also don't do things like cache your dependencies
or cache your images locally on our infrastructure. So rather we support remote caching for those things on
your own repos. And all of that is basically to get to the point where your code is never living
outside of the CICD process. None of your application is ever stored on our servers
and nothing you push through CodeShip is ever going to be saved, persisted,
reused, artifacted anywhere once that build completes other than the explicit commands
you run in the middle to do that. The side effect of that architecture is that because
these things happen on these EC2 instances in the middle, that gives a lot of flexibility for
performance because you can scale up that EC2 instance or scale it down
based on the trade-offs you're looking for. So if you want a lot more resources or if your
application has a ton of read write ops or a ton of memory usage, we can sort of up that EC2
instance for your builds. So it makes it really flexible in this sliding scale performance way,
but the conversation was really more around security and around keeping everything
as protected as we possibly could. How nice is it to get performance as a side effect of security?
Yeah. That's awesome. All right. That was Ethan Jones of CodeShip talking about performance and
security with our new Docker platform. Head to CodeShip.com slash changelog to learn more.
Tell them we sent you. Use the coupon code TheChangelogPodcast2016. Once again, TheChangelogPodcast2016. That'll give you a 20% discount on any plan you choose for three months. Head to codeship.com slash changelog, and now back to the show.
All right, we are back with Jose and Chris. Chris, I guess it's your turn as we shift gears and talk about Phoenix. We had you on in March of 2015, and I think we were pre-1.0 at that point.
So we definitely want to focus on the presence feature that's coming in Phoenix 1.2.
But could you briefly give us a brief history, recent history of Phoenix for our listeners?
Sure.
I can't believe it was over a year ago that I was on.
Now I'm trying to think back.
So we reached 1.0 in July, so not too far after I was on.
And I think as far as new features, since I was first on into 1.0, I think it was just about stabilizing things.
So we'll go from brief history from 1.0 to where we are now.
That's where you mentioned Phoenix presence.
So we, and we have, before that, we had some performance optimizations.
So the whole idea with 1.0 was to get our API stable.
And we knew that we had, we had some benchmarks as far as HTTP goes and things were looking quite good. But after we released 1.0, we decided to see how our channel layer was doing performance-wise.
And our channels is the real-time layer in Phoenix.
It wraps WebSockets, but you can also use long-polling or other transports, but your server code remains the same.
So when we went to benchmark this, we were only able to get like 30,000 connections.
So 30,000 simultaneous users, which was much lower than we were hoping. And it was a cool
story because, you know, initially it was like, wow, that's horrible. But with like
a few lines of code change, I think it was Jose had the first optimization. It was like he actually removed a little bit of code,
changed a few lines, and it doubled the performance.
So we got 60,000 connections.
So that was cool.
And then it was just we repeated that a few times
where we would change a few lines,
end up with a diff that was actually less code,
and we would double or triple performance.
And in fact, our last optimization,
we just changed one line of code
and it gave us like a 10X increase in throughput.
So long story short,
we've always preached that we have this great tooling
for Erlang and Elixir
and that it's really easy to get a live running look at the system.
So we were actually able to really put that to the test
and we provisioned like 50 servers that would act as WebSocket clients, all sending connections to one
server, because we needed to try to open like 2 million connections. And we were actually able to
get a GUI into our server of like a live running list of like, what our processes were doing,
what our in-memory storages were looking like,
and that's how we optimized.
And it was too easy.
So we ended up changing.
We ended up with a diff that was less code
to go from something that supported 30,000 connections
to our channel layer that supported ultimately
2 million connections per server.
So that was the really exciting part from after 1.0
where we were able to optimize and get WhatsApp scale of 2 million connections on a single server.
Which, WhatsApp was the case study that you cited in our call last year. What got you excited about Erlang and Elixir was the fact that they built WhatsApp with like 30 engineers or less up to ridiculous scale?
Yeah, I think, yeah, that's WhatsApp.
That anecdote of like 2 million users per server was like what got me into Erlang and Elixir in the first place. So it had to feel pretty good when you got your channel layer to similar success.
Yeah, it was like probably the most fulfilling process of this whole Phoenix open source thing, because the platform, like the hype lived up to reality.
And I wasn't thinking we could actually get WhatsApp-like scale because, when I read about WhatsApp, they were using FreeBSD and they had forked Erlang and made some optimizations
and they were like, they had fine-tuned FreeBSD.
So I was thinking that it was going to be very difficult
to try to replicate that kind of scale.
And also channels, we're doing like,
we're doing more work because we have to,
you know, every abstraction has a cost.
So we're having you not have to worry
about the transport level.
We're able to send specific errors to the client.
So we're doing extra work.
So it was really fulfilling to actually see that, you know,
with minor changes in our initial best effort approach,
with just a few tweaks,
was able to go to something that was able to get millions of connections.
So that was, yeah, incredibly fulfilling to kind of come full circle.
And also it's a great brag slide now of like showing that 2 million
connection chart.
So, so it's good marketing for us.
So yeah, we included that blog post in Changelog Weekly when you posted it.
And I think that was one of our top clicks, if not our top click, of the newsletter that week.
So I think, uh, I think the bragging paid off
in terms of people were interested in those results.
Awesome.
And then like you said earlier about loving open source,
about your code getting faster.
So now if you're using channels at Phoenix 1.0,
you can upgrade to Phoenix 1.1 or 1.2 now
and you'll have something that's like,
you know, orders of magnitude faster with changing nothing.
That's awesome.
So yeah, that was the effort directly after 1.0: performance optimizations around channels.
And then with 1.2, which is release candidate,
which is due out very soon,
it was really all about Phoenix presence.
And for that, Phoenix presence started out as like a simple,
we wanted to solve a simple problem.
What we thought was simple was, you have this real-time layer and people were asking, how do I get a list of currently connected users?
So the simplest use case would be show a sidebar in chat of who's online or who's in this chat room.
And we thought this was going to be pretty simple to solve.
And people weren't solving it well when they deployed it to multiple servers,
and this ended up being like several months of work
where I thought it would be a simple problem
ended up being actually pretty nuanced.
So yeah, I guess I can speak to Phoenix Presence
and kind of all the things we had to solve there.
So the issue with Presence is you can have, well, one, you can be online from multiple places.
So if I open a browser or open the app on my browser and then I sign in them from my phone, I have to be able to distinguish those two things.
Because I want to show Chris just signed online like the very first time I log into the app.
But if I come in from my phone, I don't want to alert everyone that I'm here because I'm already there, but I might want to show an icon that I'm now
online from mobile.
So you have to treat presences as unique, even given the same user.
And then the distributed problem is the hardest issue that almost no one solves, and that's
if you have this distributed state on the cluster, most people just shove that into
like Redis or a database.
And that works if you just assume that computers are reliable and the network's reliable.
And I think most people just at this point, they just assume that nothing bad is ever going to happen.
It's usually the best case. And the problem is if you have a net split or you have a server drop and go down,
you're going to have orphaned data in Redis or your data store.
So I might show user that's online now forever
because the server that was responsible for removing them from Redis
is now dead.
It caught on fire, or someone tripped over the power cord.
So you end up with convoluted solutions.
Jose and I, when we were originally planning this,
were talking through how you would implement this.
Initially we were thinking maybe we would have an adapter
that could be like Postgres or Redis.
So we were thinking in a database sense,
and then you end up with just all these convoluted things
of like now, if a node goes down,
you can have every other node periodically
try to detect that one node went down and then clean up those orphan records.
But if you have like 100 servers on a cluster, now you have like 100 servers competing to do cleanup for one other server.
And it just becomes this mess and it's not going to scale.
And not to mention you have a single point of failure once you deploy Redis, and you're going to have to serialize and deserialize your data in and out.
So it's got some severe side effects.
So we wanted to tackle this in a way
that didn't require the single source of truth,
the single bottleneck.
And that's where Phoenix Presence uses a CRDT,
which is a conflict-free replicated data type.
And that gives us the ability to have, like, an eventually consistent list of presences that is just going to recover from net splits or servers going down or new servers coming up.
So if a new server joins a cluster, it's just going to request from the minimum amount of nodes all of the presence information and it's going to self-heal. So you can deploy this now with no single point of failure and it's going to recover automatically
under pretty much any scenario,
whether there's like a network issue
or whether you have a server just drop off forever.
So we've solved all of those hard edge cases
that no one really gets right.
And at the end of the day, on the server side,
it's like a couple lines of code on the server and a few lines of code on the client.
And you can get an active list of users.
And it's something that you don't have to think about.
Yeah, so maybe can you give us the scope of the presence feature for Phoenix users?
You said there you have a list of active users.
In terms of all that it will provide for the Phoenix user to develop their channel-based application.
What all is going to be there, quote-unquote, for free?
Well, free for us, but hard work for y'all with Phoenix 1.2.
The API is pretty simple.
There's a mix phoenix.gen.presence generator that just generates you a presence module that you can put in your supervision tree.
And what that's going to give you is you can say presence track, so like track my
user, and you can give it some like user ID and metadata. So it's like a couple lines on the
server to say, hey, track my process and also send a list of presences down to the client.
And then the JavaScript client includes a new presence object that handles syncing the state
with the server, because you want to be able to resolve conflicts not only on the server on the cluster, but also on the client. So if the client disconnects, you might have users that have come and left, and the client needs to be able to sync that state when they reconnect. So we provide just a couple functions on the client. There's a presence syncState and syncDiff. So as information is replicated on the cluster, instead of getting 500 messages on the client when, like, 500 users come and go really quickly, you'll get a single presence diff event, and you'll call presence syncDiff with a couple optional callbacks. Given that single event, you'll be able to detect if a user joined for the first time or if they joined from an additional device. So you can maybe show a count of the number of devices I'm on
or show the mobile icon.
Or if I'm logging off from every device,
I can actually finally show, you know, Chris left.
So we give you all those primitives
and it's just a few lines of code that you have to write.
And that's pretty much all presences from the Phoenix 1.2 sense.
It's just a few lines of code on the server
and the client to develop this list of active users
on a given topic,
whether that's per chat room
or maybe a global active list of all users
signed into the application.
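As a hedged sketch of the server-side API just described: tracking a user from within a channel, where the topic, metadata, and the generated `MyApp.Presence` module name are all illustrative.

```elixir
def join("room:lobby", _params, socket) do
  # Defer tracking until after the join succeeds.
  send(self(), :after_join)
  {:ok, socket}
end

def handle_info(:after_join, socket) do
  # Track this channel process under the user's id, with some metadata
  # (each device or tab becomes its own tracked presence).
  {:ok, _ref} = MyApp.Presence.track(socket, socket.assigns.user_id, %{
    online_at: System.system_time(:seconds),
    device: "browser"
  })

  # Send the full presence list down to the newly joined client, which
  # syncs it with the JavaScript Presence object.
  push(socket, "presence_state", MyApp.Presence.list(socket))
  {:noreply, socket}
end
```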
It's interesting that you mentioned the CRDT.
We recently had Juan, Juan Bennett of IPFS,
the Interplanetary File System, on the show.
And near the end, we asked him what was on his open source radar,
and he had mentioned CRDTs as a very interesting piece of computer science
that has a lot of use cases, and he thinks that it needs more exposure
because people aren't using this data structure.
And at the time I was about to interject and say,
I think it was Phoenix Presence using CRDTs,
but I wasn't sure and I also didn't want to interrupt him.
Can you talk about how you came to discover CRDT as a thing
and use it for this feature?
So, yeah, I think you're right. They haven't really been put to their full potential.
And that's one of the things that excites me the most is like, I like to say that, uh, Phoenix is
putting a cutting edge CS research into practice. So it's not only, you know, we're not just trying
to say like, Ooh, we can be computer science-y, right?
It's exciting.
It's exciting to me because we're applying this cutting edge research, but we're actually putting into something that you can solve day to day.
Like, you know, the Riak database uses CRDTs, but unless you have the need for this distributed database, no one day-to-day is leveraging CRDTs,
at least that I'm aware of.
So I'm excited that we're able to solve this
like simple use case using this really great research.
But the nice thing about CRDTs is they give us,
I mean, it stands for conflict-free replicated data type.
So they give us eventual consistency.
So we don't have to do remote synchronization where we have a consensus protocol where we have to lock the cluster and figure out who has what.
So it gives us a way to be, if we can fit within the constraints of the CRDT, we can have much better performance and much better fault tolerance.
Because we can just replicate data
data can arrive out of order
it can arrive multiple times
and all of that is going to
eventually commute to the same result
like conflicts are mathematically impossible
if you fit your problem into this
confined CRDT problem space
so it has some really nice qualities
in a distributed system
because you can't really rely on
the network always being reliable.
And you also,
once you get to a lot of nodes,
you don't want to have to lock the cluster
to get some kind of consensus.
You want that to be automatically resolved for you.
So we knew that this,
there's a particular,
there's different kinds of CRDTs,
but we knew that a particular kind of CRDT called an ORSWOT, an observed-remove set without tombstones, had all of the qualities that we wanted for presence.
So from kind of that thinking, I was talking with Alexander Songe, who has worked on some Elixir CRDT libraries
and gave a great talk last year at ElixirConf about CRDTs.
And he kind of confirmed our thoughts about,
yeah, this presence would be a perfect fit for CRDTs.
So we kind of knew it was going to be an optimal solution
if we could figure it out.
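As a rough illustration of the observed-remove idea, here is a toy OR-Set in Python. Note this is the classic variant that keeps an explicit set of removed tags for brevity; the ORSWOT that Phoenix uses avoids tombstones by using version vectors, which is the harder part.

```python
# Toy observed-remove set (OR-Set) -- illustrative only, not Phoenix's
# ORSWOT. Each add is tagged with a unique id; a remove only deletes
# the tags it has *observed*, so a concurrent add on another replica
# (which carries a fresh tag) survives the merge: "add wins".
import itertools

class ORSet:
    _tag = itertools.count()   # stands in for globally-unique tags

    def __init__(self):
        self.adds = set()      # (element, unique_tag) pairs
        self.removes = set()   # tags observed at remove time

    def add(self, elem):
        self.adds.add((elem, next(ORSet._tag)))

    def remove(self, elem):
        self.removes |= {t for (e, t) in self.adds if e == elem}

    def merge(self, other):
        self.adds |= other.adds
        self.removes |= other.removes

    def elements(self):
        return {e for (e, t) in self.adds if t not in self.removes}

a, b = ORSet(), ORSet()
a.add("chris")
b.merge(a)          # replicate to b
a.remove("chris")   # a removes the observed tag...
b.add("chris")      # ...while b concurrently re-adds (new tag)
a.merge(b); b.merge(a)
assert a.elements() == b.elements() == {"chris"}  # add wins
```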
Yeah, presence makes a lot of sense
for eventual consistency
because it's not a requirement
that everything always be completely consistent. When you're worried about who is and
who is not present, as long as it eventually gets there, it makes a lot of sense. Yeah, exactly. So
besides the talk that Chris mentioned from ElixirConf last year,
Chris's talk at ElixirConf EU now
in Berlin was also really good. And if someone is finding it hard to follow only by
listening, I recommend watching the talk. The video should be out soon.
And Chris has some examples there. For example, he had two nodes connected to each other,
and then some users are in some node,
and then some users are in another node.
And then he simulates a disconnection between the nodes.
And you can see that everyone that is in one node
disappears from the other node, right?
From the screen, disappears live from the browser
for each connected client.
And then as soon as he reconnects everything, all of the clients, they come back, right?
Everyone that was connected to those particular servers, because now the servers are back
up, they can exchange this CRDT again between them.
And then, you know, oh, now I know everyone who is back up again.
So that was a very good example.
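The netsplit demo Jose describes can be sketched with a hypothetical Node class in Python: each node keeps the presences it hosts locally plus replicated state from its peers, drops a peer's entries when the peer is considered down, and merges them back when the connection heals.

```python
# Sketch of the two-node demo (hypothetical names, not Phoenix code):
# on a netsplit the peer's presences disappear; on heal they merge back.

class Node:
    def __init__(self, name):
        self.name = name
        self.local = set()      # users connected to this node
        self.replicas = {}      # peer name -> copy of peer's local set

    def track(self, user):
        self.local.add(user)

    def sync(self, peer):       # periodic replication / heal
        self.replicas[peer.name] = set(peer.local)

    def netsplit(self, peer):   # peer considered down
        self.replicas.pop(peer.name, None)

    def presences(self):
        out = set(self.local)
        for s in self.replicas.values():
            out |= s
        return out

n1, n2 = Node("n1"), Node("n2")
n1.track("alice"); n2.track("bob")
n1.sync(n2); n2.sync(n1)
assert n1.presences() == {"alice", "bob"}

n1.netsplit(n2)                 # disconnect: bob disappears on n1
assert n1.presences() == {"alice"}

n1.sync(n2)                     # reconnect: state merges back
assert n1.presences() == {"alice", "bob"}
```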
And Chris also showed a very good example there of how
it's actually not a lot of lines of code. As I said,
you just generate the Presence stuff, you tell the server what you want to track,
and then on the JavaScript side, you just say, hey,
every time you receive a new state, that's how I want to change my,
you know, my views, my JavaScript in the browser.
And every time I receive a diff,
that's how I want to change it and done, right?
It's really, really small to get everything working with all those properties, right?
Like no central point of failure.
Everything is distributed.
And if nodes go down and they come back up,
you're going to be able to merge and everything just works.
It really works. It's really
cool. I think
if everything
works well, we're going to end up with
the same problem as some other things,
where we don't get bug reports,
and then we don't know if that's because people are not
using it or because it just works.
But I think it just works, again,
because we are hearing stories
of people using it in production
already for a while, and it's like, just fine. So that's always nice to hear.
That's the thing. Yeah, I keep thinking that we'll have bugs,
because CRDTs are tricky to implement. They have to be correct. There's
no almost correct as far as CRDTs go. But, so, I released the Phoenix 1.2 RC,
and people are using it and reporting that it works, but I don't believe them,
because this thing took me months of work, you know, reading research papers,
and trying, you know, having crushing self-doubt and then overcoming that, and figuring out how to parse these academic papers. And now that it's released, or
almost released, I don't actually believe that people are using it, because I haven't had any
show-stopping bugs. So that's good, but also,
unless someone comes and tries to mathematically verify our implementation, we'll see how it goes.
But yeah, it's been a really exciting process.
And I guess it's kind of driven where we're going to go with Phoenix beyond 1.2, which I guess we can talk about.
But maybe if you want to still stay within the realm of Presence first, or we can talk about maybe where we want to go with with what we've built uh because it turns out that presence is actually kind of this gold mine of
untapped potential that we accidentally created.
Yeah, I mean, that was what I was actually going to
ask next, and we're coming up against a hard stop, so let's talk about the future a little bit. I
was going to say, is Phoenix Presence a one-off feature that y'all put a lot of work into, but
it kind of stands on its own? Or, to me, it kind of seems like there's building blocks that have been laid for other
things, and so maybe you could speak to that in the future.
Yeah, so I'll start with my very
first Phoenix talk. I think it was 2014, at the very first ElixirConf. You know, I had a good
idea where I wanted Phoenix to go, and I think I pitched it as a distributed web services framework.
It was on the first or second slide.
And I talked about leveraging Elixir in this distributed runtime.
And at the time, we were just trying to tackle the standard web use case.
But I talked about long, long term,
I wanted to have a distributed service discovery layer.
And at the time, I really had no idea
what I was talking about, other than I knew
that we had the technology, and I knew that we could
solve it with Elixir, being able to just deploy
a service somewhere on the cluster that can perform
some work for you, and then be able to tell it,
hey, do this thing, and just have it work magically.
And I figured it was going to be like really far off.
And even if you would ask me like around 1.0 last year, I would have told you like, yeah,
it's still I'm interested in it, but it's really far off.
But it turns out we've accidentally solved it with presence, and pretty far
into solving this simple use case
of showing lists of users online,
we kind of realized that we made this,
what we really made is a distributed process group
that has some like really nice qualities.
It's eventually consistent, which is nice,
and it also allows you to attach metadata
about each process joining or leaving a group
and gives you a callback to be invoked
when a process joins
or leaves. So we realized, instead of replicating users that are online, it's exactly the same thing
if we replicated what services were online. They're both processes. So instead of
"Chris is online," we could say, hey, this web crawler process is online, and it says it can do web crawling,
and instead of listing the users that are in this chat room 1, 2, 3,
I could say give me every process on the cluster that says it can do web crawling.
And the code, it's the exact same code that would apply to both cases.
So we realized that we have this service discovery layer by accident.
And we've solved all the hard things
that we would have had to solve to do service discovery, and it has all of the
qualities that we want as far as recovering from failure, net splits, or
new nodes coming online and just having services automatically be discovered so
where we want to go next is, we want to maybe make an API
specifically around services where we can build on top
of presence to be able to do like efficient service lookup
and routing to be able to say,
like be able to do process placement where I want to call
a web crawler, for example, like something expensive.
I want to have multiple of those deployed across the
cluster: one, for failover, and two, so I can distribute that work.
So I would like the client to be able to, say,
automatically load balance
based on maybe the current work factor of each web crawler
so the web crawler can update their current number of jobs
that they're processing,
and that would be in the metadata of the presence.
And then we could also do other efficient routing
where we could just automatically shard based on
the available web crawlers on a cluster.
That way the caller just says,
"Hey, call this web crawler service,
here's the information," and we'll automatically
distribute that load for them.
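A minimal sketch of that routing idea, with hypothetical names rather than a real Phoenix API: services register under a topic with metadata (here, a job count, the "work factor"), and the caller picks the least-loaded instance.

```python
# Hypothetical service registry sketch: treat "services" like presences.
# Each process registers under a topic with metadata, and a caller
# routes to the instance reporting the fewest in-flight jobs.

registry = {}   # topic -> {pid: metadata}

def track(topic, pid, meta):
    """Register a process under a topic with its metadata."""
    registry.setdefault(topic, {})[pid] = meta

def least_loaded(topic):
    """Route to the instance with the smallest current work factor."""
    instances = registry.get(topic, {})
    return min(instances, key=lambda pid: instances[pid]["jobs"])

# Three crawler processes announce themselves with their job counts:
track("crawlers", "pid_a", {"jobs": 4})
track("crawlers", "pid_b", {"jobs": 1})
track("crawlers", "pid_c", {"jobs": 7})

assert least_loaded("crawlers") == "pid_b"
```

In the real system the registry would be the replicated presence state, so every node can make this routing decision locally.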
So there's some other neat things we can build
on top of presence, but really the plumbing is there today. And that's the most exciting part of it for me: we accidentally solved this
exceptionally hard problem in our quest to show like, you know, what users are online.
And I think it's kind of a testament to Elixir and the platform that, you know, we use these
primitives of the language that we're built in and we build on top of them. And given this
distributed runtime and this great environment,
we ended up with something far greater really by accident.
That's an awesome byproduct of trying to solve what is typically a simple problem,
but you found this greater piece to it.
Since we have a hard stop for Jose, we're going to let Jose off the call.
Jose, are you cool with bailing out so we can continue with Chris for a few more?
Yeah, yeah, definitely.
I just, I want to add one thing, my last words.
So just to give an idea what we're thinking here.
So like imagine you're building an application
and then this application is growing
and then you're like, oh, geez, I want to,
I'm going to get like this part of this application
or this feature that we are going to build next.
I want to put it elsewhere.
I'm going to make it another application, right?
And then what you do is that you start to develop
that separately.
And then those applications talk to each other.
So you need to go implement a web API
for that other application
that the other application is going to call,
and then you need to serialize your stuff to JSON, right?
And then if you have, like,
and then if one server cannot handle that at all,
you need to put a HA proxy or something like that
on top of that other servers, other services, right?
And then on the other, and then your original application,
you need to go write the code
that's going to the JSON
deserialization.
And then you need to have
an HTTP client
that's going to talk to the thing, right?
So you're writing all this code
and all those things.
And it ends up being a lot of things.
And what you end up is like,
it's basically,
you have like a distributed system
where you're talking to other machines,
but the way the distributed system
communicate with each other is just
very complicated, right? Because
you're using HTTP that's not efficient,
and then you're using JSON that's not an ideal
serialization format as well, right?
So you create all those things, right?
You need to have all those infrastructure
pieces. And here, because we have the
Erlang, we have the Erlang virtual machine that runs
in distributed mode, nodes can talk to
each other, it already knows how to serialize data
between distributed entities, right?
So we're saying that, hey, you are writing your code
and then it can even be in the same project.
So for example, you have like in the same,
in your same application, you have like,
hey, I have like in this project,
I want this to run in one node
and this to run in the other node.
So when you test things locally,
you don't need to start a bunch of different entities, right?
It's just everything you can talk directly there.
And then when you want to run into production,
you just say, hey, now you run in those different machines
in different clusters,
and then you don't need to do any of the other stuff, right?
You don't need to have a proxy to do the load balancing for you
because the Phoenix service system is going to take care of that.
So you just say, hey, I want to run those things there and done, problem solved.
You don't need to be writing HTTP clients.
You don't need to think about how you're going to serialize and deserialize the data because
it's all taken care of.
And that's kind of the idea we're going at.
So you can make a parallel: when we designed the Presence system, you can have a bunch of machines
in your cluster just talking to each other,
and then you don't need Redis,
you don't need other dependencies.
We are thinking the same right here.
But, you know, look, now I have
those different services, right,
running on different machines,
and they can talk to each other
and you don't need HTTP client,
you don't need a proxy,
you don't need something that is going to do the service
registration and manage when those nodes
are up. So, you know,
everything is there. We can do it because
of the platform.
Wow. Good stuff, Jose.
Cool. So I have to go
unfortunately, but
Chris, go on
and thanks again for having me
and we'll chat later.
Yeah, have a good one.
Thanks, Jose. We appreciate it.
Thanks, Jose. Thanks for your time.
We'll talk soon.
Bye.
All right.
Now we're back to Chris.
So that's unusual.
We've never had a caller drop off during a call here before.
So that's what you do when you have two
and one has limited time.
Now he's gone, we can badmouth him.
Now he had some, yeah, he made some good comments
about kind of where we see service discovery going.
Yeah, big ideas.
It really simplifies everything and gives you,
like, you know, microservices are, you know,
the hot movement, but it really gives you
the best of both worlds,
where you can develop these things
just in Elixir like
you normally would, and then you deploy them out there as quote-unquote microservices,
but you don't have all of this munging to do of like, okay, now let's talk to this API
team because these things aren't discoverable.
They're just web endpoints that have their own load balancers.
All these layers disappear, and that's something that is really exciting to me, if we can leverage that.
Yeah.
I've always loved the simplicity of HTTP from a programmer's perspective in terms of an interface for communications.
But it always seemed like it was suboptimal when it comes to microservices.
And everything over JSON and HTTP, there's just a lot of stack there
that you don't necessarily need.
Yeah, especially if you already have
a distributed environment.
And the other part of this too
is because we have a distributed environment,
it's not just about making a remote procedure call.
I mean, that's part of it.
But part of it too is if I want to,
let's say I'm running a game
and I have some game state and game logic,
and I want to be able to spawn these things somewhere on the cluster.
So imagine a service not only can do web crawling, but maybe you have one that's doing game state.
So I want to have a process spun up somewhere on the cluster for me that manages a couple players' game state of a game that they're playing.
That's a long-lived process.
It's not just a remote procedure call.
What that does is I'm going to say, hey, somewhere
on the cluster, someone
spawned me a game server.
And I get a process back of that game server
and I can communicate directly with that process
now, just like any other
Elixir code. So it's not only
a service remote
RPC call, it's being able to do process
placement on like,
hey, someone spawned me one of these things
and then I'm going to treat it
just like any other Elixir code,
just like any other process
and communicate with it directly.
I can ask it later for its state.
I can change its state.
So it really gives you the best of the platform.
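The process-placement pattern can be sketched like this in Python; `spawn_somewhere` and `GameServer` are hypothetical stand-ins for placing a long-lived, stateful process on some node in the cluster and keeping a handle to it.

```python
# Sketch of the pattern Chris describes: ask the cluster to spawn a
# long-lived stateful "process", get a handle back, and keep talking
# to it directly. Names here are hypothetical, not a Phoenix API.

class GameServer:
    """Stands in for a long-lived process holding game state."""
    def __init__(self, players):
        self.state = {p: 0 for p in players}

    def score(self, player, points):
        self.state[player] += points

    def get_state(self):
        return dict(self.state)

cluster = []   # pretend: nodes that can host processes

def spawn_somewhere(factory, *args):
    server = factory(*args)     # in reality: placed on some node
    cluster.append(server)
    return server               # the caller just holds a handle

game = spawn_somewhere(GameServer, ["p1", "p2"])
game.score("p1", 10)            # talk to it like any other process
assert game.get_state() == {"p1": 10, "p2": 0}
```

The point is that the handle works the same whether the server landed on your node or a remote one.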
Very cool, Chris.
Well, if you have a little bit of time
and are open to it,
I do have a list of random support questions
that Jared has,
which we're going to maybe even ask off air,
but we have some time.
I think people will be interested in hearing
your take on a few things.
Yeah.
With regard to Phoenix.
Let me have them.
So, yeah.
So the first two are kind of combined
in terms of taking Phoenix into production.
Kind of part A is,
do you suggest running it behind a proxy or not?
And related,
does Phoenix or Cowboy or the stack itself have the HTTP2 support?
Is there anything specific that you'd have to do to get that running or what's the situation there?
Yeah, so for the first part, it really depends.
Like there's no absolute need to run Phoenix behind a proxy.
In fact, I've heard, I don't think it's going to be the normal case,
but I've heard two different cases of Nginx actually becoming the bottleneck before Phoenix did, when Phoenix was behind Nginx.
But I don't think, for the vast majority of people,
well-configured Nginx, that's not going to happen.
But that's just an interesting anecdote.
But as far as DockYard, we deploy everything behind Nginx.
It's just simpler.
Like that's our deploy process.
And that's how we can load balance multiple web frontends.
So I think Nginx is still
how I would deploy my web frontends in Phoenix.
But it's not, absolutely not a requirement.
So really it's just going to depend on,
I would say deploy just like you would any other web stack.
And those web frontends will happen to be clustered together
with your greater Elixir cluster,
but still being load balanced in front of or behind Nginx
is a great option.
And for the second part, HTTP2, we're exploring that.
Cowboy master
has HTTP2 support,
so Cowboy 2.0 is going to come out
with HTTP2 and
there's also another library called
Chatterbox which is
an Erlang HTTP2 server
so
we're currently looking at how to
get HTTP2 into Plug
which is our web server abstraction that Phoenix sits on top of.
So it's not there yet, but as soon as we have,
we need to get into Cowboy 2 and this Chatterbox library
and look at how they could both fit into kind of a common API under plug.
So I think definitely when Cowboy 2.0 goes stable,
we'll shortly thereafter release a plug
that will have HTTP2 support,
and then Phoenix will just get HTTP2 on top of that.
So it's coming, but it's not there yet.
Okay.
Talk about deployment a little bit
in terms of how you get a Phoenix application into the wild.
Let's ignore for now the platforms as a service,
the Heroku build packs and whatnot.
I know there's EXRM, which is the Elixir Release Manager,
which seems to be the way suggested to move forward.
I'm wondering if there's anything on top of that,
similar to a Capistrano, where it's kind of manipulating EXRM in order
to do the, for instance, the SCP step of the application to the server, maybe database
migrations, rollbacks, those kind of things.
What's the deployment story?
Yeah, so the deployment story could, I guess, I hope it gets better.
Like, it's not bad.
It's similar to where, you know, Ruby was earlier on.
But we have some tools, but there's still some manual steps.
So just for our listeners that aren't familiar, EXRM is a way to build releases.
So there's two ways to deploy Elixir and Phoenix applications.
One is to run the project directly like you would run it in development.
And another one is to build a release,
which is a self-contained tarball
of the whole Erlang VM,
all of your project code,
everything it needs to run.
And then you can deploy that
and run it as a self-contained entity.
And that gives you some nice features
like hot code upgrading.
So under the
Capistrano-like case,
EXRM will build you a tarball,
but there's this final step of, okay,
now you need to SCP that onto the server
and then basically
start the release and
run it, which isn't that hard.
So some people, that's how they deploy. They just have
a bash script that just SCPs,
starts, and they're good.
But I would like to see some tooling built on top of that.
Because I think to give you that, like, a mix deploy, just a single-task deploy function, I think it would go a long way.
There's a couple of tools I've been meaning to check out.
One is called Relisa, R-E-L-I-S-A, I believe.
And I think it does that for you.
I just haven't had time to look into it.
So I'll link that in the show notes.
So yeah, I think releases or deploys could definitely get better.
It's not like it's this insurmountable thing today, but I think that we want to give people that Capistrano-like experience
because that's just removing yet another barrier to entry to people,
you know, getting this out in the world.
Yeah, absolutely.
And I was, I was walking through the steps and it looked very much like, okay, this looks
like maybe a 10 line bash script, which, you know, does those for me.
But then I started thinking of, you know, atomic changes and rollbacks and, you know,
database migrations, if there are any.
And I thought, hmm, somebody should solve this problem.
I wonder if anybody has yet.
So if you're out there and you want to get involved
or maybe Relisa is it, then we just don't know yet.
But there's an opportunity to help the Phoenix community.
Yep, and there's also eDeliver,
which is what we use at Dockyard.
And really all eDeliver is
is just a bunch of bash scripts that are wrapped.
And we've had some stumbling
blocks there. We've got it to work,
but I think it could still be better.
It's not just this
simple setup
and you're ready to go.
If someone wants to
get some open source street cred, that'd be
a great problem to take on.
There you go.
So last thing I have for you, I teed it up way at the beginning of the call, talking about the decision to basically bring in a third party build tool, which the default
is Brunch, which is, you know, really leveraging the NPM ecosystem.
And I think you were on, it was either Ruby Rogues or the Elixir Fountain recently saying
it seemed like you didn't want to
touch the JavaScript side with
a 10 foot pole to put words
in your mouth but just staying out of that whole
thing. I know that Brunch
which seems like a really nice build tool
has
some integration points with Phoenix
and there's ways to swap
it out.
I was on the Phoenix Slack channel the other day and somebody mentioned that they had been replacing it with
Webpack. Maybe just talk about the build tool situation, the decision you made, and how you go
about, you know, changing build tools. Or, you know, a lot of these people who are building Phoenix backends
for JavaScript or Elm frontends,
they don't even need any part of this.
But specifically, my specific support request
is about phoenix.digest,
and if that's tied specifically to Brunch
or if that would work with another build tool.
But I guess to broaden it,
just speak to the build tool situation in general first.
Yeah, so phoenix.digest, just to answer that, is not tied to Brunch. So I'll touch on that in a second.
Yeah, this has been the most miserable part of Phoenix, and I think it's no fault of Brunch.
So the backstory is, out of the box, Phoenix has really been about giving you
like a great out of the box experience,
which I think is
one of the most important things.
And out of the box,
people just want the ability
to compile and bundle
their JavaScript and CSS.
Like they just want it to work.
Like you put your JavaScript
in a directory,
you put your CSS in a folder
and it gets compiled for you
when files change.
And what we didn't want to do is write our own asset pipeline because I didn't want to
spend a year of my life working on that.
And the other side of this is that the vast majority of issues on the Phoenix issue
tracker are Node and NPM related, or Brunch related. And that's not a problem of Brunch. It's
just, I think the JavaScript community has a bunch of great tooling, but a bunch of fragile tooling.
And even if we implemented our own asset pipeline in Elixir, we would still have
a hard Node dependency. Because, I like to call it that, Node and JavaScript is just like an unfortunate
reality in web development.
There's no way to get away from it if you
want to support Sass,
ES6, CoffeeScript,
TypeScript. Pick any tool that
you use to deliver an
experience in the browser, it's going to be,
you're going to have a hard Node dependency. That is what
it is. So even if we spent all this time
writing an Elixir asset pipeline to
concatenate files and call shell
scripts, you'd still need to have node
installed on the server because we'd still be shelling
out to these tools. So unless we
wanted to reimplement an ES6 transpiler
or SAS, all these tools in Elixir,
it's pointless.
So instead
we said, okay, let's look at what the JavaScript ecosystem
has.
And we investigated like all of the dozen popular ones.
Like I looked into Grunt, Gulp.
From Gulp, I looked into Webpack.
And from Webpack, then I checked out Brunch.
And Brunch won because it was by far the simplest to use.
It had the smallest scope as far as feature set,
and it was the fastest.
And a lot of these tools like Gulp and Webpack,
they want to be not only asset builders,
but they want to be task runners, test runners.
They want to run development servers.
They want to do all these things.
And if you're familiar with the Node community, all these things have
dependencies and your dependencies have dependencies
and you end up with something that is like this
insane dependency tree
just to concatenate
JavaScript and CSS. We like Brunch
because it was simple and fast and
it's just JSON configuration.
You don't have to really know how
Brunch works. When you run
mix phoenix.new, you get a project, and by default,
you'll get ES6 compilation and CSS bundling
just by putting CSS and JS into a folder.
So that's how we settled on Brunch,
but we knew that it would be a point of contention
because there's like a million different tools
in the JavaScript community.
So what we did is we only include it by default,
but there's no coupling.
So if you wanted to use Webpack,
it's like a one-line change in your configuration,
and it will start the Webpack watcher
instead of the Brunch watcher.
So we call these things like watchers,
where they watch a static directory for changes,
and then we shell out to them,
and they do whatever compiling is needed.
And they'll build files into a priv static directory,
which is where our static files live.
And that's where the digest task comes in,
where if you want to digest your assets,
all we say is your static build tool needs to build to this directory.
We don't care if it's Brunch or Webpack or Grunt,
and then we'll digest those already bundled files.
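Conceptually, a digest step looks something like this Python sketch (not Phoenix's code): fingerprint each already-built file by a content hash, copy it under the hashed name, and write a manifest so asset URLs can be rewritten to the fingerprinted names.

```python
# Illustrative digest step, hypothetical and simplified: hash each
# built file, copy it to "<name>-<hash><ext>", and record the mapping
# in a manifest for URL rewriting. Real digest tasks also handle
# nested directories, gzipping, and cache headers.
import hashlib, json, os, shutil, tempfile

def digest(static_dir):
    manifest = {}
    for name in sorted(os.listdir(static_dir)):
        path = os.path.join(static_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            h = hashlib.md5(f.read()).hexdigest()
        base, ext = os.path.splitext(name)
        digested = f"{base}-{h}{ext}"
        shutil.copy(path, os.path.join(static_dir, digested))
        manifest[name] = digested
    with open(os.path.join(static_dir, "manifest.json"), "w") as f:
        json.dump(manifest, f)
    return manifest

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "app.js"), "w") as f:
    f.write("console.log('hi')")
m = digest(tmp)
assert m["app.js"].startswith("app-") and m["app.js"].endswith(".js")
```

This is why the digest task only needs a directory of built files: it doesn't care which tool produced them.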
So we tried to integrate this in a way that gives you great out-of-the-box experience for the most common use cases, but
if you have some other tool, you should just be able to swap it out and use what
you like. Certainly good points on Node
being there for you no matter what. It's on the front end. You can't get away from it,
so why recreate the wheel, or,
redundancy, you know, in that case, making something that you don't need.
Yep, and it's been, I mean, there's been a ton of misery.
Like, you know, you don't want to spend your life, like you said, doing that, right? So,
yeah, I don't like to put down anyone else's work, but there's been so many times
that npm install has just broken for people.
And you can probably sense some frustration from me
because someone will open an issue about...
There's no repeatable builds.
Things just break.
Things that have been stable on Windows
will just suddenly break.
So we've had all these Windows support issues,
which is interesting because it's not,
like what I thought that Elixir and Erlang
would be tough to run on Windows,
but it turns out the biggest issue
is people trying to run Node on Windows,
which I thought was a solved problem.
So I think that, you know, I wish we could,
you know, my only hope for the JavaScript community
is we can settle on tooling
instead of having so many options
and then also maybe end up with kind of a repeatable build process
that is much more stable than it is now.
So, yeah, we'll see how that goes.
The question is, did Brunch depend upon LeftPad?
I mean, I'm trying to think. Yes, actually it did.
Okay.
Everything depended on LeftPad.
Everything did.
So therefore, Phoenix depended on LeftPad.
Yeah, because people started reporting issues.
Oh, wow.
And it was left pad related, which is funny.
Well, Chris, it's been fun having you.
We're near time.
I think Jared has a hard stop here in 13 minutes.
I'm not sure what your timing is, but we could talk for longer. We want to give you a chance to sort of
give some last words like Jose did as well. So anything you want to say
in closing, we'd love to hear it. Sure. Yes. So
let's see. Maybe just a recap of Phoenix 1.2.
Yeah. So Phoenix 1.2, it's a release candidate today. We have no
remaining issues.
So I think within the next week or two, by the time this airs, it should be out.
Presence is the biggest feature. We're really excited about really enabling distributed applications that you just don't have to think about.
And that's where we want to go next, is being able to give you this kind of distributed tooling layer where you can build out, you know, develop an application on your laptop
and then run it distributively, but the code's the same.
So it's kind of a similar theme of,
with channels we wanted to give you this trivial interface
for real-time connections where you didn't have to worry
about how's the client connected, you know,
what transport are they coming over.
We kind of want to apply that same idea to distribution,
where you can develop on your laptop,
and then you can deploy this,
and you don't want to have to care,
is this service available locally,
or is it available on some computer somewhere,
or do I happen to have 10 of these things deployed on 10 computers
because I want fault tolerance and scalability?
So we kind of want to give you that experience
and take care of all those details for you.
So that's what's
coming next. And check Phoenix 1.2 out when this airs. Adam, before we close, I'd like to give
a quick shout out to Dockyard for employing Chris and allowing Chris to work on open source. I think
Is it full time you're on Phoenix, or at least part time? Yeah. So, yeah, thank you for that,
by the way. My primary role is to work on Phoenix.
So it's about three quarters of my time
are spent on open source and Phoenix development.
And since I've been there,
since it's coming up on maybe six months,
I've been almost entirely full-time on Phoenix.
So it's been, you know,
none of this Presence stuff would have happened
without their support.
So I owe them a huge thank you.
So a good way to support you supporting Phoenix
would be potentially to buy services from DockYard.
Oh, certainly.
Certainly.
DockYard.com slash services, full project design engineering,
the full gamut.
We love companies that support open source,
and we talked earlier about how
the beauty of it is
everybody else's applications
and projects get better by this
shared effort.
Companies that allow that shared effort
we all thrive based on it.
Huge shout out to them for doing that
and for all companies that are putting
their hard-earned money behind
open source projects as a way of sustaining the ecosystem.
And the same with, you know, Jose's gone, but Plataformatec, Jose's company, for sure.
It's like, you know, obviously Phoenix wouldn't have happened without Jose and Plataformatec, because they took the even crazier position of not only, you know, saying, okay, let's support this web framework,
But, you know, Jose had this crazy idea to write his own language
and take a couple of years off to do that.
So we owe them a huge thank you as well.
Absolutely.
That's an interesting story too.
We mentioned both of your backstories in the earlier shows,
and in that show with Jose, he talked a bit about how they were betting on it early,
and how he was working on it on the side, and then they started using it and it sort of took over. So if you want to listen to that, what's
the episode number again, Jared? Episode 194 for Jose's and 147 for Chris's show, so go back and
listen to those. Before I forget, I'll just say my keynote from ElixirConf Europe is
actually online now, so we'll include that in the show notes.
And that'll take you through.
I kind of pay it forward by walking through how CRDTs work to give you kind of this mental model without having to read research papers.
So if you're interested in CRDTs, that'd be a good talk to watch.
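[Editor's note: since CRDTs come up here, a quick illustration may help before you watch the talk. The grow-only counter (G-Counter) is the simplest CRDT. This is a rough sketch in Python for illustration only; it is not code from the keynote or from Phoenix Presence, which is built on a more sophisticated CRDT.]

```python
# Grow-only counter (G-Counter), the simplest CRDT.
# Each replica increments only its own slot; merging takes the
# per-node maximum, so merges are commutative, associative, and
# idempotent -- replicas converge no matter the message order.

def increment(counter, node, amount=1):
    """Return a new counter with `node`'s slot bumped by `amount`."""
    updated = dict(counter)
    updated[node] = updated.get(node, 0) + amount
    return updated

def merge(a, b):
    """Join two counters by taking the per-node maximum."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in a.keys() | b.keys()}

def value(counter):
    """The observed count is the sum over all node slots."""
    return sum(counter.values())

# Two replicas count events independently, then sync.
a = increment({}, "node_a")        # {"node_a": 1}
b = increment({}, "node_b", 2)     # {"node_b": 2}
merged = merge(a, b)

print(value(merged))               # 3
print(merge(merged, a) == merged)  # True: merging again changes nothing
```

The key property is that `merge` never loses an increment and never double-counts one, which is the mental model the keynote builds up for richer CRDTs.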
Well, I think your conversation today on that subject opened some ears for sure.
But it was good to have you back on the show, especially to catch back up.
It is kind of crazy.
It's been a year since you were on the show, and I kind of enjoyed just sitting back hearing
all this goodness, because as Jared mentioned, he's building the CMS, and it's the future
of The Changelog and what we're doing here.
So it's great to have you back on, and Jose as well, to talk through the underlying technology
that is building our future, which, to me,
is just such an awesome feeling, honestly, to have that and to share that with you guys.
Yeah, well, appreciate it.
And you know where to find me online if you have problems.
And also maybe six months, a year from now, we can talk about, you know, Phoenix Next
and our awesome service discovery.
Only one thing I want to mention before we close, and it's a shame
Jose's not here anymore, but you can pass the message, Chris,
or he can hear it in the produced show that goes out.
We had Matt on the show, on episode 202,
and Matt is a fan of Elixir. So that would probably get Jose pretty excited.
He even said so on the show, too.
So that was good.
That's awesome.
And he listened to Jose's show, 194.
Big, big stuff there.
But I guess let's close the show.
So if that's it, fellows,
you can go ahead and say goodbye.
Sounds good. Goodbye.
Thanks, Chris.
Thanks, Jose.
Yeah.
Thanks for having us.
I'm Jose Valim.
And I'm Chris McCord.
And you're listening to The Change Log.
Come on, Chris.
I'm Jose Valim.
And I'm Chris McCord.
And you're listening to The Change Log.
Don't laugh, man.