Software Huddle - Infrastructure, AWS, AI and Jobs, HTMX & more
Episode Date: September 10, 2024

Today we have a special guest. We have Jeremy Daly, who's been in the cloud space for a while. Jeremy is the co-founder of Ampt, which is building an abstraction infrastructure layer on top of AWS, just to make it simpler to sift through all the different options, develop on AWS, and follow best practices there. So we wanted to get his opinions on a lot of different infrastructure stuff that he's seeing and how AI is changing development. We even talk about some front-end stuff at the end, and HTMX and whether it's real, whether it's a troll. So lots of good stuff in this episode.

Timestamps
01:56 Start
04:28 Jeremy's Background
07:26 Hard things about building Ampt
11:59 Infrastructure from Code
17:07 App Runner
20:10 Comparing Ampt and PaaS
27:22 Managing a lot of AWS accounts
30:46 Better than AWS
35:27 Thoughts on AWS deprecating services
47:11 Using AI
57:20 ChatGPT Adoption - Non Programmers
01:06:19 AI affecting the job market
01:18:37 HTMX

Software Huddle ⤵︎
X: https://twitter.com/SoftwareHuddle
Substack: https://softwarehuddle.substack.com/
Transcript
But essentially, I think that AWS hasn't done what it needed to do to build these services.
That's why we see services like Momento and Neon and some of these other ones, because they haven't built an equivalent version of that within AWS.
One of the things you mentioned earlier was some nervousness around your love and use of App Runner.
And I think that kind of leans into this thing
that's coming up recently around deprecation of AWS services,
like CodeCommit, CloudSearch, Snowmobile, WorkDocs
are all sort of in these deprecation cycles right now.
And there's a bunch of other ones as well.
I'm kind of curious, what are your thoughts on this recent move
from AWS to start, you know,
sunsetting some of these services with maybe not a ton of communication about what they're doing?
If I was hired as the CEO of AWS, I would cut hard and I would cut deep because the service sprawl there is incomprehensible to the average person.
And there are a lot of services that I would cut.
Do you think AI is affecting the job market for devs right now?
Are we seeing a softer job market because of AI?
I think there are too many companies that are buying into the hype that AI is somehow going to replace developers.
Originally, when I kind of looked at Ampt, I thought of it more as like a platform as a service.
And clearly, it's something that's fundamentally different.
So for someone that's maybe using something like a platform as a service,
is cost the main hurdle there?
Why don't those systems work for these folks today?
And where does essentially this approach step in
and kind of fix some of those issues?
Hey, folks, this is Alex.
Great show for you today.
As Sean joins me on the show, we also have a special guest.
We have Jeremy Daly, who is an old friend of mine in the cloud.
He joined us on the show
and I really just trust and respect him
on a lot of things.
He's seen a lot
and I think has a pretty wise opinion on this.
Jeremy's the co-founder of Ampt,
which is building like an abstraction
infrastructure layer on top of AWS
just to make it simpler
to sift through all the different options
and develop on AWS
and do best practices there.
So I really
wanted to get his opinions on a lot of just different infrastructure stuff that he's seeing,
how AI is changing development. We even talk about some front end stuff at the end and HTMX
and whether it's real, whether it's a troll. So lots of good stuff in this episode. As always,
if you have any questions or guests that you want to see on the show, anything like that,
feel free to reach out to me or Sean. With that, let's get to the show.
Sean, it's good to see you. How you been? I've been doing well. It's been
super busy summer. I feel like we haven't chatted live since we were together in Miami back in
May. It's just been like Slack messages and texting back and forth over the last
little while. I haven't been home for a weekend in like a month and a half over the summer.
I just, like, work stuff, personal stuff, all this kind of stuff going on. Yep, yep. For sure. I hear you have some big
trips coming up, including Shift in Croatia. Yeah, I'm in Europe in a couple weeks, and then
Australia, and then off to Canada for a personal trip. But yeah, three international trips in, like,
you know, five weeks or something like that coming up. So I definitely owe my wife some flowers
and maybe like some parenting relief at the end of that.
Yep, yep.
I've actually got a couple of international trips.
I'm going to Canada next week for like two days
just to do some training there.
Say hello to my, you know, to the motherland for me.
Yeah, exactly.
And then heading to Poland and my wife is coming with me. So like,
I'm, I'm speaking at a conference there for one day and then we're going to be there for
eight days and just do Warsaw and Krakow and things like that. And so I've never been to
Poland. That's actually like sort of my homeland. Like my mom is like half Polish. So I'm excited
to go check out Poland. Nice. You get some pierogies.
Yep. Exactly. Exactly. Yeah, so it's
good to see you. I'm also excited because we have a guest on the show today. This is an old friend
of mine. So Jeremy Daly is joining us. Jeremy is the founder of Ampt. He's an AWS serverless hero.
He's a publisher of Off By None and just like someone I trust a lot on his opinions and just
enjoy a lot. So Jeremy, welcome to the show.
Thank you. Thank you guys for having me here.
Yeah, absolutely. Why don't you give, you know, I gave a little bit high level overview of who you are and what you do,
but what you've been working on lately?
Yeah, well, what have I been working on lately?
Well, it's been a crazy two, three years, I guess, post-pandemic, or I guess in the beginning of the pandemic,
I started working for Serverless Inc. on this really cool project called Serverless Cloud.
The idea was essentially to build a platform that you could deploy your serverless framework
apps to that would run on top of AWS, but you wouldn't have to set up an AWS account.
That actually kicked off sort of a whole movement.
Doug Moskrop and I were early
on that project and kind of came up with this idea of infrastructure from code, which was this idea
that was like, hey, can we just read your application code, turn that into something
that can provision infrastructure for you? We pushed that pretty far, eventually split that
off into this company, Ampt, which we co-founded with Amrishamdan
and Rushik and Doug Moskrop.
So we started this company, been working on it for the last two years or so.
A lot of AI noise out there right now, which I'm sure we can talk about a little bit later
and some other things.
But we built a pretty cool product.
That's been the majority of my focus.
I've also been doing quite a bit of consulting work as well,
both from an Ampt perspective
and just as a serverless cloud perspective and so forth.
But if you want ancient history,
started a web development company out of my dorm room in 1997,
turned that into a good-sized agency,
did that for 12 years, got sick of building forms
for other people, and eventually sold that off and built a startup around a social media site
for parents, and then parlayed that into a VP of product position at another startup that was doing
some cool stuff, and then did a whole bunch of ML and natural language processing stuff at another startup. And then just went through that whole cycle and just
was able to kind of follow on all the trends. But I think my biggest claim to fame for me is I'm
always early. So I'm always early on every tech thing that comes out there. And I build something
and I'm like, this is amazing. Why is nobody doing this? And then like four or five years later, somebody else is doing it and they
somehow succeeded with it. So, um, one of these days I'm going to get it right. Um, and, uh,
and I'm going to be right there at the, I'm going to get the timing, right? Let's, let's put it that
way. The timing stuff. I mean, also, you know, hearing your history there,
you spent a lot of time not only on early technologies,
but doing, you know, startups and stuff.
And startup life is tough.
I feel like the seven years that I was a founder added,
it was more like 20 years.
Like, it's like, they're like president years, essentially.
You go in, you're like fresh faced, 18 year old.
You come out two years later, you're like-
All the gray and the beard.
Yeah, right, right. Yeah, absolutely. Well, I'm excited to have you on, because I love talking about infrastructure.
A lot of the people I have on the show are, like, database people or just different types
of infrastructure folks, and I know you have a lot of experience with infrastructure, and especially,
like, working on Ampt, you're trying to orchestrate all this infrastructure for your
different clients, and you see a lot, you talk to a lot of teams and just different things like that.
I guess, like, when you're building Ampt and things like that, what were some of the hard
things about building Ampt?
Well, yeah, I mean, to just get sort of an overview on infrastructure, because this seems
to be something that is an ongoing debate on social as to whether or not AWS is infrastructure.
Like if you're writing CloudFormation or CDK,
are you actually deploying infrastructure or are you just using these cloud services?
You probably can't see on camera, but I have scars all over my hands
from working with real infrastructure when I used to rack and stack servers
in a co-location facility and drive there at 2 o'clock in the morning
when a drive failed and it had to be swapped out manually.
So I love AWS and these services. I still know that somebody is racking and stacking servers and cutting their hands every time they open one. So I appreciate those people
very much. So I maybe have a different perspective on what I consider infrastructure. But the ability
for you to just go ahead and deploy these services,
no matter whatever you want to call them,
I think is quite a magical thing
that too many people take for granted nowadays,
especially a lot of people in startups
where you see startups with a few hundred thousand dollars
that are able to build these amazing things.
It's like, yeah, you wouldn't have been able to do that
15, 20 years ago
because you wouldn't have been able to afford the infrastructure. Although we may have over complicated some stuff
as well. But yeah, but I mean, the biggest challenge, I think, with infrastructure,
from a cloud standpoint, is sort of getting it right, right? Because if you're thinking about
just the lift and shift of, you know, spinning up an EC2 instance, which is the first thing we did,
by the way, when I approached the cloud. I had everything in my own co-location facility. And the first thing I did with AWS was take every service
I had running in this co-location facility and move it over to EC2. And this was actually even
before RDS because we were using a SQL server in the background to power some of the apps. And
we essentially had to spin up SQL Server, install SQL Server.
I remember this is being recorded, right?
I remember stealing the key, the software key, and installing that on servers before they had that,
or on EC2 instances before they had that.
But this was even before load balancers.
I mean, this was like, this is ancient AWS.
EC2 classic, not even VPCs or anything like that.
Right, exactly.
But so, you know, so that was my first sort of foray into AWS infrastructure.
And even that was amazing.
And essentially that cut like our costs from a co-location facility where we're paying about $7,000 a month with redundant electricity and high speed internet.
And obviously, you know, all the cage fees and everything that was there to like $700 a month to have a couple of EC2 instances running, right?
So I immediately knew I'm like, okay, this is game changing that sort of approach.
But so that was relatively simple, right?
It was, you're just deploying servers like we were doing now.
Now, what is there?
300 some odd services at AWS.
They all have to interact with one another.
I mean, the complexity is not in the service itself.
DynamoDB, I mean, I know, Alex, that's one of your expertise.
You've heard of that service?
Yeah.
You know, that's one of those services where it's complex.
I mean, there's a lot of things you have to think about to do, but you don't have to run it.
You don't have to do a lot of configuration.
It's just, it's kind of there.
It works for you.
So that's not the complexity. The complexity is when you're reading off of a DynamoDB stream
and it's going to a Lambda function and you have these permissions and what happens if it fails,
and then how do you drain it or retries and batch sizes and all this other stuff.
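That wiring (stream, function, retries, batch sizes) is concrete enough to sketch. Below is roughly the CloudFormation resource a DynamoDB-stream-to-Lambda connection compiles down to, written here as a plain JavaScript object so each knob is visible. The logical resource names are invented for illustration, and the function would additionally need IAM permission to read the stream.

```javascript
// One cross-service connection: a DynamoDB stream feeding a Lambda function,
// expressed as the CloudFormation resource it ultimately becomes. Logical
// names (MyTable, StreamHandler, DeadLetterQueue) are illustrative only.
const streamMapping = {
  Type: "AWS::Lambda::EventSourceMapping",
  Properties: {
    EventSourceArn: { "Fn::GetAtt": ["MyTable", "StreamArn"] },
    FunctionName: { Ref: "StreamHandler" },
    StartingPosition: "LATEST",
    BatchSize: 100,                   // how many stream records per invocation
    MaximumRetryAttempts: 3,          // what happens if the function fails
    BisectBatchOnFunctionError: true, // split bad batches to isolate poison records
    DestinationConfig: {
      OnFailure: { Destination: { "Fn::GetAtt": ["DeadLetterQueue", "Arn"] } },
    },
  },
};

console.log(Object.keys(streamMapping.Properties).length); // → 7 knobs for this one connection
```

Seven decisions just to connect two managed services is the kind of interconnection complexity being described here.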
So the complexity with infrastructure today is more about the connections between it because
of the distributed nature of that. And that's the biggest, I think your question was,
what challenges have we seen?
I mean, basically the challenges that I've seen building this way
for the last 10 years plus
when we started using more services,
the challenge is getting that complexity
or getting those interconnectivity,
the cross-service connections
to these different pieces of infrastructure,
getting that right is the hardest thing.
And again, that's what we're trying to sort of solve with AMP,
and we can talk a little bit more about that.
But yeah, that's the big piece of it.
One of the things you mentioned,
so when we were just introducing you,
was this idea of trying to like analyze the code to figure out like what services deploy.
So like, what is that workflow?
Like, are you doing, is that what's happening?
You're basically, like, analyzing someone's code base and then figuring out which services, or, you know, what sort of deployment pattern makes sense, and then automatically spinning up the services
and essentially offloading all that complexity.
Yeah, so, and that's actually been one of the biggest challenges.
So our original sort of idea was like,
how far can we push this idea of infrastructure from code?
Which was to say, you know, if you write an API
or you write a database connection,
you know, is there a way that you could extract from that, whether through static code analysis or parsing the AST or something like that?
Could you look at that and say, oh, okay, you're trying to call a DynamoDB table called my table or whatever it is, right?
And you're making a put item request or you're doing a get batch, you know, items or something like that, and be able
to look at all of that information and say, can we then spin up a DynamoDB table for you, call that
and give you the right IAM permissions and some of these things. And that was sort of where we
started, like, is that possible? And it probably is with a lot of research, but also think about,
you know, I mean, our biggest thing has been deterministic deploys. Like you can't just make a code change and then it suddenly drops a table for you or something like that.
So we took a different approach.
And what we said was, you know, for the most part, there's a set of services that you can run in an AWS account that will do most what you need to do.
So CDNs, Lambda functions, you know, a DynamoDB table, an EventBridge bus, SQS queues, a couple
of these different things.
And those things can all kind of interact with one another.
There's some basic permissions you can give those.
And then we said, OK, if we take your application code and just deploy that, then if we parse
the AST and look at the different services that you're trying to call, the different
handlers that you're writing, and we're very much so handler-based, right?
So it's like you write a task handler,
you write an event listener,
you write a storage listener,
you write an API listener.
Essentially, they're listeners, right?
And it's all Node, by the way.
It's all TypeScript, JavaScript.
So we tried to follow that consistent pattern of like,
you know, like it's a handler.
It's an event-driven system.
And so we pick up that handler. We say, oh, you want to run a task. Okay, great.
So we know that you want to run a task and then we see, oh, you've actually said you want to run
this task every five minutes or want to run it on this, you know, this cron schedule. So we can
actually take that, or maybe I want to listen for a particular event or whatever. So we can actually
read that in on the AST and say, okay, well, we need to create an EventBridge rule here, or we need to add a timer here, or, I guess it's called EventBridge
Scheduler now for scheduling things too. But so we have to do this or, oh, you have a data.on handler.
So you want to listen to data changes. So now we need to make sure that we create that connection
for that DynamoDB stream connection and filter that to the right function.
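The handler-to-infrastructure mapping described here can be sketched roughly as follows. This is a toy stand-in for real AST parsing (it just pattern-matches source text), and the handler names (`task`, `data.on`, `api.get`) and inference rules are illustrative assumptions, not Ampt's actual SDK or parser.

```javascript
// Hypothetical sketch of "infrastructure from code": scan application source
// for handler registrations and infer the infrastructure each one implies.
function inferResources(source) {
  const resources = [];
  // A scheduled task implies an EventBridge rule/schedule.
  const task = source.match(/task\(["'][^"']+["'],\s*\{\s*every:\s*["']([^"']+)["']/);
  if (task) resources.push({ type: "EventBridgeRule", schedule: task[1] });
  // A data listener implies a DynamoDB stream -> Lambda event source mapping.
  if (/data\.on\(/.test(source)) resources.push({ type: "DynamoStreamMapping" });
  // An API handler implies an HTTP route into the compute layer.
  if (/api\.(get|post|put|delete)\(/.test(source)) resources.push({ type: "HttpRoute" });
  return resources;
}

// Example application code with three handlers, as described above.
const app = `
  task("cleanup", { every: "5 minutes" }, () => {});
  data.on("updated:user:*", () => {});
  api.get("/users", () => {});
`;

console.log(inferResources(app));
```

A real implementation would walk the parsed AST rather than regex-match, which is part of why deterministic deploys are hard: the inference has to be stable across code changes.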
So we took the approach, and this is, again, look, we're a startup.
We've been doing this for quite some time.
We've figured out some really cool things, but we still don't know how to sell it.
We still don't know how to explain it because it is still very, very new.
And the way that we've been thinking about it lately, because this is essentially what we did,
is we built basically a pre-configured hosting environment for you in an AWS account.
And then it's programmable in the sense that it reads that in from your code.
It looks at what you're doing in your code, sees what handlers there are and so forth.
And then it can actually sort of configure or adapt that infrastructure to do certain things. And then we push that further in
the sense that we said, look, okay, now you've got something like a task. So let's just take the
task example. If that task has to run for 15 minutes, well, you could run that in a Lambda
function. A little expensive to run a task in a Lambda function. So we're like, well, what if you
wanted to run it for 30 minutes? We're like, well, what if we just load that same code into a Fargate container and run it in a Fargate container?
So we did that.
And then we're like, well, what if you have
like a high velocity API and Lambda gets kind of expensive?
We're like, well, AppRunner is a great little service
that we could do that.
So we essentially built a way that we take
your uncompiled code, right?
Just the code that comes right from your Git repository.
And we can compile
that down so that it will run in Lambda functions, it will run in AppRunner, it will run in Fargate,
and then we can dynamically change where the compute is based off of what it is you're trying
to do. So if you have a high velocity API or website, and we do, we have one customer does
like 1.6 billion requests per month for a Next.js app. That just routes right into AppRunner, right?
But if you're running a half-hour task, then it will automatically switch to Fargate and run that in the background.
So it's probably too much information.
But again, this is part of our problem, sort of trying to market this service,
is that it takes a long time to explain, and you can't really whittle it.
We haven't figured out a way to whittle that down into one tagline yet.
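The compute-switching idea above can be boiled down to a small decision function: the same compiled code can target Lambda, App Runner, or Fargate, so pick a target from workload traits. The thresholds below are invented for illustration (only the 15-minute Lambda execution limit is a real AWS constraint), and the real logic is certainly more involved.

```javascript
// Toy sketch of dynamic compute selection across Lambda, App Runner, and
// Fargate. Thresholds are illustrative assumptions, not Ampt's actual rules.
function pickCompute({ kind, maxDurationMinutes = 0, requestsPerMonth = 0 }) {
  if (maxDurationMinutes > 15) return "fargate"; // past Lambda's hard execution limit
  if (kind === "api" && requestsPerMonth > 50_000_000) return "apprunner"; // high-velocity traffic
  return "lambda"; // cheap default for spiky, lower-volume work
}

console.log(pickCompute({ kind: "task", maxDurationMinutes: 30 }));         // → fargate
console.log(pickCompute({ kind: "api", requestsPerMonth: 1_600_000_000 })); // → apprunner
console.log(pickCompute({ kind: "api", requestsPerMonth: 10_000 }));        // → lambda
```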
How do you even like, you know, you're talking about Lambda and AppRunner and Fargate, and
you're like one of the three people I know that is just like an AppRunner like advocate
and just loves it.
And it's like the same three people.
And I love all you guys.
So I feel like there's something that must be there.
But like, you know, there's famously like 17 ways to run containers in AWS. How did you even choose the different ones? Are you just trying out a bunch
of different ones? Are you talking to people and service teams? Like, how did you figure out,
hey, AppRunner is actually like, this is a sweet spot for a certain set of workloads.
Yeah. So when AppRunner was first launched, we were like, okay, this is sort of an
interesting thing. Like, why another way to do it? But here's what's super exciting about AppRunner
is that it has its own built-in load balancer. So if you create an ECS cluster and you want to run
your app using Fargate on the backside of the ECS cluster, you certainly can do that. And that
works great. The only problem is you can't route traffic to it, right? So you have to set up an ALB
in order to route traffic to that. Now, an ALB at
the very least costs, I don't know, $29 a month, something like that. So if you just want to have
an application sitting there, and it's a small app, just to run it in ECS or EKS, you have to
get that load balancer in front of it. So we were exploring this and we said, well, we can do ECS,
like that makes sense. Because we did have people requesting like, can we not run this on Lambda
because, again, it just doesn't scale the right way when
you do certain things. I mean, it scales, it's just
the cost doesn't scale the way you would expect
it to. So we looked at ECS
and then we're like, yeah,
but having that load balance is kind of a pain.
And then we were looking at AppRunner and we're like, wait a minute,
AppRunner doesn't need
to have that. And then the other thing
that is really, really cool about AppRunner is AppRunner has two modes.
So it has this sort of like provision mode and then it has like active mode.
Right. So you can provision containers and it's super cheap.
It's like, I don't know, seven zeros and a seven cents per hour, or whatever it is, per minute or something like that, that keeps a container
provisioned for you. And then as soon as it gets traffic, it switches it to active mode
in milliseconds. Like, it's fast. So it's kind of like provision concurrency for Lambda,
but like a thousand times cheaper, right? And so you can have a container that's running in this provision
mode. And the second it gets traffic, it switches into active mode that there's no cold starts.
It is very, very cool. And you can also, you can say, I want this many containers to actually spin
up into active mode at some particular time. So you can actually say like, I only want 10
containers to spin up. And if you only spin up 10 containers, then you can actually cap your cost,
right? So that's another big complaint where, you know,
somebody puts something on Vercel
and then next thing you know, it goes viral
and then they get $128,000 bill.
This is a really interesting way
to sort of prevent that from happening.
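The cost-capping math here is simple enough to sketch: with a hard cap on active containers, worst-case monthly spend is bounded, unlike pay-per-request platforms where a viral spike means a surprise bill. The rates below are placeholders, not real App Runner pricing.

```javascript
// Back-of-the-envelope bound on monthly spend when active containers are
// capped. Rates are illustrative placeholders, not actual AWS pricing.
function worstCaseMonthlyCost({ maxActive, activeRatePerHour, provisioned, provisionedRatePerHour }) {
  const HOURS_PER_MONTH = 730; // approximate hours in a month
  return (
    maxActive * activeRatePerHour * HOURS_PER_MONTH +       // capped active containers
    provisioned * provisionedRatePerHour * HOURS_PER_MONTH  // cheap standby containers
  );
}

// With a cap of 10 active containers, the bill can never exceed this figure,
// no matter how much traffic arrives (roughly $489 at these placeholder rates).
console.log(worstCaseMonthlyCost({ maxActive: 10, activeRatePerHour: 0.06, provisioned: 10, provisionedRatePerHour: 0.007 }));
```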
So I love AppRunner.
I've heard scary rumors about, you know,
the success of AppRunner internally.
And it makes me really nervous
because I think it's a highly underutilized service
that is very, very cool.
Originally, I think when I kind of looked at Ampt,
I thought of it more as like a platform as a service.
And clearly it's something that's fundamentally different.
So what is it that, you know, someone that's maybe using something like a platform as a service,
like, is cost the main hurdle there? Like, why don't those systems work for these folks today?
And where does essentially AMP, you know, this approach step in and kind of like fix some of
those issues? Yeah. Yeah. So I think part of it is understanding too how AMP works kind of behind the scenes in
terms of different environments and so forth. So essentially what AMP does is it creates what we
call AMP environments. So those run in an isolated AWS account, and that has a bunch of pre-provision
services, like I said. So that's an AMP environment. That communicates back with the AMP
orchestration service. So basically what the AMP orchestration service does is it will create those environments
for you. It does all of the routing, sets everything up. It manages all of the user
permissions and so forth. So as a user, when you spin up a new application in Ampt, that will connect
to a provisioned AWS account for you and set up that Ampt environment.
And then that's dedicated to that one,
basically one stage or one environment of that app.
Then if you publish that to like a dev environment
or to prod or whatever,
it'll spin up another AWS account
with another AMP environment that's completely isolated.
And the idea is that all of those environments are,
they're the same, right?
So if you build something in your dev sandbox environment,
that's all synced instantly to the cloud.
You have this really cool interactive way of building apps.
That, when you publish that,
that will run the exact same way in prod
as it will run in dev or in your sandbox. And the reason for
that is because we want that production parity, that sort of high fidelity sandboxes, as we call
them, so that you have the capabilities to do that. So, so that's the that's the idea behind
how we sort of do it. So when an app is running in an AWS account, it pretty much runs completely independent of the AMP service.
So we do send some telemetry back.
If it's a preview account, we do some quotas and some of those things.
So there is some communication with the main service.
But the goal here really is to say this is running in its own separate AWS account, which eventually we hope more people will own their own AWS account as opposed to us managing it.
But the orchestration of actually setting those up and deploying the code,
CICD, by the way, that's all built in, like you don't have to worry about any of that other stuff. It just works. So we tried to simplify the whole software development lifecycle.
So going back to your question about PaaS. So the thing about PaaS is I think there's some
really great PaaSes out there, right? I mean, Vercel, people love
Vercel. It works really, really well. If you're building Next.js, which again, we can talk about
that maybe, but if you're building Next.js, Vercel is the place that you probably are going to want
to host it if you want to take advantage of the ecosystem that they built. There's a couple of
other PaaSes. I mean, I would still consider, or I do consider, something like Neon is still technically a PaaS, right?
It offers an infrastructure service, but it still has some of those PaaS sort of capabilities to it.
So I think the problem that some people face, and I think others have pointed this out as well, or I guess problems that other people encounter, is if you have to connect 10 different services,
so if you're using a workflow service, which I forget, is it Inngest? Defer.run, I think,
is now part of Inngest or something like that. So you get Inngest, which is the ability for you to
run these workflows remotely. I mean, how much data do you want to pass into a third-party service?
I'm thinking about workflows. The workflows we run are super
data-intensive. That's why we run workflows, right? Because there's so many steps and so
many checkpoints and things like that that you need to make sure and verify. Often, those
workflows need to have access to very secure data or bits of information that you wouldn't
just ship off to a third party. So there is something about egress, right? There is,
do I have to contact this other service?
How long does that API take? Can I run things sort of internally versus having to use a third
party service? And this is what we're trying, this is part of the approach with AMP, just to say,
AWS does pretty much everything that these other PaaSes do. It's just you're often building some custom or bespoke solution in order to make
it do what that other thing does. So that's where it comes back to the idea of productized patterns,
which we talked about briefly before. But the idea of a productized pattern is to say,
if you can encapsulate the thing that a PaaS does, or a part of a PaaS does,
if you can encapsulate that thing and you can productize it, meaning that all of the interconnections between the services and all the failovers and
all the fallbacks and all the retries and all that kind of stuff is there, and you can kind
of guarantee that that will do what you need it to do, but it will run on AWS, that's what we're
trying to do. So if you need a queuing solution or you need a caching solution or you need some
of these other things, let's first look at, can I run this directly on AWS within my own account
with access to whatever services I need to
without giving some third party more access than they need?
Can I do that all there?
And then, and only then, if you can't do that,
let me reach for a third party tool.
So I do think that PaaS is important. And I do think PaaS is very different than SaaS,
too. I mean, I know we all know this, but like, there's some things that sort of blur the line,
I think. But I do think that PaaS is a great choice for a lot of people to get started with
things. But I do think that when things become more serious, it does potentially pose
a problem. And that's part of what we're trying to do is we want to eliminate
the graduation problem, right? So if you build something on Vercel and it works really great,
and maybe you've got some backend APIs that are written on their serverless functions or whatever,
and that's fine. But at some point, if you graduate beyond that and say, well, we need to do more than
what we can run in these serverless functions, then you have to start building a backend. And
then where do you build that back end?
And then do you move all of your back end off of Vercel in order to do that?
And then you're just using Vercel to connect to the front end.
And then eventually maybe you move off of that.
So I don't know.
That's my thought on it.
But what our biggest, and I think this podcast talks about VCs, right?
That's been the biggest, excuse me, the biggest challenge we've had with VCs is this notion of like, well, when do people graduate from your service? And we're like,
we hope they don't. We hope to build a system in which they don't have to graduate.
Yeah. And that's the main challenge I've seen with PaaS is that graduation problem. It's like,
at some point, if you are successful, you're just going to, for either cost reasons or
scale reasons or whatever, have to
graduate from those platforms to something where you can go in and really, like, have more control
over what you're doing, and have a better sense for, you know, where you're spending as well.
And, you know, that's going to be a painful migration at some point in most companies' journeys.
Yep, absolutely. One thing that is interesting to talk about
is just like, hey, each AMP environment
is getting its own AWS account,
which means you had to build a bunch of background tooling
to manage a lot of AWS accounts.
I guess, was that a major challenge?
Am I overestimating what that was like?
Or is that a total pain to do a lot of that?
No, that's actually huge.
I mean, what we did, and we actually used the serverless framework
for a lot of this. So we used serverless framework tooling, and that was
part legacy sort of because we were at Serverless, you know, Serverless Inc.
when we did it. But actually, it's been fine because the other thing that we've done is
most of the convenience features that the serverless framework
has, we use that.
But then we also use just the resources section.
I think if you're familiar with the resources section of the serverless.yaml to define a lot of cloud formation.
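As a rough illustration (the service and resource names here are made up, not Ampt's actual templates), the resources section simply embeds raw CloudFormation inside the Serverless Framework config:

```yaml
# serverless.yml sketch; names are hypothetical.
service: example-service

provider:
  name: aws
  runtime: nodejs20.x

functions:
  api:
    handler: handler.main

# Everything under resources is passed through as raw CloudFormation,
# so you get the framework's conveniences plus full control of the stack.
resources:
  Resources:
    ExampleTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: example-table
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: pk
            AttributeType: S
        KeySchema:
          - AttributeName: pk
            KeyType: HASH
```

The appeal of this split is exactly what's described here: the framework handles packaging and deployment, while the stack itself stays in plain, auditable CloudFormation.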
And we talked about Terraform, we talked about CDK, we talked about some of these other things. We wanted to be careful with Terraform, but with CDK we were like, it's just going to compile down to CloudFormation anyways. And because we have to be so strict about making sure that every one of these environments is in the exact same state, we said, let's just write it in CloudFormation. So most of what we have is actually CloudFormation behind the scenes. And again, we're all AWS, right? We don't do anything on GCP or Azure yet, and we don't really have any plans for that. But yeah, the tooling behind the scenes is quite a bit. One of the things that we did was take a lot of the orchestration stuff that you would normally run on a CI/CD runner, like GitHub Actions or something like that, and bring it all into Lambda functions and state machines to do a lot of that workflow management. So if you're familiar with Michael Hart, he built a thing called Yumda.
And we actually use Yumda, which is basically yum for Lambda, as part of our build tool. So when we build your application for your environment, your build runs in that environment using a Lambda function, along with some of the tools from Yumda. Again, from a security standpoint, everything runs in your own account, and it builds itself there. That's part of how we do the deployment pieces for your actual application. And then we use similar stuff to build most of the deployment capabilities, and all those workflows that use the Serverless Framework and CloudFormation run from our centralized accounts that reach out and do that. But we had to build an account vending machine, essentially. We had to build a whole bunch of stuff that maintains the state, and a bunch of workflows that go and run these scripts, mutate the infrastructure, and read back the state of the infrastructure to make sure things work correctly. So it was not a simple task. It's a very complex system that does all that.

Yeah, for sure. And man, shout-out to Yumda. Michael Hart, I miss him in the AWS ecosystem, because he was always making some wacky projects, really pushing the limits of it.
We need it.
We need him back.
I want to ask about what I'd call third-party infrastructure providers. You have AWS, GCP, and Azure, and then you have these other ones that are providing a smaller piece of infrastructure. Are there certain ones that you're leaning into? I know y'all integrate with some of those in Ampt. Are there some you like and trust and say, hey, that is way better than what we can get on AWS? Or how are you thinking about that?
Yeah, so I think I've been disappointed with AWS's pursuit of, quote unquote, serverlessness. I've talked about this a number of times, and I've talked to all the service teams too, so they know how I feel. But essentially, I think that AWS hasn't done what it needed to do to build these services. That's why we see services like Momento and Neon and some of these other ones: because AWS hasn't built an equivalent version of that.
And what I love about AWS from a serverless standpoint is that we literally deploy CDNs into, like, seven different regions, which they're not always super happy about. We deploy ACM for our certificates.

They being AWS, or they being your customers?

They being AWS, yeah. We had to throttle some of the ways that we use their data planes and control planes. But we spin up the CDN, we spin up a bunch of Lambda functions, a bunch of DynamoDB tables, all the ACM certificates, a ton of infrastructure. And for that infrastructure to sit there, with no data storage or anything like that, costs exactly $0. And then as soon as you start to use it, it starts to cost you a little bit of money.
But most of that stuff is on-demand, or you only pay when you're storing some information. That is magical from a developer workflow standpoint. Say I have a bunch of developers, and I need them all to have their own AWS accounts so they can play around with things and not worry about shared resources, or the noisy-neighbor thing, or resource contention. I want to give them all their own environments, but I can't spin up an RDS cluster in every single development account, right? I can't spin up an ElastiCache service in every one, or an ECS cluster with a load balancer running. It just gets too expensive. Maybe there are people who love to just throw money away, but I'm not one of those people, and I think a lot of other companies aren't either, especially startups. So the idea is that if I can spin up all of that infrastructure for free and only pay when I use it, that's the ideal situation for me.
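The back-of-the-envelope math here is simple. A quick sketch (the rates below are made-up placeholders, not real AWS pricing): an always-on cluster bills for every hour whether or not a developer touches it, while pay-per-use scales with actual requests.

```python
# Hypothetical rates for illustration only -- not real AWS pricing.
HOURS_PER_MONTH = 730

def always_on_cost(hourly_rate: float, instances: int = 1) -> float:
    """Cost of infrastructure that bills whether or not it's used."""
    return hourly_rate * HOURS_PER_MONTH * instances

def pay_per_use_cost(requests: int, price_per_million: float) -> float:
    """Cost of infrastructure that bills only per request."""
    return requests / 1_000_000 * price_per_million

# A mostly idle dev environment: ~10k requests/month.
idle_dev = pay_per_use_cost(10_000, price_per_million=1.25)
db_cluster = always_on_cost(hourly_rate=0.10)

print(f"pay-per-use dev env: ${idle_dev:.4f}/month")   # fractions of a cent
print(f"always-on cluster:   ${db_cluster:.2f}/month") # ~$73 even when unused
```

Multiply the always-on number by one account per developer and the argument for pay-per-use dev environments makes itself.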
There is no good solution for caching. ElastiCache Serverless is not a serverless service. Aurora Serverless is not a serverless service, in the sense that I can't just say, hey, create a table for me, or create a cache for me, and then I'll pay for whatever data I'm storing, plus a small fee every time I hit it. That just doesn't exist. It does exist with something like Neon, which is pretty cool, and they're doing some pretty cool things. PlanetScale had something similar to that; I know they have a sort of baseline charge now. And again, I say this all the time: I have no problem paying for services when I'm using them in production, but I need a high-fidelity version of that service that I can use in a development mode that doesn't cost me the full price of running it, even if it's at a lower level. I need that capability. So if you're using Neon, you get that for Postgres. If you're using Momento, you get that for caching, as well as for their new topic service. They're working on their new storage service as well, which could be interesting, although for storage, S3 is a great service. It's so good. So in terms of other ones that I trust: caching is one of those things for me that's very, very lightweight. The whole idea is that I'm not going to use the cache as a database. So if the cache goes away and I have to go right back to the database, or I have to rebuild the cache, that's okay. I'm willing to take that risk. A database, a little bit less so, but I do like what Neon's doing.
One of the things you mentioned earlier was some nervousness around your love and use of App Runner. And I think that leans into this thing that's come up recently around deprecation of AWS services. CodeCommit, CloudSearch, Snowmobile, and WorkDocs are all in these deprecation cycles right now, and there's a bunch of other ones as well. I'm kind of curious: what are your thoughts on this recent move from AWS to start sunsetting some of these services, with maybe not a ton of communication about what they're doing?
If I was hired as the CEO of AWS, I would cut hard and I would cut deep, because the service sprawl there is incomprehensible to the average person. And there are a lot of services that I would cut. So I honestly think it's great. I guess there are two ways to approach this. I don't think it's great if it's a service I'm using. I'm sure there are people out there who are using WorkDocs who are like, come on, what's happening? You know what I mean? Or using one of these other services.
Yeah, it's like every time Google kills something: there's a subset of people who really love that thing, and they're very upset about it, of course.

Right, and it's going to happen anyways. The communication was not good; we know this. Jeff Barr said, sorry, we made a mistake, we should have been a little bit clearer about this. So the communication wasn't great. But I really wish that before they cut any of those services (and maybe they did do this, and we won't see any more), they would have looked at all these services and said: this is not a good business unit, this one doesn't have the adoption we need it to have, and so forth. Then they could do one huge cut, cut really, really deep, and move a lot of those people over to other services that could use more attention.
Yeah, a coordinated effort on this would make a lot more sense. Like, hey, we're cutting these 50 services, you've got 18 months, some sort of longer deprecation window. I've led a couple of deprecation cycles in my career, and they're always challenging. But the key is you've got to give people enough time and a heads-up about it, and then remind them constantly that the thing is coming.
And I think it would be interesting if there was a clear strategy around it, too. Why those services in particular? If it is literally a business-unit thing, then Lambda could be on the chopping block, right? Because Lambda is not a very good business model, not for AWS. And those are the things that make me a little bit nervous. But obviously, to me, Lambda could be a loss-leader service that drives revenue through the other services.

It's just so much glue for everything else, yeah.

Right, it's the glue. So I'm not worried about Lambda. But I am worried about services
like App Runner and some of these other ones. Because, again, App Runner does a lot of really great things, but as a standalone service, it also potentially tries to do too much, right? You can build from source with App Runner, which is kind of cool. You just connect it to your GitHub repo and it will literally do all the builds for you, containerize everything, and then go ahead and do the deployments. That's really great. I don't need that, but maybe somebody else does, and maybe that makes sense. But I also think that when you invest in the bells and whistles on some of those things, it almost breaks down the core of what it is. If you just said App Runner is an auto-scaling service that you can provision containers to, with the load balancer built in... it's pretty much only for HTTP. I mean, you can connect to it and run other services, but through an HTTP interface, right? So I think it's a great service, but trying to make it a standalone thing, like Lightsail or something like that, I don't think makes a lot of sense.
And then there's the larger question, and this isn't really about deprecation, but more about consolidation, because there are 19, maybe 20, ways to run containers on AWS now. You have Lambda and Fargate specifically moving very, very close to one another, right? Other than the cold start on Fargate versus the cold start on Lambda... well, I guess they might run on different infrastructure now, because I know that App Runner runs on top of Firecracker, so it's got a similar VM underneath, and I know Fargate can also run on top of EC2. There's all kinds of weird stuff. But they're moving closer and closer together in terms of how you can trigger these things. If you have an asynchronous process that needs to kick off, you can start a Fargate task with EventBridge, right? Say it's some image processing that I have to do on an image that gets uploaded to S3. Use EventBridge to trigger that as a Fargate task instead of running it on Lambda, and maybe run a whole bunch of them at the same time. So the way that these get triggered, their use cases, the overlap between some of them, the complexity we've added with things like rolling windows... it just seems like there could be an overall strategy of, here's how you use compute on AWS, and here are the services that interact with that compute. I'd love to see a bigger strategy around that. And then the other thing is stuff like the call center app that they have, and then 475 different services under SageMaker. It just seems like the two-pizza team thing was great, but it may have run its course.
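The S3-to-Fargate pattern described above can be sketched in CloudFormation roughly like this (names and ARNs are placeholders; a real template also needs the cluster, task definition, IAM role, and EventBridge notifications enabled on the bucket):

```yaml
# Sketch only: an EventBridge rule that launches a Fargate task
# whenever an object lands in a bucket. All names are made up.
ImageUploadRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source: ["aws.s3"]
      detail-type: ["Object Created"]
      detail:
        bucket:
          name: ["example-upload-bucket"]
    Targets:
      - Id: image-processor
        Arn: !GetAtt ProcessingCluster.Arn     # ECS cluster ARN
        RoleArn: !GetAtt EventsInvokeRole.Arn  # role EventBridge assumes
        EcsParameters:
          TaskDefinitionArn: !Ref ImageProcessorTaskDef
          LaunchType: FARGATE
          NetworkConfiguration:
            AwsVpcConfiguration:
              Subnets: ["subnet-placeholder"]
```

This is the overlap being described: the same upload event could just as easily target a Lambda function, which is why the two compute options feel like they're converging.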
Yeah, I think AWS is like the land of a thousand CEOs, right? Each PM is basically spinning up these separate projects. And I wonder, at some point, without particular controls in place, you just have a lot of people who are maybe trying to make their mark or go for that promo cycle: we're going to spin up this service, even though it's pretty much the same as this other service, but it has this particular take on it. And eventually you end up with five things that do similar stuff, or that do the exact same thing.

I love the story about RDS Proxy, if you've ever heard it: two teams were building RDS Proxy without knowing that they were both building the exact same service. There's more nuance to that story, but essentially one of them had to kill their project, and the other one took over. But I think you see that, and I do agree with the how-do-I-make-my-mark thing. If you have individual business units who say, oh, wouldn't it be cool if you didn't even have to compile your container locally and you could just build from source? That's a really, really great feature. But the question is, who needs it? And how does it fit into the larger strategy of what you're doing with your cloud business? So I don't know.
I think there are two things I've been kind of frustrated with at AWS the last couple of years. Number one, when a service comes out, there's no clarity on whether it's actually ready for prime time, or whether they're just putting the feelers out. And for an existing service, is it still getting worked on and looked at? It seemed like with Cognito, nothing happened for six years. And there are some other services I don't want to name, but you just talk to people and it's like, yeah, that team is kind of a mess, I would not bet on it. You almost have to go through the grapevine and hope you know somebody who knows somebody to understand what's actually getting investment and what's not.
Yeah. I mean, there were also 27,000 layoffs or something like that last year from AWS, so who knows how much of the crew for some of these things is still there. I'm sure a lot of that was not necessarily engineering resources. But despite that, I kind of like how AWS will roll things out in preview as an experiment: hey, we're thinking about doing this thing, here's a little demo of it. Sometimes it just goes away when people start using it, but they're clear about it: don't use this in production, because it might go away; it's not ready for prime time. Once things go to GA, though, they're pretty solid, even if they don't do quite everything you need them to do.
I do think there's a lot of tech debt at AWS, though. It's kind of interesting. We had a situation, the one I mentioned about throttling against the control planes. Some of the things they built as convenience features, there's just a lot that has to happen behind the scenes. You make one API call and they make 6,000, right? And it takes a long time. We were doing something on their control plane, I think it was setting up certificates or something like that, and we started getting throttled. We had a conversation with the team, and they were great about the whole thing. But essentially their quota was like 50 requests per second, or 50 requests per minute, something very low. And we're like, oh, so we can do 50 requests per minute? And they're like, no, no, no, that's for the entire region. So anybody that's using this control plane shares that limit. And look, I expect those sorts of things. The duct tape and popsicle sticks that I've seen at companies, it's real. So yeah, I think there's a lot of tech debt there, a lot of things that they just have to figure out.
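When you're sharing a region-wide control-plane quota like that, about all you can do client-side is retry with exponential backoff and jitter. A minimal sketch (the `ThrottlingException` name mirrors AWS's error convention, but the API call here is a stand-in):

```python
import random
import time

class ThrottlingException(Exception):
    """Stand-in for the throttling error a cloud control plane returns."""

def call_with_backoff(fn, max_attempts=6, base_delay=1.0, max_delay=60.0):
    """Retry fn() on throttling, doubling the delay cap each attempt with full jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ThrottlingException:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to the capped exponential delay.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))

# Example: an API that is throttled twice, then succeeds.
calls = {"n": 0}
def flaky_certificate_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottlingException()
    return "issued"

print(call_with_backoff(flaky_certificate_request, base_delay=0.01))  # issued
```

The jitter matters in exactly the scenario described: if every tenant on a shared regional quota retries on the same schedule, they all collide again on the next tick.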
But I do like that they release new stuff, don't get me wrong. I just agree with you that they should also be spending more time on some of the products that have been out there for a while.

Yeah, I respect that they don't seem to do a lot of the vaporware you might see at other companies. And the fact that they had a limit of 50 requests per second for everybody... I don't know how it works at AWS, and you might have more internal information on this than me, but I know how it works at Google: if you're building a new service, of course Google has infrastructure to scale to, I don't know, a million QPS or whatever, but it's not like anybody can just do that. You have to have the demand to get there, and go through a certain process to spin up a service of that size. So generally people start really, really small and build up as the demand grows. I would assume AWS is similar, where they're trying to control costs by limiting things, rather than letting anybody spin up a service that's infinitely scalable.

Right, right. Yeah.
Well, the good thing about the tech debt is we have Amazon Q there to help us upgrade all of it. I've seen a lot of Andy Jassy and Matt Garman talking about how it's upgrading and writing a lot of code. Jeremy, I know you're sort of an AI skeptic. Where are you at in using AI in your development cycle?
Yeah, I use it all the time. I have ChatGPT open, as well as Copilot enabled now. I disabled Copilot for a while because it was giving me a lot of junk, but I re-enabled it and use it quite a bit. I do get very frustrated with some of the output, though. The known knowns are very easy: it's in the documentation, it's things I have stored somewhere in my brain, where I at least have the general idea of how to do them. And then there are the things that I kind of know about, the known unknowns. And when I ask those questions of ChatGPT or Copilot, I am honestly very frustrated with the answers I get. As a quick example, and I posted this last night on Twitter: I've been spending a lot of time with containers, because we've been working on the container packaging format for deploying Lambda functions, so that you can have a bigger package size. There's been some great work AWS has done around caching layers and some things that help the cold starts. AJ Stuyvenberg has posted about this as well: it's actually pretty fast to run Lambda containers now, depending, of course, on the size of the package. So I've been doing a lot of work on that.
And we were using Crane, if you're familiar with Crane. It's a container image tool from Google, sort of like Docker Desktop in spirit, but much more lightweight. It's very cool. Anyway, we're just appending layers and adding some images for the app layer and some of those other things. Very straightforward, but the documentation is terrible, and you're going through trying to find what you need. So I asked the question in ChatGPT, something like: is there a way for me to append a layer to an existing image in an ECR repository? And I thought I prompted it pretty well. And it's like, no, unfortunately there is no way to do that; you have to rebuild the image, upload it to ECR, tag it, blah, blah, blah. And I'm like, I supposed you would know, and not just give me a manufactured reason. But then I said, what if I used Crane? And it's like, oh, you could use Crane to do exactly what you just asked, even though I told you that you couldn't do it. And that's my biggest concern with AI in general.
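For reference, the Crane workflow being described looks roughly like this (the registry, repository, and tag names below are placeholders, and flags should be checked against the current crane docs):

```shell
# Authenticate Crane against a (hypothetical) ECR registry.
aws ecr get-login-password --region us-east-1 \
  | crane auth login --username AWS --password-stdin \
      123456789012.dkr.ecr.us-east-1.amazonaws.com

# Append a tarball as a new layer on top of an existing image and
# push the result directly -- no full rebuild required.
crane append \
  --base 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest \
  --new_layer app-layer.tar \
  --new_tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:patched
```

This is exactly the operation ChatGPT first claimed was impossible: appending a layer to an image already sitting in a registry.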
If we want to talk about AI as an assistive technology for helping us code, I use it quite a bit. Every once in a while I'll write a comment in my code and say, write me a loop that does X, or write me a reduce function that does X, Y, Z, and it usually spits out pretty good stuff that I can use. But I love the term "AI slop," if you've heard it: basically all the junk that gets produced. I've posted about this a number of times. I'm going to scratch my eyes out if I have to look at one more AI-generated blog post title image. I know that in the past it was, oh, Google loves when there's an image, social media loves when there's an image associated with your blog. And it's great that people can do that now, except they're all the same and they all look terrible. The other thing is, I curate a newsletter, and I probably review 300 to 400 articles per week. Most of them I dismiss because they're not quite on topic or they don't look great, but I probably read 70 to 80 articles per week and give them a quick overview. I know I could use ChatGPT to summarize them if I wanted to, but I actually like to read them and see what they are. And I come across ones that are clearly generated by AI. And my problem with that is what the coined term captures: AI slop, the stuff that just gets generated from AI.
It's AI-produced language or images and so forth. And this is something I said a long time ago; sometimes I'm ahead of the curve here. Right after this started and we started seeing these gen AI articles, I asked: what's going to happen when AI trains itself on its own output? I asked this in like January of 2023, right after people started using ChatGPT for this. A few articles came out that mentioned it, and then there was a pretty big study that Rice University did, sometime last year, maybe August of 2023. They coined this term "model autophagy disorder." And basically, it's like what happens when you inbreed animals, right? That's why all Dalmatians are pretty dumb: they inbreed them to keep the line pure. The problem is that if a model trains on its own output, because of the way these models work, the quality degenerates every cycle, and the output actually becomes less and less diverse.

Yeah, garbage in, garbage out, right? If you're training on lower-quality information... because eventually the amount of written content on the internet by humans is going to be eclipsed by the amount of written content by AI, and that becomes the dominant data input for training the models. So you get like a Mobius strip effect, essentially.
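That train-on-your-own-output feedback loop is easy to demonstrate with a toy model. The sketch below is a deliberate oversimplification (a one-dimensional Gaussian, not the Rice study's methodology): each generation fits a distribution to the previous generation's samples and then replaces the data with its own output, and the estimated spread collapses.

```python
import random
import statistics

def next_generation(samples, n):
    """Fit a Gaussian to the samples, then replace them with the model's own output."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # the "human" training data
start_spread = statistics.stdev(data)

for _ in range(2000):  # each pass trains only on the previous pass's output
    data = next_generation(data, 20)

end_spread = statistics.stdev(data)
print(f"spread: {start_spread:.3f} -> {end_spread:.3g}")
```

The diversity loss compounds because each refit slightly underestimates the spread on average, and there is no fresh data to pull it back: the same dynamic as the faces all converging on one look.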
And the models have already stolen all of the content on the internet. I use the word "stole" because I do think that's another problem I have with how some of this works: gen AI models couldn't do what they do if they didn't steal the work of other humans. And if they had licensed it, they never would have been able to do it, right? So they had to steal that work, and I think it's a copyright violation. I might be in the minority, but I think it is. They needed all that information. And the point I was going to make about this MAD, this model autophagy disorder, I can't pronounce the word.
Anyways, what they basically did was take a bunch of faces, people with different facial features, different skin tones, things like that. They ran them through the model and said, generate a new set of people based off of this, and it spit out a bunch of images of people. Then they trained it on those images and generated again. And by the time they got to like the fourth iteration, everybody looked the same. It was like we were all blended into one big gene pool. Then they did the same thing with handwriting. They took all these samples of people writing A, B, C, whatever, ran the letters through, and said, okay, generate the letter A. And four times through, or whatever that number was, all the letters came back looking exactly the same, right? Because it just degrades over time.

I think it's because it's less creative. If you ask ChatGPT or something like that to create a movie script or an idea, a lot of the time it's fairly generic. It'll sound decent, as maybe a first cut, but you can tell, especially if you see a lot of these things.
Your story earlier about reviewing so many articles a week reminds me: I was reviewing conference abstracts a little while ago, hundreds of conference abstracts, and probably half of them were clearly written by AI. And I don't mind people using AI as an assistive technology to help improve their writing; I do that all the time. But just giving it a prompt and then copying and pasting the output and sending it to a conference is the wrong way to go about it. And it's very clear when you do that, because there are signals, just like with the images. There are certain signals that make it clearly stand out as something that was generated by an AI.
Yeah, and this is how I sum it up, and it's harsh, maybe, because I know there are investors who have put billions and billions of dollars into this. I think AI is an amazing technology, and there's so much we can do with it. But gen AI is a highly sophisticated parlor trick. The output you get from gen AI is enough to fool you into thinking it's good, right? And it's very, very cool what it can do. But the quality of that output is much lower, and unoriginal. That's my problem. If you want to take a CSV file, throw it into ChatGPT, and say, hey, can you break this down into categories for me? It is great for that. That's amazing. There are so many time-saving features. Let's use it for that stuff and focus on those technologies and use cases. Let's stop generating content with it, stop generating images, stop generating videos. Oh God, the videos give me nightmares. There's no expression or interaction between the characters; they're just staring off and kind of moving.

And then they just morph randomly. That's the weirdest.

Oh, it's so weird. Those are the terrifying ones. Yeah, when your elbow goes through your face, it's a little off-putting.

Do you ask people in your life, outside of your programming circles: are you using ChatGPT, AI? Do you see a lot of adoption from non-programmers?
Generally? I mean, my kids use all the AI features that are built into Snapchat and all the other things, you know, the filters and some of that stuff. But honestly, no. My wife's a teacher for elementary school, and she's like, oh, I know what it is, but I've never used it, right? And most of the people I know who are not in the tech industry will use it as a novelty, you know what I mean? My wife's cousin generated a poem with it that he put in my daughter's graduation book. Just random stuff like that. But I mean, again, I think people are using it.
So certainly students, as you said, I mean, I think they're using it quite a bit.
You've heard the mushroom book story, right?
Did you hear the mushroom book story?
No, I don't know the mushroom book story.
No.
Somebody generated a book that they published on Amazon.com that was how to identify mushrooms. It was all AI-generated, and a family followed it and got really, really sick from it.
Oh, my goodness. That feels like there's a really high chance of failure with that.
Right. Well, with self-publishing books, the whole point is you used to have to go through quite a bit of work to get a book published. I know you self-published your book, Alex, but you didn't write it with ChatGPT, and you had the integrity to do it. It wasn't there yet.
I actually was sitting next to Alex in Seattle at a dev conference, and I'm like, what are you doing? I'm writing a book on DynamoDB.
Anyways. But no, I think there's obviously a danger to generating a lot of slop. And then the other thing is, again, it's just not accurate. It's not just that it might hallucinate and can be wrong; it is almost certainly going to be wrong in some way, even if it's a subtle way, and in some cases that could have disastrous consequences. And I'm not worried about Skynet. I used to joke about that. Because that's the other thing that I'll say too: I think AI, where we are right now, we may have reached the peak. We've trained it on every bit of human-generated information we have. If we train it on information that it generates, we already know that's going to just degrade the quality of it.
I think what we've done or what companies like OpenAI have done is they've taken that.
And again, I don't want to sound horrible about this because I'm usually a pretty positive person.
And I feel like I'm talking about a lot of negative things here.
But this idea of taking that parlor trick, which is very, very good and does a lot of really interesting things.
It's very fascinating, right?
It's very easy to kind of distract people with it.
They've taken that and then they've built all these things around it, like RAG and all this other stuff, asking, well, how do we make it so that what it actually spits out is accurate and true?
And what other cool features can we add and so forth?
What other safeguards can we put in?
I think that's going to be the majority of what these companies are going to need to
spend their time on for the next few years. We were supposed to have self-driving
cars 11 years ago, and they keep telling us next year because the 90% is so easy. I have built so
many things to 90%. It's that last 10% that's super hard. The last 10% is, you know, hitting a kid walking across a crosswalk. That 10%, that's important.
You need to get that right.
And I think that's where we are
with the idea of Gen AI and so forth.
And I know it's doing some cool stuff.
Runway ML is doing some really cool stuff with video.
Sora, again, is still nightmare-inducing stuff.
But like, there's a lot of really cool stuff
that's happening out there.
But how much better is it going to get when it comes to the Gen AI stuff?
I think we've reached a point where
I don't think it's going to get much better. It's going to be very, very incremental at this point.
And AGI, the singularity, that is 100 years off
in my opinion.
Yeah, I mean, I think I agree in terms of AGI.
I think there needs to be a new innovation that,
like there needs to be additional step function.
But I think what's going to end up happening,
and I think we're already seeing this,
is that really the investment over the next few years
is going to be these assistive technologies, where developers have already jumped in with the copilots and stuff.
And actually, I was reading Stack Overflow's annual developer survey recently, and basically 66.2% of the respondents, out of over 30,000 responses, said that the biggest challenge with these AI systems is trust.
But everybody says that it increases their productivity. So even though they don't trust
the output, they still see value in terms of it increases their productivity. So there is value,
I think, from the assistive technology, just like when it comes to the self-driving car analogy,
we had forms of assisted driving for a very long time, and those are valuable too. If I have something that makes a sound when I'm about to back into somebody, or that automatically brakes for me, all these things are really, really helpful. But there's a big difference between that and taking a nap in the back seat while my car drives me around. And I think that, like you said, that last 10% is...
We might not even have the technology essentially
there today to solve that last 10%, 20%.
But there's tremendous value in these assistive technologies,
whether it's being able to comb through massive amounts of information
to get a summary or take data that historically
has been locked away because we don't have a way of processing it, to be able to pull some insights out of it and put it into something that we can actually run queries and do analytics against. There's a lot of value in those types of things.
I'm a hundred percent there with you. A hundred percent. And that's where I use it the most, right? You take an article, you summarize the article, you generate maybe some key points of the article, things like that. Or you take large sets of data, run it through, and do things that would take a human a long time to do.
But the whole point here is that it's knowing that that needs to be done. It's knowing that you need to analyze the data to look for something. It's like having that.
And I don't know if you saw this,
but there was maybe a couple of weeks ago,
somebody posted, I think it was their eight-year-old daughter
or maybe 10, something like that.
Anyways, that used, I forget,
maybe it was Cursor or something like that
to build a Harry Potter chatbot thing.
And it was adorable, right?
I mean, the girl was adorable, right?
But she was just typing in prompts, and it's making changes to the code, and she pushes it up to Cloudflare.
And I made the joke.
I said, AI is not going to take your job.
An eight-year-old using AI is going to take your job.
But it's sort of like even that, and I think you guys mentioned the Josh Pigford thing
where he wrote a whole game,
using Cursor or whatever. And that's very cool. And it's neat that you can do those sorts of things, that you can take somebody, and I think Josh Pigford's a pretty good developer, but, you know, an eight-year-old or a ten-year-old or whatever, and be able to build something that is useful. Now the question is,
was it good, right? Like, did it just work? Or was it like a truly optimized something that would
scale and do these other things? Like, I think that's a whole, that's a whole other question.
I know, I think Cursor has like a 2,000-line context window or something. There are limited context windows for some of these things. Even with massive context windows, can you actually make a lot of these changes? And then the other question is, how much does natural language translate into it when you start thinking about APIs and POSTs and PATCHes and PUTs and things like that?
And what is it actually using behind the scenes?
Is it doing the right thing?
Is the security there? All these other things that I just don't think are shown in these really cool demos, right? Don't get me wrong. But for that type of stuff, I will use it as an assistive technology. I will not trust the output. I will use it and verify that it works. And then also, what's
really great is sometimes it inspires me, since it gives me something where I'm like, oh, I didn't even know that was a method I could use there. And then I'll go and look up that method and learn more. So I do think it can make you a better programmer, but it doesn't make you a programmer, if that makes sense.
Yeah. Do you think AI is affecting the job market for devs right now? Like,
are we seeing a softer job market because of AI?
I think there are too many companies that are buying into the hype that AI is somehow going to replace developers.
I think without a developer steering it, or at least having their hands on the wheel and one foot hovering on the brake, these things can't do it on their own. They're not automated yet. But even if you said, hey, I used to need 10 people and now I need two that have a Cursor subscription, you know.
Yeah, so that worries me.
That does worry me. And I think, was it Charity Majors who wrote the article about, you know, basically getting rid of junior engineers or something like that? I don't remember.
Somebody wrote an article about, and I apologize, Charity, if you're the one who wrote this article and I'm not giving you
proper credit or whoever did, but essentially the article was
about this notion that when you use
junior engineers, sometimes as workhorses,
basically like, oh, we need to comb through this data,
we need to write a bunch of these functions,
or we need to build a whole bunch of tests around this.
And yes, maybe some of that stuff is automatable now
by a senior or an upper-level dev
that can do some of those things.
So I do think we probably see some of that right now.
I think we're just going to shoot ourselves in the
foot, though, because eventually some of these older devs are going to start aging out, like myself, and the next generation is not going to have all that context and experience, because they never got into the business, because there were no jobs for them.
I don't know. I do think it is having an effect. I do think there may be a
correction. And I also think, again, I hope people wake up to the hype. It is very good and you can do a lot of things with it, right? But it is not the silver bullet that's going to solve everything for you and reduce your workforce. I mean, maybe in the helpline support side of things, because it probably does a better job than some of the support people. But I think from a development standpoint...
SDRs as well.
Yeah, yeah, right.
I wouldn't be worried. Let's say you make an engineer more productive, so suddenly one person can do the work of two or three people. I don't think that means that suddenly there are 30% fewer jobs. Because even if you look at, like we were talking about earlier, what you can do with things like AWS now.
Twenty years ago, that took a lot more people: you'd build your own data center, run all this infrastructure, and you had to staff a whole bunch of people to go swapping out hard drives in the middle of the night on a Saturday, and stuff like that. A lot of that went away, but it's not like those people don't have jobs. What ended up happening...
Way more programming jobs.
Yeah, exactly. You end up making it now easier than ever to create a startup, build something, deploy it, and get up and running. So I think what ends
up happening is, sure, maybe a company ends up with fewer engineers net, but you just end up with more companies doing more things, and it kind of ends up falling out the same. And there's a law for that. I forget. I mean, I'm not as smart as Ben Kehoe, who can just keep all these things in his head.
Jevons paradox.
Jevons paradox, maybe that's what it is, right? Where something gets cheaper and you actually end up using so much more of it.
Yeah, right. You just do more. If you've got more time, you're not going to reduce; you're going to use that time to do more. If you reduce your spend here, you're just going to spend more somewhere else, right? You're just going to do that.
Yeah. So I think that's very true. Even the big companies that had major layoffs in the last couple of years, their actual number of employees
ended up staying the same. They basically laid off and then like rehired a lot of those people, but it was to like redeploy resources into new projects that were more AI
focused. So they're really just kind of like reshuffling the deck in some sense. So it wasn't
like they, you know, really drastically reduced their headcount. And this article from Forbes
came out actually today, which is the generative AI hype cycle is almost over.
What's next?
But in that article, it cites that 80% of-
Was that written by Jeremy Daly?
Yeah, it was written by Jeremy.
I may have ghostwritten that article, yeah.
But according to this RAND Corporation report, 80% of AI projects fail.
So there's basically four out of five.
And the additional 19% people don't tell you about.
Yeah. So, you know, I think we're a long way before, you know, this is, you're, you know,
drastically seeing any kind of impact on, at least on, you know, engineering jobs.
Yeah, that's interesting. Well, you know, I asked you originally about just like,
hey, are you seeing non-tech people? And Jeremy, you also talked about AI slop.
And I'm finding it's surprising how often, outside of programming, I'm using Claude or ChatGPT on my phone to get around what I call SEO slop. If I want a recipe or something like that, I Google the recipe and get garbage results, these giant pages where the recipe's all hidden, and it's just like, man, screw it, I'm just going to go to ChatGPT or Claude.
You don't want to read a 2,000-word essay about how they came to find this turkey recipe?
Yeah. Or even just doing something simple. My wife the other day
was like, how do I create a group of contacts in Gmail that I can send an email to?
And I'm like, I don't know.
Go ask ChatGPT.
I could Google it and look through a bunch of crappy articles on that.
But it's kind of like cutting through the SEO slop.
And now it's giving us AI slop, which at least is more concise.
Well, that's the problem with what happened, I think. If you want to rank well, even if the answer to a question is literally one sentence, you can't have a web page with a single sentence and rank well on Google. So now you need to write like 2,000 words where the one sentence is buried in there.
It's kind of like a lot of these professional self-help books and stuff like that. They should be a thousand-word blog post, but in order to turn it into a book, they need to stretch it out with like 150 pages of filler.
I will say this, though, about some of them. Maybe not
self-help books, but the books that have, I don't know, five principles or whatever it is that they go through. I do find examples to be super helpful. If it's like, do this, here's a strategy definition, and then they give you multiple examples of how that works, that to me is super helpful. And I think that's one of the places where, when AI is filtering through that 2,000-word essay on why I discovered this turkey sandwich recipe or whatever it is,
that part of it is you lose a lot of context, right? And so that is something where I like it
if I need a quick answer, right? If I need a quick answer, great. But here's what my concern is.
The more that we use ChatGPT and Google's new, what do they call it, Gemini or whatever,
to answer that question and just give us that one line.
It's like, I hit the feeder bar, it gave me the treat, I got the hit of dopamine because it answered my question, but I have no context around it, right?
And so that makes me nervous
because if that's the thing that humans get used to doing,
then what happens to the people actually writing those examples and those lengthy pieces? Because I think there is value in some of those stories, and some people want to see it.
But I also think there's a bunch of people just want to cut through it. But if we make it too
easy to cut through some of that stuff to get the information that we need, then the question is,
is what's the point of even writing that extra stuff if all you're looking for is just the answer?
Yeah. It's all going to collapse on itself again. Although that's more of a humanity-collapsing-on-itself question than an AI question.
Well, I mean, there's also the source attribution issue as well, where maybe I get the answer, but is it an answer that I can trust?
Right. Yes, very much so.
But I feel like I'm one of the world's biggest ChatGPT-in-your-daily-life advocates, at least in my circle in Omaha. People will ask me a question and I'm like, I don't know, go ask ChatGPT. You sort of have to use it for a while and get a feel for where it's going to be good and where it's not going to be good. But there are just so many times when people are asking me how to do something and I'm like, I have no idea, but I know how you can get that answer. Like Excel formulas. I don't know Excel formulas off the top of my head; just go ask ChatGPT, and all that sort of stuff.
But if you ask it a question about something you don't know anything about, like if you're just trying to research something, I worry about that. If it's something that you do know about, and this is actually the funny thing.
There are a few people who post on Twitter that, what's the right word, they're prolific on Twitter. They post a lot of stuff. And when you post a lot of stuff, there's some stuff that's good and some stuff that's bad.
And I read a lot of stuff and I'm like, oh, that's interesting.
I didn't know that.
And that's, that's an interesting thing.
Then they'll post something and I'll be like, hmm, I know that, and I know that what you're saying is incorrect. Or I've had a different life experience than what you're talking about there. And so that's why, again, influencers are a tough crowd, because some of them are bought and paid for too, you know, others that promote certain PaaS platforms. So you have to be careful about where the information comes from.
So even if it seems like it's a good source of information,
I don't think AI changes.
I don't think it changes that.
I think you have to have context.
And you can't trust everything you see, no matter who it comes from.
And you've got to dive into that more.
So yeah,
I mean, I don't trust anything on the internet anymore. And the hardest problem is that you get
something back on the internet, and then you search for it on the internet, and you just find
a bunch of things that confirm either way, right? So it's really, really hard. And so, I don't know,
maybe we need to start going back to libraries and looking in books that weren't written by ChatGPT.
Just ban Google. Yeah, get rid of Google.
Well, they might be self-published books, though, generated by...
Yeah, exactly, right. Who knows? I mean, when it's the DynamoDB book co-authored by ChatGPT, right?
Yeah. We're going to have our own Library of Alexandria or whatever it is that people find in a thousand years, and it's just going to be all these crappy ChatGPT books.
Look, we are so close to Idiocracy, if you've seen that movie. But no, just quickly on the AI thing too: I think if we can't figure out a way to filter out bad information... that's Google's number one problem. Collecting information is easy. Filtering out bad information is really hard, right?
I mean, all of the social media platforms
have this problem too, right?
So the issue is if we can't figure out a way
to filter out what's good and what's bad
or get the stuff that's good
and filter out the stuff that's bad,
then these models are going to be really, really hard to continue to train
and learn anything new.
And if everybody's inspired somehow by ChatGPT, even if you're just writing a paper and you get one sentence from ChatGPT, that sends you down a whole path that it probably sends other people down too. I mean, nobody's ideas are really unique anyways, but you go down the wrong path and you just end up generating AI slop, even if you wrote it yourself, because it's based off of an idea that AI gave you. If we keep training and training on that, to me, we are polluting the world of information, and AI is just accelerating that.
But again, I don't know. I'm just old, I guess.
Get off my lawn.
Get off my lawn, right. I'm the old man yelling at clouds. That's kind of where I am now, but yes.
So we're running out of time, but there's one area I want to switch gears to and talk about, because I've been asking a lot of people about it: front-end development, right?
I feel like there's all sorts of noise and it's hard to know what's what. I feel like that is the topic du jour that I see on Twitter the most, and there are so many different opinions, so I want to get a sense of where you are. And number one, I saw in your newsletter yesterday you were talking about how you like HTMX. Tell me the draw of HTMX, because I am still confused on whether it's real or whether it's a prank that people are playing on me. Are they just trolling us?
Yes.
Seriously, I cannot tell. Okay, so tell me the draw of HTMX.
All right, so hypermedia, right? Nobody knows what this is anymore, except for maybe some of the older folks. But when the internet was designed, or when the World Wide Web was designed, the idea was basically that it was documents, right?
And so you had this document and then you would go to this document
and you'd go to this document
and they would say,
hey, what if you want to update a document?
Okay, well, or create a new document on the web.
How do you do that, right?
And so that's where the whole idea of Post came in
and then they're like,
and then Put was there
and then they're like,
yeah, but what if we want to update a part of the document?
Okay, now we have patch and like all these other things.
So it was a very well thought out spec
in terms of how these documents would work.
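To make the verb semantics concrete, here's a toy sketch, in Python rather than actual HTTP, with an invented document name, of the create/replace/update distinction being described:

```python
# Toy illustration of the hypermedia verbs discussed above:
# POST creates a document, PUT replaces it wholesale, PATCH updates part of it.
documents = {}

def post(doc_id, body):
    # create a new document
    documents[doc_id] = dict(body)

def put(doc_id, body):
    # replace the whole document
    documents[doc_id] = dict(body)

def patch(doc_id, partial):
    # update only the named fields, leaving the rest intact
    documents[doc_id].update(partial)

post("about", {"title": "About", "body": "Hello"})
put("about", {"title": "About", "body": "Rewritten"})  # whole document swapped
patch("about", {"body": "Updated"})                    # only one field changes
print(documents["about"])  # {'title': 'About', 'body': 'Updated'}
```

That PUT-replaces versus PATCH-mutates distinction is the "well-thought-out" part: each verb has a precise meaning against a document.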
And so at the end of the day,
what you get from a webpage,
unless it's like a game or something like that,
is essentially just a set of instructions
that render a screen for you, right?
So CSS and all that other stuff, that's all part of, again, styling it and making it look good.
But the big thing is how do you get data back and forth from the server and
how do you mutate that screen?
How do you change that screen that you're doing or that you're working on?
And so way back in the day, when I was still probably writing Perl and CGI, I got into, I don't know if anybody remembers, SHTML, which was server-side includes. That was a pretty cool thing you could do back then: you could generate parts of an HTML document dynamically by basically calling a script. It was like a precursor to PHP or something like that. But we got to a point where we were able to generate dynamic documents, and that was pretty cool, right? It kind of violated that first sort of step of just having a document that was a thing and was always the same. We started generating dynamic documents, and that was pretty cool because we could do some really cool things around applications.
And then we got to the point where we said, okay,
we actually want to change information on this page
without refreshing the whole page.
We shouldn't have to fetch the whole document again. We want to change it.
So the early days of AJAX, where we were able to just swap out something and then we'd write some code and make some changes.
This all evolved.
jQuery, script.aculo.us, Prototype, MooTools, all these things gave us more and more capabilities to manipulate the DOM using
JavaScript. And it worked fine. It just was a heavy lift. And I also question whether, with browser speed and chip speed now, we could still be doing it the old way and it probably wouldn't matter. But somebody came up with this idea of the virtual DOM, where you could make
all these changes in memory and then only apply the bits that you needed to do.
That's the basis of where we've moved to with front-end development.
So you get things like React and Vue and a whole bunch of these other services that are doing something similar.
But then I think we just pushed it way too far, right? You now need so much tooling in order to generate all these different packages, all these different CSS packages and bundles, I'm drawing a blank on what they're called, whatever. You're creating all these different things just in order to load a page and do something interactive on it.
What I like about HTMX is it says,
no, no, just write an HTML page
and you can use your Tailwind
and you can use, you know, or CSS directly, do all that stuff.
But then you're like, I want to do something
when you click this button.
I want to do something when, you know,
when you scroll to a certain point,
I want to do whatever I want to do. I want to manipulate this part of the page. So rather than
you having to have this virtual DOM and all this other stuff that needs to maintain state and
whatever, it basically just makes an HTTP call to the server, right? Using POST, PUT, PATCH, DELETE, whatever it is. And then the server just spits back a piece of HTML that is going to replace where it is. And it works really, really well. Now, again, there are some things
that I do like about, I mean, I love the idea of Livewire, which is pretty cool because it does,
I think, a better job of maintaining and manipulating the state on a page and re-rendering
than some other services do. But the idea of like, if I needed to up,
you know, let's say there's stuff happening
in the background and some table information updates,
I want to update that on the page
while I'm just sitting there.
You can use WebSockets or long polling or some of these other things, server-sent events, things like that.
And you can manipulate that.
It's not quite as easy to do that as just, you know,
if you hit the API again, you get a new set of data
and then it calculates the DOM, it regenerates everything for you, and then spits it out.
I just find, and I think that there are cases where you need to do that.
I really like what Astro does with islands, where you can have a bit of that interactive code, where you don't have to have the rest of the site be completely powered by that.
But I just think it's easier to grok if you think about it more clearly. Like I have an HTML page that has
something on it and I want to manipulate something on it based off of something that happens, how I
interact with it. And then you just go to the server, you pull some information down. It's
backend agnostic. So you can generate HTML with Python or with Flask or with Node or Java, whatever your back end
is built in.
So you have to think about it how we used to think about it when we were building Ajax
apps way back in probably 2003.
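A minimal sketch of the round trip being described, to make it concrete. The markup carries hx-* attributes instead of framework code, and the server returns an HTML fragment rather than JSON; the route name and element ids here are invented, and the handler is shown as a plain Python function rather than any particular framework:

```python
# What an HTMX page looks like: the button declares which endpoint to hit
# (hx-post), which element to update (hx-target), and how (hx-swap).
PAGE = """
<button hx-post="/clicked" hx-target="#result" hx-swap="innerHTML">
  Click me
</button>
<div id="result"></div>
"""

def handle_clicked(click_count: int) -> str:
    # The server-side handler for POST /clicked returns the replacement
    # HTML fragment for #result -- not JSON for a client framework to render.
    return f"<p>You clicked {click_count} time(s)</p>"

fragment = handle_clicked(3)
print(fragment)  # <p>You clicked 3 time(s)</p>
```

HTMX swaps that fragment into the target div; there's no virtual DOM or client-side state to reconcile, which is the simplicity being argued for here.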
Yeah, I mean, it's basically what's old is new again. The original dynamic web servers were super server-heavy, where essentially everything was a re-render of the entire page.
And you were generating the whole page every time, right?
Yeah. And then we tried to fix that with AJAX, but those apps became a little overly complicated and hard to maintain as well. And then we went to the extreme of essentially the single-page app, where everything's happening in the front end and your server becomes just kind of a dummy proxy to your database to pull data. And then those perform terribly on phones, and they also become completely unmaintainable. And now everybody's like, oh hey, we have these beefy servers, we can actually do stuff on those, let's serve HTML from there. And then we're rediscovering this. And I think that is the pattern. We had Kent C. Dodds on the show and he talked about that and sort of the evolution of these different paradigms. But I think we went to one extreme, and then we swung to the other extreme, and now we're kind of back to the middle again.
Yeah. And I like this. I like the simplicity of it. I think it's super easy to do.
The other thing that's great about HTMX is you can
basically use
a static site generator to generate
a whole bunch of stuff to start with.
I did a couple of live streams on progressively
enhancing it by just making
a call. So you just load a static
page and then once the page is loaded, it can
actually just kick off some onload things and it will, will fill in some data for you. And
it's, it's pretty cool. I mean, the whole React server component debate that's happening now,
like, I don't know if it's a debate or just it, to me, it's just, again, it's like, so what you
did is you shifted everything this way, like you just said, and, but you're like, well, that
probably isn't as efficient when we can generate a bunch of this stuff, you know, ahead of time and spit that down.
I haven't played around with it much since it first kind of started.
They started hyping it up.
But I think it pretty much render what I've seen is it renders the code on the server, sends that down to the browser, then has to make another call to make it interactive or reactive.
And then it refreshes all of it.
So you've got to make a whole bunch of calls to the server
just to load a page, and it's like,
if I know what this page is going to look like
and I want to fill in some information, why is this so hard?
So I don't know.
I think HTMX very well could be just people trolling us, but I think it's genius in its simplicity. And the other thing is, SvelteKit also feels kind of like HTMX in some ways, in how some of that stuff works. I do like Astro. I think Astro is really, really cool. I like that you can generate stuff statically, and I like that they have the whole fragment capability. It's not as organized as I would like it to be, though, and I don't like that Astro just officially said, hey, Netlify is our provider. I mean, I guess there's some value in it for Netlify and maybe for Astro, but there are some weird games in the front end market. And I guess we're not breaking any news here that the front end market is still jumbled. It's been a mess for like 25 years.
Yeah, no, we've been doing this stuff for a while. And I think the big takeaway for me on HTMX is, I'm one of those developers that, when I do front end, knows React and doesn't know HTML, you know? I have to use component libraries and all that stuff to make it work. So when I look at HTMX, I'm like, what are all these real tags, and things like that?
That's the other thing, too: the component libraries. That is one thing I will say I really like about how some of these other things work, how React works, for example. You just drop in a component library and a whole bunch of this stuff is built for you. All that interactivity is all there, and I like how you can package that stuff. That is a huge advantage to using something like React. The question is, did you need all that in the first place in order to make it work? So it depends on what you're building. But I've used HTMX to build a whole bunch of internal dashboard-type stuff, interactive dashboards, admin-type utilities, things like that. The news quiz runs on HTMX, actually. If you go to quiz.offbynone.io, that's all built on HTMX, with Hono on the backend, running on Ampt. So yeah, I think it's got legs. But I do worry about people like you, Alex, who only know how to use React.
It really is true. But at least the good news is ChatGPT and Cursor will save me; I don't need to learn these dang tags anyway.
I think also one of the things you see is, sometimes something comes along that's a better idea, but it doesn't necessarily end up getting traction, because there's already essentially a groundswell of network effects behind the incumbent technology. It's really hard to displace, because the switching cost is so high for a company to say, okay, we're going to rip and replace our React front end with this new technology, right? It's like how Craigslist has looked the same for 35 years. People have broken off pieces of it, but for low-end jobs and, I don't know, meeting strangers online, there's nothing that really competes with it. And it's because they have those network effects, essentially.
Well, I also think that there are principles. So there's a principle that I really like about HTMX, which I also like about Tailwind. And again, I know some people hate Tailwind. I used to write CSS by hand back in the day. I hate writing CSS; there's too much going on to do it that way anymore. So I think
a principle that runs through both of those is the idea of locality of behavior, right? It's basically asking: if I'm looking at a piece of code, how much of it is buried in an abstraction where I don't actually know what it's doing? That's one of the things with React components and some of these other tools, where a lot of the styling is buried in layers and layers of abstraction that I probably have control over, but it's sort of hard to see. So with inline styling, where you just say bg-red-500, I know, looking at that piece of HTML, that this is going to have a red background, right? I just know that's going to happen. And maybe it's writing a lot of code, but Tailwind is very much built on the idea of componentizing the way it works. So you have the locality there, but you componentize it so you don't have to write the same thing over and over again.
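A tiny sketch of that locality idea: the class names below are standard Tailwind utilities, but the Button helper itself is a made-up example of "keep the styling visible in the markup, componentize the repetition."

```javascript
// Every style decision is readable right where the markup lives:
// the string itself tells you this is a red, rounded, padded button.
// No separate stylesheet to hunt through.
function Button(label) {
  return `<button class="bg-red-500 text-white px-4 py-2 rounded">${label}</button>`;
}

// The utility list is written once, inside the component, so locality
// of behavior is preserved without repeating the classes everywhere.
const toolbar = [Button("Save"), Button("Delete")].join("\n");
```

Reading Button's output tells you everything about its appearance; reading a `<Button>` buried under several layers of styled-component abstraction usually doesn't.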
But I also like that about HTMX, in the sense that Carson Gross is very big on this idea of fragments, right? Which is to say: you take an entire HTML document, or a section of one, that's big enough for you to grok and understand what's happening there. Maybe you have an edit screen, maybe you have just a display screen, maybe there's a different way something's formatted. The idea is that rather than breaking that up into hundreds of component files, separate files and separate API routes and all this stuff you have to manage, you use a system that lets you say: I just need this fragment from this template, and I can go ahead and use that, re-render it, and update a part of the screen through HTMX's interactivity. That's a much easier way to grok things, because you're not saying, oh, I need the edit screen, so I need to go do this and then go look in this file and do that.
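The fragment idea above can be sketched with a toy example. The comment-marker syntax here is invented purely for illustration; real tools (Nunjucks blocks, Astro fragments, and so on) do this properly. The point is one template holding both views, with routes returning just the named piece for HTMX to swap in.

```javascript
// One template containing both the display view and the edit view
// of the same item, each wrapped in an invented fragment marker.
const page = `
<!-- fragment:display -->
<div id="item-1" hx-get="/items/1/edit" hx-trigger="click">Hello</div>
<!-- /fragment -->
<!-- fragment:edit -->
<form hx-put="/items/1" hx-target="#item-1">
  <input name="title" value="Hello">
  <button>Save</button>
</form>
<!-- /fragment -->
`;

// Extract one named fragment so a route can return just that slice of
// HTML, instead of splitting the template into dozens of component
// files and separate partials.
function fragment(template, name) {
  const re = new RegExp(
    `<!-- fragment:${name} -->([\\s\\S]*?)<!-- /fragment -->`
  );
  const m = template.match(re);
  return m ? m[1].trim() : "";
}
```

A GET /items/1/edit route would respond with fragment(page, "edit"), and HTMX swaps it in place of the display div; the whole edit/display pair stays readable in one file.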
I actually built an open source package called Fraglets, which does fragments on top of templates. It uses Nunjucks under the hood, but I built in a couple of extra features. You can basically take a Nunjucks template and, using the block syntax, grab a fragment from it and render it as if it were its own template. It compiles down into JavaScript so you can import it dynamically into your file, which is great for things like Lambda. I built that because I needed a better way to do fragments using Node. But anyway, I think locality of behavior is a really good thread that's worth pulling, and the developer experience improvement from that, versus trying to grok a really complex dependency tree, I think is really helpful.
Well, I think we need to wrap this up, but I do appreciate your commitment to the bit, keeping the HTMX troll alive, and explaining it. No, it's good stuff. And Jeremy, thanks for coming on. It's been fun to chat with you. If people want to find you, find out more about you, find out more about Ampt, where should they go?
Yeah. So my blog slash website slash whatever is jeremydaly.com, that's D-A-L-Y dot com. I'm on X as jeremy_daly, and I'm on LinkedIn as well. And if you want to check out Ampt, it's getampt.com, A-M-P-T dot com. Sign up, it's free, play around with it, tell us what you think. If you can explain it to me, that'd be great, because then maybe I can explain it to more people. But I don't know. It's new, it's fun, maybe a little early, but we'll see what happens.
Jeremy, thanks for coming on.
Sean, enjoy your international travels that you have coming up.
And yeah, we'll see you around next time.
Awesome.
All right.
Thanks, guys.
Yeah, thanks for coming on, Jeremy.