PurePerformance - 014 Pat Meenan on Latest Trends in Scaling Frontend Performance
Episode Date: September 26, 2016

Are there new web performance rules since Steve Souders started the WPO movement about 10 years ago? Do we still optimize on round trips, or does HTTP/2 change the game? How do we deal with "mobile only" users we find in emerging geographies? How does Google itself optimize its search pages, and what can we learn from it? In this session we got to cover a lot of the presentation Pat Meenan (@patmeenan) did at Velocity this year.

Related Links:
* Scaling frontend performance - Velocity 2016: https://www.youtube.com/watch?v=LdebARb8UJk
* WebPageTest: https://www.webpagetest.org
* Google AMP: https://www.ampproject.org/ and https://github.com/ampproject/amphtml
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson. Hello and welcome back to part two of our talk with Pat Meenan from Google.
Just in the previous talk, we were discussing machine learning and figuring out why you have bounce rate,
what's contributing to your bounce and conversions.
And besides machine learning, what was the kind of tree we were talking about, Pat?
It was something tree.
I forget the first name.
Oh, random forest.
Oh, sorry.
It was a tree.
Another form of machine learning.
That's why I was thinking tree.
So welcome back, Pat.
And welcome back.
Not welcome back, Andy.
You're always here.
We're always here.
We live in our headphones here.
So I think we want to talk,
switch to the topic now. Well, before we go into the topic, we wanted to try to introduce a new segment here, Pat, and you're kind of our guinea pig on this. So we apologize for not knowing you super well, but subjecting you to this. We want to get into the habit of asking our guests about some of the biggest or most embarrassing mistakes they've made in the past, and maybe what they've learned from them, kind of illustrating that we all do some goofy things, but usually there's a great learning from that.
So with that, Pat, what would you like to tell us about yours?
Yeah. So, I mean, it's not all that embarrassing, but I've made more than my share of mistakes over my career. And I think probably the one that stuck with me the most,
it's around memory usage. When I was back at AOL, one of the things that we got harassed about a lot was, in Task Manager, how much memory the client was using. And I was on the client team at the time. And it's not like you don't see that these days, where everyone's sort of focused on how much memory browser X uses, right? At the time, Task Manager in Windows, and it still does, reports working set, not total RAM allocated or anything like that, which is roughly how much of the RAM it's actively using. You can ask Windows to trim your process's working set, and then Windows will page things back in as code runs. So it doesn't drop down to zero, but it drops down really low. And how often you call that sort of determines how low you can keep the memory looking. But it's a horrible thing to do for performance, because you're paging your code in and out all the time when you're doing that. And maybe something that's not accessed really frequently is paged out, where it could have been a lot faster if it was in RAM, and systems have tons of RAM for these caches.
So I've sort of learned from that. And we shipped it; I mean, I wouldn't be surprised if the AOL client doesn't still do it, for people that are still using it, where after certain events, like after you make it through the sign-in flow, it'll page itself out just to get rid of all of the code from the sign-in flow and things like that.
But I've since walked back from that a lot, just because your system has a lot of memory.
Optimally, the programs running on your system would use every last bit of it to make the performance of your user experience as good as possible.
You just don't want it to need all of that memory. It would be nice if your browser ran in 16 KB of memory, but if you had 16 gigs of RAM, it would use all of that for caches and it would be lightning quick and everything else.
Now, I do get frequent requests from people asking me to add memory tracking or memory usage to WebPageTest. And I always push back really hard, because for the most part it doesn't make sense.
You largely want your apps to use 100% of the available memory, as long as they're being good citizens about it. And historically, that, at least for me, has been the biggest code mistake I've made, that I've learned from and sort of taken with me over the years.
Well, thank you very much. That's a great one.
Yeah, thanks. A lot of lessons learned. Thanks for sharing, because I think it opens up all of our minds about this problem. And thanks for being open to sharing it with a large audience.
Cool. All right, so on to the next main topic of web performance. And Andy, I think you had a lot of different concepts you wanted to talk about. I know we wanted to talk maybe a little bit about his other Velocity talk, but there are some other things.
I don't know if there's any you wanted to tackle first.
Yeah, well, basically, in the previous session, we talked about finding out how much we need to move the needle of page load time or performance in order to improve conversion or reduce bounce rate.
Now, Pat, I know that you and Steve and a lot of other people talked about
performance, web performance optimization over the last couple of years, and it's been around
for a while now. The first thing that I would actually be interested in is: what has changed in the last year or two, especially with the advances when it comes to new browsers, new capabilities, and obviously a total shift in the way we write applications?
We know we're talking about single-page apps.
I mean, I know this is a broad topic,
but basically what I would like to quickly understand for our audience,
they might be new to the topic and they read Steve's book on optimizing websites.
And just to clarify, that's Steve Souders.
Steve Souders, yeah.
If they don't know who he is, they should learn.
Yeah, they should know him. So if people are new to googling for web performance optimization, they probably find his books and some of your earlier blog posts. But what's new now? What has changed in the last year or two, and what are the quick tips and tricks that you would tell somebody that is new to this space to look out for, besides obviously what is still relevant when we look back five, six, seven years to the early days, when Steve said reduce the number of round trips, optimize your images? What else is there?
Yeah, and so I'm glad you threw that last bit in there,
because, you know, there's a lot of new stuff going on, but the old stuff is still relevant
and it's actually probably the most critical. So make sure you do all of that stuff first and then
play with the new, shiny, exciting stuff. Because I mean, I can't tell you how many sites I look at where it's a little discouraging, after the last eight, ten years or whatever, that we're still talking about: look, we need to fix all of these basic things. And they're still all of the same problems recurring. But let's assume you've got all of that stuff taken care of, you know, you're serving well-compressed images and keep-alives and all that kind of stuff.
It's a really exciting time in browsers these days, or even just the web in general.
HTTP/2, easily one of the most exciting bits for me, and I think for the web at large.
And I think even though the protocol shipped and it's been stable for the last year, and even before that it shipped as SPDY, we are still very early in the days of HTTP/2 and the optimizations for it. We'll probably get into a lot of that later, because I think that's going to be a huge area for differentiation across CDNs and accelerators and even web server infrastructure.
Do you happen to know what the adoption rate right now is of HTTP/2?
I don't.
And it's also...
Hard to say probably, right?
It's one of those things that's a little hard to say, because a lot of what we look at, like within Chrome, for example, is usage by page views, and those are going to be heavily skewed towards people visiting Google, Facebook, LinkedIn, Twitter. Once you get past the top five or ten sites or whatever, you've got 90% of the page views. And out of all of those, the vast majority are already on HTTP/2 and already HTTPS only.
So there's huge adoption as far as global page views go.
Even looking at the long tail, you've got sites like Cloudflare, which is doing SSL for everybody, and they automatically do HTTP2. So as a lot of the core infrastructure
takes care of the upgrades for you, you see large swaths of the internet migrate to it.
For the sites that aren't HTTPS yet, that's the biggest section where they're obviously not HTTP/2, because one requires the other right now.
The other one, I don't know how I feel about it. I'm kind of split: it's not sort of the open web, but it also solves all of the performance problems for a lot of sites. It's AMP, the Accelerated Mobile Pages, where, assuming you write AMP content, it's edge-cacheable by whoever's serving the AMP pages, and it's guaranteed to be optimized. So you've got guarantees that there's
no server response time issues, certainly in the long tail with shared hosting and all that kind
of stuff. That's one of the biggest problems I still see today. So that all goes away when
someone like Google or LinkedIn or whoever is serving your cached version of your page automatically for you.
And even though you can build faster HTML pages by hand than you can with AMP as a platform,
AMP is guaranteed to not do slow things.
So you also have guaranteed fast user experiences,
even if it's not necessarily the fastest user experience possible.
Can you get like optimized images and stuff?
Quickly, for people that don't know AMP, where could they find more about it?
I guess Google for AMP.
Yeah, if you Google for accelerated mobile pages, that's probably the best search term to use for it.
It's all open source; there's a GitHub repository. And it's a restricted set of markup that you can use in HTML, where script tags aren't allowed. You're only allowed to use AMP modules, and those are vetted JavaScript libraries that don't do bad things.
Like document.write is sort of the quintessential, God, if we could only undo that from the web, everything would be better.
So there's a curated set of libraries that you can use that do just about everything. They've got image galleries and video players and analytics beacons, and all of that kind of stuff comes as available prepackaged modules, usually callable out to whatever service you want to use. And since it's using a restricted set of markup, it can run a validator against the code to make sure that your HTML is actually only using the restricted libraries and not something else. And if so, then it gets cached on the edge and served, right now at least, in the Google search results in the news section. And you get a little lightning bolt next to the articles, and the pages effectively load instantly once someone clicks on them.
Google does a good job of promoting these projects, saying use AMP, by serving them up in a nice way in the search results. Is that sort of a way to promote it as well?
Yeah, I mean, that's certainly a way to encourage adoption,
but it also drives for a much better, as far as like the news results,
a much better user experience because the way the AMP containers work,
the news search results can load them in an iframe,
and as you slide left and right,
you can actually flip between AMP pages
and they're all sort of instant loading.
So you get a much more seamless integration
even in the search results page.
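As an aside, the validation idea Pat describes, a restricted markup set where arbitrary script tags aren't allowed and only vetted modules are, can be sketched as a toy check. This is purely illustrative and is not the real AMP validator, which enforces far more (required boilerplate, attribute rules, CSS limits):

```typescript
// Toy illustration of AMP-style validation: reject arbitrary <script> tags,
// allowing only whitelisted runtime/module scripts. NOT the real AMP validator.
const ALLOWED_SCRIPT_SRC = [
  "https://cdn.ampproject.org/v0.js", // the AMP runtime
  "https://cdn.ampproject.org/v0/",   // vetted component modules
];

function isAllowedScript(src: string): boolean {
  return ALLOWED_SCRIPT_SRC.some((prefix) => src.startsWith(prefix));
}

// Returns a list of violations found in the HTML (empty list means "valid").
function validateAmpish(html: string): string[] {
  const violations: string[] = [];
  // Find every opening <script ...> tag and inspect its src attribute.
  const scriptTags = html.match(/<script\b[^>]*>/gi) ?? [];
  for (const tag of scriptTags) {
    const srcMatch = tag.match(/src\s*=\s*["']([^"']+)["']/i);
    if (!srcMatch) {
      violations.push(`inline script not allowed: ${tag}`);
    } else if (!isAllowedScript(srcMatch[1])) {
      violations.push(`non-vetted script: ${srcMatch[1]}`);
    }
  }
  // document.write is exactly the kind of thing the vetted modules exclude.
  if (html.includes("document.write")) {
    violations.push("document.write not allowed");
  }
  return violations;
}
```

The real validator ships with the amphtml project on GitHub and can be run against a page during development.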
So that means if I come back to my question,
thanks for that,
for that little excursion into AMP
and mobile performance.
And actually I had mobile performance
on my topic list as well.
But remember, people out there, all the initial rules that Steve and Pat and others a couple years ago put out there are still all relevant.
Make your pages lean.
Reduce the number of resources on the page.
Use browser-level caching.
Optimize your images.
Sprites, I guess, are still a big thing.
What about the domain sharding?
Domain sharding was one thing back in the days where Steve actually said, you know,
with domain sharding, you can basically force your browser to open up more connections in
parallel to download content.
Is that still relevant, especially in the era of HTTP2?
Or is that something we should not do?
Because basically you're just bombarding your servers
with too many parallel connections that don't really make sense.
So it doesn't necessarily conflict with HTTP/2. It's a little tricky.
HTTP/2 will do what it calls connection coalescing, where even if you serve from multiple domain names, if they all resolve to the same IP address and you have a shared certificate that includes all of the names on it, it won't open new connections; it'll just reuse the existing connection. So in theory, you can still do domain sharding and have all of the connections collapse together into one HTTP/2 connection.
You do still pay the cost of the DNS lookup, which is a little unfortunate.
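A rough sketch of the coalescing rule Pat describes: reuse an existing connection when the new hostname resolves to the same IP and the connection's certificate covers it. The types and checks here are illustrative; real browsers verify more than this (certificate validity, the ORIGIN frame, and so on):

```typescript
// Sketch of HTTP/2 connection coalescing: a browser may reuse a connection for
// a new hostname if (1) the hostname resolves to the IP the connection is
// using, and (2) the connection's certificate covers the new hostname.

interface Connection {
  ip: string;
  certNames: string[]; // subjectAltName entries, possibly wildcards like "*.example.com"
}

function certCovers(certNames: string[], host: string): boolean {
  return certNames.some((name) => {
    if (name.startsWith("*.")) {
      const suffix = name.slice(1); // ".example.com"
      // A wildcard matches exactly one extra label.
      return host.endsWith(suffix) && !host.slice(0, -suffix.length).includes(".");
    }
    return name === host;
  });
}

function canCoalesce(conn: Connection, host: string, resolvedIp: string): boolean {
  return conn.ip === resolvedIp && certCovers(conn.certNames, host);
}
```

So img1.example.com and img2.example.com, sharded onto the same server with a *.example.com certificate, would collapse onto one connection, but you still paid the DNS lookup just to learn that the IPs matched.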
That said, domain sharding itself has kind of gone a little off the deep end. And we in Chrome have protections in place to make sure that even if you've done domain sharding, we won't load more than 10 concurrent images, no matter how many domains you have. Because once you open up too many concurrent connections, you start defeating lower-level TCP congestion control. And we've seen a lot of cases where, on slower connections, you'll actually flood the connectivity to the point where the servers will send duplicate data, and you end up with well over twice as many bytes on the wire: a two-meg web page actually ends up sending four megabytes of data, because of protocol inefficiencies from doing too-wide domain sharding. So we ended up having to put those protections in. And I think Etsy has a good blog post that they put out about how they were sharding their image galleries across a whole bunch of domains and decided, oh, maybe two is a better number.
Because they were seeing cases where you were getting what we call spurious retransmits.
And it actually slowed down their page loads.
So it's a really fine line to walk.
If you are going to domain shard, I wouldn't do more than
two. That said, there's not that much of a benefit these days. It was really important when browsers
would only open two connections per domain. It's been probably a decade-plus since that's been the case; they all open at least six, and Edge and some of the other browsers, I think, will ramp up even higher than that. And odds are you're actually serving a lot of content from a whole lot of different domains anyway. So the cost of each one of the domain shards usually ends up being a DNS lookup, a separate socket connect, and, if it's HTTPS, a separate TLS negotiation. Having to make up for those additional costs with concurrent downloads actually ends up taking a lot of the benefit away from doing the sharding. So I'd start with no sharding. And maybe if you find a specific use case where you do see a lot of benefit from doing it, take a look at it. But starting out with not sharding is certainly a much better starting place.
That's great advice.
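To make the per-shard cost concrete, here is a back-of-the-envelope model of the setup overhead Pat lists (DNS lookup, socket connect, TLS negotiation). The latency numbers are placeholder assumptions, not measurements from the episode:

```typescript
// Back-of-the-envelope model of per-shard setup cost: one DNS lookup, one TCP
// connect, and (for HTTPS) a TLS negotiation. Illustrative numbers only.

interface NetworkProfile {
  rttMs: number;         // round-trip time to the server
  dnsMs: number;         // DNS resolution time
  tlsRoundTrips: number; // e.g. 2 for a classic TLS 1.2 full handshake
}

function shardSetupCostMs(extraShards: number, net: NetworkProfile, https: boolean): number {
  const perShard =
    net.dnsMs +                                  // DNS lookup
    net.rttMs +                                  // TCP handshake
    (https ? net.tlsRoundTrips * net.rttMs : 0); // TLS negotiation
  return extraShards * perShard;
}

// On a hypothetical 200 ms RTT mobile connection, two extra HTTPS shards cost
// roughly 1.4 seconds of setup before they download a single byte.
const mobile: NetworkProfile = { rttMs: 200, dnsMs: 100, tlsRoundTrips: 2 };
const cost = shardSetupCostMs(2, mobile, true);
```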
And maybe some other additional new things that you said
that just popped up in the last year or two. Any hot topics that people talk about?
I mean, it's not really new from the technology perspective, but emerging markets are huge. All of the new internet usage is coming from emerging markets. The US and Europe, for example, are well saturated. And even sort of the older Asian markets like Korea and Japan are well saturated with internet penetration. You're talking 80, 90 percent-plus saturation levels already. So all of the new
internet users are coming on from India, China, Brazil, Mexico. And for the most part, they're
mobile only, not mobile first. The phone is the only internet device that they have.
It's their first experience on it. And the networks are 3G-ish, sometimes 2G. You get a lot of cases where they use their data bandwidth up halfway through the month, and then all of a sudden what was a fast connection ends up being a slow connection, because they get dialed back.
Or they also have to pay for data packs and all of that kind of stuff.
So data becomes a really important aspect of the web these days.
And we're starting to see, it's kind of interesting, that what's old is new again.
A lot of the mobile performance issues are the same kinds of things you used to see back in the dial-up days.
And so the transcoding proxies, things like Opera Mini and UC Mini, are really popular, because they recompress all of the images to make the pages smaller so that they can actually be consumed on these networks. It's not just a matter of less data for them to pay for, although that's very important, because data ends up costing them a much larger percent of their take-home pay than it does in the U.S., for example. But you should actually try browsing your pages in some of these environments. I do highly recommend using a 2G profile in Chrome DevTools, or the Network Link Conditioner on Apple platforms, or the slower profiles in WebPageTest, to get an understanding for exactly how painful it is. It really is a matter of them being able to consume the content at all or not. You're talking about changes in start render time from one-plus minutes down into the 10-second range when you start using some of these compression proxies.
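The rough arithmetic behind those numbers can be sketched as follows; the ~50 kbit/s 2G figure is a ballpark assumption, and real page loads also pay round-trip latency on top of raw transfer time:

```typescript
// Rough arithmetic behind "one-plus minute down to the 10-second range".
// Bandwidth figures are ballpark assumptions (2G/GPRS is on the order of
// 50 kbit/s); latency and protocol overhead would make reality worse.

function transferSeconds(bytes: number, kbitPerSec: number): number {
  return (bytes * 8) / (kbitPerSec * 1000);
}

// A 2.5 MB page on a ~50 kbit/s 2G link: raw transfer alone is ~400 seconds.
const fullPage = transferSeconds(2_500_000, 50);

// A transcoding proxy that squeezes the same page down to ~250 KB gets that
// to ~40 seconds, which is why Opera Mini-style proxies matter so much there.
const transcoded = transferSeconds(250_000, 50);
```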
Or if you have an actual fast website that's aware of the slow connections the users are on, or just a well-optimized site that's delivering as little content as possible as quickly as possible, you can still have fast experiences.
But I mean, if you take a look at the HTTP Archive over the last five years or so, the average website has gone from 700K to two and a half megabytes.
It's crazy. We don't really notice on fast connections per se, but on slow mobile connections, a two-and-a-half-megabyte page on a 2G connection, it's not a matter of it just taking longer; it's not going to load. The user is never going to see your content.
And actually, I mean, there are a lot of amazing points that you just brought up. You said we are fortunate here in the U.S. and in Europe with big bandwidth. But also, I think in your Velocity presentation you had a tweet, not sure who it was from, but basically, even if you are in the U.S. and your phone is on an LTE connection, if you're out of your data plan, you basically get throttled down to lower bandwidth.
And that could also be very...
Yeah, and it's actually interesting timing on that, because Verizon, AT&T, and I think all of the players are just now launching unlimited data plans that throttle, even for tethered connections. So once you exceed a certain data limit, instead of charging you an insane amount more, they'll just throttle you down to really slow speeds.
Yeah, I've been on even 3G connections these days, and it's almost, you know, you say it takes long; I kind of say it's unusable. It's funny, though, because when I got my first iPhone, 3G was blazing fast. But obviously the pages were much smaller, and everything was workable on 3G.
But even these days, once I get away from the city, heading towards the mountains or anything, and you hit one of those 3G-ish areas, it's extremely painful.
Yeah, and it's a combination.
The web itself has gotten a whole lot bigger, but also the networks.
When the 3G networks were completely unloaded, things were actually really fast.
You can get a fast 3G connection if there's no one else on the network.
As the networks become more congested with users, even what's advertised and connected to as a 3G network ends up having fairly low performance versus the specs.
And certainly in the emerging markets where they're getting millions and millions of new users continuously, it's even more of an issue.
You mentioned Opera.
I remember Bruce Lawson, I think is his name, from Opera, who had a great keynote at Velocity this year, and he actually talked exactly about the emerging markets and what they are doing now.
Do you think there is a way that we here in Europe and in the US should also look into these practices, what Opera is doing with basically optimizing content, packaging it up, and actually sending the optimized content to the browser?
Are these some techniques that actually also make sense in general, or would this go too far?
No, I mean, so Google already has a light version of their search results page,
for example, that they serve when they think a connection is slow.
That basically has a lot of
the JavaScript stripped out and it's sort of more what the classic search page used to be. You type
in your query, you hit search and you get results without whatever fancy interactive features and
stuff. And they've also got like transcoded versions of landing pages that if you go on
search results and click on a page,
and we know you're on a slow connection in an emerging market, we'll rewrite someone else's
page for them just to give the user a fast experience. But for the sites themselves,
it's, I mean, it's all of the basics. It's no different than what you're doing if you're trying to optimize for a fast connection even,
reduce the number of requests, reduce the head-blocking resources, all of that kind of stuff.
It just takes it to a whole new level, at least these days.
It's the same level that it was 10 years ago, where if you want to keep all of your rich interactive features and you need all of your JavaScript for your high-end users, then you might have to start making a decision if you want to segregate your traffic and detect a slow connection and fall back to a more basic experience.
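A decision like the one Pat describes could be sketched as a small function. In Chrome, navigator.connection exposes an effectiveType string and a saveData flag; they're passed in here as plain values so the logic stays testable, and the thresholds are illustrative, not a recommendation:

```typescript
// Sketch of the "rich vs. basic experience" decision. The effectiveType and
// saveData fields mirror what Chrome's Network Information API exposes; the
// cutoffs below are illustrative assumptions.

type Experience = "rich" | "basic";

interface ClientHints {
  effectiveType?: string; // from navigator.connection.effectiveType, if available
  saveData?: boolean;     // user opted into data savings
}

function chooseExperience(hints: ClientHints): Experience {
  if (hints.saveData) return "basic"; // respect an explicit user choice
  const slow = hints.effectiveType === "slow-2g" || hints.effectiveType === "2g";
  if (slow) return "basic";
  return "rich"; // unknown or fast connection: default to the full experience
}
```

In a page, the inputs might come from navigator.connection, and only the "rich" branch would inject the heavy JavaScript bundle.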
Almost like the m-dot days, but God please, not redirects and not actually m-dots.
But do you have a rich and a light experience or can you lighten up your rich experience?
Can you do things incrementally and go, okay, we're going to start small? And, I mean, progressive enhancement isn't new, right?
But can we deliver the basic content?
And then, if the page is loaded within X seconds or whatever, decide, okay, this connection is fast enough to handle the rich experience, and deliver all of the JavaScript and whatever else is needed to progressively enhance to the rich experience.
And I remember seeing in your presentation even taking into consideration things like battery, how much battery is left, right? Because that's going to play an important factor.
Yeah, there's all sorts: battery left, CPU, GPU. A lot of this stuff is now detectable from inside of the browser, so you can make a lot of these decisions. And even if you're serving video, making the decision between 60 frames per second and 30 frames per second has a huge impact on battery, for example.
Now, another big topic that I remember from Velocity,
and I think you also covered it in your presentation,
is ads and ad blockers, and the browser vendors' move to actually build a lot of these things natively into the browser: first to block ads, but then also to speed up performance with it.
Anything you want to share here, especially when it comes to front-end performance, about what the impact of ads is and why browser vendors actually move in the direction of providing ad blockers natively?
Sure. I mean, yeah, so it's no secret that ads tend to slow down pages a lot.
And certainly when you're doing your progressive enhancement,
one of the things you probably want to look at is, when you inject your ad tag, it's probably a good idea to see how long it's taken for the page to get to that point, and decide if you want to inject it at all, whether you want to inject maybe a text fallback or the rich-experience ad, and do smart things about what kind of experience you deliver. But at least from the Chrome side of things, we're not, at least that I know of, actively trying to block ads.
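The inject-or-fallback idea from a moment ago could look something like this; the time thresholds are hypothetical examples, not values from the episode:

```typescript
// Sketch of the ad-tag decision: look at how long the page took to reach the
// injection point, then pick the rich ad, a lightweight text fallback, or
// nothing at all. The thresholds are hypothetical.

type AdChoice = "rich-ad" | "text-fallback" | "skip";

function chooseAdTag(
  elapsedMs: number,
  richThresholdMs = 3000,
  skipThresholdMs = 8000
): AdChoice {
  if (elapsedMs <= richThresholdMs) return "rich-ad";       // page is loading fast
  if (elapsedMs <= skipThresholdMs) return "text-fallback"; // struggling: keep it light
  return "skip";                                            // already painful: don't pile on
}
```

In a page, elapsedMs might come from performance.now() at the point where the ad tag would normally be injected.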
But what you will see is there's a bunch of things that we call user agent interventions.
And if you search for Chrome interventions, you'll basically see a list, and there are intent-to-ships and intent-to-experiments and stuff like that for all of them. What we end up trying to do is target behaviors where we could optimize for the user experience. And that's where the user-agent intervention part comes in: regardless of what the developer told the browser to do, if we think it would be a much better user experience to do this other thing instead, we maybe explore being not spec-compliant and violating what the developer thought they were asking us to do.
And so a good example of that is document.write as a feature in general. It's horrible for performance.
It breaks the preload scanner and all that kind of stuff. It happens to be used a lot by ad networks, but it's also used by tag managers and a whole lot of other things. From a browser perspective, with JavaScript execution we can't figure out what the code is going to do, so we have to sort of stop all of our predictive stuff and just execute whatever is being done. And you end up with long blocking sequences, and on slow connections it's a really bad experience. So there are in-flight experiments to go, okay, well,
what if we disable the ability to document.write at all, or at least with some heuristics in certain conditions, like if the user's in an emerging market or if the user's on a 2G connection? What's the impact of blocking document.write? Do they browse more? Do they consume more content? Do they abort less? And the answer to all of those is yes, by a huge margin. The page load times come down by almost half, or at least the times to start render and things like that, and you see a lot more engagement and a lot more people consuming content.
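The shape of that intervention heuristic might be sketched like this; Chrome's actual rules are more nuanced (they target cross-origin, parser-blocking scripts), so treat this as illustrative:

```typescript
// Sketch of the document.write intervention heuristic: on 2G-class
// connections, block third-party scripts injected via document.write.
// Chrome's real conditions are more detailed; this shows the shape only.

interface InterventionContext {
  effectiveType: string;      // "slow-2g" | "2g" | "3g" | "4g" ...
  crossOriginScript: boolean; // document.write of a third-party script tag
  mainFrame: boolean;         // intervention applies to the top-level document
}

function shouldBlockDocWrite(ctx: InterventionContext): boolean {
  const on2g = ctx.effectiveType === "slow-2g" || ctx.effectiveType === "2g";
  return on2g && ctx.crossOriginScript && ctx.mainFrame;
}
```

The safe alternative for developers is the same as it's always been: create a script element and append it asynchronously instead of writing the tag into the parser's path.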
But there are also things like scroll handlers. Maybe we'll ignore synchronous scroll handlers when you're scrolling; a lot of other browsers do as well, because those can cause janky scrolling and things like that. So the browsers are starting to become more opinionated, to try and deliver the user experience for cases where it looks like the web has failed the users. I think that's probably the best way to put it.
Cool. Well, you know, as you said, we are all very passionate about the topic, and I guess we could go on. I mean, I still have a lot of questions on my list. But as a summary, for folks that either have been in this field of front-end performance optimization for a while or are new to it, please check out the recording of Pat's talk from Velocity.
What are you doing at Velocity New York, besides redoing the one with Tammy? Are you also redoing the front-end performance talk?
No.
I mean, I have a two-day training I'm doing with Tim,
but that's just more on bringing people up to speed on all things
web performance.
But yeah, maybe it's a summary to kind of
conclude
this talk. You know,
don't forget the basics, right? Obviously,
that's the most important thing.
Then you brought some good things up
on mobile.
HTTP/2 will be changing a lot of things.
I'm very fascinated that we already have good penetration of HTTP/2.
I really like what you mentioned about emerging markets and what they do down there.
That's quite phenomenal.
Then what else did you mention?
Just to make sure people don't forget: the Chrome interventions are very interesting.
And we talked some about the AMP pages. Yeah, exactly.
Google for AMP.
So I think the page that I actually found is ampproject.org.
That will be one page to start.
Anything else that we want to make sure that our listeners don't forget and that they do in order to make the web a better place?
I'd probably just throw out, we hadn't talked about it yet, but I want to make sure people are paying attention to priorities in HTTP/2. That and the number of third-party resources are sort of the two big things to watch out for with HTTP/2 as it becomes more popular. The protocol itself and the browsers heavily depend on the server doing intelligent ordering; the browser will attach priority and dependency information to every request. But it's a requested priority, and it's up to the server to honor that, or at least do intelligent things with it.
If the server blindly shoves all of the resources down,
you're going to have a really bad experience.
And I fully expect that's going to be one of the differentiating features of HTTP/2 implementations on the server side.
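The "server doing intelligent ordering" idea can be sketched as a simplified scheduler; real HTTP/2 prioritization is a weighted dependency tree, so a flat numeric priority here is a deliberate simplification:

```typescript
// Simplified sketch of server-side HTTP/2 prioritization: the browser attaches
// a priority to each request, and a good server serves critical resources
// first instead of blindly flushing everything in arrival order.

interface QueuedResponse {
  url: string;
  priority: number; // lower number = more important (e.g. HTML/CSS before images)
}

function serveOrder(queue: QueuedResponse[]): string[] {
  // Copy, then stable-sort: equal-priority resources keep their request order.
  return [...queue]
    .sort((a, b) => a.priority - b.priority)
    .map((r) => r.url);
}
```

A server that ignores the requested priorities degenerates to arrival order, which is exactly the "blindly shoves all of the resources down" case Pat warns about.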
The other thing to watch out for, and we don't necessarily have good answers for it yet, is third-party content with HTTP/2. HTTP/2 coalesces all of the requests to a single origin, but once you have 30 different third parties on your page, you've got 30 different HTTP/2 connections that aren't necessarily prioritizing against each other, and things start to fall apart. I expect you're going to see browsers evolving in that space, but we also sort of need to figure out how we make sense of that.
Yeah, well, that's just cool. And I mean, one topic that we didn't cover at all, which is a topic that we've been talking about since the early days of dynaTrace with the Ajax Edition, when we had that tool, is the whole JavaScript performance, JavaScript execution.
I mean, obviously, you know, we could talk about this forever.
Yeah, and I mean, it's scary how much of a problem that's actually becoming on mobile phones. I mean, they're powerful, but not compared to the amount of JavaScript we're throwing at them.
So it's becoming a very big deal.
I see a lot of our users trying to figure out memory problems,
memory leaks in JavaScript, how to diagnose that,
and then how that JavaScript obviously runs on the thousands of devices out there.
And I think that's obviously, this problem is not getting easier, right?
The amount of device combinations and permutations we have out in the wild.
That's critical.
All right.
Hey, Brian, anything else from your side to kind of wrap this up?
Yeah, I wanted to go a little juvenile here for a moment, going back to when we were talking about sharding.
I wanted to make sure people knew that was S-H-A-R-D-I-N-G and not S-H-A-R-T-I-N-G.
Both of them are equally inadvisable.
If you're not sure what the T version is, go ahead and look it up. Urban Dictionary will probably give you a lot of colorful definitions there.
The other funny thing I want to bring up: Pat was talking about going back to the basics on web performance,
and still seeing all those same mistakes, or the practices not being put into place. And it just strikes me how, once again, all the old problems are new again. What I mean by this is, you know, Andy, with the free trial you see all these same mistakes being made over and over and over again: N+1 queries to a chatty database, people not handling their thread counts, and everything on the back end. And just when people start getting it under control on their back-end systems, which, you know, I think by and large they still haven't, but when they do get it under control, then they move into microservices and start making all the same mistakes again. So it kind of seems like human nature: no matter how many times people put out a list of the top 10 things to do for performance, people are going to naturally ignore them, or for whatever reason not stay on top of them, not keep as good a lookout, or not place the importance on prioritizing those. It just strikes me.
Yeah, well, and I think one of the reasons is probably because we keep
getting a new set of engineers out there all the time, right? I mean, the engineers that are now building
the new cool SPAs might not be the same engineers
that have been around 10 years ago
when Steve Souders talked about performance optimization.
You would think these would be the fundamentals, though.
You know what I mean?
Yeah, I agree.
But I guess that's why we need to shout out,
keep shouting out the same things,
because there's new people that haven't heard them before.
And, yeah.
Job security for us.
And I think we have a problem that the defaults aren't fast, right? So you actually have to know and think about it.
As sad as it is, I think that's part of what AMP is tackling,
where you can't screw it up.
And we need to sort of, I think, move the tooling and the infrastructure and everything to the point where even if they don't know what they're doing, they're not building a slow solution by default, right? And that's also, I think, what you said, Brian: we keep finding in the Dynatrace world
a lot of these performance problems,
front end, back end.
What we build into the products now is basically: when you are a developer
or a tester and you test your app,
then we automatically detect
these patterns for your front
end and back end and say,
hey, you have too many images
on the page.
Hey, you run the same
database query
50 times
through Hibernate; you may want to change the caching setting or the caching strategy.
So we actually, as tool vendors, try to advance and basically build in
the intelligence that we came up with over the years, that we do pretty well when we manually step
through the data; we try to put this into the products and basically bubble it up.
So if a new engineer
just comes out of college,
runs a quick test with WebPageTest,
maybe, hopefully,
and then has Dynatrace in the mix,
then he sees, hey,
no browser caching defined.
And on the back end,
I have a misconfiguration in Hibernate.
I wasn't even aware of it,
but now I know.
So I learn and so I build better apps.
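The "same database query 50 times" pattern Andy describes is the classic N+1 query problem. A minimal sketch in plain JavaScript, using an in-memory stand-in for the database (with Hibernate the real fix would be a fetch strategy or caching setting, as he says, but the shape of the problem is the same):

```javascript
// Hypothetical in-memory "database" that counts round trips.
class FakeDb {
  constructor() {
    this.queryCount = 0; // every call to query() counts as one round trip
    this.orders = [
      { id: 1, customerId: 1 }, { id: 2, customerId: 1 },
      { id: 3, customerId: 2 }, { id: 4, customerId: 3 },
    ];
  }
  query(filter) {
    this.queryCount++;
    return this.orders.filter(filter);
  }
}

// N+1 pattern: one query per customer, so N customers cost N round trips
// (plus the initial query that loaded the customers in a real ORM).
function loadOrdersNaive(db, customerIds) {
  return customerIds.map(id => db.query(o => o.customerId === id));
}

// Batched fix: one query for all customers at once, the equivalent of
// SQL's WHERE customer_id IN (...), then group the rows in memory.
function loadOrdersBatched(db, customerIds) {
  const all = db.query(o => customerIds.includes(o.customerId));
  return customerIds.map(id => all.filter(o => o.customerId === id));
}
```

With three customers, the naive version issues three queries and the batched version issues one; with fifty, the gap is what shows up as "the same query 50 times" in a trace.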
Okay, and if you're not following
Pat already, Pat, your Twitter handle is?
Pat Meenan.
P-A-T-M-E-E-N-A-N.
And obviously, as any listeners
already know, I'm at Emperor Wilson,
and we have at Grabner Andy.
And if you have any
questions, comments, anything on the show, you can reach us either through our Twitters or you can tweet hashtag pure performance at Dynatrace.
Any show ideas, topics, if you want to be a guest, we'd love to hear from you.
And I guess that's all I have.
Andy or Pat, any final thoughts or closing words?
The only thing I want to say is, Pat, if you want to be on the show again,
because I know you said you can talk performance
forever, if you want to
just talk about a new topic, or if you have an update
on stuff that you're working on,
I'm very glad
to host you again, because I think
we just need to, as I said,
educate the folks out there, and
all the channels that we have, we should use them and make
sure we make the web a better place, as you said earlier.
Yeah, I mean, thanks for having me.
Like I said, I love doing this.
So if there's anything anyone wants to hear more about, let me know.
I'd be happy to come back and chat about it or hit me up in person at Velocity or something
like that.
But yeah, I love talking web performance.
Well, Pat, thank you very much for being a guest.
We really appreciate it.
And it was at least an honor for me to have you on.
So thank you.
Looking forward to future engagements.
And if I can ever get out of the house,
I'm really hoping I can get to see you at Velocity someday.
Thank you for all the listeners.
And keep listening to us.
Keep these websites up and running.
Make them faster.
And I'll see you soon
or talk to you soon.
Yes.
Bye-bye.
Bye.