Chris’ Corner: Performance Talk
We’ve zoomed right into 2023, so let’s keep up the pace and talk about making the web speedy.
– Chris Coyier, professional newsletter writer.
Um, but anyway, I do have some choice web performance links I’ve been saving up for you this week.
Turns out HTML is good.
Perhaps my favorite performance-related post of 2022 was Scott Jehl’s Will Serving Real HTML Content Make A Website Faster? Let’s Experiment!. It makes logical sense that a website that serves all the HTML it needs to render upfront will load faster than one that loads an HTML shell, then loads JavaScript, which then makes subsequent requests for data that gets turned into HTML and rendered. But it’s nice to see it tested completely apples to apples. Scott took websites that serve no “useful HTML” up front and used a new feature of WebPageTest called Opportunities & Experiments to re-run a performance test with manipulations in place (namely “useful HTML” instead of the shell HTML).
The result is a massive improvement in how fast the useful content renders. Server-side rendering (SSR) is just… better.
As always, Scott has a classy caveat:
Now, it’s very important to note that while the examples in this post helpfully display this pattern, the architectural decisions of these sites are made thoughtfully by highly-skilled teams. Web development involves tradeoffs, and in some cases a team may deem the initial performance impact of JS-dependence a worthy compromise for benefits they get in other areas, such as personalized content, server costs and simplicity, and even performance in long-lived sessions. But tradeoffs aside, it’s reasonable to suspect that if these sites were able to deliver meaningful HTML up-front, a browser would be able to render initial content sooner and their users would see improvements as a result.
I suspect it’ll be rarer and rarer to see sites that are 100% client rendered. The best developer tooling we have includes SSR these days, so let’s use it.
Fortunately, there are all kinds of tools that point us in this direction anyway. Heavyweights like Next.js, which helps developers build sites in React, are SSR by default. That’s huge. And you can still fetch data with their getServerSideProps concept, retaining the dynamic nature of client-side rendering. Newer tech like Astro is all-in on producing HTML from JavaScript frameworks while helping you do all the dynamic things you’d want, either by running the dynamic stuff server-side or delaying client-side JavaScript until it’s needed.
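To make that getServerSideProps idea concrete, here’s a minimal sketch of a Next.js page (the endpoint and prop names are made up for illustration). Next.js runs the function on the server for every request, so the response that arrives is already useful HTML:

export async function getServerSideProps() {
  // This fetch happens on the server, before any HTML is sent to the browser.
  const res = await fetch('https://api.example.com/posts');
  const posts = await res.json();
  return { props: { posts } };
}

export default function PostsPage({ posts }) {
  // By the time this runs in the browser, it’s hydrating real HTML,
  // not building the page from scratch.
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}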
So if your brain goes “well, my app needs to make API requests in order to render,” now you have your answer: there are all kinds of tools to help you make those API requests server-side. Myself, I’m a fan of making edge servers do those requests for you. Any request the client can make, another server can make too, only almost certainly faster. And if that allows you to dunk the HTML in the first response, you should.
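Here’s a rough sketch of that edge idea, assuming a Cloudflare-Workers-style fetch handler (the API URL and markup are made up). The edge function makes the API request the client would have made and ships finished HTML in the very first response:

export default {
  async fetch(request) {
    // The edge does the data fetching the client-side app would have done...
    const res = await fetch('https://api.example.com/posts');
    const posts = await res.json();

    // ...and returns useful HTML straight away.
    const html = `<ul>${posts.map((p) => `<li>${p.title}</li>`).join('')}</ul>`;
    return new Response(html, {
      headers: { 'content-type': 'text/html; charset=utf-8' },
    });
  },
};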
It’s all about putting yourself in that HTML-First Mental Model, as Noam Rosenthal says. Letting tools do that is a great start, but, brace yourself, not using JavaScript at all is the best possible option. I really like the example Noam puts in the article here. JavaScript libraries have taught us to do stuff like checking to see if we have data and conditionally rendering an empty state if not. But that requires manipulation of the DOM as data changes. That kind of “state” manipulation can be done in CSS as well, by hiding/showing things already in the DOM with the display property. Especially now with :has() (last week’s CodePen Challenge, by the way), this kind of state checking is getting trivial to do.
.dynamic-content-area {
  display: none;
}

.loader {
  display: block;
}

.dynamic-content-area:has(:first-child) {
  display: block;
}

.dynamic-content-area:has(:first-child) + .loader {
  display: none;
}
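For context, here’s a sketch of the markup that CSS assumes (the class names are just illustrative): a content area that starts out empty, with the loader as its next sibling. Once JavaScript appends any element into the content area, :has(:first-child) matches, the content shows, and the loader hides.

<!-- Starts empty; JavaScript appends the fetched content here. -->
<div class="dynamic-content-area"></div>

<!-- Visible only while the content area above is still empty. -->
<div class="loader">Loading…</div>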
I like that kind of thinking.
The async attribute is the way.
Harry Roberts digs into a somewhat old JavaScript loading pattern in Speeding Up Async Snippets. Have you ever seen this?
<script>
  var script = document.createElement('script');
  script.src = "https://third-party.io/bundle.min.js";
  document.head.appendChild(script);
</script>
Major browsers haven’t needed that pattern for over a decade now, and worse, it’s bad for performance, the very thing it’s trying to help with:
For all the resulting script is asynchronous, the <script> block that creates it is fully synchronous, which means that the discovery of the script is governed by any and all synchronous work that happens before it, whether that’s other synchronous JS, HTML, or even CSS. Effectively, we’ve hidden the file from the browser until the very last moment, which means we’re completely failing to take advantage of one of the browser’s most elegant internals… the Preload Scanner.
As Harry says:
…we can literally just swap this out for the following in the same location or later in your HTML:
<script src="https://third-party.io/bundle.min.js" async></script>
Gotta preserve that aspect ratio on images before they load.
It’s worth shouting from the rooftops: put width and height attributes on your <img> tags in HTML. This allows the browser to reserve space for them while they load and prevents content from jerking around once they do. This is a meaningful performance benefit.
It’s really just the aspect ratio of those two numbers that matters. So even if you won’t display the image at the size indicated by the attributes (99% chance you won’t, because you’ll restrict the maximum width of the image), the browser will still reserve the correct amount of space.
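For example (the dimensions here are made up), the attributes just need to reflect the image’s natural ratio:

<img src="photo.jpg" width="1600" height="900" alt="A 16:9 photo">

Paired with the usual max-width: 100%; height: auto in your CSS, the browser still reserves a correctly proportioned box while the file downloads, however narrow the column ends up.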
Jake Archibald puts a point on all this.
You can achieve the same effect with aspect-ratio in CSS, but in practice, we’re talking about content <img>s here, which are usually of arbitrary size, so the correct place for this information is in the HTML. There is a little nuance to it, as you might imagine, which Jake covers well.
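If you do reach for the CSS route for something where the ratio is known up front, like a hero image or a thumbnail grid, it’s about one line (the selector is made up):

.hero img {
  aspect-ratio: 16 / 9; /* only works when you know the ratio ahead of time */
  width: 100%;
}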
JPEG XL, we hardly knew thee.
Speaking of images, it looks like Chrome threw in the towel on supporting the JPEG XL format, at least for now. It was behind a flag to begin with, so there’s no real ecosystem harm or anything. They essentially said: flags are flags and aren’t meant to live forever; we would have shipped it, but it’s just not good enough to warrant the maintenance, and besides, no other browser was that into it either.
That actually sounds pretty reasonable of Chrome at first glance. But Jon Sneyers has a full-throated response in The Case for JPEG XL to everything aside from the “flags shouldn’t last forever” part. I’d consider these things pretty darn excellent:
- JPEG XL can be used to re-encode any existing JPEG entirely losslessly and have it be 20% smaller on average. There are a boatload of JPEGs on the internet, so this seems big.
- JPEG XL can do the progressive-loading thing, where once 15% of the data has arrived, it can show a low-res version of itself. The popularity of the “blur up” technique proves this is desirable.
- JPEG XL is fast to encode. This seems significant because the latest hot new image format, AVIF, is quite the opposite.
So even if JPEG XL weren’t much of a leap forward in compressed size, it would still seem worth supporting. But Jon is saying that “in the quality range relevant to the web,” JPEG XL is 10-15% better than even AVIF.
Of course, Cloudinary is incentivized to push for new formats, since part of its business model is helping people take advantage of them.
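If you want to experiment with JPEG XL anyway, the usual progressive-enhancement markup applies: a sketch, using image/jxl as the source type with a plain JPEG fallback that every browser understands. Browsers that don’t recognize the type simply skip that <source> and use the JPEG.

<picture>
  <source srcset="photo.jxl" type="image/jxl">
  <img src="photo.jpg" width="1600" height="900" alt="A photo">
</picture>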