Since I moved the blog off Tumblr, I’ve tried to make it reasonably quick. Reasonably because it’s come in waves — waves of me saying: “Oh, that’s fine” and then deciding that whatever it is isn’t fine and needs to go.

Tumblr used to whack in about 1MB of extra guff for its own purposes. If you’re posting endless gifs then that’s not something you’ll notice, but when you’re mostly dealing in text it’s pretty obvious.

When I redesigned the site almost four years ago I remember chafing at this, but it wasn’t until I settled on my own site generator that I had the chance to really whittle things down.

Much of that was lopping off unneeded stuff such as jQuery. It did succeed in getting the size down — to about 150–200kB for an average post. But I’ve recently made a few changes to speed things up that I wanted to talk about.

Much of this has come about after reading Jacques Mattheij’s The Fastest Blog in the World and Dan Luu’s two excellent posts Speeding up this blog by 25x-50x and Most of the web really sucks if you have a slow connection. (And, well, of course, Maciej. You should really read that one.)

Fonts

For ages this site used Rooney served by Typekit as its main font. I love Rooney, it’s great. But using web fonts always means sending lots of data.

Despite thinning down the included characters (Typekit allows you to choose language support) and forgoing the use of double emphasis (<em><strong>), serving up three weights of Rooney still clocked in at over 100kB.

I’m looking at Rooney now and it is gorgeous, but there’s no way I could or can justify it — the fonts collectively would usually outweigh anything else on a page. So it went, in favour of Trebuchet MS, which I’ve long had a soft spot for.

DNS

Not related to the bytes served up, but switching from my registrar's (Hover's) name servers to Cloudflare's cut about 100ms from DNS response times (down from roughly 150ms).

You can host your DNS with Cloudflare for free without using any of their other caching services (I don’t), and Cloudflare is consistently one of the fastest DNS hosts in the world.

Syntax highlighting

Up until now I'd been using highlight.js to colour code snippets, and I'd been very happy with it. It's a nice library that's easy to work with, and it's easy to download a customised build for your own use.

But I’d been handing out a 10kB JavaScript library to everyone who visited — whether there was code to be highlighted or not. Had I included more than a handful of languages in my library it would have been even more.

My first decision was to use my blog generator’s extensions mechanism to mark every post or page that included code and would need highlighting — so if you visited a page without code, you didn’t receive the JavaScript library.
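In practice that just means detecting (or flagging) posts that contain code and only including the highlighter script for those pages. Something along these lines, though this is an illustrative sketch rather than my actual extension, and the names are made up:

    # Illustrative sketch only -- not the actual extension code.
    # The idea: only posts whose Markdown contains a fenced code block
    # get the highlighting <script> added to their page.
    import re

    CODE_FENCE = re.compile(r"^(```|~~~)", re.MULTILINE)

    def needs_highlighting(markdown_source: str) -> bool:
        """Return True if the post's Markdown contains a fenced code block."""
        return bool(CODE_FENCE.search(markdown_source))

    def extra_scripts(markdown_source: str) -> list[str]:
        # Hypothetical hook: the template appends these to the page's head.
        return ["/js/highlight.min.js"] if needs_highlighting(markdown_source) else []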

But really that wasn’t enough for me. A few things annoyed me:

  • Syntax highlighting had to be performed on every page view, on the reader's device and at their expense.
  • I could only highlight those languages I’d included in my library.
  • The library included all of my selected languages no matter what was on the page.

This wasn’t ideal.

The Markdown module I use has support for syntax highlighting, but there were some deficiencies with it that had led me to pick the client-side highlighter in the first place, several years ago.

It wasn’t difficult to fix that, however. Taking inspiration from Alex Chan, I modified the included CodeHilite extension to match my requirements, which were to handle a “natural-looking” language line and line numbers in the Markdown source. You can see the source online, but it’s pretty rough and I need to tidy it up. (It also uses Pygments’s inline line numbers instead of the table approach, which I’ve occasionally seen drift out of alignment.)
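For the curious, the core of any extension like this boils down to a single Pygments call. A minimal sketch (assuming Pygments is installed; this is not the extension itself) looks something like this:

    # Minimal sketch of server-side highlighting with Pygments,
    # using inline line numbers rather than the two-column table layout.
    from pygments import highlight
    from pygments.formatters import HtmlFormatter
    from pygments.lexers import get_lexer_by_name

    def render_code_block(code: str, language: str, first_line: int = 1) -> str:
        lexer = get_lexer_by_name(language)
        formatter = HtmlFormatter(
            linenos="inline",        # line numbers inside the code, not a separate table column
            linenostart=first_line,  # lets a block start numbering from an arbitrary line
            cssclass="codehilite",   # matches the class the CodeHilite extension emits
        )
        return highlight(code, lexer, formatter)

The output is static HTML marked up with Pygments’s CSS classes, so the only thing left to serve is a small stylesheet.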

The tradeoff here is serving a slightly larger HTML file in exchange for a much smaller JavaScript one (I kept a barebones script, about a tenth of the size, to show the plain source), and the increase in HTML size is far smaller than the JavaScript it replaced.

In all, I’ve gone from a baseline of roughly 160kB to 10–15kB per image-free post. It’s not the fastest blog in the world, especially if you’re far away from Linode’s London datacentre, but it should be pretty nippy.

What else?

There are some things which I’ve rejected so far.

  • Inlining JavaScript and CSS.

    This could make the page render faster, at the expense of transferring data that would otherwise be cached when viewing other pages. (Though, assuming this blog is like most others, most visitors will only ever see a single page.)

    But I feel a bit icky about munging separate resources together, and in mitigation the site is served over HTTP/2 (which most of the world now supports), under which inlining is considered an anti-pattern anyway.

  • Sack off the traditional front page.

    Yeah, I felt this one acutely after recently posting all those dodgy but massive Tube heat maps, which sat towards the bottom of the front page and inflated its size.

    Dan Luu has his archives as the front page, which is svelte but extreme for my tastes. We’ll see about this one.

  • Ditch your CSS.

    Yeah, I know. (Well.) But I like pretty things. And it’s only about 2.5kB.