
Adactio: Journal

https://adactio.com/journal/
The online journal of Jeremy Keith, an author and web developer living and working in Brighton, England.
https://adactio.com/journal/rss


Crossing

I’m going to America. But this time it’s going to be a bit different.

Here’s the backstory: I need to get to Chicago for An Event Apart in a couple of weeks. Jessica and I were talking about maybe going to Florida first to hang out with her family on the beach for a bit. We just needed to figure out the travel logistics.

Here’s the next variable to add to the mix: Jessica is really into ballet. Like, really into ballet. She also likes boats, ships, and all things nautical.

Those two things are normally unrelated, but then a while back, Jessica tweeted this:

OMG @ENBallet on a SHIP crossing the ATLANTIC.

Dance the Atlantic 2019 Cruise

I chuckled at that, and almost immediately dismissed it as being something from another world. But then I looked at the dates, and wouldn’t you know it, it would work out perfectly for our planned travel to Florida and Chicago.

Sooo… we’re crossing the Atlantic Ocean on the Queen Mary 2. With ballet dancers.

It’s not a cruise. It’s a crossing:

The first rule about traveling between America and England aboard the Queen Mary 2, the flagship of the Cunard Line and the world’s largest ocean liner, is to never refer to your adventure as a cruise. You are, it is understood, making a crossing. The second rule is to refrain, when speaking to those who travel frequently on Cunard’s ships, from calling them regulars. The term of art — it is best pronounced while approximating Maggie Smith’s cut-glass accent on “Downton Abbey” — is Cunardists.

Because of the black-tie gala dinners taking place during the voyage, I am now the owner of a tuxedo. I think all this dressing up is kind of like cosplay for the class system. This should be …interesting.

By all accounts, internet connectivity is non-existent on the crossing, so I’m going to be incommunicado. Don’t bother sending me any email—I won’t see it.

We sail from Southampton tomorrow. We arrive in New York a week later.

See you on the other side!

Sat, 10 Aug 2019 18:50:48 GMT


Server Timing

Harry wrote a really good article all about the performance metric Time To First Byte, called Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I put start/stop points in my code to measure how long certain operations were taking, and output the results of those measurements in Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good at displaying Server Timing, so that’s what I used while conducting this experiment).
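
For illustration, here’s roughly the shape of that technique outside of PHP: a minimal Node.js sketch, where the metric name and the placeholder work are hypothetical (this isn’t the code from Drew’s article, or what’s running on thesession.org):

// Minimal sketch: time a chunk of server-side work and surface the
// measurement to the browser in a Server-Timing header.
// The metric name "db" and the placeholder comment are hypothetical.
const http = require('http');

http.createServer( (request, response) => {
  const start = process.hrtime.bigint();

  // ...run the database queries for this page here...

  const durationInMs = Number(process.hrtime.bigint() - start) / 1e6;
  // The resulting header looks something like:
  //   Server-Timing: db;dur=48.7;desc="Database"
  // (multiple measurements can share one header, separated by commas)
  response.setHeader('Server-Timing', `db;dur=${durationInMs.toFixed(1)};desc="Database"`);
  response.end('...page output...');
}).listen(8080);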

I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Sat, 10 Aug 2019 14:43:10 GMT


Register for Indie Web Camp Brighton 2019

Back at the end of May, I wrote:

We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.

It’s going to be a lot of fun!

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

Check out the previous Brighton Indie Web Camps:

See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!

Fri, 09 Aug 2019 08:34:16 GMT


Discrete replies

Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.

The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).

I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.

At the time, I wrote:

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.

I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.

Thu, 08 Aug 2019 18:23:37 GMT


Priorities

I recently wrote about a web-specific example of a general principle for choosing the right tool for the job:

JavaScript should only do what only JavaScript can do.

I was—yet again—talking about appropriateness. Use the right technology for the task at hand. Here’s the example I gave:

It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering.

Surprisingly, I got some pushback on this. Šime Vidas wrote:

Based on my experience, this is not necessarily the case.

Going from server-side rendering and progressive enhancement via JS to a single-page app powered by a JS framework was an enormous reduction in complexity for me (so the opposite of over-engineering).

(Emphasis mine.) He goes on to say:

My main concerns are ease of use & maintainability. If you get those things right, you’re good and it’s not over-engineering.

There’s no doubt that maintainability is a desirable goal. And ease of use for the developer is also important …but I think they pale in comparison to ease of use for the end user.

To be fair, the specific use-case I mentioned was making a blog. And a blog is a personal thing. You can do whatever the heck you like on your own website and don’t let anyone tell you otherwise.

So I probably chose a poor example to illustrate my point. I was thinking more about when you’re making websites for a living. You’re being paid money to make something available on the web. In that situation, I strongly believe that user needs should win out over developer convenience.

I wrote about this recently:

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time.

That’s why I responded to Šime, saying:

Your main concern should be user needs—not your own.

When I talk about over-engineering, I’m speaking from the perspective of end users, not developers.

Before considering your ease of use, and maintainability, consider users first.

In fairness to Šime, he’s being very open and honest about his priorities. I admire that. I’ve seen too many developers try to provide user experience justifications for decisions made for developer convenience. Once again I recommend Alex’s excellent article, The “Developer Experience” Bait-and-Switch:

The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

Now I worry I wasn’t specific enough when I talked about choosing appropriate technology:

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand.

I should have made it clear that I was talking about what is appropriate or inappropriate for users. I think I made the mistake of assuming that this was obvious, and didn’t need saying. I’ll try not to make that mistake again.

There’s a whole group of tools where this point isn’t even relevant—build tools, task runners, version control …anything that never directly touches the end user; use whatever works for you. But if you’re making decisions around HTML, ARIA, CSS, or JavaScript, then appropriateness for the end user should take precedence.

If you’re in that situation—you are being paid money to make websites, and you are making technology decisions—I urge you to remember Charlie’s words: it isn’t about you.

Thu, 08 Aug 2019 17:32:59 GMT


Navigation preloads in service workers

There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.

Here’s the problem it solves…

If someone makes a return visit to your site, and the service worker you installed on their machine isn’t active yet, the service worker boots up, and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.

  1. The service worker activates.
  2. The service worker fetches the file.
  3. The service worker does something with the response.

It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.

  1. The service worker activates while simultaneously requesting the file.
  2. The service worker does something with the response.

Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.

To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:

if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
      registration.navigationPreload.enable()
    );
  });
}

If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.
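
So, for example, an existing activate listener that cleans up out-of-date caches can sit happily alongside the one above that enables navigation preloads (the cache name here is a hypothetical placeholder, not something from my actual service worker):

// A second listener on the same activate event.
// 'my-cache-v2' stands in for whatever cache name you’re currently using.
addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.keys()
    .then( cacheNames => {
      return Promise.all(
        cacheNames
        .filter( cacheName => cacheName !== 'my-cache-v2')
        .map( cacheName => caches.delete(cacheName))
      );
    })
  );
});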

Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.

Let’s say your current strategy for handling page requests looks like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // maybe cache this response for later here.
        return responseFromFetch;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.

It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.

To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:

fetchEvent.preloadResponse

If that exists, you’ll want to use it instead of fetch(request).

if (fetchEvent.preloadResponse) {
  // do something with fetchEvent.preloadResponse
} else {
  // do something with fetch(request)
}

You could structure your code like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    if (fetchEvent.preloadResponse) {
      fetchEvent.respondWith(
        fetchEvent.preloadResponse
        .then( responseFromPreload => {
          // maybe cache this response for later here.
          return responseFromPreload;
        })
        .catch( preloadError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    } else {
      fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
          // maybe cache this response for later here.
          return responseFromFetch;
        })
        .catch( fetchError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    }
  }
});

But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.

One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
}

If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
} else {
  retrieve = fetch(request);
}

If you like, you can squash that into a ternary operator:

const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);

Use whichever syntax you find more readable.

Now you can apply the same logic, regardless of whether retrieve is a navigation preload or a fetch request:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
    fetchEvent.respondWith(
      retrieve
      .then( responseFromRetrieve => {
        // maybe cache this response for later here.
        return responseFromRetrieve;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.

Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org, so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.

Jeff Posnick made this point in his write-up of bringing service workers to Google search:

Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.

Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!

By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.

Thu, 01 Aug 2019 18:33:19 GMT


Principle

I like good design principles. I collect design principles—of varying quality—at principles.adactio.com. Ben Brignell also has a (much larger) collection at principles.design.

You can spot the less useful design principles after a while. They tend to be wishy-washy; more like empty aspirational exhortations than genuinely useful guidelines for alignment. I’ve written about what makes for good design principles before. Matthew Ström also asked—and answered—What makes a good design principle?

I like those. They’re like design principles for design principles.

One set of design principles that I’ve included in my collection is from gov.uk: government design principles. I think they’re very well thought-through (although I’m always suspicious when I see a nice even number like 10 for the number of items in the list). There’s a great line in design principle number two—Do less:

Government should only do what only government can do.

This wasn’t a theoretical issue. The multiple departmental websites that preceded gov.uk were notorious for having too much irrelevant content—content that was readily available elsewhere. It was downright wasteful to duplicate that content on a government site. It wasn’t appropriate.

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand. Whether it’s task runners or JavaScript frameworks, appropriateness feels like it should be the deciding factor.

I think that the design principle from GDS could be abstracted into a general technology principle:

Any particular technology should only do what only that particular technology can do.

Take JavaScript, for example. It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering. It violates this principle:

JavaScript should only do what only JavaScript can do.

Need to manage state or immediately update the interface in response to user action? Only JavaScript can do that. But if you need to present the user with some static content, JavaScript can do that …but it’s not the only technology that can do that. HTML would be more appropriate.

I realise that this is basically a reformulation of one of my favourite design principles, the rule of least power:

Choose the least powerful language suitable for a given purpose.

Or, as Derek put it:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

ARIA should only do what only ARIA can do.

JavaScript should only do what only JavaScript can do.

CSS should only do what only CSS can do.

HTML should only do what only HTML can do.

Thu, 25 Jul 2019 18:25:19 GMT


Patterns Day video and audio

If you missed out on Patterns Day this year, you can still get a pale imitation of the experience of being there by watching videos of the talks.

Here are the videos, and if you’re not that into visuals, here’s a podcast of the talks (you can subscribe to this RSS feed in your podcasting app of choice).

On Twitter, Chris mentioned that “It would be nice if the talks had their topic listed,” which is a fair point. So here goes:

It’s fascinating to see emergent themes (other than, y’know, the obvious theme of design systems) in different talks. In comparison to the first Patterns Day, it felt like there was a healthy degree of questioning and scepticism—there were plenty of reminders that design systems aren’t a silver bullet. And I very much appreciated Yaili’s point that when you see beautifully polished design systems that have been made public, it’s like seeing the edited Instagram version of someone’s life. That reminded me of Responsive Day Out when Sarah Parmenter, the first speaker at the very first event, opened everything by saying “most of us are winging it.”

I can see the value in coming to a conference to hear stories from people who solved hard problems, but I think there’s equal value in coming to a conference to hear stories from people who are still grappling with hard problems. It’s reassuring. I definitely got the vibe from people at Patterns Day that it was a real relief to hear that nobody’s got this figured out.

There was also a great appreciation for the “big picture” perspective on offer at Patterns Day. For myself, I know that I’ll be cogitating upon Danielle’s talk and Emil’s talk for some time to come—both are packed full of interesting ideas.

Good thing we’ve got the videos and the podcast to revisit whenever we want.

And if you’re itching for another event dedicated to design systems, I highly recommend snagging a ticket for the Clarity conference in San Francisco next month.

Tue, 23 Jul 2019 11:55:23 GMT


Trad time

Fifteen years ago, I went to the Willie Clancy Summer School in Miltown Malbay:

I’m back from the west of Ireland. I was sorry to leave. I had a wonderful, music-filled time.

I’m not sure why it took me a decade and a half to go back, but that’s what I did last week. Myself and Jessica once again immersed ourselves in Irish traditional music. I’ve written up a trip report over on The Session.

On the face of it, fifteen years is a long time. Last time I made the trip to county Clare, I was taking pictures on a point-and-shoot camera. I had a phone with me, but it had a T9 keyboard that I could use for texting and not much else. Also, my hair wasn’t grey.

But in some ways, fifteen years feels like the blink of an eye.

I spent my mornings at the Willie Clancy Summer School immersed in the history of Irish traditional music, with Paddy Glackin as a guide. We were discussing tradition and change in generational timescales. There was plenty of talk about technology, but we were as likely to discuss the influence of the phonograph as the influence of the internet.

Outside of the classes, there was a real feeling of lengthy timescales too. On any given day, I would find myself listening to pre-teen musicians at one point, and septuagenarian masters at another.

Now that I’m back in the Clearleft studio, I’m finding it weird to adjust back into the shorter timescales of working on the web. Progress is measured in weeks and months. Technologies are deemed outdated after just a year or two.

The one bridging point I have between these two worlds is The Session. It’s been going in one form or another for over twenty years. And while it’s very much on and of the web, it also taps into a longer tradition. Over time it has become an enormous repository of tunes, for which I feel a great sense of responsibility …but in a good way. It’s not something I take lightly. It’s also something that gives me great satisfaction, in a way that’s hard to achieve in the rapidly moving world of the web. It’s somewhat comparable to the feelings I have for my own website, where I’ve been writing for eighteen years. But whereas adactio.com is very much focused on me, thesession.org is much more of a community endeavour.

I question sometimes whether The Session is helping or hindering the Irish music tradition. “It all helps”, Paddy Glackin told me. And I have to admit, it was very gratifying to meet other musicians during Willie Clancy week who told me how much the site benefits them.

I think I benefit from The Session more than anyone though. It keeps me grounded. It gives me a perspective that I don’t think I’d otherwise get. And in a time when it feels entirely right to question whether the internet is even providing a net gain to our world, I take comfort in being part of a project that I think uses the very best attributes of the World Wide Web.

Tue, 16 Jul 2019 18:46:10 GMT


Summer of Apollo

It’s July, 2019. You know what that means? The 50th anniversary of the Apollo 11 mission is this month.

I’ve already got serious moon fever, and if you’d like to join me, I have some recommendations…

Watch the Apollo 11 documentary in a cinema. The 70mm footage is stunning, the sound design is immersive, the music is superb, and there’s some neat data visualisation too. Watching a preview screening in the Duke of York’s last week was pure joy from start to finish.

Listen to 13 Minutes To The Moon, the terrific ongoing BBC podcast by Kevin Fong. It’s got all my favourite titans of NASA: Michael Collins, Margaret Hamilton, and Charlie Duke, amongst others. And it’s got music by Hans Zimmer.

Experience the website Apollo 11 In Real Time on the biggest monitor you can. It’s absolutely wonderful! From July 16th, you can experience the mission timeshifted by exactly 50 years, but if you don’t want to wait, you can dive in right now. It genuinely feels like being in Mission Control!

Thu, 04 Jul 2019 18:19:55 GMT

