Julia Evans


When your coworker does great work, tell their manager

I’ve been thinking recently about anti-racism and what it looks like to support colleagues from underrepresented groups at work. The other day someone in a Slack group made an offhand comment that they’d sent a message to an engineer’s manager to say that the engineer was doing exceptional work.

I think telling someone’s manager they’re doing great work is a pretty common practice and it can be really helpful, but it’s easy to forget to do and I wish someone had suggested it to me earlier. So let’s talk about it!

I tweeted about this to ask how people approach it and as usual I got a ton of great replies that I’m going to summarize here.

We’re going to talk about what to say, when to do this, and why you should ask first.

ask if it’s ok first

One thing that at least 6 different people brought up was the importance of asking first. It might not be obvious why this is important at first — you’re saying something positive! What’s the problem?

So here are some potential reasons saying something positive to someone’s manager could backfire:

  1. Giving someone a compliment that’s not in line with their current goals. For example, if your coworker is trying to focus on becoming a technical expert in their domain and you’re impressed with their project management skills, they might not want their project management highlighted (or vice versa!).
  2. Giving someone the wrong “level” of compliment. For example, if they’re a very senior engineer and you say something like “PERSON did SIMPLE_ROUTINE_TASK really well!” — that doesn’t reflect well on them and feels condescending. This can happen if you don’t know the person’s position or don’t understand the expectations for their role.
  3. If your coworker was supposed to be focusing on a specific project, and you’re complimenting them for helping with something totally unrelated, their manager might think that they’re not focusing on their “real” work. One person mentioned that they got reprimanded by their manager for getting a spot peer bonus for helping someone on another team.
  4. Some people have terrible managers (for example, maybe the manager will feel threatened by your coworker excelling)
  5. Some people just don’t like being called out in that way, and are happy with the level of recognition they’re getting!

Overall: a lot of people (for very good reasons!) want to have control over the kind of feedback their manager hears about them.

So just ask first! (“hey, I was really impressed with your work on X project and wanted to send this note to $MANAGER to explain how important your work was, because I know she wasn’t that involved in X project and might not have seen everything you did. Is that ok with you?”)

when it’s important: to highlight work that isn’t being recognized

Okay, now let’s talk about when this is important to do. I think this is pretty simple – managers don’t always see the work their reports are doing, and if someone is doing really amazing work that their manager isn’t seeing, they won’t get promoted as quickly. So it’s helpful to tell managers about work that they may not be seeing.

Here are some examples of types of important work that might be underrecognized:

Also, everyone agreed that it’s always great to highlight the contributions of more junior coworkers when they’re doing well.

why it matters: it helps managers make a case for promotion

For someone to get promoted, they need evidence that they’ve been doing valuable work, and managers don’t always have the time to put together all that evidence. So it’s important to be proactive!

You can work on this for yourself by writing a brag document, but having statements from coworkers explaining how great your work is really helps build credibility.

So providing these statements for your coworkers can help them get recognized in a timely way for the great work they did (instead of getting promoted a year later or something). It’s extra helpful to do this if you know the person is up for promotion.

how to do it: be specific, explain the impact of their work

Pretty much everyone agreed that it’s helpful to explain what specifically the person did that was awesome (“X did an incredible job of designing this system and we haven’t had any major operational issues with it in the 6 months since it launched, which is really unusual for a project of that scale”).

how to do it: highlight when they’re exceeding expectations

Because the point is to help people get promoted, it’s important to highlight when people are exceeding expectations for their level, for example if they’re not a senior engineer yet but they’re doing the kind of work you’d expect from a senior engineer.

how to do it: send the person the message too

We already basically covered this in “ask the person first”, but especially if I’m using a feedback system where the person might not get the feedback immediately I like to send it to them directly as well. It’s nice for them to hear and they can also use it later on!

public recognition can be great too!

A couple of folks mentioned that they like to give public recognition, like mentioning how great a job someone did in a Slack channel or team meeting.

Two reasons public recognition can be good:

  1. It helps build credibility for your colleague
  2. It lets the person you’re recognizing be part of the conversation and reciprocate, especially if the work was a collaboration.

Again, it’s good to ask before doing this – some people dislike public recognition.

on peer bonuses

A few people who work at Google (or other companies with peer bonuses) mentioned that they prefer to give peer bonuses for this because it’s a more official form of recognition.

Lots of people mentioned other forms of feedback systems that they use instead of email. Use whatever form of recognition is appropriate at your company!

anyone can do this

What I like about this is it’s a way everyone can help their coworkers – even if you’re really new and don’t feel that qualified to comment on how effective someone more senior is at their job, you can still point out things like “this person helped me do a project that was really out of my comfort zone!”

maybe expand the set of people you do this for!

I think it’s very common for people to promote the work of their friends in this way. I’ve tried to expand the set of people I do this for over time – I think it’s important to keep an eye out for coworkers who are really excelling and to make sure their work is recognized.

more reading on sponsorship

I wanted to just talk about this one specific practice of telling someone’s manager they’re doing great work but there are a LOT of other ways you can help lift your coworkers up. Lara Hogan’s post what does sponsorship look like? has a lot of great examples.

Mekka Okereke has a wonderful Twitter thread about another way you can support underrepresented folks: by being a “difficulty anchor”. It’s short and definitely worth a read.

thanks to Sher Minn Chong, Allie Jones, and Kamal Marhubi for reading a draft of this


scanimage: scan from the command line!

Here’s another quick post about a command line tool I was delighted by.

Last night, I needed to scan some documents for some bureaucratic reasons. I’d never used a scanner on Linux before and I was worried it would take hours to figure out. I started by using gscan2pdf and had trouble figuring out the user interface – I wanted to scan both sides of the page at the same time (which I knew our scanner supported) but couldn’t get it to work.

enter scanimage!

scanimage is a command line tool, in the sane-utils Debian package. I think all Linux scanning tools use the sane libraries (“scanner access now easy”) so my guess is that it has similar abilities to any other scanning software. I didn’t need OCR in this case so we’re not going to talk about OCR.
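If you don’t have it installed, on Debian/Ubuntu it should just be a package install away (assuming apt here):

```shell
sudo apt install sane-utils
```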

get your scanner’s name with scanimage -L

scanimage -L lists all scanning devices you have.

At first I couldn’t get this to work and I was a bit frustrated but it turned out that I’d connected the scanner to my computer, but not plugged it into the wall. Oops.

Once everything was plugged in it worked right away. Apparently our scanner is called fujitsu:ScanSnap S1500:2314. Hooray!

list options for your scanner with --help

Apparently each scanner has different options (makes sense!) so I ran this command to get the options for my scanner:

scanimage --help -d 'fujitsu:ScanSnap S1500:2314' 

I found out that my scanner supported a --source option (which I could use to enable duplex scanning) and a --resolution option (which I changed to 150 to decrease the file sizes and make scanning faster).
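For example, a duplex scan with those options might look something like this – I’m guessing at the 'ADF Duplex' source name here, and the exact option names come from your own scanner’s --help output:

```shell
# batch-scan both sides of each page as PNGs at 150 DPI
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' \
  --source 'ADF Duplex' --resolution 150
```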

scanimage doesn’t output PDFs (but you can write a tiny script)

The only downside was – I wanted a PDF of my scanned document, and scanimage doesn’t seem to support PDF output.

So I wrote this short shell script to scan a bunch of PNGs into a temp directory and convert the resulting PNGs to a PDF.

#!/bin/bash
set -e

CUR=$(pwd)       # remember the directory we started in, so the PDF ends up there
DIR=$(mktemp -d)
cd "$DIR"
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' --source 'ADF Front' --resolution 150
convert *.png "$CUR/$1"

I ran the script like this: scan-single-sided output-file-to-save.pdf

You’ll probably need a different -d and --source for your scanner.

it was so easy!

I always expect using printers/scanners on Linux to be a nightmare and I was really surprised how scanimage Just Worked – I could just run my script with scan-single-sided receipts.pdf and it would scan a document and save it to receipts.pdf!


Twitter summary from 2020 so far

Hello! I post a lot of things on Twitter and it’s basically impossible for anyone except me to keep up with them all, so I thought I’d write a summary of everything I posted on Twitter in 2020 so far.

A lot of these things I eventually end up writing about on the blog, but some of them I don’t, so I figured I’d just put everything in one place.

I’ve made most of the links to non-Twitter websites.


Let’s start with the comics, since that’s a lot of what I write there.


These are from a debugging zine I’m still trying to finish. (https://wizardzines.com/zines/bugs/)

writing tips

computer science


These are part of a potential sequel to bite size linux



These mostly got published as How Containers Work. As usual the final zine was edited a lot and some of these didn’t make it into the zine at all or I significantly rewrote the version in the zine.


A bunch of work on https://questions.wizardzines.com.


A bunch of earlier work on https://flashcards.wizardzines.com. I came up with a direction for this project I liked better (https://questions.wizardzines.com) and won’t be updating that site further.


At the beginning of the year I did some experiments in making screencasts. It was fun but I haven’t done more so far. These are all links to youtube videos.


I’m not a big Twitter thread person (I’d usually rather write a blog post) but I wrote one thread so far this year about how I think about the zine business:

zine announcements


I know that $12 USD is a lot of money for some people, especially folks in countries like Brazil with a weaker currency relative to the US dollar. So periodically I do giveaways on Twitter so that people who can’t afford $12 can get the zines. I aim to give away 1 copy for every sale.


very occasionally I ask people questions:

that’s all!

I’ve been thinking about trying to do a monthly summary here of what I’m writing on Twitter. We’ll see if that happens!


saturday comics: a weekly mailing list of programming comics

Hello! This post is about a mailing list (Saturday Comics) that I actually started a year ago. I realized I never wrote about it on this blog, which is maybe better anyway because now I know more about how it’s gone over the last year!

I think the main idea in this post is probably – if you want to have a mailing list that’s useful to people, but don’t have the discipline to write new email all the time, consider just making a mailing list of your best past work!

Let’s start by talking about some of the problems I wanted to solve with this mailing list.

problems I wanted to solve

problem 1: not everyone is on Twitter.

I pretty much exclusively post draft zine pages to Twitter, but not everyone is on Twitter all the time. Lots of people aren’t on Twitter at all, for lots of very good reasons! So only posting my progress on my zines to Twitter felt silly.

problem 2: weekly mailing lists felt impossible:

I kept hearing “julia, you need a mailing list, mailing lists are the best”. So I wanted to set up some kind of “mailing list” or something. Okay! I’ve tried to set up a “weekly mailing list” of sorts a few times, and inevitably what happens is:

For obvious reasons, that’s not super effective.

problem 3: it was impossible to find my “best” work:

I have an idea in my head of what my “best” comics are, but there was no way for anyone other than me to find that out, even though I know that some of my comics are a lot more useful to people than others.

I also recently added https://wizardzines.com/comics/ as another way to fix this.

send my favourite comics, not the newest comics

Unlike this blog (where people can read my newest work), I decided to use a different model: let people see some of my favourite comics.

The way I thought about this was – if someone isn’t familiar with my work and wants to learn more, they’re more likely to find something interesting to them in my “best” work than just whatever I happen to be working on at the time.

solution: saturday comics, an automated weekly mailing list

So! I came up with “saturday comics”. The idea is pretty simple: you get 1 programming comic in your email every Saturday.

Unlike a normal weekly mailing list, though, you don’t get the “latest” email – instead, there’s a fixed list of emails in the list, and everyone who signs up gets all the emails in the list starting from the beginning.

For example, the first email is called “bash tricks”, and so if someone signs up today, they’ll get the “bash tricks” email on Saturday.

so far: 29 weeks of email

So far the list has 29 weeks (7 months) of email – if you sign up today, you’ll get a comic every week for at least 29 weeks.

You might notice that 29 is less than 52 and think “wait, you said this list has existed for a year!“. I haven’t quite kept up with 1 email a week so far. What happens in practice is that I’ll add 5 new emails, they’ll get sent out over 5 weeks, then subscribers will stop getting email for a while, and eventually I’ll add more emails and they’ll start getting email again.

It’s maybe not ideal, but I think it’s okay, and it’s definitely better than my previous mailing list practices of “literally never email the mailing list ever”.

so far: 5000 people have subscribed, and people seem to like it!

5000 people have subscribed to the list so far, and people seem to like it – I pretty often get replies saying “hey, thanks for this week’s comic, I loved this one” or see people tweeting about how they loved this week’s email.

You can sign up here if you want.

how it works: a ConvertKit sequence

The way I implemented it is with a ConvertKit sequence. Here’s an example of what the setup looks like: there’s a list of subject lines & when they’re scheduled to go out (like “1 week after the last email”), and then you can fill in each email’s content. I’ve found it pretty straightforward to use so far.

marketing = building trust

This list is sort of a marketing tool, but I’ve learned to think of marketing (at least for my business) as just building trust by helping people learn new things. So instead of worrying about optimizing conversion rates or whatever (which has never helped me at all), I just try to send emails to the list that will be helpful.

With every comic I include a link to the zine that it’s from in case people want to buy the zine, but I try to not be super in-your-face about it – if folks want to buy my zines, that’s great, if they want to just enjoy the weekly comics, that’s great too.

that’s all!

This idea of a mailing list where you send out your favourite work instead of your latest work was really new to me, and I’m happy with how it’s gone so far!


Tell candidates what to expect from your job interviews

In my last job, I helped with a few projects (like brag documents and the engineering levels) to help make the engineering culture a little more inclusive, and I want to talk about one of them today: making the interview process a little easier to understand for candidates.

I worked on this project for a few days way back in 2015 and I’m pretty happy with how it turned out.

giving everyone a little information helps level the playing field

Different tech companies run their interviews in very different ways, and I think it’s silly to expect candidates to magically intuit how your company’s interview process works.

It sucks for everyone when a candidate is surprised with an unexpected interview. For example, at the time the debugging interview required candidates to have a dev environment set up on their computer that let them install a library & run the tests. Sometimes candidates didn’t have their environment set up the right way, which was a waste of everyone’s time! The point of the interview wasn’t to watch people install bundler!

different companies have different rubrics

Also, different companies actually test different things in their interviews! At that job we didn’t care if people used Stack Overflow during their interviews and didn’t interview for algorithms expertise, but lots of companies do interview for algorithms expertise.

Telling people in advance what they’ll be measured on makes it way easier for them to prepare: if you tell them they won’t be asked algorithms questions, they don’t have to waste their time practicing implementing breadth first search or whatever.

solution: write a short document!

My awesome coworker Kiran had a simple idea to help solve this problem: write a document explaining what to expect from the interview process! She wrote the document and I helped edit it a bit.

We called it On-site interviews for Engineering: What to expect (that link is to an old revision of that document I found in the internet archive).

It covered:

keep it updated over time

That document was originally written in April 2015. A lot of things changed about the interview process over time, and so it needed to be kept updated.

I think the work of keeping the document updated is even more important than writing it in the first place, and a lot of amazing people worked on that. I don’t work there anymore, but some quick Googling turned up what I think is the current version of that document, and it’s great!

documenting your interview process is pretty easy

In my experience, advocating for changes to an interview process is really hard. You need to propose a new interview process, test the interviews, convince interviewers to get on board – it takes a long time.

In comparison, documenting an existing interview process (without changing it!!) is WAY EASIER. My memory is pretty fuzzy, but I think basically nobody objected to documenting the interview process the company already had – it was just factual information about what we were already doing! Way less controversial.

you can make small changes to your company’s culture

Making the companies I work at a better place for everyone to work is important to me. It’s a huge project, and I’ve tried a lot of things that haven’t worked.

But I’ve found it rewarding to work on changes like this that make one small thing a little better for people.

thanks to Kiran Bhattaram for coming up with this idea in the first place and for reviewing a draft of this post, and to @jilljubs for reminding me of it earlier today


entr: rerun your build when files change

This is going to be a pretty quick post – I found out about entr relatively recently and I felt like WHY DID NOBODY TELL ME ABOUT THIS BEFORE?!?! So I’m telling you about it in case you’re in the same boat as I was.

There’s a great explanation of the tool with lots of examples on entr’s website.

The summary is in the headline: entr is a command line tool that lets you run an arbitrary command every time any of a set of specified files changes. You pass it the list of files to watch on stdin, like this:

git ls-files | entr bash my-build-script.sh


find . -name '*.rs' | entr cargo test

or whatever you want really.

quick feedback is amazing

Like possibly every single programmer in the universe, I find it Very Annoying to have to manually rerun my build / tests every time I make a change to my code.

A lot of tools (like hugo and flask) have a built in system to automatically rebuild when you change your files, which is great!

But often I have some hacked together custom build process that I wrote myself (like bash build.sh), and entr lets me have a magical build experience where I get instant feedback on whether my change fixed the weird bug with just one line of bash. Hooray!

restart a server (entr -r)

Okay, but what if you’re running a server, and the server needs to be restarted every time you change a file? entr’s got you – if you pass -r, it’ll restart the server every time a file changes:

git ls-files | entr -r python my-server.py

clear the screen (entr -c)

Another neat flag is -c, which lets you clear the screen before rerunning the command, so that you don’t get distracted/confused by the previous build’s output.

use it with git ls-files

Usually the set of files I want to track is about the same list of files I have in git, so git ls-files is a natural thing to pipe to entr.

I have a project right now where sometimes I have files that I’ve just created that aren’t in git just yet. So what if you want to include untracked files? These git command line arguments will do it (I got them from an email from a reader, thank you!):

git ls-files -cdmo --exclude-standard  | entr your-build-script

Someone emailed me and said they have a git-entr command that runs

git ls-files -cdmo --exclude-standard | entr -d "$@"

which I think is a great idea.

restart every time a new file is added: entr -d

The other problem with this git ls-files thing is that sometimes I add a new file, and of course it’s not in git yet. entr has a nice feature for this – if you pass -d, then when you add a new file in any of the directories entr is tracking, it’ll exit.

I’m using this paired with a little while loop that will restart entr to include the new files, like this:

while true; do
  { git ls-files; git ls-files . --exclude-standard --others; } | entr -d your-build-script
done

how entr works on Linux: inotify

On Linux, entr works using inotify (a system for tracking filesystem events like file changes) – if you strace it, you’ll see an inotify_add_watch system call for each file you ask it to watch, like this:

inotify_add_watch(3, "static/stylesheets/screen.css", IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE_SELF|IN_MOVE_SELF) = 1152
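Here’s one way to produce that trace yourself, assuming you have strace and entr installed (true is just a do-nothing command for entr to run; ctrl-c to exit):

```shell
# trace only inotify_add_watch calls made by entr as it sets up its watches
echo static/stylesheets/screen.css | strace -f -e trace=inotify_add_watch entr true
```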

that’s all!

I hope this helps a few people learn about entr!


A little bit of plain Javascript can do a lot

I’ve never worked as a professional frontend developer, so even though I’ve been writing HTML/CSS/JS for little side projects for 15 years, all of the projects have been pretty small, I sometimes go years without writing any Javascript at all, and I often don’t quite feel like I know what I’m doing.

Partly because of that, I’ve leaned on libraries a lot! Ten years ago I used to use jQuery, and since maybe 2017 I’ve been using a lot of vue.js for my little Javascript projects (you can see a little whack-a-mole game I made here as an intro to Vue).

But last week, for the first time in a while, I wrote some plain Javascript without a library and it was fun so I wanted to talk about it a bit!

experimenting with just plain Javascript

I really like Vue. But last week when I started building https://questions.wizardzines.com, I had slightly different constraints than usual – I wanted to use the same HTML to generate both a PDF (with Prince) and to make an interactive version of the questions.

I couldn’t really see how that would work with Vue (because Vue wants to create all the HTML itself), and because it was a small project I decided to try writing it in plain Javascript with no libraries – just write some HTML/CSS and add a single <script src="js/script.js"> </script>.

I hadn’t done this in a while, and I learned a few things along the way that made it easier than I thought it would be when I started.

do almost everything by adding & removing CSS classes

I decided to implement almost all of the UI by just adding & removing CSS classes, and using CSS transitions if I want to animate a transition.

here’s a small example, where clicking the “next” question button adds the “done” class to the parent div.

div.querySelector('.next-question').onclick = function () {
    div.classList.add('done');
};

This worked pretty well. My CSS as always is a bit of a mess but it felt manageable.

add/remove CSS classes with .classList

I started out by editing the classes like this: x.className = 'new list of classes'. That felt a bit messy though and I wondered if there was a better way. And there was!

You can also add CSS classes like this:

let x = document.querySelector('div');
x.classList.add('hi');

x.classList.remove('hi') is way cleaner than what I was doing before.

find elements with document.querySelectorAll

When I started learning jQuery I remember thinking that if you wanted to easily find something in the DOM you had to use jQuery (like $('.class')). I just learned this week that you can actually write document.querySelectorAll('.some-class') instead, and then you don’t need to depend on any library!

I got curious about when querySelectorAll was introduced. I Googled a tiny bit and it looks like the Selectors API was built sometime between 2008 and 2013 – I found a post from the jQuery author discussing the proposed implementation in 2008, and a blog post from 2011 saying it was in all major browsers by then, so maybe it didn’t exist when I started using jQuery but it’s definitely been around for quite a while :)

set .innerHTML

In one place I wanted to change a button’s HTML contents. Creating DOM elements with document.createElement is pretty annoying, so I tried to do that as little as possible and instead set .innerHTML to the HTML string I wanted:

    button.innerHTML = `<i class="icon-lightbulb"></i>I learned something!
    <object data="/confetti.svg" width="30" height="30"> </object>`;

scroll through the page with .scrollIntoView

The last fun thing I learned about is .scrollIntoView – I wanted to scroll down to the next question automatically when someone clicked “next question”. Turns out this is just one line of code:

row.scrollIntoView({behavior: 'smooth', block: 'center'});

another vanilla JS example: peekobot

Another small example of a plain JS library I thought was nice is peekobot, which is a little chatbot interface that’s 100 lines of JS/CSS.

Looking at its Javascript, it uses some similar patterns – a lot of .classList.add, some adding elements to the DOM, some .querySelectorAll.

I learned from reading peekobot’s source about .closest which finds the closest ancestor that matches a given selector. That seems like it would be a nice way to get rid of some of the .parentElement.parentElement that I was writing in my Javascript, which felt a bit fragile.

plain Javascript can do a lot!

I was pretty surprised by how much I could get done with just plain JS. I ended up writing about 50 lines of JS to do everything I wanted to do, plus a bit extra to collect some anonymous metrics about what folks were learning.

As usual with my frontend posts, this isn’t meant to be Serious Frontend Engineering Advice – my goal is to be able to write little websites with less than 200 lines of Javascript that mostly work. If you are also flailing around in frontend land I hope this helps a bit!


What happens when you update your DNS?

I’ve seen a lot of people get confused about updating their site’s DNS records to change the IP address. Why is it slow? Do you really have to wait 2 days for everything to update? Why do some people see the new IP and some people see the old IP? What’s happening?

So I wanted to write a quick exploration of what’s happening behind the scenes when you update a DNS record.

how DNS works: recursive vs authoritative DNS servers

First, we need to explain a little bit about DNS. There are 2 kinds of DNS servers: authoritative and recursive.

authoritative DNS servers (also known as nameservers) have a database of IP addresses for each domain they’re responsible for. For example, right now an authoritative DNS server for github.com is ns-421.awsdns-52.com. You can ask it for github.com’s IP like this:

dig @ns-421.awsdns-52.com github.com

recursive DNS servers, by themselves, don’t know anything about who owns what IP address. They figure out the IP address for a domain by asking the right authoritative DNS servers, and then cache that IP address in case they’re asked again. (Cloudflare’s public resolver) is an example of a recursive DNS server.

When people visit your website, they’re probably making their DNS queries to a recursive DNS server. So, how do recursive DNS servers work? Let’s see!

how does a recursive DNS server query for github.com?

Let’s go through an example of what a recursive DNS server (like does when you ask it for an IP address (A record) for github.com. First – if it already has something cached, it’ll give you what it has cached. But what if all of its caches are expired? Here’s what happens:

step 1: it has IP addresses for the root DNS servers hardcoded in its source code. You can see this in unbound’s source code here. Let’s say it picks (a.root-servers.net) to start with. Here’s the official source for those hardcoded IP addresses, also known as a “root hints file”.

step 2: Ask the root nameservers about github.com.

We can roughly reproduce what happens with dig. What this gives us is a new authoritative nameserver to ask: a nameserver for .com, with the IP

$ dig @ github.com
com.			172800	IN	NS	a.gtld-servers.net.
a.gtld-servers.net.	172800	IN	A

The details of the DNS response are a little more complicated than that – in this case, there’s an authority section with some NS records and an additional section with A records so you don’t need to do an extra lookup to get the IP addresses of those nameservers.

(in practice, 99.99% of the time it’ll already have the address of the .com nameservers cached, but we’re pretending we’re really starting from scratch)

step 3: Ask the .com nameservers about github.com.

$ dig @ github.com
github.com.		172800	IN	NS	ns-421.awsdns-52.com.
ns-421.awsdns-52.com.	172800	IN	A

We have a new IP address to ask! This one is the nameserver for github.com.

step 4: Ask the github.com nameservers about github.com.

We’re almost done!

$ dig @ github.com

github.com.		60	IN	A

Hooray!! We have an A record for github.com! Now the recursive nameserver has github.com’s IP address and can return it back to you. And it could do all of this by only hardcoding a few IP addresses: the addresses of the root nameservers.

how to see all of a recursive DNS server’s steps: dig +trace

When I want to see what a recursive DNS server would do when resolving a domain, I run

$ dig @ +trace github.com

This shows all the DNS records that it requests, starting at the root DNS servers – all four of the steps that we just went through.

let’s update some DNS records!

Now that we know the basics of how DNS works, let’s update some DNS records and see what happens.

When you update your DNS records, there are two main options:

  1. keep the same nameservers
  2. change nameservers

let’s talk about TTLs

We’ve forgotten something important though! TTLs! You know how we said earlier that the recursive DNS server will cache records until they expire? The way it decides whether the record should expire is by looking at its TTL or “time to live”.

In this example, the TTL on the A record that github’s nameserver returns is 60, which means 60 seconds:

$ dig @205.251.193.165 github.com

github.com.		60	IN	A

That’s a pretty short TTL, and in theory, if everybody’s DNS implementation followed the DNS standard, it means that if Github decided to change the IP address for github.com, everyone should get the new IP address within 60 seconds. Let’s see how that plays out in practice.
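Here’s a minimal sketch of the caching logic a recursive resolver might use – cache a record until its TTL is up, then treat it as missing so it gets re-resolved. (This is a simplification: real resolvers also decrement the TTL they hand out from cache.)

```python
import time

# Minimal sketch of a recursive resolver's cache: a record is good
# until `now + ttl`, after which it's treated as absent.
class DnsCache:
    def __init__(self):
        self.records = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self.records[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.records.get(name)
        if entry is None or now >= entry[1]:
            return None       # missing or expired: time to re-resolve
        return entry[0]
```

With a 60-second TTL, a lookup 30 seconds later hits the cache, and a lookup 61 seconds later misses and triggers a fresh resolution.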

option 1: update a DNS record on the same nameservers

First, I updated my nameservers (Cloudflare) to have a new DNS record: an A record that maps test.jvns.ca to a new IP address.

$ dig @8.8.8.8 test.jvns.ca
test.jvns.ca.		299	IN	A

This worked immediately! There was no need to wait at all, because there was no test.jvns.ca DNS record before that could have been cached. Great. But it looks like the new record is cached for ~5 minutes (299 seconds).

So, what if we try to change that IP? I changed it to a different address, and then ran the same DNS query.

$ dig @8.8.8.8 test.jvns.ca
test.jvns.ca.		144	IN	A

Hmm, it seems like that DNS server still has the record cached for another 144 seconds. Interestingly, if I query multiple times I actually get inconsistent results – sometimes it’ll give me the new IP and sometimes the old one, I guess because the server actually load balances to a bunch of different backends which each have their own cache.

After I waited 5 minutes, all of the caches had updated and were always returning the new record. Awesome. That was pretty fast!

you can’t always rely on the TTL

As with most internet protocols, not everything obeys the DNS specification. Some ISP DNS servers will cache records for longer than the TTL specifies, like maybe for 2 days instead of 5 minutes. And people can always hardcode the old IP address in their /etc/hosts.

What I’d expect to happen in practice when updating a DNS record with a 5 minute TTL is that a large percentage of clients will move over to the new IPs quickly (like within 15 minutes), and then there will be a bunch of stragglers that slowly update over the next few days.

option 2: updating your nameservers

So we’ve seen that when you update an IP address without changing your nameservers, a lot of DNS servers will pick up the new IP pretty quickly. Great. But what happens if you change your nameservers? Let’s try it!

I didn’t want to update the nameservers for my blog, so instead I went with a different domain I own and use in the examples for the HTTP zine: examplecat.com.

Previously, my nameservers were set to dns1.p01.nsone.net. I decided to switch them over to Google’s nameservers – ns-cloud-b1.googledomains.com etc.

When I made the change, my domain registrar somewhat ominously popped up the message – “Changes to examplecat.com saved. They’ll take effect within the next 48 hours”. Then I set up a new A record for the domain, to make it point to a new IP address.

Okay, let’s see if that did anything

$ dig @8.8.8.8 examplecat.com
examplecat.com.		17	IN	A

No change. If I ask a different DNS server, it knows the new IP:

$ dig @1.1.1.1 examplecat.com
examplecat.com.		299	IN	A

but the first server is still clueless. The reason the second one sees the new IP even though I just changed it 5 minutes ago is presumably that nobody had ever queried it about examplecat.com before, so it had nothing in its cache.

nameserver TTLs are much longer

The reason that my registrar was saying “THIS WILL TAKE 48 HOURS” is that the TTLs on NS records (which are how recursive nameservers know which nameserver to ask) are MUCH longer!

The new nameserver is definitely returning the new IP address for examplecat.com

$ dig @ns-cloud-b1.googledomains.com examplecat.com
examplecat.com.		300	IN	A

But remember what happened when we queried for the github.com nameservers, way back?

$ dig @192.5.6.30 github.com
github.com.		172800	IN	NS	ns-421.awsdns-52.com.
ns-421.awsdns-52.com.	172800	IN	A	205.251.193.165

172800 seconds is 48 hours! So nameserver updates will in general take a lot longer to expire from caches and propagate than just updating an IP address without changing your nameserver.
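A quick sanity check on that arithmetic – the NS record TTL above versus the A record TTL from earlier:

```python
# TTLs from the dig output above, in seconds
ns_record_ttl = 172800   # the .com NS records
a_record_ttl = 60        # github.com's A record

ns_ttl_hours = ns_record_ttl / 3600       # 48 hours
ttl_ratio = ns_record_ttl // a_record_ttl # the NS TTL is 2880x longer
```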

how do your nameservers get updated?

When I update the nameservers for examplecat.com, what happens is that the .com nameserver gets a new NS record with the new domain. Like this:

dig ns @j.gtld-servers.net examplecat.com

examplecat.com.		172800	IN	NS	ns-cloud-b1.googledomains.com

But how does that new NS record get there? What happens is that I tell my domain registrar what I want the new nameservers to be by updating it on the website, and then my domain registrar tells the .com nameservers to make the update.

For .com, these updates happen pretty fast (within a few minutes), but I think for some other TLDs the TLD nameservers might not apply updates as quickly.

your program’s DNS resolver library might also cache DNS records

One more reason TTLs might not be respected in practice: many programs need to resolve DNS names, and some programs will also cache DNS records indefinitely in memory (until the program is restarted).
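Here’s a sketch of that failure mode: an in-process cache that never expires anything. (`lookup` below is a hypothetical stand-in for a real resolver call like `socket.getaddrinfo`.)

```python
# Sketch of why an in-process DNS cache ignores TTLs: results live in a
# dict for the lifetime of the process, so only the first call resolves.
_dns_cache = {}

def cached_lookup(name, lookup):
    if name not in _dns_cache:       # first call: actually resolve
        _dns_cache[name] = lookup(name)
    return _dns_cache[name]          # every later call gets the old answer
```

If the record changes after the first call, this process keeps returning the stale IP until it restarts.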

For example, AWS has an article on Setting the JVM TTL for DNS Name Lookups. I haven’t written that much JVM code that does DNS lookups myself, but from a little Googling about the JVM and DNS it seems like you can configure the JVM so that it caches every DNS lookup indefinitely. (like this elasticsearch issue)

that’s all!

I hope this helps you understand what’s going on when updating your DNS!

As a disclaimer, again – TTLs definitely don’t tell the whole story about DNS propagation – some recursive DNS servers definitely don’t respect TTLs, even if the major ones like 8.8.8.8 do. So even if you’re just updating an A record with a short TTL, it’s very possible that in practice you’ll still get some requests to the old IP for a day or two.

Also, I changed the nameservers for examplecat.com back to their old values after publishing this post.


Questions to help people decide what to learn

For the last few months, I’ve been working on and off on a way to help people evaluate their own learning & figure out what to learn next.

This past week I built a new iteration of this: https://questions.wizardzines.com, which today has 2 sets of questions:

  1. questions about UDP
  2. questions about sockets

It’s still a work in progress, but I’ve been working on this for quite a while so I wanted to write down how I got here.

the goal: help people learn on their own

First, let’s talk about my goal. I’m interested in helping people who are trying to learn on their own. I don’t have any specific materials I’m trying to teach – I want to help people learn what they want to learn.

I’ve done a lot of this by writing blog posts & zines, but I felt like I was missing something – were people really learning what they wanted to learn? How could they tell if they’d learned it?

I felt like I wanted some kind of “quiz” or “test”, but I wasn’t sure what it should look like.

formative assessment vs summative assessment

Let’s take a very quick detour into terminology. There are two kinds of assessment teachers use in school.

formative assessment: “evaluations used to modify teaching and learning activities to improve student attainment.”

summative assessment: used to determine grades

Grades are pretty pointless if you’re teaching yourself (who cares if you got an A in sockets?). But formative assessments! What if you could take some kind of evaluation to help you decide exactly what you should teach yourself next? That seems more useful. So I got interested in building some kind of “formative assessment” tool.

(thanks to Sumana for reminding me of these terms!)

next step: ask on Twitter how people feel about quizzes

So I asked on Twitter (in this thread):

have you ever taken a class (online or offline!) where you were given a quiz first that you could use to check your understanding of the topic at the start? did it help you?

I got about 90 replies. Here are some themes I took away from the replies:

One thing I learned from this is that being told you don’t know something is a bad experience for a lot of people.

idea: build flashcards you can learn from

My first idea was to reframe a test as a way to learn. So instead of it being something that tells you what you don’t know (which, so what?), it helps you learn something new!

So I built a few sets of flashcards about various topics. Here’s the first set I built, flashcards on containers, if you want to try it out.

If you didn’t try it – it looks like this:

Basically – there are 14ish questions, you click the card to see the answer, and for each card you categorize it as “I knew that!”, “I learned something”, or “that’s confusing” (which is meant to be a kind of “other” category, where you didn’t know that and you didn’t learn anything).

The idea is that the answers contain enough information that you could actually learn a little bit from them, and hopefully be inspired to go learn more on your own if you’re interested.

good things about the flashcards

some of the positive feedback I got about the flashcards was:

problems with the flashcards

But there were some problems that were bothering me, too.

people dislike questions that don’t match their mental model

Probably the most important thing I learned from making these flashcards is that it really matters how well the question matches the reader’s mental model.

I started out by writing questions by taking statements I’d normally make about a topic, and turning them into questions. Sometimes this really didn’t work.

Here’s an example of it not working: I think the statement “a HTTP request has 4 parts: a body, the headers, the request method, and the path being requested” is relatively unobjectionable. That’s how I think about what a HTTP request is.

But what if I ask you “what are the 4 parts of a HTTP request?” and the answer is “a body, the headers, the request method, and the URL being requested”? It turns out, that’s totally different!! Not everyone thinks about HTTP requests as having 4 parts – they might think of it as having 3 parts (the first line, the headers, and the body). Or 2 parts and 1 optional part (the first line, and the headers, and maybe an optional body). Or some other way! So it’s weird to be asked “what are the 4 parts of a HTTP request”.
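Since people’s mental models differ, a concrete example might help. Here’s one way to slice a raw HTTP/1.1 request into the “first line / headers / optional body” framing (the request itself is made up for illustration):

```python
# A raw HTTP/1.1 request, split using the "3 parts" mental model:
# first line, then headers, then (optionally) a body after a blank line.
raw = (
    "POST /submit HTTP/1.1\r\n"   # first line: method, path, version
    "Host: example.com\r\n"       # headers...
    "Content-Length: 5\r\n"
    "\r\n"                        # blank line separates headers from body
    "hello"                       # optional body
)

head, _, body = raw.partition("\r\n\r\n")
first_line, *header_lines = head.split("\r\n")
```

Whether you then count the method and path as separate “parts” of the first line is exactly the kind of mental-model difference the question runs into.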

There were a lot of other examples like this, where people reacted badly to some question I asked that didn’t match up with how they think about a topic. So I learned that if I’m asking a question, it gets held to a higher standard for how well it matches the reader’s mental model than the same statement would be.

An example of what I think would be a better question here is “Does every HTTP request have headers?” (yes! the HTTP/1.1 RFC requires that the Host header be set!). But even that is maybe a little tricky – probably at least one HTTP/1.0 client implementation is out there in the world sending requests without headers, even though 99.99% of HTTP requests have headers.

Of course, it’s ok if the question/answer doesn’t match the reader’s mental model if their mental model is incorrect, but if their model is correct then I think it should match.

get rid of multiple choice

The other thing I learned from these flashcards is that a lot of people dislike multiple choice. I haven’t thought about this that much, but honestly I don’t really like multiple choice either so I decided to get rid of it.

next step: get reminded of The Little Schemer

I don’t remember why, but I’ve had The Little Schemer kicking around in my head for a while. I haven’t actually read the whole thing myself, but I kept hearing people talking about it. Here’s the first page of The Little Schemer, if you haven’t heard of it:

This reminded me a lot of what I was trying to do – there are questions and answers, but the goal isn’t for you to get all the questions “right”. Instead, I think the goal is for you to think about whether you know the answer yet or not and learn as you go.

switch to a side-by-side format

So, I kept a similar question/answer format, but switched to a side-by-side format, like the Little Schemer.

What I like about putting the questions & answers next to each other:

Basically I like that it gives the reader more control, which I think is important.

call it “questions” instead of “flashcards”

I also renamed the project to “questions” because that’s really how I think about learning for myself – I don’t do “flashcards”, but I do constantly ask myself questions about topics I don’t understand, figure out the answers to those questions, and then repeat until I understand the topic as well as I want to.

But coming up with the right questions on your own is hard when you don’t know a lot yet, so I’m hopeful that providing folks with a bunch of questions (and answers) to think about will help them decide what they want to learn next.

keep the “I learned something” button

When I released the first set of questions on UDP, I didn’t include an “I learned something” button, and I noticed something weird – a lot of people were tweeting things like “I got 8/10”, “I got 10/10”.

I was a bit worried about this because the whole idea was to help people identify things they could learn, so saying “I got 8/10” felt like it was focusing on the things you already knew and ignoring the most important thing – the 2 questions where maybe you could learn something new!

So I added an “I learned something!” button back to each question and spent way too much time building a fun SVG+CSS animation that plays when you press the button. And so far it seems to have worked – I see more people commenting “I learned something” and fewer saying “I got 9/10”.

building small things is hard

As usual, building small simple things takes more time than I’d expect! The concept of “some questions and answers” seems really simple, but I’ve already learned a lot by building this and I think I still have a lot more to learn about this format.

But I’m excited to learn more, and I’d love to know your thoughts. Here it is again if you’d like to try it: https://questions.wizardzines.com.


Metaphors in man pages

This morning I was watching a great talk by Maggie Appleton about metaphors. In the talk, she explains the difference between a “figurative metaphor” and a “cognitive metaphor”, and references this super interesting book called Metaphors We Live By which I immediately got and started reading.

Here’s an example from “Metaphors We Live By” of a bunch of metaphors we use for ideas:

There’s a long list of more English metaphors here, including many metaphors from the book.

I was surprised that there were so many different metaphors for ideas, and that we’re using metaphors like this all the time in normal language.

let’s look for metaphors in man pages!

Okay, let’s get to the point of this blog post, which is just a small fun exploration – there aren’t going to be any Deep Programming Insights here.

I went through some of the examples of metaphors in Metaphors We Live By and grepped all the man pages on my computer for them.

processes as people

This is one of the richer categories – a lot of different man pages seem to agree that processes are people, or at least alive in some way.

data as food

data as objects

processes as machines/objects


containers

There are LOTS of containers: directories, files, strings, caches, queues, buffers, etc.


resources

There are also lots of kinds of resources: bandwidth, TCP sockets, session IDs, stack space, memory, disk space.

orientation (up, down, above, below)


Limits as rooms/buildings (which have floors, and ceilings, which you hit) are kind of fun:

money / wealth

more miscellaneous metaphors

Here are some more I found that didn’t fit into any of those categories.

we’re all using metaphors all the time

I found a lot more metaphors than I expected, and most of them are just part of how I’d normally talk about a program. Interesting!


Why strace doesn't work in Docker

While editing the capabilities page of the how containers work zine, I found myself trying to explain why strace doesn’t work in a Docker container.

The problem here is – if I run strace in a Docker container on my laptop, this happens:

$ docker run  -it ubuntu:18.04 /bin/bash
$ # ... install strace ...
root@e27f594da870:/# strace ls
strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted

strace works using the ptrace system call, so if ptrace isn’t allowed, it’s definitely not gonna work! This is pretty easy to fix – on my machine, this fixes it:

docker run --cap-add=SYS_PTRACE  -it ubuntu:18.04 /bin/bash

But I wasn’t interested in fixing it, I wanted to know why it happens. So why does strace not work, and why does --cap-add=SYS_PTRACE fix it?

hypothesis 1: container processes are missing the CAP_SYS_PTRACE capability

I always thought the reason was that Docker container processes by default didn’t have the CAP_SYS_PTRACE capability. This is consistent with it being fixed by --cap-add=SYS_PTRACE, right?

But this actually doesn’t make sense for 2 reasons.

Reason 1: Experimentally, as a regular user, I can strace any process run by my user. But if I check whether my current process has the CAP_SYS_PTRACE capability, it doesn’t:

$ getpcaps $$
Capabilities for `11589': =

Reason 2: man capabilities says this about CAP_SYS_PTRACE:

       * Trace arbitrary processes using ptrace(2);

So the point of CAP_SYS_PTRACE is to let you ptrace arbitrary processes owned by any user, the way that root usually can. You shouldn’t need it to just ptrace a regular process owned by your user.
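A toy model of that permission rule might look like this. (This is hugely simplified – the real kernel check has more cases, for example the Yama LSM’s ptrace_scope restrictions.)

```python
# Toy model of the ptrace permission check described above:
# you can always trace your own processes, and CAP_SYS_PTRACE
# lets you trace arbitrary processes the way root usually can.
def may_ptrace(tracer_uid, target_uid, has_cap_sys_ptrace):
    return tracer_uid == target_uid or has_cap_sys_ptrace
```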

And I tested this a third way – I ran a Docker container with docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash, dropped the CAP_SYS_PTRACE capability, and I could still strace processes even though I didn’t have that capability anymore. What? Why?

hypothesis 2: something about user namespaces???

My next (much less well-founded) hypothesis was something along the lines of “um, maybe the process is in a different user namespace and strace doesn’t work because of… reasons?” This isn’t really coherent but here’s what happened when I looked into it.

Is the container process in a different user namespace? Well, in the container:

root@e27f594da870:/# ls /proc/$$/ns/user -l
... /proc/1/ns/user -> 'user:[4026531837]'

On the host:

bork@kiwi:~$ ls /proc/$$/ns/user -l
... /proc/12177/ns/user -> 'user:[4026531837]'

Because the user namespace ID (4026531837) is the same, the root user in the container is the exact same user as the root user on the host. So there’s definitely no reason it shouldn’t be able to strace processes that it created!

This hypothesis doesn’t make much sense but I hadn’t realized that the root user in a Docker container is the same as the root user on the host, so I thought that was interesting.

hypothesis 3: the ptrace system call is being blocked by a seccomp-bpf rule

I also knew that Docker uses seccomp-bpf to stop container processes from running a lot of system calls. And ptrace is in the list of system calls blocked by Docker’s default seccomp profile! (actually the list of allowed system calls is a whitelist, so it’s just that ptrace is not in the default whitelist. But it comes out to the same thing.)
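A whitelist like that is conceptually just a default-deny set. Here’s a toy sketch – the syscall list is abridged and made up, not Docker’s real profile:

```python
# Toy model of a default-deny seccomp whitelist: any syscall not on
# the list fails, which is what strace's ptrace() call runs into.
DEFAULT_WHITELIST = {"read", "write", "open", "close", "execve"}  # (abridged)

def seccomp(syscall, whitelist=DEFAULT_WHITELIST):
    return "allowed" if syscall in whitelist else "EPERM"
```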

That easily explains why strace wouldn’t work in a Docker container – if the ptrace system call is totally blocked, then of course you can’t call it at all and strace would fail.

Let’s verify this hypothesis – if we disable all seccomp rules, can we strace in a Docker container?

$ docker run --security-opt seccomp=unconfined -it ubuntu:18.04  /bin/bash
$ strace ls
execve("/bin/ls", ["ls"], 0x7ffc69a65580 /* 8 vars */) = 0
... it works fine ...

Yes! It works! Great. Mystery solved, except…

why does --cap-add=SYS_PTRACE fix the problem?

What we still haven’t explained is: why does --cap-add=SYS_PTRACE fix the problem?

The man page for docker run explains the --cap-add argument this way:

   Add Linux capabilities

That doesn’t have anything to do with seccomp rules! What’s going on?

let’s look at the Docker source code

When the documentation doesn’t help, the only thing to do is go look at the source.

The nice thing about Go is, because dependencies are often vendored in a Go repository, you can just grep the repository to figure out where the code that does a thing is. So I cloned github.com/moby/moby and grepped for some things, like rg CAP_SYS_PTRACE.

Here’s what I think is going on. In containerd’s seccomp implementation, in contrib/seccomp/seccomp_default.go, there’s a bunch of code that makes sure that if a process has a capability, then it’s also given access (through a seccomp rule) to use the system calls that go with that capability.

		case "CAP_SYS_PTRACE":
			s.Syscalls = append(s.Syscalls, specs.LinuxSyscall{
				Names: []string{
					"ptrace",
					"process_vm_readv",
					"process_vm_writev",
				},
				Action: specs.ActAllow,
				Args:   []specs.LinuxSeccompArg{},
			})

There’s some other code that seems to do something very similar in profiles/seccomp/seccomp.go in moby and the default seccomp profile, so it’s possible that that’s what’s doing it instead.

So I think we have our answer!

--cap-add in Docker does a little more than what it says

The upshot seems to be that --cap-add doesn’t do exactly what it says it does in the man page – it’s more like --cap-add-and-also-whitelist-some-extra-system-calls-if-required. Which makes sense! If you have a capability like CAP_SYS_PTRACE which is supposed to let you use the process_vm_readv system call, but that system call is blocked by a seccomp profile, the capability isn’t going to help you much!

So allowing the process_vm_readv and ptrace system calls when you give the container CAP_SYS_PTRACE seems like a reasonable choice.
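So conceptually, --cap-add builds the seccomp whitelist something like this (a sketch with only the CAP_SYS_PTRACE entry, using the syscalls mentioned above):

```python
# Sketch of "--cap-add also whitelists extra syscalls": each capability
# brings some associated syscalls along into the seccomp whitelist.
CAP_SYSCALLS = {
    "CAP_SYS_PTRACE": {"ptrace", "process_vm_readv", "process_vm_writev"},
}

def build_whitelist(base_whitelist, caps):
    allowed = set(base_whitelist)
    for cap in caps:
        allowed |= CAP_SYSCALLS.get(cap, set())
    return allowed
```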

strace actually does work in newer versions of Docker

As of this commit (docker 19.03), Docker does actually allow the ptrace system calls for kernel versions newer than 4.8.

But the Docker version on my laptop is 18.09.7, so it predates that commit.

that’s all!

This was a fun small thing to investigate, and I think it’s a nice example of how containers are made of lots of moving pieces that work together in not-completely-obvious ways.

If you liked this, you might like my new zine called How Containers Work that explains the Linux kernel features that make containers work in 24 pages. You can read the pages on capabilities and seccomp-bpf from the zine.


New zine: How Containers Work!

On Friday I published a new zine: “How Containers Work!”. I also launched a fun redesign of wizardzines.com.

You can get it for $12 at https://wizardzines.com/zines/containers. If you buy it, you’ll get a PDF that you can either print out or read on your computer. Or you can get a pack of all 8 zines so far.

Here’s the cover and table of contents:

why containers?

I’ve spent a lot of time figuring out how to run things in containers over the last 3-4 years. And at the beginning I was really confused! I knew a bunch of things about Linux, and containers didn’t seem to fit in with anything I thought I knew (“is it a process? what’s a network namespace? what’s happening?“). The whole thing seemed really weird.

It turns out that containers ARE actually pretty weird. They’re not just one thing, they’re what you get when you glue together 6 different features that were mostly designed to work together but have a bunch of confusing edge cases.

As usual, the thing that helped me the most in my container adventures is a good understanding of the fundamentals – what exactly is actually happening on my server when I run a container?

So that’s what this zine is about – cgroups, namespaces, pivot_root, seccomp-bpf, and all the other Linux kernel features that make containers work.

Once I understood those ideas, it got a lot easier to debug when my containers were doing surprising things in production. I learned a couple of interesting and strange things about containers while writing this zine too – I’ll probably write a blog post about one of them later this week.

containers aren’t magic

This picture (page 6 of the zine) shows you how to run a fish container image with only 15 lines of bash. This is heavily inspired by bocker, which “implements” Docker in about 100 lines of bash.

The main things I see missing from that script compared to what Docker actually does when running a container (other than using an actual container image and not just a tarball) are:

container command line tools

The zine also goes over a bunch of command line tools & files that you can use to inspect running containers or play with Linux container features. Here’s a list:

I also made a short youtube video a while back called ways to spy on a Docker container that demos some of these command line tools.

container runtime agnostic

I tried to keep this zine pretty container-runtime-agnostic – I mention Docker a couple of times because it’s so widely used, but it’s about the Linux kernel features that make containers work in general, not Docker or LXC or systemd-nspawn or Kubernetes or whatever. If you understand the fundamentals you can figure all those things out!

we redesigned wizardzines.com!

On Friday I also launched a redesign of wizardzines.com! Melody Starling (who is amazing) did the design. I think now it’s better organized but the tiny touch that I’m most delighted by is that now the zines jump with joy when you hover over them.

One cool thing about working with a designer is – they don’t just make things look better, they help organize the information better so the website makes more sense and it’s easier to find things! This is probably obvious to anyone who knows anything about design but I haven’t worked with designers very much (or maybe ever?) so it was really cool to see.

One tiny example of this: Melody had the idea of adding a tiny FAQ on the landing page for each zine, where I can put the answers to all the questions people always ask! Here’s what the little FAQ box looks like:

I probably want to edit those questions & answers over time but it’s SO NICE to have somewhere to put them.

what’s next: maybe debugging! or working more on flashcards!

The two projects I’m thinking about the most right now are

  1. a zine about debugging, which I started last summer and haven’t gotten around to finishing yet
  2. a flashcards project that I’ve been adding to slowly over the last couple of months. I think it could become a nice way to explain basic ideas.

Here’s a link to where to get the zine again :)


When debugging, your attitude matters

A while back I wrote What does debugging a program look like? on what to do when debugging (change one thing at a time! check your assumptions!).

But I was debugging some CSS last week, and I think that post is missing something important: your attitude.

Now – I’m not a very good CSS developer yet. I’ve never written CSS professionally and I don’t understand a lot of basic CSS concepts (I think I finally understood for the first time recently how position: absolute works). And last week I was working on the most complicated CSS project I’d ever attempted.

While I was debugging my CSS, I noticed myself doing some bad things that I normally would not! I was:

This strategy was exactly as effective as you might imagine (not very effective!), and it was because of my attitude about CSS! I had this unusual-for-me belief that CSS was Too Hard and impossible for me to understand. So let’s talk about that attitude a bit!

the problem attitude: “this is too hard for me to understand”

One specific problem I was having was – I had 2 divs stacked on top of one another, and I wanted Div A to be on top of Div B. My model of CSS stacking order at the start of this was basically “if you want Thing A to be on top of Thing B, change the z-index to make it work”. So I changed the z-index of Div A to be 5 or something.

But it didn’t work! In Firefox, div A was on top, but in Chrome, Div B was on top. Argh! Why? CSS is impossible!!! (if you want to see the exact actual situation I was in, I reproduced the different-in-firefox-and-chrome thing here after the fact)

I googled a bit, and I found out that a possible reason z-index might not work was because Div A and Div B were actually in different “stacking contexts”. If that was true, even if I set the z-index of Div A to 999999 it would still not put it on top of Div B. (here’s a small example of what this z-index problem looks like, though I think my specific bug had some extra complications)

I thought “man, this stacking context thing seems really complicated, why is it different between Firefox and Chrome, I’m not going to be able to figure this out”. So I tried a bunch of random things a bunch of blog posts suggested, which as usual did not work.

Finally I gave up this “change random things and pray” strategy and thought “well, what if I just read the documentation on stacking order, maybe it’s not that bad”.

So I read the MDN page on stacking order, which says:

When the z-index property is not specified on any element, elements are stacked in the following order (from bottom to top):
1. The background and borders of the root element
2. Descendant non-positioned blocks, in order of appearance in the HTML
3. Descendant positioned elements, in order of appearance in the HTML

This is SO SIMPLE! It just depends on the order in the HTML! I put Div A after Div B in the HTML (as a sibling) and it made everything work in both browsers.

better attitude: “let’s learn the basics and see if that helps”

This whole stacking problem turned out to really not be that complicated – all I needed to do was read a very short and simple documentation page to understand how stacking works!

Of course, computer things are not always this simple (and even in this specific case the rules about what creates a new stacking context are pretty complicated). But I did not need to understand those more complicated rules in order to put Div A on top of Div B! I only needed to know the much simpler 3 rules above.

So – calm down for a second, learn a few of the basics, and see if that helps.

watching people who know what they’re doing is inspiring

Another area of CSS that I thought was “too hard” for me to understand was this whole position: absolute and position: relative business. I kept seeing (and sometimes using!) examples where people made complicated CSS things with position: absolute but I didn’t understand how they worked. Doesn’t position: absolute mean that the element is always in the same place on the screen? Why are these position: absolute things moving when I scroll like the rest of the document? (spoiler: no, that’s position: fixed.)

But last week, I paired with someone who’s a lot better at CSS than me on some code, and I saw that they were just typing in position: absolute and position: relative confidently into their code without seeming confused about it!! Could that be me?

I looked up the documentation on MDN on position: absolute, and it said:

The element is removed from the normal document flow, and no space is created for the element in the page layout. It is positioned relative to its closest positioned ancestor… Its final position is determined by the values of top, right, bottom, and left.

So things with position: absolute are positioned relative to their closest positioned ancestor! And you just use top/bottom/right/left to pick where! That’s so simple!

documentation that you can trust makes a big difference

I think another big source of my frustration with CSS is that I didn’t have the best grasp of where to find accurate information & advice. I knew that MDN was a reliable reference, but MDN doesn’t really help answer questions like “ok but seriously how do I center a div???” and I found myself reading a lot of random Stack Overflow answers/blog posts that I wasn’t 100% sure were correct.

This week I learned about CSS Tricks which has a lot of GREAT articles like Centering in CSS: A Complete Guide which seems very reputable and is written in a super clear way.

that’s all!

I don’t really know why I started to believe that it was “impossible” to understand basic CSS concepts since I don’t believe that about computers in general. Maybe because I’ve been writing CSS at a beginner level for a very long time but hadn’t ever really tried to do a more involved CSS project than “let’s arrange some divs in a grid with flexbox”!

But this attitude really got in the way of me writing the CSS I wanted to write! And once I let go of it and used my normal debugging techniques I was able to get a lot more things to work the way I wanted.


Getting started with shaders: signed distance functions!

Hello! A while back I learned how to make fun shiny spinny things like this using shaders:

My shader skills are still extremely basic, but this fun spinning thing turned out to be a lot easier to make than I thought it would be (with a lot of copying of code snippets from other people!).

The big idea I learned when doing this was something called “signed distance functions”, which I learned about from a very fun tutorial called Signed Distance Function tutorial: box & balloon.

In this post I’ll go through the steps I used to learn to write a simple shader and try to convince you that shaders are not that hard to get started with!

examples of more advanced shaders

If you haven’t seen people do really fancy things with shaders, here are a couple:

  1. this very complicated shader that is like a realistic video of a river: https://www.shadertoy.com/view/Xl2XRW
  2. a more abstract (and shorter!) fun shader with a lot of glowing circles: https://www.shadertoy.com/view/lstSzj

step 1: my first shader

I knew that you could make shaders on shadertoy, and so I went to https://www.shadertoy.com/new. They give you a default shader to start with that looks like this:

Here’s the code:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    // Output to screen
    fragColor = vec4(col,1.0);
}
This doesn’t do anything that exciting, but it already taught me the basic structure of a shader program!

the idea: map a pair of coordinates (and time) to a colour

The idea here is that you get a pair of coordinates as an input (fragCoord) and you need to output an RGBA vector with the colour of that pixel. The function can also use the current time (iTime), which is how the picture changes over time.

The neat thing about this programming model (where you map a pair of coordinates and the time to a colour) is that it’s trivially parallelizable. I don’t understand a lot about GPUs but my understanding is that this kind of task (where you have 10000 independent calculations to do at once) is exactly the kind of thing GPUs are good at.
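To make that model concrete, here’s the same idea sketched in Python (not shader code! just the same math, and the function names are mine): a pure function from (pixel coordinates, time) to a colour, run independently for every pixel.

```python
import math

def shade_pixel(frag_x, frag_y, width, height, time):
    """Map one pixel's coordinates (plus the time) to an RGB colour,
    mimicking Shadertoy's default shader."""
    # Normalized pixel coordinates (from 0 to 1), like uv = fragCoord/iResolution.xy
    u, v = frag_x / width, frag_y / height
    # Time varying pixel colour: 0.5 + 0.5*cos(iTime + uv.xyx + vec3(0,2,4)).
    # uv.xyx means the vector (u, v, u), and cos() applies componentwise.
    return tuple(0.5 + 0.5 * math.cos(time + c + phase)
                 for c, phase in zip((u, v, u), (0.0, 2.0, 4.0)))

# Every pixel is computed independently of every other pixel -- that's
# the "trivially parallelizable" part, and it's what GPUs are great at.
frame = [[shade_pixel(x, y, 32, 24, time=1.0) for x in range(32)]
         for y in range(24)]
```

On a GPU this function would run for all the pixels at once; here it’s just a nested loop.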

step 2: iterate faster with shadertoy-render

After a while of playing with shadertoy, I got tired of having to click “recompile” on the Shadertoy website every time I saved my shader.

I found a command line tool called shadertoy-render that will watch a file and update the animation in real time every time I save. So now I can just run:

shadertoy-render.py circle.glsl 

and iterate way faster!

step 3: draw a circle

Next I thought – I’m good at math! I can use some basic trigonometry to draw a bouncing rainbow circle!

I know the equation for a circle (x**2 + y**2 = whatever!), so I wrote some code to do that:

Here’s the code: (which you can also see on shadertoy)

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Draw a circle whose center depends on what time it is
    vec2 shifted = uv - vec2((sin(iGlobalTime) + 1.0)/2.0, (1.0 + cos(iGlobalTime)) / 2.0);
    if (dot(shifted, shifted) < 0.03) {
        // Varying pixel colour
        vec3 col = 0.5 + 0.5*cos(iGlobalTime+uv.xyx+vec3(0,2,4));
        fragColor = vec4(col,1.0);
    } else {
        // make everything outside the circle black
        fragColor = vec4(0,0,0,1.0);
    }
}
This takes the dot product of the shifted coordinate vector with itself, which is the same as calculating x^2 + y^2. I played with the center of the circle a little bit in this one too – I made the center vec2((sin(iGlobalTime) + 1.0)/2.0, (1.0 + cos(iGlobalTime)) / 2.0), which means that the center of the circle also goes in a circle depending on what time it is.
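If the dot product trick feels non-obvious, here’s the same check in plain Python (just the arithmetic, nothing shader-specific, and the function names are mine):

```python
def dot(v, w):
    """2D dot product, like GLSL's dot()."""
    return v[0] * w[0] + v[1] * w[1]

def inside_circle(point, center, radius_squared=0.03):
    """The same test as the shader: shift the point so the circle's
    center sits at the origin, then compare x^2 + y^2 to a threshold."""
    shifted = (point[0] - center[0], point[1] - center[1])
    # dot(v, v) == x**2 + y**2, so this is exactly the circle equation
    return dot(shifted, shifted) < radius_squared
```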

shaders are a fun way to play with math!

One thing I think is fun about this already (even though we haven’t done anything super advanced!) is that these shaders give us a fun visual way to play with math – I used sin and cos to make something go in a circle, and if you want to build some better intuition about how trigonometric functions work, writing shaders might be a fun way to do that!

I love that you get instant visual feedback about your math code – if you multiply something by 2, things get bigger! or smaller! or faster! or slower! or more red!

but how do we do something really fancy?

This bouncing circle is nice but it’s really far from the super fancy things I’ve seen other people do with shaders. So what’s the next step?

idea: instead of using if statements, use signed distance functions!

In my circle code above, I basically wrote:

if (dot(uv, uv) < 0.03) {
    // code for inside the circle
} else {
    // code for outside the circle
}

But the problem with this (and the reason I was feeling stuck) is that it’s not clear how it generalizes to more complicated shapes! Writing a bajillion if statements doesn’t seem like it would work well. And how do people render those 3d shapes anyway?

So! Signed distance functions are a different way to define a shape. Instead of a hardcoded if statement, you define a function that tells you, for any point in the world, how far away that point is from your shape. For example, here’s a signed distance function for a sphere:

float sdSphere( vec3 p, float radius )
{
  return length(p) - radius;
}
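Here’s that same sphere function in Python, just to show what the “signed” part means: the distance is negative inside the shape, zero on the surface, and positive outside.

```python
import math

def sd_sphere(p, radius):
    """Signed distance from point p to a sphere of the given radius
    centered at the origin: negative inside, zero on the surface,
    positive outside."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

# sd_sphere((0, 0, 0), 1.0) is -1.0 (inside)
# sd_sphere((1, 0, 0), 1.0) is  0.0 (on the surface)
# sd_sphere((2, 0, 0), 1.0) is  1.0 (outside)
```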

Signed distance functions are awesome because they generalize: you can write one for basically any shape, and you can combine simple ones to build more complicated scenes.

the steps to making a spinning top

When I started out I didn’t understand what code I needed to write to make a shiny spinning thing. It turns out that these are the basic steps:

  1. Make a signed distance function for the shape I want (in my case an octahedron)
  2. Raytrace the signed distance function so you can display it in a 2D picture (or raymarch? The tutorial I used called it raytracing and I don’t understand the difference between raytracing and raymarching yet)
  3. Write some code to texture the surface of your shape and make it shiny

I’m not going to explain signed distance functions or raytracing in detail in this post because I found this AMAZING tutorial on signed distance functions that is very friendly and honestly it does a way better job than I could do. It explains how to do the 3 steps above and the code has a ton of comments and it’s great.
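That said, here’s a tiny Python sketch of the core idea from step 2, as I understand it (my rough approximation, not the tutorial’s actual code): to render a signed distance function, you march a ray forward, and because the function tells you how far away the nearest surface is, you can always safely step exactly that far.

```python
import math

def sd_sphere(p, radius=1.0):
    # signed distance to a sphere centered at the origin
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def raymarch(origin, direction, max_steps=100, epsilon=1e-4):
    """Walk along the ray. The SDF's value is always a safe step size
    (we can't overshoot the nearest surface), so step by exactly that.
    Returns the distance travelled to the surface, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sd_sphere(p)
        if dist < epsilon:
            return t       # we hit the surface!
        t += dist          # safe to move this much closer
    return None            # the ray never hit anything

# A camera 5 units back, looking straight at a unit sphere,
# hits the surface after travelling about 4 units.
hit = raymarch((0.0, 0.0, -5.0), (0.0, 0.0, 1.0))
```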

step 4: copy the tutorial code and start changing things

Here I used the time-honoured programming practice of “copy the code and change things in a chaotic way until I get the result I want”.

My final shader of a bunch of shiny spinny things is here: https://www.shadertoy.com/view/wdlcR4

The animation comes out looking like this:

Basically, to make this, I copied the code from the signed distance function tutorial (which renders a shape based on its signed distance function) and made a couple of changes, described below.

making the octahedron spin!

Here’s the code I used to make the octahedron spin! This turned out to be really simple: I copied an octahedron signed distance function from this page, added a rotate based on the time, and suddenly it was spinning!

vec2 sdfOctahedron( vec3 currentRayPosition, vec3 offset ){
    vec3 p = rotate((currentRayPosition), offset.xy, iTime * 3.0) - offset;
    float s = 0.1; // what is s?
    p = abs(p);
    float distance = (p.x+p.y+p.z-s)*0.57735027;
    float id = 1.0;
    return vec2( distance,  id );
}

making it shiny with some noise

The other thing I wanted to do was to make my shape look sparkly/shiny. I used a noise function that I found in this github gist to make the surface look textured.

Here’s how I used the noise function. Basically I just changed parameters to the noise function mostly at random (multiply by 2? 3? 1800? who knows!) until I got an effect I liked.

float x = noise(rotate(positionOfHit, vec2(0, 0), iGlobalTime * 3.0).xy * 1800.0);
float x2 = noise(lightDirection.xy * 400.0);
float y = min(max(x, 0.0), 1.0);
float y2 = min(max(x2, 0.0), 1.0) ;
vec3 balloonColor = vec3(y , y  + y2, y  + y2);

writing shaders is fun!

That’s all! I had a lot of fun making this thing spin and be shiny. If you also want to make fun animations with shaders, I hope this helps you make your cool thing!

As usual with subjects I don’t know that well, I’ve probably said at least one wrong thing about shaders in this post – let me know what it is!

Again, here are the 2 resources I used:

  1. “SDF Tutorial: box & balloon”: https://www.shadertoy.com/view/Xl2XWt (which is really fun to modify and play around with)
  2. Tons of signed distance functions that you can copy and paste into your code http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm


Questions you can ask about compensation

Talking about pay is hard, and a lot of the time it feels like it boils down to “hello I would like more money please?”. But it’s totally possible to have a conversation about compensation without asking for more money at all!

When trying to understand (and let’s be honest – increase!) my pay, I’ve found it really useful to first understand the processes around compensation at the company I work for. Here are some questions you can ask. Your manager can probably answer many of these, but your colleagues might know too!

  1. Who makes decisions about raises? (is it at the discretion of the manager? Does the manager have a fixed budget they can give out? Is there a formula based on past performance evaluations?)
  2. When do we adjust salaries? (on the employee’s work anniversary? Right after a performance review?)
  3. Do we do market adjustments to give people raises if the industry salary for this job increases? What’s the process for that?
  4. Is there a salary range for my level? What is it approximately? (Also same question for total compensation and not just salary)
  5. Does the company actually stick to its salary ranges or does it often make exceptions? What’s the process for getting paid higher than the range? Who can decide to make an exception?
  6. Which other companies are we trying to be competitive with when we make job offers?
  7. How is compensation split between salary/equity/bonus? (at higher levels, will my pay be mostly equity? what do we aim for with bonuses?)
  8. Is it possible to get more vacation? (at this company, do you get more vacation after X years?)
  9. When are equity refreshes given? (do we give refreshes yearly? Only when someone’s initial stock grant is about to expire?)
  10. Who makes decisions about equity refreshes and how? (are they based on level? Performance? Who decides?)
  11. When do my stock options expire? (this one you should definitely have been told, but if your company has stock options set up like “they expire 3 months after you leave”, it’s possible for them to change their policy)
  12. Is on-call time compensated? How?
  13. How do bonuses work exactly? (is it tied to company performance? Individual performance? Level? All of the above? Are bonuses targeted to be a percentage of salary?)
  14. Is there a peer bonus system? (can people recommend their coworkers for cash bonuses?)
  15. Is there a learning budget? (for conferences / books / training?)
  16. Is it possible to take unpaid time off?

If you’re negotiating a job offer it can also be useful to ask about signing/relocation bonus and details about the stock options.

This is probably too many questions to ask all at once, and your manager may not even know the answers to all of these questions themselves. That’s okay! I definitely didn’t know the answers to all of these at my last job, but knowing even some of these answers is really helpful.

company policies can vary a lot

The reason this blog post is “questions to ask” and not “how compensation works” is that different companies have VERY different compensation policies. At some companies you can ask for a raise and just get it if you make a good case, at other companies there are very strict rules about the salary bands for each level, and there’s everything in between – and that variation applies to every axis of compensation (salary, bonuses, equity, paid vacation, benefits). Regardless of what the “best” compensation policies are, it’s good to know exactly what situation you’re in.

And be careful of assuming you know the answers already!

why this is useful

If you know when and how decisions about compensation are made, it’s easier to figure out where to apply pressure, either individually (by making a case for yourself) or through collective action (by making specific demands as a group for something to be changed).


New zine: Become a SELECT Star!

On Friday I published a zine about SQL called “Become a SELECT Star!”

You can get it for $12 at https://wizardzines.com/zines/sql. If you buy it, you’ll get a PDF that you can either read on your computer or print out. You can also get a pack of all 7 zines so far.

Here’s the cover and table of contents:

why SQL?

I got excited about writing a zine about SQL because at my old job I wrote a ton of SQL queries (mostly related to machine learning) and by doing that I learned there are a lot of weird things about SQL! For example – SQL queries don’t actually start with SELECT. And the way NULL behaves isn’t really intuitive at first.
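For example, here’s the NULL weirdness in action – you can try this yourself with Python’s built-in sqlite3 module (the table here is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cats (name TEXT, owner TEXT)")
conn.executemany("INSERT INTO cats VALUES (?, ?)",
                 [("mr darcy", "kamal"), ("stray", None)])

# Intuition says `owner = NULL` should match the stray... but NULL
# isn't equal to anything (not even NULL!), so this returns no rows.
rows = conn.execute("SELECT name FROM cats WHERE owner = NULL").fetchall()

# You have to say IS NULL instead.
stray = conn.execute("SELECT name FROM cats WHERE owner IS NULL").fetchall()
```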

It’s been really fun to go back and try to explain the basics of SQL from the beginning. (what’s the difference between WHERE and HAVING? what’s the basic idea with indexes actually? how do you write a join?)
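To give away one of those answers: WHERE filters individual rows before grouping, and HAVING filters the groups afterwards. A tiny made-up example, again using sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("ana", 10), ("ana", 200), ("bo", 5), ("bo", 7)])

# WHERE runs first, on rows: drop every order under 6.
# GROUP BY then groups what's left, and HAVING filters the groups:
# keep only customers who still have at least 2 orders.
result = conn.execute("""
    SELECT customer, COUNT(*) AS n
    FROM orders
    WHERE amount >= 6
    GROUP BY customer
    HAVING COUNT(*) >= 2
    ORDER BY customer
""").fetchall()
# bo's 7-dollar order survives the WHERE, but bo's group has only
# 1 row left, so HAVING drops it -- only ana remains.
```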

I think SQL is a really nice thing to know because there are SO MANY SQL databases out there, and some of them are super powerful! (like BigQuery and Redshift). So if you know SQL and have access to one of these big data warehouses you can write queries that crunch like 10 billion rows of data really quickly.

lots of examples

I ended up spending a lot of time on the examples in this zine, more than in any previous zine. My friend Anton helped me come up with a fun way to illustrate them, where you can clearly see the query, the table it’s running on, and what the query outputs. Like this:

experiment: include a SQL playground

All the examples in the zine are real queries that you can run. So I thought: why not provide a simple environment where people can actually run those queries (and variations on those queries) to try things out?

So I built a small playground where you can run queries on the example tables in the zine. It uses SQLite compiled to web assembly, so all the queries run in your browser. It wasn’t too complicated to build – I just used my minimal Javascript/CSS skills and vue.js.

I’d love to hear any feedback about whether this is helpful or not – the example tables in the zine are really small (you can only print out small SQL tables!), so the biggest table in the example set has 9 rows or something.

what’s next: probably containers

I think that next up is going to be a zine on containers, which is more of a normal systems-y topic for me. (for example: namespaces, cgroups, why containers?)

Here’s a link to where to get the zine again :)


PaperWM: tiled window management for GNOME

When I started using Linux on my personal computer, one of the first things I got excited about was tiny lightweight window managers, largely because my laptop at the time had 32MB of RAM and anything else was unusable.

Then I got into tiling window managers like xmonad! I could manage my windows with my keyboard! They were so fast! I could configure xmonad by writing a Haskell program! I could customize everything in all kinds of fun ways (like using dmenu as a launcher)! I used 3 or 4 different tiling window managers over the years and it was fun.

About 6 years ago I decided configuring my tiling window manager wasn’t fun for me anymore and switched to using the Ubuntu stock desktop environment: Gnome. (which is much faster now that I have 500x more RAM in my laptop :) )

So I’ve been using Gnome for a long time, but I still kind of missed tiling window managers. Then 6 months ago a friend told me about PaperWM, which lets you tile your windows in Gnome! I installed it immediately and I’ve been using it ever since.

PaperWM: tiling window management for Gnome

The basic idea of PaperWM is: you want to keep using Gnome (because all kinds of things Just Work in Gnome) but you also kinda wish you were using a tiling window manager.

It’s a Gnome extension (instead of being a standalone window manager) and it’s in Javascript.

“Paper” means all of your windows are in a line

The main idea in PaperWM is it puts all your windows in a line, which is actually quite different from traditional tiling window managers where you can tile your windows any way you want. Here’s a gif of me moving between / resizing some windows while writing this blog post (there’s a browser and two terminal windows):

PaperWM’s Github README links to this video: http://10gui.com/video/, which describes a similar system as a “linear window manager”.

I’d never heard of this way of organizing windows before but I like the simplicity of it – if I’m looking for a specific window I just move left/right until I find it.

everything I do in PaperWM

There are lots of other features, but I only use a handful of them.

I like tools that I don’t have to configure

I’ve been using PaperWM for 6 months on a laptop and I really like it! I also really appreciate that even though it’s configurable (by writing a Javascript configuration file), it does the things I want out of the box without me having to research how to configure it.

The fish shell is another delightful tool like that – I basically don’t configure fish at all (except to set environment variables etc) and I really like the default feature set.


2019: Year in review

It’s the end of the year again! Here are a few things that happened in 2019. I wrote these in 2015, 2016, 2017, and 2018 too.

I have a business instead of a job!

The biggest change this year is that I left my job in August after working there for 5.5 years and now I don’t have a job! Now I have a business (wizard zines).

This has been exciting (I can do anything I want with my time! No rules! Wow!) and also disorienting (I can do anything I… want? Wait, what do I want to do exactly?). Obviously this is a good problem to have but it’s a big adjustment from the structure I had when I had a job.

My plan for now is to give myself a year (until August 2020) to see how this new way of existing goes and then reevaluate.

I wanted to write some reflections on my 5 years at Stripe here but it’s been such a huge part of my life for so long that I couldn’t figure out how to summarize it. I was in a much worse place in my career 6 years ago before I started working there and it really changed everything for me.


2019 was !!Con’s 6th year! It’s a conference about the joy, excitement, and surprise of programming. And !!Con also expanded to the west coast!! I wasn’t part of organizing the west coast conference at all but I got to attend and it was wonderful.

Running a conference is a ton of work and I feel really lucky to get to do it with such great co-organizers – there have been at least 20 people involved in organizing over the years and I only do a small part (right now I organize sponsorships for the east coast conference).

This year we also incorporated the Exclamation Foundation, the official entity that runs both conferences, which is going to make organizing money things a lot easier.

I understand how the business works a little better

Earlier this year I signed up for a business course called 30x500 by Amy Hoy and Alex Hillman. They’ve influenced me a lot this year. Basically I signed up for it because I had a business that had made $100,000 in revenue already but I didn’t really understand how the business worked and it felt like it could just evaporate at any point. So $2000 (the cost at the time of 30x500) was worth it to help me understand what was going on.

Amy and Alex both just the other day wrote 100-tweet threads that have some of the ideas that I learned this year in them: Alex on creating sustainable businesses and Amy on design.

I was hoping to build a system for selling printed zines in 2019 and I didn’t get to it – that’s probably my one concrete business goal for 2020. I tried out Lulu for printing in the hopes that I could experiment with print-on-demand but the quality was awful so it’s going to be a bit more work.

blog posts and things

Here are some highlights of what I wrote and published in 2019:

The blog post I’m happiest to have published this year is definitely Get your work recognized: write a brag document. I’ve seen quite a few people saying that it helped them track their work and it makes me really happy. A bunch of people at my old job adopted it and it’s one of the non-engineering projects I’m most proud of having done there.

Publishing this post about my business revenue was also important to me – in the past I loved blogging, but I didn’t think it was possible to make a living by explaining computer things online. And I was totally wrong! It is possible! So I think it’s important to tell other people that it’s a possibility.


I published 2 zines: Bite Size Networking and HTTP: Learn Your Browser’s Language. And wrote most of a third zine about SQL which should be out in January.

I made the same business revenue as in 2018 (which I was thrilled about).

published a box set of my free zines

In August I published a box set of all my free zines with No Starch Press (Your Linux Toolbox, it’s in Real Physical Bookstores!!) They did a fantastic job printing it: the quality is really really good. I’m very happy with how it turned out. (and if you do buy it and like it, leaving an amazon review helps me a lot).

And No Starch just told me last week they’ve sold 4000 copies so far and are looking to do a second printing!

Having a Real Traditionally Published Thing out is really cool, I could not have imagined 4 years ago that I could go to an actual bookstore and buy the little 16-page zine I wrote about how much I love strace.

The business aspect of it is interesting – because I’m so used to running a business where I sell my own zines, getting 10% in royalties instead of 100% feels strange. But printing and distribution are complicated! And it’s really cool that I can say “yeah, go to Barnes & Noble, they’ll have it”! And No Starch helped me a lot with picking a good title and cover art! And basically the whole traditional publishing ecosystem just works in a completely different way from what I’m used to :)

I think I’ll have a better sense for how to think about traditional publishing from a business perspective in a year or so after the book has been out for longer.

A big thing I learned from this project is that having zines that are printed in a higher quality way (not just on a home printer) is really nice.

what went well

some things that were good this year:

some things that are harder:


"server" is hard to define

Somebody asked me recently what a server was, and I had a harder time explaining it than I expected! I thought I was going to be able to give some kind of simple pithy answer, but it kind of got away from me. So here’s a short exploration of what the word “server” can mean:

a server responds to requests

A server definitely responds to requests. A few examples:

web server:

Me: "please give me google.com"
Server: "here is the HTML for that webpage"

bittorrent server:

Me: "I would like this chunk of the good wife season 2"
Server: "here are some of the  bytes from that .avi file!"

mail server:

Me: "can you send this email to julia@jvns.ca"
Server: "I sent it!"

But what is a server actually specifically exactly?

a server is a program

My first instinct is to say “a server is a program”, because, for example, “the wordpress server” is a PHP program. So let’s start with that.

A server is usually a program that listens on a port (like 80). For example, if we’re talking about a Rails webserver, then the program is a Ruby program that’s listening on a port for HTTP requests.
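Here’s a sketch of what “a program that listens on a port” means at the lowest level, in Python (it binds to port 0, which asks the OS for any free port):

```python
import socket
import threading

def run_server(server_sock):
    """Accept one connection and answer its request with a greeting."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)            # read the client's request
        conn.sendall(b"hello, " + request)   # respond to it

# The server program: listen on a port (0 = let the OS pick a free one).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# A client connects to that port and makes a request.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"world")
reply = client.recv(1024)
client.close()
```

Real servers (Rails, wordpress, etc.) are doing a much fancier version of exactly this loop: accept a connection, read a request, write a response.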

For example, we can start a Python server to serve files out of the current directory.

$ python3 -m http.server &
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...

and send requests to it with curl:

$ curl localhost:8000/config.yaml
baseurl: https://jvns.ca
disablePathToLower: true
languageCode: en-us
title: Julia Evans
author: Julia Evans

a server might be a virtual machine

But often when I talk about “a server” at work, I’ll use it in a sentence like “I’m going to SSH to that server to see what’s going on with it”, or “wow, that server is swapping a lot, that’s bad!“.

So in those cases clearly I don’t mean a program when I say “that server” (you can’t ssh to a program, though the ssh server that runs on the VM is itself a program!), I mean the AWS instance that the server program is running on. That AWS instance is a virtual machine, which looks like a computer in a lot of ways (it’s running an operating system!) but it isn’t a physical computer.

a server might be a container

Similarly to how your server might be a virtual machine, it could also be a container running in a virtual machine. So “the server is running out of memory” could mean “the container is running out of memory and crashing” which really means “we set a cgroup memory limit on this container and the programs in the container with that cgroup exceeded the limit so the Linux kernel OOM killed them”.

But containers make everything a lot more complicated so I think we should stop there for now.

a server is a computer

But also when you buy a server from Dell or some other computer company, you’re not buying a virtual machine, you’re buying an actual physical machine.

Usually these computers live in buildings called datacenters. For example in this video you can see thousands of servers in a Google datacenter.

The computers in this datacenter don’t look like the computers in my house! They’re short and wide because they’re designed to fit into these giant racks of servers. For example if you search Newegg for 1U server you’ll find servers that are 1 “rack unit” high, and a rack unit is 1.75 inches. There are also 2U servers which are twice as high.

Here’s a picture of a 1U server I found on Newegg:

I’ve only seen a server rack once at the Internet Archive which is in what used to be a church in San Francisco, and it was really cool to realize – wow, when I use the Wayback Machine it’s using the actual computers in this room!

“the server” might be 1000 computers

Next, let’s say we’re talking about how Gmail works. You might ask “hey, when I search my email to find my boarding pass, does that happen in the frontend or on the server?”.

The answer is “it happens on the server”, but what’s “the server” here? There’s not just one computer or program or virtual machine that searches your Gmail, there are probably lots of computers and programs at Google that are responsible for that, and they’re probably distributed across many datacenters all over the world.

And even if we’re just talking about doing 1 search, there could easily be 20 different computers in 3 different countries involved in just running that 1 search.

So the words “the server” in “oh yeah, that happens on the server” mean something kind of complicated here – what you’re actually saying is something like “well, the browser makes a request, and that request does something, but I’m not really going to worry about what, because the important thing is just that the browser made a request and got some kind of response back.”

what happens when I search my email for a boarding pass?

When I search for “boarding” in my email, the Javascript running on the frontend puts together this request. It’s mostly indecipherable but it definitely contains the word “boarding”:

{
  "1": {
    "1": 79,
    "2": 101,
    "4": "boarding",
    "5": {
      "5": 0,
      "12": "1577376926313",
      "13": -18000000
    },
    "6": "itemlist-ViewType(79)-5",
    "7": 1,
    "8": 2000,
    "10": 0,
    "14": 1,
    "16": {
      "1": 1,
      "2": 0,
      "3": 0,
      "7": 1
    },
    "19": 1
  },
  "3": {
    "1": "0",
    "2": 5,
    "5": 1,
    "6": 1,
    "7": 1
  }
}
We get a response back which is large and complicated and definitely contains search results from my email about boarding passes. Here’s an excerpt:

"your electronic boarding pass. You could also be asked to display this \nmessage to airport security. * PLEASE NOTE: A printable",
"the attached boarding pass to present at the airport. Manage your booking \nBooking Details Passenger: JULIA EVANS Booking",
"Electronic boarding pass is not offered for your flight. Click the link \nbelow to access the PRINTABLE VERSION of your boarding",
"Save time at the airport Save time at the airport Web version",
"GET YOUR BOARDING PASS IN ADVANCE > You can now check in for your flight \nand you will receive a boarding pass > allowing",
"Save time at the airport Save time at the airport Web version",
"Booking Confirmation Booking Reference: xxxxxx Date of issue: xxxxxxxxxxxx \nSelect Seats eUpgrade",
"your electronic boarding pass. You could also be asked to display this \nmessage to airport security. * PLEASE NOTE: A printable",
"your electronic boarding pass. You could also be asked to display this \nmessage to airport security. * PLEASE NOTE: A printable",
"Save time at the airport Save time at the airport Web version",
"house was boarded up during the last round of bombings. I have no spatial \nimagination and cannot picture the house in three",
"Booking Confirmation Booking Reference: xxxxxx Date of issue: xxxxxxxxxxxx \nSelect Seats eUpgrade",
"required when boarding a flight to Canada. For more details, please visit \nCanada.ca/eTA . - Terms and Conditions of Sale",
"Your KLM boarding pass(s) on XXXXXX To: [image: KLM SkyTeam] Boarding \ninformation Thank you for checking in! Attached you",
"Boarding information Thank you for checking in! Attached you will find your \nboarding pass and/or other documents. Below",
"jetBlue® Your upcoming trip to SEATTLE, WA on xxxxxxxxxxx Flight status \nBaggage info Airport info TAG",
"your electronic boarding pass. You could also be asked to display this \nmessage to airport security. * PLEASE NOTE: A printable"

That request got sent to an IP address that corresponds to some edge server near me. There were probably many other computers involved in searching my email than just the first one that got my request, but the nice thing about this is that we don’t need to care exactly what happened behind the scenes! The browser sent a request, and it got search results back, and it doesn’t need to know which servers were involved.

We can just say “it happens on the server” and not worry too much about the ambiguity of what exactly that means (until something weird goes wrong :)).

the meaning of “server” depends on the context

So we’ve arrived somewhere a little bit interesting – at first when I thought about the question “what’s a server?” I really thought there was going to be a single simple answer! But it turns out that if you look at sentences where we use the word “server” it can actually refer to a lot of different things in a way that can be confusing:


How tracking pixels work

I spent some time talking to a reporter yesterday about how advertisers track people on the internet. We had a really fun time looking at Firefox’s developer tools together (I’m not an internet privacy expert, but I do know how to use the network tab in developer tools!) and I learned a few things about how tracking pixels actually work in practice!

the question: how does Facebook know that you went to Old Navy?

I often hear about this slightly creepy internet experience: you’re looking at a product online, and a day later you see an ad for the same boots or whatever that you were looking at. This is called “retargeting”, but how exactly does it work in practice?

In this post we’ll experiment a bit and see exactly how Facebook can know what products you’ve looked at online! I’m using Facebook as an example in this blog post just because it’s easy to find websites with Facebook tracking pixels on them but of course almost every internet advertising company does this kind of tracking.

the setup: allow third party trackers, turn off my adblocker

I use Firefox, and by default Firefox blocks a lot of this kind of tracking. So I needed to modify my Firefox privacy settings to get this tracking to work.

I changed my privacy settings from the default (screenshot) to a custom setting that allows third-party trackers (screenshot). I also disabled some privacy extensions I usually have running.

tracking pixels: it’s not the gif, it’s the URL + query parameters

A tracking pixel is a 1x1 transparent gif that sites use to track you. By itself, obviously a tiny 1x1 gif doesn’t do too much. So how do tracking pixels track you? 2 ways:

  1. Sites use the URL and query parameters in the tracking pixel to add extra information, like the URL of the page you’re visiting. So instead of just requesting https://www.facebook.com/tr/ (which is a 44-byte 1x1 gif), it’ll request https://www.facebook.com/tr/?the_website_you're_on. (email marketers use similar tricks to figure out if you’ve opened an email, by giving the tracking pixel a unique URL)
  2. Sites send cookies with the tracking pixel so that they can tell that the person who visited oldnavy.com is the same as the person who’s using Facebook on the same computer.
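To make the first trick concrete, here’s a sketch of how a site’s tracking script might build a pixel URL. This is a made-up example (the tracker domain and parameter names are invented for illustration, not any real tracker’s API) – the point is just that the gif’s URL is where the information hides:

```javascript
// Hypothetical sketch: the 1x1 gif itself is boring, but its URL
// carries the real payload as query parameters.
function trackingPixelUrl(pageUrl, userId) {
  const params = new URLSearchParams({
    ev: "PageView", // what kind of event this is
    dl: pageUrl,    // the page you're on right now
    uid: userId,    // some identifier for you
  });
  // parameter names and domain are made up for this example
  return "https://tracker.example.com/tr/?" + params.toString();
}

console.log(
  trackingPixelUrl(
    "https://oldnavy.gap.com/browse/product.do?pid=504753002",
    "fb.1.1576684798512"
  )
);
```

The gif the server sends back is identical every time – all the interesting information travels in the request, not the response.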

the Facebook tracking pixel on Old Navy’s website

To test this out, I went to look at a product on the Old Navy site with the URL https://oldnavy.gap.com/browse/product.do?pid=504753002&cid=1125694&pcid=1135640&vid=1&grid=pds_0_109_1 (a “Soft-Brushed Plaid Topcoat for Men”).

When I did that, the Javascript running on that page (presumably this code) sent a request to facebook.com that looks like this in Developer tools: (I censored most of the cookie values because some of them are my login cookies :) )

Let’s break down what’s happening:

  1. My browser sends a request to https://www.facebook.com/tr/?id=937725046402747&ev=PageView&dl=https%3A%2F%2Foldnavy.gap.com%2Fbrowse%2Fproduct.do%3Fpid%3D504753002%26cid%3D1125694%26pcid%3Dxxxxxx0%26vid%3D1%26grid%3Dpds_0_109_1%23pdp-page-content&rl=https%3A%2F%2Foldnavy.gap.com%2Fbrowse%2Fcategory.do%3Fcid%3D1135640%26mlink%3D5155%2Cm_mts_a&if=false&ts=1576684838096&sw=1920&sh=1080&v=2.9.15&r=stable&a=tmtealium&ec=0&o=30&fbp=fb.1.1576684798512.1946041422&it=15xxxxxxxxxx4&coo=false&rqm=GET
  2. With that request, it sends a cookie called fr which is set to 10oGXEcKfGekg67iy.AWVdJq5MG3VLYaNjz4MTNRaU1zg.Bd-kxt.KU.F36.0.0.Bd-kx6. (which I guess is my Facebook ad tracking ID)
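Those query parameters are URL-encoded, but you can decode them yourself to see exactly what’s being sent. Here’s a quick sketch using the URL parsing that’s built into browsers and Node (I’ve shortened the pixel URL above to just the interesting parameters):

```javascript
// The actual pixel request from above, shortened to a few parameters:
const pixelUrl =
  "https://www.facebook.com/tr/?id=937725046402747&ev=PageView" +
  "&dl=https%3A%2F%2Foldnavy.gap.com%2Fbrowse%2Fproduct.do%3Fpid%3D504753002";

// URL.searchParams decodes the percent-encoding for us
const params = new URL(pixelUrl).searchParams;
console.log(params.get("id")); // 937725046402747: the advertiser's pixel ID
console.log(params.get("ev")); // PageView: the kind of event
console.log(params.get("dl")); // the decoded Old Navy product page URL
```

So the `dl` parameter decodes right back to the “Soft-Brushed Plaid Topcoat for Men” product page URL.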

So the three most notable things being sent in the tracking pixel query string are:

  1. id=937725046402747: an ID saying whose tracking pixel this is
  2. dl: the full (URL-encoded) address of the Old Navy product page I was looking at
  3. rl: the page I came from (the category page that linked to the product)

now let’s visit Facebook!

Next, let’s visit Facebook, where I’m logged in. What cookies is my browser sending Facebook?

Unsurprisingly, it’s the same fr cookie from before: 10oGXEcKfGekg67iy.AWVdJq5MG3VLYaNjz4MTNRaU1zg.Bd-kxt.KU.F36.0.0.Bd-kx6.. So Facebook now definitely knows that I (Julia Evans, the person with this Facebook account) visited the Old Navy website a couple of minutes ago and looked at a “Soft-Brushed Plaid Topcoat for Men”, because they can use that identifier to match up the data.

these cookies are third-party cookies

The fr cookie that Facebook is using to track what websites I go to is called a “third party cookie”, because Old Navy’s website is using it to identify me to a third party (facebook.com). This is different from first-party cookies, which are set and read by the site you’re actually visiting (for example, to keep you logged in).
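Roughly, the mechanics look like this (a simplified sketch of the headers involved, not the exact ones Facebook sends): the first time facebook.com serves a pixel to your browser, the response sets an ID cookie, and every later pixel request from any site sends it back.

```
# first pixel response: facebook.com gives this browser an ID
# (modern browsers require SameSite=None; Secure for third-party cookies)
HTTP/1.1 200 OK
Set-Cookie: fr=10oGXEcKfGekg67iy...; Domain=.facebook.com; Secure; SameSite=None

# every later pixel request, from oldnavy.gap.com or anywhere else:
GET /tr/?dl=https%3A%2F%2Foldnavy.gap.com%2F... HTTP/1.1
Host: www.facebook.com
Cookie: fr=10oGXEcKfGekg67iy...
```

The cookie belongs to facebook.com, so it comes along for the ride no matter which site embedded the pixel – that’s what lets the visits be joined up.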

Safari and Firefox both block many third-party cookies by default (which is why I had to change Firefox’s privacy settings to get this experiment to work), and as of today Chrome doesn’t (presumably because Chrome is owned by an ad company).

sites have lots of tracking pixels

Like I expected, sites have lots of tracking pixels. For example, wrangler.com loaded 19 different tracking pixels in my browser from a bunch of different domains. The tracking pixels on wrangler.com came from: ct.pinterest.com, af.monetate.net, csm.va.us.criteo.net, google-analytics.com, dpm.demdex.net, google.ca, a.tribalfusion.com, data.photorank.me, stats.g.doubleclick.net, vfcorp.dl.sc.omtrdc.net, ib.adnxs.com, idsync.rlcdn.com, p.brsrvr.com, and adservice.google.com.

For most of these trackers, Firefox helpfully pointed out that it would have blocked them if I were using the standard Firefox privacy settings.

why browsers matter

The reason browsers matter so much is that your browser has the final word on what information it sends about you to which websites. The Javascript on Old Navy’s website can ask your browser to send tracking information about you to Facebook, but your browser doesn’t have to do it! It can decide “oh yeah, I know that facebook.com/tr/ is a tracking pixel, I don’t want my users to be tracked, I’m just not going to send that request”.
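The core of that decision is often just a URL check. Here’s a simplified sketch of the kind of logic a tracker-blocking extension might use (the blocklist is made up for illustration, and a real WebExtension would hook a function like this up to the browser’s `webRequest` API rather than calling it directly):

```javascript
// Sketch: decide whether a request looks like a known tracking pixel.
// This blocklist is invented for the example, not a real extension's list.
const TRACKERS = [
  { host: "www.facebook.com", pathPrefix: "/tr" },
  { host: "stats.g.doubleclick.net", pathPrefix: "/" },
];

function shouldBlock(requestUrl) {
  const url = new URL(requestUrl);
  return TRACKERS.some(
    (t) => url.hostname === t.host && url.pathname.startsWith(t.pathPrefix)
  );
}

console.log(shouldBlock("https://www.facebook.com/tr/?id=937725046402747")); // true
console.log(shouldBlock("https://oldnavy.gap.com/browse/product.do"));       // false
```

Real blockers use much bigger, community-maintained lists (and fancier matching rules), but the principle is the same: the browser sees every outgoing request and can simply refuse to send the ones that look like tracking.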

And browsers make that behaviour configurable through settings and extensions, which is why there are so many privacy extensions.

it’s fun to see how this works!

I think it’s fun to see how cookies and tracking pixels are used to track you in practice, even if it’s kind of creepy! I sort of knew how this worked before, but I’d never actually looked at the cookies on a tracking pixel myself, or at exactly what information it was sending in its query parameters.

And if you know how it works, it’s a little easier to figure out how to be tracked less!

what can you do?

I do a few small things to get tracked a little less on the internet, like keeping Firefox’s stricter privacy settings turned on and running an adblocker and some privacy extensions.

There are still lots of other ways to be tracked on the internet (especially when using mobile apps where you don’t have the same kind of control as with your browser), but I like understanding how this one method of tracking works and think it’s nice to be tracked a little bit less.


Page created: Thu, Jul 16, 2020 - 09:05 AM GMT