Julia Evans


Wizard Zines' first print run: Help! I have a Manager!

Hello! For the first time, Wizard Zines is doing a ★★print shipment★★!

I printed out 400 copies of Help! I have a manager! at the best print shop I could find in Montreal and they’re ready to ship to you. Free shipping to anywhere in the world (as long as Canada Post will let me ship there) is included. The deadline to order a zine is September 6.

So far the only option has been for people to print them on their home printers or at their local print shop. I’ve wanted to get into printing and shipping my zines for a long time, so I’m very excited to try this out.

I’m doing all the packaging and shipping myself from my house, so I’m going to ship them all out in a big batch around September 7.

Get it for $16

the zines!

Here’s a picture of one of the boxes of zines! So many zines!

I experimented with a few different print shops in Montreal, and I found out that using a print shop that cost more got me way better quality zines! So that’s what I did. (I went with Photosynthèse).

why I’m shipping them myself

I spent a bunch of time looking into fulfillment companies to try to ~scale~ printing & shipping zines. That research might still come in handy if this first batch goes well (I probably won’t keep doing it myself forever!), but in the spirit of flintstoning, I decided it would be a lot simpler to start out by not overengineering the process. Shipping a zine to the US from Canada using letter mail only costs about $3, so I can even include free shipping.

I’ve actually shipped 100 zines once before in an indiegogo campaign I ran in 2016, and it wasn’t too bad, so I’m confident that I can ship 400 zines manually as long as I can do it all in one giant batch. That time I even wrote all the addresses by hand, which I definitely won’t do this time.

And doing it myself means I can do some fun things with the envelopes & my laser printer that would be hard to convince a fulfillment company to do.


Here’s a FAQ which will hopefully answer all your questions! Email me at print@wizardzines.com if you have other questions.

What’s included?

You’ll get:

  1. A print copy of Help! I have a manager!, printed in full colour on high quality paper.
  2. A PDF copy of Help! I have a manager! (which usually costs $10 on its own).

What’s the order deadline?

The deadline to order a zine is September 6.

When will I get it?

I’ll mail all the zines around September 7-8, 2020 (right after orders close).

You should get your zine about a week later, assuming the mail system behaves.

Will I get a tracking number?

No. All the zines will be shipped by first class letter mail from Canada, without any tracking. This keeps shipping costs down so that I can do free shipping :). If anything at all goes wrong with your shipment, just let me know (print@wizardzines.com) and I’ll mail you another one. (it’s like UDP!)

The zines should reach the US in about a week, longer if you’re overseas.

Can I order more than one copy at a time?

No, to keep things simple, I’m only shipping one zine at a time. I’m hoping to do batches in the future though!

Can I order these for my team?

Yes! You’ll just need to make one order per person, so that I get everyone’s shipping address. This is extremely compatible with remote work :)

also, if you want to order 50 or more print copies mailed to a single address, email me and we’ll work something out

What happens if I don’t order by September 6?

If this print run sells out, we’ll print more in the future!

What if I already bought the digital copy?

If you already bought the digital copy – thank you!! You can email me at print@wizardzines.com and I’ll send you a discount code to get the print version for less.

What are your future print plans?

If this print run goes well, I’ll figure out how to scale the process up beyond “let’s ship 400 zines from my house”. I’m not sure how that will work yet, but we’ll figure it out!

Here’s the link to order a print copy, again! If you just want the PDF, you can get it here: Help! I have a Manager! PDF.

Get it for $16


Implementing 'focus and reply' for Fastmail with JMAP

Last month I switched my email to Fastmail. One fun thing about Fastmail is that they built a new protocol called JMAP which is much easier to use than IMAP. So over the last couple of days I built a fun tiny email feature for myself to use with JMAP.

The point of this post is mostly to give a simple end-to-end example of how to use the JMAP API, because I couldn’t find a lot of examples when I was figuring it out. Here’s the github repo and a gist which shows how to authenticate & make your first request.

cool feature from Hey: focus & reply

I tried the https://hey.com email service for a little bit when it came out. It wasn’t for me, but I liked their “focus and reply” feature. Here’s a screenshot of what it looks like (from a video on their marketing site page for the feature)

Basically it makes replying to a lot of emails in a batch a little simpler. So I thought – can I use JMAP to implement this focus & reply feature from Hey?

step 0: make the feature simpler

I was a bit too scared to actually send email to start (read-only is safe!), so I decided to start by just making a UI that would show me all the emails I needed to reply to and give me a text box to fill in the replies. Then I could copy and paste the replies into my webmail client to send them. This is a little janky, but I don’t mind it for now.

Here’s an example of what that looks like:

step 0.5: have a “Reply Later” folder in Fastmail

I already had a folder named “Reply Later” in Fastmail, where I manually filed away emails that I needed to reply to but hadn’t gotten to yet. So I had a data source to use! Hooray. Time to start coding.

step 1: get started with JMAP

I couldn’t find a quickstart guide for using JMAP with Fastmail and I was confused about how to do it for quite a while, so part of my goal with this blog post is to give an example of how to get started. I put all the code you need to make your first API request in a gist: fastmail-jmap-quickstart.js

You can authenticate all your requests with HTTP Basic authentication, using your username and a Fastmail app password.

Here’s the basics of how it works.

  1. Make a GET request to https://jmap.fastmail.com/.well-known/jmap. This gives you a “session” in response, which includes your account ID. You need this account ID for all the other API calls. I found this a bit surprising because I usually expect things in .well-known to be static files, but this one is a dynamic endpoint that you authenticate to with HTTP Basic authentication (using your email / app password).
  2. Use that account ID to make requests to the JMAP API at https://jmap.fastmail.com/api/
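
Here’s a minimal sketch of step 1 in JavaScript, as I understand it – the username/password values are placeholders, and `primaryAccounts` is where the session object keeps the account IDs:

```javascript
// Sketch of step 1: fetch the JMAP session to get your account ID.
// USERNAME and APP_PASSWORD are placeholders for your real credentials.
const USERNAME = "you@fastmail.com";
const APP_PASSWORD = "your-app-password";

// HTTP Basic auth is just base64("username:password") in a header
const authHeader =
  "Basic " + Buffer.from(`${USERNAME}:${APP_PASSWORD}`).toString("base64");

async function getAccountId() {
  const res = await fetch("https://jmap.fastmail.com/.well-known/jmap", {
    headers: { Authorization: authHeader },
  });
  const session = await res.json();
  // the session object maps each capability to a primary account ID
  return session.primaryAccounts["urn:ietf:params:jmap:mail"];
}
```

Every later request to https://jmap.fastmail.com/api/ includes that same Authorization header plus the account ID.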

One thing that threw me off about JMAP at first is that you have to wrap all your API requests with

    "using": [ "urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail" ],
    "methodCalls": YOUR_REQUEST_HERE

For example, this is a request to get a list of all your mailboxes (folders). I think "0" is the ID of the request:

    {
        "using": [ "urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail" ],
        "methodCalls": [[ "Mailbox/get", {
            "accountId": accountId,
            "ids": null
        }, "0" ]]
    }

The API wasn’t that intuitive at first, but I was able to figure out how to do what I wanted by reading the spec at https://jmap.io.

step 2: get all my emails

Here’s the query I used to get my emails from JMAP. I basically just copied this from the examples in the JMAP documentation, but I think it’s interesting that it’s not just 1 query, it’s actually 5 different chained queries that build on top of each other. For example, you have:

[ "Email/query", {
    "accountId": accountId,
    // todo: actually do the reply later thing
    "filter": { "inMailbox": mailbox_id },
    "sort": [{ "property": "receivedAt", "isAscending": false }],
    "collapseThreads": true,
    "position": 0,
    "limit": 20,
    "calculateTotal": true
}, "t0" ],
[ "Email/get", {
    "accountId": accountId,
    "#ids": {
        "resultOf": "t0",
        "name": "Email/query",
        "path": "/ids"
    },
    "properties": [ "threadId" ]
}, "t1" ],

This queries for a list of all the email IDs in a specific mailbox (my “reply later” mailbox), calls it t0, and then uses the results of t0 to request all of those emails.

One of the big ideas in JMAP seems to be this chaining – it really reduces latency if you can do all your work in a single request.
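
To make the chaining concrete, here’s roughly what a full wrapped request body with two chained calls looks like. The accountId and mailbox_id values here are made up – you’d get them from the session and from Mailbox/get:

```javascript
// Hypothetical IDs, for illustration only
const accountId = "u12345";
const mailbox_id = "mb-reply-later";

const request = {
  using: ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
  methodCalls: [
    // first: query for the email IDs in the mailbox
    ["Email/query", {
      accountId: accountId,
      filter: { inMailbox: mailbox_id },
      sort: [{ property: "receivedAt", isAscending: false }],
      limit: 20,
    }, "t0"],
    // second: "#ids" is a back-reference that tells the server
    // to use the "ids" from t0's result
    ["Email/get", {
      accountId: accountId,
      "#ids": { resultOf: "t0", name: "Email/query", path: "/ids" },
      properties: ["subject", "from", "receivedAt"],
    }, "t1"],
  ],
};

// you'd POST this as JSON to https://jmap.fastmail.com/api/
const body = JSON.stringify(request);
```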

step 3: render the emails!

Once I had all the emails, rendering them was pretty easy – I just used vue.js + Tailwind. The whole thing came out to 170 lines of not-particularly-well-organized Javascript.

the results

It works! It’s already helped me reply to some emails. The github repo is https://github.com/jvns/focus-reply-fastmail.

there are at least 2 problems with this code (and probably more):

  1. it’s storing passwords in local storage, which I think is not a good security practice.
  2. it had some XSS vulnerabilities, which I think I’ve finally fixed by putting the plaintext email in a <pre> (so that newlines come through) and escaping any HTML entities in there. (<pre>{{email}}</pre>, in Vue)

fastmail seems to use JMAP in a different way than this

I got curious, so I used the Network tab to look at how Fastmail’s web interface uses JMAP.

  1. Fastmail’s webmail interface doesn’t seem to use https://jmap.fastmail.com/ – instead it uses https://www.fastmail.com/jmap/api. Maybe it’s just a proxy they use so that the requests are being made to the same origin? Unclear.
  2. It also authenticates in a different way, using Authorization: Bearer. It seems like this might be a better way to authenticate, but I haven’t found any information about how to get a Bearer token like this to use instead of an app password.
  3. The requests it sends are sometimes compressed with deflate for some reason (instead of gzip), which I guess is fine but it means it’s impossible to look at them in dev tools because Firefox doesn’t understand deflate. Weird!

this seems like a fun way to do email experiments!

I think the idea that anyone can just use JMAP to make fun email UI experiments without dealing with the Hard Parts of email is really fun!

And it’s really cool that I could get this to work 100% as a frontend app, without any server code at all! All the email data is accessible via JMAP, so it seems extremely possible to just do everything with JMAP requests from the client.


Some possible future zines

Hello! I’ve been thinking about what zines I want to write in the future a bit. Usually I don’t have any plans for what I’m going to write next, but having no plan at all feels like it might be getting a bit old.

So this post is mostly a way for me to try to organize my thoughts about why I choose certain topics and what I might want to write in the future.

the criteria

I’m interested in writing about things that are

There are a LOT of topics that fit these criteria. As I was thinking about topics, I realized that there are lots of topics (like object oriented programming principles) that I think could in theory be pretty valuable but that just didn’t speak to me. What’s up with that?

I only write about topics that I care about

I think a thing that I was missing was – I only write about topics that I really think are exciting and fun and important and want to share with people. Some topics I have kind of a weird and complicated love for, like containers (why are they so weird?!).

And right now I’m writing about CSS, which I’ve only pretty recently learned how to love.

I think it’s often important for me to write about topics which I now love but in the past did not love. For example, it took me a very long time to understand how to use tcpdump, and once I got it I felt like I had to tell everyone HELLO I FIGURED IT OUT TCPDUMP IS ACTUALLY AWESOME AND NOT THAT HARD.

It feels a lot less interesting to write about topics where it was immediately obvious to me why they were great or which were easy for me to learn.

zines that I might write

and a few that I think might be too small or too big for a zine:

zines that I don’t think I can write

Here are some topics for zines that I think are “fundamental” in the same way and that I think could be really cool. I don’t think that I could write these today, either because I don’t know enough about the topic yet or because I don’t really feel enough love for it yet.

As with most things, the only way I’ll probably learn more about these is if I end up using them more.

that’s all!

I’m still not sure (even after doing this for years!) why it’s so hard for me to tell what topics will make for a good zine that I can write. Maybe one day I will figure it out!


Some more CSS comics

I’ve been continuing to write pages about CSS! Here are 6 more.

Two of them are about how to think about CSS in general (“CSS isn’t easy” and “backwards compatibility”), which is something I’m still trying to wrap my head around.

handling browser bugs is normal?

The fact that finding workarounds for browser bugs is kind of a normal part of writing CSS really surprised me – there’s this great repo called flexbugs which catalogs bugs in browser implementations of flexbox. A lot of the bugs are in IE which means (depending on your goals) that you can just ignore them, but not all! A bunch of the flexbugs are in Chrome or Safari or Firefox.

For example, I ran into flexbug #9 a few days ago, which is that in Safari a <summary> element can’t be a flexbox, so instead you need to put an extra div inside the <summary> to be the flex element.

In the past I would have reacted to this in a more grumpy way (WHY? NOOOOO? WHAT IS HAPPENING?!?! CSS?!?!?!). But this time I noticed that my site looked weird in Safari on my iPad, figured out after 30 minutes or so that it was a Safari bug, implemented a workaround, and it actually wasn’t that big of a deal!

I think this mindset of “oh, there’s a browser bug, oh well, I guess that happens sometimes!” is a lot healthier and more likely to result in success than getting mad about it.

there are a lot of ways CSS can go wrong

I think there are at least 3 different ways your CSS can be buggy:

  1. that element doesn’t have the styles applied that it should (for example it’s supposed to be background: blue but it’s background: red instead)
  2. the element has the “right” styles applied, but those styles do something confusing / unexpected to me because of something I misunderstood about the CSS spec
  3. the element has the “right” styles applied and those styles do the right thing according to the spec, but the browser has a bug and isn’t implementing the spec correctly
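
For example, here’s a tiny sketch of case 2 – the style really is applied, it just doesn’t do what you might expect (the selector and values are just an illustration):

```css
/* this rule gets applied, but the width has no effect:
   width is ignored on inline (non-replaced) elements */
span {
  width: 200px;
  background: yellow; /* this part works fine */
}
```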

Anyway, enough CSS musings, here are the comics :)

css isn’t easy

Permalink: https://wizardzines.com/comics/css-isnt-easy

backwards compatibility

Permalink: https://wizardzines.com/comics/backwards-compatibility

CSS specificity

Permalink: https://wizardzines.com/comics/css-specificity

centering in CSS

Permalink: https://wizardzines.com/comics/css-centering

padding syntax

Permalink: https://wizardzines.com/comics/padding-margin

flexbox basics

Permalink: https://wizardzines.com/comics/flexbox-basics


An attempt to make a font look more handwritten

I’m actually not super happy with the results of this experiment, but I wanted to share it anyway because it was very easy and fun to play with fonts. And somebody asked me how to do it and I told her I’d write a blog post about it :)

background: the original handwritten font

Some background: I have a font of my handwriting that I’ve been using in my zines for a couple of years. I made it using a delightful app called iFontMaker. They pitch themselves on their website as “You can create your handmade typeface in less than 5 minutes just with your fingers”. In my experience the “5 minutes” part is pretty accurate – I might have spent more like 15 minutes. I’m skeptical of the “just your fingers” claim – I used an Apple Pencil, which has much better accuracy. But it is extremely easy to make a TTF font of your handwriting with the app, and if you happen to already have an Apple Pencil and iPad I think it’s a fun way to spend $7.99.

Here’s what my font looks like. The “CONNECT” text on the left is my actual handwriting, and the paragraph on the right is the font. There are actually 2 fonts – there’s a regular font and a handwritten “monospace” font. (which actually isn’t monospace in practice, I haven’t figured out how to make an actual monospace font in iFontMaker)

the goal: have more character variation in the font

In the screenshot above, it’s pretty obvious that it’s a font and not actual handwriting. It’s easiest to see this when you have two of the same letter next to each other, like in “HTTP”.

So I thought it might be fun to use some OpenType features to somehow introduce a little more variation into this font, like maybe the two Ts could be different. I didn’t know how to do this though!

idea from Tristan Hume: use OpenType!

Then I was at !!Con 2020 in May (all the talk recordings are here!) and saw this talk by Tristan Hume about using OpenType to place commas in big numbers by using a special font. His talk and blog post are both great so here are a bunch of links – the live demo is maybe the fastest way to see his results.

the main idea: OpenType lets you replace characters based on context

I started out being extremely confused about what OpenType even is. I still don’t know much, but I learned that you can write extremely simple OpenType rules to change how a font looks, and you don’t even have to really understand anything about fonts.

Here’s an example rule:

sub a' b by other_a;

What sub a' b by other_a; means is: If an a glyph is before a b, then replace the a with the glyph other_a.

So this means I can make ab appear different from ac in the font. It’s not random the way handwriting is, but it does introduce a little bit of variation.
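
For example, here’s what a tiny hypothetical rules.fea might look like. The t.alt and l.alt glyph names are assumptions – your font would need to actually contain those alternate glyphs:

```
feature calt {
    # if a "t" is followed by another "t", replace the first
    # one with an alternate glyph
    sub t' t by t.alt;
    # same idea for a doubled "l"
    sub l' l by l.alt;
} calt;
```

(calt is the “contextual alternates” feature, which I believe most browsers and apps apply by default.)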

OpenType reference documentation: awesome

The best documentation I found for OpenType was this OpenType™ Feature File Specification reference. There are a lot of examples of cool things you can do in there, like replace “ffi” with a ligature.

how to apply these rules: fonttools

Adding new OpenType rules to a font is extremely easy. There’s a Python library called fonttools, and these 5 lines of code will apply a list of OpenType rules (in rules.fea) to the font file input.ttf.

from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeatures

ft_font = TTFont('input.ttf')
addOpenTypeFeatures(ft_font, 'rules.fea', tables=['GSUB'])
ft_font.save('output.ttf')

fontTools also provides a couple of command line tools called ttx and fonttools. ttx converts a TTF font into an XML file, which was useful to me because I wanted to rename some glyphs in my font but did not understand anything about fonts. So I just converted my font into an XML file, used sed to rename the glyphs, and then used ttx again to convert the XML file back into a ttf.

fonttools merge let me merge my 3 handwriting fonts into 1 so that I had all the glyphs I needed in 1 file.

the code

I put my extremely hacky code for doing this in a repository called font-mixer. It’s like 33 lines of code and I think it’s pretty straightforward. (it’s all in run.sh and combine.py)

the results

Here’s a small sample of the old font and the new font. I don’t think the new font “feels” that much more like handwriting – there’s a little more variation, but it still doesn’t compare to actual handwritten text (at the bottom).

It feels a little uncanny valley to me, like it’s obviously still a font but it’s pretending to be something else.

And here’s a sample of the same text actually written by hand:

It’s possible that the results would be better if I was more careful about how I made the 2 other handwriting fonts I mixed the original font with.

it’s cool that it’s so easy to add opentype rules!

Mostly what was delightful to me here is that it’s so easy to add OpenType rules to change how fonts work, like you can pretty easily make a font where the word “the” is always replaced with “teh” (typos all the time!).

I still don’t know how to make a more realistic handwriting font though :). I’m still using the old one (without the extra variations) and I’m pretty happy with it.


Some CSS comics

Hello! I’ve been writing some comics about CSS this past week, and I thought as an experiment I’d post them to my blog instead of only putting them on Twitter.

I’m going to ramble about CSS at the beginning a bit but you can skip to the end if you just want to read the comics :)

why write about CSS?

I’ve been writing a tiny bit more CSS recently, and I’ve decided to actually take some time to learn CSS instead of just flailing around and deciding “oh no, this is impossible”.

CSS feels a little like systems programming / Linux to me – there are a lot of counterintuitive facts that you need to learn to be effective with it, but I think once you learn those facts it gets a lot easier.

So I’m writing down some facts that I found counterintuitive when learning CSS, like the fact that position: absolute isn’t absolute!

why try to read the specs?

I’ve been having a lot of fun reading through the CSS2 spec and finding out that some things about CSS that I was intimidated by (like selector specificity) aren’t as complicated as I thought.

I think reading (parts of) the CSS specs is fun because I’m so used to learning CSS by reading a lot of websites which sometimes have conflicting information. (MDN is an incredible resource but I don’t think it’s 100% always correct either.)

So it’s fun to read a more authoritative source! Of course, it’s not always true that the CSS specs correspond to reality – browser implementations of the specs are inconsistent.

But especially for parts of CSS that are older & better-established (like the basics of how position: absolute works), I like reading the specs.

how are the CSS specs organized?

CSS used to be defined by a single specification (CSS2), but as of CSS 3 each part of CSS has its own specification. For example, there’s a CSS 3 specification for colours.

Here are the links I’ve been using:

I’ve been kind of alternating between the CSS 2 spec and the CSS 3 specs – because the CSS 2 spec is smaller, I find it easier to digest and understand the big picture of how things are supposed to work without getting lost in a lot of details.

a few comics

Okay, here are the comics! As always when I start working on a set of comics / a potential zine, there’s no specific order or organization.

the box model

Permalink: https://wizardzines.com/comics/box-model

CSS units

Permalink: https://wizardzines.com/comics/units

Reference material: I found this section on lengths from “CSS Values and Units Module Level 3” pretty straightforward.

selectors

Permalink: https://wizardzines.com/comics/selectors

Reference material: section 6.4.1 to 6.4.3 from the CSS 2 spec.

position: absolute

Permalink: https://wizardzines.com/comics/position-absolute

inline vs block

Permalink: https://wizardzines.com/comics/inline-vs-block

One piece of errata for this one: you actually can set the width on an inline element if it’s a “replaced” element.


When your coworker does great work, tell their manager

I’ve been thinking recently about anti-racism and what it looks like to support colleagues from underrepresented groups at work. The other day someone in a Slack group made an offhand comment that they’d sent a message to an engineer’s manager to say that the engineer was doing exceptional work.

I think telling someone’s manager they’re doing great work is a pretty common practice and it can be really helpful, but it’s easy to forget to do and I wish someone had suggested it to me earlier. So let’s talk about it!

I tweeted about this to ask how people approach it and as usual I got a ton of great replies that I’m going to summarize here.

We’re going to talk about what to say, when to do this, and why you should ask first.

ask if it’s ok first

One thing that at least 6 different people brought up was the importance of asking first. It might not be obvious why this is important at first — you’re saying something positive! What’s the problem?

So here are some potential reasons saying something positive to someone’s manager could backfire:

  1. Giving someone a compliment that’s not in line with their current goals. For example, if your coworker is trying to focus on becoming a technical expert in their domain and you’re impressed with their project management skills, they might not want their project management highlighted (or vice versa!).
  2. Giving someone the wrong “level” of compliment. For example, if they’re a very senior engineer and you say something like “PERSON did SIMPLE_ROUTINE_TASK really well!” — that doesn’t reflect well on them and feels condescending. This can happen if you don’t know the person’s position or don’t understand the expectations for their role.
  3. If your coworker was supposed to be focusing on a specific project, and you’re complimenting them for helping with something totally unrelated, their manager might think that they’re not focusing on their “real” work. One person mentioned that they got reprimanded by their manager for getting a spot peer bonus for helping someone on another team.
  4. Some people have terrible managers (for example, maybe the manager will feel threatened by your coworker excelling)
  5. Some people just don’t like being called out in that way, and are happy with the level of recognition they’re getting!

Overall: a lot of people (for very good reasons!) want to have control over the kind of feedback their manager hears about them.

So just ask first! (“hey, I was really impressed with your work on X project and wanted to send this note to $MANAGER to explain how important your work was, because I know she wasn’t that involved in X project and might not have seen everything you did – is that ok with you?”)

when it’s important: to highlight work that isn’t being recognized

Okay, now let’s talk about when this is important to do. I think this is pretty simple – managers don’t always see the work their reports are doing, and if someone is doing really amazing work that their manager isn’t seeing, they won’t get promoted as quickly. So it’s helpful to tell managers about work that they may not be seeing.

Here are some examples of types of important work that might be underrecognized:

Also, everyone agreed that it’s always great to highlight the contributions of more junior coworkers when they’re doing well.

why it matters: it helps managers make a case for promotion

For someone to get promoted, they need evidence that they’ve been doing valuable work, and managers don’t always have the time to put together all that evidence. So it’s important to be proactive!

You can work on this for yourself by writing a brag document, but having statements from coworkers explaining how great your work is really helps build credibility.

So providing these statements for your coworkers can help them get recognized in a timely way for the great work they did (instead of getting promoted a year later or something). It’s extra helpful to do this if you know the person is up for promotion.

how to do it: be specific, explain the impact of their work

Pretty much everyone agreed that it’s helpful to explain what specifically the person did that was awesome (“X did an incredible job of designing this system and we haven’t had any major operational issues with it in the 6 months since it launched, which is really unusual for a project of that scale”).

how to do it: highlight when they’re exceeding expectations

Because the point is to help people get promoted, it’s important to highlight when people are exceeding expectations for their level, for example if they’re not a senior engineer yet but they’re doing the kind of work you’d expect from a senior engineer.

how to do it: send the person the message too

We already basically covered this in “ask if it’s ok first”, but especially if I’m using a feedback system where the person might not get the feedback immediately, I like to send it to them directly as well. It’s nice for them to hear and they can also use it later on!

public recognition can be great too!

A couple of folks mentioned that they like to give public recognition, like mentioning how great a job someone did in a Slack channel or team meeting.

Two reasons public recognition can be good:

  1. It helps build credibility for your colleague
  2. It lets the person you’re recognizing be part of the conversation/reciprocate to the feedback-giver, especially if the work was a collaboration.

Again, it’s good to ask about this before doing this – some people dislike public recognition.

on peer bonuses

A few people who work at Google (or other companies with peer bonuses) mentioned that they prefer to give peer bonuses for this because it’s a more official form of recognition.

Lots of people mentioned other forms of feedback systems that they use instead of email. Use whatever form of recognition is appropriate at your company!

anyone can do this

What I like about this is it’s a way everyone can help their coworkers – even if you’re really new and don’t feel that qualified to comment on how effective someone more senior is at their job, you can still point out things like “this person helped me do a project that was really out of my comfort zone!”

maybe expand the set of people you do this for!

I think it’s very common for people to promote the work of their friends in this way. I’ve tried to expand the set of people I do this for over time – I think it’s important to keep an eye out for coworkers who are really excelling and to make sure their work is recognized.

more reading on sponsorship

I wanted to just talk about this one specific practice of telling someone’s manager they’re doing great work but there are a LOT of other ways you can help lift your coworkers up. Lara Hogan’s post what does sponsorship look like? has a lot of great examples.

Mekka Okereke has a wonderful Twitter thread about another way you can support underrepresented folks: by being a “difficulty anchor”. It’s short and definitely worth a read.

thanks to Sher Minn Chong, Allie Jones, and Kamal Marhubi for reading a draft of this


scanimage: scan from the command line!

Here’s another quick post about a command line tool I was delighted by.

Last night, I needed to scan some documents for some bureaucratic reasons. I’d never used a scanner on Linux before and I was worried it would take hours to figure out. I started by using gscan2pdf and had trouble figuring out the user interface – I wanted to scan both sides of the page at the same time (which I knew our scanner supported) but couldn’t get it to work.

enter scanimage!

scanimage is a command line tool, in the sane-utils Debian package. I think all Linux scanning tools use the sane libraries (“scanner access now easy”) so my guess is that it has similar abilities to any other scanning software. I didn’t need OCR in this case so we’re not going to talk about OCR.

get your scanner’s name with scanimage -L

scanimage -L lists all scanning devices you have.

At first I couldn’t get this to work and I was a bit frustrated but it turned out that I’d connected the scanner to my computer, but not plugged it into the wall. Oops.

Once everything was plugged in it worked right away. Apparently our scanner is called fujitsu:ScanSnap S1500:2314. Hooray!

list options for your scanner with --help

Apparently each scanner has different options (makes sense!) so I ran this command to get the options for my scanner:

scanimage --help -d 'fujitsu:ScanSnap S1500:2314' 

I found out that my scanner supported a --source option (which I could use to enable duplex scanning) and a --resolution option (which I changed to 150 to decrease the file sizes and make scanning faster).

scanimage doesn’t output PDFs (but you can write a tiny script)

The only downside was – I wanted a PDF of my scanned document, and scanimage doesn’t seem to support PDF output.

So I wrote this short shell script to scan a bunch of PNGs into a temp directory and convert the resulting PNGs to a PDF.

set -e

CUR=$(pwd)  # remember the starting directory, so the PDF ends up there
DIR=$(mktemp -d)
cd "$DIR"
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' --source 'ADF Front' --resolution 150
convert *.png "$CUR/$1"

I ran the script like this: scan-single-sided output-file-to-save.pdf

You’ll probably need a different -d and --source for your scanner.

it was so easy!

I always expect using printers/scanners on Linux to be a nightmare and I was really surprised by how scanimage Just Worked – I could just run my script with scan-single-sided receipts.pdf and it would scan a document and save it to receipts.pdf!


Twitter summary from 2020 so far

Hello! I post a lot of things on Twitter and it’s basically impossible for anyone except me to keep up with them, so I thought I’d write a summary of everything I posted on Twitter in 2020 so far.

A lot of these things I eventually end up writing about on the blog, but some of them I don’t, so I figured I’d just put everything in one place.

Where possible, I’ve made the links point to non-Twitter websites.


Let’s start with the comics, since that’s a lot of what I write there.


These are from a debugging zine I’m still trying to finish. (https://wizardzines.com/zines/bugs/)

writing tips

computer science


These are part of a potential sequel to bite size linux



These mostly got published as How Containers Work. As usual the final zine was edited a lot and some of these didn’t make it into the zine at all or I significantly rewrote the version in the zine.


A bunch of work on https://questions.wizardzines.com.


A bunch of earlier work on https://flashcards.wizardzines.com. I came up with a direction for this project I liked better (https://questions.wizardzines.com) and won’t be updating that site further.


At the beginning of the year I did some experiments in making screencasts. It was fun but I haven’t done more so far. These are all links to youtube videos.


I’m not a big Twitter thread person (I’d usually rather write a blog post) but I wrote one thread so far this year about how I think about the zine business:

zine announcements


I know that $12 USD is a lot of money for some people, especially folks in countries like Brazil with a weaker currency relative to the US dollar. So periodically I do giveaways on Twitter so that people who can’t afford $12 can get the zines. I aim to give away 1 copy for every sale.


very occasionally I ask people questions:

that’s all!

I’ve been thinking about trying to do a monthly summary here of what I’m writing on Twitter. We’ll see if that happens!


saturday comics: a weekly mailing list of programming comics

Hello! This post is about a mailing list (Saturday Comics) that I actually started a year ago. I realized I never wrote about it on this blog, which is maybe better anyway because now I know more about how it’s gone over the last year!

I think the main idea in this post is probably – if you want to have a mailing list that’s useful to people, but don’t have the discipline to write new email all the time, consider just making a mailing list of your best past work!

Let’s start by talking about some of the problems I wanted to solve with this mailing list.

problems I wanted to solve

problem 1: not everyone is on Twitter.

I pretty much exclusively post draft zine pages to Twitter, but not everyone is on Twitter all the time. Lots of people aren’t on Twitter at all, for lots of very good reasons! So only posting my progress on my zines to Twitter felt silly.

problem 2: weekly mailing lists felt impossible:

I kept hearing “julia, you need a mailing list, mailing lists are the best”. So I wanted to set up some kind of “mailing list” or something. Okay! I’ve tried to set up a “weekly mailing list” of sorts a few times, and inevitably what happens is:

For obvious reasons, that’s not super effective.

problem 3: it was impossible to find my “best” work:

I have an idea in my head of what my “best” comics are, but there was literally no way for anyone other than me to find that out, even though I know that some of my comics are a lot more useful to people than others.

I also recently added https://wizardzines.com/comics/ as another way to fix this.

send my favourite comics, not the newest comics

Unlike this blog (where people can read my newest work), I decided to use a different model: let people see some of my favourite comics.

The way I thought about this was – if someone isn’t familiar with my work and wants to learn more, they’re more likely to find something interesting to them in my “best” work than just whatever I happen to be working on at the time.

solution: saturday comics, an automated weekly mailing list

So! I came up with “saturday comics”. The idea is pretty simple: you get 1 programming comic in your email every Saturday.

Unlike a normal weekly mailing list, though, you don’t get the “latest” email – instead, there’s a fixed list of emails in the list, and everyone who signs up gets all the emails in the list starting from the beginning.

For example, the first email is called “bash tricks”, and so if someone signs up today, they’ll get the “bash tricks” email on Saturday.
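The mechanics of the sequence are simple enough to sketch in code. Here’s a hypothetical Python sketch of it – “bash tricks” really is the first email, but the other titles and all of the code are made up for illustration:

```python
from datetime import date, timedelta

# The fixed, ordered list of emails. "bash tricks" is the real first one;
# the other titles are made-up placeholders.
SEQUENCE = ["bash tricks", "placeholder comic 2", "placeholder comic 3"]

def email_for(signup_date, today):
    """Which comic does a subscriber get this Saturday?

    Everyone starts from email 0 on the first Saturday after signing up,
    no matter what long-time subscribers are currently receiving.
    """
    # find the first Saturday on or after the signup date (Monday=0, Saturday=5)
    days_until_saturday = (5 - signup_date.weekday()) % 7
    first_saturday = signup_date + timedelta(days=days_until_saturday)
    if today < first_saturday:
        return None  # no email yet
    weeks_elapsed = (today - first_saturday).days // 7
    if weeks_elapsed >= len(SEQUENCE):
        return None  # caught up with the list; waiting for new emails
    return SEQUENCE[weeks_elapsed]
```

So someone who signs up on a Wednesday gets “bash tricks” three days later, and the second email the Saturday after that.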

so far: 29 weeks of email

So far the list has 29 weeks (7 months) of email – if you sign up today, you’ll get a comic every week for at least 29 weeks.

You might notice that 29 is less than 52 and think “wait, you said this list has existed for a year!”. I haven’t quite kept up with 1 email a week so far. What happens in practice is that I’ll add 5 new emails, they’ll get sent out over 5 weeks, then subscribers will stop getting email for a while, and then I’ll add more emails eventually and they’ll start getting email again.

It’s maybe not ideal, but I think it’s okay, and it’s definitely better than my previous mailing list practices of “literally never email the mailing list ever”.

so far: 5000 people have subscribed, and people seem to like it!

5000 people have subscribed to the list so far, and people seem to like it – I pretty often get replies saying “hey, thanks for this week’s comic, I loved this one” or see people tweeting about how they loved this week’s email.

You can sign up here if you want.

how it works: a ConvertKit sequence

The way I implemented it is with a ConvertKit sequence. Here’s an example of what the setup looks like: there’s a list of subject lines & when they’re scheduled to go out (like “1 week after the last email”), and then you can fill in each email’s content. I’ve found it pretty straightforward to use so far.

marketing = building trust

This list is sort of a marketing tool, but I’ve learned to think of marketing (at least for my business) as just building trust by helping people learn new things. So instead of worrying about optimizing conversion rates or whatever (which has never helped me at all), I just try to send emails to the list that will be helpful.

With every comic I include a link to the zine that it’s from in case people want to buy the zine, but I try to not be super in-your-face about it – if folks want to buy my zines, that’s great, if they want to just enjoy the weekly comics, that’s great too.

that’s all!

This idea of a mailing list where you send out your favourite work instead of your latest work was really new to me, and I’m happy with how it’s gone so far!


Tell candidates what to expect from your job interviews

In my last job, I helped with a few projects (like brag documents and the engineering levels) to help make the engineering culture a little more inclusive, and I want to talk about one of them today: making the interview process a little easier to understand for candidates.

I worked on this project for a few days way back in 2015 and I’m pretty happy with how it turned out.

giving everyone a little information helps level the playing field

Different tech companies run their interviews in very different ways, and I think it’s silly to expect candidates to magically intuit how your company’s interview process works.

It sucks for everyone when a candidate is surprised with an unexpected interview. For example, at the time the debugging interview required candidates to have a dev environment set up on their computer that let them install a library & run the tests. Sometimes candidates didn’t have their environment set up the right way, which was a waste of everyone’s time! The point of the interview wasn’t to watch people install bundler!

different companies have different rubrics

Also, different companies actually test different things in their interviews! At that job we didn’t care if people used Stack Overflow during their interviews and didn’t interview for algorithms expertise, but lots of companies do interview for algorithms expertise.

Telling people in advance what they’ll be measured on makes it way easier for them to prepare: if you tell them they won’t be asked algorithms questions, they don’t have to waste their time practicing implementing breadth first search or whatever.

solution: write a short document!

My awesome coworker Kiran had a simple idea to help solve this problem: write a document explaining what to expect from the interview process! She wrote the document and I helped edit it a bit.

We called it On-site interviews for Engineering: What to expect (that link is to an old revision of that document I found in the internet archive).

It covered:

keep it updated over time

That document was originally written in April 2015. A lot of things changed about the interview process over time, and so it needed to be kept updated.

I think the work of keeping the document updated is even more important than writing it in the first place, and a lot of amazing people worked on that. I don’t work there anymore, but some quick Googling turned up what I think is the current version of that document, and it’s great!

documenting your interview process is pretty easy

In my experience, advocating for changes to an interview process is really hard. You need to propose a new interview process, test the interviews, convince interviewers to get on board – it takes a long time.

In comparison, documenting an existing interview process (without changing it!!) is WAY EASIER. My memory is pretty fuzzy, but I think basically nobody objected to documenting the interview process the company already had – it was just factual information about what we were already doing! Way less controversial.

you can make small changes to your company’s culture

Making the companies I work at a better place for everyone to work is important to me. It’s a huge project, and I’ve tried a lot of things that haven’t worked.

But I’ve found it rewarding to work on changes like this that make one small thing a little better for people.

thanks to Kiran Bhattaram for coming up with this idea in the first place and for reviewing a draft of this post, and to @jilljubs for reminding me of it earlier today


entr: rerun your build when files change

This is going to be a pretty quick post – I found out about entr relatively recently and I felt like WHY DID NOBODY TELL ME ABOUT THIS BEFORE?!?! So I’m telling you about it in case you’re in the same boat as I was.

There’s a great explanation of the tool with lots of examples on entr’s website.

The summary is in the headline: entr is a command line tool that lets you run an arbitrary command every time you change any of a set of specified files. You pass it the list of files to watch on stdin, like this:

git ls-files | entr bash my-build-script.sh


find . -name '*.rs' | entr cargo test

or whatever you want really.

quick feedback is amazing

Like possibly every single programmer in the universe, I find it Very Annoying to have to manually rerun my build / tests every time I make a change to my code.

A lot of tools (like hugo and flask) have a built in system to automatically rebuild when you change your files, which is great!

But often I have some hacked together custom build process that I wrote myself (like bash build.sh), and entr lets me have a magical build experience where I get instant feedback on whether my change fixed the weird bug with just one line of bash. Hooray!

restart a server (entr -r)

Okay, but what if you’re running a server, and the server needs to be restarted every time you change a file? entr’s got you – if you pass -r, it’ll kill and restart the command every time a file changes:

git ls-files | entr -r python my-server.py

clear the screen (entr -c)

Another neat flag is -c, which lets you clear the screen before rerunning the command, so that you don’t get distracted/confused by the previous build’s output.

use it with git ls-files

Usually the set of files I want to track is about the same list of files I have in git, so git ls-files is a natural thing to pipe to entr.

I have a project right now where sometimes I have files that I’ve just created that aren’t in git just yet. So what if you want to include untracked files? These git command line arguments will do it (I got them from an email from a reader, thank you!):

git ls-files -cdmo --exclude-standard  | entr your-build-script

Someone emailed me and said they have a git-entr command that runs

git ls-files -cdmo --exclude-standard | entr -d "$@"

which I think is a great idea.

restart every time a new file is added: entr -d

The other problem with this git ls-files thing is that sometimes I add a new file, and of course it’s not in git yet. entr has a nice feature for this – if you pass -d, then if you add a new file in any of the directories entr is tracking, then it’ll exit.

I’m using this paired with a little while loop that will restart entr to include the new files, like this:

while true; do
    { git ls-files; git ls-files . --exclude-standard --others; } | entr -d your-build-script
done

how entr works on Linux: inotify

On Linux, entr works using inotify (a system for tracking filesystem events like file changes) – if you strace it, you’ll see an inotify_add_watch system call for each file you ask it to watch, like this:

inotify_add_watch(3, "static/stylesheets/screen.css", IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE_SELF|IN_MOVE_SELF) = 1152

that’s all!

I hope this helps a few people learn about entr!


A little bit of plain Javascript can do a lot

I’ve never worked as a professional frontend developer, so even though I’ve been writing HTML/CSS/JS for 15 years for little side projects, all of the projects have been pretty small, sometimes I don’t write any Javascript for years in between, and I often don’t quite feel like I know what I’m doing.

Partly because of that, I’ve leaned on libraries a lot! Ten years ago I used to use jQuery, and since maybe 2017 I’ve been using a lot of vue.js for my little Javascript projects (you can see a little whack-a-mole game I made here as an intro to Vue).

But last week, for the first time in a while, I wrote some plain Javascript without a library and it was fun so I wanted to talk about it a bit!

experimenting with just plain Javascript

I really like Vue. But last week when I started building https://questions.wizardzines.com, I had slightly different constraints than usual – I wanted to use the same HTML to generate both a PDF (with Prince) and to make an interactive version of the questions.

I couldn’t really see how that would work with Vue (because Vue wants to create all the HTML itself), and because it was a small project I decided to try writing it in plain Javascript with no libraries – just write some HTML/CSS and add a single <script src="js/script.js"> </script>.

I hadn’t done this in a while, and I learned a few things along the way that made it easier than I thought it would be when I started.

do almost everything by adding & removing CSS classes

I decided to implement almost all of the UI by just adding & removing CSS classes, and using CSS transitions if I want to animate a transition.

here’s a small example, where clicking the “next” question button adds the “done” class to the parent div.

div.querySelector('.next-question').onclick = function () {
    div.classList.add('done');
};

This worked pretty well. My CSS as always is a bit of a mess but it felt manageable.

add/remove CSS classes with .classList

I started out by editing the classes like this: x.className = 'new list of classes'. That felt a bit messy though and I wondered if there was a better way. And there was!

You can also add CSS classes like this:

let x = document.querySelector('div');
x.classList.add('hi');
x.classList.remove('hi');

x.classList.remove('hi') is way cleaner than what I was doing before.

find elements with document.querySelectorAll

When I started learning jQuery I remember thinking that if you wanted to easily find something in the DOM you had to use jQuery (like $('.class')). I just learned this week that you can actually write document.querySelectorAll('.some-class') instead, and then you don’t need to depend on any library!

I got curious about when querySelectorAll was introduced. I Googled a tiny bit and it looks like the Selectors API was built sometime between 2008 and 2013 – I found a post from the jQuery author discussing the proposed implementation in 2008, and a blog post from 2011 saying it was in all major browsers by then, so maybe it didn’t exist when I started using jQuery but it’s definitely been around for quite a while :)

set .innerHTML

In one place I wanted to change a button’s HTML contents. Creating DOM elements with document.createElement is pretty annoying, so I tried to do that as little as possible and instead set .innerHTML to the HTML string I wanted:

    button.innerHTML = `<i class="icon-lightbulb"></i>I learned something!
    <object data="/confetti.svg" width="30" height="30"> </object>`;

scroll through the page with .scrollIntoView

The last fun thing I learned about is .scrollIntoView – I wanted to scroll down to the next question automatically when someone clicked “next question”. Turns out this is just one line of code:

row.scrollIntoView({behavior: 'smooth', block: 'center'});

another vanilla JS example: peekobot

Another small example of a plain JS library I thought was nice is peekobot, which is a little chatbot interface that’s 100 lines of JS/CSS.

Looking at its Javascript, it uses some similar patterns – a lot of .classList.add, some adding elements to the DOM, some .querySelectorAll.

I learned from reading peekobot’s source about .closest which finds the closest ancestor that matches a given selector. That seems like it would be a nice way to get rid of some of the .parentElement.parentElement that I was writing in my Javascript, which felt a bit fragile.

plain Javascript can do a lot!

I was pretty surprised by how much I could get done with just plain JS. I ended up writing about 50 lines of JS to do everything I wanted to do, plus a bit extra to collect some anonymous metrics about what folks were learning.

As usual with my frontend posts, this isn’t meant to be Serious Frontend Engineering Advice – my goal is to be able to write little websites with less than 200 lines of Javascript that mostly work. If you are also flailing around in frontend land I hope this helps a bit!


What happens when you update your DNS?

I’ve seen a lot of people get confused about updating their site’s DNS records to change the IP address. Why is it slow? Do you really have to wait 2 days for everything to update? Why do some people see the new IP and some people see the old IP? What’s happening?

So I wanted to write a quick exploration of what’s happening behind the scenes when you update a DNS record.

how DNS works: recursive vs authoritative DNS servers

First, we need to explain a little bit about DNS. There are 2 kinds of DNS servers: authoritative and recursive.

authoritative DNS servers (also known as nameservers) have a database of IP addresses for each domain they’re responsible for. For example, right now an authoritative DNS server for github.com is ns-421.awsdns-52.com. You can ask it for github.com’s IP like this:

dig @ns-421.awsdns-52.com github.com

recursive DNS servers, by themselves, don’t know anything about who owns what IP address. They figure out the IP address for a domain by asking the right authoritative DNS servers, and then cache that IP address in case they’re asked again. Cloudflare’s is one example of a recursive DNS server.

When people visit your website, they’re probably making their DNS queries to a recursive DNS server. So, how do recursive DNS servers work? Let’s see!

how does a recursive DNS server query for github.com?

Let’s go through an example of what a recursive DNS server (like does when you ask it for an IP address (A record) for github.com. First – if it already has something cached, it’ll give you what it has cached. But what if all of its caches are expired? Here’s what happens:

step 1: it has IP addresses for the root DNS servers hardcoded in its source code. You can see this in unbound’s source code here. Let’s say it picks (one of the root servers) to start with. Here’s the official source for those hardcoded IP addresses, also known as a “root hints file”.

step 2: Ask the root nameservers about github.com.

We can roughly reproduce what happens with dig. What this gives us is a new authoritative nameserver to ask: a nameserver for .com, with the IP

$ dig @ github.com
com.			172800	IN	NS	a.gtld-servers.net.
a.gtld-servers.net.	172800	IN	A

The details of the DNS response are a little more complicated than that – in this case, there’s an authority section with some NS records and an additional section with A records so you don’t need to do an extra lookup to get the IP addresses of those nameservers.

(in practice, 99.99% of the time it’ll already have the address of the .com nameservers cached, but we’re pretending we’re really starting from scratch)

step 3: Ask the .com nameservers about github.com.

$ dig @ github.com
github.com.		172800	IN	NS	ns-421.awsdns-52.com.
ns-421.awsdns-52.com.	172800	IN	A	205.251.193.165

We have a new IP address to ask! This one is the nameserver for github.com.

step 4: Ask the github.com nameservers about github.com.

We’re almost done!

$ dig @205.251.193.165 github.com

github.com.		60	IN	A

Hooray!! We have an A record for github.com! Now the recursive nameserver has github.com’s IP address and can return it back to you. And it could do all of this by only hardcoding a few IP addresses: the addresses of the root nameservers.
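Here’s a toy Python sketch of that referral-following logic, using the nameserver names from the example above but a made-up placeholder IP from the documentation range. (A real resolver discovers all of this from NS and A records in DNS responses, not from hardcoded tables.)

```python
# Toy delegation data modeled on the example above. The IP is
# a placeholder from the documentation range, not github.com's real address.
DELEGATIONS = {
    "root": {"com": "a.gtld-servers.net"},
    "a.gtld-servers.net": {"github.com": "ns-421.awsdns-52.com"},
}
A_RECORDS = {
    "ns-421.awsdns-52.com": {"github.com": ""},
}

def resolve(name, server="root", trace=None):
    """Follow referrals down from the root until a server answers with an A record."""
    trace = [] if trace is None else trace
    trace.append(server)
    # does this server have the answer itself?
    if name in A_RECORDS.get(server, {}):
        return A_RECORDS[server][name], trace
    # otherwise, find a delegation ("referral") covering the name and recurse
    for suffix, next_server in DELEGATIONS.get(server, {}).items():
        if name == suffix or name.endswith("." + suffix):
            return resolve(name, next_server, trace)
    raise LookupError("no server knows about " + name)
```

resolve("github.com") walks root → .com servers → AWS, the same chain as the four steps above (minus the caching that usually short-circuits the early steps).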

how to see all of a recursive DNS server’s steps: dig +trace

When I want to see what a recursive DNS server would do when resolving a domain, I run

$ dig @ +trace github.com

This shows all the DNS records that it requests, starting at the root DNS servers – all the 4 steps that we just went through.

let’s update some DNS records!

Now that we know the basics of how DNS works, let’s update some DNS records and see what happens.

When you update your DNS records, there are two main options:

  1. keep the same nameservers
  2. change nameservers

let’s talk about TTLs

We’ve forgotten something important though! TTLs! You know how we said earlier that the recursive DNS server will cache records until they expire? The way it decides whether the record should expire is by looking at its TTL or “time to live”.

In this example, the TTL on the A record that github’s nameserver returns is 60, which means 60 seconds:

$ dig @ github.com

github.com.		60	IN	A

That’s a pretty short TTL, and in theory, if everybody’s DNS implementation followed the DNS standard, it means that if Github decided to change the IP address for github.com, everyone should get the new IP address within 60 seconds. Let’s see how that plays out in practice.

option 1: update a DNS record on the same nameservers

First, I updated my nameservers (Cloudflare) to have a new DNS record: an A record mapping test.jvns.ca to a new IP address.

$ dig @ test.jvns.ca
test.jvns.ca.		299	IN	A

This worked immediately! There was no need to wait at all, because there was no test.jvns.ca DNS record before that could have been cached. Great. But it looks like the new record is cached for ~5 minutes (299 seconds).

So, what if we try to change that IP? I changed the record to point at a different IP address, and then ran the same DNS query.

$ dig @ test.jvns.ca
test.jvns.ca.		144	IN	A

Hmm, it seems like that DNS server has the record still cached for another 144 seconds. Interestingly, if I query multiple times I actually get inconsistent results – sometimes it’ll give me the new IP and sometimes the old IP, I guess because actually load balances to a bunch of different backends which each have their own cache.

After I waited 5 minutes, all of the caches had updated and were always returning the new record. Awesome. That was pretty fast!
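The cache-until-the-TTL-expires behaviour driving all of this can be sketched as a tiny TTL cache. This is a hypothetical model (with made-up documentation-range IPs in the test below); `lookup_upstream` stands in for actually querying the authoritative nameservers:

```python
import time

class TtlCache:
    """A sketch of how a recursive resolver's cache behaves: every answer
    is stored with an expiry time, and expired answers get fetched again.

    lookup_upstream(name) should return (ip, ttl_in_seconds)."""

    def __init__(self, lookup_upstream, clock=time.time):
        self.lookup_upstream = lookup_upstream
        self.clock = clock  # injectable, so tests can fake the passage of time
        self.cache = {}     # name -> (ip, expiry_timestamp)

    def get(self, name):
        now = self.clock()
        if name in self.cache:
            ip, expires = self.cache[name]
            if now < expires:
                return ip  # still fresh: serve the cached (possibly stale!) answer
        # cache miss or expired: ask upstream and cache the answer until now + TTL
        ip, ttl = self.lookup_upstream(name)
        self.cache[name] = (ip, now + ttl)
        return ip
```

A big public resolver runs lots of these caches behind a single address, each with its own expiry times – which is why querying repeatedly mid-change can flip between the old and new IP.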

you can’t always rely on the TTL

As with most internet protocols, not everything obeys the DNS specification. Some ISP DNS servers will cache records for longer than the TTL specifies, like maybe for 2 days instead of 5 minutes. And people can always hardcode the old IP address in their /etc/hosts.

What I’d expect to happen in practice when updating a DNS record with a 5 minute TTL is that a large percentage of clients will move over to the new IPs quickly (like within 15 minutes), and then there will be a bunch of stragglers that slowly update over the next few days.

option 2: updating your nameservers

So we’ve seen that when you update an IP address without changing your nameservers, a lot of DNS servers will pick up the new IP pretty quickly. Great. But what happens if you change your nameservers? Let’s try it!

I didn’t want to update the nameservers for my blog, so instead I went with a different domain I own and use in the examples for the HTTP zine: examplecat.com.

Previously, my nameservers were set to dns1.p01.nsone.net. I decided to switch them over to Google’s nameservers – ns-cloud-b1.googledomains.com etc.

When I made the change, my domain registrar somewhat ominously popped up the message – “Changes to examplecat.com saved. They’ll take effect within the next 48 hours”. Then I set up a new A record on the new nameservers, to make the domain point at a new IP address.

Okay, let’s see if that did anything:

$ dig @ examplecat.com
examplecat.com.		17	IN	A

No change. If I ask a different DNS server, it knows the new IP:

$ dig @ examplecat.com
examplecat.com.		299	IN	A

but is still clueless. The reason sees the new IP even though I just changed it 5 minutes ago is presumably that nobody had ever queried about examplecat.com before, so it had nothing in its cache.

nameserver TTLs are much longer

The reason that my registrar was saying “THIS WILL TAKE 48 HOURS” is that the TTLs on NS records (which are how recursive nameservers know which nameserver to ask) are MUCH longer!

The new nameserver is definitely returning the new IP address for examplecat.com

$ dig @ns-cloud-b1.googledomains.com examplecat.com
examplecat.com.		300	IN	A

But remember what happened when we queried for the github.com nameservers, way back?

$ dig @ github.com
github.com.		172800	IN	NS	ns-421.awsdns-52.com.
ns-421.awsdns-52.com.	172800	IN	A	205.251.193.165

172800 seconds is 48 hours! So nameserver updates will in general take a lot longer to expire from caches and propagate than just updating an IP address without changing your nameserver.

how do your nameservers get updated?

When I update the nameservers for examplecat.com, what happens is that the .com nameserver gets a new NS record pointing at the new nameservers. Like this:

dig ns @j.gtld-servers.net examplecat.com

examplecat.com.		172800	IN	NS	ns-cloud-b1.googledomains.com

But how does that new NS record get there? What happens is that I tell my domain registrar what I want the new nameservers to be by updating it on the website, and then my domain registrar tells the .com nameservers to make the update.

For .com, these updates happen pretty fast (within a few minutes), but I think for some other TLDs the TLD nameservers might not apply updates as quickly.

your program’s DNS resolver library might also cache DNS records

One more reason TTLs might not be respected in practice: many programs need to resolve DNS names, and some programs will also cache DNS records indefinitely in memory (until the program is restarted).

For example, AWS has an article on Setting the JVM TTL for DNS Name Lookups. I haven’t written that much JVM code that does DNS lookups myself, but from a little Googling about the JVM and DNS it seems like you can configure the JVM so that it caches every DNS lookup indefinitely. (like this elasticsearch issue)

that’s all!

I hope this helps you understand what’s going on when updating your DNS!

As a disclaimer, again – TTLs definitely don’t tell the whole story about DNS propagation – some recursive DNS servers definitely don’t respect TTLs, even if the major ones like do. So even if you’re just updating an A record with a short TTL, it’s very possible that in practice you’ll still get some requests to the old IP for a day or two.

Also, I changed the nameservers for examplecat.com back to their old values after publishing this post.


Questions to help people decide what to learn

For the last few months, I’ve been working on and off on a way to help people evaluate their own learning & figure out what to learn next.

This past week I built a new iteration of this: https://questions.wizardzines.com, which today has 2 sets of questions:

  1. questions about UDP
  2. questions about sockets

It’s still a work in progress, but I’ve been working on this for quite a while so I wanted to write down how I got here.

the goal: help people learn on their own

First, let’s talk about my goal. I’m interested in helping people who are trying to learn on their own. I don’t have any specific materials I’m trying to teach – I want to help people learn what they want to learn.

I’ve done a lot of this by writing blog posts & zines, but I felt like I was missing something – were people really learning what they wanted to learn? How could they tell if they’d learned it?

I felt like I wanted some kind of “quiz” or “test”, but I wasn’t sure what it should look like.

formative assessment vs summative assessment

Let’s take a very quick detour into terminology. There are two kinds of assessment teachers use in school.

formative assessment: “evaluations used to modify teaching and learning activities to improve student attainment.”

summative assessment: used to determine grades

Grades are pretty pointless if you’re teaching yourself (who cares if you got an A in sockets?). But formative assessments! If you could take some kind of evaluation to help you decide what exactly you should teach yourself next! That seems more useful. So I got interested in building some kind of “formative assessment” tool.

(thanks to Sumana for reminding me of these terms!)

next step: ask on Twitter how people feel about quizzes

So I asked on Twitter (in this thread):

have you ever taken a class (online or offline!) where you were given a quiz first that you could use to check your understanding of the topic at the start? did it help you?

I got about 90 replies. Here are some themes I took away from the replies:

One thing I learned from this is that being told you don’t know something is a bad experience for a lot of people.

idea: build flashcards you can learn from

My first idea was to reframe a test as a way to learn. So instead of it being something that tells you what you don’t know (which, so what?), it helps you learn something new!

So I built a few sets of flashcards about various topics. Here’s the first set I built, flashcards on containers, if you want to try it out.

If you didn’t try it – it looks like this:

Basically – there are 14ish questions, you click the card to see the answer, and for each card you categorize it as “I knew that!”, “I learned something”, or “that’s confusing” (which is meant to be a kind of “other” category, where you didn’t know that and you didn’t learn anything).

The idea is that the answers contain enough information that you could actually learn a little bit from them, and hopefully be inspired to go learn more on your own if you’re interested.

good things about the flashcards

Some of the positive feedback I got about the flashcards was:

problems with the flashcards

But there were some problems that were bothering me, too.

people dislike questions that don’t match their mental model

Probably the most important thing I learned from making these flashcards is that it really matters how well the question matches the reader’s mental model.

I started out by writing questions by taking statements I’d normally make about a topic, and turning them into questions. Sometimes this really didn’t work.

Here’s an example of it not working: I think the statement “an HTTP request has 4 parts: a body, the headers, the request method, and the path being requested” is relatively unobjectionable. That’s how I think about what an HTTP request is.

But what if I ask you “what are the 4 parts of an HTTP request?” and the answer is “a body, the headers, the request method, and the URL being requested”? It turns out, that’s totally different!! Not everyone thinks about HTTP requests as having 4 parts – they might think of it as having 3 parts (the first line, the headers, and the body). Or 2 parts and 1 optional part (the first line, and the headers, and maybe an optional body). Or some other way! So it’s weird to be asked “what are the 4 parts of an HTTP request”.
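To make that “3 parts” framing concrete, here’s a tiny Python sketch (just an illustration, not any real HTTP library) that splits a raw request into a first line, headers, and an optional body:

```python
# Split a raw HTTP/1.1 request into "3 parts": first line, headers, body.
raw = (
    "POST /submit HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Length: 5\r\n"
    "\r\n"
    "hello"
)

# The blank line (\r\n\r\n) separates the head from the optional body.
head, _, body = raw.partition("\r\n\r\n")
first_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(first_line)       # POST /submit HTTP/1.1
print(headers["Host"])  # example.com
print(body)             # hello
```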

There were a lot of other examples like this, where people reacted badly to some question I asked that didn’t match up with how they think about a topic. So I learned that if I’m asking a question, it gets held to a higher standard for how well it matches with the reader’s mental model than when making the same statement.

An example of what I think would be a better question here is “Does every HTTP request have headers?” (yes! the HTTP/1.1 RFC requires that the Host header be set!). But even that is maybe a little tricky – probably at least one HTTP/1.0 client implementation is out there in the world sending requests without headers, even though 99.99% of HTTP requests have headers.

Of course, it’s ok if the question/answer doesn’t match the reader’s mental model if their mental model is incorrect, but if their model is correct then I think it should match.

get rid of multiple choice

The other thing I learned from these flashcards is that a lot of people dislike multiple choice. I haven’t thought about this that much, but honestly I don’t really like multiple choice either so I decided to get rid of it.

next step: get reminded of The Little Schemer

I don’t remember why, but I’ve had The Little Schemer kicking around in my head for a while. I haven’t actually read the whole thing myself, but I kept hearing people talking about it. Here’s the first page of The Little Schemer, if you haven’t heard of it:

This reminded me a lot of what I was trying to do – there are questions and answers, but the goal isn’t for you to get all the questions “right”. Instead, I think the goal is for you to think about whether you know the answer yet or not and learn as you go.

switch to a side-by-side format

So, I kept a similar question/answer format, but switched to a side-by-side format, like the Little Schemer.

What I like about putting the questions & answers next to each other:

Basically I like that it gives the reader more control, which I think is important.

call it “questions” instead of “flashcards”

I also renamed the project to “questions” because that’s really how I think about learning for myself – I don’t do “flashcards”, but I do constantly ask myself questions about topics I don’t understand, figure out the answers to those questions, and then repeat until I understand the topic as well as I want to.

But coming up with the right questions on your own is hard when you don’t know a lot yet, so I’m hopeful that providing folks with a bunch of questions (and answers) to think about will help you decide what you want to learn next.

keep the “I learned something” button

When I released the first set of questions on UDP, I didn’t include an “I learned something” button, and I noticed something weird – a lot of people were tweeting things like “I got 8/10”, “I got 10/10”.

I was a bit worried about this because the whole idea was to help people identify things they could learn, so saying “I got 8/10” felt like it was focusing on the things you already knew and ignoring the most important thing – the 2 questions where maybe you could learn something new!

So I added an “I learned something!” button back to each question and spent way too much time building a fun SVG+CSS animation that played when you pressed the button. And so far it seems to have worked – I see more people commenting “I learned something” and fewer “I got 9/10”.

building small things is hard

As usual, building small simple things takes more time than I’d expect! The concept of “some questions and answers” seems really simple, but I’ve already learned a lot by building this and I think I still have a lot more to learn about this format.

But I’m excited to learn more, and I’d love to know your thoughts. Here it is again if you’d like to try it: https://questions.wizardzines.com.


Metaphors in man pages

This morning I was watching a great talk by Maggie Appleton about metaphors. In the talk, she explains the difference between a “figurative metaphor” and a “cognitive metaphor”, and references this super interesting book called Metaphors We Live By which I immediately got and started reading.

Here’s an example from “Metaphors We Live By” of a bunch of metaphors we use for ideas:

There’s a long list of more English metaphors here, including many metaphors from the book.

I was surprised that there were so many different metaphors for ideas, and that we’re using metaphors like this all the time in normal language.

let’s look for metaphors in man pages!

Okay, let’s get to the point of this blog post, which is just a small fun exploration – there aren’t going to be any Deep Programming Insights here.

I went through some of the examples of metaphors in Metaphors We Live By and grepped all the man pages on my computer for them.

processes as people

This is one of the richer categories – a lot of different man pages seem to agree that processes are people, or at least alive in some way.

data as food

data as objects

processes as machines/objects


There are LOTS of containers: directories, files, strings, caches, queues, buffers, etc.


There are also lots of kinds of resources: bandwidth, TCP sockets, session IDs, stack space, memory, disk space.

orientation (up, down, above, below)


Limits as rooms/buildings (which have floors, and ceilings, which you hit) are kind of fun:

money / wealth

more miscellaneous metaphors

Here are some more I found that didn’t fit into any of those categories yet.

we’re all using metaphors all the time

I found a lot more metaphors than I expected, and most of them are just part of how I’d normally talk about a program. Interesting!


Why strace doesn't work in Docker

While editing the capabilities page of the how containers work zine, I found myself trying to explain why strace doesn’t work in a Docker container.

The problem here is – if I run strace in a Docker container on my laptop, this happens:

$ docker run  -it ubuntu:18.04 /bin/bash
$ # ... install strace ...
root@e27f594da870:/# strace ls
strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted

strace works using the ptrace system call, so if ptrace isn’t allowed, it’s definitely not gonna work! This is pretty easy to fix – on my machine, this fixes it:

docker run --cap-add=SYS_PTRACE  -it ubuntu:18.04 /bin/bash

But I wasn’t interested in fixing it, I wanted to know why it happens. So why does strace not work, and why does --cap-add=SYS_PTRACE fix it?

hypothesis 1: container processes are missing the CAP_SYS_PTRACE capability

I always thought the reason was that Docker container processes by default didn’t have the CAP_SYS_PTRACE capability. This is consistent with it being fixed by --cap-add=SYS_PTRACE, right?

But this actually doesn’t make sense for 2 reasons.

Reason 1: Experimentally, as a regular user, I can strace on any process run by my user. But if I check if my current process has the CAP_SYS_PTRACE capability, I don’t:

$ getpcaps $$
Capabilities for `11589': =

Reason 2: man capabilities says this about CAP_SYS_PTRACE:

       * Trace arbitrary processes using ptrace(2);

So the point of CAP_SYS_PTRACE is to let you ptrace arbitrary processes owned by any user, the way that root usually can. You shouldn’t need it to just ptrace a regular process owned by your user.

And I tested this a third way – I ran a Docker container with docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash, dropped the CAP_SYS_PTRACE capability, and I could still strace processes even though I didn’t have that capability anymore. What? Why?

hypothesis 2: something about user namespaces???

My next (much less well-founded) hypothesis was something along the lines of “um, maybe the process is in a different user namespace and strace doesn’t work because of… reasons?” This isn’t really coherent but here’s what happened when I looked into it.

Is the container process in a different user namespace? Well, in the container:

root@e27f594da870:/# ls /proc/$$/ns/user -l
... /proc/1/ns/user -> 'user:[4026531837]'

On the host:

bork@kiwi:~$ ls /proc/$$/ns/user -l
... /proc/12177/ns/user -> 'user:[4026531837]'

Because the user namespace ID (4026531837) is the same, the root user in the container is the exact same user as the root user on the host. So there’s definitely no reason it shouldn’t be able to strace processes that it created!
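Here’s the same check as a small Python sketch (Linux-only): a process’s user namespace shows up as a symlink under /proc, and two processes are in the same user namespace exactly when these namespace IDs match.

```python
import os

# Each namespace a process belongs to appears as a symlink like
# 'user:[4026531837]' under /proc/<pid>/ns/. Comparing these strings
# for two processes tells you whether they share a user namespace.
my_userns = os.readlink("/proc/self/ns/user")
print(my_userns)
```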

This hypothesis doesn’t make much sense but I hadn’t realized that the root user in a Docker container is the same as the root user on the host, so I thought that was interesting.

hypothesis 3: the ptrace system call is being blocked by a seccomp-bpf rule

I also knew that Docker uses seccomp-bpf to stop container processes from running a lot of system calls. And ptrace is in the list of system calls blocked by Docker’s default seccomp profile! (actually the list of allowed system calls is a whitelist, so it’s just that ptrace is not in the default whitelist. But it comes out to the same thing.)

That easily explains why strace wouldn’t work in a Docker container – if the ptrace system call is totally blocked, then of course you can’t call it at all and strace would fail.

Let’s verify this hypothesis – if we disable all seccomp rules, can we strace in a Docker container?

$ docker run --security-opt seccomp=unconfined -it ubuntu:18.04  /bin/bash
$ strace ls
execve("/bin/ls", ["ls"], 0x7ffc69a65580 /* 8 vars */) = 0
... it works fine ...

Yes! It works! Great. Mystery solved, except…

why does --cap-add=SYS_PTRACE fix the problem?

What we still haven’t explained is: why does --cap-add=SYS_PTRACE fix the problem?

The man page for docker run explains the --cap-add argument this way:

   Add Linux capabilities

That doesn’t have anything to do with seccomp rules! What’s going on?

let’s look at the Docker source code.

When the documentation doesn’t help, the only thing to do is go look at the source.

The nice thing about Go is, because dependencies are often vendored in a Go repository, you can just grep the repository to figure out where the code that does a thing is. So I cloned github.com/moby/moby and grepped for some things, like rg CAP_SYS_PTRACE.

Here’s what I think is going on. In containerd’s seccomp implementation, in contrib/seccomp/seccomp_default.go, there’s a bunch of code that makes sure that if a process has a capability, then it’s also given access (through a seccomp rule) to use the system calls that go with that capability.

		case "CAP_SYS_PTRACE":
			s.Syscalls = append(s.Syscalls, specs.LinuxSyscall{
				Names: []string{
					"kcmp",
					"process_vm_readv",
					"process_vm_writev",
					"ptrace",
				},
				Action: specs.ActAllow,
				Args:   []specs.LinuxSeccompArg{},
			})
There’s some other code that seems to do something very similar in profiles/seccomp/seccomp.go in moby and the default seccomp profile, so it’s possible that that’s what’s doing it instead.

So I think we have our answer!

--cap-add in Docker does a little more than what it says

The upshot seems to be that --cap-add doesn’t do exactly what it says it does in the man page, it’s more like --cap-add-and-also-whitelist-some-extra-system-calls-if-required. Which makes sense! If you have a capability like CAP_SYS_PTRACE which is supposed to let you use the process_vm_readv system call but that system call is blocked by a seccomp profile, that’s not going to help you much!

So allowing the process_vm_readv and ptrace system calls when you give the container CAP_SYS_PTRACE seems like a reasonable choice.
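Here’s a toy Python model of that idea (the mapping below is illustrative, based on this post – it’s not Docker’s actual data structure): granting a capability also adds the matching system calls to the seccomp whitelist.

```python
# Toy model: which extra syscalls each capability unlocks. The real
# mapping lives in Docker/containerd's seccomp profile code; this dict
# is just an illustration based on the CAP_SYS_PTRACE case.
EXTRA_SYSCALLS_FOR_CAP = {
    "CAP_SYS_PTRACE": ["ptrace", "process_vm_readv", "process_vm_writev"],
}

def seccomp_whitelist(base, caps):
    """Start from a base whitelist, then add the syscalls that go
    with each granted capability."""
    allowed = set(base)
    for cap in caps:
        allowed.update(EXTRA_SYSCALLS_FOR_CAP.get(cap, []))
    return allowed

allowed = seccomp_whitelist({"read", "write", "execve"}, ["CAP_SYS_PTRACE"])
print("ptrace" in allowed)  # True
```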

strace actually does work in newer versions of Docker

As of this commit (docker 19.03), Docker does actually allow the ptrace system calls for kernel versions newer than 4.8.

But the Docker version on my laptop is 18.09.7, so it predates that commit.

that’s all!

This was a fun small thing to investigate, and I think it’s a nice example of how containers are made of lots of moving pieces that work together in not-completely-obvious ways.

If you liked this, you might like my new zine called How Containers Work that explains the Linux kernel features that make containers work in 24 pages. You can read the pages on capabilities and seccomp-bpf from the zine.


New zine: How Containers Work!

On Friday I published a new zine: “How Containers Work!”. I also launched a fun redesign of wizardzines.com.

You can get it for $12 at https://wizardzines.com/zines/containers. If you buy it, you’ll get a PDF that you can either print out or read on your computer. Or you can get a pack of all 8 zines so far.

Here’s the cover and table of contents:

why containers?

I’ve spent a lot of time figuring out how to run things in containers over the last 3-4 years. And at the beginning I was really confused! I knew a bunch of things about Linux, and containers didn’t seem to fit in with anything I thought I knew (“is it a process? what’s a network namespace? what’s happening?“). The whole thing seemed really weird.

It turns out that containers ARE actually pretty weird. They’re not just one thing, they’re what you get when you glue together 6 different features that were mostly designed to work together but have a bunch of confusing edge cases.

As usual, the thing that helped me the most in my container adventures is a good understanding of the fundamentals – what exactly is actually happening on my server when I run a container?

So that’s what this zine is about – cgroups, namespaces, pivot_root, seccomp-bpf, and all the other Linux kernel features that make containers work.

Once I understood those ideas, it got a lot easier to debug when my containers were doing surprising things in production. I learned a couple of interesting and strange things about containers while writing this zine too – I’ll probably write a blog post about one of them later this week.

containers aren’t magic

This picture (page 6 of the zine) shows you how to run a fish container image with only 15 lines of bash. This is heavily inspired by bocker, which “implements” Docker in about 100 lines of bash.

The main things I see missing from that script compared to what Docker actually does when running a container (other than using an actual container image and not just a tarball) are:

container command line tools

The zine also goes over a bunch of command line tools & files that you can use to inspect running containers or play with Linux container features. Here’s a list:

I also made a short youtube video a while back called ways to spy on a Docker container that demos some of these command line tools.

container runtime agnostic

I tried to keep this zine pretty container-runtime-agnostic – I mention Docker a couple of times because it’s so widely used, but it’s about the Linux kernel features that make containers work in general, not Docker or LXC or systemd-nspawn or Kubernetes or whatever. If you understand the fundamentals you can figure all those things out!

we redesigned wizardzines.com!

On Friday I also launched a redesign of wizardzines.com! Melody Starling (who is amazing) did the design. I think now it’s better organized but the tiny touch that I’m most delighted by is that now the zines jump with joy when you hover over them.

One cool thing about working with a designer is – they don’t just make things look better, they help organize the information better so the website makes more sense and it’s easier to find things! This is probably obvious to anyone who knows anything about design but I haven’t worked with designers very much (or maybe ever?) so it was really cool to see.

One tiny example of this: Melody had the idea of adding a tiny FAQ on the landing page for each zine, where I can put the answers to all the questions people always ask! Here’s what the little FAQ box looks like:

I probably want to edit those questions & answers over time but it’s SO NICE to have somewhere to put them.

what’s next: maybe debugging! or working more on flashcards!

The two projects I’m thinking about the most right now are

  1. a zine about debugging, which I started last summer and haven’t gotten around to finishing yet
  2. a flashcards project that I’ve been adding to slowly over the last couple of months, which I think could become a nice way to explain basic ideas.

Here’s a link to where to get the zine again :)


When debugging, your attitude matters

A while back I wrote What does debugging a program look like? on what to do when debugging (change one thing at a time! check your assumptions!).

But I was debugging some CSS last week, and I think that post is missing something important: your attitude.

Now – I’m not a very good CSS developer yet. I’ve never written CSS professionally and I don’t understand a lot of basic CSS concepts (I think I finally understood for the first time recently how position: absolute works). And last week I was working on the most complicated CSS project I’d ever attempted.

While I was debugging my CSS, I noticed myself doing some bad things that I normally would not! I was:

This strategy was exactly as effective as you might imagine (not very effective!), and it was because of my attitude about CSS! I had this unusual-for-me belief that CSS was Too Hard and impossible for me to understand. So let’s talk about that attitude a bit!

the problem attitude: “this is too hard for me to understand”

One specific problem I was having was – I had 2 divs stacked on top of one another, and I wanted Div A to be on top of Div B. My model of CSS stacking order at the start of this was basically “if you want Thing A to be on top of Thing B, change the z-index to make it work”. So I changed the z-index of Div A to be 5 or something.

But it didn’t work! In Firefox, div A was on top, but in Chrome, Div B was on top. Argh! Why? CSS is impossible!!! (if you want to see the exact actual situation I was in, I reproduced the different-in-firefox-and-chrome thing here after the fact)

I googled a bit, and I found out that a possible reason z-index might not work was because Div A and Div B were actually in different “stacking contexts”. If that was true, even if I set the z-index of Div A to 999999 it would still not put it on top of Div B. (here’s a small example of what this z-index problem looks like, though I think my specific bug had some extra complications)

I thought “man, this stacking context thing seems really complicated, why is it different between Firefox and Chrome, I’m not going to be able to figure this out”. So I tried a bunch of random things a bunch of blog posts suggested, which as usual did not work.

Finally I gave up this “change random things and pray” strategy and thought “well, what if I just read the documentation on stacking order, maybe it’s not that bad”.

So I read the MDN page on stacking order, which says:

When the z-index property is not specified on any element, elements are stacked in the following order (from bottom to top):
1. The background and borders of the root element
2. Descendant non-positioned blocks, in order of appearance in the HTML
3. Descendant positioned elements, in order of appearance in the HTML

This is SO SIMPLE! It just depends on the order in the HTML! I put Div A after Div B in the HTML (as a sibling) and it made everything work in both browsers.

better attitude: “let’s learn the basics and see if that helps”

This whole stacking problem turned out to really not be that complicated – all I needed to do was read a very short and simple documentation page to understand how stacking works!

Of course, computer things are not always this simple (and even in this specific case the rules about what creates a new stacking context are pretty complicated). But I did not need to understand those more complicated rules in order to put Div A on top of Div B! I only needed to know the much simpler 3 rules above.

So – calm down for a second, learn a few of the basics, and see if that helps.

watching people who know what they’re doing is inspiring

Another area of CSS that I thought was “too hard” for me to understand was this whole position: absolute and position: relative business. I kept seeing (and sometimes using!) examples where people made complicated CSS things with position: absolute but I didn’t understand how they worked. Doesn’t position: absolute mean that the element is always in the same place on the screen? Why are these position: absolute things moving when I scroll like the rest of the document? (spoiler: no, that’s position: fixed.)

But last week, I paired with someone who’s a lot better at CSS than me on some code, and I saw that they were just typing in position: absolute and position: relative confidently into their code without seeming confused about it!! Could that be me?

I looked up the documentation on MDN on position: absolute, and it said:

The element is removed from the normal document flow, and no space is created for the element in the page layout. It is positioned relative to its closest positioned ancestor… Its final position is determined by the values of top, right, bottom, and left.

So things with position: absolute are positioned relative to their closest positioned ancestor! And you just use top/bottom/right/left to pick where! That’s so simple!

documentation that you can trust makes a big difference

I think another big source of my frustration with CSS is that I didn’t have the best grasp of where to find accurate information & advice. I knew that MDN was a reliable reference, but MDN doesn’t really help answer questions like “ok but seriously how do I center a div???” and I found myself reading a lot of random Stack Overflow answers/blog posts that I wasn’t 100% sure were correct.

This week I learned about CSS Tricks which has a lot of GREAT articles like Centering in CSS: A Complete Guide which seems very reputable and is written in a super clear way.

that’s all!

I don’t really know why I started to believe that it was “impossible” to understand basic CSS concepts since I don’t believe that about computers in general. Maybe because I’ve been writing CSS at a beginner level for a very long time but hadn’t ever really tried to do a more involved CSS project than “let’s arrange some divs in a grid with flexbox”!

But this attitude really got in the way of me writing the CSS I wanted to write! And once I let go of it and used my normal debugging techniques I was able to get a lot more things to work the way I wanted.


Getting started with shaders: signed distance functions!

Hello! A while back I learned how to make fun shiny spinny things like this using shaders:

My shader skills are still extremely basic, but this fun spinning thing turned out to be a lot easier to make than I thought it would be (with a lot of copying of code snippets from other people!).

The big idea I learned when doing this was something called “signed distance functions”, which I learned about from a very fun tutorial called Signed Distance Function tutorial: box & balloon.

In this post I’ll go through the steps I used to learn to write a simple shader and try to convince you that shaders are not that hard to get started with!

examples of more advanced shaders

If you haven’t seen people do really fancy things with shaders, here are a couple:

  1. this very complicated shader that is like a realistic video of a river: https://www.shadertoy.com/view/Xl2XRW
  2. a more abstract (and shorter!) fun shader with a lot of glowing circles: https://www.shadertoy.com/view/lstSzj

step 1: my first shader

I knew that you could make shaders on shadertoy, and so I went to https://www.shadertoy.com/new. They give you a default shader to start with that looks like this:

Here’s the code:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    // Output to screen
    fragColor = vec4(col,1.0);
}

This doesn’t do anything that exciting, but it already taught me the basic structure of a shader program!

the idea: map a pair of coordinates (and time) to a colour

The idea here is that you get a pair of coordinates as an input (fragCoord) and you need to output an RGBA vector with that pixel’s colour. The function can also use the current time (iTime), which is how the picture changes over time.

The neat thing about this programming model (where you map a pair of coordinates and the time to a colour) is that it’s extremely trivially parallelizable. I don’t understand a lot about GPUs but my understanding is that this kind of task (where you have 10000 trivially parallelizable calculations to do at once) is exactly the kind of thing GPUs are good at.
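Here’s that programming model in miniature, as a Python sketch rather than GLSL: one pure function from (coordinates, time) to a colour, which is what makes it so easy to run for every pixel in parallel.

```python
import math

# The default shadertoy shader's colour formula, one pixel at a time:
# each channel oscillates between 0 and 1 as time passes. This mirrors
# 0.5 + 0.5*cos(iTime + uv.xyx + vec3(0,2,4)) from the GLSL above.
def pixel_color(x, y, t):
    return (
        0.5 + 0.5 * math.cos(t + x),        # red:   offset 0
        0.5 + 0.5 * math.cos(t + y + 2.0),  # green: offset 2
        0.5 + 0.5 * math.cos(t + x + 4.0),  # blue:  offset 4
    )

# Every pixel can be computed independently -- no shared state at all.
r, g, b = pixel_color(0.5, 0.5, t=0.0)
print(r, g, b)
```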

step 2: iterate faster with shadertoy-render

After a while of playing with shadertoy, I got tired of having to click “recompile” on the Shadertoy website every time I saved my shader.

I found a command line tool called shadertoy-render that watches a file and updates the animation in real time every time I save. So now I can just run:

shadertoy-render.py circle.glsl 

and iterate way faster!

step 3: draw a circle

Next I thought – I’m good at math! I can use some basic trigonometry to draw a bouncing rainbow circle!

I know the equation for a circle (x**2 + y**2 = whatever!), so I wrote some code to do that:

Here’s the code: (which you can also see on shadertoy)

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Draw a circle whose center depends on what time it is
    vec2 shifted = uv - vec2((sin(iGlobalTime) + 1.0)/2.0, (1.0 + cos(iGlobalTime))/2.0);
    if (dot(shifted, shifted) < 0.03) {
        // Varying pixel colour
        vec3 col = 0.5 + 0.5*cos(iGlobalTime+uv.xyx+vec3(0,2,4));
        fragColor = vec4(col,1.0);
    } else {
        // make everything outside the circle black
        fragColor = vec4(0,0,0,1.0);
    }
}
This takes the dot product of the shifted coordinate vector with itself, which is the same as calculating x^2 + y^2. I played with the center of the circle a little bit in this one too – I made the center vec2((sin(iGlobalTime) + 1)/2, (1 + cos(iGlobalTime))/2), which means that the center of the circle also goes in a circle depending on what time it is.
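That circle test is just the Pythagorean distance check in disguise – here’s the same idea as a little Python sketch:

```python
# dot(shifted, shifted) equals sx*sx + sy*sy, so comparing it to 0.03
# asks whether the pixel is within sqrt(0.03) (about 0.17) of the center.
def inside_circle(x, y, cx, cy, r_squared=0.03):
    sx, sy = x - cx, y - cy
    return sx * sx + sy * sy < r_squared

print(inside_circle(0.5, 0.55, 0.5, 0.5))  # True: distance is 0.05
print(inside_circle(0.9, 0.9, 0.5, 0.5))   # False: distance is about 0.57
```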

shaders are a fun way to play with math!

One thing I think is fun about this already (even though we haven’t done anything super advanced!) is that these shaders give us a fun visual way to play with math – I used sin and cos to make something go in a circle, and if you want to get some better intuition about how trigonometric functions work, maybe writing shaders would be a fun way to do that!

I love that you get instant visual feedback about your math code – if you multiply something by 2, things get bigger! or smaller! or faster! or slower! or more red!

but how do we do something really fancy?

This bouncing circle is nice but it’s really far from the super fancy things I’ve seen other people do with shaders. So what’s the next step?

idea: instead of using if statements, use signed distance functions!

In my circle code above, I basically wrote:

if (dot(uv, uv) < 0.03) {
    // code for inside the circle
} else {
    // code for outside the circle
}
But the problem with this (and the reason I was feeling stuck) is that it’s not clear how it generalizes to more complicated shapes! Writing a bajillion if statements doesn’t seem like it would work well. And how do people render those 3d shapes anyway?

So! Signed distance functions are a different way to define a shape. Instead of using a hardcoded if statement, you define a function that tells you, for any point in the world, how far away that point is from your shape. For example, here’s a signed distance function for a sphere.

float sdSphere( vec3 p, float center )
{
  // ("center" here is really the sphere's radius)
  return length(p)-center;
}
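Here’s the same function as a Python sketch, to show the sign convention: negative inside the sphere, zero on the surface, positive outside.

```python
import math

# Signed distance from point p to a sphere of the given radius,
# centered at the origin.
def sd_sphere(p, radius):
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - radius

print(sd_sphere((0.0, 0.0, 0.0), 1.0))  # -1.0: inside
print(sd_sphere((1.0, 0.0, 0.0), 1.0))  #  0.0: on the surface
print(sd_sphere((2.0, 0.0, 0.0), 1.0))  #  1.0: outside
```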

Signed distance functions are awesome because they’re:

the steps to making a spinning top

When I started out I didn’t understand what code I needed to write to make a shiny spinning thing. It turns out that these are the basic steps:

  1. Make a signed distance function for the shape I want (in my case an octahedron)
  2. Raytrace the signed distance function so you can display it in a 2D picture (or raymarch? The tutorial I used called it raytracing and I don’t understand the difference between raytracing and raymarching yet)
  3. Write some code to texture the surface of your shape and make it shiny

I’m not going to explain signed distance functions or raytracing in detail in this post because I found this AMAZING tutorial on signed distance functions that is very friendly and honestly it does a way better job than I could do. It explains how to do the 3 steps above and the code has a ton of comments and it’s great.

step 4: copy the tutorial code and start changing things

Here I used the time-honoured programming practice of “copy the code and change things in a chaotic way until I get the result I want”.

My final shader of a bunch of shiny spinny things is here: https://www.shadertoy.com/view/wdlcR4

The animation comes out looking like this:

Basically to make this I just copied the tutorial on signed distance functions that renders the shape based on the signed distance function and:

making the octahedron spin!

Here’s the code I used to make the octahedron spin! This turned out to be really simple: I first copied an octahedron signed distance function from this page, then added a rotate to make it rotate based on time, and suddenly it was spinning!

vec2 sdfOctahedron( vec3 currentRayPosition, vec3 offset ){
    vec3 p = rotate((currentRayPosition), offset.xy, iTime * 3.0) - offset;
    float s = 0.1; // what is s?
    p = abs(p);
    float distance = (p.x+p.y+p.z-s)*0.57735027; // 0.57735027 is 1/sqrt(3)
    float id = 1.0;
    return vec2( distance,  id );
}

making it shiny with some noise

The other thing I wanted to do was to make my shape look sparkly/shiny. I used a noise function that I found in this github gist to make the surface look textured.

Here’s how I used the noise function. Basically I just changed parameters to the noise function mostly at random (multiply by 2? 3? 1800? who knows!) until I got an effect I liked.

float x = noise(rotate(positionOfHit, vec2(0, 0), iGlobalTime * 3.0).xy * 1800.0);
float x2 = noise(lightDirection.xy * 400.0);
float y = min(max(x, 0.0), 1.0);
float y2 = min(max(x2, 0.0), 1.0) ;
vec3 balloonColor = vec3(y , y  + y2, y  + y2);

writing shaders is fun!

That’s all! I had a lot of fun making this thing spin and be shiny. If you also want to make fun animations with shaders, I hope this helps you make your cool thing!

As usual with subjects I don’t know that well, I’ve probably said at least one wrong thing about shaders in this post – let me know what it is!

Again, here are the 2 resources I used:

  1. “SDF Tutorial: box & balloon”: https://www.shadertoy.com/view/Xl2XWt (which is really fun to modify and play around with)
  2. Tons of signed distance functions that you can copy and paste into your code http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm


Page created: Mon, Sep 28, 2020 - 09:05 AM GMT