Peter Molnar

Peter Molnar

Hol vasmű állott, most kőhalom 2

Shutter speed
1/60 sec
Focal length (as set)
50.0 mm
ISO 1600
K or M Lens

Empty windows, nature slowly taking over, rubble everywhere - a few decades of decay.


Hol vasmű állott, most kőhalom

Shutter speed
1/100 sec
Focal length (as set)
35.0 mm
ISO 3200
smc PENTAX-DA 35mm F2.4 AL

Just another abandoned, decaying factory in Budapest.


How to install microG on an odexed stock Android ROM

Why would anyone want an android without Google?

About 1.5 years ago I attended a wedding. It took place outside the city, at a restaurant with a very nice garden, where I'd never been before. About two hours in, my phone buzzed. I took it out, expecting a text message or similar, but no: it was Google Maps, asking me whether I was at the place where I was, and since I was there, could I upload some pictures of the place?

Since then this has become regular, to the point of being obstructive and annoying. I'm not alone: Brad Frost's1 entry talks about the same problem. I've tried everything to turn Google location tracking off. I went to Location History2 and disabled it. I went through the Google Maps application and removed every notification. This latter one cleared the madness for a while - until a new version of Maps came out, which introduced some extra notification setting, which then showed yet another popup out of the blue.

Google has been failing to respect user settings lately, turning into a desperately annoying data hoarding monster. They've been doing nasty things for years, like silently collecting cell tower information even with location settings disabled3, but with the coming of the GDPR4 they need to get consent - hence the silly amount of notifications they are bombarding you with.

Once I set a setting in a service, I expect it to stay the way I set it. I expect backwards compatibility, with backfilled data if needed. Google and Facebook are both failing at this; Facebook always has, Google only recently started. New app, we renamed all the settings, let's reset them to the default level of annoyance!

The whole problem on Android can be traced back to one omnipotent application: Google Services Framework. This silent, invisible beast upgrades itself and the Play Store whenever and wherever it wants. It does all this in the background, without even letting you know. If you happen to run an ancient phone, like my HTC Desire5, it will fill up that generous 150MB you have for user apps without blinking an eye, and let you wonder why your phone can't boot any more.

The extremely sad part is that everyone started depending on GMS - Google Mobile Services - for convenience: it provides push services, so you don't have to run your own. It all leads to the point that Android, while in theory Open Source, will never be free from Google in its current form.

Enter microG6: a few enthusiasts with the same feelings as me, but with actual results. microG is a service level replacement for Google; it's Free and Open Source, and it's transparent. There's only one problem: it's very tricky to install on niche phones with odexed ROMs.

So I made a guide. This guide was made using a Nomu S107, but given it's an AOSP based ROM with tiny modifications, I'm fairly certain it can be applied to similar phones from other lesser-known, no-name brands.

Important notes

The methods below might void your warranty. They might brick your phone. They will take a while. They can cause an unexpected amount of hair to be pulled out.

Never do this on your only phone, or a phone you value highly. I take no responsibility if anything goes wrong.

It was done on a Nomu S10, with Android 6.0 Marshmallow. It will most probably need to be altered for other versions.

The heavy lifting, the actual work, was all done by magnificent people out there; this article is merely a summary of existing knowledge.

The only thing I can assure is that it worked for me, but it took a weekend to get to the bottom of it.


Operating system, adb, fastboot

The guide was made for Debian based Linux distributions, including Ubuntu and Mint.

I assume you have a general understanding and familiarity with fastboot and adb - these are both needed during the process.

It's doable on Windows as well, with very similar steps, but I don't have a Windows machine, so I can't write that guide.

SP Flash Tool (flashing stock ROMs on MediaTek devices)

The stock ZIPs Nomu provides can't be flashed via the regular recoveries, like TWRP. As a workaround, I used to extract them and flash the pieces via fastboot - that was because I wasn't aware of a tool called SP Flash Tool and the MediaTek download mode.

When the phone is turned off and connected to a computer via USB, it shows up as a modem (!) device, as ttyACM. SP Flash Tool uses this to flash the ROM, but in order to do that - even if the flash tool is run as root - some tweaking is needed on the Linux side.

In order to get this supported on Debian, some udev rules need to be added:

Run (as root):

cat > /etc/udev/rules.d/20-mediatek-blacklist.rules << EOF
ATTRS{idVendor}=="0e8d", ENV{ID_MM_DEVICE_IGNORE}="1"
ATTRS{idVendor}=="6000", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF

cat > /etc/udev/rules.d/80-mediatek-usb.rules << EOF
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0e8d", ATTR{idProduct}=="*"
EOF

systemctl restart udev.service

Once done, add your user to the dialout and uucp groups:

usermod -a -G dialout,uucp YOUR_USERNAME
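A quick sanity check can save some head scratching at this point; the commands below are my own suggested checks, not part of the original setup (0e8d is MediaTek's USB vendor ID, and a logout/login may be needed before the new groups show up):

```shell
# check that the powered-off, connected phone shows up as a MediaTek device
lsusb | grep -i '0e8d' || echo "device not detected yet"
# the modem-style device node should appear shortly after plugging in
ls -l /dev/ttyACM* 2>/dev/null || echo "no ttyACM device yet"
# confirm your user is in the dialout and uucp groups
id -nG | tr ' ' '\n' | grep -E '^(dialout|uucp)$'
```

If the last command prints nothing, log out and back in so the group changes take effect.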

SP Flash Tool needs an old version of libpng12, so get that from the Debian packages, or from the jessie (oldstable) repository:

wget http://ftp.uk.debian.org/debian/pool/main/libp/libpng/libpng12-0_1.2.50-2+deb8u3_amd64.deb
dpkg -i libpng12-0_1.2.50-2+deb8u3_amd64.deb
rm libpng12-0_1.2.50-2+deb8u3_amd64.deb

This should make it possible to flash using SP Flash Tool, which can be downloaded from spflashtool.com8.

Credit is due to Miss Montage on needrom.com9 for figuring these out.

TWRP recovery for Nomu S10

Do not flash TWRP recovery on the Nomu S10. There is some kind of safety check which triggers a factory reset, and the phone ends up stuck with a boot logo on screen (not even a bootloop): it can't even be turned off, there's no working system, no recovery, and fastboot is locked. If you reach this point, use the SP Flash Tool method described above.

Instead of flashing, a method called dirty booting will be used: via fastboot, the TWRP image is booted from the PC, not from the phone, because TWRP is still needed in order to flash custom ZIPs. Jemmini10 made a TWRP image for the Nomu S10; it comes in a zip11; extract it to get a .img file.
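A dirty boot then looks roughly like this, run from the directory holding the extracted image; the twrp-nomu-s10.img filename is only a placeholder for whatever the extracted file is called:

```shell
# reboot the phone into fastboot (bootloader) mode over adb
adb reboot bootloader
# boot the TWRP image directly from the PC, without flashing it
fastboot boot twrp-nomu-s10.img
```

Because nothing is written to the recovery partition, a simple reboot returns the phone to its stock state.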

SuperSu flashable ZIP

Many things have changed since the early days of rooting Android (2.3 and 4.x). Nowadays fiddling with the system /bin might result in mayhem, so clever people came up with the idea of putting su, the actual binary needed for rooting, into the boot image, so it ends up in /sbin without triggering any "security". One of these systemless rooting apps is SuperSu12, which you'll need in flashable zip13 format.

Xposed framework ZIP and apk

A vast amount of Android's real potential is locked behind bars - the reason given is "security". I'm putting this in quotes, because it's smoke and mirrors: malware can use vulnerabilities to install itself so deep inside the system that it's impossible to even detect, yet you're not allowed full access to your own phone. Not even the security suites and malware scanners ask for root, and without that, they are not much more than a paid, bad joke.

Anyway: the Xposed Framework14 is here to help. It's a utility which lets you install modules that tweak low level Android behaviour. For example, Volumesteps+ will let you change the granularity of volume steps, which is very useful for someone who'd find a volume level between the factory 8 and 9 the best. For us, the important module will be FakeGApps, which allows signature spoofing15, a hard requirement for microG to work.

For reasons I'm yet to understand, I had to both flash the zip16 and install the apk17 version to get Xposed on the phone. In theory, only one should be enough, but for now, get both.

NanoDroid microG ZIP

NanoDroid18 is originally a Magisk module (Magisk is another systemless rooting framework, but I could never get it running on the Nomu), but it's also available as flashable ZIPs.

For our needs, only the NanoDroid microG zip19 is required.
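Before booting TWRP, it's handy to have all three flashable ZIPs on the phone's storage; with USB debugging enabled they can be pushed via adb. The filenames below match the linked downloads and may change with newer releases:

```shell
# copy the flashable ZIPs to the phone's internal storage
adb push SuperSU-v2.82-201705271822.zip /sdcard/
adb push xposed-v89-sdk23-arm.zip /sdcard/
adb push NanoDroid-microG-16.1.20180209.zip /sdcard/
```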

The actual un-googling


  1. root the phone and install Xposed
  2. enable signature spoofing via the Xposed module FakeGApps
  3. remove all Google related apps, libs, and entries
  4. install microG

Now: in details.

1. Rooting the Nomu S10 with SuperSu and Xposed

Important: OEM unlocking will trigger a factory reset, it will wipe every user and app setting from the phone.

  1. Start android
  2. Enable Developer Options:
  3. Enable OEM unlocking
  4. Enable USB Debugging (ADB)
  5. reboot into fastboot:
  6. in fastboot, issue the oem unlock command:
  7. reboot the phone: fastboot reboot
  8. let the factory reset flush through
  9. re-enable Developer Options (see 2.)
  10. verify that OEM unlocking is on - it should be. If not, go back to 1. and start again.
  11. re-enable USB debugging (see 3.)
  12. boot into fastboot (see 5.)
  13. "dirty" boot TWRP recovery:
  14. install SuperSu via TWRP
  15. install Xposed via TWRP (see 14.)
  16. reboot the phone
  17. install the Xposed apk
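The fastboot-related steps above (5-7 and 12-13) map onto commands roughly like this. These are the standard adb/fastboot invocations, though some phones differ, and the TWRP image filename is only a placeholder:

```shell
# step 5/12: reboot into fastboot (bootloader) mode
adb reboot bootloader
# step 6: unlock the bootloader - this triggers the factory reset
fastboot oem unlock
# step 7: reboot back into android
fastboot reboot
# step 13: later, "dirty" boot the TWRP image from the PC
fastboot boot twrp-nomu-s10.img
```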

2. Enable signature spoofing with FakeGapps

Once the Xposed Installer is up and running it will look like this:

Running and enabled Xposed

Under Menu, Download search for FakeGApps20, click Download and enable it:

FakeGApps in Xposed

For FakeGApps to take effect, the phone has to be rebooted, but it can be done together with the next step.

3. Remove all google apps and libraries

These commands should be run either via an adb shell, or from a Terminal Emulator on the phone.

adb shell

# become root - SuperSu will prompt for verification
su -
# remount / and /system for read-write
mount -o remount,rw /
mount -o remount,rw /system
# create a file with the list of items to delete
cat > /sdcard/ungoogle.sh << EOF
rm -rf /system/app/CarHomeGoogle*
rm -rf /system/app/ChromeBookmarksSyncAdapter*
rm -rf /system/app/ConfigUpdater*
rm -rf /system/app/FaceLock*
rm -rf /system/app/GenieWidget*
rm -rf /system/app/Gmail*
rm -rf /system/app/Gmail2
rm -rf /system/app/GmsCore*
rm -rf /system/app/Google*
rm -rf /system/app/LatinImeTutorial*
rm -rf /system/app/LatinImeDictionaryPack*
rm -rf /system/app/MarketUpdater*
rm -rf /system/app/MediaUploader*
rm -rf /system/app/NetworkLocation*
rm -rf /system/app/OneTimeInitializer*
rm -rf /system/app/Phonesky*
rm -rf /system/app/PlayStore*
rm -rf /system/app/SetupWizard*
rm -rf /system/app/Talk*
rm -rf /system/app/Talkback*
rm -rf /system/app/Vending*
rm -rf /system/app/VoiceSearch*
rm -rf /system/app/VoiceSearchStub*
rm -rf /system/etc/permissions/com.google.android.maps.xml
rm -rf /system/etc/permissions/com.google.android.media.effects.xml
rm -rf /system/etc/permissions/com.google.widevine.software.drm.xml
rm -rf /system/etc/permissions/features.xml
rm -rf /system/etc/preferred-apps/google.xml
rm -rf /system/etc/g.prop
rm -rf /system/addon.d/70-gapps.sh
rm -rf /system/framework/com.google.android.maps.jar
rm -rf /system/framework/com.google.android.media.effects.jar
rm -rf /system/framework/com.google.widevine.software.drm.jar
rm -rf /system/lib/libfilterpack_facedetect.so
rm -rf /system/lib/libfrsdk.so
rm -rf /system/lib/libgcomm_jni.so
rm -rf /system/lib/libgoogle_recognizer_jni.so
rm -rf /system/lib/libgoogle_recognizer_jni_l.so
rm -rf /system/lib/libfacelock_jni.so
rm -rf /system/lib/libgtalk_jni.so
rm -rf /system/lib/libgtalk_stabilize.so
rm -rf /system/lib/libjni_latinimegoogle.so
rm -rf /system/lib/libflint_engine_jni_api.so
rm -rf /system/lib/libpatts_engine_jni_api.so
rm -rf /system/lib/libspeexwrapper.so
rm -rf /system/lib/libvideochat_stabilize.so
rm -rf /system/lib/libvoicesearch.so
rm -rf /system/lib/libvorbisencoder.so
rm -rf /system/lib/libpicowrapper.so
rm -rf /system/priv-app/CarHomeGoogle*
rm -rf /system/priv-app/ChromeBookmarksSyncAdapter*
rm -rf /system/priv-app/ConfigUpdater*
rm -rf /system/priv-app/FaceLock*
rm -rf /system/priv-app/GenieWidget*
rm -rf /system/priv-app/Gmail*
rm -rf /system/priv-app/GmsCore*
rm -rf /system/priv-app/Google*
rm -rf /system/priv-app/LatinImeTutorial*
rm -rf /system/priv-app/LatinImeDictionaryPack*
rm -rf /system/priv-app/MarketUpdater*
rm -rf /system/priv-app/MediaUploader*
rm -rf /system/priv-app/NetworkLocation*
rm -rf /system/priv-app/OneTimeInitializer*
rm -rf /system/priv-app/Phonesky*
rm -rf /system/priv-app/PlayStore*
rm -rf /system/priv-app/SetupWizard*
rm -rf /system/priv-app/Talk*
rm -rf /system/priv-app/Talkback*
rm -rf /system/priv-app/Vending*
rm -rf /system/priv-app/VoiceSearch*
rm -rf /system/priv-app/VoiceSearchStub*
# execute the created list
sh /sdcard/ungoogle.sh

4. Install NanoDroid microG

Once Google is cleaned up and the FakeGapps module is ready, reboot into recovery (see 12. and 13.) and install the NanoDroid zip via TWRP.

If you did everything right, there will be no Google services or apps left; if not - as in my case - a few leftovers will need to be manually cleaned up.
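To hunt down leftovers, listing the remaining Google-ish packages can help; these are my own suggested checks, run from an adb shell or Terminal Emulator:

```shell
# list any remaining packages with google/gms/vending in their name
pm list packages | grep -iE 'google|gms|vending'
# and check for leftover APK directories under /system
ls /system/app /system/priv-app | grep -i 'google'
```

Anything still listed can be removed the same way as in the ungoogle.sh script above.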

If the microG flashing was successful, an app, called microG settings will show up:

microG Settings


  1. http://bradfrost.com/blog/post/google-you-creepy-sonofabitch/

  2. https://myaccount.google.com/activitycontrols/location

  3. https://qz.com/1131515/google-collects-android-users-locations-even-when-location-services-are-disabled/

  4. https://www.eugdpr.org/

  5. https://en.wikipedia.org/wiki/HTC_Bravo

  6. https://microg.org/

  7. http://www.nomu.hk/pro/s10_product_show/

  8. https://spflashtool.com/download/SP_Flash_Tool_v5.1744_Linux.zip

  9. https://www.needrom.com/download/how-to-setup-sp-flash-tool-linux-mtk

  10. https://forum.xda-developers.com/showthread.php?t=3482755

  11. https://forum.xda-developers.com/attachment.php?attachmentid=3947063

  12. http://www.supersu.com/

  13. https://s3-us-west-2.amazonaws.com/supersu/download/zip/SuperSU-v2.82-201705271822.zip

  14. https://forum.xda-developers.com/showthread.php?t=3034811

  15. https://github.com/microg/android_packages_apps_GmsCore/wiki/Signature-Spoofing

  16. https://dl-xda.xposed.info/framework/sdk23/arm/xposed-v89-sdk23-arm.zip

  17. https://forum.xda-developers.com/attachment.php?attachmentid=4393082

  18. http://nanolx.org/nanolx/nanodroid

  19. https://downloads.nanolx.org/NanoDroid/Stable/NanoDroid-microG-16.1.20180209.zip

  20. http://repo.xposed.info/module/com.thermatk.android.xf.fakegapps


We are living in instant messenger hell

Note: I have updated some parts of this entry. This is because I wrote about XMPP without spending enough time exploring what it's really capable of, for which I'm sorry. I made changes to my article according to these findings.

Me vs. IM

Before the dawn of the always online era (pre 2007) the world of instant messengers was completely different. For me, it all started with various IRC1 rooms, using mIRC2, later extended with ICQ3 in 1998.

I loved ICQ. I loved its notification sound - I have it as the notification sound on my smartphone, and it usually results in very confused expressions from people who haven't heard the little 'ah-oooh' for a decade -, its capability of sending and receiving files, the way you could search for people based on location, interest tags, and so on.

The sixth protocol version appeared in ICQ 2000b and faced a complete rework. Encryption was significantly improved. Thanks to the new protocol, ICQ learned how to call phones, and send SMS and pager messages. Users also got the option of sending contact requests to other users.4

Around this time, Microsoft included an instant messenger in their operating systems: MSN Messenger5, later renamed Windows Live Messenger. It was inferior, but because it was built into Windows, it took all the ICQ users away. It's completely dead now.

The multiplication of messengers had one useful effect though: people who got fed up with running multiple clients for the same purpose - to message people - came up with the idea of multi-protocol applications. I used Trillian6 for many years, followed by Pidgin7 once I switched to Linux.

With the help of these multi-protocol miracles it wasn't an issue when newcomers like Facebook or Google released their messaging functionality: both were built on top of XMPP8, an open standard for instant messaging, and both were supported out of the box in those programs.

Around this time Skype arrived and solved all the video call problems with ease. It was fast, p2p, encrypted, ran on every platform, and supported more or less everything people needed, including multiple instances for multiple accounts. Skype was well on its way to eliminating everything else. Unfortunately none of the multi-protocol messengers ever had native support for it: it only worked if a local instance of Skype was running.

A few years later the iPhone appeared and ate the consumer world; not long before that, BlackBerry had done the same to the business world. Smartphones came with their own, new issues: synchronization, and resource (battery and bandwidth) limitations. ICQ existed for Symbian S60, Windows CE, and a bunch of other, ancient platforms, but by the time iPhones and BlackBerries roamed the mobile land, it was in a neglected state at AOL and missed a marvellous opportunity.

Both of those problems were known and addressed in the XMPP specification. The protocol was low on resources by design, it supported device priority, and XEP-0280: Message Carbons9 took care of delivering messages to multiple clients. There was a catch though: none of the well known XMPP providers supported any of these additions, so you ended up using either your mobile device or your computer, but not both at the same time. Most of the big systems - AOL, Yahoo!, MSN, Skype, etc - didn't even have a client for iOS at that time, let alone for Android.

This led to a new generation of messengers: mobile only apps. WhatsApp10, BlackBerry Messenger11, Viber, etc - none of them offered any real way to be used from the desktop, and they all required - they still do - a working mobile phone number even to register.

For reasons I'm yet to comprehend, both Google and Facebook abandoned XMPP instead of extending and fully implementing it. Google went completely proprietary and replaced gtalk12 with Hangouts13; Facebook started using MQTT14 for their messenger applications. Both were simple enough to be reverse engineered and added to libpurple, but they both tried to reinvent something that already existed.

For Skype, this was a turning point: it was bought by Microsoft, who slowly moved it from p2p to a completely centralised webapp. The official reasoning included something about power hungry p2p connections... Soon, Skype lost all the appeal of its previous iterations: video and voice were lagging, it consumed silly amounts of resources, it was impossible to stop it on Android, etc. Today, it resembles nothing of the original, incredible, p2p, secure, decentralised, resource-aware application it used to be.

I had to install WhatsApp yesterday - I resisted as long as I could. It has completely mangled the competition in the UK and the Netherlands: nobody is willing to use anything else, not even regular text (SMS) or email. It did all this despite its lack of multi-device support, and the fact that it's now owned by one of the nastiest, most people-ignorant businesses around the globe15.

So, altogether, in February 2018, for work and personal communication, I need to be able to connect to:

* I still have some contacts on ICQ, though it's a wasteland, and I can't even remember the last time I actually talked to anyone on it. This sort of applies to Hangouts: those who used to use it are now mostly on Facebook.

** WeChat is, so far, only a thing if you have Chinese contacts or if you live in or visit China. It dominates China to such an extent that other networks, like QQ, can be more or less ignored, but WeChat is essential.

If I install all those things on my phone, I'll run out of power in a matter of hours - and the Nomu has an internal 5000mAh brick. They will consume any RAM I throw at them, and I don't even want to think about the privacy implications: out of curiosity I checked the ICQ app, and the policy pushed into my face on first launch is rather scary. As for Facebook: I refuse to run Facebook in any form on my phone apart from 'mbasic', the no-JavaScript web interface.

Typing on a touchscreen is inefficient, and I'm very far from being a keyboard nerd; my logs will be application specific and probably not in any readable/parsable format.

On top of all this, a few days ago Google announced Google Hangouts Chat17. Right now, Google has the following applications to cover text, voice, and video chat:

That's 5 applications. 5. Only from Google.

Words for the future

I really, really want one, single thing, which allows me:

One I sort of liked is Telegram18: cross device support, surprisingly fast and low on resources; but it gets attacked because they dared to roll their own crypto, and, in the end, it's still a centralised service, ending up as just another account to connect to, and just another app to run. Since I wrote this entry, a few people have tried to point out that Telegram is no better than WhatsApp or Signal, but I have to disagree. Yes, WhatsApp is encrypted by default - this also means I need to run my phone as a gateway all the time. No phone = no desktop use. The desktop "app" is a power and resource eating Electron app.

Others asked about Signal. It does encryption the paranoid, aggressive way, but at the same time it depends on Google Services Framework on Android, Twilio, and AWS, requires a smartphone app, eliminates 3rd party client options, and will only run the "desktop" Electron app if you pair it with a phone app - in which case it's very similar to WhatsApp. Like it or not, it's also a silo, with centralised services, even though you could, in theory, install a complicated server of your own that relies on the services listed above. It might be better than WhatsApp, but definitely not better than Telegram from a usability point of view. Privacy wise... unless I can run my own server, without those kinds of dependencies, no, thanks - it's just another silo.

I also believe OTR-like encryption is overrated, or at least not as important as many claim. Most of the messages will tell you less than their metadata, so what's the point? Most of the encryption protocols are exclusive per connected client, meaning you can't have multiple devices with the same account exchanging the same messages - hence the need for the phone apps as gateways. XMPP with OMEMO19 is tackling this - if that's on by default, it could work. Note: TLS, infrastructure level encryption is a must; that is without question.

While Matrix20 looks promising, it's an everything-on-HTTP solution, which I still find odd and sad. HTTP is not a simple protocol - yes, it's omnipresent, but that doesn't make it the best for every purpose. There's another problem with it: no big player has bought in that could bring the critical mass of users, and without that, it's practically impossible to get people to use it.

Video and voice calls are, in general, in a horrible shape: nearly everything uses WebRTC, which, while it usually works, is a terrible performer, insanely heavy on CPU, and, most of the time, tries to go for the highest quality, consuming bandwidth like there's no tomorrow.

All this leaves me with XMPP and SIP.

XMPP is and could be able to cover everything, and, on top of that, it's federated, like email: anyone can run their own instance. I'm still a fan of email (yes, you read that right), and a significant part of that is due to the options you can choose from: providers, clients, even running your own email service.

Unlike with most solutions and silos, the encryption problem (namely that with encryption on, only one of the devices can get the messages, or you need to use a router device, like WhatsApp does) is covered and done with the XMPP extension OMEMO21. It's a multi-client encryption protocol that allows simultaneous devices to connect and encrypt at once.

In case of XMPP, voice and video could be handled by a P2P protocol, Jingle22, but, unfortunately, it's rarely supported. On Android, I found Astrachat23 which can do it, but it lacks many features when it comes to text based communication, unlike Conversations24. On the desktop, I'm having serious problems getting Pidgin to use video, so not everything is working yet.

This is where SIP comes in: an old, battle tested, proven VoIP protocol, which, so far, has worked for me without a glitch in 2018. A few years ago many mobile providers were blocking SIP (among other VoIP protocols), but the situation is getting much better. Unfortunately I haven't started running my own VoIP exchange yet, and ended up using Linphone25 as both software and provider - for now. The unfortunate part of SIP is that Pidgin doesn't support it in any form.

There is one, very significant problem left: conformist people. I understand WhatsApp is simple and convenient, but it's a Facebook owned, phone only system.

I'd welcome thoughts and recommendations on how to make your friends use something that's not owned by Facebook.

Until then, I'll keep using Pidgin, with a swarm of plugins that need constant updating.

Adding networks to Pidgin (technical details)

Pidgin, which I mentioned before, is a multi protocol client. Out of the box, it's in a pretty bad shape: AIM, MSN, and Google Talk are dead as a doornail, and most of the systems it supports are arcane (eg. Zephyr) or more or less forgotten (ICQ). Version 3 of Pidgin and its library, libpurple, has been in the making for a decade and is still far off; the current 2.x line is barely supported.

There is hope however: people keep adding support for new systems, even to ones without proper or documented API.

For those who want to stick to strictly text interfaces, Bitlbee can be compiled with libpurple support, but it's a bit weird to use when you have the same contact or the same names present on multiple networks.

The guides below are made for Debian and its derivatives, like Ubuntu and Mint. In order to build any of the plugins below, some common build tools are needed, apart from the per-plugin specific ones:

sudo apt install libprotobuf-dev protobuf-compiler build-essential
sudo apt-get build-dep pidgin

How to connect to Skype with Pidgin (or libpurple)

The current iteration of the Skype plugin uses the web interface to connect to the system. It doesn't offer voice and video calls, but it supports individual and group chats alike.

If you have 2FA enabled, you'll need to use your app password as the password and tick Use alternative login method on the Advanced tab when adding the account.

git clone https://github.com/EionRobb/Skype4pidgin
cd Skype4pidgin/Skypeweb
cmake .
sudo make install

How to connect to Google Hangouts with Pidgin (or libpurple)

I've taken the instructions from the author's bitbucket site26:

sudo apt install -y libpurple-dev libjson-glib-dev libglib2.0-dev libprotobuf-c-dev protobuf-c-compiler mercurial make
hg clone https://bitbucket.org/EionRobb/purple-hangouts/
cd purple-hangouts
sudo make install

How to connect to Facebook and/or Workplace by Facebook with Pidgin (or libpurple)

The Workplace support is not yet merged into the main code: it's in the wip-work-chat branch. More information in the support ticket27.

Workplace and its 'buddy' list are sort of a mystery at this point in time, so don't expect everything to run completely smoothly, but it's much better than nothing.

In order to log in to a Workplace account, tick Login as Workplace account on the Advanced tab.

git clone https://github.com/dequis/purple-facebook
cd purple-facebook
git checkout wip-work-chat
sudo make install

How to connect to Telegram with Pidgin (or libpurple)

The Telegram plugin works nicely, including inline images and end-to-end encrypted messages. Voice support seems to be lacking, unfortunately.

sudo apt install libgcrypt20-dev libwebp-dev
git clone https://github.com/majn/telegram-purple
cd telegram-purple
git submodule update --init --recursive
sudo make install

How to connect to WhatsApp with Pidgin (or libpurple)

Did I mention I hate this network? First of all, a note: WhatsApp doesn't allow 3rd party applications at all. They might ban the phone number you use for life. This ban may extend to the Facebook account using the same phone number, but this has never been officially confirmed.

Apart from that, it needs a lot of hacking around: the plugin alone is not enough, because WhatsApp doesn't tell you your password. In order to get your password, you need to fake a 'registration' from the computer.

Even if you do this, only one device will work: the other instances will get logged out, so there is no way to use WhatsApp from your phone and from your laptop. It's 2007 again, except it's mobile only instead of desktop only.

Please stop using WhatsApp and use something with a tad more openness in it; XMPP, Telegram, SIP, ICQ... basically anything.

If you're stuck with needing to communicate with stubborn and lazy people, like I am, continue reading, and install the plugin for pidgin:

sudo apt install libprotobuf-dev protobuf-compiler
git clone https://github.com/jakibaki/whatsapp-purple/
cd whatsapp-purple
sudo make install

However, this is not enough: the next step is yowsup, a command line Python utility that allows you to 'register' to WhatsApp and reveals that so well hidden password.

sudo pip3 install yowsup

Once done, you first need to request an SMS, meaning you'll need a number that's able to receive SMS. Replace the COUNTRYCODE and PHONENUMBER strings with your country code and phone number, without prefixes - so for the United Kingdom, the country code would be 44.

No 00 or + before the full international phone number.

$ yowsup-cli registration --requestcode sms -p PHONENUMBER --cc COUNTRYCODE --env android

    yowsup-cli  v2.0.15
    yowsup      v2.5.7

    Copyright (c) 2012-2016 Tarek Galal

    This software is provided free of charge. Copying and redistribution is

    If you appreciate this software and you would like to support future
    development please consider donating:

    status: b'sent'
    length: 6
    method: b'sms'
    retry_after: 78
    login: b'PHONENUMBER'

Once you got the SMS, use the secret code:

$ yowsup-cli registration --register SECRET-CODE -p PHONENUMBER --cc COUNTRYCODE --env android

    yowsup-cli  v2.0.15
    yowsup      v2.5.7

    Copyright (c) 2012-2016 Tarek Galal

    This software is provided free of charge. Copying and redistribution is

    If you appreciate this software and you would like to support future
    development please consider donating:

    INFO:yowsup.common.http.warequest:b'{"status":"ok","login":"PHONENUMBER","type":"existing","edge_routing_info":"CAA=","chat_dns_domain":"sl","pw":"[YOUR WHATSAPP PASSWORD YOU NEED TO COPY]=","expiration":4444444444.0,"kind":"free","price":"$0.99","cost":"0.99","currency":"USD","price_expiration":1520591114}\n'
    status: b'ok'
    login: b'PHONENUMBER'
    type: b'existing'
    expiration: 4444444444.0
    kind: b'free'
    price: b'$0.99'
    cost: b'0.99'
    currency: b'USD'
    price_expiration: 1520591114

That YOUR WHATSAPP PASSWORD YOU NEED TO COPY is the password you need to put in the password field of the account; the username is your PHONENUMBER.

How to connect to WeChat with Pidgin (or libpurple)

If there is something worse than WhatsApp, it's WeChat: app only and rather aggressive when it comes to accessing private data on the phone. If you want to use it but avoid actually serving data to it, I recommend getting the Xposed Framework28 with XPrivacyLua29 on your phone before installing WeChat, and restricting WeChat with it as much as possible.

sudo apt install cargo clang
git clone https://github.com/sbwtw/pidgin-wechat
cd pidgin-wechat
cargo build
sudo cp target/debug/libwechat.so /usr/lib/purple-2/

Pidgin will only ask for a username - fill that in with your WeChat username and connect. Pidgin will soon pop up a window with a QR code - scan it with the WeChat app and follow the process on screen.

Other networks

Pidgin has a list of third party plugins30, but it's outdated. I've been searching for forks and networks missing from the list on Github.

Extra Plugins for Pidgin

Purple Plugin Pack

There are a few useful plugins for Pidgin that can make life simpler; the Purple Plugin Pack31 contains most of the ones on my list.

XMPP Message Carbons

XEP-0280 Message Carbons32 is an extension that allows multiple devices to receive all messages.

sudo apt install libpurple-dev libglib2.0-dev libxml2-dev
git clone https://github.com/gkdr/carbons
cd carbons
sudo make install

Once installed, open a chat or conversation that happens on the relevant server and type:

/carbons on

This will not be delivered as a message but executed on the server as a command. Unfortunately not all XMPP servers support this.
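For the curious: what `/carbons on` effectively does is send the XEP-0280 "enable" IQ stanza to the server. As an illustration - independent of Pidgin, and usable from any client that can send raw stanzas - the stanza can be built with the Python standard library:

```python
import xml.etree.ElementTree as ET

# Build the XEP-0280 "enable" IQ a client sends to turn carbons on.
# The id value is arbitrary; the namespace is defined by the XEP.
iq = ET.Element('iq', {'type': 'set', 'id': 'enable-carbons-1'})
ET.SubElement(iq, 'enable', {'xmlns': 'urn:xmpp:carbons:2'})

stanza = ET.tostring(iq, encoding='unicode')
print(stanza)
```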


OMEMO

OMEMO33 (Multi-End Message and Object Encryption) is an encryption protocol that allows encrypted messages across multiple devices. It's built into Conversations34, one of the best XMPP clients for Android - Pidgin doesn't support it by default.

sudo apt install git cmake libpurple-dev libmxml-dev libxml2-dev libsqlite3-dev libgcrypt20-dev
git clone https://github.com/gkdr/lurch/
cd lurch
git submodule update --init --recursive
sudo make install

Message Delivery Receipts35

Yet another XMPP extension that's missing by default, and a quite useful one.

git clone https://git.assembla.com/pidgin-xmpp-receipts.git 
cd pidgin-xmpp-receipts/
sudo cp xmpp-receipts.so /usr/lib/purple-2/

Porting old logs to Pidgin

I wrote a Python script which can port some old logs into Pidgin. It can deal with unmodified logs from:

As for ZNC and Facebook, a lot of handiwork is needed - see the comments in the script.


pip3 install arrow bs4

And the script:

import os
import sqlite3
import logging
import re
import glob
import sys
import hashlib
import arrow
import argparse
from bs4 import BeautifulSoup
import csv

def logfilename(dt, nulltime=False):
    if nulltime:
        t = '000000'
    else:
        t = dt.format('HHmmss')

    # Pidgin names log files date.time+offset+tzname,
    # eg. 2018-03-08.000000+0000UTC.txt
    return "%s.%s%s%s.txt" % (
        dt.format('YYYY-MM-DD'),
        t,
        dt.format('Z'),
        dt.format('ZZZ')
    )

def logappend(fpath, dt, sender, msg):
    logging.debug('appending log: %s' % (fpath))
    with open(fpath, 'at') as f:
        f.write("(%s) %s: %s\n" % (
            dt.format('YYYY-MM-DD HH:mm:ss'),
            sender,
            msg
        ))
    os.utime(fpath, (dt.timestamp, dt.timestamp))
    os.utime(os.path.dirname(fpath), (dt.timestamp, dt.timestamp))

def logcreate(fpath, contact, dt, account, plugin):
    logging.debug('creating converted log: %s' % (fpath))
    if not os.path.exists(fpath):
        with open(fpath, 'wt') as f:
            f.write("Conversation with %s at %s on %s (%s)\n" % (
                contact,
                dt.format('ddd dd MMM YYYY hh:mm:ss A ZZZ'),
                account,
                plugin
            ))

def do_facebook(account, logpathbase):
    plugin = 'facebook'

    # the source for message data is from a facebook export
    # for the buddy loookup: the  pidgin buddy list xml (blist.xml) has it, but
    # only after the alias was set for every facebook user by hand
    # the file contains lines constructed:
    # UID\tDisplay Nice Name
    lookupf = os.path.expanduser('~/tmp/facebook_lookup.csv')
    lookup = {}
    with open(lookupf, newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter='\t')
        for row in reader:
            lookup.update({row[1]: row[0]})

    # the csv file for the messages is from the Facebook Data export
    # converted with https://pypi.python.org/pypi/fbchat_archive_parser
    # as: fbcap messages.htm -f csv > ~/tmp/facebook-messages.csv
    dataf = os.path.expanduser('~/tmp/facebook-messages.csv')
    reader = csv.DictReader(open(dataf),skipinitialspace=True)
    for row in reader:
        # skip conversations for now because I don't have any way of getting
        # the conversation id
        if ', ' in row['thread']:
            continue

        # the seconds are sometimes missing from the timestamps
        try:
            dt = arrow.get(row.get('date'), 'YYYY-MM-DDTHH:mmZZ')
        except arrow.parser.ParserError:
            try:
                dt = arrow.get(row.get('date'), 'YYYY-MM-DDTHH:mm:ssZZ')
            except arrow.parser.ParserError:
                logging.error('failed to parse entry: %s', row)
                continue

        dt = dt.to('UTC')
        contact = lookup.get(row.get('thread'))
        if not contact:
            continue
        msg = row.get('message')
        sender = row.get('sender')

        fpath = os.path.join(
            logpathbase,
            plugin,
            account,
            contact,
            logfilename(dt, nulltime=True)
        )

        if not os.path.isdir(os.path.dirname(fpath)):
            os.makedirs(os.path.dirname(fpath))
        logcreate(fpath, contact, dt, account, plugin)
        logappend(fpath, dt, sender, msg)

def do_zncfixed(znclogs, logpathbase, znctz):
    # I manually organised the ZNC logs into pidgin-like
    # plugin/account/contact/logfiles.log
    # structure before parsing them
    # ZNC log lines look like: [HH:MM:SS] <sender> message
    LINESPLIT = re.compile(
        r'^\[(?P<time>\d{2}:\d{2}:\d{2})\]\s+<(?P<sender>[^>]+)>\s+(?P<msg>.*)$'
    )
    searchin = os.path.join(znclogs, '**', '*.log')
    logs = glob.glob(searchin, recursive=True)
    for log in logs:
        contact = os.path.basename(os.path.dirname(log))
        account = os.path.basename(os.path.dirname(os.path.dirname(log)))
        plugin = os.path.basename(os.path.dirname(os.path.dirname(os.path.dirname(log))))
        logging.info('converting log file: %s' % (log))
        dt = arrow.get(os.path.basename(log).replace('.log', ''), 'YYYY-MM-DD')
        dt = dt.replace(tzinfo=znctz)

        if contact.startswith("#"):
            fname = "%s.chat" % (contact)
        else:
            fname = contact

        fpath = os.path.join(
            logpathbase,
            plugin,
            account,
            fname,
            logfilename(dt, nulltime=True)
        )

        if not os.path.isdir(os.path.dirname(fpath)):
            os.makedirs(os.path.dirname(fpath))

        with open(log, 'rb') as f:
            for line in f:
                line = line.decode('utf8', 'ignore')
                match = LINESPLIT.match(line)
                if not match:
                    continue
                time = match.group('time').split(':')
                dt = dt.replace(
                    hour=int(time[0]),
                    minute=int(time[1]),
                    second=int(time[2])
                )
                logcreate(fpath, contact, dt, account, plugin)
                logappend(fpath, dt, match.group('sender'), match.group('msg'))

def do_msnplus(msgpluslogs, logpathbase, msgplustz):
    NOPAR = re.compile(r'\((.*)\)')
    NOCOLON = re.compile(r'(.*):?')

    searchin = os.path.join(msgpluslogs, '**', '*.html')
    logs = glob.glob(searchin, recursive=True)
    plugin = 'msn'
    for log in logs:
        logging.info('converting log file: %s' % (log))
        contact = os.path.basename(os.path.dirname(log))

        with open(log, 'rt', encoding='UTF-16') as f:
            html = BeautifulSoup(f.read(), "html.parser")
            account = html.find_all('li', attrs={'class':'in'}, limit=1)[0]
            account = NOPAR.sub(r'\g<1>', account.span.string)
            for session in html.findAll(attrs={'class': 'mplsession'}):
                dt = arrow.get(
                    session.get('id').replace('Session_', ''),
                    'YYYY-MM-DDTHH-mm-ss'
                )
                dt = dt.replace(tzinfo=msgplustz)
                seconds = int(dt.format('s'))

                fpath = os.path.join(
                    logpathbase,
                    plugin,
                    account,
                    contact,
                    logfilename(dt, nulltime=True)
                )

                if not os.path.isdir(os.path.dirname(fpath)):
                    os.makedirs(os.path.dirname(fpath))

                for line in session.findAll('tr'):
                    # the logs only have minute precision; fake the seconds
                    # with a monotonic counter to keep message order
                    if seconds == 59:
                        seconds = 0
                    else:
                        seconds = seconds + 1

                    tspan = line.find(attrs={'class': 'time'}).extract()
                    time = tspan.string.replace('(', '').replace(')','').strip().split(':')

                    sender = line.find('th').string
                    if not sender:
                        continue

                    sender = sender.strip().split(':')[0]
                    msg = line.find('td').get_text()

                    mindt = dt.replace(
                        hour=int(time[0]),
                        minute=int(time[1]),
                        second=seconds
                    )

                    logcreate(fpath, contact, dt, account, plugin)
                    logappend(fpath, mindt, sender, msg)

def do_trillian(trillianlogs, logpathbase, trilliantz):
    SPLIT_SESSIONS = re.compile(
        r'^Session Start\s+\((?P<participants>.*)?\):\s+(?P<timestamp>[^\n]+)'
        r'\n(?P<session>.*?)^Session Close',
        re.MULTILINE | re.DOTALL
    )

    # Trillian message lines look like: [HH:MM] sender: message
    SPLIT_MESSAGES = re.compile(
        r'^\[(?P<time>[^\]]+)\]\s+(?P<sender>[^:]+):\s+(?P<msg>[^\n]+)$',
        re.MULTILINE
    )

    searchin = os.path.join(trillianlogs, '**', '*.log')

    logs = glob.glob(searchin, recursive=True)
    for log in logs:
        if 'Channel' in log:
            logging.warning(
                "Group conversations are not supported yet, skipping %s" % log
            )
            continue

        logging.info('converting log file: %s' % (log))
        contact = os.path.basename(log).replace('.log', '')
        plugin = os.path.basename(os.path.dirname(os.path.dirname(log))).lower()

        with open(log, 'rb') as f:
            c = f.read().decode('utf8', 'ignore')

            for session in SPLIT_SESSIONS.findall(c):
                participants, timestamp, session = session
                logging.debug('converting session starting at: %s' % (timestamp))
                participants = participants.split(':')
                account = participants[0]
                dt = arrow.get(timestamp, 'ddd MMM DD HH:mm:ss YYYY')
                dt = dt.replace(tzinfo=trilliantz)
                fpath = os.path.join(
                    logpathbase,
                    plugin,
                    account,
                    participants[1],
                    logfilename(dt, nulltime=True)
                )

                if not os.path.isdir(os.path.dirname(fpath)):
                    os.makedirs(os.path.dirname(fpath))

                seconds = int(dt.format('s'))
                curr_mindt = dt
                for line in SPLIT_MESSAGES.findall(session):
                    # this is a fix for ancient trillian logs where seconds
                    # were missing
                    if seconds == 59:
                        seconds = 0
                    else:
                        seconds = seconds + 1

                    time, sender, msg = line
                    try:
                        mindt = arrow.get(time, 'YYYY.MM.DD HH:mm:ss')
                    except arrow.parser.ParserError:
                        time = time.split(':')
                        mindt = dt.replace(
                            hour=int(time[0]),
                            minute=int(time[1]),
                            second=seconds
                        )

                    # creating the filw with the header has to be here to
                    # avoid empty or status-messages only files
                    logcreate(fpath, participants[1], dt, account, plugin)
                    logappend(fpath, mindt, sender, msg)

            if params.get('cleanup'):
                print('deleting old log: %s' % (log))
                os.unlink(log)

def do_skype(skypedbpath, logpathbase):
    db = sqlite3.connect(skypedbpath)

    cursor = db.cursor()
    cursor.execute('''SELECT `skypename` from Accounts''')
    accounts = cursor.fetchall()
    for account in accounts:
        account = account[0]
        # column names reconstructed from how the rows are used below
        cursor.execute('''
        SELECT
            `timestamp`, `dialog_partner`, `author`, `from_dispname`, `body_xml`
        FROM
            `Messages`
        WHERE
            `chatname` LIKE ?
        ORDER BY
            `timestamp` ASC
        ''', ('%' + account + '%',))

        messages = cursor.fetchall()
        for r in messages:
            dt = arrow.get(r[0])
            dt = dt.replace(tzinfo='UTC')
            fpath = os.path.join(
                logpathbase,
                'skype',
                account,
                r[1],
                logfilename(dt, nulltime=True)
            )

            if not os.path.isdir(os.path.dirname(fpath)):
                os.makedirs(os.path.dirname(fpath))

            logcreate(fpath, r[1], dt, account, 'skype')
            logappend(fpath, dt, r[3], r[4])

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Parameters for Skype v2 logs to Pidgin logs converter')

    # NOTE: the option names below were reconstructed; only the help texts
    # survive from the original listing
    parser.add_argument(
        '--skype_db',
        default=os.path.expanduser('~/.Skype/main.db'),
        help='absolute path to skype main.db'
    )

    parser.add_argument(
        '--pidgin_logs',
        default=os.path.expanduser('~/.purple/logs'),
        help='absolute path to Pidgin skype logs'
    )

    parser.add_argument(
        '--facebook_account',
        help='facebook account name'
    )

    parser.add_argument(
        '--loglevel',
        default='info',
        choices=['critical', 'error', 'warning', 'info', 'debug'],
        help='change loglevel'
    )

    for allowed in ['skype', 'trillian', 'msnplus', 'znc', 'facebook']:
        parser.add_argument(
            '--%s' % allowed,
            action='store_true',
            default=False,
            help='convert %s logs' % allowed
        )

        if allowed != 'skype' and allowed != 'facebook':
            parser.add_argument(
                '--%s_logs' % allowed,
                default=os.path.expanduser('~/.%s/logs' % allowed),
                help='absolute path to %s logs' % allowed
            )

            parser.add_argument(
                '--%s_timezone' % allowed,
                default='UTC',
                help='timezone name for %s logs (eg. US/Pacific)' % allowed
            )

    params = vars(parser.parse_args())

    # remove the rest of the potential loggers
    while len(logging.root.handlers) > 0:
        logging.root.removeHandler(logging.root.handlers[-1])

    LLEVEL = {
        'critical': 50,
        'error': 40,
        'warning': 30,
        'info': 20,
        'debug': 10
    }

    logging.basicConfig(
        level=LLEVEL[params.get('loglevel')],
        format='%(asctime)s - %(levelname)s - %(message)s'
    )

    if params.get('facebook'):
        logging.info('facebook enabled')
        do_facebook(params.get('facebook_account'), params.get('pidgin_logs'))

    if params.get('skype'):
        logging.info('Skype enabled; parsing skype logs')
        do_skype(params.get('skype_db'), params.get('pidgin_logs'))

    if params.get('trillian'):
        logging.info('Trillian enabled; parsing trillian logs')
        do_trillian(
            params.get('trillian_logs'),
            params.get('pidgin_logs'),
            params.get('trillian_timezone')
        )

    if params.get('msnplus'):
        logging.info('MSN Plus! enabled; parsing logs')
        do_msnplus(
            params.get('msnplus_logs'),
            params.get('pidgin_logs'),
            params.get('msnplus_timezone')
        )

    if params.get('znc'):
        logging.info('ZNC enabled; parsing znc logs')
        do_zncfixed(
            params.get('znc_logs'),
            params.get('pidgin_logs'),
            params.get('znc_timezone')
        )
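All the converters above write into Pidgin's on-disk layout, which groups logs as protocol/account/contact directories under ~/.purple/logs. A stdlib-only sketch of how such a target path is assembled - the account and contact names are made up, and the exact filename suffix Pidgin adds (UTC offset plus timezone abbreviation) is omitted for simplicity:

```python
import os
from datetime import datetime, timezone

def pidgin_log_path(base, protocol, account, contact, dt):
    # Pidgin groups logs as base/protocol/account/contact/ and names each
    # file after the conversation start time.
    fname = dt.strftime('%Y-%m-%d.%H%M%S') + '.txt'
    return os.path.join(base, protocol, account, contact, fname)

path = pidgin_log_path(
    os.path.expanduser('~/.purple/logs'),
    'skype', 'myaccount', 'somebuddy',
    datetime(2018, 3, 8, 10, 15, 0, tzinfo=timezone.utc)
)
print(path)
```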

  1. http://www.irc.org/

  2. https://www.mirc.com/

  3. https://icq.com/

  4. https://medium.com/@Dimitryophoto/icq-20-years-is-no-limit-8734e1eea8ea

  5. https://en.wikipedia.org/wiki/Windows_Live_Messenger

  6. https://www.trillian.im/

  7. http://pidgin.im/

  8. https://xmpp.org/

  9. https://xmpp.org/extensions/xep-0280.html

  10. https://en.wikipedia.org/wiki/Whatsapp

  11. https://en.wikipedia.org/wiki/BlackBerry_Messenger

  12. https://en.wikipedia.org/wiki/Google_talk

  13. https://en.wikipedia.org/wiki/Google_Hangouts

  14. https://en.wikipedia.org/wiki/MQTT

  15. http://www.salimvirani.com/facebook/

  16. https://www.facebook.com/workplace

  17. https://www.blog.google/products/g-suite/move-projects-forward-one-placehangouts-chat-now-available/

  18. https://telegram.org/

  19. https://xmpp.org/extensions/xep-0384.html

  20. https://matrix.org/

  21. https://xmpp.org/extensions/xep-0384.html

  22. https://xmpp.org/extensions/xep-0166.html

  23. https://play.google.com/store/apps/details?id=com.mailsite.astrachat

  24. https://f-droid.org/packages/eu.siacs.conversations/

  25. https://www.linphone.org/

  26. https://bitbucket.org/EionRobb/purple-hangouts/src#markdown-header-compiling

  27. https://github.com/dequis/purple-facebook/issues/371

  28. http://repo.xposed.info/

  29. https://lua.xprivacy.eu/

  30. https://developer.pidgin.im/wiki/ThirdPartyPlugins

  31. https://bitbucket.org/rekkanoryo/purple-plugin-pack/

  32. https://xmpp.org/extensions/xep-0280.html

  33. https://xmpp.org/extensions/xep-0384.html

  34. https://f-droid.org/packages/eu.siacs.conversations/

  35. https://xmpp.org/extensions/xep-0184.html


The internet that took over the Internet

There is a video out there, titled The Fall of The Simpsons: How it Happened1. It starts by introducing a mediocre show that airs every night, called "The Simpsons", and compares it to a genius cartoon that used to air in the early 90s, called "The Simpsons". Watch the video, because it's good, and I'm about to use its conclusion.

It reckons that the tremendous difference is due to the shrinking layers in the jokes and, more importantly, in the characters after season 7. I believe something similar happened online, which made the Internet become the internet.

Many moons ago, while still living in London, the pedal of our flatmate's sewing machine broke down, and I started digging for replacement parts for her. I stumbled upon a detailed website about ancient capacitors2. It resembled other, gorgeous sources of knowledge: one of my all-time favourites is leofoo's site on historical Nikon equipment3. All decades-old sites, containing specialist-level knowledge on topics that used to be found only in books in dusty corners of forgotten libraries.

There's an interesting article about how chronological ordering destroyed the original way of curating content4 during the early online era, and I think the article got many things right. Try to imagine a slow web: slow connection, slow updates, slow everything. Take away social networks - no Twitter, no Facebook. Forget news aggregators: no more Hacker News or Reddit, not even Technorati. Grab your laptop and put it down on a desk, preferably in a corner - you're not allowed to move it. Use the HTML version of DuckDuckGo5 to search, and navigate with links from one site to another. That's what it was like: surfing on the information highway, and if you really want to experience it, UbuWeb6 will allow you to do so.

Most of the content was hand crafted, arranged to be readable, not searchable; it was human first, not machine first. Nearly everything online had a lot of effort put into it, even if the result was eye-blowing red text on blue background7; somebody worked a lot on it. If you wanted it out there you learnt HTML, how to use FTP, how to link, how to format your page.

We used to have homepages. Homes on the Internet. Not profiles, no; a profile is something the authorities make about you in a dossier.

6 years ago Anil Dash released a video, "The web we lost"8, lamenting web 2.0 - I despise that phrase; a horrible buzzword everyone used to label anything with; put 'cloud' and 'blockchain' together and you'll get the level of buzz 'web 2.0' had - and how it lost to social media. But make no mistake: the Internet, the carefully laboured web 1.0, had already gone underground by the time tools made it simple for anyone to publish with just a few clicks.

The social web lost against social media, because it didn't (couldn't?) keep up with making things even simpler. Always on, always instant, always present. It served the purpose of a disposable web perfectly, where the most common goal is to seek fame, attention, to follow trends, to gain followers.

There are people who never gave up, and are still tirelessly building tools, protocols, ideas, to lead people out of social media. The IndieWeb9's goals are simple: own your data, have an online home, and connect with others through this. And so it's completely reasonable to hear:

I want blogging to be as easy as tweeting.10

But... what will this really achieve? This may sound rude and elitist, but the more I think about it the more I believe: the true way out of the swamp of social media is for things to require a little effort.

To make people think about what they produce, to make them connect to their online content. It's like IKEA11: once you put time, and a minor amount of sweat - or swearing - into it, it'll feel more yours, than something comfortably delivered.

The Internet is still present, but it's shrinking. Content people really care about, customised looking homepages, carefully curated photo galleries are all diminishing. It would be fantastic to return to a world of personal websites, but that needs the love and work that used to be put into them, just like 20 years ago.

At this point in time, most people don't seem to relate to their online content. It's expendable. We need to make them care about it, and simpler tooling, on its own, will not help with the lack of emotional connection.

  1. https://www.youtube.com/watch?v=KqFNbCcyFkk

  2. http://www.vintage-radio.com/repair-restore-information/valve_capacitors.html

  3. http://www.mir.com.my/rb/photography/

  4. https://stackingthebricks.com/how-blogs-broke-the-web/

  5. https://duckduckgo.com/html/

  6. http://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html

  7. http://code.divshot.com/geo-bootstrap/

  8. http://anildash.com/2012/12/the-web-we-lost.html

  9. https://indieweb.org

  10. http://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html

  11. https://en.wikipedia.org/wiki/IKEA_effect


Guide on how to make your website printable with CSS

Printing?! It's 2018!

"Printing" doesn't always mean putting it on paper. When people print a web article, it sometimes ends up as a PDF, because saving the HTML is not usable. The reasons for this differ: JavaScript-rendered content, scripts missing from the saved result, the lack of MHTML support in browsers, etc. What's important is that providing a print-friendly format for your site makes it possible for people to save it in a usable way.

Printing might still be relevant, because that's the only method that gives you a physical object. I have long entries about journeys, visits of foreign places. At a certain point in time I was tempted to put together a photobook from the images there, but the truth is: it's a lot of work, especially if you've more or less done it already once by writing your entry.

There's also the completely valid case of archiving: hard copies have a life of decades if not centuries, when stored properly, unlike any electronic media we currently have as an option.

That little extra CSS

Before jumping into the various hacks that help printing, it's important to mention how to add printer-only instructions to your CSS. There are two ways, either using:

@media print {
    /* print-only rules go here */
}

inside an existing CSS, or by adding another CSS file specifically for print media into the HTML <head> section:

    <link rel="stylesheet" type="text/css" href="print.css" media="print">

White background, black(ish) text

Most printers operate with plain, white paper, so unless there's a very, very good reason for printing background color, just get rid of it.

The same applies to the text colour: go a bit lighter than pure black, which saves toner.

* {
    background-color: #fff !important;
    color: #222;
}

Use printer and PDF safe fonts

If you take a look at the history of printers vs fonts, there used to be many problems around this topic - some printers even required a font cartridge to be able to properly print fonts outside the basic options.1

To avoid rendering problems, aliasing issues, generally speaking: unreadable fonts, stick to one of the base 14 fonts, which are, by definition, part of the PDF standard2: Times, Helvetica, and Courier (each in regular, bold, italic/oblique, and bold italic/oblique variants), plus Symbol and Zapf Dingbats. So for example:

* {
    font-size: 11pt !important;
    font-family: Helvetica, sans-serif !important;
}

If you do insist on special fonts, eg. you have icons in fonts, you might want to consider using SVG instead of fonts for icons - otherwise printing them properly will become a problem.

Besides the potential printing issues one more reason to go with a standard, base font is that if for any reason the text needs to go through character recognition for scanning it back - say it's an archival hard copy and the only one left after a data loss indicent - the simpler and wider known the font, the better your chances for getting the characters properly recognized.

Pages and page breaks

It's very annoying to find a heading at the bottom of a printed page, or a paragraph broken across two pages - although the latter depends on paragraph length. I generally recommend disallowing page breaks at these locations.

Apart from this it's a good idea to have a margin around the edges so you have an area where you can handle the page, not covering any of the text, or where it can be glued together as pages in a book.

@page {
    margin: 0.5in;
}

h1, h2, h3, h4, h5, h6 {
    page-break-after: avoid !important;
}

p, li, blockquote, figure, img {
    page-break-inside: avoid !important;
}


Images

Printing images is tricky: most images are sized for the web, which makes them too small in resolution yet too large in the percentage of space they take when printed. The alt text and the image headline - usually stored in the alt and title attributes - are also worth printing, but unfortunately the href trick doesn't work with them: you can't add ::before or ::after to self-closing tags, such as images.

Lately, instead of using simple img tags, I switched to using figure, along with figcaption - this way the headline became possible to print.

Apart from this I've limited the size of the images by view-width and view-height, so they never become too large and occupy complete pages.

figure {
    margin: 1rem 0;
}

figcaption {
    text-align: left;
    margin: 0 auto;
    padding: 0.6rem;
    font-size: 0.9rem;
}

figure img {
    display: block;
    max-height: 35vh;
    max-width: 90vw;
    outline: none;
    width: auto;
    height: auto;
    margin: 0 auto;
    padding: 0;
}

This is how images inside figure (should) look in print with the styling above:

This is how images can look like when some width/height limitations are applied in printing

Source codes

If you have code blocks in your page it's useful to have them coloured, but still dark-on-light.

I'm using Pandoc's built-in syntax highlighting3 and the following styling for printing:

code, pre {
    max-width: 96%;
    border: none;
    color: #222;
    word-break: break-all;
    word-wrap: break-word;
    white-space: pre-wrap;
    page-break-inside: auto;
    font-family: "Courier", "Courier New", monospace !important;
}

pre {
    border: 1pt dotted #666;
    padding: 0.6em;
}

/* code within pre - this is to avoid double borders */
pre code {
    border: none;
}

code.sourceCode span    { color: black; }
code.sourceCode span.al { color: black; }
code.sourceCode span.at { color: black; }
code.sourceCode span.bn { color: black; }
code.sourceCode span.bu { color: black; }
code.sourceCode span.cf { color: black; }
code.sourceCode span.ch { color: black; }
code.sourceCode span.co { color: darkgray; }
code.sourceCode span.dt { color: black; }
code.sourceCode span.dv { color: black; }
code.sourceCode span.er { color: black; }
code.sourceCode span.ex { color: darkorange; }
code.sourceCode span.fl { color: black; }
code.sourceCode span.fu { color: darkorange; }
code.sourceCode span.im { color: black; }
code.sourceCode span.kw { color: darkcyan; }
code.sourceCode span.op { color: black; }
code.sourceCode span.ot { color: black; }
code.sourceCode span.pp { color: black; }
code.sourceCode span.sc { color: black; }
code.sourceCode span.ss { color: black; }
code.sourceCode span.st { color: magenta; }
code.sourceCode span.va { color: darkturquoise; }

It should result in something similar:

Color printing source code

The basic CSS solution

Links are the single most important things on the internet; they are the internet. However, when they get printed, the end result usually looks something like this:

Before showing URLs - example showing Wikipedia entry "Mozilla software rebranded by Debian"

In order to avoid this problem, the URLs behind the links need to be shown as if they were part of the text. There is a rather simple way to do it:

a::after {
    content: " (" attr(href) ") ";
    font-size: 90%;
}

but unfortunately it makes the text rather ugly and very hard to read:

After showing URLs

Aaron Gustafson's solution4

There is a very nice, minimalistic Javascript solution5 that collects all links on the page and converts them into footnotes on the fly, when it detects a print request.

This solution is way nicer, so I certainly recommend using it as well (it's a supplement for the CSS solution above) even if it requires Javascript. It's a copy-paste solution; just put it in your header:

<script type="text/javascript">
    // <![CDATA[
    Function:       footnoteLinks()
    Author:         Aaron Gustafson (aaron at easy-designs dot net)
    Creation Date:  8 May 2005
    Version:        1.3
    Homepage:       http://www.easy-designs.net/code/footnoteLinks/
    License:        Creative Commons Attribution-ShareAlike 2.0 License
    Note:           This version has reduced functionality as it is a demo of
                    the script's development
    function footnoteLinks(containerID,targetID) {
      if (!document.getElementById ||
          !document.getElementsByTagName ||
          !document.createElement) return false;
      if (!document.getElementById(containerID) ||
          !document.getElementById(targetID)) return false;
      var container = document.getElementById(containerID);
      var target    = document.getElementById(targetID);
      var h2        = document.createElement('h2');
      var h2_txt    = document.createTextNode('Links');
      var coll = container.getElementsByTagName('*');
      var ol   = document.createElement('ol');
      var myArr = [];
      var thisLink;
      var num = 1;
      for (var i=0; i<coll.length; i++) {
        var thisClass = coll[i].className;
        if ( coll[i].getAttribute('href') ||
             coll[i].getAttribute('cite') ) {
          thisLink = coll[i].getAttribute('href') ? coll[i].href : coll[i].cite;
          var note = document.createElement('sup');
          var note_txt;
          var j = inArray.apply(myArr,[thisLink]);
          if ( j || j===0 ) {
            note_txt = document.createTextNode(j+1);
          } else {
            var li     = document.createElement('li');
            var li_txt = document.createTextNode(thisLink);
            note_txt = document.createTextNode(num);
          if (coll[i].tagName.toLowerCase() == 'blockquote') {
            var lastChild = lastChildContainingText.apply(coll[i]);
          } else {
            coll[i].parentNode.insertBefore(note, coll[i].nextSibling);
      return true;
    window.onload = function() {
    // ]]>
  </script>
  <script type="text/javascript">
    // <![CDATA[
    Excerpts from the jsUtilities Library
    Version:        2.1
    Homepage:       http://www.easy-designs.net/code/jsUtilities/
    License:        Creative Commons Attribution-ShareAlike 2.0 License
    Note:           If you change or improve on this script, please let us know.
    if(Array.prototype.push == null) {
      Array.prototype.push = function(item) {
        this[this.length] = item;
        return this.length;
    // ---------------------------------------------------------------------
    //                  function.apply (if unsupported)
    //           Courtesy of Aaron Boodman - http://youngpup.net
    // ---------------------------------------------------------------------
    if (!Function.prototype.apply) {
      Function.prototype.apply = function(oScope, args) {
        var sarg = [];
        var rtrn, call;
        if (!oScope) oScope = window;
        if (!args) args = [];
        for (var i = 0; i < args.length; i++) {
          sarg[i] = "args["+i+"]";
        call = "oScope.__applyTemp__(" + sarg.join(",") + ");";
        oScope.__applyTemp__ = this;
        rtrn = eval(call);
        oScope.__applyTemp__ = null;
        return rtrn;
    function inArray(needle) {
      for (var i=0; i < this.length; i++) {
        if (this[i] === needle) {
          return i;
      return false;
    function addClass(theClass) {
      if (this.className != '') {
        this.className += ' ' + theClass;
      } else {
        this.className = theClass;
    function lastChildContainingText() {
      var testChild = this.lastChild;
      var contentCntnr = ['p','li','dd'];
      while (testChild.nodeType != 1) {
        testChild = testChild.previousSibling;
      var tag = testChild.tagName.toLowerCase();
      var tagInArr = inArray.apply(contentCntnr, [tag]);
      if (!tagInArr && tagInArr!==0) {
        testChild = lastChildContainingText.apply(testChild);
      return testChild;
    // ]]>
  </script>
  <style type="text/css" media="screen">
    .printOnly {
      display: none;
    }
  </style>
  <style type="text/css" media="print">
    a:visited:after {
      content: " (" attr(href) ") ";
      font-size: 90%;
    }
    html.noted a:link:after,
    html.noted a:visited:after {
      content: '';
    }
  </style>

Alternative approach: always using footnotes for URLs

A little while ago I made a decision to put all links into footnotes by default - no in-text links which will bring you to another site. This is a design decision and doesn't apply to most already existing sites, but if you, just as me, think there is value in it, consider it as an option. It also makes the two hacks above obsolete; however, it has its own problems, such as reading the site entries via RSS.


opacity and transparency: it can get blurry

A simple and sort of lazy solution is to apply opacity to text to make it look slightly different from the rest, instead of figuring out the proper color code. Unfortunately some of these opacity settings can result in blurry or unusable printed text:

CSS opacity resulting in blurry text

Therefore I suggest avoiding opacity and transparency on all elements in your printing styles.

Happy printing!

  1. https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/print_c_fonts.mspx

  2. https://en.wikipedia.org/wiki/Portable_Document_Format#Standard_Type_1_Fonts_.28Standard_14_Fonts.29

  3. http://pandoc.org/MANUAL.html#syntax-highlighting

  4. https://alistapart.com/article/improvingprint

  5. https://alistapart.com/d/improvingprintfinal.html


Dawn at Dojo Stara Wieś

Shutter speed
1/80 sec
Focal length (as set)
35.0 mm
ISO 100
smc PENTAX-DA 35mm F2.4 AL

Thanks to Pakua UK1 I had the chance to spend a weekend at Dojo Stara Wieś2 in Poland. Unexpected as it is, the Dojo is a small village built for Japanese martial arts, in Japanese architectural style.

While it's not quite the fairytale Japan one might expect, in the end the only thing one could wish for is a small forest of giant bamboo, because everything else here is tranquility. The ponds were full of huge frogs and lovely newts, the air was filled with loud and happy birds - it's a lovely place.

I took the picture not that early, sometime just after sunrise.

  1. https://www.pakuauk.com/

  2. http://www.dojostarawies.com/en.html


Engawa of the dojo building at Dojo Stara Wieś

Shutter speed
1/60 sec
Focal length (as set)
50.0 mm
ISO 400
K or M Lens

This is the outdoor veranda, the engawa, of the dojo building itself at Dojo Stara Wieś1. The building hosts three beautiful areas to practice martial arts, in a place which resembles their origin quite well.

  1. http://www.dojostarawies.com/en.html


La Caldera de Taburiente panorama

Shutter speed
Focal length (as set)

A panorama from Roque de los Muchachos on La Palma at 2426m.



Shutter speed
Focal length (as set)

The Roque de los Muchachos hosts a significant number of rather important astronomical telescopes. Unfortunately visitors are not allowed up here during the night, because even that tiny bit of light pollution could distort measurements, but it's certainly a unique view, even during daytime.


La Caldera de Taburiente

Shutter speed
1/500 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

The very top of La Palma is the Roque de los Muchachos, at 2426m. On cloudy days, this is the view - that pointy thing in the distance is the Teide on Tenerife, way above the clouds, at 3718m.


Stream of Cascada de Los Colores

Shutter speed
1/60 sec
Focal length (as set)
35.0 mm
ISO 400
smc PENTAX-DA 35mm F2.4 AL

On the Canary Island La Palma, unexpectedly, there is a lot of water. Some of this water ends up in the Cascada de Los Colores, a small waterfall of red, yellow, and green streams. Soon the stream becomes mostly red, stays like that for a while, slowly turns into yellow, as other, clear water connects to it, and in the end, it fades into ordinary water.


Using I²C sensors on a linux via a USB and IIO

Notes: no warranties. This is hardware, so it can cause trouble with your system, especially if you short-circuit something or - as I did once, many moons ago - solder on the fly while the thing is still connected to the USB port. Don't do that.

Proto-assembly of Digispark ATTiny85, Adafruit BME280, and Adafruit SI1145
Shutter speed
1/60 sec
Focal length (as set)
85.0 mm
ISO 800
HD PENTAX-DA 16-85mm F3.5-5.6 ED DC WR

USB I²C adapter

A few months ago I wrote about using a Raspberry Pi with some I²C sensors to collect data for Collectd1. While it worked well, it made me realise that having the RPi running a full fledged operating system means I need to apply security patches to yet another machine, and that is not something I want to deal with. I also have a former laptop, running as a ZFS based NAS, so why not use that?

After a fruitless dig into using the I²C port in the VGA connector2, I verified that indeed, as the tutorial concluded, it doesn't work with embedded Intel graphics on Linux.

As an alternative I started looking at USB I²C adapters, but they are expensive. There is one project though that looked very promising, and it didn't require a full-fledged Arduino either: Till Harbaum's I²C-Tiny-USB3.

It uses an ATtiny85 board - as the name suggests, it's tiny, and turned out to be a perfectly fine USB to I²C adapter. You can buy one here: https://amzn.to/2ubPs6I

Note: there's an Adafruit FT232H, which, in theory, is capable of the same thing. I haven't tested it.

I2C-Tiny-USB firmware

The git repository already contains a built hex file, but in case any modifications are needed, this is how to build it:

sudo -i
apt install gcc-avr avr-libc
cd /usr/src
git clone https://github.com/harbaum/I2C-Tiny-USB
cd I2C-Tiny-USB/digispark
make hex

Make sure I2C_IS_AN_OPEN_COLLECTOR_BUS is uncommented; I tried with real pull-up resistors, and, to my surprise, the sensors stopped showing up.

micronucleus flash utility

To flash the hex file, you'll need micronucleus, a tiny flasher utility.

sudo -i
apt install libusb-dev
cd /usr/src
git clone https://github.com/micronucleus/micronucleus
cd micronucleus/commandline
make CONFIG=t85_default
make install


micronucleus --run --dump-progress --type intel-hex main.hex

Then connect the device to a USB port, and wait for the flash process to finish.

I²C on linux

Surprisingly enough, Debian did not show any I²C devices in /dev - apparently the kernel module for this is not loaded by default, so load it, and make that load permanent:

sudo -i
modprobe i2c-dev
echo "i2c-dev" >> /etc/modules

Connect the Attiny85

Normally a PC already has a fair number of I²C adapters, so the new device will show up with yet another device number - and that number is rather important. The kernel log can help identify it:

dmesg | grep i2c-tiny-usb
[    3.721200] usb 5-2: Product: i2c-tiny-usb
[    3.725693] i2c-tiny-usb 5-2:1.0: version 2.01 found at bus 005 address 003
[    3.736109] i2c i2c-1: connected i2c-tiny-usb device
[    3.736584] usbcore: registered new interface driver i2c-tiny-usb

To read just the device number:

i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')

Note: the device number might change after a reboot. For me, it was 10 when simply plugged in, and 1 if it was connected during a reboot.
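The sed expression used throughout this post can be sanity-checked against one of the kernel log lines shown above:

```shell
# sample line copied from the dmesg output above
line='[    3.736109] i2c i2c-1: connected i2c-tiny-usb device'
# same extraction as used in the later snippets
i2cdev=$(echo "$line" | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "$i2cdev"
```

The greedy `.*` makes sed match the last `i2c-<number>` preceded by whitespace, so this prints `1` for the sample line.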

Detecting I2C devices

i2cdetect is a program that lists all the devices responding on an I²C adapter. The Adafruit website has a collection of addresses for their sensors4. The number after i2cdetect -y is the device number identified in the previous step, and the output says I have 2 devices:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
i2cdetect -y ${i2cdev}
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: 60 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- -- 77   

I²C 0x77: BME280 temperature, pressure, humidity sensor5

This is where things got interesting. Normally, when a BME280 sensor comes into play, every tutorial starts pulling out Python for the task, given that most of the Adafruit libraries are in Python.

Don't get me wrong, those are great libs, and the Python solutions are decent, but doing a pip3 search bme280 resulted in this:

bme280 (0.5)                           - Python Driver for the BME280 Temperature/Pressure/Humidity Sensor from Bosch.
Adafruit-BME280 (1.0.1)                - Python code to use the BME280 temperature/humidity/pressure sensor with a Raspberry Pi or BeagleBone black.
adafruit-circuitpython-bme280 (2.0.2)  - CircuitPython library for the Bosch BME280 temperature/humidity/pressure sensor.
bme280_exporter (0.1.0)                - Prometheus exporter for the Bosh BME280 sensor
RPi.bme280 (0.2.2)                     - A library to drive a Bosch BME280 temperature, humidity, pressure sensor over I2C

Which one to use? Then there are the dependencies, and the code quality varies from one to another.

So I started digging around the internet, GitHub, and other sources, and somehow I realised there's a kernel module named bmp280. The BMP280 is a sibling of the BME280 - the same chip without the humidity sensor. So the question was: what in the world is drivers/iio/pressure/bmp280-i2c.c and how can I use it?

It turned out that apart from hwmon, there's another sensor layer in the Linux kernel, called Industrial I/O - iio. It was added under this name somewhere around 2012, around 3.156, and its purpose is to offer a subsystem for high speed sensors7. While high speed is not a thing for me this time, I do trust the kernel code quality.

To my great surprise, the BMP280 driver is even included in the Debian Sid kernel as a module, and adding it was a mere:

sudo -i
modprobe bmp280
echo "bmp280" >> /etc/modules
modprobe bmp280-i2c
echo "bmp280-i2c" >> /etc/modules

To actually enable the device, the i2c bus has to be told of the sensor's existence:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "bme280 0x77" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

The kernel log should show something like this:

kernel: bmp280 1-0077: 1-0077 supply vddd not found, using dummy regulator
kernel: bmp280 1-0077: 1-0077 supply vdda not found, using dummy regulator
kernel: i2c i2c-1: new_device: Instantiated device bme280 at 0x77

Verify the device is working:

tree /sys/bus/iio/devices/iio\:device0
├── dev
├── in_humidityrelative_input
├── in_humidityrelative_oversampling_ratio
├── in_pressure_input
├── in_pressure_oversampling_ratio
├── in_pressure_oversampling_ratio_available
├── in_temp_input
├── in_temp_oversampling_ratio
├── in_temp_oversampling_ratio_available
├── name
├── power
│   ├── async
│   ├── autosuspend_delay_ms
│   ├── control
│   ├── runtime_active_kids
│   ├── runtime_active_time
│   ├── runtime_enabled
│   ├── runtime_status
│   ├── runtime_suspended_time
│   └── runtime_usage
├── subsystem -> ../../../../../../../../../bus/iio
└── uevent

2 directories, 20 files

And that's it. The BME280 is ready to be used:

for f in  in_pressure_input in_temp_input in_humidityrelative_input; do echo "$f: $(cat /sys/bus/iio/devices/iio\:device0/$f)"; done
in_pressure_input: 102.112671875
in_temp_input: 26050
in_humidityrelative_input: 49.611328125

According to the BME280 datasheet8, under recommended modes of operation (3.5.1 Weather monitoring), the oversampling for each sensor should be 1, so:

sudo -i
echo 1 > /sys/bus/iio/devices/iio\:device0/in_pressure_oversampling_ratio
echo 1 > /sys/bus/iio/devices/iio\:device0/in_temp_oversampling_ratio
echo 1 > /sys/bus/iio/devices/iio\:device0/in_humidityrelative_oversampling_ratio

I²C 0x60: SI1145 UV index, light, IR sensor9

Unlike the BME280, the SI1145 driver is not shipped with the Debian Sid kernel - the kernel module exists, it's simply not included in the Debian build. I've also learnt that this sensor is a heavyweight player, and that I should have bought something way simpler for mere light measurements; something that's already covered by the out-of-the-box kernel modules, like a TSL256110.

But I wasn't willing to give up the SI1145, it being an expensive sensor, so in order to have it in the kernel, I had to compile the module myself. Before getting started, make sure that the linux-headers package matching the running kernel is installed (the build needs its Module.symvers), and that the kernel source you download matches the running version.

Once those two are true, identify the kernel version:

uname -a
Linux system-hostname 4.17.0-1-amd64 #1 SMP Debian 4.17.3-1 (2018-07-02) x86_64 GNU/Linux

The output contains 4.17.3-1 - that is the actual kernel version; 4.17.0-1-amd64 is just the Debian package name.
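That version string can be picked out programmatically as well; a sketch against the sample output above (the sed pattern assumes Debian's `uname` banner format):

```shell
# sample uname -a output copied from above
una='Linux system-hostname 4.17.0-1-amd64 #1 SMP Debian 4.17.3-1 (2018-07-02) x86_64 GNU/Linux'
# grab the version following the word "Debian"
kver=$(echo "$una" | sed -r 's/.*Debian ([0-9]+\.[0-9]+\.[0-9]+-[0-9]+).*/\1/')
echo "$kver"
```

This prints 4.17.3-1 for the sample line.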

Get the kernel; extract it; add the SI1145 to the config; compile the drivers/iio/light modules; add that to the local modules.

sudo -i
cd /usr/src/
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.3.tar.gz
tar xf linux-4.17.3.tar.gz
cd linux-4.17.3
cp /boot/config-4.17.0-1-amd64 .config
cp ../linux-headers-4.17.0-1-amd64/Module.symvers .
echo "CONFIG_SI1145=m" >> .config
make menuconfig
# save it
# exit
make prepare
make modules_prepare
make SUBDIRS=scripts/mod
make M=drivers/iio/light SUBDIRS=drivers/iio/light modules
cp drivers/iio/light/si1145.ko /lib/modules/$(uname -r)/kernel/drivers/iio/light/
modprobe si1145
echo "si1145" >> /etc/modules

Once that is done, and there are no error messages, enable the device:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "si1145 0x60" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

The kernel log should show something like this:

kernel: si1145 1-0060: device ID part 0x45 rev 0x0 seq 0x8
kernel: si1145 1-0060: no irq, using polling
kernel: i2c i2c-1: new_device: Instantiated device si1145 at 0x60

Verify the device is working:

tree /sys/bus/iio/devices/iio\:device1
├── buffer
│   ├── data_available
│   ├── enable
│   ├── length
│   └── watermark
├── current_timestamp_clock
├── dev
├── in_intensity_ir_offset
├── in_intensity_ir_raw
├── in_intensity_ir_scale
├── in_intensity_ir_scale_available
├── in_intensity_offset
├── in_intensity_raw
├── in_intensity_scale
├── in_intensity_scale_available
├── in_proximity0_raw
├── in_proximity_offset
├── in_proximity_scale
├── in_proximity_scale_available
├── in_temp_offset
├── in_temp_raw
├── in_temp_scale
├── in_uvindex_raw
├── in_uvindex_scale
├── in_voltage_raw
├── name
├── out_current0_raw
├── power
│   ├── async
│   ├── autosuspend_delay_ms
│   ├── control
│   ├── runtime_active_kids
│   ├── runtime_active_time
│   ├── runtime_enabled
│   ├── runtime_status
│   ├── runtime_suspended_time
│   └── runtime_usage
├── sampling_frequency
├── scan_elements
│   ├── in_intensity_en
│   ├── in_intensity_index
│   ├── in_intensity_ir_en
│   ├── in_intensity_ir_index
│   ├── in_intensity_ir_type
│   ├── in_intensity_type
│   ├── in_proximity0_en
│   ├── in_proximity0_index
│   ├── in_proximity0_type
│   ├── in_temp_en
│   ├── in_temp_index
│   ├── in_temp_type
│   ├── in_timestamp_en
│   ├── in_timestamp_index
│   ├── in_timestamp_type
│   ├── in_uvindex_en
│   ├── in_uvindex_index
│   ├── in_uvindex_type
│   ├── in_voltage_en
│   ├── in_voltage_index
│   └── in_voltage_type
├── subsystem -> ../../../../../../../../../bus/iio
├── trigger
│   └── current_trigger
└── uevent

5 directories, 59 files

Note: I tried, others tried, but even though, in theory, there's a temperature sensor on the SI1145, it doesn't really work. It seems to read the value once on startup, and that's it.

CLI script

In order to have a quick view without collectd or other dependencies, a script like this is more than sufficient:

#!/usr/bin/env bash

temperature=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_temp_input)/1000" | bc)
pressure=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_pressure_input)*10/1" | bc) 
humidity=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_humidityrelative_input)/1" | bc) 
light_vis=$(cat /sys/bus/iio/devices/iio\:device1/in_intensity_raw) 
light_ir=$(cat /sys/bus/iio/devices/iio\:device1/in_intensity_ir_raw) 
light_uv=$(cat /sys/bus/iio/devices/iio\:device1/in_uvindex_raw) 

echo "$(hostname -f) $(date)

Temperature: $temperature °C
Pressure: $pressure mBar
Humidity: $humidity %
Visible light: $light_vis lm
IR light: $light_ir lm
UV light: $light_uv lm"

The output:

your.hostname Thu Jul 12 08:48:40 BST 2018

Temperature: 25.59 °C
Pressure: 1021.65 mBar
Humidity: 49.28 %
Visible light: 287 lm
IR light: 334 lm
UV light: 12 lm

Note: I'm not completely certain that the light unit is actually in lumens; the documentation is a bit fuzzy about that, so I assumed it is.


The next step is to actually collect the readouts from the sensors. I'm still using collectd11, a small, ancient, yet stable and very good little metrics collection system, because it's enough. It writes ordinary rrd files, which can be plotted into graphs with tools like Collectd Graph Panel12.

Unfortunately there's not yet an iio plugin for collectd (or I couldn't find it yet, and if you did, please let me know), so I had to add an extremely simple shell script as an exec plugin to collectd.


#!/usr/bin/env bash


# this will run only when collectd loads; once the devices are
# registered, re-running it throws an error, but that causes no
# problems
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "bme280 0x77" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device
echo "si1145 0x60" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

while true; do
    for sensor in /sys/bus/iio/devices/iio\:device*; do 
        name=$(cat "${sensor}/name")
        if [ "$name" == "bme280" ]; then

            # unit: °C
            temp=$(echo "scale=2;$(cat ${sensor}/in_temp_input)/1000" | bc )
            echo "PUTVAL $HOSTNAME/sensors-$name/temperature-temperature interval=$INTERVAL N:${temp}"

            # unit: mBar
            pressure=$(echo "scale=2;$(cat ${sensor}/in_pressure_input)*10/1" | bc)
            echo "PUTVAL $HOSTNAME/sensors-$name/pressure-pressure interval=$INTERVAL N:${pressure}"

            # unit: %
            humidity=$(echo "scale=2;$(cat ${sensor}/in_humidityrelative_input)/1" | bc)
            echo "PUTVAL $HOSTNAME/sensors-$name/percent-humidity interval=$INTERVAL N:${humidity}"

        elif [ "$name" == "si1145" ]; then

            # unit: lumen?
            ir=$(cat ${sensor}/in_intensity_ir_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-ir interval=$INTERVAL N:${ir}"

            light=$(cat ${sensor}/in_intensity_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-light interval=$INTERVAL N:${light}"

            uv=$(cat ${sensor}/in_uvindex_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-uv interval=$INTERVAL N:${uv}"
        fi
    done

    sleep "$INTERVAL"
done


LoadPlugin "exec"
<Plugin exec>
  Exec "nobody" "/usr/local/lib/collectd/iio.sh"
</Plugin>

The results are:

BME280 temperature graph in Collectd Graph Panel
SI1145 raw light measurement in Collectd Graph Panel


The Industrial I/O layer is something I've now heard of for the first time, but it's extremely promising: the code is clean, it already supports a lot of sensors, and it seems possible to extend it with relative ease.

Unfortunately its documentation is brief, and I'm yet to find a metrics collector that supports it out of the box, but that doesn't mean there won't be one very soon.

Currently I'm very happy with my budget I²C USB solution - not having to run a Raspberry Pi for simple metrics collection is certainly a win, and utilising the sensors directly from the kernel also feels very decent.

  1. https://petermolnar.net/raspberry-pi-bme280-si1145-collectd-mosquitto/

  2. https://web.archive.org/web/20160506154718/http://www.paintyourdragon.com/?p=43

  3. https://github.com/harbaum/I2C-Tiny-USB/tree/master/digispark

  4. https://learn.adafruit.com/i2c-addresses

  5. https://www.adafruit.com/product/2652

  6. https://github.com/torvalds/linux/tree/a980e046098b0a40eaff5e4e7fcde6cf035b7c06

  7. https://wiki.analog.com/software/linux/docs/iio/iio

  8. https://cdn-shop.adafruit.com/datasheets/BST-BME280_DS001-10.pdf

  9. https://www.adafruit.com/product/1777

  10. https://www.adafruit.com/product/439

  11. http://collectd.org/

  12. https://github.com/pommi/CGP


Do websites want to force us to use Reader Mode?

Excuse me, sir, but where's the content?

A couple of days ago I blindly clicked on a link1 on Hacker News2 - it was pointing at a custom domain hosted on Medium. Out of curiosity, I changed the browser size to external 1280x720 - viewport 1280 × 646 -, turned off uBlock Origin3 and noscript4 so I'd mimic a common laptop setup, only to be presented with this:

Screenshot of blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08 when the window size is 1280x720

I don't even know where to start listing the problems.

Screenshot of javascript requests made by blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08

So, foolishly, I started a now flagged thread5, begging publishers to go and start a static blog, or just publish these posts as plain HTML documents. Even a Word 97 HTML export would be better.

I decided to keep the browser like that, same resolution, no adblockers, and visited 2 more sites: bbc.co.uk, and theguardian.com.

Screenshot of www.bbc.co.uk/news/uk-44933429
Screenshot of javascript requests made by www.bbc.co.uk/news/uk-44933429
Screenshot of www.theguardian.com/world/2018/jul/23/greeks-urged-to-leave-homes-as-wildfires-spread-near-athens
Screenshot of javascript requests made by www.theguardian.com/world/2018/jul/23/greeks-urged-to-leave-homes-as-wildfires-spread-near-athens

Well... at least the BBC doesn't have sticky headers and/or footers.

How did we get here?

Good examples

Let's take a look at something, which is actually readable - a random entry from Wikipedia:

Screenshot of a random article from wikipedia

Note the differences:

Or another readable thing:

Screenshot of textfiles.com/magazines/LOD/lod-1 - Legion of Doom technical journal, volume 1, 1987

A 31-year-old text file - still perfectly readable.

Or loading the first mentioned article in Firefox Reader Mode6:

Screenshot of a Medium article in Firefox Reader Mode

Developers gonna developer

So back to that thread. While most of the reactions were positive, there were opposing ones as well; here are a few of those.

I barely see the problem. Sure, the header and footer aren't perfect, but stupidly large? I also don't feel any "cpu melting javascripts" and my PC is barely usable when I compile anything. For me, Medium provides a very readable experience that is much better than the average static blog. And I don't have to fear a malware ridden page like an old Wordpress installation. https://news.ycombinator.com/item?id=17592735

WordPress comes with its own can of worms, but it did introduce automatic security updates in version 3.77 - that was in October 2013. Any WordPress installation since then has been receiving security patches, and WordPress backports them remarkably well.

As for being malware ridden... it doesn't even make it to the news pages any more when an ad network starts spreading malware, but that's still a thing.8

Why is it that I only ever hear those complaints on HN and never elsewhere... Are you all still using Pentium 3 PCs and 56k modems?


A couple of years ago Facebook introduced 2G Tuesdays9, and that should still be a thing for everyone out there. Rural Scotland? There isn't any phone signal, let alone 3G or 4G. Rural Germany? 6Mbps/1Mbps wired connections. And that is in Europe. Those who travel enough know this problem very well, and yes, 1.8MB of JavaScript - I initially stated 121kB in my original thread; that was a mistake, due to uBlock not being completely off - is way too much. It was too much even when jQuery was served from a single CDN and may actually have been cached in the browser, but compiled React apps won't be cached for long.

[...] people nowadays demand rich media content [...]


I remember when I first saw parallax scroll - of course it made me go "wow". It was a product commercial, I think, but soon everybody was doing parallax scroll, even for textual content. It was horrible. Slow, extremely hard to read due to all the moving parts.

There were times when I thought mouse trailing bouncing circles10 were cool. It turned out readable, small, fast text is cooler.

Nobody is "demanding" rich media content; people demand content. For free, but that is for another day. With some images, maybe even videos - and for that, we have <img>, <figure>, <video>, in all their glory.

> 121KB javascript is not heavy

Part of the problem is that HTML and CSS alone are horribly outdated in terms of being able to provide a modern-looking UI outside the box.

Want a slider? Unfortunately the gods at W3C/Google/etc. don't believe in a <input type="slider"> tag. Want a toggle switch? No <input type="toggle">. Want a tabbed interface? No <tabs><tab></tab></tabs> infrastructure. Want a login button that doesn't look like it came out of an 80's discotheque? You're probably going to need Angular, Polymer, MDL or one of those frameworks, and then jQuery to deal with the framework itself. You're already looking at 70-80kb for most of this stuff alone.

Want your website to be mobile-friendly? Swipe gestures? Pull to refresh? Add another 30-40kb.

Commenting? 20kb.

Commenting with "reactive design" just to make your users feel like their comments went through before they actually went through? 50kb.

Want to gather basic statistics about your users? Add another 10kb of analytics code.


This comment is certainly right when it comes to UI. However... this is an article. Why would an article need swipe gestures or pull-to-refresh? Analytics is an interesting territory, but the basics are well covered by analyzing server logs1112.

Mobile friendly design doesn't need anything at all; it actually needs less: HTML, by design, flows text to the available width, so any text will fill the available container.

For web UI, you need those, yes. To display an article, you really don't.

Medium vs blogs

I've been told that people/companies usually post to Medium for the following reasons: discoverability, and because it looks more serious.

As for discoverability, I believe pushing the article link to Reddit, HN, etc. is a significant booster, but merely putting it on Medium doesn't mean anything. I ran into this question a long while ago with personal blogs - such as why discoverability is never addressed in re-decentralize topics - but the truth is: there is no real need for it. Search engines are wonderful, and if your topic is good enough, people will find it by searching.

The "looks more serious" reason is funny, given that the article I linked is on their own domain - if I wasn't aware of the generic issues with Medium layouts, I wouldn't even know it's a Medium article. One could make any blog look and feel the same. One could export an article from Typora13 and still look professional.

I've heard stories of how moving to Medium brought a lot more "reads" and hits on some channels, but I'm sceptical. Eons ago, when PageRank was still a thing, I read an article about a certain site that made it to #1 on Google for certain phrases without even containing those phrases - only the links pointing to the site did. The lesson there is that everything can be gamed, and I find it hard to believe that merely posting to Medium would boost visibility that much. I could be wrong though.

Proposals - how do we fix this?

Always make the content the priority

There's an article to read, so let people read it. The rest is secondary for any visitor of yours.

Don't do sticky headers/footers

But if you really, really have to, make sure it's the opposite of the display orientation: for horizontal windows, the menu should be on the side; for vertical ones, on the top.

You don't even need JS for it, since it's surprisingly simple to tell horizontal apart from vertical, even in pure CSS, with media queries:

 @media screen and (orientation:portrait) { … }
 @media screen and (orientation:landscape) { … }

Rich media != overdosed JavaScript

Embrace srcset14 and serve different, statically pre-generated images. Seriously consider if you need a framework at all15. BTW, React is the past, from before progressive enhancement, and it came back to haunt us for the rest of eternity.

Use one good analytics system. There really is no need for multiple ones; just make sure that one is well configured.

Don't install yet another commenting system - nobody cares. Learn from the bigger players and think through whether you actually need a commenting system at all16.

Some JS is useful; a lot of JS is completely unneeded for displaying articles. If your text is 8000 characters - roughly 8kB - there is simply no reasonable excuse to serve 225x that amount of additional code to "enhance" it.


HTML was invented to easily share text documents. Even if it has images, videos, etc. in them, you're still sharing text. Never forget that the main purpose is to make that text readable.

There are many people out there with capped, terrible data connections, even in developed countries, and this is not changing in the near future. Every kB counts, let alone MBs.

MBs of JavaScript have to be evaluated in the browser, which needs power. Power these days comes from batteries. More code = more drain.

Keep it simple, stupid.

  1. https://blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08

  2. https://news.ycombinator.com/

  3. https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/

  4. https://noscript.net/

  5. https://news.ycombinator.com/item?id=17592600

  6. https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages

  7. https://codex.wordpress.org/Configuring_Automatic_Background_Updates

  8. https://www.theguardian.com/technology/2016/mar/16/major-sites-new-york-times-bbc-ransomware-malvertising

  9. https://www.theverge.com/2015/10/28/9625062/facebook-2g-tuesdays-slow-internet-developing-world

  10. http://dynamicdrive.com/dynamicindex13/trailer.htm

  11. https://www.awstats.org/

  12. https://matomo.org/log-analytics/

  13. https://typora.io/

  14. https://www.sitepoint.com/how-to-build-responsive-images-with-srcset/

  15. http://youmightnotneedjquery.com/

  16. https://motherboard.vice.com/en_us/article/jp5yx8/im-on-twitter-too


Lessons of running a (semi) static, Indieweb-friendly site for 2 years

In 2016, I decided to leave WordPress behind. Some of its philosophy, mostly the "decisions, not options" part, started to leave the trail I thought was the right one, but on its own that wouldn't have been enough: I had a painful experience with media handling hooks, which were respected on the frontend but not on the backend. At that point, after staring at the backend code for days, I made up my mind: let's write a static generator.

This was strictly scratching my own itches1: I wanted to learn Python, but keep using tools like exiftool and Pandoc, so instead of getting an off-the-shelf solution, I actually wrote my own "static generator" - in the end, it's a glorified script.

Since the initial idea, I rewrote that script nearly 4 times, mainly to try out language features, async workers for processing, etc, and I've learnt a few things in the process. It is called NASG - short for 'not another static generator', and it lives on Github2, if anyone wants to see it.

Here are my learnings.

Learning to embrace "buying in"


I made a small Python daemon to handle certain requests; one of these routes handled incoming webmentions3. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself - but it still needed a dynamic part.

This approach also required parsing the source websites on build. After countless iterations - changing parsing libraries, first within Python, then using XRay4 - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to "buying into" a solution. Basically, if you feel like you can do something yourself, it's rather hard to pay someone else to do it - instead we tend to learn it and just do it ourselves, be it plumbing in the house or sensor automation.

None of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them should a service go out of business, why not use them?

With that in mind, I decided to use webmention.io5 as my incoming webmention handler (it even brought back pingback support). On build I ask the service for any new comments and save them as YAML + Markdown, so next time I only need to parse the new ones.

To send webmentions, Telegraph6 is a nice, simple service that offers API access, so you don't have to deal with webmention endpoint discovery. I put down a text file with the slugified names of the source and target URLs, to avoid re-sending the same mention every time.
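A minimal sketch of that bookkeeping - the slugify helper and the marker file name format here are hypothetical, not the exact ones used on this site:

```shell
# hypothetical helper: lowercase, replace non-alphanumeric runs with '-'
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed -r 's/[^a-z0-9]+/-/g; s/^-|-$//g'
}

source_url='https://example.com/my-post'
target_url='https://example.net/their-post'
marker="sent-$(slugify "$source_url")-$(slugify "$target_url")"
echo "$marker"

# only send the webmention if the marker file does not exist yet:
# [ -f "$marker" ] || { send_webmention "$source_url" "$target_url" && touch "$marker"; }
```

The existence of the marker file is the whole state - no database needed.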


In the case of websub7, superfeedr8 does the job quite well.


For syndication, I decided to go with IFTTT9 and brid.gy publish10. IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress11 and Tumblr12, or sends webmentions to brid.gy to publish links to Twitter13 and complete photos to Flickr14.

IFTTT didn't work. Well, it worked right after setup and syndicated one single article properly - since then, it has refused to look at my RSS updates. So I found Zapier15 instead. While it can do far more sophisticated, chained actions, that comes at a hefty $50/month price. Their free tier includes only 5 simple actions, but that is enough to send updates to WordPress.com, Twitter, Tumblr, Google Groups, and Flickr through brid.gy16.

I ended up outsourcing my newsletter as well. Years ago I sent a mail around asking friends whether they wanted updates from my site by mail; a few of them did. Unfortunately Google started putting these into either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr17 mails. To overcome this, I set up a Google Group where only my Gmail account can post, but anyone can subscribe, plus another IFTTT hook18 that mails the group the contents of anything new in my RSS feed.

Search: keep it server side

I spent days looking for a way to integrate JavaScript based search (lunr.js or elasticlunr.js) into my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest.

It turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.

The initial solution required a small Python daemon to run in the background and spit extremely simple results back for a query. Besides the trouble of running another daemon, it needed the copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used), and Jinja2 (templating), and a few other bits.

However, there is a simpler, yet uglier solution. Nearly every webserver out in the wild has PHP support these days, including mine, because I'm still running WordPress for friends and family.

To overcome the problem, I made a Jinja2 template that creates a PHP file, which opens, read-only, the SQLite file I pre-populate with the search corpus during build. Unfortunately the server runs PHP 7.0, so instead of the FTS5 engine, I had to step back to FTS4 - still good enough. Apart from a plain, dead simple PHP setup with SQLite support, there is no need for anything else, and because the SQLite file is opened read-only, there's no lock-collision issue either.
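The build-time side of this can be sketched in Python; the table name and columns here are illustrative, not the site's actual schema, and the PHP side just runs the equivalent of the same MATCH query:

```python
import sqlite3

# build time: pre-populate the search corpus into an FTS4 virtual table
db = sqlite3.connect(":memory:")  # in practice: a file shipped next to the PHP script
db.execute("CREATE VIRTUAL TABLE search USING fts4(url, title, content)")
db.execute(
    "INSERT INTO search (url, title, content) VALUES (?, ?, ?)",
    ("/php-sqlite-search", "Search with PHP and SQLite",
     "a read-only SQLite file queried from a tiny PHP script")
)
db.commit()

# query time: this is what the generated PHP does against the same file
rows = db.execute(
    "SELECT url, title FROM search WHERE search MATCH ?", ("sqlite",)
).fetchall()
```

Since the PHP only ever reads the file, the build can safely regenerate and swap it on every run.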

About those markup languages...

YAML can get messy

I went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with ' and " characters, so I dug into the YAML specification - don't do it, it's a hell dimension. There is a subset of YAML, titled StrictYAML19, that addresses some of these problems, but the short summary is: YAML or not, use markup that is as simple as possible, and be consistent.

title: post title
summary: single-line long summary
published: 2018-08-07T10:00:00+00:00
tags:
- indieweb
syndicate:
- https://something.com/xyz

If one decides to use lists with newlines and a - prefix, stick to that. No inline [] lists, no differently spaced - prefixes; be consistent.

The same applies to dates and times. While I thought the "correct" date format was ISO 8601, what I use turned out to be a profile of it, named RFC 333920. Unfortunately I started out with the +0000 offset format instead of +00:00, so I'll stick to that.
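With Python 3.7 or newer, both offset spellings are accepted by strptime's %z and compare equal, so the inconsistency is cosmetic as far as parsing goes:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S%z"

# since Python 3.7, %z accepts both the compact and the colon-separated offset
compact = datetime.strptime("2018-08-07T10:00:00+0000", FMT)
colon = datetime.strptime("2018-08-07T10:00:00+00:00", FMT)
```

The consistency argument still stands, though: tooling that does plain string comparison on timestamps will treat the two spellings as different.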

Markdown can also get messy

There are valid arguments against Markdown21, so before choosing it as my main format, I tested as many alternatives as I could22 - in the end, I decided to stick with an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, a very nice Markdown WYSIWYG editor23. Yes, unfortunately, it's Electron based. I'll swallow this frog for now.

The "extensions" I use with Markdown:

I've tried using the Python Markdown module; the end result was utterly broken HTML when I had code blocks containing regexes that collided with the regexes Python Markdown itself uses. I then tried the Python markdown2 module - it worked better, but didn't support language tags for code blocks.

In the end, I went back to where I started: Pandoc24. Regenerating the whole site takes ~60 seconds instead of ~20s with markdown2, but it doesn't really matter - it's still fast.

pandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris

The takeaway is the same as with YAML: make your own ruleset and stick to it; don't mix other flavours in.

Syntax highlighting is really messy

Pandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).

I have some entries that can break both, and break them badly.

Besides being broken, CodeHilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.

A long while ago I tried to completely eliminate JavaScript from my site, because I'm tired of the current trends. However, JS has its place, especially as progressive enhancement25.

With that in mind, I went back to the solution that worked best so far: prism.js26. The difference this time is that I only add it when there is a code block with a language property, and I inline the whole JS block in the page - the 'developer' version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely and is very fast.

No JS simply means no syntax highlighting, but at least my HTML code stays readable, unlike with CodeHilite.


Static sites come with compromises when it comes to interactions, let that be webmentions, search, pubsub. They need either external services, or some simple, dynamic parts.

If you do go dynamic, keep it as simple as possible. If the webserver has PHP support, avoid adding a Python daemon and use PHP instead.

There are very good, completely free services out there, run by mad scientist enthusiasts, like webmention.io and brid.gy. It's perfectly fine to use them.

Keep your markup consistent and don't deviate from the feature set you really need.

JavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.

  1. https://indieweb.org/scratch_your_own_itch

  2. https://github.com/petermolnar/nasg/

  3. http://indieweb.org/webmention

  4. https://github.com/aaronpk/xray

  5. https://webmention.io/

  6. http://telegraph.p3k.io/

  7. https://indieweb.org/websub

  8. https://superfeedr.com/

  9. http://ifttt.com/

  10. https://brid.gy/about#publishing

  11. https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com

  12. https://ifttt.com/applets/83095945d-syndicate-to-tumblr

  13. https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish

  14. https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr

  15. https://zapier.com/

  16. https://brid.gy/about#publishing

  17. https://blogtrottr.com/

  18. https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com

  19. http://hitchdev.com/strictyaml/features-removed/

  20. https://en.wikipedia.org/wiki/RFC_3339

  21. https://indieweb.org/markdown#Criticism

  22. https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages

  23. http://typora.io/

  24. http://pandoc.org/MANUAL.html#pandocs-markdown

  25. https://en.wikipedia.org/wiki/Progressive_enhancement

  26. https://prismjs.com/


The three Facebooks

I recently wanted to check the upcoming gigs of a music venue. I tried to pull up their website1, but I couldn't find their agenda there - it turned out to be a more or less abandoned site, because the hosting company is refusing to respond to any requests.

As a result, their gigs are listed on Facebook - at least that can be accessed without logging in. My current browser setup is a bit complex, but the bottom line is that I'm routing my Firefox through my home broadband. I'm used to very fast, unlimited desktop connections these days, both at work and at home, but the throttling I introduced by going through a few loops made some problems visible. When I loaded the Facebook page itself, it took quite a long while, even with NoScript and uBlock Origin, and that made me curious: why?

So I made a fresh Firefox profile and loaded all three versions of Facebook I'm aware of.


Visiting the main Facebook site from a regular desktop client gives you the whole, full-blown, unfiltered experience - and the raw madness behind it.

The page executed 26.13 MB of JavaScript. That is 315x the size of the complete jQuery framework, 193x of Bootstrap + Popper + jQuery together.

Facebook in full glory mode
Facebook and its JavaScript


m. is for mobile devices only; without faking my resolution and user agent in Firefox dev tools, I couldn't get there.

It's better, but it still executed 1.28 MB of JavaScript in the end. On mobile, that is a serious amount of code.

Facebook in mobile mode - strictly for mobile only though


mbasic is a fascinating thing: it doesn't have JS at all. It's like the glorious, old days: ugly, very hard to find anything, but incredibly fast and light.

Facebook in good ol' days mode


                          desktop2     m.3          mbasic.4
Uncompressed everything   36.83 MB     2.22 MB      96.91 KB
Total used bandwidth      9.33 MB      1.01 MB      57.98 KB
JS code to execute        26.13 MB     1.28 MB      n/a
JS bandwidth              4.22 MB      364.39 KB    n/a
JS compression ratio      6.19x        3.59x        1.67x
CSS to parse              1.34 MB      232.81 KB    inline
CSS bandwidth             279.73 KB    53.61 KB     inline
CSS compression ratio     4.90x        4.34x        -
HTML to parse             2.78 MB      172.06 KB    70.20 KB
HTML bandwidth            199.73 KB    37.73 KB     14.20 KB
HTML compression ratio    14.25x       4.56x        4.94x


React is evil. It splits code up into small chunks, and on their own, they seem reasonably sized. However, when there's a myriad of these, they add up.

The compressed vs uncompressed ratio in desktop JS and HTML indicates extreme amount of repetition.

Most resources have unique, hashed names, and I'm guessing many of them are tied to A/B testing or something similar, so caching won't solve the issue either.

There's always a balanced way to do things. A couple of years ago, during the times of backbone.js and underscore.js, that balance was found, and everyone should learn from it.

Many moons ago, in 2012 (when Facebook still had an API), an article was published: The Making of Fastbook: An HTML5 Love Story5. It was a demonstration that the already bloated Facebook app could be replaced with a responsive, small, service worker powered HTML5 website.

Facebook won't change: it will keep being a monster on every level.

Don't follow their example.

  1. http://yuk.hu/

  2. https://facebook.com/yukbudapest

  3. https://m.facebook.com/yukbudapest

  4. https://mbasic.facebook.com/yukbudapest

  5. https://www.sencha.com/blog/the-making-of-fastbook-an-html5-love-story/


GPS tracking without a server

Nearly all self-hosted location tracking Android applications are based on a server-client architecture: the one on the phone collects only a few points, if not just one, and sends them to a configured server - Traccar1, Owntracks2, etc.

While this setup is useful, it doesn't fit my static, unless it hurts3 approach, and it needs data connectivity, which can be tricky on trips abroad. A few rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.

There used to be a magnificent little location tracker which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude4. The program is gone from the Play store - I have no idea why - but I have a copy of its last APK5.

My flow is the following:

Backitude configuration

These are the modified setting properties:

I have an exported preferences file available7.


The Syncthing6 configuration is optional; it could simply be done with manual transfers from the phone. It's also not the simplest thing to set up, so I'll let the Syncthing Documentation8 take care of describing the how-tos.

Python script

Before jumping to the script, there are 3 Python modules it needs:

pip3 install --user arrow gpxpy requests

And the script itself - please replace the INBASE, OUTBASE, and BINGKEY properties. To get a Bing key, visit Bing9.

import os
import sqlite3
import csv
import glob
import arrow
import re
import gpxpy.gpx
import requests

# replace these with your own paths and key
INBASE = "/path/to/the/synced/backitude/csv/directory"
OUTBASE = "/path/to/the/gpx/output/directory"
BINGKEY = "get a bing maps key and insert it here"

def parse(row):
    # Backitude rows are: latitude, longitude, accuracy, altitude, timestamp;
    # the regex assumes its 'YYYY-MM-DD HH:MM:SS.mmm' timestamp format
    DATE = re.compile(
        r'^(?P<year>[0-9]{4})-(?P<month>[0-9]{2})-(?P<day>[0-9]{2})\s+'
        r'(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})\.(?P<ms>[0-9]{3})'
    )
    lat = float(row[0])
    lon = float(row[1])
    acc = float(row[2])
    alt = row[3]  # may be the string 'NULL'
    match = DATE.match(row[4])
    # in theory, arrow should have been able to parse the date, but I couldn't get
    # it working
    epoch = arrow.get("%s-%s-%s %s %s" % (
        match.group('year'),
        match.group('month'),
        match.group('day'),
        match.group('time'),
        match.group('ms')
    ), 'YYYY-MM-DD hh:mm:ss SSS').timestamp
    return (epoch, lat, lon, alt, acc)

def exists(db, epoch, lat, lon):
    return db.execute('''
        SELECT * FROM data
        WHERE
            epoch = ? AND
            latitude = ? AND
            longitude = ?
    ''', (epoch, lat, lon)).fetchone()

def ins(db, epoch, lat, lon, alt, acc):
    if exists(db, epoch, lat, lon):
        return
    print('inserting data point with epoch %d' % (epoch))
    db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy)
        VALUES (?,?,?,?,?);''', (epoch, lat, lon, alt, acc))

if __name__ == '__main__':
    db = sqlite3.connect(os.path.join(OUTBASE, 'location-log.sqlite'))
    db.execute('PRAGMA auto_vacuum = INCREMENTAL;')
    db.execute('PRAGMA journal_mode = MEMORY;')
    db.execute('PRAGMA temp_store = MEMORY;')
    db.execute('PRAGMA locking_mode = NORMAL;')
    db.execute('PRAGMA synchronous = FULL;')
    db.execute('PRAGMA encoding = "UTF-8";')
    db.execute('''CREATE TABLE IF NOT EXISTS data (
        epoch INTEGER,
        latitude REAL,
        longitude REAL,
        altitude TEXT,
        accuracy REAL
    );''')

    files = glob.glob(os.path.join(INBASE, '*.csv'))
    for logfile in files:
        with open(logfile) as csvfile:
            try:
                reader = csv.reader(csvfile)
            except Exception as e:
                print('failed to open CSV reader for file: %s; %s' % (logfile, e))
                continue
            # skip the first row, that's headers
            headers = next(reader, None)
            for row in reader:
                epoch, lat, lon, alt, acc = parse(row)
                ins(db, epoch, lat, lon, alt, acc)
        # there's no need to commit per line, per file should be safe enough
        db.commit()

    db.execute('PRAGMA auto_vacuum;')

    results = db.execute('''
        SELECT epoch, latitude, longitude, altitude, accuracy FROM data
        ORDER BY epoch ASC''').fetchall()
    prevdate = None
    gpx = gpxpy.gpx.GPX()
    gpx_segment = gpxpy.gpx.GPXTrackSegment()

    for epoch, lat, lon, alt, acc in results:
        # in case you know your altitude might actually be valid with negative
        # values you may want to remove the -10
        if alt == 'NULL' or float(alt) < -10:
            url = "http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s" % (
                lat, lon, BINGKEY)
            bing = requests.get(url).json()
            # gotta love enterprise API endpoints
            if not bing or \
                'resourceSets' not in bing or \
                not len(bing['resourceSets']) or \
                'resources' not in bing['resourceSets'][0] or \
                not len(bing['resourceSets'][0]['resources']) or \
                'elevations' not in bing['resourceSets'][0]['resources'][0] or \
                not bing['resourceSets'][0]['resources'][0]['elevations']:
                alt = 0
            else:
                alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])
                print('got altitude from bing: %s for %s,%s' % (alt, lat, lon))
                db.execute('''
                    UPDATE data SET
                        altitude = ?
                    WHERE
                        epoch = ? AND
                        latitude = ? AND
                        longitude = ?
                    LIMIT 1
                ''', (alt, epoch, lat, lon))
                db.commit()

        date = arrow.get(epoch).format('YYYY-MM-DD')
        if not prevdate or prevdate != date:
            # write the previous day out
            if prevdate:
                gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))
                with open(gpxfile, 'wt') as f:
                    f.write(gpx.to_xml())
                print('created file: %s' % gpxfile)

            # create new
            gpx = gpxpy.gpx.GPX()
            prevdate = date

            # Create first track in our GPX:
            gpx_track = gpxpy.gpx.GPXTrack()
            gpx.tracks.append(gpx_track)

            # Create first segment in our GPX track:
            gpx_segment = gpxpy.gpx.GPXTrackSegment()
            gpx_track.segments.append(gpx_segment)

        # Create points:
        gpx_segment.points.append(gpxpy.gpx.GPXTrackPoint(
            lat,
            lon,
            elevation=float(alt),
            time=arrow.get(epoch).datetime
        ))

    # write the last day out as well
    if prevdate:
        gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))
        with open(gpxfile, 'wt') as f:
            f.write(gpx.to_xml())
        print('created file: %s' % gpxfile)

Once this is done, the OUTBASE directory will be populated by .gpx files, one per day.


GpsPrune is a desktop, Java-based GPX track visualizer. It needs data connectivity to show nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.

sudo apt install gpsprune

Keep in mind that the export script overwrites the GPX files, so any data fixes need to be made in the SQLite database.

This is an example screenshot of GpsPrune, showing our 2 day walk down from Mount Emei and its endless stairs:


Happy tracking!

  1. https://www.traccar.org/

  2. https://owntracks.org/

  3. https://indieweb.org/manual_until_it_hurts

  4. http://www.gpsies.com/backitude.do

  5. gaugler.backitude.apk

  6. https://syncthing.net/

  7. backitude.prefs

  8. https://docs.syncthing.net/intro/getting-started.html

  9. https://msdn.microsoft.com/en-us/library/ff428642


Domoticz vs sensors

I have a couple of 433.92MHz things around me, and recently I developed an itch to log what is happening with them.

Devices include:

- a Yale HSA6400 wireless alarm system1, with PIR motion sensors and door/window contacts
- Energenie remote control wall sockets2
- gate keyfobs

When I started looking for solutions to listen in on 433MHz signals, I found a weird, extremely cheap project3.

To my genuine surprise, it works - but it's hard to match the incoming patterns, so I decided to keep looking.

The next project I found was librtlsdr4 combined with rtl_4335 - it converts a USB DVB-T TV tuner into a 433MHz receiver. It sounded very nice, but at the same time, I found RFLink6. RFLink is a free, but not open source, Arduino Mega firmware that can receive and send 433MHz/868MHz & 2.4GHz signals from a plethora of devices - and I had an unused, first generation, made-in-Italy Arduino Mega around that had been waiting to be used for a decade.

Flashing the ROM

avrdude is a simple flashing utility for ATmega boards, including Arduinos; it will be needed to flash the ROM.

sudo apt install avrdude

Download and extract the RFLink ROM:

wget -ORFLink_v1.1_r48.zip https://doc-14-94-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/3esqvusiaem47f8nistrrisk5ofk9g6g/1540800000000/03880776249665269026/*/0BwEYW5Q6bg_ZLWFJUkY4bDZacms?e=download
unzip -d RFLink_v1.1_r48 RFLink_v1.1_r48.zip
cd RFLink_v1.1_r48

Note: I hardcoded the v48 version in this tutorial. Visit http://www.rflink.nl/ to see if there's a newer one.

Once the Arduino is connected, it'll show up as 'arduino mega' in dmesg, so find the device and flash the ROM as:

megausbdev="$(sudo dmesg  | grep -i 'arduino mega' | head -n1 | cut -d":" -f1 | awk '{print $3}')"
megattydev="$(sudo dmesg | grep "cdc_acm ${megausbdev}" | grep tty | cut -d":" -f3 | head -n1)"
sudo avrdude -v -p atmega2560 -c stk500 -P "/dev/${megattydev}" -b 115200 -D -U flash:w:RFLink.cpp.hex:i

Note: dmesg could be used without sudo if the sysctl parameter kernel.dmesg_restrict is set to 0.

Once this is done, wait until the Mega reboots; after that, we can verify it's working with minicom.

sudo apt install minicom
minicom -b 57600 -D "/dev/${megattydev}" -w

You should see something like this:

Welcome to minicom 2.7.1

Compiled on May  6 2018, 08:02:47.
Port /dev/ttyACM3, 15:10:35

Press CTRL-A Z for help on special keys

teway V1.1 - R48;
20;00;Nodo RadioFrequencyLink - RFLink Gateway V1.1 - R48;

To exit, press CTRL+a then q.

To make the device always show up on the same /dev path, add the following udev rule:


# arduino mega as RFLink
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0010", SYMLINK+="rflink"

If needed, restart udev:

sudo udevadm trigger

Physical wiring

There is a very nice, detailed tutorial in the RFLink website about connecting the different devices to the Mega itself at: http://www.rflink.nl/blog2/wiring


Domoticz7 is a home automation platform which is very easy to set up, has a simple HTTP interface, and can log all those switches and devices I'm interested in.

Getting & starting Domoticz

sudo mkdir /opt/domoticz
cd /opt/domoticz
sudo wget https://releases.domoticz.com/releases/release/domoticz_linux_x86_64.tgz
tar xf domoticz_linux_x86_64.tgz
sudo /opt/domoticz/domoticz -www 8080 -sslwww 0 -dbase /opt/domoticz/domoticz.db -wwwroot /opt/domoticz/www -userdata /opt/domoticz -log -syslog

Now visit the server IP on port 8080 in your browser and get started with the setup.

  1. Connect the RFLink device to your server

  2. Find the ttyACM device for the RFLink

    megausbdev="$(sudo dmesg  | grep -i 'arduino mega' | head -n1 | cut -d":" -f1 | awk '{print $3}')"
    sudo dmesg | grep "cdc_acm ${megausbdev}" | grep tty | cut -d":" -f3 | head -n1)
    # this will print something like: ttyACM3
  3. Go to the Domoticz web interface

  4. Go to Setup, then Hardware

  5. In the Type drop down, select RFLink Gateway USB

  6. give it a name

  7. Serial Port should be the ttyACM port for the RFLink

Once done, the RFLink will start sniffing all the signals it can pick up, and your devices will start showing up in the Devices menu, under Setup:

devices found by RFLink in Domoticz

Notes and finds about my sensors

Energenie wall sockets
They send on and off separately, but their signal doesn't always seem to reach the RFLink properly. Still working on them. No extra setup is needed; their default On/Off type is what they actually are.
Yale HSA6000 PIR sensors
They send on, then soon after off, and they have a re-arm time of ~6 minutes. Once detected, they initially show up as a Light sensor; this can be changed by first enabling the devices (clicking the green arrow in the Devices menu, under Setup), then going into Switches, clicking Edit on the sensor, and selecting the Motion sensor option as Switch type.
Yale HSA6000 door/window contacts
They only send an on signal when an open is triggered; pressing the button sends an off. There is no way to know whether they are still open or already closed. They need to be set up as Push on buttons once they are enabled (clicking the green arrow in the Devices menu, under Setup) by going into Switches, clicking Edit on the sensor, and selecting the Push on button option as Switch type. The Door contact type expects an off signal, so these are not proper door contacts.
gate keyfobs
I had to set them up as Push off buttons; if I set them as Push on buttons, they log 'off' entries when they are pressed.


A few months ago I managed to set up collectd8 to process I²C data via a barely known Linux subsystem, Industrial I/O, with the help of a few bash scripts9. In theory, Domoticz can deal with I²C on its own - unfortunately that doesn't yet work on x86 platforms, and it can only handle a few types of sensors. Besides that, I didn't want to lose the collectd data, given that Domoticz is only an experiment for now, so I started looking into my options. Domoticz has an extensive API10, but it's rather uncomfortable to use, because you need to keep track of sensor and hardware IDs.

Fortunately, there is a workaround: using MQTT as middle ground, utilizing the MySensors serial protocol11.

A bit of explanation: MySensors is an open framework, covering both hardware and software components, for building custom sensors. One of the methods of sharing sensor information between sensors and controllers is MQTT, a lightweight pubsub protocol.

The incredibly convenient part of it is that the information is push-based: Domoticz picks up new sensors when their initialization is sent, so no pre-setup and no tracking of internal Domoticz IDs is needed.
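The topic layout boils down to prefix/node-id/child-sensor-id/command/ack/type; a tiny sketch of composing these topics (the S_*/V_* numbers follow my reading of the MySensors serial API, so double-check them against the spec):

```python
# MySensors MQTT topic layout: prefix/node-id/child-sensor-id/command/ack/type
PRESENTATION = 0  # command 0: announce a sensor, the payload is its name
SET = 1           # command 1: send a value update, the payload is the value
S_TEMP = 6        # sensor type "temperature" (assumed MySensors numbering)
V_TEMP = 0        # value type "temperature" (assumed MySensors numbering)

def mysensors_topic(prefix, node_id, sensor_id, command, msg_type, ack=0):
    """Compose the MQTT topic Domoticz's MySensors gateway expects."""
    return "/".join(str(p) for p in (prefix, node_id, sensor_id, command, ack, msg_type))

# announce node 1, child sensor 1 as a temperature sensor...
present = mysensors_topic("domoticz/in/MyMQTT", 1, 1, PRESENTATION, S_TEMP)
# ...and later push a temperature reading to it
update = mysensors_topic("domoticz/in/MyMQTT", 1, 1, SET, V_TEMP)
```

The node and child sensor IDs are arbitrary; the only requirement is to keep using the same pair for the same physical sensor.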

MQTT server

I'm not going into the details of setting up an MQTT service, because it's very simple; on Debian, it's more or less:

sudo apt install mosquitto
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

In order to issue updates from bash, the mosquitto clients pack is needed as well:

sudo apt install mosquitto-clients

MySensors MQTT in Domoticz

  1. Go to Setup, then Hardware
  2. In the Type drop down, select MySensors Gateway with MQTT interface
  3. give it a name
  4. Remote Address, in our case, is 127.0.0.1 (the MQTT server from above runs on the same machine)
  5. Port is 1883
  6. Leave Username and Password empty, unless you set up authentication in your MQTT server
  7. Topic Prefix should be MyMQTT (default)
Adding MyMQTT to Domoticz

Sending sensor data with bash into MQTT

Initiate the sensor meta information

For Domoticz to know about the sensor - the type, the unit, etc - the sensor needs to be initialized; this is done with the presentation command when it comes to MySensors.

mosquitto_pub -t "domoticz/in/MyMQTT/${node_id}/${sensor_id}/0/0/${TYPE}" -m "${sensor_name}"

In details:

- domoticz/in/MyMQTT is the topic prefix the Domoticz MySensors gateway subscribes to
- ${node_id} and ${sensor_id} identify the node and its child sensor; pick any numbers, just keep them consistent
- the first 0 is the MySensors presentation command
- the second 0 is the ack flag
- ${TYPE} is the MySensors S_* sensor type - for example, 6 for S_TEMP
- the message payload is the sensor's name

Sending sensor value updates

Unlike the previous initiation, this is a value update for our sensor:

mosquitto_pub -t "domoticz/in/MyMQTT/${node_id}/${sensor_id}/1/0/${METRICTYPE}" -m "${value}"

In details:

- the 1 is the MySensors set command, meaning a value update
- ${METRICTYPE} is the MySensors V_* value type - for example, 0 for V_TEMP or 1 for V_HUM
- the message payload is the measured value itself

Example outcome for a BME280: once it's sending temperature, humidity, and pressure data, Domoticz automatically joins the 3 sensors into a single Weather Station entry

Working examples are in my git repository for collectd12.

Happy hacking.

  1. https://www.yaleasia.com/en/yale/yale-asia/products/yale-alarms/wireless-alarm-systems/b-hsa6400---yale-premium-series-home-security-alarm-system/

  2. https://www.amazon.co.uk/Energenie-Remote-Control-Sockets-Pack/dp/B004A7XGH8

  3. https://rurandom.org/justintime/w/Cheapest_ever_433_Mhz_transceiver_for_PCs

  4. http://osmocom.org/projects/sdr/wiki/Rtl-sdr

  5. https://github.com/merbanan/rtl_433

  6. http://www.rflink.nl/blog2/easyha

  7. https://www.domoticz.com/

  8. https://collectd.org/

  9. https://github.com/petermolnar/collectd-executors

  10. https://www.domoticz.com/wiki/Domoticz_API/JSON_URL's

  11. https://www.mysensors.org/download/serial_api_20

  12. https://github.com/petermolnar/collectd-executors


How to add themes to your website with manual and CSS prefers-color-scheme support

Note: a commented version of the code is available as a Github Gist as well1

Ever since I've had a website, I nearly always had it dark, but after reading a lot about reading text on displays, and just listening to opinions and stories, it seemed like people prefer dark text on a white background - light themes. So I tried it.

my website with its light theme in 2017

It felt weird and distant; it wasn't me. A couple of months ago I decided to switch my site back to a dark theme. Unfortunately that really doesn't work for everyone, which I completely understand: I've always wished I could tell sites that I want a dark version of them - not with browser addons, just by setting a preference.

My prayers got answered: the upcoming version of CSS media queries - level 5 - has a feature, prefers-color-scheme2, which is exactly this: a setting based on your operating system preference. Let's not talk about the fact that this will become yet another fingerprinting method.

While it's still experimental, macOS Mojave with Safari Technology Preview 68 already supports it3. Unfortunately neither Windows nor Linux browsers do, despite the preferred colour scheme4 option in Windows 10, and the :dark GTK3 theme option in GNOME having been present for years.

Because it's highly experimental, and some might simply prefer a manual option, I wanted a solution that provides a button to change the theme as well. There are blog posts out there about CSS-only5 or JS-based6 automated solutions, and complex, even more experimental solutions based on CSS variables7, but none of them provided a fallback.

This is my solution to support dynamic media queries for prefers-color-scheme with manual fallback, using an inlined alternative CSS.

Inlined alternative stylesheet

In my site, I have 3 <style> elements: the base, dark style; an alternative light style, which, by default, is only available for the speech media type - a successor of aural - and a third, print-only one, which is out of scope for now.

A snippet of it looks like this:

    <style media="all">
        html {
            background-color: #111;
            color: #bbb;
        }
        body {
            margin: 0;
            padding: 0;
            font-family: sans-serif;
            color: #ccc;
            background-color: #222;
            font-size: 100%;
            line-height: 1.3em;
            transition: all 0.2s;
        }
    </style>
    <style id="css_alt" media="speech">
        body {
            color: #222;
            background-color: #eee;
        }
    </style>
The idea is to toggle speech to all on that css_alt element, either automatically or based on user preference. To do this in the most semantic way I could think of, I made a radio button group with 3 states: auto, dark, light.

<form class="theme" aria-hidden="true">
    <span>
        <input name="colorscheme" value="auto" id="autoscheme" type="radio">
        <label for="autoscheme">auto</label>
    </span>
    <span>
        <input name="colorscheme" value="dark" id="darkscheme" type="radio">
        <label for="darkscheme">dark</label>
    </span>
    <span>
        <input name="colorscheme" value="light" id="lightscheme" type="radio">
        <label for="lightscheme">light</label>
    </span>
</form>

Making radiobuttons nice

Unfortunately styling a radio button (or a checkbox) is near impossible - what you do instead is hide the input itself and add fancy CSS to show something nicer. That is the reason for wrapping them in a <span>.

label {
  font-weight: bold;
  font-size: 0.8em;
  cursor: pointer;
  margin: 0 0.3em;
  padding: 0.1em 0;
}

.theme {
  margin: 0 0.3em;
  color: #ccc;
  display: none;
}

.theme input {
  display: none;
}

.theme input + label {
  border-bottom: 3px solid transparent;
}

.theme input:checked + label {
  border-bottom: 3px solid #ccc;
}


In order to support both a user preference and the automated detection, I had to apply the media query from JavaScript instead of using a mere @media query in CSS. I also had to put the script at the bottom of the page - otherwise it wouldn't find the elements, since they wouldn't be defined yet. I didn't want to use things like document.onload, because that would delay execution, and I want this to be as invisible and fast for the visitor as possible.

var DEFAULT_THEME = 'dark';
var ALT_THEME = 'light';
var STORAGE_KEY = 'theme';
var colorscheme = document.getElementsByName('colorscheme');

/* changes the active radiobutton */
function indicateTheme(mode) {
    for(var i = colorscheme.length; i--; ) {
        if(colorscheme[i].value == mode) {
            colorscheme[i].checked = true;
        }
    }
}

/* turns alt stylesheet on/off */
function applyTheme(mode) {
    var st = document.getElementById('css_alt');
    if (mode == ALT_THEME) {
        st.setAttribute('media', 'all');
    }
    else {
        st.setAttribute('media', 'speech');
    }
}

/* handles radiobutton clicks */
function setTheme(e) {
    var mode = e.target.value;
    if (mode == 'auto') {
        localStorage.removeItem(STORAGE_KEY);
    }
    else {
        localStorage.setItem(STORAGE_KEY, mode);
    }
    /* when the auto button was clicked the auto-switcher needs to kick in */
    var mq = window.matchMedia('(prefers-color-scheme: ' + ALT_THEME + ')');
    autoTheme(mq);
}

/* handles the media query evaluation, so it expects a media query as parameter */
function autoTheme(e) {
    var current = localStorage.getItem(STORAGE_KEY);
    var mode = 'auto';
    var indicate = 'auto';
    /* user set preference has priority */
    if ( current != null) {
        indicate = mode = current;
    }
    else if (e != null && e.matches) {
        mode = ALT_THEME;
    }
    applyTheme(mode);
    indicateTheme(indicate);
}

/* create an event listener for media query matches and run it immediately */
var mql = window.matchMedia('(prefers-color-scheme: ' + ALT_THEME + ')');
autoTheme(mql);
mql.addListener(autoTheme);

/* set up listeners for radio button clicks */
for(var i = colorscheme.length; i--; ) {
    colorscheme[i].onclick = setTheme;
}

/* display theme switcher form(s) */
var themeforms = document.getElementsByClassName(STORAGE_KEY);
for(var i = themeforms.length; i--; ) {
    themeforms[i].style.display = 'inline-block';
}

The effect

macOS screen capture by Martijn van der Ven8:

Unfortunately this capture shows an earlier version with a bug, where the indicator followed the media query detected value for the light theme. This is fixed in the code above.


if (! window.matchMedia("(prefers-color-scheme: dark)").matches)

doesn't work for matching light. That is because prefers-color-scheme has three possible values: no-preference, light, and dark, with no-preference being the default. Negating a dark match therefore also matches no-preference, so the correct method is to match light exactly.
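The priority logic - a stored user preference wins, otherwise follow the media query, matching the alternate scheme exactly - can be isolated into a pure function. A minimal sketch; the function name and the 'default' return value are mine, not the site's:

```javascript
/* Sketch of the theme decision only: given the preference stored in
   localStorage (or null) and whether the media query for the alternate
   scheme currently matches, return the theme to apply. Names here are
   illustrative, not part of the original script. */
function decideTheme(storedPreference, altMatches, altTheme) {
    /* a user-set preference always has priority */
    if (storedPreference !== null) {
        return storedPreference;
    }
    /* otherwise follow the OS: match the alternate scheme directly,
       never by negating the other one */
    return altMatches ? altTheme : 'default';
}

console.log(decideTheme(null, true, 'light'));
```

Keeping the decision separate from the DOM work makes it easy to exercise all three branches without a browser.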

  1. https://gist.github.com/petermolnar/d7ccaffadb92bf6c3d3615ed92832669

  2. https://drafts.csswg.org/mediaqueries-5/#descdef-media-prefers-color-scheme

  3. https://webkit.org/blog/8475/release-notes-for-safari-technology-preview-68/

  4. https://blogs.windows.com/windowsexperience/2016/08/08/windows-10-tip-personalize-your-pc-by-enabling-the-dark-theme/

  5. https://dri.es/adding-support-for-dark-mode-to-web-applications

  6. https://kevinchen.co/blog/support-macos-mojave-dark-mode-on-websites/

  7. https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_variables

  8. https://vanderven.se/martijn/


Mine, abandoned

Shutter speed
1/1600 sec
Focal length (as set)
87.5 mm
ISO 80
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

There are a couple of abandoned mines in Sardinia - this one was turned into a museum, which would have required hours and a guided tour to see inside. We were not that interested in the interior, but the view certainly had a stereotypical lost Wild West feeling to it.


Not Ireland

Shutter speed
1/80 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

This is a deceptive image: from the look of it, this should be in Ireland. In reality it is in Sardinia, on the peninsula below Tharros.



Shutter speed
1/160 sec
Focal length (as set)
180.0 mm
ISO 400
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

This little fellow was staring at me after a rain in Sardinia, in a small forest of olive trees covering the ancient ruins of disturbingly sharply designed sacred wells1.

  1. https://www.atlasobscura.com/places/well-of-santa-cristina


That is not a snake

Shutter speed
1/250 sec
Focal length (as set)
120.0 mm
ISO 80
HD PENTAX-DA 55-300mm F4-5.8 ED WR

That piece of driftwood genuinely scared me when I spotted it through the rocks. The moment you move or zoom closer, the illusion disappears.


Gorropu Gorge

Shutter speed
1/60 sec
Focal length (as set)
0.0 mm
ISO 80
K or M Lens

Gorropu Gorge is gigantic, peaceful, quiet, and quite a steep walk to get to. There are signs at the top of the hill, before you start to descend, warning that it's not a light walk, and I have to admit it's a decent climb down and then back up, but it's worth it.

Due to the size of the gorge, it's hard to represent it in photos, so I decided to try to capture the colours that surround you when touring through it.



Shutter speed
1/1600 sec
Focal length (as set)
130.0 mm
ISO 80
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

Spikes, dry dirt, heat - although it's a bit of an illusion. This was next to a stream, in an area which only gets wet during spring, but it was right next to fresh water.


Dune di Piscinas

Shutter speed
1/160 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

One of the nicest little beaches on Sardinia, which we got to by a short walk across the dunes.


Antico Borgo in Galtelli

Shutter speed
1/100 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

Galtelli, the old capital of Sardinia, has extremely narrow and quite steep streets, with adorable places to stay; this is one of them.


Influential reads online: finds about the old Web

How the Blog Broke the Web

Back then, we didn’t have platforms or feeds or social networks or… blogs.

We had homepages.

The backgrounds were grey. The font, Times New Roman. Links could be any color as long as it was medium blue. The cool kids didn’t have parallax scrolling… but they did have horizontal rule GIFs.


This little piece stirred a maelstrom in my head, because it's damn right.

The argument goes like this: before streams and feeds - that is, chronological ordering - websites had their own library systems, invented and maintained by the site owners. This resulted in genuinely unique sites, not just on a theme level, but on a fundamental layer, as a reflection of how their creators were thinking.

That said, there are, of course, valid uses for chronological ordering, but for some content, maintaining a table of contents could make a much better structure. Unfortunately making content machine-readable by hand is painful.

It would be interesting to see a reprise of home page builders: not a by-default-a-blog tool, but an oldschool website builder, with up-to-date features in the background.

Why Do All Websites Look the Same?

The internet suffers from a lack of imagination, so I asked my students to redesign it


It ties in with the first one: most sites look the same, and that's not how it's supposed to be.

It contains wonderful ideas on how certain websites could look and be completely unique, and, in my opinion, owners of personal homepages should consider putting energy and time (and a lot of swearing) into making their online home truly their own as well.

The Slow Web (plus: the modern experience of film-watching)

[W]e need a Slow Internet Movement along the lines of Slow Food and Slow Cinema, if we're really going to take advantage of the archival nature of the Web.


I missed out on the golden days of the blogosphere. Not being a native English speaker, being occupied with a community around 2007, and a couple of other reasons contributed to this, so when I stumbled upon Rebecca's site, I was reminded how wonderfully packed with text content personal websites used to be. It's a joyful find, with things dating back to 1999, and with a plethora of now completely dead links.

As for this very entry: yes. We do need a slow web, one where content is created not for quick fame and likes, but for the love of the topic.

404 Page Not Found - The internet feeds on its own dying dreams

For with the collapse of the high-modernist ideology of style—what is as unique and unmistakable as your own fingerprints, as incomparable as your own body [e.g. MySpace, Geocities pages] . . . the producers of culture [big Internet companies] have nowhere to turn but to the past: the imitation of dead styles [glitter graphics, Geocities], speech through all the masks and voices stored up in the imaginary museum of a now global culture [the whole internet].


A recent find - this is an article from 2019. A devastating, sour summary, and a frightening reflection on a 1991 essay, describing how recycled nostalgia eats the very thing itself. It also taught me the phrase and the movement of vaporwave.

Every single time I try to revive or revisit something I missed out on in the past - BBS systems, for example - I find that they were, in fact, incredibly hard to deal with. They required deep understanding, you had to build a serious amount of things yourself, and it took a long time. While some aspects of this are wonderful - you certainly learn, for example - from another perspective, these days it's impossible to get people involved if there isn't a simple way to start.

Recycling old things is not inherently bad, but in the case of the internet, there isn't a simple way to use them without overshadowing the original.

why i’ll never delete my LiveJournal

In truth, I like who I was on the Internet better when I was young and brash though I know not how to do that anymore (and wouldn’t want the burden of it, honestly). My LJ is a space I guard in defense of my younger, wilder, more whimsical self. To alter or destroy this place would mean losing a version of me with an honesty I can no longer afford.


I never thought I'd find an article that summarizes my feelings and drifting thoughts on what is now lost from the internet. Being online in the early 2000s meant retreating from the world; it was another plane - not yet a connected world, but a text-based reality, away from the people of your physical existence.

It even touches on an extremely important aspect: we need to be reminded of how we used to be, and an unchanged, or archived, version of our ancient journals or websites is a good start.

Patient Zero of the selfie age: Why JenniCam abandoned her digital life

"I keep JenniCam alive not because I want to be watched, but because I simply don’t mind being watched."


In 1996, I was in elementary school, in Hungary, my English was enough to understand 2 stupid dogs1 and some of The Real Adventures of Jonny Quest2.

So when I bump into articles talking about a woman who set up a non-stop webcam of her life in 1996, it feels like a lightning strike about things I had never heard of.

I'm not completely certain why I wanted to add this to the entry. Maybe because it feels like history is repeating itself, only with more smoke and mirrors in each iteration; maybe to recognise that the early internet already pioneered most things that became mainstream(ish) 20 years later.

Before Insta fame, there was Jenni

Be more, or less, like Jenni? That is something for everyone to decide for themselves.

  1. https://web.archive.org/web/19990508175315/http://cartoonnetwork.com/doc/2stupiddogs/index.html

  2. https://www.imdb.com/title/tt0115226/


A journey to the underworld that is RDF

working with RDF - this one does not spark joy

I want to say it all started with a rather offensive tweet1, but it wouldn't be true. No, it all started with my curiosity to please the Google Structured Data testing tool2. Last year, in August, I added microdata3 to my website - it was more or less straightforward to do so.

Except it was ugly, and, after half a year, I can safely say, quite useless. I got no pretty Google cards - maybe because I refuse to do AMP4, maybe because I'm not important enough, who knows. But by the time I reached this conclusion, the aforementioned tweet happened, and I got caught up in Semantic Hell, also known as arguing about RDF.

The first time I heard about the Semantic Web coincided with the dawn of the web 2.0 hype, so it wasn't hard to dismiss it when so much else was happening. I was rather new to the whole web thing, and most of the academic discussions were not even available in Hungarian.

In that thread it was pointed out to me that what I have on my site is microdata, not RDFa - I genuinely thought they were more or less interchangeable: both can use the same vocabulary, so it shouldn't really matter which HTML properties I use, should it? Well, it does, but I believe the basis for my confusion can be found in the microdata description: it was an initiative to make RDF simple enough for people making websites.

If you're just as confused as I was, in my own words: RDF is the underlying data model (subject-predicate-object triples); RDFa is a set of HTML attributes for embedding RDF directly in markup; microdata is a simpler, HTML-native attribute set for a similar purpose; JSON-LD expresses the same kind of data as a standalone JSON document; and microformats are plain HTML class names, independent of the RDF world.

With all this now known, I tried to mark up my content as microformats v1, microformats v2, and RDFa.

I already had errors with microdata...

Interesting, it has some problems...
it says URL for org is missing... it's there. Line 13.

...but those errors then became ever more peculiar problems with RDFa...

Undefined type, eh?

... while microformats v1 was parsed without any glitches. Sidenote: microformats (v1 and v2), unlike the previous things, are extra HTML class data, and v1 is still parsed by most search engines.

At this point I gave up on RDFa and moved over to test JSON-LD.

It's surprisingly easy to represent data in JSON-LD with schema.org context (vocabulary, why on earth was vocabulary renamed to context?! Oh. Because we're in hell.). There's a long entry about why JSON-LD happened6 and it has a lot of reasonable points.

What it forgets to talk about is that JSON-LD is an invisible duplication of what is either already or what should be in HTML. It's a decent way to store data, to exchange data, but not to present it to someone on the other end of the cable.
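To illustrate how little ceremony JSON-LD itself needs: a minimal schema.org-flavoured object for a blog post, roughly the shape a generator might emit as an alternate representation. The values below are illustrative, not this site's actual output:

```javascript
/* A hedged sketch of a minimal JSON-LD block using the schema.org
   context; the property values are examples, not real site output. */
var article = {
    "@context": "http://schema.org",
    "@type": "BlogPosting",
    "headline": "A piece of Powerscourt Waterfall",
    "url": "https://petermolnar.net/a-piece-of-powerscourt-waterfall/",
    "datePublished": "2017-11-09T18:00:00+00:00",
    "author": {
        "@type": "Person",
        "name": "Peter Molnar"
    }
};

var jsonld = JSON.stringify(article, null, 2);
console.log(jsonld);
```

Embedded in a page this would sit inside a script element of type application/ld+json; served as a rel=alternate resource, it is just this JSON document on its own.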

The most common JSON-LD vocabulary, Schema.org, has its own interesting world of problems. It wants to be a single point of entry, one gigantic vocabulary for anything web: a humongous task and a noble goal. However, it's still lacking a lot of definitions (ever tried to represent a resume with it?), it has weird quirks ('follows' on a Person can only be another Person; it can't be a Brand, a WebSite, or a simple URL), and it's driven heavily by Google (most people working on it work at Google).

I ended up with compromises.

<html lang="en"  prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article#">
    <title>A piece of Powerscourt Waterfall - petermolnar.net</title>
<!-- JSON-LD as alternative -->
    <link rel="alternate" type="application/json" title="a-piece-of-powerscourt-waterfall JSON-LD" href="https://petermolnar.net/a-piece-of-powerscourt-waterfall/index.json" />
<!-- Open Graph vocabulary RDFa -->
    <meta property="og:title" content="A piece of Powerscourt Waterfall" />
    <meta property="og:type" content="article" />
    <meta property="og:url" content="https://petermolnar.net/a-piece-of-powerscourt-waterfall/" />
    <meta property="og:description" content="" />
    <meta property="article:published_time" content="2017-11-09T18:00:00+00:00" />
    <meta property="article:modified_time" content="2019-01-05T11:52:47.543053+00:00" />
    <meta property="article:author" content="Peter Molnar (mail@petermolnar.net)" />
    <meta property="og:image" content="https://petermolnar.net/a-piece-of-powerscourt-waterfall/a-piece-of-powerscourt-waterfall_b.jpg" />
    <meta property="og:image:type" content="image/jpeg" />
    <meta property="og:image:width" content="1280" />
    <meta property="og:image:height" content="847" />
<!-- the rest of meta and header elements -->
<!-- followed by the content, with microformats v1 and v2 markup -->

HTML provides an interesting piece of functionality, rel=alternate. It is meant to point to a representation of the same content in another format; the most common use is links to RSS and Atom feeds.

I don't know if Google will consume the JSON-LD alternate format, but it's there, and anyone can easily use it.

As for RDFa, I turned to meta elements. Unlike with JSON-LD, I decided to use the extremely simple vocabulary of Open Graph - at least Facebook is known to consume that.

The tragedy of this whole story: HTML5 has so many tags that it should be possible to do structured data without any of the things above.

My content is now marked up with microformats v1 and v2, accompanied by Open Graph RDFa in meta elements and a JSON-LD alternate. This way it's simple, but compatible enough for most cases.

  1. http://web.archive.org/web/20190211232147/https:/twitter.com/csarven/status/1091314310465421312

  2. https://search.google.com/structured-data/testing-tool

  3. https://github.com/petermolnar/nasg/commit/9c749f4591333744588bdf183b22ba638babcb20

  4. https://www.ampproject.org/

  5. https://web.archive.org/web/20190203123749/https://twitter.com/RubenVerborgh/status/1092029740364587008

  6. http://manu.sporny.org/2014/json-ld-origins-2/


Gopher? Gopher.

"BBS The Documentary" from Jason Scott1 showed me a world I never touched, never experienced - Eastern Europe and dial-up in the 80s... we didn't even have a phone line at home until the early 90s. So I eagerly started digging into how to set up a BBS, to at least get a minor feel of the era of WarGames2, only to realize I'd most probably need to write the whole thing from scratch. Not that it wouldn't be fun, but it wouldn't be enough fun.

Soon I forgot about it, until about a week ago, when an unusual entry popped up on Hacker News3: We must revive Gopherspace4 - from 2017.

The basis of the entry is how ugly the web has become, with all the tracking, ads, and attention-driven social media, and it puts this in contrast with the purity of Gopher. HTTP and HTML are absolutely fantastic pieces of engineering - but they have indeed become bloated and abused. Gopher, on the other hand, is time travel, to a time when a global network was completely new.

After reading a bit about the Gopher protocol5, I have to say: of course it's pure; it needs to be compared with HTTP 1.0 and HTML 1, because it never got a 2.0. It certainly has that oldschool feeling of following links around and finding bottomless servers that have been sitting around for 20+ years, full of content.

I wanted to contribute to this tiny community of literally just hundreds of servers around the world.

The Python script6 I generate my website with uses markdown source files, and Pandoc7 creates nice HTML out of them. Apparently it can create 80-column-wrapped plain text just as easily. Setting up pygopherd8 is pretty straightforward as well.

The only difference from the docs you might find for pygopherd is that gophermap files don't need the i in front of ordinary text content.

An example snippet:

petermolnar.net's gopherhole - phlog, if you prefer

1article    /category/article   petermolnar.net 70
1journal    /category/journal   petermolnar.net 70
1note   /category/note  petermolnar.net 70
1photo  /category/photo petermolnar.net 70

will look like:

lynx browser rendering the gopherfile above
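The menu lines in the snippet above follow a fixed structure: a one-character item type glued to the display string, then tab-separated selector, host, and port. A small sketch of generating such lines; the helper is illustrative, not part of any real generator:

```javascript
/* Build one gophermap menu line: item type character + display string,
   then selector, host, and port separated by tabs. Illustrative only. */
function gophermapLine(itemType, display, selector, host, port) {
    return itemType + display + '\t' + selector + '\t' + host + '\t' + port;
}

/* '1' marks a submenu item, as in the snippet above */
var line = gophermapLine('1', 'article', '/category/article', 'petermolnar.net', 70);
console.log(line);
```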


article - petermolnar.net

0A journey to the underworld that is RDF        /web-of-the-machines/index.txt  petermolnar.net 70
I got into an argument on Twitter - it made me realize I don’t know
enough about RDF to argue about it. Afterwards I tried out a lot of
different ways to drew my own conclusions on RDF(a), microdata, JSON-LD,
vocabularies, schema.org, etc. In short: this one does not spark joy.
Irdf-it-does-not-spark-joy      /web-of-the-machines/rdf-it-does-not-spark-joy.jpg      petermolnar.net 70
Igsdtt_microdata_error_01       /web-of-the-machines/gsdtt_microdata_error_01.png       petermolnar.net 70
Igsdtt_microdata_error_02       /web-of-the-machines/gsdtt_microdata_error_02.png       petermolnar.net 70
Igsdtt_rdfa_error_01    /web-of-the-machines/gsdtt_rdfa_error_01.png    petermolnar.net 70
Igsdtt_rdfa_error_02    /web-of-the-machines/gsdtt_rdfa_error_02.png    petermolnar.net 70

0How to add themes to your website with manual and CSS prefers-color-scheme support     /os-theme-switcher-css-with-fallback/index.txt  petermolnar.net 70
prefers-color-scheme is a new CSS media query feature, which propagates
your OS level color preference. While it’s very nice, it’s way too new

lynx rendering my articles gophermap from the snippet above

There are good guides out there for setting up gopher content9, so there is really no need for one more - but if you do have any questions, feel free to get in touch.

  1. https://www.youtube.com/watch?v=mJgRHYw9-fU&list=PLgE-9Sxs2IBVgJkY-1ZMj0tIFxsJ-vOkv

  2. https://www.imdb.com/title/tt0086567/

  3. https://news.ycombinator.com/item?id=19178885

  4. https://box.matto.nl/revivegopher.html

  5. https://www.minnpost.com/business/2016/08/rise-and-fall-gopher-protocol/

  6. http://github.com/petermolnar/nasg

  7. http://pandoc.org/

  8. https://github.com/jgoerzen/pygopherd

  9. https://davebucklin.com/play/2018/03/31/how-to-gopher.html


Snowy rooftops of the living accommodations at Dojo Stara Wies

Shutter speed
1/50 sec
Focal length (as set)
85.0 mm
ISO 800
K or M Lens

A year after our previous visit1 we repeated our Spring Retreat with Pa-Kua2 at the magnificent dojo at Stara Wieś. In contrast to last year's glorious ~23°C, the first morning was gloomy, with sleet and snow. At least it was different...

  1. https://petermolnar.net/dawn-at-dojo-stara-wies/

  2. https://www.pakuauk.com/


Experiences with the Pa-Kua International League

Note: this is not an official statement in any form; it's merely my own, personal view and opinion on Pa-Kua.

Eons ago I did ITF Taekwondo, followed by some no-name branch of karate, then ITF Taekwondo again, then years of medieval re-enactment with swords and archery, then a few months of Yang-style Tai-Chi, including the martial aspect and their broadsword.

I did Tai-Chi for the shortest time, but it left the most forceful impression on me - mainly due to my teacher, Johnny Burke1 - because it felt whole; it radiated out into my everyday life. Karate was rather mindless, Taekwondo was way too competition-oriented, re-enactment was fun for a while but soulless, especially after knight fights became a thing. Unfortunately Johnny left Mei Quan, and I left London and Tai-Chi.

In 2017, as a company summer program, someone organised oriental archery for us. This led me to Pa-Kua2 and their traditional Chinese archery. There is a Hungarian man, Lajos Kassai3, who did his own research in the 80s in order to revive ancient Hungarian horseback archery and to recreate the recurve bow they used to use. I met people following his teachings, and his style shows vast similarities to the archery Pa-Kua teaches. Around 20094 China started to popularise folk archery as well - there are now people writing about and practising reprised Manchurian archery5, which also shares common techniques. While it's not the same, I have no doubt that the archery of Pa-Kua works, and that it is a Chinese style of archery.

Soon I joined their martial art classes as well; occasionally acrobatics and weapons.

There are countless wushu movies out there indicating there is, or there used to be, more to kung fu than movements: acupressure points, healing, philosophy, sometimes religion, and so on. But it seems like on their way out of China, many of these aspects fell off, and the world is now left with fighting styles without their foundation. There are exceptions - such as the aforementioned Mei Quan Academy of Tai-Chi in London, or, in my opinion, the Pa-Kua International League. I've mentioned archery, martial art, and weapons, but it also teaches Chinese medicine, massage, acupressure, acrobatics, etc., so unlike a traditional dojo, it offers a lot more.

The logo of Pa-Kua International League


Is Pa-Kua a Chinese martial art?

Pa-Kua split its teachings into disciplines. Some of these are based on traditional Chinese knowledge (energy, reflexology); others are infusions of mainly Chinese and other far Asian practice (acrobatics, edged weapons, martial art, sintony, cosmodynamics); yet others are mainly results of historical reconstruction (archery); whereas some are completely modern, for modern times (rhythm).

The main influence on the martial arts discipline - based on the actual elements being taught and some personal research - is certainly Chinese, but not strictly one specific Chinese style.

I saw videos calling Pa-Kua fake.

During the past decade some people have embraced the idiotic stance that MMA is the only efficient martial art. MMA is training gladiators.

Traditional martial arts were meant to be ways to kill fast and efficiently. They have changed since, especially the internal styles. Would Pa-Kua be efficient against MMA? No, it probably wouldn't. That's not the goal. It's not a hard, competition style; you should be comparing it to Xingyi, Bagua, Tai-Chi, and the other, mainly internal styles instead.

The goal is to practise, to find your balance, learn to control one's self in every aspect, both physically and mentally.

Going a bit further: the authenticity of a martial art is a whole spectrum of turmoil. A lot of Chinese styles were nearly wiped out, first in the 17th century, then in the mid 20th century. People tried to keep them alive, some by passing them strictly within a family - this resulted in hundreds, if not thousands, of streams of formerly organised styles6. It's not that surprising that you can't find someone on Google based on a pinyin version of a Chinese name, but it doesn't mean they never existed. Many villages in rural China only got electricity 10-15 years ago, let alone the monasteries in the mountains, and I seriously doubt historical paperwork was digitised at all. (I've been to villages and monasteries like this.) The problem goes way beyond this, by the way: finding translated Chinese knowledge is a massive pain, let alone origin stories, in a world where history is quite flexible.

The best option you have is to decide for yourself. Go; meet some actual high belts; talk to them, train with them. See what and how they teach, and decide for yourself.

I've heard that Pa-Kua is just a pyramid scheme.

When it comes to belts and ranks, it's an organisation.

The international school needs funding, and knowledge needs people who can dedicate their lives to teaching and research. Since there is no membership fee, all the activities that are controlled by the school - progress with belts, intensive courses, etc. - are paid directly to the school, which distributes the money the way it wants to. It's not that different from non-profit organisations.

Local practices are completely in the hands of the leading instructor/master. You pay them directly, they rent/own the building, etc. That is just like any standard dojo.

Pa-Kua has a Japanese uniform, so it can't be Chinese!

If you judge a school based on their clothing, you're doing it wrong.

Buying Chinese silk robes was a hard stunt anywhere before AliExpress, so I'm not going to blame anyone for utilising something more widely available: the karate gi.

Pa-Kua teaches katana, so it can't be Chinese!

Everyone knows that the katana is a Japanese weapon. What people don't know is that China had a lot of very similar weapons in the family of dao swords: changdao, dadao, miaodao, zhanmadao, wodao, etc7.

Chinese Swords by Paliandr0 on DeviantArt, https://www.deviantart.com/paliandr0/art/Chinese-Swords-481512284

Yes, for practical reasons, Pa-Kua utilizes katanas; the historical similarities between the weapons allow it to do so. The differences between these weapons are tiny, and katanas and bokken are far more accessible - and cheaper - than, for example, a zhanmadao.

As you progress, the weapons practice will soon incorporate knife(s), spear, baguadao, miaodao, jian, etc. as well.

As I mentioned at the beginning, I did European medieval re-enactment for years, and my main weapon was the one-handed straight sword. Encouraged by this, I took a jian course at Pa-Kua, and I have to admit it's a ridiculously different weapon, and extremely hard to handle. There are good reasons why it's taught at higher levels. The katana-like weapons are much more straightforward to learn - not to master, just to learn - which is probably why the school decided to start with those.

Is it true that you can buy (black) belts in Pa-Kua?

If you've done some kind of martial art, you've been conditioned to identify a belt with a certain degree of capability, and to expect that achieving a belt requires passing a physical exam with clearly defined requirements.

Here, the belts are mainly theory indicators: they show what can safely be taught to the wearer and what the wearer already knows in theory. It's completely normal if a green or grey belt Pa-Kua practitioner has never done a full contact fight.

You can achieve these belts through intensive courses: face-to-face training with multiple masters in a dense timeframe. You will most probably lack practice, but the theoretical knowledge will be there.

So the short answer: no, you can't simply buy belts, but you're allowed to participate in intensive training to gain them faster.

I'm not convinced.

If you're looking for something extremely orthodox, the school is not for you. Similarly, if you want to fight and beat people, do hard contact, or train with ex-soldiers, it's also not the place.

I met a few of the regional leaders, and they definitely have wide and interesting knowledge. To access this knowledge, you need to pay. This may not be the ideal, imagined way of learning, but it has always been like this, and making money from teaching is never easy8.


The Pa-Kua International League is not simply another martial art dojo: it offers the broad knowledge that used to accompany martial arts.

Did it start out as a fake? I'll never know. But in the 40 years since its establishment it has grown, and today there's a lot of proficiency within the school.

It's not strictly Chinese and has other far Asian influences.

It's expensive compared to other schools, and there are ways to progress mainly on theoretical knowledge, but you always get something for your tuition fee.

Belt colour doesn't indicate the same thing as in most Westernised martial arts.

The martial arts discipline is an internal style. Do not expect contact fights until far into the upper belts.

Every single high ranking member I met was talented and had a lot to offer. However, their main focus may not be martial arts, due to the split across disciplines, so don't judge anyone just by their martial arts skill. There are, and always were, scholar monks as well.

I'd encourage everyone to try the whole spectrum of Pa-Kua: try every discipline and get the full picture. Only after that decide whether it's for you or not.

If you disagree, agree, want to discuss, have questions, spotted a mistake, feel free to get in touch with me; contact options are at the bottom of the page.

  1. https://schoolofeverything.com/teacher/johnnyburke

  2. https://pakua.com

  3. https://en.wikipedia.org/wiki/Lajos_Kassai

  4. http://www.chinaarchery.org/archives/94

  5. http://www.manchuarchery.org/photographs

  6. http://thelastmasters.com/a-few-thoughts-on-emei-mountain-kung-fu/

  7. http://www.ancientpages.com/2018/09/19/deeper-look-into-chinese-swords-throughout-the-history-of-the-dynasties/

  8. http://time.com/4587078/kung-fu-martial-arts-hakka-hong-kong-preserve/


Snowy morning between the houses at Dojo Stara Wies

Shutter speed
1/50 sec
Focal length (as set)
85.0 mm
ISO 800
K or M Lens

Same morning as the previous image1 of the lovely dojo at Stara Wieś, with its curving roads across the fantasy Japanese village.

  1. https://petermolnar.net/stara-wies-dojo-snowy-rooftops


Snowy panorama of Dojo Stara Wies


When you expect the same weather one year apart at the same spot in Central Europe, it usually doesn't work out. I deliberately got up at 5am to make use of the incredible water surfaces next to the houses at Dojo Stara Wieś, only to realize that this time my companions were sleet, cold, and grey misery.

The truth is, the place is still beautiful, even if you're shivering in your bones.


Panorama of Dojo Stara Wies


Before saying goodbye to Stara Wieś, I wanted to make an image of the whole little village. It should have been made either earlier in the morning or much later, at sunset, but when you go there to train, you can't simply run off and leave the class to take a panorama; especially when the classes are up in the big building on the left, at the top of the hill.


The sauna building and the tea house of Dojo Stara Wies

Shutter speed
1/500 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

Along with the previous picture, this is another perspective on the tea house and the sauna building at Dojo Stara Wieś, with a lot of sunshine on a dazzlingly cold morning.


The tea house building of Dojo Stara Wies

Shutter speed
1/400 sec
Focal length (as set)
85.0 mm
ISO 80
K or M Lens

In an attempt to re-create a photo I mistakenly shot as video - and therefore only have in a small resolution1 - I got up early on the second day of our visit to Stara Wieś as well.

No rain, no sleet, lovely sunshine. And cold. And wavy water surfaces. As a result I wasn't able to re-shoot the image, but at least I found another perspective to show the surroundings of the tea house.

  1. https://petermolnar.net/dawn-at-dojo-stara-wies/


Rebuilding my home server on a tight budget

If you have an unlimited budget don't read on: get 2x4TB 2.5" SSDs and stick them in an old ThinkPad. I still believe it's the perfect home server.


Unfortunately I don't have an unlimited budget, rather a particularly limited one. I also had to put together a system that fits in a very tight space - England and its teeny flats - and has at least 4TB of reliable storage.

I had some spare parts: a 250GB 2.5" LiteON SSD, an ancient 64GB 2.5" Samsung 470 SSD, and 8, 4, and 2 GB DDR3 SODIMMs, but that 4TB meant I needed to think in 3.5" drives, at least 2 of them, to have a real ZFS mirror.

Considering my spend-as-little-as-possible budget, at first I caved in: I bought a QNAP NAS. I believed their rather convincing marketing about how advanced these things are. Well, they are not - at least the consumer ones aren't. I couldn't even find a way to display the raw S.M.A.R.T. state of the drives, let alone ZFS features. After a long read it turned out that all those nice features are enterprise-only. I ended up returning it the next day.

Back to the drawing board.

Places like mini-itx.com and pcpartpicker are absolutely invaluable tools when it comes to designing a computer from parts, but unfortunately they don't include old models or arcane, hard-to-come-by parts.

The main issue was the lack of space: all the shelves I could place it on were only 30cm deep. A long time ago I gave a Lian Li case a go, but it ended up so cramped inside that I had to give up back then. Also: the thinner the better. I couldn't believe nobody had ever made a 1U case that fits 2x3.5" drives - I knew it was possible, so there had to be something out there!

Then I finally found it. It exists, and it's called the inWin IW-RF100S1.

inWin IW-RF100S rack case

A 1U chassis with 1/3 of the normal depth, which can take a mini-ITX motherboard, 2x3.5" drives AND 2x2.5" drives, and has a built-in PSU! I've been looking for a case like this for about 4 years now.

Choosing drives was simple: a WD Red 4TB2 and a Seagate IronWolf 4TB3 - different brand, different batch, so there can't be any same-batch, fail-at-the-same-time problems.
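With the two drives in place, building the mirror itself is a single command. A minimal sketch - the pool name and device names here are assumptions; check yours with lsblk (Linux) or camcontrol devlist (FreeBSD) first:

```shell
# Create a mirrored pool named "tank" from the two 4TB drives
# (device names are hypothetical - verify your own before running this).
zpool create tank mirror /dev/ada1 /dev/ada2

# Confirm both halves of the mirror are ONLINE.
zpool status tank
```

A mirror halves usable capacity, but either drive can die without data loss - which is the whole point of picking two different brands and batches.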

Finding a motherboard on the other hand turned tricky and resulted in compromises. My original minimum requirements were at least 4xSATA, if Intel, then AES-NI support in the CPU, <25W TDP (so passive cooling would be enough), HDMI (I have no VGA connector capable display at home any more) and ECC RAM support.

There are nice Supermicro and ASRock Rack server boards with ECC support, but they only have VGA. They are also pricey and usually come without a CPU, so I'd need to hunt down a super rare and rather expensive Intel Xeon E3-1235L v5 for that 25W TDP. It's an insanely good CPU, but the motherboard and this processor would push the setup up by an extra £500 at least, more likely an extra £800, so I dropped the ECC RAM requirement. Yes, I know, my ZFS will be destroyed and my bloodline will be cursed.

In the end I settled on an ASRock J42054. It has a 10 W TDP, passive cooling, and fits the remaining requirements.

Notes and finds

The fans that come with the inWin are LOUD: 10k+ rpm, proper server level, vacuum-cleaner loudness. I bought a Gelid silent fan, but when it replaced the originals it was still disturbing, because the metal railing for the fans disrupts the airflow. I mounted it ~2cm further away with double-sided tape and it's now working fine. The fan makes an average 8°C difference, but even with completely passive cooling the CPU cores, running at max, topped out at ~50°C.

The PSU fan is surprisingly quiet despite its size. No need for hacks.

I added a tiny layer of foam under the drive trays, so no wobble possible at all.

I also added some tiny rubber legs to the case, but I'm leaning towards buying some anti-resonance domes.

The whole setup fits under an ordinary bookshelf.


Total: £421.49

Operating system

ZFS vs linux: the drama keeps rollin'

As my previous system, my laptop, and my main server all run Debian, I obviously installed Debian initially. The difference in this case was that I wanted to stick to Stable and not faff around with Unstable at all.

I've been having disappointing experiences with the linux community for years now, starting with pulseaudio, which led into systemd, but I managed to overcome those. Every single time I tried FreeBSD I got burnt on something, so I wasn't keen to compromise my main backup system again.


Until I started reading about the next gem from the linux kernel community - who, I now believe, are repeating the mistakes of everyone who has ever thought they sat at the top of the food chain - namely how a feature deprecation broke ZFS on Linux (ZoL).

My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?

- Greg Kroah-Hartman5

That is really not the community I believed linux was. It used to be the underdog, the one that always found a way to make things work, even if it was via reverse engineering closed source.

This, on its own, may not have been a breaking point, but something else happened. After building that ZoL mirror pool on Linux, I eventually decided to try FreeNAS, and I tried to import the pool. Except I couldn't.

The Linux hate is strong today. zpool feature "org.zfsonlinux:userobj_accounting (User/Group object accounting.)". They added Linux-only features to zpool - and made them active by default when creating pools with no special argument. WTF! #zfs

- Martin Cracauer6

ZoL enables a few extra features by default which are not yet supported by any other ZFS implementation, so if you want to mount such a pool elsewhere, you can only do it read-only - and even then it needs some trickery.
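If you end up in the same situation, the trickery looks roughly like this - the pool name "tank" is an assumption:

```shell
# List the features the pool has enabled/active; the Linux-only
# org.zfsonlinux:userobj_accounting feature shows up here.
zpool get all tank | grep feature@

# Import the ZoL-created pool read-only on FreeBSD/FreeNAS,
# since the active feature can't be honoured read-write.
zpool import -o readonly=on tank
```

A read-only import is enough to copy the data off, which is effectively what the linux-to-FreeNAS migration below boiled down to.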

ZFS is a brilliant filesystem and is one of the key, bare minimum requirements for my storage. It's more important than the operating system on top of it.

Enter FreeNAS

So I installed FreeNAS, rebuilt the mirror (with 4TB, the whole linux-FreeNAS dance took nearly a full 24 hours of copying data here, then there), and started getting familiar with the FreeNAS interface.

I have to admit that I like it. The new web GUI of FreeNAS 11 is clear, simple, and offers a lot of neat utility: cloud sync (so I can back up my cloud things on my NAS, not the other way around), alerting, even collectd is on by default.

The plugins and jails are very nice, and the virtual machine support is decent, so if I ever do have to run Debian again, I could.

The disk layout I ended up with:

For now, I'm happy.

Notes and finds


I've learnt a lot from this experience. Nothing in my former system was telling me there was something wrong with one of the drives, apart from ZFS - S.M.A.R.T. still says the disk is healthy. Trust ZFS.
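The two tools disagree because they check different things: S.M.A.R.T. reports the drive's own internal health counters, while ZFS checksums every block it reads. A quick way to compare verdicts, with hypothetical device and pool names:

```shell
# The drive's self-assessment - this can still say PASSED
# on a drive that is silently corrupting data.
smartctl -H /dev/ada1

# ZFS's end-to-end view - nonzero READ/WRITE/CKSUM counters,
# or files listed under "errors:", mean the drive is lying.
zpool status -v tank
```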

The FreeNAS GUI is nice and might even work for non-IT/non-sysadmin people. If you have a spouse who should have access to these as well, it's a highly appreciated factor.

Linux may have lived long enough to start becoming the villain.

  1. https://www.ipc.in-win.com/IW-RF100S

  2. https://www.wd.com/products/internal-storage/wd-red.html#WD40EFRX

  3. https://www.seagate.com/gb/en/internal-hard-drives/hdd/ironwolf/

  4. https://www.asrock.com/MB/Intel/J4205-ITX/index.us.asp

  5. https://marc.info/?l=linux-kernel&m=154714516832389&w=2

  6. https://twitter.com/MartinCracauer/status/1007399058355445760


Bamboo Lined Path

Shutter speed
1/80 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

Yunxi Zhujing is one of the less visited, smaller attractions of Hangzhou. It's easy to get there by bus from the West Lake, and it's a nice escape from the swarm of tourists at the big highlights nearby.


Bamboo Pattern

Shutter speed
1/80 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

Taken at Yunxi Zhujing, Hangzhou.


Hangzhou West Lake at night

Shutter speed
Focal length (as set)

The West Lake in Hangzhou is probably one of the most visited tourist spots in the whole of China. Apparently its true beauty only appears when there's mist and fog around - having had a clear night when we were there, that seems fairly true. Without the mystical cover, it's merely a large, though very nice, lake with a bright, modern view.


Greens of West Lake

Shutter speed
Focal length (as set)

The West Lake itself is a big, open water, but around it, especially in the corners, there are wonderful, smaller areas, filled with lush greens, and sprouting lotus.


Page created: Mon, Aug 19, 2019 - 09:05 PM GMT