Peter Molnar

Peter Molnar

Working on the Emerald Valley

Shutter speed
1/60 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

Swarms of tourists are usually a bad sign in a scenic area, but in this very case they at least show how well the Emerald Valley is maintained.

Sat, 07 Sep 2019 09:00:00 +0100

The greens of the Emerald Valley

Shutter speed
1/15 sec
Focal length (as set)
39.0 mm
ISO 80
HD PENTAX-DA 16-85mm F3.5-5.6 ED DC WR

There is a good reason why the Emerald (or Jade) Valley is called that: there are hundreds of different shades of green all around, including in the ponds themselves.

Fri, 06 Sep 2019 09:00:00 +0100

Emerald Valley

Shutter speed
1/4000 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

China has a system for ranking scenic areas, which takes capacity, beauty, historical importance, etc. into account. The AAAAA spots, like Huangshan, are the nicest, largest, and most crowded places in China, because everyone hears about them. Regardless, they are usually still worth visiting.

The AAAA category, on the other hand, contains lesser known, quieter places which still have a lot to offer. The Emerald (or Jade) Valley is one of these. It was developed soon after Crouching Tiger, Hidden Dragon, because it was one of the filming locations, but in the 20 years since the movie the area has certainly seen a drop in the mass of tourists. Despite all of that, it was still impossible to get pictures without people in them, so in 2019, in China, I gave up: I started composing my photos with the humans in the landscape and tried to make the best of it.

I took this picture with a fairly shallow depth of field, just to try something different in landscapes. I have mixed feelings about the outcome, but considering I selected and uploaded it, I lean towards the positive.

Thu, 05 Sep 2019 09:00:00 +0100

Huangshan - stairs and tourists

Shutter speed
1/30 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

Our 2019 visit to China was the first time ever that even with patient waiting it was impossible to get a view of the scenery without people. So I kept the people. At least they make the steepness of Huangshan visible.

Sat, 31 Aug 2019 09:00:00 +0100

Huangshan scenery 4

Shutter speed
1/800 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

The deeper you go into the rear valley, the wilder, the more alien the landscape becomes in Huangshan.

Fri, 30 Aug 2019 09:00:00 +0100

Huangshan scenery 3

Shutter speed
1/2000 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

These mountains, with their flowers and spectacular pines, are worthy of being the subject of so many Chinese landscape paintings.

Thu, 29 Aug 2019 09:00:00 +0100

Huangshan scenery 2

Shutter speed
1/125 sec
Focal length (as set)
16.0 mm
ISO 80
HD PENTAX-DA 16-85mm F3.5-5.6 ED DC WR

A view of the rear valley from the scenic route of Huangshan.

Wed, 28 Aug 2019 09:00:00 +0100

Huangshan scenery 1

Shutter speed
1/400 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

A view of the front valley from the scenic route of Huangshan.

Tue, 27 Aug 2019 09:00:00 +0100

Huangshan Panorama 2

Shutter speed
Focal length (as set)

It's nearly impossible to properly capture a landscape with "deep shadows and brilliant highlights", but I did my best. A non-HDR, multi-picture panorama stitched together with Hugin.

Mon, 26 Aug 2019 21:00:00 +0100

Huangshan Panorama 1

Shutter speed
Focal length (as set)

Huangshan is a unique place. It's also vast, surprisingly long to walk, and even after the Golden Week it's still packed with people. Regardless of that it's beautiful.

Fri, 23 Aug 2019 09:00:00 +0100

Tangkoucun 汤口村

Shutter speed
1/20 sec
Focal length (as set)
39.0 mm
ISO 6400
HD PENTAX-DA 16-85mm F3.5-5.6 ED DC WR

Tangkoucun ( 汤口村 ) is the small town at the foot of the Huangshan mountains - this is where the bus to the scenic area takes you. If you arrive in the dark, it's certainly not one of the most welcoming looking places, but it gets a lot nicer in daylight.

Wed, 21 Aug 2019 15:30:00 +0100

Greens of West Lake

Shutter speed
Focal length (as set)

The West Lake itself is a big, open body of water, but around it, especially in the corners, there are wonderful smaller areas filled with lush greens and sprouting lotus.

Wed, 14 Aug 2019 09:00:00 +0100

Hangzhou West Lake at night

Shutter speed
Focal length (as set)

The West Lake in Hangzhou is probably one of the most visited tourist spots in the whole of China. Apparently its true beauty only appears when there's mist and fog around - having had a clear night when we were there, that seems fairly true. Without the mystical cover, it's merely a large, although very nice, lake with a bright, modern view.

Tue, 13 Aug 2019 09:00:00 +0100

Bamboo Pattern

Shutter speed
1/80 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

Taken at Yunxi Zhujing, Hangzhou.

Mon, 12 Aug 2019 09:00:00 +0100

Bamboo Lined Path

Shutter speed
1/80 sec
Focal length (as set)
50.0 mm
ISO 80
K or M Lens

Yunxi Zhujing is one of the smaller, less visited attractions of Hangzhou. It's easy to get there by bus from the West Lake, and it's a nice escape from the swarms of tourists at the big highlights nearby.

Fri, 09 Aug 2019 09:00:00 +0100

Rebuilding my home server on a tight budget

If you have an unlimited budget, don't read on: get 2x4TB 2.5" SSDs and stick them in an old ThinkPad. I still believe that's the perfect home server.


Unfortunately I don't have an unlimited budget, rather a particularly limited one. I also had to put together a system that fits in a very tight space - England and its teeny flats - and has at least 4TB of reliable storage.

I had some spare parts: a 250GB 2.5" LiteON SSD, an ancient 64GB 2.5" Samsung 470 SSD, and 8, 4, and 2 GB DDR3 SODIMMs, but that 4TB meant I needed to think in terms of 3.5" drives, at least two of them, to have a real ZFS mirror.

Considering my spend-as-little-as-possible budget, at first I caved in: I bought a QNAP NAS. I believed their rather convincing marketing about how advanced these things are. Well, they are not; at least the consumer ones aren't. I couldn't even find a way to display the raw S.M.A.R.T. state of the drives, let alone ZFS features. After a long read it turned out that all those nice features are enterprise-only. I ended up returning it the next day.

Back to the drawing board.

Places like mini-itx.com and pcpartpicker are absolutely invaluable tools when it comes to designing a computer from parts, but unfortunately they don't include old models, or arcane, hard to come by parts.

The main issue was the lack of space: all the shelves I could place it on were only 30cm deep. A long time ago I gave a Lian Li case a go, but it ended up so cramped inside that I had to give up back then. Also: the thinner the better. I couldn't believe nobody had ever made a 1U case that fits 2x3.5" drives - I knew it was possible, so there had to be something out there!

Then I finally found it. It exists, and it's called the inWin IW-RF100S1.

inWin IW-RF100S rack case

A 1U chassis with 1/3 of the normal depth, which can take a mini-ITX motherboard, 2x3.5" drives AND 2x2.5" drives, and has a built-in PSU! I'd been looking for a case like this for about 4 years.

Choosing drives was simple: a WD Red 4TB2 and a Seagate IronWolf 4TB3 - different brands, different batches, so there can't be any same-batch, fails-at-the-same-time problems.

Finding a motherboard, on the other hand, turned out tricky and resulted in compromises. My original minimum requirements were at least 4xSATA; if Intel, then AES-NI support in the CPU; <25W TDP (so passive cooling would be enough); HDMI (I no longer have any VGA-capable display at home); and ECC RAM support.

There are nice Supermicro and ASRock Rack server boards with ECC support, but they only have VGA. They are also pricey and usually come without a CPU, so I'd need to hunt down a super rare and rather expensive Intel Xeon E3-1235L v5 for that 25W TDP. It's an insanely good CPU, but the motherboard and this processor would push the setup up by an extra £500 at least, more likely by an extra £800, so I dropped the ECC RAM requirement. Yes, I know, my ZFS will be destroyed and my bloodline will be cursed.

In the end I settled on an ASRock J42054. It has a 10W TDP, passive cooling, and fits the remaining requirements.

Notes and finds

The fans that come with the inWin are LOUD: 10k+ rpm, proper server level, vacuum cleaner loudness. I bought a Gelid silent fan, but mounted in place of the originals it was still disturbing, because the metal railing for the fans disrupts the airflow. I moved it ~2cm further away with double-sided tape and it now works fine. The fan makes an average 8°C difference, but even with completely passive cooling the CPU, running at max, stayed around 50°C.

The PSU fan is surprisingly quiet despite its size. No need for hacks.

I added a thin layer of foam under the drive trays, so no wobble is possible at all.

I also added some tiny rubber legs to the case, but I'm leaning towards buying some anti-resonance domes.

The whole setup fits under an ordinary bookshelf.


Total: £421.49

Operating system

ZFS vs linux: the drama keeps rollin'

As my previous system, my laptop, and my main server all run Debian, I obviously installed Debian initially. The difference in this case was that I wanted to stick to Stable and not faff around with Unstable at all.

I've had disappointing experiences with the Linux community for years now, starting with PulseAudio and leading into systemd, but I managed to overcome this. Every single time I tried FreeBSD I got burnt on something, so I wasn't keen to compromise my main backup system again.


Until I started reading about the next gem from the Linux kernel community - who, I now believe, are repeating the mistakes of everyone at the top of a food chain - namely how a feature deprecation broke ZFS on Linux (ZoL).

My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?

- Greg Kroah-Hartman5

That is really not the community I believed Linux was. It used to be the underdog, the one that always found a way to make things work on it, even if it was via reverse engineering closed source.

This, on its own, may not have been a breaking point, but something extra happened. After building that mirror ZoL pool on Linux I eventually decided to try FreeNAS, and I tried to import the pool. Except I couldn't.

The Linux hate is strong today. zpool feature "org.zfsonlinux:userobj_accounting (User/Group object accounting.)". They added Linux-only features to zpool - and made them active by default when creating pools with no special argument. WTF! #zfs

- Martin Cracauer6

ZoL enabled a few extra features by default which are not supported in any other ZFS implementation yet, so if you want to mount such a pool elsewhere, you can only do it read-only, and even then it needs some trickery.
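The feature named in the tweet above can be disabled at pool creation time, which keeps the pool importable by other implementations. A minimal sketch, assuming a ZoL version that supports per-feature flags on `zpool create`; the helper function and device paths are mine, not part of any ZFS tooling:

```python
# Sketch: build a `zpool create` argv that disables the Linux-only
# userobj_accounting feature, so the mirror pool stays importable
# on FreeBSD/FreeNAS. The helper name and disks are hypothetical.

def portable_pool_cmd(pool, disks):
    """Return the zpool argv for a mirror pool with the
    ZoL-only userobj_accounting feature disabled."""
    return (
        ["zpool", "create",
         "-o", "feature@userobj_accounting=disabled",
         pool, "mirror"]
        + list(disks)
    )

cmd = portable_pool_cmd("tank", ["/dev/sda", "/dev/sdb"])
print(" ".join(cmd))
# For a pool that already has the feature active, a read-only
# import (`zpool import -o readonly=on <pool>`) is the fallback.
```

This only builds the command line; actually running it, of course, requires root and real disks.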

ZFS is a brilliant filesystem and is one of the key, bare minimum requirements for my storage. It's more important than the operating system on top of it.

Enter FreeNAS

So I installed FreeNAS, rebuilt the mirror (with 4TB of data, the whole Linux-FreeNAS dance took nearly a full 24 hours of copying data here, then there), and started getting familiar with the FreeNAS interface.

I have to admit that I like it. The new web GUI of FreeNAS 11 is clear and simple, and offers a lot of neat utilities: cloud sync (so I can back up my cloud things on my NAS, not the other way around), alerting; even collectd is on by default.

The plugins and jails are very nice, and the virtual machine support is decent, so if I ever do have to run Debian again, I could.

The disk layout I ended up with:

For now, I'm happy.

Notes and finds


I've learnt a lot from this experience. Nothing in my former system told me there was something wrong with one of the drives apart from ZFS - S.M.A.R.T. still said the disk was healthy. Trust ZFS.

The FreeNAS GUI is nice and might even work for non-IT/non-sysadmin people. If you have a spouse who should have access to these as well, it's a highly appreciated factor.

Linux may have lived long enough to start becoming the villain.

  1. https://www.ipc.in-win.com/IW-RF100S

  2. https://www.wd.com/products/internal-storage/wd-red.html#WD40EFRX

  3. https://www.seagate.com/gb/en/internal-hard-drives/hdd/ironwolf/

  4. https://www.asrock.com/MB/Intel/J4205-ITX/index.us.asp

  5. https://marc.info/?l=linux-kernel&m=154714516832389&w=2

  6. https://twitter.com/MartinCracauer/status/1007399058355445760

Fri, 21 Jun 2019 18:00:00 +0100

The tea house building of Dojo Stara Wies

Shutter speed
1/400 sec
Focal length (as set)
85.0 mm
ISO 80
K or M Lens

On my attempt to re-create a photo I mistakenly shot as video - and therefore only have in a small resolution1 - I got up early on the second day of our visit to Stara Wieś as well.

No rain, no sleet, lovely sunshine. And cold. And wavy water surfaces. As a result I wasn't able to re-shoot the image, but at least I found another perspective to show the surroundings of the tea house.

  1. https://petermolnar.net/dawn-at-dojo-stara-wies/

Fri, 03 May 2019 21:00:00 +0100

The sauna building and the tea house of Dojo Stara Wies

Shutter speed
1/500 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

Alongside the previous picture, this is another perspective on the tea house and the sauna building at Dojo Stara Wieś, with a lot of sunshine on a dazzlingly cold morning.

Wed, 01 May 2019 21:00:00 +0100

Panorama of Dojo Stara Wies

Shutter speed
Focal length (as set)

Before saying goodbye to Stara Wieś, I wanted to make an image of the whole little village. It should have been made either earlier in the morning or much later, at sunset, but when you go there to train, you can't simply run off and leave the class to take a panorama - especially when the classes are up in the big building on the left, at the top of the hill.

Mon, 29 Apr 2019 20:00:00 +0100

Snowy panorama of Dojo Stara Wies

Shutter speed
Focal length (as set)

When you expect the same weather one year apart at the same spot in Central Europe, it usually doesn't work out. I deliberately got up at 5am to make use of the incredible water surfaces next to the houses at Dojo Stara Wieś, only to realize that this time my companions were sleet, cold, and grey misery.

The truth is, the place is still beautiful, even when you're chilled to the bone.

Sat, 27 Apr 2019 19:00:00 +0100

Snowy morning between the houses at Dojo Stara Wies

Shutter speed
1/50 sec
Focal length (as set)
85.0 mm
ISO 800
K or M Lens

Same morning as the previous image1 of the lovely dojo at Stara Wieś, with its curving roads across the fantasy Japanese village.

  1. https://petermolnar.net/stara-wies-dojo-snowy-rooftops

Thu, 25 Apr 2019 21:00:00 +0100

Experiences with the Pa-Kua International League

Note: this is not an official statement in any form; it's merely my own, personal view and opinion on Pa-Kua.

Eons ago I did ITF Taekwondo, followed by some no-name branch of karate, then ITF Taekwondo again, then years of medieval re-enactment with swords and archery, then a few months of Yang style Tai-Chi, including the martial aspect and their broadsword.

I did the Tai-Chi for the shortest time, but it left the most forceful impression on me - mainly due to my teacher, Johnny Burke1 - because it felt whole; it radiated out into my everyday life. Karate was rather mindless, Taekwondo was way too competition oriented, and re-enactment was fun for a while, but soulless, especially after knight fights became a thing. Unfortunately Johnny left Mei Quan, and I left London and Tai-Chi.

In 2017, as a company summer program, someone organised oriental archery for us. This led me to Pa-Kua2 and their traditional Chinese archery. There is a Hungarian man, Lajos Kassai3, who did his own research in the 80s in order to revive ancient Hungarian horseback archery and to re-create a version of the recurve bow they used to use. I met people following his teachings, and it shows vast similarities to the archery Pa-Kua teaches. Around 20094 China started to popularise folk archery as well - there are now people writing about and practising revived Manchurian archery5, which also shares common techniques. While it's not the same, I have no doubt that the archery of Pa-Kua works, and that it is a Chinese style of archery.

Soon I joined their martial art classes as well; occasionally acrobatics and weapons.

There are countless wushu movies out there indicating there is, or there used to be, more to kung fu than movements: acupressure points, healing, philosophy, sometimes religion, and so on, but it seems that on their way out of China many of these aspects fell off, and the world is now left with fighting styles without their foundation. There are exceptions - such as the aforementioned Mei Quan Academy of Tai-Chi in London, or, in my opinion, the Pa-Kua International League. I've mentioned archery, martial arts, and weapons, but it also teaches Chinese medicine, massage, acupressure, acrobatics, etc., so unlike a traditional dojo, it offers a lot more.

The logo of Pa-Kua International League


Is Pa-Kua a Chinese martial art?

Pa-Kua splits its teachings into disciplines. Some of these are based on traditional Chinese knowledge (energy, reflexology); others are infusions of mainly Chinese and other Far Asian practices (acrobatics, edged weapons, martial art, sintony, cosmodynamics); yet others are mainly results of historical reconstruction (archery); whereas some are completely modern, for modern times (rhythm).

The main influence on the martial arts discipline - based on the actual elements being taught and some personal research - is certainly Chinese, but not strictly one specific Chinese style.

I saw videos calling Pa-Kua fake.

During the past decade some people have embraced the idiotic stance that MMA is the only efficient martial art. MMA is training gladiators.

Traditional martial arts were meant to be ways to kill fast and efficiently. They have changed since, especially the internal styles. Would Pa-Kua be efficient against MMA? No, it probably wouldn't. That's not the goal. It's not a hard, competition style; you should be comparing it to Xingyi, Bagua, Tai-Chi, and the other, mainly internal styles instead.

The goal is to practise, to find your balance, and to learn to control yourself in every aspect, both physically and mentally.

Going a bit further: the authenticity of a martial art is a whole spectrum of turmoil. A lot of Chinese styles were nearly wiped out, first in the 17th century, then in the mid 20th century. People tried to keep them alive, some of them by passing them on strictly within a family - this resulted in hundreds, if not thousands, of streams of formerly organised styles6. It's not that surprising to be unable to find someone based on a pinyin version of a Chinese name on Google, but it doesn't mean they never existed. Many villages in rural China only got electricity 10-15 years ago, let alone the monasteries in the mountains, and I seriously doubt historical paperwork was digitised at all. (I've been to villages and monasteries like this.) The problem goes way beyond this, by the way; finding translated Chinese knowledge is a massive pain, let alone origin stories, in a world where history is quite flexible.

The best option you have is to decide for yourself. Go; meet some actual high belts; talk to them, train with them. See what and how they teach, and decide for yourself.

I've heard that Pa-Kua is just a pyramid scheme.

When it comes to belts and ranks, it's an organisation.

The international school needs funding, and knowledge needs people who can dedicate their lives to teaching and research. Since there is no membership fee, all the activities that are controlled by the school - progress with belts, intensive courses, etc. - are paid directly to the school, which distributes the money the way it wants to. It's not that different from non-profit organisations.

Local practices are completely in the hands of the leading instructor/master. You pay them directly, they rent/own the building, etc. That is just like any standard dojo.

Pa-Kua has Japanese uniform, so it can't be Chinese!

If you judge a school based on their clothing, you're doing it wrong.

Buying Chinese silk robes was a hard stunt anywhere before AliExpress, so I'm not going to blame anyone for utilising something more widely available - the karate gi.

Pa-Kua teaches katana, so it can't be Chinese!

Everyone knows that the katana is a Japanese weapon. What people don't know is that China had a lot of very similar weapons in the family of dao swords: changdao, dadao, miaodao, zhanmadao, wodao, etc7.

Chinese Swords by Paliandr0 on DeviantArt, https://www.deviantart.com/paliandr0/art/Chinese-Swords-481512284

Yes, for practical reasons, Pa-Kua utilizes katanas; the historical similarities between the weapons allow it to do so. The differences between these weapons are tiny, and katanas and bokkens are far more accessible - and cheaper - than, for example, a zhanmadao.

As you progress, the weapons practice will soon incorporate knife(s), spear, baguadao, miaodao, jian, etc. as well.

As I mentioned at the beginning, I did European medieval re-enactment for years, and my main weapon was a one-handed straight sword. Encouraged by this I took a jian course at Pa-Kua, and I have to admit it's a ridiculously different weapon, and it's extremely hard to handle. There are good reasons why it's taught at higher levels. The katana-like weapons are much more straightforward to learn - not to master, just to learn - which is probably why the school decided to start with those.

Is it true that you can buy (black) belts in Pa-Kua?

If you've done some kind of martial art, you've been conditioned to identify a belt with a certain degree of capability, and to expect that to achieve a belt you need to pass a physical exam with clearly defined requirements.

Here, the belts are mainly theory-indicators. They show what can safely be taught to its wearer and what things the wearer knows in theory already. It's completely normal if a green or grey belt Pa-Kua practitioner has never done a full contact fight.

You can achieve these belts through intensive courses. These are face-to-face trainings with multiple masters in a dense timeframe. You will most probably lack practice, but the theoretical knowledge will be there.

So the short answer: no, you can't simply buy belts, but you're allowed to participate in intensive training to gain them faster.

I'm not convinced.

If you're looking for something extremely orthodox, the school is not for you. Similarly, if you want to fight and beat people, do hard contact, train with ex-soldiers, it's also not the place.

I met a few of the regional leaders, and they definitely have a wide and interesting knowledge. To access this knowledge, you need to pay. This may not be the ideal, imagined way of learning, but it's always been like this, and making money from teaching is never easy8.


The Pa-Kua International League is not simply another martial art dojo: it offers a broad knowledge that used to accompany martial arts.

Did it start out as a fake? I'll never know. But in the 40 years since its establishment it has grown, and today there's a lot of proficiency within the school.

It's not strictly Chinese and has other far Asian influences.

It's expensive compared to other schools, and there are ways to progress mainly on theoretical knowledge, but you always get something for your tuition fee.

Belt colour doesn't indicate the same thing as in most Westernised martial arts.

The martial arts discipline is an internal style. Do not expect contact fights until far into upper belts.

Every single high ranking member I met was talented and had a lot to offer. However, their main focus may not be martial arts, due to the split across disciplines, so don't judge anyone just by their martial arts skill. There are, and always were, scholar monks as well.

I'd encourage you to try the whole spectrum of Pa-Kua: try every discipline and get the full picture. Only after that should you decide whether it's for you or not.

If you disagree, agree, want to discuss, have questions, spotted a mistake, feel free to get in touch with me; contact options are at the bottom of the page.

  1. https://schoolofeverything.com/teacher/johnnyburke

  2. https://pakua.com

  3. https://en.wikipedia.org/wiki/Lajos_Kassai

  4. http://www.chinaarchery.org/archives/94

  5. http://www.manchuarchery.org/photographs

  6. http://thelastmasters.com/a-few-thoughts-on-emei-mountain-kung-fu/

  7. http://www.ancientpages.com/2018/09/19/deeper-look-into-chinese-swords-throughout-the-history-of-the-dynasties/

  8. http://time.com/4587078/kung-fu-martial-arts-hakka-hong-kong-preserve/

Wed, 24 Apr 2019 14:00:00 +0100

Snowy rooftops of the living accommodations at Dojo Stara Wies

Shutter speed
1/50 sec
Focal length (as set)
85.0 mm
ISO 800
K or M Lens

A year after our previous visit1, we repeated our Spring Retreat with Pa-Kua2 at the magnificent dojo at Stara Wieś. In contrast to last year's glorious ~23°C, the first morning was gloomy, with sleet and snow. At least it was different...

  1. https://petermolnar.net/dawn-at-dojo-stara-wies/

  2. https://www.pakuauk.com/

Tue, 23 Apr 2019 21:00:00 +0000

Gopher? Gopher.

"BBS The Documentary" from Jason Scott1 showed me a world I never touched, never experienced - Eastern Europe and dial up in the 80s... we didn't even have a phone line until the early 90s at home. So I eagerly started digging on how to set up a BBS, to at least get a minor feel from the time of WarGames2, only to realize, I'd most probably need to write the whole thing from scratch. Not that is wouldn't be fun, but it wouldn't be enough fun.

Soon I forgot about it, until about a week ago an unusual entry popped up on Hacker News3: We must revive Gopherspace4 - from 2017.

The entry describes how ugly the web has become with all the tracking, ads, and attention-driven social media, and puts it in contrast with the purity of Gopher. HTTP and HTML are absolutely fantastic pieces of engineering - but indeed they have become bloated and abused. Gopher, on the other hand, is time travel, to a time when a global network was completely new.

After reading a bit about the Gopher protocol5, I have to say: of course it's pure; it needs to be compared with HTTP 1.0 and HTML 1, because it never got a 2.0. It certainly has that oldschool feeling of following links around, finding bottomless servers that have been sitting around for 20+ years full of content.

I wanted to contribute to this tiny community of literally just hundreds of servers around the world.

The Python script6 I generate my website with uses markdown source content files, and Pandoc7 creates nice HTML out of them. Apparently it can also create 80-column wrapped plain text just as easily. Setting up pygopherd8 is pretty straightforward as well.
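The pandoc invocation for that plain-text output is short. A sketch of how a generator might call it (the paths are made up for the example; only the pandoc flags matter):

```python
# Sketch: build the pandoc argv that turns a markdown source into
# 80-column wrapped plain text, suitable for serving over gopher.
# File paths here are hypothetical examples.
import subprocess

def pandoc_plain_cmd(src, dst, columns=80):
    """argv for pandoc: markdown in, wrapped plain text out."""
    return ["pandoc", "--from", "markdown", "--to", "plain",
            "--columns", str(columns), "--output", dst, src]

cmd = pandoc_plain_cmd("content/index.md", "gopher/index.txt")
# subprocess.run(cmd, check=True)  # uncomment where pandoc is installed
```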

The only difference from the docs you might find is that, in the case of pygopherd, the gophermap files don't need the i in front of ordinary text content.

An example snippet:

petermolnar.net's gopherhole - phlog, if you prefer

1article    /category/article   petermolnar.net 70
1journal    /category/journal   petermolnar.net 70
1note   /category/note  petermolnar.net 70
1photo  /category/photo petermolnar.net 70

will look like:

lynx browser rendering the gopherfile above


article - petermolnar.net

0A journey to the underworld that is RDF        /web-of-the-machines/index.txt  petermolnar.net 70
I got into an argument on Twitter - it made me realize I don’t know
enough about RDF to argue about it. Afterwards I tried out a lot of
different ways to drew my own conclusions on RDF(a), microdata, JSON-LD,
vocabularies, schema.org, etc. In short: this one does not spark joy.
Irdf-it-does-not-spark-joy      /web-of-the-machines/rdf-it-does-not-spark-joy.jpg      petermolnar.net 70
Igsdtt_microdata_error_01       /web-of-the-machines/gsdtt_microdata_error_01.png       petermolnar.net 70
Igsdtt_microdata_error_02       /web-of-the-machines/gsdtt_microdata_error_02.png       petermolnar.net 70
Igsdtt_rdfa_error_01    /web-of-the-machines/gsdtt_rdfa_error_01.png    petermolnar.net 70
Igsdtt_rdfa_error_02    /web-of-the-machines/gsdtt_rdfa_error_02.png    petermolnar.net 70

0How to add themes to your website with manual and CSS prefers-color-scheme support     /os-theme-switcher-css-with-fallback/index.txt  petermolnar.net 70
prefers-color-scheme is a new CSS media query feature, which propagates
your OS level color preference. While it’s very nice, it’s way too new
lynx rendering my articles gophermap from the snippet above
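Selector lines like the ones in the snippets above follow a simple tab-separated shape: item type, display string, selector, host, port. A minimal sketch of generating them (the helper name is mine; host and paths are taken from the examples above):

```python
# Sketch: building gophermap selector lines. Type "1" is a submenu,
# "0" a plain text file, "I" an image; the four fields are separated
# by tabs: display string, selector, host, port.

HOST, PORT = "petermolnar.net", 70

def gophermap_line(itemtype, display, selector, host=HOST, port=PORT):
    return f"{itemtype}{display}\t{selector}\t{host}\t{port}"

lines = [
    gophermap_line("1", "article", "/category/article"),
    gophermap_line("0", "A journey to the underworld that is RDF",
                   "/web-of-the-machines/index.txt"),
]
print("\n".join(lines))
```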

There are good guides out there for setting up gopher content9, so there is really no need for one more; but if you do have any questions, feel free to get in touch.

  1. https://www.youtube.com/watch?v=mJgRHYw9-fU&list=PLgE-9Sxs2IBVgJkY-1ZMj0tIFxsJ-vOkv

  2. https://www.imdb.com/title/tt0086567/

  3. https://news.ycombinator.com/item?id=19178885

  4. https://box.matto.nl/revivegopher.html

  5. https://www.minnpost.com/business/2016/08/rise-and-fall-gopher-protocol/

  6. http://github.com/petermolnar/nasg

  7. http://pandoc.org/

  8. https://github.com/jgoerzen/pygopherd

  9. https://davebucklin.com/play/2018/03/31/how-to-gopher.html

Tue, 26 Feb 2019 22:00:00 +0000

A journey to the underworld that is RDF

working with RDF - this one does not spark joy

I want to say it all started with a rather offensive tweet1, but it wouldn't be true. No, it all started with my curiosity to please the Google Structured Data testing tool2. Last year, in August, I added microdata3 to my website - it was more or less straightforward to do so.

Except it was ugly and, after half a year, I can safely say, quite useless. I got no pretty Google cards - maybe because I refuse to do AMP4, maybe because I'm not important enough, who knows. But by the time I was reaching this conclusion, that aforementioned tweet happened, and I got caught up in Semantic Hell, also known as arguing about RDF.

The first time I heard about the Semantic Web coincided with the dawn of the web 2.0 hype, so it wasn't hard to dismiss it when so much else was happening. I was rather new to the whole web thing, and most of the academic discussions weren't even available in Hungarian.

In that thread it was pointed out to me that what I have on my site is microdata, not RDFa - I genuinely thought they were more or less interchangeable: both can use the same vocabulary, so it shouldn't really matter which HTML properties I use, should it? Well, it does, but I believe the basis for my confusion can be found in the microdata description: it was an initiative to make RDF simple enough for people making websites.

If you're just as confused as I was, in my own words:

With all this now known, I tried to mark up my content as microformats v1, microformats v2, and RDFa.

I already had errors with microdata...

Interesting, it has some problems...
it says URL for org is missing... it's there. Line 13.

...but those errors then became ever more peculiar problems with RDFa...

Undefined type, eh?

... while microformats v1 was parsed without any glitches. Sidenote: microformats (v1 and v2), unlike the previous things, are extra HTML class data, and v1 is still parsed by most search engines.

At this point I gave up on RDFa and moved over to test JSON-LD.

It's surprisingly easy to represent data in JSON-LD with the schema.org context (vocabulary - why on earth was vocabulary renamed to context?! Oh. Because we're in hell.). There's a long entry about why JSON-LD happened6, and it has a lot of reasonable points.

What it forgets to mention is that JSON-LD is an invisible duplication of what either already is, or should be, in the HTML. It's a decent way to store and exchange data, but not to present it to someone at the other end of the cable.

The most common JSON-LD vocabulary, Schema.org, has its own interesting world of problems. It wants to be a single point of entry - one gigantic vocabulary for anything on the web - a humongous task and a noble goal. However, it's still lacking a lot of definitions (ever tried to represent a resume with it?), it has weird quirks ('follows' on a Person can only be another Person; it can't be a Brand, a WebSite, or a simple URL), and it's driven heavily by Google (most people working on it work at Google).

I ended up with compromises.

<html lang="en"  prefix="og: http://ogp.me/ns# article: http://ogp.me/ns/article#">
    <title>A piece of Powerscourt Waterfall - petermolnar.net</title>
<!-- JSON-LD as alternative -->
    <link rel="alternate" type="application/json" title="a-piece-of-powerscourt-waterfall JSON-LD" href="https://petermolnar.net/a-piece-of-powerscourt-waterfall/index.json" />
<!-- Open Graph vocabulary RDFa -->
    <meta property="og:title" content="A piece of Powerscourt Waterfall" />
    <meta property="og:type" content="article" />
    <meta property="og:url" content="https://petermolnar.net/a-piece-of-powerscourt-waterfall/" />
    <meta property="og:description" content="" />
    <meta property="article:published_time" content="2017-11-09T18:00:00+00:00" />
    <meta property="article:modified_time" content="2019-01-05T11:52:47.543053+00:00" />
    <meta property="article:author" content="Peter Molnar (mail@petermolnar.net)" />
    <meta property="og:image" content="https://petermolnar.net/a-piece-of-powerscourt-waterfall/a-piece-of-powerscourt-waterfall_b.jpg" />
    <meta property="og:image:type" content="image/jpeg" />
    <meta property="og:image:width" content="1280" />
    <meta property="og:image:height" content="847" />
<!-- the rest of meta and header elements -->
<!-- followed by the content, with microformats v1 and v2 markup -->

HTML provides an interesting functionality, rel=alternate. It's meant to point to a representation of the same data in another format; the most common use is links to RSS and Atom feeds.

I don't know if Google will consume the JSON-LD alternate format, but it's there, and anyone can easily use it.
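Consuming that alternate is trivial: a client only has to scan the head for link elements with rel=alternate. A Python sketch with the stdlib HTML parser, fed the markup from the snippet above:

```python
from html.parser import HTMLParser

class AltFinder(HTMLParser):
    """Collect (type, href) pairs from <link rel="alternate"> elements."""

    def __init__(self):
        super().__init__()
        self.alternates = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'link' and a.get('rel') == 'alternate':
            self.alternates.append((a.get('type'), a.get('href')))

head = ('<link rel="alternate" type="application/json" '
        'href="https://petermolnar.net/a-piece-of-powerscourt-waterfall/index.json" />')
finder = AltFinder()
finder.feed(head)
```

After feeding the head, finder.alternates holds the advertised alternate formats, JSON-LD included.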

As for RDFa, I turned to meta elements. Unlike with JSON-LD, I decided to use the extremely simple vocabulary of Open Graph - at least Facebook is known to consume that.

The tragedy of this whole story: HTML5 has so many tags that it should be possible to do structured data without any of the things above.

My content is now:

This way it's simple, but compatible enough for most cases.

  1. http://web.archive.org/web/20190211232147/https:/twitter.com/csarven/status/1091314310465421312

  2. https://search.google.com/structured-data/testing-tool

  3. https://github.com/petermolnar/nasg/commit/9c749f4591333744588bdf183b22ba638babcb20

  4. https://www.ampproject.org/

  5. https://web.archive.org/web/20190203123749/https://twitter.com/RubenVerborgh/status/1092029740364587008

  6. http://manu.sporny.org/2014/json-ld-origins-2/

Sun, 10 Feb 2019 20:10:00 +0000

Influential reads online: finds about the old Web

How the Blog Broke the Web

Back then, we didn’t have platforms or feeds or social networks or… blogs.

We had homepages.

The backgrounds were grey. The font, Times New Roman. Links could be any color as long as it was medium blue. The cool kids didn’t have parallax scrolling… but they did have horizontal rule GIFs.


This little piece stirred a maelstrom in my head, because it's damn right.

The argument goes like this: before streams or feeds - chronological ordering - websites had their own library system, invented and maintained by the site owner. This resulted in genuinely unique sites, not just at the theme level, but on a fundamental layer, as a reflection of how their creators were thinking.

That said, there are, of course, valid uses for chronological ordering, but for some content, maintaining a table of contents could make a much better structure. Unfortunately making content machine-readable by hand is painful.

It would be interesting to see a reprise of home page builders: not a by-default blog, but an oldschool website builder, with up to date features in the background.

Why Do All Websites Look the Same?

The internet suffers from a lack of imagination, so I asked my students to redesign it


It ties in to the first one: most sites look the same, and that's not how it's supposed to be.

It contains wonderful ideas on how certain websites could look and be completely unique, and, in my opinion, people with personal homepages should consider putting energy and time (and a lot of swearing) into making their online home truly their own as well.

The Slow Web (plus: the modern experience of film-watching)

[W]e need a Slow Internet Movement along the lines of Slow Food and Slow Cinema, if we're really going to take advantage of the archival nature of the Web.


I missed out on the golden days of the blogosphere. Not being a native English speaker, being occupied with a community around 2007, and a couple of other reasons contributed to this, so when I stumbled upon Rebecca's site, I was reminded how wonderfully packed personal websites used to be with text content. It's a joyful find, with things dating back to 1999, and with a plethora of now completely dead links.

As for this very entry: yes. We do need a slow web, one where content is generated not for quick fame and likes, but for the love of the topic.

404 Page Not Found - The internet feeds on its own dying dreams

For with the collapse of the high-modernist ideology of style—what is as unique and unmistakable as your own fingerprints, as incomparable as your own body [e.g. MySpace, Geocities pages] . . . the producers of culture [big Internet companies] have nowhere to turn but to the past: the imitation of dead styles [glitter graphics, Geocities], speech through all the masks and voices stored up in the imaginary museum of a now global culture [the whole internet].


A recent find - this is an article from 2019. A devastating, sour summary, and a frightening reflection on a 1991 essay, describing how recycled nostalgia eats the very thing itself. It also taught me the phrase and the movement of vaporwave.

Every single time I try to revive or revisit something I missed out on in the past - BBS systems, for example - I find that they were, in fact, incredibly hard to deal with. They required deep understanding, you had to build a serious amount of things yourself, and it took a long time. While some aspects of this are wonderful - you'll certainly learn, for example - from another perspective, it's impossible to get people involved these days if there isn't a simple way to start.

Recycling old things is not inherently bad, but in the case of the internet, there isn't a simple way to use them without overshadowing the original.

why i’ll never delete my LiveJournal

In truth, I like who I was on the Internet better when I was young and brash though I know not how to do that anymore (and wouldn’t want the burden of it, honestly). My LJ is a space I guard in defense of my younger, wilder, more whimsical self. To alter or destroy this place would mean losing a version of me with an honesty I can no longer afford.


I never thought I'd find an article that summarizes my feelings and drifting thoughts on what is now lost from the internet. Being online in the early 2000s meant retreating from the world; it was another plane - not a connected world yet, but a text-based reality, away from the people you knew in your physical existence.

It even touches an extremely important aspect: we need to be reminded of how we used to be, and an unchanged, or archived, version of our ancient journals or websites is a good start.

Patient Zero of the selfie age: Why JenniCam abandoned her digital life

"I keep JenniCam alive not because I want to be watched, but because I simply don’t mind being watched."


In 1996, I was in elementary school, in Hungary, my English was enough to understand 2 stupid dogs1 and some of The Real Adventures of Jonny Quest2.

So when I bump into articles talking about a certain Aussie who set up a non-stop webcam in 1996 about her life, it feels like a lightning strike about things I never heard of.

I'm not completely certain why I wanted to add this to the entry. Maybe it's because it feels like history is just sort of repeating itself, but is becoming more smoke and mirrors with each iteration; maybe to recognise that the early internet already pioneered most things that became mainstream(ish) 20 years later.

Before Insta fame, there was Jenni

Be more, or less, like Jenni? That is something to decide for everyone for themselves.

  1. https://web.archive.org/web/19990508175315/http://cartoonnetwork.com/doc/2stupiddogs/index.html

  2. https://www.imdb.com/title/tt0115226/

Fri, 18 Jan 2019 13:00:00 +0000

Antico Borgo in Galtelli

Shutter speed
1/100 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

Galtelli, the old capital of Sardinia has extremely narrow and quite steep streets, with adorable places to stay at; this is one of them.

Thu, 10 Jan 2019 18:00:00 +0000

Dune di Piscinas

Shutter speed
1/160 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

One of the nicest little beaches on Sardinia, which we got to by a little walk, across the dunes.

Wed, 09 Jan 2019 18:00:00 +0000


Shutter speed
1/1600 sec
Focal length (as set)
130.0 mm
ISO 80
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

Spikes, dry dirt, heat - although it's a bit of an illusion. This was next to a stream, on an area which only gets wet during spring, but it was right next to fresh water.

Tue, 08 Jan 2019 18:00:00 +0000

Gorropu Gorge

Shutter speed
1/60 sec
Focal length (as set)
0.0 mm
ISO 80
K or M Lens

Gorropu Gorge is gigantic, peaceful, quiet, and quite steep to get to on foot. There are signs at the top of the hill, before the descent starts, that it's not a light walk, and I have to admit it's a decent climb down and then up, but it's worth it.

Due to the size of the gorge, it's hard to represent it in photos, so I decided to try to capture the colours that surround you when touring through it.

Mon, 07 Jan 2019 18:00:00 +0000

That is not a snake

Shutter speed
1/250 sec
Focal length (as set)
120.0 mm
ISO 80
HD PENTAX-DA 55-300mm F4-5.8 ED WR

That piece of driftwood genuinely scared me when I spotted it through the rocks. The moment you move or zoom closer, the illusion disappears.

Sun, 06 Jan 2019 18:00:00 +0000


Shutter speed
1/160 sec
Focal length (as set)
180.0 mm
ISO 400
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

This little fellow was staring at me after a rain in Sardinia, in a small forest of olive trees covering ancient ruins of disturbingly sharply designed sacred wells1.

  1. https://www.atlasobscura.com/places/well-of-santa-cristina

Sat, 05 Jan 2019 18:00:00 +0000

Not Ireland

Shutter speed
1/80 sec
Focal length (as set)
35.0 mm
ISO 80
smc PENTAX-DA 35mm F2.4 AL

This is a deceptive image: from all I know, this should be in Ireland. Reality says this is on a peninsula of Sardinia, below Tharros.

Fri, 04 Jan 2019 18:00:00 +0000

Mine, abandoned

Shutter speed
1/1600 sec
Focal length (as set)
87.5 mm
ISO 80
Tamron SP AF 70-200mm F2.8 Di LD [IF] Macro (A001)

There are a couple of abandoned mines in Sardinia - this one was turned into a museum, which would have required hours and a guided tour to see inside. We were not that interested in going inside, but the view certainly had a stereotypical lost wild west feeling to it.

Thu, 03 Jan 2019 23:09:00 +0000

How to add themes to your website with manual and CSS prefers-color-scheme support

Note: a commented version of the code is available as a Github Gist as well1

Ever since I've had a website, I've nearly always had it dark, but after reading a lot about reading text on displays, and just listening to opinions and stories, it seemed like people do prefer dark text on white background - light themes. So I tried it.

my website with its light theme in 2017

I felt weird and distant; it wasn't me. A couple of months ago I decided to switch my site back to a dark theme. Unfortunately it really doesn't work for everyone, which I can completely understand: I always wished I could tell sites that I want a dark version of them - not with browser addons, just by setting a preference.

My prayers got answered: the upcoming version of CSS media queries - level 5 - has a media feature, prefers-color-scheme2, which is exactly this - a setting based on your operating system preference. Let's not talk about the fact that this will become yet another fingerprinting method.

While it's still experimental, macOS Mojave with Safari Technology Preview 68 already supports it3. Unfortunately neither Windows nor linux browsers do, despite the preferred colour scheme4 option in Windows 10 and the :dark GTK3 theme option in Gnome being present for years.

Because it's highly experimental, and some might prefer a manual option, I wanted a solution that provides a button to change the theme as well. There are blogposts out there about CSS-only5 or JS-based6 automated solutions, and complex, even more experimental solutions based on CSS variables7, but none of them provided a fallback.

This is my solution to support dynamic media queries for prefers-color-scheme with manual fallback, using an inlined alternative CSS.

Inlined alternative stylesheet

On my site, I have 3 <style> elements: the base, dark style; an alternative light style, which, by default, is only available for the speech media type - a successor of aural -; and a third, print-only one, which is out of scope for now.

A snippet of it looks like this:

    <style media="all">
        html {
            background-color: #111;
            color: #bbb;
        }
        body {
            margin: 0;
            padding: 0;
            font-family: sans-serif;
            color: #ccc;
            background-color: #222;
            font-size: 100%;
            line-height: 1.3em;
            transition: all 0.2s;
        }
    </style>
    <style id="css_alt" media="speech">
        body {
            color: #222;
            background-color: #eee;
        }
    </style>

The idea is to toggle speech to all on that css_alt element, either automatically or based on user preference. To do this in the most semantic way I could think of, I made a radio button group with 3 states: auto, dark, light.

<form class="theme" aria-hidden="true">
        <input name="colorscheme" value="auto" id="autoscheme" type="radio">
        <label for="autoscheme">auto</label>
        <input name="colorscheme" value="dark" id="darkscheme" type="radio">
        <label for="darkscheme">dark</label>
        <input name="colorscheme" value="light" id="lightscheme" type="radio">
        <label for="lightscheme">light</label>
</form>

Making radiobuttons nice

Unfortunately styling a radio button (or a checkbox) is near impossible - what you do instead is hide the input itself and add fancy CSS to its <label> to show something nicer. That is the reason each input is followed by a matching <label>.

label {
  font-weight: bold;
  font-size: 0.8em;
  cursor: pointer;
  margin: 0 0.3em;
  padding: 0.1em 0;
}

.theme {
  margin: 0 0.3em;
  color: #ccc;
  display: none;
}

.theme input {
  display: none;
}

.theme input + label {
  border-bottom: 3px solid transparent;
}

.theme input:checked + label {
  border-bottom: 3px solid #ccc;
}


In order to support both a user preference and the automated detection, I had to add the media query in JavaScript instead of using a mere @media query in CSS. I also had to put the script at the bottom of the page - otherwise it won't find the elements, since they are not defined yet. I didn't want to use things like document.onload, because that would delay the execution, and I want this to be as invisible and fast for the visitor as possible.

var DEFAULT_THEME = 'dark';
var ALT_THEME = 'light';
var STORAGE_KEY = 'theme';
var colorscheme = document.getElementsByName('colorscheme');

/* changes the active radiobutton */
function indicateTheme(mode) {
    for(var i = colorscheme.length; i--; ) {
        if(colorscheme[i].value == mode) {
            colorscheme[i].checked = true;
        }
    }
}

/* turns alt stylesheet on/off */
function applyTheme(mode) {
    var st = document.getElementById('css_alt');
    if (mode == ALT_THEME) {
        st.setAttribute('media', 'all');
    }
    else {
        st.setAttribute('media', 'speech');
    }
}

/* handles radiobutton clicks */
function setTheme(e) {
    var mode = e.target.value;
    if (mode == 'auto') {
        localStorage.removeItem(STORAGE_KEY);
    }
    else {
        localStorage.setItem(STORAGE_KEY, mode);
    }
    /* when the auto button was clicked the auto-switcher needs to kick in */
    var e = window.matchMedia('(prefers-color-scheme: ' + ALT_THEME + ')');
    autoTheme(e);
}

/* handles the media query evaluation, so it expects a media query as parameter */
function autoTheme(e) {
    var current = localStorage.getItem(STORAGE_KEY);
    var mode = 'auto';
    var indicate = 'auto';
    /* user set preference has priority */
    if ( current != null) {
        indicate = mode = current;
    }
    else if (e != null && e.matches) {
        mode = ALT_THEME;
    }
    applyTheme(mode);
    indicateTheme(indicate);
}

/* create an event listener for media query matches and run it immediately */
var mql = window.matchMedia('(prefers-color-scheme: ' + ALT_THEME + ')');
autoTheme(mql);
mql.addListener(autoTheme);

/* set up listeners for radio button clicks */
for(var i = colorscheme.length; i--; ) {
    colorscheme[i].onclick = setTheme;
}

/* display theme switcher form(s) */
var themeforms = document.getElementsByClassName(STORAGE_KEY);
for(var i = themeforms.length; i--; ) {
    themeforms[i].style.display = 'inline-block';
}

The effect

macOS screen capture by Martijn van der Ven8:

Unfortunately this is the version which contains a former bug, where the indicator followed the media query detected value for the light theme. This is fixed in the code above.


if (! window.matchMedia("(prefers-color-scheme: dark)").matches)

doesn't work for matching light. That is because prefers-color-scheme has 3 values: no-preference, light, and dark, the default being no-preference. The correct method is to match light exactly.
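The precedence the script implements - stored user choice first, then the media query, then the dark default - can be summed up as a tiny pure function. A Python sketch of the logic, not code from the site:

```python
def resolve_theme(stored, media_state, default='dark', alt='light'):
    """Resolution order: stored user choice > prefers-color-scheme > default.

    stored: 'dark', 'light', or None (nothing in localStorage).
    media_state: 'no-preference', 'light', or 'dark' - the three possible
    values of prefers-color-scheme, defaulting to 'no-preference'.
    """
    if stored is not None:
        return stored           # explicit user preference always wins
    if media_state == alt:
        return alt              # match the alt theme exactly, never by negation
    return default
```

Matching the alt value exactly is what makes no-preference fall through to the dark default instead of being misread as a light preference.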

  1. https://gist.github.com/petermolnar/d7ccaffadb92bf6c3d3615ed92832669

  2. https://drafts.csswg.org/mediaqueries-5/#descdef-media-prefers-color-scheme

  3. https://webkit.org/blog/8475/release-notes-for-safari-technology-preview-68/

  4. https://blogs.windows.com/windowsexperience/2016/08/08/windows-10-tip-personalize-your-pc-by-enabling-the-dark-theme/

  5. https://dri.es/adding-support-for-dark-mode-to-web-applications

  6. https://kevinchen.co/blog/support-macos-mojave-dark-mode-on-websites/

  7. https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_variables

  8. https://vanderven.se/martijn/

Fri, 16 Nov 2018 22:45:00 +0000

Domoticz vs sensors

I have a couple of 433.92MHz things around me, and recently I developed an itch to log what is happening with them.

Devices include:

When I started looking for solutions to listen into 433MHz, I found a weird, extremely cheap project3:

To my genuine surprise, it works - but it's hard to match the incoming patterns, so I decided to keep looking.

The next project I found was librtlsdr4 combined with rtl_4335 - it converts a USB DVB-T TV tuner into a 433MHz receiver. It sounded very nice, but at the same time, I found RFLink6. RFLink is a free, but not open source, Arduino Mega firmware that can receive and send 433MHz/868MHz & 2.4GHz signals from a plethora of devices - and I had an unused, first generation, made in Italy Arduino Mega around that's been waiting to be used for a decade.

Flashing the ROM

avrdude is a simple flashing utility for ATmega boards, including Arduinos; it will be needed to flash the ROM.

sudo apt install avrdude

Download and extract the RFLink ROM:

wget -ORFLink_v1.1_r48.zip https://doc-14-94-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/3esqvusiaem47f8nistrrisk5ofk9g6g/1540800000000/03880776249665269026/*/0BwEYW5Q6bg_ZLWFJUkY4bDZacms?e=download
unzip -d RFLink_v1.1_r48 RFLink_v1.1_r48.zip
cd RFLink_v1.1_r48

Note: I hardcoded the R48 version in this tutorial. Visit http://www.rflink.nl/ to see if there's a newer one.

Once the Arduino is connected, it'll show up as 'arduino mega' in dmesg, so find the device and flash the ROM as:

megausbdev="$(sudo dmesg  | grep -i 'arduino mega' | head -n1 | cut -d":" -f1 | awk '{print $3}')"
megattydev="$(sudo dmesg | grep "cdc_acm ${megausbdev}" | grep tty | cut -d":" -f3 | head -n1)"
sudo avrdude -v -p atmega2560 -c stk500 -P "/dev/${megattydev}" -b 115200 -D -U flash:w:RFLink.cpp.hex:i

Note: dmesg could be used without sudo if the sysctl parameter kernel.dmesg_restrict is set to 0.

Once this is done, wait until the mega reboots; after that, using minicom, we can verify if it's working.

sudo apt install minicom
minicom -b 57600 -D "/dev/${megattydev}" -w

You should see something like this:

Welcome to minicom 2.7.1

Compiled on May  6 2018, 08:02:47.
Port /dev/ttyACM3, 15:10:35

Press CTRL-A Z for help on special keys

teway V1.1 - R48;
20;00;Nodo RadioFrequencyLink - RFLink Gateway V1.1 - R48;

To exit, press CTRL+a then q.

To make the device always show up on the same /dev path, add the following udev rule:


# arduino mega as RFLink
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0010", SYMLINK+="rflink"

If needed, restart udev:

sudo udevadm trigger

Physical wiring

There is a very nice, detailed tutorial in the RFLink website about connecting the different devices to the Mega itself at: http://www.rflink.nl/blog2/wiring


Domoticz is a home automation platform, which is very easy to set up, has a simple HTTP interface, and can log all those switches and devices I'm interested in.

Getting & starting Domoticz

sudo mkdir /opt/domoticz
cd /opt/domoticz
sudo wget https://releases.domoticz.com/releases/release/domoticz_linux_x86_64.tgz
tar xf domoticz_linux_x86_64.tgz
sudo /opt/domoticz/domoticz -www 8080 -sslwww 0 -dbase /opt/domoticz/domoticz.db -wwwroot /opt/domoticz/www -userdata /opt/domoticz -log -syslog

Now visit the server IP on port 8080 in your browser and get started with the setup.

  1. Connect the RFLink device to your server

  2. Find the ttyACM device for the RFLink

    megausbdev="$(sudo dmesg  | grep -i 'arduino mega' | head -n1 | cut -d":" -f1 | awk '{print $3}')"
    sudo dmesg | grep "cdc_acm ${megausbdev}" | grep tty | cut -d":" -f3 | head -n1)
    # this will print something like: ttyACM3
  3. Go to the Domoticz web interface

  4. Go to Setup, then Hardware

  5. In the Type drop down, select RFLink Gateway USB

  6. give it a name

  7. Serial Port should be the ttyACM port for the RFLink

Once done, the RFLink will start sniffing all the signals it can pick up, and your devices will start showing up in the Devices menu, under Setup:

devices found by RFLink in Domoticz

Notes and finds about my sensors

Energenie wall sockets
They send on and off separately, but their signal doesn't always seem to reach the RFLink properly. Still working on them. No extra setup is needed; their default On/Off type is what they actually are.
Yale HSA6000 PIR sensors
They send on, soon after off, and they have a re-arm time of ~6 minutes. Once detected, they initially show up as a Light sensor; this can be changed by first enabling the devices (clicking the green arrow in the Devices menu, under Setup), then going into Switches, clicking Edit on the sensor, and selecting the Motion sensor option in Switch type.
Yale HSA6000 door/window contacts
They only send an on signal when an open is triggered; pressing the button sends an off. There is no way to know whether they are still open or already closed. They need to be set up as Push on buttons once they are enabled (clicking the green arrow in the Devices menu, under Setup) by going into Switches, clicking Edit on the sensor, and selecting the Push on button option in Switch type. The Door contact type expects an off signal, so these are not proper door contacts.
gate keyfobs
I had to set them up as Push off buttons; if I set them as Push on buttons, they log 'off' entries when they are pressed.


A few months ago I managed to set up collectd8 to process I²C data via a barely known linux subsystem, Industrial I/O, with the help of a few bash scripts9. In theory, Domoticz can deal with I²C on its own - unfortunately it doesn't yet work on x86 platforms, and it can only do a few types of sensors. Besides that, I didn't want to lose the collectd data, given that Domoticz is only an experiment for now, so I started looking into my options. Domoticz has an extensive API10, but it's rather uncomfortable to use, because you need to keep track of sensor and hardware IDs.

Fortunately, there is a workaround: using MQTT as middle ground, utilizing the MySensors serial protocol11.

A bit of explanation: MySensors is an open framework - both hardware and software components - to build custom sensors. One of the methods of sharing sensor information between sensors and controllers is via MQTT, a lightweight pubsub system.

The incredibly convenient part of it is that the information is push-based: Domoticz picks up new sensors when their initialization is sent, so no pre-setup and no tracking of internal Domoticz IDs is needed.

MQTT server

I'm not going into the details of setting up an MQTT service, because it's very simple; on Debian, it's more or less:

sudo apt install mosquitto
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

In order to issue updates from bash, the mosquitto-clients package is needed as well:

sudo apt install mosquitto-clients

MySensors MQTT in Domoticz

  1. Go to Setup, then Hardware
  2. In the Type drop down, select MySensors Gateway with MQTT interface
  3. give it a name
  4. Remote Address, in our case, is
  5. Port is 1883
  6. Leave Username and Password empty, unless you set up authentication in your MQTT server
  7. Topic Prefix should be MyMQTT (default)
Adding MyMQTT to Domoticz

Sending sensor data with bash into MQTT

Initiate the sensor meta information

For Domoticz to know about the sensor - the type, the unit, etc - the sensor needs to be initialized; this is done with the presentation command when it comes to MySensors.

mosquitto_pub -t "domoticz/in/MyMQTT/${node_id}/${sensor_id}/0/0/${TYPE}" -m "${sensor_name}"

In detail:

Sending sensor value updates

Unlike the previous initiation, this is a value update for our sensor:

mosquitto_pub -t "domoticz/in/MyMQTT/${node_id}/${sensor_id}/1/0/${METRICTYPE}" -m "${value}"

In detail:

Example outcome for a BME280: once it's sending temperature, humidity, and pressure data, Domoticz automatically joins the 3 sensors into a single Weather Station entry.
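The two mosquitto_pub calls above differ only in the command field of the topic (0 = presentation, 1 = set). A Python sketch of a topic builder matching that layout - the node and sensor IDs and the MySensors type numbers in the example are made up for illustration:

```python
def mymqtt_topic(node_id, sensor_id, command, msg_type,
                 prefix="domoticz/in/MyMQTT"):
    """Build a MySensors-over-MQTT topic: prefix/node/sensor/command/ack/type.

    command 0 = presentation, command 1 = set (value update);
    the ack field is left at 0 (no acknowledgement requested).
    """
    return "%s/%d/%d/%d/0/%d" % (prefix, node_id, sensor_id, command, msg_type)

# presentation for a hypothetical temperature sensor 1 on node 10
present = mymqtt_topic(10, 1, 0, 6)
# value update for the same sensor
update = mymqtt_topic(10, 1, 1, 0)
```

The resulting strings can be handed straight to mosquitto_pub -t, exactly like the commands above.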

Working examples are in my git repository for collectd12.

Happy hacking.

  1. https://www.yaleasia.com/en/yale/yale-asia/products/yale-alarms/wireless-alarm-systems/b-hsa6400---yale-premium-series-home-security-alarm-system/

  2. https://www.amazon.co.uk/Energenie-Remote-Control-Sockets-Pack/dp/B004A7XGH8

  3. https://rurandom.org/justintime/w/Cheapest_ever_433_Mhz_transceiver_for_PCs

  4. http://osmocom.org/projects/sdr/wiki/Rtl-sdr

  5. https://github.com/merbanan/rtl_433

  6. http://www.rflink.nl/blog2/easyha

  7. https://www.domoticz.com/

  8. https://collectd.org/

  9. https://github.com/petermolnar/collectd-executors

  10. https://www.domoticz.com/wiki/Domoticz_API/JSON_URL's

  11. https://www.mysensors.org/download/serial_api_20

  12. https://github.com/petermolnar/collectd-executors

Mon, 29 Oct 2018 18:00:00 +0000

GPS tracking without a server

Nearly all self-hosted location tracking Android applications are based on a server-client architecture: the one on the phone collects only a few points, if not just one, and sends them to a configured server. Traccar1, Owntracks2, etc.

While this setup is useful, it doesn't fit my static, unless it hurts3 approach, and it needs data connectivity, which can be tricky during trips abroad. The rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.

There used to be a magnificent little location tracker which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude4. The program is gone from the Play store - I have no idea why - but I have a copy of its last APK5.

My flow is the following:

Backitude configuration

These are the modified setting properties:

I have an exported preferences file available7.


The Syncthing configuration is optional; the transfer could simply be done manually from the phone. It's also not the most simple thing to set up, so I'll let the Syncthing Documentation8 take care of describing the how-tos.

Python script

Before jumping to the script, there are 3 Python modules it needs:

pip3 install --user arrow gpxpy requests

And the script itself - please replace the INBASE, OUTBASE, and BINGKEY properties. To get a Bing key, visit Bing9.

import os
import sqlite3
import csv
import glob
import arrow
import re
import gpxpy.gpx
import requests

INBASE = "/path/to/the/synced/backitude/csv/directory"
OUTBASE = "/path/to/the/gpx/output/directory"
BINGKEY = "get a bing maps key and insert it here"

def parse(row):
    DATE = re.compile(

    lat = row[0]
    lon = row[1]
    acc = row[2]
    alt = row[3]
    match = DATE.match(row[4])
    # in theory, arrow should have been able to parse the date, but I couldn't get
    # it working
    epoch = arrow.get("%s-%s-%s %s %s" % (
    ), 'YYYY-MM-DD hh:mm:ss SSS').timestamp

def exists(db, epoch, lat, lon):
    return db.execute('''
        SELECT *
        FROM data
        WHERE
            epoch = ?
            AND latitude = ?
            AND longitude = ?
    ''', (epoch, lat, lon)).fetchone()

def ins(db, epoch, lat, lon, alt, acc):
    if exists(db, epoch, lat, lon):
        return
    print('inserting data point with epoch %d' % (epoch))
    db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy) VALUES (?,?,?,?,?);''', (
        epoch, lat, lon, alt, acc
    ))

if __name__ == '__main__':
    db = sqlite3.connect(os.path.join(OUTBASE, 'location-log.sqlite'))
    db.execute('PRAGMA auto_vacuum = INCREMENTAL;')
    db.execute('PRAGMA journal_mode = MEMORY;')
    db.execute('PRAGMA temp_store = MEMORY;')
    db.execute('PRAGMA locking_mode = NORMAL;')
    db.execute('PRAGMA synchronous = FULL;')
    db.execute('PRAGMA encoding = "UTF-8";')

    files = glob.glob(os.path.join(INBASE, '*.csv'))
    for logfile in files:
        with open(logfile) as csvfile:
            try:
                reader = csv.reader(csvfile)
            except Exception as e:
                print('failed to open CSV reader for file: %s; %s' % (logfile, e))
                continue
            # skip the first row, that's headers
            headers = next(reader, None)
            for row in reader:
                epoch, lat, lon, alt, acc = parse(row)
                ins(db, epoch, lat, lon, alt, acc)
        # there's no need to commit per line, per file should be safe enough
        db.commit()

    db.execute('PRAGMA auto_vacuum;')

    results = db.execute('''
        SELECT epoch, latitude, longitude, altitude, accuracy
        FROM data
        ORDER BY epoch ASC''').fetchall()
    prevdate = None
    gpx = gpxpy.gpx.GPX()

    for epoch, lat, lon, alt, acc in results:
        # in case you know your altitude might actually be valid with negative
        # values you may want to remove the -10
        if alt == 'NULL' or alt < -10:
            url = "http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s" % (
                lat, lon, BINGKEY
            )
            bing = requests.get(url).json()
            # gotta love enterprise API endpoints
            if not bing or \
                'resourceSets' not in bing or \
                not len(bing['resourceSets']) or \
                'resources' not in bing['resourceSets'][0] or \
                not len(bing['resourceSets'][0]) or \
                'elevations' not in bing['resourceSets'][0]['resources'][0] or \
                not bing['resourceSets'][0]['resources'][0]['elevations']:
                alt = 0
            else:
                alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])
                print('got altitude from bing: %s for %s,%s' % (alt, lat, lon))
                db.execute('''
                    UPDATE data SET
                        altitude = ?
                    WHERE
                        epoch = ?
                        AND latitude = ?
                        AND longitude = ?
                    LIMIT 1
                ''', (alt, epoch, lat, lon))
        date = arrow.get(epoch).format('YYYY-MM-DD')
        if not prevdate or prevdate != date:
            if prevdate:
                # write previous out
                gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))
                with open(gpxfile, 'wt') as f:
                    f.write(gpx.to_xml())
                    print('created file: %s' % gpxfile)

            # create new
            gpx = gpxpy.gpx.GPX()
            prevdate = date

            # Create first track in our GPX:
            gpx_track = gpxpy.gpx.GPXTrack()
            gpx.tracks.append(gpx_track)

            # Create first segment in our GPX track:
            gpx_segment = gpxpy.gpx.GPXTrackSegment()
            gpx_track.segments.append(gpx_segment)

        # Create points:
        gpx_segment.points.append(
            gpxpy.gpx.GPXTrackPoint(
                latitude=lat,
                longitude=lon,
                elevation=alt,
                time=arrow.get(epoch).datetime
            )
        )

    # the last day still needs to be written out after the loop
    if prevdate:
        gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))
        with open(gpxfile, 'wt') as f:
            f.write(gpx.to_xml())
            print('created file: %s' % gpxfile)

Once this is done, the OUTBASE directory will be populated by .gpx files, one per day.


GpsPrune is a desktop, Java-based GPX track visualizer. It needs data connectivity to have nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.

sudo apt install gpsprune

Keep in mind that the export script overwrites the GPX files, so the data needs to be fixed in the SQLite database.
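Fixing the data can be as simple as a one-off query against the data table; a minimal sketch - the accuracy threshold of 100 m here is my own, hypothetical choice, not something from the original script:

```python
import sqlite3

def drop_inaccurate(dbpath, max_accuracy=100):
    # delete location fixes whose reported accuracy is worse (larger)
    # than the threshold; the next export regenerates the GPX files
    db = sqlite3.connect(dbpath)
    cur = db.execute('DELETE FROM data WHERE accuracy > ?;', (max_accuracy,))
    db.commit()
    deleted = cur.rowcount
    db.close()
    return deleted
```

After cleaning, re-run the export so the per-day GPX files get rebuilt from the corrected rows.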

This is an example screenshot of GpsPrune, showing our 2-day walk down Mount Emei and its endless stairs:


Happy tracking!

  1. https://www.traccar.org/

  2. https://owntracks.org/

  3. https://indieweb.org/manual_until_it_hurts

  4. http://www.gpsies.com/backitude.do

  5. gaugler.backitude.apk

  6. https://syncthing.net/

  7. backitude.prefs

  8. https://docs.syncthing.net/intro/getting-started.html

  9. https://msdn.microsoft.com/en-us/library/ff428642

Thu, 27 Sep 2018 11:05:00 +0100

The three Facebooks

I recently wanted to check the upcoming gigs of a music venue. I tried to pull up their website1, but I couldn't find their agenda there - turned out it's sort of an abandoned site, because the hosting company is refusing to respond to any requests.

As a result their gigs are listed on Facebook - at least that can be accessed without logging in. My current browser setup is a bit complex, but the bottom line is that I'm routing my Firefox through my home broadband. I'm used to very fast, unlimited desktop connections these days, both at work and at home, but the throttling I introduced by going through a few loops made some problems visible. The Facebook page itself took quite a long while to load, even with NoScript and uBlock Origin, and that made me curious: why?

So I made a fresh Firefox profile and loaded all three versions of Facebook I'm aware of.


Visiting the main Facebook site from a regular desktop client gives you the whole, full-blown, unfiltered experience - and the raw madness behind it.

The page executed 26.13 MB of JavaScript. That is 315x the size of the complete jQuery framework, and 193x of Bootstrap + Popper + jQuery together.

Facebook in full glory mode
Facebook and its JavaScript


m. is for mobile devices only; without faking my resolution and user agent in Firefox dev tools, I couldn't get there.

It's better, but it still had 1.28 MB of JavaScript in the end. On mobile, that is a serious amount of code to execute.

Facebook in mobile mode - strictly for mobile only though


mbasic is a fascinating thing: it doesn't have any JS at all. It's like the glorious old days: ugly, very hard to find anything, but incredibly fast and light.

Facebook in good ol' days mode


                         desktop2    m.3        mbasic.4
Uncompressed everything  36.83 MB    2.22 MB    96.91 KB
Total used bandwidth     9.33 MB     1.01 MB    57.98 KB
JS code to execute       26.13 MB    1.28 MB    n/a
JS bandwidth             4.22 MB     364.39 KB  n/a
JS compression ratio     6.19x       3.59x      1.67x
CSS to parse             1.34 MB     232.81 KB  inline
CSS bandwidth            279.73 KB   53.61 KB   inline
CSS compression ratio    4.90x       4.34x      -
HTML to parse            2.78 MB     172.06 KB  70.20 KB
HTML bandwidth           199.73 KB   37.73 KB   14.20 KB
HTML compression ratio   14.25x      4.56x      4.94x
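The compression ratio rows can be reproduced from the size rows; a quick sanity check in Python, with the numbers copied from the table above:

```python
# uncompressed size vs bandwidth actually used, copied from the table,
# both normalized to KB
desktop_js = (26.13 * 1024, 4.22 * 1024)
desktop_html = (2.78 * 1024, 199.73)

for name, (raw, wire) in (('JS', desktop_js), ('HTML', desktop_html)):
    # ratio = what the browser has to process / what travelled the wire
    print('desktop %s compression ratio: %.2fx' % (name, raw / wire))
# prints 6.19x and 14.25x, matching the table
```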


React is evil. It splits code up into small chunks, and on their own, they seem reasonably sized. However, when there's a myriad of these, they add up.

The compressed vs uncompressed ratio in desktop JS and HTML indicates extreme amount of repetition.

Most resources are unique, hashed names, and I'm guessing many of them are tied to A/B testing or something similar, so caching won't solve the issue either.

There's always a balanced way to do things. A couple of years ago, during the times of backbone.js and underscore.js, that balance was found, and everyone should learn from it.

Many moons ago, in 2012 (when Facebook still had an API), an article was published: The Making of Fastbook: An HTML5 Love Story5. It was a demonstration that the already bloated Facebook app could be replaced with a responsive, small, service worker powered HTML5 website.

Facebook won't change: it will keep being a monster on every level.

Don't follow their example.

  1. http://yuk.hu/

  2. https://facebook.com/yukbudapest

  3. https://m.facebook.com/yukbudapest

  4. https://mbasic.facebook.com/yukbudapest

  5. https://www.sencha.com/blog/the-making-of-fastbook-an-html5-love-story/

Thu, 23 Aug 2018 10:45:00 +0100

Lessons of running a (semi) static, Indieweb-friendly site for 2 years

In 2016, I decided to leave WordPress behind. Some of their philosophy, mostly the "decisions, not options" part, started to leave the trail I believed to be the right one, but on its own, that wouldn't have been enough: I had a painful experience with media handling hooks, which were respected on the frontend but not on the backend. After staring at the backend code for days, I made up my mind: let's write a static generator.

This was strictly scratching my own itches1: I wanted to learn Python, but keep using tools like exiftool and Pandoc, so instead of getting an off-the-shelf solution, I actually wrote my own "static generator" - in the end, it's a glorified script.

Since the initial idea, I have rewritten that script nearly 4 times, mainly to try out language features, async workers for processing, etc., and I've learnt a few things in the process. It is called NASG - short for 'not another static generator' - and it lives on Github2, if anyone wants to see it.

Here are my learnings.

Learning to embrace "buying in"


I made a small Python daemon to handle certain requests; one of these routes was for incoming webmentions3. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself - but it still needed a dynamic part.

This approach also required parsing the source websites on build. After countless iterations - changing parsing libraries, first within Python, then using XRay4 - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to "buying into" a solution. Basically, if we feel we could do something ourselves, it's rather hard to pay someone else for it - instead we tend to learn it and just do it, be it plumbing in the house or sensor automation.

None of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them in case the service goes out of business, why not use them?

With that in mind, I decided to use webmention.io5 as my incoming webmention handler (it even brought pingback support back). I ask the service for any new comments on build and save them as YAML + Markdown, so the next time I only need to parse the new ones.

To send webmentions, Telegraph6 is a nice, simple service that offers API access, so you don't have to deal with webmention endpoint discovery. I put down a text file with the slugified names of the source and target URLs to avoid sending the same mention more than once.
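That dedup logic can be sketched like this - an illustration, not the actual nasg code; the slugify rule and the marker directory name are my own stand-ins:

```python
import os
import re

SENT_DIR = 'sent-webmentions'  # hypothetical marker directory

def slugify(url):
    # collapse anything non-alphanumeric into single dashes
    return re.sub(r'[^0-9a-zA-Z]+', '-', url).strip('-')

def marker_path(source, target):
    return os.path.join(SENT_DIR, '%s_%s' % (slugify(source), slugify(target)))

def already_sent(source, target):
    # if the marker file exists, this mention went out in a previous build
    return os.path.exists(marker_path(source, target))

def mark_sent(source, target):
    os.makedirs(SENT_DIR, exist_ok=True)
    open(marker_path(source, target), 'w').close()
```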


In case of websub7 superfeedr8 does the job quite well.


For syndication, I decided to go with IFTTT9 and brid.gy publish10. IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress11 and Tumblr12, or sends webmentions to brid.gy to publish links to Twitter13 and complete photos to Flickr14.

IFTTT didn't work. Well, it worked right after setup and syndicated one single article properly - since then, it has refused to look at my RSS feed. So I found Zapier15 instead. While it can do way more sophisticated, chained actions, that comes at a hefty $50/month price. Their free tier includes only 5 simple actions, but that is enough to send updates to WordPress.com, Twitter, Tumblr, Google Groups, and Flickr through brid.gy16.

I ended up outsourcing my newsletter as well. Years ago I sent a mail around to friends to ask them if they wanted updates from my site by mail; a few of them did. Unfortunately Google started putting these in either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr17 mails. To overcome this, I set up a Google Group where only my Gmail account can post but anyone can subscribe, and another IFTTT hook18 that sends mails to that group with the contents of anything new in my RSS feed.

Search: keep it server side

I spent days looking for a way to integrate JavaScript based search (lunr.js or elasticlunr.js) into my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest size.

It turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.

The initial solution required a small Python daemon running in the background, spitting extremely simple results back for a query. Besides the trouble of running another daemon, it needed a copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used) and Jinja2 (templating), and a few other bits.

However, there is a simpler, yet uglier solution. Nearly every webserver out in the wild has PHP support these days, including mine, because I'm still running WordPress for friends and family.

To overcome the problem, I made a Jinja2 template that creates a PHP file, which opens - read-only - the SQLite file I pre-populate with the search corpus during build. Unfortunately the server has PHP 7.0, so instead of the FTS5 engine, I had to step back and use FTS4 - still good enough. Apart from a plain, dead simple PHP setup with SQLite support, there is no need for anything else, and because the SQLite file is opened read-only, there's no lock-collision issue either.
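Pre-populating that read-only corpus during the Python build is straightforward; a minimal FTS4 sketch - the table and column names are illustrative, not nasg's actual schema:

```python
import sqlite3

def build_corpus(dbpath, posts):
    # posts: iterable of (url, title, body) tuples collected during build
    db = sqlite3.connect(dbpath)
    db.execute('DROP TABLE IF EXISTS search')
    # FTS4, because the server's PHP 7.0 SQLite can't do FTS5
    db.execute('CREATE VIRTUAL TABLE search USING fts4(url, title, corpus)')
    db.executemany(
        'INSERT INTO search (url, title, corpus) VALUES (?, ?, ?)',
        posts
    )
    db.commit()
    db.close()
```

The PHP side then only ever issues `SELECT ... WHERE search MATCH ?` queries against this file.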

About those markup languages...

YAML can get messy

I went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with ' and " characters, so I dug into the YAML specification - don't do it, it's a hell dimension. There is a subset of YAML, titled StrictYAML19, that addresses some of these problems, but the short summary is: YAML or not, try to use as simple a markup as possible, and be consistent.

title: post title
summary: single-line long summary
published: 2018-08-07T10:00:00+00:00
tags:
- indieweb
syndicate:
- https://something.com/xyz

If one decides to use lists by newline and -, stick to that. No inline [] lists, no spaced - prefix; be consistent.

Same applies to dates and times. While I thought the "correct" date format was ISO 8601, what I used turned out to be a subset of it, named RFC 333920. Unfortunately I started out using the +0000 offset format instead of +00:00, so I'll stick to that.
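Python, at least, is forgiving about the two offset spellings; both parse to the same instant (a small check, assuming Python 3.7+, where %z accepts the colon form):

```python
from datetime import datetime

FMT = '%Y-%m-%dT%H:%M:%S%z'

# the RFC 3339 spelling uses +00:00; I started with +0000 - both parse
a = datetime.strptime('2018-08-07T10:00:00+0000', FMT)
b = datetime.strptime('2018-08-07T10:00:00+00:00', FMT)
assert a == b
print(a.isoformat())  # → 2018-08-07T10:00:00+00:00
```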

Markdown can also get messy

There are valid arguments against Markdown21, so before choosing that as my main format, I tested as many as I could22 - in the end, I decided to stick to an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, which is a very nice Markdown WYSIWYG editor23. Yes, unfortunately, it's electron based. I'll swallow this frog for now.

The "extensions" I use with Markdown are the ones in the Pandoc invocation below: footnotes, pipe tables, strikeout, raw HTML, definition lists, backtick code blocks, fenced code attributes, lists without a preceding blank line, and autolinked bare URIs.

I've tried using the Python Markdown module; the end result was utterly broken HTML when I had code blocks with regexes that collided with the regexes Python Markdown itself was using. Then I tried the Python markdown2 module - it worked better, but didn't support language tags for code blocks.

In the end, I went back to where I started: Pandoc24. Regenerating the whole site takes ~60 seconds instead of the ~20s with markdown2, but it doesn't really matter - it's still fast.

pandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris
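From Python, that call is just a subprocess pipe; a sketch of how the conversion can be wrapped - an illustration, not nasg's actual code:

```python
import subprocess

MD_FLAVOUR = ('markdown+footnotes+pipe_tables+strikeout+raw_html'
              '+definition_lists+backtick_code_blocks+fenced_code_attributes'
              '+lists_without_preceding_blankline+autolink_bare_uris')

def pandoc_cmd(flavour=MD_FLAVOUR):
    return ['pandoc', '--to=html5', '--quiet', '--no-highlight',
            '--from=' + flavour]

def markdown_to_html(text):
    # feed the post body to pandoc on stdin, read HTML5 from stdout
    done = subprocess.run(pandoc_cmd(), input=text, text=True,
                          capture_output=True, check=True)
    return done.stdout
```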

The take away is the same with YAML: do your own ruleset and stick to it; don't mix other flavours in.

Syntax highlighting is really messy

Pandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).

I have some entries that can break both, and break them bad.

Besides being broken, Codehilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.

A long while ago I tried to completely eliminate JavaScript from my site, because I'm tired of the current trends. However, JS has its place, especially as a progressive enhancement25.

With that in mind, I went back to the solution that has worked the best so far: prism.js26. The difference is that this time I only add it when there is a code block with a language property, and I inline the whole JS block in the page - the 'developer' version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely and is very fast.

No JS only means no syntax highlight, but at least my HTML code is readable, unlike with CodeHilite.


Static sites come with compromises when it comes to interactions, be that webmentions, search, or websub. They need either external services or some simple, dynamic parts.

If you do go with dynamic, try to keep it as simple as possible. If the webserver has PHP support, avoid adding a Python daemon and use that PHP instead.

There are very good, completely free services out there, run by mad scientist enthusiasts, like webmention.io and brid.gy. It's perfectly fine to use them.

Keep your markup consistent and don't deviate from the feature set you really need.

JavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.

  1. https://indieweb.org/scratch_your_own_itch

  2. https://github.com/petermolnar/nasg/

  3. http://indieweb.org/webmention

  4. https://github.com/aaronpk/xray

  5. https://webmention.io/

  6. http://telegraph.p3k.io/

  7. https://indieweb.org/websub

  8. https://superfeedr.com/

  9. http://ifttt.com/

  10. https://brid.gy/about#publishing

  11. https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com

  12. https://ifttt.com/applets/83095945d-syndicate-to-tumblr

  13. https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish

  14. https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr

  15. https://zapier.com/

  16. https://brid.gy/about#publishing

  17. https://blogtrottr.com/

  18. https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com

  19. http://hitchdev.com/strictyaml/features-removed/

  20. https://en.wikipedia.org/wiki/RFC_3339

  21. https://indieweb.org/markdown#Criticism

  22. https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages

  23. http://typora.io/

  24. http://pandoc.org/MANUAL.html#pandocs-markdown

  25. https://en.wikipedia.org/wiki/Progressive_enhancement

  26. https://prismjs.com/

Tue, 07 Aug 2018 18:33:00 +0100

Do websites want to force us to use Reader Mode?

Excuse me, sir, but where's the content?

A couple of days ago I blindly clicked on a link1 on Hacker News2 - it was pointing at a custom domain hosted on Medium. Out of curiosity, I changed the browser size to an external 1280x720 - viewport 1280 × 646 - and turned off uBlock Origin3 and NoScript4, so I'd mimic a common laptop setup, only to be presented with this:

Screenshot of blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08 when the window size is 1280x720

I don't even know where to start listing the problems.

Screenshot of javascript requests made by blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08

So, foolishly, I started a now flagged thread5, begging publishers to go and start a static blog, or just publish this as a plain HTML document. It would even be better if it was a Word 97 HTML export.

I decided to keep the browser like that, same resolution, no adblockers, and visited 2 more sites: bbc.co.uk, and theguardian.com.

Screenshot of www.bbc.co.uk/news/uk-44933429
Screenshot of javascript requests made by www.bbc.co.uk/news/uk-44933429
Screenshot of www.theguardian.com/world/2018/jul/23/greeks-urged-to-leave-homes-as-wildfires-spread-near-athens
Screenshot of javascript requests made by www.theguardian.com/world/2018/jul/23/greeks-urged-to-leave-homes-as-wildfires-spread-near-athens

Well... at least the BBC doesn't have sticky headers and/or footers.

How did we get here?

Good examples

Let's take a look at something, which is actually readable - a random entry from Wikipedia:

Screenshot of a random article from wikipedia

Note the differences:

Or another readable thing:

Screenshot of textfiles.com/magazines/LOD/lod-1 - Legion of Doom technical journal, volume 1, 1987

A 31-year-old text file - still perfectly readable.

Or loading the first mentioned article in Firefox Reader Mode6:

Screenshot of a Medium article in Firefox Reader Mode

Developers gonna developer

So back to that thread. While most of the reactions were positive, there were opposing ones as well; here are a few of those.

I barely see the problem. Sure, the header and footer aren't perfect, but stupidly large? I also don't feel any "cpu melting javascripts" and my PC is barely usable when I compile anything.For me, Medium provides a very readable experience that is much better than the average static blog. And I don't have to fear a malware ridden page like an old Wordpress installation. https://news.ycombinator.com/item?id=17592735

WordPress comes with its own can of worms, but it did introduce automatic security updates in version 3.77 - back in October 2013. Any WordPress installation since then has been receiving security patches, and WordPress backports security fixes remarkably well.

As for being malware ridden... it doesn't even make the news pages any more when an ad network starts spreading malware, but that's still a thing.8

Why is it that I only ever hear those complaints on HN and never elsewhere... Are you all still using Pentium 3 PCs and 56k modems?


A couple of years ago Facebook introduced 2G Tuesdays9, and that should still be a thing for everyone out there. Rural Scotland? There isn't any phone signal, let alone 3G or 4G. Rural Germany? 6Mbps/1Mbps wired connections. And that is in Europe. Those who travel enough know this problem very well, and yes, 1.8MB of JavaScript - I initially stated 121kB in my original thread; that was a mistake, due to uBlock not being completely off - is way too much. It was too much even when jQuery was served from a single CDN and might actually have been cached in the browser, but compiled React apps won't be cached for long.

[...] people nowadays demand rich media content [...]


I remember when I first saw parallax scroll - of course it made me go "wow". It was a product commercial, I think, but soon everybody was doing parallax scroll, even for textual content. It was horrible. Slow, and extremely hard to read due to all the moving parts.

There were times when I thought mouse trailing bouncing circles10 were cool. It turned out readable, small, fast text is cooler.

Nobody is "demanding" rich media content; people demand content. For free - but that is a topic for another day. With some images, maybe even videos - and for that, we have <img>, <figure>, and <video>, in all their glory.

> 121KB javascript is not heavy

Part of the problem is that HTML and CSS alone are horribly outdated in terms of being able to provide a modern-looking UI outside the box.

Want a slider? Unfortunately the gods at W3C/Google/etc. don't believe in a <input type="slider"> tag. Want a toggle switch? No <input type="toggle">. Want a tabbed interface? No <tabs><tab></tab></tabs> infrastructure. Want a login button that doesn't look like it came out of an 80's discotheque? You're probably going to need Angular, Polymer, MDL or one of those frameworks, and then jQuery to deal with the framework itself. You're already looking at 70-80kb for most of this stuff alone.

Want your website to be mobile-friendly? Swipe gestures? Pull to refresh? Add another 30-40kb.

Commenting? 20kb.

Commenting with "reactive design" just to make your users feel like their comments went through before they actually went through? 50kb.

Want to gather basic statistics about your users? Add another 10kb of analytics code.


This comment is certainly right when it comes to UI. However... this is an article. Why would an article need swipe gestures or pull-to-refresh? Analytics is an interesting territory, but the basics are well covered by analyzing server logs1112.

Mobile friendly design doesn't need anything at all; it actually needs less: HTML, by design, flows text to the available width, so any text will fill the available container.

For web UI, you need those, yes. To display an article, you really don't.

Medium vs blogs

I've been told that people/companies most usually post to Medium for the following reasons:

As for discoverability, I believe pushing the article link to Reddit, HN, etc. is a significant booster, but merely putting it on Medium doesn't mean anything. I wondered about this a long while ago with personal blogs - as in, why is discoverability never addressed in re-decentralization topics - but the truth is: there is no real need for it. Search engines are wonderful, and if your topic is good enough, people will find it by searching.

The "looks more serious" problem is funny, given that the article I linked is on their own domain - if I weren't aware of the generic issues with Medium layouts, I wouldn't know it's a Medium article. One could make any blog look and feel the same. One could export an article from Typora13 and still look professional.

I've heard stories of moving to Medium bringing a lot more "reads" and hits on channels, but I'm sceptical. Eons ago, when PageRank was still a thing, I read an article about a certain site that went to #1 on Google for certain phrases without even containing those phrases - only the links pointing to the site did. The lesson there is that everything can be gamed, and I find it hard to believe that merely posting to Medium would boost visibility that much. I could be wrong though.

Proposals - how do we fix this?

Always make the content the priority

There's an article to read, so let people read it. The rest is secondary for any visitor of yours.

Don't do sticky headers/footers

But if you really, really have to, make certain it's the opposite of the display layout: for horizontal windows, the menu should be on the side; for vertical, it should be on the top.

You don't even need JS for it, since it's surprisingly simple to tell horizontal apart from vertical, even in pure CSS, with media queries:

 @media screen and (orientation:portrait) { … }
 @media screen and (orientation:landscape) { … }

Rich media != overdosed JavaScript

Embrace srcset14 and serve different, statically pre-generated images. Seriously consider if you need a framework at all15. BTW, React is the past, from before progressive enhancement, and it came back to haunt us for the rest of eternity.

Use one good analytics system. There really is no need for multiple ones; just make sure that one is well configured.

Don't install yet another commenting system - nobody cares. Learn from the bigger players and think through whether you actually need a commenting system or not16.

Some JS is useful; a lot of JS is completely unneeded for displaying articles. If your text is 8000 characters, there is simply no reasonable excuse to serve 225x more additional code to "enhance" it.


HTML was invented to easily share text documents. Even if it has images, videos, etc. in it, you're still sharing text. Never forget that the main purpose is to make that text readable.

There are many people out there with capped, terrible data connections, even in developed countries, and this is not changing in the near future. Every kB counts, let alone MBs.

MBs of JavaScript have to be evaluated in the browser, which needs power. Power these days comes from batteries. More code = more drain.

Keep it simple, stupid.

  1. https://blog.hiri.com/a-year-on-our-experience-launching-a-paid-proprietary-product-on-linux-db4f9116be08

  2. https://news.ycombinator.com/

  3. https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/

  4. https://noscript.net/

  5. https://news.ycombinator.com/item?id=17592600

  6. https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages

  7. https://codex.wordpress.org/Configuring_Automatic_Background_Updates

  8. https://www.theguardian.com/technology/2016/mar/16/major-sites-new-york-times-bbc-ransomware-malvertising

  9. https://www.theverge.com/2015/10/28/9625062/facebook-2g-tuesdays-slow-internet-developing-world

  10. http://dynamicdrive.com/dynamicindex13/trailer.htm

  11. https://www.awstats.org/

  12. https://matomo.org/log-analytics/

  13. https://typora.io/

  14. https://www.sitepoint.com/how-to-build-responsive-images-with-srcset/

  15. http://youmightnotneedjquery.com/

  16. https://motherboard.vice.com/en_us/article/jp5yx8/im-on-twitter-too

Wed, 25 Jul 2018 10:30:00 +0100

Using I²C sensors on a linux via a USB and IIO

Notes: no warranties. This is hardware, so it can cause trouble with your system, especially if you short-circuit something or - as I did once, many moons ago - solder on the fly while the thing is still connected to the USB port. Don't do that.

Proto-assembly of Digispark ATTiny85, Adafruit BME280, and Adafruit SI1145
Shutter speed
1/60 sec
Focal length (as set)
85.0 mm
ISO 800
HD PENTAX-DA 16-85mm F3.5-5.6 ED DC WR

USB I²C adapter

A few months ago I wrote about using a Raspberry Pi with some I²C sensors to collect data for Collectd1. While it worked well, it made me realise that having the RPi run a full-fledged operating system means I need to apply security patches to yet another machine, and that is not something I want to deal with. I also have a former laptop running as a ZFS based NAS, so why not use that?

After a fruitless venture into using the I²C port in the VGA connector2, I verified that indeed, as the tutorial concluded, it doesn't work with embedded Intel graphics on linux.

As an alternative I started looking at USB I²C adapters, but they are expensive. There is one project though that looked very promising, and it doesn't require a full-fledged Arduino either: Till Harbaum's I²C-Tiny-USB3.

It uses an ATtiny85 board - as the name suggests, it's tiny - and it turned out to be a perfectly fine USB to I²C adapter. You can buy one here: https://amzn.to/2ubPs6I

Note: there's an Adafruit FT232H, which, in theory, is capable of the same thing. I haven't tested it.

I2C-Tiny-USB firmware

The git repository already contains a built hex file, but in case any modifications need to be made, this is how it's done:

sudo -i
apt install gcc-avr avr-libc
cd /usr/src
git clone https://github.com/harbaum/I2C-Tiny-USB
cd I2C-Tiny-USB/digispark
make hex

Make sure I2C_IS_AN_OPEN_COLLECTOR_BUS is uncommented; I've tried with real pull-up resistors, and, to my surprise, the sensors stopped showing up.

micronucleus flash utility

To flash the hex file, you'll need micronucleus, a tiny flasher utility.

sudo -i
apt install libusb-dev
cd /usr/src
git clone https://github.com/micronucleus/micronucleus
cd micronucleus/commandline
make CONFIG=t85_default
make install


micronucleus --run --dump-progress --type intel-hex main.hex

then connect the device through a USB port, and wait for the end of the flash process.

I²C on linux

Surprisingly enough, Debian did not show the I²C buses in /dev - apparently the kernel module for this is not loaded by default, so load it, and make that load permanent:

sudo -i
modprobe i2c-dev
echo "i2c-dev" >> /etc/modules

Connect the Attiny85

Normally a PC already has a fair number of I²C adapters, so the new device will show up with an extra device number - and that number is rather important. The kernel log can help identify it:

dmesg | grep i2c-tiny-usb
[    3.721200] usb 5-2: Product: i2c-tiny-usb
[    3.725693] i2c-tiny-usb 5-2:1.0: version 2.01 found at bus 005 address 003
[    3.736109] i2c i2c-1: connected i2c-tiny-usb device
[    3.736584] usbcore: registered new interface driver i2c-tiny-usb

To read just the device number:

i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')

Note: the device number might change after a reboot. For me, it was 10 when simply plugged in, and 1 if it was connected during a reboot.
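Instead of grepping dmesg every time, the adapter can also be looked up by name under sysfs; a sketch - the exact name string exposed by the i2c-tiny-usb driver is an assumption here, so verify it on your system first:

```python
from pathlib import Path

def bus_number(adapter_dir):
    # 'i2c-10' -> 10
    return int(adapter_dir.split('-')[1])

def find_tiny_usb():
    # each adapter exposes its name at /sys/class/i2c-adapter/i2c-N/name
    for name in Path('/sys/class/i2c-adapter').glob('i2c-*/name'):
        if 'i2c-tiny-usb' in name.read_text():
            return bus_number(name.parent.name)
    return None
```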

Detecting I2C devices

i2cdetect is a program that dumps all the devices responding on an I²C adapter. The Adafruit website has a collection of the addresses their sensors use4. The number after i2cdetect -y is the device number identified in the previous step, and the output shows I have 2 devices:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
i2cdetect -y ${i2cdev}
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: 60 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- -- 77   

I²C 0x77: BME280 temperature, pressure, humidity sensor5

This is where things got interesting. Normally, when a BME280 sensor comes into play, every tutorial starts pulling out Python for the task, given that most of the Adafruit libraries are in Python.

Don't get me wrong, those are great libs, and the Python solutions are decent, but doing a pip3 search bme280 resulted in this:

bme280 (0.5)                           - Python Driver for the BME280 Temperature/Pressure/Humidity Sensor from Bosch.
Adafruit-BME280 (1.0.1)                - Python code to use the BME280 temperature/humidity/pressure sensor with a Raspberry Pi or BeagleBone black.
adafruit-circuitpython-bme280 (2.0.2)  - CircuitPython library for the Bosch BME280 temperature/humidity/pressure sensor.
bme280_exporter (0.1.0)                - Prometheus exporter for the Bosh BME280 sensor
RPi.bme280 (0.2.2)                     - A library to drive a Bosch BME280 temperature, humidity, pressure sensor over I2C

Which one to use? Then there are the dependencies, and the code quality varies from one to another.

So I started digging around the internet, Github, and other sources, and somehow realised there's a kernel module named bmp280. The BMP280 is a sibling of the BME280 - the same chip without the humidity sensor. So the question was: what in the world is drivers/iio/pressure/bmp280-i2c.c and how can I use it?

It turned out that apart from hwmon, there's another sensor library layer in the linux kernel, called Industrial I/O - iio. It was added under this name somewhere in 2012, around 3.156, and its purpose is to offer a subsystem for high-speed sensors7. While high speed is not a thing for me this time, I do trust the kernel code quality.

To my greatest surprise, the BMP280 module is even included in the Debian Sid kernel, and adding it was a mere:

sudo -i
modprobe bmp280
echo "bmp280" >> /etc/modules
modprobe bmp280-i2c
echo "bmp280-i2c" >> /etc/modules

To actually enable the device, the i2c bus has to be told of the sensor's existence:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "bme280 0x77" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

The kernel log should show something like this:

kernel: bmp280 1-0077: 1-0077 supply vddd not found, using dummy regulator
kernel: bmp280 1-0077: 1-0077 supply vdda not found, using dummy regulator
kernel: i2c i2c-1: new_device: Instantiated device bme280 at 0x77

Verify the device is working:

tree /sys/bus/iio/devices/iio\:device0
├── dev
├── in_humidityrelative_input
├── in_humidityrelative_oversampling_ratio
├── in_pressure_input
├── in_pressure_oversampling_ratio
├── in_pressure_oversampling_ratio_available
├── in_temp_input
├── in_temp_oversampling_ratio
├── in_temp_oversampling_ratio_available
├── name
├── power
│   ├── async
│   ├── autosuspend_delay_ms
│   ├── control
│   ├── runtime_active_kids
│   ├── runtime_active_time
│   ├── runtime_enabled
│   ├── runtime_status
│   ├── runtime_suspended_time
│   └── runtime_usage
├── subsystem -> ../../../../../../../../../bus/iio
└── uevent

2 directories, 20 files

And that's it. The BME280 is ready to be used:

for f in  in_pressure_input in_temp_input in_humidityrelative_input; do echo "$f: $(cat /sys/bus/iio/devices/iio\:device0/$f)"; done
in_pressure_input: 102.112671875
in_temp_input: 26050
in_humidityrelative_input: 49.611328125

According to the BME280 datasheet8, under recommended modes of operation (3.5.1 Weather monitoring), the oversampling for each sensor should be 1, so:

sudo -i
echo 1 > /sys/bus/iio/devices/iio\:device0/in_pressure_oversampling_ratio
echo 1 > /sys/bus/iio/devices/iio\:device0/in_temp_oversampling_ratio
echo 1 > /sys/bus/iio/devices/iio\:device0/in_humidityrelative_oversampling_ratio

I²C 0x60: SI1145 UV index, light, IR sensor9

Unlike the BME280, the SI1145 doesn't have a built-in kernel module in Debian Sid - the driver does exist in the kernel tree, it's simply not included in the Debian kernel build. I've also learnt that this sensor is a heavyweight player, and that I should have bought something way simpler for mere light measurements; something already covered by the out-of-the-box kernel modules, like a TSL2561[10].

But I wasn't willing to give up on the SI1145, it being an expensive sensor, so in order to have it in the kernel, I had to compile the kernel module myself. Before getting started, make sure the kernel headers for the running kernel (the linux-headers package) and the build tools (build-essential) are installed.

Once those two are true, identify the kernel version:

uname -a
Linux system-hostname 4.17.0-1-amd64 #1 SMP Debian 4.17.3-1 (2018-07-02) x86_64 GNU/Linux

The output contains 4.17.3-1 - that is the actual kernel version, not 4.17.0-1-amd64, which is the Debian package name.
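When scripting this, the same version string can be pulled from uname -v, where it is the fourth field - a sketch using the sample output above:

```shell
# sample "uname -v" output, as seen in the uname -a line above
v='#1 SMP Debian 4.17.3-1 (2018-07-02)'
kver=$(echo "$v" | awk '{ print $4 }')
echo "$kver"     # -> 4.17.3-1
# strip the Debian revision to get the upstream tarball version
base=${kver%%-*}
echo "$base"     # -> 4.17.3
```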

Get the kernel; extract it; add the SI1145 to the config; compile the drivers/iio/light modules; add that to the local modules.

sudo -i
cd /usr/src/
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.3.tar.gz
tar xf linux-4.17.3.tar.gz
cd linux-4.17.3
cp /boot/config-4.17.0-1-amd64 .config
cp ../linux-headers-4.17.0-1-amd64/Module.symvers .
echo "CONFIG_SI1145=m" >> .config
make menuconfig
# save it
# exit
make prepare
make modules_prepare
make SUBDIRS=scripts/mod
make M=drivers/iio/light SUBDIRS=drivers/iio/light modules
cp drivers/iio/light/si1145.ko /lib/modules/$(uname -r)/kernel/drivers/iio/light/
depmod -a
modprobe si1145
echo "si1145" >> /etc/modules

Once that is done, and there are no error messages, enable the device:

sudo -i
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "si1145 0x60" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

The kernel log should show something like this:

kernel: si1145 1-0060: device ID part 0x45 rev 0x0 seq 0x8
kernel: si1145 1-0060: no irq, using polling
kernel: i2c i2c-1: new_device: Instantiated device si1145 at 0x60

Verify the device is working:

tree /sys/bus/iio/devices/iio\:device1
├── buffer
│   ├── data_available
│   ├── enable
│   ├── length
│   └── watermark
├── current_timestamp_clock
├── dev
├── in_intensity_ir_offset
├── in_intensity_ir_raw
├── in_intensity_ir_scale
├── in_intensity_ir_scale_available
├── in_intensity_offset
├── in_intensity_raw
├── in_intensity_scale
├── in_intensity_scale_available
├── in_proximity0_raw
├── in_proximity_offset
├── in_proximity_scale
├── in_proximity_scale_available
├── in_temp_offset
├── in_temp_raw
├── in_temp_scale
├── in_uvindex_raw
├── in_uvindex_scale
├── in_voltage_raw
├── name
├── out_current0_raw
├── power
│   ├── async
│   ├── autosuspend_delay_ms
│   ├── control
│   ├── runtime_active_kids
│   ├── runtime_active_time
│   ├── runtime_enabled
│   ├── runtime_status
│   ├── runtime_suspended_time
│   └── runtime_usage
├── sampling_frequency
├── scan_elements
│   ├── in_intensity_en
│   ├── in_intensity_index
│   ├── in_intensity_ir_en
│   ├── in_intensity_ir_index
│   ├── in_intensity_ir_type
│   ├── in_intensity_type
│   ├── in_proximity0_en
│   ├── in_proximity0_index
│   ├── in_proximity0_type
│   ├── in_temp_en
│   ├── in_temp_index
│   ├── in_temp_type
│   ├── in_timestamp_en
│   ├── in_timestamp_index
│   ├── in_timestamp_type
│   ├── in_uvindex_en
│   ├── in_uvindex_index
│   ├── in_uvindex_type
│   ├── in_voltage_en
│   ├── in_voltage_index
│   └── in_voltage_type
├── subsystem -> ../../../../../../../../../bus/iio
├── trigger
│   └── current_trigger
└── uevent

5 directories, 59 files

Note: I tried, others have tried, but even though in theory there's a temperature sensor on the SI1145, it doesn't work. It seems to read the value once on startup, and that's it.

CLI script

In order to have a quick view, without collectd or other dependencies, a script like this is more than sufficient:

#!/usr/bin/env bash

temperature=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_temp_input)/1000" | bc)
pressure=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_pressure_input)*10/1" | bc) 
humidity=$(echo "scale=2;$(cat /sys/bus/iio/devices/iio\:device0/in_humidityrelative_input)/1" | bc) 
light_vis=$(cat /sys/bus/iio/devices/iio\:device1/in_intensity_raw) 
light_ir=$(cat /sys/bus/iio/devices/iio\:device1/in_intensity_ir_raw) 
light_uv=$(cat /sys/bus/iio/devices/iio\:device1/in_uvindex_raw) 

echo "$(hostname -f) $d

Temperature: $temperature °C
Pressure: $pressure mBar
Humidity: $humidity %
Visible light: $light_vis lm
IR light: $light_ir lm
UV light: $light_uv lm"

The output:

your.hostname Thu Jul 12 08:48:40 BST 2018

Temperature: 25.59 °C
Pressure: 1021.65 mBar
Humidity: 49.28 %
Visible light: 287 lm
IR light: 334 lm
UV light: 12 lm

Note: I'm not completely certain that the light unit is actually in lumens; the documentation is a bit fuzzy about that, so I assumed it is.


The next step is to actually collect the readouts from the sensors. I'm still using collectd11, a small, ancient, yet stable and very good little metrics collection system, because it's enough. It writes ordinary RRD files, which can be plotted into graphs with tools like Collectd Graph Panel12.

Unfortunately there's no iio plugin for collectd yet (or I couldn't find one - if you did, please let me know), so I had to add an extremely simple shell script as an exec plugin to collectd.


#!/usr/bin/env bash


# this will run only on collectd load; once it's loaded,
# even though it throws an error, additional runs don't cause any
# problems
i2cdev=$(dmesg | grep 'connected i2c-tiny-usb device' | head -n1 | sed -r 's/.*\s+i2c-([0-9]+).*/\1/')
echo "bme280 0x77" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device
echo "si1145 0x60" > /sys/bus/i2c/devices/i2c-${i2cdev}/new_device

while true; do
    for sensor in /sys/bus/iio/devices/iio\:device*; do 
        name=$(cat "${sensor}/name")
        if [ "$name" == "bme280" ]; then

            # unit: °C
            temp=$(echo "scale=2;$(cat ${sensor}/in_temp_input)/1000" | bc )
            echo "PUTVAL $HOSTNAME/sensors-$name/temperature-temperature interval=$INTERVAL N:${temp}"

            # unit: mBar
            pressure=$(echo "scale=2;$(cat ${sensor}/in_pressure_input)*10/1" | bc)
            echo "PUTVAL $HOSTNAME/sensors-$name/pressure-pressure interval=$INTERVAL N:${pressure}"

            # unit: %
            humidity=$(echo "scale=2;$(cat ${sensor}/in_humidityrelative_input)/1" | bc)
            echo "PUTVAL $HOSTNAME/sensors-$name/percent-humidity interval=$INTERVAL N:${humidity}"

        elif [ "$name" == "si1145" ]; then

            # unit: lumen?
            ir=$(cat ${sensor}/in_intensity_ir_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-ir interval=$INTERVAL N:${ir}"

            light=$(cat ${sensor}/in_intensity_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-light interval=$INTERVAL N:${light}"

            uv=$(cat ${sensor}/in_uvindex_raw)
            echo "PUTVAL $HOSTNAME/sensors-$name/gauge-uv interval=$INTERVAL N:${uv}"
        fi
    done
    sleep "$INTERVAL"
done

LoadPlugin "exec"
<Plugin exec>
  Exec "nobody" "/usr/local/lib/collectd/iio.sh"

The results are:

BME280 temperature graph in Collectd Graph Panel
SI1145 raw light measurement in Collectd Graph Panel


The Industrial I/O layer is something I'd never heard of before, but it's extremely promising: the code is clean, it already has support for a lot of sensors, and it seems relatively easy to extend.

Unfortunately its documentation is brief, and I'm yet to find any metrics collector that supports it out of the box - but that doesn't mean there won't be one very soon.

Currently I'm very happy with my budget I²C USB solution - not having to run a Raspberry Pi for simple metrics collection is certainly a win, and utilising the sensors directly from the kernel also looks very decent.

  1. https://petermolnar.net/raspberry-pi-bme280-si1145-collectd-mosquitto/

  2. https://web.archive.org/web/20160506154718/http://www.paintyourdragon.com/?p=43

  3. https://github.com/harbaum/I2C-Tiny-USB/tree/master/digispark

  4. https://learn.adafruit.com/i2c-addresses

  5. https://www.adafruit.com/product/2652

  6. https://github.com/torvalds/linux/tree/a980e046098b0a40eaff5e4e7fcde6cf035b7c06

  7. https://wiki.analog.com/software/linux/docs/iio/iio

  8. https://cdn-shop.adafruit.com/datasheets/BST-BME280_DS001-10.pdf

  9. https://www.adafruit.com/product/1777

  10. https://www.adafruit.com/product/439

  11. http://collectd.org/

  12. https://github.com/pommi/CGP

Fri, 13 Jul 2018 21:00:00 +0000

Stream of Cascada de Los Colores

Shutter speed
1/60 sec
Focal length (as set)
35.0 mm
ISO 400
smc PENTAX-DA 35mm F2.4 AL

On the Canary Island La Palma, unexpectedly, there is a lot of water. Some of this water ends up in the Cascada de Los Colores, a small waterfall of red, yellow, and green streams. Soon the stream becomes mostly red, stays like that for a while, slowly turns into yellow, as other, clear water connects to it, and in the end, it fades into ordinary water.

Mon, 28 May 2018 10:00:00 +0000

Page created: Mon, Sep 16, 2019 - 09:05 AM GMT