At $WORKPLACE I've been observing with interest the different species of interaction that can happen in group chat. Let me divide them across three workplaces.

$WORKPLACE_ALPHA -- An agency. This was fun: we set up bots that created a lot of noise in the channel. Remember this was pre-covid, so the chat service in question, HipChat, was a side channel, not a hard requirement for communication. Everything was conducted in a single main channel; all real-time work collaboration went through it. There was no threading. As the team was small, we were all able to keep track of things fairly well, and we would sometimes go into DMs if things got complicated.

$WORKPLACE_BETA -- This was a much more stoical workplace Slack, with zero real "banter" and no arguments. I felt quite apprehensive about posting in this Slack. There were only a few channels and little activity; few problems required real-time collaboration to resolve, and if that did become necessary, it was done in DMs. Email was still heavily used here.

$WORKPLACE_GAMMA -- They use Slack as a real "replacement for email", in the way it's supposed to be used. This means a lot of channels, a LOT of threading, heavy reliance on emoji reactions, and while it does feel casual, there's also not much off-topic chat because of the emphasis on signal.

There is still heavy disagreement on the relative merits of coarse-grained vs fine-grained channels. I tend to favour coarse-grained channels. On the other hand, when you have automations and other non-human inputs feeding a channel, it can end up functioning more like a specialized Twitter stream, in which case that channel does need to be fine-grained. It's clear that there's a positive correlation between large, coarse-grained channels and off-topic banter. (I don't mean to thereby claim that fine-grained is preferable, or the converse.)

Posted 2022-07-20

I had to download a bunch of files from a Cloudflare-protected site. Said site is annoying to scrape, because it uses JS heavily to generate unguessable links. Selenium is a possibility, but I found that it would be defeated by Cloudflare's CAPTCHAs. Cloudscraper is not feasible either, because the links are not present directly in the downloaded HTML.

It's an acceptable solution to click the links manually in this case, but then another problem emerges. Firefox's download manager is rather substandard. This isn't an issue in most cases, but here it was a limiting factor: downloads would fail and not be retried. Firefox also doesn't limit the number of in-progress downloads, so they would saturate the connection and fail.

One possibility is to run aria2c in daemon mode and use a Firefox extension to hand downloads to it. Aria2c will then queue the downloads and run them in order, with optional retry. The extension embeds something called AriaNg, which provides a nice web interface on top of the aria2c daemon, so it's actually quite friendly. The only tricky part is that you may need to start aria2c yourself. You can do that using the config below, which I took from the Arch wiki.
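A minimal aria2.conf sketch for daemon (RPC) mode, along the lines of the Arch wiki example -- the download directory, concurrency limit, and retry counts here are my own assumptions, not canonical values:

```ini
# ~/.config/aria2/aria2.conf
enable-rpc=true              # expose the RPC interface AriaNg talks to
rpc-listen-all=false         # only listen on localhost
rpc-listen-port=6800         # aria2's default RPC port
dir=/home/user/downloads     # assumed download directory
continue=true                # resume partial downloads
max-concurrent-downloads=2   # queue the rest instead of saturating the link
max-tries=5                  # retry failed downloads
retry-wait=30                # seconds to wait between retries
```

With that in place, `aria2c --daemon=true` (run by hand or from a systemd user unit) starts the daemon the extension connects to.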



Posted 2022-07-08

Here I attempt to detail my home network setup, as revised at the end of 2021.

DHCP configuration -- This is done using isc-dhcp-server on Debian. I use the network range 192.168.0.x for my internal network, which is fairly small. Dynamic DHCP assignments are restricted to the .16 - .127 range. I create host stanzas to store reservations, e.g.:

host sprinkhaan {
  hardware ethernet DC:53:60:F3:A7:FF;
  fixed-address 192.168.0.10;  # example: a reservation outside the dynamic range
}

I use an option domain-search line to set up the default search domain for all DHCP clients.
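For context, the surrounding subnet declaration would look something like the sketch below; the gateway/DNS address and the search domain are assumptions, since only the dynamic range is given above:

```ini
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.16 192.168.0.127;        # dynamic pool as described
  option routers 192.168.0.1;              # assumed gateway address
  option domain-name-servers 192.168.0.1;  # assumed: this box also runs BIND
  option domain-search "home.example";     # placeholder internal domain
}
```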

DNS configuration -- This is done using BIND. All hosts receive this machine as their DNS server via DHCP. BIND forwards to my ISP's DNS servers (Zen, who have been great so far). Using BIND I am able to create "split-horizon" DNS so that e.g. my Puppet server resolves to an internal IP when requested internally. The main zone file defines all hosts in a dedicated internal domain. (The original idea here was to draw a sharp distinction between physical and virtual hosts, but I've become less certain on this. Still, it's good to use a separate domain from the real, public one.) The downside of this setup is that some duplication of the assignments is needed: the fixed-address setup in dhcpd.conf essentially has to be replicated in the zone file. This doesn't actually matter much in practice because I don't add hosts very often.
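One way to express the split-horizon arrangement is with BIND views, sketched below; the zone name, file path, and forwarder addresses are placeholders rather than my actual values:

```ini
// named.conf (fragment)
view "internal" {
    match-clients { 192.168.0.0/24; localhost; };
    recursion yes;
    forwarders { 203.0.113.1; 203.0.113.2; };  // ISP resolvers, placeholder IPs
    zone "home.example" {                      // placeholder internal zone
        type master;
        file "/etc/bind/db.home.example";
    };
};
view "external" {
    match-clients { any; };
    recursion no;
};
```

Internal clients hit the first view and get internal addresses; anything else falls through to the restricted external view.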

PPP interface -- This is done using pppd and a Draytek Vigor 130. The provider file that I use for pppd is fairly standard; I based it on a useful post from a person called Ruben. The only real difference between his configuration and mine is that I have to provide my real account password in chap-secrets.

Firewall configuration -- I use nftables. There's only one really notable thing that I do in nftables, and it's a bit subtle: clamping the TCP maximum segment size to the path MTU.

table inet filter {
    # ...
    chain forward {
        # ...
        tcp flags syn tcp option maxseg size set rt mtu
        # ...
    }
}

You can find more information on this at the nftables wiki. The problem manifests itself in strange ways, with some web sites simply timing out when you attempt to connect to them while most sites work fine; a couple of well-known sites had this strict MTU requirement as of late 2021. There may also be other ways to address this. I know that setting the MTU on clients also resolves the issue, but not all devices seem to respect the MTU setting when it's sent via DHCP. It may also be that setting mtu in /etc/network/interfaces would make a difference -- I have never tried this.

Interface configuration -- In /etc/network/interfaces, we create the regular ppp0 interface that is used for WAN access.

auto ppp0
iface ppp0 inet ppp
    provider zen

We create a bridge between two interfaces: one connected to a Wifi AP and one to a hardware switch.

auto br0
iface br0 inet static
    bridge_ports eno2 eno3

The wifi AP and switch just use their standard Draytek firmware for configuration, which seems to work fine so far.

A few notes/updates: the Vigor 130 has a CLI accessible via telnet (I believe), which allows some more advanced operations to be performed. There were also some concerns about whether the Zen connection should be Annex A or Annex B; I think we eventually concluded it should be Annex A. During the setup of the line there were frequent connection drops, with the pppd log reading "Modem hangup". These eventually seem to have gone away without further intervention on my part. I can only assume this was the "line training" phase that is much discussed on UK broadband forums.

Posted 2022-05-11

There's a genre of blog post that frequently gets linked on Hacker News: attempts to explain why problems that seem conceptually simple become very complex when attempted within an organization.

I'm referring to these as Chesterton's fence posts. There's an analogy to the eponymous principle: those not directly involved in the day-to-day materiality of commercial software production cannot see the looming network of past failures that has led to the current abundance of caution. Of course, one could still argue that some of these temporal drags on task completion are due to organizational dysfunction rather than prudence.

How many Microsoft employees does it take to change a lightbulb?

I could do that in a weekend!

The unexpected complications of minor features

Simple software things that are actually very complicated

Reality has a surprising amount of detail

A similar, famous example on HN is Dropbox, famously dismissed as something trivially achievable with rsync -- a claim that's not exactly wrong, but that nonetheless misses the point.

Posted 2022-04-21

This is an extract from an unpublished paper. I first heard of this concept on the excellent Talking Politics podcast (RIP).

The Copernican principle (or Copernicus method) is a concept developed by the American astrophysicist J. Richard Gott, who first hypothesized it in 1969. It is a probability-based method for estimating the potential lifespan of an observable object given no information other than the date of observation.

The method is based on dividing the hypothetical lifespan of a thing into four quarters. It yields a lower bound on the future lifespan of a thing and an upper bound.

If one observes the object at the beginning of the middle two quarters, three quarters of its lifespan remain, meaning the future is three times as long as the past. If one instead assumes one is at the end of that middle period, the future is only one-third as long as the past.

The canonical example used by Gott is the Berlin Wall. Here are the facts we know in this application:

  • We visited the wall in 1969.
  • The wall is eight years old at that time.

Thus the lower bound for the time-left is given by 8 × ⅓, that is, two and two thirds. This would mean that the earliest date for the "end" of the object, i.e. the collapse of the Wall, is two-thirds of the way through 1971.

The upper bound, on the other hand, is given by 8 × 3, 24 years. This means that the upper bound for the fall of the Wall is 1993.

The Copernicus method claims that there is a 50% chance that the lifespan falls within these bounds. (As it happened, the Wall fell in 1989, comfortably inside the interval.)

As one can see, this produces a fairly wide interval: just over 21 years in the case of an 8-year-old object. A 45-year-old object gives an interval of 120 years. In fact, for an object of age x the interval size is 3x − x/3 = 8x/3, i.e. multiplying the age by two and two-thirds. So these bounds are very large, and remember they're always mediated by the condition that the probability of the bounds holding at all is only 0.5.
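The arithmetic above is simple enough to sketch in a few lines (a minimal illustration of the calculation; the function name is my own, not anything from Gott's paper):

```python
def copernican_bounds(age):
    """50%-confidence bounds on an object's remaining lifespan under
    Gott's Copernican principle: the future lies somewhere between
    one third and three times the length of the past."""
    return age / 3, age * 3

# Berlin Wall as observed in 1969, aged 8 years:
low, high = copernican_bounds(8)
print(1969 + low, 1969 + high)  # roughly 1971.7 and 1993
print(high - low)               # interval width: 8 * age / 3, about 21.3 years
```

Running the same function on a 45-year-old object gives the 120-year interval mentioned above.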

A similar concept is the 'Lindy effect', first named by Albert Goldman. Goldman's concept was rather divorced from the one current now, though; the modern version finds its ancestor in Benoit Mandelbrot. The point can be crudely summarized as follows: the predicted future lifespan of a thing varies proportionally to its past lifespan.

It's unclear whether the Copernican method applies to repeated 'observations'. For instance, if Gott were to revisit the Berlin wall in, say, 1975, would the same calculation apply? Do we gain any more information by having that six-year gap, beyond the fact that the wall is now 14 years old?

One can use this information to create heuristics for how long a piece of knowledge is likely to remain valuable. For instance, SQL was created in 1974, making it 48 years old as of this writing. The lower bound for the end of SQL's lifespan is thus 2038, making it likely a worthwhile investment. SQL is an extreme outlier in this scenario, though. As a matter of simple mental arithmetic: if one decides to focus on current media from 1992, such media has lasted thirty years, so the lower bound on its remaining lifespan under the Copernican principle is ten years.

Of course, this only applies in the scenario where we lack other information. If we assume that certain allegations cast a shadow over Woody Allen's career, it may not be prudent to assume that simply because of the empirical fact of his film presenting itself to consciousness that the Copernican principle applies. The lifespan may be artificially shortened, or behave nonlinearly.

Posted 2022-04-10

Friends described opposite impulses in Schubert’s personality. Johann Mayrhofer, a poet and librettist, defined it as “tenderness and coarseness, sensuality and candour, sociability and melancholy”. More dramatically Joseph Kenner, another acquaintance, said long after Schubert’s death: “His body, strong as it was, succumbed to the cleavage in his souls, as I would put it, of which one pressed heaven-wards and the other bathed in slime. Anyone who knew Schubert knows how he was made of two natures, foreign to each other; how powerful the craving for pleasure dragged his soul down to the slough of moral degradation.”

The enduring chill of Schubert's Winterreise, Phil Hebblethwaite

Posted 2022-02-14

It's about that time of the year when I do an upgrade from oldstable to stable, so here are my notes on Bullseye.

Upgrading notes:

  • Dropped npm patch for old node versions.
  • Upgraded syncthing config file to new version v32.
  • Dropped backport spec for gcc-doc now present in bullseye.
  • Dropped MELPA request for haskell-mode now present in bullseye.
  • Upgrading PHP to 7.4.
  • A few emacs packages have been renamed to have an elpa- prefix.
  • Emacs package no longer has a version suffix in its package name, just being emacs instead.
  • /usr/bin/python no longer installed by default -- this is a problem for the non-packaged version of youtube-dl. I haven't found a good solution for this yet. I'm probably going to be evaluating yt-dlp for my needs.
  • Use Debian's /etc/ssl/openssl.cnf, dropping my own.
  • Drop some older pip packages and a few that are more trouble than they're worth.
  • Drop custom package of archive puppet module as now available in bullseye.
  • tt-rss upgraded and was thoroughly broken, not sure how. It doesn't matter too much in my case. I had to manually reapply the database schema.
  • Manually upgrade postgresql clusters to v13.
  • Something new called Tracker exists, I've yet to investigate this.
  • Whonix packages for monero-gui look broken, but the old ones for buster work fine.
  • ddclient Linode support patch refreshed.
  • python-rgain dropped, replace it with pip package rgain3 for anankasm.

Overall this was a fairly smooth upgrade, especially because Puppet only had a minor version bump so it wasn't necessary to purge and regenerate all of the Puppet client certificates (which is normally the biggest pain point).

Yow! x3

Posted 2021-09-01

A few poorly-reasoned rants to express my feelings on React.

Community support on the Reactiflux Discord is mostly awful (no offence to the people who do help). The main channels suffer from what you could call the #ubuntu problem: excessive demand, no supply. Lots of noobs ask poorly formatted, unclear questions, and as a result users with the requisite experience to answer don't have the patience to monitor the channel. Consequently, even extremely clear questions with definite answers don't get the kind of response you would get on (for instance) #c++ on freenode.

No guidance on how to structure applications. Again, #NOTAFRAMEWORK caveat, but in some ways this is like building a gun without a safety catch: if your thing X causes someone to die when it's used wrong, and it happens to be used wrong in 99% of cases, does that speak well of the thing? There are a set of best practices in the community but nothing that's really agreed upon or endorsed, and it's near-impossible to come to a fresh React codebase without doing huge amounts of spelunking. Vue makes much more sense in this regard. Even though I can't pin down a precise technical reason for it (SFCs? documentation?), in practice project X is far more likely to be structured similarly to project Y in Vue than in React.

Unacceptable levels of boilerplate when creating forms. I don't think I need to demonstrate, even React fans admit that it's not where the toolkit shines (and Vue competes badly here too when Vuex is in use; AngularJS is better). APIs like Formik are ludicrously complicated for what they do. I think that traditional server-side web frameworks are far better at handling applications that are basically forms, which is a surprising number of applications.

There are patterns labeled as "advanced" that are completely necessary to the day-to-day usage of React, e.g. HOCs, render props, children; even Redux is a fair example. "But you can get so far without these!", I hear you cry. You can, but do you want to? Will what you've produced have any value if it's a write-only mess? Also, what the hell is a Saga and why do I need it?

Facepalm-inducing C-R-A bug.

Create-react-app sucks with its 'eject' concept. Why say: "Stick 100% with the preset, or completely dissociate from us and never get any updates"? Surely it's not beyond the wit of man to form an abstraction on top of the C-R-A tooling that allows SOME conditional behaviour but is robust against upgrades (by which I mean, loudly complains when upgrades will break)? Instead they go with the Mac philosophy of pre-canned-and-designed-experience or nothing at all, and I feel like that's essentially out of what can be called laziness or, more accurately, unwillingness to invest time and thus money. Surely FB has enough of those two resources? (For an aside here, see Two Categories of FOSS Project.)

Everywhere in the frontend world and in React too, you get this boosterish, marketing-adjacent attitude, showing up in the documentation:

Apollo Link is easy to use with a variety of GraphQL libraries. It's designed to go anywhere you need to fetch GraphQL results.

The first sentence is OK, though pretty much unnecessary; the second sentence is rage-inducing. We're programmers, can't we just communicate clearly? All the "Made with <3" and emoji abuse throughout the documentation has the same alienating effect. I know I'm old, but just give me an HTML4 page with <P> tags and no styling; that's when you know the software is quality. (Ideally it was written by a 65-year-old academic in emacs.)

There are still race conditions in the webpack TS watcher to this day that make it incredibly frustrating to work in large TS codebases. This is largely the result of bit-rot, which in turn is the result of the VSCode monoculture not caring about command line tooling. Many frontend developers just do not care that their whole setup is radically dependent on editor choice. Not only this, but this specific issue is one that is extremely difficult to find a community space even to talk about, because it sits at the intersection of two different pieces of infrastructure -- Webpack, which plenty of people know about but very few people want to know about, and TypeScript, which merely happens to be slow enough to more easily trigger this particular latent issue.

Not React-specific, but Apollo forces your API into weird shapes. When I use useMutation, I am recommended to put the error-handling callback into the hook call itself, rather than the place where I actually invoke the mutation. This isn't the normal pattern in software.

Every time you want to do a conditional or a branching statement you essentially have to factor the conditional into a new functional component, that or use an ugly ternary. Again, very principled, but not very ergonomic. The lack of first-class IDE support makes this one a particular pain, since it shows up constantly. I find myself hoping that the business domain logic can be expressed as a simple foo() && <bar/> just so that I can avoid factoring out yet-another-conditional.

In some ways React reminds me of Clojure, which is both good and bad. A single basic unit (the JSX expression, or component) that's endlessly composable in principled ways. The problems it has are also ones that Clojure codebases can show: a lack of standardization of any particular structuring method, so the overall arrangement reflects the particular mental quirks of the author. The syntax of Clojure is more attractive to me, and certainly much less verbose than JS, even though syntax should certainly not be the focus in Lisp. React is really fairly readable at a micro level, provided that the author wasn't insane, but at a whole-application level it is far more challenging to reason about than most people will admit, and this is partly due to the relentless decomposing that the functional style forces. It's not so bad in Clojure, I think because Clojure backend code is fundamentally about data manipulation which puts an upper limit on the level of the call stack you can need while still having a good design. In React there's no practical, conventional, or aesthetic limit to the depth of that call stack. And pulling parts out from deep inside the structure is mostly a massive pain, where in Clojure you can just use the REPL.

Posted 2021-04-08

This C-R-A issue is an instance of a larger concept which surfaces frequently in the open source community now, where the maintainers of a project decide to outright ignore a large section of the user base and mark their bug WONTFIX. This has happened throughout history but recently projects have become divided into two real categories. Category 1 is "open source in name but largely closed to contributions and certainly closed to design debate" -- this is what larger projects tend to become, and projects run by companies generally start off this way and get worse over time. Category 2 is "open source & community driven", this is broadly how I'd characterize projects like Debian, Python, and the majority of smaller projects (especially software libraries).

Category 1 projects do the following: Remove features, regardless of whether users want them or not. Try desperately to reduce the size of their test matrix by reducing features. Justify not adding very popular wanted features based on their test matrix. Justify not merging PRs because the author wasn't "in-group". Accuse anyone who criticizes these policies of being 'entitled'. Opportunistically co-opt the language of therapy and self-care to support these decisions, which really doesn't apply when the majority of maintainers of category 1 projects are paid contributors (see Silva 2013 for more on the omnipresence of the language of therapy, and of course "self-care" is a co-opted concept from Lorde).

Another example of this is this request to add shell hooks to Docker. Again there's some justification for this decision.

A proto-version of this division surfaced in the systemd controversy years ago, that continues to rumble on. This is overall way less important than it was made out to be at the time; for reference, I take the side of systemd on a technical basis (and I think that in this case the users against systemd were largely wrong), but it perfectly exemplifies this developers-as-Eloi, users-as-Morlocks approach. Note that I am NOT saying that all one-man FOSS projects have a responsibility to implement whatever users want.

Firefox over the last 5 years has become far more of a category 1 project than it used to be. GNOME has also been a category 1 project for many years.

Update 2022-02-22: The category 1 projects roll on.

Posted 2021-04-08

Base data creation:

CREATE
  (l:List {id: 42, name: "Stuff"}),
  (i1:Item {id: "a", description: "Fry"}),
  (i2:Item {id: "b", description: "Bender"}),
  (i3:Item {id: "c", description: "Leela"}),
  (i1)-[:IN_LIST {position: 0}]->(l),
  (i2)-[:IN_LIST {position: 1}]->(l),
  (i3)-[:IN_LIST {position: 2}]->(l);

Cypher statement to move the item with id a and 'swap' it with the item with id c. Here, 'swap' doesn't carry its literal meaning, which would be to exchange the indices of the two items. Moving an item onto an item currently before it means it gets moved before that item, while moving it onto an item after it means it gets moved after that item. The net effect is a full range of movement possibilities.

MATCH
  (l:List {id: 42}),
  (i1:Item {id: "a"})-[r1:IN_LIST]->(l),
  (i2:Item {id: "c"})-[r2:IN_LIST]->(l)
WITH
    r1.position AS oldPosition,
    r2.position AS newPosition,
    r1 AS r1,
    CASE
      WHEN r2.position < r1.position THEN 1
      WHEN r2.position > r1.position THEN -1
      ELSE 0
    END AS signum
MATCH (i:Item)-[r3:IN_LIST]->(:List {name: "Stuff"})
WHERE (newPosition < oldPosition
       AND r3.position >= newPosition AND r3.position < oldPosition)
   OR (newPosition > oldPosition
       AND r3.position > oldPosition AND r3.position <= newPosition)
SET r3.position = r3.position + signum, r1.position = newPosition;

Update December 2020: This is subtle enough that I thought that it merited its own repository with a test suite. See the implementation for more details on how this works. BTW I don't make any performance claims about this code.

Posted 2020-10-14
OOB redirect_uri values
Posted 2020-10-07
Quick HTTP server
Posted 2020-10-07
The Lowest UUIDv4
Posted 2020-09-24
Ad-hoc Patreon audio scraping
Posted 2020-05-17
SSH key setup
Posted 2019-09-11
Stretch to Buster
Posted 2019-08-05
Subprocess Pipe Comparison
Posted 2019-07-02
The X3 Wiki Archive
Posted 2019-06-16
Fabric 2 cheat sheet
Posted 2019-03-05
Using comboboxes in Qt5
Posted 2019-02-27
System Puppet, CentOS 7 Client
Posted 2019-02-25
X3 savegames
Posted 2019-02-02
Shadow Tween technique in Vue
Posted 2019-01-06
Width list transition in Vue
Posted 2018-12-18
Emoji Representations
Posted 2018-09-14
Thoughts on Cheesesteak & More
Posted 2018-08-29
Vue + GraphQL + PostgreSQL
Posted 2018-07-20
Neo4j Cypher query to NetworkX
Posted 2018-05-09
FP & the 'Context Problem'
Posted 2018-02-27
Cloake Vegetable Biryani
Posted 2018-02-25
FFXII Builds
Posted 2018-02-02
Custom deployments solution
Posted 2017-12-09
SCons and Google Mock
Posted 2017-11-30
Sunday Lamb Aloo
Posted 2017-11-19
centos 6 debian lxc host
Posted 2017-11-03
Srichacha Noodle Soup
Posted 2017-10-17
Kaeng Kari
Posted 2017-10-13
Ayam Bakar (Sri Owen)
Posted 2017-10-12
Pangek Ikan (Sri Owen)
Posted 2017-10-12
Chicken Tikka Balti Masala
Posted 2017-10-06
Clojure Log Configuration
Posted 2017-09-28
Clojure Idioms: strict-get
Posted 2017-09-28
Posted 2017-09-18
Philly Cheesesteak
Posted 2017-09-14
Posted 2017-09-13
Srichacha Kaeng Pa
Posted 2017-08-31
Malaidar Aloo
Posted 2017-08-10
BBQ Balti Chicken
Posted 2017-07-19
Sabzi Korma
Posted 2017-07-18
Vegetable Tikka Masala
Posted 2017-07-02
Soto Ayam
Posted 2017-06-08
Bombay Aloo w/Bunjarra
Posted 2017-06-03
Chicken Dopiaza
Posted 2017-06-01
LJ Bunjarra
Posted 2017-05-31
Glasgow Lamb Shoulder Tikka
Posted 2017-05-24
Tofu Char Kway Teow
Posted 2017-05-12
King Prawn Balti
Posted 2017-04-24
Ad-hoc Quorn Rogan Josh
Posted 2017-04-15
Glasgow Vindaloo
Posted 2017-03-28
Posted 2017-03-26
Toombs Saag Balti
Posted 2017-02-25
Glasgow Bombay Rogan Josh
Posted 2017-02-21
Glasgow Chicken Balti
Posted 2017-02-16
Quorn Balti & Cloake Naan
Posted 2017-02-03
Two Spice Marinades
Posted 2017-01-18

This blog is powered by coffee and ikiwiki.