This feed omits posts by rms. Just 'cause.

'Live' tweeting historical witch trials. Modern English, period sources.

An ex-witch called Deliverance Hobbs is now bewitched because she confessed. The spectral form of another witch keeps appearing and beating her with iron rods as punishment. [...]

A witch had an inch-long teat on her belly. She claimed it was a hernia, but it looked recently-sucked, with a hole at the tip. When it was squeezed, "white milky matter" came out. She also had three smaller teats on her genitals. A girl claimed that Rose Cullender, the witch with the milky teat on her belly and three extra teats on her vulva, had been coming to her bed at night, one time bringing a huge dog with her. [...]

A child caught an invisible mouse and threw it into the fire where it exploded with a flash like gunpowder. No one but the child saw a mouse, but everyone saw the flash. [...]

A bee flew into a child's face. The child vomited up a large iron nail. The child later confirmed that the bee had been carrying the nail and had "forced it into her mouth". [...]

The child got sick. One night a strange toad fell out of her blanket and ran along the hearth. A boy picked it up with tongs and held it in the fire, where it exploded like a gunshot. The child had been very sick, but after the magic toad exploded in the fire she completely recovered. [...]

The boy told the Devil he would commit to a life of deceit if the Devil would grant him one strange superpower: the boy asked for his saliva to be given the power to scald dogs as though it were boiling water. The Devil granted the boy his wish (to "make his spittle scald a dog"). The boy spat on a dog and a large amount of scalding hot water poured all over it. The possessed boy admitted to having spat a torrent of boiling water onto the dog. His honesty and remorse caused the Devil to leave his body with a terrifyingly loud noise. [...]

A witch riding a goat was able to carry 15 or 16 children in one trip by taking a long wooden pole, sticking one end into the goat's anus and seating the children all along the length of the pole. [...]

Asked where precisely on her body the Devil sucked her blood, the witch said the Devil sucked at a location he had chosen himself, just slightly above her anus. Because of the Devil's constant sucking at that spot, there is now a teat-like growth on the witch's body, near her anus. Elizabeth asked the Devil why he sucked her blood from the teat just above her anus. He said that it nourished him. [...]

A witch's blood is infected with a "poisonous ferment". If a witch gives you a hard glare while thinking spiteful thoughts, pestilential spirits will shoot out of her eyes and infect you. Nature works by "subtle streams" of "minute particles". An example would be the jets of pestilence which shoot out of witches' eyes by means of their malicious imagination and cause "dangerous and strange alterations" in those weak enough to be affected. The contagious witch-curse can be airborne - spread into the air by a witch's glaring eyes. The pestilence can also be transmitted in other more obvious ways such as the victim being hit by a witch or being given a poisoned apple. [...]

Some say, why would the Devil waste his time running errands for a "silly old woman"? But if the Devil is wicked then he probably also uses his time unwisely. When we hear about the crazy behaviour of spirits and familiars, some people ask why the Devil would frolic so ludicrously. Perhaps witches are only visited by spirits or demons who are very junior, low-ranking or disgraced. [...]

The idea that witches can change their bodies into animals is no harder to believe than the idea that the thoughts or imagination of a pregnant woman can cause her foetus to have real and monstrous birth defects, which is of course a widely credited fact.

Previously, previously, previously, previously, previously.

Posted Wed Jan 20 22:31:20 2021 Tags:
I sincerely hope that today's regime change can mark a return to my blog's core competency: fart jokes and tentacles.
Posted Wed Jan 20 20:52:28 2021 Tags:
The Great Wikipedia Titty Scandal: This is the story of a Wikipedia administrator gone mad with 80,000 boob pages.

Digging into Neelix's history, however, his fellow administrators couldn't believe what they found. He hadn't just created a handful of redirects, as the original report described; he'd quietly created thousands upon thousands of new redirects, each one a chaotic, if not offensive, permutation of the word "tits" and "boobs." For example, he created redirects for "tittypumper," "tittypumpers," "tit pump," "pump titties," "pumping boobies" and hundreds more for "breast pump." In fact, for seemingly every Wikipedia article related to breasts, he did something similar. [...]

"I especially don't see the value of creating pages with titles like 'titty banged,' 'frenchfucking,' 'licks boobs,' 'boobyfeeding,' 'a trip down mammary lane' and so on. Wikipedia is not censored, but we're also not Urban Dictionary," added Ivanvector. [...]

"I've just gone through all 80,000 page creations, and he was creating nonsense like 'anti-trousers' years ago," added Iridescent. "This isn't anything new, it's just the first time it's come to light."

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

Posted Wed Jan 20 19:24:23 2021 Tags:
By my extremely scientific survey, 10% of all music videos made in 2020 have used the following filter. Stop. It is not charming like an 808 handclap. It is irritating like an airhorn sample. I don't know which video-editing software came with this as one of the stock old-timey presets, but FFS, stop using it.

Bonus points here for the unnecessarily burned-in pillarboxing.

Previously, previously, previously, previously, previously, previously.

Posted Tue Jan 19 17:40:52 2021 Tags:
Your account has been disabled. You can't use Facebook because your account, or activity on it, doesn't follow our Community Standards.

Note that my Facebook account has zero friends, zero posts, zero photos, and has made zero comments in the last 4+ years. It only still exists so that I can admin our business pages.

Upload a photo of yourself. Upload a photo that clearly shows your face. Make sure the photo is well-lit and isn't blurry.
We have received your information. Thank you for sending your information. We have fewer people available to review information due to the coronavirus (COVID-19) pandemic. This means we may be unable to review your account. We apologize for any inconvenience.

"We may be unable to review your account. We apologize for any inconvenience "

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

Posted Mon Jan 18 16:32:53 2021 Tags:
So many people have died in Los Angeles County that officials have suspended air-quality regulations that limit the number of cremations.

Health officials and the L.A. County coroner requested the change because the current death rate is "more than double that of pre-pandemic years, leading to hospitals, funeral homes and crematoriums exceeding capacity, without the ability to process the backlog," the South Coast Air Quality Management District said Sunday.

Previously, previously, previously, previously, previously, previously, previously, previously.

Posted Mon Jan 18 07:35:12 2021 Tags:
Doin' fine.

When he was shot dead in 1993, most of the animals were shipped away, but four hippos were left to fend for themselves in a pond.

Although nobody knows exactly how many there are, estimates put the total number between 80 and 100, making them the largest invasive species on the planet. Scientists forecast that the number of hippos will swell to almost 1,500 by 2040. They conclude that, at that point, environmental impacts will be irreversible and numbers impossible to control.

"Nobody likes the idea of shooting a hippo, but we have to accept that no other strategy is going to work," [...]

Environmentalists have been trying to sterilise the hippos for years [...] Male hippos have retractable testes and females' reproductive organs are even harder to find, according to scientists. "We didn't understand the female anatomy," said David Echeverri Lopez, a government environmentalist. "We tried to sterilise females on several occasions and were always unsuccessful."

He is also playing an impossible game of catch-up. Mr Echeverri told The Telegraph that he is able to castrate roughly a hippo per year, whereas scientists estimate that the population grows by 10 percent annually. [...]

"Relocation might have been possible 30 years ago, when there were only four hippos," said Dr Castelblanco-Martínez. "Castration could also have been effective if officials had provided sufficient resources for the programme early on, but a cull is now the only option."

Previously, previously, previously, previously, previously.

Posted Mon Jan 18 02:22:11 2021 Tags:

This post explains how to parse the HID Unit Global Item as explained by the HID Specification, page 37. The table there is quite confusing and it took me a while to fully understand it (Benjamin Tissoires was really the one who cracked it). I couldn't find any better explanation online which means either I'm incredibly dense and everyone's figured it out or no-one has posted a better explanation. On the off-chance it's the latter [1], here are the instructions on how to parse this item.

We know a HID Report Descriptor consists of a number of items that describe the content of each HID Report (read: an event from a device). These Items include things like Logical Minimum/Maximum for axis ranges, etc. A HID Unit item specifies the physical unit to apply. For example, a Report Descriptor may specify that X and Y axes are in mm which can be quite useful for all the obvious reasons.

Like most HID items, a HID Unit Item consists of a one-byte item tag and 1, 2 or 4 byte payload. The Unit item in the Report Descriptor itself has the binary value 0110 01nn where the nn is either 1, 2, or 3 indicating 1, 2 or 4 bytes of payload, respectively. That's standard HID.

The payload is divided into nibbles (4-bit units) and goes from LSB to MSB. The lowest-order 4 bits (first byte & 0xf) define the unit System to apply: one of SI Linear, SI Rotation, English Linear or English Rotation (well, or None/Reserved). The rest of the nibbles are in this order: "length", "mass", "time", "temperature", "current", "luminous intensity". In something resembling code this means:


system = value & 0xf
length_exponent = (value & 0xf0) >> 4
mass_exponent = (value & 0xf00) >> 8
time_exponent = (value & 0xf000) >> 12
...
The System defines which unit is used for length (e.g. SILinear means length is in cm). The actual value of each nibble is the exponent for the unit in use [2]. In something resembling code:

switch (system) {
case SILinear:
    print("length is in cm^{length_exponent}");
    break;
case SIRotation:
    print("length is in rad^{length_exponent}");
    break;
case EnglishLinear:
    print("length is in in^{length_exponent}");
    break;
case EnglishRotation:
    print("length is in deg^{length_exponent}");
    break;
case None:
case Reserved:
    print("boo!");
    break;
}

For example, the value 0x321 means "SI Linear" (0x1) so the remaining nibbles represent, in ascending nibble order: Centimeters, Grams, Seconds, Kelvin, Ampere, Candela. The length nibble has a value of 0x2 so it's square cm, the mass nibble has a value of 0x3 so it is cubic grams (well, it's just an example, so...). This means that any report containing this item comes in cm²g³. As a more realistic example: 0xF011 would be cm/s.

If we changed the lowest nibble to English Rotation (0x4), i.e. our value is now 0x324, the units represent: Degrees, Slug, Seconds, F, Ampere, Candela [3]. The length nibble 0x2 means square degrees, the mass nibble is cubic slugs. As a more realistic example, 0xF014 would be degrees/s.

Any nibble with value 0 means the unit isn't in use, so the example from the spec with value 0x00F0D121 is SI linear, units cm² g s⁻³ A⁻¹, which is... Voltage! Of course you knew that and totally didn't have to double-check with wikipedia.
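
Putting the nibble layout and the two's complement rule from [2] together, here is a small self-contained C sketch (my code, not from the spec) that decodes a Unit value into its signed per-dimension exponents:

#include <stdio.h>
#include <stdint.h>

/* extract nibble idx (1-6) and sign-extend the 4-bit two's complement value */
static int nibble_exponent(uint32_t value, int idx)
{
    int v = (value >> (idx * 4)) & 0xf;
    return v >= 8 ? v - 16 : v;
}

int main(void)
{
    uint32_t unit = 0x00F0D121;  /* the Voltage example from the spec */
    const char *dims[] = { "length", "mass", "time", "temperature",
                           "current", "luminous intensity" };
    printf("system: 0x%x\n", (unsigned)(unit & 0xf));
    for (int i = 0; i < 6; i++)
        printf("%s exponent: %d\n", dims[i], nibble_exponent(unit, i + 1));
    return 0;
}

Run as-is, this prints system 0x1 (SI Linear) and the exponents 2, 1, -3, 0, -1, 0 - the cm² g s⁻³ A⁻¹ from above.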

Because bits are expensive and the base units are of course either too big or too small or otherwise not quite right, HID also provides a Unit Exponent item. The Unit Exponent item (a separate item to Unit in the Report Descriptor) then describes the exponent to be applied to the actual value in the report. For example, a Unit Exponent of -3 means 10⁻³ is applied to the value. If the report descriptor specifies an item of Unit 0x00F0D121 (i.e. V) and a Unit Exponent of -3, the value of this item is mV (millivolt); a Unit Exponent of 3 would make it kV (kilovolt).
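
In something resembling code (the variable names are mine), applying it is just a power-of-ten scale:

/* raw_value comes from the HID report, unit_exponent from the
   Unit Exponent item (also a signed two's complement value, see [2]) */
double scaled = raw_value * pow(10, unit_exponent);
/* e.g. Unit 0x00F0D121 (V) with Unit Exponent -3: raw_value is in mV,
   scaled is the value in V */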

Now, in hindsight all this is pretty obvious and maybe even sensible. It'd have been nice if the spec had explained it a bit more clearly, but then I would have nothing to write about, so I guess overall I call it a draw.

[1] This whole adventure was started because there's a touchpad out there that measures touch pressure in radians, so at least one other person out there struggled with the docs...
[2] The nibble value is twos complement (i.e. it's a signed 4-bit integer). Values 0x1-0x7 are exponents 1 to 7, values 0x8-0xf are exponents -8 to -1.
[3] English Linear should've trolled everyone and used Centimetres where SI Linear uses Centimeters.

Posted Wed Jan 13 11:28:00 2021 Tags:

"Systems design" is a branch of study that tries to find universal architectural patterns that are valid across disciplines.

You might think that's not a possibility. Back in university, students used to tease the Systems Design Engineers, calling it "boxes and arrows" engineering. Not real engineering, you see, since it didn't touch anything tangible, like buildings, motors, hydrochloric acid, or, uh, electrons.

I don't think the Systems Design people took this criticism too seriously, since everyone also knew that programme had the toughest admission criteria in the whole university.

(A mechanical engineer told me they saw electrical/computer engineers the same way: waveforms on a screen instead of real physical things that you could touch, change, and fix.)

I don't think any of us really understood what boxes-and-arrows engineering was back then, but luckily for you, now I'm old. Let me tell you some stories.

What is systems design?

I started thinking more clearly about systems design when I was at a big tech company and helped people refine their self-promotion employee review packets. Most of it was straightforward, helping them map their accomplishments to the next step up the engineering ladder:

  • As a Novice going for Junior, you had to prove you could fix bugs without too much supervision;
  • Going for Senior, you had to prove you could implement a whole design with little supervision;
  • Going for Staff, you had to show you could produce designs based on business problems with basically no management;
  • Going for Senior Staff, you had to solve bigger and bigger business problems; and so on.

After helping a few dozen people with their assessments, I noticed a trend. Most developers mapped well onto the ladder, but some didn't fit, even though they seemed like great engineers to me.

There were two groups of misfits:

  1. People who maxed out as a senior engineer (building things) but didn't seem to want to, or be able to, make it to staff engineer (translating business problems).

  2. People who were ranked at junior levels, but were better at translating business problems than at fixing bugs.

Group #1 was formally accounted for: the official word was that most employees should never expect to get past Senior Engineer. That's why they called it Senior. It wasn't much consolation to people who wanted to earn more money or to keep improving for the next 20-30 years of a career, but it was something we could talk about.

(The book Radical Candor by Kim Scott has some discussion about how to handle great engineers who just want to build things. She suggests a separate progression for "rock solid" engineers, who want to become world-class experts at things they're great at, and "steep trajectory" engineers, who might have less attention to detail but who want to manage ever-bigger goals and jump around a lot.)

People in group #2 weren't supposed to exist. They were doing some hard jobs - translating business problems into designs - with great expertise, but these accomplishments weren't interesting to the junior-level promotion committees, who had been trained to look for "exactly one level up" attributes like deep technical knowledge in one or two specific areas, a history of rapid and numerous bug fixes, small independent launches, and so on. Meanwhile, their peers who couldn't (yet) architect their way out of a paper bag rose more quickly through the early ranks, because they wrote reams of code fast.

Tanya Reilly has an excellent talk (and transcribed slides) called Being Glue that perfectly captures this effect. In her words: "Glue work is expected when you're senior... and risky when you're not."

What she calls glue work, I'm going to call systems design. They're two sides of the same issue. Humans are the most unruly systems of all, and yet, amazingly, they follow many of the same patterns as other systems.

People who are naturally excellent at glue work often stall out early in the prescribed engineering pipeline, even when they'd be great in later stages (staff engineers, directors, and executives) that traditional engineers struggle at. In fact, it's well documented that an executive in a tech company requires almost a totally different skill set than a programmer, and rising through the ranks doesn't prepare you for that job at all. Many big tech companies hire executives from outside the company, and sometimes even from outside their own industry, for that reason.

...but I guess I still haven't answered the question. What is systems design? It's the thing that will eventually kill your project if you do it wrong, but probably not right away. It's macroeconomics instead of microeconomics. It's fixing which promotion ladders your company even has, rather than trying to climb the ladders. It's knowing when a distributed system is or isn't appropriate, not just knowing how to build one. It's repairing the incentives in a political system, not just getting elected and passing your favourite laws.

Most of all, systems design is invisible to people who don't know how to look for it. At least with code, you can measure output by the line or the bug, and you can hire more programmers to get more code. With systems design, the key insight might be a one-sentence explanation given at the right time to the right person, that affects the next 5 years of work, or is the difference between hypergrowth and steady growth.

Sorry, I don't know how to explain it better than that. What I can do instead is talk about some systems design problems and archetypes that repeat, over and over, across multiple fields. If you can recognize these archetypes, and handle them before they kill your project, you're on your way to being a systems designer.

Systems of control: hierarchies and decentralization

Let's start with an obvious one: the problem of centralized vs distributed control structures. If I ask what's a better org structure - a command-and-control hierarchy or a flat organization - most people have been indoctrinated to say the latter. Similarly, if I ask whether you should have an old crusty centralized database or a fancy distributed database, everyone wants to build the latter. If you're an SRE and we start talking about pets and cattle, you always vote for cattle. You'd laugh at me if I suggested using anything but a distributed software version control system (i.e. git). The future of money, I've heard, is distributed decentralized cryptocurrency. If you want to defeat censorship, you need a distributed social network. The trend is clear. What's to debate?

Well, real structures are more complicated than that. The best introductory article I know on this topic is Jo Freeman's The Tyranny of Structurelessness, which includes the famous quote: "This apparent lack of structure too often disguised an informal, unacknowledged and unaccountable leadership that was all the more pernicious because its very existence was denied."

"Informal, unacknowledged, and unaccountable" control is just as common in distributed computing systems as it is in human social systems.

The truth is, nearly every attempt to design a hierarchy-free, "flat" control system just moves the central control around until you can't see it anymore. Human structures all have leaders, whether implicit or explicit, and the explicit ones tend to be more diverse.

The web depends on centrally controlled DNS and centrally approved TLS certificate issuers; the global Internet depends on a small cabal who sorts out routing problems. Every blockchain depends on whoever decides if your preferred chain will fork this week, and whoever runs the popular exchanges, and whoever decides whether to arrest those people. Distributed radio networks depend on centralized government spectrum licenses. Democracy depends on someone enforcing your right to vote. Capitalism depends on someone enforcing the rules of a "free" marketplace.

At my first startup, we tried to run the development team as a flat organization, where everyone's opinions were listened to and everyone could debate the best way to do something. The overall consensus was that we mostly succeeded. But I was shocked when one of my co-workers said to me afterward: "Our team felt flat and egalitarian. But you can't ever forget that it was only that way because you forced it to be that way."

Truly distributed systems do exist. Earth's ecosystem is perhaps one (although it's becoming increasingly fragile and dependent on humans not to break it). Truly distributed databases using Raft consensus or similar algorithms certainly exist and work. Distributed version control (like git) really is distributed, although we ironically end up re-centralizing our usage of it through something like Github.

CAP theorem is perhaps the best-known statement of the tradeoffs in distributed systems, between consistency, availability, and "partition tolerance." Normally we think of the CAP theorem as applying to databases, but it applies to all distributed systems. Centralized databases do well at consistency and availability, but suck at partition tolerance; so do authoritarian government structures.

In systems design, there is rarely a single right answer that applies everywhere. But with centralized vs distributed systems, my rule of thumb is to do exactly what Jo Freeman suggested: at least make sure the control structure is explicit. When it's explicit, you can debug it.

Chicken-egg problems

Another archetypal systems design question is the "chicken-egg problem," which is short for: which came first, the chicken or the egg?

In case that's not a common question where you come from, the idea is eggs produce chickens, and chickens produce eggs. That's all fine once it's going, but what happened, back in ancient history? Was the very first step in the first iteration an egg, or a chicken?

The question sounds silly and faux-philosophical at first, but there's a real answer and that answer applies to real problems in the business world.

The answer to the riddle is "neither"; unless you're a Bible literalist, you can't trace back to the Original Chicken that laid the Original Egg. Instead there was probably a chicken-like bird that laid a mostly egg-ish egg, and before that, there were millions of years of evolution, going all the way back to single-celled organisms and whatever phenomenon first spawned those. What came "first"? All that other stuff.

Chicken-egg problems appear all the time when building software or launching products. Which came first, HTML5 web browsers or HTML5 web content? Neither, of course. They evolved in loose synchronization, tracing back to the first HTML experiments and way before HTML itself, growing slowly and then quickly in popularity along the way.

I refer to chicken-egg problems a lot because designers are oblivious to them a lot. Here are some famous chicken-egg problems:

  • Electrical distribution networks
  • Phone and fax technologies
  • The Internet
  • IPv6
  • Every social network (who will use it if nobody is using it?)
  • CDs, DVDs, and Blu-Ray vs HD DVD
  • HDTV (1080p etc), 4k TV, 8k TV, 3D TV
  • Interstate highways
  • Company towns (usually built around a single industry)
  • Ivy League universities (could you start a new one?)
  • Every new video game console
  • Every desktop OS, phone OS, and app store

The defining characteristic of a chicken-egg technology or product is that it's not useful to you unless other people use it. Since adopting new technology isn't free (in dollars, or time, or both), people aren't likely to adopt it unless they can see some value, but until they do, the value isn't there, so they don't. A conundrum.

It's remarkable to me how many dreamers think they can simply outwait the problem ("it'll catch on eventually!") or outspend the problem ("my new mobile OS will be great, we'll just subsidize a few million phones"). And how many people think getting past a chicken-egg problem, or not, is just luck.

But no! Just like with real chickens and real eggs, there's a way to do it by bootstrapping from something smaller. The main techniques are to lower the cost of adoption, and to deliver more value even when there are fewer users.

Video game console makers (Nintendo, Sony, Microsoft) have become skilled at this; they're the only ones I know who do it on purpose every few years. Some tricks they use are:

  • Subsidizing the cost of early console sales.
  • Backward compatibility, so people who buy the new console can use older games even before there's much native content.
  • Games that are "mostly the same" but "look better" on the new console.
  • Compatible gamepads between generations, so developers can port old games more easily.
  • "Exclusive launch titles": co-marketing that ensures there's value up front for consumers (new games!) and for content producers (subsidies, free advertising, higher prices).

In contrast, the designs that baffle me the most are ones that absolutely ignore the chicken-egg problem. Firefox and Ubuntu phones, distributed open source social networks, alternative app stores, Linux on the desktop, Netflix competitors.

Followers of this diary have already seen me rant about IPv6: it provides nearly no value to anyone until it is 100% deployed (so we can finally shut down IPv4!), but costs immediately in added complexity and maintenance (building and running a whole parallel Internet). Could IPv6 have been rolled out faster, if the designers had prioritized unwinding the chicken-egg problem? Absolutely yes. But they didn't acknowledge it as the absolute core of their design problem, the way Android, Xbox, Blu-Ray, and Facebook did.

If your product or company has a chicken-egg problem, and you can't clearly spell out your concrete plan for solving it, then investors definitely should not invest in your company. Solving the chicken-egg problem should be the first thing on your list, not some afterthought.

By the way, while we're here, there are even more advanced versions of the chicken-egg problem. Facebook or faxes are the basic form: the more people who use Facebook or have a fax machine, the more value all those users get from each other.

The next level up is a two-sided market, such as Uber or Ebay. Nobody can get a ride from Uber unless there are drivers; but drivers don't want to work for Uber unless they can get work. Uber has to attract both kinds of users (and worse: in the same geographic region! at the same time of day!) before either kind gets anything from the deal. This is hard. They decided to spend their way to success, although even Uber was careful to do so only in a few markets at a time, especially at first.

The most difficult level I know is a three-sided market. For example, UberEats connects consumers, drivers, and restaurants. Getting a three-sided market rolling is insanely complicated, expensive, and failure-prone. I would never attempt it myself, so I'm impressed at the people who try. UberEats had a head start since Uber had consumers and drivers in their network already, and only needed to add "one more side" to their market. Most of their competitors had to attract all three sides just to start. Whoa.

If you're building a one-sided, two-sided, or three-sided market, you'd better understand systems design, chickens, and eggs.

Second-system effect

Taking a detour from business, let's move to an issue that engineers experience more directly: second-system effect, a term that comes from the excellent book, The Mythical Man-Month, by Fred Brooks.

Second system effect arises through the following steps:

  • An initial product starts small and is built incrementally, starting with a low budget and a few users.
  • Over time, the product gains popularity and becomes profitable.
  • The system evolves, getting more and more hacks on top, and early design tradeoffs start to be a bottleneck.
  • The engineers figure out a new design that would fix all the mistakes they know about, plus more! (And they're probably right.)
  • Since the product is already popular, it's easy to justify spending the time to "do it right this time" and "build a strong platform for the next 10 years." So a project is launched to rewrite everything from scratch. It's expected to take several months, maybe a couple of years, and a big engineering team.

Sound familiar? People were trying this back in 1975 when the book was written, and they're still trying it now. It rarely goes well; even when it does work, it's incredibly painful.

25 years after the book, Joel Spolsky wrote Things you should never do, part 1 about the company-destroying effect of Netscape/Mozilla trying this. "They did it by making the single worst strategic mistake that any software company can make: they decided to rewrite the code from scratch."

[Update 2020-12-28: I mention Joel's now-20-year-old article not because Mozilla was such a landmark example, but because it's such a great article.]

Some other examples of second system effect are IPv6, Python 3, Perl 6, the Plan9 OS, and the United States system of government.

The results are remarkably consistent:

  • The project takes longer than expected to reach feature parity.
  • The new design often does solve the architectural problems in the original; however, it unexpectedly creates new architectural problems that weren't in the original.
  • Development time is split (or different developers are assigned) between maintaining the old system and launching the new system.
  • As the project gets increasingly overdue, project managers are increasingly likely to shut down the old system to force users to switch to the new one, even though users still prefer the old one.

Second systems can be merely expensive, or they can bankrupt your company, or destroy your user community. The attention to Perl 6 severely weakened the progress of perl; the work on Python 3 fractured the python community for more than a decade (and still does); IPv6 is obstinately still trying to deprecate IPv4, 25 years later, even though the problems it was created to solve are largely obsolete.

As for solutions, there isn't much to say about the second system effect except you should do your utmost to prevent it; it's entirely self-inflicted. Refactor your code instead. Even if it seems like incrementalism will be more work... it's worth it. Maintaining two systems in parallel is a lot more expensive than you think.

In his book, Fred Brooks called it the "second" system on purpose, because it was his opinion that after experiencing it once, any designer will build their third and later systems more incrementally so they never have to go through that again. If you're lucky enough to learn from historical wisdom, perhaps even your second system won't suffer from this strategic error.

A more embarrassing related problem is when large companies try to build a replacement for their own first system, but the developers of the first system have left or have already learned their Second System Lesson and are not willing to play that game. Thus, a new team is assembled to build the replacement, without the experience of having built the first one, but with all the confidence of a group of users who are intimately experienced with its surface flaws. I don't even know what this phenomenon should be called; the vicarious second system effect? Anyway, my condolences if you find yourself building or using such a product. You can expect years of pain.

[Update 2020-12-28: someone reminded me that CADT ("cascade of attention-deficit teenagers") is probably related to this last phenomenon.]

Innovator's dilemmas

Let's finally talk about a systems design issue that's good news for your startup, albeit bad news for big companies. The Innovator's Dilemma is a great book by Clayton Christensen that discusses a fascinating phenomenon.

Innovator's dilemmas are so elegant and beautiful you can hardly believe they exist as such a repeatable abstraction. Here's the latest one I've heard about, via an Anandtech Article about Apple Silicon:

A summary of the Innovator's Dilemma is as follows:

  • You (Intel in this case) make an awesome product in a highly profitable industry.
  • Some crappy startup appears (ARM in this case) and makes a crappy competing product with crappy specs. The only thing they seem to have going for them is they can make some low-end garbage for cheap.
  • As a big successful company, your whole business is optimized for improving profits and margins. Your hard-working employees realize that if they cede the ultra-low-end garbage portion of the market to this competitor, they'll have more time to spend on high-valued customers. As a bonus, your average margin goes up! Genius.
  • The next year, your competitor's product gets just a little bit better, and you give up the new bottom of your market, and your margins and profits further improve. This cycle repeats, year after year. (We call this "retreating upmarket.")
  • The crappy competitor has some kind of structural technical advantage that allows their performance (however you define performance; something relevant to your market) to improve, year over year, at a higher percentage rate than your product can. And/or their product can do something yours can't do at all (in ARM's case: power efficiency).
  • Eventually, one year, the crappy competitor's product finally exceeds the performance metrics of your own product, and promptly blows your entire fucking company instantly to smithereens.

Hey now, we've started swearing, was that really called for? Yes, I think so. If I were an Intel executive looking at this chart and Apple's new laptops, I would be scared out of my mind right now. There is no more upmarket to retreat to. The competitor's product is better, and getting better faster than mine. The game is already over, and I didn't even realize I was playing.

What makes the Innovator's Dilemma so beautiful, from a systems design point of view, is the "dilemma" part. The dilemma comes from the fact that all large companies are heavily optimized to discard ideas that aren't as profitable as their existing core business. Any company that doesn't optimize like this fails; by definition their profitability would go down. So thousands of worker bees propose thousands of low-margin and high-margin projects, and the company discards the former and invests heavily in the latter (this is called "sustaining innovation" in the book), and they keep making more and more money, and all is well.

But this optimization creates a corporate political environment (aha, you see we're still talking about systems design?) where, for example, Intel could never create a product like ARM. A successful low-priced chip would take time, energy, and profitability away from the high-priced chips, and literally would have made Intel less successful for years of its history. Even once ARM appeared and their trendline of improvements was established, they still had lower margins, so competing with them would still cannibalize their own high-margin products, and worse, now ARM had a head start.

In case you're a big company reading this: the book has a few suggestions for what you can do to avoid this trap. But if you're Intel, you should have read the book a few years ago, not now.

Innovator's dilemma plots are the prettiest when discussing hardware and manufacturing, but the concept applies to software too, especially when software is held back by a hardware limitation. For example, distributed version control systems (where you download the entire repository history to every client) were amusing toys until suddenly disks were big enough and networks were fast enough, and then DVCSes wiped out everything else (except in projects with huge media files).

Fancy expensive databases were the only way to get high transaction throughput, until SSDs came along and made any dumb database fast enough for most jobs.

Complicated database indexes and schemas were great until AWS came along and let everyone just brute force mapreduce everything using short-term rental VMs.

JITs were mostly untenable until memory was so much slower than CPU that compiling was not the expensive part. Software-based network packet processing on a CPU was slower than custom silicon until generic CPUs got fast enough relative to RAM. And so on.

The Innovator's Dilemma is the book that first coined the term "disruptive innovation." Nowadays, startups talk about disrupting this and disrupting that. "Disruption" is an exciting word, everybody wants to do it! The word disruption has lost most of its meaning at this point; it's a joke as often as a serious claim.

But in the book, it had a meaning. There are two kinds of innovations: sustaining and disruptive. Sustaining is the kind that big companies are great at. If you want to make the fastest x86 processor, nobody does it better than Intel (with AMD occasionally nipping at their heels). Intel has every incentive to keep making their x86 processors better. They also charge the highest margins, which means the greatest profits, which means the most money available to pour into more sustaining innovation. There is no dilemma; they dump money and engineers and time into that, and they mostly deliver, and it pays off.

A "disruptive" innovation was meant to refer to specifically the kind you see in that plot up above: the kind where an entirely new thing sucks for a very long time, and then suddenly and instantly blows you away. This is the kind that creates the dilemma.

If you're a startup and you think you have a truly disruptive innovation, then that's great news for you. It's a perfect answer to that awkward investor question, "What if [big company] decides to do this too?" because the honest truth is "their own politics will tear that initiative apart from the inside."

The trick is to determine whether you actually have one of these exact "disruption" things. They're rare. And as an early startup, you don't yet have a historical plot like the one above that makes it clear; you have to convince yourself that you'll realistically be able to improve your thing faster than the incumbent can improve theirs, over a long period of time.

Or, if your innovation only depends on an existing trend - like in the software-based packet processing example above - then you can try to time it so that your software product is ready to mature at the same time as the hardware trend crosses over.

In conclusion: watch out for systems design. It's the sort of thing that can make you massively succeed or completely fail, independent of how well you write code or run your company, and that's scary. Sometimes you need some boxes and arrows.

Posted Sun Dec 27 13:34:08 2020 Tags:

[Screenshot: "A good bit of work to do."]
I have been a fan of code coverage for a long time. When combined with (unit) testing, it indicates which parts of your code have been run and therefore tested. Some 15 years ago, when I worked on making the Chemistry Development Kit code base more stable, that meant various things: modularization, documentation, (unit) testing. I explored the options in Java. I even extended PMD with CDK-specific unit tests. And my StackOverflow question on JUnit test dependencies still gives me karma points :)

Fast forward to now. Routinely building software has become quite commonplace, as has unit testing. The tools to support this have changed the field. And tools come and go. Travis-CI will become rare for open science projects, but where GitHub replaced SourceForge, GitHub Actions has stepped in.

I submitted a manuscript to the Journal of Open Source Software, to learn from their submission system (which is open and just plain awesome). One reviewer urged me to test the test coverage of my code and gave me a pointer to JaCoCo and codecov.io. I am not sure if the CDK used JaCoCo in the past too, but getting all the info on a website was not trivial, though we got that done; Rajarshi may remember that. But with continuous building and codecov.io it is automatically available on a website. With every commit. Cool!

However, autumn had already started and I had plenty of project work to finish. But it is the holidays now, and I could start working on the reviewer comments. It turned out the pointers were enough, and I got codecov.io working for Bacting. Of course, not being covered by the test suite does not mean the code is not tested at all: I use Bacting daily, and this use will only grow in the coming year.
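
For anyone who wants to replicate this, here is a minimal sketch of a GitHub Actions workflow (file and job names are mine, and it assumes the jacoco-maven-plugin is already configured in the pom.xml so that the build writes an XML coverage report):

# .github/workflows/coverage.yml (illustrative sketch)
name: coverage
on: [push, pull_request]
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 11
      # run the test suite; JaCoCo writes target/site/jacoco/jacoco.xml
      - run: mvn clean verify
      # upload the coverage report to codecov.io
      - uses: codecov/codecov-action@v1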

That brings me to another reviewer question: how much of the Bioclipse 2 API does Bacting support? Now, that question is a bit tricky. There is the Bioclipse 2.6 release (doi:10.1186/1471-2105-10-397), but there were a few dozen plugins with many more Bioclipse managers. So, I checked which managers I had locally checked out and created a GitHub Project for this with various columns. And for each manager I have (or want) in Bacting, I created an issue with checkboxes, one for each method to implement. And that looks like this:


I really hope the Maastricht University GitLab will become more visible to users in the next year.

Posted Sat Dec 26 11:20:00 2020 Tags:
Review of "An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development" by Knauff and Nejasmic. #latex #word #efficiency #metrics
Posted Sun Dec 6 16:00:16 2020 Tags:

In Part 1 I've shown you how to create your own distribution image using the freedesktop.org CI templates. In Part 2, I've shown you how to truly build nested images. In this part, I'll talk about the ci-fairy tool that is part of the same ci-templates repository.

When you're building a CI pipeline, there are some tasks that most projects need in some way or another. The ci-fairy tool is a grab-bag of solutions for these. Some of those solutions are for a pipeline itself, others are for running locally. So let's go through the various commands available.

Using ci-fairy in a pipeline

It's as simple as including the template in your .gitlab-ci.yml file.


include:
- 'https://gitlab.freedesktop.org/freedesktop/ci-templates/-/raw/master/templates/ci-fairy.yml'
Of course, if you want to track a specific sha instead of following master, just sub that sha there. freedesktop.org projects can include ci-fairy like this:

include:
- project: 'freedesktop/ci-templates'
ref: master
file: '/templates/ci-fairy.yml'
Once that's done, you have access to a .fdo.ci-fairy job template that you can extend from. This will download an image from quay.io that is capable of git, python, bash and, obviously, ci-fairy. This image is fixed and referenced by a unique sha, so even though we keep working on ci-fairy upstream you should never see regressions; updating requires you to explicitly bump the sha of the included ci-fairy template. Obviously, if you're following master like above, you'll always get the latest.

Due to how the ci-templates work, it's good to set the FDO_UPSTREAM_REPO variable to the upstream project name. This means ci-fairy will be able to find the equivalent origin/master branch where that's not available in the merge request. Note that this is not your personal fork but the upstream one, e.g. "freedesktop/ci-templates" if you are working on ci-templates itself.
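
In your .gitlab-ci.yml this is a single top-level variable; for example, when working on ci-templates itself:

variables:
  FDO_UPSTREAM_REPO: 'freedesktop/ci-templates'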

Checking commit messages

ci-fairy has a command to check commits for a few basic expectations in commit messages. This currently includes things like enforcing an 80-char limit on the subject line, that there is an empty line after the subject line, that no fixup or squash commits are in the history, etc. If you have complex requirements you'll need to write your own checks, but for most projects this job ensures that there are no obvious errors in the git commit log:


check-commit:
extends:
- .fdo.ci-fairy
script:
- ci-fairy check-commits --signed-off-by
except:
- master@upstream/project
Since you don't ever want this to fail on an already merged commit, exclude this job on the master branch of the upstream project - the MRs should've caught those errors already anyway.

Checking merge requests

To rebase a contributor's merge request, the contributor must tick the checkbox to Allow commits from members who can merge to the target branch. The default value is off, which is frustrating (gitlab is working on it though) and causes unnecessary delays in processing merge requests. ci-fairy has a command to check for this value on an MR and fail if it isn't set - contributors ideally pay attention to the pipeline and fix this accordingly.


check-merge-request:
extends:
- .fdo.ci-fairy
script:
- ci-fairy check-merge-request --require-allow-collaboration
allow_failure: true
As a tip: run this job towards the end of the pipeline to give collaborators a chance to file an MR before this job fails.

Using ci-fairy locally

The two examples above are the most useful ones for CI pipelines, but ci-fairy also has some useful local commands. For that you'll have to install it, but that's as simple as


$ pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates
A big focus of ci-fairy's local commands is that they should, usually, be able to work without any specific configuration if you run them in the repository itself.

Linting

Just hacked on the CI config?


$ ci-fairy lint
and done, you get the same error back that the online linter for your project would return.

Pipeline checks

Just pushed to the repo?


$ ci-fairy wait-for-pipeline
Pipeline https://gitlab.freedesktop.org/username/project/-/pipelines/238586
status: success | 7/7 | created: 0 | pending: 0 | running: 0 | failed: 0 | success: 7 ....
The command is self-explanatory, I think.

Summary

There are a few other parts to ci-fairy, including templating and even minio handling. I recommend looking at e.g. the libinput CI pipeline, which uses much of ci-fairy's functionality. And check out the online documentation for ci-fairy - who knows, there may be something useful in there for you.

The useful contribution of ci-fairy is primarily that it tries to detect the settings for each project automatically, regardless of whether it's run inside a MR pipeline or just as part of a normal pipeline. So the same commands will work without custom configuration on a per-project basis. And for many things it works without API tokens, so the setup costs are just the pip install.

If you have recurring jobs, let us know, we're always looking to add more useful functionality to this little tool.

Posted Fri Dec 4 04:00:00 2020 Tags:

Welcome to this week's edition of "building a startup in 2020," in which all your meetings are suddenly remote, and you probably weren't prepared for it.

I know I wasn't. We started a "fully remote" company back in 2019, but that was supposed to mean we still got together in person every month or two to do strategic planning, share meals, and resolve any accumulated conflicts. Well, not this year. Instead, we had to learn to have better remote meetings, all while building our whole team from scratch.

You can find endless articles on the Internet about how to have a good meeting. So many articles, in fact, that I can no longer find the ones that I liked the best, so that I can quote from them and give them credit :( Sorry! I'll have to paraphrase. Please send links if you think some of this sounds familiar.

Here are a few meeting tips I've accumulated over the years, with some additions from the last few months.

The most efficient meeting is no meeting.

Let's start with what should be obvious by now: sometimes you don't need a meeting at all. For example, status updates are almost always better delivered in some written medium (like email) that can be retained for future reference, and skimmed (or ignored) faster than people can speak.

Alas, skipping meetings doesn't solve every problem, or else remote work would be a lot easier for everyone.

Remember: every minute costs multiple person-minutes.

Imagine a meeting where a manager is presenting to 9 people. That costs 1+9 person-minutes per minute. A single one-hour meeting costs you 10 hours of employee salaries! With modern tech employees, that adds up really, really fast. You need to spend it wisely.

Now, assuming everyone needed to see that presentation - which is rarely the case - then one big meeting is a pretty efficient way to go. You can inform N people in O(N) person-minutes. That's pretty close to optimal. Of course, in the purest form of a presentation meeting, you could have just recorded the presentation in advance and let some of the people watch it at 2x speed, saving precious minutes. But that doesn't work in the typical case where you allow some Q&A, either during or afterwards.

As a meeting trends away from a presentation and toward group discussion, efficiency drops fast. Almost always, a discussion will be dominated by 2-3 people, leaving the others to sit and get bored. We all know what to do here, even though we don't always do it: split the discussion into a separate, much smaller meeting with just the people who care, and have them provide a text status report back when it's done.

The text status report is really important, even if you think nobody cares about the result of the meeting. That's because without the status report, nobody can be quite sure it's safe to skip the meeting. If they can read text notes later, it gives them the confidence to not show up. That typically saves far more cost than the cost of writing down the notes. (To say nothing of the cost of forgetting the decision and having to meet again later.)

Around here we take seriously copious meeting notes. It's a bit ridiculous. But it pays off frequently.

In big meetings, some people don't talk.

A related problem with big meetings is the people who don't get to talk even though they want to, or who always get interrupted or talked over. (There was a really great article about this a few months ago, but I can't find it, alas.)

Historically this has been much worse when your meeting has remote attendees, because it turns out latency blows up our social cues completely. Nobody quite knows how long to wait before speaking, but one thing's for sure: when some of the team is sitting in one room (~zero latency), and some are remote (typically hundreds of milliseconds of latency), the remote people almost never get to talk.

It's not just latency, either; remote users typically can't hear as well, and aren't heard as well, and people don't notice their gestures and body language.

Unexpectedly, the 2020 work-from-home trend has helped remote workers, by eliminating the central room with a bunch of zero-latency people. It levels the playing field, although some people invariably still have worse equipment or worse latency.

That helps the fairness problem, but it doesn't solve personality and etiquette problems. Even if everyone's all in the same room, some people are naturally tuned to wait longer before speaking, and some wait for less time, and the latter almost always end up dominating the conversation. The only ways I know to deal with this are a) have smaller meetings, and b) have a facilitator or moderator who decides who gets to talk.

You can get really complicated about meeting facilitation. (See also: that article I can't find, sigh.) Some conferencing tools nowadays have a "raise hand" button, or they count, for each user, the total amount of time they've spent talking, so people can self regulate. Unfortunately, these fancy features are not well correlated with the other, probably more important, conferencing software features like "not crashing" or "minimizing latency" or "having a phone dial-in just in case someone's network flakes out."

It turns out that in almost all tools, you can use the "mute" feature (which everyone has) to substitute for a "raise hand" feature (which not everyone has, and which often works badly even when they do). Have everyone go on mute, and then unmuting yourself is like raising your hand. The facilitator can call on each unmuted person in turn.

All these tricks sound like good ideas, but they haven't caught on for us. Everyone constantly muting or raising their hand, or having to wait for a facilitator before they can speak, kills the flow of a conversation and makes it feel a bit too much like Robert's Rules of Order. Of course, that's easy for me to say; I'm one of the people who usually ends up speaking either way.

When I'm in a meeting, I try to pay attention to everyone on the screen to see if someone looks like they want to talk, but is getting talked over. But that's obviously not a perfect solution given my human failings and the likelihood that some people might want to speak but don't make it very obvious.

Compared to all that fancy technique, much more effective has been just to make meetings smaller. With 3-4 people in a meeting, all this matters a lot less. It's easy to see if someone isn't participating or if they have something to say. And with a 2-person meeting, it's downright trivial. We'll get to that in a bit.

Amazon-style proposal review meetings

You can use a different technique for a meeting about a complicated product or engineering proposal. The two variants I know are the supposed "2-page review" or "6-pager review" meetings at Amazon (although I've never worked at Amazon), and the "design review" meetings I saw a few times back at a different bigco when I worked there.

The basic technique is:

  • Write the doc in advance.
  • Distribute the doc to everyone interested.
  • People can comment and discuss in the document before the meeting.
  • The meeting owner walks through any unresolved comments in the document during the meeting, while someone else takes notes.

In the Amazon variant of this, "in advance" might be during the meeting itself, when people apparently sit there for a few minutes reading the doc in front of everyone else. I haven't tried that; it sounds awkward. But maybe it works.

In the variant I've done, we talk about only the document comments, and it seems to work pretty well. First, it avoids the tendency to just walk through a complicated doc in front of everyone, which is very inefficient since they've already read it. Second, it makes sure that everyone who had an unresolved opinion - and thus an unresolved comment in the doc - gets their turn to speak, which helps the moderation/etiquette problem.

So this style is functional. You need to enforce that the document is delivered far enough in advance, and that everyone reads it well in advance, so there can be vigorous discussion in the text ahead of time.

You might wonder, what's the point of the meeting, if you're going to put all the comments in text form anyway?

In my experience, the biggest advantage of the meeting is simply the deadline. We tried sending out design docs without a design review meeting, and people would never finish reading the doc, so the author never knew it was done. By scheduling a meeting, everyone knows the time limit for reviews, so they actually read the doc by then. And of course, if there are any really controversial points, sometimes it's easier to resolve them in a meeting.

Conversely, a design review without an already-commented doc tends to float in the ether, go overtime, and not result in a decision. It also means fewer people can skip the meeting; when people have read and commented on the doc in advance, many of the comments can be entirely resolved in advance. Only people with outstanding issues need to attend the review.

"Management by walking around"

An underappreciated part of big office culture is the impromptu "meetings" that happen between people sitting near each other, or running into each other in the mini-kitchen. A very particular variant of these impromptu meetings is "management by walking around," as in, a manager or executive wanders the floor of the building and starts random conversations of the form "how's it going?" and "what are you up to this week?" and "is customer X still having problems?"

At first glance, this "walking around" style seems very inefficient and incomplete. A big executive at a big company can't ever talk to everyone. The people they talk to aren't prepared because it's not a "real" meeting. It doesn't follow the hierarchy, so you have inefficiently duplicated communication channels.

But it works better than you'd think! The reasons are laid out in High Output Management by Andy Grove (of Intel fame), which I reviewed last year. The essential insight in that book is that these meetings should be used, not for the manager to "manage" employees, but for the manager to get a random selection of direct, unfiltered feedback.

As the story goes, in a company full of knowledge workers, the people at the bottom of the hierarchy tend to know the most about whatever problem they're working on. The managers and executives tend to know far fewer details, and so are generally ill-equipped to make decisions or give advice. Plus, the executive simply doesn't have time to give advice to everyone, so if walking around was part of the advice-giving process, it would be an incomplete, unfair, and unhelpful disaster.

On the other hand, managers and executives are supposed to be the keepers of company values (see my earlier review) and bigger context. By collecting a random sample of inputs from individual contributors on the floor, they can bypass the traditional hierarchical filtering mechanism (which tends to turn all news into good news after only one or two levels of manager), thus getting a clearer idea of how the real world is going, which can help refine the strategy.

I still think it's a great book. You should read it.

But one little problem: we're in a pandemic. There's no building, no floor, and no walking. WWAGD (What Would Andy Grove Do)?

Well, I don't know. But what I do is...

Schedule way too many 1:1 meetings

Here's something I started just a couple of months ago, which has had, I think, a really disproportionate impact: I started skipping most larger meetings, and having 1:1s with everyone in the company instead.

Now, "everyone in the company" is a luxury I won't be able to keep up forever, as we grow. Right now, I try to schedule about an hour every two weeks with more senior people, and about 30 minutes every week with more junior people (like co-op students). Sometimes these meetings get jiggled around or grow or shrink a bit, but it averages about 30 minutes per person per week, and this adds up pretty fast, especially if I also want to do other work. Hypothetically.

I don't know if there are articles about scheduling 1:1s, but bi-weekly 1:1 meetings have a separate problem: the total mess that ensues if you skip them. Then it turns out you're only meeting some people once a month, which seems too rare. I haven't really figured this out, other than to completely remangle my schedule if I ever need to take a vacation or sick day, alas. Something about this scheme is going to need to improve.

As we grow, I think I can still maintain a "meet with everyone" 1:1 schedule, it just might need to get more and more complex, where I meet some people more often and some people less often, to give a weighted "random" sample across the whole team, over a longer period of time. We'll see.

Anyway, the most important part of these 1:1s is to do them Andy Grove style: they're for collecting feedback much more than for "managing." The feedback then turns into general strategy and plans that can be discussed and passed around more widely.

Formalizing informal donut chats

The above was for me. I'm the CEO, so I want to make sure to talk to everyone. Someday, eventually we're going to get all organized and have a management hierarchy or something, I guess, and then presumably other executives or managers will want to do something similar in their own orgs and sub-orgs.

Even sooner, though, we obviously can't expect all communications to pass through 1:1s with the CEO. Therefore, shockingly, other people might need to talk directly to each other too. How does that work? Does everyone need to talk to everyone else? O(N^2) complexity?

Well, maybe. Probably not. I don't know. For now, we're using a Slack tool called Donut which, honestly, is kinda buggy and annoying, but it's the best we have. Its job is simply to randomly pair each person with one other person, once a week, for a 1:1, ostensibly to eat virtual donuts together. I'm told it is better than nothing. I opted out since I already have 1:1s with everyone, thank goodness, because the app was driving me nuts.

What doesn't work well at all, unfortunately, is just expecting people to have 1:1 meetings naturally when an issue comes up. Even if they're working on the same stuff. It's a very hard habit to get into, especially when you have a bunch of introverted tech industry types. Explicitly prompting people to have 1:1 meetings with each other works better.

(Plus, there's various advice out there that says regularly scheduled 1:1s are great for finding problems that nobody would ever schedule a meeting for, even if you do work in the same office. "We have to use up this 30-minute meeting, no matter what" is miraculous for surfacing small conflicts before they turn into large ones.)

"Pairing" meetings

As a slight variation on the donut, some of my co-workers have invented a more work-oriented style of random crossover meeting where instead of just eating virtual donuts, they share a screen and do pair programming (or some other part of their regular work) with the randomly selected person for an hour or two. I'm told this has been pretty educational and fun, making things feel a bit more collaborative like it might feel in an office.

Do you have any remote meeting tips?

Posted Mon Nov 23 10:14:12 2020 Tags:

Since the AppStore launched, developers have complained that the review process is too strict. Applications are mostly rejected for not meeting requirements, not having enough functionality, or circumventing Apple’s business model.

Yet, the AppStore reviews are too lax and they should be much stricter.

Let me explain why I think so, what I believe some new rules need to be, and how the AppStore can be improved.

Prioritizing the Needs of the Many

Apple states that they have 28 million registered developers, but I believe that only a fraction of those are actively developing applications on a daily basis. My guess is that the real number is closer to 5 million developers.

I understand deeply why developers are frustrated with the AppStore review process - I have suffered my fair share of AppStore rejections: both by missing simple issues and by trying to push the limits of what was allowed. I founded Xamarin, a company that built tools for mobile developers, and had a chance to become intimately familiar with the rejections that our own customers got.

Yet, there are 1.5 billion active Apple devices, devices that people trust to keep their data secure and private. The overriding concern should be those 1.5 billion active users, and not the 0.33% of that number that 5 million developers represent (or 1.86%, if you feel generous enough to count all 28 million).

People have placed their trust in Apple and Google to keep their devices safe. I wrote about this previously. While it is an industry sport to make fun of Google, I respect the work that Google puts into securing and managing my data - so much so that I have trusted them with my email, photographs and documents for more than 15 years.

I trust both companies because of their public track record, and because of conversations I have had with friends working at both companies about their processes, their practices and the principles they care about (keeping up with information security is a part-time hobby of mine).

Today’s AppStore Policies are Insufficient

AppStore policies, and their automated and human reviews, have helped nurture and curate the applications that are available. But with a target market as large and rich as iOS and Android, these ecosystems have become a juicy target for scammers, swindlers, gangsters, nation states and hackers.

While some developers are upset with the AppStore rejections, profiteers have figured out that they can make a fortune while abiding by the existing rules. These rules allow behaviors that are either in poor taste or explicitly manipulate the psyche of the user.

First, let me share my perspective as a parent.

I have kids aged 10, 7 and 4. My eldest has had access to an iPad since she was a year old, and I have experienced first hand how angering some applications on the AppStore can be to a small human.

It breaks my heart every time they burst out crying because something in these virtual worlds was designed to nag them, or is frustrating or incomprehensible to them. We sometimes teach them how to deal with those problems, but this is not always possible. Try explaining to a 3-year-old why they have to watch a 30-second ad in the middle of a dinosaur game to continue playing, or teach them that at arbitrary points during the game, tapping on the screen will not dismiss an ad, but will instead take them to download another app or direct them to another web site.

This is infuriating.

Another problem happens when they play games that are defective by design. By this I mean games that have had functionality or capabilities deliberately removed, to be restored only by purchasing virtual items (coins, bucks, costumes, pets and so on).

I get to watch my kids display a full spectrum of negative experiences when they deal with these games.

We now have a rule at home “No free games or games with In-App Purchases”. While this works for “Can I get a new game?”, it does not work for the existing games that they play, and those that they play with their friends.

Like any good rule, there are exceptions, and I have allowed the kids to buy a handful of games with in-app purchases from reputable sources. They have to pay for those from their allowance.

These dark patterns are not limited to applications for kids; read the end of this post for a list of negative scenarios my followers encountered that will ring familiar.

Closing the AppStore Loopholes

Applications using these practices should be banned:

  • Those that use Dark Patterns to get users to purchase applications or subscriptions: these are things like a “Free one week trial” that then starts charging a high weekly fee. Even though this activity is forbidden, some apps that do it still get published.

  • Defective-by-design: there are too many games out there that cannot be enjoyed unless you spend money inside the application. They get the kids hooked, and then I have to deal with whiny 4-, 7- and 10-year-olds begging to spend their money on virtual currencies to level up.

  • Apps loaded with ads: I understand that using ads to monetize your application is one way of supporting its development, but there needs to be a threshold on how many ads can be shown on screen and how often, as these apps can be incredibly frustrating to use. And not all apps offer a “pay to remove the ads” option - I suspect because pay-to-remove is not as profitable as showing ads non-stop.

  • Watch an ad to continue: another nasty problem is defective-by-design games and applications that, rather than requesting money directly, steer kids towards watching ads (sometimes “watch an ad for 30 seconds”) to get or achieve something. They are driving ad revenue by forcing kids to watch garbage.

  • Install chains: there are networks of ill-behaved applications that trick kids into installing other applications from the same network. It starts with an innocent-looking app, and before the day is over, you have 30 new scammy apps installed on your device.

  • Notification Abuse: these are applications that send advertisements or promotional offers to your device - discount offers, timed offers, product offerings. It used to be that Apple banned these practices in their App Review guidelines, but I never saw the ban enforced, and resorted to turning off notifications. These days these promotions are allowed. I would like them to be banned, I would like the ability to report them as spam, and I would like infringers to have their notification rights suspended.

  • Ban on Selling your Data to Third Parties: ban applications that sell your data to third parties. Sometimes the data collection is explicit (for example, using the Facebook app), but sometimes it happens unknowingly: an application uses a third-party SDK that does its dirty work behind the scenes. Third-party SDKs should be registered with Apple, and applications should disclose which third-party SDKs they use. If one of those SDKs is found to abuse the rules or steal data, every application that relies on it could be remotely deactivated. While this was recently in the news, it is by no means a new practice; it has been happening for years.

One grayer area is applications that are designed to be addictive to increase engagement (some games, Facebook and Twitter), as they are a major problem for our psyches and for our society. Sadly, this is likely beyond the scope of what the AppStore review team can do. One option is to pass legislation that would cover it (Shutdown Laws are one example).

Changes in the AppStore UI

It is not just apps for children that have this problem. I find myself thinking twice before downloading applications with "In App Purchases". That label has become a red flag: one that sends the message "scammy behavior ahead".

I would rather pay for an app up front than download a free app with In-App Purchases. This is unfair to the many creators who can only monetize their work via In-App Purchases.

This could be addressed either by offering a free trial period for the app (managed by the AppStore), or by listing explicitly that there is an “Unlock by paying” option, to distinguish these from “Offers In-App Purchases”, which is a catch-all expression covering legitimate, scammy and nasty sales alike.

My list of wishes:

  • Offer Trial Periods for applications: this would send a clear message that this is a paid application, but one you can try out first. And by offering this directly through the AppStore, developers would not have to deal with the In-App Purchase workflow, bringing joy to developers and users alike.

  • Explicit Labels: Rather than using the catch-all “Offers In-App Purchases”, show the nature of the purchase: “Unlock Features by Paying”, “Offers Subscriptions”, “Buy virtual services” and “Sells virtual coins/items”

  • Better Filtering: today, it is not possible to filter search results to paid apps only (which tend to be less slimy than those with In-App Purchases)

  • Disclose the class of In-App Purchases available on each app that offers them, up-front: I should not have to scroll and hunt for the information, then mentally decode each item description, to understand what a purchase actually buys

  • Report Abuse: Human reviewers and automated reviews are not able to spot every violation of the existing rules or my proposed additional rules. Users should be able to report applications that break the rules and developers should be aware that their application can be removed from circulation for breaking the rules or scammy behavior.

Some Bad Practices

Check some of the bad practices in this compilation

Posted Thu Sep 24 21:51:27 2020 Tags:

Some bad app patterns as some followers described them

You can read more in the replies to my request a few weeks ago:

Posted Thu Sep 24 02:33:00 2020 Tags:

This is the continuation from these posts: part 1, part 2, part 3 and part 4.

In the posts linked above, I describe how it's possible to have custom keyboard layouts in $HOME or /etc/xkb that will get picked up by libxkbcommon. This only works for the Wayland stack; the X stack doesn't use libxkbcommon. In this post I'll explain why it's unlikely this will ever happen in X.
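
As a quick reminder of what those earlier parts enable: with a recent libxkbcommon, a user-specific layout is just a symbols file dropped into libxkbcommon's search path, selectable like any system layout. A minimal sketch (the "banana" name and the remapped key are made up for illustration):

$XDG_CONFIG_HOME/xkb/symbols/banana:

// a US layout, except the key in the qwerty Q position types b/B
partial alphanumeric_keys
xkb_symbols "basic" {
    include "us(basic)"
    key <AD01> { [ b, B ] };
};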

As described in the previous posts, users configure their keyboard with rules, models, layouts, variants and options (RMLVO). What XKB uses internally, though, are keycodes, compat, geometry, symbols and types (KcCGST) [1].

There are, effectively, two KcCGST keymap compilers: libxkbcommon and xkbcomp. libxkbcommon can go from RMLVO all the way to a full keymap; xkbcomp relies on other tools (e.g. setxkbmap) which in turn use a utility library called libxkbfile to parse rules files. The X server has a copy of the libxkbfile code. It doesn't use libxkbfile itself, but it relies on the header files provided by it for some structs.

Wayland's keyboard configuration works like this:

  • the compositor decides on the RMLVO keyboard layout, through an out-of-band channel (e.g. gsettings, weston.ini, etc.)
  • the compositor invokes libxkbcommon to generate a KcCGST keymap and passes that full keymap to the client
  • the client compiles that keymap with libxkbcommon and feeds any key events into libxkbcommon's state tracker to get the right keysyms
The advantage we have here is that only the full keymap is passed between entities. Changing how that keymap is generated does not affect the client. This, coincidentally [2], is also how Xwayland gets the keymap passed to it and why Xwayland works with user-specific layouts.
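
To make that flow concrete, here is a minimal sketch of both halves using the libxkbcommon API (error handling omitted; the fr/azerty RMLVO values mirror the setxkbmap example below):

#include <stdio.h>
#include <stdlib.h>
#include <xkbcommon/xkbcommon.h>

int main(void) {
    struct xkb_context *ctx = xkb_context_new(XKB_CONTEXT_NO_FLAGS);

    /* Compositor side: RMLVO in, full keymap out. */
    struct xkb_rule_names rmlvo = {
        .rules = "evdev", .model = "pc105",
        .layout = "fr", .variant = "azerty", .options = "ctrl:nocaps",
    };
    struct xkb_keymap *km =
        xkb_keymap_new_from_names(ctx, &rmlvo, XKB_KEYMAP_COMPILE_NO_FLAGS);
    char *keymap_string =
        xkb_keymap_get_as_string(km, XKB_KEYMAP_FORMAT_TEXT_V1);
    /* ... the compositor sends keymap_string to the client ... */

    /* Client side: full keymap in, RMLVO never seen. */
    struct xkb_keymap *client_km = xkb_keymap_new_from_string(ctx,
        keymap_string, XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
    struct xkb_state *state = xkb_state_new(client_km);

    /* Feed a key event in: keycode is evdev code + 8, so KEY_A (30) is 38. */
    xkb_state_update_key(state, 38, XKB_KEY_DOWN);
    char buf[64];
    xkb_keysym_get_name(xkb_state_key_get_one_sym(state, 38), buf, sizeof(buf));
    printf("%s\n", buf); /* prints "q" - azerty swaps A and Q */

    xkb_state_unref(state);
    xkb_keymap_unref(client_km);
    free(keymap_string);
    xkb_keymap_unref(km);
    xkb_context_unref(ctx);
    return 0;
}

The point to note is that the client half never sees RMLVO - only the compiled keymap string.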

X works differently. Notably, KcCGST can come in two forms, the partial form specifying names only and the full keymap. The partial form looks like this:


$ setxkbmap -print -layout fr -variant azerty -option ctrl:nocaps
xkb_keymap {
    xkb_keycodes  { include "evdev+aliases(azerty)" };
    xkb_types     { include "complete" };
    xkb_compat    { include "complete" };
    xkb_symbols   { include "pc+fr(azerty)+inet(evdev)+ctrl(nocaps)" };
    xkb_geometry  { include "pc(pc105)" };
};
This defines the component names but not the actual keymap, punting that to the next part in the stack. This will turn out to be the Achilles heel. Keymap handling in the server has two distinct approaches:
  • During keyboard device init, the input driver passes RMLVO to the server, based on defaults or xorg.conf options
  • The server has its own rules file parser and creates the KcCGST component names (as above)
  • The server forks off xkbcomp and passes the component names to stdin
  • xkbcomp generates a keymap based on the components and writes it out as XKM file format
  • the server reads in the XKM format and updates its internal structs
This has been the approach for decades. To give you an indication of how fast-moving this part of the server is: XKM caching was the latest feature added... in 2009.

Driver initialisation is nice, but barely used these days. You set your keyboard layout in e.g. GNOME or KDE and that will apply it in the running session. Or run setxkbmap, for those with a higher affinity to neckbeards. setxkbmap works like this:

  • setxkbmap parses the rules file to convert RMLVO to KcCGST component names
  • setxkbmap calls XkbGetKeyboardByName and hands those component names to the server
  • The server forks off xkbcomp and passes the component names to stdin
  • xkbcomp generates a keymap based on the components and writes it out as XKM file format
  • the server reads in the XKM format and updates its internal structs
Notably, the RMLVO to KcCGST conversion is done on the client side, not the server side. And the only way to send a keymap to the server is that XkbGetKeyboardByName request - which only takes KcCGST; you can't even pass it a full keymap. This is also a long-standing potential issue with XKB: if your client tools use different XKB data files than the server, you don't get the keymap you expected.
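
For illustration, here is roughly what that request looks like from the client side - a minimal sketch with no error handling, reusing the component names from the setxkbmap output above:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/XKBlib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* All this request can carry are the KcCGST component names -
       there is no way to hand over an already-compiled keymap. */
    XkbComponentNamesRec names = {
        .keycodes = "evdev+aliases(azerty)",
        .types    = "complete",
        .compat   = "complete",
        .symbols  = "pc+fr(azerty)+inet(evdev)+ctrl(nocaps)",
        .geometry = "pc(pc105)",
    };

    /* The server forks off xkbcomp, compiles the keymap from its own
       data files and, because load is True, applies it to the device. */
    XkbDescPtr xkb = XkbGetKeyboardByName(dpy, XkbUseCoreKbd, &names,
                                          XkbGBN_AllComponentsMask,
                                          XkbGBN_AllComponentsMask, True);
    if (!xkb)
        fprintf(stderr, "server-side keymap compilation failed\n");

    XCloseDisplay(dpy);
    return 0;
}

Because the compilation happens server-side from the server's own data files, the client's local XKB files never enter the picture - which is exactly the mismatch described above.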

Other parts of the stack do basically the same as setxkbmap, which is just a thin wrapper around libxkbfile anyway.

Now, you can use xkbcomp on the client side to generate a keymap, but you can't hand that keymap as-is to the server. What xkbcomp can do (using libxkbfile) is update the server's XKB state piece by piece (XkbSetMap, XkbSetCompatMap, XkbSetNames, etc.). But at that point you're asking the server to knowingly compile a wrong keymap first, and then patching up parts of it afterwards.

So, realistically, the only way to get user-specific XKB layouts into the X server would require updating libxkbfile to provide the same behavior as libxkbcommon, updating the server to actually use libxkbfile instead of its own copy, and updating xkbcomp to support the changes in part 2 and part 3. All while ensuring no regressions in code that's decades old, barely maintained, has no tests, and is, let's be honest, not particularly pretty to look at. User-specific XKB layouts are somewhat of a niche case to begin with, so I don't expect anyone to ever volunteer to do this work [3], much less find the resources to review and merge that code. The X server is unlikely to see another real release and this is definitely not something you want to sneak in in a minor update.

The other option would be to extend XKB-the-protocol with a request that accepts a full keymap, so the server wouldn't have to compile one from component names. Given the inertia involved, and that the server won't see more full releases, this is not going to happen.

So as a summary: if you want custom keymaps on your machine, switch to Wayland (and/or fix any remaining issues preventing you from doing so) instead of hoping this will ever work on X. xmodmap will remain your only solution for X.

[1] Geometry is so pointless that libxkbcommon doesn't even implement this. It is a complex format to allow rendering a picture of your keyboard but it'd be a per-model thing and with evdev everyone is using the same model, so ...
[2] totally not coincidental btw
[3] libxkbcommon has been around for a decade now and no-one has volunteered to do this in the years since, so...

Posted Fri Sep 4 00:39:00 2020 Tags: