This feed omits posts by jwz. Just 'cause.

UK censorship: jail sentence for possessing a copy of The Anarchist Cookbook.

Nazism is despicable, and The Anarchist Cookbook explains how to make some explosives — but merely possessing a copy of a book must never be criminalized. That is just one small step away from thoughtcrime.

Posted Fri Jan 21 18:05:25 2022 Tags:

*[Antibiotic-resistant bacteria] now a leading cause of death worldwide, study finds.*

Scientists have warned us for years that we need to put an end to the overuse of antibiotics, or they will cease to work, and that will kill lots of people. The reason we did not do so is that it would have reduced the profits of Big Ag. It's not the farm workers who get more pay, nor the owners of small farms that sell their crops to Big Ag. It's the giant distributor/processors that get the money.

Posted Fri Jan 21 18:05:25 2022 Tags:

*Woman sentenced to death in Pakistan over "blasphemous" WhatsApp activity.*

Unusually, the target this time is a Muslim who claims to be religious. But morally that makes no difference. It is vicious to punish anyone for "blasphemy" against any religion, and it demonstrates the danger of giving religion any more power than any other opinion.

Posted Fri Jan 21 18:05:25 2022 Tags:

Teenagers in Kutztown, Pennsylvania, have responded to Republican book-banners by starting a book club for reading banned books.

Posted Fri Jan 21 18:05:25 2022 Tags:

*More than a thousand crows roost in Sunnyvale every night, ruffling locals’ feathers with caws and droppings.* At least they do it with good caws.

Can Buffy slay crows?

Posted Fri Jan 21 18:05:25 2022 Tags:

*Supreme Court rejects [the bullshitter's] bid to shield [official government] documents from January 6 panel.*

Posted Fri Jan 21 18:05:25 2022 Tags:

*New York attorney general alleges Trump firm misled banks and tax officials.*

Questioning the wrecker himself for a civil suit might be a mistake, since it would require giving him immunity from prosecution, and that would be most unfortunate.

Posted Fri Jan 21 18:05:25 2022 Tags:

*More than 100 members of the global super-rich called on Wednesday for governments around the world to "tax us now" to help pay for the pandemic response and tackle the gulf between rich and poor.*

Posted Fri Jan 21 18:05:25 2022 Tags:

A study from the MIT business school found that America's "toxic corporate culture" is one of the main factors that motivated many workers to quit their jobs last year.

Posted Fri Jan 21 18:05:25 2022 Tags:

US citizens: call on Congress to pass the Postal Service Reform Act.

Posted Fri Jan 21 18:05:25 2022 Tags:

source: Wikimedia.

It has already been two months since my last blog post. The reason for this was the log4j security risk. Since much of our software actually is written in Java, the question was indeed whether the CDK (doi:10.1186/s13321-017-0220-4), BridgeDb (doi:10.1186/1471-2105-11-5), Bacting (doi:10.21105/joss.02558), etc. were affected.

Basically, log4j is so old that everyone jumped on it: it was just good. Now, practically, the problems were minor. The Chemistry Development Kit dependency on log4j was a build dependency: the CDK still has support for log4j, but the user decides what logging platform to use. This was the result of an abstraction for Bioclipse, allowing CDK log messages to be passed to the Eclipse logger instead of log4j. Still, you want even that build dependency to be updated. CDK 2.7.1 has now been released.
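To illustrate the kind of decoupling described here, a minimal sketch of a library that only ever logs through its own interface, so the application picks the backend at runtime. The names below are hypothetical (the real CDK uses its own `ILoggingTool`/`LoggingToolFactory` API, which differs in detail):

```java
// A minimal sketch of a logging abstraction: the library only ever talks
// to its own interface, and the application decides which backend
// (console, Eclipse logger, log4j, ...) actually receives the messages.
interface LoggingTool {
    void info(String message);
}

// A safe default backend with no third-party dependencies at all.
class ConsoleLoggingTool implements LoggingTool {
    public void info(String message) {
        System.out.println("[INFO] " + message);
    }
}

class LoggingToolFactory {
    private static LoggingTool tool = new ConsoleLoggingTool();
    static void setLoggingTool(LoggingTool t) { tool = t; }
    static LoggingTool getLoggingTool() { return tool; }
}

public class Demo {
    public static void main(String[] args) {
        StringBuilder captured = new StringBuilder();
        // The application swaps in its own backend, as Bioclipse did with
        // the Eclipse logger; the library code itself never changes.
        LoggingToolFactory.setLoggingTool(msg -> captured.append(msg));
        LoggingToolFactory.getLoggingTool().info("CDK message");
        System.out.println(captured); // prints "CDK message"
    }
}
```

The design choice is the point the post makes: the library compiles against a tiny logging API of its own, and which concrete logging implementation runs is the application's decision, which is why log4j could remain a build-time concern only.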

BridgeDb had a similar situation, though some BridgeDb modules do have a runtime dependency, which may have an impact. However, the core did not, and neither did the webservice. But the same applies here: even the build dependency should be at the latest version. BridgeDb 3.0.13 has been released.
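For a Maven-based project like these, forcing even a transitive, build-time log4j dependency onto the patched release is a one-stanza change. A sketch of the idea (2.17.1 was the patched log4j release at the time; check for the current one):

```xml
<!-- pom.xml fragment: force any transitive log4j-core onto the patched
     release, regardless of what upstream dependencies declare. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.17.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```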

Now, if you read up on the Blue Obelisk movement (maybe the 2011 update paper needs an update, doi:10.1186/1758-2946-3-37), then you know all the dependencies between projects. So, besides multiple releases for multiple projects, this also required updates to other packages: additional releases were made for the Blue Obelisk core projects Euclid and CMLXOM, with Euclid 2.0 and CMLXOM 4.0 now out.

On the bright side, many Java software projects generally worked on library updates, Java 17 support, etc. It totally messed up my schedule and what should have been a really relaxed Christmas holiday.

Who paid for this? Mostly myself. Yes, you're welcome.

Posted Fri Jan 14 07:17:00 2022 Tags:

Recently a security hole in a certain open source Java library resulted in a worldwide emergency kerfuffle as, say, 40% of the possibly hundreds of millions of worldwide deployments of this library needed to be updated in a hurry. (The other 60% also needed to be updated in a hurry, but won't be until they facilitate some ransomware, which is pretty normal for these situations.)

I have a 20+ year history of poking fun at Java in this space, and it pains me to stop now. But the truth is: this could have happened to anyone.

What happened was:

  • Someone wrote a library they thought was neat
  • They decided to share it with the world for free
  • Millions of people liked it and used it everywhere
  • Some contributors contributed some good ideas and, in this case, at least one bad idea
  • Out of a sense of stewardship, they decided to maintain backward compatibility with the bad idea
  • The bad idea turned out to have one or more security flaws that affected all the users
  • The stewards responded quickly with a fix

From this, if you believe the Internet Consensus, we can conclude that open source doesn't work, people don't get paid enough, capitalism is a sham, billionaires are built on the backs of the proletariat, your $50 Patreon donation makes a real difference, and Blorkchain Would Have Solved This.

(Miraculously the Internet Consensus is always the same both before and after these kinds of events. In engineering we call this a "non-causal system" because the outputs are produced before the inputs.)

Nevertheless, I have been dared to give my take on the issue. It, too, was the same before and after, but the difference is I didn't write it down until now, which makes this journal a causal system. You could probably write an interesting philosophical paper about observations of a non-causal system nevertheless being causal, but mercifully, I will not.

Free Software is Communism

So anyway, meandering gently toward the point, let's go back in time to the original Free Software movement. Long ago, before the average reader of this sentence was born, a person whose name is now unpopular was at a university, where they had a printer, and the printer firmware was buggy. This person firmly believed they could quickly fix the printer firmware if only they had the source code. (In the spirit of every "I could do this better in a weekend" story, I'm not sure whether we ever confirmed if this was true. In any case printer firmware is still buggy.)

As a result, they started a nonprofit organization to rewrite all of Unix, which the printer did not run and which therefore would not solve any of the original problem, but was a pretty cool project nonetheless and was much more fun than the original problem, and the rest was history.

This story archetype is the Hero's Journey that inspires all software development:

  • I have a problem
  • I do not know how to solve that problem
  • But I strongly believe, without evidence, that I can solve a generalized version of that problem if I form a large enough team and work at it for 35 years(*)
  • We are now accepting donations

(*) initial estimate is often less than 35 years

Now, you have probably heard all this before, and if you're a software developer you have probably lived it. This part is not really in question. The burning question for us today, as we enjoy the (hopefully) peak of late-stage capitalism, is: ...but where will the donations come from?


Before we get back onto communism, let me draw an important distinction. Most communist governments in history ended up being authoritarian systems, which is to say, top-down control. Ironically, the people at the top seem to have more power than the people at the bottom, which at first seems like the antithesis of communism. This is not the place to claim an understanding of why that always seems to happen. But one has to acknowledge a pattern when one sees it.

On the other hand, it's easy to find examples of authoritarianism outside communism. Our world is filled with top-down control systems. Many corporations are, in many ways, top-down controlled. The US system of government is increasingly top-down controlled (ie. authoritarian), despite the many safety measures introduced early on to try to prevent that.

When politicians rail against communism it is because they don't want you to notice the ever-growing non-communist authoritarianism.

Authoritarianism is self-reinforcing. Once some people or groups start having more power, they tend to use that power to adjust or capture the rules of the system so they can accumulate more power, and so on. Sometimes this is peacefully reversible, and sometimes it eventually leads to uprisings and revolutions.

People like to write about fascism and communism as if they are opposite ends of some spectrum, but that's not really true in the most important sense. Fascism blatantly, and communism accidentally but consistently, leads to authoritarianism. And authoritarianism is the problem.

Authoritarianism is about taking things from me. Communism, in its noncorporeal theoretical form, is about giving things away.

I read a book once which argued that the problem with modern political discourse is it pits the "I don't want things taken from me" (liberty!) people against the "XYZ is a human right" (entitlement!) people. And that a better way to frame the cultural argument is "XYZ is my responsibility to society."

As a simple example, "Internet access is a human right," is just a sneaky way of saying "someone should give people free Internet." Who is someone? It's left unspecified, which is skipping over the entire mechanism by which we deliver the Internet. It's much more revealing to write, "To live in a healthy society, it's our responsibility to make sure every person has Internet access." Suddenly, oh, crap. The someone is me!

Healthy society is created through constant effort, by all of us, as a gift to our fellow members. It's not extracted from us as a mandatory payment to our overlords who will do all the work.

If there's one thing we know for sure about overlords, it's that they never do all the work.

Free software is a gift.

I would like to inquire about the return policy

Here's the thing about gifts: the sender chooses them, not the recipient. We can have norms around what gifts are appropriate, and agreements to not over-spend, and wishlists, and so on. But I won't always get the exact gift I want. Sometimes I didn't even want a gift. Sometimes the gift interprets JNDI strings in my log messages and executes random code from my LDAP server. This is the nature of gifts.
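That JNDI quip is the log4shell pattern in a nutshell: a logger that expands `${...}` lookups inside messages will happily expand lookups smuggled in via attacker-controlled input. A toy expander shows the shape of the problem (hypothetical code, not log4j's; the real library resolved `jndi:`, `env:`, and other schemes, which is what turned this into remote code execution):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LookupDemo {
    // Toy stand-in for log4j's message-lookup feature: replace each
    // ${name} in the message with the value of a configured lookup.
    static String expand(String msg, Map<String, String> lookups) {
        Matcher m = Pattern.compile("\\$\\{([^}]*)\\}").matcher(msg);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out,
                Matcher.quoteReplacement(lookups.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // The dangerous part: *user input* flows into the message, so the
        // user, not the developer, decides which lookups get expanded.
        String userInput = "${secret}";
        String logLine = expand("login attempt by " + userInput,
                                Map.of("secret", "hunter2"));
        System.out.println(logLine); // prints "login attempt by hunter2"
    }
}
```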

On the other hand, the best gifts are the things I never would have bought for myself, because they seemed too expensive or I didn't even realize I would like them or they were too much work to obtain, or because someone hand-made them just for me. These feel like luxuries of the sort capitalism cannot produce, because deciding, going out, and buying something for myself isn't luxury, it's everyday. It's lonely. It's a negotiation. It's limited by my own lack of creativity.

The best part of free software is it sometimes produces stuff you never would have been willing to pay to develop (Linux), and sometimes at quality levels too high to be rational for the market to provide (sqlite).

The worst part of free software is you get what you get, and the developers don't have to listen to you. (And as a developer, the gift recipients aren't always so grateful either.)

Paying for gifts

...does not work.

You don't say to someone, "here's $100, maybe this time get me a gift worth $100 more than you'd regularly spend." It's kind of insulting. It still probably won't get you exactly the thing you wanted. Actually, the other person might just pocket the $100 and run off with it.

We already have a way for you to spend $100 to get the thing you want. It's a market. A market works fine for that. It's not very inspiring, but most of the time it's quite efficient. Even gift-givers will often buy things on the same market, but with different selection criteria, thus adding value of their own.

When you try to pay for gifts, it turns the whole gift process into a transaction. It stops being a gift. It becomes an inefficient, misdesigned, awkward market.

There's research showing that, for example, financial compensation in a job is more likely a demotivator than a motivator (ie. if you pay me too little, I'll work less hard or quit, but if you double my pay, it won't double my output). If you tie cash compensation to specific metrics, people will game the metrics and usually do an overall worse job. If you pay someone for doing you a favour, they are less likely to repeat the favour. Gifts are inherently socially and emotionally meaningful. Ruin the giftiness, and you ruin the intangible rewards.

So it is with free software. You literally cannot pay for it. If you do, it becomes something else.

This is why we have things like the Linux Foundation, where the idea is you can give a gift because you appreciate and want to support Linux (and ideally you are a rich megacorporation so your gift is very big), but it dilutes the influence of that money through an organization that supposedly will not try to influence the gift of Linux that was already happening. You end up with multiple gift flows in different directions. Money goes here, code goes there. They are interdependent - maybe if one flow slows down the other flow will also slow down - but not directly tied. It's a delicate balance. People who keep receiving Christmas gifts but never give any might eventually stop receiving them. But might not.

Anyway, gifts will not get you 24-hour guaranteed response times to security incidents.

Gifts won't get you guaranteed high quality code reviews.

Gifts will not, for heaven's sake, prevent developers from implementing bad ideas occasionally that turn into security holes. Nothing will. Have you met developers?

Open source

I've avoided the term "open source" so far because it means something different from the original idea of Free Software.

Open source was, as I understand it, coined to explain what happened when Netscape originally opened their Mozilla source code, back at the end of the 1990s. That was not a gift. That was a transaction. Or at least, it was intended to be.

The promise of open source was:

  • You, the company, can still mostly control your project
  • Customers will still pay you to add new features
  • Actually customers might pay other people to add new features, but you can still capitalize on it because you get their code too
  • Linux distributions only package open source code so you'll onboard more customers more easily this way
  • You can distance yourself from this anti-capitalist gift-giving philosophical stuff that makes investors nervous
  • Plus a bunch of people will look at the code and find bugs for you for free!

Maybe this sounds cynical, but capitalists are cynical, and you know what? It worked! Okay, not for Netscape Corporation (sorry), but for a lot of other people since then.

It also failed a lot of people. Many developers and companies have been disappointed to learn that just uploading your code to github doesn't make a community of developers appear. (It does make it more likely that AWS will fork your product and make more money from it than you do.) Code reviews are famously rare even in security-critical projects. Supply chain issues are rampant.

In fact, we've now gotten to the point where some people hesitate to give away their source code, mainly because of this confusion of gifts and customers. If I spend some spare time hacking something together on a weekend and give it away, that's a gift. If you yell at me for making it, that makes giving less fun, and I will spend fewer weekends making gifts.

Whereas when a company has a product and open sources it and you complain, that's customers giving valuable feedback and it's worth money to learn from them and service them, because you eventually earn money in exchange (through whatever business model they've established). No gift necessary.

Call it cynical or call it a win/win relationship. But it's not a gift.

The startup ecosystem

Since the creation of the open source designation 20+ years ago, software startups have taken off more than ever. I attribute this to a combination of factors:

  • Cloud computing has made it vastly cheaper to get started
  • Incubators like YCombinator have industrialized the process of assembling and running a small software company
  • Megacorps have become exponentially richer but no more creative, so they need to acquire or acqui-hire those startups faster and faster in order to grow.

Although a lot of startups open source their code, and they all depend heavily on open source ecosystems, the startup world's motivations are amazingly different from the free software and open source worlds.

Gifts exist in the startup world. They are things like "we were both in YCombinator so I will intro you to this investor I like" or "I got extremely rich so let me invest in your startup and incidentally I get a lottery ticket for becoming even more rich." These absolutely are still gifts. They each strengthen social ties. The startup world is a society, and the society is built up from these gifts. It's a society that largely ignores the trials and tribulations of anyone who isn't a rich software engineer insider, but history has hosted many past societies of that sort and it takes a long time to build and deploy enough guillotines, and anyway they are having fun and producing a lot and surely that counts for something.

If free software gifts are communism and open source is cynically capitalist exploitation, then startups may be, weirdly, the democratization of capitalism.

Hear me out. Big companies don't care what you think; you can't pay them enough to care. Gift givers care only a little what you think; if they gave you what you wanted, it wouldn't be a gift. But startups, well, there are a lot of them and their mantras are "do things that don't scale" and "focus on the customer" and "build rapid feedback loops." What that spells for you is a whole bunch of people who want to give you what you want, in exchange for money, and who are excited to amortize the costs of that over all the other customers who want the same thing.

It's kind of exciting, conceptually, and more self-optimizing than untuned gift giving, and so it's not super surprising to me that it has started to eclipse the earlier concepts of free software and open source. More and more "open" projects are backed by small companies, who have financial incentives to make their users happy because some of the users turn into paying customers. They'll even provide the uptime SLAs and security fix turnaround guarantees you wanted so much. Our company, Tailscale, is unabashedly one of those. Nothing to be ashamed of there. The system works.

What doesn't work is assuming those startup mechanics apply to everyone out there who gives you a software gift. Not every project on github is the same.

Not everyone has the same motivations.

Giving them money won't change their motivations.

Trying to pay them or regulate them taints the gift.

If you wanted to pay someone to fix some software, you didn't want a gift. You wanted a company.

But if there is no company and someone gave you something anyway? Say thanks.


This isn't where evolution stops. There's a lot more to say about how SaaS taints the unwritten agreement of open source (because you don't have to give back your changes to the code), and how startups tend to go bankrupt and their tech dies with them, and how the best developers are not good at starting companies (no matter how much easier it has become), and how acquiring a startup usually destroys all the stuff they innovated, and how open source is often used as a way to exfiltrate past those kinds of disasters, and how simultaneously, whole promising branches of the "gift economy" structure have never been explored. But that's enough for today. Maybe another time.

Posted Thu Dec 30 12:43:51 2021 Tags:

What if all these weird tech trends actually add up to something?

Last time, we explored why various bits of trendy technology are, in my opinion, simply never going to be able to achieve their goals. But we ended on a hopeful(?) note: maybe that doesn't matter. Maybe the fact that people really, really, really want it, is enough.

Since writing that, I've been thinking about it more.

I think we are all gradually becoming more aware of patterns, of major things wrong with our society. They echo some patterns we've been seeing for decades now. The patterns go far beyond tech, extending into economics and politics and culture. There's a growing feeling of malaise many of us feel:

  • Rich, powerful, greedy people and corporations just get richer, more powerful, and more greedy.
  • Everyone seems to increasingly be in it for themselves, not for society.
  • Or, people who are in it for society tend to lose or to get screwed until they give up.
  • Artists really don't get enough of a reward for all the benefit they provide.
  • Big banks and big governments really do nonspecifically just suck a lot.
  • The gap between the haves and have-nots keeps widening.
  • You can't hope to run an Internet service unless you pay out a fraction to one of the Big Cloud Providers, just like you couldn't run software without paying IBM and then Microsoft, back in those days.
  • Bloody egress fees, man. What a racket.
  • Your phone can run mapreduce jobs 10x-100x faster than your timeshared cloud instance that costs more. Plus it has a GPU.
  • One SSD in a Macbook is ~1000x faster than the default disk in an EC2 instance.
  • Software stacks, governments, and financial systems: they all keep getting more and more bloated and complex while somehow delivering less per dollar, gigahertz, gigabyte, or watt.
  • Computers are so hard to run now, that we are supposed to give up and pay a subscription to someone - well, actually to every software microvendor - to do it for us.
  • We even pay 30% margins to App Stores mainly so they can not let us download apps that are "too dangerous."
  • IT security has become literally impossible: if you install all the patches, you get SolarWinds-style supply chain malware delivered to you automatically. If you don't install the patches, well, that's worse. Either way, enjoy your ransomware.
  • Software intercompatibility is trending toward zero. Text chat apps are literally the easiest thing in the world to imagine making compatible - they just send very short strings, very rarely, to very small networks of people! But I use at least 7 separate ones because every vendor wants their own stupid castle and won't share. Don't even get me started about books or video.
  • The most reasonable daycare and public transit in the Bay Area is available only with your Big Tech Employee ID card.
  • Everything about modern business is designed to funnel money, faster and faster, to a few people who have demonstrated they can be productive. This totally works, up to a point. But we've now reached the extreme corner cases of capitalism. Winning money is surely a motivator, but that motivation goes down the more you have. Eventually it simply stops mattering at all. Capitalism has become a "success disaster."

Writing all this down, you know what? I'm kind of mad about it too. Not so mad that I'll go chasing obviously-ill-fated scurrilous rainbow financial instruments. But there's something here that needs solving. If I'm not solving it, or part of it, or at least trying, then I'm... wasting my time. Who cares about money? This is a systemic train wreck, well underway.

We have, in Western society, managed to simultaneously botch the dreams of democracy, capitalism, social coherence, and techno-utopianism, all at once. It's embarrassing actually. I am embarrassed. You should be embarrassed.


I'm a networking person and a systems person, so please forgive me if I talk about all this through my favourite lens. Societies, governments, economies, social networks, and scalable computing all have something in common: they are all distributed systems.


And everyone.

Everyone seems to have an increasingly horrifically misguided idea of how distributed systems work.

There is of course the most obvious horrifically misguided recently-popular "decentralized" system, whose name shall not be spoken in this essay. Instead let's back up to something older and better understood: markets. The fundamental mechanism of the capitalist model.

Markets are great! They work! Centrally planning a whole society clearly does not work (demonstrated, bloodily, several times). Centrally planning corporations seems to work, up to a certain size. Connecting those corporations together using markets is the most efficient option we've found so far.

But there's a catch. People like to use the term free market to describe the optimal market system, but that's pretty lousy terminology. The truth is, functioning markets are not "free" at all. They are regulated. Unregulated markets rapidly devolve into monopolies, oligopolies, monopsonies, and, if things get really bad, libertarianism. Once you arrive there, every thread ends up with people posting about "a monopoly on the use of force" and "paying taxes at gunpoint" and "I'll run my own fire department" and things that "end at the tip of the other person's nose," and all useful discourse terminates forevermore.

The job of market regulation - fundamentally a restriction on your freedom - is to prevent all that bad stuff. Markets work well as long as they're in, as we call it in engineering, the "continuous control region," that is, the part far away from any weird outliers. You need no participant in the market to have too much power. You need downside protection (bankruptcy, social safety net, insurance). You need fair enforcement of contracts (which is different from literal enforcement of contracts).

And yet: markets are distributed systems.

Even though there are, in fact, very strict regulators and regulations, I can still enter into a contract with you without ever telling anyone. I can buy something from you, in cash, and nobody needs to know. (Tax authorities merely want to know, and anyway, notifying them is asynchronous and lossy.) Prices are set through peer-to-peer negotiation and supply and demand, almost automatically, through what some call an "invisible hand." It's really neat.

As long as we're in the continuous control region.

As long as the regulators are doing their job.

Here's what everyone peddling the new trendy systems is so desperately trying to forget, that makes all of them absurdly expensive and destined to fail, even if the things we want from them are beautiful and desirable and well worth working on. Here is the very bad news:

Regulation is a centralized function.

The job of regulation is to stop distributed systems from going awry.

Because distributed systems always go awry.

If you design a distributed control system to stop a distributed system from going awry, it might even work. It'll be unnecessarily expensive and complex, but it might work... until the control system itself, inevitably, goes awry.

I find myself linking to this article way too much lately, but here it is again: The Tyranny of Structurelessness by Jo Freeman. You should read it. The summary is that in any system, if you don't have an explicit hierarchy, then you have an implicit one.

Despite my ongoing best efforts, I have never seen any exception to this rule.

Even the fanciest-pants distributed databases, with all the Rafts and Paxoses and red/greens and active/passives and Byzantine generals and dining philosophers and CAP theorems, are subject to this. You can do a bunch of math to absolutely prove beyond a shadow of a doubt that your database is completely distributed and has no single points of failure. There are papers that do this. You can do it too. Go ahead. I'll wait.

<several PhDs later>

Okay, great. Now skip paying your AWS bill for a few months.

Whoops, there's a hierarchy after all!

You can stay in denial, or you can get serious.

Western society, economics, capitalism, finance, government, the tech sector, the cloud. They are all distributed systems already. They are all in severe distress. Things are going very bad very quickly. It will get worse. Major rework is needed. We all feel it.

We are not doing the rework.

We are chasing rainbows.

We don't need deregulation. We need better designed regulation.

The major rework we need isn't some math theory, some kind of Paxos for Capitalism, or Paxos for Government. The sad, boring fact is that no fundamental advances in math or computer science are needed to solve these problems.

All we need is to build distributed systems that work. That means decentralized bulk activity, hierarchical regulation.

As a society, we are so much richer, so much luckier, than we have ever been.

It's all so much easier, and harder, than they've been telling you.

Let's build what we already know is right.

Posted Thu Dec 2 12:38:46 2021 Tags:

I guess I know something about train wrecks.

One night when I was 10 years old, me and my mom were driving home. We came to a train crossing outside of town. There was another car stopped right on the tracks, stalled. A lady was inside, trying to get her car to start. It didn’t.

Train crossings are bumpy, cars were worse then, it was a long time ago, I don’t know, I don’t remember clearly. Anyway, it was cold out and most people didn’t have cell phones yet, so when the car wouldn’t start and it was too heavy to push, there wasn’t much to be done. My mom convinced her to get the heck away from the tracks and come sit in our car to warm up and make a plan. We heard the whistle of an arriving train. And I made what I now consider one of the (several) biggest mistakes of my life: I closed my eyes.

It was only a few seconds later when I realized OH MY GOD WHAT WAS I THINKING I COULD HAVE WATCHED A TRAIN DESTROY A CAR RIGHT IN FRONT OF ME!!! But I wasn’t brave enough, I panicked, I closed my eyes, and you know what? The train wreck happened anyway. I just didn’t get to see it.

It was in the local newspaper a couple days later. The newspaper said the car ran into the train, and not the other way around. I was boggled. I learned later that this was my first, surprisingly on-the-nose, encounter with the Gell-Mann Amnesia Effect. (To this day, I still believe some of the things I read. I have no idea why.)

What’s the point of this story? That the train crash still happens, whether or not you’re watching. And everything you've read about it is probably wrong. And I’m glad my mom helped that lady get out of the way.

Anyway that’s why I don't mute blockchain-related keywords on twitter.

The blockchain train crash

Ten years(!) have passed since I wrote Why bitcoin will fail. And yet, here we are, still talking about bitcoin. Did it fail?

According to the many cryptobots who pester me, apparently not. They still gleefully repost my old article periodically, pointing out that at the time, bitcoins were worth maybe three dollars, and now they're worth infinity dollars, and wow, that apenperson sure must feel dumb for not HODLING BIGTIME back when they had the chance, lol.

Do I feel dumb? Well, hmm. It’s complicated. Everything I predicted seems to have come true. If your definition of “failure” is “not achieving any of the stated goals,” then I guess bitcoin has profoundly... not succeeded. But that doesn’t really tell the whole story, does it? A proper failure would be in the past tense by now.

What I do know is I’ve learned some stuff in 10 years.

What I got right

But first, let’s review the claims I made in the original article:

  • If you like bitcoin, you must think the gold standard was a good idea.
    To create gold currency, you do pointless busywork (“mining”). Gold is a stupid inconvenient currency that’s worse than paper.
    Printing and destroying money is a key economic tool.

Yup. Over the years we’ve seen an ongoing, embarrassing overlap between “goldbug” zealots and bitcoin zealots. The busywork mining has gotten absurdly more expensive than it was in 2011, and somehow is now a significant fraction of worldwide energy usage (what. the. heck), and various blockchains’ environmental impact is now the most common argument people use against them.

Beyond anything I imagined, bitcoin has achieved the unlikely goal of being even less convenient than gold for actually buying things (the job of a currency). The exchange rate of bitcoin is almost completely a random walk, impossible for anyone to manage (unlike a regular currency), and much worse than even gold.

  • Even if it was a good idea, governments would squash it.
    The only reason they haven’t is it’s too small to matter.

Yes and yes.

Congratulations, we’ve now seen the bitcoin movement get big enough to matter! There’s a corresponding increase in regulation, from SEC investigations, to outright banning in some countries, to the IRS wanting to tax you on it, to anti-terrorist financing and KYC rules. Each new regulation removes yet another supposed advantage of using something other than cash.

Also, it’s now obvious that use of bitcoin (and related blockchains) for payments is almost entirely scams and illegal stuff. This agrees with my prediction, but in a way I didn’t expect. It turns out to be maybe tautological. Like the old American saying, “If you outlaw guns, then only outlaws will have guns,” you could argue that we have now regulated bitcoin so much that only criminals can productively use bitcoin.

But it's grown enough to now be producing the largest (and ongoing!) ransomware outbreak the world has ever seen, so, you win, I guess.

  • The whole technological basis is flawed.
    The weak link is not SHA256, it’s the rest of the cryptosystem.

Yes, in multitudes.

We’ve seen forked chains, theft, mysteriously reversed transactions, 51% attacks.

It turned out bitcoin is irreparably privacy-destroying. (Law enforcement teams say thanks!) This was originally billed as a feature until some drug dealers got caught. The feature, or maybe bug, can’t be fixed without changing the system, which can’t be done without getting everyone to upgrade.

But ha, it's a decentralized system. Since nobody could figure out how to get everyone to upgrade a decentralized system all at once, it was more profitable to instead spin up zillions of new blockchains, each with its own systemic flaws, but all sharing the one big systemic flaw: it’s an ownerless distributed cryptosystem, so when each fatal flaw is inevitably revealed, nobody can fix it.

On top of the technical problems, there were social problems. Jo Freeman's Tyranny of Structurelessness showed up here, as it does whenever you try to pretend you have no social control hierarchy. We learned that the people who write the code, and the people who have the biggest mining rigs, and the people who operate exchanges, and the people who go on expensive and very shady cruises to hobnob with the cabal, and something about North Korea, and basically everyone who is not you, all have disproportionate control over what happens with this “decentralized” “currency.” And this is equally true in all the other “decentralized” chains invented to either scam you or solve technical problems in the original, or both.

For heaven's sake, people, it's software. You built a system, or series of systems, that will fail in completely predictable ways, forever, if you didn't get the software perfectly right the first time. What did you think would happen.

  • It doesn’t work offline.
    Paper money does.

Still true. On the other hand, the global expansion of cellular data availability has been relentless and perhaps this never did matter.

What I got wrong

Okay, well. The title.

“Why bitcoin will fail” wasn’t right. It would have been better to call it “Why bitcoin should fail,” because it really should have! But it didn’t, at least not yet, at least in the minds of its ever-growing user base. I feel like this is important.

A few years ago I learned the investor variant of Sturgeon’s Law. Here’s what a VC told me: 90% of new things will fail. Therefore you can just predict every new thing will fail, and 90% of the time you’ll be right. That’s a pretty good way to feel good about yourself, but it’s not very useful. Anybody can do that. Instead, can you pick the 10% that will succeed?

Even though I accurately predicted a bunch of things about bitcoin that wouldn’t work, I didn’t predict all the other things about bitcoin that wouldn't work. Maybe that seems like splitting hairs, but it isn’t. If you miss the reasons something won’t work, then you might just as easily miss the reasons why it will work. It suggests that you don’t know what you’re talking about.

Here are some reasons I missed for why bitcoin (and blockchains generally) didn’t and still don't work, for anything productive:

  • Scams. Lots and lots of scams. Blockchains became the center of gravity of almost all scams on the Internet. I don’t know what kind of achievement that is, exactly, but it’s sure something.

  • Citizens moving money out of authoritarian regimes. This is, by definition, illegal, but is it a net benefit to society? I don’t know. Maybe sometimes.

  • Other kinds of organized crime and trafficking. I don’t know what fraction of money laundering nowadays goes through blockchains. Maybe it’s still a small percentage. But it seems to be a growing percentage.

  • More and more blockchains. There are so many of them now (see “scams”, above), claiming to do all sorts of things. None of them do. But somehow even bitcoin is still alive, even though a whole ecosystem of derivative junk has sprouted trying to compete with it.

  • Corrupt or collapsed exchanges. I predicted technical problems, but most of the failures we’ve seen have been simple, old-fashioned grifters and incompetents. Indeed, the failures of this new financial system are just like the historical failures of old financial systems, albeit with faster iterations. Some people are excited about how much faster we can make more expensive mistakes now. I'm not so sure.

  • Gambling and speculation. I wrote the whole article expecting bitcoin to fail at being a currency, but that charade ended almost immediately. What exists now is an expensive, power-hungry, distributed, online gambling system. The house still always wins, but it’s not totally clear who the house is, which is how the house likes it. Gambling has always been fundamentally a drain on society (a “tax on the uneducated,” someone once told me), but it’s always very popular anyway. Bitcoin is casino chips. Casino chips aren’t currency, but they don't “fail” either.

Despite all that - and I didn't even need to exaggerate! - bitcoin has still not failed, if failure means it’s gone. It's very much still around.

That’s because I forgot one essential reason bitcoin has survived:

Because people really, really, really want it to.

If there’s one lesson I learned over and over in the last ten years, that’s it. Projects don’t survive merely because they are good ideas; many good ideas get cancelled or outcompeted. Ask anyone who works at an overfunded tech company.

Similarly, movements don’t die just because they are, in every conceivable way, stupid. Projects live or die because of the energy people do or do not continue to put into them.

Here's a metaphor. Blockchains today are like… XML in the early 2000s. A long-burning overcomplicated trash fire that a bunch of large, cash-rich, unscrupulous, consultant-filled mega-organizations foisted on us for years and years, that we all now pretend we weren’t dumb enough to fall for. Remember SOAP? Remember when we tried to make HTML4 into XHTML? Ew, no.

The thing is, a ton of major tech infrastructure spending was somehow justified in the name of XML. A lot of computer systems, especially in the financial and manufacturing industries, got a long-needed overhaul. Fixed-width COBOL databases couldn't do XML, so bam, gotta replace some fixed-width COBOL databases (or at least slap a new API translator in front). The XML part sucked and still sucks and we’ll be dealing with it for decades.

But is that really so bad, in the name of progress?


It's been ten years, and it all went pretty badly, so let me make a new prediction.

A lot of stuff will get redesigned in the name of blockchains. Like XML, the blockchains will always make it worse, but if carefully managed, maybe not too much worse. Something good will eventually come out of it, by pure random chance, because of all those massive rewrites. Blockchains will take credit for it, like XML took credit for it. And then we'll finally move on to the next thing.

Nowadays if someone picks XML for a new design, we look at them like they’re from another planet. So, too, it will be with decentralized consensus blockchains, someday.

So, too, it was for the gold standard for international trade.

But it took more than 50 years.

Posted Wed Nov 17 22:38:25 2021 Tags:

Let's talk about bug/feature tradeoffs.

Anyone who knows me has probably already heard me rant about Crossing the Chasm, my most favourite business book of all time. I love its simple explanation of market segmentation and why the life cycle of a tech startup so often goes the way it does. Reading that book is what taught me that business success is not just a result of luck or hard work. Strategy matters too.

As our company prepares for our chasm-crossing phase, I've been thinking about the math behind why chasm-crossing works and why our metrics plots (doesn't every startup do their key business metrics in R?) look the way they do, and I realized that chasm-crossing strategy must have a simple mathematical basis behind it. I bet I could math this.

And so, our simulated software engineering (SWE) team is back!

In previous episodes of SimSWE, we learned it's objectively good to be short-term decisive even if you're wrong and to avoid multitasking. Later, I expanded on all that, plus more, in my epic treatise on software scheduling. And then, as a bonus, our simulated SWEs went on to buy homes and distort prices in the California housing market.

This time, I want to explore the chasm-crossing process and, while we're here, answer the unanswerable question: what's a bug and what's a feature?

Nobody can agree on what they mean. When does "lack of a feature" become a bug? When a key customer demands it? When the project manager declares a code freeze but you still want to merge your almost-finished pull request? When it's Really Really Important that you launch at a particular conference?

The answer is, users don't care what you call it. Let's reformulate the question.

We need to make a distinction between needs and wants.

Back when I lived in New York, I took some fiction writing classes. One thing I learned is there is a specific recipe for "interesting" characters in a story, as follows: understand how characters' needs differ from their wants. It's rare that the two are the same. And that way lies drama.

So it is with customers. I want a browser that doesn't suck all my RAM and drain my battery. But I need a browser that works on every website and minimizes malware infections, so I use Chrome.

I want a scripting language that isn't filled with decades-old quoting idiosyncrasies, but I need a scripting language that works everywhere, so I mostly use POSIX sh.

Some people call needs "table stakes." You must be this tall to ride the roller coaster, no exceptions. If you are not this tall, you cannot ride the roller coaster. Whether you want to ride the roller coaster is an orthogonal question related to your personal preferences.

Needs are AND. Wants are OR. A product must satisfy all your needs. It can get away with satisfying only one want, if you want it badly enough.

Needs are roadblocks to your product's adoption. (I previously wrote about roadblock analysis.)

A want is a reason to use some new software. A need is a reason you can't.
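The AND/OR distinction is concrete enough to write down. Here's a minimal sketch (the product and feature names are made up for illustration):

```python
def is_interested(product, wants):
    """Wants are OR: fulfilling ANY one want makes a user interested."""
    return any(w in product for w in wants)

def can_adopt(product, wants, needs):
    """Needs are AND: a user adopts only if interested AND every need is met."""
    return is_interested(product, wants) and all(n in product for n in needs)

product = {"sync-files", "encryption"}
print(can_adopt(product, wants={"sync-files"}, needs={"encryption", "sso"}))
# False: the user wants what we built, but the one missing need ("sso") blocks the sale.
print(can_adopt(product | {"sso"}, wants={"sync-files"}, needs={"encryption", "sso"}))
# True: one fulfilled want, plus all needs met.
```

One unmet need vetoes the sale, no matter how many wants you fulfill.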

About 20 years ago(!), Joel on Software wrote about the 80/20 myth:

80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies. Unfortunately, it’s never the same 20%.

– Joel Spolsky

And yet, if you're starting a new project, you can't exactly do 100% of the features people want, all at once. What can you do instead?

Market segments, use cases, and needs

The best (and thankfully becoming common) advice to startups nowadays is to really nail just one use case at first. Pick a want, find people who want it, figure out what those people have in common, call it a market segment, solve the needs of the people in that segment, repeat.

This is all harder than it sounds, mostly because of your own human psychology. But it all lends itself well to rapid iteration, which is why our earlier SimSWE tips to be decisive and to avoid multitasking are right.

Getting back to Crossing the Chasm, the most essential advice in the book - and the hardest to follow - is to focus on your chosen market segment and ignore all requests from outside that segment. Pick one want. Fulfill all the needs.

Let's make a simulation to show what happens if you do or don't. And if we're lucky, the simulation will give us some insight into why that's such good advice.

Simulating wants and needs

The plot below simulates a market with 10,000 potential users, 10 potential wants, and 15 potential needs. Each user has a varying number of wants (averaging 3 each) and needs (averaging 5 each).

For a user to be interested in our product, it's sufficient for our product to fulfill any of their wants. On the other hand, for a user to actually adopt the product, they need to be interested, and we need to fulfill all their needs.

Side note: we can think of a "Minimum Viable Product" (MVP) as a product that fulfills one want, but none of the needs. There will be some tiny number of users who have no special needs and could actually use it. But a much larger group might want to use it. The MVP gives you a context for discussion with that larger group.
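That model is only a few lines of code. Here's a minimal sketch of the setup (my own reconstruction, not the author's actual simulator; it fixes each user at exactly 3 wants and 5 needs rather than averages):

```python
import random

random.seed(1)
N_USERS, N_WANTS, N_NEEDS = 10_000, 10, 15

# Each user draws a set of wants and needs (fixed at 3 and 5 here for simplicity).
users = [(set(random.sample(range(N_WANTS), 3)),
          set(random.sample(range(N_NEEDS), 5)))
         for _ in range(N_USERS)]

def tam(shipped_wants):
    """Interested users: the product fulfills ANY of their wants."""
    return sum(1 for wants, _ in users if wants & shipped_wants)

def unblocked(shipped_wants, shipped_needs):
    """Adoptable users: interested AND every one of their needs is met."""
    return sum(1 for wants, needs in users
               if wants & shipped_wants and needs <= shipped_needs)

# Ship one want, then the needs one at a time: unblocked climbs toward TAM.
curve = [unblocked({0}, set(range(k))) for k in range(N_NEEDS + 1)]
assert curve[0] == 0            # no needs shipped, nobody can adopt
assert curve[-1] == tam({0})    # all needs shipped, every interested user is unblocked
```

Note that because every simulated user here has exactly 5 needs, the MVP's unblocked count is exactly zero; with a distribution averaging 5, a few zero-need users would sneak in, as the side note above says.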

Before we get to all that deliberate activity, though, here's an example run of the simulator, with random-ish sequencing of wants and needs.

The dim dotted line is the Total Addressable Market (TAM). Every time you implement a want, the TAM goes up. Fun! This is what venture capitalist dreams are made of. All the users in the TAM are "interested" in your product, even if they aren't able to use it yet.

The dashed line is the "unblocked" users. These are users who are in the TAM and whose needs you've entirely filled. They legitimately could buy your product and be happy with it. Assuming they hear about you, go through the trial and sales process, etc. This is the maximum number of users you could have with your current product.

Finally, the red line is the number of users you actually have at any given time. It takes into account marketing, word-of-mouth, and adoption delays.


I'm already excited about this simulation because it shows how adoption curves "really look" in real life. In particular, you can see the telltale signs of a "real" adoption curve:

  • Exponentially growing uptake at first, which slows down as you saturate the market (i.e. an "S-curve" shape).

  • When you look more closely, the big S-curve is made up of a bunch of smaller S-curves. Each time we fulfill a need, some group of users becomes unblocked, and we can move toward saturating an ever-bigger market.

Observe also that the jumps in the dotted line (fulfilled wants) are big at first, and smaller each time. That's because each user has an average of three wants, and you only need to satisfy one of them. Because of overlapping wants, the second want is split between new users and users you already have. Each successive want has a greater and greater overlap with your already-interested users, and thus less and less effect.

(Alas, this gives a good mathematical rationale for why "mature" products stop improving. Yes, there are all sorts of additional things your audience might want. But if adding them doesn't increase your TAM, it's strategically questionable whether you should bother. Bring on the continuous improvement debate.)

(On the other hand, this simulation is somewhat unrealistic because of the pre-defined market size of only 10,000 participants. If, instead of fulfilling more wants for your existing market segment, you add a new market segment, those new wants might have a bigger impact and your "big" S-curve might get a newer, bigger S-curve added to it. This is small consolation to your existing users who would like some more stuff added that they care about, though.)
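The shrinking TAM jumps can even be computed in closed form. If each user holds 3 wants drawn uniformly from 10, the chance a user becomes newly interested at the k-th shipped want falls off hypergeometrically (a back-of-envelope sketch under that made-up independence assumption):

```python
from math import comb

N_WANTS, WANTS_PER_USER, N_USERS = 10, 3, 10_000

def p_interested(k):
    """P(at least one of a user's 3 wants is among the first k shipped wants)."""
    # comb(n, k) returns 0 when k > n, so this handles small k cleanly.
    return 1 - comb(N_WANTS - k, WANTS_PER_USER) / comb(N_WANTS, WANTS_PER_USER)

# Expected number of newly-interested users for each successive shipped want:
jumps = [N_USERS * (p_interested(k + 1) - p_interested(k)) for k in range(N_WANTS)]
assert round(jumps[0]) == 3000                          # first want: ~30% of the market
assert all(a >= b for a, b in zip(jumps, jumps[1:]))    # each later want adds fewer
```

The first want interests 3,000 users; by the eighth, every user is already interested and the marginal TAM gain is zero.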

In contrast, the jumps in the dashed line (needs fulfilled) start small and get bigger. This also makes sense intuitively: since users can't adopt the product until all their needs are met, and the typical user has 5 needs, certainly the first 4 needs are going to attract only a small group of less-discerning people. Even the first 5 needs will only capture the group of users with exactly those 5 needs or fewer. But by the time you're reaching the end of the to-do list, every new need is unlocking a big group of almost-satisfied users.

(This part is cool because it explains what startups so often experience: at first, fulfilling needs for your target market creates a small jump in absolute user count. But through this "AND" effect, each subsequent need you fulfill can create a bigger and bigger jump. Even if the new features seem fairly small or relatively easy compared to your early work!)
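The mirror-image calculation for needs shows the "AND" effect numerically: with 5 needs drawn from 15, the group unlocked by each successive need grows until the final need unlocks the largest cohort of all (a sketch, under the same made-up independence assumptions):

```python
from math import comb

N_NEEDS, NEEDS_PER_USER, N_USERS = 15, 5, 10_000

def p_unblocked(k):
    """P(all 5 of a user's needs are among the first k shipped needs)."""
    # comb(k, 5) is 0 for k < 5, so early needs unlock nobody.
    return comb(k, NEEDS_PER_USER) / comb(N_NEEDS, NEEDS_PER_USER)

# Expected number of newly-unblocked users for each successive shipped need:
jumps = [N_USERS * (p_unblocked(k + 1) - p_unblocked(k)) for k in range(N_NEEDS)]
assert sum(jumps[:4]) == 0       # the first four needs unlock nobody at all
assert jumps[-1] == max(jumps)   # the fifteenth unlocks the biggest group
```

In this toy model the very last need you ship unlocks a full third of the market, which is why late roadblock fixes can feel so disproportionately dramatic.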

Comparing strategies

Of course, that was a single simulation based on a bunch of made-up arbitrary assumptions and some numerical constants selected mainly on the basis of how pretty the graph would look.

The good part comes when we compare multiple product management strategies:

Let's continue to assume a fixed market segment of 10,000 users, each of whom have an assortment of wants and needs.

The four plots above correspond to four ways of prioritizing those wants and needs:

  1. Features First: the "maximum hype" approach. Implement all 10 wants before solving any needs at all. This maximizes TAM as early as possible. Some early-stage investors get starry-eyed when they see that, but unfortunately you don't get a lot of live users because although people are excited, they can't actually use the product. This is also what you get if you don't, as Steve Blank would say, "get out of the building" and talk to real customers.

  2. Alternating: switch between implementing wants and needs, semi-randomly. It turns out this grows your userbase considerably faster than the first option, for the same reason that you'll do okay at rock-paper-scissors by using a random number generator instead of always choosing rock. The main thing here is shipping those randomly-ordered milestones as fast as you can. As SimSWE 1 and 2 emphasized, if you do that, you can get away with not being super great at prioritization.

  3. Needs First: just implement exactly one want, then fix all the needs before moving on to other wants. This is a purified Crossing the Chasm model. You can see that the TAM doesn't start increasing until pretty late, because new use cases are on hold. But we get precious real users earlier, which spread word-of-mouth sooner and lead to faster exponential adoption later.

  4. Perfectionism: the naive opposite of features-first; a variant of needs-first where we don't even solve a single want before we start trying to address needs. Since the product does nothing useful, but very reliably, nobody wants to buy it at first (~zero TAM). When we finally start launching use cases, we can add them pretty quickly, but actual growth lags behind, at first, because we were late in getting our exponential growth curve started. Think of this as the "we got SOC2 compliance before we had any customers" strategy.

In these plots, the important things to look for are getting more users sooner (money in the bank!) and total area under the curve (aggregate value delivered). Users you get earlier are users who give you money and spread word-of-mouth over a longer time, so they are much more valuable than users you add later.
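To check that intuition, here's a stripped-down head-to-head of strategies #1-#3 (my own sketch, not the original simulator: no word-of-mouth or adoption lag, just the unblocked-user count after each shipped milestone, with the running total as a crude stand-in for area under the curve):

```python
import random

random.seed(7)
N_USERS, N_WANTS, N_NEEDS = 2_000, 10, 15
users = [(set(random.sample(range(N_WANTS), 3)),
          set(random.sample(range(N_NEEDS), 5)))
         for _ in range(N_USERS)]

def unblocked(shipped_wants, shipped_needs):
    return sum(1 for w, n in users if w & shipped_wants and n <= shipped_needs)

def run(milestones):
    """Ship milestones in order; return the unblocked-user count after each."""
    sw, sn, curve = set(), set(), []
    for kind, i in milestones:
        (sw if kind == "want" else sn).add(i)
        curve.append(unblocked(sw, sn))
    return curve

wants = [("want", i) for i in range(N_WANTS)]
needs = [("need", i) for i in range(N_NEEDS)]

features_first = run(wants + needs)                    # 1: all hype up front
alternating    = run([m for pair in zip(wants, needs) for m in pair] + needs[10:])
needs_first    = run([wants[0]] + needs + wants[1:])   # 3: one want, then all needs

# All strategies end in the same place, but needs-first banks users far earlier,
# so its area under the curve dominates:
assert features_first[-1] == alternating[-1] == needs_first[-1] == N_USERS
assert sum(needs_first) > sum(alternating) > sum(features_first)
```

Even without modeling word-of-mouth, the AND/OR math alone makes needs-first win; adding compounding adoption, as the real simulation does, only widens the gap.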

In this version of the plot, it looks like #3 is winning, #4 is not too bad, and even #2 might be kind of okay. In the end, is there really much difference?

Let's zoom in!

More needs fulfilled, more momentum

This plot zooms the y axis to the first 1000 customers, leaving the x axis unchanged from before. Now the differences are more dramatic.

Here you can see that needs-first starts attracting at least a noticeable number of live customers at time 150 or so. The others take much longer to get rolling.

This feels intuitively right: in the early days of a startup, you build an MVP, nobody uses it, you find a few willing suckers early adopters and listen to their feedback, fix the first couple of roadblocks, and now you have a few happy niche users. If all goes well, those users will refer you to more users with a few more roadblocks, and so on. At that stage, it's way too early to worry about expanding your TAM.

What I find exciting - but not all that surprising, having now immersed ourselves in the math - is that the needs-first approach turns out to not be a compromise. The word-of-mouth advantage from having zero-roadblock, excited, active users early on means the slow part of the exponential growth can get started early, which over time makes all the other effects look small. And each successive fulfilled need unlocks an ever-greater number of users.

In contrast, you can see how increasing the TAM early on has not much benefit. It might get your investors excited, but if you don't have live users, there is nobody to spread word-of-mouth yet. Surprisingly little is lost by just focusing on one small want, clearing out roadblocks for people who want that, and worrying about the rest later.

Posted Mon Oct 25 13:00:52 2021 Tags:

After a nine year hiatus, a new version of the X Input Protocol is out. Credit for the work goes to Povilas Kanapickas, who also implemented support for XI 2.4 in the various pieces of the stack [0]. So let's have a look.

X has had touch events since XI 2.2 (2012) but those were only really useful for direct touch devices (read: touchscreens). There were accommodations for indirect touch devices like touchpads but they were never used. The synaptics driver set the required bits for a while but it was dropped in 2015 because ... it was complicated to make use of and no-one seemed to actually use it anyway. Meanwhile, the rest of the world moved on and touchpad gestures are now prevalent. They've been standard in macOS for ages, in Windows for almost ages and - with recent GNOME releases - now feature prominently on the Linux desktop as well. They have been part of libinput and the Wayland protocol for years (and even recently gained a new set of "hold" gestures). Meanwhile, X was left behind in the dust or mud, depending on your local climate.

XI 2.4 fixes this: it adds pinch and swipe gestures to the XI2 protocol and makes those available to supporting clients [2]. Notable here is that the interpretation of gestures is left to the driver [1]. The server takes the gestures and does the required state handling but otherwise has no say in what constitutes a gesture. This is of course no different to e.g. 2-finger scrolling on a touchpad, where the server just receives scroll events and passes them on accordingly.

XI 2.4 gesture events are quite similar to touch events in that they are processed as a sequence of begin/update/end, with each gesture type having its own event types. So the events you will receive are e.g. XIGesturePinchBegin or XIGestureSwipeUpdate. As with touch events, a client must select for all three (begin/update/end) on a window. Only one gesture can exist at any time, so if you are a multi-tasking octopus, prepare to be disappointed.

Because gestures are tied to an indirect-touch device, they apply wherever the cursor is currently positioned. In that, they work similarly to button presses, and passive grabs apply as expected too. So long-term the window manager will likely want a passive grab on the root window for swipe gestures, while applications will implement pinch-to-zoom as you'd expect.

In terms of API there are no surprises. libXi 1.8 is the version to implement the new features and there we have a new XIGestureClassInfo returned by XIQueryDevice and of course the two events: XIGesturePinchEvent and XIGestureSwipeEvent. Grabbing is done via e.g. XIGrabSwipeGestureBegin, so for those of you with XI2 experience this will all look familiar. For those of you without - it's probably no longer worth investing time into becoming an XI2 expert.

Overall, it's a nice addition to the protocol and it will help bring the X server slightly closer to Wayland for a widely-used feature. Once GTK, mutter and all the other pieces in the stack are in place, it will just work for any (GTK) application that supports gestures under Wayland already. The same will be true for Qt, I expect.

X server 21.1 will be out in a few weeks, xf86-input-libinput 1.2.0 is already out and so are xorgproto 2021.5 and libXi 1.8.

[0] In addition to taking on the Xorg release, so clearly there are no limits here
[1] More specifically: it's done by libinput since neither xf86-input-evdev nor xf86-input-synaptics will ever see gestures being implemented
[2] Hold gestures missed out on the various deadlines

Posted Thu Sep 23 05:26:00 2021 Tags:

Xorg is about to be released.

And it's a release without Xwayland.

And... wait, what? Let's unwind this a bit, and ideally you should come away with a better understanding of Xorg vs Xwayland, and possibly even Wayland itself.

Heads up: if you are familiar with X, the below is simplified to the point it hurts. Sorry about that, but as an X developer you're probably good at coping with pain.

Let's go back to the 1980s, when fashion was weird and there were still reasons to be optimistic about the future. Because this is a thought exercise, we go back with full 20/20 hindsight and, ideally, the winning Lotto numbers in case we have some time for some self-indulgence.

If we were to implement an X server from scratch, we'd come away with a set of components: a libxprotocol that handles the actual protocol wire format parsing and provides a C API to access it (quite like libxcb, actually). That one is just the protocol-to-code conversion layer.

We'd have a libxserver component which handles all the state management required for an X server to actually behave like an X server (nothing in the X protocol requires an X server to display anything). That library has a few entry points for abstract input events (pointer and keyboard, because this is the 80s after all) and a few exit points for rendered output.

libxserver uses libxprotocol but that's an implementation detail, we can ignore the protocol for the rest of the post.

Let's create a github organisation and host those two libraries. We now have those two repositories [1].

Now, to actually implement a working, functional X server, our new project would link against libxserver and hook into that library's API points. For input you'd use libinput and pass those events through; for output you'd use the modesetting driver that knows how to scream at the hardware until something finally shows up. This is somewhere between outrageously simplified and unacceptably wrong but it'll do for this post.

Your X server has to handle a lot of the hardware-specifics but other than that it's a wrapper around libxserver which does the work of ... well, being an X server.

Our stack looks like this:

+--------------------------+
| xserver  [libxserver]    |--------[ X client ]
|                          |
| [libinput] [modesetting] |
+--------------------------+
|          kernel          |
+--------------------------+
Hooray, we have re-implemented Xorg. Or rather, XFree86, because we're 20 years from all the pent-up frustration that caused the Xorg fork. Let's host this project on github too.

Now, let's say that instead of physical display devices, we want to render into a framebuffer, and we have no input devices.

+--------------------------+
| xserver  [libxserver]    |--------[ X client ]
|                          |
|        [write()]         |
+--------------------------+
|       some buffer        |
+--------------------------+
This is basically Xvfb or, if you are writing out PostScript, Xprint. Let's host those on github too, we're accumulating quite a set of projects here.

Now, let's say those buffers are allocated elsewhere and we're just rendering to them. And those buffers are passed to us via an IPC protocol, like... Wayland!

+--------------------------+
| xserver  [libxserver]    |--------[ X client ]
|                          |
| input events    [render] |
+--------------------------+
|    Wayland compositor    |
+--------------------------+
And voila, we have Xwayland. If you swap out the protocol you can have Xquartz (X on macOS) or Xwin (X on Windows) or Xnest/Xephyr (X on X) or Xvnc (X over VNC). The principle is always the same.

Fun fact: the Wayland compositor doesn't need to run on the hardware, you can play display server matryoshka until you run out of turtles.

In our glorious revisioned past all these are distinct projects, re-using libxserver and some external libraries where needed. Depending on the project, things may be very simple or get very complex; it depends on how we render things.

But in the end, we have several independent projects all providing us with an X server process - the specific X bits are done in libxserver though. We can release Xwayland without having to release Xorg or Xvfb.

libxserver won't need a lot of releases, the behaviour is largely specified by the protocol requirements and once you're done implementing it, it'll be quite a slow-moving project.

Ok, now, fast forward to 2021, lose some hindsight, hope, and attitude and - oh, we have exactly the above structure. Except that it's not spread across multiple independent repos on github, it's all sitting in the same git directory: our Xorg, Xwayland, Xvfb, etc. are all sitting in hw/$name, and libxserver is basically the rest of the repo.

A traditional X server release was a tag in that git directory. An XWayland-only release is basically an rm -rf hw/*-but-not-xwayland followed by a tag, an Xorg-only release is basically an rm -rf hw/*-but-not-xfree86 [2].

In theory, we could've moved all these out into separate projects a while ago but the benefits are small and no-one has the time for that anyway.

So there you have it - you can have Xorg-only or XWayland-only releases without the world coming to an end.

Now, for the "Xorg is dead" claims - it's very likely that the current release will be the last Xorg release. [3] There is little interest in an X server that runs on hardware, or rather: there's little interest in the effort required to push out releases. Povilas did a great job in getting this one out but again, it's likely this is the last release. [4]

Xwayland - very different, it'll hang around for a long time because it's "just" a protocol translation layer. And of course the interest is there, so we have volunteers to do the releases.

So basically: expect Xwayland releases; be surprised (but not confused) by Xorg releases.

[1] Github of course doesn't exist yet because we're in the 80s. Time-travelling is complicated.
[2] Historical directory name, just accept it.
[3] Just like the previous release...
[4] At least until the next volunteer steps up. Turns out the problem "no-one wants to work on this" is easily fixed by "me! me! I want to work on this". A concept that is apparently quite hard to understand in the peanut gallery.

Posted Wed Sep 22 03:16:00 2021 Tags:

Gut Ding braucht Weile (good things take time). Almost three years ago, we added high-resolution wheel scrolling to the kernel (v5.0). The desktop stack however was first lagging and eventually left behind (except for an update a year ago or so, see here). However, I'm happy to announce that thanks to José Expósito's efforts, we have now pushed it across the line. So - in a socially distanced manner and masked up to your eyebrows - gather round children, for it is storytime.

Historical History

In the beginning, there was the wheel detent. Or rather there were 24 of them, dividing a 360 degree [1] movement of a wheel into a neat set of 15-degree clicks. libinput exposed those wheel clicks as part of the "pointer axis" namespace and you could get the click count with libinput_event_pointer_get_axis_discrete() (announced here). The degree value is exposed as libinput_event_pointer_get_axis_value(). Other scroll backends (finger-scrolling or button-based scrolling) expose the pixel-precise value via that same function.

In a "recent" Microsoft Windows version (Vista!), MS added the ability for wheels to trigger more than 24 clicks per rotation. The MS Windows API now treats one "traditional" wheel click as a value of 120; anything finer-grained will be a fraction thereof. You may have a mouse that triggers quarter-wheel clicks, each sending a value of 30. This makes for smoother scrolling and is supported(-ish) by a lot of mice introduced in the last 10 years [2]. Obviously, four small scrolls are nicer than one large scroll, so the UX is less bad than before.
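To make the arithmetic concrete, here is a small Python sketch (the function names are made up for illustration) converting 120-based values into wheel clicks and degrees, assuming a traditional 24-detent wheel:

```python
# The 120-based unit scheme introduced with Windows Vista: one
# traditional wheel click is 120, finer-grained wheels send fractions.
V120_PER_CLICK = 120
DETENTS_PER_ROTATION = 24                       # 24 detents per 360 degrees
DEGREES_PER_CLICK = 360 / DETENTS_PER_ROTATION  # 15 degrees per click

def v120_to_clicks(v120):
    """Convert a 120-based value to (fractions of) wheel clicks."""
    return v120 / V120_PER_CLICK

def v120_to_degrees(v120):
    """Convert a 120-based value to degrees of wheel rotation."""
    return v120_to_clicks(v120) * DEGREES_PER_CLICK

# A quarter-wheel click sends a value of 30:
assert v120_to_clicks(30) == 0.25
assert v120_to_degrees(30) == 3.75
# A traditional full click (120) is 15 degrees:
assert v120_to_degrees(120) == 15.0
```

Note that the degree value only makes sense if you know the wheel's resolution; the 120-based unit sidesteps that by being a purely logical unit.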

Now it's time for libinput to catch up with Windows Vista! For $reasons, the existing pointer axis API could not be changed to accommodate the high-res values, so a new API was added for scroll events. Read on for the details, you will believe what happens next.

Out with the old, in with the new

As of libinput 1.19, libinput has three new events: LIBINPUT_EVENT_POINTER_SCROLL_WHEEL, LIBINPUT_EVENT_POINTER_SCROLL_FINGER, and LIBINPUT_EVENT_POINTER_SCROLL_CONTINUOUS. These events reflect, perhaps unsurprisingly, scroll movements of a wheel, a finger or along a continuous axis (e.g. button scrolling). And they replace the old event LIBINPUT_EVENT_POINTER_AXIS. Those familiar with libinput will notice that the new event names now encode the scroll source in the event name. This makes them slightly more flexible and saves callers an extra call.

In terms of actual API, the new events come with two new functions. The first is libinput_event_pointer_get_scroll_value(): for the FINGER and CONTINUOUS events, the value returned is in "pixels" [3]; for the new WHEEL events, the value is in degrees. IOW this is a drop-in replacement for the old libinput_event_pointer_get_axis_value() function. The second is libinput_event_pointer_get_scroll_value_v120() which, for WHEEL events, returns the value in the 120-based logical units the kernel uses. libinput_event_pointer_has_axis() returns true if the given axis has a value, just as before. With those three calls you now get the data for the new events.

Backwards compatibility

To ensure backwards compatibility, libinput generates both old and new events so the rule for callers is: if you want to support the new events, just ignore the old ones completely. libinput also guarantees new events even on pre-5.0 kernels. This makes the old and new code easy to ifdef out, and once you get past the immediate event handling the code paths are virtually identical.
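The caller-side rule can be sketched with mocked-up events in Python (the constants mirror the libinput enum names, everything else is invented for illustration, not the real C API): handle the three new scroll events and drop the old axis event on the floor, since libinput delivers both.

```python
# Mock event type names mirroring the libinput enum; the dispatch
# logic below is a stand-in for a real libinput event loop.
SCROLL_WHEEL = "LIBINPUT_EVENT_POINTER_SCROLL_WHEEL"
SCROLL_FINGER = "LIBINPUT_EVENT_POINTER_SCROLL_FINGER"
SCROLL_CONTINUOUS = "LIBINPUT_EVENT_POINTER_SCROLL_CONTINUOUS"
POINTER_AXIS = "LIBINPUT_EVENT_POINTER_AXIS"  # the old, replaced event

def handle_event(event_type, value, handled):
    # New-style callers process the scroll-source-specific events...
    if event_type in (SCROLL_WHEEL, SCROLL_FINGER, SCROLL_CONTINUOUS):
        handled.append((event_type, value))
    # ...and ignore the old event completely, because libinput
    # generates both old and new events for backwards compatibility.
    elif event_type == POINTER_AXIS:
        pass

handled = []
events = [(SCROLL_WHEEL, 15.0), (POINTER_AXIS, 15.0), (SCROLL_FINGER, 3.0)]
for ev, value in events:
    handle_event(ev, value, handled)

# The duplicate POINTER_AXIS event was dropped, no double-scrolling:
assert handled == [(SCROLL_WHEEL, 15.0), (SCROLL_FINGER, 3.0)]
```

The point of the duplication is that an old caller keeps working unchanged, while a new caller opts in simply by handling the new events and skipping the old one.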

When, oh when?

These changes have been merged into the libinput main branch and will be part of libinput 1.19. Which is due to be released over the next month or so, so feel free to work backwards from that for your favourite distribution.

Having said that, libinput is merely the lowest block in the Jenga tower that is the desktop stack. José linked to the various MRs in the upstream libinput MR, so if you're on your seat's edge waiting for e.g. GTK to get this, well, there's an MR for that.

[1] That's degrees of an angle, not Fahrenheit
[2] As usual, on a significant number of those you'll need to know whatever proprietary protocol the vendor deemed to be important IP. Older MS mice stand out here because they use straight HID.
[3] libinput doesn't really have a concept of pixels, but it has a normalized pixel that movements are defined as. Most callers take that as real pixels except for the high-resolution displays where it's appropriately scaled.

Posted Tue Aug 31 07:50:00 2021 Tags:

I've been working on portals recently and one of the issues for me was that the documentation just didn't quite hit the sweet spot. At least the bits I found were either too high-level or too implementation-specific. So here's a set of notes on how a portal works, in the hope that this is actually correct.

First, portals are supposed to be a way for sandboxed applications (flatpaks) to trigger functionality they don't have direct access to. The prime example: opening a file without the application having access to $HOME. This is done by the applications talking to portals instead of doing the functionality themselves.

There is really only one portal process: /usr/libexec/xdg-desktop-portal, started as a systemd user service. That process owns a DBus bus name (org.freedesktop.portal.Desktop) and an object on that name (/org/freedesktop/portal/desktop). You can see that bus name and object with D-Feet; from DBus' POV there's nothing special about it. What makes it the portal is simply that the application running inside the sandbox can talk to that DBus name and thus call the various methods. Obviously the xdg-desktop-portal needs to run outside the sandbox to do its thing.

There are multiple portal interfaces, all available on that one object. Those interfaces have names like org.freedesktop.portal.FileChooser (to open/save files). The xdg-desktop-portal implements those interfaces and thus handles any method calls on those interfaces. So where an application is sandboxed, it doesn't implement the functionality itself, it instead calls e.g. the OpenFile() method on the org.freedesktop.portal.FileChooser interface. Then it gets an fd back and can read the content of that file without needing full access to the file system.

Some interfaces are fully handled within xdg-desktop-portal. For example, the Camera portal checks a few things internally and pops up a dialog for the user to confirm access if needed [1], but otherwise there's nothing else involved with this specific method call.

Other interfaces have a backend "implementation" DBus interface. For example, the org.freedesktop.portal.FileChooser interface has a org.freedesktop.impl.portal.FileChooser (notice the "impl") counterpart. xdg-desktop-portal does not implement those impl.portals; it instead routes the DBus calls to the respective "impl.portal". Your sandboxed application calls OpenFile(), xdg-desktop-portal now calls OpenFile() on org.freedesktop.impl.portal.FileChooser. That interface returns a value; xdg-desktop-portal extracts it and returns it to the application in response to the original OpenFile() call.

What provides those impl.portals doesn't matter to xdg-desktop-portal, and this is where things are hot-swappable [2]. There are GTK- and Qt-specific implementations with xdg-desktop-portal-gtk and xdg-desktop-portal-kde, and another one is provided by GNOME Shell directly. You can check the files in /usr/share/xdg-desktop-portal/portals/ to see which impl.portal is provided on which bus name. The reason those impl.portals exist is so they can be native to the desktop environment - regardless of what application you're running and with a generic xdg-desktop-portal, you see the native file chooser dialog for your desktop environment.
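Those .portal files are plain keyfiles. Here is a hedged Python sketch of how the interface-to-bus-name mapping could be built from one; the file content below is a made-up example in the style of xdg-desktop-portal-gtk's file, not copied from a real installation:

```python
import configparser

# Illustrative .portal file content; real files live in
# /usr/share/xdg-desktop-portal/portals/ (keys are assumptions
# based on the keyfile format described in the text).
portal_file = """
[portal]
DBusName=org.freedesktop.impl.portal.desktop.gtk
Interfaces=org.freedesktop.impl.portal.FileChooser;org.freedesktop.impl.portal.AppChooser;
UseIn=gnome
"""

parser = configparser.ConfigParser()
parser.read_string(portal_file)

bus_name = parser["portal"]["DBusName"]
# The Interfaces key is a semicolon-separated list; drop empty entries
# caused by the trailing semicolon.
interfaces = [i for i in parser["portal"]["Interfaces"].split(";") if i]

# Mapping the router needs: which bus name to call for which interface.
impl_for = {iface: bus_name for iface in interfaces}

assert impl_for["org.freedesktop.impl.portal.FileChooser"] == \
    "org.freedesktop.impl.portal.desktop.gtk"
```

With a table like this in hand, routing an incoming portal call is just a dictionary lookup on the impl interface name.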

So the full call sequence is:

  • At startup, xdg-desktop-portal parses the /usr/share/xdg-desktop-portal/portals/*.portal files to know which impl.portal interface is provided on which bus name
  • The application calls OpenFile() on the org.freedesktop.portal.FileChooser interface on the object path /org/freedesktop/portal/desktop. It can do so because the bus name this object sits on is not restricted by the sandbox
  • xdg-desktop-portal receives that call. This is a portal with an impl.portal backend, so xdg-desktop-portal calls OpenFile() on the bus name that provides the org.freedesktop.impl.portal.FileChooser interface (as previously established by reading the *.portal files)
  • Assuming xdg-desktop-portal-gtk provides that portal at the moment, that process now pops up a GTK FileChooser dialog that runs outside the sandbox. User selects a file
  • xdg-desktop-portal-gtk sends back the fd for the file to the xdg-desktop-portal, and the impl.portal parts are done
  • xdg-desktop-portal receives that fd and sends it back as reply to the OpenFile() method in the normal portal
  • The application receives the fd and can read the file now
A few details here aren't fully correct, but it's correct enough to understand the sequence - the exact details depend on the method call anyway.

Finally: because of DBus restrictions, the various methods in the portal interfaces don't just reply with values. Instead, the xdg-desktop-portal creates a new org.freedesktop.portal.Request object and returns the object path for that. Once that's done, the method is complete from DBus' POV. When the actual return value arrives (e.g. the fd), that value is passed via a signal on that Request object, which is then destroyed. This roundabout way is done for purely technical reasons: regular DBus methods would time out while the user picks a file path.
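The request/signal pattern can be sketched with plain Python callbacks (all class and method names here are invented for illustration; the real portals do this over DBus with a Response signal on the Request object):

```python
# Sketch of the pattern: the method call returns a handle immediately,
# and the real result arrives later via a signal on that handle.
class Request:
    """Stand-in for an org.freedesktop.portal.Request object."""
    def __init__(self, path):
        self.object_path = path
        self._callbacks = []

    def connect_response(self, callback):
        self._callbacks.append(callback)

    def emit_response(self, result):
        # In the real portal this is a DBus signal, after which the
        # Request object is destroyed.
        for cb in self._callbacks:
            cb(result)

class FileChooserPortal:
    """Stand-in for the portal side of OpenFile()."""
    def __init__(self):
        self._counter = 0

    def open_file(self):
        # Return immediately with a Request, so the DBus method call
        # completes instead of timing out while the user picks a file.
        self._counter += 1
        return Request(f"/org/freedesktop/portal/desktop/request/{self._counter}")

portal = FileChooserPortal()
req = portal.open_file()

results = []
req.connect_response(results.append)
# Much later, once the user has picked a file, the backend delivers
# the result (here a fake fd number) via the signal:
req.emit_response({"fd": 7})

assert results == [{"fd": 7}]
```

The synchronous DBus call is cheap and fast; everything slow (the user) happens on the signal path.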

Anyway. Maybe this helps someone understand how the portal bits fit together.

[1] it does so using another portal but let's ignore that
[2] not really hot-swappable though. You need to restart xdg-desktop-portal but not your host. So luke-warm-swappable only

Edit Sep 01: clarify that it's not GTK/Qt providing the portals, but xdg-desktop-portal-gtk and -kde

Posted Tue Aug 31 06:29:00 2021 Tags: