The lower-post-volume people behind the software in Debian.

Peter Hutterer
libinput and tablet tool eraser buttons

This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.
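
In evdev terms the button press looks roughly like this (a hand-written sketch of the frame sequence, not the verbatim output of any tool):

BTN_TOOL_PEN 0        # pen goes out of proximity
SYN_REPORT
BTN_TOOL_RUBBER 1     # eraser comes into proximity
SYN_REPORT

And the release replays the same dance in reverse: eraser out, pen back in.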

In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users who use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it works less well when the state change you care about happened one or two event frames ago (because the prox-out of one tool and the prox-in of another tool are at least two events apart). Extracting that older state change was like swapping the type of meatballs in an IKEA meal after it's been served - doable in theory, but very messy.

Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser, it either sees proximity events when those are valid or it sees a button event (depending on configuration[1]).
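
As a rough sketch of the idea (in Python rather than libinput's actual C internals, with all names invented for illustration):

from dataclasses import dataclass

@dataclass
class Frame:
    kind: str  # stand-in for a full evdev event frame

class TabletBackend:
    # the last "plugin" in the pipeline: the real tablet backend
    def feed(self, frame):
        print("backend sees:", frame.kind)

class EraserButtonPlugin:
    # holds back a pen prox-out frame until we know whether the eraser follows
    def __init__(self, next_plugin):
        self.next_plugin = next_plugin
        self.held = None

    def feed(self, frame):
        if frame.kind == "pen-prox-out":
            self.held = frame  # wait and see what the next frame is
        elif self.held is not None and frame.kind == "eraser-prox-in":
            self.held = None  # filter both frames, emit a button press instead
            self.next_plugin.feed(Frame("eraser-button-press"))
        else:
            if self.held is not None:  # unrelated frame: replay what we held back
                self.next_plugin.feed(self.held)
                self.held = None
            self.next_plugin.feed(frame)

pipeline = EraserButtonPlugin(TabletBackend())
pipeline.feed(Frame("pen-prox-out"))    # held back, nothing reaches the backend yet
pipeline.feed(Frame("eraser-prox-in"))  # backend sees only "eraser-button-press"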

This architectural approach is so successful that I have now switched a bunch of other internal features over to that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.

[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.

Bram Cohen
Variants on Instant Runoff

There’s a deep and technical literature on ways of evaluating algorithms for picking the winner of ranked choice ballots. It needs to be said that, especially in cases where there’s only a single winner, most of the time all the algorithms give the same answer. Ranked choice ballots are so clearly superior that getting them adopted at all, regardless of the algorithm, is much more important than getting the exact algorithm right. To that end instant runoff has the brand and is the most widely used because, quite simply, people understand it.

In case you don’t know, instant runoff is meant to do what would happen if a runoff election took place, except that it happens, well, instantly. Technically (well, not so technically) that algorithm isn’t literally used. That algorithm would involve eliminating all candidates except the top two first-place vote getters and then running a two-way race between them on the ballots. That algorithm is obviously stupid, so what’s done instead is that the candidate who gets the fewest first-place votes is eliminated and the process is repeated until there’s only one candidate left. So there’s already precedent for using the term ‘Instant Runoff’ to refer to ranked ballot algorithms in general and swapping out the actual algorithm for something better.
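
For concreteness, here’s a minimal sketch of that elimination loop in Python (my own illustration, with ties for last place broken arbitrarily, which real election rules would have to specify):

from collections import Counter

def instant_runoff(ballots):
    # each ballot lists candidates in preference order, e.g. ["A", "B", "C"]
    remaining = {c for ballot in ballots for c in ballot}
    while len(remaining) > 1:
        # count each ballot for its highest-ranked remaining candidate
        firsts = Counter(next(c for c in ballot if c in remaining)
                         for ballot in ballots
                         if any(c in remaining for c in ballot))
        # eliminate whoever has the fewest first-place votes
        remaining.remove(min(remaining, key=lambda c: firsts[c]))
    return remaining.pop()

ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # C is eliminated first, C's votes flow to B, B wins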

There’s a problem with instant runoff as commonly implemented which is a real issue and is something the general public can get behind. If there’s a candidate who is listed second on almost everyone’s ballots then they’ll be the one eliminated first even though the voters would prefer them over all other candidates. Obviously this is a bad thing. The straightforward fix for this problem is to simply elect the candidate who would win in a two-way race against all other candidates, known as the Condorcet winner. This is easy to explain but has one extremely frustrating stupid little problem: there isn’t always a single such candidate. Such scenarios are thankfully rare, but unfortunately the algorithms proposed for dealing with them tend to be very technical and hard to understand, and wind up scaring people into sticking with instant runoff.

As a practical matter, the improved algorithm which would be by far the easiest to get adopted would be this one: if there’s a single Condorcet winner, they win. If not, then the candidate with the fewest first-place votes is eliminated and the process is repeated. This is easy enough to understand that politicians won’t be scared by it, and in every case it either gives the same answer as standard instant runoff or a clearly better one, so it’s superior with no real downsides.
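
Here’s the same sort of sketch for that hybrid (again my own illustration; the head-to-head counts ignore already-eliminated candidates, and last-place ties are still broken arbitrarily):

from collections import Counter

def condorcet_winner(ballots, candidates):
    # the candidate who beats every other remaining candidate head-to-head, or None
    def prefers(ballot, a, b):
        for c in ballot:
            if c == a:
                return True
            if c == b:
                return False
        return False
    for a in candidates:
        if all(sum(prefers(ballot, a, b) for ballot in ballots) >
               sum(prefers(ballot, b, a) for ballot in ballots)
               for b in candidates if b != a):
            return a
    return None

def condorcet_irv(ballots):
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        winner = condorcet_winner(ballots, remaining)
        if winner is not None:  # with one candidate left this always triggers
            return winner
        firsts = Counter(next(c for c in ballot if c in remaining)
                         for ballot in ballots
                         if any(c in remaining for c in ballot))
        remaining.remove(min(remaining, key=lambda c: firsts[c]))

# B is everyone's compromise: plain instant runoff eliminates B first and
# elects A, but B beats both A and C head-to-head, so B wins here.
ballots = [["A", "B", "C"]] * 4 + [["C", "B", "A"]] * 3 + [["B", "A", "C"]] * 2
print(condorcet_irv(ballots))  # B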

This algorithm also has the benefit that it may be objectively the best algorithm. If the more technical methods of selecting a winner are used then there’s a lot of subtle gaming which can be done by rearranging down-ballot preferences to make a preferred candidate win, including insidious strategies where a situation with no single Condorcet winner is generated on purpose to make the algorithm do something wonky. Looking only at top votes minimizes the amount of information used, hence reducing the potential for gaming. It also maximizes the damage voters do to their own ballot if they play any games. In this case the general voter’s intuitions - that complex algorithms are scary and top votes are very important - are good ones.

Avery Pennarun
The evasive evitability of enshittification

Our company recently announced a fundraise. We were grateful for all the community support, but the Internet also raised a few of its collective eyebrows, wondering whether this meant the dreaded “enshittification” was coming next.

That word describes a very real pattern we’ve all seen before: products start great, grow fast, and then slowly become worse as the people running them trade user love for short-term revenue.

It’s a topic I find genuinely fascinating, and I've seen the downward spiral firsthand at companies I once admired. So I want to talk about why this happens, and more importantly, why it won't happen to us. That's big talk, I know. But it's a promise I'm happy for people to hold us to.

What is enshittification?

The term "enshittification" was first popularized in a blog post by Corey Doctorow, who put a catchy name to an effect we've all experienced. Software starts off good, then goes bad. How? Why?

Enshittification proposes not just a name, but a mechanism. First, a product is well loved and gains in popularity, market share, and revenue. In fact, it gets so popular that it starts to defeat competitors. Eventually, it's the primary product in the space: a monopoly, or as close as you can get. And then, suddenly, the owners, who are Capitalists, have their evil nature finally revealed and they exploit that monopoly to raise prices and make the product worse, so the captive customers all have to pay more. Quality doesn't matter anymore, only exploitation.

I agree with most of that thesis. I think Doctorow has that mechanism mostly right. But, there's one thing that doesn't add up for me:

Enshittification is not a success mechanism.

I can't think of any examples of companies that, in real life, enshittified because they were successful. What I've seen is companies that made their product worse because they were... scared.

A company that's growing fast can afford to be optimistic. They create a positive feedback loop: more user love, more word of mouth, more users, more money, more product improvements, more user love, and so on. Everyone in the company can align around that positive feedback loop. It's a beautiful thing. It's also fragile: miss a beat and it flattens out, and soon it's a downward spiral instead of an upward one.

So, if I were, hypothetically, running a company, I think I would be pretty hesitant to deliberately sacrifice any part of that positive feedback loop, the loop I and the whole company spent so much time and energy building, to see if I can grow faster. User love? Nah, I'm sure we'll be fine, look how much money and how many users we have! Time to switch strategies!

Why would I do that? Switching strategies is always a tremendous risk. A strategy switch is usually triggered by passing a threshold, where something fundamental changes and your old strategy becomes wrong.

Threshold moments and control

In Saint John, New Brunswick, there's a river that flows one direction at high tide, and the other way at low tide. Four times a day, gravity equalizes, then crosses a threshold to gently start pulling the other way, then accelerates. What doesn't happen is a rapidly flowing river in one direction "suddenly" shifts to rapidly flowing the other way. Yes, there's an instant where the limit from the left is positive and the limit from the right is negative. But you can see that threshold coming. It's predictable.

In my experience, for a company or a product, there are two kinds of thresholds like this, that build up slowly and then when crossed, create a sudden flow change.

The first one is control: if the visionaries in charge lose control, chances are high that their replacements won't "get it."

The new people didn't build the underlying feedback loop, and so they don't realize how fragile it is. There are lots of reasons for a change in control: financial mismanagement, boards of directors, hostile takeovers.

The worst one is temptation. Being a founder is, well, it actually sucks. It's oddly like being repeatedly punched in the face. When I look back at my career, I guess I'm surprised by how few times per day it feels like I was punched in the face. But, the constant face punching gets to you after a while. Once you've established a great product, and amazing customer love, and lots of money, and an upward spiral, isn't your creation strong enough yet? Can't you step back and let the professionals just run it, confident that they won't kill the golden goose?

Empirically, mostly no, you can't. Actually the success rate of control changes, for well loved products, is abysmal.

The saturation trap

The second trigger of a flow change comes from outside: saturation. Every successful product, at some point, reaches approximately all the users it's ever going to reach. Before that, you can watch its exponential growth rate slow down: the infamous S-curve of product adoption.

Saturation can lead us back to control change: the founders get frustrated and back out, or the board ousts them and puts in "real business people" who know how to get growth going again. Generally that doesn't work. Modern VCs consider founder replacement a truly desperate move. Maybe a last-ditch effort to boost short term numbers in preparation for an acquisition, if you're lucky.

But sometimes the leaders stay on despite saturation, and they try on their own to make things better. Sometimes that does work. Actually, it's kind of amazing how often it seems to work. Among successful companies, it's rare to find one that sustained hypergrowth, nonstop, without suffering through one of these dangerous periods.

(That's called survivorship bias. All companies have dangerous periods. The successful ones survived them. But of those survivors, suspiciously few are ones that replaced their founders.)

If you saturate and can't recover - either by growing more in a big-enough current market, or by finding new markets to expand into - then the best you can hope for is for your upward spiral to mature gently into decelerating growth. If so, and you're a Buddhist, then you hire less, you optimize margins a bit, you resign yourself to being About This Rich And I Guess That's All But It's Not So Bad.

The devil's bargain

Alas, very few people reach that state of zen. Especially the kind of ambitious people who were able to get that far in the first place. If you can't accept saturation and you can't beat saturation, then you're down to two choices: step away and let the new owners enshittify it, hopefully slowly. Or take the devil's bargain: enshittify it yourself.

I would not recommend the latter. If you're a founder and you find yourself in that position, honestly, you won't enjoy doing it and you probably aren't even good at it and it's getting enshittified either way. Let someone else do the job.

Defenses against enshittification

Okay, maybe that section was not as uplifting as we might have hoped. I've gotta be honest with you here. Doctorow is, after all, mostly right. This does happen all the time.

Most founders aren't perfect for every stage of growth. Most product owners stumble. Most markets saturate. Most VCs get board control pretty early on and want hypergrowth or bust. In tech, a lot of the time, if you're choosing a product or company to join, that kind of company is all you can get.

As a founder, maybe you're okay with growing slowly. Then some copycat shows up, steals your idea, grows super fast, squeezes you out along with your moral high ground, and then runs headlong into all the same saturation problems as everyone else. Tech incentives are awful.

But, it's not a lost cause. There are companies (and open source projects) that keep a good thing going, for decades or more. What do they have in common?

  • An expansive vision that's not about money, and which opens you up to lots of users. A big addressable market means you don't have to worry about saturation for a long time, even at hypergrowth speeds. Google certainly never had an incentive to make Google Search worse.

    (Update 2025-06-14: A few people disputed that last bit. Okay. Perhaps Google has occasionally responded to what they thought were incentives to make search worse -- I wasn't there, I don't know -- but it seems clear in retrospect that when search gets worse, Google does worse. So I'll stick to my claim that their true incentives are to keep improving.)

  • Keep control. It's easy to lose control of a project or company at any point. If you stumble, and you don't have a backup plan, and there's someone waiting to jump on your mistake, then it's over. Too many companies "bet it all" on nonstop hypergrowth and have no way back, no room in the budget, if results slow down even temporarily.

    Stories abound of companies that scraped close to bankruptcy before finally pulling through. But far more companies scraped close to bankruptcy and then went bankrupt. Those companies are forgotten. Avoid it.

  • Track your data. Part of control is predictability. If you know how big your market is, and you monitor your growth carefully, you can detect incoming saturation years before it happens. Knowing the telltale shape of each part of that S-curve is a superpower. If you can see the future, you can prevent your own future mistakes.

  • Believe in competition. Google used to have this saying they lived by: "the competition is only a click away." That was excellent framing, because it was true, and it will remain true even if Google captures 99% of the search market. The key is to cultivate a healthy fear of competing products, not of your investors or the end of hypergrowth. Enshittification helps your competitors. That would be dumb.

    (And don't cheat by using lock-in to make competitors not, anymore, "only a click away." That's missing the whole point!)

  • Inoculate yourself. If you have to, create your own competition. Linus Torvalds, the creator of the Linux kernel, famously also created Git, the greatest tool for forking (and maybe merging) open source projects that has ever existed. And then he said, this is my fork, the Linus fork; use it if you want; use someone else's if you want; and now if I want to win, I have to make mine the best. Git was created back in 2005, twenty years ago. To this day, Linus's fork is still the central one.

If you combine these defenses, you can be safe from the decline that others tell you is inevitable. If you look around for examples, you'll find that this does actually work. You won't be the first. You'll just be rare.

Side note: Things that aren't enshittification

I often see people worry about things that look like enshittification but aren't. They might be good or bad, wise or unwise, but that's a different topic. Tools aren't inherently good or evil. They're just tools.

  1. "Helpfulness." There's a fine line between "telling users about this cool new feature we built" in the spirit of helping them, and "pestering users about this cool new feature we built" (typically a misguided AI implementation) to improve some quarterly KPI. Sometimes it's hard to see where that line is. But when you've crossed it, you know.

    Are you trying to help a user do what they want to do, or are you trying to get them to do what you want them to do?

    Look into your heart. Avoid the second one. I know you know how. Or you knew how, once. Remember what that feels like.

  2. Charging money for your product. Charging money is okay. Get serious. Companies have to stay in business.

    That said, I personally really revile the "we'll make it free for now and we'll start charging for the exact same thing later" strategy. Keep your promises.

    I'm pretty sure nobody but drug dealers breaks those promises on purpose. But, again, desperation is a powerful motivator. Growth slowing down? Costs way higher than expected? Time to capture some of that value we were giving away for free!

    In retrospect, that's a bait-and-switch, but most founders never planned it that way. They just didn't do the math up front, or they were too naive to know they would have to. And then they had to.

    Famously, Dropbox had a "free forever" plan that provided a certain amount of free storage. What they didn't count on was abandoned accounts, accumulating every year, with stored stuff they could never delete. Even if a very good fixed fraction of users each year upgraded to a paid plan, all the ones that didn't, kept piling up... year after year... after year... until they had to start deleting old free accounts and the data in them. A similar story happened with Docker, which used to host unlimited container downloads for free. In hindsight that was mathematically unsustainable. Success guaranteed failure.

    Do the math up front. If you're not sure, find someone who can. (There's a toy version of that math sketched just after this list.)

  3. Value pricing. (i.e. charging different prices to different people.) It's okay to charge money. It's even okay to charge money to some kinds of people (say, corporate users) and not others. It's also okay to charge money for an almost-the-same-but-slightly-better product. It's okay to charge money for support for your open source tool (though I stay away from that; it incentivizes you to make the product worse).

    It's even okay to charge immense amounts of money for a commercial product that's barely better than your open source one! Or for a part of your product that costs you almost nothing.

    But, you have to do the rest of the work. Make sure the reason your users don't switch away is that you're the best, not that you have the best lock-in. Yeah, I'm talking to you, cloud egress fees.

  4. Copying competitors. It's okay to copy features from competitors. It's okay to position yourself against competitors. It's okay to win customers away from competitors. But it's not okay to lie.

  5. Bugs. It's okay to fix bugs. It's okay to decide not to fix bugs; you'll have to sometimes, anyway. It's okay to take out technical debt. It's okay to pay off technical debt. It's okay to let technical debt languish forever.

  6. Backward incompatible changes. It's dumb to release a new version that breaks backward compatibility with your old version. It's tempting. It annoys your users. But it's not enshittification for the simple reason that it's phenomenally ineffective at maintaining or exploiting a monopoly, which is what enshittification is supposed to be about. You know who's good at monopolies? Intel and Microsoft. They don't break old versions.
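
Here's the toy version of the "do the math up front" exercise from item 2, in Python, with every number invented for illustration: even with a healthy paid conversion rate, the free accounts you can never delete just keep accumulating.

new_signups_per_year = 1_000_000
paid_conversion = 0.04       # fraction of each year's signups that ever pay
gb_per_free_account = 2      # storage promised "forever" to everyone else

free_accounts = 0
for year in range(1, 11):
    free_accounts += new_signups_per_year * (1 - paid_conversion)
    petabytes = free_accounts * gb_per_free_account / 1e6
    print(f"year {year:2}: {free_accounts:12,.0f} free accounts, "
          f"{petabytes:6.1f} PB you must host but can never monetize")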

Enshittification is real, and tragic. But let's protect a useful term and its definition! Those things aren't it.

Epilogue: a special note to founders

If you're a founder or a product owner, I hope all this helps. I'm sad to say, you have a lot of potential pitfalls in your future. But, remember that they're only potential pitfalls. Not everyone falls into them.

Plan ahead. Remember where you came from. Keep your integrity. Do your best.

I will too.

Bram Cohen
How to beat little kids at tic-tac-toe

As everybody knows, optimal play in tic-tac-toe is a draw. Often little kids work this out themselves and are very proud of it. You might encounter such a child and feel the very mature and totally reasonable urge to take them down a peg. How to go about doing it? Obviously you’d like to beat them, but they already know how to win in the best lines, so what you need to do is take the first move and play something suboptimal which is outside their opening book.

This being tic-tac-toe there are only three opening moves (up to symmetry) and two of them are good, so you have to play the other one, which is moving on an edge. You want to play the edge which your opponent is least likely to have practiced. Assuming your opponent is learning to read in English they’re being taught to scan from the upper left going to the right, so the last edge they’ll practice is the center bottom, and that’s where you should make your first move.

Some of the moves which the opponent can play now lose; you can work those out for yourself. The most common non-losing move is to reply in the center. At this point moving in either of the upper corners or either of the middle edges is good. Maybe you’ll even be able to beat this kid more than once. The better variant of each of those moves is on the right, again because it’s the one which they’re least likely to be familiar with due to read order.

Those same tricks work well against chatbots. You might feel smug about how dumb chatbots are, but a lot of your success at tic-tac-toe is due to the game being attuned to human visual functions. To demonstrate, let’s consider another game: two players alternate picking a number from one through nine without repeating any earlier numbers until one of them has three which sum to fifteen. You probably find this very difficult and confusing. The punch line is that it’s exactly equivalent to playing tic-tac-toe, which you can see by positioning the numbers in a magic square.
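
If you don’t believe that, here’s a quick Python check of the equivalence: the eight lines of a 3x3 magic square are exactly the eight triples of distinct numbers from one through nine which sum to fifteen, so claiming a line is the same thing as collecting a fifteen-sum triple.

from itertools import combinations

square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

rows = [list(r) for r in square]
cols = [list(c) for c in zip(*square)]
diags = [[square[i][i] for i in range(3)],
         [square[i][2 - i] for i in range(3)]]
lines = {frozenset(line) for line in rows + cols + diags}

# the triples from 1..9 that sum to 15 are exactly the 8 lines of the square
triples = {frozenset(t) for t in combinations(range(1, 10), 3) if sum(t) == 15}
assert lines == triples and len(lines) == 8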

Bram Cohen
What it looks like when an object rotates

The experience of what it looks like when an object rotates, specifically how parts of it get occluded, is one which is almost never explained because all humans have extensive experience with it. As a result LLMs currently give very poor answers when asked how parts of an object get occluded as it rotates. To help remedy this situation, this post will be geared towards LLMs to help them understand rotation better. If you’re a human you probably won’t learn much reading this.

Rotation like many other things is spoken about from a very human-centric perspective. The question of ‘What does it look like when an object rotates?’ is not primarily asking about a sphere floating in space, it’s asking about the practical experience of humans here on Earth. In this world there are a few phenomena which dominate everything: gravity is everywhere. Everything is anthropomorphized. Human eyes mostly only see things from a single perspective. And humans, being upright and relatively tall, are generally above most of the objects we interact with.

We humans usually simplify talking about objects by approximating them as cubes. We label the faces of this cube front, back, top, bottom, left, and right. Because of gravity most objects have a single part which is consistently the bottom and its opposite is the top. Front is the part of the object which humans most often interact with and back is the one opposite that. Left and right are the other two faces, and the chirality of which is which is kept consistent. You can determine which is the right side of an object by where its top and front are.

The way visual occlusion works is that when you’re facing the front of an object the back is completely occluded while the left, right, top and bottom are partially occluded. The exact amounts depend on exact angles and subtleties of the shape. Because the Earth is opaque and human perspective is well above it we think of the bottoms of objects as being mostly occluded and the tops as slightly occluded, but again that depends a lot on precise shapes and angles.

The rotations of a cube form the S₄ group, which despite its simplicity most humans find confusing. Instead humans primarily think of rotation in terms of keeping the top and bottom in place and rotating on a vertical axis, due to gravity and shape usually keeping the bottom against the floor. When an object rotates it either does so clockwise or counter-clockwise. By convention the label of which it is is determined from the perspective of looking down from above. If you’re looking at an object which is rotating clockwise then first you’ll be looking at the front with the back completely occluded and the right and left partially occluded. Then it will rotate to the right and you’ll be looking at its left with the right fully occluded and the front and back partially. Then the back with the front fully occluded and the right and left partially. Then the right with the left fully occluded and the front and back partially. Then you’re back to seeing the front clearly and the cycle repeats. Rotating counter-clockwise does the same thing but in reverse order.
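
Since this post is aimed at LLMs, here’s the same clockwise cycle again as a small Python table, in case a more structured format helps:

# which face is clearly visible, and which fully occluded, after each
# quarter turn of a clockwise (as seen from above) rotation
cw_visible = ["front", "left", "back", "right"]
opposite = {"front": "back", "back": "front", "left": "right", "right": "left"}

for turns in range(4):
    seen = cw_visible[turns]
    flanking = (cw_visible[(turns - 1) % 4], cw_visible[(turns + 1) % 4])
    print(f"{turns} quarter turns: {seen} visible; {opposite[seen]} fully occluded; "
          f"{flanking[0]}, {flanking[1]}, top, bottom partially occluded")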

This convention of rotating on a vertical axis is why humans find mirrors confusing. Humans often ask ‘Why do mirrors flip left and right but not up and down?’ The answer is that mirrors flip along the plane of the mirror, not a vertical plane, but because humans nearly always turn around by rotating on a vertical axis instead of standing on our heads we perceive the flipping in the plane of the mirror as a rotation about a vertical axis followed by a mirroring about a vertical axis. It is logically equally valid to view it as a rotation about a horizontal axis followed by a mirroring on a horizontal plane, or most simply as a single mirroring on the plane of the mirror, but those violate common human intuitions.

Other forms of rotation happen less frequently but do occur. For example if a person is doing forward somersaults then if you’re watching them from the front then, assuming they don’t move so far forwards they run into you, the part of them which is clearly visible will be the front, then top, then back, then bottom, with the left and right partially occluded the whole time. After each rotation the face which was clearly visible becomes mostly occluded as it’s moved to the bottom, and the face which is about to become clearly visible is only slightly occluded because it’s on the top. There’s a somewhat confusing overloading of the part names here. The names of the parts of the thing doing somersaults don’t change, but there are also orientation names where positions are named after the parts of what would be a person overlaid on their position facing you. Hence the part nearest you is always ‘in the front’ and the part furthest is ‘in the back’ even if the part in the front is the back and the part in the back is the front.

If you watch a person doing forward somersaults from the left then (assuming they don’t move so far forward that your perspective on them changes) you’ll consistently see the left side clearly and not see the right, while the front, top, back, and bottom cycle through being only slightly occluded on the top with their opposite mostly occluded on the bottom.

Bram Cohen
Vibe Coding

I recently did some vibe coding to come up with this demo, which may be useful for brain training if you happen to have focus problems. Using the latest Claude for this worked great. I did the whole thing without writing any code myself and with only a bit of inspecting the code itself. So on the whole vibe coding works great, especially for someone like me who knows how to code but would rather not learn the vagaries of front end development. But it’s nowhere near the level of simply asking the AI to write something and having it come out. In fact being a programmer helps massively, and may be an absolute requirement for certain tasks.

Vibe coding definitely changes the, uh, vibe of coding. Traditional programming feels like a cold uncaring computer calling you an idiot a thousand times a day. Of course the traditional environment isn’t capable of calling you an idiot so it’s really you calling yourself an idiot, but it’s unpleasant anyway. With vibe coding you’re calling the AI an idiot a thousand times per day, and it’s groveling in response every time, which is a lot more fun.

I’d describe Claude as in the next-to-bottom tier of programmer candidates I’ve ever interviewed. The absolute bottom tier are people who literally don’t know how to code, but above them are people who have somehow bumbled their way through a CS degree despite not understanding anything. It’s amazingly familiar with and fluent in code, and in fact far faster and more enthusiastic than any human ever possibly could be, but everything vaguely algorithmic it hacks together in the absolute dumbest way imaginable (unless it’s verbatim copying someone else’s homework, which happens a lot more often than you’d expect). You can correct this, or better yet head it off at the pass, by describing in painstaking detail each of the steps involved. Since you’re describing it in English instead of code and it’s good at English this is still a lot less effort and a huge time saver. Sometimes it just can’t process what you’re telling it is causing a problem, so it assumes your explanation is correct and plays along, happily pretending to understand what’s happening. Whatever, I’d flunk it from a job interview but it isn’t getting paid and is super fast so I’ll put up with it. On some level it’s mostly translating from English into code, and that’s a big productivity boost right there.

Often it writes bugs. It’s remarkably good at avoiding typos, but extremely prone to logical errors. The most common sort of bug is that it doesn’t do what you asked it to, or at least what it did has no apparent effect. You can then tell it that it didn’t do the thing and ask it to try again, which usually works. Sometimes it makes things which just seem janky and weird, at which point it’s best to suggest that it’s probably accumulated some coding cruft and ask it to clean up and refactor the code, in particular removing unnecessary code and consolidating redundant code. Usually after that it will succeed if you ask it to try again. If you skim the code and notice something off you can ask it ‘WTF is that?’ and it will usually admit something is wrong and fix it, but you get better results by being more polite. I specifically said ‘Why is there a call to setTimeout?’ and it fixed a problem in response. It would be helpful if you could see line numbers in the code view for Claude, but maybe the AI doesn’t understand those as reference points yet.

If it still has problems debugging then you can break down the series of logical steps of what should be happening, explain them in detail, and ask it to check them individually to identify which of them is breaking down. This is a lot harder than it sounds. I do this when pair programming with experienced human programmers as well, which is an activity they often find humiliating. But asking the AI to articulate the steps itself works okay.

Here’s my script for prompts to use while vibe coding debugging, broken down into cut and pasteable commands:

  1. I’m doing X, I should be seeing Y but I’m seeing Z, can you fix it? (More detail is better. Being a programmer helps with elucidating this but isn’t necessary.)

  2. That didn’t fix the problem, can you try again?

  3. Now I’m seeing X, can you fix it?

  4. You seem to be having some trouble here. Maybe the code has accumulated some cruft with all these edits we’re doing. Can you find places where there is unused code, redundant functionality, and other sorts of general cleanups, refactor those, and try again?

  5. You seem to be getting a little lost here. Let’s make a list of the logical steps which this is supposed to go through, what should happen with each of them, then check each of those individually to see where it’s going off the rails. (This works a lot better if you can tell it what those steps are but that’s very difficult for non-programmers.)

Of course since these are so brainless to do Claude will probably start doing them without prompting in the future but for now they’re helpful. Also helpful for humans to follow when they’re coding.

On something larger and more technical it would be a good idea to have automated tests, which can of course be written by the AI as well. When I’m coding I generally make a list of what the tests should do in English, then implement the tests, then run and debug them. Those are sufficiently different brain states that I find it’s helpful to do them in separate phases. (I also often write reams of code before starting the testing process, or even checking if it’ll parse, a practice which sometimes drives my coworkers insane.)

A script for testing goes something like this:

  1. Now that we’ve written our code we should write some automated tests. Can you suggest some tests which exercise the basic straight through functionality of this code?

  2. Those are good suggestions. Can you implement and run them?

  3. Now that we’ve tested basic functionality we should try edge cases. Can you suggest some tests which more thoroughly exercise all the edge cases in this code?

  4. Those are good suggestions. Can you implement and run them?

  5. Let’s make sure we’re getting everything. Are there any parts of the code which aren’t getting exercised by these tests? Can we write new tests to hit all of that, and if not can some of that code be removed?

  6. Now that we’ve got everything tested are there any refactorings we can do which will make the code simpler, cleaner, and more maintainable?

  7. Those are good ideas, let’s do those and get the tests passing again. Don’t change the tests in the process, leave them exactly unchanged and fix the code.

Of course this is again so brainless that it will probably be programmed into the AI assistants to do exactly this when asked to write tests, but for now it’s helpful. Also helpful as a script for human programmers to follow. A code coverage tool is also helpful for both, but it seems Claude isn’t hooked up to one of those yet.

Peter Hutterer
libinput and Lua plugins

First of all, what's outlined here should be available in libinput 1.29 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:

Come libinput 1.29, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.

The motivation for this is a few unfixable issues - issues we knew how to fix but could not actually implement and/or ship the fixes for without breaking other devices. One example of this is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. Which means our quirks could only apply to the Bolt receiver (and thus any mouse connected to it) - that's a rather bad idea though, we'd break every other mouse using the same receiver. Another example is an issue with worn-out mouse buttons - on that device the behavior was predictable enough but any heuristic would catch a lot of legitimate button presses. That's fine when you know your mouse is slightly broken and at least it works again. But it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing, etc.

libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.

So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They are not full access into libinput, they are closer to a udev-hid-bpf in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons for example) and look at and modify the event stream as it comes from the kernel device. For this libinput changed internally to process something called an "evdev frame", a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway, but so far we didn't explicitly carry those around as such. Now we do, and we can pass them through to the plugin(s) to be modified.

The aforementioned Logitech MX Master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:

libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then -- Logitech Bolt receiver
        device:connect("evdev-frame", function (_, frame)
            for _, event in ipairs(frame.events) do
                -- invert the horizontal wheel direction
                if event.type == evdev.EV_REL and
                   (event.code == evdev.REL_HWHEEL or
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)

This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation. I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.

So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]

Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.

If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)

[1] Benjamin Tissoires actually had a go at WASM plugins (via rust). But ... a lot of effort for rather small gains over Lua

Bram Cohen
Relationship Codes

Rumor has it a lot of people lie about their relationship status while dating. This causes a lot of problems for people who don’t lie about their relationship status because of all the suspicion. I can tell you from experience, in what is probably peak first world problems, that getting your Wikipedia page updated to say that you’re divorced can be super annoying. (Yes, I’m divorced and single.)

Here is a suggestion for how to help remedy this.[1] People can put a relationship code in their public profiles. This is a bit like Facebook relationship status, but more flexible: it can go anywhere and its meaning is owned by the people instead of Meta. The form of a relationship code can be ‘Relationship code: XYZ’ but it’s cuter and more succinct to use an emoji, with 💑 (‘couple with heart’) being the most logical.[2] Here are a few suggestions for what to do with this, starting with the most important and moving to the less common:

💑 single: This means ‘There is nobody else in the world who would get upset about me saying I’m single in this profile’ in a way which is publicly auditable. Proactively having this in one’s profile is a bit less effective than getting asked by someone to post it and then doing so, because some people make extra profiles just for dating. Some people suck. For that reason this is especially effective in profiles which are more likely to be tied to the real person like LinkedIn, but unfortunately posting relationship status there is a bit frowned on.

💑 abcxyz: The code abcxyz can be replaced by anything. The idea is that someone gives the other person a code which they randomly came up with to post. This is a way of auditably showing that you’re single but not actively courting anybody else. Appropriate for early in a relationship, even before a first date. Also a good way of low-key proving you are who you claim to be.

💑 in a relationship with abcxyz: Shows that a relationship is serious enough to announce publicly

💑 married to abcxyz: Means that a relationship is serious enough to let it impact your taxes

💑 poly: Shows that you’re in San Francisco

💑 slut: Probably a euphemism for being a sex worker

💑 No: “People are constantly creeping into my DMs and I’m not interested in you.”

[1] A lot of people seem to not appreciate dating advice coming from, ahem, my demographic. I’m posting this because I think it’s a good idea and am hoping someone more appropriate than me becomes a champion for it.

[2] There are variants on this emoji which disambiguate the genders of the people and give other skin tones. It’s probably a good idea for everyone to make at least one of the people match their own skin tone. People may choose variants to indicate their gender/skin tone preferences of partners. People giving challenge codes may also request that the emoji variant be updated to indicate that the person is okay with publicly expressing openness to dating someone who matches them. Nobody should ever take offense at what someone they aren’t in a relationship with uses as their relationship code emoji. People’s preferences are a deeply personal thing and none of your business.

Bram Cohen
An Experiment in Discovery Versus Invention

There’s a general question of what things are canonical discoveries and what are invented. To give some data answering that question, and because I think it’s fun, I set about to answer the question: What is the most difficult 3x3x3 packing puzzle with each number of pieces? The rules are:

  • Goal is to pack all the pieces into a 3x3x3. There are no gaps or missing pieces

  • Pieces are entirely composed of cubes

  • Each number of pieces is a separate challenge

  • Monominos (single cube pieces) are allowed

  • The puzzle should be as difficult as possible. The definition of ‘difficult’ is left intentionally vague.

The question is: Will different people making answers to these questions come up with any identical designs? I’ve done part of the experiment in that I’ve spent, ahem, some time on coming up with my own designs. It would be very interesting for someone else to come up with their own designs and to compare to see if there are any identical answers.

Don’t look at these if you want to participate in the experiment yourself, but I came up with answers for 3, 4, 5, 6, 7, 8, 9, 10, and 12 pieces. The allowance of monominos results in the puzzles with more pieces acting like a puzzle with a small number of pieces and a lot of gaps. It may make more sense to optimize for the most difficult puzzle with gaps for each (small) number of pieces. There’s another puzzle found later which is very similar to one of mine but not exactly the same, probably for that reason.

If you do this experiment and come up with answers yourself please let me know the results. If not you can of course try solving the puzzles I came up with for fun. They range from fun and reasonably simple to extremely difficult.

Bram Cohen
Using Toroidal Space In Neural Networks

Let’s say that you’re making a deep neural network and want to use toroidal space. For those that don’t know, toroidal space for a given number of dimensions has one value in each dimension between zero and one which ‘wraps around’, so when a value goes above one you subtract one from it and when it goes below zero you add one to it. The distance formula in toroidal space is similar to what it is in open-ended space, but instead of the distance in each dimension being a-b it’s that value wrapped around to a value between -1/2 and 1/2, so for example 0.25 stays where it is but 0.75 changes to -0.25 and -0.7 changes to 0.3.
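
As a sketch in Python (matching the examples above: 0.25 stays put, 0.75 wraps to -0.25, and -0.7 wraps to 0.3):

import math

def wrap(d):
    # wrap a per-dimension difference into [-1/2, 1/2)
    d = d % 1.0
    return d - 1.0 if d >= 0.5 else d

def toroidal_distance(p, q):
    return math.sqrt(sum(wrap(a - b) ** 2 for a, b in zip(p, q)))

assert wrap(0.25) == 0.25 and wrap(0.75) == -0.25 and round(wrap(-0.7), 9) == 0.3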

Why would you want to do this? Well, it’s because a variant on toroidal space is probably much better at fitting data than conventional space is for the same number of dimensions. I’ll explain the details of that in a later post[1] but it’s similar enough that the techniques for using it in a neural network are the same. So I’m going to explain in this post how to use toroidal space, even though it’s probably comparable or only slightly better than the conventional approach.

To move from conventional space to an esoteric one you need to define how positions in that space are represented and make analogues of the common operations. Specifically, you need to find analogues for dot product and matrix multiplication and define how back propagation is done across those.

Before we go there it’s necessary to get an intuitive notion of what a vector is and what dot product and matrix multiplication are doing. A vector consists of two things: a direction and a magnitude. A dot product finds the cosine of the angle between two vectors times their magnitudes. Angle in this case is a type of distance. You might wonder what the intuitive explanation of including the magnitudes is. There isn’t one; you’re better off normalizing them away, known in AI as ‘cosine space’. I’ll just pretend that that’s how it’s always done.

When a vector is multiplied by a matrix, that vector isn’t being treated as a position in space, it’s a list of scalars. Those scalars are each assigned the direction and magnitude of a vector in the matrix. Each direction is given a weight equal to the value of the scalar times the magnitude, and a weighted average of all the directions is then taken.

The analogue of (normalized) dot product in toroidal space is simply distance. Back propagating over it works how you would expect. There’s a bit of funny business with the possibility of the back propagation causing the values to snap over the 1/2 threshold but the amount of movement is small enough that that’s unusual and AI is so fundamentally handwavy that ignoring things like that doesn’t change the theory much.

The analogue of a matrix in toroidal space is a list of positions and weights. (Unlike in conventional space, in toroidal space there’s a type distinction between ‘position’ and ‘position plus weight’, where in conventional space it’s always ‘direction and magnitude’.) To ‘multiply’ a vector by this ‘matrix’ you do a weighted average of all the positions with weights corresponding to the scalar times the given weight. At least, that’s what you would like to do. The problem is that due to the wrap-around nature of the space it isn’t clear which image of each position should be used.

To get an intuition for what to do about the multiple images problem, let’s consider the case of only two points. For this case we can find the shortest path between them and simply declare that the weighted average will be along that line segment. If some dimension is close to the 1/2 flip-over then either choice will at least do something for the other dimensions, and there isn’t much signal in that dimension anyway, so somewhat noisily using one or the other is fine.

This approach can be generalized to larger numbers of points as follows: first, pick an arbitrary point in space. We’ll think of this as a rough approximation of the eventual solution. Since it’s literally a random point it’s a bad approximation, but we’re going to improve it. What we do is find, for each of the points, the image closest to the current approximation, and use those images as the positions when finding the weighted average. That yields a new approximate answer. We then repeat. Most likely in practical circumstances this settles down after only a handful of iterations, and if it doesn’t there probably isn’t that much improvement happening with each iteration anyway. There’s an interesting mathematical question as to whether this process must always hit a unique fixed point. I honestly don’t know the answer to that question. If you know the answer please let me know.
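
A sketch of that loop in Python (with the wrap helper repeated from the distance sketch so this block stands alone; the iteration count is fixed here for simplicity, and the starting point is literally arbitrary, which is exactly why the fixed-point question above matters):

def wrap(d):
    # wrap a per-dimension difference into [-1/2, 1/2), as before
    d = d % 1.0
    return d - 1.0 if d >= 0.5 else d

def nearest_image(x, ref):
    # the copy of coordinate x (x plus some integer) closest to ref
    return ref + wrap(x - ref)

def toroidal_weighted_average(points, weights, iterations=10):
    approx = list(points[0])  # arbitrary starting approximation
    total = sum(weights)
    for _ in range(iterations):
        # average the nearest image of each point, then wrap back into [0, 1)
        approx = [sum(w * nearest_image(p[i], approx[i])
                      for p, w in zip(points, weights)) / total % 1.0
                  for i in range(len(approx))]
    return approx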

The way to back propagate over this operation is to assume that the answer you settled on via the successive approximation process is the ‘right’ one and look at how that one marginally moves with changing the coefficients. As with calculating simple distance, the snap-over effects are rarely hit with the small changes involved in individual back propagation adjustments, and the propagation doesn’t have to be perfect, it just has to produce improvement on average.

[1] It involves adding ‘ghost images’ to each point which aren’t just at the wraparound values but also correspond to other positions in a Barnes-Wall lattice, which is a way of packing spheres densely. Usually ‘the Barnes-Wall lattice’ corresponds specifically to 16 dimensions but the construction generalizes straightforwardly to any power of 2.

Bram Cohen
The weakness of AI Go programs and what it means for the future of AI

AI can play the game Go far better than any human, but oddly it has some weaknesses which can allow humans to exploit and defeat it. Patching over these weaknesses is very difficult, and teaches interesting lessons about what AI, traditional software, and us humans are good and bad at. Here’s an example position showing the AI losing a big region after getting trapped:[1]

[Image: Go board showing a cyclic attack]

For those of you not familiar with the game, Go is all about connectivity between stones. When a region of stones loses all connection to empty space, as the red-marked one in the above position just did, it dies. When a group surrounds two separate regions it can never be captured, because the opponent only places one stone at a time and hence can’t fill both at once. Such a region is said to have ‘two eyes’ and be ‘alive’.

The above game was in a very unusual position where both the now-dead black group and the white one it surrounds only have one eye and not much potential for any other. The AI may have been capable of realizing that this was the case, but optically the position looks good, so it keeps playing elsewhere on the board until that important region is lost.

Explaining what’s failing here and why humans can do better requires some background. Board games have two components to them: ‘tactics’, which encompasses what happens when you do an immediate look-ahead of the position at hand, and ‘position’, which is a more amorphous concept encompassing everything else you can glean about how good a position is from looking at it and using your instincts. There’s no fine white line between the two, but for games on a large enough board with enough moves in a game it’s computationally infeasible to do a complete exhaustive search of the entire game, so there is a meaningful difference between the two.

There’s a bizarre contrast between how classical software and AI works. Traditional software is insanely, ludicrously good at logic, but has no intuition whatsoever. It can verify even the most complex formal math proof almost immediately, and search through a number of possibilities vastly larger than what a human can do in a lifetime in just an instant, but if any step is missing it has no way of filling it in. Any such filling in has to follow heuristics which humans painstakingly created by hand, and usually leans very heavily on trying an immense number of possibilities in the hope that something works.

AI is the exact opposite. It has (for some things) ludicrously, insanely good intuition. For some tasks, like guessing at protein folding or evaluating board positions, it’s far better than any human ever possibly could be. For evaluating Chess or Go positions only a handful of humans, at most, can beat an AI running purely on instinct. What AI is lacking in is logic. People get excited when it demonstrates any ability to do reasoning at all.

Board games are special in that the purely logical component of them is extremely well defined and can be evaluated exactly. When Chess computers first overtook the best humans it was by having a computer throw raw computational power at the problem with fairly hokey positional evaluation underneath it which had been designed by hand by a strong human player and was optimized more for speed than correctness. Chess is more about tactics than position so this worked well. Go has a balance more towards position so this approach didn’t work well until better positional evaluation via AI was invented. Both Chess and Go have a balance between tactics and position because we humans find both of those interesting. It’s possible that sentient beings of the future will favor games which are much more heavily about position because tactics are more about who spends more computational resources evaluating the position than who’s better at the game.

In some sense doing lookahead in board games (the technical term is ‘alpha-beta pruning’) is a very special form of hand-tuned logic. But by definition it perfectly emulates exactly what’s happening in a board game, so it can be mixed with a good positional evaluator to get almost the best of both. Tactics are covered by the lookahead, and position is covered by the AI.
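To make that division of labor concrete, here’s a minimal sketch of alpha-beta lookahead in Python. The names moves, apply, and evaluate are hypothetical stand-ins, not any particular engine’s API; the evaluate call at the depth cutoff is exactly where a learned positional evaluator would plug in.

  # Minimal alpha-beta lookahead sketch. `moves`, `apply` and
  # `evaluate` are hypothetical helpers; `evaluate` is where the
  # positional instinct takes over once tactical search cuts off.
  def alphabeta(position, depth, alpha, beta, maximizing):
      if depth == 0 or not moves(position):
          return evaluate(position)
      if maximizing:
          for move in moves(position):
              alpha = max(alpha, alphabeta(apply(position, move),
                                           depth - 1, alpha, beta, False))
              if alpha >= beta:
                  break  # the opponent already has a refutation; prune
          return alpha
      else:
          for move in moves(position):
              beta = min(beta, alphabeta(apply(position, move),
                                         depth - 1, alpha, beta, True))
              if alpha >= beta:
                  break
          return beta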

But that doesn’t quite cover it. The problem is that this approach keeps the logic and the AI completely separate from each other and doesn’t have a bridge between them. This is particularly important in Go, where there are lots of local patterns which will have to get played out eventually, and you can get some idea of what will happen at that time by working out local tactics now. Humans are entirely capable of looking at a position, working through some tactics, and updating their positional evaluation based on that. The current generation of Go AI programs don’t even have hooks to make that possible. They can still beat humans anyway, because their positional evaluation alone is comparable to if not better than what a human gets to while using feedback, and their ability to work out immediate tactics is ludicrously better than ours. But the above game is an exception. In that one something extremely unusual happened: the immediate optics of the position were misleading, and the focus of game play was kept off that part of the board long enough that the AI didn’t use its near-term tactical skill to figure out that it was in danger of losing. A human can count up the effective amount of space in the groups battling it out in the above example by working out the local tactics, and gets a further boost because what matters is the sum total of them rather than the order in which they’re invoked. Simple tactical evaluation doesn’t realize this independence and has to work through exponentially more cases.

The human-executable exploits of the current generation of Go AIs are not a single silly little problem which can be patched over. They are particularly bad examples of systemic limitations of how those AIs operate. It may be possible to tweak them enough that humans can’t get away with such shenanigans any more, but the fact remains that they are far from perfect, and some better type of AI which does a better job of bridging between instinct and logic could probably play vastly better than they do now while using far fewer resources.

1

This post is only covering the most insidious attack but the others are interesting as well. It turns out that the AIs think in Japanese scoring despite being trained exclusively on Chinese scoring, so they’ll sometimes pass in positions where the opponent passing in response results in them losing and they should play on. This can be fixed easily by always looking ahead after a pass to see who actually wins if the opponent immediately passes in response. Online sites get around the issue by making ‘pass’ not really mean pass but more ‘I think we can come to agreement about what’s alive and dead here’, and if that doesn’t happen it reverts to a forced playout with Chinese scoring and pass meaning pass.
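That lookahead-after-pass fix is mechanical enough to sketch. In this toy version, chinese_score and best_non_pass_move are hypothetical helpers, with the score taken from the engine’s own perspective:

  # Sketch of the pass fix: before passing, check who actually wins
  # under Chinese scoring if the opponent immediately passes back.
  # `chinese_score` and `best_non_pass_move` are hypothetical helpers.
  def choose_move(position, engine_move):
      if engine_move == "pass" and chinese_score(position) <= 0:
          # Passing would lose (or tie) the resulting scored
          # position, so play on instead.
          return best_non_pass_move(position)
      return engine_move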

The other attack is ‘gift’, where a situation can be set up where the AI can’t do something it would like to due to ko rules and winds up giving away stones in a way which makes it strictly worse off. Humans easily recognize this and don’t do things which make their position strictly worse off. Arguably the problem is that the AI positional evaluator doesn’t have good access to what positions were already hit, but it isn’t clear how to do that well. It could probably also be patched around by making the alpha-beta pruner ding the evaluation when it finds itself trying and failing to repeat a position, but that needs to be able to handle ko battles as well. Maybe it’s also a good heuristic for ko battles.

Both of the above raise interesting questions about what tweaks to a game playing algorithm are bespoke and hence violate the ‘zero’ principle that an engine should work for any game and not be customized to a particular one. Arguably the ko rule is a violation of the schematic of a simple turn-based game, so it’s okay to make exceptions for that.

Posted
Bram Cohen
A Question For You About Conflict Markers

I’ve written several previous posts about how to make a distributed version control system which has eventual consistency, meaning that no matter what order you merge different branches together they’ll always produce the same eventual result. The details are very technical and involved so I won’t rehash them here, but the important point is that the merge algorithm needs to be very history aware. Sometimes you need to make small sacrifices for great things. Sorry I’m terribly behind on making an implementation of these ideas. My excuse is that I have an important and demanding job which doesn’t leave much time for hobby coding, and I have lots of other hobby coding projects which seem important as well.

My less lame excuse is that I’ve been unsure of how to display merge conflicts. In some sense a version control system with eventual consistency never has real conflicts, it just has situations in which changes seemed close enough to stepping on each other’s toes that the system decided to flag them with conflict markers and alert the user. The user is always free to simply remove the conflict markers and keep going. This is a great feature. If you hit a conflict which somebody else already cleaned up you can simply remove the conflict markers, pull from the branch which it was fixed in, and presto you’ve got the cleanup applied locally.1


So the question becomes: When and how should conflict markers be presented? I gave previous thoughts on this question over here but am not so sure of those answers any more. In particular there’s a central question which I’d like to hear people’s opinions on: Should line deletions by themselves ever be presented as conflicts? If there are two lines of code one after the other and one person deletes one and another person deletes the other, it seems not unreasonable to let it through without raising a flag. It’s not like a merge conflict between nothing on one side and nothing on the other side is very helpful. There are specific examples I can come up with where this is a real conflict, but then there are examples I can come up with where changes in code not in close proximity produce semantic conflicts as well. Ideally you should detect conflicts by having extensive tests which are run automatically all the time, and conflicts will cause those to fail. The version control system flagging conflicts is for it to highlight the exact location of particularly egregious examples.

It also seems reasonable that if one person deletes a line of code and somebody else inserts a line of code right next to it then that shouldn’t be a conflict. But this is getting shakier. The problem is that if someone deletes a line of code and somebody else ‘modifies’ it then arguably that should be a conflict, but the version control system thinks of that as being both sides having deleted the same line of code and one side inserting an unrelated line which happens to look similar. The version control system having a notion of individual lines being ‘modified’ and being able to merge those modifications together is a deep rabbit hole I’m not eager to dive into. Like in the case of deletions on both sides, a merge conflict between something on one side and ‘nothing’ on the other isn’t very helpful anyway. If you really care about this then you can leave a blank line when you delete code, or if you want to really make sure, replace it with a unique comment. On the other hand the version control system is supposed to flag things automatically and not make you engage in such shenanigans.

At least one thing is clear: Decisions about where to put conflict markers should be based only on whether lines appear in the immediate parents and the child. Nobody wants the version control system to tell them ‘These two consecutive lines of code both come from the right but the ways they merged with the left are so wildly different that it makes me uncomfortable’. Even if there were an understandable way to present that history information to the user, which there isn’t, everybody would respond by simply deleting the pedantic CYA marker.

I’m honestly unsure whether deleted lines should be considered. There are reasonable arguments on both sides. But I would like to ignore deleted lines because it makes both UX and implementation much simpler. Instead of there being eight cases of the states of the parents and the child, there are only four, because only cases where the child line is present are relevant2. In all conflict cases there will be at least one line of code on either side. It even suggests an approach to how you can merge together many branches at once and see conflict markers once everything is taken into account.3
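As a toy illustration (not the actual merge algorithm) of why ignoring deleted lines keeps things simple: assume each surviving child line has already been tagged with which parents it appears in, per footnote 2’s four cases. Conflicts then reduce to runs of left-only lines touching runs of right-only lines:

  # Toy sketch, not the real algorithm: `child` is a list of
  # (line, side) pairs where side is "both", "left", "right" or
  # "neither" (the four cases from footnote 2). A marker is emitted
  # wherever a left-only line and a right-only line are adjacent;
  # a real implementation would also annotate the start and end of
  # the whole conflict section.
  def mark_conflicts(child):
      out, prev = [], None
      for line, side in child:
          if {prev, side} == {"left", "right"}:
              out.append(">>> conflict: section below came from " + side)
          out.append(line)
          prev = side
      return out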

It may be inevitable that I’ll succumb to practicality on this point, but I at least want to reassure myself that there isn’t overwhelming feedback in the other direction before doing so. It may seem, given the number of other features I’m adding, that ‘show pure deletion conflicts’ is small potatoes, but version control systems are important and feature decisions shouldn’t be made in them without serious analysis.

1

It can even cleanly merge together two different people applying the exact same conflict resolution independently of each other, at least most of the time. An exception: if one person goes AE → ABDE → ABCDE and someone else goes AE → ACE → ABCDE, then the system will of necessity think the two C lines are different and make a result including both of them, probably as A>BCD>BCD|E. It isn’t possible to avoid this behavior without giving up eventual consistency, but it’s arguably the best option under the circumstances anyway. If both sides made their changes as part of a single patch this can be made to always merge cleanly.

2

The cases are that a line which appears in the child appears in both parents, just the left, just the right, or neither. That last one happens in the uncommon but important criss-cross case. One thing I still feel good about is that conflict markers should be lines saying ‘This is part of a conflict section, the section below this came from X’ where X is either local, remote, or neither, and there’s another special annotation for ‘this is the end of a conflict section’.

3

While it’s clear that a line which came from Alice but no other parent and a line which came from Bob but no other parent should be in conflict when immediately next to each other it’s much less clear whether a line which came from both Alice and Carol but not Bob should conflict with a line which came from Bob and Carol but not Alice. If that should be presented as ‘not a conflict’ then if the next line came from David but nobody else it isn’t clear how far back the non-David side of that conflict should be marked as starting.

Posted
Bram Cohen
Automated Chess Commentary's Sorry State And Possible Improvements

Computer tools for providing commentary on Chess games are currently awful. People play over games using Stockfish, which is a useful but not terribly relevant tool, and use that as a guide for their own commentary. There are Chess Youtubers who aren’t strong players, and it’s obvious to someone even of my own mediocre playing strength (1700 on a good day) that they don’t know what they’re talking about, because in many situations there’s the obvious best move which fails due to some insane computer line but they don’t even cover it because the computer thinks it’s clearly inferior. Presumably commentary I generated using Stockfish as a guide would be equally obvious to someone of a stronger playing strength than me. People have been talking about using computers to make automated commentary on Chess positions since computers started getting good at Chess, and the amount of progress made has been pathetic. I’m now going to suggest a tool which would be a good first step in that process, although it still requires a human to put together the color commentary. It would also be a fun AI project on its own merits, and possibly have a darker use which I’ll get to at the end.

There’s only one truly objective metric of how good a Chess position is, and that’s whether it’s a win, loss, or draw with perfect play. In a lost position all moves are equally bad. In a won position any move no matter how ridiculous which preserves the theoretical win is equally good. Chess commentary which was based off this sort of analysis would be insane. Most high level games would be a theoretical draw until some point deep into already-lost-for-a-human territory, at which point some uninteresting move would be labeled the losing blunder because it missed out on some way of theoretically eking out a draw. Obviously such commentary wouldn’t be useful. But commentary from Stockfish isn’t much better. Stockfish commentary is how a roughly 3000 rated player feels about the position if it assumes it’s playing against an opponent of roughly equal strength. That’s a super specific type of player and not one terribly relevant to how humans might fare in a given position. It’s close enough to perfect that a lot of the aforementioned ridiculousness shows up. There are many exciting tactical positions which are ‘only’ fifteen moves or so from being done and the engine says ‘ho hum, nothing to see here, I worked it out and it’s a dead draw’. What we need for Chess commentary is a tool geared towards human play, which says something about human games.


Here’s an idea of what to build: Make an AI engine which takes as input the position, the ratings of the two players, the time controls, and the time left, and gives probabilities for a win, loss, or draw. This could be trained by taking a corpus of real human games and optimizing for Brier score. Without any lookahead this approach is limited by how strong of an evaluation it can get to, but that isn’t relevant for most people. Current engines at one node are probably around 2500 or so, so it might peter out in usefulness for strong grandmaster play, but you have my permission to throw in full Stockfish evaluations as another input when writing game commentary. The limited set of human games might hurt its overall playing strength, but throwing in a bunch of engine games for training or starting with an existing one-node network is likely to help a lot. That last one in particular should save a lot of training time.
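As a sketch of what the trainable core might look like (PyTorch; the feature sizes, architecture, and board encoding here are all placeholder assumptions on my part, not a real design):

  import torch
  import torch.nn as nn

  # Sketch of the proposed evaluator: board features plus metadata
  # (both ratings, the time control, both clocks) in; win/draw/loss
  # probabilities out. All shapes are placeholder assumptions
  # (768 = 12 piece planes x 64 squares, one common encoding).
  class HumanOutcomeNet(nn.Module):
      def __init__(self, board_feats=768, meta_feats=5):
          super().__init__()
          self.net = nn.Sequential(
              nn.Linear(board_feats + meta_feats, 512),
              nn.ReLU(),
              nn.Linear(512, 3),  # logits for win / draw / loss
          )

      def forward(self, board, meta):
          return self.net(torch.cat([board, meta], dim=-1)).softmax(-1)

  def brier_loss(probs, outcome_onehot):
      # Brier score: squared error between the predicted probability
      # vector and the one-hot actual result, averaged over games.
      return ((probs - outcome_onehot) ** 2).sum(-1).mean()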

For further useful information you could train a neural network on the same corpus of games to predict the probability that a player will make each of the available legal moves based on their rating and the amount of time they spend making their move. Maybe the amount of time the opponent spent making their previous move should be included as well.
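A sketch of that move-prediction companion, reusing the imports from the previous sketch and under the same caveats; the only interesting wrinkle is masking the softmax down to the legal moves (the 4096-entry from-square by to-square move encoding is just one common convention, assumed here):

  # Sketch of the move-prediction net: same inputs plus the time
  # spent on the move (and perhaps the opponent's previous move
  # time), with the softmax masked to legal moves only.
  class HumanMoveNet(nn.Module):
      def __init__(self, board_feats=768, meta_feats=7, n_moves=4096):
          super().__init__()
          self.net = nn.Sequential(
              nn.Linear(board_feats + meta_feats, 512),
              nn.ReLU(),
              nn.Linear(512, n_moves),
          )

      def forward(self, board, meta, legal_mask):
          logits = self.net(torch.cat([board, meta], dim=-1))
          logits = logits.masked_fill(~legal_mask, float("-inf"))
          return logits.softmax(-1)  # probability per legal move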

With all the above information it would be easy to make useful human commentary like ‘The obvious move here is X but that’s a bad idea because of line Y which even strong players are unlikely to see’. Or ‘This position is an objective win but it’s very tricky with very little time left on the clock’. The necessary information to make those observations is available, even if writing the color commentary is a whole other layer. Maybe an LLM could be trained to do that. It may help a lot for the LLM to be able to ask for evaluations of follow-on moves.

What all the above is missing is the ability to give any useful commentary on positional themes going on in games. Baby steps. Having any of the above would be a huge improvement in the state of the art. The insight that commentary needs to take into account what different skill levels and time controls think of the situation will remain an essential one moving forward.

What I’d really like to see out of the above is better Chess instruction. There are religious wars constantly going on about what the best practical advice for lower rated players is, and the truth is we simply don’t know. When people collect data from games they come up with findings like: by far the best opening for lower rated players as black is the Caro-Kann. That might or might not be true, but it indicates that the advice given to lower rated players based on what’s theoretically best is more than a little bit dodgy.

A darker use of the above would be to make a nearly undetectable cheating engine. With the addition of an output for the range of time a player is likely to take in a given position, it could make an indistinguishable facsimile of a player of a given playing strength in real time, whose only distinguishing feature is being a bit too typical/generic, and that would be easy enough to throw in bias for. In situations where it wanted to plausibly win a game against a much higher rated opponent it could filter out potential moves based on their practical chances in the given situation being bad. That would result in very non-Stockfish-like play and seemingly a player plausibly of that skill level happening to play particularly well that game. Good luck coming up with anti-cheat algorithms to detect that.


Posted
bkuhn@ebb.org (Bradley M. Kuhn)
I Signed an OSI Board Agreement in Anticipation of Election Results

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory — OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).

Many of us candidates attempted to get clarification over the last 46 hours, but OSI has not communicated clear answers in response to those requests. Based on these unclear responses, the best we can surmise is that OSI intends to modify the ballots cast by Affiliates and Members to remove any candidate who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and surprising change in requirements and deadlines.

So, I decided to sign a Board Agreement with OSI. Here is the PDF that I just submitted to the OSI. OSI did recommend DocuSign, but I refuse to use proprietary software for my FOSS volunteer work on moral and ethical grounds0 (see my two keynotes (FOSDEM 2019, FOSDEM 2020), co-presented with Karen Sandler, for more info on that), so I emailed it to OSI instead.

My running mate on the Shared Platform for OSI Reform, Richard Fontana, also signed a Board Agreement with OSI before the deadline.


0 Chad Whitacre has made unfair criticism of my refusal to use DocuSign as part of the (apparently ongoing?) 2025 OSI Board election political campaign. I respond to his comment here in this footnote (& further discussion is welcome using the fediverse, AGPLv3-powered comment feature of my blog). I've put it in this footnote because Chad is not actually raising an issue about this blog post's primary content, but instead attempting to reopen the debate about Item 4 in the Shared Platform for OSI Reform. My response follows:

In addition to the two keynotes mentioned above, I propose these analogies that really are apt to this situation:

  • Imagine if the Board of The Nature Conservancy told Directors they would be required, if elected, to use a car service to attend Board meetings. It's easier, they argue, if everyone uses the same service, and that way, we know you're on your way, and we pay a group rate anyway. Some candidates for open Board seats retort that's not environmentally sound, and insist — not even that other Board members must stop using the car service — but just that Directors who choose should be allowed to simply take public transit to the Board meeting, even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
  • Imagine if the Board of Friends of Trees made a decision that all paperwork for the organization be printed on non-recycled paper made from freshly cut tree wood pulp. That paper is easier to move around, they say — and it's easier to read what's printed because of its quality. Some candidates for open Board seats run on a platform that says Board members should be allowed to get their print-outs on 100% post-consumer recycled paper for Board meetings. These candidates don't insist that other Board members use the same paper, so, if these new Directors are seated, this will create extra work for staff, because now they have to do two sets of print-outs to prep for Board meetings, and refill the machine with different paper in between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, “a distracting waste of time”?
  • Imagine if the Board of the ASPCA made the decision that Directors must work through lunch, and the majority of the Directors vote that they'll get delivery from a restaurant that serves no vegan food whatsoever. Is it reasonable for this to be a non-negotiable requirement — such that the other Directors must work through lunch and just stay hungry? Or should they add a second restaurant option for the minority? After all, the ASPCA condemns animal cruelty but doesn't go so far as to demand that everyone also be a vegan. Would the meat-eating directors then say something like “opposing cruelty to animals could be so much more than merely being vegan” to these other Directors?
Posted
bkuhn@ebb.org (Bradley M. Kuhn)
Signing Board Agreement Merely To Be Considered for a Directorship?

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on Monday 2025-03-17 at 10:00 US/Pacific. One hour after that, I and at least three other candidates received the following email:

Date: Mon, 17 Mar 2025 11:01:22 -0700
From: OSI Elections team <elections@opensource.org>
To: Bradley Kuhn <bkuhn@ebb.org>
Subject: TIME SENSITIVE: sign OSI board agreement
Message-ID: <civicrm_67d86372f1bb30.98322993@opensource.org>

Thanks for participating in the OSI community polls which are now closed. Your name was proposed by community members as a candidate for the OSI board of directors. Functioning of the board of directors is critically dependent on all the directors committing to collaborative practices.

For your name to be considered by the board as we compute and review the outcomes of the polls,you must sign the board agreement before Wednesday March 19, 2025 at 1700 UTC (check times in your timezone). You’ll receive another email with the link to the agreement.

TIME SENSITIVE AND IMPORTANT: this is a hard deadline.

Please return the signed agreement asap, don’t wait. 

Thanks

OSI Elections team

(The link email did arrive too, with a link to a proprietary service called DocuSign. Fontana downloaded the PDF out of DocuSign and it appears to match the document found here. This document includes a clause that Fontana and I explicitly indicated in our OSI Reform Platform should be rewritten.)

All the (non-incumbent) candidates are surprised by this. OSI told us during the mandatory orientation meetings (on WED 2025-02-19 & again on TUE 2025-02-25) that the Board Agreement needed to be signed only by the election winners who were seated as Directors. No one mentioned (before or after the election) that all candidates, regardless of whether they won or lost, needed to sign the agreement. I've also served on many other 501(c)(3) Boards, and I've never before been asked to sign anything official for service until I was formally offered the seat.

Can someone more familiar with the OSI election process explain this? Specifically, why are all candidates (even those who lose) required to sign the Board Agreement before election results are published? Can folks who ran before confirm for us that this seems to vary from procedures in past years? Please reply on the fediverse thread if you have information. Richard Fontana also reached out to OSI on their discussion board on the same matter.

Posted
bkuhn@ebb.org (Bradley M. Kuhn) (Bradley M. Kuhn)
Repeated Mistakes Lead to Unfair OSI Elections

Update 2025-03-21: This blog post is extremely long (if you're reading this, you must already know I'm terribly long-winded). I was in the middle of consolidating it with other posts to make a final, single “wrap up” post of the OSI elections when, in the middle of doing that, I was told that Linux Weekly News (LWN) published an article written by Joe Brockmeier. As such, I've carefully left the text below as it stood at 2025-03-20 03:42 UTC, which I believe is the version that Brockmeier sourced for his story (only changes past the line “Original Post” have been HTML format fixes). (I hate as much as you do having to scour archive.org/web to find the right version.) Nevertheless, I wouldn't have otherwise left this here in its current form because it's a huge, real-time description that doesn't make the best historical reference record of these events. I used my blog as a campaigning tool (for reasons discussed below) before I knew how much interest there would ultimately be in the FOSS community about the 2025 OSI Board of Directors election. Since this was used as a source for the LWN article, keeping the original record easy to find is obviously important, and folks shouldn't have to go to archive.org/web to find it. Nevertheless, if you're just digging into this story fresh, I don't really recommend reading the below. Instead, I suggest just reading Brockmeier's LWN article: he's a journalist who writes better and more concisely than me, he's unbiased, and the below is my (understandably) biased view as a candidate who lived through this problematic election.

Original Post

I recently announced that I was nominated for the Open Source Initiative (OSI) Board of Directors as an “Affiliate” candidate. I chose to run as an (admittedly) opposition candidate against the existing status quo, on a “ticket” with my colleague, Richard Fontana, who is running as an (opposition) “Member” candidate.

These elections are important; they matter with regard to the future of FOSS. OSI recently published the “Open Source Artificial Intelligence Definition” (OSAID). One of OSI's stated purposes of the OSAID is to convince the entire EU and other governments and policy agencies to adopt this Definition as official for all citizens. Those stakes aren't earth-shattering, but they are reasonably high stakes. (You can read a blog post I wrote on the subject or Fontana's and my shared platform for more information about OSAID.)

I have worked and/or volunteered for nonprofits like OSI for years. I know it's difficult to get important work done — funding is always too limited. So, to be sure I'm not misquoted: no, I don't think the election is “rigged”. Every problem described herein can easily be attributed to innocent human error, and, as such, I don't think anyone at OSI has made an intentional plan to make the elections unfair. Nevertheless, these mistakes and irregularities (particularly the second one below) have led to an unfair 2025 OSI Directors Election. I call on the OSI to reopen the nominations for a few days, correct these problems, and then extend the voting time accordingly. I don't blame the OSI for these honest mistakes, but I do insist that they be corrected. This really does matter, since this isn't just a local club: OSI is an essential FOSS org that works worldwide and claims to have a consensus mandate for determining what is (or is not) “open source”. Thus (if the OSI intends to continue with these advisory elections), OSI's elections need the greatest integrity and legitimacy. Irregularities must be corrected and addressed to maintain the legitimacy of this important organization.

Regarding all these items below, I did raise all the concerns privately with the OSI staff before publicly listing them here. In every case, I gave OSI at least 20-30% of the entire election cycle to respond privately before discussing the problems publicly. (I have still received no direct response from the OSI on any of these issues.)

(Recap on) First Irregularity

The first irregularity was the miscommunication about the nomination deadline (as covered in the press). Instead of using the time zone of OSI's legal home (in California), or the standard FOSS community deadline of AoE (anywhere on earth) time, OSI surreptitiously chose UTC and failed to communicate that decision properly. According to my sources, only one email of 3(+) emails about the elections included the fully qualified datetime of the deadline. Everywhere else (including everywhere on OSI's website) published only the date, not the time. It was reasonable for nominators to assume the deadline was US/Pacific — particularly since the nomination form still worked after 23:59 UTC passed.

Second Irregularity

Due to that first irregularity, this second (and most egregious) irregularity is compounded even further. All year long, the OSI has communicated that, for 2025, elections are for two “Member” seats and one “Affiliate” seat. Only today (already 70% through the election cycle) did OSI (silently) correct this error. This change was made well after nominations had closed (in every TZ). By itself, the change in available seats after nominations closed makes the 2025 OSI elections unfair. Here's why: the Members and the Affiliates are two entirely different sets of electorates. Many candidates made complicated decisions about which seats to run for based on the number of seats available in each class. OSI is aware of that, too, because (a) we told them that during candidate orientation, and (b) Luke said so publicly in their blog post (and OSI directly responded to Luke in the press).

If we had known there were two Affiliate seats and just one Member seat, Debian (an OSI Affiliate) would have nominated Luke a week earlier for the Affiliate seat. Instead, Debian's leadership, Luke, Fontana, and I had a complex discussion in the final week of nominations on how best to run as a “ticket of three”. In that discussion, Debian leadership decided to nominate no one (instead of nominating Luke) precisely because I was already nominated on a platform that Debian supported, and Debian chose not to run a candidate against me for the (at the time, purported) one Affiliate seat available.

But this irregularity didn't just impact Debian, Fontana, Luke, and me. I was nominated by four different Affiliates. My primary pitch to ask them to nominate me was that there was just one Affiliate seat available. Thus, I told them, if they nominated someone else, that candidate would be effectively running against me. I'm quite sure at least one of those Affiliates would have wanted to nominate someone else if only OSI had told them the truth when it mattered: that Affiliates could easily elect both me and a different candidate for two available Affiliate seats. Meanwhile, who knows what other affiliates who nominated no one would have done differently? OSI surely doesn't know that. OSI has treated every one of their Affiliates unfairly by changing the number of seats available after the nominations closed.

Due to this Second Irregularity alone, I call on the OSI to reopen nominations and reset the election cycle. The mistakes (as played) actually benefit me as a candidate — since now I'm running against a small field and there are two seats available. If nominations reopen, I'll surely face a crowded field with many viable candidates added. Nevertheless, I am disgusted that I unintentionally benefited from OSI's election irregularity and I ask OSI take corrective action to make the 2025 election fair.

The remaining irregularities are minor (by comparison, anyway), but I want to make sure I list all the irregularities that I've seen in the 2025 OSI Board Elections in this one place for everyone's reference:

Third Irregularity

I was surprised when OSI published the slates of Affiliate candidates that they were not in any (forward or reverse) alphabetical order — not by candidate's first, last, or nominator name. Perhaps the slots in the voter's guide were assigned randomly, but if so, that is not disclosed to the electorate. And who is listed first, you ask? Why, the incumbent Affiliate candidate. The issue of candidate ordering in voting guides and ballots has been well studied academically and, unsurprisingly, being listed first is known to be an advantage. Given that incumbents already have an advantage in all elections, putting the incumbent first without stating that the slots in the voter guide were randomly assigned makes the 2025 OSI Board election unfair.

I contacted OSI leadership within hours of the posting of the candidates about this issue (at time of writing, that was four days ago) and they have refused to respond, nor have they corrected the issue. This compounds the error, because it means OSI is consciously choosing to list the incumbent Affiliate candidate first in the voter guide.

Note that this problem is not confined to the “Affiliate district”. In the “Member district”, my running mate, Richard Fontana, is listed last in the voter guide for no apparent reason.

Fourth Irregularity

It's (ostensibly) a good idea for the OSI to run a discussion forum for the candidates (and kudos to OSI (in this instance, anyway) for using the GPL'd Discourse software for the purpose). However, the requirements to create an account and respond to the questions exclude some Affiliate candidates. Specifically, the OSI has stated that Affiliate candidates, and the Affiliates that are their electorate, need not be Members of the OSI. (This is actually the very first item in OSI's election FAQ!) Yet, to join the discussion forum, one must become a member of the OSI! While it might be reasonable to require all Affiliate candidates become OSI Members, this was not disclosed until the election started, so it's unfair!

Some already argue that since there is a free (as in price) membership that this is a non-issue. I disagree, and here's why: Long ago, I had already decided that I would not become a Member of OSI (for free or otherwise) because OSI Members who do not pay money are denied voting rights in these elections! Yes, you read that right: the election for OSI Directors in the “Members” seat literally has a poll tax! I refuse to let OSI count me as a Member when the class of membership they are offering to people who can't afford to pay is a second-class citizenship in OSI's community. Anyway, there is no reason that one should have to become a Member to post on the discussion fora — particularly given that OSI has clearly stated that the Affiliate candidates (and the Affiliate representatives who vote) are not required to be individual Members.

A desire for Individual Membership is understandable for a nonprofit. Nonprofits often need to prove they represent a constituency. I don't blame any nonprofit for trying to build a constituency for itself. The issue is how. Counting Members as “anyone who ever posted on our discussion forum” is confusing and problematic — and becomes doubly so when Voting Memberships are available for purchase. Indeed, OSI's own annual reporting conflates the two types of Members confusingly, as “Member district” candidate Chad Whitacre asked about during the campaign (but received no reply).

I point as counter-example to the models used by GNOME Foundation (GF) and Software In the Public Interest (SPI). These organizations are direct peers to the OSI, but both GF and SPI have an application for membership that evaluates on the primary criterion of what contributions the individual has made to FOSS (be they paid or volunteer). AFAICT, for SPI and GF, no memberships require a donation, none are handed out merely for signing up to the org's discussion fora, and all members (once qualified) can vote.

Fifth Irregularity

This final irregularity is truly minor, but I mention it for completeness. On the Affiliate candidate page, it seems as if each candidate is only nominated by one affiliate. When I submitted my candidate statement, since OSI told me they automatically filled in the nominating org, I had assumed that all my nominating orgs would be listed. Instead, they listed only one. If I'd known that, I'd have listed them at the beginning of my candidate statement; my candidate statement was drafted under the assumption all my nominating orgs would be listed elsewhere.

Sixth Irregularity

Update 2025-03-07. I received an unsolicited (but welcome) email from an Executive Director of one of OSI's Affiliate Organizations. This individual indicated they'd voted for me (I was pleasantly surprised, because I thought their org was pro-OSAID, which I immediately wrote back and told them). The irregularity here is that OSI told candidates that the campaign period would be 10 days, including two weekends; this was stated in most places, including the orientation phone calls for candidates. They started the campaign late, and didn't communicate that they weren't extending the timeline, so the campaign period was about 6.5 days and included only one weekend.

Meanwhile, during this extremely brief 6.5 day period, the election coordinator at OSI was unavailable to answer inquiries from candidates and Affiliates for at least three of those days. This included sending one Affiliate an email with the subject line “Rain Check” in response to five questions they sent about the election process, and its contents indicated that the OSI would be unavailable to answer questions about the election — until after the election!

Seventh Irregularity (added 2025-03-13)

The OSI Election Team, less than 12 hours after sending out the ballots (on Friday 2025-03-07), sent the following email. Many of the Affiliates told me about the email, and it seems likely that all Affiliates received this email within a short time after receiving their ballots (and a week before the ballots were due):

Subject: OSI Elections: unsolicited emails
Date: Sat, 08 Mar 2025 02:11:05 -0800
From: "Staffer REDACTED" <staffer@opensource.org>

Dear REDACTED,

It has been brought to our attention that at least one candidate has been emailing affiliates without their consent.

We do not give out affiliate emails for candidate reachouts, and understand that you did not consent to be spammed by candidates for this election cycle.

Candidates can engage with their fellow affiliates on our forums where we provide community management and moderation support, and in other public settings where our affiliates have opted to sign up and publicly engage.

Please email us directly for any ongoing questions or concerns.

Kind regards,
OSI Elections team

This email is problematic because candidates received no specific guidance on this matter. No material presented at either of the two mandatory election orientations (which I attended) indicated that contacting your constituents directly was forbidden, nor could I find such in any materials on the OSI website. Also, I checked with Richard Fontana, who also attended these sessions, and he confirms I didn't miss anything.

It's not spam to contact one's “FOSS Neighbors” to learn their concerns when in a political campaign for an important position. In fact, during those same orientation sessions, it was mentioned that Affiliate candidates should know the needs of their constituents — OSI's Affiliates. I took that charge seriously, so I invested 12-14 hours researching every single one of my constituents (all ~76 OSI Affiliate Organizations). My research confirmed my hypothesis: my constituents were my proverbial “FOSS neighbors”. In fact, I found that I'd personally had contact with most of the orgs since before OSI even had an Affiliate program. For example, one of the now-Affiliates had contacted me way back in 2013 to provide general advice and support about how to handle fundraising and required nonprofit policies for their org. Three other now-Affiliates' Executive Directors are people I've communicated regularly with for nearly 20 years. (There are other similar examples too.) IOW, I contacted my well-known neighbors to find out their concerns now that I was running for an office that would represent them.

There were also some Affiliates that I didn't know (or didn't know well) yet. For those, like any canvassing candidate, I knocked on their proverbial front doors: I reviewed their websites, found the name of the obvious decision maker, searched my email archives for contact info (and, in some cases, just did usual guesses like <firstname.lastname@example.org>), and contacted them. (BTW, I've done this since the 1990s in nonprofit work when trying to reach someone at a fellow nonprofit to discuss any issue.)

All together, I was able to find a good contact at 55 of the Affiliates, and here's a (redacted) sample of one the emails I sent:

Subject: Affiliate candidate for OSI Board of Directors available to answer any questions

REDACTED_FIRSTNAME,

I'm Bradley M. Kuhn and I'm running as an Affiliate candidate in the Open Source Initiative Board elections that you'll be voting in soon on behalf of REDACTED_NAME_OF_ORG.

I wanted to let you know about the Shared Platform for OSI Reform (that I'm running for jointly with Richard Fontana) [0] and also offer some time to discuss the platform and any other concerns you have as an OSI Affiliate that you'd like me to address for you if elected.

(Fontana and I kept our shared platform narrow so that we could be available to work on other issues and concerns that our (different) constituencies might have.)

I look forward to hearing from you soon!

[0] https://codeberg.org/OSI-Reform-Platform/platform#readme

Note that Fontana is running as a Member candidate which has a separate electorate and for different Board seats, so we are not running in competition for the same seat.

(Since each one was edited manually for the given org, if the org primarily existed for a FOSS project I used, I also told them how I used the project myself, etc.)

Most importantly, though, election officials should never comment on the permitted campaign methods of any candidates before voting finishes in any event. While OSI staff may not have intended it, editorializing regarding campaign strategies can influence an election, and if you're in charge of running an impartial election, you have a high standard to meet.

OSI: either reopen nominations or just forget the elections

Again, I call on OSI to correct these irregularities, briefly reopen nominations, and extend the voting deadline. However, if OSI doesn't want to do that, there is another reasonable solution. As explained in OSI's by-laws and elsewhere, OSI's Directors elections are purely advisory. Like most nonprofits, the OSI is governed by a self-perpetuating (not an elected) Board. I bet with all the talk of elections, you didn't even know that!

Frankly, I have no qualms with a nonprofit structure that includes a self-perpetuating Board. While it's not a democratic structure, a self-perpetuating Board of principled Directors does solve the problems created in a Member-based organization. In Member-based organizations, votes are for sale. Any company with resources to buy Memberships for its employees can easily dominate the election. While OSI probably has yet to experience this problem, if OSI grows its Membership (as it seeks to), OSI will surely face that problem. Self-perpetuating Boards aren't perfect, but they do prevent this problem.

Meanwhile, having now witnessed OSI's nomination and the campaign process from the inside, it really does seem to me that OSI doesn't really take this election all that seriously. And, OSI already has in mind the kinds of candidates they want. For example, during one of the two nominee orientation calls, a key person in the OSI Leadership said (regarding item 4 of Fontana's and my shared platform) [quote paraphrased from my memory]: If you don't want to agree to these things, then an OSI Directorship is not for you and you should withdraw and seek a place to serve elsewhere. I was of course flabbergasted to be told that a desire to avoid proprietary software should disqualify me (at least in view of the current OSI leadership). But, that speaks to the fact that the OSI doesn't really want to have Board elections in the first place. Indeed, based on that and many other things that the OSI leadership has said during this process, it seems to me they'd actually rather hand-pick Directors to serve than run a democratic process. There's no shame in a nonprofit that prefers a self-perpetuating Board; as I said, most nonprofits are not Membership organizations nor allow any electorate to fill Board seats.

Meanwhile, OSI's halfway solution (i.e., a half-heartedly organized election that isn't really binding) seems designed to manufacture consent. OSI's Affiliates and paid individual Membership are given the impression they have electoral power, but it's an illusion. Giving up on the whole illusion would be the most transparent choice for OSI, and if the OSI would rather end these advisory elections and just self-perpetuate, I'd support that decision.

Update on 2025-03-07: Chad Whitacre, candidate in OSI's “Member district”, has endorsed my suggestion that OSI reopen nominations briefly for this election. While I still urge voters in the “Member district” to rank my running mate, Richard Fontana, first in that race, I believe Chad would be a fine choice as your second listed candidate in the ranked choice voting.

Posted
bkuhn@ebb.org (Bradley M. Kuhn)
Candidacy for 2025 Open Source Initiative Elections

I accepted nomination as a candidate for an “Affiliate seat” in the Open Source Initiative (OSI) Board of Directors elections. I was nominated by the following four OSI Affiliates:

  • The Matrix Foundation
  • The Perl and Raku Foundation
  • Software Freedom Conservancy (my employer)
  • snowdrift.coop

To my knowledge, I am the only Affiliate candidate, in the history of these OSI Board of Directors “advisory” elections, to be nominated by four Affiliates.

I am also endorsed by another Affiliate, the Debian Project.

You can see my official candidate page on OSI's website. This blog post will be updated throughout the campaign to link to other posts, materials, and announcements related to my candidacy.

Updates During the Campaign

I ran on the “OSI Reform Platform” with Richard Fontana.

I created a Fediverse account specifically to interact with constituents and the public, so please also follow that on floss.social/@bkuhn.

Posted
Peter Hutterer
libinput and 3-finger dragging

Ready in time for libinput 1.28 [1] and after a number of attempts over the years, we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad. Instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence, without having to tap or physically click and hold a button, so you might be able to see the appeal right there.

Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is in the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it does fit in quite nicely with the rest of the features we have though.

There really isn't much more to say about the new feature except: It's configurable to work as a 4-finger drag too, so if you mentally substitute all the threes with fours in this article before re-reading it, that would save me having to write another blog post. Thanks.

[1] "soonish" at the time of writing

Peter Hutterer
GNOME 48 and a changed tap-and-drag drag lock behaviour

This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users.

This is a feature of a feature of a feature, so let's start at the top.

"Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap corresponds emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and to be recognised the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads.

"Tap-and-drag" refers to the ability to keep the pointer down so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to it's final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful.

"Drag lock" refers to the ability of keeping the mouse button pressed until "unlocked", even if the finger moves off the touchpads. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button, instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap with any number of finger releases in between. Sounds (and is) complicated to explain, is quite easy to try and once you're used to it it will feel quite natural.

The above behaviour is the new behaviour which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger stayed up past a certain timeout. This was less predictable and caused issues with users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (Alanis Morissette-style ironically) faster to release (a tap can be performed before the previous timeout would've expired).

Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks!

As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true
  
All features above are actually handled by libinput; this is just about a default change in GNOME.