The lower-post-volume people behind the software in Debian.

I recently did some vibe coding to come up with this demo, which may be useful for brain training if you happen to have focus problems. Using the latest Claude for this worked great. I did the whole thing without writing any code myself, and with only a bit of inspecting the code it produced. So on the whole vibe coding works great, especially for someone like me who knows how to code but would rather not learn the vagaries of front end development. But it’s nowhere near the level of simply asking the AI to write something and having it come out right. In fact being a programmer helps massively, and may be an absolute requirement for certain tasks.

Vibe coding definitely changes the, uh, vibe of coding. Traditional programming feels like a cold uncaring computer calling you an idiot a thousand times a day. Of course the traditional environment isn’t capable of calling you an idiot so it’s really you calling yourself an idiot, but it’s unpleasant anyway. With vibe coding you’re calling the AI an idiot a thousand times per day, and it’s groveling in response every time, which is a lot more fun.

I’d describe Claude as being in the next-to-bottom tier of programmer candidates I’ve ever interviewed. The absolute bottom tier are people who literally don’t know how to code, but above them are people who have somehow bumbled their way through a CS degree despite not understanding anything. It’s amazingly familiar with and fluent in code, and in fact far faster and more enthusiastic than any human ever possibly could be, but everything vaguely algorithmic it hacks together in the absolute dumbest way imaginable (unless it’s verbatim copying someone else’s homework, which happens a lot more often than you’d expect). You can correct this, or better yet head it off at the pass, by describing in painstaking detail each of the steps involved. Since you’re describing it in English instead of code, and it’s good at English, this is still a lot less effort and a huge time saver. Sometimes it just can’t process what you’re telling it is causing a problem, so it assumes your explanation is correct and plays along, happily pretending to understand what’s happening. Whatever, I’d flunk it from a job interview, but it isn’t getting paid and is super fast, so I’ll put up with it. On some level it’s mostly translating from English into code, and that’s a big productivity boost right there.

Often it writes bugs. It’s remarkably good at avoiding typos, but extremely prone to logical errors. The most common sort of bug is that it doesn’t do what you asked it to, or at least what it did has no apparent effect. You can then tell it that it didn’t do the thing and ask it to try again, which usually works. Sometimes it makes things which just seem janky and weird, at which point it’s best to suggest that it’s probably accumulated some coding cruft and ask it to clean up and refactor the code, in particular removing unnecessary code and consolidating redundant code. Usually after that it will succeed if you ask it to try again. If you skim the code and notice something off you can ask it ‘WTF is that?’ and it will usually admit something is wrong and fix it, but you get better results by being more polite. I specifically said ‘Why is there a call to setTimeout?’ and it fixed a problem in response. It would be helpful if you could see line numbers in the code view for Claude, but maybe the AI doesn’t understand those as reference points yet.

If it still has problems debugging then you can break down the series of logical steps of what should be happening, explain them in detail, and ask it to check them individually to identify which of them is breaking down. This is a lot harder than it sounds. I do this even when pair programming with experienced human programmers, who often find the exercise humiliating. But asking the AI to articulate the steps itself works okay.

Here’s my script of prompts to use when debugging while vibe coding, broken down into cut-and-pasteable commands:

  1. I’m doing X, I should be seeing Y but I’m seeing Z, can you fix it? (More detail is better. Being a programmer helps with elucidating this but isn’t necessary.)

  2. That didn’t fix the problem, can you try again?

  3. Now I’m seeing X, can you fix it?

  4. You seem to be having some trouble here. Maybe the code has accumulated some cruft with all these edits we’re doing. Can you find places where there is unused code, redundant functionality, and other sorts of general cleanups, refactor those, and try again?

  5. You seem to be getting a little lost here. Let’s make a list of the logical steps which this is supposed to go through, what should happen with each of them, then check each of those individually to see where it’s going off the rails. (This works a lot better if you can tell it what those steps are but that’s very difficult for non-programmers.)

Of course, since these are so brainless to do, Claude will probably start doing them without prompting in the future, but for now they’re helpful. They’re also helpful for humans to follow when they’re coding.

On something larger and more technical it would be a good idea to have automated tests, which can of course be written by the AI as well. When I’m coding I generally make a list of what the tests should do in English, then implement the tests, then run and debug them. Those are sufficiently different brain states that I find it’s helpful to do them in separate phases. (I also often write reams of code before starting the testing process, or even checking whether it will parse, a practice which sometimes drives my coworkers insane.)

A script for testing goes something like this:

  1. Now that we’ve written our code we should write some automated tests. Can you suggest some tests which exercise the basic straight through functionality of this code?

  2. Those are good suggestions. Can you implement and run them?

  3. Now that we’ve tested basic functionality we should try edge cases. Can you suggest some tests which more thoroughly exercise all the edge cases in this code?

  4. Those are good suggestions. Can you implement and run them?

  5. Let’s make sure we’re getting everything. Are there any parts of the code which aren’t getting exercised by these tests? Can we write new tests to hit all of that, and if not can some of that code be removed?

  6. Now that we’ve got everything tested are there any refactorings we can do which will make the code simpler, cleaner, and more maintainable?

  7. Those are good ideas, let’s do those and get the tests passing again. Don’t change the tests in the process; leave them exactly as they are and fix the code.

Of course this is again so brainless that it will probably be programmed into the AI assistants to do exactly this when asked to write tests, but for now it’s helpful. Also helpful as a script for human programmers to follow. A code coverage tool is also helpful for both, but it seems Claude isn’t hooked up to one of those yet.

Posted Wed May 28 22:08:37 2025 Tags:

First of all, what's outlined here should be available in libinput 1.29 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:

Come libinput 1.29, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.

The motivation for this is a few unfixable issues - issues we knew how to fix but cannot actually implement and/or ship without breaking other devices. One example is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver, the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. Which means our quirks could only apply to the Bolt receiver (and thus any mouse connected to it) - that's a rather bad idea though, we'd break every other mouse using the same receiver. Another example is an issue with worn-out mouse buttons - on that device the behavior was predictable enough, but any heuristic would catch a lot of legitimate button presses. That's fine when you know your mouse is slightly broken and at least it works again, but it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing, etc.

libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.

So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They are not full access into libinput; they are closer to a udev-hid-bpf in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons for example) and look at and modify the event stream as it comes from the kernel device. For this, libinput changed internally to process something called an "evdev frame", which is a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway, but so far we didn't explicitly carry them around as such. Now we do, and we can pass them through to the plugin(s) to be modified.

The aforementioned Logitech MX master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:

libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then
        device:connect("evdev-frame", function (_, frame)
            for _, event in ipairs(frame.events) do
                if event.type == evdev.EV_REL and 
                   (event.code == evdev.REL_HWHEEL or 
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)
This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation. I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.

So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]

Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.

If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)

[1] Benjamin Tissoires actually had a go at WASM plugins (via rust). But ... a lot of effort for rather small gains over Lua

Posted Wed May 21 04:09:00 2025 Tags:

Rumor has it a lot of people lie about their relationship status while dating. This causes a lot of problems for people who don’t lie about their relationship status because of all the suspicion. I can tell you from experience, in what is probably peak first world problems, that getting your wikipedia page updated to say that you’re divorced can be super annoying. (Yes I’m divorced and single).

Here is a suggestion for how to help remedy this[1]. People can put a relationship code in their public profiles. This is a bit like the Facebook relationship status, but more flexible, can go anywhere, and its meaning is owned by the people instead of Meta. The form of a relationship code can be ‘Relationship code: XYZ’ but it’s cuter and more succinct to use an emoji, with 💑 (‘couple with heart’) being the most logical[2]. Here are a few suggestions for what to do with this, starting with the most important and moving to the less common:

💑 single: This means ‘There is nobody else in the world who would get upset about me saying I’m single in this profile’ in a way which is publicly auditable. Proactively having this in one’s profile is a bit less effective than getting asked by someone to post it and then doing so, because some people make extra profiles just for dating. Some people suck. For that reason this is especially effective in profiles which are more likely to be tied to the real person, like LinkedIn, but unfortunately posting relationship status there is a bit frowned on.

💑 abcxyz: The code abcxyz can be replaced by anything. The idea is that someone gives the other person a code which they randomly came up with to post. This is a way of auditably showing that you’re single but not actively courting anybody else. Appropriate for early in a relationship, even before a first date. Also a good way of low-key proving you are who you claim to be.

💑 in a relationship with abcxyz: Shows that a relationship is serious enough to announce publicly

💑 married to abcxyz: Means that a relationship is serious enough to let it impact your taxes

💑 poly: Shows that you’re in San Francisco

💑 slut: Probably a euphemism for being a sex worker

💑 No: “People are constantly creeping into my DMs and I’m not interested in you.”

[1] A lot of people seem to not appreciate dating advice coming from, ahem, my demographic. I’m posting this because I think it’s a good idea and am hoping someone more appropriate than me becomes a champion for it.

[2] There are variants on this emoji which disambiguate the genders of the people and give other skin tones. It’s probably a good idea for everyone to make at least one of the people match their own skin tone. People may choose variants to indicate their gender/skin tone preferences for partners. People giving challenge codes may also request that the emoji variant be updated to indicate that the person is okay with publicly expressing openness to dating someone who matches them. Nobody should ever take offense at what someone they aren’t in a relationship with uses as their relationship code emoji. People’s preferences are a deeply personal thing and none of your business.

Posted Sat May 17 23:42:07 2025 Tags:

There’s a general question of which things are canonical discoveries and which are inventions. To give some data answering that question, and because I think it’s fun, I set about to answer the question: What is the most difficult 3x3x3 packing puzzle with each number of pieces? The rules are:

  • Goal is to pack all the pieces into a 3x3x3. There are no gaps or missing pieces

  • Pieces are entirely composed of cubes

  • Each number of pieces is a separate challenge

  • Monominos (single cube pieces) are allowed

  • The puzzle should be as difficult as possible. The definition of ‘difficult’ is left intentionally vague.

The question is: Will different people making answers to these questions come up with any identical designs? I’ve done part of the experiment in that I’ve spent, ahem, some time on coming up with my own designs. It would be very interesting for someone else to come up with their own designs and to compare to see if there are any identical answers.

Don’t look at these if you want to participate in the experiment yourself, but I came up with answers for 3, 4, 5, 6, 7, 8, 9, 10, and 12 pieces. The allowance of monominos results in the puzzles with more pieces acting like a puzzle with a small number of pieces and a lot of gaps. It may make more sense to optimize for the most difficult puzzle with gaps for each (small) number of pieces. There’s another puzzle I found later which is very similar to one of mine but not exactly the same, probably for that reason.

If you do this experiment and come up with answers yourself please let me know the results. If not you can of course try solving the puzzles I came up with for fun. They range from fun and reasonably simple to extremely difficult.

Posted Sun May 11 23:46:59 2025 Tags:

Let’s say that you’re making a deep neural network and want to use toroidal space. For those that don’t know, toroidal space for a given number of dimensions has one value in each dimension between zero and one which ‘wraps around’, so when a value goes above one you subtract one from it and when it goes below zero you add one to it. The distance formula in toroidal space is similar to what it is in open-ended space, but instead of the distance in each dimension being a-b it’s that value wrapped around to a value between -1/2 and 1/2, so for example 0.25 stays where it is but 0.75 changes to -0.25 and -0.7 changes to 0.3.
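
To make the wrap-around concrete, here’s a minimal Python sketch of the per-dimension wrapped difference and the distance built from it (the function names are just illustrative):

import numpy as np

def wrapped_diff(a, b):
    # Per-dimension difference wrapped into [-1/2, 1/2)
    return (a - b + 0.5) % 1.0 - 0.5

def toroidal_distance(a, b):
    # Euclidean distance between two points on the unit torus
    return np.linalg.norm(wrapped_diff(np.asarray(a), np.asarray(b)))

# 0.25 stays 0.25, 0.75 wraps to -0.25, -0.7 wraps to 0.3
print(wrapped_diff(np.array([0.25, 0.75, -0.7]), np.zeros(3)))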

Why would you want to do this? Well, it’s because a variant on toroidal space is probably much better at fitting data than conventional space is for the same number of dimensions. I’ll explain the details of that in a later post[1] but it’s similar enough that the techniques for using it in a neural network are the same. So I’m going to explain in this post how to use toroidal space, even though it’s probably comparable to or only slightly better than the conventional approach.

To move from conventional space to an esoteric one you need to define how positions in that space are represented and make analogues of the common operations. Specifically, you need to find analogues for dot product and matrix multiplication and define how back propagation is done across those.

Before we go there it’s necessary to get an intuitive notion of what a vector is and what dot product and matrix multiplication are doing. A vector consists of two things: a direction and a magnitude. A dot product is the cosine of the angle between two vectors times the product of their magnitudes. Angle in this case is a type of distance. You might wonder what the intuitive explanation of including the magnitudes is. There isn’t any; you’re better off normalizing them away, which in AI is known as working in ‘cosine space’. I’ll just pretend that that’s how it’s always done.

When a vector is multiplied by a matrix, that vector isn’t being treated as a position in space; it’s a list of scalars. Those scalars are each assigned the direction and magnitude of a vector in the matrix. Each direction is given a weight equal to the value of its scalar times its magnitude. A weighted average of all the directions is then taken.
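
That reframing can be checked numerically. Here’s a small generic sketch (nothing in it is specific to any particular network) showing that summing each scalar times its column’s magnitude times its column’s direction reproduces the ordinary matrix-vector product:

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 3))   # a made-up weight matrix
x = rng.normal(size=3)        # the input, viewed as a list of scalars

# Split each column of M into a direction (unit vector) and a magnitude.
magnitudes = np.linalg.norm(M, axis=0)
directions = M / magnitudes

# Each scalar x[i] contributes its column's direction, weighted by
# x[i] * magnitudes[i]; summing those contributions is exactly M @ x.
weighted = sum(x[i] * magnitudes[i] * directions[:, i] for i in range(len(x)))
assert np.allclose(weighted, M @ x)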

The analogue of (normalized) dot product in toroidal space is simply distance. Back propagating over it works how you would expect. There’s a bit of funny business with the possibility of the back propagation causing the values to snap over the 1/2 threshold but the amount of movement is small enough that that’s unusual and AI is so fundamentally handwavy that ignoring things like that doesn’t change the theory much.

The analogue of a matrix in toroidal space is a list of positions and weights. (Unlike in conventional space, in toroidal space there’s a type distinction between ‘position’ and ‘position plus weight’, whereas in conventional space it’s always ‘direction and magnitude’.) To ‘multiply’ a vector by this ‘matrix’ you do a weighted average of all the positions, with weights corresponding to the scalar times the given weight. At least, that’s what you would like to do. The problem is that due to the wrap-around nature of the space it isn’t clear which image of each position should be used.

To get an intuition for what to do about the multiple images problem, let’s consider the case of only two points. For this case we can find the shortest path between them and simply declare that the weighted average will be along that line segment. If some of the dimensions are close to the 1/2 flip over then either one will at least do something for the other dimensions and there isn’t much signal for that dimension anyway so somewhat noisily using one or the other is fine.

This approach can be generalized to larger numbers of points as follows: First, pick an arbitrary point in space. We’ll think of this as a rough approximation of the eventual solution. Since it’s literally a random point it’s a bad approximation, but we’re going to improve it. We find, for each of the points being averaged, the image of it closest to the current approximation, and use those images when computing the weighted average. That yields a new approximate answer. We then repeat. Most likely in practical circumstances this settles down after only a handful of iterations, and if it doesn’t there probably isn’t much improvement happening with each iteration anyway. There’s an interesting mathematical question as to whether this process must always hit a unique fixed point. I honestly don’t know the answer to that question. If you know the answer please let me know.
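
Here’s a sketch of that successive-approximation procedure in Python; the starting point and iteration count are arbitrary, this is just the idea rather than tuned code:

import numpy as np

def nearest_image(points, ref):
    # Shift each point by whole numbers so it becomes the image closest to ref
    return ref + ((points - ref + 0.5) % 1.0 - 0.5)

def toroidal_weighted_average(points, weights, iterations=10, seed=0):
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    approx = np.random.default_rng(seed).random(points.shape[1])  # arbitrary start
    for _ in range(iterations):
        images = nearest_image(points, approx)   # closest image of every point
        approx = np.average(images, axis=0, weights=weights) % 1.0
    return approx

# Two points straddling the wrap-around: the average lands near 0.0, not 0.5
print(toroidal_weighted_average([[0.95, 0.1], [0.05, 0.2]], [1.0, 1.0]))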

The way to back propagate over this operation is to assume that the answer you settled on via the successive approximation process is the ‘right’ one and look at how that one marginally moves with changing the coefficients. As with calculating simple distance the snap-over effects rarely are hit with the small changes involved in individual back propagation adjustments and the propagation doesn’t have to be perfect, it just has to on average produce improvement.

[1] It involves adding ‘ghost images’ to each point which aren’t just at the wraparound values but also correspond to other positions in a Barnes-Wall lattice, which is a way of packing spheres densely. Usually ‘the Barnes-Wall lattice’ refers specifically to 16 dimensions but the construction generalizes straightforwardly to any power of 2.

Posted Wed May 7 04:23:04 2025 Tags:

AI can play the game Go far better than any human, but oddly it has some weaknesses which allow humans to exploit and defeat it. Patching over these weaknesses is very difficult, and teaches interesting lessons about what AI, traditional software, and us humans are good and bad at. Here’s an example position showing the AI losing a big region after getting trapped:[1]

[Image: Go board showing cyclic attack]

For those of you not familiar with the game, Go is all about connectivity between stones. When a region of stones loses all connection to empty space, as the red-marked one in the above position just did, it dies. When a group surrounds two separate regions it can never be captured, because the opponent only places one stone at a time and hence can’t fill both at once. Such a region is said to have ‘two eyes’ and be ‘alive’.

The above game was in a very unusual position where both the now-dead black group and the white one it surrounds only have one eye and not much potential for any other. The AI may have realized that this was the case, but optically the position looks good, so it keeps playing elsewhere on the board until that important region is lost.

Explaining what’s failing here and why humans can do better requires some background. Board games have two components to them: ‘tactics’, which encompasses what happens when you do an immediate look-ahead of the position at hand, and ‘position’, which is a more amorphous concept encompassing everything else you can glean about how good a position is from looking at it and using your instincts. There’s no fine white line between the two, but for games on a large enough board with enough moves in a game it’s computationally infeasible to do a complete exhaustive search of the entire game, so there is a meaningful difference between the two.

There’s a bizarre contrast between how classical software and AI works. Traditional software is insanely, ludicrously good at logic, but has no intuition whatsoever. It can verify even the most complex formal math proof almost immediately, and search through a number of possibilities vastly larger than what a human can do in a lifetime in just an instant, but if any step is missing it has no way of filling it in. Any such filling in has to follow heuristics which humans painstakingly created by hand, and usually leans very heavily on trying an immense number of possibilities in the hope that something works.

AI is the exact opposite. It has (for some things) ludicrously, insanely good intuition. For some tasks, like guessing at protein folding, it’s far better than any human ever possibly could be. For evaluating Chess or Go positions only a handful of humans at most can beat an AI running purely on instinct. What AI is lacking in is logic. People get excited when it demonstrates any ability to do reasoning at all.

Board games are special in that the purely logical component of them is extremely well defined and can be evaluated exactly. When Chess computers first overtook the best humans it was by having a computer throw raw computational power at the problem with fairly hokey positional evaluation underneath it which had been designed by hand by a strong human player and was optimized more for speed than correctness. Chess is more about tactics than position so this worked well. Go has a balance more towards position so this approach didn’t work well until better positional evaluation via AI was invented. Both Chess and Go have a balance between tactics and position because we humans find both of those interesting. It’s possible that sentient beings of the future will favor games which are much more heavily about position because tactics are more about who spends more computational resources evaluating the position than who’s better at the game.

In some sense doing lookahead in board games (the technical term for the standard algorithm is ‘alpha-beta pruning’) is a very special form of hand-tuned logic. But it by definition perfectly emulates exactly what’s happening in a board game, so it can be mixed with a good positional evaluator to get almost the best of both. Tactics are covered by the lookahead, and position is covered by the AI.
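
As a sketch of what that mix looks like, here’s a generic negamax search with alpha-beta pruning where the positional evaluator is just a function you plug in; evaluate, legal_moves, and apply_move are placeholders rather than any particular engine’s API:

import math

def alpha_beta(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
    # evaluate() returns a score from the point of view of the side to move;
    # in a real engine this is where the learned positional evaluation goes.
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)   # tactics exhausted: fall back on intuition
    best = -math.inf
    for move in moves:
        child = apply_move(position, move)
        score = -alpha_beta(child, depth - 1, -beta, -alpha,
                            evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:           # the opponent won't allow this line: prune
            break
    return best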

But that doesn’t quite cover it. The problem is that this approach keeps the logic and the AI completely separate from each other and doesn’t have a bridge between them. This is particularly important in Go, where there are lots of local patterns which will have to get played out eventually, and you can get some idea of what will happen at that time by working out local tactics now. Humans are entirely capable of looking at a position, working through some tactics, and updating their positional evaluation based on that. The current generation of Go AI programs don’t even have hooks to make that possible. They can still beat humans anyway, because their positional evaluation alone is comparable to if not better than what a human gets to while using that feedback, and their ability to work out immediate tactics is ludicrously better than ours. But the above game is an exception. In that one something extremely unusual happened: the immediate optics of the position were misleading, and the focus of game play was kept off that part of the board long enough that the AI didn’t use its near-term tactical skill to figure out that it was in danger of losing. A human can count up the effective amount of space in the groups battling it out in the above example by working out the local tactics, and gets a further boost because what matters is the sum total of them rather than the order in which they’re invoked. Simple tactical evaluation doesn’t realize this independence and has to work through exponentially more cases.

The human-executable exploits of the current generation of Go AIs are not a single silly little problem which can be patched over. They are particularly bad examples of systemic limitations in how those AIs operate. It may be possible to tweak them enough that humans can’t get away with such shenanigans any more, but the fact remains that they are far from perfect, and some better type of AI which does a better job of bridging between instinct and logic could probably play vastly better than they do now while using far fewer resources.

[1] This post is only covering the most insidious attack but the others are interesting as well. It turns out that the AIs think in Japanese scoring despite being trained exclusively on Chinese scoring, so they’ll sometimes pass in positions where the opponent passing in response results in them losing and they should play on. This can be fixed easily by always looking ahead after a pass to see who actually wins if the opponent immediately passes in response. Online sites get around the issue by making ‘pass’ not really mean pass but more ‘I think we can come to agreement about what’s alive and dead here’, and if that doesn’t happen it reverts to a forced playout with Chinese scoring and pass meaning pass.

The other attack is ‘gift’, where a situation can be set up in which the AI can’t do something it would like to due to ko rules and winds up giving away stones in a way which makes it strictly worse off. Humans easily recognize this and don’t do things which make their position strictly worse. Arguably the problem is that the AI positional evaluator doesn’t have good access to which positions were already hit, but it isn’t clear how to do that well. It could probably also be patched around by making the alpha-beta pruner ding the evaluation when it finds itself trying and failing to repeat a position, but that needs to be able to handle ko battles as well. Maybe it’s also a good heuristic for ko battles.

Both of the above raise interesting questions about which tweaks to a game playing algorithm are bespoke and hence violate the ‘zero’ principle that an engine should work for any game and not be customized to a particular one. Arguably the ko rule is a violation of the schematic of a simple turn-based game, so it’s okay to make exceptions for that.

Posted Sat May 3 21:23:58 2025 Tags:

I’ve written several previous posts about how to make a distributed version control system which has eventual consistency, meaning that no matter what order you merge different branches together they’ll always produce the same eventual result. The details are very technical and involved so I won’t rehash them here, but the important point is that the merge algorithm needs to be very history aware. Sometimes you need to make small sacrifices for great things. Sorry I’m terribly behind on making an implementation of these ideas. My excuse is that I have an important and demanding job which doesn’t leave much time for hobby coding, and I have lots of other hobby coding projects which seem important as well.

My less lame excuse is that I’ve been unsure of how to display merge conflicts. In some sense a version control system with eventual consistency never has real conflicts; it just has situations in which changes seemed close enough to stepping on each other’s toes that the system decided to flag them with conflict markers and alert the user. The user is always free to simply remove the conflict markers and keep going. This is a great feature. If you hit a conflict which somebody else already cleaned up you can simply remove the conflict markers, pull from the branch which it was fixed in, and presto, you’ve got the cleanup applied locally.[1]

So the question becomes: When and how should conflict markers be presented? I gave previous thoughts on this question over here but am not so sure of those answers any more. In particular there’s a central question which I’d like to hear people’s opinions on: Should line deletions by themselves ever be presented as conflicts? If there are two lines of code one after the other and one person deletes one and another person deletes the other, it seems not unreasonable to let it through without raising a flag. It’s not like a merge conflict between nothing on one side and nothing on the other side is very helpful. There are specific examples I can come up with where this is a real conflict, but then there are examples I can come up with where changes in code not in close proximity produce semantic conflicts as well. Ideally you should detect conflicts by having extensive tests which are run automatically all the time, so conflicts will cause those to fail. The version control system flagging conflicts is for it to highlight the exact location of particularly egregious examples.

It also seems reasonable that if one person deletes a line of code and somebody else inserts a line of code right next to it then that shouldn’t be a conflict. But this is getting shakier. The problem is that if someone deletes a line of code and somebody else ‘modifies’ it then arguably that should be a conflict, but the version control system thinks of that as both sides having deleted the same line of code and one side inserting an unrelated line which happens to look similar. The version control system having a notion of individual lines being ‘modified’, and being able to merge those modifications together, is a deep rabbit hole I’m not eager to dive into. Like in the case of deletions on both sides, a merge conflict between something on one side and ‘nothing’ on the other isn’t very helpful anyway. If you really care about this then you can leave a blank line when you delete code, or if you want to really make sure, replace it with a unique comment. On the other hand the version control system is supposed to flag things automatically and not make you engage in such shenanigans.

At least one thing is clear: Decisions about where to put conflict markers should only be made on whether lines appear in the immediate parents and the child. Nobody wants the version control system to tell them ‘These two consecutive lines of code both come from the right but the ways they merged with the left are so wildly different that it makes me uncomfortable’. Even if there were an understandable way to present that history information to the user, which there isn’t, everybody would respond by simply deleting the pedantic CYA marker.

I’m honestly unsure whether deleted lines should be considered. There are reasonable arguments on both sides. But I would like to ignore deleted lines because it makes both the UX and the implementation much simpler. Instead of there being eight cases of the states of the parents and the child, there are only four, because only cases where the child line is present are relevant[2]. In all conflict cases there will be at least one line of code on either side. It even suggests an approach to how you can merge together many branches at once and see conflict markers once everything is taken into account.[3]
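
To make the four cases concrete, here’s a tiny hypothetical Python sketch of that classification and the ‘ignore deletions’ rule being debated; it isn’t taken from any actual implementation:

def origin(in_left_parent, in_right_parent):
    # Classify a line that survives into the merged child by which parents had it.
    # Ignoring deleted lines means only these four cases exist, not eight.
    if in_left_parent and in_right_parent:
        return 'both'
    if in_left_parent:
        return 'left'
    if in_right_parent:
        return 'right'
    return 'neither'   # the uncommon but important criss-cross case

def adjacent_lines_conflict(origin_a, origin_b):
    # Hypothetical rule: flag a conflict only when two adjacent surviving lines
    # each came from a single parent, and those parents differ.
    return {origin_a, origin_b} == {'left', 'right'}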

It may be inevitable that I’ll succumb to practicality on this point, but I at least want to reassure myself that there isn’t overwhelming feedback in the other direction before doing so. It may seem, given the number of other features I’m adding, that ‘show pure deletion conflicts’ is small potatoes, but version control systems are important and feature decisions shouldn’t be made in them without serious analysis.

[1] It can even cleanly merge together two different people applying the exact same conflict resolution independently of each other, at least most of the time. An exception is if one person goes AE → ABDE → ABCDE and someone else goes AE → ACE → ABCDE then the system will of necessity think the two C lines are different and make a result including both of them, probably as A>BCD>BCD|E. It isn’t possible to avoid this behavior without giving up eventual consistency, but it’s arguably the best option under the circumstances anyway. If both sides made their changes as part of a single patch this can be made to always merge cleanly.

[2] The cases are that a line which appears in the child appears in both parents, just the left, just the right, or neither. That last one happens in the uncommon but important criss-cross case. One thing I still feel good about is that conflict markers should be lines saying ‘This is part of a conflict section, the section below this came from X’ where X is either local, remote, or neither, and there’s another special annotation for ‘this is the end of a conflict section’.

[3] While it’s clear that a line which came from Alice but no other parent and a line which came from Bob but no other parent should be in conflict when immediately next to each other, it’s much less clear whether a line which came from both Alice and Carol but not Bob should conflict with a line which came from Bob and Carol but not Alice. If that should be presented as ‘not a conflict’ then if the next line came from David but nobody else it isn’t clear how far back the non-David side of that conflict should be marked as starting.

Posted Tue Apr 29 04:39:25 2025 Tags:
Looking at what's happening, and analyzing rationales. #nist #iso #deployment #performance #security
Posted Wed Apr 23 22:40:28 2025 Tags:

Computer tools for providing commentary on Chess games are currently awful. People play over games using Stockfish, which is a useful but not terribly relevant tool, and use that as a guide for their own commentary. There are Chess Youtubers who aren’t strong players, and it’s obvious to someone even of my own mediocre playing strength (1700 on a good day) that they don’t know what they’re talking about, because in many situations there’s an obvious best move which fails due to some insane computer line but they don’t even cover it because the computer thinks it’s clearly inferior. Presumably commentary I generated using Stockfish as a guide would be equally obvious to someone of a stronger playing strength than me. People have been talking about using computers to make automated commentary on Chess positions since computers started getting good at Chess, and the amount of progress made has been pathetic. I’m now going to suggest a tool which would be a good first step in that process, although it still requires a human to put together the color commentary. It would also be a fun AI project on its own merits, and possibly have a darker use which I’ll get to at the end.

There’s only one truly objective metric of how good a Chess position is, and that’s whether it’s a win, loss, or draw with perfect play. In a lost position all moves are equally bad. In a won position any move, no matter how ridiculous, which preserves the theoretical win is equally good. Chess commentary which was based off this sort of analysis would be insane. Most high level games would be a theoretical draw until some point deep into already-lost-for-a-human territory, at which point some uninteresting move would be labeled the losing blunder because it missed out on some way of theoretically eking out a draw. Obviously such commentary wouldn’t be useful. But commentary from Stockfish isn’t much better. Stockfish commentary is how a roughly 3000-rated player feels about the position if it assumes it’s playing against an opponent of roughly equal strength. That’s a super specific type of player and not one terribly relevant to how humans might fare in a given position. It’s close enough to perfect that a lot of the aforementioned ridiculousness shows up. There are many exciting tactical positions which are ‘only’ fifteen moves or so from being done where the engine says ‘ho hum, nothing to see here, I worked it out and it’s a dead draw’. What we need for Chess commentary is a tool geared towards human play, which says something about human games.

Here’s an idea of what to build: Make an AI engine which gets inputs of position, is told the ratings of the two players, the time controls, and the time left, and gives probabilities for a win, loss, or draw. This could be trained by taking a corpus of real human games and optimizing for Brier score. Without any lookahead this approach is limited by how strong of an evaluation it can get to, but that isn’t relevant for most people. Current engines at one node are probably around 2500 or so, so it might peter out in usefulness for strong grandmaster play, but you have my permission to throw in full Stockfish evaluations as another input when writing game commentary. The limited set of human games might hurt its overall playing strength, but throwing in a bunch of engine games for training or starting with an existing one node network is likely to help a lot. That last one in particular should save a lot of training time.
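
Here’s a rough sketch of what that training objective could look like in PyTorch; the feature sizes and architecture are placeholders I made up, not a tested design:

import torch
import torch.nn as nn

N_POSITION_FEATURES = 768   # e.g. 12 piece planes x 64 squares (placeholder)
N_EXTRA = 4                 # both ratings, time control, time left (placeholder)

model = nn.Sequential(
    nn.Linear(N_POSITION_FEATURES + N_EXTRA, 256),
    nn.ReLU(),
    nn.Linear(256, 3),      # logits for win / draw / loss
)

def brier_loss(logits, outcome_onehot):
    # Brier score: squared error between predicted probabilities and the result
    probs = torch.softmax(logits, dim=-1)
    return ((probs - outcome_onehot) ** 2).sum(dim=-1).mean()

# One training step on a stand-in batch; a real corpus of human games goes here.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, N_POSITION_FEATURES + N_EXTRA)
outcomes = nn.functional.one_hot(torch.randint(0, 3, (32,)), num_classes=3).float()
optimizer.zero_grad()
loss = brier_loss(model(features), outcomes)
loss.backward()
optimizer.step()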

For further useful information you could train a neural network on the same corpus of games to predict the probability that a player will make each of the available legal moves based on their rating and the amount of time they spend making their move. Maybe the amount of time the opponent spent making their previous move should be included as well.

With all the above information it would be easy to make useful human commentary like ‘The obvious move here is X but that’s a bad idea because of line Y which even strong players are unlikely to see’. Or ‘This position is an objective win but it’s very tricky with very little time left on the clock’. The necessary information to make those observations is available, even if writing the color commentary is a whole other layer. Maybe an LLM could be trained to do that. It may help a lot for the LLM to be able to ask for evaluations of follow-on moves.

What all the above is missing is the ability to give any useful commentary on positional themes going on in games. Baby steps. Having any of the above would be a huge improvement in the state of the art. The insight that commentary needs to take into account what different skill levels and time controls think of the situation will remain an essential one moving forward.

What I’d really like to see out of the above is better Chess instruction. There are religious wars constantly going on about what the best practical advice for lower rated players is, and the truth is we simply don’t know. When people collect data from games they come up with results like by far the best opening for lower rated players as black is the Caro-Kann, which might or might not be true but indicates that the advice given to lower rated players based on what’s theoretically best is more than a little bit dodgy.

A darker use of the above would be to make a nearly undetectable cheating engine. With the addition of an output giving the range of likely amounts of time the player would take in a given position, it could make an indistinguishable facsimile of a player of a given playing strength in real time, whose only distinguishing feature would be being a bit too typical/generic, and that would be easy enough to throw in bias for. In situations where it wanted to plausibly win a game against a much higher rated opponent it could filter out potential moves whose practical chances in the given situation are bad. That would result in very non-Stockfish-like play, and seemingly a player plausibly of that skill level happening to play particularly well that game. Good luck coming up with anti-cheat algorithms to detect that.

Posted Sun Apr 20 04:09:03 2025 Tags:

There are two different goals of Chess AI: To figure out what is objectively the very best move in each situation, and to figure out what is, for me as a human, the best way to play in each situation. And possibly explain why. The explanations part I have no good ideas for short of doing an extraordinary amount of manual work to make a training set but the others can be done in a fairly automated manner.

(As with all AI posts everything I say here is speculative, may be wrong, and may be reinventing known techniques, but has reasonable justifications about why it might be a good idea.)

First, a controversial high level opinion which saves a whole lot of computational power: There is zero reason to try to train an AI on deep evaluations of positions. Alpha-beta pruning works equally well for all evaluation algorithms. Tactics are tactics. What training should do is optimize for immediate accuracy on a zero node eval. What deep evaluation is good for is generating more accurate numbers to go into the training data. For a real engine you also want switching information to say which moves should be evaluated more deeply by the alpha-beta pruner. For that information I’m going to assume that when doing a deep alpha-beta search you can get information about how critical each branch is and that can be used as a training set. I’m going to hand wave and assume that there’s a reasonable way of making that be the output of an alpha-beta search even though I don’t know how to do it.

Switching gears for a moment, there’s something I really want but doesn’t seem to exist: An evaluation function which doesn’t say what the best move for a godlike computer program is, but one which says what’s the best practical move for me, a human being, to make in this situation. Thankfully that can be generated straightforwardly if you have the right data set. Specifically, you need a huge corpus of games played by humans and the ratings of the players involved. You then train an AI with input of the ratings of the players and the current position and it returns probability of win/loss/draw. This is something people would pay real money for access to and can be generated from an otherwise fairly worthless corpus of human games. You could even get fancy and customize it a bit to a particular player’s style if you have enough games from them, but that’s a bit tricky because each human generates very few games and you’d have to somehow relate them to other players by style to get any real signal.

Back to making not a human player but a godlike player. Let’s say you’re making something like Leela, with lots of volunteer computers running tasks to improve it. As is often the case with these sorts of things the bottleneck seems to be bandwidth. To improve a model you need to send a copy of it to all the workers, have them locally generate suggested improvements to all the weights, then send those back. That requires a complete upload and download of the model from each worker. Bandwidth costs can be reduced either by making generations take longer or by making the model smaller. My guess is that biasing more towards making the model smaller is likely to get better results due to the dramatically improved training and lower computational overhead and hence deeper searches when using it in practice.

To make suggestions on how to improve a model a worker does as follows: First, they download the latest model. Then they generate a self-play game with it. After each move of the game they take the evaluation which the deeper look-ahead gave and train the no-look-ahead eval against that to update their suggested weight updates. Once it’s time for the next generation they upload all their suggested updates to the central server, which sums all the weight update suggestions (possibly weighting them by the number of games which went into them) and uses that for the next generation model.
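
Here’s a sketch of the server-side step, assuming each worker sends back a flat array of suggested weight deltas along with its game count; this is just the idea, not Leela’s actual protocol:

import numpy as np

def next_generation(current_weights, worker_deltas, game_counts):
    # Sum the workers' suggested updates, weighting each by the number of
    # self-play games that went into it, then apply them to the current model.
    total = sum(game_counts)
    combined = sum(delta * (games / total)
                   for delta, games in zip(worker_deltas, game_counts))
    return current_weights + combined

weights = np.zeros(10)                                    # stand-in model
deltas = [np.random.default_rng(i).normal(size=10) * 0.01 for i in range(3)]
weights = next_generation(weights, deltas, game_counts=[40, 25, 35])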

This approach shows how chess is in some sense an ‘easy’ problem for AI because you don’t need training data for it. You can generate all the training data you want out of thin air on an as needed basis.

Obviously there are security issues here if any of the workers are adversarial but I’m not sure what the best way to deal with those is.

Posted Sun Mar 30 02:15:12 2025 Tags:

There are numerous videos reviewing 3D printer filaments to help you determine which is the best. I’ve spent way too much time watching these and running over data and can quickly summarize all the information relevant to most people doing 3D printing: Use PLA. There, you’re finished. If you want or need more information or are interested in running tests yourself read on.

There are two big components of the ‘strength’ of a material: stiffness and toughness. Stiffness refers to how hard it is to bend (or stretch/break) while toughness refers to how well it recovers from being bent (or stretched). These can be further broken down into subcategories, like whether the material successfully snaps back after getting bent or is permanently deformed. An important thing to understand is that the measures used aren’t deep ethereal properties of the material; they’re benchmark numbers based on what happens if you run particular tests. This isn’t a problem with the tests, it’s an acknowledgement of how complex real materials are.

For the vast majority of 3D printing projects what you care about is stiffness rather than toughness. If your model is breaking then most of the time the solution is to engineer it to not bend that much in the first place. The disappointing thing is that PLA has very good stiffness, usually better even than the exotic filaments people like experimenting with. In principle proper annealing can get you better stiffness, but when doing that you wind up reinventing injection molding badly and it turns out the best material for that is also PLA. The supposedly better benchmarks of PLA+ are universally tradeoffs where they get better toughness in exchange for worse stiffness. PLA is brittle, so it shatters on failure, and mixing it with any random gunk tends to make that tradeoff, but it isn’t what you actually want.

(If you happen to have an application where your material bending or impact resistance is important you should consider TPU. The tradeoffs of different versions of that are complex and I don’t have any experience with it so can’t offer much detailed advice.)

Given all the above, plus PLA’s generally nontoxic and easy-to-print nature, it’s the go-to filament for the vast majority of 3D printing applications. But let’s say you need something ‘better’, or are trying to justify the ridiculous amounts of time you’ve spent researching this subject; what is there to use? The starting place is PLA’s weaknesses: It gets destroyed by sunlight, can’t handle exposure to many corrosive chemicals, and melts at such a low temperature that it can be destroyed in a hot car or a sauna. There are a lot of fancy filaments which do better on these benchmarks, but for the vast majority of things PLA isn’t quite good enough at, PETG would fit the bill. The problem with PETG is that it isn’t very stiff. But in principle adding carbon fiber fixes this problem. So, does it?

There are two components of stiffness for 3D printing: layer adhesion and bending modulus. Usually layer adhesion issues can be fixed by printing in the correct orientation, or sometimes printing in multiple pieces at appropriate orientations. One could argue that the answer ‘you can engineer around that’ is a cop-out, but in this case the effect is so extreme that it can’t be ignored. More on this below, but my test is of bending modulus.

Now that I’ve finished an overly long justification of why I’m doing bending modulus tests we can get to the tests themselves. You can get the models I used for the tests over here. The basic idea is to make a long thin bar in the material to be tested, hang a weight from the middle, and see how much it bends. Here are the results:

CarbonX PETG-CF is a great stronger material, especially if you want/need something lightweight and spindly. It’s considerably more expensive than PLA but cheaper and easier to print than fancier materials, and compared to PLA and especially PETG the effective cost is much less because you need less of it. The Flashforge PETG-CF (which is my stand-in ‘generic’ PETG-CF as it’s what turns up in an Amazon search) is a great solution if you want something with about the same price and characteristics as PLA but better able to handle high temperatures and sunlight. It’s so close to PLA that I’m suspicious that it’s actually just a mislabeled roll of PLA, but I haven’t tested that. I don’t know why the Bambu PETG-CF performed so badly. It’s possible it got damaged by moisture between when I got it and when I tested it, but I tried drying it thoroughly and that didn’t help.

Clearly not all carbon fiber filaments are the same and more thorough testing should be done with a setup less janky than mine. If anybody wants to use my models as a starting point for that please go ahead.

The big caveat here is that you can engineer around a bad bending modulus. The force needed to bend a beam goes up with the cube of its thickness in the direction of bending, so unless something has very confined dimensions you can make it much stronger by making it chunkier. You can do it without using all that much more material by making I-beam-like structures. Note that when 3D printing you can make enclosed areas no problem, so the equivalent of an I-beam should have a square, triangular, or circular cross section with a hollow middle. The angle of printing is also of course very important.
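
A quick worked example of that cube law, using the textbook deflection formula for a center-loaded bar between two supports; the load and material numbers are made up and only the scaling matters:

# deflection = F * L**3 / (48 * E * I), with I = b * h**3 / 12 for a
# rectangular cross section, h being the dimension in the bending direction.
F = 10.0      # load in newtons (arbitrary)
L = 0.1       # span between supports in metres
E = 3.5e9     # rough Young's modulus for PLA, in pascals
b = 0.01      # bar width in metres

for h in (0.002, 0.004):                  # double the thickness...
    I = b * h**3 / 12
    deflection = F * L**3 / (48 * E * I)
    print(f"h = {h * 1000:.0f} mm -> deflection = {deflection * 1000:.2f} mm")
# ...and the deflection drops by a factor of 8.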

The conclusion is that if you want something more robust than PLA you can use generic PETG engineered to be very chunky, or PETG-CF with appropriate tradeoffs between price and strength for your application.

A safety warning: Be careful to ventilate your space thoroughly when printing carbon fiber filaments, and don’t shred or machine them after printing. Carbon fiber has the same ill effects on lungs as asbestos, so you don’t want to be breathing it in. In my tests the amount of volatile organic compounds produced was small, but it’s a good idea to be careful.

Posted Sun Mar 23 21:16:26 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory — OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).

Many of us candidates attempted to get clarification over the last 46 hours, but OSI has not communicated clear answers in response to those requests. Based on these unclear responses, the best we can surmise is that OSI intends to modify the ballots cast by Affiliates and Members to remove any candidate who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and surprising change in requirements and deadlines.

So, I decided to sign a Board Agreement with OSI. Here is the PDF that I just submitted to the OSI. OSI did recommend DocuSign, but I emailed the signed PDF to OSI instead, as I refuse to use proprietary software for my FOSS volunteer work on moral and ethical grounds [0] (see my two keynotes (FOSDEM 2019, FOSDEM 2020), co-presented with Karen Sandler, on this subject for more info on that).

My running mate on the Shared Platform for OSI Reform, Richard Fontana, also signed a Board Agreement with OSI before the deadline.


[0] Chad Whitacre has made unfair criticism of my refusal to use DocuSign as part of the (apparently ongoing?) 2025 OSI Board election political campaign. I respond to his comment here in this footnote (& further discussion is welcome using the fediverse, AGPLv3-powered comment feature of my blog). I've put it in this footnote because Chad is not actually raising an issue about this blog post's primary content, but instead attempting to reopen the debate about Item 4 in the Shared Platform for OSI Reform. My response follows:

In addition to the two keynotes mentioned above, I offer these analogies, which I think are apt to this situation:

  • Imagine if the Board of The Nature Conservancy told Directors they would be required, if elected, to use a car service to attend Board meetings. It's easier, they argue, if everyone uses the same service; that way, we know you're on your way, and we pay a group rate anyway. Some candidates for open Board seats retort that's not environmentally sound, and insist, not even that other Board members must stop using the car service, but just that Directors who so choose should be allowed to simply take public transit to the Board meeting — even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
  • Imagine if the Board of Friends of Trees made a decision that all paperwork for the organization be printed on non-recycled paper made from freshly cut tree wood pulp. That paper is easier to move around, they say — and it's easier to read what's printed because of its quality. Some candidates for open Board seats run on a platform that says Board members should be allowed to get their print-outs on 100% post-consumer recycled paper for Board meetings. These candidates don't insist that other Board members use the same paper, so, if these new Directors are seated, this will create extra work for staff because now they have to do two sets of print-outs to prep for Board meetings, and refill the machine with different paper in-between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, a “distracting waste of time”?
  • Imagine if the Board of the ASPCA made the decision that Directors must work through lunch, and the majority of the Directors vote that they'll get delivery from a restaurant that serves no vegan food whatsoever. Is it reasonable for this to be a non-negotiable requirement — such that the other Directors must work through lunch and just stay hungry? Or should they add a second restaurant option for the minority? After all, the ASPCA condemns animal cruelty but doesn't go so far as to demand that everyone also be a vegan. Would the meat-eating directors then say something like “opposing cruelty to animals could be so much more than merely being vegan” to these other Directors?
Posted Wed Mar 19 08:59:00 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on Monday 2025-03-17 at 10:00 US/Pacific. One hour after that, I and at least three other candidates received the following email:

Date: Mon, 17 Mar 2025 11:01:22 -0700
From: OSI Elections team <elections@opensource.org>
To: Bradley Kuhn <bkuhn@ebb.org>
Subject: TIME SENSITIVE: sign OSI board agreement
Message-ID: <civicrm_67d86372f1bb30.98322993@opensource.org>

Thanks for participating in the OSI community polls which are now closed. Your name was proposed by community members as a candidate for the OSI board of directors. Functioning of the board of directors is critically dependent on all the directors committing to collaborative practices.

For your name to be considered by the board as we compute and review the outcomes of the polls, you must sign the board agreement before Wednesday March 19, 2025 at 1700 UTC (check times in your timezone). You’ll receive another email with the link to the agreement.

TIME SENSITIVE AND IMPORTANT: this is a hard deadline.

Please return the signed agreement asap, don’t wait. 

Thanks

OSI Elections team

(The link email did arrive too, with a link to a proprietary service called DocuSign. Fontana downloaded the PDF out of DocuSign and it appears to match the document found here. This document includes a clause that Fontana and I explicitly indicated in our OSI Reform Platform should be rewritten.)

All the (non-incumbent) candidates are surprised by this. OSI told us during the mandatory orientation meetings (on WED 2025-02-19 & again on TUE 2025-02-25) that the Board Agreement needed to be signed only by the election winners who were seated as Directors. No one mentioned (before or after the election) that all candidates, regardless of whether they won or lost, needed to sign the agreement. I've also served on many other 501(c)(3) Boards, and I've never before been asked to sign anything official for service until I was formally offered the seat.

Can someone more familiar with the OSI election process explain this? Specifically, why are all candidates (even those who lose) required to sign the Board Agreement before election results are published? Can folks who ran before confirm for us that this seems to vary from procedures in past years? Please reply on the fediverse thread if you have information. Richard Fontana also reached out to OSI on their discussion board on the same matter.

Posted Mon Mar 17 20:26:00 2025 Tags:

Update 2025-03-21: This blog post is extremely long (if you're reading this, you must already know I'm terribly long-winded). I was in the middle of consolidating it with other posts to make a final, single “wrap up” post about the OSI elections when I was told that Linux Weekly News (LWN) had published an article written by Joe Brockmeier. As such, I've carefully left the text below as it stood at 2025-03-20 03:42 UTC, which I believe is the version that Brockmeier sourced for his story (only changes past the line “Original Post” have been HTML format fixes). (I hate as much as you do having to scour archive.org/web to find the right version.) Nevertheless, I wouldn't have otherwise left this here in its current form because it's a huge, real-time description that doesn't make the best historical reference record of these events. I used my blog as a campaigning tool (for reasons discussed below) before I knew how much interest there would ultimately be in the FOSS community about the 2025 OSI Board of Directors election. Since this was used as a source for the LWN article, keeping the original record easy to find is obviously important and folks shouldn't have to go to archive.org/web to find it. Nevertheless, if you're just digging into this story fresh, I don't really recommend reading the below. Instead, I suggest just reading Brockmeier's LWN article: he's a journalist who writes better and more concisely than I do, he's unbiased, and the below is my (understandably) biased view as a candidate who lived through this problematic election.

Original Post

I recently announced that I was nominated for the Open Source Initiative (OSI) Board of Directors as an “Affiliate” candidate. I chose to run as an (admittedly) opposition candidate against the existing status quo, on a “ticket” with my colleague, Richard Fontana, who is running as an (opposition) “Member” candidate.

These elections are important; they matter with regard to the future of FOSS. OSI recently published the “Open Source Artificial Intelligence Definition” (OSAID). One of OSI's stated purposes for the OSAID is to convince the entire EU and other governments and policy agencies to adopt this Definition as official for all citizens. Those stakes aren't earth-shattering, but they are reasonably high. (You can read a blog post I wrote on the subject or Fontana's and my shared platform for more information about OSAID.)

I have worked and/or volunteered for nonprofits like OSI for years. I know it's difficult to get important work done — funding is always too limited. So, to be sure I'm not misquoted: no, I don't think the election is “rigged”. Every problem described herein can easily be attributed to innocent human error, and, as such, I don't think anyone at OSI has made an intentional plan to make the elections unfair. Nevertheless, these mistakes and irregularities (particularly the second one below) have led to an unfair 2025 OSI Directors Election. I call on the OSI to reopen the nominations for a few days, correct these problems, and then extend the voting time accordingly. I don't blame the OSI for these honest mistakes, but I do insist that they be corrected. This really does matter, since OSI isn't just a local club: it is an essential FOSS org that works worldwide and claims to have a consensus mandate for determining what is (or is not) “open source”. Thus (if the OSI intends to continue with these advisory elections), OSI's elections need the greatest integrity and legitimacy. Irregularities must be corrected and addressed to maintain the legitimacy of this important organization.

Regarding all these items below, I did raise all the concerns privately with the OSI staff before publicly listing them here. In every case, I gave OSI at least 20-30% of the entire election cycle to respond privately before discussing the problems publicly. (I have still received no direct response from the OSI on any of these issues.)

(Recap on) First Irregularity

The first irregularity was the miscommunication about the nomination deadline (as covered in the press). Instead of using the time zone of OSI's legal home (in California), or the standard FOSS community deadline of AoE (anywhere on earth) time, OSI surreptitiously chose UTC and failed to communicate that decision properly. According to my sources, only one of the 3(+) emails about the elections included the fully qualified datetime of the deadline. Everywhere else (including everywhere on OSI's website) published only the date, not the time. It was reasonable for nominators to assume the deadline was US/Pacific — particularly since the nomination form still worked after 23:59 UTC passed.
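
(As an illustration of how much that ambiguity matters, here is a small Python sketch. It's mine, not OSI's, and the date shown is a placeholder rather than the actual deadline; it just shows how far apart an unqualified "end of day" lands when read as UTC, US/Pacific, or anywhere-on-earth time.)

  # How far apart the "same" deadline lands depending on the assumed time
  # zone. The date below is a placeholder, not OSI's actual deadline.
  from datetime import datetime
  from zoneinfo import ZoneInfo

  naive = datetime(2025, 3, 2, 23, 59)   # an unqualified "end of day" deadline
  for tz in ("UTC", "US/Pacific", "Etc/GMT+12"):   # Etc/GMT+12 ~ anywhere-on-earth
      aware = naive.replace(tzinfo=ZoneInfo(tz))
      print(f"{tz:12s} -> {aware.astimezone(ZoneInfo('UTC'))} (UTC)")

The three readings of the same calendar date differ by up to twelve hours, which is exactly the window in which the nomination form still appeared to accept submissions.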

Second Irregularity

Due to that first irregularity, this second (and most egregious) irregularity is compounded even further. All year long, the OSI has communicated that, for 2025, elections are for two “Member” seats and one “Affiliate” seat. Only today (already 70% through the election cycle) did OSI (silently) correct this error. This change was made well after nominations had closed (in every TZ). By itself, the change in available seats after nominations closed makes the 2025 OSI elections unfair. Here's why: the Members and the Affiliates are two entirely different sets of electorates. Many candidates made complicated decisions about which seats to run for based on the number of seats available in each class. OSI is aware of that, too, because (a) we told them that during candidate orientation, and (b) Luke said so publicly in their blog post (and OSI directly responded to Luke in the press).

If we had known there were two Affiliate seats and just one Member seat, Debian (an OSI Affiliate) would have nominated Luke a week early to the Affiliate seat. Instead, Debian's leadership, Luke, Fontana, and I had a complex discussion in the final week of nominations on how best to run as a “ticket of three”. In that discussion, Debian leadership decided to nominate no one (instead of nominating Luke) precisely because I was already nominated on a platform that Debian supported, and Debian chose not to run a candidate against me for the (at the time, purported) one Affiliate seat available.

But this irregularity didn't just impact Debian, Fontana, Luke, and me. I was nominated by four different Affiliates. My primary pitch to ask them to nominate me was that there was just one Affiliate seat available. Thus, I told them, if they nominated someone else, that candidate would be effectively running against me. I'm quite sure at least one of those Affiliates would have wanted to nominate someone else if only OSI had told them the truth when it mattered: that Affiliates could easily elect both me and a different candidate for two available Affiliate seats. Meanwhile, who knows what other affiliates who nominated no one would have done differently? OSI surely doesn't know that. OSI has treated every one of their Affiliates unfairly by changing the number of seats available after the nominations closed.

Due to this Second Irregularity alone, I call on the OSI to reopen nominations and reset the election cycle. The mistakes (as played) actually benefit me as a candidate — since now I'm running against a small field and there are two seats available. If nominations reopen, I'll surely face a crowded field with many viable candidates added. Nevertheless, I am disgusted that I unintentionally benefited from OSI's election irregularity and I ask that OSI take corrective action to make the 2025 election fair.

The remaining irregularities are minor (by comparison, anyway), but I want to make sure I list all the irregularities that I've seen in the 2025 OSI Board Elections in this one place for everyone's reference:

Third Irregularity

I was surprised when OSI published the slates of Affiliate candidates that they were not in any (forward or reverse) alphabetical order — not by candidate's first name, last name, or nominator name. Perhaps the slots in the voter's guide were assigned randomly, but if so, that is not disclosed to the electorate. And who is listed first, you ask? Why, the incumbent Affiliate candidate. The issue of candidate ordering in voting guides and ballots has been well studied academically and, unsurprisingly, being listed first is known to be an advantage. Given that incumbents already have an advantage in all elections, putting the incumbent first without stating that the slots in the voter guide were randomly assigned makes the 2025 OSI Board election unfair.

I contacted OSI leadership about this issue within hours of the posting of the candidates (at time of writing, that was four days ago) and they have refused to respond, nor have they corrected the issue. This compounds the error, because OSI is now consciously choosing to leave the incumbent Affiliate candidate listed first in the voter guide.

Note that this problem is not confined to the “Affiliate district”. In the “Member district”, my running mate, Richard Fontana, is listed last in the voter guide for no apparent reason.

Fourth Irregularity

It's (ostensibly) a good idea for the OSI to run a discussion forum for the candidates (and kudos to OSI (in this instance, anyway) for using the GPL'd Discourse software for the purpose). However, the requirements to create an account and respond to the questions exclude some Affiliate candidates. Specifically, the OSI has stated that Affiliate candidates, and the Affiliates that are their electorate, need not be Members of the OSI. (This is actually the very first item in OSI's election FAQ!) Yet, to join the discussion forum, one must become a member of the OSI! While it might be reasonable to require that all Affiliate candidates become OSI Members, this was not disclosed until the election started, so it's unfair!

Some already argue that since there is a free (as in price) membership, this is a non-issue. I disagree, and here's why: long ago, I had already decided that I would not become a Member of OSI (for free or otherwise) because OSI Members who do not pay money are denied voting rights in these elections! Yes, you read that right: the election for OSI Directors in the “Members” seat literally has a poll tax! I refuse to let OSI count me as a Member when the class of membership they are offering to people who can't afford to pay is a second-class citizenship in OSI's community. Anyway, there is no reason that one should have to become a Member to post on the discussion fora — particularly given that OSI has clearly stated that the Affiliate candidates (and the Affiliate representatives who vote) are not required to be individual Members.

A desire for Individual Membership is understandable for a nonprofit. Nonprofits often need to prove they represent a constituency. I don't blame any nonprofit for trying to build a constituency for itself. The issue is how. Counting Members as “anyone who ever posted on our discussion forum” is confusing and problematic — and becomes doubly so when Voting Memberships are available for purchase. Indeed, OSI's own annual reporting conflates the two types of Members confusingly, as “Member district” candidate Chad Whitacre asked about during the campaign (but received no reply).

I point, as a counter-example, to the models used by the GNOME Foundation (GF) and Software in the Public Interest (SPI). These organizations are direct peers to the OSI, but both GF and SPI have an application for membership that evaluates on the primary criterion of what contributions the individual has made to FOSS (be they paid or volunteer). AFAICT, for SPI and GF, no memberships require a donation, memberships aren't handed out merely for signing up to the org's discussion fora, and all members (once qualified) can vote.

Fifth Irregularity

This final irregularity is truly minor, but I mention it for completeness. On the Affiliate candidate page, it seems as if each candidate is only nominated by one affiliate. When I submitted my candidate statement, since OSI told me they automatically filled in the nominating org, I had assumed that all my nominating orgs would be listed. Instead, they listed only one. If I'd known that, I'd have listed them at the beginning of my candidate statement; my candidate statement was drafted under the assumption all my nominating orgs would be listed elsewhere.

Sixth Irregularity

Update 2025-03-07. I received an unsolicited (but welcome) email from an Executive Director of one of OSI's Affiliate Organizations. This individual indicated they'd voted for me (I was pleasantly surprised, because I thought their org was pro-OSAID, which I immediately wrote back and told them). The irregularity here is that OSI told candidates, including during the orientation phone calls, that the campaign period would be 10 days and would include two weekends in most places. They started the campaign late, and didn't communicate that they weren't extending the timeline, so the campaign period was about 6.5 days and included only one weekend.

Meanwhile, during this extremely brief 6.5 day period, the election coordinator at OSI was unavailable to answer inquiries from candidates and Affiliates for at least three of those days. This included sending one Affiliate an email with the subject line “Rain Check” in response to five questions they sent about the election process, and its contents indicated that the OSI would be unavailable to answer questions about the election — until after the election!

Seventh Irregularity (added 2025-03-13)

The OSI Election Team, less than 12 hours after sending out the ballots (on Friday 2025-03-07), sent the following email. Many of the Affiliates told me about the email, and it seems likely that all Affiliates received this email within a short time after receiving their ballots (and a week before the ballots were due):

Subject: OSI Elections: unsolicited emails
Date: Sat, 08 Mar 2025 02:11:05 -0800
From: "Staffer REDACTED" <staffer@opensource.org>

Dear REDACTED,

It has been brought to our attention that at least one candidate has been emailing affiliates without their consent.

We do not give out affiliate emails for candidate reachouts, and understand that you did not consent to be spammed by candidates for this election cycle.

Candidates can engage with their fellow affiliates on our forums where we provide community management and moderation support, and in other public settings where our affiliates have opted to sign up and publicly engage.

Please email us directly for any ongoing questions or concerns.

Kind regards,
OSI Elections team

This email is problematic because candidates received no specific guidance on this matter. No material presented at either of the two mandatory election orientations (which I attended) indicated that contacting your constituents directly was forbidden, nor could I find such in any materials on the OSI website. Also, I checked with Richard Fontana, who also attended these sessions, and he confirms I didn't miss anything.

It's not spam to contact one's “FOSS Neighbors” to learn their concerns when in a political campaign for an important position. In fact, during those same orientation sessions, it was mentioned that Affiliate candidates should know the needs of their constituents — OSI's Affiliates. I took that charge seriously, so I invested 12-14 hours researching every single one of my constituents (all ~76 OSI Affiliate Organizations). My research confirmed my hypothesis: my constituents were my proverbial “FOSS neighbors”. In fact, I found that I'd personally had contact with most of the orgs since before OSI even had an Affiliate program. For example, one of the now-Affiliates had contacted me way back in 2013 to provide general advice and support about how to handle fundraising and required nonprofit policies for their org. Three other now-Affiliates' Executive Directors are people I've communicated regularly with for nearly 20 years. (There are other similar examples too.) IOW, I contacted my well-known neighbors to find out their concerns now that I was running for an office that would represent them.

There were also some Affiliates that I didn't know (or didn't know well) yet. For those, like any canvassing candidate, I knocked on their proverbial front doors: I reviewed their websites, found the name of the obvious decision maker, searched my email archives for contact info (and, in some cases, just did usual guesses like <firstname.lastname@example.org>), and contacted them. (BTW, I've done this since the 1990s in nonprofit work when trying to reach someone at a fellow nonprofit to discuss any issue.)

Altogether, I was able to find a good contact at 55 of the Affiliates, and here's a (redacted) sample of one of the emails I sent:

Subject: Affiliate candidate for OSI Board of Directors available to answer any questions

REDACTED_FIRSTNAME,

I'm Bradley M. Kuhn and I'm running as an Affiliate candidate in the Open Source Initiative Board elections that you'll be voting in soon on behalf of REDACTED_NAME_OF_ORG.

I wanted to let you know about the Shared Platform for OSI Reform (that I'm running for jointly with Richard Fontana) [0] and also offer some time to discuss the platform and any other concerns you have as an OSI Affiliate that you'd like me to address for you if elected.

(Fontana and I kept our shared platform narrow so that we could be available to work on other issues and concerns that our (different) constituencies might have.)

I look forward to hearing from you soon!

[0] https://codeberg.org/OSI-Reform-Platform/platform#readme

Note that Fontana is running as a Member candidate which has a separate electorate and for different Board seats, so we are not running in competition for the same seat.

(Since each one was edited manually for the given org, if the org primarily existed for a FOSS project I used, I also told them how I used the project myself, etc.)

Most importantly, though, election officials should never comment on the permitted campaign methods of any candidates before voting finishes in any event. While OSI staff may not have intended it, editorializing regarding campaign strategies can influence an election, and if you're in charge of running an impartial election, you have a high standard to meet.

OSI: either reopen nominations or just forget the elections

Again, I call on OSI to correct these irregularities, briefly reopen nominations, and extend the voting deadline. However, if OSI doesn't want to do that, there is another reasonable solution. As explained in OSI's by-laws and elsewhere, OSI's Directors elections are purely advisory. Like most nonprofits, the OSI is governed by a self-perpetuating (not an elected) Board. I bet with all the talk of elections, you didn't even know that!

Frankly, I have no qualms with a nonprofit structure that includes a self-perpetuating Board. While it's not a democratic structure, a self-perpetuating Board of principled Directors does solve the problems created in a Member-based organization. In Member-based organizations, votes are for sale. Any company with the resources to buy Memberships for its employees can easily dominate the election. While OSI probably has yet to experience this problem, if OSI grows its Membership (as it seeks to), OSI will surely face that problem. Self-perpetuating Boards aren't perfect, but they do prevent this problem.

Meanwhile, having now witnessed OSI's nomination and campaign process from the inside, it really does seem to me that OSI doesn't take this election all that seriously. And OSI already has in mind the kinds of candidates they want. For example, during one of the two nominee orientation calls, a key person in the OSI Leadership said (regarding item 4 of Fontana's and my shared platform) [quote paraphrased from my memory]: If you don't want to agree to these things, then an OSI Directorship is not for you and you should withdraw and seek a place to serve elsewhere. I was of course flabbergasted to be told that a desire to avoid proprietary software should disqualify me (at least in the view of the current OSI leadership). But that speaks to the fact that the OSI doesn't really want to have Board elections in the first place. Indeed, based on that and many other things that the OSI leadership has said during this process, it seems to me they'd actually rather hand-pick Directors to serve than run a democratic process. There's no shame in a nonprofit that prefers a self-perpetuating Board; as I said, most nonprofits are not Membership organizations nor allow any electorate to fill Board seats.

Meanwhile, OSI's halfway solution (i.e., a half-heartedly organized election that isn't really binding) seems designed to manufacture consent. OSI's Affiliates and paid individual Membership are given the impression they have electoral power, but it's an illusion. Giving up on the whole illusion would be the most transparent choice for OSI, and if the OSI would rather end these advisory elections and just self-perpetuate, I'd support that decision.

Update on 2025-03-07: Chad Whitacre, candidate in OSI's “Member district”, has endorsed my suggestion that OSI reopen nominations briefly for this election. While I still urge voters in the “Member district” to rank my running mate, Richard Fontana, first in that race, I believe Chad would be a fine choice as your second listed candidate in the ranked-choice voting.

Posted Mon Mar 3 11:00:00 2025 Tags:

In the early days of space missions it was observed that spinning objects flip over spontaneously. This got people freaked out that it could happen to the Earth as a whole. Any solid object with three distinct moments of inertia will do this if it’s spinning near its intermediate axis, and the amount of time it spends between flips has nothing to do with how quickly the flips happen.
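
(If you want to see the effect for yourself, here’s a minimal numerical sketch of my own, not from the post: a torque-free rigid body governed by Euler’s equations, with made-up moments of inertia, started spinning almost exactly about the intermediate axis. It tumbles end over end at irregular-looking intervals.)

  # Minimal sketch of the intermediate-axis (Dzhanibekov) effect: integrate
  # Euler's equations for a torque-free rigid body with three distinct,
  # made-up moments of inertia, starting almost exactly on the middle axis.
  import numpy as np

  I = np.array([1.0, 2.0, 3.0])        # principal moments of inertia (arbitrary)
  w = np.array([1e-4, 1.0, 1e-4])      # spin almost purely about axis 2

  dt, steps = 0.001, 200_000
  for step in range(steps):
      # Euler's equations: I_i * dw_i/dt = (I_j - I_k) * w_j * w_k
      dw = np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                     (I[2] - I[0]) * w[2] * w[0] / I[1],
                     (I[0] - I[1]) * w[0] * w[1] / I[2]])
      w = w + dw * dt
      if step % 20_000 == 0:
          print(f"t = {step * dt:6.1f}  spin about middle axis = {w[1]:+.3f}")
  # The middle component swings between roughly +1 and -1: long stretches of
  # apparently stable spin punctuated by quick tumbles.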

You might assume that this possibility has since been debunked. That didn’t happen. People just kind of got over it. The model of the Earth as a single solid object is overly simplistic, with some kind of fluid flows going on below the surface. Unfortunately we have no idea what those flows are actually doing or how they might affect this process. It’s a great irony of our universe that we know more about distant galaxies than the core of our own planet.

Unfortunately the one bit of evidence we have about the long-term stability of the Earth’s axis might point towards such flips happening regularly. The Earth’s magnetic field is known to invert every once in a while. But it’s just as plausible that what’s going on is that the gooey molten interior of the Earth keeps pointing in the same direction the whole time while the crunchy outer crust flips over.

If a flip like this happens over the course of a day then the Sun would go amusingly skeewumpus for a day and then start rising in the west and setting in the east. Unlike what you see in ‘The 3 Body Problem’, apparent gravity on the surface of the planet would remain normal that whole time. (The author of that book supposedly has a physics background. I’m calling shenanigans.) But there might be tides an order of magnitude or more higher than they normally are, and planetary weather patterns would invert, causing all kinds of chaos. That would include California getting massive hurricanes from over the Pacific while Florida would be much more chill.

A sudden flip like that is very unlikely, but that might not be a good thing. If the flip takes years then midway through it the poles will be aligned with the Sun, so they’ll spend months on end in either baking Sun or pitch black, getting baked to a crisp or frozen solid, far beyond the most extreme weather we have under normal circumstances. The equatorial regions will be spared, being in constant twilight, with the passage of time mostly denoted by spectacular northern and southern lights which alternate every 12 hours. And there will probably be lots of tectonic activity, but not as bad as on Venus.

Posted Mon Mar 3 03:47:57 2025 Tags:

I accepted nomination as a candidate for an “Affiliate seat” in the Open Source Initiative (OSI) Board of Directors elections. I was nominated by the following four OSI Affiliates:

  • The Matrix Foundation
  • The Perl and Raku Foundation
  • Software Freedom Conservancy (my employer)
  • snowdrift.coop

To my knowledge, I am the only Affiliate candidate, in the history of these OSI Board of Directors “advisory” elections, to be nominated by four Affiliates.

I am also endorsed by another Affiliate, the Debian Project.

You can see my official candidate page on OSI's website. This blog post will be updated throughout the campaign to link to other posts, materials, and announcements related to my candidacy.

Updates During the Campaign

I ran on the “OSI Reform Platform” with Richard Fontana.

I created a Fediverse account specifically to interact with constituents and the public, so please also follow that on floss.social/@bkuhn.

Posted Wed Feb 26 21:01:05 2025 Tags:

Ready in time for libinput 1.28 [1], and after a number of attempts over the years, we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad. Instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence, without having to tap or physically click and hold a button, so you might be able to see the appeal right there.

Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is in the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it does fit in quite nicely with the rest of the features we have though.

There really isn't much more to say about the new feature except: It's configurable to work on 4-finger drag too so if you mentally substitute all the threes with fours in this article before re-reading it that would save me having to write another blog post. Thanks.

[1] "soonish" at the time of writing

Posted Mon Feb 24 05:38:00 2025 Tags:

This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users.

This is a feature of a feature of a feature, so let's start at the top.

"Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap corresponds emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and to be recognised the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads.

"Tap-and-drag" refers to the ability to keep the pointer down so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to it's final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful.

"Drag lock" refers to the ability of keeping the mouse button pressed until "unlocked", even if the finger moves off the touchpads. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button, instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap with any number of finger releases in between. Sounds (and is) complicated to explain, is quite easy to try and once you're used to it it will feel quite natural.

The above behaviour is the new behaviour which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger stayed up past a certain timeout. This was less predictable and caused issues with users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (Alanis Morissette-style ironically) faster to release (a tap can be performed before the previous timeout would've expired).

Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks!

As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true
  
All features above are actually handled by libinput; this is just about a default change in GNOME.

Posted Mon Feb 24 04:17:00 2025 Tags:

Looking at some claims that quantum computers won't work. #quantum #energy #variables #errors #rsa #secrecy

Posted Sat Jan 18 17:45:19 2025 Tags:

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly, reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, only a maintainer can remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use it (click it!).

So for the forseeable future libinput will follow the following pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least); all it requires is for reporters to not get grumpy about their issue being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.
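
(For the curious, the "comment and close" half of that workflow is easy to script. The sketch below is hypothetical and is not libinput's actual bugbot; it assumes the python-gitlab library, and the token, project path, and message text are placeholders.)

  # Hypothetical "request info and close" helper in the spirit of the
  # workflow above; NOT libinput's real bugbot. Token, project path and
  # message text are placeholders.
  import gitlab

  gl = gitlab.Gitlab("https://gitlab.freedesktop.org", private_token="REDACTED")
  project = gl.projects.get("libinput/libinput")   # placeholder project path

  def request_info_and_close(issue_iid, question):
      issue = project.issues.get(issue_iid)
      issue.notes.create({"body": question + "\n\nClosing for now; please "
                          "re-open once you've attached the requested info."})
      issue.state_event = "close"   # the reporter can re-open with one click
      issue.save()

  # request_info_and_close(1234, "Please attach the output of libinput record.")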

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Posted Wed Dec 18 03:21:00 2024 Tags: