
Let’s say that you’re making a deep neural network and want to use toroidal space. For those who don’t know, toroidal space for a given number of dimensions has one value in each dimension between zero and one which ‘wraps around’: when a value goes above one you subtract one from it, and when it goes below zero you add one to it. The distance formula in toroidal space is similar to what it is in open-ended space, but instead of the distance in each dimension being a-b it’s that value wrapped around to a value between -1/2 and 1/2, so for example 0.25 stays where it is but 0.75 changes to -0.25 and -0.7 changes to 0.3.
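To make that concrete, here is a minimal NumPy sketch (my own illustration, not code from the post) of the wrap-around and wrapped-distance rules just described:

    import numpy as np

    def wrap(x):
        """Wrap coordinates into [0, 1) so values past 1 or below 0 come back around."""
        return x % 1.0

    def toroidal_delta(a, b):
        """Per-dimension difference a - b, wrapped into [-1/2, 1/2)."""
        d = (a - b) % 1.0
        return np.where(d >= 0.5, d - 1.0, d)

    def toroidal_distance(a, b):
        """Euclidean distance computed from the wrapped per-dimension differences."""
        return np.linalg.norm(toroidal_delta(a, b))

    # The examples from the text: 0.25 stays 0.25, 0.75 wraps to -0.25, -0.7 wraps to 0.3.
    assert np.isclose(toroidal_delta(np.array([0.25]), np.array([0.0]))[0], 0.25)
    assert np.isclose(toroidal_delta(np.array([0.75]), np.array([0.0]))[0], -0.25)
    assert np.isclose(toroidal_delta(np.array([-0.7]), np.array([0.0]))[0], 0.3)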

Why would you want to do this? Well, it’s because a variant on toroidal space is probably much better at fitting data than conventional space is for the same number of dimensions. I’ll explain the details of that in a later post1 but it’s similar enough that the techniques for using it in a neural network are the same. So I’m going to explain in this post how to use toroidal space, even though it’s probably comparable or only slightly better than the conventional approach.


To move from conventional space to an esoteric one you need to define how positions in that space are represented and make analogues of the common operations. Specifically, you need to find analogues for dot product and matrix multiplication and define how back propagation is done across those.

Before we go there it’s necessary to get an intuitive notion of what a vector is and what dot product and matrix multiplication are doing. A vector consists of two things: a direction and a magnitude. A dot product finds the cosine of the angle between two vectors times their magnitudes. Angle in this case is a type of distance. You might wonder what the intuitive explanation of including the magnitudes is. There isn’t one; you’re better off normalizing them away, which in AI is known as working in ‘cosine space’. I’ll just pretend that that’s how it’s always done.

When a vector is multiplied by a matrix, that vector isn’t being treated as a position in space; it’s a list of scalars. Each scalar is paired with a vector in the matrix, which has a direction and a magnitude. That direction is given a weight equal to the scalar times the magnitude, and a weighted average of all the directions is then taken.
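Here is a small sketch (my own, just to illustrate that reading) of an ordinary matrix-vector product as a weighted combination of the matrix’s rows, treating each row as a direction with a magnitude:

    import numpy as np

    def multiply_as_weighted_directions(scalars, matrix_rows):
        """Combine the rows of the matrix, each weighted by its scalar.
        This is exactly scalars @ matrix_rows; a true weighted *average* would
        additionally divide by the sum of the weights."""
        out = np.zeros(matrix_rows.shape[1])
        for s, row in zip(scalars, matrix_rows):
            out += s * row  # weight = scalar times the row's magnitude, along its direction
        return out

    x = np.array([0.5, -1.0, 2.0])
    M = np.random.randn(3, 4)
    assert np.allclose(multiply_as_weighted_directions(x, M), x @ M)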

The analogue of (normalized) dot product in toroidal space is simply distance. Back propagating over it works how you would expect. There’s a bit of funny business with the possibility of the back propagation causing values to snap over the 1/2 threshold, but the amount of movement is small enough that that’s unusual, and AI is so fundamentally handwavy that ignoring things like that doesn’t change the theory much.

The analogue of a matrix in toroidal space is a list of positions and weights. (There’s a type distinction here that conventional space doesn’t have: in toroidal space a ‘position’ and a ‘position plus weight’ are different things, whereas in conventional space everything is a ‘direction and magnitude’.) To ‘multiply’ a vector by this ‘matrix’ you take a weighted average of all the positions, with weights corresponding to the scalar times the given weight. At least, that’s what you would like to do. The problem is that due to the wrap-around nature of the space it isn’t clear which image of each position should be used.

To get an intuition for what to do about the multiple images problem, let’s consider the case of only two points. For this case we can find the shortest path between them and simply declare that the weighted average will be along that line segment. If some dimension is close to the 1/2 flip-over then either choice will at least do something useful for the other dimensions, and there isn’t much signal in that dimension anyway, so somewhat noisily using one image or the other is fine.

This approach can be generalized to larger numbers of points as follows: First, pick an arbitrary point in space. We’ll think of this as a rough approximation of the eventual solution. Since it’s literally a random point it’s a bad approximation, but we’re going to improve it. For each of the points being averaged, we find the image of it closest to the current approximation, and use those images when computing the weighted average. That yields a new approximate answer. We then repeat. Most likely in practical circumstances this settles down after only a handful of iterations, and if it doesn’t there probably isn’t much improvement happening with each iteration anyway. There’s an interesting mathematical question as to whether this process must always hit a unique fixed point. I honestly don’t know the answer to that question. If you know the answer please let me know.
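A minimal sketch of that successive-approximation loop (again my own illustration, reusing the same wrapped-difference helper as before):

    import numpy as np

    def toroidal_delta(a, b):
        d = (a - b) % 1.0
        return np.where(d >= 0.5, d - 1.0, d)

    def toroidal_weighted_average(points, weights, n_iters=10):
        """Snap each point to its image nearest the current guess, take the
        weighted average, wrap back into [0, 1), and repeat."""
        points = np.asarray(points, dtype=float)
        weights = np.asarray(weights, dtype=float)
        approx = np.random.rand(points.shape[1])  # literally a random starting point
        for _ in range(n_iters):
            images = approx + toroidal_delta(points, approx)  # nearest image of each point
            approx = np.average(images, axis=0, weights=weights) % 1.0
        return approx

A fixed iteration count stands in for the ‘settles down after a handful of iterations’ observation above; a real implementation would presumably stop once the approximation stops moving.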

The way to back propagate over this operation is to assume that the answer you settled on via the successive approximation process is the ‘right’ one and look at how it marginally moves as the coefficients change. As with calculating simple distance, the snap-over effects are rarely hit with the small changes involved in individual back propagation adjustments, and the propagation doesn’t have to be perfect, it just has to produce improvement on average.

1

It involves adding ‘ghost images’ to each point which aren’t just at the wraparound values but also correspond to other positions in a Barnes-Wall lattice, which is a way of packing spheres densely. Usually ‘the Barnes-Wall lattice’ refers specifically to 16 dimensions but the construction generalizes straightforwardly to any power of 2.

Posted Wed May 7 04:23:04 2025 Tags:

AI can play the game Go far better than any human, but oddly it has some weaknesses which can allow humans to exploit and defeat it. Patching over these weaknesses is very difficult, and teaches interesting lessons about what AI, traditional software, and us humans are good and bad at. Here’s an example position showing the AI losing a big region after getting trapped:1

Go board showing cyclic attack

For those of you not familiar with the game, Go is all about connectivity between stones. When a region of stones loses all connection to empty space, as the red-marked one in the above position just did, it dies. When a group surrounds two separate empty regions it can never be captured, because the opponent only places one stone at a time and hence can’t fill both at once. Such a group is said to have ‘two eyes’ and be ‘alive’.


The above game was in a very unusual position where both the now-dead black group and the white one it surrounds had only one eye each and not much potential for another. The AI may have been capable of working out that this was the case, but optically the position looks good, so it keeps playing elsewhere on the board until that important region is lost.

Explaining what’s failing here and why humans can do better requires some background. Board games have two components to them: ‘tactics’, which encompasses what happens when you do an immediate look-ahead of the position at hand, and ‘position’, which is a more amorphous concept encompassing everything else you can glean about how good a position is from looking at it and using your instincts. There’s no fine line between the two, but for games on a large enough board with enough moves in a game it’s computationally infeasible to do a complete exhaustive search of the entire game, so there is a meaningful difference between them.

There’s a bizarre contrast between how classical software and AI works. Traditional software is insanely, ludicrously good at logic, but has no intuition whatsoever. It can verify even the most complex formal math proof almost immediately, and search through a number of possibilities vastly larger than what a human can do in a lifetime in just an instant, but if any step is missing it has no way of filling it in. Any such filling in has to follow heuristics which humans painstakingly created by hand, and usually leans very heavily on trying an immense number of possibilities in the hope that something works.

AI is the exact opposite. It has (for some things) ludicrously, insanely good intuition. For some tasks, like guessing at protein folding, it’s far better than any human ever possibly could be. At evaluating Chess or Go positions, only a handful of humans at most can beat an AI running purely on instinct. What AI is lacking is logic. People get excited when it demonstrates any ability to do reasoning at all.

Board games are special in that the purely logical component of them is extremely well defined and can be evaluated exactly. When Chess computers first overtook the best humans it was by having a computer throw raw computational power at the problem with fairly hokey positional evaluation underneath it which had been designed by hand by a strong human player and was optimized more for speed than correctness. Chess is more about tactics than position so this worked well. Go has a balance more towards position so this approach didn’t work well until better positional evaluation via AI was invented. Both Chess and Go have a balance between tactics and position because we humans find both of those interesting. It’s possible that sentient beings of the future will favor games which are much more heavily about position because tactics are more about who spends more computational resources evaluating the position than who’s better at the game.

In some sense doing lookahead in board games (the technical term is ‘alpha-beta pruning’) is a very special form of hand-tuned logic. But by definition it perfectly emulates what’s happening in a board game, so it can be mixed with a good positional evaluator to get almost the best of both. Tactics are covered by the lookahead, and position is covered by the AI.
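For readers who haven’t seen it, here is a bare-bones negamax-style alpha-beta sketch (illustrative only; legal_moves, apply_move, and evaluate are hypothetical game-specific hooks, with evaluate standing in for the positional instinct):

    def alphabeta(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # 'position' handled by the evaluator
        best = float("-inf")
        for move in moves:  # 'tactics' handled by exhaustive lookahead
            score = -alphabeta(apply_move(position, move), depth - 1,
                               -beta, -alpha, evaluate, legal_moves, apply_move)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:  # prune branches that can't change the outcome
                break
        return best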

But that doesn’t quite cover it. The problem is that this approach keeps the logic and the AI completely separate from each other and doesn’t have a bridge between them. This is particularly important in Go, where there are lots of local patterns which will have to get played out eventually, and you can get some idea of what will happen at that time by working out local tactics now. Humans are entirely capable of looking at a position, working through some tactics, and updating their positional evaluation based on that. The current generation of Go AI programs don’t even have hooks to make that possible. They can still beat humans anyway, because their positional evaluation alone is comparable to if not better than what a human gets to while using that feedback, and their ability to work out immediate tactics is ludicrously better than ours.

But the above game is an exception. In that one something extremely unusual happened: the immediate optics of the position were misleading, and the focus of play was kept off that part of the board long enough that the AI didn’t use its near-term tactical skill to figure out that it was in danger of losing. A human can count up the effective amount of space in the groups battling it out in the above example by working out the local tactics, and gets a further boost because what matters is the sum total of them rather than the order in which they’re invoked. Simple tactical evaluation doesn’t realize this independence and has to work through exponentially more cases.

The human-executable exploits of the current generation of Go AIs are not a single silly little problem which can be patched over. They are particularly bad examples of systemic limitations in how those AIs operate. It may be possible to tweak them enough that humans can’t get away with such shenanigans any more, but the fact remains that they are far from perfect, and some better type of AI which does a better job of bridging between instinct and logic could probably play vastly better than they do now while using far fewer resources.

1

This post is only covering the most insidious attack but the others are interesting as well. It turns out that the AIs think in Japanese scoring despite being trained exclusively on Chinese scoring, so they’ll sometimes pass in positions where the opponent passing in response results in them losing and they should play on. This can be fixed easily by always looking ahead after a pass to see who actually wins if the opponent immediately passes in response. Online sites get around the issue by making ‘pass’ not really mean pass but more ‘I think we can come to agreement about what’s alive and dead here’, and if that doesn’t happen it reverts to a forced playout with Chinese scoring and pass meaning pass.

The other attack is the ‘gift’, where a situation can be set up in which the AI can’t do something it would like to due to ko rules and winds up giving away stones in a way which makes it strictly worse off. Humans easily recognize this and don’t do things which make their position strictly worse. Arguably the problem is that the AI positional evaluator doesn’t have good access to which positions were already hit, but it isn’t clear how to do that well. It could probably also be patched around by making the alpha-beta pruner ding the evaluation when it finds itself trying and failing to repeat a position, but that needs to be able to handle ko battles as well. Maybe it’s also a good heuristic for ko battles.

Both of the above raise interesting questions about which tweaks to a game playing algorithm are bespoke and hence violate the ‘zero’ principle that an engine should work for any game and not be customized to a particular one. Arguably the ko rule is a violation of the schematic of a simple turn-based game, so it’s okay to make exceptions for that.

Posted Sat May 3 21:23:58 2025 Tags:

I’ve written several previous posts about how to make a distributed version control system which has eventual consistency, meaning that no matter what order you merge different branches together they’ll always produce the same eventual result. The details are very technical and involved so I won’t rehash them here, but the important point is that the merge algorithm needs to be very history aware. Sometimes you need to make small sacrifices for great things. Sorry I’m terribly behind on making an implementation of these ideas. My excuse is that I have an important and demanding job which doesn’t leave much time for hobby coding, and I have lots of other hobby coding projects which seem important as well.

My less lame excuse is that I’ve been unsure of how to display merge conflicts. In some sense a version control system with eventual consistency never has real conflicts, it just has situations in which changes seemed close enough to stepping on each other’s toes that the system decided to flag them with conflict markers and alert the user. The user is always free to simply remove the conflict markers and keep going. This is a great feature. If you hit a conflict which somebody else already cleaned up you can simply remove the conflict markers, pull from the branch which it was fixed in, and presto you’ve got the cleanup applied locally.1


So the question becomes: When and how should conflict markers be presented? I gave previous thoughts on this question over here but am not so sure of those answers any more. In particular there’s a central question which I’d like to hear people’s opinions on: Should line deletions by themselves ever be presented as conflicts? If there are two lines of code one after the other and one person deletes one and another person deletes the other, it seems not unreasonable to let it through without raising a flag. It’s not like a merge conflict between nothing on one side and nothing on the other side is very helpful. There are specific examples I can come up with where this is a real conflict, but then there are examples I can come up with where changes in code not in close proximity produce semantic conflicts as well. Ideally you should detect conflicts by having extensive tests which are run automatically all the time, so that conflicts cause them to fail. The version control system flagging conflicts is for highlighting the exact location of particularly egregious examples.

It also seems reasonable that if one person deletes a line of code and somebody else inserts a line of code right next to it then that shouldn’t be a conflict. But this is getting shakier. The problem is that if someone deletes a line of code and somebody else ‘modifies’ it then arguably that should be a conflict, but the version control system thinks of that as both sides having deleted the same line of code and one side inserting an unrelated line which happens to look similar. The version control system having a notion of individual lines being ‘modified’ and being able to merge those modifications together is a deep rabbit hole I’m not eager to dive into. As in the case of deletions on both sides, a merge conflict between something on one side and ‘nothing’ on the other isn’t very helpful anyway. If you really care about this then you can leave a blank line when you delete code, or if you want to really make sure, replace it with a unique comment. On the other hand the version control system is supposed to flag things automatically and not make you engage in such shenanigans.

At least one thing is clear: Decisions about where to put conflict markers should only be made on whether lines appear in the immediate parents and the child. Nobody wants the version control system to tell them ‘These two consecutive lines of code both come from the right but the ways they merged with the left are so wildly different that it makes me uncomfortable’. Even if there were an understandable way to present that history information to the user, which there isn’t, everybody would respond by simply deleting the pedantic CYA marker.

I’m honestly unsure whether deleted lines should be considered. There are reasonable arguments on both sides. But I would like to ignore deleted lines because it makes both UX and implementation much simpler. Instead of there being eight cases of the states of the parents and the child, there are only four, because only cases where the child line is present are relevant2. In all conflict cases there will be at least one line of code on either side. It even suggests an approach to how you can merge together many branches at once and see conflict markers once everything is taken into account.3
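As a purely hypothetical sketch of how those four cases might be enumerated (my own illustration; a real system would track line identity through history rather than comparing text):

    def classify_child_lines(child_lines, left_lines, right_lines):
        """Label each line of the merged child by which parents it appears in.
        Text membership is a crude stand-in for the VCS's real notion of line identity."""
        left, right = set(left_lines), set(right_lines)
        labels = []
        for line in child_lines:
            in_l, in_r = line in left, line in right
            if in_l and in_r:
                labels.append("both")
            elif in_l:
                labels.append("left")
            elif in_r:
                labels.append("right")
            else:
                labels.append("neither")  # the uncommon but important criss-cross case
        return labels

Adjacent runs coming from different single parents would then be the candidates for conflict markers.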

It may be inevitable that I’ll succumb to practicality on this point, but I at least want to reassure myself that there isn’t overwhelming feedback in the other direction before doing so. Given the number of other features I’m adding, ‘show pure deletion conflicts’ may seem like small potatoes, but version control systems are important and feature decisions shouldn’t be made in them without serious analysis.

1

It can even cleanly merge together two different people applying the exact same conflict resolution independently of each other, at least most of the time. An exception is if one person goes AE → ABDE → ABCDE and someone else goes AE → ACE → ABCDE then the system will of necessity think the two C lines are different and make a result including both of them, probably as A>BCD>BCD|E. It isn’t possible to avoid this behavior without giving up eventual consistency, but it’s arguably the best option under the circumstances anyway. If both sides made their changes as part of a single patch this can be made to always merge cleanly.

2

The cases are that a line which appears in the child appears in both parents, just the left, just the right, or neither. That last one happens in the uncommon but important criss-cross case. One thing I still feel good about is that conflict markers should be lines saying ‘This is part of a conflict section, the section below this came from X’ where X is either local, remote, or neither, and there’s another special annotation for ‘this is the end of a conflict section’.

3

While it’s clear that a line which came from Alice but no other parent and a line which came from Bob but no other parent should be in conflict when immediately next to each other it’s much less clear whether a line which came from both Alice and Carol but not Bob should conflict with a line which came from Bob and Carol but not Alice. If that should be presented as ‘not a conflict’ then if the next line came from David but nobody else it isn’t clear how far back the non-David side of that conflict should be marked as starting.

Posted Tue Apr 29 04:39:25 2025 Tags:
Looking at what's happening, and analyzing rationales. #nist #iso #deployment #performance #security
Posted Wed Apr 23 22:40:28 2025 Tags:

Computer tools for providing commentary on Chess games are currently awful. People play over games using Stockfish, which is a useful but not terribly relevant tool, and use that as a guide for their own commentary. There are Chess Youtubers who aren’t strong players, and it’s obvious to someone even of my own mediocre playing strength (1700 on a good day) that they don’t know what they’re talking about, because in many situations there’s an obvious best move which fails due to some insane computer line and they don’t even cover it because the computer thinks it’s clearly inferior. Presumably commentary I generated using Stockfish as a guide would be equally obvious to someone of stronger playing strength than me. People have been talking about using computers to make automated commentary on Chess positions since computers started getting good at Chess, and the amount of progress made has been pathetic. I’m now going to suggest a tool which would be a good first step in that process, although it still requires a human to put together the color commentary. It would also be a fun AI project on its own merits, and possibly have a darker use which I’ll get to at the end.

There’s only one truly objective metric of how good a Chess position is, and that’s whether it’s a win, loss, or draw with perfect play. In a lost position all moves are equally bad. In a won position any move, no matter how ridiculous, which preserves the theoretical win is equally good. Chess commentary based off this sort of analysis would be insane. Most high level games would be a theoretical draw until some point deep into territory that is already lost for a human, at which point some uninteresting move would be labeled the losing blunder because it missed out on some way of theoretically eking out a draw. Obviously such commentary wouldn’t be useful. But commentary from Stockfish isn’t much better. Stockfish commentary is how a roughly 3000 rated player feels about the position if it assumes it’s playing against an opponent of roughly equal strength. That’s a super specific type of player and not one terribly relevant to how humans might fare in a given position. It’s close enough to perfect that a lot of the aforementioned ridiculousness shows up. There are many exciting tactical positions which are ‘only’ fifteen moves or so from being done and the engine says ‘ho hum, nothing to see here, I worked it out and it’s a dead draw’. What we need for Chess commentary is a tool geared towards human play, which says something about human games.


Here’s an idea of what to build: Make an AI engine which gets inputs of position, is told the ratings of the two players, the time controls, and the time left, and gives probabilities for a win, loss, or draw. This could be trained by taking a corpus of real human games and optimizing for Brier score. Without any lookahead this approach is limited by how strong of an evaluation it can get to, but that isn’t relevant for most people. Current engines at one node are probably around 2500 or so, so it might peter out in usefulness for strong grandmaster play, but you have my permission to throw in full Stockfish evaluations as another input when writing game commentary. The limited set of human games might hurt its overall playing strength, but throwing in a bunch of engine games for training or starting with an existing one node network is likely to help a lot. That last one in particular should save a lot of training time.
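As a concrete sketch of the training objective suggested above (my own illustration; the model architecture and inputs are whatever you like), the multi-class Brier score over win/draw/loss probabilities is just a mean squared error against one-hot outcomes:

    import numpy as np

    def brier_score(predicted_probs, outcomes_onehot):
        """predicted_probs and outcomes_onehot are both shaped (n_games, 3),
        with columns for win, draw, and loss. Lower is better."""
        return np.mean(np.sum((predicted_probs - outcomes_onehot) ** 2, axis=1))

    # e.g. the model said 70% win / 20% draw / 10% loss and the game was in fact a draw:
    print(brier_score(np.array([[0.7, 0.2, 0.1]]), np.array([[0.0, 1.0, 0.0]])))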

For further useful information you could train a neural network on the same corpus of games to predict the probability that a player will make each of the available legal moves based on their rating and the amount of time they spend making their move. Maybe the amount of time the opponent spent making their previous move should be included as well.

With all the above information it would be easy to make useful human commentary like ‘The obvious move here is X but that’s a bad idea because of line Y which even strong players are unlikely to see’. Or ‘This position is an objective win but it’s very tricky with very little time left on the clock’. The necessary information to make those observations is available, even if writing the color commentary is a whole other layer. Maybe an LLM could be trained to do that. It may help a lot for the LLM to be able to ask for evaluations of follow-on moves.

What all the above is missing is the ability to give any useful commentary on positional themes going on in games. Baby steps. Having any of the above would be a huge improvement in the state of the art. The insight that commentary needs to take into account what different skill levels and time controls think of the situation will remain an essential one moving forward.

What I’d really like to see out of the above is better Chess instruction. There are religious wars constantly going on about what the best practical advice for lower rated players is, and the truth is we simply don’t know. When people collect data from games they come up with results like by far the best opening for lower rated players as black is the Caro-Kann, which might or might not be true but indicates that the advice given to lower rated players based on what’s theoretically best is more than a little bit dodgy.

A darker use of the above would be to make a nearly undetectable cheating engine. With the addition of an output giving the range of time a player is likely to take in a given position, it could make an indistinguishable facsimile of a player of a given playing strength in real time, whose only distinguishing feature is being a bit too typical/generic, and that would be easy enough to throw in bias for. In situations where it wanted to plausibly win a game against a much higher rated opponent it could filter out potential moves whose practical chances in the given situation are bad. That would result in very non-Stockfish-like play, and seemingly a player plausibly of that skill level happening to play particularly well that game. Good luck coming up with anti-cheat algorithms to detect that.


Posted Sun Apr 20 04:09:03 2025 Tags:

There are two different goals of Chess AI: To figure out what is objectively the very best move in each situation, and to figure out what is, for me as a human, the best way to play in each situation. And possibly explain why. The explanations part I have no good ideas for short of doing an extraordinary amount of manual work to make a training set but the others can be done in a fairly automated manner.

(As with all AI posts everything I say here is speculative, may be wrong, and may be reinventing known techniques, but has reasonable justifications about why it might be a good idea.)


First a controversial high level opinion which saves a whole lot of computational power: There is zero reason to try to train an AI on deep evaluations of positions. Alpha-beta pruning works equally well for all evaluation algorithms. Tactics are tactics. What training should do is optimize for immediate accuracy on a zero node eval. What deep evaluation is good for is generating more accurate numbers to go into the training data. For a real engine you also want switching information to say which moves should be evaluated more deeply by the alpha-beta pruner. For that information I’m going to assume that when doing a deep alpha-beta search you can get information about how critical each branch is and that can be used as a training set. I’m going to hand wave and assume that there’s a reasonable way of making that be the output of an alpha-beta search even though I don’t know how to do it.

Switching gears for a moment, there’s something I really want but doesn’t seem to exist: An evaluation function which doesn’t say what the best move for a godlike computer program is, but one which says what’s the best practical move for me, a human being, to make in this situation. Thankfully that can be generated straightforwardly if you have the right data set. Specifically, you need a huge corpus of games played by humans and the ratings of the players involved. You then train an AI with input of the ratings of the players and the current position and it returns probability of win/loss/draw. This is something people would pay real money for access to and can be generated from an otherwise fairly worthless corpus of human games. You could even get fancy and customize it a bit to a particular player’s style if you have enough games from them, but that’s a bit tricky because each human generates very few games and you’d have to somehow relate them to other players by style to get any real signal.

Back to making not a human player but a godlike player. Let’s say you’re making something like Leela, with lots of volunteer computers running tasks to improve it. As is often the case with these sorts of things the bottleneck seems to be bandwidth. To improve a model you need to send a copy of it to all the workers, have them locally generate suggested improvements to all the weights, then send those back. That requires a complete upload and download of the model from each worker. Bandwidth costs can be reduced either by making generations take longer or by making the model smaller. My guess is that biasing more towards making the model smaller is likely to get better results due to the dramatically improved training and lower computational overhead and hence deeper searches when using it in practice.

To make suggestions on how to improve a model a worker proceeds as follows: First, they download the latest model. Then they generate a self-play game with it. After each move of the game they take the evaluation which the deeper look-ahead gave and train the no-look-ahead eval against it to update their suggested weight updates. Once it’s time for the next generation they upload all their suggested updates to the central server, which sums all the weight update suggestions (possibly weighting them by the number of games which went into them) and uses that for the next generation model.
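A tiny sketch of that aggregation step (my own illustration; whether to normalize by the total game count, as done here so the step size doesn’t scale with the number of workers, is an assumption on my part):

    import numpy as np

    def aggregate_suggestions(current_weights, suggested_deltas, game_counts=None):
        """suggested_deltas: one weight-delta array per worker.
        game_counts: optional per-worker weighting by number of self-play games."""
        if game_counts is None:
            game_counts = np.ones(len(suggested_deltas))
        total = sum(c * d for c, d in zip(game_counts, suggested_deltas))
        return current_weights + total / np.sum(game_counts)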

This approach shows how chess is in some sense an ‘easy’ problem for AI because you don’t need training data for it. You can generate all the training data you want out of thin air on an as needed basis.

Obviously there are security issues here if any of the workers are adversarial but I’m not sure what the best way to deal with those is.


Posted Sun Mar 30 02:15:12 2025 Tags:

There are numerous videos reviewing 3D printer filaments to help you determine which is the best. I’ve spent way too much time watching these and running over data and can quickly summarize all the information relevant to most people doing 3D printing: Use PLA. There, you’re finished. If you want or need more information or are interested in running tests yourself read on.

There are two big components of the ‘strength’ of a material: stiffness and toughness. Stiffness refers to how hard it is to bend (or stretch/break) while toughness refers to how well it recovers from being bent (or stretched). These can be further broken down into subcategories, like whether the material successfully snaps back after getting bent or is permanently deformed. An important thing to understand is that the measures used aren’t deep ethereal properties of material, they’re benchmark numbers based on what happens if you run particular tests. This isn’t a problem with the tests, it’s an acknowledgement of how complex real materials are.


For the vast majority of 3D printing projects what you care about is stiffness rather than toughness. If your model is breaking then most of the time the solution is to engineer it to not bend that much in the first place. The disappointing thing is that PLA has very good stiffness, usually better even than the exotic filaments people like experimenting with. In principle proper annealing can get you better stiffness, but when doing that you wind up reinventing injection molding badly and it turns out the best material for that is also PLA. The supposedly better benchmarks of PLA+ are universally tradeoffs where they get better toughness in exchange for worse stiffness. PLA is brittle, so it shatters on failure, and mixing it with any random gunk tends to make that tradeoff, but it isn’t what you actually want.

(If you happen to have an application where your material bending or impact resistance is important you should consider TPU. The tradeoffs of different versions of that are complex and I don’t have any experience with it so can’t offer much detailed advice.)

Given all the above, plus PLA’s generally nontoxic and easy-to-print nature, it’s the go-to filament for the vast majority of 3D printing applications. But let’s say you need something ‘better’, or are trying to justify the ridiculous amounts of time you’ve spent researching this subject; what is there to use? The starting place is PLA’s weaknesses: It gets destroyed by sunlight, can’t handle exposure to many corrosive chemicals, and melts at such a low temperature that it can be destroyed in a hot car or a sauna. There are a lot of fancy filaments which do better on these benchmarks, but for the vast majority of things PLA isn’t quite good enough for, PETG would fit the bill. The problem with PETG is that it isn’t very stiff. But in principle adding carbon fiber fixes this problem. So, does it?

There are two components of stiffness for 3D printing: Layer adhesion and bending modulus. Usually layer adhesion issues can be fixed by printing in the correct orientation, or sometimes printing in multiple pieces at appropriate orientations. One could argue that the answer ‘you can engineer around that’ is a cop-out, but in this case the effect is so extreme that it can’t be ignored. More on this below, but my test is of bending modulus.

Now that I’ve finished an overly long justification of why I’m doing bending modulus tests we can get to the tests themselves. You can get the models I used for the tests over here. The basic idea is to make a long thin bar in the material to be tested, hang a weight from the middle, and see how much it bends. Here are the results:

CarbonX PETG-CF is a great stronger material, especially if you want or need something lightweight and spindly. It’s considerably more expensive than PLA, but cheaper and easier to print than fancier materials, and compared to PLA and especially PETG the effective cost is much less because you need less of it. The Flashforge PETG-CF (which is my stand-in ‘generic’ PETG-CF as it’s what turns up in an Amazon search) is a great solution if you want something with about the same price and characteristics as PLA but better able to handle high temperatures and sunlight. It’s so close to PLA that I’m suspicious it’s actually just a mislabeled roll of PLA, but I haven’t tested that. I don’t know why the Bambu PETG-CF performed so badly. It’s possible it got damaged by moisture between when I got it and when I tested it, but I tried drying it thoroughly and that didn’t help.

Clearly not all carbon fiber filaments are the same and more thorough testing should be done with a setup less janky than mine. If anybody wants to use my models as a starting point for that please go ahead.

The big caveat here is that you can engineer around a bad bending modulus. The force needed to bend a beam goes up with the cube of its thickness in the direction of bending, so unless something has very confined dimensions you can make it much stronger by making it chunkier. You can do that without using much more material by making I-beam-like structures. Note that when 3D printing you can make enclosed areas no problem, so the equivalent of an I-beam should have a square, triangular, or circular cross section with a hollow middle. The angle of printing is also of course very important.
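For reference, the textbook beam-bending formulas behind those claims (standard results, not something from the post): for a simply supported bar of span L loaded with force F in the middle, the mid-span deflection is

    \delta = \frac{F L^3}{48 E I}

where E is the material’s elastic modulus and I the cross section’s second moment of area. For a solid rectangle of width b and thickness h in the bending direction, and for a hollow square of outer side a_out and inner side a_in,

    I_{\mathrm{rect}} = \frac{b h^3}{12}, \qquad I_{\mathrm{hollow}} = \frac{a_{\mathrm{out}}^4 - a_{\mathrm{in}}^4}{12}

so doubling the thickness gives roughly eight times the stiffness for the same width, and a hollow section keeps most of that stiffness while using far less material.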

The conclusion is that if you want something more robust than PLA you can use generic PETG engineered to be very chunky, or PETG-CF with appropriate tradeoffs between price and strength for your application.

A safety warning: Be careful to ventilate your space thoroughly when printing carbon fiber filaments, and don’t shred or machine them after printing. Carbon fiber has the same ill effects on lungs as asbestos, so you don’t want to be breathing it in. In my tests the amount of volatile organic compounds produced was small, but it’s a good idea to be careful.


Posted Sun Mar 23 21:16:26 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory; OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).

Many of us candidates attempted to get clarification over the last 46 hours, but OSI has not communicated clear answers in response to those requests. Based on these unclear responses, the best we can surmise is that OSI intends to modify the ballots cast by Affiliates and Members to remove any candidate who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and the surprising change in requirements and deadlines.

So, I decided to sign a Board Agreement with OSI. Here is the PDF that I just submitted to the OSI. OSI did recommend DocuSign, but I emailed the signed agreement to OSI instead, because I refuse to use proprietary software for my FOSS volunteer work on moral and ethical grounds0 (see my two keynotes (FOSDEM 2019, FOSDEM 2020) (co-presented with Karen Sandler) on this subject for more info on that).

My running mate on the Shared Platform for OSI Reform, Richard Fontana, also signed a Board Agreement with OSI before the deadline as well.


0 Chad Whitacre has made unfair criticism of my refusal to use DocuSign as part of the (apparently ongoing?) 2025 OSI Board election political campaign. I respond to his comment here in this footnote (& further discussion is welcome using the fediverse, AGPLv3-powered comment feature of my blog). I've put it in this footnote because Chad is not actually raising an issue about this blog post's primary content, but instead attempting to reopen the debate about Item 4 in the Shared Platform for OSI Reform. My response follows:

In addition to the two keynotes mentioned above, I propose these analogies that really are apt to this situation:

  • Imagine if the Board of The Nature Conservancy told Directors they would be required, if elected, to use a car service to attend Board meetings. It's easier, they argue, if everyone uses the same service and that way, we know you're on your way, and we pay a group rate anyway. Some candidates for open Board seats retort that's not environmentally sound, and insist — not even that other Board members must stop using the car service — but just that Directors who so choose should be allowed to simply take public transit to the Board meeting — even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
  • Imagine if the Board of Friends of Trees made a decision that all paperwork for the organization be printed on non-recycled paper made from freshly cut tree wood pulp. That paper is easier to move around, they say — and it's easier to read what's printed because of its quality. Some candidates for open Board seats run on a platform that says Board members should be allowed to get their print-outs on 100% post-consumer recycled paper for Board meetings. These candidates don't insist that other Board members use the same paper, so, if these new Directors are seated, this will create extra work for staff because now they have to do two sets of print-outs to prep for Board meetings, and refill the machine with different paper in-between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, “a distracting waste of time”?
  • Imagine if the Board of the ASPCA made the decision that Directors must work through lunch, and the majority of the Directors vote that they'll get delivery from a restaurant that serves no vegan food whatsoever. Is it reasonable for this to be a non-negotiable requirement — such that the other Directors must work through lunch and just stay hungry? Or should they add a second restaurant option for the minority? After all, the ASPCA condemns animal cruelty but doesn't go so far as to demand that everyone also be a vegan. Would the meat-eating directors then say something like “opposing cruelty to animals could be so much more than merely being vegan” to these other Directors?
Posted Wed Mar 19 08:59:00 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on Monday 2025-03-17 at 10:00 US/Pacific. One hour after that, I and at least three other candidates received the following email:

Date: Mon, 17 Mar 2025 11:01:22 -0700
From: OSI Elections team <elections@opensource.org>
To: Bradley Kuhn <bkuhn@ebb.org>
Subject: TIME SENSITIVE: sign OSI board agreement
Message-ID: <civicrm_67d86372f1bb30.98322993@opensource.org>

Thanks for participating in the OSI community polls which are now closed. Your name was proposed by community members as a candidate for the OSI board of directors. Functioning of the board of directors is critically dependent on all the directors committing to collaborative practices.

For your name to be considered by the board as we compute and review the outcomes of the polls,you must sign the board agreement before Wednesday March 19, 2025 at 1700 UTC (check times in your timezone). You’ll receive another email with the link to the agreement.

TIME SENSITIVE AND IMPORTANT: this is a hard deadline.

Please return the signed agreement asap, don’t wait. 

Thanks

OSI Elections team

(The link email did arrive too, with a link to a proprietary service called DocuSign. Fontana downloaded the PDF out of DocuSign and it appears to match the document found here. This document includes a clause that Fontana and I explicitly indicated in our OSI Reform Platform should be rewritten.)

All the (non-incumbent) candidates are surprised by this. OSI told us during the mandatory orientation meetings (on WED 2025-02-19 & again on TUE 2025-02-25) that the Board Agreement needed to be signed only by the election winners who were seated as Directors. No one mentioned (before or after the election) that all candidates, regardless of whether they won or lost, needed to sign the agreement. I've also served on many other 501(c)(3) Boards, and I've never before been asked to sign anything official for service until I was formally offered the seat.

Can someone more familiar with the OSI election process explain this? Specifically, why are all candidates (even those who lose) required to sign the Board Agreement before election results are published? Can folks who ran before confirm for us that this seems to vary from procedures in past years? Please reply on the fediverse thread if you have information. Richard Fontana also reached out to OSI on their discussion board on the same matter.

Posted Mon Mar 17 20:26:00 2025 Tags:

Update 2025-03-21: This blog post is extremely long (if you're reading this, you must already know I'm terribly long-winded). I was in the middle of consolidating it with other posts to make a final, single “wrap up” post on the OSI elections when I was told that Linux Weekly News (LWN) had published an article written by Joe Brockmeier. As such, I've carefully left the text below as it stood at 2025-03-20 03:42 UTC, which I believe is the version that Brockmeier sourced for his story (the only changes past the line “Original Post” have been HTML format fixes). (I hate as much as you do having to scour archive.org/web to find the right version.) Nevertheless, I wouldn't have otherwise left this here in its current form, because it's a huge, real-time description and as such doesn't make the best historical reference record of these events. I used my blog as a campaigning tool (for reasons discussed below) before I knew how much interest there would ultimately be in the FOSS community about the 2025 OSI Board of Directors election. Since this was used as a source for the LWN article, keeping the original record easy to find is obviously important, and folks shouldn't have to go to archive.org/web to find it. Nevertheless, if you're just digging into this story fresh, I don't really recommend reading the below. Instead, I suggest just reading Brockmeier's LWN article: he's a journalist who writes better and more concisely than me, he's unbiased, and the below is my (understandably) biased view as a candidate who lived through this problematic election.

Original Post

I recently announced that I was nominated for the Open Source Initiative (OSI) Board of Directors as an “Affiliate” candidate. I chose to run as an (admittedly) opposition candidate against the existing status quo, on a “ticket” with my colleague, Richard Fontana, who is running as an (opposition) “Member” candidate.

These elections are important; they matter with regard to the future of FOSS. OSI recently published the “Open Source Artificial Intelligence Definition” (OSAID). One of OSI's stated purposes for the OSAID is to convince the entire EU and other governments and policy agencies to adopt this Definition as official for all citizens. Those stakes aren't earth-shattering, but they are reasonably high. (You can read a blog post I wrote on the subject or Fontana's and my shared platform for more information about OSAID.)

I have worked and/or volunteered for nonprofits like OSI for years. I know it's difficult to get important work done — funding is always too limited. So, to be sure I'm not misquoted: no, I don't think the election is “rigged”. Every problem described herein can easily be attributed to innocent human error, and, as such, I don't think anyone at OSI has made an intentional plan to make the elections unfair. Nevertheless, these mistakes and irregularities (particularly the second one below) have led to an unfair 2025 OSI Directors Election. I call on the OSI to reopen the nominations for a few days, correct these problems, and then extend the voting time accordingly. I don't blame the OSI for these honest mistakes, but I do insist that they be corrected. This really does matter, since this isn't just a local club: OSI is an essential FOSS org that works worldwide and claims to have a consensus mandate for determining what is (or is not) “open source”. Thus, if the OSI intends to continue with these advisory elections, OSI's elections need the greatest integrity and legitimacy. Irregularities must be corrected and addressed to maintain the legitimacy of this important organization.

Regarding all these items below, I did raise all the concerns privately with the OSI staff before publicly listing them here. In every case, I gave OSI at least 20-30% of the entire election cycle to respond privately before discussing the problems publicly. (I have still received no direct response from the OSI on any of these issues.)

(Recap on) First Irregularity

The first irregularity was the miscommunication about the nomination deadline (as covered in the press). Instead of using the time zone of OSI's legal home (in California), or the standard FOSS community deadline of AoE (anywhere on earth) time, OSI surreptitiously chose UTC and failed to communicate that decision properly. According to my sources, only one of the 3(+) emails about the elections included the fully qualified datetime of the deadline. Everywhere else (including everywhere on OSI's website) published only the date, not the time. It was reasonable for nominators to assume the deadline was US/Pacific — particularly since the nomination form still worked after 23:59 UTC passed.

Second Irregularity

Due to that first irregularity, this second (and most egregious) irregularity is compounded even further. All year long, the OSI has communicated that, for 2025, elections are for two “Member” seats and one “Affiliate” seat. Only today (already 70% through the election cycle) did OSI (silently) correct this error. This change was made well after nominations had closed (in every TZ). By itself, the change in available seats after nominations closed makes the 2025 OSI elections unfair. Here's why: the Members and the Affiliates are two entirely different sets of electorates. Many candidates made complicated decisions about which seats to run for based on the number of seats available in each class. OSI is aware of that, too, because (a) we told them that during candidate orientation, and (b) Luke said so publicly in their blog post (and OSI directly responded to Luke in the press).

If we had known there were two Affiliate seats and just one Member seat, Debian (an OSI Affiliate) would have nominated Luke a week early to the Affiliate seat. Instead, Debian's leadership, Luke, Fontana, and I had a complex discussion in the final week of nominations on how best to run as a “ticket of three”. In that discussion, Debian leadership decided to nominate no one (instead of nominating Luke) precisely because I was already nominated on a platform that Debian supported, and Debian chose not to run a candidate against me for the (at the time, purported) one Affiliate seat available.

But this irregularity didn't just impact Debian, Fontana, Luke, and me. I was nominated by four different Affiliates. My primary pitch to ask them to nominate me was that there was just one Affiliate seat available. Thus, I told them, if they nominated someone else, that candidate would be effectively running against me. I'm quite sure at least one of those Affiliates would have wanted to nominate someone else if only OSI had told them the truth when it mattered: that Affiliates could easily elect both me and a different candidate for two available Affiliate seats. Meanwhile, who knows what other affiliates who nominated no one would have done differently? OSI surely doesn't know that. OSI has treated every one of their Affiliates unfairly by changing the number of seats available after the nominations closed.

Due to this Second Irregularity alone, I call on the OSI to reopen nominations and reset the election cycle. The mistakes (as played) actually benefit me as a candidate — since now I'm running against a small field and there are two seats available. If nominations reopen, I'll surely face a crowded field with many viable candidates added. Nevertheless, I am disgusted that I unintentionally benefited from OSI's election irregularity and I ask OSI take corrective action to make the 2025 election fair.

The remaining irregularities are minor (by comparison, anyway), but I want to make sure I list all the irregularities that I've seen in the 2025 OSI Board Elections in this one place for everyone's reference:

Third Irregularity

I was surprised when OSI published the slates of Affiliate candidates that they were not in any (forward or reverse) alphabetical order — not by candidate's first, last, or nominator name. Perhaps the slots in the voter's guide were assigned randomly, but if so, that was not disclosed to the electorate. And who is listed first, you ask? Why, the incumbent Affiliate candidate. The issue of candidate ordering in voting guides and ballots has been well studied academically and, unsurprisingly, being listed first is known to be an advantage. Given that incumbents already have an advantage in all elections, putting the incumbent first without stating that the slots in the voter guide were randomly assigned makes the 2025 OSI Board election unfair.

I contacted OSI leadership within hours of the posting of the candidates about this issue (at time of writing, that was four days ago) and they have refused to respond, nor have they corrected the issue. This compounds the error, because OSI is now consciously choosing to keep the incumbent Affiliate candidate listed first in the voter guide.

Note that this problem is not confined to the “Affiliate district”. In the “Member district”, my running mate, Richard Fontana, is listed last in the voter guide for no apparent reason.

Fourth Irregularity

It's (ostensibly) a good idea for the OSI to run a discussion forum for the candidates (and kudos to OSI, in this instance anyway, for using the GPL'd Discourse software for the purpose). However, the requirements to create an account and respond to the questions exclude some Affiliate candidates. Specifically, the OSI has stated that Affiliate candidates, and the Affiliates that are their electorate, need not be Members of the OSI. (This is actually the very first item in OSI's election FAQ!) Yet, to join the discussion forum, one must become a member of the OSI! While it might be reasonable to require all Affiliate candidates to become OSI Members, this was not disclosed until the election started, so it's unfair!

Some already argue that since there is a free (as in price) membership, this is a non-issue. I disagree, and here's why: Long ago, I had already decided that I would not become a Member of OSI (for free or otherwise) because OSI Members who do not pay money are denied voting rights in these elections! Yes, you read that right: the election for OSI Directors in the “Members” seat literally has a poll tax! I refuse to let OSI count me as a Member when the class of membership they are offering to people who can't afford to pay is a second-class citizenship in OSI's community. Anyway, there is no reason that one should have to become a Member to post on the discussion fora — particularly given that OSI has clearly stated that the Affiliate candidates (and the Affiliate representatives who vote) are not required to be individual Members.

A desire for Individual Membership is understandable for a nonprofit. Nonprofits often need to prove they represent a constituency. I don't blame any nonprofit for trying to build a constituency for itself. The issue is how. Counting Members as “anyone who ever posted on our discussion forum” is confusing and problematic — and becomes doubly so when Voting Memberships are available for purchase. Indeed, OSI's own annual reporting conflates the two types of Members confusingly, as “Member district” candidate Chad Whitacre asked about during the campaign (but received no reply).

I point as counter-example to the models used by GNOME Foundation (GF) and Software In the Public Interest (SPI). These organizations are direct peers to the OSI, but both GF and SPI have an application for membership that evaluates on the primary criterion of what contributions the individual has made to FOSS (be they paid or volunteer). AFAICT, for SPI and GF, no memberships require a donation, aren't handed out merely for signing up to the org's discussion fora, and all members (once qualified) can vote.

Fifth Irregularity

This final irregularity is truly minor, but I mention it for completeness. On the Affiliate candidate page, it seems as if each candidate is nominated by only one Affiliate. Since OSI told me they would automatically fill in the nominating org when I submitted my candidate statement, I had assumed that all my nominating orgs would be listed. Instead, they listed only one. If I'd known that, I'd have listed them at the beginning of my candidate statement, which was drafted under the assumption that all my nominating orgs would be listed elsewhere.

Sixth Irregularity

Update 2025-03-07. I received an unsolicited (but welcome) email from an Executive Director of one of OSI's Affiliate Organizations. This individual indicated they'd voted for me (I was pleasantly surprised, because I thought their org was pro-OSAID, which I immediately wrote back and told them). The irregularity here is that OSI told candidates, including in the orientation phone calls for candidates, that the campaign period would be 10 days and include two weekends in most places. They started the campaign late, and didn't communicate that they weren't extending the timeline, so the campaign period was about 6.5 days and included only one weekend.

Meanwhile, during this extremely brief 6.5-day period, the election coordinator at OSI was unavailable to answer inquiries from candidates and Affiliates for at least three of those days. This included sending one Affiliate an email with the subject line “Rain Check” in response to five questions they sent about the election process; its contents indicated that the OSI would be unavailable to answer questions about the election — until after the election!

Seventh Irregularity (added 2025-03-13)

The OSI Election Team, less than 12 hours after sending out the ballots (on Friday 2025-03-07) sent the following email. Many of the Affiliates told me about the email, and it seems likely that all Affiliates received this email within a short time after receiving their ballots (and a week before the ballots were due):

Subject: OSI Elections: unsolicited emails
Date: Sat, 08 Mar 2025 02:11:05 -0800
From: "Staffer REDACTED" <staffer@opensource.org>

Dear REDACTED,

It has been brought to our attention that at least one candidate has been emailing affiliates without their consent.

We do not give out affiliate emails for candidate reachouts, and understand that you did not consent to be spammed by candidates for this election cycle.

Candidates can engage with their fellow affiliates on our forums where we provide community management and moderation support, and in other public settings where our affiliates have opted to sign up and publicly engage.

Please email us directly for any ongoing questions or concerns.

Kind regards,
OSI Elections team

This email is problematic because candidates received no specific guidance on this matter. No material presented at either of the two mandatory election orientations (which I attended) indicated that contacting your constituents directly was forbidden, nor could I find such in any materials on the OSI website. Also, I checked with Richard Fontana, who also attended these sessions, and he confirms I didn't miss anything.

It's not spam to contact one's “FOSS Neighbors” to learn their concerns when in a political campaign for an important position. In fact, during those same orientation sessions, it was mentioned that Affiliate candidates should know the needs of their constituents — OSI's Affiliates. I took that charge seriously, so I invested 12-14 hours researching every single one of my constituents (all ~76 OSI Affiliate Organizations). My research confirmed my hypothesis: my constituents were my proverbial “FOSS neighbors”. In fact, I found that I'd personally had contact with most of the orgs since before OSI even had an Affiliate program. For example, one of the now-Affiliates had contacted me way back in 2013 to provide general advice and support about how to handle fundraising and required nonprofit policies for their org. Three other now-Affiliates' Executive Directors are people I've communicated with regularly for nearly 20 years. (There are other similar examples too.) IOW, I contacted my well-known neighbors to find out their concerns now that I was running for an office that would represent them.

There were also some Affiliates that I didn't know (or didn't know well) yet. For those, like any canvassing candidate, I knocked on their proverbial front doors: I reviewed their websites, found the name of the obvious decision maker, searched my email archives for contact info (and, in some cases, just did usual guesses like <firstname.lastname@example.org>), and contacted them. (BTW, I've done this since the 1990s in nonprofit work when trying to reach someone at a fellow nonprofit to discuss any issue.)

Altogether, I was able to find a good contact at 55 of the Affiliates, and here's a (redacted) sample of one of the emails I sent:

Subject: Affiliate candidate for OSI Board of Directors available to answer any questions

REDACTED_FIRSTNAME,

I'm Bradley M. Kuhn and I'm running as an Affiliate candidate in the Open Source Initiative Board elections that you'll be voting in soon on behalf of REDACTED_NAME_OF_ORG.

I wanted to let you know about the Shared Platform for OSI Reform (that I'm running for jointly with Richard Fontana) [0] and also offer some time to discuss the platform and any other concerns you have as an OSI Affiliate that you'd like me to address for you if elected.

(Fontana and I kept our shared platform narrow so that we could be available to work on other issues and concerns that our (different) constituencies might have.)

I look forward to hearing from you soon!

[0] https://codeberg.org/OSI-Reform-Platform/platform#readme

Note that Fontana is running as a Member candidate which has a separate electorate and for different Board seats, so we are not running in competition for the same seat.

(Each one was edited manually for the given org; if the org primarily existed to support a FOSS project I used, I also told them how I use the project myself, etc.)

Most importantly, though, election officials should never comment on the permitted campaign methods of any candidates before voting finishes, in any event. While OSI staff may not have intended it, editorializing regarding campaign strategies can influence an election, and if you're in charge of running an impartial election, you have a high standard to meet.

OSI: either reopen nominations or just forget the elections

Again, I call on OSI to correct these irregularities, briefly reopen nominations, and extend the voting deadline. However, if OSI doesn't want to do that, there is another reasonable solution. As explained in OSI's by-laws and elsewhere, OSI's Directors elections are purely advisory. Like most nonprofits, the OSI is governed by a self-perpetuating (not an elected) Board. I bet with all the talk of elections, you didn't even know that!

Frankly, I have no qualms with a nonprofit structure that includes a self-perpetuating Board. While it's not a democratic structure, a self-perpetuating Board of principled Directors does solve the problems created in a Member-based organization. In Member-based organizations, votes are for sale. Any company with resources to buy Memberships for its employees can easily dominate the election. While OSI probably has yet to experience this problem, if OSI grows its Membership (as it seeks to), OSI will surely face that problem. Self-perpetuating Boards aren't perfect, but they do prevent this problem.

Meanwhile, having now witnessed OSI's nomination and campaign process from the inside, it really does seem to me that OSI doesn't take this election all that seriously. And, OSI already has in mind the kinds of candidates they want. For example, during one of the two nominee orientation calls, a key person in the OSI Leadership said (regarding item 4 of Fontana's and my shared platform) [quote paraphrased from my memory]: If you don't want to agree to these things, then an OSI Directorship is not for you and you should withdraw and seek a place to serve elsewhere. I was of course flabbergasted to be told that a desire to avoid proprietary software should disqualify me (at least in the view of the current OSI leadership). But, that speaks to the fact that the OSI doesn't really want to have Board elections in the first place. Indeed, based on that and many other things that the OSI leadership has said during this process, it seems to me they'd actually rather hand-pick Directors to serve than run a democratic process. There's no shame in a nonprofit that prefers a self-perpetuating Board; as I said, most nonprofits are not Membership organizations nor do they allow any electorate to fill Board seats.

Meanwhile, OSI's halfway solution (i.e., a half-heartedly organized election that isn't really binding) seems designed to manufacture consent. OSI's Affiliates and paid individual Members are given the impression they have electoral power, but it's an illusion. Giving up on the whole illusion would be the most transparent choice for OSI, and if the OSI would rather end these advisory elections and just self-perpetuate, I'd support that decision.

Update on 2025-03-07: Chad Whitacre, candidate in OSI's “Member district”, has endorsed my suggestion that OSI reopen nominations briefly for this election. While I still urge voters in the “Member district” to rank my running mate, Richard Fontana, first in that race, I believe Chad would be a fine choice as your second-ranked candidate in the ranked-choice voting.

Posted Mon Mar 3 11:00:00 2025 Tags:

In the early days of space missions it was observed that spinning objects sometimes flip over spontaneously. This got people freaked out that it could happen to the Earth as a whole. Any solid object spinning around its middle (intermediate) axis will do this, and the amount of time it spends between flips has nothing to do with how quickly the flips happen.
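
This is what physicists call the intermediate axis (or tennis racket) theorem. A rough sketch of why it happens, assuming nothing more than an idealized rigid body with distinct principal moments of inertia I_1 > I_2 > I_3, starts from Euler’s torque-free equations:

  I_1 \dot{\omega}_1 = (I_2 - I_3) \omega_2 \omega_3
  I_2 \dot{\omega}_2 = (I_3 - I_1) \omega_3 \omega_1
  I_3 \dot{\omega}_3 = (I_1 - I_2) \omega_1 \omega_2

If the body spins almost entirely about the middle axis, so that \omega_2 \approx \Omega with \omega_1 and \omega_3 small, differentiating the first equation and substituting the third gives

  \ddot{\omega}_1 \approx \frac{(I_2 - I_3)(I_1 - I_2)}{I_1 I_3} \Omega^2 \omega_1

Both factors in the numerator are positive, so tiny wobbles grow exponentially until the body flips over; spinning about the largest or smallest axis makes that coefficient negative and the wobble stays bounded.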

You might assume that this possibility has since been debunked. That didn’t happen; people just kind of got over it. The model of the Earth as a single solid object is overly simplistic, with fluid flows of some kind going on below the surface. Unfortunately we have no idea what those flows are actually doing or how they might affect this process. It’s a great irony of our universe that we know more about distant galaxies than about the core of our own planet.

Unfortunately the one bit of evidence we have about the long-term stability of the Earth’s axis might point towards such flips happening regularly. The Earth’s magnetic field is known to invert every once in a while, and the usual assumption is that the field itself reverses. But it’s just as plausible that the gooey molten interior of the Earth keeps pointing in the same direction the whole time while the crunchy outer crust flips over.

If a flip like this happens over the course of a day then the Sun would go amusingly skeewumpus for a day and then start rising in the west and setting in the east. Unlike what you see in ‘The 3 body problem’, apparent gravity on the surface of the planet would remain normal that whole time. (The author of that book supposedly has a physics background. I’m calling shenanigans.) But there might be tides an order of magnitude or more higher than they normally go, and planetary weather patterns would invert, causing all kinds of chaos. That would include California getting massive hurricanes from over the Pacific while Florida would be much more chill.

A sudden flip like that is very unlikely, but that might not be a good thing. If the flip takes years then midway through it the poles will be pointed at the Sun, so they’ll spend months on end in either baking Sun or pitch black, getting baked to a crisp or frozen solid, far beyond the most extreme weather we have under normal circumstances. The equatorial regions will be spared, being in constant twilight, with the passage of time mostly denoted by spectacular northern and southern lights which alternate every 12 hours. And there will probably be lots of tectonic activity, but not as bad as on Venus.

Posted Mon Mar 3 03:47:57 2025 Tags:

Before getting into it I have to say: My experience in school was miserable. I hated it. My impression of the value of school is colored by that experience. On the other hand many if not most people hated school but it’s socially unacceptable to publicly say that school, especially college, was anything but transformative. So here’s to speaking up for the unheard masses.

The central thesis of ‘The Case Against Education’ by Bryan Caplan is that the vast bulk of benefit which students get from their schooling is the diploma, not the education. While his estimate of the value of the diploma at 80% sounds gut-wrenching, it’s hard to avoid getting somewhere close to that if you do even vaguely realistic estimates. Most classes cover material which students will never use even if they were to remember it. The claims that the real purpose is to enrich students’ lives and make them better citizens are not backed up by those things happening or even being seriously attempted.

All those arguments and counter-arguments are gone over painstakingly in the book, but they wind up roughly where anyone observing what actually goes on in schools would expect. Students retain almost nothing from school. What’s more interesting is the impact on students’ politics and morals, which is almost nothing. It barely moves the needle. The one thing it does have an effect on is that it makes people have a lot fewer children, especially women. Caplan doesn’t go deep into why this may be, but if I may speculate based on what other areas of research have shown, my guess is that it’s caused by (1) not having children while in school and (2) raising women’s standards for men high enough that many of them never settle.

This raises the question of how things got here and what could be done to fix it. The possibility which Caplan oddly does not consider is that it’s a broad conspiracy propped up by the rich and well-educated to make a path for their disappointing children to have solid careers. One in three students at Harvard got in through a path other than earning it from high school achievement. The parents of those kids repay Harvard with money or prestige. It’s remarkable and bizarre that employers don’t look on Harvard graduates with skepticism due to this fact, but they don’t.

What the book does go into is the question of why employers, especially private sector employers, continue to highly value degrees. The answer is an obvious but infuriating one: School, especially college, is something with all the trappings of a job: Boring, unimportant, authoritarian, and demanding. People who do well at it are likely to succeed at any job. (There’s a bunch of discussion of personality types and whatnot which are just poor proxies for succeeding at a sucky job and don’t add much to the discussion.) The only things which school is missing are productive output and pay. It’s unclear what alternative criteria employers could use to find high achieving employees who couldn’t cope with school precisely because it lacks anything exciting, meaningful, egalitarian, or fun.

The obvious fix for this is to do the exact same thing but with an actual employer. If you get a job at a participating qualified employer and manage to stay working there for a prespecified number of years you get a certificate of sucking it up. The obvious objections are that this would be a big subsidy to large employers, especially those with known toxic work environments whose certificates would be especially valued, and that people who failed the program would have wasted years of their life. Those objections are true, but apply just as much to what happens in universities. In any case, programs like this are rare in the real world, even internationally, and unlikely to become common any time soon.

One source of improvement going on now which Caplan oddly doesn’t go into is the devaluing of the most useless majors. This tends to naturally feed on itself by a somewhat circular logic: Employers devalue the most ridiculous majors, which causes only people who are lacking judgement to pursue them as degrees, which causes employers to further devalue those majors. Such logic is often not a good thing. It’s what got us into overeducation in the first place. But at least in this case it’s causing students to make choices of major which cause them to get more education out of their schooling.

What ‘The Case Against Education’ does go into in great detail is the case for vocational education, which is overwhelming. All but the very best students would be better off getting a vocational degree, both for their own monetary self-interest and the amount of productive work they’re doing for society. The sneering attitude generally given towards vocational degrees (and even engineering degrees!) is obnoxious and unwarranted and should be changed. If vocational degrees were given even a fraction of the prestige which is given to four year degrees the world would be a better place.

What remains is the question of how to improve the education itself. Caplan barely touches on this but I will speculate. Rather than go into vitriolic rants about the problems in subjects which are not my field, I’ll talk about the ones which are, specifically mathematics and computer science.

Many if not most students feel that math classes are torture, a boring subject which they will never use and can barely pass. This assessment is, I’m sad to say, fairly accurate. The reforms which actual mathematicians favor are twofold. First, what’s considered basic literacy in mathematics should be expanded to include probability and expected value, both important concepts that apply to people’s everyday lives. Second, everything beyond that shouldn’t be taught as something important which students need to be force-fed, but as something beautiful which it’s enriching to learn, like art or literature. There is of course a tiny fraction of students who are likely to go into math-heavy fields, and there should be advanced classes available for them, but those should also be taught by people who love the subject, with an emphasis on its beauty. Nobody should ever be subjected to the misery of trigonometry classes as they’re taught today. And people with PhDs in mathematics should be viewed as qualified to teach the subject.

Computer Science’s main problem is right there in its name: It’s Computer Science, not Programming. By an accident of history it’s socially acceptable to get something approximating a vocational programming degree by getting one in computer science. Either an alternative degree program focusing on software engineering should be set up, or the focus of computer science should be put on practical software development. Thankfully a lot of that is already happening.

Then there are things which are so basic that they don’t even fit in a field. Can we drop cursive and teach everybody touch typing? Bring back cooking, cleaning, and shop classes? Yes it was a problem that those classes were gendered in the past, but a better solution to that would be to teach them to everybody instead of teaching them to nobody.

Posted Sat Mar 1 22:02:22 2025 Tags:

There’s a new speedcubing single solve record which is by far the funniest one ever. No, it isn’t because it’s by an 11-year-old kid. He’s the best speedcuber in the world. It isn’t even because he fumbles the solve at the end and still shatters the world record, on a solve which isn’t even all that lucky, although that is very funny. What really makes it funny is that there are multiple videos breaking down the whole solve, by people who are themselves good speedsolvers, going over every single turn done in painstaking detail, which completely miss a key part of what’s happening.

The background to this is that there have long been pissing matches within the world of speedcubing about what’s the best solving algorithm. On one end there are methods which minimize total number of moves, with Petrus being particularly good at keeping that number down, and at the other end are methods which optimize for turning as fast as possible, with CFOP being an extreme of that. The problem with the move-minimizing methods is that they require a lot of analysis while the solve is being done and the finger placements to do the required turns tend to be awkward, in the end rendering the whole thing very slow. The speed oriented methods by contrast require only a few looks at the cube and then doing memorized sequences between them. (Speedcubers call sequences ‘algorithms’ which makes me cringe but I’ll go with it.) Good speedcubers have not only memorized and practiced all the algorithms but worked out exact finger placements throughout them for optimal speed. This is the approach which has worked much better in practice. CFOP has dominated speedsolving for the last two decades.

What seems to be happening is that to the speedsolvers who aren’t initiated, this looks like a fairly lucky CFOP solve which happens to yield particularly good finger placements. While this is technically true it’s missing that those setups aren’t accidents. While this solve is a bit lucky, several of those things are guaranteed to happen because of some subtle things done in the solve. This isn’t actually a CFOP solve at all. It’s EOCross.

A quick summary of CFOP: First you solve the bottom four edges, which is done intuitively (really planned out during the inspection time provided before solving begins). Then a ‘look’ is done and one of the bottom corners and the edge next to it is solved. This is repeated three more times to finish the bottom two layers. Then a look is done and an algorithm is used to orient all the top layer pieces. Finally there’s one last look and an algorithm is done to position all the last layer pieces. Yes that’s a lot of memorized algorithms.

This is how EOCross works: First you solve the bottom edges and all edge orientations, a process which absolutely must be planned out during inspection. The meaning of ‘edge orientations’ in this context may sound a bit mysterious and it’s subtle enough that it caused the accidental trolling of the new world record. If you only rotate the up, down, right, and left faces of a Rubik’s Cube the edges don’t change orientation. Literally they do change orientation in the sense that they rotate while moved, but whenever they go back to the position they started in they’ll always be in the same orientation they were at the beginning. The solve then proceeds with doing corner and edge pairs from the first two layers but with the restriction that the front and back faces aren’t turned. Finally all that’s left is the last layer which happens to be guaranteed to have all the edges correctly oriented, and a single algorithm is done for those. Yes that’s an even larger number of algorithms to memorize.

That may have been a bit much to follow, but the punch line is that to an uninitiated CFOP solver an EOCross solve looks like a CFOP solve where the edge orientations happen to land nicely.

Technically EOCross is a variant on a solving method called ZZ but it’s sufficiently different that it should be considered a different method. It was invented several years ago and devotees have been optimizing the details ever since. There have of course been claims that it should beat CFOP, to which the response has mostly been to point out that no EOCross solvers have been anywhere near the best speedcubers. The rebuttal has been that the top solvers haven’t tried it because it’s so much work to learn and if they did they’d be faster. Given how handily the world record was just broken that rebuttal seems to have been correct. Good work EOCrossians getting the method optimized.

Posted Fri Feb 28 08:26:35 2025 Tags:

I accepted nomination as a candidate for an “Affiliate seat” in the Open Source Initiative (OSI) Board of Directors elections. I was nominated by the following four OSI Affiliates:

  • The Matrix Foundation
  • The Perl and Raku Foundation
  • Software Freedom Conservancy (my employer)
  • snowdrift.coop

To my knowledge, I am the only Affiliate candidate, in the history of these OSI Board of Directors “advisory” elections, to be nominated by four Affiliates.

I am also endorsed by another Affiliate, the Debian Project.

You can see my official candidate page on OSI's website. This blog post will be updated throughout the campaign to link to other posts, materials, and announcements related to my candidacy.

Updates During the Campaign

I ran on the “OSI Reform Platform” with Richard Fontana.

I created a Fediverse account specifically to interact with constituents and the public, so please also follow that on floss.social/@bkuhn.

Posted Wed Feb 26 21:01:05 2025 Tags:

Computer scientists have long studied the question of what things can fit through mouse holes. Early on it was an open question as to whether there even exists a star which is larger than a mouse hole. That question got settled with a seminal result tackling an easier problem: Is there a planet bigger than a mouse hole? Because the mouse hole exists on a planet, that planet must be bigger than a mouse hole. Because the mouse hole can’t fit through itself, the planet can’t either. Because there are stars bigger than planets, there must exist a star which can’t fit through a mouse hole.

The next question is whether there exists a continent which can’t fit through a mouse hole. This frustratingly remains an open problem. Because the earth is composed of continents and water and there are no mouse holes on water, that would seem to imply that there must be a mouse hole on a continent and therefore the continent is larger than the mouse hole. But there’s a loophole: Boats are also on water and may contain mouse holes. While it is known that mouse holes exist it remains an open question whether they occur on continents, on boats, or both. (All mouse holes are known to be about the same size, so ‘a mousehole’ and ‘any mousehole’ mean essentially the same thing.) If it could be shown that there exists a continent larger than all boats, which is widely believed to be the case, then we would know that there exists a continent larger than a mouse hole, but for now all we can prove is that there is either a boat larger than a mouse hole or a continent larger than a mouse hole.

Several years ago there was a breakthrough result showing that there exists a blue whale larger than a mouse hole. This is a very exciting result not only on its own merits but also because it breaks the so-called recursion barrier: Blue whales are the first things known to be larger than mouse holes which cannot themselves contain mouse holes. Unfortunately whether there exists a continent larger than all blue whales remains open so this result can’t be used to resolve the question of whether there exists a continent larger than a mouse hole.

Now there is an exciting new result showing that there exists an elephant larger than a mouse hole. Unfortunately this result comes with a large caveat: While it’s known that blue whales don’t exist on other planets the question of whether elephants exist on other planets remains open. So it’s still possible that there doesn’t exist a continent larger than a mouse hole or even possible that there exists an elephant on another planet larger than the entire earth which would make this result trivial, although that isn’t believed to be the case. If it could be shown that there aren’t elephants on other planets, or that they aren’t any larger than the elephants on earth, then it would be known that there’s an elephant on our planet larger than a mouse hole.

This is an exciting time in mouseholeology with important results coming in quickly. The coming years are all but guaranteed to bring new breakthroughs in our understanding of which things can fit through mouse holes.

Posted Wed Feb 26 02:48:51 2025 Tags:

Ready in time for libinput 1.28 [1] and after a number of attempts over the years we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad. Instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence, without having to tap or physically click and hold a button. You might be able to see the appeal right there.

Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is in the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it does fit in quite nicely with the rest of the features we have though.

There really isn't much more to say about the new feature except: It's configurable to work on 4-finger drag too so if you mentally substitute all the threes with fours in this article before re-reading it that would save me having to write another blog post. Thanks.

[1] "soonish" at the time of writing

Posted Mon Feb 24 05:38:00 2025 Tags:

This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users.

This is a feature of a feature of a feature, so let's start at the top.

"Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap corresponds emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and to be recognised the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads.

"Tap-and-drag" refers to the ability to keep the pointer down so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to it's final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful.

"Drag lock" refers to the ability of keeping the mouse button pressed until "unlocked", even if the finger moves off the touchpads. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button, instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap with any number of finger releases in between. Sounds (and is) complicated to explain, is quite easy to try and once you're used to it it will feel quite natural.

The above behaviour is the new behaviour which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger stayed up longer than a certain timeout. This was less predictable and caused issues with users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (alanis morissette-style ironically) faster to release (a tap can be performed before the previous timeout would've expired).

Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks!

As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true
  
All features above are actually handled by libinput; this is just about a default change in GNOME.
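
If you want to double-check what you currently have set, the same keys can be read back with gsettings get (nothing specific to this change, just the standard gsettings CLI):

  $ gsettings get org.gnome.desktop.peripherals.touchpad tap-to-click
  $ gsettings get org.gnome.desktop.peripherals.touchpad tap-and-drag
  $ gsettings get org.gnome.desktop.peripherals.touchpad tap-and-drag-lock
  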
Posted Mon Feb 24 04:17:00 2025 Tags:
Looking at some claims that quantum computers won't work. #quantum #energy #variables #errors #rsa #secrecy
Posted Sat Jan 18 17:45:19 2025 Tags:

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, it's up to the maintainer to remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).

So for the foreseeable future libinput will follow the following pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least); all it requires is for reporters to not get grumpy about their issue being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Posted Wed Dec 18 03:21:00 2024 Tags:

A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.

Both crates were intentionally written with minimal dependencies, they currently only depend on thiserror and arguably even that dependency can be removed.

HID Usage Tables (HUT)

As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

  let gd_x = GenericDesktop::X;
  let usage_page = gd_x.usage_page();
  assert!(matches!(usage_page, UsagePage::GenericDesktop));
  
Or the more likely need: convert from a numeric page/id tuple to a named usage.
  let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
  println!("Usage is {}", usage.name());
  
90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple.

hidreport - Report Descriptor parsing

The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs); then we can do this:

  let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
  
I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
   let input_report_bytes = read_from_device();
   let report = rdesc.find_input_report(&input_report_bytes).unwrap();
   let field = report.fields().first().unwrap();
   match field {
       Field::Variable(var) => {
          let val: u32 = var.extract(&input_report_bytes).unwrap().into();
          println!("Field {:?} is of value {}", field, val);
       },
       _ => {}
   }
  
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.

hid-recorder

The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools and now we have the third version written in Rust. Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x02,                    // Usage (Mouse)                             2
# 0xa1, 0x01,                    // Collection (Application)                  4
# 0x05, 0x01,                    //   Usage Page (Generic Desktop)            6
# 0x09, 0x02,                    //   Usage (Mouse)                           8
# 0xa1, 0x02,                    //   Collection (Logical)                    10
# 0x85, 0x1a,                    //     Report ID (26)                        12
# 0x09, 0x01,                    //     Usage (Pointer)                       14
# 0xa1, 0x00,                    //     Collection (Physical)                 16
# 0x05, 0x09,                    //       Usage Page (Button)                 18
# 0x19, 0x01,                    //       UsageMinimum (1)                    20
# 0x29, 0x05,                    //       UsageMaximum (5)                    22
# 0x95, 0x05,                    //       Report Count (5)                    24
# 0x75, 0x01,                    //       Report Size (1)                     26
... omitted for brevity
# 0x75, 0x01,                    //     Report Size (1)                       213
# 0xb1, 0x02,                    //     Feature (Data,Var,Abs)                215
# 0x75, 0x03,                    //     Report Size (3)                       217
# 0xb1, 0x01,                    //     Feature (Cnst,Arr,Abs)                219
# 0xc0,                          //   End Collection                          221
# 0xc0,                          // End Collection                            222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
#    Report size: 80 bits
#  |   Bit:    8       | Usage: 0009/0001: Button / Button 1                          | Logical Range:     0..=1     |
#  |   Bit:    9       | Usage: 0009/0002: Button / Button 2                          | Logical Range:     0..=1     |
#  |   Bit:   10       | Usage: 0009/0003: Button / Button 3                          | Logical Range:     0..=1     |
#  |   Bit:   11       | Usage: 0009/0004: Button / Button 4                          | Logical Range:     0..=1     |
#  |   Bit:   12       | Usage: 0009/0005: Button / Button 5                          | Logical Range:     0..=1     |
#  |   Bits:  13..=15  | ######### Padding                                            |
#  |   Bits:  16..=31  | Usage: 0001/0030: Generic Desktop / X                        | Logical Range: -32767..=32767 |
#  |   Bits:  32..=47  | Usage: 0001/0031: Generic Desktop / Y                        | Logical Range: -32767..=32767 |
#  |   Bits:  48..=63  | Usage: 0001/0038: Generic Desktop / Wheel                    | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
#  |   Bits:  64..=79  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Input Report -------
# Report ID: 31
#    Report size: 24 bits
#  |   Bits:   8..=23  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Feature Report -------
# Report ID: 18
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  12..=15  | ######### Padding                                            |
# ------- Feature Report -------
# Report ID: 23
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bit:   12       | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range:     0..=1     | Physical Range:     0..=0     |
#  |   Bits:  13..=15  | ######### Padding                                            |
##############################################################################
# Recorded events below in format:
# E: .  [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
#                Button 1:     0 | Button 2:     0 | Button 3:     0 | Button 4:     0 | Button 5:     0 | X:     5 | Y:     0 |
#                Wheel:     0 |
#                AC Pan:     0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
  
Posted Tue Nov 19 01:54:00 2024 Tags: