The lower-post-volume people behind the software in Debian. (List of feeds.)

Board games are said to have ‘position’, which is about the longer term implications of what’s happening on the board, and ‘tactics’, which is about immediate consequences. Let’s consider a game which is purely tactical. In this game the two sides alternate picking a bit which is added to a string and after they’ve both moved 64 times the secure hash of the string is calculated and that’s used to pick the winner. I suggest 64 as the number of moves because it’s cryptographic in size, so the initial moves will have unclear meanings and will become clearer towards the end of the game.

The first question to ask about this is what are the chances that the first player to move will have a theoretical win, assuming both sides have unlimited computational capability. It turns out if the chances are greater or less than a certain special value then the probability of one particular side having the win goes up rapidly as you get further from the end of the game. If the win probability is set to exactly that value then the winning chances remain stable as you calculate backwards. Calculating this value is left as an exercise to the reader. The interesting thing is that the value isn’t 50%, in fact it’s fairly partisan, which raises the question of whether the level of advantage for white in Chess is set about right to optimize for it being a tense game.
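A quick numerical check (mild spoiler for the exercise): assuming the leaf outcomes are independent coin flips and the fanout is 2, a position is a theoretical win for the player to move exactly when at least one child is a loss for the opponent, which gives a one-line recurrence you can iterate backwards from the end of the game. This sketch is my illustration, not part of the original post:

```python
# Backward induction for the random bit-picking game, assuming fanout 2
# and i.i.d. random leaf outcomes. p is the chance a position is a
# theoretical win for the player to move; a position is a win iff at
# least one of the two children is a loss for the player moving there:
#     p_next = 1 - p * p

def backup(p_leaf, plies):
    p = p_leaf
    for _ in range(plies):
        p = 1 - p * p
    return p

fixed_point = (5 ** 0.5 - 1) / 2   # solves p = 1 - p**2, about 0.618

# Starting exactly at the fixed point, the win chance stays stable...
print(backup(fixed_point, 64))

# ...but any deviation gets amplified as you back up the tree, because
# |d/dp (1 - p**2)| = 2 * p > 1 at the fixed point: it's unstable.
print(backup(fixed_point + 0.01, 10))
```

Note the fixed point is not 50%: the side to move wins about 61.8% of random positions, which is the ‘fairly partisan’ value the post alludes to.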

Thanks for reading Bram’s Thoughts! Subscribe for free to receive new posts and support my work.

There are other variants possible. The number of possible plays could be more than 2, or somewhat variable, since you might have a chance of making the opponent skip their turn so you get to move again. This would allow the ‘effective’ fanout to be something non-integer, but it’s an interesting question whether there’s a way for it to be less than 2.

There’s a variant where instead of there being a fixed number of moves in a game after each move the player who just moved has some probability of winning (or losing). It isn’t obvious whether any win probability guarantees a 100% probability that the game is winnable by one side or the other. It seems like that should be a foundational result in computer science but I’m unfamiliar with it.

In practice of course analyzing this sort of game is constrained by computational ability. That can be ‘emulated’ by assuming that the outcomes are truly random and there’s an oracle which can be accessed a set number of times on one’s turn to say who wins/whether a player wins in a given position. There are a lot of variants possible based on the number of queries the sides have and whether there’s optionality in it and whether you can think on the opponent’s time. It feels like optimal play is slightly randomized. Intuitively if one player has more thinking time than the other then the weaker player needs to mix things up a bit so their analysis isn’t just a subset of what the opponent is seeing. But this is a wild guess. Real analysis of this sort of game would be very interesting.


Posted Sat Jan 25 23:26:36 2025 Tags:
Looking at some claims that quantum computers won't work. #quantum #energy #variables #errors #rsa #secrecy
Posted Sat Jan 18 17:45:19 2025 Tags:

You might notice Katy is having a bit of an interdimensional anomaly here. I implemented this as an experiment because it’s different from any color cycling effect I’ve ever seen before. Normally in color cycling there’s one position which is true, then the hues rotate until all of them are swapped, then they keep rotating until they’re back to true. In this one at any given moment there are two opposite hues which are true and the ones at 90 degrees from those are swapped, and the color cycling effect is rotating which angle is true. It’s also doing a much better job of keeping luminance constant due to using okhsl.
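Described more mathematically, the effect is a hue reflection about an axis that rotates over time: hues on the axis (and its opposite) stay true, hues 90 degrees off it swap. Here’s a minimal sketch of just the hue math (my reconstruction in plain hue degrees; the real implementation works in okhsl):

```python
# Sketch of the hue transform described above (my reconstruction, not the
# original code): reflect each hue about an axis angle that rotates over
# time. Hues on the axis and its opposite are unchanged ("true"), hues at
# 90 degrees from it swap, and rotating the axis cycles which hue is true.

def cycle_hue(hue, axis):
    """Reflect `hue` (degrees) about the `axis` angle (degrees)."""
    return (2 * axis - hue) % 360

# The axis hue itself is fixed...
assert cycle_hue(30, 30) == 30
# ...and so is the opposite hue...
assert cycle_hue(210, 30) == 210
# ...while hues 90 degrees off the axis swap with each other.
assert cycle_hue(120, 30) == 300
assert cycle_hue(300, 30) == 120
```

Animating `axis` over time produces the rotation of which angle is true.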

Code is here. It’s leaning on libraries for most of the work, but I did write some code to dither just the low order bits of the RGB values. That’s a technique which should be used more often. This effect would also work on animated video. You could even adjust the angle as a directorial trick, to draw the viewer’s eye towards particular things by making their color true.

(Now that I think about it low order bit dithering could be improved by using error in the okhsl gamut. It could also be improved by other diffusion techniques, which in turn can be further improved by dynamically choosing which neighboring pixel most wants to have error in the opposite direction already. I’m going to exercise some self-control and not implement any of this, but you most definitely should pick it up where I left off. All video manipulation should be done in 16 bit color the entire time and only dithered down to 8 bit on final display.)
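To make the idea concrete, here’s a minimal sketch (my illustration, not the linked code) of quantizing a 16 bit channel value down to 8 bits while dithering the discarded low-order byte:

```python
import random

def dither_to_8bit(value16, rng=random.random):
    """Quantize a 16 bit channel value to 8 bits, using the discarded
    low byte as the probability of rounding up rather than truncating."""
    high = value16 >> 8        # the 8 bits we keep
    low = value16 & 0xFF       # the low-order bits we'd otherwise discard
    if rng() * 256 < low and high < 255:
        high += 1
    return high

# Averaged over many pixels this preserves the 16 bit level; any single
# pixel just rounds up or down with the corresponding probability.
assert dither_to_8bit(0x1280, rng=lambda: 0.0) == 0x13   # forced round up
assert dither_to_8bit(0x1280, rng=lambda: 0.99) == 0x12  # forced round down
```

A real implementation would use error diffusion rather than independent random thresholds, but the principle is the same.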

As a bonus, I also simplified the color swatches I gave previously into two separate ones, for light and dark backgrounds. Files are here and here.

All of the above is done within the limitations of the sRGB color space. The sRGB standard kind of sucks. It’s based on the very first color television ever made, in 1954, and the standardization which came later made it consistent but not broader. Now that OLED is getting used everywhere my expectation is that things are going to start supporting Rec2100 under the hood, and once that becomes ubiquitous new content will be produced in formats which support that extra color depth. It’s going to take a few years.

Posted Fri Dec 27 15:32:45 2024 Tags:

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, only a maintainer can remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).

So for the foreseeable future libinput will follow the following pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least), all it requires is for reporters to not get grumpy about issues being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Posted Wed Dec 18 03:21:00 2024 Tags:

Different sports have different attitudes to rules changes. Most competitive sports get constant rules tweaks (except for baseball, which seems dead set on achieving cultural irrelevance). Track and field tries to keep consistent rules over time so performances in different years are comparable. Poker has a history of lots of experimentation but has mostly settled into a few main variants over the last few decades. Go never changes the rules at all (except for tweaking the scoring system out of necessity but still not admitting it. That’s a story for another day). Chess hasn’t changed the rules for a long time and it’s a problem.

I’m writing this right after a new world chess champion has been crowned. While the result was better than previous championships in that it succeeded in crowning an unambiguous world champion, it failed at two bigger goals: Being exciting and selecting a winner who’s widely viewed as demonstrating that they’re the strongest among all the competitors.


The source of the lack of excitement is no secret: Most of the games were draws. Out of the 14 games, 5 were decisive. This also creates a problem for the accuracy of the result. In a less drawish game a match with 6 games of which only one was drawn would have equal statistical significance (assuming it wasn’t overly partisan). In fact it’s even worse than it appears on its face. The candidates tournament, which selected the challenger, was a 7 person double round robin where the winner scored 9/14 and three other competitors scored 8.5/14. Picking a winner based on such a close result has hardly any significance at all, and that format means that unless one of the competitors was ludicrously better than the others such a close result was expected. If the format were instead that the top four finishers from the candidates tournament played single elimination matches against each other then the eventual result would be viewed with far more authority. Part of the problem is that this is simply a badly designed tournament, but some of the reasoning behind the format comes down to drawishness: such matches would be long and arduous and not much fun, as they were in the past. Some previous FIDE title tournaments were far worse, following the very misguided idea that making the results random would lead to more excitement. That makes sense in sports like soccer where there are few teams and it’s important that all of them have a shot, but the ethos of Chess is that the better player should consistently win, and adding randomness can easily lead to never seeing the same player win twice.

This leads to the question of how the rules of chess could be modified to not have so many draws. Most proposed variations suffer from being far too alien to chess players to be taken seriously, but there are two approaches which are, or should be, taken seriously which I’d like to highlight: Fischer random and the variants explored by Kramnik. In Fischer random a starting position with the pieces in the back rank scrambled randomly is selected. In the latter approach Kramnik, a former world champion, suggested game variants, and the DeepMind team trained an AI on them and measured the draw rate and partisanship of each based on how that engine did in self-play games. (Partisanship is the degree to which the game favors one player over the other.) I love this methodology. Of the variants tried I’d like to highlight two of the best. One is ‘torpedo’, in which pawns can move two squares at once from any position, not just the starting one. The other is no-castle, which is exactly what it sounds like. No-castle has the benefit that it gets rid of the most complex and confusing chess rules and that it’s just a change to the opening position; in fact it’s a position reachable from the standard Chess opening position. (For some reason people don’t do no-castle Fischer random tournaments, which seems ridiculous. Might as well combine the best ideas.)

Both no castle and torpedo have about the same level of partisanship as regular chess, which may be a good thing for reasons I really ought to do a separate blog post about. They also both do a good job of making the game less drawish. The reason for this is, in my opinion, basically the same, or at least two sides of the same coin. Torpedo makes the pawns stronger, so they’re more likely to promote and decide the game. No castle nerfs the king so it’s more likely to get captured. Of these two approaches torpedo feels far more alien to regular chess than no castle does. My proposal to make Chess even more non-drawish is to nerf the king even more: Make it so the king can’t move diagonally. Notably this would cause even king versus king endgames to be decisive. It would also result in a lot more exciting attacks because the king would be so poorly defended. This needs extensive play testing to be taken seriously, but repeating the Deepmind experiment is vastly easier now that AI has come so far, and it would be great if Chess or at least a very Chess-like game could have a world championship which was much more exciting and meaningful.


Posted Fri Dec 13 06:21:54 2024 Tags:

Before I get into the meat of this rant, I’d like to say that my main point is not that giving monthly numbers is incompetent, but I will get that out of the way now. There’s this ongoing deep mystery of the universe: “Why does our monthly revenue always drop in February?” It’s because it has fewer days in it, ya dumbass. What you should be reporting is average daily numbers, with months as the time intervals of measurement and longer term stats weighted by the lengths of months. That’s still subject to some artifacts but they’re vastly smaller and hard to avoid. (Whoever decided to make days and years not line up properly should be fired.)
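As a sketch of what that normalization looks like (my example data, not anyone’s real revenue), the standard library’s calendar module already knows month lengths:

```python
import calendar

# Sketch of month-length-normalized reporting: convert raw monthly totals
# into average daily revenue per month, dividing by the actual number of
# days so February doesn't look like a mysterious dip.

def avg_daily(monthly_totals, year):
    """monthly_totals: {month_number: revenue for that month}."""
    return {m: total / calendar.monthrange(year, m)[1]
            for m, total in monthly_totals.items()}

# Identical revenue of 100/day looks like a February slump in raw
# totals, but is perfectly flat once normalized.
totals = {1: 3100, 2: 2800, 3: 3100}   # 2025: Jan 31, Feb 28, Mar 31 days
print(avg_daily(totals, 2025))          # {1: 100.0, 2: 100.0, 3: 100.0}
```

The same per-day weights are what you’d use when rolling months up into longer-term stats.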

Even if you aren’t engaged in that bit of gross incompetence, it’s still the case that not only year over year but any fixed time interval delta is something you should never, ever use. This includes measuring historical inflation, the performance of a stock, and the price of tea in China. You may find this surprising because the vast majority of graphs you ever see present data in exactly this way, but it’s true and is a huge problem.


Let’s consider the criteria we want out of a smoothing algorithm:

  1. There shouldn’t be any weird artifacts in how the data is presented

  2. There should be minimal parameterization/options for p-hacking

  3. Only the data from a fixed window should be considered

  4. Data from the future should not be considered

So what’s wrong with straightforwardly applying even weighting across the entire time interval? The issue is that it grants a special and extreme cutoff status to two very specific points in time: Right now and the beginning of the interval. While right now is hard to do anything about (but more on that later) granting special status to an arbitrary point in the past is just wrong. For example, here’s a year over year weighted average in a scenario where there was a big spike one month, which is the most artifact-laden scenario:

What happened exactly a year after the spike? Absolutely nothing, it’s an artifact of the smoothing algorithm used. You could argue that picking a standard interval mitigates the amount of possible p-hacking, and that’s true. But the effects are still there, people can always decide to show the data only when the artifact goes the way they want, and it’s extremely hard to standardize enough to avoid trivial p-hacks. For example it’s viewed as acceptable to show year to date, and that may hit a different convenient cutoff date.

(The graphs in this post all use weighted geometric means and for simplicity assume that all months are the same length. As I said at the top you should take into account month lengths with real world data but it’s beside the point here.)

What you should be using is linear weighted moving average, which instead of applying an even weighting to every data point has the weighting go down linearly as things go into the past, hitting zero at the beginning of the window. Because the early stuff is weighted less the size of the window is sort of narrower than with constant weighting. To get roughly apples to apples you can set it so that the amount of weight coming from the last half year is the same in either case, which corresponds to the time interval of the linear weighting being multiplied by the square root of 2, which is roughly 17 months instead of 12. Here’s what it looks like with linear weighted averages:
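Here’s a minimal sketch of that linear weighted moving average (my implementation; note the post’s graphs use weighted geometric means, while this uses an arithmetic mean for simplicity):

```python
# Linear weighted moving average: the newest point gets the largest
# weight and the weight falls off linearly, hitting zero just past the
# back of the window, so old data fades out instead of falling off a cliff.

def lwma(series, window):
    """Weighted average of the last `window` points of `series`."""
    points = series[-window:]
    n = len(points)
    weights = range(1, n + 1)           # oldest -> 1, newest -> n
    total = sum(w * x for w, x in zip(weights, points))
    return total / sum(weights)

# A one-month spike decays smoothly as it ages out of the window, rather
# than producing a sudden drop exactly `window` months later.
series = [100] * 20 + [200] + [100] * 20
print([round(lwma(series[:i], 17), 1) for i in range(21, 42, 5)])
```

With an evenly weighted window the same spike would exit all at once, producing the phantom anniversary event shown in the year-over-year graph.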

As you can see the LWMA doesn’t have a sudden crash at the end and much more closely models how you would experience the spike as you lived through it. There’s still some p-hacking which can be done, for example you can average over a month or ten years instead of one year if that tells a story you prefer, but the effects of changing the time interval used are vastly less dramatic. Changing by a single month will rarely have any noticeable effect at all, while doing the same with a strict cutoff will matter fairly often.

Stock values in particular are usually shown as deviations from a single point at the beginning of a time period, which is all kinds of wrong. Much more appropriate would be to display it in the same way as inflation: annual rate of return smoothed out using LWMA over a set window. That view is much less exciting but much more informative.

In defense of the future

Now I’m going to go out on a limb and advocate for something more speculative. What I said above should be the default, what I’m about to say should be done at least sometimes, but for now it’s done hardly ever.

Of the four criteria at the top the dodgiest one by far is that you shouldn’t use data from the future. When you’re talking about the current time you don’t have much choice in the matter because you don’t know what will happen in the future, but when looking at past data it makes sense to retroactively take what happened later into account when evaluating what happened at a particular time. This is especially true of things which have measurement error, both from noise in measurement and noise in arrival time of effect. Using both before and after data simply gives better information. What it sacrifices is the criterion that you don’t retroactively change how you view past data after you’ve already presented it. While there’s some benefit to revisiting the drama as you experienced it at the time, that shouldn’t always be more important than accuracy.

Here’s a comparison of how a rolling linearly weighted average which uses data from the future would show things in real time versus in retrospect:

(This assumes a window of 17 months centered around the current time and data from the future elided, so the current value is across a window of 9 months.)
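That centered variant can be sketched like so (my implementation, again using an arithmetic rather than geometric mean for simplicity): weights fall off linearly on both sides of the point being smoothed, and data outside the series, including the not-yet-existing future, is simply elided:

```python
# Centered linear weighted moving average: triangular weights peak at the
# point being smoothed and fall off linearly in both directions. Points
# outside the series (e.g. the future, at the right edge) are elided, so
# the window shrinks to one-sided near the present.

def centered_lwma(series, index, half_window=8):
    """With half_window=8 the full window is 17 points."""
    total = weight_sum = 0.0
    for offset in range(-half_window, half_window + 1):
        i = index + offset
        if 0 <= i < len(series):                 # elide missing data
            w = half_window + 1 - abs(offset)    # peak weight at the center
            total += w * series[i]
            weight_sum += w
    return total / weight_sum
```

At the newest point only the current and previous 8 months contribute, matching the 9 month effective window mentioned above; in retrospect the full 17 month centered window applies.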

The after the fact smoothing is much more reasonable. It’s utilizing 20/20 hindsight, which is both its strength and weakness. The point is sometimes that’s what you want.


Posted Mon Dec 2 17:25:59 2024 Tags:

The book ‘Manchild in the Promised Land’ is a classic of American literature which traumatized me when I was assigned to read it in middle school. It is a literarily important work, using a style of hyperrealism which was innovative at the time and eventually led to reality television and YouTube channels which have full staffs but go out of their way to make it appear that the whole thing was produced by a single person. In parallel it was also part of the rise of the blaxploitation genre. The author and most of the book’s ardent followers claim that it proves that black people in the US are that way due to the oppression of white people. It’s also been claimed to be oppression porn, and to glorify drug use and criminality. I’ll get to whether those things are true, but first there are some things I need to explain.

The book is an autobiography about the author’s growing up in Harlem, how he was involved in all manner of criminality when he was younger, and how he eventually managed to get out of it, go to law school, and have a respectable career. What struck me when I was younger was that it was the first depiction of the interactions between men and women I’d ever seen which wasn’t propaganda bullshit. Back in the 80s the depictions of dating in sitcoms and movies were stagey and dumb, but worse than that they’d alternate between christian moralizing and the juvenile fantasizing of the writers. Even back then I could see transparently through them. This book was different. It depicted people in actually uncomfortable situations, doing things you weren’t supposed to talk about, and having normal human reactions. The thing which caused damage to my impressionable young mind was that most of these interactions involved pimps and hos and presented a very dark side of humanity.


There’s something I now need to admit. I haven’t been able, and most likely won’t be able, to force myself to finish re-reading this book. I know that’s a bit hypocritical when writing a review, but there’s something I realized early on re-reading it which recontextualized everything in it and made me unable to cope with reading it any more. And that is that it’s all made up.

It was the homophobia which gave it away. All the gay people are presented as aggressive creepazoids, with not much motivation other than to fulfill the role of creepazoid in this literary universe of general oppression. Such people do exist in real life, but they tend to have the good sense to not openly go after people they don’t know and have no reason to think will be receptive, especially back when homosexuality was downright illegal and beating someone up for being gay wouldn’t land you in any trouble at all. (The story is set in the 1940s.) The logical conclusion is that the author didn’t know any real out gay people and was making them up as characters in a story using a common trope of the time. Looking into the author more, everything falls apart. He comes across as a dweeb in interviews, not someone anyone would take seriously as a pimp. None of the characters other than him seem to ever get mentioned anywhere else. He conveniently claims to have been a lawyer but then stopped practicing because he could make more money giving talks, but there doesn’t seem to be evidence that he ever practiced or passed the bar, attended law school, or got into law school in the first place. (I’m guessing he did some but not all of those.) Most ridiculously the timing doesn’t work. Either he was hustling soldiers on leave from WWII when he was eight years old, or that was somebody else’s story. In an interview with NPR they said it was a novel but written as an autobiography, which is surprising given that nearly everything referencing it claims unambiguously that it’s simply an autobiography. My guess is that they did some actual journalism and politely let him fall back on ‘it’s a novel’ when they called him out on his lies.

One may ask whether this matters. Does a work of art’s meaning or import depend on the artist who created it? In general I lean towards saying no, that the truth of even a claimed personal experience is less important than whether that experience is prototypical of that of many others. But in this case the context and meaning are so completely changed based on whether it’s real that I have to go the other way. If it’s a real story it’s about someone who was once a pimp, came to the realization that hos are real people too, changed his life around and is bringing a laudable message of inclusion. If it’s made up then it’s a loser fantasizing about having been a pimp. That doesn’t mean that this isn’t an important and influential book, but it does mean that if you’re going to teach it you should include the context that it’s a seminal work of incel literature.

All that said, being a work of fiction doesn’t mean that there’s nothing about reality which can be gleaned from a work. I truly believe that the stories in this book were ones told to or observed by the author involving older, cooler boys and were either true or at least demonstrations that such stories could win you social status. A lot of it jibes with things which I personally witnessed in the 80s and 90s growing up in walking distance of where the book is set. So now let’s get to critiques of the book and whether they’re true or not. The first question at hand is: Does it show that black americans are kept where they are via oppression from white americans? Well… not really. It’s complicated. A lot of the white people in it are very personally charitable to the author, helping him get through school and better himself. That’s the opposite of oppression. Where it does show a lot of oppression is from the police, both in the form of under- and over-policing. Police brutality is commonplace, but the cops won’t show up when you call for them and actually need them. This is definitely a real problem, thankfully less so now than the better part of a century ago, but there’s still a default assumption in the US that when the cops show up they’ll make the situation worse. Maybe over the course of my lifetime that’s been downgraded from an assumption to a fear, but it’s still a very real hangover and the war on drugs isn’t done yet. What the book most definitely shows is the black community oppressing itself. People in the ghetto are mostly interacting with other people in the ghetto, so it’s to be expected that crimes which happen there also have victims from there. The depictions of drug dealers intentionally getting potential customers addicted and of men forcing women into prostitution are particularly disturbing, but are things which actually happen.

That leads to the other big question, which is whether this book glorifies criminality. Yes it glorifies criminality! Half of it is the author fantasizing about being a badass pimp! He claims cocaine isn’t addictive! Gang rape is portrayed as a rite of passage! (The one thing which the author seems to really not like is heroin. This was before crack.) What’s puzzling is how this book is held up as a showcase of the greatness of black culture. Saying ‘There’s a lot of glorification of criminality and misogyny in black culture and that’s a bad thing’ is something one isn’t allowed to say in polite company, or can only be said by black academics who obfuscate it with layers of jargon. While I understand getting defensive about that, it might behoove people who want to avoid the issue to not outright promote works which are emblematic of those trends (by, say, making them assigned reading in school). It doesn’t help to fall back on ‘this has a deep meaning which only black people can understand’ when the obvious problems are pointed out, as the author had a penchant for doing in interviews.

This brings me to the most central and currently culturally relevant aspect of what’s real in this book. The (very different, not at all bullshit) book ‘We Have Never Been Woke’ by Musa al-Gharbi makes a compelling case for the claim that the woke worldview is a largely self-serving one of intellectual elites who claim to speak for underprivileged people but don’t. That book is from the point of view of someone who is himself cringily woke and can point out the hypocrisy and disconnect from that end. The other side of it is that when the underprivileged people woke claims to speak for say that it doesn’t, you should believe them. This book is an example of that. So is all of gangsta rap. It is not productive to pretend a culture is something it is not just because you think it should be different. And there I think Manchild gets some amount of redemption: It claims to be a work which shines a light on cold, hard, unpleasant reality, and for all its faults it does, only not with the meaning the author intended.


Posted Tue Nov 26 21:04:24 2024 Tags:

A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.

Both crates were intentionally written with minimal dependencies, they currently only depend on thiserror and arguably even that dependency can be removed.

HID Usage Tables (HUT)

As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

  let gd_x = GenericDesktop::X;
  let usage_page = gd_x.usage_page();
  assert!(matches!(usage_page, UsagePage::GenericDesktop));
  
Or the more likely need: convert from a numeric page/id tuple to a named usage.
  let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
  println!("Usage is {}", usage.name());
  
90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple.

hidreport - Report Descriptor parsing

The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are HID report descriptor read from the device (or sysfs) we can do this:

  let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
  
I'm not going to copy/paste the code to run through this report descriptor but suffice to day it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
   let input_report_bytes = read_from_device();
   let report = rdesc.find_input_report(&input_report_bytes).unwrap();
   let field = report.fields().first().unwrap();
   match field {
       Field::Variable(var) => {
          let val: u32 = var.extract(&input_report_bytes).unwrap().into();
          println!("Field {:?} is of value {}", field, val);
       },
       _ => {}
   }
  
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.

hid-recorder

The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already: Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools, and now we have the third version written in Rust. It has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output is below and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x02,                    // Usage (Mouse)                             2
# 0xa1, 0x01,                    // Collection (Application)                  4
# 0x05, 0x01,                    //   Usage Page (Generic Desktop)            6
# 0x09, 0x02,                    //   Usage (Mouse)                           8
# 0xa1, 0x02,                    //   Collection (Logical)                    10
# 0x85, 0x1a,                    //     Report ID (26)                        12
# 0x09, 0x01,                    //     Usage (Pointer)                       14
# 0xa1, 0x00,                    //     Collection (Physical)                 16
# 0x05, 0x09,                    //       Usage Page (Button)                 18
# 0x19, 0x01,                    //       UsageMinimum (1)                    20
# 0x29, 0x05,                    //       UsageMaximum (5)                    22
# 0x95, 0x05,                    //       Report Count (5)                    24
# 0x75, 0x01,                    //       Report Size (1)                     26
... omitted for brevity
# 0x75, 0x01,                    //     Report Size (1)                       213
# 0xb1, 0x02,                    //     Feature (Data,Var,Abs)                215
# 0x75, 0x03,                    //     Report Size (3)                       217
# 0xb1, 0x01,                    //     Feature (Cnst,Arr,Abs)                219
# 0xc0,                          //   End Collection                          221
# 0xc0,                          // End Collection                            222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
#    Report size: 80 bits
#  |   Bit:    8       | Usage: 0009/0001: Button / Button 1                          | Logical Range:     0..=1     |
#  |   Bit:    9       | Usage: 0009/0002: Button / Button 2                          | Logical Range:     0..=1     |
#  |   Bit:   10       | Usage: 0009/0003: Button / Button 3                          | Logical Range:     0..=1     |
#  |   Bit:   11       | Usage: 0009/0004: Button / Button 4                          | Logical Range:     0..=1     |
#  |   Bit:   12       | Usage: 0009/0005: Button / Button 5                          | Logical Range:     0..=1     |
#  |   Bits:  13..=15  | ######### Padding                                            |
#  |   Bits:  16..=31  | Usage: 0001/0030: Generic Desktop / X                        | Logical Range: -32767..=32767 |
#  |   Bits:  32..=47  | Usage: 0001/0031: Generic Desktop / Y                        | Logical Range: -32767..=32767 |
#  |   Bits:  48..=63  | Usage: 0001/0038: Generic Desktop / Wheel                    | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
#  |   Bits:  64..=79  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Input Report -------
# Report ID: 31
#    Report size: 24 bits
#  |   Bits:   8..=23  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Feature Report -------
# Report ID: 18
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  12..=15  | ######### Padding                                            |
# ------- Feature Report -------
# Report ID: 23
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bit:   12       | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range:     0..=1     | Physical Range:     0..=0     |
#  |   Bits:  13..=15  | ######### Padding                                            |
##############################################################################
# Recorded events below in format:
# E: .  [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
#                Button 1:     0 | Button 2:     0 | Button 3:     0 | Button 4:     0 | Button 5:     0 | X:     5 | Y:     0 |
#                Wheel:     0 |
#                AC Pan:     0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
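To make the field extraction concrete, here is the recorded event above decoded by hand, assuming the Report ID 26 layout printed earlier (X at bits 16..=31, little-endian, signed). This is illustrative bit-fiddling, not the hidreport API:

```rust
// Decode X and Y from the raw report bytes in the "E:" line above,
// using the bit offsets from the printed report description.
// Bits 0..=7 are the Report ID, so bits 16..=31 start at byte 2.

fn extract_i16_le(report: &[u8], byte_offset: usize) -> i16 {
    i16::from_le_bytes([report[byte_offset], report[byte_offset + 1]])
}

fn main() {
    // The 10 report bytes recorded above.
    let report = [0x1au8, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
    assert_eq!(report[0], 26); // Report ID
    let x = extract_i16_le(&report, 2); // bits 16..=31: Generic Desktop / X
    let y = extract_i16_le(&report, 4); // bits 32..=47: Generic Desktop / Y
    assert_eq!((x, y), (5, 0)); // matches the "X: 5 | Y: 0" hid-recorder prints
}
```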
  
Posted Tue Nov 19 01:54:00 2024 Tags:

In this weekend’s edition of ‘Bram gets nerd sniped by something ridiculous so makes a blog post about it to make it somebody else’s problem’, Mark Rober said something about ‘a lava lamp made out of real lava’. Unfortunately he just poured lava on a regular lava lamp to destroy it, but this does raise the question of whether you could have a real lava lamp which uses a molten salt instead of water.

First the requirements. The lamp is made of three substances: the lamp itself, the ‘liquid’ inside, and the ‘solid’ inside. The lamp must be transparent and remain solid across the range of temperatures used. The ‘liquid’ must be solid at room temperature, become liquid at a high but not too high temperature, and be transparent in its liquid phase. It should also be opaque in its solid phase to give a cool reveal of what the thing does as it heats up, but that property is hard to avoid anyway. The ‘solid’ should have a melting point higher than the ‘liquid’ but not so high that it softens the lamp, and should be opaque. The density of the ‘solid’ should be just barely below that of the ‘liquid’ in its melted form and just barely above in its solid form to give it that distinctive lava lamp buoyancy effect. The ‘solid’ and ‘liquid’ should not react with each other, stick to the lamp, or decompose over time.

Thanks for reading Bram’s Thoughts! Subscribe for free to receive new posts and support my work.

That was a lot of requirements, but it does seem to be possible to meet them. The choice for the lamp is obvious: Borosilicate glass. That’s physically strong, transparent, can withstand big temperature changes (due to low thermal expansion) and is chemically inert. All the same reasons why it’s ideal for cookware. It doesn’t get soft until over 800C, so the melting points of the other materials should be well below that.

For the ‘liquid’ there also turns out to only be one real option: Zinc Chloride. That’s transparent and has a melting point of 290C and a density of 2.9 (it’s also opaque at room temperature). The other transparent salts aren’t dense enough.

For the ‘solid’ there once again only seems to be one option: Boron Trioxide. That has a melting point of 450C and a density of 2.46. Every other oxide has a density which is way too high, but this one goes a bit too far in the other direction. It’s much easier to get the densities closer together by mixing the Boron Trioxide with something heavy than the Zinc Chloride with something light, so some Lead(II) oxide can be mixed in. That has a density of 9.53, so not much of it is needed, and a melting point of 888C, so the combined melting point will still be completely reasonable. (Due to eutectic-type effects it might barely be higher at all.) It should also add some color, possibly multiple ones because the colors formed depend on how it cools. Bismuth(III) oxide should also work and may be a bit more colorful.
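As a sanity check on ‘not much of it is needed’, here is a back-of-the-envelope calculation under the crude assumption of ideal volume-additive mixing (real oxide melts will deviate); the 2.85 target density is my own pick for ‘just barely below’ molten Zinc Chloride’s 2.9:

```rust
// How much PbO (density 9.53) must be mixed into B2O3 (density 2.46)
// to reach a target density, assuming volumes add ideally:
//   1/rho_mix = w/rho_heavy + (1 - w)/rho_light, solved for mass fraction w.

fn mixture_mass_fraction(rho_heavy: f64, rho_light: f64, rho_target: f64) -> f64 {
    (1.0 / rho_target - 1.0 / rho_light) / (1.0 / rho_heavy - 1.0 / rho_light)
}

fn main() {
    let w = mixture_mass_fraction(9.53, 2.46, 2.85);
    // Comes out around 18% PbO by mass, so "not much of it is needed" checks out.
    println!("PbO mass fraction: {:.1}%", w * 100.0);
    assert!(w > 0.15 && w < 0.22);
}
```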

I’m crossing my fingers a bit on these things not reacting but given that they’re glasses and salts it seems reasonable. The glasses may have a bit of a tendency to stick to each other. Hopefully not so much because one is a solid at these temperatures and the other is a liquid, but it’s probably a good idea to coat the top and bottom of the insides of the lamp with Silicon and to use an overall shape where the pieces inside never come close to the walls, in particular having an inverted cone shape at the bottom and a similar tapering at the top. The whole lamp should also be sealed because oxygen and water might react with the contents at the high temperatures reached, and there should be an argon bubble at the top because there is some expansion and contraction going on. Those same concerns apply to regular lava lamps, which explains a lot about how they’re shaped.

Anyone who wants to should feel free to try this build; you don’t need any more permission from me. I’d like to see it happen and don’t have the time to spend on building it myself.

Thanks for reading Bram’s Thoughts! Subscribe for free to receive new posts and support my work.

Posted Sun Nov 17 04:53:55 2024 Tags:

Let’s say you happen to need to color code something and want RGB values corresponding to all the common color names. Let’s further say that you’re an obsessive maniac and would like to avoid losing your sanity studying color theory just for the sake of this one silly task. How do you do it? Easy, leverage the work I already did for you. Here’s the chart, followed by caveats and methodology:

First the caveats: The level of saturation of these colors varies a lot mostly because sRGB sucks and your monitor can only display faded versions of some of them. (For colorblind accessibility it might be a good idea to use slightly lower saturation.) A luminance high enough to make yellow have decent saturation washes out other things so this was set to a consistent level which is a reasonable compromise. (This isn’t the fault of sRGB, it’s a limitation of human eyes.) Purple hues at angles 300 and 320 are both things which my eyes accept as the single ideal of Purple and don’t realize are two different things until I see them next to each other. The value given is midway between. Reasonable descriptions of them are ‘Royal Purple’ and ‘Common Purple’. They have an analogous relationship to the one between Blue and Cyan.

The methodology behind this first has to answer the question ‘What is a color?’ For the purposes of this exercise we’ll just pretend that hues are colors. The next question is why particular positions in the hue continuum count as colors. Hues are a twisting road. In particular places the road bends, making a gradient crossing over it look not like a straight line. The places where those bends happen we call colors. Which bends get a name is dependent on where you set the threshold and cultural factors. The exact point where the bend happens is also hard to define exactly. I located them by the highly scientific process of picking them out with my own two eyes.

There’s a standard statement of what the common color words are in English to which I’m adding Cyan and Pink. Cyan is a proper name for what’s usually called ‘Light Blue’, a name which makes no sense because both Cyan and Blue can appear at any amount of luminance. It may be that Cyan is denied a proper common name because our displays can barely show it. The biggest limitation of sRGB by far is that it can only display Cyan with poor saturation. Pink I put in both because it’s a primary color (as is Cyan) and because it’s a very common color word in English, mostly denied its proper place out of an absurd bit of misogyny and homophobia. It’s especially funny that in printing Pink is euphemized as ‘Magenta’ even though the shade which is used is a light one and the common usage of ‘Magenta’ is to refer to a darker shade.

One important thing to note is that the primary colors, which have sRGB values set to 00 and FF, are NOT on ideal color hues. Those colors correspond to the most saturated things your display can show, which is important, but the positioning of RGB was selected first and foremost for the primaries to be 120 degrees from each other, to maximize how much color display could be done with only three phosphors. They happen to be very close to the true ideals but not exactly.

To pick out exact shades I used an OKHSL color picker with saturation 100% and light and dark luminance at 65% and 43%. There are still a few artifacts of OKHSL, in particular its hues bend a tiny bit in the darker shades. To compensate I picked out color angles which are good at both high and low luminance, mostly resulting in Cyan being shifted over because in darker shades Teal is elbowing it out of the way. One thing OKHSL does NOT do is maintain the property that two colors which are opposite each other in space are true opposites, which is annoying because oppositeness is one of the most objective things you can say about colors. (That could probably be improved by having brightness correction be done by cubing a single value instead of three values separately but I personally am not going to put together such a proposal.)

Annoyingly the human perceptual system doesn’t see fit to put color angles with canonical names opposite each other, instead placing them roughly evenly but with seemingly random noise added. This of course creates problems for color wheels, which want both to show what colors are opposites and what the color names are.

Posted Sun Nov 10 07:07:22 2024 Tags:
Questioning a puzzling claim about mass surveillance. #attackers #governments #corporations #surveillance #cryptowars
Posted Mon Oct 28 14:25:09 2024 Tags:

If you want to know how bad your computer display is, go here, select sRGB for the gamut, OKLab for the geometric color coordinate space, and Color for the spectral colors, and rotate it around. You’ll get a much better sense of what’s going on rotating it in 3D yourself, but I’ll do some explaining. Here’s a screenshot showing just how much of the color space your monitor is missing:

The colored shape shows the extremes of what your monitor can display. It’s warped to reflect the cognitive distance between colors, so the distances in space reflect the apparent distance between the colors in your brain. Ideally that shape would fill the entire basket, which represents all the colors your eyes can perceive. You might notice that it comes nowhere close. It’s a stick going from black at the bottom to white at the top, with just enough around it that you can get colors, but the saturation is poor.

Thanks for reading Bram’s Thoughts! Subscribe for free to receive new posts and support my work.

The biggest chunks missing from this are that there’s very little bright blue and dark cyan. This may be why people mischaracterize cyan as ‘light blue’. Our display technologies are literally incapable of displaying a highly saturated light blue or a highly saturated dark cyan. It’s likely that most of the paintings from Picasso’s blue period can’t be displayed properly on a monitor, and that he was going with blues not as a gimmick but because it’s literally half the human cognitive space. If you have the ability to make a display or physical object without the standard restrictions, go with bright blue or dark cyan. Even better, contrast them with each other and break everybody’s brains.

Sadly this situation is basically unfixable. The Rec2020 standard covers much more of the color space, but you can’t simply display sRGB values as Rec2020 values. That will result in more intense colors, but because the original inputs weren’t designed for it the effect will be cartoony and weird. You can simply display the correct values specified by sRGB, but that will waste the potential of the display. Content recorded for Rec2020 which specified that in its format would display very well, but that has a chicken and egg problem, and displaying input recorded for Rec2020 on a legacy sRGB display is even worse than going the other way around. Maybe about the best you can do is have a Rec2020 display which applies a superlinear saturation filter to sRGB input so low saturations are true but ‘fully saturated’ values look more intense.

This is an example of how modern televisions do a huge amount of processing of their input before displaying it, and it’s extremely difficult to disentangle how good the physical display is from the quality of the processing software. Another example of that is in the display of gradients. A 10 bit color display will naturally display gradients much better than an 8 bit color display, but an 8 bit color display can dither the errors well enough to be nearly imperceptible. The problem then is that this causes flicker due to the dithering changing between frames. There are ways of fixing this by keeping information between frames, but I don’t think there’s an open source implementation of this for televisions to use. One has to assume that many if not nearly all of the proprietary ones do it.
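The dithering idea can be sketched in a few lines: quantize a 16-bit gradient down to 8 bits while carrying the rounding error forward to the next pixel, a 1-D simplification of the 2-D error diffusion a real display pipeline would do:

```rust
// Quantize a row of 16-bit values to 8 bits with error diffusion,
// so the average brightness stays true even with only 256 levels.

fn dither_to_8bit(row: &[u16]) -> Vec<u8> {
    let mut err = 0.0f64;
    row.iter()
        .map(|&v| {
            let want = v as f64 / 257.0 + err; // 16-bit -> 8-bit scale (65535/255)
            let got = want.round().clamp(0.0, 255.0);
            err = want - got; // carry the rounding error to the next pixel
            got as u8
        })
        .collect()
}

fn main() {
    // A shallow dark gradient: exactly where plain truncation shows banding.
    let row: Vec<u16> = (0..1000).map(|i| 2000 + i / 2).collect();
    let out = dither_to_8bit(&row);
    let mean_in: f64 = row.iter().map(|&v| v as f64 / 257.0).sum::<f64>() / 1000.0;
    let mean_out: f64 = out.iter().map(|&v| v as f64).sum::<f64>() / 1000.0;
    // The dithered output preserves the mean almost exactly.
    assert!((mean_in - mean_out).abs() < 0.01);
}
```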

Speaking of which, this is a problem with how software in general handles color precision. It’s true that 8 bits is plenty for display, but like with audio you should keep all intermediate representations at much greater precision and only smash them down when making the final output. Ideally operating systems would pretend that the final display had 16 bit color and fix it up on final display, or even in the monitor. Lossy video compression in particular inexplicably gives 8 bit color output, resulting in truly awful dark gradients. The standard Python image libraries don’t even have an option for higher color precision, resulting in terrible gradients. This should be completely fixable.

Popping back up the stack I’d like to fantasize about a data format for display technology which supports the entire range of human perceptible colors. This would encode color as three values: x, y, and luma. x would go between 0 and 2 with y between 0 and 1. It’s a little hard to describe what exactly these values mean, but (0, 0) would be red, (1, 0) yellow, (2, 0) green, (2, 1) cyan, (1, 1) blue, and (0, 1) pink. The outer edge goes around the color wheel keeping opposite colors opposite and doing an okay job of corresponding with cognitive space even in raw form. You could make an approximate rendering of this in sRGB as a smushed color wheel but by definition the outer edge of that would be horrendously faded compared to how it should look. Luminance should work as it implicitly does in RGB: Luminance 0 is exactly black and x and y have no effect. As it goes up the cognitive width which x and y represent increases up until the midway point, then it shrinks again until it gets to luminance 1 which is exactly white and x and y again have no effect. This shrinking at the top is to reflect how real displays work. If you want to get much better bright colors you can use the luminance of 1/2 as white at the expense of everything being darker. Many movies do this, which makes them look great when covering a whole display but dark when open in a window next to true white on a monitor.
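The outer edge of this hypothetical format can be sketched directly from the six anchor colors listed: walking the hue wheel traces the perimeter of the [0,2]×[0,1] rectangle, and opposite hues land at points that reflect through the center (1, 0.5). This is my own illustrative interpolation, not a spec:

```rust
// Map a hue in [0, 1) to an (x, y) point on the outer edge of the
// proposed format, interpolating linearly between the six anchors.

fn edge_xy(hue: f64) -> (f64, f64) {
    let anchors = [
        (0.0, 0.0), // red
        (1.0, 0.0), // yellow
        (2.0, 0.0), // green
        (2.0, 1.0), // cyan
        (1.0, 1.0), // blue
        (0.0, 1.0), // pink
    ];
    let t = hue.rem_euclid(1.0) * 6.0;
    let i = t.floor() as usize % 6;
    let f = t - t.floor();
    let (x0, y0) = anchors[i];
    let (x1, y1) = anchors[(i + 1) % 6];
    (x0 + (x1 - x0) * f, y0 + (y1 - y0) * f)
}

fn main() {
    assert_eq!(edge_xy(0.0), (0.0, 0.0)); // red
    assert_eq!(edge_xy(0.5), (2.0, 1.0)); // cyan, opposite red
    // Opposite hues always reflect through the center (1, 0.5):
    for k in 0..12 {
        let h = k as f64 / 12.0;
        let (x, y) = edge_xy(h);
        let (xo, yo) = edge_xy(h + 0.5);
        assert!((x + xo - 2.0).abs() < 1e-9 && (y + yo - 1.0).abs() < 1e-9);
    }
}
```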

Deep diving on color theory of course has given me multiple ideas for hobby software projects, most of which I’ll probably never get around to because many things don’t pan out but mostly because I have limited hobby coding time so things get triaged heavily. If anybody wants to beat me to the punch on these please go ahead:

  • A color cycling utility which instead of rotating the color space reflects it. Usually when color cycling there’s one position which is true, then it rotates until all hues are changed to their opposite, then it rotates back around the other way. This would instead at all times have two opposite hues which are true and two opposite colors which are flipped and cycle which those are. Ideally this would be implemented by converting to okHSL, changing H to (d-H) % 1 and converting back again. As you change d it will cycle. You could trivially change it to a very nice traditional color cycler using (d+H) % 1.

  • A color cycling utility which allows user controlled real time three dimensional color rotation. If you take an input image, convert it into the okLab color space and shrink its color space to fit in a sphere centered at (0.76, 0, 0) with radius 0.125 then this can be done without hitting any values which are unreachable in sRGB. The interface for rotating it should be similar to this one. In the past when I’ve tried doing these sorts of three dimensional color rotations they’ve looked horrible when white-black gets off axis. Hopefully that’s because the cognitive distance in that direction is so much greater than in the color directions and keeping everything uniform will fix it, but it may be that getting white and black inverted at all fundamentally looks weird.
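The hue transform for the first of these ideas is small enough to sketch: (d − H) mod 1 always has two fixed hues half a circle apart, and applying it twice restores every hue, which is what makes it a reflection rather than a rotation:

```rust
// Reflecting color cycler: replace hue h with (d - h) mod 1.
// Varying d cycles which pair of opposite hues stays true.

fn reflect_hue(h: f64, d: f64) -> f64 {
    (d - h).rem_euclid(1.0)
}

fn main() {
    let d = 0.3;
    // The two fixed hues of the reflection sit at d/2 and d/2 + 1/2:
    assert!((reflect_hue(0.15, d) - 0.15).abs() < 1e-9);
    assert!((reflect_hue(0.65, d) - 0.65).abs() < 1e-9);
    // Reflection is an involution: applying it twice restores every hue.
    for k in 0..10 {
        let h = k as f64 / 10.0;
        assert!((reflect_hue(reflect_hue(h, d), d) - h).abs() < 1e-9);
    }
}
```

The traditional rotating cycler is the same shape with `(d + h).rem_euclid(1.0)` instead.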

Thanks for reading Bram’s Thoughts! Subscribe for free to receive new posts and support my work.

Posted Sun Oct 27 23:46:43 2024 Tags:

TLDR: if you know what EVIOCREVOKE does, the same now works for hidraw devices via HIDIOCREVOKE.

The HID standard is the most common hardware protocol for input devices. In the Linux kernel HID is typically translated to the evdev protocol which is what libinput and all Xorg input drivers use. evdev is the kernel's input API and used for all devices, not just HID ones.

evdev is mostly compatible with HID but there are quite a few niche cases where they differ a fair bit. And some cases where evdev doesn't work well because of different assumptions, e.g. it's near-impossible to correctly express a device with 40 generic buttons (as opposed to named buttons like "left", "right", ...[0]). In particular for gaming devices it's quite common to access the HID device directly via the /dev/hidraw nodes. And of course for configuration of devices accessing the hidraw node is a must too (see Solaar, openrazer, libratbag, etc.). Alas, /dev/hidraw nodes are only accessible as root - right now applications work around this by either "run as root" or shipping udev rules tagging the device with uaccess.

evdev too can only be accessed as root (or the input group) but many many moons ago when dinosaurs still roamed the earth (version 3.12 to be precise), David Rheinsberg merged the EVIOCREVOKE ioctl. When called the file descriptor immediately becomes invalid, any further reads/writes will fail with ENODEV. This is a cornerstone for systemd-logind: it hands out a file descriptor via DBus to Xorg or the Wayland compositor but keeps a copy. On VT switch it calls the ioctl, thus preventing any events from reaching said X server/compositor. In turn this means that a) X no longer needs to run as root[1] since it can get input devices from logind and b) X loses access to those input devices at logind's leisure so we don't have to worry about leaking passwords.

Fast forward to 2024 and kernel 6.12 now gained the HIDIOCREVOKE for /dev/hidraw nodes. The corresponding logind support has also been merged. The principle is the same: logind can hand out an fd to a hidraw node and can revoke it at will, so we don't have to worry about data leakage to processes that should no longer receive events. This is the first of many steps towards more general HID support in userspace. It's not immediately usable since logind will only hand out those fds to the session leader (read: compositor or Xorg), so if you as an application want that fd you need to convince your display server to give it to you. For that we may have something like the inputfd Wayland protocol (or maybe a portal, but right now it seems a Wayland protocol is more likely). But that aside, let's hooray nonetheless. One step down, many more to go.

One of the other side-effects of this is that logind now has an fd to any device opened by a user-space process. With HID-BPF this means we can eventually "firewall" these devices from malicious applications: we could e.g. allow libratbag to configure your mouse's buttons but block any attempts to upload a new firmware. This is very much an idea for now, there's a lot of code that needs to be written to get there. But getting there we can now, so full of optimism we go[2].

[0] to illustrate: the button that goes back in your browser is actually evdev's BTN_SIDE and BTN_BACK is ... just another button assigned to nothing particular by default.
[1] and c) I have to care less about X server CVEs.
[2] mind you, optimism is just another word for naïveté

Posted Fri Oct 4 00:27:00 2024 Tags:

Matt Parker asks for an ‘ideal’ jigsaw with two solutions:

The criteria seem to be that the jigsaw should be (a) 25 pieces in a 5x5 grid (b) each edge type occurs exactly twice (c) there are exactly two solutions (d) in the two different solutions the maximum number of pieces are reoriented upside down. He’s done a search using a computer but that turns out to be unnecessary.

There’s a very simple mathematical trick which can be applied here. If the rows and columns are numbered 1-5, first swap columns 2 and 4, then swap rows 2 and 4, like this:

Then you flip it all upside down, and presto, all the criteria satisfied perfectly. Except… not quite. The problem is that this doesn’t result in two solutions, it results in four: you can independently decide whether to swap the rows and the columns. The most elegant fix for this seems to be to change the shapes of the pieces a bit so that instead of each piece just sharing an edge with the ones in the left, right, up, and down directions they also share with the ones in the upper right and lower left. This not only enforces that there are exactly two solutions, it also preserves the property that each edge type occurs exactly twice, even on the diagonal edges.
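The swap-then-flip trick is easy to verify mechanically. Modelling the flip as a 180° turn of the grid (one reasonable reading of ‘flip it all upside down’), the transform applied twice gives back the original arrangement, which is why solutions come in pairs:

```rust
// Number the 25 pieces, swap rows 2 and 4 and columns 2 and 4 (1-indexed),
// then turn the whole grid 180 degrees. Check the transform is an involution.

fn transform(grid: [[u8; 5]; 5]) -> [[u8; 5]; 5] {
    // swap indices 1 and 3 (i.e. rows/columns 2 and 4 in 1-indexed terms)
    let swap = |i: usize| match i { 1 => 3, 3 => 1, i => i };
    let mut out = [[0u8; 5]; 5];
    for r in 0..5 {
        for c in 0..5 {
            out[4 - swap(r)][4 - swap(c)] = grid[r][c];
        }
    }
    out
}

fn main() {
    let mut grid = [[0u8; 5]; 5];
    for r in 0..5 {
        for c in 0..5 {
            grid[r][c] = (r * 5 + c) as u8;
        }
    }
    let once = transform(grid);
    assert_ne!(once, grid); // a genuinely different arrangement...
    assert_eq!(transform(once), grid); // ...and applying it twice restores it
}
```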

Posted Sun Sep 15 01:43:46 2024 Tags:

For roughly the last two years I’ve been working on bringing gaming to Chia. This will have:

  • Playing games of skill for real money (specifically XCH)

  • Real time play, without waiting for the blockchain to make every move

  • Enforcement of the game rules so no cheating

  • No casino so no fee paid to a casino. The only extra money you lose out on is transaction fees which are currently near zero.

  • Legal in many more jurisdictions than casino gaming. There are meaningful legal distinctions between games of skill and games of chance, and between games played with and without an intermediary. These are intentional parts of the law and not a loophole.

  • Whoever you’re playing against won’t be able to skip out on their debts. Money won or lost is transferred immediately.

Those are, to put it mildly, huge features, and it’s been a big engineering lift to make them possible. Current status is that we’re finishing up debugging the core of it (you can follow ongoing development here) and will soon start building it out into a real application, which will be normal software development instead of a science fair project, and is expected to ship on a timescale of months.

At a high level the way it works is that when two people want to play a session they use an offer and acceptance to set up a state channel at the beginning, which takes about a minute for the transaction to go through on chain. (State channels are very similar to the payment channels used in Bitcoin but can do more things, like support gaming; Bitcoin can’t support gaming at all, even in a state channel.) Then they play over that state channel and when they’re done they close out the session with another transaction which pays out the amount they had left at the end. If there’s a dispute in the middle (which there’s no reason for unless one player tries to misbehave or has a serious technical problem) then whatever games are pending get played out on chain. Poker is a fairly good fit for this because a session is many short games instead of one long game.

The restrictions of this medium are that games must be turn based, with very few moves, for exactly two players. To get an idea of why more than two players is a problem see envy-free cake cutting. There need to be as few turns as possible to make the fallback to playing on chain not excessively slow. What exactly is possible in turn based games is a little hard to convey, but there will be a suite of (fun and addictive!) games shipped with it initially which does a good job of showing what things are possible and how to implement them. Randomness can be done using commit and reveal, but supporting card replacement as happens in Poker is problematic. The vast bulk of the academic work on supporting ‘Mental Poker’ is on handling that one not very important feature. Instead we’ll start with a Poker variant which uses an infinite deck because that makes the problem go away completely. (If you search for ‘commit and reveal’ there are a lot of crypto projects claiming they’re doing something cutting edge and amazing and that it has to be done on chain. That isn’t true. It’s trivial and can be done over state channels.)
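A toy sketch of commit and reveal for a shared coin flip, with std's DefaultHasher standing in for the cryptographic hash a real protocol would need (a real version would also salt the commitments so small secrets can't be brute-forced):

```rust
// Commit-and-reveal between two players: each publishes a hash of a secret,
// then reveals the secret; the shared random bit is the XOR of both secrets.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn commit(secret: u64) -> u64 {
    // Stand-in for a cryptographic hash; NOT secure, illustration only.
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    h.finish()
}

fn main() {
    let (alice_secret, bob_secret) = (0xdead_beefu64, 0x1234_5678u64);

    // Phase 1: both publish commitments; neither can see the other's secret.
    let (alice_c, bob_c) = (commit(alice_secret), commit(bob_secret));

    // Phase 2: both reveal; each side checks the reveal matches the commitment.
    assert_eq!(commit(alice_secret), alice_c);
    assert_eq!(commit(bob_secret), bob_c);

    // The shared coin flip: neither player alone could bias it.
    let coin = (alice_secret ^ bob_secret) & 1;
    assert!(coin == 0 || coin == 1);
}
```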

Posted Fri Aug 16 00:14:07 2024 Tags:
You're making Clang angry. You wouldn't like Clang when it's angry. #compilers #optimization #bugs #timing #security #codescans
Posted Sat Aug 3 13:48:32 2024 Tags:

Over the last months I've started looking into a few of the papercuts that affect graphics tablet users in GNOME. Now that most of those fixes have gone in, let's see what has happened:

Calibration fixes and improvements (GNOME 47)

The calibration code, a descendant of the old xinput_calibrator tool, was in a pretty rough shape and didn't work particularly well. That's now fixed and I've made the calibrator a little bit easier to use too. Previously the timeout was quite short, which made calibration quite stressful; that timeout is now per target rather than for completing the whole calibration process. Likewise, the calibration targets now accept larger variations - something probably not needed for real use-cases (you want the calibration to be exact) but it certainly makes testing easier since clicking near the target is good enough.

The other feature added was to allow calibration even when the tablet is manually mapped to a monitor. Previously this only worked in the "auto" configuration, but some tablets don't correctly map to the right screen and thus lost the ability to calibrate. That's fixed now too.

A picture says a thousand words, except in this case where the screenshot provides no value whatsoever. But here you have it anyway.

Generic tablet fallback (GNOME 47)

Traditionally, GNOME would rely on libwacom to get some information about tablets so it could present users with the right configuration options. The drawback was that a tablet not recognised by libwacom didn't exist in GNOME Settings, and there was no immediately obvious way of fixing this: the panel either didn't show up or (with multiple tablets) the unrecognised one was missing. The tablet worked (because the kernel and libinput don't require libwacom) but it just couldn't be configured.

libwacom 2.11 changed the default fallback tablet to be a built-in one since this is now the most common unsupported tablet we see. Together with the new fallback handling in GNOME Settings this means that any unsupported tablet is treated as a generic built-in tablet and provides the basic configuration options for those (Map to Monitor, Calibrate, assigning stylus buttons). The tablet should still be added to libwacom but at least it's no longer a requirement for configuration. Plus there's now a link to the GNOME Help to explain things. Below is a screenshot of how this looks (after modifying my libwacom to no longer recognise the tablet, poor Intuos).

Monitor mapping names (GNOME 47)

For historical reasons, the name of a display in the GNOME Settings Display configuration differed from the one used by the Wacom panel. Not ideal, and that bit is now fixed: the Wacom panel lists the name of the monitor, plus the connector name if multiple monitors share the same name. You get the best value out of this if your monitor vendor uses short names. (This is not a purchase recommendation.)
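The disambiguation rule is straightforward: only append the connector when the name alone is ambiguous. A rough sketch of that logic (not the actual panel code; monitor names and connectors are made up):

```python
from collections import Counter

def monitor_labels(monitors):
    """monitors: list of (name, connector) tuples -> display labels.

    Append the connector name only when two or more monitors share
    the same product name.
    """
    counts = Counter(name for name, _ in monitors)
    return [
        f"{name} ({connector})" if counts[name] > 1 else name
        for name, connector in monitors
    ]

labels = monitor_labels([("Dell U2720Q", "DP-1"),
                         ("Dell U2720Q", "DP-2"),
                         ("BenQ PD2700U", "HDMI-1")])
print(labels)
# → ['Dell U2720Q (DP-1)', 'Dell U2720Q (DP-2)', 'BenQ PD2700U']
```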

Highlighted SVGs (GNOME 46)

If you're an avid tablet user, you may have multiple stylus tools - but it's also likely that several of them are the same type, which makes differentiating them in the GUI hard. Which is why they're highlighted now: if you bring the tool into proximity, the matching image is highlighted to make it easier to know which stylus you're about to configure. Oh, and in the process we added a new SVG for AES styli too, to make the picture look more like the actual physical tool. The <blink> tag may no longer be cool, but at least we can disco our way through the stylus configuration now.

More Pressure Curves (GNOME 46)

GNOME Settings has historically presented a slider from "Soft" to "Firm" to adjust the feel of the tablet tip (which influences the pressure values sent to the application). Behind the scenes this was converted into a set of 7 fixed curves, but thanks to an old mutter bug those curves only covered a small part of the possible range. This is now fixed, so you can really go from pencil-hard to jelly-soft, and the slider now controls an almost-continuous range instead of just 7 curves. Behold, a picture of slidery goodness:
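The almost-continuous behaviour can be sketched as interpolation rather than preset selection: instead of snapping the slider to one of 7 stored curves, every slider position produces its own curve. The control-point values below are illustrative, not mutter's actual ones; the sketch assumes a pressure curve described by two Bezier control points, interpolated between a "soft" and a "firm" extreme:

```python
# Illustrative extremes of the pressure curve, as (x1, y1, x2, y2)
# Bezier control points. These values are made up for the sketch.
SOFT = (0.0, 0.75, 0.25, 1.0)
FIRM = (0.75, 0.0, 1.0, 0.25)

def pressure_curve(slider):
    """Map a slider value in [0.0, 1.0] to interpolated control points.

    slider = 0.0 gives the softest curve, 1.0 the firmest, and every
    value in between gets its own curve instead of a nearby preset.
    """
    return tuple(s + (f - s) * slider for s, f in zip(SOFT, FIRM))

print(pressure_curve(0.0))  # softest extreme
print(pressure_curve(0.5))  # halfway between soft and firm
```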

Miscellaneous fixes

And of course a bunch of miscellaneous fixes. Among the things I quickly found: support for Alt in the tablet pad keymappings, a fix for erroneous backwards movement when wrapping around on the ring, a long-standing stylus button mismatch, better stylus naming, and a rather odd fix for configuration issues when the eraser was the first tool ever brought into proximity.

There are a few more things in the pipe but I figured this is enough to write a blog post so I no longer have to remember to write a blog post about all this.

Posted Wed Jun 26 04:59:00 2024