The lower-post-volume people behind the software in Debian. (List of feeds.)

There are numerous videos reviewing 3D printer filaments to help you determine which is the best. I’ve spent way too much time watching these and poring over the data, and can quickly summarize all the information relevant to most people doing 3D printing: Use PLA. There, you’re finished. If you want or need more information, or are interested in running tests yourself, read on.

There are two big components of the ‘strength’ of a material: stiffness and toughness. Stiffness refers to how hard it is to bend (or stretch) while toughness refers to how much abuse it can absorb (bending, stretching, impact) before it actually breaks. These can be further broken down into subcategories, like whether the material snaps back after getting bent or is permanently deformed. An important thing to understand is that the measures used aren’t deep ethereal properties of a material, they’re benchmark numbers based on what happens if you run particular tests. This isn’t a problem with the tests, it’s an acknowledgement of how complex real materials are.
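
To make that concrete, here’s a minimal sketch (with made-up numbers, not data from any real filament) of how those two benchmark numbers typically fall out of a tensile test: stiffness as the slope of the initial, linear part of a stress-strain curve, and toughness as the area under the whole curve up to failure. The 1% strain cutoff for the “elastic” region is an arbitrary choice for the illustration.

  import numpy as np

  # Made-up stress-strain data from a hypothetical tensile test
  # (strain is dimensionless, stress is in MPa).
  strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.030, 0.040, 0.050])
  stress = np.array([0.0, 17.5, 35.0, 50.0, 58.0, 62.0, 63.0, 60.0])

  # Stiffness (Young's modulus): slope of the initial, linear part of the curve.
  elastic = strain <= 0.010
  youngs_modulus_mpa = np.polyfit(strain[elastic], stress[elastic], 1)[0]

  # Toughness: energy absorbed per unit volume up to failure, i.e. the area
  # under the whole stress-strain curve (MPa x strain works out to MJ/m^3).
  toughness = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))

  print(f"Young's modulus ~ {youngs_modulus_mpa / 1000:.1f} GPa")
  print(f"Toughness ~ {toughness:.2f} MJ/m^3")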


For the vast majority of 3D printing projects what you care about is stiffness rather than toughness. If your model is breaking then most of the time the solution is to engineer it to not bend that much in the first place. The disappointing thing is that PLA has very good stiffness, usually better even than the exotic filaments people like experimenting with. In principle proper annealing can get you better stiffness, but when doing that you wind up reinventing injection molding badly, and it turns out the best material for that is also PLA. The supposedly better benchmarks of PLA+ are universally tradeoffs where they get better toughness in exchange for worse stiffness. PLA is brittle, so it shatters on failure, and mixing it with any random gunk tends to make that tradeoff, but trading away stiffness isn’t what you actually want.

(If you happen to have an application where flexing without breaking or impact resistance is important you should consider TPU. The tradeoffs of different versions of that are complex and I don’t have any experience with it so can’t offer much detailed advice.)

Given all the above, plus PLA’s generally nontoxic and easy-to-print nature, it’s the go-to filament for the vast majority of 3D printing applications. But let’s say you need something ‘better’, or are trying to justify the ridiculous amounts of time you’ve spent researching this subject; what is there to use? The starting place is PLA’s weaknesses: It gets destroyed by sunlight, can’t handle exposure to many corrosive chemicals, and melts at such a low temperature that it can be destroyed in a hot car or a sauna. There are a lot of fancy filaments which do better on these benchmarks, but for the vast majority of things PLA isn’t quite good enough at, PETG would fit the bill. The problem with PETG is that it isn’t very stiff. But in principle adding carbon fiber fixes this problem. So, does it?

There are two components of stiffness for 3D printing: layer adhesion and bending modulus. Usually layer adhesion issues can be fixed by printing in the correct orientation, or sometimes printing in multiple pieces at appropriate orientations. One could argue that the answer ‘you can engineer around that’ is a cop-out, but in this case the effect is so extreme that it can’t be ignored. More on this below, but my test is of bending modulus.

Now that I’ve finished an overly long justification of why I’m doing bending modulus tests we can get to the tests themselves. You can get the models I used for the tests over here. The basic idea is to make a long thin bar in the material to be tested, hang a weight from the middle, and see how much it bends. Here are the results:

CarbonX PETG-CF is a great choice for a stronger material, especially if you want/need something lightweight and spindly. It’s considerably more expensive than PLA but cheaper and easier to print than fancier materials, and compared to PLA and especially PETG the effective cost is much less because you need less of it. The Flashforge PETG-CF (which is my stand-in ‘generic’ PETG-CF as it’s what turns up in an Amazon search) is a great solution if you want something with about the same price and characteristics as PLA but better able to handle high temperatures and sunlight. It’s so close to PLA that I’m suspicious that it’s actually just a mislabeled roll of PLA, but I haven’t tested that. I don’t know why the Bambu PETG-CF performed so badly. It’s possible it got damaged by moisture between when I got it and tested it, but I tried drying it thoroughly and that didn’t help.

Clearly not all carbon fiber filaments are the same and more thorough testing should be done with a setup less janky than mine. If anybody wants to use my models as a starting point for that please go ahead.
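
If you do, here’s roughly how a deflection measurement converts into a bending (flexural) modulus number, assuming the bar is simply supported at both ends with the weight hung at the midpoint. All the numbers below are placeholders for illustration, not the dimensions or loads from my actual test.

  # Rough flexural-modulus estimate from a center-loaded bar test.
  # Assumes a simply supported rectangular bar with the weight hung at the
  # midpoint; all values are placeholders, not real test data.

  span_mm = 150.0        # distance between the supports (L)
  width_mm = 10.0        # bar width (b)
  height_mm = 4.0        # bar thickness in the loading direction (h)
  load_kg = 1.0          # mass hung from the middle
  deflection_mm = 2.5    # measured sag at the midpoint

  force_n = load_kg * 9.81
  # Second moment of area of a rectangular cross-section.
  i_mm4 = width_mm * height_mm**3 / 12
  # Beam deflection for a center point load: delta = F * L^3 / (48 * E * I)
  e_mpa = force_n * span_mm**3 / (48 * deflection_mm * i_mm4)

  print(f"Estimated flexural modulus: {e_mpa:.0f} MPa ({e_mpa / 1000:.2f} GPa)")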

The big caveat here is that you can engineer around a bad bending modulus. The force needed to bend a beam goes up with the cube of its depth in the bending direction (and only linearly with its width), so unless something has very confined dimensions you can make it much stronger by making it chunkier. You can do it without using all that much more material by making I-beam like structures. Note that when 3D printing you can make enclosed areas with no problem, so the equivalent of an I-beam should have a square, triangular, or circular cross section with a hollow middle. The angle of printing is also of course very important.
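
As a rough illustration of how far that goes, here’s a quick comparison (with hypothetical dimensions) of a solid square bar against a hollow square tube that uses about the same amount of plastic; bending stiffness is proportional to the second moment of area I of the cross section.

  # Compare a solid square bar to a hollow square tube of similar material use.
  # For a solid square of side a, I = a^4 / 12; for a hollow square of outer
  # side A and wall thickness t, I = (A^4 - (A - 2t)^4) / 12.

  def solid_square(side):
      area = side * side
      i = side**4 / 12
      return area, i

  def hollow_square(outer, wall):
      inner = outer - 2 * wall
      area = outer**2 - inner**2
      i = (outer**4 - inner**4) / 12
      return area, i

  a_solid, i_solid = solid_square(10.0)            # 10 mm solid square bar
  a_hollow, i_hollow = hollow_square(20.0, 1.4)    # 20 mm tube, ~1.4 mm walls

  print(f"solid : area {a_solid:.0f} mm^2, I = {i_solid:.0f} mm^4")
  print(f"hollow: area {a_hollow:.0f} mm^2, I = {i_hollow:.0f} mm^4")
  print(f"stiffness ratio ~ {i_hollow / i_solid:.1f}x for about the same material")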

The conclusion is that if you want something more robust than PLA you can use generic PETG engineered to be very chunky, or PETG-CF with appropriate tradeoffs between price and strength for your application.

A safety warning: Be careful to ventilate your space thoroughly when printing carbon fiber filaments, and don’t shred or machine them after printing. Respirable carbon fiber fragments can have ill effects on lungs similar to asbestos, so you don’t want to be breathing them in. In my tests the amount of volatile organic compounds produced was small, but it’s a good idea to be careful.


Posted Sun Mar 23 21:16:26 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory; OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).

Many of us candidates attempted to get clarification over the last 46 hours, but OSI has not communicated clear answers in response to those requests. Based on these unclear responses, the best we can surmise is that OSI intends to modify the ballots cast by Affiliates and Members to remove any candidate who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and surprising change in requirements and deadlines.

So, I decided to sign a Board Agreement with OSI. Here is the PDF that I just submitted to the OSI. OSI did recommend DocuSign, but I emailed the signed agreement instead, since I refuse to use proprietary software for my FOSS volunteer work on moral and ethical grounds [0] (see my two keynotes (FOSDEM 2019, FOSDEM 2020), co-presented with Karen Sandler, on this subject for more info on that).

My running mate on the Shared Platform for OSI Reform, Richard Fontana, also signed a Board Agreement with OSI before the deadline.


[0] Chad Whitacre has made unfair criticism of my refusal to use DocuSign as part of the (apparently ongoing?) 2025 OSI Board election political campaign. I respond to his comment here in this footnote (& further discussion is welcome using the fediverse, AGPLv3-powered comment feature of my blog). I've put it in this footnote because Chad is not actually raising an issue about this blog post's primary content, but instead attempting to reopen the debate about Item 4 in the Shared Platform for OSI Reform. My response follows:

In addition to the two keynotes mentioned above, I propose these analogies that really are apt to this situation:

  • Imagine if the Board of The Nature Conservancy told Directors they would be required, if elected, to use a car service to attend Board meetings. It's easier, they argue, if everyone uses the same service; that way, we know you're on your way, and we pay a group rate anyway. Some candidates for open Board seats retort that's not environmentally sound, and insist, not that other Board members must stop using the car service, but just that Directors who so choose should be allowed to simply take public transit to the Board meeting, even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
  • Imagine if the Board of Friends of Trees made a decision that all paperwork for the organization be printed on non-recycled paper made from freshly cut tree wood pulp. That paper is easier to move around, they say — and it's easier to read what's printed because of its quality. Some candidates for open Board seats run on a platform that says Board members should be allowed to get their print-outs on 100% post-consumer recycled paper for Board meetings. These candidates don't insist that other Board members use the same paper, so, if these new Directors are seated, this will create extra work for staff because now they have to do two sets of print-outs to prep for Board meetings, and refill the machine with different paper in-between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, a “distracting waste of time”?
  • Imagine if the Board of the ASPCA made the decision that Directors must work through lunch, and the majority of the Directors vote that they'll get delivery from a restaurant that serves no vegan food whatsoever. Is it reasonable for this to be a non-negotiable requirement — such that the other Directors must work through lunch and just stay hungry? Or should they add a second restaurant option for the minority? After all, the ASPCA condemns animal cruelty but doesn't go so far as to demand that everyone also be a vegan. Would the meat-eating directors then say something like “opposing cruelty to animals could be so much more than merely being vegan” to these other Directors?
Posted Wed Mar 19 08:59:00 2025 Tags:

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on Monday 2025-03-17 at 10:00 US/Pacific. One hour after that, I and at least three other candidates received the following email:

Date: Mon, 17 Mar 2025 11:01:22 -0700
From: OSI Elections team <elections@opensource.org>
To: Bradley Kuhn <bkuhn@ebb.org>
Subject: TIME SENSITIVE: sign OSI board agreement
Message-ID: <civicrm_67d86372f1bb30.98322993@opensource.org>

Thanks for participating in the OSI community polls which are now closed. Your name was proposed by community members as a candidate for the OSI board of directors. Functioning of the board of directors is critically dependent on all the directors committing to collaborative practices.

For your name to be considered by the board as we compute and review the outcomes of the polls,you must sign the board agreement before Wednesday March 19, 2025 at 1700 UTC (check times in your timezone). You’ll receive another email with the link to the agreement.

TIME SENSITIVE AND IMPORTANT: this is a hard deadline.

Please return the signed agreement asap, don’t wait. 

Thanks

OSI Elections team

(The link email did arrive too, with a link to a proprietary service called DocuSign. Fontana downloaded the PDF out of DocuSign and it appears to match the document found here. This document includes a clause that Fontana and I explicitly indicated in our OSI Reform Platform should be rewritten.)

All the (non-incumbent) candidates were surprised by this. OSI told us during the mandatory orientation meetings (on WED 2025-02-19 & again on TUE 2025-02-25) that the Board Agreement needed to be signed only by the election winners who were seated as Directors. No one mentioned (before or after the election) that all candidates, regardless of whether they won or lost, needed to sign the agreement. I've also served on many other 501(c)(3) Boards, and I've never before been asked to sign anything official for service until I was formally offered the seat.

Can someone more familiar with the OSI election process explain this? Specifically, why are all candidates (even those who lose) required to sign the Board Agreement before election results are published? Can folks who ran before confirm for us that this seems to vary from procedures in past years? Please reply on the fediverse thread if you have information. Richard Fontana also reached out to OSI on their discussion board on the same matter.

Posted Mon Mar 17 20:26:00 2025 Tags:

Update 2025-03-21: This blog post is extremely long (if you're reading this, you must already know I'm terribly long-winded). I was in the middle of consolidating it with other posts to make a final, single “wrap up” post on the OSI elections when I was told that Linux Weekly News (LWN) had published an article written by Joe Brockmeier. As such, I've carefully left the text below as it stood at 2025-03-20 03:42 UTC, which I believe is the version that Brockmeier sourced for his story (only changes past the line “Original Post” have been HTML format fixes). (I hate as much as you do having to scour archive.org/web to find the right version.) Nevertheless, I wouldn't have otherwise left this here in its current form because it's a huge, real-time description that doesn't make the best historical reference record of these events. I used my blog as a campaigning tool (for reasons discussed below) before I knew how much interest there would ultimately be in the FOSS community about the 2025 OSI Board of Directors election. Since this was used as a source for the LWN article, keeping the original record easy to find is obviously important and folks shouldn't have to go to archive.org/web to find it. Nevertheless, if you're just digging into this story fresh, I don't really recommend reading the below. Instead, I suggest just reading Brockmeier's LWN article because he's a journalist and writes better and more concisely than me, he's unbiased, and the below is my (understandably) biased view as a candidate who lived through this problematic election.

Original Post

I recently announced that I was nominated for the Open Source Initiative (OSI) Board of Directors as an “Affiliate” candidate. I chose to run as an (admittedly) opposition candidate against the existing status quo, on a “ticket” with my colleague, Richard Fontana, who is running as an (opposition) “Member” candidate.

These elections are important; they matter with regard to the future of FOSS. OSI recently published the “Open Source Artificial Intelligence Definition” (OSAID). One of OSI's stated purposes for the OSAID is to convince the entire EU and other governments and policy agencies to adopt this Definition as official for all citizens. Those stakes aren't earth-shattering, but they are reasonably high. (You can read a blog post I wrote on the subject or Fontana's and my shared platform for more information about OSAID.)

I have worked and/or volunteered for nonprofits like OSI for years. I know it's difficult to get important work done — funding is always too limited. So, to be sure I'm not misquoted: no, I don't think the election is “rigged”. Every problem described herein can easily be attributed to innocent human error, and, as such, I don't think anyone at OSI has made an intentional plan to make the elections unfair. Nevertheless, these mistakes and irregularities (particularly the second one below) have led to an unfair 2025 OSI Directors Election. I call on the OSI to reopen the nominations for a few days, correct these problems, and then extend the voting time accordingly. I don't blame the OSI for these honest mistakes, but I do insist that they be corrected. This really does matter, since this isn't just a local club: OSI is an essential FOSS org that works worldwide and claims to have a consensus mandate for determining what is (or is not) “open source”. Thus (if the OSI intends to continue with these advisory elections), OSI's elections need the greatest integrity and legitimacy. Irregularities must be corrected and addressed to maintain the legitimacy of this important organization.

Regarding all these items below, I did raise all the concerns privately with the OSI staff before publicly listing them here. In every case, I gave OSI at least 20-30% of the entire election cycle to respond privately before discussing the problems publicly. (I have still received no direct response from the OSI on any of these issues.)

(Recap on) First Irregularity

The first irregularity was the miscommunication about the nomination deadline (as covered in the press). Instead of using the time zone of OSI's legal home (in California), or the standard FOSS community deadline of AoE (anywhere on earth) time, OSI surreptitiously chose UTC and failed to communicate that decision properly. According to my sources, only one of the 3(+) emails about the elections included the fully qualified datetime of the deadline. Everywhere else (including everywhere on OSI's website) published only the date, not the time. It was reasonable for nominators to assume the deadline was US/Pacific — particularly since the nomination form still worked after 23:59 UTC passed.

Second Irregularity

Due to that first irregularity, this second (and most egregious) irregularity is compounded even further. All year long, the OSI has communicated that, for 2025, elections are for two “Member” seats and one “Affiliate” seat. Only today (already 70% through the election cycle) did OSI (silently) correct this error. This change was made well after nominations had closed (in every TZ). By itself, the change in available seats after nominations closed makes the 2025 OSI elections unfair. Here's why: the Members and the Affiliates are two entirely different sets of electorates. Many candidates made complicated decisions about which seats to run for based on the number of seats available in each class. OSI is aware of that, too, because (a) we told them that during candidate orientation, and (b) Luke said so publicly in their blog post (and OSI directly responded to Luke in the press).

If we had known there were two Affiliate seats and just one Member seat, Debian (an OSI Affiliate) would have nominated Luke a week early for the Affiliate seat. Instead, Debian's leadership, Luke, Fontana, and I had a complex discussion in the final week of nominations on how best to run as a “ticket of three”. In that discussion, Debian leadership decided to nominate no one (instead of nominating Luke) precisely because I was already nominated on a platform that Debian supported, and Debian chose not to run a candidate against me for the (at the time, purported) one Affiliate seat available.

But this irregularity didn't just impact Debian, Fontana, Luke, and me. I was nominated by four different Affiliates. My primary pitch to ask them to nominate me was that there was just one Affiliate seat available. Thus, I told them, if they nominated someone else, that candidate would be effectively running against me. I'm quite sure at least one of those Affiliates would have wanted to nominate someone else if only OSI had told them the truth when it mattered: that Affiliates could easily elect both me and a different candidate for two available Affiliate seats. Meanwhile, who knows what other affiliates who nominated no one would have done differently? OSI surely doesn't know that. OSI has treated every one of their Affiliates unfairly by changing the number of seats available after the nominations closed.

Due to this Second Irregularity alone, I call on the OSI to reopen nominations and reset the election cycle. The mistakes (as played) actually benefit me as a candidate — since now I'm running against a small field and there are two seats available. If nominations reopen, I'll surely face a crowded field with many viable candidates added. Nevertheless, I am disgusted that I unintentionally benefited from OSI's election irregularity and I ask OSI take corrective action to make the 2025 election fair.

The remaining irregularities are minor (by comparison, anyway), but I want to make sure I list all the irregularities that I've seen in the 2025 OSI Board Elections in this one place for everyone's reference:

Third Irregularity

I was surprised when OSI published the slates of Affiliate candidates that they were not in any (forward or reverse) alphabetical order — not by candidates' first names, last names, or nominator names. Perhaps the slots in the voter's guide were assigned randomly, but if so, that was not disclosed to the electorate. And who is listed first, you ask? Why, the incumbent Affiliate candidate. The issue of candidate ordering in voting guides and ballots has been well studied academically and, unsurprisingly, being listed first is known to be an advantage. Given that incumbents already have an advantage in all elections, putting the incumbent first without stating that the slots in the voter guide were randomly assigned makes the 2025 OSI Board election unfair.

I contacted OSI leadership about this issue within hours of the posting of the candidates (at time of writing, that was four days ago) and they have neither responded nor corrected the issue. This compounds the error, because OSI is now consciously choosing to keep the incumbent Affiliate candidate listed first in the voter guide.

Note that this problem is not confined to the “Affiliate district”. In the “Member district”, my running mate, Richard Fontana, is listed last in the voter guide for no apparent reason.

Fourth Irregularity

It's (ostensibly) a good idea for the OSI to run a discussion forum for the candidates (and kudos to OSI (in this instance, anyway) for using the GPL'd Discourse software for the purpose). However, the requirements to create an account and respond to the questions exclude some Affiliate candidates. Specifically, the OSI has stated that Affiliate candidates, and the Affiliates that are their electorate, need not be Members of the OSI. (This is actually the very first item in OSI's election FAQ!) Yet, to join the discussion forum, one must become a member of the OSI! While it might be reasonable to require all Affiliate candidates become OSI Members, this was not disclosed until the election started, so it's unfair!

Some already argue that since there is a free (as in price) membership that this is a non-issue. I disagree, and here's why: Long ago, I had already decided that I would not become a Member of OSI (for free or otherwise) because OSI Members who do not pay money are denied voting rights in these elections! Yes, you read that right: the election for OSI Directors in the “Members” seat literally has a poll tax! I refuse to let OSI count me as a Member when the class of membership they are offering to people who can't afford to pay is a second-class citizenship in OSI's community. Anyway, there is no reason that one should have to become a Member to post on the discussion fora — particularly given that OSI has clearly stated that the Affiliate candidates (and the Affiliate representatives who vote) are not required to be individual Members.

A desire for Individual Membership is understandable for a nonprofit. Nonprofits often need to prove they represent a constituency. I don't blame any nonprofit for trying to build a constituency for itself. The issue is how. Counting Members as “anyone who ever posted on our discussion forum” is confusing and problematic — and becomes doubly so when Voting Memberships are available for purchase. Indeed, OSI's own annual reporting conflates the two types of Members confusingly, as “Member district” candidate Chad Whitacre asked about during the campaign (but received no reply).

I point as counter-example to the models used by GNOME Foundation (GF) and Software In the Public Interest (SPI). These organizations are direct peers to the OSI, but both GF and SPI have an application for membership that evaluates on the primary criterion of what contributions the individual has made to FOSS (be they paid or volunteer). AFAICT, for SPI and GF, no memberships require a donation, aren't handed out merely for signing up to the org's discussion fora, and all members (once qualified) can vote.

Fifth Irregularity

This final irregularity is truly minor, but I mention it for completeness. On the Affiliate candidate page, it seems as if each candidate is only nominated by one affiliate. When I submitted my candidate statement, since OSI told me they automatically filled in the nominating org, I had assumed that all my nominating orgs would be listed. Instead, they listed only one. If I'd known that, I'd have listed them at the beginning of my candidate statement; my candidate statement was drafted under the assumption all my nominating orgs would be listed elsewhere.

Sixth Irregularity

Update 2025-03-07. I received an unsolicited (but welcome) email from an Executive Director of one of OSI's Affiliate Organizations. This individual indicated they'd voted for me (I was pleasantly surprised, because I thought their org was pro-OSAID, which I immediately wrote back and told them). The irregularity here is that OSI told candidates, including during the orientation phone calls, that the campaign period would be 10 days, which would include two weekends in most places. They started the campaign late, and didn't communicate that they weren't extending the timeline, so the campaign period was about 6.5 days and included only one weekend.

Meanwhile, during this extremely brief 6.5 day period, the election coordinator at OSI was unavailable to answer inquiries from candidates and Affiliates for at least three of those days. This included sending one Affiliate an email with the subject line “Rain Check” in response to five questions they sent about the election process, and its contents indicated that the OSI would be unavailable to answer questions about the election — until after the election!

Seventh Irregularity (added 2025-03-13)

The OSI Election Team, less than 12 hours after sending out the ballots (on Friday 2025-03-07) sent the following email. Many of the Affiliates told me about the email, and it seems likely that all Affiliates received this email within a short time after receiving their ballots (and a week before the ballots were due):

Subject: OSI Elections: unsolicited emails
Date: Sat, 08 Mar 2025 02:11:05 -0800
From: "Staffer REDACTED" <staffer@opensource.org>

Dear REDACTED,

It has been brought to our attention that at least one candidate has been emailing affiliates without their consent.

We do not give out affiliate emails for candidate reachouts, and understand that you did not consent to be spammed by candidates for this election cycle.

Candidates can engage with their fellow affiliates on our forums where we provide community management and moderation support, and in other public settings where our affiliates have opted to sign up and publicly engage.

Please email us directly for any ongoing questions or concerns.

Kind regards,
OSI Elections team

This email is problematic because candidates received no specific guidance on this matter. No material presented at either of the two mandatory election orientations (which I attended) indicated that contacting your constituents directly was forbidden, nor could I find such in any materials on the OSI website. Also, I checked with Richard Fontana, who also attended these sessions, and he confirms I didn't miss anything.

It's not spam to contact one's “FOSS Neighbors” to learn their concerns when in a political campaign for an important position. In fact, during those same orientation sessions, it was mentioned that Affiliate candidates should know the needs of their constituents — OSI's Affiliates. I took that charge seriously, so I invested 12-14 hours researching every single one of my constituents (all ~76 OSI Affiliate Organizations). My research confirmed my hypothesis: my constituents were my proverbial “FOSS neighbors”. In fact, I found that I'd personally had contact with most of the orgs since before OSI even had an Affiliate program. For example, one of the now-Affiliates had contacted me way back in 2013 to provide general advice and support about how to handle fundraising and required nonprofit policies for their org. Three other now-Affiliates' Executive Directors are people I've communicated regularly with for nearly 20 years. (There are other similar examples too). IOW, I contacted my well-known neighbors to find out their concerns now that I was running for an office that would represent them.

There were also some Affiliates that I didn't know (or didn't know well) yet. For those, like any canvassing candidate, I knocked on their proverbial front doors: I reviewed their websites, found the name of the obvious decision maker, searched my email archives for contact info (and, in some cases, just did usual guesses like <firstname.lastname@example.org>), and contacted them. (BTW, I've done this since the 1990s in nonprofit work when trying to reach someone at a fellow nonprofit to discuss any issue.)

All together, I was able to find a good contact at 55 of the Affiliates, and here's a (redacted) sample of one of the emails I sent:

Subject: Affiliate candidate for OSI Board of Directors available to answer any questions

REDACTED_FIRSTNAME,

I'm Bradley M. Kuhn and I'm running as an Affiliate candidate in the Open Source Initiative Board elections that you'll be voting in soon on behalf of REDACTED_NAME_OF_ORG.

I wanted to let you know about the Shared Platform for OSI Reform (that I'm running for jointly with Richard Fontana) [0] and also offer some time to discuss the platform and any other concerns you have as an OSI Affiliate that you'd like me to address for you if elected.

(Fontana and I kept our shared platform narrow so that we could be available to work on other issues and concerns that our (different) constituencies might have.)

I look forward to hearing from you soon!

[0] https://codeberg.org/OSI-Reform-Platform/platform#readme

Note that Fontana is running as a Member candidate which has a separate electorate and for different Board seats, so we are not running in competition for the same seat.

(Since each one was edited manually for the given org, if the org primarily existed for a FOSS project I used, I also told them how I used the project myself, etc.)

Most importantly, though, election officials should never comment on the permitted campaign methods of any candidates before voting finishes in any event. While OSI staff may not have intended it, editorializing regarding campaign strategies can influence an election, and if you're in charge of running an impartial election, you have a high standard to meet.

OSI: either reopen nominations or just forget the elections

Again, I call on OSI to correct these irregularities, briefly reopen nominations, and extend the voting deadline. However, if OSI doesn't want to do that, there is another reasonable solution. As explained in OSI's by-laws and elsewhere, OSI's Directors elections are purely advisory. Like most nonprofits, the OSI is governed by a self-perpetuating (not an elected) Board. I bet with all the talk of elections, you didn't even know that!

Frankly, I have no qualms with a nonprofit structure that includes a self-perpetuating Board. While it's not a democratic structure, a self-perpetuating Board of principled Directors does solve the problems created in a Member-based organization. In Member-based organizations, votes are for sale. Any company with resources to buy Memberships for its employees can easily dominate the election. While OSI probably has yet to experience this problem, if OSI grows its Membership (as it seeks to), OSI will surely face that problem. Self-perpetuating Boards aren't perfect, but they do prevent this problem.

Meanwhile, having now witnessed OSI's nomination and the campaign process from the inside, it really does seem to me that OSI doesn't really take this election all that seriously. And, OSI already has in mind the kinds of candidates they want. For example, during one of the two nominee orientation calls, a key person in the OSI Leadership said (regarding item 4 of Fontana's and my shared platform) [quote paraphrased from my memory]: If you don't want to agree to these things, then an OSI Directorship is not for you and you should withdraw and seek a place to serve elsewhere. I was of course flabbergasted to be told that a desire to avoid proprietary software should disqualify me (at least in view of the current OSI leadership). But, that speaks to the fact that the OSI doesn't really want to have Board elections in the first place. Indeed, based on that and many other things that the OSI leadership has said during this process, it seems to me they'd actually rather hand-pick Directors to serve than run a democratic process. There's no shame in a nonprofit that prefers a self-perpetuating Board; as I said, most nonprofits are not Membership organizations nor allow any electorate to fill Board seats.

Meanwhile, OSI's halfway solution (i.e., a half-heartedly organized election that isn't really binding) seems designed to manufacture consent. OSI's Affiliates and paid individual Membership are given the impression they have electoral power, but it's an illusion. Giving up on the whole illusion would be the most transparent choice for OSI, and if the OSI would rather end these advisory elections and just self-perpetuate, I'd support that decision.

Update on 2025-03-07: Chad Whitacre, candidate in OSI's “Member district”, has endorsed my suggestion that OSI reopen nominations briefly for this election. While I still urge voters in the “Member district” to rank my running mate, Richard Fontana, first in that race, I believe Chad would be a fine choice as your second listed candidate in the ranked-choice voting.

Posted Mon Mar 3 11:00:00 2025 Tags:

In the early days of space missions it was observed that spinning objects flip over spontaneously. This got people freaked out that it could happen to the Earth as a whole. Any solid object spinning around its intermediate axis (the middle one of its three principal axes) will do this, and the amount of time it spends between flips has nothing to do with how quickly the flips happen.
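
For the curious, the flipping behavior (the intermediate axis theorem, or Dzhanibekov effect) drops straight out of the torque-free Euler equations. Here’s a toy numerical integration, with made-up moments of inertia and a tiny initial wobble, showing the spin about the middle axis repeatedly reversing:

  # Toy simulation of the intermediate-axis ("tennis racket") effect: a rigid
  # body spun about its middle principal axis periodically flips over.
  # Torque-free Euler equations integrated with a crude RK4 step; the moments
  # of inertia and time step are arbitrary illustration values.
  import numpy as np

  I1, I2, I3 = 1.0, 2.0, 3.0   # principal moments of inertia, I1 < I2 < I3

  def deriv(w):
      w1, w2, w3 = w
      return np.array([
          (I2 - I3) / I1 * w2 * w3,
          (I3 - I1) / I2 * w3 * w1,
          (I1 - I2) / I3 * w1 * w2,
      ])

  # Spin almost entirely about the intermediate axis, with a tiny wobble.
  w = np.array([1e-3, 1.0, 1e-3])
  dt, steps = 0.01, 20000
  flips = 0
  for _ in range(steps):
      k1 = deriv(w)
      k2 = deriv(w + dt / 2 * k1)
      k3 = deriv(w + dt / 2 * k2)
      k4 = deriv(w + dt * k3)
      w_new = w + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
      if np.sign(w_new[1]) != np.sign(w[1]):   # spin about axis 2 reversed
          flips += 1
      w = w_new

  print(f"spin about the intermediate axis reversed {flips} times")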

You might assume that this possibility has since been debunked. That didn’t happen. People just kind of got over it. The model of the Earth as a single solid object is overly simplistic; there are fluid flows of some kind going on below the surface. Unfortunately we have no idea what those flows are actually doing or how they might affect this process. It’s a great irony of our universe that we know more about distant galaxies than about the core of our own planet.

Unfortunately the one bit of evidence we have about the long-term stability of the Earth’s axis might point towards such flips happening regularly. The Earth’s magnetic field is known to invert every once in a while. But it’s just as plausible that what’s going on is the gooey molten inner core of the Earth keeps pointing in the same direction that whole time while the crunchy outer crust flips over.

If a flip like this happened over the course of a day then the Sun would go amusingly skeewumpus for a day and then start rising in the west and setting in the east. Unlike what you see in ‘The 3 body problem’, apparent gravity on the surface of the planet would remain normal that whole time. (The author of that book supposedly has a physics background. I’m calling shenanigans.) But there might be tides an order of magnitude or more bigger than normal, and planetary weather patterns would invert, causing all kinds of chaos. That would include California getting massive hurricanes from over the Pacific while Florida would be much more chill.

A sudden flip like that is very unlikely, but that might not be a good thing. If the flip takes years then midway through it the rotation axis will point at the Sun, so the poles will spend months on end in either baking Sun or pitch black, getting baked to a crisp or frozen solid, far beyond the most extreme weather we have under normal circumstances. The equatorial regions will be spared, being in constant twilight, with the passage of time mostly denoted by spectacular northern and southern lights which alternate every 12 hours. And there will probably be lots of tectonic activity, but not as bad as on Venus.

Posted Mon Mar 3 03:47:57 2025 Tags:

Before getting into it I have to say: My experience in school was miserable. I hated it. My impression of the value of school is colored by that experience. On the other hand many if not most people hated school but it’s socially unacceptable to publicly say that school, especially college, was anything but transformative. So here’s to speaking up for the unheard masses.

The central thesis of ‘The Case Against Education’ by Bryan Caplan is that the vast bulk of benefit which students get from their schooling is the diploma, not the education. While his estimate of the value of the diploma at 80% sounds gut-wrenching, it’s hard to avoid getting somewhere close to that if you do even vaguely realistic estimates. Most classes cover material which students will never use even if they were to remember it. The claims that the real purpose is to enrich students lives and make them better citizens are not backed up by those things happening or even being seriously attempted.

All those arguments and counter-arguments are gone over painstakingly in the book, but they wind up roughly where anyone observing what actually goes on in schools would expect. Students retain almost nothing from school. What’s more interesting is the impact of school on students’ politics and morals, which is almost nothing. It barely moves the needle. The one thing it does have an effect on is that it makes people have a lot fewer children, especially women. Caplan doesn’t go deep into why this may be, but if I may speculate based on what other areas of research have shown, my guess is that it’s caused by (1) not having children while in school and (2) raising women’s standards for men high enough that many of them never settle.

This raises the question of how things got here and what could be done to fix it. The possibility which Caplan oddly does not consider is that it’s a broad conspiracy propped up by the rich and well-educated to make a path for their disappointing children to have solid careers. One in three students at Harvard got in through a path other than earning it from high school achievement. The parents of those kids repay Harvard with money or prestige. It’s remarkable and bizarre that employers don’t look on Harvard graduates with skepticism due to this fact, but they don’t.

What the book does go into is the question of why employers, especially private sector employers, continue to highly value degrees. The answer is an obvious but infuriating one: School, especially college, is something with all the trappings of a job: Boring, unimportant, authoritarian, and demanding. People who do well at it are likely to succeed at any job. (There’s a bunch of discussion of personality types and whatnot which are just poor proxies for succeeding at a sucky job and don’t add much to the discussion.) The only things which school is missing are productive output and pay. It’s unclear what alternative criteria employers could use, ones which happen to be exciting, meaningful, egalitarian, or fun, to find high achieving employees who couldn’t cope with school precisely because it’s missing those things.

The obvious fix for this is to do the exact same thing but with an actual employer. If you get a job at a participating qualified employer and manage to stay working there for a prespecified number of years you get a certificate of sucking it up. The obvious objections are that this would be a big subsidy to large employers, especially those with known toxic work environments whose certificates would be especially valued, and that people who failed the program would have wasted years of their life. Those objections are true, but apply just as much to what happens in universities. In any case, programs like this are rare in the real world, even internationally, and unlikely to become common any time soon.

One source of improvement going on now which Caplan oddly doesn’t go into is the devaluing of the most useless majors. This tends to naturally feed on itself by a somewhat circular logic: Employers devalue the most ridiculous majors, which causes only people who are lacking judgement to pursue them as degrees, which causes employers to further devalue those majors. Such logic is often not a good thing. It’s what got us into overeducation in the first place. But at least in this case it’s causing students to make choices of major which cause them to get more education out of their schooling.

What ‘The Case Against Education’ does go into in great detail is the case for vocational education, which is overwhelming. All but the very best students would be better off getting a vocational degree, both for their own monetary self-interest and the amount of productive work they’re doing for society. The sneering attitude generally given towards vocational degrees (and even engineering degrees!) is obnoxious and unwarranted and should be changed. If vocational degrees were given even a fraction of the prestige which is given to four year degrees the world would be a better place.

What remains is the question of how to improve the education itself. Caplan barely touches on this but I will speculate. Rather than go into vitriolic rants about the problems in subjects which are not my field, I’ll talk about the ones which are, specifically mathematics and computer science.

Many if not most students feel that math classes are torture, a boring subject which they will never use and can barely pass. This assessment is, I’m sad to say, fairly accurate. The reforms which actual mathematicians favor are twofold. First, what’s considered basic literacy in mathematics should be expanded to include probability and expected value, both important concepts that apply to people’s everyday lives. Second, everything beyond that shouldn’t be taught as something important which students need to be force-fed, but something beautiful which it’s enriching to learn, like art or literature. There is of course a tiny fraction of students who are likely to go into math-heavy fields, and there should be advanced classes available for them, but those should also be taught by people who love the subject, with an emphasis on its beauty. Nobody should ever be subject to the misery of trigonometry classes as they’re taught today. And people with PhDs in mathematics should be viewed as qualified to teach the subject.

Computer Science’s main problem is right there in its name: It’s Computer Science, not Programming. By an accident of history it’s socially acceptable to get something approximating a vocational programming degree by getting one in computer science. Either an alternative degree program focusing on software engineering should be set up, or the focus of computer science should be put on practical software development. Thankfully a lot of that is already happening.

Then there are things which are so basic that they don’t even fit in a field. Can we drop cursive and teach everybody touch typing? Bring back cooking, cleaning, and shop classes? Yes it was a problem that those classes were gendered in the past, but a better solution would be to teach them to everybody instead of teaching them to nobody.

Posted Sat Mar 1 22:02:22 2025 Tags:

There’s a new speedcubing single solve record which is by far the funniest one ever. No, it isn’t because it’s by an 11 year old kid. He’s the best speedcuber in the world. It isn’t even because he fumbles the solve at the end and still shatters the world record, on a solve which isn’t even all that lucky, although that is very funny. What really makes it funny is that there are multiple videos breaking down the whole solve, by people who are themselves good speedsolvers, going over every single turn done in painstaking detail, which completely miss a key part of what’s happening.

The background to this is that there have long been pissing matches within the world of speedcubing about what’s the best solving algorithm. On one end there are methods which minimize total number of moves, with Petrus being particularly good at keeping that number down, and at the other end are methods which optimize for turning as fast as possible, with CFOP being an extreme of that. The problem with the move-count-minimizing methods is that they require a lot of analysis while the solve is being done and the finger placements to do the turns required tend to be awkward, in the end rendering the whole thing very slow. The speed oriented methods by contrast require only a few looks at the cube and then doing memorized sequences between them. (Speedcubers call sequences ‘algorithms’ which makes me cringe but I’ll go with it.) Good speedcubers have not only memorized and practiced all the algorithms but worked out exact finger placements throughout them for optimal speed. This is the approach which has worked much better in practice. CFOP has dominated speedsolving for the last two decades.


To speedsolvers who aren’t initiated, what seems to be happening is a fairly lucky CFOP solve which happens to yield particularly good finger placements. While this is technically true it’s missing that those setups aren’t accidents. While this solve is a bit lucky, several of those things are guaranteed to happen because of some subtle things done in the solve. This isn’t actually a CFOP solve at all. It’s EOCross.

A quick summary of CFOP: First you solve the bottom four edges, which is done intuitively (really planned out during the inspection time provided before solving begins). Then a ‘look’ is done and one of the bottom corners and the edge next to it is solved. This is repeated three more times to finish the bottom two layers. Then a look is done and an algorithm is used to orient all the top layer pieces. Finally there’s one last look and an algorithm is done to position all the last layer pieces. Yes that’s a lot of memorized algorithms.

This is how EOCross works: First you solve the bottom edges and all edge orientations, a process which absolutely must be planned out during inspection. The meaning of ‘edge orientations’ in this context may sound a bit mysterious, and it’s subtle enough that it caused the accidental trolling of the new world record. If you only rotate the up, down, right, and left faces of a Rubik’s Cube the edges don’t change orientation. Literally they do change orientation in the sense that they rotate while moved, but whenever they go back to the position they started in they’ll always be in the same orientation they were at the beginning. The solve then proceeds with doing corner and edge pairs from the first two layers but with the restriction that the front and back faces aren’t turned. Finally all that’s left is the last layer, which happens to be guaranteed to have all the edges correctly oriented, and a single algorithm is done for those. Yes that’s an even larger number of algorithms to memorize.
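
Here’s a toy model of that edge-orientation convention (a sketch, not a full cube simulator: corners and turn directions are ignored since they don’t matter for this point). Edges are tracked as (piece, flip) pairs; U/D/R/L turns just permute them, while F/B quarter turns also flip the four edges they move. Scrambling with only U/D/R/L never produces an edge sitting flipped in its home slot; adding F/B does.

  # Toy edge-orientation model under the usual F/B convention.
  import random

  SLOTS = ["UF","UR","UB","UL","DF","DR","DB","DL","FR","FL","BR","BL"]
  CYCLES = {   # piece in slot[i] moves to slot[i+1]; F and B also flip
      "U": ["UF","UR","UB","UL"], "D": ["DF","DL","DB","DR"],
      "R": ["UR","BR","DR","FR"], "L": ["UL","FL","DL","BL"],
      "F": ["UF","FL","DF","FR"], "B": ["UB","BR","DB","BL"],
  }

  def scramble(moves, n=500, seed=1):
      random.seed(seed)
      pos = {s: (s, 0) for s in SLOTS}          # slot -> (piece, flip bit)
      flipped_at_home = 0
      for _ in range(n):
          m = random.choice(moves)
          cyc = CYCLES[m]
          flip = 1 if m in ("F", "B") else 0
          saved = [pos[s] for s in cyc]
          for i, s in enumerate(cyc):
              piece, f = saved[i - 1]           # piece arriving from the previous slot
              pos[s] = (piece, f ^ flip)
          flipped_at_home += sum(1 for s in SLOTS if pos[s] == (s, 1))
      return flipped_at_home

  print("U/D/R/L only :", scramble(["U","D","R","L"]), "edges seen home but flipped")
  print("all six faces:", scramble(["U","D","R","L","F","B"]), "edges seen home but flipped")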

That may have been a bit much to follow, but the punch line is that to an uninitiated CFOP solver an EOCross solve looks like a CFOP solve where the edge orientations happen to land nicely.

Technically EOCross is a variant on a solving method called ZZ but it’s sufficiently different that it should be considered a different method. It was invented several years ago and devotees have been optimizing the details ever since. There have of course been claims that it should beat CFOP, to which the response has mostly been to point out that no EOCross solvers have been anywhere near the best speedcubers. The rebuttal has been that the top solvers haven’t tried it because it’s so much work to learn and if they did they’d be faster. Given how handily the world record was just broken that rebuttal seems to have been correct. Good work EOCrossians getting the method optimized.


Posted Fri Feb 28 08:26:35 2025 Tags:

I accepted nomination as a candidate for an “Affiliate seat” in the Open Source Initiative (OSI) Board of Directors elections. I was nominated by the following four OSI Affiliates:

  • The Matrix Foundation
  • The Perl and Raku Foundation
  • Software Freedom Conservancy (my employer)
  • snowdrift.coop

To my knowledge, I am the only Affiliate candidate, in the history of these OSI Board of Directors “advisory” elections, to be nominated by four Affiliates.

I am also endorsed by another Affiliate, the Debian Project.

You can see my official candidate page on OSI's website. This blog post will be updated throughout the campaign to link to other posts, materials, and announcements related to my candidacy.

Updates During the Campaign

I ran on the “OSI Reform Platform” with Richard Fontana.

I created a Fediverse account specifically to interact with constituents and the public, so please also follow that on floss.social/@bkuhn.

Posted Wed Feb 26 21:01:05 2025 Tags:

Computer scientists have long studied the question of what things can fit through mouse holes. Early on it was an open question as to whether there even exists a star which is larger than a mouse hole. That question got settled with a seminal result tackling an easier problem: Is there a planet bigger than a mouse hole? Because the mouse hole exists on a planet that planet must be bigger than a mouse hole. Because the mouse hole can’t fit through itself the planet can’t either. Because there are stars bigger than planets there must exist a star which can’t fit through a mouse hole.

The next question is whether there exists a continent which can’t fit through a mouse hole. This frustratingly remains an open problem. Because the earth is composed of continents and water and there are no mouse holes on water that would seem to imply that there must be a mouse hole on a continent and therefore the continent is larger than the mouse hole. But there’s a loophole: Boats are also on water and may contain mouse holes. While it is known that mouse holes exist it remains an open question whether they occur on continents, on boats, or both. (All mouse holes are known to be about the same size, so ‘a mouse hole’ and ‘any mouse hole’ mean essentially the same thing.) If it could be shown that there exists a continent larger than all boats, which is widely believed to be the case, then we would know that there exists a continent larger than a mouse hole, but for now all we can prove is that there is either a boat larger than a mouse hole or a continent larger than a mouse hole.


Several years ago there was a breakthrough result showing that there exists a blue whale larger than a mouse hole. This is a very exciting result not only on its own merits but also because it breaks the so-called recursion barrier: Blue whales are the first things known to be larger than mouse holes which cannot themselves contain mouse holes. Unfortunately whether there exists a continent larger than all blue whales remains open so this result can’t be used to resolve the question of whether there exists a continent larger than a mouse hole.

Now there is an exciting new result showing that there exists an elephant larger than a mouse hole. Unfortunately this result comes with a large caveat: While it’s known that blue whales don’t exist on other planets the question of whether elephants exist on other planets remains open. So it’s still possible that there doesn’t exist a continent larger than a mouse hole or even possible that there exists an elephant on another planet larger than the entire earth which would make this result trivial, although that isn’t believed to be the case. If it could be shown that there aren’t elephants on other planets, or that they aren’t any larger than the elephants on earth, then it would be known that there’s an elephant on our planet larger than a mouse hole.

This is an exciting time in mouseholeology with important results coming in quickly. The coming years are all but guaranteed to bring new breakthroughs in our understanding of which things can fit through mouse holes.


Posted Wed Feb 26 02:48:51 2025 Tags:

Ready in time for libinput 1.28 [1] and after a number of attempts over the years, we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad. Instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence, without having to tap or physically click and hold a button. You might be able to see the appeal right there.

Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is in the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it does fit in quite nicely with the rest of the features we have though.

There really isn't much more to say about the new feature except: It's configurable to work on 4-finger drag too so if you mentally substitute all the threes with fours in this article before re-reading it that would save me having to write another blog post. Thanks.

[1] "soonish" at the time of writing

Posted Mon Feb 24 05:38:00 2025 Tags:

This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users.

This is a feature of a feature of a feature, so let's start at the top.

"Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap corresponds emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and to be recognised the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads.

"Tap-and-drag" refers to the ability to keep the pointer down so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to it's final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful.

"Drag lock" refers to the ability of keeping the mouse button pressed until "unlocked", even if the finger moves off the touchpads. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button, instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap with any number of finger releases in between. Sounds (and is) complicated to explain, is quite easy to try and once you're used to it it will feel quite natural.

The above behaviour is the new behaviour which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger stayed up past a certain timeout. This was less predictable and caused issues for users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (Alanis Morissette-style ironically) faster to release (a tap can be performed before the previous timeout would've expired).
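
To make the sequence concrete, here's a toy state machine for the sticky behaviour. This is not libinput's implementation (which also has to deal with timeouts, movement thresholds and multiple fingers), just an illustration of the tap, down, move..., tap sequence described above:

  // Toy model of tap-and-drag with "sticky" drag lock. Not libinput code,
  // just the event sequence from the text.
  #[derive(Debug, PartialEq)]
  enum State {
      Idle,     // nothing going on
      Tapped,   // a tap was just recognised
      Dragging, // finger down after a tap, button is logically down
      Locked,   // finger lifted, button stays down until the next tap
  }

  #[derive(Debug)]
  enum Event {
      Tap,        // finger down + up within the tap time/distance limits
      FingerDown, // finger set down (and held down)
      FingerUp,   // finger lifted
  }

  fn next(state: State, event: Event) -> State {
      match (state, event) {
          (State::Idle, Event::Tap) => State::Tapped,
          (State::Tapped, Event::FingerDown) => State::Dragging, // button goes down
          (State::Dragging, Event::FingerUp) => State::Locked,   // sticky: button stays down
          (State::Locked, Event::FingerDown) => State::Dragging, // resume the drag
          (State::Locked, Event::Tap) => State::Idle,            // unlock, button goes up
          (s, _) => s,                                           // ignore anything else here
      }
  }

  fn main() {
      let mut s = State::Idle;
      for e in [Event::Tap, Event::FingerDown, Event::FingerUp,
                Event::FingerDown, Event::FingerUp, Event::Tap] {
          s = next(s, e);
          println!("{s:?}");
      }
      assert_eq!(s, State::Idle); // the final tap released the "button"
  }

The only way out of the locked state is another tap; lifting the finger and setting it down again just resumes the drag.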

Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks!

As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true
  
All the features above are actually handled by libinput; this is just about a default change in GNOME.
Posted Mon Feb 24 04:17:00 2025 Tags:

Board games are said to have ‘position’, which is about the longer term implications of what’s happening on the board, and ‘tactics’, which is about immediate consequences. Let’s consider a game which is purely tactical. In this game the two sides alternate picking a bit which is added to a string, and after they’ve both moved 64 times the secure hash of the string is calculated and that’s used to pick the winner. I suggest 64 as the number of moves because it’s cryptographic in size, so the initial moves will have unclear meanings which only become clear towards the end of the game.

The first question to ask about this is what are the chances that the first player to move will have a theoretical win, assuming both sides have unlimited computational capability. It turns out if the chances are greater or less than a certain special value then the probability of one particular side having the win goes up rapidly as you get further from the end of the game. If the win probability is set to exactly that value then the winning chances remain stable as you calculate backwards. Calculating this value is left as an exercise to the reader. The interesting thing is that the value isn’t 50%, in fact it’s fairly partisan, which raises the question of whether the level of advantage for white in Chess is set about right to optimize for it being a tense game.
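
If you want to see this numerically, here’s a minimal sketch of the backward induction under my own modeling assumption that the terminal positions are independent coin flips won by the side to move with probability p (fair warning: running it gives away the exercise). A position is a win for the side to move exactly when at least one of its two successors is a loss for the opponent, which gives the recurrence p_next = 1 - p^2.

  // Backward induction for the purely tactical bit-picking game, modeled (my
  // assumption, not from the post) as a full binary tree whose terminal
  // positions are independent wins for the side to move with probability p.
  // The side to move wins iff at least one successor is a loss for the
  // opponent, so k + 1 plies from the end: p_{k+1} = 1 - p_k^2.
  fn main() {
      let fixed = (5.0_f64.sqrt() - 1.0) / 2.0; // the fixed point of p -> 1 - p^2
      for start in [0.5, 0.6, fixed, 0.65, 0.7] {
          let mut p = start;
          let mut line = format!("leaf p = {start:.4}:");
          for k in 1..=64 {
              p = 1.0 - p * p;
              if [4, 16, 64].contains(&k) {
                  line += &format!("  {k:2} plies out: {p:.4}");
              }
          }
          println!("{line}");
      }
  }

Away from the fixed point the win probability oscillates out towards 0 and 1, meaning one particular side almost surely holds the theoretical win once you’re far from the end of the game, which is the behavior described above.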

There are other variants possible. The number of possible plays could be more than 2, or somewhat variable since you might have a chance of making the opponent skip their turn and you go again. This would allow the ‘effective’ fanout to be something non-integer, but it’s an interesting question whether there’s a way for it to be less than 2.

There’s a variant where instead of there being a fixed number of moves in a game after each move the player who just moved has some probability of winning (or losing). It isn’t obvious whether any win probability guarantees a 100% probability that the game is winnable by one side or the other. It seems like that should be a foundational result in computer science but I’m unfamiliar with it.

In practice of course analyzing this sort of game is constrained by computational ability. That can be ‘emulated’ by assuming that the outcomes are truly random and there’s an oracle which can be accessed a set number of times on one’s turn to say who wins/whether a player wins in a given position. There are a lot of variants possible based on the number of queries the sides have, whether there’s optionality in it, and whether you can think on the opponent’s time. It feels like optimal play is slightly randomized. Intuitively if one player has more thinking time than the other then the weaker player needs to mix things up a bit so their analysis isn’t just a subset of what the opponent is seeing. But this is a wild guess. Real analysis of this sort of game would be very interesting.

Posted Sat Jan 25 23:26:36 2025 Tags:
Looking at some claims that quantum computers won't work. #quantum #energy #variables #errors #rsa #secrecy
Posted Sat Jan 18 17:45:19 2025 Tags:

You might notice Katy is having a bit of an interdimensional anomaly here. I implemented this as an experiment because it’s different from any color cycling effect I’ve ever seen before. Normally in color cycling there’s one position which is true, then the hues rotate until all of them are swapped, then they keep rotating until they’re back to true. In this one at any given moment there are two opposite hues which are true and the ones at 90 degrees from those are swapped, and the color cycling effect is rotating which angle is true. It’s also doing a much better job of keeping luminance constant due to using okhsl.

Code is here. It’s leaning on libraries for most of the work, but I did write some code to dither just the low order bits of the RGB values. That’s a technique which should be used more often. This effect would also work on animated video. You could even adjust the angle as a directorial trick, to draw the viewer’s eye towards particular things by making their color true.
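
To illustrate the low order bit dithering idea, here’s a sketch of quantizing a 16 bit channel down to 8 bits while using the low byte to decide rounding against an ordered (Bayer) threshold. This is not the linked code, just one simple way to do the technique:

  // Quantize a 16-bit colour channel to 8 bits, using the low-order byte as a
  // dither amount against a 4x4 ordered (Bayer) threshold matrix. Plain
  // truncation would throw the low byte away and produce visible banding.
  const BAYER4: [[u16; 4]; 4] = [
      [0, 8, 2, 10],
      [12, 4, 14, 6],
      [3, 11, 1, 9],
      [15, 7, 13, 5],
  ];

  fn dither_to_8bit(value16: u16, x: usize, y: usize) -> u8 {
      let high = value16 >> 8;  // the 8 bits we keep
      let low = value16 & 0xff; // the low-order bits we'd otherwise throw away
      // Scale the 0..16 Bayer entry up to the 0..256 range of the low byte.
      let threshold = BAYER4[y % 4][x % 4] * 16 + 8;
      // Round up when the discarded bits exceed the position-dependent threshold.
      let rounded = high + u16::from(low >= threshold);
      rounded.min(255) as u8
  }

  fn main() {
      // A smooth 16-bit ramp: plain truncation would give long runs of equal
      // 8-bit values, dithering breaks them up spatially instead.
      for x in 0..16usize {
          let v16 = 0x1234_u16.wrapping_add(x as u16 * 37);
          print!("{} ", dither_to_8bit(v16, x, 0));
      }
      println!();
  }

Error diffusion, as mentioned below, generally looks better than an ordered threshold; this is just the smallest self-contained version of ‘don’t throw the low byte away’.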

(Now that I think about it, low order bit dithering could be improved by using error in the okhsl gamut. It could also be improved by other diffusion techniques, which in turn can be further improved by dynamically choosing which neighboring pixel already most wants to have error in the opposite direction. I’m going to exercise some self-control and not implement any of this, but you most definitely should pick it up where I left off. All video manipulation should be done in 16 bit color the entire time and only dithered down to 8 bit on final display.)

As a bonus, I also simplified the color swatches I gave previously into two separate ones, for light and dark backgrounds. Files are here and here.

All of the above is done within the limitations of the sRGB color space. The sRGB standard kind of sucks. It’s based on the very first color television ever made, in 1954, and the standardization which came later made it consistent but not broader. Now that OLED is getting used everywhere my expectation is that things are going to start supporting Rec2100 under the hood, and once that becomes ubiquitous new content will be produced in formats which support that extra color depth. It’s going to take a few years.

Posted Fri Dec 27 15:32:45 2024 Tags:

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, only a maintainer can remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).

So for the foreseeable future libinput will follow this pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least); all it requires is for reporters to not get grumpy about their issue being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Posted Wed Dec 18 03:21:00 2024 Tags:

Different sports have different attitudes to rules changes. Most competitive sports get constant rules tweaks (except for baseball, which seems dead set on achieving cultural irrelevance). Track and field tries to keep consistent rules over time so performances in different years are comparable. Poker has a history of lots of experimentation but has mostly been a few main variants the last few decades. Go never changes the rules at all (except for tweaking the scoring system out of necessity but still not admitting it. That’s a story for another day). Chess hasn’t changed the rules for a long time and it’s a problem.

I’m writing this right after a new world chess champion has been crowned. While the result was better than previous championships in that it succeeded in crowning an unambiguous world champion, it failed at two bigger goals: Being exciting and selecting a winner who’s widely viewed as demonstrating that they’re the strongest among all the competitors.

The source of the lack of excitement is no secret: Most of the games were draws. Out of the 14 games, 5 were decisive. This also creates a problem for the accuracy of the result. In a less drawish game a match with 6 games of which only one was drawn would have equal statistical significance (assuming it wasn’t overly partisan). In fact it’s even worse than it appears on its face. The candidates tournament, which selected the challenger, was an 8 person double round robin where the winner scored 9/14 and three other competitors scored 8.5/14. Picking a winner based on such a close result has hardly any significance at all, and that format means that unless one of the competitors was ludicrously better than the others such a close result was expected. If the format were instead that the top four finishers from the candidates tournament played single elimination matches against each other then the eventual result would be viewed with far more authority. Part of the problem here is that this is just a badly designed tournament, but some of the reasoning behind this format is because of the drawishness. Such matches would be long and arduous and not much fun, as they were in the past. Some previous FIDE title tournaments were far worse, following a very misguided idea that making the results random would lead to more excitement. That makes sense in sports like Soccer where there are few teams and it’s important that all of them have a shot, but the ethos of Chess is that the better player should consistently win, and adding randomness can easily lead to never seeing the same player win twice.

This leads to the question of how the rules of chess could be modified to not have so many draws. Most proposed variations suffer from being far too alien to chess players to be taken seriously, but there are two approaches which are, or should be, taken seriously which I’d like to highlight: Fischer random and the variants explored by Kramnik. In Fischer random a starting position with the pieces in the back rank scrambled randomly is selected. In the latter Kramnik, a former world champion, suggested game variants, and the Deepmind team trained an AI on them and measured the draw rate and partisanship of all of them based on how that engine did in self-play games. (Partisanship is the degree to which the game favors one player over the other). I love this methodology. Of the variants tried I’d like to highlight two of the best ones. One is ‘torpedo’, in which pawns can move two squares at once from any position, not just the starting one. The other is no-castle, which is exactly what it sounds like. No castle has the benefit that it gets rid of the most complex and confusing chess rules and that it’s just changing the opening position, in fact it’s a position reachable from the standard Chess opening position. (For some reason people don’t do no-castle Fischer Random tournaments, which seems ridiculous. Might as well combine the best ideas.)

Both no castle and torpedo have about the same level of partisanship as regular chess, which may be a good thing for reasons I really ought to do a separate blog post about. They also both do a good job of making the game less drawish. The reason for this is, in my opinion, basically the same, or at least two sides of the same coin. Torpedo makes the pawns stronger, so they’re more likely to promote and decide the game. No castle nerfs the king so it’s more likely to get captured. Of these two approaches torpedo feels far more alien to regular chess than no castle does. My proposal to make Chess even more non-drawish is to nerf the king even more: Make it so the king can’t move diagonally. Notably this would cause even king versus king endgames to be decisive. It would also result in a lot more exciting attacks because the king would be so poorly defended. This needs extensive play testing to be taken seriously, but repeating the Deepmind experiment is vastly easier now that AI has come so far, and it would be great if Chess or at least a very Chess-like game could have a world championship which was much more exciting and meaningful.

Posted Fri Dec 13 06:21:54 2024 Tags:

Before I get into the meat of this rant, I’d like to say that my main point is not that giving monthly numbers is incompetent, but I will get that out of the way now. There’s this ongoing deep mystery of the universe: “Why does our monthly revenue always drop in February?” It’s because it has fewer days in it, ya dumbass. What you should be doing is reporting average daily numbers, with months as the time intervals of measurement and longer term stats weighted by the lengths of the months. That’s still subject to some artifacts but they’re vastly smaller and hard to avoid. (Whoever decided to make days and years not line up properly should be fired.)

Even if you aren’t engaged in that bit of gross incompetence, it’s still the case that not only year over year but any fixed time interval delta is something you should never, ever use. This includes measuring historical inflation, the performance of a stock, and the price of tea in China. You may find this surprising because the vast majority of graphs you ever see present data in exactly this way, but it’s true and is a huge problem.

Let’s consider the criteria we want out of a smoothing algorithm:

  1. There shouldn’t be any weird artifacts in how the data is presented

  2. There should be minimal parameterization/options for p-hacking

  3. Only the data from a fixed window should be considered

  4. Data from the future should not be considered

So what’s wrong with straightforwardly applying even weighting across the entire time interval? The issue is that it grants a special and extreme cutoff status to two very specific points in time: right now and the beginning of the interval. While right now is hard to do anything about (more on that later), granting special status to an arbitrary point in the past is just wrong. For example, here’s a year over year weighted average in a scenario where there was a big spike one month, which is the most artifact-laden scenario:

What happened exactly a year after the spike? Absolutely nothing, it’s an artifact of the smoothing algorithm used. You could argue that picking a standard interval mitigates the amount of possible p-hacking, and that’s true. But the effects are still there, people can always decide to only show the data when the artifact goes the way they want, and it’s extremely hard to standardize enough to avoid trivial p-hacks. For example it’s viewed as acceptable to show year to date, and that may hit a different convenient cutoff date.

(The graphs in this post all use weighted geometric means and for simplicity assume that all months are the same length. As I said at the top you should take into account month lengths with real world data but it’s beside the point here.)

What you should be using instead is a linearly weighted moving average (LWMA), which instead of applying an even weighting to every data point has the weighting go down linearly as things go into the past, hitting zero at the beginning of the window. Because the early stuff is weighted less, the effective size of the window is narrower than with constant weighting. To get roughly apples to apples you can set it so that the amount of weight coming from the last half year is the same in either case, which corresponds to the time interval of the linear weighting being multiplied by the square root of 2, which is roughly 17 months instead of 12. Here’s what it looks like with linear weighted averages:

As you can see the LWMA doesn’t have a sudden crash at the end and much more closely models how you would experience the spike as you lived through it. There’s still some p-hacking which can be done, for example you can average over a month or ten years instead of one year if that tells a story you prefer, but the effects of changing the time interval used are vastly less dramatic. Changing by a single month will rarely have any noticeable effect at all, while doing the same with a strict cutoff will matter fairly often.
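
Here’s a minimal sketch of that comparison on made-up data: a flat monthly series with a single spike, smoothed with a trailing evenly weighted window and with a trailing linearly weighted one, both as weighted geometric means. The window lengths and the data are purely illustrative, and this is my own sketch of the technique, not the code that produced the graphs above.

  // Trailing weighted geometric mean of a monthly series, with either even
  // weights over the window or weights that fall off linearly into the past
  // (reaching zero just before the start of the window).
  fn smoothed(values: &[f64], window: usize, linear: bool) -> Vec<f64> {
      (0..values.len())
          .map(|i| {
              let start = i.saturating_sub(window - 1);
              let mut log_sum = 0.0;
              let mut weight_sum = 0.0;
              for (j, &v) in values[start..=i].iter().enumerate() {
                  // j = 0 is the oldest point in the window, weight rises toward now.
                  let w = if linear { (j + 1) as f64 } else { 1.0 };
                  log_sum += w * v.ln();
                  weight_sum += w;
              }
              (log_sum / weight_sum).exp()
          })
          .collect()
  }

  fn main() {
      // A flat series with a single one-month spike.
      let mut data = vec![100.0; 48];
      data[20] = 400.0;

      let even = smoothed(&data, 12, false); // 12-month even window
      let lwma = smoothed(&data, 17, true);  // ~17-month linear window

      for (i, (e, l)) in even.iter().zip(&lwma).enumerate() {
          println!("month {i:2}: even {e:7.2}  lwma {l:7.2}");
      }
  }

With the even window the smoothed value jumps when the spike enters, stays elevated, and then drops abruptly exactly a year later when the spike falls out of the window; with the linear weights it decays gradually instead.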

Stock values in particular are usually shown as deviations from a single point at the beginning of a time period, which is all kinds of wrong. Much more appropriate would be to display it in the same way as inflation: annual rate of return smoothed out using LWMA over a set window. That view is much less exciting but much more informative.

In defense of the future

Now I’m going to go out on a limb and advocate for something more speculative. What I said above should be the default, what I’m about to say should be done at least sometimes, but for now it’s done hardly ever.

Of the four criteria at the top the dodgiest one by far is that you shouldn’t use data from the future. When you’re talking about the current time you don’t have much choice in the matter because you don’t know what will happen in the future, but when looking at past data it makes sense to retroactively take what happened later into account when evaluating what happened at a particular time. This is especially true of things which have measurement error, both from noise in measurement and noise in arrival time of effect. Using both before and after data simply gives better information. What it sacrifices is the criterion that you don’t retroactively change how you view past data after you’ve already presented it at the time. While there’s some benefit to revisiting the drama as you experienced it at the time, that shouldn’t always be more important than accuracy.

Here’s a comparison of how a rolling linearly weighted average which uses data from the future would show things in real time versus in retrospect:

(This assumes a window of 17 months centered around the current time, with data from the future elided, so the current value is computed across a window of 9 months.)
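
And a sketch of the future-inclusive version described above: the weights peak at the month being evaluated and fall off linearly in both directions, truncated at the ends of the series, so the most recent month effectively only sees the trailing half of the window. Again the window length is illustrative and this is my own sketch, not the code behind the graphs.

  // Centered linearly weighted geometric mean: weights peak at the month being
  // evaluated and fall off linearly both into the past and into the future,
  // truncated at the ends of the series. Near the right edge the future half of
  // the window simply isn't there yet, which is why the "real time" and "in
  // retrospect" views differ for recent months.
  fn centered_smoothed(values: &[f64], half_window: usize) -> Vec<f64> {
      (0..values.len())
          .map(|i| {
              let start = i.saturating_sub(half_window);
              let end = (i + half_window).min(values.len() - 1);
              let mut log_sum = 0.0;
              let mut weight_sum = 0.0;
              for (j, &v) in values[start..=end].iter().enumerate() {
                  let distance = (start + j).abs_diff(i);
                  let w = (half_window + 1 - distance) as f64; // linear falloff
                  log_sum += w * v.ln();
                  weight_sum += w;
              }
              (log_sum / weight_sum).exp()
          })
          .collect()
  }

  fn main() {
      let mut data = vec![100.0; 48];
      data[20] = 400.0;
      // 8 months on either side plus the current month: a 17-month centered window.
      for (i, v) in centered_smoothed(&data, 8).iter().enumerate() {
          println!("month {i:2}: {v:7.2}");
      }
  }
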

The after the fact smoothing is much more reasonable. It’s utilizing 20/20 hindsight, which is both its strength and weakness. The point is sometimes that’s what you want.

Posted Mon Dec 2 17:25:59 2024 Tags:

The book ‘Manchild in the Promised Land’ is a classic of American literature which traumatized me when I was assigned to read it in middle school. It is a literarily important work, using a style of hyperrealism which was innovative at the time and eventually led to reality television and youtube channels which have full staffs but go out of their way to make it appear that the whole thing was produced by a single person. In parallel it was also part of the rise of the blaxploitation genre. The author and most of the book’s ardent followers claim that it proves that black people in the US are the way they are due to the oppression of white people. It’s also been claimed to be oppression porn, and to glorify drug use and criminality. I’ll get to whether those things are true, but first there are some things I need to explain.

The book is an autobiography about the author’s growing up in Harlem, how he was involved in all manner of criminality when he was younger, and eventually managed to get out of it, go to law school, and have a respectable career. What struck me when I was younger was that it was the first depiction of the interactions between men and women I’d ever seen which wasn’t propaganda bullshit. Back in the 80s the depictions of dating in sitcoms and movies were stagey and dumb, but worse than that they’d alternate between Christian moralizing and the juvenile fantasizing of the writers. Even back then I could see transparently through them. This book was different. It depicted people in actually uncomfortable situations, doing things you weren’t supposed to talk about, and having normal human reactions. The thing which caused damage to my impressionable young mind was that most of these interactions involved pimps and hos and presented a very dark side of humanity.

There’s something I now need to admit. I haven’t been able, and most likely won’t be able, to force myself to finish re-reading this book. I know that’s a bit hypocritical when writing a review, but there’s something I realized early on in re-reading it which recontextualized everything in it and made me unable to cope with reading it any more. And that is that it’s all made up.

It was the homophobia which gave it away. All the gay people are presented as aggressive creepazoids, with not much motivation other than to fulfill the role of creepazoid in this literary universe of general oppression. Such people do exist in real life, but they tend to have the good sense to not openly go after people they don’t know and have no reason to think will be receptive, especially back when homosexuality was downright illegal and beating someone up for being gay wouldn’t land you in any trouble at all (the story is set in the 1940s). The logical conclusion is that the author didn’t know any real out gay people and was making them up as characters in a story using a common trope of the time. Looking into the author more, everything falls apart. He comes across as a dweeb in interviews, not someone anyone would take seriously as a pimp. None of the characters other than him seem to ever have been mentioned anywhere else. He conveniently claims to have been a lawyer but then stopped practicing because he could make more money giving talks, but there doesn’t seem to be evidence that he ever practiced or passed the bar, attended law school, or got into law school in the first place. (I’m guessing he did some but not all of those.) Most ridiculously the timing doesn’t work. Either he was hustling soldiers on leave from WWII when he was eight years old, or that was somebody else’s story. In an interview with NPR they said it was a novel but written as an autobiography, which is surprising given that nearly everything referencing it claims unambiguously that it’s simply an autobiography. My guess is that they did some actual journalism and politely let him fall back on ‘it’s a novel’ when they called him out on his lies.

One may ask whether this matters. Does a work of art’s meaning or import depend on the artist who created it? In general I lean towards saying no, that the truth of even a claimed personal experience is less important than whether that experience is prototypical of that of many others. But in this case the context and meaning are so completely changed based on whether it’s real that I have to go the other way. If it’s a real story it’s about someone who was once a pimp, came to the realization that hos are real people too, changed his life around and is bringing a laudable message of inclusion. If it’s made up then it’s a loser fantasizing about having been a pimp. That doesn’t mean that this isn’t an important and influential book, but it does mean that if you’re going to teach it you should include the context that it’s a seminal work of incel literature.

All that said, being a work of fiction doesn’t mean that there’s nothing about reality which can be gleaned from a work. I truly believe that the stories in this book were ones told to or observed by the author involving older, cooler boys and were either true or at least demonstrations that such stories could win you social status. A lot of it jives with things which I personally witnessed in the 80s and 90s growing up in walking distance of where the book is set. So now let’s get to critiques of the book and whether they’re true or not. The first question at hand is: Does it show that black americans are kept where they are via oppression from white americans? Well… not really. It’s complicated. A lot of the white people in it are very personally charitable to the author, helping him get through school and better himself. That’s the opposite of oppression. Where it does show a lot of oppression is from the police, both in the form of under- and over- policing. Police brutality is commonplace, but the cops won’t show up when you call for them and actually need them. This is definitely a real problem, thankfully less so now than the better part of a century ago, but there’s still a default assumption in the US that when the cops show up they’ll make the situation worse. Maybe over the course of my lifetime that’s been downgraded from an assumption to a fear, but it’s still a very real hangover and the war on drugs isn’t done yet. What the book most definitely shows is the black community oppressing itself. People in the ghetto are mostly interacting with other people in the ghetto, so it’s to be expected that crimes which happen there also have victims from there. The depictions of drug dealers intentionally getting potential customers addicted and of men forcing women into prostitution are particularly disturbing, but are things which actually happen.

That leads to the other big question, which is whether this book glorifies criminality. Yes it glorifies criminality! Half of it is the author fantasizing about being a badass pimp! He claims cocaine isn’t addictive! Gang rape is portrayed as a rite of passage! (The one thing which the author seems to really not like is heroin. This was before crack.) What’s puzzling is how this book is held up as a paragon of showing greatness of black culture. Saying ‘There’s a lot of glorification of criminality and misogyny in black culture and that’s a bad thing’ is something one isn’t allowed to say in polite company, or can only be said by black academics who obfuscate it with layers of jargon. While I understand getting defensive about that, it might behoove people who want to avoid the issue to not outright promote works which are emblematic of those trends (by, say, making them assigned reading in school). It doesn’t help to fall back on ‘this has a deep meaning which only black people can understand’ when the obvious problems are pointed out, as the author had a penchant for doing in interviews.

This brings me to the most central and currently culturally relevant aspect of what’s real in this book. The (very different, not at all bullshit) book ‘We Have Never Been Woke’ by Musa al-Gharbi makes a compelling case for the claim that the woke worldview is largely a self-serving one of intellectual elites who claim to speak for underprivileged people but don’t. That book is from the point of view of someone who is himself cringily woke and can point out the hypocrisy and disconnect from that end. The other side of it is that when the underprivileged people woke claims to speak for say it does not speak for them, you should believe them. This book is an example of that. So is all of gangsta rap. It is not productive to pretend a culture is something it is not just because you think it should be different. And there I think Manchild gets some amount of redemption: It claims to be a work which shines a light on cold, hard, unpleasant reality, and for all its faults it does, only not with the meaning the author intended.

Posted Tue Nov 26 21:04:24 2024 Tags:

A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.

Both crates were intentionally written with minimal dependencies, they currently only depend on thiserror and arguably even that dependency can be removed.

HID Usage Tables (HUT)

As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

  let gd_x = GenericDesktop::X;
  let usage_page = gd_x.usage_page();
  assert!(matches!(usage_page, UsagePage::GenericDesktop));
  
Or the more likely need: convert from a numeric page/id tuple to a named usage.
  let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
  println!("Usage is {}", usage.name());
  
90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple.

hidreport - Report Descriptor parsing

The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs); we can do this:

  let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
  
I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
  let input_report_bytes = read_from_device();
  let report = rdesc.find_input_report(&input_report_bytes).unwrap();
  let field = report.fields().first().unwrap();
  match field {
      Field::Variable(var) => {
          let val: u32 = var.extract(&input_report_bytes).unwrap().into();
          println!("Field {:?} is of value {}", field, val);
      },
      _ => {}
  }
  
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.

hid-recorder

The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools and now we have the third version written in Rust. Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x02,                    // Usage (Mouse)                             2
# 0xa1, 0x01,                    // Collection (Application)                  4
# 0x05, 0x01,                    //   Usage Page (Generic Desktop)            6
# 0x09, 0x02,                    //   Usage (Mouse)                           8
# 0xa1, 0x02,                    //   Collection (Logical)                    10
# 0x85, 0x1a,                    //     Report ID (26)                        12
# 0x09, 0x01,                    //     Usage (Pointer)                       14
# 0xa1, 0x00,                    //     Collection (Physical)                 16
# 0x05, 0x09,                    //       Usage Page (Button)                 18
# 0x19, 0x01,                    //       UsageMinimum (1)                    20
# 0x29, 0x05,                    //       UsageMaximum (5)                    22
# 0x95, 0x05,                    //       Report Count (5)                    24
# 0x75, 0x01,                    //       Report Size (1)                     26
... omitted for brevity
# 0x75, 0x01,                    //     Report Size (1)                       213
# 0xb1, 0x02,                    //     Feature (Data,Var,Abs)                215
# 0x75, 0x03,                    //     Report Size (3)                       217
# 0xb1, 0x01,                    //     Feature (Cnst,Arr,Abs)                219
# 0xc0,                          //   End Collection                          221
# 0xc0,                          // End Collection                            222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
#    Report size: 80 bits
#  |   Bit:    8       | Usage: 0009/0001: Button / Button 1                          | Logical Range:     0..=1     |
#  |   Bit:    9       | Usage: 0009/0002: Button / Button 2                          | Logical Range:     0..=1     |
#  |   Bit:   10       | Usage: 0009/0003: Button / Button 3                          | Logical Range:     0..=1     |
#  |   Bit:   11       | Usage: 0009/0004: Button / Button 4                          | Logical Range:     0..=1     |
#  |   Bit:   12       | Usage: 0009/0005: Button / Button 5                          | Logical Range:     0..=1     |
#  |   Bits:  13..=15  | ######### Padding                                            |
#  |   Bits:  16..=31  | Usage: 0001/0030: Generic Desktop / X                        | Logical Range: -32767..=32767 |
#  |   Bits:  32..=47  | Usage: 0001/0031: Generic Desktop / Y                        | Logical Range: -32767..=32767 |
#  |   Bits:  48..=63  | Usage: 0001/0038: Generic Desktop / Wheel                    | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
#  |   Bits:  64..=79  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Input Report -------
# Report ID: 31
#    Report size: 24 bits
#  |   Bits:   8..=23  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Feature Report -------
# Report ID: 18
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  12..=15  | ######### Padding                                            |
# ------- Feature Report -------
# Report ID: 23
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bit:   12       | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range:     0..=1     | Physical Range:     0..=0     |
#  |   Bits:  13..=15  | ######### Padding                                            |
##############################################################################
# Recorded events below in format:
# E: .  [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
#                Button 1:     0 | Button 2:     0 | Button 3:     0 | Button 4:     0 | Button 5:     0 | X:     5 | Y:     0 |
#                Wheel:     0 |
#                AC Pan:     0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
  
Posted Tue Nov 19 01:54:00 2024 Tags:
Questioning a puzzling claim about mass surveillance. #attackers #governments #corporations #surveillance #cryptowars
Posted Mon Oct 28 14:25:09 2024 Tags: