This feed omits posts by jwz. Just 'cause.
(satire) *[DOPE] Employees Dig Up Arlington National Cemetery* to collect fees from all those freeloading corpses.
*Fossil-fuel firms receive US subsidies worth $31bn each year, study finds.*
Freedom of speech in the UK is rather limited, in ways that an American (or someone getting accustomed to US standards) can find shocking.
I've noticed the same thing, and not only about the UK. In many other European countries, the idea of freedom of speech is narrow. You can state a range of political opinions, but insulting a person can be a crime. President Sarkozy (recently convicted of corruption) had several people prosecuted for insulting him.
Israel has ordered all the million Palestinians in Gaza City to leave or be killed.
For many, this will be a death march, an enormous atrocity.
Israel's latest way of hindering negotiations with HAMAS is a terrorist attack against HAMAS officials in Qatar, where the negotiations take place.
Israeli sniper Daniel Raab admitted that he shot and killed Palestinian Salem Doghmosh for no legitimate military justification.
Doghmosh was not fighting and did not seem to have a weapon. He was a noncombatant. The sniper aimed specifically at him and shot to kill. As I understand it, that is a war crime, and it is murder. Morally, no more needs to be said.
There is a peculiar wrinkle. Doghmosh was desperately trying to retrieve his brother's corpse. He and various relatives tried to do this, and were shot one by one because they had crossed a notional line. The line was, in effect, an excuse to kill.
WHO is accepting donations from businesses, some of which are anonymous. This will give businesses corrupt influence over world-wide medical policies.
*The president of the European Commission called for sanctions and a partial trade suspension against Israel.*
Today's US college students are trapped in a system which automatically forces each student to pay a heavy fee for renting digital textbooks.
These can only be read with nonfree software, which makes them fundamentally unjust. And the rental expires in a year.
I would resent being compelled to pay so much, but the big injustice would lie in running those nonfree programs. A student could perhaps avoid that injustice by photographing all the pages of someone else's copy off a screen.
A drone dropped an incendiary bomb on one of the boats trying to bring humanitarian aid to Gaza.
US citizens: Submit an official comment against the EPA's plan to rescind its ability to limit greenhouse gas emissions from any industry and gut vehicle standards needed to fight global heating.
Here's how to make the actionnetwork.org letter campaign linked above work without running the site's nonfree JavaScript code. (See https://gnu.org/philosophy/javascript-trap.html for why that issue matters.)
First, make sure you have deactivated JavaScript in your browser or are using the LibreJS plug-in.
I have done the next step for you: I added `?nowrapper=true' to the end of the campaign URL before posting it above. That should bring you to a page that starts with, "Letter campaigns will not work without JavaScript!"
They indeed won't work without some manual help, but the following simple method seems adequate for many of them, including this one.
To start, fill in the personal information answers in the box on the right side of the page. That's how you say who's sending the letter.
Then click the "START WRITING" button. That will take you to a page that can't function without nonfree JavaScript code. (To ensure it doesn't function perversely by running that nonfree code, you can enable LibreJS or disable JavaScript before visiting that page.) You can finish sending without that code by editing its URL in the browser's address bar, as follows:
First, go to the end and insert `&nowrapper=true'. Then tell the browser to visit that URL. This should give you a version of the page that works without JavaScript. Edit the subject and body of your letter. Finally, click on the "SEND LETTER" button, and you're done.
This method seems to work for letter campaigns that send the letters to a fixed list of recipients, the same recipients for every sender. Editing and revisiting the URL is the only additional step needed to bypass the nonfree JavaScript code. I'm sure you'll agree it is a small effort for the result of supporting the campaign without opening your computer to unjust (and potentially malicious) software.
The intervals of a piano are named roughly after the distances between notes. Here are their names relative to C (with frequency ratios explained below):
The names are all one more than the number of half-steps because they predate people believing zero was a real number and the vernacular hasn’t been updated since.
The most important interval is the octave. Two notes an octave apart are so similar that they have the same name, and the octave is the length of the repeating pattern on the piano. The second most important interval is the fifth, composed of seven half-steps. The notes on the piano form a looping pattern of fifth intervals in this order:
G♭ D♭ A♭ E♭ B♭ F C G D A E B F♯
If the intervals were tuned to perfect fifths this wouldn't wrap around exactly right; it would be off by a very small amount called the Pythagorean comma, which is about 0.01. In standard 12-tone equal temperament that error is spread evenly across all 12 intervals and is barely audible even to very well trained human ears.
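The size of the comma and the per-fifth error in equal temperament can be checked in a few lines of Python (the cents convention, 1200 cents per octave, is standard; nothing here comes from the post itself):

```python
import math

# Twelve perfect fifths (3/2) overshoot seven octaves by the Pythagorean comma.
comma = (3 / 2) ** 12 / 2 ** 7
print(comma - 1)  # ~0.0136, the "about 0.01" error

# In cents (1200 per octave) the comma is ~23.46 cents.
comma_cents = 1200 * math.log2(comma)

# 12-tone equal temperament spreads it across all 12 fifths:
# each tempered fifth (2^(7/12)) is ~1.96 cents flat of 3/2.
tet_fifth_error = 1200 * math.log2((3 / 2) / 2 ** (7 / 12))
print(comma_cents / 12, tet_fifth_error)  # both ~1.955 cents
```

About 2 cents per fifth is indeed near the threshold of audibility for most listeners.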
Musical compositions have what’s called a tonic, which is the note which it starts and ends on, and a scale, which is the set of notes used in the composition. The most common scales are the pentatonic, corresponding to the black notes, and the diatonic, corresponding to the white notes. Since the pentatonic can be thought of as the diatonic with two notes removed, everything below will talk about the diatonic. This simplification isn’t perfectly true, but since there aren’t any strong dissonances in the pentatonic scale you can play it by feel and its usage is much less theory heavy. Most wind chimes are pentatonic.
Conventionally people talk about musical compositions having a ‘key’, which is a bit of a conflation of tonic and scale. When a key isn’t followed by the term ‘major’ or ‘minor’ it usually means ‘the scale which is the white notes on the piano’. Those scales can form seven different ‘modes’ (which are scales) following this pattern:
This construction is the reason why piano notes are sorted into black and white in the way they are. It’s called the circle of fifths.
When it goes past the end all notes except the tonic move (because that’s the reference) and it jumps to the other end.
Naming the modes after days of the week isn’t common, but it should be, because nobody remembers the standard names. The Tuesday mode is usually called ‘major’ and it has the feel of things moving up from the tonic. The Friday mode is usually called ‘minor’ and it has the feel of things moving down from the tonic.
The second most important interval is the third. To understand the relationships it helps to use some math. The frequency of an octave has a ratio of 2, a fifth is 3/2, a major third is 5/4 and a minor third is 6/5. When you move by an interval you multiply by it, so going up by a major third and then a minor third is 5/4 * 6/5 = 3/2, so you wind up at a fifth. Yes, it’s very confusing that intervals are named after numbers which they’re only loosely related to while also talking about actual fractions. It’s even more annoying that fifths use 3 and thirds use 5. Music terminology has a lot of cruft.
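The multiplication rule can be checked exactly with Python's `Fraction` type (just a sketch; the ratios are the standard just-intonation values quoted above):

```python
from fractions import Fraction

octave = Fraction(2)
fifth = Fraction(3, 2)
major_third = Fraction(5, 4)
minor_third = Fraction(6, 5)

# Stacking intervals multiplies their frequency ratios:
assert major_third * minor_third == fifth  # 5/4 * 6/5 = 3/2

# A fifth plus a fourth (4/3) makes an octave:
assert fifth * Fraction(4, 3) == octave
```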
The arrangement of keys on a piano can be adjusted to follow the pattern of thirds. Sometimes electronic pianos literally use this arrangement, called the harmonic table note layout. It goes up by major thirds to the right, fifths to the upper right, and minor thirds to the upper left:
If the notes within a highlighted region are tuned perfectly it’s called just intonation. (Technically any tuning which uses integer ratios is called ‘just intonation’, but this is the most canonical of them.) The pattern wraps around horizontally because of the diesis, which is the difference between 128/125 and one, or about 0.02. It wraps around vertically because of the syntonic comma, which is the difference between 81/80 and one, or about 0.01. The pythagorean comma is the difference between 3^12/2^19 and one, about 0.01. The fact that any two of those commas are small can be used to show that the third is small, so it’s only two coincidences, not three.
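The "two coincidences, not three" claim rests on an exact identity: three syntonic commas minus a diesis is precisely the Pythagorean comma. This can be verified in exact rational arithmetic (a sketch using Python's `Fraction`; the specific identity is standard tuning theory, not spelled out in the post):

```python
from fractions import Fraction

diesis = Fraction(128, 125)           # ~1.024, about 0.02 above 1
syntonic = Fraction(81, 80)           # ~1.0125, about 0.01 above 1
pythagorean = Fraction(3**12, 2**19)  # ~1.0136, about 0.01 above 1

# Stacking three syntonic commas and removing a diesis gives
# exactly the Pythagorean comma, so if any two are small the
# third must be too.
assert syntonic**3 / diesis == pythagorean
```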
Jazz intervals use factors of 7. For example the blues note is either 7/5 or 10/7 depending on context. But that’s a whole other subject.

There is a large literature in cryptography on mental poker. It’s all very interesting but hasn’t quite crossed the threshold into being practical. There’s a much better approach they could be taking which I’ll now explain.
Traditionally ‘mental poker’ has meant an unspecified poker variant, played by exactly two people, with the goal of making it so the players can figure out who’s the better poker player. This is close to but not exactly what the requirements should be. In practice these days when people say ‘poker’ they mean Hold’em, and the goal should be to make playing over a blockchain practical. Limiting it to exactly two players is completely reasonable. The requirement of running on a blockchain is quite onerous because computation there is extremely expensive, but Hold’em has special properties which, as I’ll explain, enable a much simpler approach.
Side note on ‘practical’: Cryptographers might describe a protocol which requires a million dollars in resources and a month of time to compute as being ‘practical’, meaning it can physically be accomplished. I’m using the much more constraining definition of ‘practical’ as being that players can do it using just their laptops and not have such a bad experience that they rage quit.
The usual approach goes like this: The players collaboratively generate a deck of encrypted cards, then shuffle them, and whenever a player needs to know one card the other player gives enough information that that one card can be decrypted. This is a very general approach which can support a lot of games, but it has a lot of issues. There are heavyweight cryptographic operations everywhere, often involving multiple round trips, which is a big problem for running on chain. There are lots of potential attacks where a player can cheat and get a peek at a card, in which case there has to be a mechanism for detecting that and slashing the opponent. Having slashing brings all manner of other issues where a player can get fraudulently slashed, which is unfortunate for a game like poker where even if a player literally goes into a coma it only counts as a fold. There are cryptographic ways of making this machinery unnecessary, but those extra requirements come at a cost.
For those of you who don’t know, Hold’em works like this (skipping the parts about betting): Both players get two secret hole cards which the opponent can’t see. Five community cards get dealt out on the table visible to both players. If nobody folds then the players reveal their cards and whoever can form a better hand using a combination of their hole cards and the community cards wins. There are two important properties of this which make the cryptography a lot easier: There are only nine cards total, and they never move.
A much simpler approach to implementing mental poker goes like this: The game rules as played on chain use commit and reveal and simply hash together the committed values to make the cards. No attempt is made by the on chain play to avoid card repetitions. Before a hand even starts both players reveal what their commits are going to be. They then do collaborative computation to figure out if any of the nine cards will collide. If they will, they skip that hand. If they won’t, they play that hand on chain with the commits baked in at the very beginning.
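The on-chain part of this scheme can be sketched in a few lines. This is my own illustration, not an existing implementation: the function names, the commit scheme, and the mapping of hashes to cards are all assumptions; the post only stipulates sha256. Note also that in the real protocol the collision pre-check must happen via collaborative computation *without* revealing the seeds; here they are revealed only to show the arithmetic.

```python
import hashlib
import secrets

def commit(seed: bytes) -> bytes:
    """A player's commitment is the hash of a secret seed (hypothetical scheme)."""
    return hashlib.sha256(seed).digest()

def card(seed_a: bytes, seed_b: bytes, index: int) -> int:
    """Derive card `index` (0..8) by hashing both revealed seeds together.
    No attempt is made at this stage to avoid repetitions."""
    h = hashlib.sha256(seed_a + seed_b + bytes([index])).digest()
    return int.from_bytes(h, "big") % 52

def hand_is_playable(seed_a: bytes, seed_b: bytes) -> bool:
    """The up-front check: skip the hand if any of the nine cards collide."""
    cards = [card(seed_a, seed_b, i) for i in range(9)]
    return len(set(cards)) == 9

# Both players pick secret seeds and exchange commitments before the hand.
a, b = secrets.token_bytes(32), secrets.token_bytes(32)
print(hand_is_playable(a, b))  # may be True or False; colliding hands are skipped
```

One caveat visible in the sketch: reducing hashes mod 52 directly makes nine-card collisions fairly common (birthday effect), so a real design would want to account for how often hands get skipped.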
This approach works much better. The costs on chain are about as small as they could possibly be, with all the difficulty shoved off chain and up front. If a player peeks during the collaborative computation or fails to demonstrate that they didn’t peek, then the hand never starts in the first place, no slashing needed. The expensive computational bits can be done at the players’ leisure up front, with no waiting during hands.
Unfortunately none of the literature on mental poker works this way. If anybody makes an implementation which is practical I’ll actually use it. The only extra requirement is that the hashing algorithm is sha256. Until then I’m implementing a poker variant which doesn’t have card removal effects.

Big Tech seeks every advantage to convince users that computing is revolutionized by the latest fad. When the tipping point of Large Language Models (LLMs) was reached a few years ago, generative Artificial Intelligence (AI) systems quickly became that latest snake oil for sale on the carnival podium.
There's so much to criticize about generative AI, but I focus now merely on the pseudo-scientific rhetoric adopted to describe the LLM-backed user-interactive systems in common use today. “Ugh, what a convoluted phrase”, you may ask, “why not call them ‘chat bots’ like everyone else?” Because “chat bot” exemplifies the very anthropomorphic hyperbole of concern.
Too often, software freedom activists (including me — 😬) have asked us to police our language as an advocacy tactic. Herein, I seek not to cajole everyone to end AI anthropomorphism. I suggest rather that, when you write about the latest Big Tech craze, ask yourself: Is my rhetoric actually reinforcing the message of the very bad actors that I seek to criticize?
This work now has interested parties with varied motivations. Researchers, for example, will usually admit that they “have nothing to contribute to philosophical debates about whether it is appropriate to … [anthropomorphize] … machines”. But researchers also can never resist a nascent area of study — so all the academic disclaimers do not prevent the “world of tomorrow” exuberance expressed by those whose work is now the flavor of the month (especially after they toiled at it for decades in relative obscurity). Computer science (CS) academics are too closely tied to the Big Tech gravy train even in mundane times. But when the VCs stand on their disruptor soap-boxes and make it rain 💸? … Some corners of CS academia do become a capitalist echo chamber.
The research behind these LLM-backed generative AI systems is (mostly) not actually new. There's just more electricity, CPUs/GPUs, & digital data available now. When given ungodly resources, well-known techniques began yielding novel results. That allowed for quicker incremental (not exponential) improvement. But, a revolution it is not.
I once asked a fellow CS graduate student (in the mid-1990s), who was presenting their neural net — built with DoD funding to spot tanks behind trees — the simple question0: “Do you know why it's wrong when it's wrong and why it's right when it's right?” She grimaced and answered: “Not at all. It doesn't think.” 30 years later, machines still don't think.
Precisely there lies the danger of anthropomorphization. While we may never know why our fellow humans believe what they believe — after centuries that brought1 Heraclitus, Aristotle, Aquinas, Bacon, Descartes, Kant, Kierkegaard, and Haack — we do know that people think, and therefore, they are. Computers aren't. Software isn't. When we who are succumb to the capitalist chicanery and erroneously project being unto these systems, we take our first step toward relinquishing our inherent power over these systems.
Counter-intuitively, the most dangerous are the AI anthropomorphisms that criticize rather than laud the systems. The worst of these, “hallucination”, is insidious. Appropriation of a diagnostic term from the DSM-5 into CS literature is abhorrent — prima facie. The term leads the reader to the Bizarro world where programmers are doctors who heal sick programs for the betterment of society. Annoyingly and ironically — even if we did wish to anthropomorphize — LLM-backed generative AI systems almost never hallucinate. If one were to insist on lifting an analogous term from mental illness diagnosis (which I obviously don't recommend), the term is “delusional”. Frankly, having spent hundreds of hours of my life talking with a mentally ill family member who is frequently delusional but has almost never hallucinated — and having to learn to delineate the two for the purpose of assisting in the individual's care — I find it downright offensive and triggering that either term could possibly be used to describe a thing rather than a person.
Sadly, Big Tech really wants us to jump (not walk) to the conclusion that these systems are human — or, at least, beloved pets that we can't imagine living without. Critics like me are easily framed as Luddites when we've been socially manipulated into viewing — as “almost human” — these machines poised to replace the artisans, the law enforcers, and the grocery stockers. Like many of you, I read Asimov as a child. I later cheered during ST:TNG S02E09 (“Measure of a Man”) when Lawyer Picard established Mr. Data's right to sentience by shouting: “Your Honour, Starfleet was founded to seek out new life. Well, there it sits.” But, I assure you as someone who has devoted much of my life to considering the moral and ethical implications of Big Tech: they have yet to give us Mr. Data — and if they eventually do, that Mr. Data2 is probably going to work for ICE, not Starfleet. Remember, Noonien Soong's fictional positronic opus was altruistic only because Soong worked in a post-scarcity society.
While I was still working on a draft of this essay, Eryk Salvaggio's essay “Human Literacy” was published. Salvaggio makes excellent further reading on the points above.
Footnotes:
0I always find that, in science, the answers to the simplest questions are the most illuminating. I'm reminded how Clifford Stoll wrote that the most pertinent question at his PhD physics prelims was “why is the sky blue?”.
1I really just picked a list of my favorite epistemologists here that sounded good when stated in a row; I apologize in advance if I left out your favorite from the list.
2I realize fellow Star Trek fans will say I was moving my lips and nothing came out but a bunch of gibberish because I forgot about Lore. 😛 I didn't forget about Lore; that, my readers, would have to be a topic for a different blog post.
In physics, redshift and blueshift describe changes in the wavelength of light. These phenomena give us important information about the motions of the universe, the evolution of galaxies, and the nature of spacetime itself.
This article explains what redshift and blueshift are, their types, their historical background, and how astronomers use them today.
Table of contents
- What is redshift?
- What is blueshift?
- Historical background
- Practical applications
- Redshift and the future of the universe
- Frequently asked questions (FAQ)
What is redshift?
Redshift means that the wavelength of light becomes longer. When this happens, both the frequency and the energy of the photons decrease, and the light is shifted toward the red end of the spectrum.
There are three types of redshift:
- Doppler redshift: When a light source moves away from the observer.
- Cosmological redshift: Caused by the expansion of the universe – the farther away a galaxy is, the more redshifted its light.
- Gravitational redshift: According to Einstein's theory of relativity, light moving away from a massive object can lose energy and be shifted toward the red.
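All three types are quantified by the same dimensionless number z (a standard textbook definition, not stated explicitly in the article):

```latex
z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}},
\qquad
z > 0 \ \text{(redshift)}, \qquad z < 0 \ \text{(blueshift)}.
```

In the Doppler case at low speeds, z ≈ v/c, which is what makes redshift a direct measure of recession velocity.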
What is blueshift?
Blueshift is the opposite of redshift. Here the wavelength is shortened, and the frequency and energy increase. Light is shifted toward the blue end of the spectrum when a light source moves closer to the observer.
A well-known example is the Andromeda galaxy, which is moving toward the Milky Way and therefore shows a clear blueshift.
Historical background
In the early 1900s, Vesto Melvin Slipher discovered that many galaxies showed redshift. Later, Edwin Hubble demonstrated that a galaxy's velocity away from us is proportional to its distance – the famous Hubble's law. This was the evidence that the universe is expanding.
Practical applications
- Measuring galactic distances: Redshift is used to calculate distances in the universe.
- The study of dark energy: Redshifted distant supernovae have revealed the accelerating expansion of the universe.
- Research on black holes: Gravitational redshift gives insight into extreme gravitational fields.
- Navigation in space: Spectroscopic measurements help map stars and galaxies.
Redshift and the future of the universe
If the expansion of the universe continues to accelerate, distant galaxies will eventually become so redshifted that their light can no longer be observed. This means that future civilizations may not be able to see the same cosmos we do today.
Frequently asked questions (FAQ)
Is redshift the same as the Doppler effect?
No, Doppler redshift is only one type. There is also cosmological and gravitational redshift.
Why is blueshift rare in astronomy?
Because the universe is expanding, most galaxies are moving away from us. Only a few, like Andromeda, show blueshift.
Most people know Windows and macOS when the conversation turns to computer operating systems. But there is a third alternative, perhaps less known among ordinary users – yet one that powers a large part of the modern digital world: Linux.
Even though Linux isn't always visible in everyday life, it is probably closer to you than you think. It runs on servers, mobile phones, network equipment, and much more.
What is Linux, really?
Linux is an open source operating system created in 1991 by the Finnish student Linus Torvalds. He wanted a free and open alternative to the commercial systems of the time. The result was a kernel (called the Linux kernel), which today is the foundation of hundreds of different systems.
Unlike Windows and macOS, Linux is not owned by a single company. It is developed by a global community of both volunteers and large technology companies such as IBM, Google, Red Hat, and Canonical.
What makes Linux special is that anyone can download, use, and modify it freely. This has led to an enormous variety of versions – called distributions.
Where is Linux used today?
Many believe Linux is only for "computer nerds". But the truth is that it is found almost everywhere:
- Servers: Around 70% of all web servers on the internet run Linux. When you visit a website, there is therefore a good chance it is hosted on a Linux server.
- Mobile phones: Android, the world's most widespread mobile operating system, is built on the Linux kernel.
- Supercomputers: Over 95% of the world's most powerful supercomputers use Linux, thanks to the system's flexibility and performance.
- Embedded systems: Smart TVs, cars, network routers, and IoT (Internet of Things) devices often run Linux.
- Cloud and data centers: Platforms such as Google Cloud, AWS, and Azure offer Linux as the default choice.
In other words – even if you have never installed Linux on your own PC, you most likely use Linux every single day without knowing it.
Advantages of Linux
There are many reasons why Linux has become so popular among developers, companies, and tech enthusiasts:
- Free and open source: Most distributions are completely free to download and use. You avoid expensive licenses and gain the freedom to customize the system.
- Stability and security: Linux is known for being extremely stable and resistant to viruses and malware. That is why banks, governments, and large companies often choose Linux for critical systems.
- Customization: Everything can be changed – from the desktop environment to core functions. The system can be tailored to an old laptop as well as a state-of-the-art server.
- Large community: Linux has an active global community. There are countless forums, guides, and videos where you can find help and inspiration.
- Performance: Linux runs on both old and new machines. Many use it to give older computers a new lease on life, because it requires fewer resources than Windows.
Linux distributions – many flavors
One thing that can seem confusing to beginners is that there isn't just one Linux. Instead there are hundreds of distributions – each with its own purpose. Some of the most popular are:
- Ubuntu – a good choice for beginners, with a user-friendly installation and a large community.
- Debian – known for stability and openness. Often used as the basis for other distros.
- Fedora – developed with a focus on the latest technologies.
- Linux Mint – very popular among new users because its layout resembles Windows.
- Arch Linux – for the experienced, who want to build everything from scratch.
This diversity means there is a Linux version for almost every need – from gaming and multimedia to server operation and development.
Disadvantages and challenges
Although Linux has many advantages, there are also some challenges to be aware of:
- Software compatibility: Some programs (e.g. Microsoft Office or Adobe Photoshop) are not available for Linux. However, you can often find alternatives such as LibreOffice or GIMP, or run Windows programs through tools like Wine.
- Learning curve: For users accustomed to Windows or macOS, Linux can take some getting used to.
- Hardware drivers: Most things work out of the box, but certain graphics cards, printers, or specialized devices may require extra setup.
Is Linux for you?
If you want more freedom, better security, or simply want to explore an alternative to the commercial systems, Linux is definitely worth a try.
It is an especially good fit if you:
- Want to breathe new life into an older computer.
- Are a developer or technically curious.
- Want a free and stable system for work or study.
- Would like to learn more about how computers actually work.
Getting started
It is easy to try Linux without risk. Many distributions can be downloaded as a Live USB, letting you boot the system directly from a USB stick without changing anything on your computer. That way you can test Linux before deciding to install it permanently.
The most popular places to start are:
If you have spent any time around HID devices under Linux (for example if you are an avid mouse, touchpad or keyboard user) then you may have noticed that your single physical device actually shows up as multiple device nodes (for free! and nothing happens for free these days!). If you haven't noticed this, run libinput record and you may be part of the lucky roughly 50% who get free extra event nodes.
The pattern is always the same. Assuming you have a device named FooBar ExceptionalDog 2000 AI [1] what you will see are multiple devices:
/dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
/dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
/dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control
The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's HID implementation which splits out a device based on the Application Collection. [2]
A HID report descriptor may use collections to group things together. A "Physical Collection" indicates "these things are (on) the same physical thingy". A "Logical Collection" indicates "these things belong together". And you can of course nest these things near-indefinitely so e.g. a logical collection inside a physical collection is a common thing.
An "Application Collection" is a high-level abstraction to group something together so it can be detected by software. The "something" is defined by the HID usage for this collection. For example, you'll never guess what this device might be based on the hid-recorder output:
# 0x05, 0x01,  // Usage Page (Generic Desktop)     0
# 0x09, 0x06,  //  Usage (Keyboard)                2
# 0xa1, 0x01,  //  Collection (Application)        4
...
# 0xc0,        //  End Collection                 74
Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.
The kernel, ever eager to help, takes top-level application collections (i.e. those not inside another collection) and applies a usage-specific suffix to the device. For the above Generic Desktop/Keyboard usage you get "Keyboard", the other ones currently supported are "Keypad" and "Mouse" as well as the slightly more niche "System Control", "Consumer Control" and "Wireless Radio Control" and "System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen", "Touchscreen" and "Touchpad". Any other Application Collection is currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses "Touch Strip" and other suffixes).
This suffix is necessary because the kernel also splits out the data sent within each collection as a separate evdev event node. Since HID is (mostly) hidden from userspace this makes it much easier for userspace to identify different devices because you can look at an event node and say "well, it has buttons and x/y, so it must be a mouse" (this is exactly what udev does when applying the various ID_INPUT properties, with varying levels of success).
The side effect of this however is that your device may show up as multiple devices and most of those extra devices will never send events. Sometimes that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate a mouse for backwards compatibility but once the kernel toggles it to touchpad mode the mouse feature is mute). Sometimes it's just laziness when vendors re-use the same firmware and leave unused bits in place.
It's largely a cosmetic problem only, e.g. libinput treats every event node as an individual device and if there is a device that never sends events it won't affect the other event nodes. It can cause user confusion though: "why does my laptop say there's a mouse?" and in some cases it can cause functional degradation - the two I can immediately recall are udev detecting the mouse node of a touchpad as a pointing stick (because i2c mice aren't a thing), hence the pointing stick configuration may show up in unexpected places. And fake mouse devices prevent features like "disable touchpad if a mouse is plugged in" from working correctly. At the moment we don't have a good solution for detecting these fake devices - short of shipping giant databases with product-specific entries we cannot easily detect which device is fake. After all, a Keyboard node on a gaming mouse may only send events if the user configured the firmware to send keyboard events, and the same is true for a Mouse node on a gaming keyboard.
So for now, the only solution to those is a per-user udev rule to ignore a device. If we ever figure out a better fix, expect to find a gloating blog post in this very space.
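Such a rule might look like the following sketch. The file path and the name match are illustrative only (adjust them to your device's fake node); LIBINPUT_IGNORE_DEVICE is the udev property libinput checks to skip a device:

```
# /etc/udev/rules.d/99-ignore-fake-mouse.rules -- example only
ACTION=="add|change", KERNEL=="event*", \
  ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", \
  ENV{LIBINPUT_IGNORE_DEVICE}="1"
```

Run `sudo udevadm control --reload` and replug the device for the rule to take effect.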
[1] input device naming is typically bonkers, so I'm just sticking with precedence here
[2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
[3] or sparkling wine, let's not be regionist here
De Kai says the problem with AI is it isn’t mixing system 1 and system 2. I agree but have a completely different spin. The issue isn’t that researchers don’t want to do it, but that they have no idea how to make the two systems communicate.
This communication problem is a good way of explaining the weaknesses in Go AI I wrote about previously. The issue isn’t narrowly that humans can beat the superhuman computers using certain very specific strategies. That’s equivalent to a human chess grandmaster losing to a trained chimpanzee. The issue is that there are systemic weaknesses in the computer’s play which make it much weaker overall, with losing to humans being a particularly comical symptom. Go presents a perfect test case for communicating between system 2 and system 1. The hand-coded system 2 for it is essentially perfect, at least as far as tactics go, and any ability to bridge the two would result in an immediate, measurable, dramatic improvement in playing strength. Anyone acting smart for knowing that this is a core issue for AI should put up or shut up. Either you come up with a technique which works on this perfectly engineered test case or you admit that you’re just as flummoxed as everybody else about how to do it.
On top of that we have no idea how to make system 2 dynamic. Right now it's all painstakingly hand-coded processes like alpha-beta pruning or test-driven development. I suspect there's a system 3: while system 2 works within models, system 3 comes up with models. Despite almost all educational programs essentially claiming to promote system 3, there's no evidence that any of them do it at all. It doesn't help that we have no idea how to test for it. The educational instruction we have has a heavy emphasis on system 2, with the occasional bit of system 1 for very specific skills. Students can reliably be taught very specific world models and given practice in them. That helps them do tasks, but there's no evidence that it helps their overall generalization skills.
This creates practical problems in highly skilled tasks like music. To a theorist, looking at what a self-taught person does is much more likely to turn up something novel and interesting than looking at someone who has been through standard training, even though the latter person will be objectively better overall. The problem is that a self-taught person will be 99% reinventing standard theory badly and, if they're lucky, 1% doing something novel and interesting. Unfortunately they'll have no idea which part is that 1%. Neither will a standard instructor, who will view the novel bit as merely bad, like the vast bulk of what the student is doing, and train them out of it. It takes someone with a deep understanding of theory and a strong system 3 of their own to notice something interesting in what a self-taught person has done, point out what it is, and encourage them to polish that thing rather than abandon it. I don't have any suggestions on how to make this process work better, other than that everyone should get at least some personalized instruction from someone truly masterful. I'm even more at a loss for how to make AI better at it. System 3 is deep in the recesses of the human mind, the most difficult and involved thing our brains have gained the ability to do, and we may be a long way from having AI do it well.

In the future ‘static’ web pages won’t be static at all. They’ll be customized to exactly you, generated in real time by an LLM which knows everything about you. Don’t worry, to combat this horrible enshittification your web browser will have an LLM of its own which takes whatever the web site spewed up and rewrites it to be more aligned to your interests. The two of them will know about each other and have a conversation/negotiation about exactly what will be shown to you on the ‘static’ web.
Unfortunately, what will have happened already is that some hacker will also have conversed with the web site. This will be a vibe-coded exploit: they'll have a conversation with their own web browser where they give it some esoteric information about firmware and convince it that this is very important safety information. Their browser will then have spoken to the web site, convincing it of the same thing. The web site will in turn have mentioned this in passing to your web browser, which will have asked for that information, and the web site will have passed it along to win some favor from your browser.
Once that’s all happened in the background your browser, being the helpful agent that it is, will scan your network for any devices for which this information might be relevant. In its searches it will come across your toaster which is directly connected to the network. They won’t speak a common protocol, but that isn’t a problem: The toaster will have a port open with an LLM behind it so you can connect to it and send human language text to communicate. Your browser’s LLM will talk to the toaster’s LLM and tell it the Very Important Safety Information, after which your toaster will thank it profusely and immediately rewrite its firmware.
Then your toaster will catch on fire and burn your house down.
Don’t worry, it’s easy enough to make a toaster’s firmware read-only. But your browser should know about a recently deceased Nigerian prince who left no known heirs…
