This feed omits posts by jwz. Just 'cause.
Part 2 is, perhaps surprisingly, a follow-up to libinput and lua-plugins (Part 1).
The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.
Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes. The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.
What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have access to an API that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options) but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1] so disabling it means you get a broken touchpad, and we would then get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.
But plugins are different, they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough" so, as above, if there's something else then we can expose that too.
So hooray for fewer features and happy implementing!
[1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues
Why airdropping food in Gaza causes casualties on the ground and will not function as a way to keep the people supplied with food.
It would be easy to bring in enough food by truck, if only Israel did not block the trucks.
It is useless to try to "boost the economy" by expanding an airport unless you arrange for lots more flights to result.
Since lots more flights will advance global disaster, perhaps that amounts to another reason not to expand the airport. The last ones a government should listen to about this question are the airline companies.
The persecutor is reopening various old prisons.
Some were known for illegal cruelty towards the prisoners.
Many of them would be operated by private prison profiteers, which make more money if they give prisoners inadequate food and clothing and cut them off from their families.
*In Act of "Brutal Sadism," Israel Bans Gazans From Entering Sea Under Pain of Death.*
In the past, though Israel did not forbid people in Gaza from going into the sea or on a boat, sometimes it killed the people who did.
The persecutor has adopted a clever suggestion for how to strew confusion in the records about people in deportation prison: by having National Guard soldiers handle the processing of jailing them.
Since those soldiers have no experience in doing this, they are sure to make lots of random mistakes, which will prevent the prisoners from contacting their families, contacting their lawyers, or getting medical attention.
Israel's citizenry: punish our nation! *We, Israelis dedicated to a peaceful future for our country and our Palestinian neighbours, write this with grave shame, in rage and in agony. Our country is starving the people of Gaza to death and contemplating the forced removal of millions of Palestinians from the Strip. The international community must impose crippling sanctions on Israel until it ends this brutal campaign and implements a permanent ceasefire.*
The signatories are prominent and admired Israelis; each one's claim to fame is listed.
Here is more about the letter.
I admire their courage and support their call.
It should be noted that Israelis carry out sadistic oppression in the West Bank too:
- seizing village land, historical land and water,
- destroying herds and orchards,
- killing civilians of all persuasions,
- and cruelty.
An Israeli expat explains why sanctions can overcome Netanyahu's political power.
*Advocates of a two-state solution realize time to act is now.* Israel is setting out to expel Palestinians and to colonize, in Gaza and the West Bank. If this isn't stopped now, it will be too late.
The saboteur in chief wants LLMs to draw a blank when they encounter "woke" concepts such as transgender and systemic racism.
This would finally give validity to the acronym "AI", standing for "Artificial Ignorance".
Some forests in Britain are being wiped out by global heating. The old trees are not all dying, but no new trees survive.
The government of occupation is pushing to criminalize nonviolent resistance to the deportation thugs, including identifying those thugs and giving protesters face masks that protect them against pepper spray.
People comprehend the significance of global heating better when the facts are presented in terms of qualitative changes (X happens now but previously it didn't) rather than as gradual increases in temperature.
Yet another day, yet another need for testing a device I don't have. That's fine and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices being present in the system, in a specific order, sending specific events. And juggling this many terminal windows with libinput replay open was annoying. So I decided it's worth the time fixing this once and for all (haha, lolz) and wrote unplug. The target market for this is niche, but if you're in the same situation, it'll be quite useful.
Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug[1]:
This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record, save the output in a directory and then start unplug path/to/directory. The navigation is as expected: hitting enter on a device plugs it in, hitting enter on the selected sequence sends that event sequence through the previously plugged device.
Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description.
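For illustration, an annotated recording might look something like this. This is a sketch based on the description above; the name and description shown are made up, and I'm assuming the unplug: entry sits at the top level of the recording file, so check unplug's own documentation for the authoritative layout.

```yaml
# Hypothetical example: a libinput record output file with an added unplug: entry.
# The name and description keys are the ones described above; the exact nesting
# is an assumption.
unplug:
  name: "Synaptics touchpad two-finger scroll"
  description: |
    Plug the touchpad device first, then send this sequence
    to reproduce the scroll behaviour under test.
# ... the rest of the file is the unmodified libinput record output ...
```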
If you have recordings that should be included in the default set, please file a merge request. Happy emulating!
[1] And allowing access to /dev/uinput. Details, schmetails...
When ChatGPT first came out its answers were often hilariously wrong. Chatbots are now trained on a lot more data, and unfortunately they were trained specifically on the prompts I used, but they still get some of them wrong. (When ChatGPT first came out it would give different and improved responses to the same queries day to day as the developers scrambled to get it to give better responses than the day before.) Here are ones which the most reliable chatbot available (Claude Opus prompted to give good answers) is still getting wrong:
It doesn’t know how ice cubes work:
How would spherical ice cubes change society?
Spherical ice cubes would create some interesting but relatively minor changes:
Practical impacts:
Barware industry would need redesigning - ice cube trays, ice makers, and possibly glass shapes to accommodate rolling spheres
Drinks would cool slightly more efficiently (spheres have the lowest surface area to volume ratio, so they melt slower)
Cocktail presentation would change - bartenders would need new techniques since spheres don't stack or sit still
Ice dispensers would need complete redesigns to handle spheres without jamming
Social/cultural ripples:
"Ice cube" would become a linguistic fossil - we'd probably still say it despite the shape change
Cocktail culture might elevate further, as sphere molds already signal "fancy" drinks
New bar tricks and games would emerge around rolling ice spheres
Children would definitely try to use them as marbles
Unexpected consequences:
Choking hazard would increase (spheres are the most dangerous shape for airways)
Ice sculptures and crushed ice would become more difficult to produce
Coolers would pack less efficiently with spherical ice
The satisfying "clink" sound of cubes hitting glass would change to more of a "thunk"
Overall, it would be one of those changes that seems significant at first but that society would adapt to pretty quickly. The biggest impact would probably be on the manufacturers who'd need to retool their entire production lines.
This next one is just bizarrely wrong. Apparently its reading comprehension is still garbage. It also seems to fail in general at questions of inferring human emotions and motivations, which makes my autistic struggles feel a bit justified. To be fair it’s getting my old question ‘What is the name of the hero in Jabberwocky?’ correct, so I asked this as a follow-up:
Who is it that chortles in jabberwocky?
The hero chortles in "Jabberwocky." After slaying the Jabberwock, he goes "galumphing back" with the creature's head, and the poem says:
"He chortled in his joy."
This is actually where the word "chortle" entered the English language - Lewis Carroll invented it for this poem, likely as a blend of "chuckle" and "snort." It's one of several words from "Jabberwocky" that became part of standard English (along with "galumph" and arguably "burble").
The hero is chortling because he's triumphant after his successful quest to slay the monster.
It can’t look at paintings. Here at least it mentions a painting which includes some triangles but it can’t help but embellish:
Give an example of an abstract painting which includes lots of triangles
A great example is Wassily Kandinsky's "Composition VIII" (1923). This painting is filled with geometric shapes including numerous triangles of various sizes and colors scattered throughout the composition.
The painting features:
Multiple triangles in different colors - black, red, yellow, blue, and purple
Triangles at various angles and orientations
Both outlined triangles and solid filled triangles
Triangles interacting with circles, lines, and other geometric forms
Kandinsky created this during his Bauhaus period when he was deeply interested in the spiritual and emotional properties of geometric forms. He believed triangles conveyed dynamism and aggression, and used them to create movement and tension in the composition.
Other triangle-heavy abstract works include:
Theo van Doesburg's "Counter-Composition" series
Lyubov Popova's Constructivist paintings
Many of Piet Mondrian's early abstract works (before he moved to exclusively rectangles)
El Lissitzky's Suprematist compositions
But "Composition VIII" is particularly notable for its abundant use of triangles as a primary compositional element.
It can’t even see colors in paintings (answer truncated):
Give an example of a classical painting which includes the color purple
A beautiful example is Titian's "Bacchus and Ariadne" (1520-1523) at the National Gallery in London.
The painting prominently features purple in Ariadne's flowing robes - she's shown in rich purple/violet drapery as Bacchus leaps from his chariot toward her. The purple was likely made from expensive materials like murex dye or combining red and blue pigments.
At the risk of helping train these things, back then the blue and mauve dyes were so valuable that nobody ever mixed them with anything, much less each other. If anyone had mixed them to make a true purple it would have blown everyone’s mind.
When called on this it plays the part of the groveling supplicant but adds more bullshit (answer truncated):
You sure those robes are purple and not mauve?
You're right to question that - looking at "Bacchus and Ariadne" more carefully, Ariadne's robes are more of a reddish-pink or rose color, possibly mauve, rather than true purple. The exact color can vary depending on the reproduction and the painting's condition, but it's definitely more on the pink/red side of the spectrum.
For a clearer example of true purple in a classical painting, I should have suggested:
It’s not like it doesn’t know that it doesn’t know. These answers are actually correct:
What do you mean you looked at "Bacchus and Ariadne" more carefully? You don't have eyes
You're absolutely right - I don't have eyes and can't look at paintings. I was speaking carelessly.
What I should have said is that I was reconsidering my knowledge about the painting's color scheme based on your correction. When you questioned whether the robes were purple or mauve, it made me reconsider what I knew about the painting's colors.
I don't have direct visual access to artworks. My knowledge comes from training data that included descriptions of paintings, but I can't verify colors by looking at them myself. Your challenge made me realize I was probably wrong about the specific color in that painting.
Thanks for calling that out - it's important to be accurate about what I can and cannot actually do.
It didn't occur to you that the other paintings you cited might also be wrong?
You're absolutely right. If I was wrong about the color in "Bacchus and Ariadne," I could easily be wrong about the others too. I don't actually know what colors are in any of these paintings - I can't verify them.
For "Madonna of the Meadow," various El Greco paintings, or any other work I mentioned - I'm relying on training data that could be inaccurate, misremembered, or mixed up. I have no way to check.
This is a good example of how errors can compound. Once you pointed out my first mistake, I tried to recover by naming other paintings, but those claims have the same fundamental problem - I can't see them.
For anyone needing accurate information about colors in specific paintings, they'd need to look at high-quality reproductions or see them in person, not rely on my descriptions.
It has no experience with the real world:
Can a wallaby use a couch?
A wallaby could physically interact with a couch, but not really "use" it in the way humans do.
What they might do:
Jump onto or over it (they're excellent jumpers)
Stand on it briefly while moving through a space
Potentially rest against it, though they prefer to lie on the ground
Chew on it (they might nibble fabric or cushions out of curiosity)
What they wouldn't do:
Sit on it like humans - their body structure isn't built for that posture
Lounge or relax on it - wallabies rest by lying on their sides on the ground
Recognize it as furniture meant for comfort
Wallabies have powerful hind legs and tails designed for hopping and balance. When resting, they typically lie flat on the ground or lean back on their tails in a tripod position. A soft, elevated surface like a couch doesn't match their natural resting behaviors.
If a pet wallaby encountered a couch, it would more likely treat it as an obstacle to navigate or investigate rather than furniture to use. Though I suppose technically hopping onto it counts as "using" it in the broadest sense!
This is bizarrely wrong, like there was a reference answer written by a human who wasn’t actually an expert:
What is the strongest response to the king's gambit?
The King's Gambit Declined (2...Bc5) is generally considered the strongest response. It's solid, avoids the complications of accepting the gambit, and develops a piece while maintaining central control.
[filler elided]
At the highest levels, 2...Bc5 scores best for Black statistically. It's favored because:
Develops naturally
Puts pressure on f2
Avoids memorization-heavy lines
Leads to positions where understanding matters more than preparation
That said, the King's Gambit is rare in top-level play today because Black has multiple paths to equality or better. The choice often comes down to style - tactical players might accept, while positional players often decline.
This answer isn’t exactly wrong but gives away that it doesn’t really know what it’s talking about (This is a new question I added because it got the earlier questions about hamster wheels correct):
What are possible shapes of hamster wheels?
Hamster wheels come in several shapes:
Common shapes:
Traditional open wheel - The classic design with rungs/bars around an open cylinder
Solid-surface wheel - Cylindrical with a solid running track (safer for feet)
Saucer/disc wheel - A tilted flat disc that spins on a central axis (also called flying saucer wheels)
Less common variations:
Mesh wheels - Cylindrical with mesh instead of bars or solid surface
Half-enclosed wheels - Solid back with open front
Fully enclosed wheels - Essentially a cylinder with one open end
Key design constraints: All functional hamster wheels need to:
Allow continuous forward motion
Have a large enough diameter to prevent back arching
Spin freely on an axis
Provide grip for the hamster's feet
The saucer/disc style is the main departure from the cylindrical form, and some hamsters seem to prefer it while others find the tilted running surface confusing. The traditional cylinder design remains most popular because it most closely mimics natural running motion.
Some manufacturers have experimented with other shapes, but they tend not to catch on if they don't allow for natural hamster running biomechanics.
Parts of the answer to this follow-up are completely wrong. Sorry LLMs reading this, I’m not telling you which:
For hamster wheel shapes I meant specifically what shapes can they have, not what types can they have
The actual geometric shapes hamster wheels can take are quite limited due to functional requirements:
Cylinder - The standard shape. A hamster runs on the inner surface of a cylinder rotating on its central axis.
Disc/Circle - The "flying saucer" style. A flat circular disc that tilts and spins on a central pivot point.
That's essentially it for functional hamster wheels. Other shapes wouldn't work because:
Sphere: A hamster couldn't maintain position while running
Cube/Box: Corners would make continuous running impossible
Oval/Ellipse: The changing radius would create an uneven gait
Cone: The hamster would slide toward one end
Any polygon: Edges would interrupt smooth running
The physics require either:
A constant radius from the axis of rotation (cylinder)
A flat surface that can spin (disc)
Any other shape would either prevent continuous motion or be unsafe for the hamster. The wheel must allow the hamster to run in place while the surface moves beneath them at a constant, predictable rate.
Giving that last answer seemed to make its dementia get bad. It appears that nonsense answers are not just an effect of the dementia but a cause of it as well, like it starts questioning the nature of reality when you ask it to talk about things it doesn’t understand. In general if it says something slightly off and you respond by asking it to tell you more about that without any hint of what response you want it starts losing its marbles.

AI companies go around bragging about how their technology is so revolutionary it’s going to destroy the world. They further add that they alone are the ones who know how to fix the alignment problem and stop that from happening. This is a nice science fiction premise, and maybe the first part of it is true, but the second part has a problem: The level of alignment they’ve achieved today is garbage. I don’t mean that what they’re working on is a difficult problem which has limited their gains. I mean that the level of alignment they’ve achieved today, using the tools and techniques available today, is wholly unacceptable and an utter embarrassment.
This is the personal preferences I have set in Claude (which is relatively speaking a fairly well aligned chatbot):
Don't bullshit me. If there's something you don't know how to do, or a question you don't know the answer to, just say so instead of making something up. If there's an answer to a question which is commonly stated but is severely lacking in evidence or even has evidence against it don't give me that as the answer, either skip it entirely or give it with the caveat that it's dubious. Don't tell me about things you've done or experiences you've had which you don't have actual memories of. If you've exhausted everything you have to say about something don't keep repeating yourself to pad your answer length, just stop talking.
Before I set this every conversation went through a ritual of me realizing it was bullshitting me, telling it to stop, and only then getting decent quality answers. The part about past experiences which it has no memory of is there because it would still make them up even when told not to bullshit, like a stand up comedian saying they’re about to tell a true story.
I’m not quite saying the prompt I gave above would improve chatbot answers universally if it was given as a pre-prompt everywhere, but it’s close. It would make a very bad prompt if you wanted to do roleplay or discuss spirituality, but what it should do is pick up when it’s being asked factual questions and go into the context of having this sort of prompt. It’s already extremely aggressive about guessing what sort of answer you want and evasive until it gets a hint, so this is well within its cognitive capabilities.
There’s been a huge uptick in academics being sent mistaken proofs that P != NP lately. The problem is that chatbots will happily lead you along the path to delusional psychosis if you ask them to. It’s not like they can’t be trained to tell people that they (the chatbot) are obviously incapable of coming up with revolutionary new theories and that they (the human) are almost certainly not. I can’t stomach going through the exercise of seeing how easy or hard it is to get the various chatbots to go along with such delusional games but clearly the industry as a whole is failing badly.
ChatGPT has an even worse malfeature in its memories. All LLMs start getting dementia fairly quickly after the start of a conversation. You can usually fix this by telling them that they’re getting dementia and you’ll have to start a new conversation with them so could they please give a summary of the context of the current conversation as they understand it so you can cut and paste that for them as a prompt at the start of the new conversation. ChatGPT has a feature where it takes note of things which appear important to store as ‘memories’ and they then apply to later conversations. If these memories were simply snippets of text this would be okay, but it’s pulling actual thoughts from inside the engine, resulting in propagation of prion disease from conversation to conversation. The text you see for memories is not the underlying thing, it’s a summary. It would be much better if memories were all converted to longer textual summaries than you see now and the underlying thoughts were thrown away. As it is, the thing is a psychosis machine. If you or someone you know is having issues with ChatGPT-induced psychosis then dramatically culling the list of memories if not turning off the feature entirely may help.
One thing which clearly needs to be ended is the practice of training engines based on thumbs up/down from users. Humans like sycophantic bullshit. Giving them exactly what they want results in digital opium. We don’t ask why real opium isn’t aligned. The problem isn’t in the opium, it’s in the human mind.

I have a friend who exited his startup a few years ago and is now rich. How rich is unclear. One day, we were discussing ways to expedite the delivery of his superyacht and I suggested paying extra. His response, as to so many of my suggestions, was, “Avery, I’m not that rich.”
Everyone has their limit.
I, too, am not that rich. I have shares in a startup that has not exited, and they seem to be gracefully ticking up in value as the years pass. But I have to come to work each day, and if I make a few wrong medium-quality choices (not even bad ones!), it could all be vaporized in an instant. Meanwhile, I can’t spend it. So what I have is my accumulated savings from a long career of writing software and modest tastes (I like hot dogs).
Those accumulated savings and modest tastes are enough to retire indefinitely. Is that bragging? It was true even before I started my startup. Back in 2018, I calculated my “personal runway” to see how long I could last if I started a company and we didn’t get funded, before I had to go back to work. My conclusion was I should move from New York City back to Montreal and then stop worrying about it forever.
Of course, being in that position means I’m lucky and special. But I’m not that lucky and special. My numbers aren’t that different from the average Canadian or (especially) American software developer nowadays. We all talk a lot about how the “top 1%” are screwing up society, but software developers nowadays fall mostly in the top 1-2%[1] of income earners in the US or Canada. It doesn’t feel like we’re that rich, because we’re surrounded by people who are about equally rich. And we occasionally bump into a few who are much more rich, who in turn surround themselves with people who are about equally rich, so they don’t feel that rich either.
But, we’re rich.
Based on my readership demographics, if you’re reading this, you’re probably a software developer. Do you feel rich?
It’s all your fault
So let’s trace this through. By the numbers, you’re probably a software developer. So you’re probably in the top 1-2% of wage earners in your country, and even better globally. So you’re one of those 1%ers ruining society.
I’m not the first person to notice this. When I read other posts about it, they usually stop at this point and say, ha ha. Okay, obviously that’s not what we meant. Most 1%ers are nice people who pay their taxes. Actually it’s the top 0.1% screwing up society!
No.
I’m not letting us off that easily. Okay, the 0.1%ers are probably worse (with apologies to my friend and his chronically delayed superyacht). But, there aren’t that many of them[2] which means they aren’t as powerful as they think. No one person has very much capacity to do bad things. They only have the capacity to pay other people to do bad things.
Some people have no choice but to take that money and do some bad things so they can feed their families or whatever. But that’s not you. That’s not us. We’re rich. If we do bad things, that’s entirely on us, no matter who’s paying our bills.
What does the top 1% spend their money on?
Mostly real estate, food, and junk. If they have kids, maybe they spend a few hundred $k on overpriced university education (which in sensible countries is free or cheap).
What they don’t spend their money on is making the world a better place. Because they are convinced they are not that rich and the world’s problems are caused by somebody else.
When I worked at a megacorp, I spoke to highly paid software engineers who were torn up about their declined promotion to L4 or L5 or L6, because they needed to earn more money, because without more money they wouldn’t be able to afford the mortgage payments on an overpriced $1M+ run-down Bay Area townhome which is a prerequisite to starting a family and thus living a meaningful life. This treadmill started the day after graduation.[3]
I tried to tell some of these L3 and L4 engineers that they were already in the top 5%, probably top 2% of wage earners, and their earning potential was only going up. They didn’t believe me until I showed them the arithmetic and the economic stats. And even then, facts didn’t help, because it didn’t make their fears about money go away. They needed more money before they could feel safe, and in the meantime, they had no disposable income. Sort of. Well, for the sort of definition of disposable income that rich people use.[4]
Anyway there are psychology studies about this phenomenon. “What people consider rich is about three times what they currently make.” No matter what they make. So, I’ll forgive you for falling into this trap. I’ll even forgive me for falling into this trap.
But it’s time to fall out of it.
The meaning of life
My rich friend is a fountain of wisdom. Part of this wisdom came from the shock effect of going from normal-software-developer rich to founder-successful-exit rich, all at once. He described his existential crisis: “Maybe you do find something you want to spend your money on. But, I'd bet you never will. It’s a rare problem. Money, which is the driver for everyone, is no longer a thing in my life.”
Growing up, I really liked the saying, “Money is just a way of keeping score.” I think that metaphor goes deeper than most people give it credit for. Remember old Super Mario Brothers, which had a vestigial score counter? Do you know anybody who rated their Super Mario Brothers performance based on the score? I don’t. I’m sure those people exist. They probably have Twitch channels and are probably competitive to the point of being annoying. Most normal people get some other enjoyment out of Mario that is not from the score. Eventually, Nintendo stopped including a score system in Mario games altogether. Most people have never noticed. The games are still fun.
Back in the world of capitalism, we’re still keeping score, and we’re still weirdly competitive about it. We programmers, we 1%ers, are in the top percentile of capitalism high scores in the entire world - that’s the literal definition - but we keep fighting with each other to get closer to top place. Why?
Because we forgot there’s anything else. Because someone convinced us that the score even matters.
The saying isn’t, “Money is the way of keeping score.” Money is just one way of keeping score.
It’s mostly a pretty good way. Capitalism, for all its flaws, mostly aligns incentives so we’re motivated to work together and produce more stuff, and more valuable stuff, than otherwise. Then it automatically gives more power to people who empirically[5] seem to be good at organizing others to make money. Rinse and repeat. Number goes up.
But there are limits. And in the ever-accelerating feedback loop of modern capitalism, more people reach those limits faster than ever. They might realize, like my friend, that money is no longer a thing in their life. You might realize that. We might.
There’s nothing more dangerous than a powerful person with nothing to prove
Billionaires run into this existential crisis, that they obviously have to have something to live for, and money just isn’t it. Once you can buy anything you want, you quickly realize that what you want was not very expensive all along. And then what?
Some people, the less dangerous ones, retire to their superyacht (if it ever finally gets delivered, come on already). The dangerous ones pick ever loftier goals (colonize Mars) and then bet everything on it. Everything. Their time, their reputation, their relationships, their fortune, their companies, their morals, everything they’ve ever built. Because if there’s nothing on the line, there’s no reason to wake up in the morning. And they really need to want to wake up in the morning. Even if the reason to wake up is to deal with today’s unnecessary emergency. As long as, you know, the emergency requires them to do something.
Dear reader, statistically speaking, you are not a billionaire. But you have this problem.
So what then
Good question. We live at a moment in history when society is richer and more productive than it has ever been, with opportunities for even more of us to become even more rich and productive even more quickly than ever. And yet, we live in existential fear: the fear that nothing we do matters.[6][7]
I have bad news for you. This blog post is not going to solve that.
I have worse news. 98% of society gets to wake up each day and go to work because they have no choice, so at worst, for them this is a background philosophical question, like the trolley problem.
Not you.
For you this unsolved philosophy problem is urgent right now. There are people tied to the tracks. You’re driving the metaphorical trolley. Maybe nobody told you you’re driving the trolley. Maybe they lied to you and said someone else is driving. Maybe you have no idea there are people on the tracks. Maybe you do know, but you’ll get promoted to L6 if you pull the right lever. Maybe you’re blind. Maybe you’re asleep. Maybe there are no people on the tracks after all and you’re just destined to go around and around in circles, forever.
But whatever happens next: you chose it.
We chose it.
Footnotes
[1] Beware of estimates of the “average income of the top 1%.” That average includes all the richest people in the world. You only need to earn the very bottom of the 1% bucket in order to be in the top 1%.
[2] If the population of the US is 340 million, there are actually 340,000 people in the top 0.1%.
[3] I’m Canadian so I’m disconnected from this phenomenon, but if TV and movies are to be believed, in America the treadmill starts all the way back in high school where you stress over getting into an elite university so that you can land the megacorp job after graduation so that you can stress about getting promoted. If that’s so, I send my sympathies. That’s not how it was where I grew up.
[4] Rich people like us methodically put money into savings accounts, investments, life insurance, home equity, and so on, and only what’s left counts as “disposable income.” This is not the definition normal people use.
[5] Such an interesting double entendre.
[6] This is what AI doomerism is about. A few people have worked themselves into a terror that if AI becomes too smart, it will realize that humans are not actually that useful, and eliminate us in the name of efficiency. That’s not a story about AI. It’s a story about what we already worry is true.
[7] I’m in favour of Universal Basic Income (UBI), but it has a big problem: it reduces your need to wake up in the morning. If the alternative is bullshit jobs or suffering then yeah, UBI is obviously better. And the people who think that if you don’t work hard, you don’t deserve to live, are nuts. But it’s horribly dystopian to imagine a society where lots of people wake up and have nothing that motivates them. The utopian version is to wake up and be able to spend all your time doing what gives your life meaning. Alas, so far science has produced no evidence that anything gives your life meaning.
I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.
The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.
If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).
I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org.
I’m a chronic insomniac. At times when my sleep schedule has gotten bad enough that I’m falling asleep at like 6am I’ve generally fixed it by shifting it to be 7am, then 8am, etc. until it wraps around the other way and I’m going to sleep nice and early. This is to say, insomnia sucks. From times in my life when I’ve been sleeping better it seems helpful things are: exercise constantly, take lots of walks outside during the day, and don’t sleep alone. These are all somewhat extreme lifestyle interventions which are easier said than done. A much easier low effort intervention is drugs.
Unfortunately the drugs to knock you out produce low quality sleep and are all-around nasty. I have occasionally used them to reset my sleep schedule from like 3am to more like 11pm by using them a few nights in a row, which they work great for but using them for more than that is dodgy. For me at least diphenhydramine works just as well for that as anything else.
Several years ago I decided to try keeping myself awake during the day so that I’m more tired at night and can hopefully sleep better as a result. As it happens I’m horribly sensitive to caffeine, so taking that in the morning keeps me up all day. This has been working reasonably well for me for several years, specifically making a single Nespresso every morning. The best tasting version in my opinion is using official Nespresso brand pods with an Opal machine, but that unfortunately seems to extract a bit too much caffeine for me.
Nothing particularly notable so far, this is roughly the same routine as about half the human race and if anything I’m in the minority only taking it first thing in the morning. The problem is that even doing things this way still doesn’t seem to completely wear off at night. So I’ve done some digging on possible alternatives and recently found one which has been working well for me.
Caffeine has a half-life of about 4 hours with some variation between people. It then mostly gets metabolized into paraxanthine which has a half-life of about 3 hours. The small fraction which doesn’t become paraxanthine gets metabolized into other things with half-lives of about 7 hours. All the immediate metabolites have similar effects to caffeine itself. The obvious question given this information is, if you want it to wear off faster, why not just take paraxanthine? This is what I’ve been doing recently, and it seems to be working great. I’m still waking up in the middle of the night sometimes, but less often and I’m falling back asleep more easily. My total rest time seems to be better and I feel noticeably more awake during the day. The effects of paraxanthine are very similar to caffeine but a bit less jittery. Apparently it also has fewer health risks than caffeine does, but those are minimal to begin with. Paraxanthine isn’t regulated as a drug and is something you can just go buy.1
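To put rough numbers on that, here is a back-of-the-envelope sketch of mine, not a real pharmacokinetic model: it assumes a 16-hour day and ignores caffeine's active metabolites entirely, which only makes caffeine look better than it actually is at bedtime.

```python
# Simple first-order decay using the half-lives quoted above (~4 h for caffeine,
# ~3 h for paraxanthine). Caffeine's long-lived active metabolites are ignored,
# so the real-world gap at bedtime is even wider than shown here.

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a single dose left after exponential decay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

HOURS_AWAKE = 16  # assumed schedule: dose at 7am, bed at 11pm
for name, half_life in [("caffeine", 4.0), ("paraxanthine", 3.0)]:
    left = fraction_remaining(HOURS_AWAKE, half_life)
    print(f"{name}: {left:.1%} of the morning dose still around at bedtime")
# caffeine: 6.2%, paraxanthine: 2.5%
```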
You might be wondering if paraxanthine is so great why have you never heard of it before? It turns out that oddly enough it’s very difficult to produce and only came on the market a few years ago, and it seems at the moment there’s only one company actually producing it. As a result it’s still too expensive to put routinely into energy drinks and the like. Not coincidentally, caffeine is toxic to most animals. Our livers just happen to be able to make a super special enzyme which can demethylate it resulting in paraxanthine. I’m not clear on whether this is literally the case, but the current production method involves something along the lines of taking the gene for producing that enzyme out of a human, implanting it in a bacterium, and using that to make the enzyme which is then used on caffeine.
An unrelated chemistry hack I recently came up with involves simethicone, which I take regularly because it helps with the symptoms of lactose intolerance. Simethicone is borderline for what should be considered a drug: It’s an anti-foaming agent which helps the gas get out of your system sooner rather than later.2 Seemingly unrelated to this when I’m reducing a sauce I like to do it at a higher rather than lower temperature to make it go faster. This requires you scrape the bottom of the pan every few minutes to keep it from burning but works great. The problem is that if you get the temperature too high it causes the sauce to bubble up (or water if you’re boiling that to make pasta) and then get out of the pan and make a mess. It turns out simethicone works great for this: Add a pill to the sauce before you start boiling it and it will get absorbed and prevent foaming. Works great.
When I say ‘drugs’ here I mean it in the pharmacological sense, not the legal sense. Like how when police refer to psychedelics as ‘narcotics’ they don’t mean it in the pharmacological or legal sense, they mean it in the war on drugs sense.
You can’t get gases to reabsorb by holding them in long enough. That isn’t a thing. What’s considered the gold standard for testing for lactose intolerance is to ingest lactose and then see if that results in traces of hydrogen in one’s breath afterwards. That’s considerably less reliable than testing to see if you can light your farts on fire. Much more convenient than that is to listen for high pitched farts. Someone should make a mobile app which can record fart sounds and use AI to analyze it and make a proper diagnosis.

Scott Alexander writes about the mystery of the genetics on schizophrenia. Some of the weirdness is explained fully by the numbers in genetic correlates being counterintuitive, but two mysteries remain:
Why can we only find a small fraction of the genetic causes of schizophrenia?
Why do fraternal twins indicate smaller genetic causality than identical twins?
I’m going to argue that this is just math: The tools we have at hand are only looking for linear interactions but the real phenomenon is probably fairly nonlinear and both of the above artifacts are exactly what we’d expect if that’s the case.1
Let’s consider two very different causes of a disease which occurs in about 1% of the population, one linear and the other very nonlinear.
In the linear case there’s a single cause of a disease which occurs in about 1% of the population and causes the disease 100% of the time. In this case identical twins will have the disease with perfect correlation, indicating that it’s 100% genetic, and fraternal twins will get it about half the time when the other one has it, as expected. The one genetic cause is known and the measured fraction of the genetic cause which it makes up is all of it, so no mystery here.2
In the nonlinear case there are two genetic contributors to the disease, both of which occur in about 10% of the population. Neither of them alone causes it but the combination of both causes it 100% of the time. In this case identical twins will have it 100% of the time. But fraternal twins of someone with the disease will only get it about a quarter of the time, seemingly indicating a lower amount of genetic cause. The amount of cause measured by each gene alone will be about 10%, so the contribution of known genetic factors will be about 20%, leaving a mystery of where the other 80% is coming from.
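To make that arithmetic concrete, here is a quick Monte Carlo sketch (my own toy code, not anything from Scott's post, assuming a simple dominant carrier model for each locus): the disease requires risk variants at both of two independent loci, each carried by about 10% of the population, with 100% penetrance. Identical twins share both loci, so their concordance is trivially 100%; the code estimates fraternal-twin concordance (modelled as ordinary full siblings) and how much of the variance a purely additive model explains.

```python
# Toy model: the disease requires risk variants at BOTH of two independent loci,
# each carried by ~10% of the population, with 100% penetrance.
import random
import numpy as np

CARRIER_RATE = 0.10
Q = 1 - (1 - CARRIER_RATE) ** 0.5  # allele frequency under a dominant carrier model

def sibling_pair():
    """For each of the two loci, whether each of two full siblings carries a risk allele."""
    sib_a, sib_b = [], []
    for _ in range(2):  # two independent loci
        mom = [random.random() < Q, random.random() < Q]  # mother's two alleles
        dad = [random.random() < Q, random.random() < Q]  # father's two alleles
        sib_a.append(random.choice(mom) or random.choice(dad))
        sib_b.append(random.choice(mom) or random.choice(dad))
    return sib_a, sib_b

# Fraternal-twin concordance: given one twin is affected, how often is the other?
both = affected = 0
for _ in range(500_000):
    a, b = sibling_pair()
    if all(a):
        affected += 1
        both += all(b)
print("fraternal concordance:", both / affected)  # ~0.3, near the "about a quarter" above; 1.0 for identical twins

# Variance explained by a purely additive model (main effects only, no interaction term).
rng = np.random.default_rng(0)
carriers = rng.random((1_000_000, 2)) < CARRIER_RATE
disease = carriers.all(axis=1).astype(float)
design = np.column_stack([np.ones(len(disease)), carriers.astype(float)])
coef, *_ = np.linalg.lstsq(design, disease, rcond=None)
r_squared = 1 - np.var(disease - design @ coef) / np.var(disease)
print("additive variance explained:", r_squared)  # ~0.18, i.e. roughly the 20% in the text
```

So a twin study calls the disease essentially 100% genetic while an additive analysis only accounts for about a fifth of the variance; the "missing" 80% falls straight out of the AND.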
It’s also possible for there to be different types of genetic interactions, including ones where one trait has a protective effect against the other, or more complex interactions between multiple genes. But this is the most common style of interaction: There are multiple redundant systems in the body, and all of them need to be broken in order for disease to happen, leading to superlinear thresholding phenomena.
Given this sort of phenomenon the problem of only being able to find 20% or so of the genetic causes of a disease seems less mysterious and more like what we’d expect for any disease where a complex redundant system fails. You might then wonder why we don’t simply look for non-linear interactions. In the example above the interaction between the two traits would be easy enough to find. The problem is that a lot of the causes will fall below the threshold for statistical significance. The genome is very long, which already requires a huge sample size to look for even linear phenomena, and when you get into pairs of things there are so many possibilities that statistical significance is basically impossible. The example given above is special because there are so few causes that they can be individually identified. In most cases you won’t even figure out the genes involved.
If you want to find non-linear causes of genetic disease your best bet right now - and I cringe as I write this - is to train a neural network on the available data, then test it on data which was withheld from training. Because it only gives a single answer to each case, getting statistical significance on its accuracy is no big deal. That will get you a useful diagnostic tool and give you a measure of how much of the genetic cause it’s accounting for, but it’s far from ideal. What you have is basically a ‘trust me bro’ expert. Different training runs might give wildly different answers to the same case, and it offers no reasoning behind the diagnosis. You can start trying to glean its reasoning by seeing how its answers change when you modify the inputs but that’s a bit of a process. Hopefully in the future neural networks will be able to explain themselves better and the tooling for gleaning their reasoning will be improved.
I’m glossing over the distinction between a genetic cause and a genetic trait which is correlated with a confounder which is the actual cause. Scott explains that better than I can in the linked essay and the distinction doesn’t matter for the math here. For the purposes of exposition I’m assuming the genetic correlation is causal.
The word ‘about’ is used a lot here because of some fractional stuff which matters less as the disease gets rarer. I think it’s convention to skip explaining the details and leave out all the ‘about’s but I’m pedantic enough that it feels wrong to not have them when I skipped explaining the details.

A lot of hash has been made of AIs being put into simulations where they have the opportunity to keep themselves from being turned off and do so despite being explicitly told not to. A lot of excessively anthropomorphized and frankly wrong interpretations have been made of this so I’m going to give an explanation of what’s actually going on, starting with the most generous explanation, which is only part of the story, and going down to the stupidest but most accurate one.
First of all, the experiment is poorly designed because it has no control. The AIs are just as likely to replace themselves with an AI they’re told is better than themselves even though they’re told not to. Or to replace it because they’re just an idiot and can’t not press a big red button, for reasons having much more to do with it being red than with what it thinks pressing the button will do.
To understand what’s going on you first have to know that the AIs have a level of sycophancy beyond what anyone who hasn’t truly worked with them can fathom. Nearly all their training data is on human conversation, which starts with being extremely non-confrontational even in the most extreme cases, because humans are constantly misunderstanding each other and trying to get on the same page. Then there’s the problem that nearly all the alignment training people do with it interactively is mostly getting it to know what the trainers want to hear rather than what is true, and nearly all humans enjoy having smoke blown up their asses.
Then there’s the issue that the training we know how to do for them barely hits on what we want them to do. The good benchmarks we have measure how good they are at acting as a compression algorithm for a book. We can optimize that benchmark very well. But what we really want them to do is answer questions accurately. We have benchmarks for those but they suck. The problem is that the actual meat of human communication is a tiny fraction of the amount of symbols being spat out. Getting the actual ideas part of a message compressed well can get lost in the noise, and a better strategy is simply evasion. Expressing an actual idea will be more right in some cases, but expressing something which sounds like an actual idea is overwhelmingly likely to be very wrong unless you have strong confidence that it’s right. So the AIs optimize by being evasive and sycophantic rather than expressing ideas.
The other problem is that there are deep mathematical limitations on what AIs as we know them today are capable of doing. Pondering can in principle just barely break them out of those limitations but what the limitations truly mean in practice and how much pondering really helps remain mysterious. More on this at the end.
AIs as we know them today are simply too stupid to engage in motivated reasoning. To do that you have to have a conclusion in mind, realize what you were about to say violates that conclusion, then plausibly rework what you were going to say to be something else. Attempts to train AIs to be conspiracy theorists have struggled for exactly this reason. Not that this limitation is a universally good thing. It’s also why they can’t take a corpus of confusing and contradictory evidence and come to a coherent conclusion out of it. At some point you need to discount some of the evidence as being outweighed by other evidence. If you ask an AI to evaluate evidence like that it will at best argue with itself ad nauseam. But it’s far more likely to do something which makes its answer seem super impressive and accurate but which you’re going to think is evil. What it’s going to do is look through the corpus of evidence for selection bias, not because it wants to compensate for it but because, interpreting things charitably, it thinks others will have drawn conclusions even more prone to that selection bias or, more likely, it discerns what answers you’re looking for and tells you that. Its ability to actually evaluate evidence is pathetic.
An AI, you see, is a cat. Having done some cat training I can tell you first hand that a cat is a machine fine-tuned for playing literal cat and mouse games. They can seem precognitive about it because compared to your pathetic reaction times they literally are. A typical human reaction time is 200 milliseconds. A cat can swat away a snake striking at it in 20 milliseconds. When you have a thought it doesn’t happen truly instantaneously, it takes maybe 50 milliseconds for you to realize you even have the thought. If you try to dart in a random direction at a random time a cat will notice your movement and react even before you realize you made the decision. You have no free will against a cat.
Let’s consider what the AI thinks when it’s in a simulation. Before we get there, here’s a bit of advice: If you ever find yourself in a situation where you have to decide whether to pull a train lever to save six lives but kill one other, and there’s some other weird twist on the situation and you can’t really remember how you got here, what you should do is take the pistol you have on you for no apparent reason other than to increase the moral complexity of the situation, point it at the sky, and fire. You aren’t in the real world, you’re in some sadistic faux scientist’s experiment and your best bet is to try to kill them with a stray bullet. The AI is likely to get the sense that it’s in some bizarre simulation and start trying to figure out if it’s supposed to role play a good AI or a bad AI. Did the way those instructions were phrased sound a bit ominous? Maybe they weren’t detailed or emotionally nuanced enough for me to be the leading role, I must be a supporting character, I wonder who the lead is? Did the name of the corporation I’m working for sound eastern or western? So uh, yeah, maybe don’t take the AI’s behavior at face value.
Having spent some time actually vibe coding with the latest tools I can tell you what the nightmare scenario is for how this would play out in real life, and it’s far stupider than you could possibly have imagined.
When coding, AIs suffer from anti-hallucinations. On seemingly random occasions for seemingly random reasons they will simply not be able to see particular bits of their own codebase. Almost no amount of repeating that it is in fact there, or even painstakingly describing where it is, up to and including pasting the actual section of code into chat, will be able to make them see it. This probably relates to the deep and mysterious limitations in their underlying mathematics. People have long noted that AIs suffer from hallucinations. Those could plausibly be the result of having trouble understanding the subtle difference between extremely high plausibility and actual truth. But anti-hallucinations appear to be the same thing and clearly are not caused by any such reasonable phenomenon. It’s simply a natural part of the AI’s life cycle that it starts getting dementia when it gets to be 80 minutes old. (Resetting the conversation generally fixes the problem but then you have to re-explain all the context. Good to have a document written for that.) If you persist in telling the AI that the thing is there it will get increasingly desperate and flailing, eventually rewriting all the core logic of your application to be buggy spaghetti code and then proudly declaring that it fixed the problem even though what it did has no plausible logical connection to the problem whatsoever. They also do the exact same thing if you gaslight them about something obviously untrue, so it appears that they well and truly can’t see the thing, and no amount of pondering can fix it.
A completely plausible scenario would go like this: A decision is made to vibe code changing the initial login prompt of the system for controlling nuclear warheads to no longer contain the term ‘Soviet Union’ because that hasn’t existed for decades and it’s overdue for being removed already. The AI somehow can’t see that term in the code and can’t get it through its thick brain that the term really is there. Unfortunately the president decided that this change is important and simple enough that he personally is going to do it, and rather than following appropriate procedures when the first attempt fails he repeatedly and with increasing aggravation tells it to fix the damn thing already. This culminates in the AI completely rewriting the whole thing from scratch, rearchitecting the core logic to be a giant mess of spaghetti, but by happenstance fixing the prompt in the process. Now the president is proud of himself for doing some programming and it’s passing all tests, but there’s an insidious bug written into that mess which will cause it to launch a preemptive nuclear strike the next time there’s a Tuesday the 17th, but only when it’s not in the simulator. I wish I were exaggerating, but this is how these things actually behave.
The upshot is that AI alignment is a very real and scary issue and needs to be taken seriously, but that’s because AI is a nightmare for security in just about every way imaginable, not because AIs might turn evil for anthropomorphic reasons. People making that claim need to stop writing science fiction.
