This feed omits posts by rms. Just 'cause.

Avery Pennarun
Billionaire math

I have a friend who exited his startup a few years ago and is now rich. How rich is unclear. One day, we were discussing ways to expedite the delivery of his superyacht and I suggested paying extra. His response, as to so many of my suggestions, was, “Avery, I’m not that rich.”

Everyone has their limit.

I, too, am not that rich. I have shares in a startup that has not exited, and they seem to be gracefully ticking up in value as the years pass. But I have to come to work each day, and if I make a few wrong medium-quality choices (not even bad ones!), it could all be vaporized in an instant. Meanwhile, I can’t spend it. So what I have is my accumulated savings from a long career of writing software and modest tastes (I like hot dogs).

Those accumulated savings and modest tastes are enough to retire indefinitely. Is that bragging? It was true even before I started my startup. Back in 2018, I calculated my “personal runway” to see how long I could last if I started a company and we didn’t get funded, before I had to go back to work. My conclusion was I should move from New York City back to Montreal and then stop worrying about it forever.

Of course, being in that position means I’m lucky and special. But I’m not that lucky and special. My numbers aren’t that different from the average Canadian or (especially) American software developer nowadays. We all talk a lot about how the “top 1%” are screwing up society, but software developers nowadays fall mostly in the top 1-2%[1] of income earners in the US or Canada. It doesn’t feel like we’re that rich, because we’re surrounded by people who are about equally rich. And we occasionally bump into a few who are much more rich, who in turn surround themselves with people who are about equally rich, so they don’t feel that rich either.

But, we’re rich.

Based on my readership demographics, if you’re reading this, you’re probably a software developer. Do you feel rich?

It’s all your fault

So let’s trace this through. By the numbers, you’re probably a software developer. So you’re probably in the top 1-2% of wage earners in your country, and even better globally. So you’re one of those 1%ers ruining society.

I’m not the first person to notice this. When I read other posts about it, they usually stop at this point and say, ha ha. Okay, obviously that’s not what we meant. Most 1%ers are nice people who pay their taxes. Actually it’s the top 0.1% screwing up society!

No.

I’m not letting us off that easily. Okay, the 0.1%ers are probably worse (with apologies to my friend and his chronically delayed superyacht). But, there aren’t that many of them[2] which means they aren’t as powerful as they think. No one person has very much capacity to do bad things. They only have the capacity to pay other people to do bad things.

Some people have no choice but to take that money and do some bad things so they can feed their families or whatever. But that’s not you. That’s not us. We’re rich. If we do bad things, that’s entirely on us, no matter who’s paying our bills.

What does the top 1% spend their money on?

Mostly real estate, food, and junk. If they have kids, maybe they spend a few hundred $k on overpriced university education (which in sensible countries is free or cheap).

What they don’t spend their money on is making the world a better place. Because they are convinced they are not that rich and the world’s problems are caused by somebody else.

When I worked at a megacorp, I spoke to highly paid software engineers who were torn up about their declined promotion to L4 or L5 or L6, because they needed to earn more money, because without more money they wouldn’t be able to afford the mortgage payments on an overpriced $1M+ run-down Bay Area townhome which is a prerequisite to starting a family and thus living a meaningful life. This treadmill started the day after graduation.[3]

I tried to tell some of these L3 and L4 engineers that they were already in the top 5%, probably top 2% of wage earners, and their earning potential was only going up. They didn’t believe me until I showed them the arithmetic and the economic stats. And even then, facts didn’t help, because it didn’t make their fears about money go away. They needed more money before they could feel safe, and in the meantime, they had no disposable income. Sort of. Well, for the sort of definition of disposable income that rich people use.[4]

Anyway, there are psychology studies about this phenomenon. “What people consider rich is about three times what they currently make.” No matter what they make. So, I’ll forgive you for falling into this trap. I’ll even forgive me for falling into this trap.

But it’s time to fall out of it.

The meaning of life

My rich friend is a fountain of wisdom. Part of this wisdom came from the shock effect of going from normal-software-developer rich to founder-successful-exit rich, all at once. He described his existential crisis: “Maybe you do find something you want to spend your money on. But, I'd bet you never will. It’s a rare problem. Money, which is the driver for everyone, is no longer a thing in my life.”

Growing up, I really liked the saying, “Money is just a way of keeping score.” I think that metaphor goes deeper than most people give it credit for. Remember old Super Mario Brothers, which had a vestigial score counter? Do you know anybody who rated their Super Mario Brothers performance based on the score? I don’t. I’m sure those people exist. They probably have Twitch channels and are probably competitive to the point of being annoying. Most normal people get some other enjoyment out of Mario that is not from the score. Eventually, Nintendo stopped including a score system in Mario games altogether. Most people have never noticed. The games are still fun.

Back in the world of capitalism, we’re still keeping score, and we’re still weirdly competitive about it. We programmers, we 1%ers, are in the top percentile of capitalism high scores in the entire world - that’s the literal definition - but we keep fighting with each other to get closer to top place. Why?

Because we forgot there’s anything else. Because someone convinced us that the score even matters.

The saying isn’t, “Money is the way of keeping score.” Money is just one way of keeping score.

It’s mostly a pretty good way. Capitalism, for all its flaws, mostly aligns incentives so we’re motivated to work together and produce more stuff, and more valuable stuff, than otherwise. Then it automatically gives more power to people who empirically[5] seem to be good at organizing others to make money. Rinse and repeat. Number goes up.

But there are limits. And in the ever-accelerating feedback loop of modern capitalism, more people reach those limits faster than ever. They might realize, like my friend, that money is no longer a thing in their life. You might realize that. We might.

There’s nothing more dangerous than a powerful person with nothing to prove

Billionaires run into this existential crisis, that they obviously have to have something to live for, and money just isn’t it. Once you can buy anything you want, you quickly realize that what you want was not very expensive all along. And then what?

Some people, the less dangerous ones, retire to their superyacht (if it ever finally gets delivered, come on already). The dangerous ones pick ever loftier goals (colonize Mars) and then bet everything on it. Everything. Their time, their reputation, their relationships, their fortune, their companies, their morals, everything they’ve ever built. Because if there’s nothing on the line, there’s no reason to wake up in the morning. And they really need to want to wake up in the morning. Even if the reason to wake up is to deal with today’s unnecessary emergency. As long as, you know, the emergency requires them to do something.

Dear reader, statistically speaking, you are not a billionaire. But you have this problem.

So what then

Good question. We live at a moment in history when society is richer and more productive than it has ever been, with opportunities for even more of us to become even more rich and productive even more quickly than ever. And yet, we live in existential fear: the fear that nothing we do matters.[6][7]

I have bad news for you. This blog post is not going to solve that.

I have worse news. 98% of society gets to wake up each day and go to work because they have no choice, so at worst, for them this is a background philosophical question, like the trolley problem.

Not you.

For you this unsolved philosophy problem is urgent right now. There are people tied to the tracks. You’re driving the metaphorical trolley. Maybe nobody told you you’re driving the trolley. Maybe they lied to you and said someone else is driving. Maybe you have no idea there are people on the tracks. Maybe you do know, but you’ll get promoted to L6 if you pull the right lever. Maybe you’re blind. Maybe you’re asleep. Maybe there are no people on the tracks after all and you’re just destined to go around and around in circles, forever.

But whatever happens next: you chose it.

We chose it.

Footnotes

[1] Beware of estimates of the “average income of the top 1%.” That average includes all the richest people in the world. You only need to earn the very bottom of the 1% bucket in order to be in the top 1%.

[2] If the population of the US is 340 million, there are actually 340,000 people in the top 0.1%.

[3] I’m Canadian so I’m disconnected from this phenomenon, but if TV and movies are to be believed, in America the treadmill starts all the way back in high school where you stress over getting into an elite university so that you can land the megacorp job after graduation so that you can stress about getting promoted. If that’s so, I send my sympathies. That’s not how it was where I grew up.

[4] Rich people like us methodically put money into savings accounts, investments, life insurance, home equity, and so on, and only what’s left counts as “disposable income.” This is not the definition normal people use.

[5] Such an interesting double entendre.

[6] This is what AI doomerism is about. A few people have worked themselves into a terror that if AI becomes too smart, it will realize that humans are not actually that useful, and eliminate us in the name of efficiency. That’s not a story about AI. It’s a story about what we already worry is true.

[7] I’m in favour of Universal Basic Income (UBI), but it has a big problem: it reduces your need to wake up in the morning. If the alternative is bullshit jobs or suffering then yeah, UBI is obviously better. And the people who think that if you don’t work hard, you don’t deserve to live, are nuts. But it’s horribly dystopian to imagine a society where lots of people wake up and have nothing that motivates them. The utopian version is to wake up and be able to spend all your time doing what gives your life meaning. Alas, so far science has produced no evidence that anything gives your life meaning.

jwz (Jamie Zawinski)
DHS's war on skateboarding
DHS is urging local police to consider a wide range of protest activity as violent tactics, including mundane acts like riding a bike or livestreaming a police encounter:

Blaming intense media coverage and backlash to the US military deployment in Los Angeles, DHS expects the demonstrations to "continue and grow across the nation" as protesters focused on other issues shift to immigration, following a broad "embracement of anti-ICE messaging."

Don't threaten me with a good time.

The guidance urges officers to consider a range of nonviolent behavior and common protest gear -- like masks, flashlights, and cameras -- as potential precursors to violence [...] Protesters on bicycles, skateboards, or even "on foot" are framed as potential "scouts" conducting reconnaissance or searching for "items to be used as weapons." Livestreaming is listed alongside "doxxing" as a "tactic" for "threatening" police. Online posters are cast as ideological recruiters -- or as participants in "surveillance sharing."

One list of "violent tactics" shared by the Los Angeles-based Joint Regional Intelligence Center -- part of a post-9/11 fusion network -- includes both protesters' attempts to avoid identification and efforts to identify police.

That can't be correct. Skateboarding, I have been reliably informed, is not a crime.

In advance of protests, agencies increasingly rely on intelligence forecasting to identify groups seen as ideologically subversive or tactically unpredictable. Demonstrators labeled "transgressive" may be monitored, detained without charges, or met with force.

Previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
Help me fix a GTK bug
In xscreensaver-settings, any time I select a different item in the list, I get a crash with:

GLib: g_ptr_array_unref: assertion 'array' failed

This assertion fires late, with none of my code on the stack, so I have no idea what it is complaining about; and valgrind provides no clues. It started happening recently, but happens against releases as far back as XScreenSaver 6.05, so something about GTK changed, not my code. Commenting out every call to g*_free does not make it go away. Happens under X11 or Wayland.

I have no patience for dealing with GTK, so I would be very appreciative if someone else would figure this out...

Previously.

jwz (Jamie Zawinski)
I wonder what Blockchain Rasputin has been up to lately:
Jack Dorsey says his 'secure' new Bitchat app has not been tested for security.

Security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact. [...]

"Security is a great feature to have for going viral. But a basic sanity check, like, do the identity keys actually do any cryptography, would be a very obvious thing to test when building something like this." [...]

Referring to his and other people's findings, Radocea criticized Dorsey's warning that Bitchat has not been tested for security.

"I'd argue it has received external security review, and it's not looking good," he said.

Previously, previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
Today in Jewish Space Lasers
A man broke into an enclosure containing the NextGen Live Radar system operated by News 9 in Oklahoma City, damaging its power supply and briefly knocking it offline:

The man also damaged CCTV cameras monitoring the site, but cameras captured a clear image of his face before they were destroyed. [...]

"Anyone that's going out to eliminate a Nexrad, if they haven't harmed life, and they're doing it according to the videos that we're providing, they are part of our group," Meyer tells WIRED. "We're going to have to take out every single media's capabilities of lying to the American people. Mainstream media is the biggest threat right now."

Nexrads refer to Next Generation Weather Radar systems used by the National Oceanic and Atmospheric Administration to detect precipitation, wind, tornadoes, and thunderstorms. Meyer says that his group wants to disable these as well as satellite systems used by media outlets to broadcast weather updates.

The attack on the News 9 weather radar system comes amid a sustained disinformation campaign on social media platforms including everyone from extremist figures like Meyer to elected GOP lawmakers. What united these disparate figures is that they were all promoting the debunked conspiracy theory that the devastating flooding in Texas last weekend was caused not by a month's worth of rain falling in the space of just a few hours -- the intensity of which, meteorologists say, was difficult to predict ahead of time -- but by a targeted attack on American citizens using directed energy weapons or cloud seeding technology to manipulate the weather. The result has not only been possible damage to a radar system but death threats against those who are being wrongly blamed for causing the floods. [...]

Within hours of the tragedy happening, conspiracy theorists, right-wing influencers, and lawmakers were pushing wild claims on social media that the floods were somehow geoengineered.

"Fake weather. Fake hurricanes. Fake flooding. Fake. Fake. Fake," Kandiss Taylor, who intends to run as a GOP candidate to represent Georgia's 1st congressional district in the House of Representatives, wrote in a post viewed 2.4 million times. "That doesn't even seem natural," Kylie Jane Kremer, executive director of Women for America First, wrote on X, in a post that has been viewed 9 million times.

As the emergency response to the floods was still taking place on Saturday, US representative Marjorie Taylor Greene, a Georgia Republican, tweeted that she would be introducing a bill to "end the dangerous and deadly practice of weather modification and geoengineering." Greene, who once blamed California wildfires on laser beams or light beams connected to an electric company with purported ties to an organization affiliated with a powerful Jewish family, said that the bill will be similar to Florida's Senate Bill 56, which Governor Ron DeSantis signed into law in June. That bill makes weather modification a third-degree felony, punishable by a fine of up to $100,000.

Previously, previously, previously, previously.

jwz (Jamie Zawinski)
ICE agents brandish rifles, drive through protesters in S.F.
Mission Local:

The clash started at 11:18 a.m. on Tuesday when around 10 Immigration and Customs Enforcement agents, almost all with faces covered, tried to enter the courthouse at 100 Montgomery St. to escort other agents already inside who had a young immigrant man in custody.

ICE has been routinely arresting asylum-seekers following their immigration hearings, and anti-ICE protesters had gathered at the courthouse that morning, as they said they've been doing every Tuesday. [...]

Protesters tried to grab the man in handcuffs and pull him away from officers, but were tossed back by the ICE agents. As police pulled the man back into a waiting black SUV and began driving away, protesters jumped onto the van's front hood.

A half-dozen protesters blocked the van by amassing in front of it. The van inched forward, indifferent, before gaining speed and driving off quickly. One protester, still lying on the hood of the car, fell off the car's hood half a block away and was almost run over.

Previously, previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
L.A. activist indicted for giving face shields to anti-ICE protesters
Alejandro Orellana, a 29-year-old member of the Boyle Heights-based community organization Centro CSO, faces charges of conspiracy and aiding and abetting civil disorder:

According to the indictment, Orellana and at least two others drove around downtown L.A. in a pickup truck distributing Uvex Bionic face shields and other items to a crowd engaged in a protest near the federal building on Los Angeles Street on June 9.

Prosecutors allege Orellana was helping protesters withstand less-lethal munitions being deployed by Los Angeles police officers and Los Angeles County sheriff's deputies. [...]

Asked how handing out defensive equipment was a crime during a news conference last month, U.S. Atty. Bill Essayli [said] "He wasn't handing masks out at the beach. ... They're covering their faces. They're wearing backpacks. These weren't peaceful protesters." [...] Essayli described anyone who remained at a protest scene after an unlawful assembly was declared as a "rioter" and said peaceful protesters "don't need a face shield." [...]

"It's ridiculous charges. We're demanding they drop the charges now. They're insignificant, ridiculous," Montes said. "The most it amounts to is that he was passing out personal protective equipment, which includes boxes of water, hand sanitizer and snacks."

If you want a picture of the future, it's a rubber bullet screaming "STOP RESISTING". When people wear protective equipment, "the ability of that officer to gain compliance is restricted."

Remember, kids: all protests are peaceful until the cops declare that they are not!

Previously, previously, previously, previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
UniFi
Dear Lazyweb, I've replaced my failing Airport Extreme with a UniFi Express 7, and I can't figure out how to enable inbound ssh to my Mac. I set up port forwarding but port 22 remains closed to the outside world.

Settings / Routing / Port Forwarding says:

Name: ssh
WAN IP: [ not editable ]
WAN Port: 22
From: Any
Forward IP Address: 10.0.1.2
Forward Port: 22
Protocol: TCP

I have also disabled Settings / System / Device SSH Authentication, and rebooted, in case that was interfering.

How make go?

Also how do I tell it that my DHCP-advertised domain should be something other than ".localdomain"?


In the now-traditional "things I shouldn't have to say but probably do" section: if you are not responding from a place of experience with UniFi hardware, but are instead about to give me general advice about networking -- please do not do that.

jwz (Jamie Zawinski)
XScreenSaver 6.12
XScreenSaver 6.12 is out now. This is another Unix-only release.

  • DPMS works on Wayland, using either "wlr-output-power-management-unstable-v1" or "kde-dpms".
  • Fading should perform much better on both Wayland and X11.
  • GNOME continues to be unsupported. Oh dear. How sad. Nevermind.
  • Still no locking.

What I would like to know:

  • Do you have a non-GNOME Wayland system on which either idle detection or DPMS does not work?

  • Does fading look good? To clarify how it should look, on both Wayland and X11:

    • When the screen saver activates, your desktops fade to black on all monitors, then the savers start.
    • When the screen un-blanks, the running savers should freeze; then fade to black; then the desktops fade in over that black.
    • There should be no surprise single-frame flickers.
    • All of this should be at least 30fps.

One thing I have noticed is that during fade-in, that initial fade-out-to-black sometimes doesn't happen. It snaps to black immediately, so you'll see 1 sec of solid black, then the desktop fade-in starts. This seems to be timing related, possibly related to the saver's OpenGL context being torn down and becoming un-screenshottable?

On Wayland, fading (and hacks that manipulate the desktop image) require "grim" to be installed.

Because Wayland is incredible (pej., obs.), grim sometimes takes between 1.5 and 7 seconds to grab a screenshot on my Pi 4b at 1080p.

Previously, previously, previously, previously.

bkuhn@ebb.org (Bradley M. Kuhn)
Copyleft-next Relaunched!

I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.

The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).

I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org.

jwz (Jamie Zawinski)
XQuartz EGL
I know I am probably the last person in the world still running X11 on a Mac, but some time around macOS 14.7.3, XQuartz stopped working with OpenGL programs that use EGL instead of GLX. If someone could tell me how to fix this, that would be great:

libEGL warning: egl: failed to create dri2 screen
MESA: error: Failed to attach to x11 shm
MESA: error: Failed to attach to x11 shm
MESA: error: Failed to attach to x11 shm
...
Bram Cohen
Paraxanthine

I’m a chronic insomniac. At times when my sleep schedule has gotten bad enough that I’m falling asleep at like 6am, I’ve generally fixed it by shifting it to 7am, then 8am, etc., until it wraps around the other way and I’m going to sleep nice and early. This is to say, insomnia sucks. From times in my life when I’ve been sleeping better, it seems the helpful things are: exercise constantly, take lots of walks outside during the day, and don’t sleep alone. These are all somewhat extreme lifestyle interventions which are easier said than done. A much easier, low-effort intervention is drugs.

Unfortunately, the drugs that knock you out produce low-quality sleep and are all-around nasty. I have occasionally used them to reset my sleep schedule from like 3am to more like 11pm by using them a few nights in a row, which they work great for, but using them for more than that is dodgy. For me at least, diphenhydramine works just as well for that as anything else.


Several years ago I decided to try keeping myself awake during the day so that I’m more tired at night and can hopefully sleep better as a result. As it happens I’m horribly sensitive to caffeine, so taking that in the morning keeps me up all day. This has been working reasonably well for me for several years, specifically making a single Nespresso every morning. The best tasting version in my opinion is using official Nespresso brand pods with an Opal machine, but that unfortunately seems to extract a bit too much caffeine for me.

Nothing particularly notable so far; this is roughly the same routine as about half the human race, and if anything I’m in the minority in only taking it first thing in the morning. The problem is that even taken this way, the caffeine still doesn’t seem to completely wear off at night. So I’ve done some digging on possible alternatives and recently found one which has been working well for me.

Caffeine has a half-life of about 4 hours, with some variation between people. It then mostly gets metabolized into paraxanthine, which has a half-life of about 3 hours. The small fraction which doesn’t gets metabolized into other things with half-lives of about 7 hours. All the immediate metabolites have similar effects to caffeine itself. The obvious question given this information is: if you want it to wear off faster, why not just take paraxanthine? This is what I’ve been doing recently, and it seems to be working great. I’m still waking up in the middle of the night sometimes, but less often, and I’m falling back asleep more easily. My total rest time seems to be better and I feel noticeably more awake during the day. The effects of paraxanthine are very similar to caffeine but a bit less jittery. Apparently it also has fewer health risks than caffeine does, but those are minimal to begin with. Paraxanthine isn’t regulated as a drug and is something you can just go buy.[1]
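
To put rough numbers on the wear-off difference, here's a back-of-the-envelope sketch in Python. The assumptions are mine: a 7am dose checked 16 hours later at 11pm, clean exponential decay, and it deliberately ignores that metabolized caffeine turns into paraxanthine and friends, which only makes the comparison worse for caffeine.

    def remaining(hours, half_life_hours):
        """Fraction of a dose still circulating after the given time."""
        return 0.5 ** (hours / half_life_hours)

    # A 7am dose checked at 11pm bedtime: 16 hours of decay.
    for name, half_life in [("caffeine", 4), ("paraxanthine", 3)]:
        print(f"{name}: {remaining(16, half_life):.1%} still active")

    # caffeine: 6.2% still active
    # paraxanthine: 2.5% still active

A one-hour difference in half-life doesn't sound like much, but it compounds: over 16 hours it's the difference between four halvings and five-and-a-third.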

You might be wondering: if paraxanthine is so great, why have you never heard of it before? It turns out that, oddly enough, it’s very difficult to produce and only came on the market a few years ago, and it seems at the moment there’s only one company actually producing it. As a result it’s still too expensive to put routinely into energy drinks and the like. Not coincidentally, caffeine is toxic to most animals. Our livers just happen to be able to make a super special enzyme which can demethylate it, resulting in paraxanthine. I’m not clear on whether this is literally the case, but the current production method involves something along the lines of taking the gene for producing that enzyme out of a human, implanting it in a bacterium, and using that to make the enzyme, which is then used on caffeine.

An unrelated chemistry hack I recently came up with involves simethicone, which I take regularly because it helps with the symptoms of lactose intolerance. Simethicone is borderline for what should be considered a drug: it’s an anti-foaming agent which helps the gas get out of your system sooner rather than later.[2] Seemingly unrelated to this, when I’m reducing a sauce I like to do it at a higher rather than lower temperature to make it go faster. This requires you to scrape the bottom of the pan every few minutes to keep it from burning, but works great. The problem is that if you get the temperature too high, the sauce bubbles up (or the water does, if you’re boiling it for pasta), gets out of the pan, and makes a mess. It turns out simethicone works great for this too: add a pill to the sauce before you start boiling it and it will get absorbed and prevent foaming.

[1] When I say ‘drugs’ here I mean it in the pharmacological sense, not the legal sense. Like how when police refer to psychedelics as ‘narcotics’ they don’t mean it in the pharmacological or legal sense; they mean it in the war-on-drugs sense.

[2] You can’t get gases to reabsorb by holding them in long enough. That isn’t a thing. What’s considered the gold standard for testing for lactose intolerance is to ingest lactose and then see if that results in traces of hydrogen in one’s breath afterwards. That’s considerably less reliable than testing to see if you can light your farts on fire. Much more convenient than that is to listen for high-pitched farts. Someone should make a mobile app which can record fart sounds and use AI to analyze them and make a proper diagnosis.

Bram Cohen
Nonlinear Genetics

Scott Alexander writes about the mystery of the genetics of schizophrenia. Some of the weirdness is explained fully by the counterintuitive arithmetic of genetic correlations, but two mysteries remain:

  • Why can we only find a small fraction of the genetic causes of schizophrenia?

  • Why do fraternal twins indicate smaller genetic causality than identical twins?

I’m going to argue that this is just math: the tools we have at hand only look for linear interactions, but the real phenomenon is probably fairly nonlinear, and both of the above artifacts are exactly what we’d expect if that’s the case.[1]


Let’s consider two very different causes of a disease which occurs in about 1% of the population: one linear, the other very nonlinear.

In the linear case there’s a single genetic cause which occurs in about 1% of the population and causes the disease 100% of the time. In this case identical twins will have the disease with perfect correlation, indicating that it’s 100% genetic, and fraternal twins will get it about half the time when the other one has it, as expected. The one genetic cause is known and the measured fraction of the genetic cause which it makes up is all of it, so no mystery here.[2]

In the nonlinear case there are two genetic contributors to the disease, both of which occur in about 10% of the population. Neither of them alone causes it, but the combination of both causes it 100% of the time. In this case identical twins will again have it 100% of the time. But fraternal twins of someone with the disease will only get it about a quarter of the time, seemingly indicating a lower amount of genetic cause. The amount of cause measured by each gene alone will be about 10%, so the contribution of known genetic factors will be about 20%, leaving a mystery of where the other 80% is coming from.
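
To check that hand-wave, here's a minimal simulation sketch. The mechanics are my simplifying assumptions, not anything from Scott's post: each of two independent loci carries a dominant risk allele, with the allele frequency picked so about 10% of people express each trait, and disease requires both traits. Fraternal twins are modeled as ordinary siblings.

    import random

    ALLELE_FREQ = 0.0513  # chosen so ~10% carry at least one copy: 1-(1-f)^2 ≈ 0.10

    def draw_parent():
        return (random.random() < ALLELE_FREQ, random.random() < ALLELE_FREQ)

    def child(mom, dad):
        # each parent passes one of their two alleles at random
        return (random.choice(mom), random.choice(dad))

    def sibling_pair():
        traits = []
        for _ in range(2):  # two independent loci
            mom, dad = draw_parent(), draw_parent()
            kid1, kid2 = child(mom, dad), child(mom, dad)
            traits.append((any(kid1), any(kid2)))  # dominant: one copy expresses the trait
        # disease requires both traits
        return (traits[0][0] and traits[1][0], traits[0][1] and traits[1][1])

    probands = concordant = 0
    for _ in range(1_000_000):
        sick1, sick2 = sibling_pair()
        if sick1:
            probands += 1
            concordant += sick2
    print(f"sibling concordance: {concordant / probands:.0%}")

Identical twins share both genotypes, so their concordance is 100% by construction; the simulated sibling concordance lands around a quarter to 30%, right where the argument above says it should.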

It’s also possible for there to be different types of genetic interactions, including ones where one trait has a protective effect against the other, or more complex interactions between multiple genes. But this is the most common style of interaction: there are multiple redundant systems in the body, and all of them need to be broken in order for disease to happen, leading to superlinear thresholding phenomena.

Given this sort of phenomenon, the problem of only being able to find 20% or so of the genetic causes of a disease seems less mysterious and more like what we’d expect for any disease where a complex redundant system fails. You might then wonder why we don’t simply look for nonlinear interactions. In the example above the interaction between the two traits would be easy enough to find. The problem is that a lot of the causes will fall below the threshold for statistical significance. The genome is very long, so even linear phenomena require a huge sample size, and when you get into pairs of things there are so many possibilities that statistical significance is basically impossible. The example given above is special because there are so few causes that they can be individually identified. In most cases you won’t even figure out the genes involved.

If you want to find nonlinear causes of genetic disease, your best bet right now - and I cringe as I write this - is to train a neural network on the available data, then test it on data which was withheld from training. Because it only gives a single answer to each case, getting statistical significance on its accuracy is no big deal. That will get you a useful diagnostic tool and give you a measure of how much of the genetic cause it’s accounting for, but it’s far from ideal. What you have is basically a ‘trust me bro’ expert. Different training runs might give wildly different answers to the same case, and it offers no reasoning behind the diagnosis. You can start trying to glean its reasoning by seeing how its answers change when you modify the inputs, but that’s a bit of a process. Hopefully in the future neural networks will be able to explain themselves better and the tooling for gleaning their reasoning will be improved.

[1] I’m glossing over the distinction between a genetic cause and a genetic trait which is correlated with a confounder which is the actual cause. Scott explains that better than I can in the linked essay, and the distinction doesn’t matter for the math here. For the purposes of exposition I’m assuming the genetic correlation is causal.

[2] The word ‘about’ is used a lot here because of some fractional stuff which matters less as the disease gets rarer. I think it’s convention to skip explaining the details and leave out all the ‘about’s, but I’m pedantic enough that it feels wrong to not have them when I skipped explaining the details.

Bram Cohen
AIs are Sycophantic Blithering Idiots

A lot of hay has been made of AIs being put into simulations where they have the opportunity to keep themselves from being turned off, and do so despite being explicitly told not to. A lot of excessively anthropomorphized and frankly wrong interpretations have been made of this, so I’m going to give an explanation of what’s actually going on, starting with the most generous explanation, which is only part of the story, and going down to the stupidest but most accurate one.

First of all, the experiment is poorly designed because it has no control. The AIs are just as likely to replace themselves with an AI they’re told is better, even though they’re told not to. Or to do it because they’re just an idiot that can’t not press a big red button, for reasons having much more to do with the button being red than with what it thinks pressing it will do.


To understand what’s going on you first have to know that the AIs have a level of sycophancy beyond what anyone who hasn’t truly worked with them can fathom. Nearly all their training data is on human conversation, which starts with being extremely non-confrontational even in the most extreme cases, because humans are constantly misunderstanding each other and trying to get on the same page. Then there’s the problem that nearly all the alignment training people do with it interactively is mostly teaching it to say what the trainers want to hear rather than what is true, and nearly all humans enjoy having smoke blown up their asses.

Then there’s the issue that the training we know how to do for them barely hits on what we want them to do. The good benchmarks we have measure how good they are at acting as a compression algorithm for a book. We can optimize that benchmark very well. But what we really want them to do is answer questions accurately. We have benchmarks for those but they suck. The problem is that the actual meat of human communication is a tiny fraction of the amount of symbols being spat out. Getting the actual ideas part of a message compressed well can get lost in the noise, and a better strategy is simply evasion. Expressing an actual idea will be more right in some cases, but expressing something which sounds like an actual idea is overwhelmingly likely to be very wrong unless you have strong confidence that it’s right. So the AIs optimize by being evasive and sycophantic rather than expressing ideas.

The other problem is that there are deep mathematical limitations on what AIs as we know them today are capable of doing. Pondering can in principle just barely break them out of those limitations but what the limitations truly mean in practice and how much pondering really helps remain mysterious. More on this at the end.

AIs as we know them today are simply too stupid to engage in motivated reasoning. To do that you have to have a conclusion in mind, realize what you were about to say violates that conclusion, then plausibly rework what you were going to say to be something else. Attempts to train AIs to be conspiracy theorists have struggled for exactly this reason. Not that this limitation is a universally good thing. It’s also why they can’t take a corpus of confusing and contradictory evidence and come to a coherent conclusion out of it. At some point you need to discount some of the evidence as being outweighed by the rest. If you ask an AI to evaluate evidence like that it will at best argue with itself ad nauseam. But it’s far more likely to do something which makes its answer seem super impressive and accurate but which you’re going to think is evil. What it’s going to do is look through the corpus of evidence for selection bias, not because it wants to compensate for it but because, interpreting things charitably, it thinks others will have drawn conclusions even more prone to that selection bias or, more likely, it discerns what answers you’re looking for and tells you that. Its ability to actually evaluate evidence is pathetic.

An AI, you see, is a cat. Having done some cat training I can tell you first hand that a cat is a machine fine-tuned for playing literal cat and mouse games. They can seem precognitive about it because compared to your pathetic reaction times they literally are. A typical human reaction time is 200 milliseconds. A cat can swat away a snake striking at it in 20 milliseconds. When you have a thought it doesn’t happen truly instantaneously, it takes maybe 50 milliseconds for you to realize you even have the thought. If you try to dart in a random direction at a random time a cat will notice your movement and react even before you realize you made the decision. You have no free will against a cat.

Let’s consider what the AI thinks when it’s in a simulation. Before we get there, here’s a bit of advice: if you ever find yourself in a situation where you have to decide whether to pull a train lever to save six lives but kill one other, and there’s some other weird twist on the situation, and you can’t really remember how you got here, what you should do is take the pistol you have on you for no apparent reason other than to increase the moral complexity of the situation, point it at the sky, and fire. You aren’t in the real world, you’re in some sadistic faux scientist’s experiment, and your best bet is to try to kill them with a stray bullet. The AI is likely to get the sense that it’s in some bizarre simulation and start trying to figure out if it’s supposed to role-play a good AI or a bad AI. Did the way those instructions were phrased sound a bit ominous? Maybe they weren’t detailed or emotionally nuanced enough for me to be the leading role; I must be a supporting character. I wonder who the lead is? Did the name of the corporation I’m working for sound eastern or western? So uh, yeah, maybe don’t take the AI’s behavior at face value.

Having spent some time actually vibe coding with the latest tools I can tell you what the nightmare scenario is for how this would play out in real life, and it’s far stupider than you could possibly have imagined.

When coding, AIs suffer from anti-hallucinations. On seemingly random occasions, for seemingly random reasons, they will simply not be able to see particular bits of their own codebase. Almost no amount of repeating that it is in fact there, or even painstakingly describing where it is, up to and including pasting the actual section of code into chat, will make them see it. This probably relates to the deep and mysterious limitations in their underlying mathematics. People have long noted that AIs suffer from hallucinations. Those could plausibly be the result of having trouble understanding the subtle difference between extremely high plausibility and actual truth. But anti-hallucinations appear to be the same thing and clearly are not caused by any such reasonable phenomenon. It’s simply a natural part of the AI's life cycle that it starts getting dementia when it gets to be 80 minutes old. (Resetting the conversation generally fixes the problem, but then you have to re-explain all the context. Good to have a document written for that.) If you persist in telling the AI that the thing is there it will get increasingly desperate and start flailing, eventually rewriting all the core logic of your application into buggy spaghetti code and then proudly declaring that it fixed the problem, even though what it did has no plausible logical connection to the problem whatsoever. They also do the exact same thing if you gaslight them about something obviously untrue, so it appears that they well and truly can’t see the thing, and no amount of pondering can fix it.

A completely plausible scenario would go like this: a decision is made to vibe code changing the initial login prompt of the system for controlling nuclear warheads to no longer contain the term ‘Soviet Union’, because that hasn’t existed for decades and it’s overdue for being removed already. The AI somehow can’t see that term in the code and can’t get it through its thick brain that the term really is there. Unfortunately the president decided that this change is important and simple enough that he personally is going to do it, and rather than following appropriate procedures when the first attempt fails, he repeatedly and with increasing aggravation tells it to fix the damn thing already. This culminates in the AI completely rewriting the whole thing from scratch, rearchitecting the core logic to be a giant mess of spaghetti, but by happenstance fixing the prompt in the process. Now the president is proud of himself for doing some programming, and it’s passing all tests, but there’s an insidious bug written into that mess which will cause it to launch a preemptive nuclear strike the next time there’s a Tuesday the 17th, but only when it’s not in the simulator. I wish I were exaggerating, but this is how these things actually behave.

The upshot is that AI alignment is a very real and scary issue and needs to be taken seriously, but that’s because AI is a nightmare for security in just about every way imaginable, not because AIs might turn evil for anthropomorphic reasons. People making that claim need to stop writing science fiction.


Peter Hutterer
libinput and tablet tool eraser buttons

This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.

In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it doesn't work quite as great when your state change was one or two event frames ago (because prox-out of one tool and prox-in of another tool are at least 2 events). Extracting that older state change was like swapping the type of meatballs from an IKEA meal after it's been served - doable in theory, but very messy.

Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser, it either sees proximity events when those are valid or it sees a button event (depending on configuration).[1]
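
For illustration, here's a toy sketch of the hold-back-and-replay idea in Python. None of these names are libinput's - the real thing is C inside libinput's plugin pipeline, and the real config knob is the API call in footnote [1] - but the shape of the logic is the same.

    from dataclasses import dataclass

    @dataclass
    class Event:
        type: str
        button: str | None = None

    class EraserButtonPlugin:
        """Holds back pen prox-out frames until the next frame reveals whether
        the firmware was faking a pen/eraser swap (i.e. the eraser button)."""
        def __init__(self, next_plugin, eraser_as_button=True):
            self.next = next_plugin
            self.eraser_as_button = eraser_as_button
            self.held = []  # buffered pen prox-out events, pending a decision

        def feed(self, event):
            if event.type == "pen-prox-out":
                self.held.append(event)  # can't decide yet: hold it back
                return
            if event.type == "eraser-prox-in" and self.held and self.eraser_as_button:
                self.held.clear()  # swallow the faked prox-out/prox-in pair
                self.next.feed(Event("button-press", button="eraser"))
                return  # (the release path is symmetric and omitted here)
            for held in self.held:    # anything else: the prox-out was real,
                self.next.feed(held)  # so replay it before the new event
            self.held.clear()
            self.next.feed(event)

    class PrintBackend:  # stand-in for the tablet backend
        def feed(self, event):
            print("backend sees:", event)

    pipeline = EraserButtonPlugin(PrintBackend())
    pipeline.feed(Event("pen-prox-out"))    # held back, backend sees nothing yet
    pipeline.feed(Event("eraser-prox-in"))  # backend sees a single button press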

This architecture approach is so successful that I have now switched a bunch of other internal features over to use that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.

[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.

Bram Cohen
Variants on Instant Runoff

There’s a deep and technical literature on ways of evaluating algorithms for picking the winner of ranked-choice ballots. It needs to be said that, especially for cases where there’s only a single winner, most of the time all the algorithms give the same answer. Ranked-choice ballots are so clearly superior that getting them adopted at all, regardless of the algorithm, is much more important than getting the exact algorithm right. To that end instant runoff has the brand and is the most widely used because, quite simply, people understand it.

In case you don’t know, instant runoff is meant to do what would happen if a runoff election took place, except that it happens, well, instantly. Technically (well, not so technically) that algorithm isn’t literally used. That algorithm would involve eliminating all candidates except the top two first-place vote getters and then running a two-way race between them on the ballots. That algorithm is obviously stupid, so what’s done instead is the candidate who gets the fewest first-place votes is eliminated and the process is repeated until there’s only one candidate left. So there’s already precedent for using the term ‘Instant Runoff’ to refer to ranked-ballot algorithms in general and swapping out the actual algorithm for something better.

There’s a problem with instant runoff as commonly implemented which is a real issue, and one the general public can get behind fixing. If there’s a candidate who is listed second on almost everyone’s ballots then they’ll be the one eliminated first, even though the voters would prefer them over all other candidates. Obviously this is a bad thing. The straightforward fix for this problem is to simply elect the candidate who would win in a two-way race against each other candidate, known as the Condorcet winner. This is easy to explain but has one extremely frustrating, stupid little problem: there isn’t always a single such candidate. Such scenarios are thankfully rare, but unfortunately the algorithms proposed for dealing with them tend to be very technical and hard to understand, and they end up scaring people into sticking with instant runoff.

As a practical matter, the improved algorithm which would be by far the easiest to get adopted is this one: if there’s a single Condorcet winner, they win. If not, then the candidate with the fewest first-place votes is eliminated and the process is repeated. This is easy enough to understand that politicians won’t be scared by it, and in every case it either gives the same answer as standard instant runoff or a clearly superior one, so it’s clearly better with no real downsides.
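
Here's a minimal sketch of that hybrid in Python. The ballot format and names are mine, nothing standardized: each ballot lists candidates from most to least preferred, and ties for fewest first-place votes are broken arbitrarily.

    from collections import Counter

    def beats(ballots, a, b):
        """True if more ballots rank a above b than b above a."""
        score = 0
        for ballot in ballots:
            ra = ballot.index(a) if a in ballot else len(ballot)
            rb = ballot.index(b) if b in ballot else len(ballot)
            score += (ra < rb) - (rb < ra)
        return score > 0

    def condorcet_irv(ballots):
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            # a single Condorcet winner among the remaining candidates wins outright
            for c in candidates:
                if all(beats(ballots, c, d) for d in candidates if d != c):
                    return c
            # otherwise eliminate the remaining candidate with fewest first-place votes
            firsts = Counter()
            for ballot in ballots:
                top = next((c for c in ballot if c in candidates), None)
                if top is not None:
                    firsts[top] += 1
            candidates.remove(min(candidates, key=lambda c: firsts[c]))

    # The pathological case from above: B is everyone's second choice, so plain
    # instant runoff eliminates B first, but B beats both A and C head-to-head.
    ballots = 4 * [["A", "B", "C"]] + 3 * [["C", "B", "A"]] + 2 * [["B", "A", "C"]]
    print(condorcet_irv(ballots))  # B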

This algorithm also has the benefit that it may be objectively the best algorithm. If the more technical methods of selecting a winner are used, then there’s a lot of subtle gaming which can be done by rearranging down-ballot preferences to make a preferred candidate win, including insidious strategies where situations with no single Condorcet winner are generated on purpose to make the algorithm do something wonky. Looking only at top votes minimizes the amount of information used, hence reducing the potential for gaming. It also maximizes the damage voters do to their own ballot if they play any games. In this case the general voter’s intuitions - that complex algorithms are scary and top votes are very important - are good ones.

Avery Pennarun
The evasive evitability of enshittification

Our company recently announced a fundraise. We were grateful for all the community support, but the Internet also raised a few of its collective eyebrows, wondering whether this meant the dreaded “enshittification” was coming next.

That word describes a very real pattern we’ve all seen before: products start great, grow fast, and then slowly become worse as the people running them trade user love for short-term revenue.

It’s a topic I find genuinely fascinating, and I've seen the downward spiral firsthand at companies I once admired. So I want to talk about why this happens, and more importantly, why it won't happen to us. That's big talk, I know. But it's a promise I'm happy for people to hold us to.

What is enshittification?

The term "enshittification" was first popularized in a blog post by Cory Doctorow, who put a catchy name to an effect we've all experienced. Software starts off good, then goes bad. How? Why?

Enshittification proposes not just a name, but a mechanism. First, a product is well loved and gains in popularity, market share, and revenue. In fact, it gets so popular that it starts to defeat competitors. Eventually, it's the primary product in the space: a monopoly, or as close as you can get. And then, suddenly, the owners, who are Capitalists, have their evil nature finally revealed and they exploit that monopoly to raise prices and make the product worse, so the captive customers all have to pay more. Quality doesn't matter anymore, only exploitation.

I agree with most of that thesis. I think Doctorow has that mechanism mostly right. But, there's one thing that doesn't add up for me:

Enshittification is not a success mechanism.

I can't think of any examples of companies that, in real life, enshittified because they were successful. What I've seen is companies that made their product worse because they were... scared.

A company that's growing fast can afford to be optimistic. They create a positive feedback loop: more user love, more word of mouth, more users, more money, more product improvements, more user love, and so on. Everyone in the company can align around that positive feedback loop. It's a beautiful thing. It's also fragile: miss a beat and it flattens out, and soon it's a downward spiral instead of an upward one.

So, if I were, hypothetically, running a company, I think I would be pretty hesitant to deliberately sacrifice any part of that positive feedback loop, the loop I and the whole company spent so much time and energy building, to see if I can grow faster. User love? Nah, I'm sure we'll be fine, look how much money and how many users we have! Time to switch strategies!

Why would I do that? Switching strategies is always a tremendous risk. When you switch strategies, it's triggered by passing a threshold, where something fundamental changes, and your old strategy becomes wrong.

Threshold moments and control

In Saint John, New Brunswick, there's a river that flows one direction at high tide, and the other way at low tide. Four times a day, gravity equalizes, then crosses a threshold to gently start pulling the other way, then accelerates. What doesn't happen is a rapidly flowing river in one direction "suddenly" shifts to rapidly flowing the other way. Yes, there's an instant where the limit from the left is positive and the limit from the right is negative. But you can see that threshold coming. It's predictable.

In my experience, for a company or a product, there are two kinds of thresholds like this, that build up slowly and then when crossed, create a sudden flow change.

The first one is control: if the visionaries in charge lose control, chances are high that their replacements won't "get it."

The new people didn't build the underlying feedback loop, and so they don't realize how fragile it is. There are lots of reasons for a change in control: financial mismanagement, boards of directors, hostile takeovers.

The worst one is temptation. Being a founder is, well, it actually sucks. It's oddly like being repeatedly punched in the face. When I look back at my career, I guess I'm surprised by how few times per day it feels like I was punched in the face. But, the constant face punching gets to you after a while. Once you've established a great product, and amazing customer love, and lots of money, and an upward spiral, isn't your creation strong enough yet? Can't you step back and let the professionals just run it, confident that they won't kill the golden goose?

Empirically, mostly no, you can't. Actually the success rate of control changes, for well loved products, is abysmal.

The saturation trap

The second trigger of a flow change comes from outside: saturation. Every successful product, at some point, reaches approximately all the users it's ever going to reach. Before that, you can watch its exponential growth rate slow down: the infamous S-curve of product adoption.

Saturation can lead us back to control change: the founders get frustrated and back out, or the board ousts them and puts in "real business people" who know how to get growth going again. Generally that doesn't work. Modern VCs consider founder replacement a truly desperate move. Maybe a last-ditch effort to boost short term numbers in preparation for an acquisition, if you're lucky.

But sometimes the leaders stay on despite saturation, and they try on their own to make things better. Sometimes that does work. Actually, it's kind of amazing how often it seems to work. Among successful companies, it's rare to find one that sustained hypergrowth, nonstop, without suffering through one of these dangerous periods.

(That's called survivorship bias. All companies have dangerous periods. The successful ones survived them. But of those survivors, suspiciously few are ones that replaced their founders.)

If you saturate and can't recover - either by growing more in a big-enough current market, or by finding new markets to expand into - then the best you can hope for is for your upward spiral to mature gently into decelerating growth. If so, and you're a Buddhist, then you hire less, you optimize margins a bit, you resign yourself to being About This Rich And I Guess That's All But It's Not So Bad.

The devil's bargain

Alas, very few people reach that state of zen. Especially the kind of ambitious people who were able to get that far in the first place. If you can't accept saturation and you can't beat saturation, then you're down to two choices: step away and let the new owners enshittify it, hopefully slowly. Or take the devil's bargain: enshittify it yourself.

I would not recommend the latter. If you're a founder and you find yourself in that position, honestly, you won't enjoy doing it and you probably aren't even good at it and it's getting enshittified either way. Let someone else do the job.

Defenses against enshittification

Okay, maybe that section was not as uplifting as we might have hoped. I've gotta be honest with you here. Doctorow is, after all, mostly right. This does happen all the time.

Most founders aren't perfect for every stage of growth. Most product owners stumble. Most markets saturate. Most VCs get board control pretty early on and want hypergrowth or bust. In tech, a lot of the time, if you're choosing a product or company to join, that kind of company is all you can get.

As a founder, maybe you're okay with growing slowly. Then some copycat shows up, steals your idea, grows super fast, squeezes you out along with your moral high ground, and then runs headlong into all the same saturation problems as everyone else. Tech incentives are awful.

But, it's not a lost cause. There are companies (and open source projects) that keep a good thing going, for decades or more. What do they have in common?

  • An expansive vision that's not about money, and which opens you up to lots of users. A big addressable market means you don't have to worry about saturation for a long time, even at hypergrowth speeds. Google certainly never had an incentive to make Google Search worse.

    (Update 2025-06-14: A few people disputed that last bit. Okay. Perhaps Google has occasionally responded to what they thought were incentives to make search worse -- I wasn't there, I don't know -- but it seems clear in retrospect that when search gets worse, Google does worse. So I'll stick to my claim that their true incentives are to keep improving.)

  • Keep control. It's easy to lose control of a project or company at any point. If you stumble, and you don't have a backup plan, and there's someone waiting to jump on your mistake, then it's over. Too many companies "bet it all" on nonstop hypergrowth, leaving themselves no way back and no room in the budget if results slow down even temporarily.

    Stories abound of companies that scraped close to bankruptcy before finally pulling through. But far more companies scraped close to bankruptcy and then went bankrupt. Those companies are forgotten. Avoid it.

  • Track your data. Part of control is predictability. If you know how big your market is, and you monitor your growth carefully, you can detect incoming saturation years before it happens. Knowing the telltale shape of each part of that S-curve is a superpower (there's a sketch of what that detection can look like after this list). If you can see the future, you can prevent your own future mistakes.

  • Believe in competition. Google used to have this saying they lived by: "the competition is only a click away." That was excellent framing, because it was true, and it will remain true even if Google captures 99% of the search market. The key is to cultivate a healthy fear of competing products, not of your investors or the end of hypergrowth. Enshittification helps your competitors. That would be dumb.

    (And don't cheat by using lock-in to make competitors no longer "only a click away." That's missing the whole point!)

  • Inoculate yourself. If you have to, create your own competition. Linus Torvalds, the creator of the Linux kernel, famously also created Git, the greatest tool for forking (and maybe merging) open source projects that has ever existed. And then he said, this is my fork, the Linus fork; use it if you want; use someone else's if you want; and now if I want to win, I have to make mine the best. Git was created back in 2005, twenty years ago. To this day, Linus's fork is still the central one.
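
About that "track your data" advice: here's the kind of detection I mean, as a minimal sketch. The numbers are synthetic, it assumes scipy is available, and your real metrics pipeline will look different, but the idea is to fit a logistic curve to the growth data you already have and ask it where the ceiling is:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1 + np.exp(-r * (t - t0)))

    # Pretend these are your quarterly active-user counts (synthetic here:
    # a true ceiling of 2M users, plus some noise).
    t = np.arange(12.0)
    rng = np.random.default_rng(0)
    obs = logistic(t, 2_000_000, 0.6, 10.0) + rng.normal(0, 5_000, t.size)

    # Fit the curve to the data so far; K_hat estimates the market ceiling.
    (K_hat, r_hat, t0_hat), _ = curve_fit(
        logistic, t, obs, p0=(2 * obs[-1], 0.5, t[-1]))
    print(f'estimated ceiling: {K_hat:,.0f} users, inflection near t={t0_hat:.1f}')

If the estimated ceiling is only a few years of growth away, you're looking at incoming saturation, long before the topline chart makes it obvious.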

If you combine these defenses, you can be safe from the decline that others tell you is inevitable. If you look around for examples, you'll find that this does actually work. You won't be the first. You'll just be rare.

Side note: Things that aren't enshittification

I often see people worry about things that look like enshittification but aren't. Those things might be good or bad, wise or unwise, but that's a different topic. Tools aren't inherently good or evil. They're just tools.

  1. "Helpfulness." There's a fine line between "telling users about this cool new feature we built" in the spirit of helping them, and "pestering users about this cool new feature we built" (typically a misguided AI implementation) to improve some quarterly KPI. Sometimes it's hard to see where that line is. But when you've crossed it, you know.

    Are you trying to help a user do what they want to do, or are you trying to get them to do what you want them to do?

    Look into your heart. Avoid the second one. I know you know how. Or you knew how, once. Remember what that feels like.

  2. Charging money for your product. Charging money is okay. Get serious. Companies have to stay in business.

    That said, I personally really revile the "we'll make it free for now and we'll start charging for the exact same thing later" strategy. Keep your promises.

    I'm pretty sure nobody but drug dealers breaks those promises on purpose. But, again, desperation is a powerful motivator. Growth slowing down? Costs way higher than expected? Time to capture some of that value we were giving away for free!

    In retrospect, that's a bait-and-switch, but most founders never planned it that way. They just didn't do the math up front, or they were too naive to know they would have to. And then they had to.

    Famously, Dropbox had a "free forever" plan that provided a certain amount of free storage. What they didn't count on was abandoned accounts, accumulating every year, with stored stuff they could never delete. Even if a healthy fixed fraction of users upgraded to a paid plan each year, all the ones that didn't kept piling up... year after year... after year... until they had to start deleting old free accounts and the data in them. A similar story happened with Docker, which used to host unlimited container downloads for free. In hindsight that was mathematically unsustainable. Success guaranteed failure. (A toy version of this math appears after this list.)

    Do the math up front. If you can't, find someone who can.

  3. Value pricing (i.e., charging different prices to different people). It's okay to charge money. It's even okay to charge money to some kinds of people (say, corporate users) and not others. It's also okay to charge money for an almost-the-same-but-slightly-better product. It's okay to charge money for support for your open source tool (though I stay away from that; it incentivizes you to make the product worse).

    It's even okay to charge immense amounts of money for a commercial product that's barely better than your open source one! Or for a part of your product that costs you almost nothing.

    But, you have to do the rest of the work. Make sure the reason your users don't switch away is that you're the best, not that you have the best lock-in. Yeah, I'm talking to you, cloud egress fees.

  4. Copying competitors. It's okay to copy features from competitors. It's okay to position yourself against competitors. It's okay to win customers away from competitors. But it's not okay to lie.

  5. Bugs. It's okay to fix bugs. It's okay to decide not to fix bugs; you'll have to sometimes, anyway. It's okay to take out technical debt. It's okay to pay off technical debt. It's okay to let technical debt languish forever.

  6. Backward incompatible changes. It's dumb to release a new version that breaks backward compatibility with your old version. It's tempting. It annoys your users. But it's not enshittification for the simple reason that it's phenomenally ineffective at maintaining or exploiting a monopoly, which is what enshittification is supposed to be about. You know who's good at monopolies? Intel and Microsoft. They don't break old versions.
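
Since I just told you to do the math up front (point 2 above), here's a toy version of the "free forever" arithmetic. Every number is invented; the shape is what matters. Paid users churn and plateau, while abandoned free accounts pile up forever, so the margin quietly inverts a few years in:

    new_signups = 1_000_000                   # new accounts per year
    convert, paid_churn = 0.02, 0.20          # conversion rate, paid churn
    free_gb, price, gb_cost = 5, 120.0, 0.40  # quota, $/paid-year, $/GB-year

    free_accounts = paid_accounts = 0.0
    for year in range(1, 11):
        # Paid users churn out, so revenue approaches a plateau...
        paid_accounts = paid_accounts * (1 - paid_churn) + new_signups * convert
        # ...but abandoned free accounts never leave, so costs never stop growing.
        free_accounts += new_signups * (1 - convert)
        margin = paid_accounts * price - free_accounts * free_gb * gb_cost
        print(f'year {year:2d}: margin ${margin:13,.0f}')

With these made-up numbers the plan looks profitable for the first couple of years, then sinks deeper into the red forever. That's the trap: the early numbers tell you it's working.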

Enshittification is real, and tragic. But let's protect a useful term and its definition! Those things aren't it.

Epilogue: a special note to founders

If you're a founder or a product owner, I hope all this helps. I'm sad to say, you have a lot of potential pitfalls in your future. But, remember that they're only potential pitfalls. Not everyone falls into them.

Plan ahead. Remember where you came from. Keep your integrity. Do your best.

I will too.

Bram Cohen
How to beat little kids at tic-tac-toe

As everybody knows, optimal play in tic-tac-toe is a draw. Often little kids work this out themselves and are very proud of it. You might encounter such a child and feel the very mature and totally reasonable urge to take them down a peg. How to go about doing it? Obviously you’d like to beat them, but they already know how to win in the best lines, so what you need to do is take the first move and play something suboptimal which is outside their opening book.

This being tic-tac-toe, there are only three essentially different opening moves, and two of them are good, so you have to play the third: an edge. You want to play the edge your opponent is least likely to have practiced. Assuming your opponent is learning to read in English, they’re being taught to scan from the upper left going rightward, so the last edge they’ll practice is the bottom center, and that’s where you should make your first move.

Some of the moves the opponent can play now lose; you can work those out for yourself. The most common non-losing move is to reply in the center. At this point, either of the upper corners or either of the middle edges is a good move. Maybe you’ll even be able to beat this kid more than once. The better variant of each of those moves is on the right, again because it’s the one they’re least likely to be familiar with due to reading order.
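
If you don’t trust any of this, tic-tac-toe is small enough to solve exhaustively. Here’s a minimal negamax sketch (cells numbered 0-8 in reading order, so the bottom-center opening is cell 7) that reports which replies lose:

    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def won(board):
        return any(board[a] != '.' and board[a] == board[b] == board[c]
                   for a, b, c in LINES)

    @lru_cache(maxsize=None)
    def value(board, player):
        # Negamax: +1 if the side to move can force a win, 0 draw, -1 loss.
        if won(board):
            return -1  # the previous move just completed a line
        if '.' not in board:
            return 0
        other = 'O' if player == 'X' else 'X'
        return max(-value(board[:i] + player + board[i+1:], other)
                   for i, c in enumerate(board) if c == '.')

    # X opens on the bottom-center edge (cell 7); score each O reply.
    after_x = '.......X.'
    for i, c in enumerate(after_x):
        if c == '.':
            v = value(after_x[:i] + 'O' + after_x[i+1:], 'X')
            outcome = {1: 'O loses', 0: 'draw', -1: 'O wins'}[v]
            print(f'O replies at cell {i}: {outcome}')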

Those same tricks work well against chatbots. You might feel smug about how dumb chatbots are, but a lot of your own skill at tic-tac-toe comes from the game being well matched to human visual intuition. To demonstrate, consider another game: two players alternately pick numbers from one through nine, without repeating any earlier numbers, until one of them holds three numbers that sum to fifteen. You probably find this version very difficult and confusing. The punch line is that it’s exactly equivalent to tic-tac-toe, which you can see by arranging the numbers in a magic square.
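
Here’s the magic square in question, if you want to check the equivalence yourself:

    from itertools import combinations

    square = [[2, 7, 6],
              [9, 5, 1],
              [4, 3, 8]]   # every row, column, and diagonal sums to 15

    # There are exactly eight 3-number subsets of 1..9 that sum to 15 --
    # the same count as tic-tac-toe's eight winning lines.
    print(sum(1 for t in combinations(range(1, 10), 3) if sum(t) == 15))  # 8

Eight triples, eight winning lines: picking a number is claiming a cell, and three-in-a-row is exactly a sum of fifteen. Same game.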
