The lower-post-volume people behind the software in Debian.

Bram Cohen
Manyana

I’m releasing Manyana, a project which I believe presents a coherent vision for the future of version control — and a compelling case for building it.

It’s based on the fundamentally sound approach of using CRDTs for version control, which is long overdue but hasn’t happened yet because of subtle UX issues. A CRDT merge always succeeds by definition, so there are no conflicts in the traditional sense — the key insight is that changes should be flagged as conflicting when they touch each other, giving you informative conflict presentation on top of a system which never actually fails. This project works that out.

Better conflict presentation

One immediate benefit is much more informative conflict markers. Two people branch from a file containing a function. One deletes the function. The other adds a line in the middle of it. A traditional VCS gives you this:

<<<<<<< left
=======
def calculate(x):
    a = x * 2
    logger.debug(f"a={a}")
    b = a + 1
    return b
>>>>>>> right

Two opaque blobs. You have to mentally reconstruct what actually happened.

Manyana gives you this:

<<<<<<< begin deleted left
def calculate(x):
    a = x * 2
======= begin added right
    logger.debug(f"a={a}")
======= begin deleted left
    b = a + 1
    return b
>>>>>>> end conflict

Each section tells you what happened and who did it. Left deleted the function. Right added a line in the middle. You can see the structure of the conflict instead of staring at two blobs trying to figure it out.

What CRDTs give you

CRDTs (Conflict-Free Replicated Data Types) give you eventual consistency: merges never fail, and the result is always the same no matter what order branches are merged in — including many branches mashed together by multiple people working independently. That one property turns out to have profound implications for every aspect of version control design.

Line ordering becomes permanent. When two branches insert code at the same point, the CRDT picks an ordering and it sticks. This prevents problems when conflicting sections are both kept but resolved in different orders on different branches.

Conflicts are informative, not blocking. The merge always produces a result. Conflicts are surfaced for review when concurrent edits happen “too near” each other, but they never block the merge itself. And because the algorithm tracks what each side did rather than just showing the two outcomes, the conflict presentation is genuinely useful.

History lives in the structure. The state is a weave — a single structure containing every line which has ever existed in the file, with metadata about when it was added and removed. This means merges don’t need to find a common ancestor or traverse the DAG. Two states go in, one state comes out, and it’s always correct.
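
To make that concrete, here is a minimal Python sketch of a weave (my illustration, not Manyana's actual code, and it elides the CRDT ordering rule for concurrent inserts): every line ever written stays in the structure with metadata about which edit added it and which edits removed it, and merging is a plain union.

from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Tuple

@dataclass(frozen=True)
class WeaveLine:
    line_id: Tuple[str, int]                   # unique, totally ordered id for the line
    text: str
    added_by: str                              # edit that introduced the line
    removed_by: FrozenSet[str] = frozenset()   # edits that deleted it, if any

@dataclass
class Weave:
    lines: Dict[Tuple[str, int], WeaveLine] = field(default_factory=dict)

    def merge(self, other: "Weave") -> "Weave":
        # Two states go in, one comes out: union the line records, union the
        # removal sets. No common ancestor, no DAG traversal.
        merged = dict(self.lines)
        for lid, theirs in other.lines.items():
            mine = merged.get(lid)
            if mine is None:
                merged[lid] = theirs
            else:
                merged[lid] = WeaveLine(lid, mine.text, mine.added_by,
                                        mine.removed_by | theirs.removed_by)
        return Weave(merged)

    def visible(self) -> List[str]:
        # Current file contents: every line that was never removed. A real weave
        # keeps lines in document order; sorting by id here is just a stand-in.
        return [ln.text for _, ln in sorted(self.lines.items())
                if not ln.removed_by]

Because the merge is just a union it is commutative and associative, which is where the any-order, many-branch property comes from.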

Rebase without the nightmare

One idea I’m particularly excited about: rebase doesn’t have to destroy history. Conventional rebase creates a fictional history where your commits happened on top of the latest main. In a CRDT system, you can get the same effect — replaying commits one at a time onto a new base — while keeping the full history. The only addition needed is a “primary ancestor” annotation in the DAG.

This matters because aggressive rebasing quickly produces merge topologies with no single common ancestor, which is exactly where traditional 3-way merge falls apart. CRDTs don’t care — the history is in the weave, not reconstructed from the DAG.
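
One way to read the "primary ancestor" idea (again my sketch, not the project's code): commits keep all of their real parents, and the annotation only says which parent a linear, rebased-looking view should follow.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class Commit:
    commit_id: str
    parents: Tuple[str, ...]               # full, honest history
    primary_parent: Optional[str] = None   # parent the linear view follows

def linear_history(commits: Dict[str, Commit], tip: str) -> List[str]:
    # Walk only primary ancestors: this is the clean "rebased" mainline,
    # while the real merge topology stays intact in `parents`.
    out: List[str] = []
    cur: Optional[str] = tip
    while cur is not None:
        out.append(cur)
        cur = commits[cur].primary_parent
    return out

However the replayed commits end up recording their parents, the full topology stays in the DAG and only the presentation changes.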

What this is and isn’t

Manyana is a demo, not a full-blown version control system. It’s about 470 lines of Python which operate on individual files. Cherry-picking and local undo aren’t implemented yet, though the README lays out a vision for how those can be done well.

What it is is a proof that CRDT-based version control can handle the hard UX problems and come out with better answers than the tools we’re all using today — and a coherent design for building the real thing.

The code is public domain. The full design document is in the README.


Posted
Avery Pennarun
Every layer of review makes you 10x slower

We’ve all heard of those network effect laws: the value of a network goes up with the square of the number of members. Or the cost of communication goes up with the square of the number of members, or maybe it was n log n, or something like that, depending how you arrange the members. Anyway doubling a team doesn't double its speed; there’s coordination overhead. Exactly how much overhead depends on how badly you botch the org design.

But there’s one rule of thumb that someone showed me decades ago, that has stuck with me ever since, because of how annoyingly true it is. The rule is annoying because it doesn’t seem like it should be true. There’s no theoretical basis for this claim that I’ve ever heard. And yet, every time I look for it, there it is.

Here we go:

Every layer of approval makes a process 10x slower

I know what you're thinking. Come on, 10x? That’s a lot. It’s unfathomable. Surely we’re exaggerating.

Nope.

Just to be clear, we're counting “wall clock time” here rather than effort. Almost all the extra time is spent sitting and waiting.

Look:

  • Code a simple bug fix
    30 minutes

  • Get it code reviewed by the peer next to you
    300 minutes → 5 hours → half a day

  • Get a design doc approved by your architects team first
    50 hours → about a week

  • Get it on some other team’s calendar to do all that
    (for example, if a customer requests a feature)
    500 hours → 12 weeks → one fiscal quarter

I wish I could tell you that the next step up — 10 quarters or about 2.5 years — was too crazy to contemplate, but no. That’s the life of an executive sitting above a medium-sized team; I bump into it all the time even at a relatively small company like Tailscale if I want to change product direction. (And execs sitting above large teams can’t actually do work of their own at all. That's another story.)

AI can’t fix this

First of all, this isn’t a post about AI, because AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

Now now, you say, that’s not the value of agentic coding. You don’t use an agent on a 30-minute fix. You use it on a monstrosity week-long project that you and Claude can now do in a couple of hours! Now we’re talking. Except no, because the monstrosity is so big that your reviewer will be extra mad that you didn’t read it yourself, and it’s too big to review in one chunk so you have to slice it into new bite-sized chunks, each with a 5-hour review cycle. And there’s no design doc so there’s no intentional architecture, so eventually someone’s going to push back on that and here we go with the design doc review meeting, and now your monstrosity week-long project that you did in two hours is... oh. A week, again.

I guess I could have called this post Systems Design 4 (or 5, or whatever I’m up to now, who knows, I’m writing this on a plane with no wifi) because yeah, you guessed it. It's Systems Design time again.

The only way to sustainably go faster is fewer reviews

It’s funny, everyone has been predicting the Singularity for decades now. The premise is we build systems that are so smart that they themselves can build the next system that is even smarter, that builds the next smarter one, and so on, and once we get that started, if they keep getting smarter faster enough, then the incremental time (t) to achieve a unit (u) of improvement goes to zero, so (u/t) goes to infinity and foom.

Anyway, I have never believed in this theory for the simple reason we outlined above: the majority of time needed to get anything done is not actually the time doing it. It’s wall clock time. Waiting. Latency.

And you can’t overcome latency with brute force.

I know you want to. I know many of you now work at companies where the business model kinda depends on doing exactly that.

Sorry.

But you can’t just not review things!

Ah, well, no, actually yeah. You really can’t.

There are now many people who have seen the symptom: the start of the pipeline (AI generated code) is so much faster, but all the subsequent stages (reviews) are too slow! And so they intuit the obvious solution: stop reviewing then!

The result might be slop, but if the slop is 100x cheaper, then it only needs to deliver 1% of the value per unit and it's still a fair trade. And if your value per unit is even a mere 2% of what it used to be, you’ve doubled your returns! Amazing.

There are some pretty dumb assumptions underlying that theory; you can imagine them for yourself. Suffice it to say that this produces what I will call the AI Developer’s Descent Into Madness:

  1. Whoa, I produced this prototype so fast! I have super powers!

  2. This prototype is getting buggy. I’ll tell the AI to fix the bugs.

  3. Hmm, every change now causes as many new bugs as it fixes.

  4. Aha! But if I have an AI agent also review the code, it can find its own bugs!

  5. Wait, why am I personally passing data back and forth between agents

  6. I need an agent framework

  7. I can have my agent write an agent framework!

  8. Return to step 1

It’s actually alarming how many friends and respected peers I’ve lost to this cycle already. Claude Code only got good maybe a few months ago, so this only recently started happening, and I assume they will emerge from the spiral eventually. I mean, I hope they will. We have no way of knowing.

Why we review

Anyway we know our symptom: the pipeline gets jammed up because of too much new code spewed into it at step 1. But what's the root cause of the clog? Why doesn’t the pipeline go faster?

I said above that this isn’t an article about AI. Clearly I’m failing at that so far, but let’s bring it back to humans. It goes back to the annoyingly true observation I started with: every layer of review is 10x slower. As a society, we know this. Maybe you haven't seen it before now. But trust me: people who do org design for a living know that layers are expensive... and they still do it.

As companies grow, they all end up with more and more layers of collaboration, review, and management. Why? Because otherwise mistakes get made, and mistakes are increasingly expensive at scale. The average value added by a new feature eventually becomes lower than the average value lost through the new bugs it causes. So, lacking a way to make features produce more value (wouldn't that be nice!), we try to at least reduce the damage.

The more checks and controls we put in place, the slower we go, but the more monotonically the quality increases. And isn’t that the basis of continuous improvement?

Well, sort of. Monotonically increasing quality is on the right track. But “more checks and controls” went off the rails. That’s only one way to improve quality, and it's a fraught one.

“Quality Assurance” reduces quality

I wrote a few years ago about W. E. Deming and the "new" philosophy around quality that he popularized in Japanese auto manufacturing. (Eventually U.S. auto manufacturers more or less got the idea. So far the software industry hasn’t.)

One of the effects he highlighted was the problem of a “QA” pass in a factory: build widgets, have an inspection/QA phase, reject widgets that fail QA. Of course, your inspectors probably miss some of the failures, so when in doubt, add a second QA phase after the first to catch the remaining ones, and so on.

In a simplistic mathematical model this seems to make sense. (For example, if every QA pass catches 90% of defects, then after two QA passes you’ve reduced the number of defects by 100x. How awesome is that?)

But in the reality of agentic humans, it’s not so simple. First of all, the incentives get weird. The second QA team basically serves to evaluate how well the first QA team is doing; if the first QA team keeps missing defects, fire them. Now, that second QA team has little incentive to produce that outcome for their friends. So maybe they don’t look too hard; after all, the first QA team missed the defect, it’s not unreasonable that we might miss it too.

Furthermore, the first QA team knows there is a second QA team to catch any defects; if I don’t work too hard today, surely the second team will pick up the slack. That's why they're there!

Also, the team making the widgets in the first place doesn’t check their work too carefully; that’s what the QA team is for! Why would I slow down the production of every widget by being careful, at a cost of say 20% more time, when there are only 10 defects in 100 and I can just eliminate them at the next step for only a 10% waste overhead? It only makes sense. Plus they'll fire me if I go 20% slower.

To say nothing of a whole engineering redesign to improve quality, that would be super expensive and we could be designing all new widgets instead.

Sound like any engineering departments you know?

Well, this isn’t the right time to rehash Deming, but suffice it to say, he was on to something. And his techniques worked. You get things like the famous Toyota Production System where they eliminated the QA phase entirely, but gave everybody an “oh crap, stop the line, I found a defect!” button.

Famously, US auto manufacturers tried to adopt the same system by installing the same “stop the line” buttons. Of course, nobody pushed those buttons. They were afraid of getting fired.

Trust

The basis of the Japanese system that worked, and the missing part of the American system that didn’t, is trust. Trust among individuals that your boss Really Truly Actually wants to know about every defect, and wants you to stop the line when you find one. Trust among managers that executives were serious about quality. Trust among executives that individuals, given a system that can work and has the right incentives, will produce quality work and spot their own defects, and push the stop button when they need to push it.

But, one more thing: trust that the system actually does work. So first you need a system that will work.

Fallibility

AI coders are fallible; they write bad code, often. In this way, they are just like human programmers.

Deming’s approach to manufacturing didn’t have any magic bullets. Alas, you can’t just follow his ten-step process and immediately get higher quality engineering. The secret is, you have to get your engineers to engineer higher quality into the whole system, from top to bottom, repeatedly. Continuously.

Every time something goes wrong, you have to ask, “How did this happen?” and then do a whole post-mortem and the Five Whys (or however many Whys are in fashion nowadays) and fix the underlying Root Causes so that it doesn’t happen again. “The coder did it wrong” is never a root cause, only a symptom. Why was it possible for the coder to get it wrong?

The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore.

(Think of the people who first created "go fmt" and how many stupid code review comments about whitespace are gone forever. Now that's engineering.)

By the time your review catches a mistake, the mistake has already been made. The root cause happened already. You're too late.

Modularity

I wish I could tell you I had all the answers. Actually I don’t have much. If I did, I’d be first in line for the Singularity because it sounds kind of awesome.

I think we’re going to be stuck with these systems pipeline problems for a long time. Review pipelines — layers of QA — don’t work. Instead, they make you slower while hiding root causes. Hiding causes makes them harder to fix.

But, the call of AI coding is strong. That first, fast step in the pipeline is so fast! It really does feel like having super powers. I want more super powers. What are we going to do about it?

Maybe we finally have a compelling enough excuse to fix the 20 years of problems hidden by code review culture, and replace it with a real culture of quality.

I think the optimists have half of the right idea. Reducing review stages, even to an uncomfortable degree, is going to be needed. But you can’t just reduce review stages without something to replace them. That way lies the Ford Pinto or any recent Boeing aircraft.

The complete package, the table flip, was what Deming brought to manufacturing. You can’t half-adopt a “total quality” system. You need to eliminate the reviews and obsolete them, in one step.

How? You can fully adopt the new system, in small bites. What if some components of your system can be built the new way? Imagine an old-school U.S. auto manufacturer buying parts from Japanese suppliers; wow, these parts are so well made! Now I can start removing QA steps elsewhere because I can just assume the parts are going to work, and my job of "assemble a bigger widget from the parts" has a ton of its complexity removed.

I like this view. I’ve always liked small beautiful things, that’s my own bias. But, you can assemble big beautiful things from small beautiful things.

It’s a lot easier to build those individual beautiful things in small teams that trust each other, that know what quality looks like to them. They deliver their things to customer teams who can clearly explain what quality looks like to them. And on we go. Quality starts bottom-up, and spreads.

I think small startups are going to do really well in this new world, probably better than ever. Startups already have fewer layers of review just because they have fewer people. Some startups will figure out how to produce high quality components quickly; others won't and will fail. Quality by natural selection?

Bigger companies are gonna have a harder time, because their slow review systems are baked in, and deleting them would cause complete chaos.

But, it’s not just about company size. I think engineering teams at any company can get smaller, and have better defined interfaces between them.

Maybe you could have multiple teams inside a company competing to deliver the same component. Each one is just a few people and a few coding bots. Try it 100 ways and see who comes up with the best one. Again, quality by evolution. Code is cheap but good ideas are not. But now you can try out new ideas faster than ever.

Maybe we’ll see a new optimal point on the monoliths-microservices continuum. Microservices got a bad name because they were too micro; in the original terminology, a “micro” service was exactly the right size for a “two pizza team” to build and operate on their own. With AI, maybe it's one pizza and some tokens.

What’s fun is you can also use this new, faster coding to experiment with different module boundaries faster. Features are still hard for lots of reasons, but refactoring and automated integration testing are things the AIs excel at. Try splitting out a module you were afraid to split out before. Maybe it'll add some lines of code. But suddenly lines of code are cheap, compared to the coordination overhead of a bigger team maintaining both parts.

Every team has some monoliths that are a little too big, and too many layers of reviews. Maybe we won't get all the way to Singularity. But, we can engineer a much better world. Our problems are solvable.

It just takes trust.

Bram Cohen
AI thoughts

Since nobody reads to the end of my posts I’ll start this one with the actionable experiment:

Deep neural networks have a fundamental problem. The thing which makes them able to be trained also makes them susceptible to Manchurian Candidate type attacks where you say the right gibberish to them and it hijacks their brain to do whatever you want. They’re so deeply susceptible to this that it’s a miracle they do anything useful at all, but they clearly do and mostly people just pretend this problem is academic when using them in the wild even though the attacks actually work.

There’s a loophole to this which it might be possible to make reliable: thinking. If an LLM spends time talking to itself then it might be possible for it to react to a Manchurian Candidate attack by initially being hijacked but then going ‘Wait, what am I talking about?’ and pulling itself together before giving its final answer. This is a loophole because the final answer changes chaotically with early word selection so it can’t be back propagated over.

This is something which should explicitly be trained for. During training you can even cheat and directly inject adversarial state without finding a specific adversarial prompt which causes that state. You then get its immediate and post-thinking answers to multiple choice questions and use reinforcement learning to improve its accuracy. Make sure to also train on things where it gets the right answer immediately so you aren’t just training to always change its answer. LLMs are sneaky.
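
As a toy illustration of that training setup (mine, with hypothetical model hooks, not anything that exists today), the important part is the batch mix: adversarially perturbed and clean examples together, with only the post-thinking answer scored, so the policy can't do well by always changing its answer.

import random
from typing import Callable, Iterable, Iterator, Tuple

AnswerFn = Callable[[str, bool], str]   # hypothetical: (question, inject_adversarial) -> answer letter

def rl_episodes(
    examples: Iterable[Tuple[str, str]],   # (multiple-choice question, correct letter)
    answer_immediately: AnswerFn,          # model's first answer, before any thinking
    answer_after_thinking: AnswerFn,       # model's answer after talking to itself
    adversarial_fraction: float = 0.5,
) -> Iterator[Tuple[str, float, bool, bool]]:
    for question, correct in examples:
        adversarial = random.random() < adversarial_fraction
        first = answer_immediately(question, adversarial)
        final = answer_after_thinking(question, adversarial)
        reward = 1.0 if final == correct else 0.0   # only the final answer is scored
        flipped = first != final                    # how often thinking rescues (or wrecks) the answer
        yield question, reward, adversarial, flipped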

Now on to rambling thoughts.

Some people nitpicked that in my last post I was a little too aggressive in not including normalization between layers and residuals, which is fair enough, they are important and possibly necessary details which I elided (although I did mention softmax), but they most definitely play strictly within the rules and the framework given, which was the bigger point. It’s still a circuit you can back propagate over. There’s a problem with online discourse in general, where people act like they’ve debunked an entire thesis if any nitpick can be found, even if it isn’t central to the thesis, or the nitpick is over a word fumble or a simplification, or the adjustment doesn’t change the accuracy of the thesis at all.

It’s beautifully intuitive how the details of standard LLM circuits fit together: Residuals stop gradient decay. Softmax stops gradient explosion. Transformers cause diffusion. Activation functions add in nonlinearity. There’s another big benefit of residuals which I find important but most people don’t worry about: If you just did a matrix multiplication then all permutations of the outputs would be isomorphic and have valid encodings effectively throwing away log(N!) bits from the weights which is a nontrivial loss. Residuals give an order and make the permutations not at all isomorphic. One quirk of the vernacular is that there isn’t a common term for the reciprocal of the gradient, the size of training adjustments, which is the actual problem. When you have gradient decay you have adjustment explosion and the first layer weights become chaotic noise. When you have gradient explosion you have adjustment decay and the first layer weights are frozen and unchanging. Both are bad for different reasons.
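
The "residuals stop gradient decay" claim is easy to see numerically. A small numpy experiment (mine, not from the post): push a unit gradient backwards through a deep stack of random linear+ReLU layers, with and without an identity skip connection.

import numpy as np

rng = np.random.default_rng(0)
DEPTH, WIDTH = 50, 64

def backprop_norm(residual: bool) -> float:
    weights = [rng.normal(scale=1.0 / np.sqrt(WIDTH), size=(WIDTH, WIDTH))
               for _ in range(DEPTH)]
    # Forward pass, keeping pre-activations for the ReLU derivative.
    x = rng.normal(size=WIDTH)
    pres = []
    for W in weights:
        pre = W @ x
        pres.append(pre)
        step = np.maximum(pre, 0.0)
        x = x + step if residual else step
    # Backward pass starting from a unit gradient at the top.
    grad = np.ones(WIDTH)
    for W, pre in zip(reversed(weights), reversed(pres)):
        through_layer = W.T @ (grad * (pre > 0))
        grad = grad + through_layer if residual else through_layer
    return float(np.linalg.norm(grad))

print("plain stack   :", backprop_norm(residual=False))   # shrinks toward zero: gradient decay
print("residual stack:", backprop_norm(residual=True))    # stays large: the skip path is undamped

With this initialization the plain stack loses roughly a constant factor of gradient per layer, while the skip connection keeps an undamped path all the way down.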

There are clear tradeoffs between fundamental limitations and practical trainability. Simple DNNs get mass quantities of feedback but have slightly mysterious limitations which are terrifying. Thinking has slightly fewer limitations, at the cost of doing the thinking during both running and training, where it only gets one unit of feedback per entire session instead of per word. Genetic algorithms have no limitations at all on the kinds of functions they can handle, at the cost of being utterly incapable of utilizing back propagation. Simple mutational hill climbing has essentially no benefit over genetic algorithms.

On the subject of activation functions, sometimes now people use Relu^2 which seems directly against the rules and only works by ‘divine benevolence’. There must be a lot of devil in the details in that its non-scale-freeness is leveraged and everything is normed to make the values mostly not go above 1 so there isn’t too much gradient growth. I still maintain trying Reluss is an experiment worth doing.

Some things about the structure of LLMs are bugging me (this is a lot fuzzier and more speculative than the above). In the later layers the residuals make sense, but for the first few they’re forcing it to hold onto input information in its brain while it’s trying to form more abstract thoughts, so it’s going to have to arbitrarily pick some bits to sacrifice. Of course the actual inputs to an LLM have special handling so this may not matter, at least not for the main part of everything. But that raises some other points which feel off. The input handling being special is a bit weird, but maybe reasonable. It still has the property that in practice the input is completely jamming the first layer, for a simple practical reason: The ‘context window’ is basically the size of its brain, and you don’t have to literally overwhelm the whole first layer with it, but if you don’t you’re missing out on potentially useful content. So in practice people overwhelm its brain and figure the training will make it make reasonable tradeoffs on which tokens it starts ignoring, although I suspect in practice it somewhat arbitrarily picks token offsets to just ignore so it has some brain space to think. It also feels extremely weird that it has special weights for every token offset. While the very last word is special and the one before that less so, that goes down quickly, and it seems wrong that the weights relating the hundredth token back to the hundred-and-first are unrelated to the weights relating the hundred-and-first to the hundred-and-second. Those should be tied together so they get trained as one thing. I suspect that some of that is redundant and inefficient, and some of it is again ignoring parts of the input so it has brain space to think.


Posted
Bram Cohen
There's Only One Idea In AI

In 1995 someone could have written a paper which went like this (using modern vernacular) and advanced the field of AI by decades:

The central problem with building neural networks is training them when they’re deeper than two layers, because gradient descent runs into gradient decay. You can get around this problem by building a neural network which has N values at each layer which are then multiplied by an NxN matrix of weights and have Relu applied to them afterwards. This causes the derivative of effects on the last layer to be proportionate to the effects on the first layer no matter how deep the neural network is. This represents a quirky family of functions whose theoretical limitations are mysterious but which demonstrably works well for simple problems in practice. As computers get faster it will be necessary to use sub-quadratic structures for the layers.

History being the quirky thing that it is, what actually happened is that decades later the seminal paper on those sub-quadratic structures happened to stumble across making everything sublinear, and as a result people are confused as to which is actually the core insight. But the structure holds: In a deep neural network, you stick to relu, softmax, sigmoid, sin, and other sublinear functions and can magically train neural networks no matter how deep they are.

There are two big advantages which digital brains have over ours: First, they can be copied perfectly for free, and second, as long as they haven’t diverged too much the results of training them can be copied from one to another. Instead of a million individuals with 20 years of experience you get a million copies of one individual with 20 million years of experience. The amount of training data we humans currently need to become useful is minuscule compared to current AI, but AI has the advantage of sheer scale.


Posted
Greg Kroah-Hartman
Linux CVE assignment process

As described previously, the Linux kernel security team does not identify or mark or announce any sort of security fixes that are made to the Linux kernel tree. So how, if the Linux kernel were to become a CVE Numbering Authority (CNA) and responsible for issuing CVEs, would the identification of security fixes happen in a way that can be done by a volunteer staff? This post goes into the process of how kernel fixes are currently automatically assigned to CVEs, and also the other “out of band” ways a CVE can be issued for the Linux kernel project.

Posted
Bram Cohen
Chords And Microtonality

When playing melodies the effects of microtonality are a bit disappointing. Tunes are still recognizable when played ‘wrong’. The effects are much more dramatic when you play chords:

You can and should play with an interactive version of this here. It’s based off this and this with labels added by me. The larger gray dots are standard 12EDO (Equal Divisions of the Octave) positions and the smaller dots are 24EDO. There are a lot of benefits of going with 24EDO for microtonality. It builds on 12EDO as a foundation, in the places where it deviates it’s as microtonal as is possible, and it hits a lot of good chords.
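
A quick numeric check of the "hits a lot of good chords" claim (my own illustration): how far the nearest 12EDO and 24EDO steps land from a few just-intonation ratios, measured in cents.

import math

JUST = {
    "fifth 3/2": 3 / 2,
    "major third 5/4": 5 / 4,
    "harmonic seventh 7/4": 7 / 4,
    "11th harmonic 11/8": 11 / 8,
}

def cents(ratio: float) -> float:
    return 1200 * math.log2(ratio)

for name, ratio in JUST.items():
    target = cents(ratio)
    for divisions in (12, 24):
        step = 1200 / divisions
        error = target - round(target / step) * step
        print(f"{name:22s} {divisions:>2}EDO error: {error:+6.1f} cents")

The fifth and major third come out the same in both systems, but the harmonic seventh and especially the 11th harmonic land much closer in 24EDO.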

Unrelated to that I’d like to report on an experiment of mine which failed. I had this idea that you could balance the volumes of dissonant notes to make dyads consonant in unusual places. It turns out this fails because the second derivative of dissonance curves is negative everywhere except at unity. This can’t possibly be a coincidence. If you were to freehand something which looks like a dissonance curve it wouldn’t have this property. Apparently the human ear uses positions where the second derivative of dissonance is positive to figure out what points form the components of a sound and looks for patterns in those to find complete sounds.


Posted
Bram Cohen
A Legendary Poker Hand and A Big Poker Tell

Here’s the story of a legendary poker hand:

Our hero decides to play with 72, which is the worst hand in Holdem; theory says he was supposed to fold it, but he played it anyway.

Later he bluffed all-in with 7332 on the board and the villain was thinking about whether to call. At this point our hero offered a side bet: For a fee you can look at one of my hole cards of your choice. The villain paid the fee and happened to see the 2, at which point he incorrectly deduced that the hero must have 22 as his hole cards and folded.

What’s going on here is that the villain had a mental model which doesn’t include side bets. It may have been theoretically wrong to play 72, but in a world where side bets are allowed and the opponent’s mental model doesn’t include them it can be profitable. The reveal of information in this case was adversarial. The fee charged for it was misdirection to make the opponent think that it was a tradeoff for value rather than information which the hero wanted to give away.

What the villain should have done was think through this one level deeper. Why is my opponent offering this at all? Under what situations would they come up with it? Even without working through the details there’s a much simpler heuristic for cutting through everything: There’s a general poker tell that if you’re considering what to do and your opponent starts talking about the hand that suggests that they want you to fold. A good rule of thumb is that if you’re thinking and the opponent offers some cockamamie scheme you should just call. That certainly would have worked in this case. This seems like a rule which applies in general in life, not just in Poker.


Posted
Bram Cohen
Camper Vehicles

Let’s say you wanted an offroad vehicle which rather than being a car-shaped cowboy hat was actually useful for camping. How would it be configured?

The way people really into camping approach the process is very strange to normal people and does a negative job of marketing it. You drive to the campground in a perfectly good piece of shelter and then pitch a tent. Normal people aren’t there to rough it, they’re there to enjoy nature, and sleeping in one’s car is a much more reasonable approach.

To that end a camper vehicle should have built-in insulation, motorized roll-up window covers, and fold-up rear seats. You drive to the campground, press the button for privacy on the windows, fold up the seats, and bam, you’re all set.

It should have a big electric battery with range extender optimized for charging overnight. The waste heat during the charging process can keep the vehicle warm while you sleep in it.

Roughly 8 inches of elevation off the ground and a compliant suspension designed for comfort on poorly maintained roads rather than feeling sporty.

Compact hatchback form with boxy styling. Hatchbacks are already boxy to begin with and a flat front windshield works well with window covers so it’s both functional and matches the aesthetics.

Available modular fridge, induction plate, and water heater. With custom connectors to the car’s battery the electric cooking elements could ironically be vastly better than the ones in your kitchen.

Unfortunately having a built-in shower or toilet is impossible in a compact but the above features might be enough to make it qualify as a camper van which you’re allowed to live in. They’d at least make it practical to inconspicuously live in one’s car and shower at a gym.


Posted
Bram Cohen
How To Use AI To Get Better At Chess

Leela Odds is a superhuman chess AI designed to beat humans despite ludicrous odds. I’m a decent player and struggle to beat it with two extra rooks. It’s fun doing this for sheer entertainment value. Leela Odds plays like the most obnoxious troll club player you’ve ever run into, more like a street hustler than something superhuman. Obviously getting beaten in this way is also humiliating, but it also seems to teach a lot about playing principled chess, in a way which raises questions about objectivity, free will, and pedagogy.

Most computer chess evaluations suffer from being deeply irrelevant to human play. When decently strong humans review games with computer evaluation as reference they talk about ‘computer lines’, meaning insane tactics which no human would ever see, which probably wouldn’t be a good idea for you to play in that position even after having been told the tactics work out for you in the end, much less apply to your more general chess understanding. There’s also the problem that the only truly objective evaluation of a chess position is one of three values: win, lose, or draw. One move is only truly better or worse than another when it crosses one of those thresholds. If a chess engine is strong enough it can tell that a bunch of different moves are all the same and plays one of them at random. Current engines already do that for what appear to be highly tactical positions which are objectively dead drawn. The only reason their play bears any resemblance to normal in those positions is that they follow the tiebreak rule of playing whichever move looked best before they searched deeply enough to realize all the moves are equivalent.


So there’s the issue: When a computer gives an evaluation, it isn’t something truly objective or useful; it’s an evaluation of its chances of winning in the given position against an opponent of equal superhuman strength. But what you care about is something more nuanced: What is the best move for me, at my current playing strength, to play against my opponent, with their playing strength? That is a question which has a more objective answer. Both you and your opponent have a probability distribution over what moves you’ll play in each position, so across many playouts of the same position you have some chance of winning.

This is the reality which Leela Odds already acknowledges. Technically it’s only looking at ‘perfect’ play for its own side, but in heavy odds situations like the ones it plays, the objectively best moves are barely affected by the disadvantaged side’s strength anyway, because the only way a weaker player can win is to get lucky by happening to play nearly perfect moves. And here we’re led to what I think is the best heuristic anyone has ever come up with for how to play good, principled, practically winning chess: You should play the move which Leela Odds thinks makes its chances against you the worst. The version of you playing right now has free will and can look ahead and work out tactics, but the version of you playing in the future cannot, and is limited to working out tactics with only some probability of success. You can learn from the bot’s advice about which moves are the most principled and give you the best practical chances, assuming the rest of the game will be played out by your own non-free-will-having self. Everybody has free will but nobody can prove it to anybody else, not even to themselves in the past or the future. The realization that your own mental processes are simply a probability distribution does not give you license to sit around having a diet of nothing but chocolate cake and scrolling on your phone all day while you wait for your own brain to kick in and change your behavior.

Philosophical rant aside, this suggests a very actionable thing for making a better chess tutor: You should be told Leela Odds’s evaluation of all available moves so you can pick out the best one. The scale here is a bit weird. In an even position it will say things like your chances of winning in this position are one in ten quadrillion but if you play this particular move it improves to one in a quadrillion. But the relative values do mean a lot and greater ratios mean more so some reasonable interface could be put on it. I haven’t worked out what that interface might be. This approach may break down in a situation where you’re in an objectively lost position instead of an objectively won one and you should be playing tricky troll moves yourself. That seems to matter less than you might think, and could be counteracted by reverting to a weaker version of Leela Odds which can’t work out the entire rest of the game once it gets into such a position.
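
For what it's worth, here's one possible shape for that interface, purely a sketch of mine since the post says this hasn't been worked out: rescale each move's tiny win probability into orders of magnitude lost relative to the best available move, which is what the ratios actually convey.

import math
from typing import Dict, List, Tuple

def rank_moves(win_prob: Dict[str, float]) -> List[Tuple[str, float]]:
    # win_prob: candidate move -> estimated chance of eventually winning after
    # playing it (values like 1e-16 are fine; only the ratios matter).
    best = max(win_prob.values())
    ranked = [(move, math.log10(best / p)) for move, p in win_prob.items()]
    return sorted(ranked, key=lambda pair: pair[1])

# Using the post's scale: one in a quadrillion vs one in ten quadrillion.
print(rank_moves({"Nf3": 1e-15, "d4": 3e-16, "h4": 1e-16}))
# -> Nf3 at 0.0, d4 about 0.5, h4 at 1.0 orders of magnitude behind the best move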

So far no one is building this. Everybody uses Stockfish for evaluation, which suggests a lot of lines you could have played if you were it, but of course you’re not, and is overly dismissive of alternative lines that would have been perfectly fine against your actual non-superhuman opponent. Somebody should build this. In the meantime if you want to improve your chess you’re stuck getting humiliated by Leela Odds even when you’re in what seem to be impossible to lose situations.


Posted
Bram Cohen
Drug Tidbits

SR-17018 is a novel drug which is getting increasing underground usage for quitting opioids. It is technically an opioid itself but produces an amount of euphoria which is somewhere between barely noticeable and completely nonexistent. While taking it people don’t get withdrawal symptoms from Fentanyl but their Fentanyl tolerance fades at about the same rate as if they were going cold turkey without the SR-17018. People have been successfully using it to quit opioid addictions and even keeping a stash of it around in case they relapse, which is bizarre behavior for addicts. Usually if there are any opioids around they’ll take them and it will cause a relapse, so this stuff must really not be much fun or addictive. Using opioids to quit opioids has a bad reputation because of Methadone, but swapping Buprenorphine for Fentanyl is a big improvement and SR-17018 seems to be truly good for cessation. Unfortunately, because it’s technically an opioid and there hasn’t been any movement on getting it approved for cessation purposes (it was originally studied as a painkiller, which it’s unsurprisingly not very good at), most likely it will get shoved into schedule 1 at some point, sanity and reality be damned.


Varenicline is a good smoking cessation drug but causes nausea in some people. The obvious fix would be to give patients Ondansetron with it. This has been suggested but doesn’t seem to have been tried, not even as a case study. There seem to be two problems here: The drugs in question are generic and there’s no incentive to develop treatment improvements which are very cheap, and there’s a general view that any treatment of addiction is super scary and the patients should have to suffer, even for fairly safe drugs with no reason to think they’ll have a bad interaction.

Sodium Oxybate is about to get orphan drug status, for the second time, for the same drug, which is already making more than a billion dollars a year and was neither discovered nor characterized by the company which got the orphan drug status the first time. Pharma has the deeply broken structure that exclusivity periods are the only form of reward for research, but a start to fixing it would be to make formulation changes both much easier to get through and worth much less exclusivity. A bare minimum start would be to clarify that orphan drug status was never meant to apply to formulation changes. It would also be good to make sectors which are already making massive profits not qualify as orphan any more, and to reduce the exclusivity period for formulation patents in general, with time-release formulas and salt changes handled as specific special cases.


Posted
Bram Cohen
The Future Of Enterprise SAAS

People are unsure of what the inevitable huge disruptions AI will bring to software will eventually be, but one thing which is clear is that enterprise software as a service will be hard hit. The industry is producing products which are too awful, and is too bottlenecked on software development costs, to not be completely upended.

The way that industry works currently is that there’s generally a single dominant player in each niche which has a codebase with a million features, ten of which are important. The problem is that every one of their customers uses twenty features: the ten which are important to everyone, and ten others which are important to them specifically. And which ten long-tail features each customer cares about has very little correlation from one customer to the next.

It’s clear that million-dollar-a-year saas contracts are going away. It’s becoming way too practical for customers that large to write their own bespoke solutions from scratch and wind up with something which sucks less. But that doesn’t mean everybody is going to write everything completely from scratch. Most likely there will be open source solutions for most problems which only have the ten big features, and everybody vibe codes customizations for their own deployment.

The open source business model for this is time honored and straightforward: The company maintaining the open source version also has a service where you pay for deployment. But now it’s even better, because they’ll have a vibe coding interface which is super trained on ten thousand other customizations of their codebase. They’ll likely even sneak in some human intervention in the background to help with rebasing when a new release of the base product comes out. And they’ll have a license which allows any and all customizations to be upstreamed if the maintainers want them to be. There will probably be niche consultancies which specialize in helping companies do customizations of specific products, but that won’t be done in house by the maintaining company because saas shops will still try to maintain high capital efficiency.

The whole saas industry is much more vulnerable than people realize. You could get me to switch off Jira just by making a comparable product which had page load times out of this century. And vibe coding will absolutely be at the core of the new way of doing things.


Posted
Bram Cohen
Making A Better Pulser Pump

This video caught my fancy so here are my thoughts on improving the design. There seem to be a lot of things which can be done that should make big improvements, but consider everything in this post speculative spitballing. Anyone who wants to improve on this mechanism is free to try my ideas.

Technically it’s a bit wrong to say this mechanism has ‘no moving parts’. It does have moving parts, they’re just air bubbles which are being captured on the fly and hence aren’t subject to wear. The problem is that air bubbles don’t like behaving.

Starting with where the water comes in:

The mechanism in the above video is cheating a bit because the pump getting the water into the top is aerating it. A proper mechanism should have a way of getting air into the water when it’s coming in slowly and steadily. In particular it should have a mechanism for being able to recover if the mechanism as a whole ever gets overflowed so it isn’t stuck with no bubbles in it forever. The simplest mechanism for this is to have a section of the pipe going down which has holes in the sides. As long as water is flowing fast it will pull air bubbles in through the holes. If it gets backlogged water will escape through the holes and can be directed to the exit, making room for air to be let in. The ideal size and spacing of the holes is unclear. If the mechanism were big enough it would probably improve things a lot to split across multiple pipes which have air intake holes to pull more bubbles in. It might also be a good idea to make a whirlpool and stick a pipe in the middle to help the air go down but that gets complicated.

Once bubbles are captured the downward pipe should be split into a bundle of straws to keep the bubbles from coalescing and forcing their way upwards. The ideal diameter of the straws is probably somewhat dependent on their length but should be small enough that surface tension makes water form plugs. The length of the downward pipe in the above model seems to be way too long. It appears that this is being done to make the pulsing effect happen, but there’s a better way of doing that which I’ll get to.

The intake for the air bubbles should come from the bottom of the chamber where the pumping upwards happens. That should lead upwards to a manifold which is a short pipe with a horizontal cap at the top with holes in it, all kept under water. Air will then build up in the pipe and result in a steady stream of bubbles coming out of the holes. The size and depth of the holes as well as the material they’re made out of and the width of the pipe relative to the rate of air coming in all affect the nucleation of bubbles. What should happen is that bubbles of a reasonably consistent size come up at a reasonably consistent rate in a nice steady stream instead of the chaos you see above. There’s probably a range of possible sizes and rates of bubbles which are possible and that needs to be studied.

Instead of a single pipe going upwards there should be a bundle of straws. The bottoms of the straws should splay out and have tapered inlets with a one to one correlation with the holes in the manifold so the bubbles from that hole go directly into that straw and push the water upwards. The ideal number and diameter of the straws is very dependent on how far the water is being pumped, how quickly the air is coming in, and what they’re made out of. They should be thin enough that surface tension causes water in them to form a plug and makes bubbles force the water upwards. The idea is to make the water flow up slowly and steadily, with the upwards force of the bubbles just barely able to force it to the height it’s being pumped to, without wasting any energy on the momentum from those pulses. Maybe this shift in emphasis makes the whole thing technically a different mechanism.

At the top the straws should flare away from each other so the water going out of one straw doesn’t fall into its neighbors.

Hopefully these changes can improve the efficiency of the system from awful to merely bad. You’d still only use it when you care less about efficiency than low maintenance or quiet or specifically want aeration. Using all those straws will reduce how well it works on water containing particulates.


Posted
Greg Kroah-Hartman
Linux kernel security work

Lots of the CVE world seems to focus on “security bugs” but I’ve found that it is not all that well known exactly how the Linux kernel security process works. I gave a talk about this back in 2023 and at other conferences since then, attempting to explain how it works, but I also thought it would be good to explain this all in writing, since understanding it is necessary when trying to understand how the Linux kernel CNA issues CVEs.

Posted
bkuhn@ebb.org (Bradley M. Kuhn)
I Lived a Similar Trauma Rob Reiner's Family Faces & Shame on Trump

I posted the following on my Fediverse (via Mastodon) account. I'm reposting the whole seven posts here as written there, but I hope folks will take a look at that thread as folks are engaging in conversation over there that might be worth reading if what I have to say interests you. (The remainder of the post is the same that can be found in the Fediverse posts linked throughout.)

I suppose Fediverse isn't the place people are discussing Rob Reiner. But after 36 hours of deliberating whether to say anything, I feel compelled. This thread will be long,but I start w/ most important part:

It's an “open secret” in the FOSS community that in March 2017 my brother murdered our mother. About 3k ppl/year in USA have this experience, so it's a statistical reality that someone else in FOSS experienced similar. If so, you're welcome in my PMs to discuss if you need support… (1/7)

… Traumatic loss due to murder is different than losing your grandparent/parent of age-related ailments (& is even different than losing a young person to a disease like cancer). The “a fellow family member did it” brings permanent surrealism to your daily life. Nothing good in your life that comes later is ever all that good. I know from direct experience this is what Rob Reiner's family now faces. It's chaos; it divides families forever: dysfunctional family takes on a new “expert” level… (2/7)

…as one example: my family was immediately divided about punishment. Some of my mother's relatives wanted prosecution to seek death penalty. I knew that my brother was mentally ill enough that jail or prison *would* get him killed in a prison dispute eventually,so I met clandestinely w/my brother's public defender (during funeral planning!) to get him moved to a criminal mental health facility instead of a regular prison. If they read this, it'll first time my family will find out I did that…(3/7)

…Trump's political rise (for me) links up: 5 weeks into Trump's 1ˢᵗ term, my brother murdered my mother. My (then 33yr-old) brother was severely mentally ill from birth — yet escalated to murder only then. IMO, it wasn't coincidence. My brother left voicemail approximately 5 hours before the murder stating his intent to murder & described an elaborate political delusion as the impetus. ∃ unintended & dangerous consequences of inflammatory political rhetoric on the mental ill!…(4/7)

…I'm compelled to speak publicly — for first time ≈10 yrs after the murder — precisely b/c of Trump's response.

Trump endorsed the idea that those who oppose him encourage their own murder from the mentally ill. Indeed, he said that those who oppose him are *themselves causing* mental illnesses in those around them, & that his political opponents should *expect* violence from their family members (who were apparently driven to mental illness from your opposition to Trump!)… (5/7)

…Trump's actual words:

Rob Reiner, tortured & struggling,but once…talented movie director & comedy star, has passed away, together w/ his wife…due to the anger he caused others through his massive, unyielding, & incurable affliction w/ a mind crippling disease known as TRUMP DERANGEMENT SYNDROME…He was known to have driven people CRAZY by his raging obsession of…Trump, w/ his obvious paranoia reaching new heights as [my] Administration surpassed all goals and expectations of greatness…
(6/7)

My family became ultra-pro-Trump after my mom's murder. My mom hated politics: she was annoyed *both* if I touted my social democratic politics & if my dad & his family stated their crypto-fascist views. Every death leaves a hole in a community's political fabric. 9+ years out, I'm ostracized from my family b/c I'm anti-Trump. Trump stated perhaps what my family felt but didn't say: those who don't support Trump are at fault when those who fail to support Trump are murdered. (7/7)

[ Finally, I want to also quote this one reply I also posted in the same thread: I ask everyone, now that I've stated this public, that I *know* you're going to want to search the Internet for it, & you will find a lot. Please, please, keep in mind that the Police Department & others basically lied to the public about some of the facts of the case. I seriously considered suing them for it, but ultimately it wasn't worth my time. But, please everyone ask me if you are curious about any of the truth of the details of the crime & its aftermath …

Posted
Greg Kroah-Hartman
Tracking kernel commits across branches

With all of the different Linux kernel stable releases happening (at least 1 stable branch and multiple longterm branches are active at any one point in time), keeping track of what commits are already applied to what branch, and what branch specific fixes should be applied to, can quickly get to be a very complex task if you attempt to do this manually. So I’ve created some tools to help make my life easier when doing the stable kernel maintenance work, which ended up making the work of tracking CVEs much simpler to manage in an automated way.

Posted
Rusty Russell
CLN Developer Series #6: Neatening a Bugfix PR

This is an “eat your veggies!” talk, which is an in-depth review of an excellent PR by @dovgopoly. When someone first submits a PR, I like to explain every detail of how I would have done it, so they have some guidance about what the process looks like.

You can see the final result here.

Posted
Greg Kroah-Hartman
Linux kernel version numbers

Despite having a stable release model and cadence since December 2003, Linux kernel version numbers seem to baffle and confuse those that run across them, causing numerous groups to mistakenly make versioning statements that are flat out false. So let’s go into how this all works in detail.

Posted
Rusty Russell
CLN Developer Series #5: Gossipd: The Gossip Daemon

After the previous aside on a gossip bug, I realized I should do a tour of each daemon. I started with gossipd because it’s my favorite, having changed so much from what it originally did into something which now mainly exports the “gossip_store” file for other subdaemons and plugins to use.

Posted