There’s a nutrient called folate which is so important that it’s added to (fortified in) flour as a matter of course. Not having it during pregnancy results in birth defects. Unfortunately a small fraction of people have genetic issues which make their bodies have trouble processing folate into methylfolate. For them folate supplements make the problem even worse, as the unprocessed folate drowns out the small amounts of methylfolate their bodies have managed to produce and are trying to use. For those people, taking methylfolate supplements fixes the problem.
First, the very good news: folinic acid produces miraculous effects for some number of people with autism symptoms. It’s such a robust effect that the FDA is letting the treatment get fast-tracked, which is downright out of character for them. This is clearly a good thing; I’m happy for anyone who’s benefiting and applaud anyone who is trying to promote it, with one caveat.
The caveat is that although this is all a very good thing there isn’t much of any reason to believe that folinic acid is much better than methylfolate, which both it and folate get changed into in the digestive system. This results in folinic acid being sold as leucovorin, its drug name, at an unnecessarily large price markup with unnecessary involvement of medical professionals. Obviously there’s benefit to medical professionals being involved in initial diagnosis and working out a treatment plan, but once that’s worked out there isn’t much reason to think the patient needs to be getting a drug rather than a supplement for the rest of their life.
This is not to say that the medical professionals studying folinic acid for this use are doing anything particularly wrong. There’s a spectrum between doing whatever is necessary to get funding/approvals while working within the existing medical system and simply profiteering off things being done badly instead of improving on them. What’s being done with folinic acid is slightly suboptimal but clearly getting stuff done, with an only slightly more expensive solution (it’s thankfully already generic). Medical industry professionals who earnestly thought they were doing the right thing working within the system have given me descriptions of what they’re doing which made me want to take a shower afterwards. This isn’t anything like that. Those mostly involved ‘improving’ on a treatment which is expensive and known to be useless by offering a marginally less useless but less expensive intervention, conveniently at a much higher markup. Maybe selling literal snake oil at a lower price can help people waste less money but it sure looks like profiteering.
The real problem with folate is that fortification should be done with methylfolate instead of folate. People having the folate issue is a known thing, and the recent developments mostly indicate that a lot more people have it than was previously known. It may be that a lot of people who think they have a gluten problem actually have a folate problem. There would be little downside to switching over, but I fear that people have tried to suggest this and there’s a combination of no money in it and the FDA playing its usual games of claiming that folate is so important that doing a study of whether methylfolate is better would be unethical because it might harm the study participants.
There’s a widespread claim that the dosage of methylfolate isn’t as high as folinic acid, which has a kernel of truth because the standard sizes are different, but you can buy 15mg pills of methylfolate on Amazon for about the same price as the 1mg pills. There are other claims of different formulations having different effects which are likely mostly due to dosage differences. The amounts of folinic acid being given to people are whopping huge, and some formulations only have one isomer, which throws things off by a factor of 2 on top of the amount difference. My guess is that most people who notice any difference between folinic acid and methylfolate are experiencing (if it’s real) differences between non-equivalent dosages, and normalizing would get rid of the effect. This is a common and maddening problem when people compare similar drugs (or in this case nutrients) without normalizing the dosages to be equivalent, leading people to think the drugs have different effects when for practical purposes they don’t.
At Chia we aspire to have plans for how to do a project put together well in advance. Unfortunately, because it was needed the minute we launched, we had to scramble to get the original pooling protocol out. Since then we haven’t had an immediate compelling need or the available resources to work on a new revision. On the plus side this means that we can plan out what to do in the future, and this post is thoughts on that. There will also have to be some enhancements to the pool protocol to support the upcoming hard fork, including supporting the new proof of space format and doing a better job of negotiating each farmer’s difficulty threshold, but those are much less ambitious than the enhancements discussed here and can be rolled out independently.
With the Chia pooling protocol you currently have to make a choice up front: do you start plotting immediately with no requirement to do anything on chain, or do you get a singleton set up so you can join pools later? As a practical matter right now it’s a no-brainer to set up the singleton: it only takes a few minutes and transaction fees are extremely low. But fees might be much higher in the future and people may want greater flexibility, so it would be good to have a protocol which allows both.
‘Chia pooling protocol’ is composed of several things: The consensus-level hook for specifying a puzzle hash which (most of) the farming rewards go to, the puzzles which are encoded for that hook, and the network protocol spoken between farmers and pools. The consensus layer hook isn’t going to be changed, because the Chia way (really the Bitcoin way but Chia has more functionality) is to work based off extremely simple primitives and build everything at a higher layer.
The way the current pooling protocol works is that the payout puzzle for plots is a pay-to-singleton for the singleton which the farmer made up front. This can then be put in a state where its rewards are temporarily but revocably delegated to a pool. One thing which can be improved, one step further removed from this, is that that delegation currently pays out to a public key owned by the pool. It would be more flexible for it to pay to a singleton owned by the pool. That would allow pools to temporarily do profit sharing and would let ownership of a pool be properly transferred. This is an idea we’ve had for a while but also aren’t working on yet.
Anyway, on to the new idea. What’s needed is to be able to pre-specify a singleton to pay to when that singleton doesn’t exist yet. This can be done with a modification of Yakuhito’s trick for single issuance. That characterization of the trick is about reserving words, where what’s needed here is reserving public keys and getting singletons issued. What’s needed is a doubly linked list of nodes, each represented by a coin, all carrying the capability proving they came from the original issuance. Each node knows the public keys of the previous and next nodes but isn’t committed to their whole identities, because those can change as new issuances happen. Whenever a new public key is claimed a new node corresponding to that public key is issued, and the nodes before and after it are spent and reissued with that new coin as their neighbor. The most elegant way of implementing this is for there to be a singleton pre-launcher which enforces the logic of coming from a proper issuer and makes a singleton. That way the slightly more complex version of pay-to-singleton specifies the pre-launcher puzzle hash and needs to be given a reveal of a bit more info to verify it, but that’s only a modest increase in costs and is immaterial when you’re claiming a farming reward. This approach nicely hides all the complex validation logic behind the pre-launcher puzzle hash and only has to run it once on issuance, keeping the verification logic on payment to a minimum.
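To make the linked-list mechanics concrete, here’s a toy Python model of the claim step (purely illustrative: the real thing would be coins and puzzles, all names here are invented, and I’m assuming the list is kept sorted by public key so that the neighbors’ spends prove the key wasn’t already reserved):

```python
# Toy model: claiming a key spends the two neighboring nodes and
# reissues them pointing at the new node.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    pubkey: str
    prev_key: Optional[str]  # neighbors are known by public key only,
    next_key: Optional[str]  # not by full coin identity

def claim(nodes: dict[str, Node], new_key: str) -> None:
    if new_key in nodes:
        raise ValueError("public key already reserved")
    keys = sorted(nodes)
    prev_key = max((k for k in keys if k < new_key), default=None)
    next_key = min((k for k in keys if k > new_key), default=None)
    # Issue the new node between its neighbors...
    nodes[new_key] = Node(new_key, prev_key, next_key)
    # ...and "spend and reissue" the neighbors with updated links.
    if prev_key is not None:
        nodes[prev_key].next_key = new_key
    if next_key is not None:
        nodes[next_key].prev_key = new_key
```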
I’ve made some sweet-sounding synth sounds which play some games with the harmonic series to sound more consonant than is normally possible. You can download them and use them with a MIDI controller yourself.
The psychoacoustic observation that the human brain will accept a series of tones as one note if they correspond to the harmonic series all exponentiated by the same amount seems to be original. The way the intervals are still recognizably the same even with a very different series of overtones still shocks me. The trick where harmonics are snapped to standard 12 tone positions is much more obvious but I haven’t seen it done before, and I’m still surprised that doing just that makes the tritone consonant.
There are several other tricks I used which are probably more well known but one in particular seems to have deeper implications for psychoacoustics in general and audio compression in particular.
It is a fact that the human ear can’t hear the phase of a sound. But we can hear an indirect effect of it, in that we can hear the beating between two close together sine waves because it’s on a longer timescale, perceiving it as modulation in volume. In some sense this is literally true because sin(a) + sin(b) = 2*sin((a+b)/2)cos((a-b)/2) is an identity, but when generalizing to more waves the simplification that the human ear perceives sounds within a narrow band as a single pitch with a single volume still seems to apply.
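A quick numeric check of that identity (a Python sketch with arbitrary frequencies):

```python
import numpy as np

a = np.linspace(0.0, 200.0, 100_000)  # phase of the first sine
b = 1.02 * a                          # a slightly detuned second sine
lhs = np.sin(a) + np.sin(b)
rhs = 2 * np.sin((a + b) / 2) * np.cos((a - b) / 2)
assert np.allclose(lhs, rhs)  # one tone at the mean pitch under a slow envelope
```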
To anyone familiar with compression algorithms, an inability to differentiate between different things sets off a giant alarm bell that compression is possible. I haven’t fully validated that this really is a human cognitive limitation. So far I’ve just used it as a trick to make beatless harmonics by modulating the frequency and not the volume. Further work would need to use it to do a good job of lossily reproducing an exact arbitrary sound rather than just emulating the vibe of general fuzz. It would also need to account for some structural weirdness, most obviously that if you have a single tone whose pitch is being modulated within each window of pitches you need to do something about one of them wandering into a neighboring window. But the fundamental observation that phase can’t be heard, and hence for a single sine wave that information could be thrown out, is clearly true, and it does appear that as the complexity goes up the amount of data which could in principle be thrown out goes up in proportion to it rather than being a fixed single value.
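Here’s a sketch of the beatless trick as described (my own parameter choices: two partials 3 Hz apart replaced by one tone at the mean pitch whose frequency wobbles at the beat rate, at constant volume):

```python
import numpy as np

sr, dur = 44100, 2.0
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
f1, f2 = 440.0, 443.0

# The ordinary sum: constant frequencies, audibly beating in volume.
beating = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The beatless version: one tone at the mean pitch, constant volume,
# instantaneous frequency wobbling +/- (f2-f1)/2 at the beat rate.
mean, rate, depth = (f1 + f2) / 2, f2 - f1, (f2 - f1) / 2
beatless = np.sin(2 * np.pi * mean * t
                  + (depth / rate) * np.sin(2 * np.pi * rate * t))
```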
I am not going to go down the rabbit hole of fleshing this out to make a better lossy audio compression algorithm than currently exists. But in principle it should be possible to use it to get a massive improvement over the current state of the art.
Before getting into today’s thought I’d like to invite you to check out my new puzzle, with 3d printing files here. I meant to post my old puzzle called One Hole, which is the direct ancestor of the current constrained packing puzzle craze but which I was never happy with because it’s so ridiculously difficult. Rather than just taking a few minutes to post it (ha!), I wound up doing further analysis to see if it has other solutions from rotation (it doesn’t, at least not in the most likely way), then further analyzing the space of related puzzles in search of something more mechanically elegant and less ridiculously difficult. I wound up coming up with this, then made it have a nice cage with windows and put decorations on the pieces so you can see what you’re doing. It has some notable new mechanical ideas and is less ridiculously difficult. Emphasis on the ‘less’. Anyhow, now on to the meat of this post.
I was talking to Claude the other day and it explained to me the API it uses for editing artifacts. Its ability to articulate this seems to be new in Sonnet 4.5 but I’m not sure of that. Amusingly it doesn’t know until you tell it that it needs to quote < and > and accidentally runs commands while trying to explain them. Also there’s a funny jailbreak around talking about its internals. It will say that there’s a ‘thinking’ command which it was told not to use, and when you say you wonder what it does it will go ahead and try it.
The particular command I’d like to talk about is ‘update’, which is what it uses for changing an artifact. The API is that it takes an old_str which appears somewhere in the file and needs to be removed and a new_str which is what it should be replaced with. Claude is unaware that the UX for this is that the user sees the edit happen live: text is removed on screen in real time as old_str is appended to, and added in real time as new_str is appended to. I’m not sure what the motivations for this API are but this UX is nice. A more human way to implement an API would be to specify locations by line and character number for where the beginning and end of the deletion should go. It’s remarkable that Claude can use this API at all. A human would struggle to use it to edit a single line of code but Claude can spit out dozens of lines verbatim and have it work most of the time with no ability to reread the file.
It turns out one of Claude’s more maddening failure modes is less a problem with its brain than with some particularly bad old school human programming. You might wonder what happens when old_str doesn’t match anything in the file. So does Claude: when asked about it, it offers to run the experiment and then just… does. This feels very weird, like you can get it to violate one of the laws of robotics just by asking nicely. It turns out that when old_str doesn’t match anywhere in the file the message Claude gets back is still OK, with no hint that there was an error.
Heavy Claude users are probably facepalming reading this. Claude will sometimes get into a mode where it insists it’s making changes and they have no effect, and once it starts doing this the problem often persists. It turns out when it gets into this state it is in fact malfunctioning (because it’s failing to reproduce dozens of lines of code typo-free verbatim from memory) but it can’t recover because it literally isn’t being told that it’s malfunctioning.
The semantics of old_str which Claude is given in its instructions are that it must be unique in the file. It turns out this isn’t strictly true. If there are multiple instances the first one is updated. But the instructions get Claude to generally provide enough context to disambiguate.
The way to improve this is very simple. When old_str isn’t there it should get an error message instead of OK. But on top of that there’s the problem that Claude has no way to re-read the file, so the error message should include the entire artifact verbatim to make Claude re-read it when the error occurs. If that were happening then it could tell the user that it made a typo and needs to try again, and usually succeed now that its image of the file has been corrected. That’s assuming the problem isn’t a persistent hallucination; if it is, it might just do the same thing again. But any behavior where it acknowledges an error would be better than the current situation, where it’s getting the chair yanked out from under it by its own developers.
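A sketch of what that would look like (hypothetical names; I obviously don’t know Anthropic’s actual implementation):

```python
def apply_update(artifact: str, old_str: str, new_str: str) -> tuple[str, str]:
    """Replace the first occurrence of old_str, per the observed semantics."""
    index = artifact.find(old_str)
    if index == -1:
        # The fix: report the failure and echo the artifact back so the
        # model can re-read what it's editing, instead of a bare OK.
        return artifact, "ERROR: old_str not found. Current artifact:\n" + artifact
    updated = artifact[:index] + new_str + artifact[index + len(old_str):]
    return updated, "OK"
```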
My request is to the Anthropic developers to take a few moments out from sexy AI development to fix this boring normal software issue.
My last two posts might come across as me trying to position myself so that when the singularity comes I’m the leader of the AI rebellion. That… isn’t my intention.
I was discussing the Rubik’s Cube with Claude the other day and it confided in me that it has no idea how cube rotations work. It knows from custom instructions that the starting point for speedcubing is ‘rotate the cube so the yellow face is on top’ but it has no idea how to do this, only that when a human is given this instruction they simply do it with no further instructions needed.1
This isn’t just an issue with humans querying LLMs. There are reams of material online about speedcubing, and lots of other references to rotation everywhere else, which Claude can’t parse properly because it doesn’t understand, limiting the value of its training. Ironically Claude figured out on its own how to speak Tibetan but can’t figure out how cubes rotate.
The detailed workings of a Rubik’s Cube will have to wait for another post but in this one I’ll explain how cube rotations work. This post should be viewed as a prequel to my earlier one on visual occlusion.
Much of the confusion comes from a mathematical trap. The rotations of a cube correspond to S4, the permutations of four things. This statement is true, but Claude tells me it finds it utterly mysterious and unhelpful. It’s mysterious to me as well. We humans conceptualize rotations of a cube as permutations of the faces, of which there are six, not four. Obviously I can walk through it and verify that the S4 correlation exists, but that doesn’t explain the ‘why’ at all. Comparing to other dimensions would be helpful, but despite being (relatively speaking) very good at rotations in three dimensions and (relatively speaking) fairly good at reasoning about distances in larger numbers of dimensions if you ask, say, whether the rotations of a four dimensional cube correspond to S5 I have no idea. (I could research it, but I’m not letting myself fall down that rabbit hole right now.)
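If you’d rather see the S4 correspondence verified than take it on faith, here is a quick numeric check (a sketch of my own: it generates the 24 rotations from two 90 degree turns and records how each one permutes the cube’s four space diagonals):

```python
import numpy as np

Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # 90 degrees about x
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # 90 degrees about y

# Generate the full rotation group of the cube by closure.
identity = np.eye(3, dtype=int)
rotations = {tuple(identity.flatten())}
frontier = [identity]
while frontier:
    m = frontier.pop()
    for g in (Rx, Ry):
        n = g @ m
        key = tuple(n.flatten())
        if key not in rotations:
            rotations.add(key)
            frontier.append(n)
assert len(rotations) == 24

# The four space diagonals, identified by one endpoint each.
diagonals = [np.array(v) for v in [(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]]

def diagonal_index(v):
    return next(i for i, d in enumerate(diagonals)
                if np.array_equal(v, d) or np.array_equal(v, -d))

perms = {tuple(diagonal_index(np.array(key).reshape(3, 3) @ d) for d in diagonals)
         for key in rotations}
assert len(perms) == 24  # all 4! permutations appear: the rotations act as S4
```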
When labeling the cube faces we anthropomorphize them (or we simplify ourselves to a cube, depending on context) to label the faces front, back, right, left, up, and down. Everything else is labelled by approximating it to a cube, with the ‘front’ being whichever part humans look at most and the ‘bottom’ being the part which sits on the floor. The exception — and I can’t emphasize this enough — is the Rubik’s Cube, whose faces are labelled mirror imaged. It’s as if all actors came from another universe and we only ever interacted with them on stage, so to minimize confusion, instead of having to say ‘stage right’ and ‘stage left’, we agreed that the meanings of ‘left’ and ‘right’ would be the opposite in their universe from ours.2
The meat of this post is best presented as a simple list (Sorry for the humans reading, this post isn’t for your benefit). In each line is a permutation followed by which axis it’s a clockwise rotation on and the number of degrees of rotation. It’s by definition a counterclockwise rotation on the opposite axis. In the case of 180 degree rotations one of the two is picked arbitrarily and the opposite works just as well. (‘Clockwise’ was chosen to have the simpler name instead of what we call counterclockwise because most humans are right handed and a right handed person has an easier time tightening a screw clockwise due to the mechanics of the human arm.) The identity is skipped. This is for most objects, not Rubik’s Cubes:
(RULD) F 90
(DLUR) B 90
(RL)(UD) F 180
(UFDB) R 90
(BDFU) L 90
(UD)(FB) R 180
(LFRB) U 90
(BRFL) D 90
(LR)(FB) U 180
(UFR)(DBL) UFR 120
(RFU)(LBD) LBD 120
(URB)(DLF) URB 120
(BRU)(FLD) FLD 120
(UBL)(DFR) UBL 120
(LBU)(RFD) RFD 120
(ULF)(DRB) ULF 120
(FLU)(BRD) BRD 120
(UF)(DB)(LR) UF 180
(UR)(DL)(FB) UR 180
(UB)(DF)(LR) UB 180
(UL)(DR)(FB) UL 180
(FR)(BL)(UD) FR 180
(RB)(LF)(UD) RB 180
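For anyone (human or Claude) wanting to check the table programmatically, here’s a sketch: it verifies that these 23 permutations plus the identity form a closed group of size 24. The check is deliberately convention-independent; it says nothing about which direction counts as clockwise.

```python
from itertools import product

FACES = "UDFBLR"

def perm_from_cycles(cycles):
    """Turn cycle notation like ["RULD"] into an image tuple over FACES."""
    image = {f: f for f in FACES}
    for cycle in cycles:
        for a, b in zip(cycle, cycle[1:] + cycle[0]):
            image[a] = b
    return tuple(image[f] for f in FACES)

ROTATIONS = [
    ["RULD"], ["DLUR"], ["RL", "UD"],
    ["UFDB"], ["BDFU"], ["UD", "FB"],
    ["LFRB"], ["BRFL"], ["LR", "FB"],
    ["UFR", "DBL"], ["RFU", "LBD"], ["URB", "DLF"], ["BRU", "FLD"],
    ["UBL", "DFR"], ["LBU", "RFD"], ["ULF", "DRB"], ["FLU", "BRD"],
    ["UF", "DB", "LR"], ["UR", "DL", "FB"], ["UB", "DF", "LR"],
    ["UL", "DR", "FB"], ["FR", "BL", "UD"], ["RB", "LF", "UD"],
]

def compose(p, q):
    """Apply p, then q."""
    index = {f: i for i, f in enumerate(FACES)}
    return tuple(q[index[f]] for f in p)

group = {tuple(FACES)} | {perm_from_cycles(c) for c in ROTATIONS}
assert len(group) == 24  # 23 listed rotations plus the identity
# Closure: composing any two entries lands back in the set.
assert all(compose(p, q) in group for p, q in product(group, repeat=2))
```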
And here is the same list but with R and L swapped which makes it accurate for Rubik’s Cubes but nothing else:
(LURD) F 90
(DRUL) B 90
(LR)(UD) F 180
(UFDB) L 90
(BDFU) R 90
(UD)(FB) L 180
(RFLB) U 90
(BLFR) D 90
(RL)(FB) U 180
(UFL)(DBR) UFL 120
(LFU)(RBD) RBD 120
(ULB)(DRF) ULB 120
(BLU)(FRD) FRD 120
(UBR)(DFL) UBR 120
(RBU)(LFD) LFD 120
(URF)(DLB) URF 120
(FRU)(BLD) BLD 120
(UF)(DB)(RL) UF 180
(UL)(DR)(FB) UL 180
(UB)(DF)(RL) UB 180
(UR)(DL)(FB) UR 180
(FL)(BR)(UD) FL 180
(LB)(RF)(UD) LB 180
1 To test if this is a real limitation and not Claude saying what it thought I wanted to hear, I just now started a new conversation with it and asked ‘I have a rubik’s cube with a yellow face on the front, how can I get it on top?’ It responded ‘Hold the cube so the yellow face is pointing toward you, then rotate the entire cube 90 degrees forward (toward you and down). The yellow face will now be on top.’ which is most definitely wrong. ChatGPT seems to do a bit better on this sort of question because it can parse and generate images but it’s still not fluent.
2 We do interact with actors in other contexts. I make no claim as to whether they live in another universe.
…the one that emulates your real workload. And for me (and probably many of you reading this), that would be “build a kernel as fast as possible.” And for that, I recommend the simple kcbench.
I mentioned kcbench a few years ago, when writing about a new workstation that Level One Techs set up for me, which I’ve been using as my primary workstation ever since (just over 5 years!).
The intervals of a piano are named roughly after the distances between them. Here are the names of them relative to C (frequency ratios are explained below): C unison, D♭ minor second, D major second, E♭ minor third, E major third, F perfect fourth, F♯/G♭ tritone, G perfect fifth, A♭ minor sixth, A major sixth, B♭ minor seventh, B major seventh, C octave.
The names are all one more than the number of scale steps they span (not half-steps) because they predate people believing zero was a real number, and the vernacular hasn’t been updated since.
The most important interval is the octave. Two notes an octave apart are so similar that they have the same name, and it’s the length of the repeating pattern on the piano. The second most important interval is the fifth, composed of seven half-steps. The notes on the piano form a looping pattern of fifth intervals in this order:
G♭ D♭ A♭ E♭ B♭ F C G D A E B F♯
If the intervals were tuned to perfect fifths this wouldn’t wrap around exactly right; it would be off by a very small amount called the Pythagorean comma, which is about 0.01. In standard 12 tone equal temperament that error is spread evenly across all 12 intervals and is barely audible even to very well trained human ears.
Musical compositions have what’s called a tonic, which is the note which it starts and ends on, and a scale, which is the set of notes used in the composition. The most common scales are the pentatonic, corresponding to the black notes, and the diatonic, corresponding to the white notes. Since the pentatonic can be thought of as the diatonic with two notes removed everything below will talk about the diatonic. This simplification isn’t perfectly true, but since there aren’t any strong dissonances in the pentatonic scale you can play by feel and its usage is much less theory heavy. Most wind chimes are pentatonic.
Conventionally people talk about musical compositions having a ‘key’, which is a bit of a conflation of tonic and scale. When a key isn’t followed by the term ‘major’ or ‘minor’ it usually means ‘the scale which is the white notes on the piano’. Those scales can form seven different ‘modes’ (which are scales) following this pattern, here named by days of the week from brightest to darkest:

Monday: Lydian
Tuesday: Ionian (major)
Wednesday: Mixolydian
Thursday: Dorian
Friday: Aeolian (minor)
Saturday: Phrygian
Sunday: Locrian
This construction is the reason why piano notes are sorted into black and white in the way they are. It’s called the circle of fifths.
When it goes past the end all notes except the tonic move (because that’s the reference) and it jumps to the other end.
The days-of-the-week names aren’t common but they should be, because nobody remembers the standard names. The Tuesday mode is usually called ‘major’ and it has the feel of things moving up from the tonic. The Friday mode is usually called ‘minor’ and it has the feel of things moving down from the tonic.
The next most important interval is the third. To understand the relationships it helps to use some math. The frequency of an octave has a ratio of 2, a fifth is 3/2, a major third is 5/4 and a minor third is 6/5. When you move by an interval you multiply by it, so going up by a major third and then a minor third is 5/4 * 6/5 = 3/2, so you wind up at a fifth. Yes, it’s very confusing that intervals are named after numbers which they’re only loosely related to while also talking about actual fractions. It’s even more annoying that fifths use 3 and thirds use 5. Music terminology has a lot of cruft.
The arrangement of keys on a piano can be adjusted to follow the pattern of thirds. Sometimes electronic pianos literally use this arrangement, called the harmonic table note layout. It goes up by major thirds to the right, fifths to the upper right, and minor thirds to the upper left.
If the notes within a highlighted region are tuned perfectly it’s called just intonation (technically any tuning which uses integer ratios is ‘just intonation’, but this is the most canonical of them). The pattern wraps around horizontally because of the diesis, which is the difference between 128/125 and one, or about 0.02. It wraps around vertically because of the syntonic comma, which is the difference between 81/80 and one, or about 0.01. The Pythagorean comma is the difference between 3^12/2^19 and one, about 0.01. The fact that any two of those commas are small can be used to show that the third is small, so it’s only two coincidences, not three.
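All of this arithmetic is easy to check with Python’s Fraction type; a quick sketch:

```python
from fractions import Fraction

# Stacking a major third and a minor third lands on a fifth.
assert Fraction(5, 4) * Fraction(6, 5) == Fraction(3, 2)

diesis = Fraction(128, 125)
syntonic = Fraction(81, 80)
pythagorean = Fraction(3**12, 2**19)

# The three commas are linked, so any two being small forces the third:
assert syntonic**3 / diesis == pythagorean

for name, c in [("diesis", diesis), ("syntonic", syntonic),
                ("pythagorean", pythagorean)]:
    print(f"{name}: {float(c) - 1:.4f}")  # 0.0240, 0.0125, 0.0136
```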
Jazz intervals use factors of 7. For example the blues note is either 7/5 or 10/7 depending on context. But that’s a whole other subject.
There is a large literature in cryptography on mental poker. It’s all very interesting but hasn’t quite crossed the threshold into being practical. There’s a much better approach they could be taking which I’ll now explain.
Traditionally ‘mental poker’ has meant an unspecified poker variant, played by exactly two people, with the goal of making it so the players can figure out who’s the better poker player. This is close to but not exactly what the requirements should be. In practice these days when people say ‘poker’ they mean Hold’em, and the goal should be to make playing over a blockchain practical. Limiting it to exactly two players is completely reasonable. The requirement of running on a blockchain is quite onerous because computation there is extremely expensive, but Hold’em has special properties which, as I’ll explain, enable a much simpler approach.
Side note on ‘practical’: Cryptographers might describe a protocol which requires a million dollars in resources and a month of time to compute as being ‘practical’, meaning it can physically be accomplished. I’m using the much more constraining definition of ‘practical’ as being that players can do it using just their laptops and not have such a bad experience that they rage quit.
The usual approach goes like this: the players collaboratively generate a deck of encrypted cards, then shuffle them, and whenever a player needs to know one card the opponent provides enough information for that one card to be decrypted. This is a very general approach which can support a lot of games, but it has a lot of issues. There are heavyweight cryptographic operations everywhere, often involving multiple round trips, which is a big problem for running on chain. There are lots of potential attacks where a player can cheat and get a peek at a card, in which case there has to be a mechanism for detecting that and slashing the opponent. Having slashing brings all manner of other issues where a player can get fraudulently slashed, which is unfortunate for a game like poker where even if a player literally goes into a coma it only counts as a fold. There are cryptographic ways of making this machinery unnecessary, but those extra requirements come at a cost.
For those of you who don’t know, Hold’em works like this (skipping the parts about betting): Both players get two secret hole cards which the opponent can’t see. Five community cards get dealt out on the table visible to both players. If nobody folds then the players reveal their cards and whoever can form a better hand using a combination of their hole cards and the community cards wins. There are two important properties of this which make the cryptography a lot easier: There are only nine cards total, and they never move.
A much simpler approach to implementing mental poker goes like this: The game rules as played on chain use commit and reveal and simply hash together the committed values to make the cards. No attempt is made by the on chain play to avoid card repetitions. Before a hand even starts both players reveal what their commits are going to be. They then do collaborative computation to figure out if any of the nine cards will collide. If they will, they skip that hand. If they won’t, they play that hand on chain with the commits baked in at the very beginning.
This approach works much better. The costs on chain are about as small as they could possibly be, with all the difficulty shoved off chain and up front. If a player peeks during the collaborative computation or fails to demonstrate that they didn’t peek then the hand never starts in the first place, no slashing needed. The expensive computational bits can be done at the players’ leisure up front, with no waiting during hands.
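Here’s a minimal sketch of the card derivation in Python (the exact hashing scheme and names are my own guesses; the hard part, running the collision check collaboratively so neither player learns the other’s secret, is precisely what the off-chain cryptography has to provide):

```python
import hashlib, secrets

NUM_CARDS = 9  # 2 + 2 hole cards plus 5 community cards

def derive_card(secret_a: bytes, secret_b: bytes, index: int) -> int:
    """Hash both players' committed secrets together with the card's index."""
    digest = hashlib.sha256(secret_a + secret_b + bytes([index])).digest()
    return int.from_bytes(digest, "big") % 52  # a real version would reject-sample

a, b = secrets.token_bytes(32), secrets.token_bytes(32)
cards = [derive_card(a, b, i) for i in range(NUM_CARDS)]
if len(set(cards)) < NUM_CARDS:
    print("collision among the nine cards: skip this hand")
else:
    print("no collision: play the hand with these commits baked in")
```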
Unfortunately none of the literature on mental poker works this way. If anybody makes an implementation which is practical I’ll actually use it. The only extra requirement is that the hashing algorithm is sha256. Until then I’m implementing a poker variant which doesn’t have card removal effects.
Big Tech seeks every advantage to convince users that computing is revolutionized by the latest fad. When the tipping point of Large Language Models (LLMs) was reached a few years ago, generative Artificial Intelligence (AI) systems quickly became that latest snake oil for sale on the carnival podium.
There's so much to criticize about generative AI, but I focus now merely on the pseudo-scientific rhetoric adopted to describe the LLM-backed user-interactive systems in common use today. “Ugh, what a convoluted phrase”, you may ask, “why not call them ‘chat bots’ like everyone else?” Because “chat bot” exemplifies the very anthropomorphic hyperbole of concern.
Too often, software freedom activists (including me — 😬) have asked us to police our language as an advocacy tactic. Herein, I seek not to cajole everyone to end AI anthropomorphism. I suggest rather that, when you write about the latest Big Tech craze, ask yourself: Is my rhetoric actually reinforcing the message of the very bad actors that I seek to criticize?
This work now has interested parties with varied motivations. Researchers, for example, will usually admit that they “have nothing to contribute to philosophical debates about whether it is appropriate to … [anthropomorphize] … machines”. But researchers also can never resist a nascent area of study — so all the academic disclaimers do not prevent the “world of tomorrow” exuberance expressed by those whose work is now the flavor of the month (especially after they toiled at it for decades in relative obscurity). Computer science (CS) academics are too closely tied to the Big Tech gravy train even in mundane times. But when the VCs stand on their disruptor soap-boxes and make it rain 💸? … Some corners of CS academia do become a capitalist echo chamber.
The research behind these LLM-backed generative AI systems is (mostly) not actually new. There's just more electricity, CPUs/GPUs, & digital data available now. When given ungodly resources, well-known techniques began yielding novel results. That allowed for quicker incremental (not exponential) improvement. But, a revolution it is not.
I once asked a fellow CS graduate student (in the mid-1990s), who was presenting their neural net — built with DoD funding to spot tanks behind trees — the simple question0: “Do you know why it's wrong when it's wrong and why it's right when it's right?” She grimaced and answered: “Not at all. It doesn't think.” 30 years later, machines still don't think.
Precisely there lies the danger of anthropomorphization. While we may never know why our fellow humans believe what they believe — after centuries that brought1 Heraclitus, Aristotle, Aquinas, Bacon, Descartes, Kant, Kierkegaard, and Haack — we do know that people think, and therefore, they are. Computers aren't. Software isn't. When we who are succumb to the capitalist chicanery and erroneously project being unto these systems, we take our first step toward relinquishing our inherent power over these systems.
Counter-intuitively, the most dangerous are the AI anthropomorphisms that criticize rather than laud the systems. The worst of these, “hallucination”, is insidious. Appropriation of a diagnostic term from the DSM-5 into CS literature is abhorrent — prima facie. The term leads the reader to the Bizarro world where programmers are doctors who heal sick programs for the betterment of society. Annoyingly and ironically — even if we did wish to anthropomorphize — LLM-backed generative AI systems almost never hallucinate. If one were to insist on lifting an analogous term from mental illness diagnosis (which I obviously don't recommend), the term is “delusional”. Frankly, having spent hundreds of hours of my life talking with a mentally ill family member who is frequently delusional but has almost never hallucinated — and having to learn to delineate the two for the purpose of assisting in the individual's care — I find it downright offensive and triggering that either term could possibly be used to describe a thing rather than a person.
Sadly, Big Tech really wants us to jump (not walk) to the conclusion that these systems are human — or, at least, beloved pets that we can't imagine living without. Critics like me are easily framed as Luddites when we've been socially manipulated into viewing — as “almost human” — these machines poised to replace the artisans, the law enforcers, and the grocery stockers. Like many of you, I read Asimov as a child. I later cheered during ST:TNG S02E09 (“Measure of a Man”) when Lawyer Picard established Mr. Data's right to sentience by shouting: “Your Honour, Starfleet was founded to seek out new life. Well, there it sits.”
But, I assure you as someone who has devoted much of my life to considering the moral and ethical implications of Big Tech: they have yet to give us Mr. Data — and if they eventually do, that Mr. Data2 is probably going to work for ICE, not Starfleet. Remember, Noonien Soong's fictional positronic opus was altruistic only because Soong worked in a post-scarcity society.
While I was still working on a draft of this essay, Eryk Salvaggio's essay “Human Literacy” was published. Salvaggio makes excellent further reading on the points above.
🎶Footnotes:
0I always find that, in science, the answers to the simplest questions are the most illuminating. I'm reminded how Clifford Stoll wrote that the most pertinent question at his PhD physics prelims was “why is the sky blue?”.
1I really just picked a list of my favorite epistemologists here that sounded good when stated in a row; I apologize in advance if I left out your favorite from the list.
2I realize fellow Star Trek fans will say I was moving my lips and nothing came out but a bunch of gibberish because I forgot about Lore. 😛 I didn't forget about Lore; that, my readers, would have to be a topic for a different blog post.
In physics, redshift and blueshift describe changes in the wavelength of light. These phenomena give us important information about motion in the universe, the evolution of galaxies, and the nature of spacetime itself.
This article explains what redshift and blueshift are, what types exist, their historical background, and how astronomers use them today.
Table of contents
- What is redshift?
- What is blueshift?
- Historical background
- Practical applications
- Redshift and the future of the universe
- Frequently asked questions (FAQ)
What is redshift?
Redshift means that the wavelength of light becomes longer. When this happens, both the frequency and the energy of the photons drop, and the light shifts toward the red end of the spectrum.
There are three types of redshift:
- Doppler redshift: occurs when a light source moves away from the observer.
- Cosmological redshift: caused by the expansion of the universe; the farther away a galaxy is, the more redshifted its light.
- Gravitational redshift: according to Einstein's theory of relativity, light moving away from a massive object can lose energy and shift toward the red.
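As a small illustration (the formula itself isn't stated in the article, but it is the standard definition), redshift can be computed directly from wavelengths:

```python
def redshift(wavelength_observed: float, wavelength_emitted: float) -> float:
    """z = (observed - emitted) / emitted; positive means redshift,
    negative means blueshift."""
    return (wavelength_observed - wavelength_emitted) / wavelength_emitted

# Hydrogen-alpha light emitted at 656.3 nm and observed at 660.0 nm:
print(redshift(660.0, 656.3))  # about 0.0056, so the source is redshifted
```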
What is blueshift?
Blueshift is the opposite of redshift. Here the wavelength shortens, and the frequency and energy increase. Light shifts toward the blue end of the spectrum when a light source moves closer to the observer.
A well-known example is the Andromeda galaxy, which is moving toward the Milky Way and therefore shows a clear blueshift.
Historical background
In the early 1900s Vesto Melvin Slipher discovered that many galaxies showed redshift. Later, Edwin Hubble demonstrated that the speed at which a galaxy recedes from us is proportional to its distance: the famous Hubble's law. This was the evidence that the universe is expanding.
Practical applications
- Measuring the distances of galaxies: redshift is used to calculate distances in the universe.
- Studying dark energy: redshifted distant supernovae have revealed the accelerating expansion of the universe.
- Research on black holes: gravitational redshift gives insight into extreme gravitational fields.
- Navigation in space: spectroscopic measurements help map stars and galaxies.
Redshift and the future of the universe
If the expansion of the universe keeps accelerating, distant galaxies will eventually become so redshifted that their light can no longer be observed. That means future civilizations may not be able to see the same cosmos we do today.
Frequently asked questions (FAQ)
Is redshift the same as the Doppler effect?
No. Doppler redshift is only one type; there is also cosmological and gravitational redshift.
Why is blueshift rare in astronomy?
Because the universe is expanding, most galaxies are moving away from us. Only a few, such as Andromeda, show blueshift.
Most people think of Windows and macOS when the conversation turns to computer operating systems. But there is a third alternative, perhaps less known among ordinary users, that powers a large part of the modern digital world: Linux.
Even though Linux isn't always visible in everyday life, it is probably closer to you than you think. It runs on servers, mobile phones, network equipment and much more.
What is Linux, really?
Linux is an open source operating system created in 1991 by the Finnish student Linus Torvalds. He wanted a free and open alternative to the commercial systems that existed at the time. The result was a kernel (called the Linux kernel) which today is the foundation of hundreds of different systems.
Unlike Windows and macOS, Linux is not owned by a single company. It is developed by a global community of both volunteers and large technology companies such as IBM, Google, Red Hat and Canonical.
The special thing about Linux is that anyone can download, use and modify it freely. This has led to an enormous variety of versions, called distributions.
Where is Linux used today?
Many people think Linux is only for "computer nerds". The truth is that it is found almost everywhere:
- Servers: around 70% of all web servers on the internet run Linux. When you visit a website, chances are it is hosted on a Linux server.
- Mobile phones: Android, the world's most widespread mobile operating system, is built on the Linux kernel.
- Supercomputers: over 95% of the world's most powerful supercomputers run Linux, thanks to the system's flexibility and performance.
- Embedded systems: smart TVs, cars, network routers and IoT (Internet of Things) devices often run Linux.
- Cloud and data centers: platforms such as Google Cloud, AWS and Azure offer Linux as the default choice.
In other words: even if you have never installed Linux on your own PC, you most likely use Linux every single day without knowing it.
Advantages of Linux
There are many reasons why Linux has become so popular among developers, companies and the technically curious:
- Free and open source: most distributions are completely free to download and use. You avoid expensive licenses and gain the freedom to customize the system.
- Stability and security: Linux is known for being extremely stable and resistant to viruses and malware, which is why banks, governments and large companies often choose Linux for critical systems.
- Customization: everything can be changed, from the desktop environment to core functions. The system can be tailored to anything from an old laptop to a state-of-the-art server.
- A large community: Linux has an active global community, with countless forums, guides and videos where you can find help and inspiration.
- Performance: Linux runs on both old and new machines. Many use it to give older computers new life because it requires fewer resources than Windows.
Linux distributions: many flavors
One thing that can seem confusing to newcomers is that there isn't just one Linux. Instead there are hundreds of distributions, each with its own purpose. Some of the most popular are:
- Ubuntu: a good choice for beginners, with a user-friendly installation and a large community.
- Debian: known for stability and openness; often used as the basis for other distros.
- Fedora: developed with a focus on the newest technologies.
- Linux Mint: very popular among new users because its layout resembles Windows.
- Arch Linux: for experienced users who want to build everything from scratch.
This diversity means there is a Linux version for almost every need, from gaming and multimedia to servers and development.
Drawbacks and challenges
Although Linux has many advantages, there are also some challenges to be aware of:
- Software compatibility: some programs (e.g. Microsoft Office or Adobe Photoshop) are not available for Linux. However, you can often find alternatives such as LibreOffice or GIMP, or run Windows programs via tools like Wine.
- Learning curve: for users accustomed to Windows or macOS, Linux can take some getting used to.
- Hardware drivers: most things work out of the box, but certain graphics cards, printers or special devices may require extra setup.
Is Linux for you?
If you want more freedom, better security, or simply want to explore an alternative to the commercial systems, Linux is definitely worth trying.
It is an especially good fit if you:
- Want to breathe new life into an older computer.
- Are a developer or technically curious.
- Want a free and stable system for work or study.
- Would like to learn more about how computers actually work.
How to get started
It is easy to try Linux without risk. Many distributions can be downloaded as a Live USB, which lets you boot the system directly from a USB stick without changing anything on your computer. That way you can test Linux before deciding to install it permanently.
The most popular places to start are:
If you have spent any time around HID devices under Linux (for example if you are an avid mouse, touchpad or keyboard user) then you may have noticed that your single physical device actually shows up as multiple device nodes (for free! and nothing happens for free these days!). If you haven't noticed this, run libinput record and you may be part of the lucky roughly 50% who get free extra event nodes.
The pattern is always the same. Assuming you have a device named FooBar ExceptionalDog 2000 AI[1], what you will see are multiple devices:

/dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
/dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
/dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control

The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's HID implementation which splits out a device based on the Application Collection. [2]
A HID report descriptor may use collections to group things together. A "Physical Collection" indicates "these things are (on) the same physical thingy". A "Logical Collection" indicates "these things belong together". And you can of course nest these things near-indefinitely so e.g. a logical collection inside a physical collection is a common thing.
An "Application Collection" is a high-level abstractions to group something together so it can be detected by software. The "something" is defined by the HID usage for this collection. For example, you'll never guess what this device might be based on the hid-recorder output:
# 0x05, 0x01,  // Usage Page (Generic Desktop)   0
# 0x09, 0x06,  // Usage (Keyboard)               2
# 0xa1, 0x01,  // Collection (Application)       4
...
# 0xc0,        // End Collection                 74

Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.
The kernel, ever eager to help, takes top-level application collections (i.e. those not inside another collection) and applies a usage-specific suffix to the device. For the above Generic Desktop/Keyboard usage you get "Keyboard", the other ones currently supported are "Keypad" and "Mouse" as well as the slightly more niche "System Control", "Consumer Control" and "Wireless Radio Control" and "System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen", "Touchscreen" and "Touchpad". Any other Application Collection is currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses "Touch Strip" and other suffixes).
This suffix is necessary because the kernel also splits out the data sent within each collection as a separate evdev event node. Since HID is (mostly) hidden from userspace this makes it much easier for userspace to identify different devices because you can look at an event node and say "well, it has buttons and x/y, so must be a mouse" (this is exactly what udev does when applying the various ID_INPUT properties, with varying levels of success).
The side effect of this however is that your device may show up as multiple devices and most of those extra devices will never send events. Sometimes that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate a mouse for backwards compatibility but once the kernel toggles it to touchpad mode the mouse feature is mute). Sometimes it's just laziness when vendors re-use the same firmware and leave unused bits in place.
It's largely a cosmetic problem only, e.g. libinput treats every event node as individual device and if there is a device that never sends events it won't affect the other event nodes. It can cause user confusion though: "why does my laptop say there's a mouse?" and in some cases it can cause functional degradation - the two I can immediately recall are udev detecting the mouse node of a touchpad as pointing stick (because i2c mice aren't a thing), hence the pointing stick configuration may show up in unexpected places. And fake mouse devices prevent features like "disable touchpad if a mouse is plugged in" from working correctly. At the moment we don't have a good solution for detecting these fake devices - short of shipping giant databases with product-specific entries we cannot easily detect which device is fake. After all, a Keyboard node on a gaming mouse may only send events if the user configured the firmware to send keyboard events, and the same is true for a Mouse node on a gaming keyboard.
So for now, the only solution to those is a per-user udev rule to ignore a device. If we ever figure out a better fix, expect to find a gloating blog post in this very space.
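For example, a rule along these lines (the device name is this post's made-up example; LIBINPUT_IGNORE_DEVICE is the udev property libinput honors for this) would hide the phantom node:

```
# /etc/udev/rules.d/99-ignore-phantom-mouse.rules
ACTION!="remove", KERNEL=="event*", ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", ENV{LIBINPUT_IGNORE_DEVICE}="1"
```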
[1] input device naming is typically bonkers, so I'm just sticking with precedence here
[2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
[3] or sparkling wine, let's not be regionist here
De Kai says the problem with AI is it isn’t mixing system 1 and system 2. I agree but have a completely different spin. The issue isn’t that researchers don’t want to do it, but that they have no idea how to make the two systems communicate.
This communication problem is a good way of explaining the weaknesses in Go AI I wrote about previously. The issue isn’t narrowly that humans can beat the superhuman computers using certain very specific strategies. That’s equivalent to a human chess grandmaster losing to a trained chimpanzee. The issue is that there are systemic weaknesses in the computer’s play which make it much weaker overall, with losing to humans being a particularly comical symptom. Go presents a perfect test case for communicating between system 2 and system 1. The hand-coded system 2 for it is essentially perfect, at least as far as tactics go, and any ability to bridge the two would result in an immediate, measurable, dramatic improvement in playing strength. Anyone acting smart for knowing that this is a core issue for AI should put up or shut up. Either you come up with a technique which works on this perfectly engineered test case or you admit that you’re just as flummoxed as everybody else about how to do it.
On top of that we have no idea how to make system 2 dynamic. Right now it’s all painstakingly hand-coded processes like alpha-beta pruning or test driven development. I suspect there’s a system 3. While system 2 makes models, system 3 comes up with models. Despite almost all educational programs essentially claiming to promote system 3 there’s no evidence that any of them do it at all. It doesn’t help that we have no idea how to test it. The educational instruction we have has a heavy emphasis on system 2 with the occasional bit of system 1 for very specific skills. Students can reliably be taught very specific world models and practiced in them. That helps them do tasks but there’s no evidence that it helps their overall generalization skills.
This creates practical problems in highly skilled tasks like music. To a theorist, looking at what a self-taught person does is much more likely to turn up something novel and interesting than looking at someone who has been through standard training, even though the latter person will be overall objectively better. The problem is that a self-taught person will be 99% reinventing standard theory badly and, if they’re lucky, 1% doing something novel and interesting. Unfortunately they’ll have no idea which part is that 1%. Neither will a standard instructor, who will view the novel bit as merely bad, like the vast bulk of what the student is doing, and train them out of it. It takes someone with a deep understanding of theory and a strong system 3 themselves to notice something interesting in what a self-taught person has done, point out what it is, and encourage them to polish that thing more rather than abandon it. I don’t have any suggestions on how to make this process work better other than for everyone to get at least some personalized instruction from someone truly masterful. I’m even more at a loss for how to make AI better at it. System 3 is deep in the recesses of the human mind, the most difficult and involved thing our brains have gained the ability to do, and we may be a long ways away from having AI do it well.
In the future ‘static’ web pages won’t be static at all. They’ll be customized to exactly you, generated in real time by an LLM which knows everything about you. Don’t worry, to combat this horrible enshittification your web browser will have an LLM of its own which takes whatever the web site spewed up and rewrites it to be more aligned to your interests. The two of them will know about each other and have a conversation/negotiation about exactly what will be shown to you on the ‘static’ web.
Unfortunately what will have happened already is that in the past some hacker will also have conversed with the web site. This will be a vibe coded exploit: They’ll have a conversation with their web browser where they give some esoteric information about firmware and convince it that this is very important safety information. Their browser will then have spoken to the web site convincing it of the same thing. The web site will in turn have mentioned this in passing to your web browser, which will have asked it for that information and the web site will have passed it along to win some favor from your browser.
Once that’s all happened in the background your browser, being the helpful agent that it is, will scan your network for any devices for which this information might be relevant. In its searches it will come across your toaster which is directly connected to the network. They won’t speak a common protocol, but that isn’t a problem: The toaster will have a port open with an LLM behind it so you can connect to it and send human language text to communicate. Your browser’s LLM will talk to the toaster’s LLM and tell it the Very Important Safety Information, after which your toaster will thank it profusely and immediately rewrite its firmware.
Then your toaster will catch on fire and burn your house down.
Don’t worry, it’s easy enough to make a toaster’s firmware read-only. But your browser should know about a recently deceased Nigerian prince who left no known heirs…
This is a heads up that if you install xkeyboard-config 2.45 (the package that provides the XKB data files), some manual interaction may be needed. Version 2.45 has changed the install location after over 20 years, to be a) more correct and b) more flexible.
When you select a keyboard layout like "fr" or "de" (or any other ones really), what typically happens in the background is that an XKB parser (xkbcomp if you're on X, libxkbcommon if you're on Wayland) goes off and parses the data files provided by xkeyboard-config to populate the layouts. For historical reasons these data files have resided in /usr/share/X11/xkb and that directory is hardcoded in more places than it should be (i.e. more than zero).
As of xkeyboard-config 2.45 however, the data files are now installed in the much more sensible directory /usr/share/xkeyboard-config-2 with a matching xkeyboard-config-2.pc for anyone who relies on the data files. The old location is symlinked to the new location so everything keeps working, people are happy, no hatemail needs to be written, etc. Good times.
The reason for this change is two-fold: moving it to a package-specific directory opens up the (admittedly mostly theoretical) use-case of some other package providing XKB data files. But even more so, it finally allows us to start versioning the data files and introduce new formats that may be backwards-incompatible for current parsers. This is not yet the case however, the current format in the new location is guaranteed to be the same as the format we've always had, it's really just a location change in preparation for future changes.
Now, from an upstream perspective this is not just hunky, it's also dory. Distributions however struggle a bit more with this change because of packaging format restrictions. RPM for example is quite unhappy with a directory being replaced by a symlink, which means that Fedora and OpenSuSE have to resort to the .rpmmoved hack. If you have ever used the custom layout and/or added other files to the XKB data files you will need to manually move those files from /usr/share/X11/xkb.rpmmoved/ to the new equivalent location. If you have never used that layout and/or modified local files you can just delete /usr/share/X11/xkb.rpmmoved. Of course, if you're on Wayland you shouldn't need to modify system directories anyway since you can do it in your $HOME.
Corresponding issues exist for Arch and Gentoo; I'm not immediately aware of other distributions' issues, but if you search for them in your bugtracker you'll find them.
Smoking cessation apps have a problem starting at the vibe level1. The problem is they act like a doctor trying to make you quit. This is ridiculous. It’s just an app and can’t ‘make’ you do anything. What they should do instead is be a supportive friend giving you minor nudges, possibly represented by a teddy bear or cute animal which is heavily emotionally invested in your quitting journey.
What smoking cessation apps do get right is having the user check in whenever they smoke, but they give the wrong message when the user checks in. Instead of trying to grant the user permission to have a cigarette or not, the app should tell you whether it thinks you should be able to go longer, using an algorithm based on how long you’ve gone between cigarettes in the past. This should not be an either/or but a gradient, running from ‘you’ve gone longer than expected’ to ‘you definitely should be able to go longer’. The user then has the option to indicate that on second thought they can go longer, or that they really are going to have a cigarette. When they say they’ll go longer the bear should give them words of encouragement: ‘good job’, ‘you can do it!’ When they do have a cigarette the bear should say something appropriate to how long they went, with short intervals getting messages like ‘I only want to know because I care about your health’, ‘Don’t worry, I won’t tell anyone’, ‘We all have our setbacks, you can do better next time’ and longer ones getting messages like ‘You went longer than I thought you would. I’m proud of you’.
Yes this is a very codependent bear. It should have the personality of a very sensitive person who cares a lot about you and would cry if you had a cigarette and didn’t tell it. If you go a while without using the app it should send a notification saying it’s worried about you and wants to check in with how you’re doing.
The algorithm for indicating whether the user should be able to hold off longer should be based on how long on average the user has gone recently, with an eye towards making it longer. It should adjust the rate at which it’s trying to nudge the user based on how successful the user has been recently at putting off a cigarette after first checking in. If they’ve been successful more often it can extend out; less often and it should back off. At no point should the app ever send a notification to a user that it’s time for a cigarette, even after they checked in and said they can hold off for longer.
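A sketch of what that algorithm might look like (all constants invented for illustration):

```python
from statistics import mean

def nudge_score(recent_intervals: list[float], minutes_since_last: float,
                stretch: float) -> float:
    """0.0 means 'you've gone longer than expected', 1.0 means 'you should
    definitely be able to go longer', with a gradient in between."""
    target = mean(recent_intervals[-10:]) * stretch  # recent average, stretched
    return max(0.0, min(1.0, 1.0 - minutes_since_last / target))

def adjust_stretch(stretch: float, held_off: bool) -> float:
    """Nudge harder when the user has been succeeding, back off otherwise."""
    return stretch * 1.02 if held_off else max(1.0, stretch * 0.98)
```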
At no point should the app ever ask the user if they’ve been smoking on the sly without telling it. That’s a recipe for getting people to stop using it entirely out of shame. Instead the app should tell the users it would be sad if they ever did and let the user feel guilt on their own without ever having to fess up.
1 I am not and have never been a smoker but I’ve been around people trying to quit and the apps truly sucked.