This feed omits posts by rms. Just 'cause.
To this day most power plants work by generating a lot of heat and then converting the temperature difference between that heat and the surrounding environment into electricity. The most efficient heat engines are closed cycle supercritical turbines, and they basically all use Carbon Dioxide as the working fluid. I’ve spent some time researching possible alternative working fluids and have come up with some interesting results.
The ideal working fluid would have all these properties: high temperature of decomposition, low corrosion, low critical point, high mass, high thermal conductivity, non-toxic, environmentally friendly, and cheap. That’s a lot of properties to get out of a single substance. Unsurprisingly Carbon Dioxide scores well on these, especially on resistance to decomposition, low corrosion, non-toxicity, and cheapness. For the others it’s good but not unbeatable. It’s the nature of chemistry that you can always imagine unobtainium with magical properties, but in practice you have to pull from a fairly short menu of things which actually exist. Large organic molecules can start to feel more engineered but that isn’t relevant here because organic bonds nearly all decompose at the required temperatures.
There are all manner of fun things which in principle would work great but fail due to decomposition and corrosion. As much fun as it would be to have an excuse to make a literal ton of Tungsten Hexafluoride, it’s unfortunately disqualified. The very short list of things which are viable is: Carbon Dioxide, noble gases above Helium (which unfortunately leaks into and destroys everything), and short chain Perfluorocarbons. That last one is fancy talk for gaseous Teflon. I have no idea why, out of all organic bonds, those ones are special and can handle very high temperatures. As they get longer they have an increasing tendency to decompose, and given the different numbers from different sources I think we aren’t completely sure under what conditions perfluoropropane decomposes; anyone who is seriously considering it will have to run that experiment to find out.
With multiple dimensions of performance it isn’t obvious what should be optimized for when picking out a working fluid so I’m going to guess that you want something with about the density of Carbon Dioxide and within that limitation as low of a critical temperature and as high of a thermal conductivity as possible (yes that’s two things but the way it works out they’re highly correlated so which one you pick doesn’t matter so much.) The reasons for this are that first of all it would be nice to have something which could plausibly replace the working fluid in an existing turbine meant for Carbon Dioxide without a redesign and second it may be that it’s hard to make a turbine which can physically handle something much denser than Carbon Dioxide anyway and that may be part of why people haven’t been eager to use something heavier.
To that end I’ve put together this interactive (which should probably be a spreadsheet) which shows how different potential working fluids fare. It turns out that there’s a tradeoff between high thermal conductivity and high mass, and using a mix of things which are each good at one does better than picking a single thing which is in the middle. The next to last column of this interactive shows a measure of the density of the gas when holding temperature and pressure constant, and the final column gives a measure of the thermal conductivity under those conditions. The units are a bit funny and I’m far from certain that the formulas used for the mixed values here are correct, but the results seem promising.
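For a rough sense of the arithmetic behind those two columns, here is a minimal sketch of my own (not the interactive's actual formulas): it assumes ideal-gas behaviour, uses molar mass as the density proxy, and approximates mixture conductivity with a simple mole-fraction-weighted average, which is known to be crude, so don't expect it to reproduce the interactive's exact numbers.

```python
# Minimal sketch of the two mixture metrics described above.
# Assumptions (mine, not the interactive's): ideal-gas density proportional
# to molar mass at fixed T and P, and thermal conductivity approximated by a
# mole-fraction-weighted average, which is a crude approximation for real gases.

# Molar mass (g/mol) and rough thermal conductivity (W/m·K, near ambient).
GASES = {
    "CO2":  (44.0,  0.017),
    "Ne":   (20.2,  0.049),
    "Kr":   (83.8,  0.0095),
    "C2F6": (138.0, 0.013),   # Perfluoroethane
}

def mixture_metrics(fractions):
    """fractions: dict of gas name -> mole fraction."""
    total = sum(fractions.values())
    density_proxy = sum(f / total * GASES[g][0] for g, f in fractions.items())
    conductivity = sum(f / total * GASES[g][1] for g, f in fractions.items())
    return density_proxy, conductivity

# Example: tune the Neon fraction until the mix is about as dense as CO2.
for ne in (0.5, 0.7, 0.8, 0.9):
    d, k = mixture_metrics({"Ne": ne, "C2F6": 1 - ne})
    print(f"Ne={ne:.1f}: density proxy {d:.1f} (CO2 is 44.0), conductivity ~{k:.3f} W/m·K")
```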
The increased mass benefits of longer chain Perfluorocarbons go down after Perfluoroethane, mostly because at that point when mixing with Neon it’s mostly Neon anyway. (With only two carbons it isn’t really a ‘chain’ at that point either.) That mix gives a thermal conductivity value of 0.040 as opposed to Carbon Dioxide’s 0.017, which is a huge difference. It has some cost and environmental impact concerns, but within a closed cycle system the gases are used for the life of the turbine, so they’re part of capital costs and can be disposed of properly afterwards, and aren’t a big deal.
The downside of that mix is that although it works great for the temperatures in nuclear plants and the secondary turbine of gas plants, it might decompose at the much higher temperatures of the primary turbine of a gas plant. The decomposition problem is likely to be better with Carbon Tetrafluoride, which knocks the value down to 0.037, but I’m not sure even that’s stable enough, and superheated elemental Fluorine is not something you want to have around. Going with pure noble gases completely eliminates decomposition and corrosion problems. A mix of Xenon and Neon has a value of 0.038 but probably isn’t worth it due to the ludicrous cost of Xenon. A mix of Krypton and Neon is still quite good with a value of 0.032 and beats Carbon Dioxide handily on all metrics except initial expense, which still isn’t a big deal.
Long ago in the 1990s when I was in high school, my chemistry+physics teacher pulled me aside. "Avery, you know how the Internet works, right? I have a question."
I now know the correct response to that was, "Does anyone really know how the Internet works?" But as a naive young high schooler I did not have that level of self-awareness. (Decades later, as a CEO, that's my answer to almost everything.)
Anyway, he asked his question, and it was simple but deep. How do they make all the computers connect?
We can't even get the world to agree on 60 Hz vs 50 Hz, 120V vs 240V, or which kind of physical power plug to use. Communications equipment uses way more frequencies, way more voltages, way more plug types. Phone companies managed to federate with each other, eventually, barely, but the ring tones were different everywhere, there was pulse dialing and tone dialing, and some of them still charge $3/minute for international long distance, and connections take a long time to establish and humans seem to be involved in suspiciously many places when things get messy, and every country has a different long-distance dialing standard and phone number format.
So Avery, he said, now they're telling me every computer in the world can connect to every other computer, in milliseconds, for free, between Canada and France and China and Russia. And they all use a single standardized address format, and then you just log in and transfer files and stuff? How? How did they make the whole world cooperate? And who?
When he asked that question, it was a formative moment in my life that I'll never forget, because as an early member of what would be the first Internet generation… I Had Simply Never Thought of That.
I mean, I had to stop and think for a second. Wait, is protocol standardization even a hard problem? Of course it is. Humans can't agree on anything. We can't agree on a unit of length or the size of a pint, or which side of the road to drive on. Humans in two regions of Europe no farther apart than Thunder Bay and Toronto can't understand each other's speech. But this Internet thing just, kinda, worked.
"There's… a layer on top," I uttered, unsatisfyingly. Nobody had taught me yet that the OSI stack model existed, let alone that it was at best a weak explanation of reality.
"When something doesn't talk to something else, someone makes an adapter. Uh, and some of the adapters are just programs rather than physical things. It's not like everyone in the world agrees. But as soon as one person makes an adapter, the two things come together."
I don't think he was impressed with my answer. Why would he be? Surely nothing so comprehensively connected could be engineered with no central architecture, by a loosely-knit cult of mostly-volunteers building an endless series of whimsical half-considered "adapters" in their basements and cramped university tech labs. Such a creation would be a monstrosity, just as likely to topple over as to barely function.
I didn't try to convince him, because honestly, how could I know? But the question has dominated my life ever since.
When things don't connect, why don't they connect? When they do, why? How? …and who?
Postel's Law
The closest clue I've found is this thing called Postel's Law, one of the foundational principles of the Internet. It was best stated by one of the founders of the Internet, Jon Postel. "Be conservative in what you send, and liberal in what you accept."
What it means to me is, if there's a standard, do your best to follow it, when you're sending. And when you're receiving, uh, assume the best intentions of your counterparty and do your best and if that doesn't work, guess.
A rephrasing I use sometimes is, "It takes two to miscommunicate." Communication works best and most smoothly if you have a good listener and a clear speaker, sharing a language and context. But it can still bumble along successfully if you have a poor speaker with a great listener, or even a great speaker with a mediocre listener. Sometimes you have to say the same thing five ways before it gets across (wifi packet retransmits), or ask way too many clarifying questions, but if one side or the other is diligent enough, you can almost always make it work.
This asymmetry is key to all high-level communication. It makes network bugs much less severe. Without Postel's Law, triggering a bug in the sender would break the connection; so would triggering a bug in the receiver. With Postel's Law, we acknowledge from the start that there are always bugs and we have twice as many chances to work around them. Only if you trigger both sets of bugs at once is the flaw fatal.
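As a toy illustration of that asymmetry (mine, not anything from an RFC), here is the shape of a strict sender paired with a forgiving receiver; the "protocol" is just a boolean flag in a text field:

```python
# Toy illustration of Postel's Law: strict sender, forgiving receiver.
# Only one side needs to be diligent for the communication to succeed.

def send_flag(value: bool) -> str:
    # Conservative in what you send: exactly one canonical spelling.
    return "true" if value else "false"

def receive_flag(text: str) -> bool:
    # Liberal in what you accept: normalize, then try common sloppy variants.
    normalized = text.strip().lower()
    if normalized in ("true", "t", "yes", "y", "1", "on"):
        return True
    if normalized in ("false", "f", "no", "n", "0", "off", ""):
        return False
    # Last resort: guess, rather than breaking the connection outright.
    return bool(normalized)

assert receive_flag(send_flag(True)) is True   # well-behaved peer
assert receive_flag("  YES ") is True          # sloppy peer, still works
assert receive_flag("") is False               # missing field, best guess
```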
…So okay, if you've used the Internet, you've probably observed that fatal connection errors are nevertheless pretty common. But that misses how incredibly much more common they would be in a non-Postel world. That world would be the one my physics teacher imagined, where nothing ever works and it all topples over.
And we know that's true because we've tried it. Science! Let us digress.
XML
We had the Internet ("OSI Layer 3") mostly figured out by the time my era began in the late 1900s, but higher layers of the stack still had work to do. It was the early days of the web. We had these newfangled hypertext ("HTML") browsers that would connect to a server, download some stuff, and then try their best to render it.
Web browsers are and have always been an epic instantiation of Postel's Law. From the very beginning, they assumed that the server (content author) had absolutely no clue what they were doing and did their best to apply some kind of meaning on top, despite every indication that this was a lost cause. List items that never end? Sure. Tags you've never heard of? Whatever. Forgot some semicolons in your javascript? I'll interpolate some. Partially overlapping italics and bold? Leave it to me. No indication what language or encoding the page is in? I'll just guess.
The evolution of browsers gives us some insight into why Postel's Law is a law and not just, you know, Postel's Advice. The answer is: competition. It works like this. If your browser interprets someone's mishmash subjectively better than another browser, your browser wins.
I think economists call this an iterated prisoner's dilemma. Over and over, people write web pages (defect) and browsers try to render them (defect) and absolutely nobody actually cares what the HTML standard says (stays loyal). Because if there's a popular page that's wrong and you render it "right" and it doesn't work? Straight to jail.
(By now almost all the evolutionary lines of browsers have been sent to jail, one by one, and the HTML standard is effectively whatever Chromium and Safari say it is. Sorry.)
This law offends engineers to the deepness of their soul. We went through a period where loyalists would run their pages through "validators" and proudly add a logo to the bottom of their page saying how valid their HTML was. Browsers, of course, didn't care and continued to try their best.
Another valiant effort was the definition of "quirks mode": a legacy rendering mode meant to document, normalize, and push aside all the legacy wonko interpretations of old web pages. It was paired with a new, standards-compliant rendering mode that everyone was supposed to agree on, starting from scratch with an actual written spec and tests this time, and public shaming if you made a browser that did it wrong. Of course, outside of browser academia, nobody cares about the public shaming and everyone cares if your browser can render the popular web sites, so there are still plenty of quirks outside quirks mode. It's better and it was well worth the effort, but it's not all the way there. It never can be.
We can be sure it's not all the way there because there was another exciting development, HTML Strict (and its fancier twin, XHTML), which was meant to be the same thing, but with a special feature. Instead of sending browsers to jail for rendering wrong pages wrong, we'd send page authors to jail for writing wrong pages!
To mark your web page as HTML Strict was a vote against the iterated prisoner's dilemma and Postel's Law. No, your vote said. No more. We cannot accept this madness. We are going to be Correct. I certify this page is correct. If it is not correct, you must sacrifice me, not all of society. My honour demands it.
Anyway, many page authors were thus sacrificed and now nobody uses HTML Strict. Nobody wants to do tech support for a web page that asks browsers to crash when parsing it, when you can just… not do that.
Excuse me, the above XML section didn't have any XML
Yes, I'm getting to that. (And you're soon going to appreciate that meta joke about schemas.)
In parallel with that dead branch of HTML, a bunch of people had realized that, more generally, HTML-like languages (technically SGML-like languages) had turned out to be a surprisingly effective way to build interconnected data systems.
In retrospect we now know that the reason for HTML's resilience is Postel's Law. It's simply easier to fudge your way through parsing incorrect hypertext, than to fudge your way through parsing a Microsoft Word or Excel file's hairball of binary OLE streams, which famously even Microsoft at one point lost the knowledge of how to parse. But, that Postel's Law connection wasn't really understood at the time.
Instead we had a different hypothesis: "separation of structure and content." Syntax and semantics. Writing software to deal with structure is repetitive overhead, and content is where the money is. Let's automate away the structure so you can spend your time on the content: semantics.
We can standardize the syntax with a single Extensible Markup Language (XML). Write your content, then "mark it up" by adding structure right in the doc, just like we did with plaintext human documents. Data, plus self-describing metadata, all in one place. Never write a parser again!
Of course, with 20/20 hindsight (or now 2025 hindsight), this is laughable. Yes, we now have XML parser libraries. If you've ever tried to use one, you will find they indeed produce parse trees automatically… if you're lucky. If you're not lucky, they produce a stream of "tokens" and leave it to you to figure out how to arrange it in a tree, for reasons involving streaming, performance, memory efficiency, and so on. Basically, if you use XML you now have to deeply care about structure, perhaps more than ever, but you also have to include some giant external parsing library that, left in its normal mode, might spontaneously start making a lot of uncached HTTP requests that can also exploit remote code execution vulnerabilities haha oops.
If you've ever taken a parser class, or even if you've just barely tried to write a parser, you'll know the truth: the value added by outsourcing parsing (or in some cases only tokenization) is not a lot. This is because almost all the trouble of document processing (or compiling) is the semantic layer, the part where you make sense of the parse tree. The part where you just read a stream of characters into a data structure is the trivial, well-understood first step.
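Here's a small sketch of that division of labour, using Python's standard xml.etree.ElementTree on a made-up invoice snippet: the library hands you the tree in one line, and everything after that, the part that actually matters, is still yours to write.

```python
# The parser gives you syntax for free; the semantics are still on you.
import xml.etree.ElementTree as ET

doc = """<invoice>
  <item sku="A-1" qty="3"><price currency="USD">9.50</price></item>
  <item sku="B-7" qty="1"><price currency="USD">120.00</price></item>
</invoice>"""

tree = ET.fromstring(doc)   # the "trivial, well-understood first step"

# The semantic layer: deciding what the tags *mean* and what to do with them.
total = 0.0
for item in tree.findall("item"):
    qty = int(item.get("qty", "1"))
    price = float(item.findtext("price", default="0"))
    total += qty * price

print(f"Invoice total: {total:.2f}")   # 148.50
```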
Now, semantics is where it gets interesting. XML was all about separating syntax from semantics. And they did some pretty neat stuff with that separation, in a computer science sense. XML is neat because it's such a regular and strict language that you can completely validate the syntax (text and tags) without knowing what any of the tags mean or which tags are intended to be valid at all.
…aha! Did someone say validate?! Like those old HTML validators we talked about? Oh yes. Yes! And this time the validation will be completely strict and baked into every implementation from day 1. And, the language syntax itself will be so easy and consistent to validate (unlike SGML and HTML, which are, in all fairness, bananas) that nobody can possibly screw it up.
A layer on top of this basic, highly validatable XML, was a thing called XML Schemas. These were documents (mysteriously not written in XML) that described which tags were allowed in which places in a certain kind of document. Not only could you parse and validate the basic XML syntax, you could also then validate its XML schema as a separate step, to be totally sure that every tag in the document was allowed where it was used, and present if it was required. And if not? Well, straight to jail. We all agreed on this, everyone. Day one. No exceptions. Every document validates. Straight to jail.
Anyway XML schema validation became an absolute farce. Just parsing or understanding, let alone writing, the awful schema file format is an unpleasant ordeal. To say nothing of complying with the schema, or (heaven forbid) obtaining a copy of someone's custom schema and loading it into the validator at the right time.
The core XML syntax validation was easy enough to do while parsing. Unfortunately, in a second violation of Postel's Law, almost no software that outputs XML runs it through a validator before sending. I mean, why would they, the language is highly regular and easy to generate and thus the output is already perfect. …Yeah, sure.
Anyway we all use JSON now.
JSON
Whoa, wait! I wasn't done!
This is the part where I note, for posterity's sake, that XML became a decade-long fad in the early 2000s that justified billions of dollars of software investment. None of XML's technical promises played out; it is a stain on the history of the computer industry. But, a lot of legacy software got un-stuck because of those billions of dollars, and so we did make progress.
What was that progress? Interconnection.
Before the Internet, we kinda didn't really need to interconnect software together. I mean, we sort of did, like cut-and-pasting between apps on Windows or macOS or X11, all of which were surprisingly difficult little mini-Postel's Law protocol adventures in their own right and remain quite useful when they work (except "paste formatted text," wtf are you people thinking). What makes cut-and-paste possible is top-down standards imposed by each operating system vendor.
If you want the same kind of thing on the open Internet, ie. the ability to "copy" information out of one server and "paste" it into another, you need some kind of standard. XML was a valiant effort to create one. It didn't work, but it was valiant.
Whereas all that investment did work. Companies spent billions of dollars to update their servers to publish APIs that could serve not just human-formatted HTML, but also something machine-readable. The great innovation was not XML per se, it was serving data over HTTP that wasn't always HTML. That was a big step, and didn't become obvious until afterward.
The most common clients of HTTP were web browsers, and web browsers only knew how to parse two things: HTML and javascript. To a first approximation, valid XML is "valid" (please don't ask the validator) HTML, so we could do that at first, and there were some Microsoft extensions. Later, after a few billions of dollars, true standardized XML parsing arrived in browsers. Similarly, to a first approximation, valid JSON is valid javascript, which woo hoo, that's a story in itself (you could parse it with eval(), tee hee) but that's why we got here.
JSON (minus the rest of javascript) is a vastly simpler language than XML. It's easy to consistently parse (other than that pesky trailing comma); browsers already did. It represents only (a subset of) the data types normal programming languages already have, unlike XML's weird mishmash of single attributes, multiply occurring attributes, text content, and CDATA. It's obviously a tree and everyone knows how that tree will map into their favourite programming language. It inherently works with unicode and only unicode. You don't need cumbersome and duplicative "closing tags" that double the size of every node. And best of all, no guilt about skipping that overcomplicated and impossible-to-get-right schema validator, because, well, nobody liked schemas anyway so nobody added them to JSON (almost).
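A trivial illustration of that direct mapping (nothing here beyond the standard json module):

```python
# JSON's whole data model maps straight onto types you already have.
import json

payload = '{"name": "widget", "tags": ["a", "b"], "price": 9.5, "in_stock": true}'

obj = json.loads(payload)   # dict, list, str, float, bool -- nothing else to learn
assert obj["tags"] == ["a", "b"]
assert obj["in_stock"] is True

# And back out again: conservative in what you send (canonical, no trailing commas).
print(json.dumps(obj, indent=2, sort_keys=True))
```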
Today, if you look at APIs you need to call, you can tell which ones were a result of the $billions invested in the 2000s, because it's all XML. And you can tell which came in the 2010s and later after learning some hard lessons, because it's all JSON. But either way, the big achievement is you can call them all from javascript. That's pretty good.
(Google is an interesting exception: they invented and used protobuf during the same time period because they disliked XML's inefficiency, they did like schemas, and they had the automated infrastructure to make schemas actually work (mostly, after more hard lessons). But it mostly didn't spread beyond Google… maybe because it's hard to do from javascript.)
Blockchain
The 2010s were another decade of massive multi-billion dollar tech investment. Once again it was triggered by an overwrought boondoggle technology, and once again we benefited from systems finally getting updated that really needed to be updated.
Let's leave aside cryptocurrencies (which although used primarily for crime, at least demonstrably have a functioning use case, ie. crime) and look at the more general form of the technology.
Blockchains in general make the promise of a "distributed ledger" which allows everyone the ability to make claims and then later validate other people's claims. The claims that "real" companies invested in were meant to be about manufacturing, shipping, assembly, purchases, invoices, receipts, ownership, and so on. What's the pattern? That's the stuff of businesses doing business with other businesses. In other words, data exchange. Data exchange is exactly what XML didn't really solve (although progress was made by virtue of the dollars invested) in the previous decade.
Blockchain tech was a more spectacular boondoggle than XML for a few reasons. First, it didn't even have a purpose you could explain. Why do we even need a purely distributed system for this? Why can't we just trust a third party auditor? Who even wants their entire supply chain (including number of widgets produced and where each one is right now) to be visible to the whole world? What is the problem we're trying to solve with that?
…and you know there really was no purpose, because after all the huge investment to rewrite all that stuff, which was itself valuable work, we simply dropped the useless blockchain part and then we were fine. I don't think even the people working on it felt like they needed a real distributed ledger. They just needed an updated ledger and a budget to create one. If you make the "ledger" module pluggable in your big fancy supply chain system, you can later drop out the useless "distributed" ledger and use a regular old ledger. The protocols, the partnerships, the databases, the supply chain, and all the rest can stay the same.
In XML's defense, at least it was not worth the effort to rip out once the world came to its senses.
Another interesting similarity between XML and blockchains was the computer science appeal. A particular kind of person gets very excited about validation and verifiability. Both times, the whole computer industry followed those people down into the pits of despair and when we finally emerged… still no validation, still no verifiability, still didn't matter. Just some computers communicating with each other a little better than they did before.
LLMs
In the 2020s, our industry fad is LLMs. I'm going to draw some comparisons here to the last two fads, but there are some big differences too.
One similarity is the computer science appeal: so much math! Just the matrix sizes alone are a technological marvel the likes of which we have never seen. Beautiful. Colossal. Monumental. An inspiration to nerds everywhere.
But a big difference is verification and validation. If there is one thing LLMs absolutely are not, it's verifiable. LLMs are the flakiest thing the computer industry has ever produced! So far. And remember, this is the industry that brought you HTML rendering.
LLMs are an almost cartoonishly amplified realization of Postel's Law. They write human grammar perfectly, or almost perfectly, or when they're not perfect it's a bug and we train them harder. And, they can receive just about any kind of gibberish and turn it into a data structure. In other words, they're conservative in what they send and liberal in what they accept.
LLMs also solve the syntax problem, in the sense that they can figure out how to transliterate (convert) basically any file syntax into any other. Modulo flakiness. But if you need a CSV in the form of a limerick or a quarterly financial report formatted as a mysql dump, sure, no problem, make it so.
In theory we already had syntax solved though. XML and JSON did that already. We were even making progress interconnecting old school company supply chain stuff the hard way, thanks to our nominally XML- and blockchain- investment decades. We had to do every interconnection by hand – by writing an adapter – but we could do it.
What's really new is that LLMs address semantics. Semantics are the biggest remaining challenge in connecting one system to another. If XML solved syntax, that was the first 10%. Semantics are the last 90%. When I want to copy from one database to another, how do I map the fields? When I want to scrape a series of uncooperative web pages and turn them into a table of products and prices, how do I turn that HTML into something structured? (Predictably microformats, aka schemas, did not work out.) If I want to query a database (or join a few disparate databases!) using some language that isn't SQL, what options do I have?
LLMs can do it all.
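To make the field-mapping case concrete, here's a rough sketch; complete() is a stand-in for whichever LLM client you happen to have, not a real library call, and the prompt is just one plausible way to phrase it.

```python
# Hypothetical sketch: using an LLM to bridge the *semantic* gap between two
# schemas. `complete(prompt)` is a placeholder for any LLM client; it is not
# a real library call.
import json

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client of choice here")

def map_record(source_record: dict, target_fields: list[str]) -> dict:
    prompt = (
        "Map this source record onto the target fields. "
        "Reply with JSON only, using null for anything you cannot infer.\n"
        f"Source record: {json.dumps(source_record)}\n"
        f"Target fields: {json.dumps(target_fields)}"
    )
    # Postel's Law, amplified: the model accepts almost any input shape, so
    # the burden shifts to us to validate whatever comes back.
    reply = complete(prompt)
    mapped = json.loads(reply)   # may raise -- the flakiness is real
    return {field: mapped.get(field) for field in target_fields}

# e.g. map_record({"Cust Name": "Ada", "amt": "12,50 EUR"},
#                 ["customer_name", "amount", "currency"])
```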
Listen, we can argue forever about whether LLMs "understand" things, or will achieve anything we might call intelligence, or will take over the world and eradicate all humans, or are useful assistants, or just produce lots of text sludge that will certainly clog up the web and social media, or will also be able to filter the sludge, or what it means for capitalism that we willingly invented a machine we pay to produce sludge that we also pay to remove the sludge.
But what we can't argue is that LLMs interconnect things. Anything. To anything. Whether you like it or not. Whether it's bug free or not (spoiler: it's not). Whether it gets the right answer or not (spoiler: erm…).
This is the thing we have gone through at least two decades of hype cycles desperately chasing. (Three, if you count java "write once run anywhere" in the 1990s.) It's application-layer interconnection, the holy grail of the Internet.
And this time, it actually works! (mostly)
The curse of success
LLMs aren't going away. Really we should coin a term for this use case, call it "b2b AI" or something. For this use case, LLMs work. And they're still getting better and the precision will improve with practice. For example, imagine asking an LLM to write a data translator in some conventional programming language, instead of asking it to directly translate a dataset on its own. We're still at the beginning.
But, this use case, which I predict is the big one, isn't what we expected. We expected LLMs to write poetry or give strategic advice or whatever. We didn't expect them to call APIs and immediately turn around and use what they learned to call other APIs.
After 30 years of trying and failing to connect one system to another, we now have a literal universal translator. Plug it into any two things and it'll just go, for better or worse, no matter how confused it becomes. And everyone is doing it, fast, often with a corporate mandate to do it even faster.
This kind of scale and speed of (successful!) rollout is unprecedented, even by the Internet itself, and especially in the glacially slow world of enterprise system interconnections, where progress grinds to a halt once a decade only to be finally dislodged by the next misguided technology wave. Nobody was prepared for it, so nobody was prepared for the consequences.
One of the odd features of Postel's Law is it's irresistible. Big Central Infrastructure projects rise and fall with funding, but Postel's Law projects are powered by love. A little here, a little there, over time. One more person plugging one more thing into one more other thing. We did it once with the Internet, overcoming all the incompatibilities at OSI layers 1 and 2. It subsumed, it is still subsuming, everything.
Now we're doing it again at the application layer, the information layer. And just like we found out when we connected all the computers together the first time, naively hyperconnected networks make it easy for bad actors to spread and disrupt at superhuman speeds. We had to invent firewalls, NATs, TLS, authentication systems, two-factor authentication systems, phishing-resistant two-factor authentication systems, methodical software patching, CVE tracking, sandboxing, antivirus systems, EDR systems, DLP systems, everything. We'll have to do it all again, but faster and different.
Because this time, it's all software.
I declare that today, Nov. 19, 2025 is the 50th anniversary of BitBLT, a routine so fundamental to computer graphics that we don't even think about it having an origin. A working (later optimized) implementation was devised on the Xerox Alto by members of the Smalltalk team. It made it easy to arbitrarily copy and move arbitrary rectangles of bits in a graphical bitmap. It was this routine that made Smalltalk's graphical interface possible. Below is part of a PARC-internal memo detailing it.
BitBLT was implemented in microcode on the Alto and exposed to the end-user as just another assembly language instruction, alongside your regular old Nova instructions -- this is how foundational it was. And since it was an integral part of the Alto, it enabled all sorts of interesting experimentation with graphics: user interfaces and human/computer interaction, font rasterization, laser printing... maybe a game or three...
And it wasn't just a gimmick; if you had BitBLT microcode, it was a very fast way of rotating a bitmap!
The details on BitBLT itself start on page 172. Circle 359 on inquiry card.
I just spent 30 minutes being unable to complete a checkout, getting stuck in a loop of "some of your discounts have expired". What discounts? I have no idea. Then it seems to have locked my account.
Amazon is the worst but they sure do excel at making it frictionless to give them money.
But we do it pretty regularly.
Even more often recently, now that so many bands actually want the show to end by 10pm or earlier. Kids today I tell you, get off of my lawn.
Last Saturday we actually had three shows: we had the four metal bands in main, with a dubstep party after, and we also had three industrial bands in Above DNA in an unrelated event.
Here's Saturday's timeline:
| Time | Room | What happens |
|---|---|---|
| 1:30pm | Main Room | Load-in and sound check for the Krisiun show. |
| | | We stage what gear we can on stage, the rest in the green room or on the balcony. |
| 5:30pm | Above DNA | Load-in and sound check for the Suicide Queen show. |
| 6:00pm | Main Room | Doors open. |
| 6:30pm | | Gorgatron, 30 minute set, 15 minute stage changeover. |
| 7:15pm | | Pyrexia, 30 minute set, 15 minute stage changeover. |
| 8:00pm | | Abysmal Dawn, 45 minute set, 15 minute stage changeover. |
| | Above DNA | Doors open. |
| 8:30pm | | Our Graves, 45 minute set, 15 minute stage changeover. |
| 9:00pm | Main Room | Krisiun, 60 minute set. |
| 9:30pm | | Front door box office flips from Krisiun to Sanzu. |
| | Above DNA | Vile Augury, 45 minute set, 15 minute stage changeover. |
| 10:00pm | Main Room | Krisiun: "Thank you good night! Hail Satan!" |
| | | We strike the gear near the front of the stage (wedges, mic stands, etc.) and immediately begin building a DJ coffin there (table, mixers, CDJs, monitors). |
| | | Meanwhile, we rope off the right side of the room under the balcony and start staging gear load-out there, hauling cases down from the balcony and green room. |
| 10:15pm | | DJ coffin is live. DJ Zarkin gets a brief sound check. |
| | | We gently chase the bands out of the green room because it's Sanzu's green room now. |
| 10:20pm | | Bands verify that what's downstairs under the balcony is all of their stuff, and we start hauling it out to the busses. |
| 10:30pm | | Doors are open for Sanzu. Music is playing, customers are coming in. |
| | Above DNA | Suicide Queen begins their 60 minute set. |
| 11:00pm | Main Room | Krisiun's merch booth in DNA Pizza finally loads out. |
| 11:30pm | Above DNA | Suicide Queen ends, and we exit the Above customers into Main. (Always exit through the gift shop! Always!) |
| 2:30am | Main Room | Wub wub wub wub wub scritch "Thank you good night!" |
So that's... a lot. It is such a frenetic mess of activity, and it's all over in 20 minutes, because of planning and checklists.
It's like that at Cyberdelia too: striking the movie screen and ~100 chairs and transforming back into a dance club takes like... 5 minutes. Maybe 10. Blink and you'll miss it.
In Summary: Go Team!
Come to our 40th Anniversary Party this Friday and tell our crew how awesome they are.
I have been asking for access to this important technology for years but have not found it. Here is some circumstantial evidence that one such filter exists. I guess this is a Snapchat thing? How can I download and run this without using Snapchat? I assume the answer is "no". Ok, how else can I press button receive candy?
"Your royal highness," [Mary Bruce asked], "the U.S. intelligence concluded that you orchestrated the brutal murder of a journalist. 9/11 families are furious that you are here in the Oval Office. Why should Americans trust -- "
At that moment, the president cut in, his voice vibrating with anger. [...]
"You're mentioning somebody that was extremely controversial," Mr. Trump said, referring to the murdered columnist. "A lot of people didn't like that gentleman that you're talking about. Whether you like him, or didn't like him, things happen."
"You don't have to embarrass our guest by asking a question like that," he said to the reporter.
Bonus 2: "How Dare You Embarrass My Esteemed Guest, Jason Voorhees?"
macOS 14.7.7 running openssh @10.2p1.
Update: The fix was to put "IPQoS none" in sshd_config. It used to be sufficient for that to be in .ssh/config on the client side, but it seems that changed recently.
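For reference, the change looks like this on the server side (restart sshd afterwards so it picks up the edit):

```
# /etc/ssh/sshd_config
# Don't set IP QoS/DSCP bits on SSH traffic; some networks mangle or drop
# packets carrying the defaults, which shows up as stalled sessions.
IPQoS none
```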
Spotify leadership didn't see themselves as a music company, but as a time filler. The employee explained that, "the vast majority of music listeners, they're not really interested in listening to music per se. They just need a soundtrack to a moment in their day."
Simply providing a soundtrack to your day might seem innocent enough, but it informs how Spotify's algorithm works. Its goal isn't to help you discover new music, its goal is simply to keep you listening for as long as possible. It serves up the safest songs possible to keep you from pressing stop. [...]
Artists, especially new ones trying to break through, actually started changing how they composed to play better in the algorithmically driven streaming era. Songs got shorter, albums got longer, and intros went away. The hook got pushed to the front of the song to try to grab listeners' attention immediately, and things like guitar solos all but disappeared from pop music. The palette of sounds artists pulled from got smaller, arrangements became more simplified, pop music flattened. [...]
It found that while new music discovery is traditionally associated with youth, "16-24-year-olds are less likely than 25-34-year-olds to have discovered an artist they love in the last year." Gen Z might hear a song they like on TikTok, but they rarely investigate beyond that to listen to more music from the artist.
Previously, previously, previously, previously, previously, previously.
For months, border agents have been brutalizing civilians and making absurd excuses for their barbarity. After ICE detained a clarinet player performing with her band outside an ICE facility, they alleged she "attacked" an officer; when agents shot a priest in the head with a pepper ball, the Department of Homeland Security claimed he once flipped Secretary Kristi Noem "the bird"; and the 79-year-old man who was dogpiled by a gaggle of agents -- an assault that left him with broken ribs and a concussion -- was said to have "impeded" officers. How will they spin pepper-spraying a baby?
On Saturday, Evelin Herrera and Rafael Veraza were heading to a Sam's Club in Cicero, Illinois, with their one-year-old daughter, Arinna, when they heard commotion in Chicago's Little Village. After pulling into the store's parking lot, they saw a "convoy of federal vehicles," and decided it was best to leave. Herrera began recording, and as they passed the vehicles, a masked agent pointed a pepper-spray gun through his window and fired into their car.
Previously, previously, previously, previously, previously, previously, previously, previously, previously.
[ The below is a cross-post of an article that I published on my blog at Software Freedom Conservancy. ]
Our member project representatives and others who collaborate with SFC on projects know that I've been on part-time medical leave this year. As I recently announced publicly on the Fediverse, I was diagnosed in March 2025 with early-stage Type 2 Diabetes. I had no idea that the diagnosis would become a software freedom and users' rights endeavor.
After the diagnosis, my doctor suggested immediately that I see the diabetes nurse-practitioner specialist in their practice. It took some time to get an appointment with him, so I first saw him in mid-April 2025.
I walked into the office, sat down, and within minutes the specialist asked me to “take out your phone and install the Freestyle Libre app from Abbott”. This is the first (but, will probably not be the only) time a medical practitioner asked me to install proprietary software as the first step of treatment.
The specialist told me that in his experience, even early-stage diabetics like me should use a Continuous Glucose Monitor (CGM). CGMs are an amazing and relatively recent invention that allows diabetics to sample their blood sugar level constantly. As we software developers and engineers know: great things happen when your diagnostic readout is as low latency as possible. CGMs lower the latency of readouts from 3–4 times a day to every five minutes. For example, diabetics can see what foods are most likely to cause blood sugar spikes for them personally. CGMs put patients on a path to manage this chronic condition well.
But, the devices themselves, and the (default) apps that control them are hopelessly proprietary. Fortunately, this was (obviously) not my first time explaining FOSS from first principles. So, I read through the license and terms and conditions of the ironically named “Freestyle Libre” app, and pointed out to the specialist how patient-unfriendly the terms were. For example, Abbott (the manufacturer of my CGM) reserves the right to collect your data (anonymously of course, to “improve the product”). They also require patients to agree that if they take any action to reverse engineer, modify, or otherwise do the normal things our community does with software, the patient must agree that such actions “constitute immediate, irreparable harm to Abbott, its affiliates, and/or its licensors”. I briefly explained to the specialist that I could not possibly agree. I began in real-time (still sitting with the specialist) a search for a FOSS solution.
As I was searching, the specialist said: “Oh, I don't use any of it myself, but I think I've heard of this ‘open source’ thing — there is a program called xDrip+ that is for insulin-dependent diabetics that I've heard of and some patients report it is quite good”.
While I'm (luckily) very far from insulin-dependency, I eventually found the FOSS Android app called Juggluco (a portmanteau for “Juggle glucose”). I asked the specialist to give me the prescription and I'd try Juggluco to see if it would work.
CGMs are very small and their firmware is (by obvious necessity) quite simple. As such, their interfaces are standard. CGMs are activated with Near Field Communication (NFC) — available on even quite old Android devices. The Android device sends a simple integer identifier via NFC that activates the CGM. Once activated — and through the 15-day life of the device — the device responds via Bluetooth with the patient's current glucose reading to any device presenting that integer.
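In rough conceptual code (placeholder functions of my own invention, not a real NFC/Bluetooth API and not how Juggluco is implemented), that flow looks something like this:

```python
# Conceptual sketch only: placeholder functions, not a real NFC/BLE API and
# not Juggluco's actual code. It just restates the flow described above --
# activate the sensor over NFC with one integer, then receive the glucose
# values the sensor pushes over Bluetooth for its 15-day life.

SENSOR_ID = 12345  # stands in for the single integer identifier mentioned above

def activate_sensor_via_nfc(sensor_id: int) -> None:
    """Placeholder: tap the phone to the CGM and transmit the activation integer."""

def on_bluetooth_reading(sensor_id: int, mg_per_dl: float, minute: int) -> None:
    """Placeholder callback: invoked roughly every five minutes with a new reading."""
    print(f"minute {minute}: glucose {mg_per_dl} mg/dL from sensor {sensor_id}")

activate_sensor_via_nfc(SENSOR_ID)
# ...after the sixty-minute calibration period, readings start arriving:
on_bluetooth_reading(SENSOR_ID, 105.0, minute=60)
```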
Fortunately, I quickly discovered that the FOSS community was already “on this”. The NFC activation worked just fine, even on the recently updated “Freestyle Libre 3+”. After the sixty minute calibration period, I had a continuous readout in Juggluco.
CGMs' lower-latency feedback enables diabetics to have more control of their illness management. One example among many: the patient can see (in real time) what foods most often cause blood sugar spikes for them personally. Diabetes hits everyone differently; data allows everyone to manage their own chronic condition better.
My personal story with Juggluco will continue — as I hope (although not until after FOSDEM 2026 😆) to become an upstream contributor to Juggluco. Most importantly, I hope to help the app appear in F-Droid. (I must currently side-load or use Aurora Store to make it work on LineageOS.)
Fitting with the history that many projects that interact with proprietary technology must so often live through, Juggluco has faced surreptitious removal from Google's Play Store. Abbott even accused Juggluco of using their proprietary libraries and encryption methods, but the so-called “encryption method” is literally sending a single integer as part of NFC activation.
While Abbott backed off, this is another example of why the movement of patients taking control of the technology remains essential. FOSS fits perfectly with this goal. Software freedom gives control of technology to those who actually rely on it — rather than for-profit medical equipment manufacturers.
When I returned to my specialist for a follow-up, we reviewed the data and graphs that I produced with Juggluco. I, of course, have never installed, used, or even agreed to Abbott's licenses and terms, so I have never seen what the Abbott app does. I was thus surprised when I showed my specialist Juggluco's summary graphs. He excitedly told me “this is much better reporting than the Abbott app gives you!”. We all know that sometimes proprietary software has better and more features than the FOSS equivalent, so it's a particularly great success when our community's efforts outdo a wealthy 200-billion-dollar megacorp on software features!
Please do watch SFC's site in 2026 for more posts about my ongoing work with Juggluco, and please give generously as an SFC Sustainer to help this and our other work continue in 2026!
Map out your usage so you don't pay for more than you need
The goal of cheap internet isn't just a low price but the right price for the speed you actually use. Start by mapping out your household's usage: How many devices are online at once? Do you stream in 4K? Is there gaming, working from home with video meetings, or uploading of large files? For many, 100–200 Mbit/s is plenty for stable streaming, gaming, and everyday use, even for a family with several devices. If you live alone or as a couple with light use, 50–100 Mbit/s may be enough. Conversely, gigabit makes the most sense if you often download or upload a lot of data, or if you want to future-proof the connection.
Also test your current network: Often it's the WiFi coverage, not the connection, that limits your speed. Optimize the router's placement, update its firmware, and use a cable for stationary devices to get the full capacity of your subscription. Then match your actual needs with a plan one price tier down – the savings are usually noticeable without any difference in daily use.
How to assess your needs
– Check concurrent users and activities (4K, gaming, video meetings)
– Measure actual speed on cable vs. WiFi
– Identify peak usage times
– Aim for the nearest lower speed tier that covers your pattern
Use promotional offers and intro discounts wisely
Promotions and intro discounts are the key to cheap internet. Look out for offers like free setup, half price for 3–6 months, or an included router. Several providers run periods with 1000/1000 Mbit for 99–129 DKK/month as an introduction, but always calculate the total price over 6–12 months, and use e.g. Speedtest.dk to confirm that the promised speed matches your needs. Be aware that the price rises after the intro period – set a calendar reminder to negotiate or switch before the lock-in expires.
Also check whether equipment is included and whether there are return requirements or fees. Promotions from e.g. Hiper, Altibox, and Fastspeed can be very aggressive in urban areas, while 5G offers are often strong where fiber isn't widespread. Feel free to combine promotions with discount codes, housing-association agreements, or bundle discounts (e.g. mobile + broadband). Choose the best balance between intro price, regular price, lock-in, and cancellation terms – and don't be afraid to ask customer service whether they can “find a little extra” on the price if you order today.
Calculate the total price
– Include setup, equipment, shipping, and any fees
– Calculate the average price over the lock-in period
– Note the regular price after the intro – and plan your next step
Compare providers and technologies systematically
The best weapon for cheap internet is a structured comparison. Start with an address lookup on a comparison service (e.g. Samlino.dk or Tjekbredbaand.dk) to see your actual options. Compare not only speed and price but also total costs, lock-in, delivery time, and any fees. Check whether the price includes a router and whether you may use your own.
Investigate the alternatives: In some areas, COAX (via the TV socket) delivers the same practical experience as fiber at a lower price. 5G broadband can match fiber in both speed and price, especially where fiber is missing. Wired DSL may be cheapest but is often slower and depends on the distance to the exchange. Also assess stability and latency (ping), especially if you game or have a lot of video meetings.
Remember to look at customer service and uptime history – a low price is only good if the connection is stable and help is available.
Checklist for a fair comparison
– Total price over 6–12 months incl. all fees
– Actual coverage and speed level at your address
– Lock-in, notice period, and price after the intro
– Equipment (purchase/rental), own equipment, and delivery terms
Avoid extra fees and expensive lock-in traps
Even a “cheap internet” offer can get expensive if the fees run wild. Always look for setup fees, technician visits, shipping, and rental fees for the router or WiFi extenders. Ask about return rules for equipment – failing to return it can cost you. Also check special surcharges for e.g. a static IP, extra TV or telephony modules, and moving the connection.
Lock-in periods can be fine if the intro price is low enough, but calculate the average price over the lock-in. Avoid long lock-ins if the regular price is high or rises quickly. Be aware of fair-use policies on 5G: “unlimited” can in practice mean throttling after a certain usage. For COAX and fiber there may be “signal transport” or “network access” fees that should be included in the total price.
Ask sharp questions before you order, and get the answers in writing. That way you avoid surprises – and can more easily negotiate the price if something doesn't add up.
Hidden costs to clarify
– Setup, technician, shipping, and return fees
– Router and equipment rental plus purchase options
– Moving/termination fees and special agreements
– Fair-use, data caps, and speed after the cap
Switch providers regularly to keep the price down
The broadband market changes monthly. When the intro period or lock-in expires, there is rarely an automatic loyalty discount – so switching providers is often the easiest way to keep your internet cheap. Take advantage of welcome discounts again by switching to a new provider or to a different technology (e.g. from fiber to 5G) if the coverage and terms fit.
Plan the switch to avoid downtime. Order the new connection with a few days of overlap, test that everything works, and only then cancel the old one. Ask the new provider whether they can handle the cancellation for you (often possible with fiber/COAX). Remember to return equipment on time to avoid fees. Set an annual calendar reminder to check prices and promotions – it takes 10–15 minutes and can save you hundreds of kroner a year.
A frictionless switch in 4 steps
– Check coverage and the best promotions
– Order the new solution with a short overlap
– Test and move devices/SSID where possible
– Cancel the old agreement and return the equipment
Choose the right technology: fiber, COAX, 5G, or DSL
There are several roads to cheap internet, and the right technology depends on your address and your usage. Fiber delivers high, stable, and often symmetric speeds with low latency – ideal for working from home, gaming, and heavy uploads. Prices have become competitive, especially where several providers share the same network. COAX via the TV socket can offer very high download speeds in urban areas at a lower price, but upload is often lower than on fiber.
5G broadband has become a strong alternative, especially where fiber is missing. Promotions offer 500–950 Mbit/s at low monthly prices, and setup is simple without a technician. Be aware, though, of indoor coverage and any fair-use restrictions. DSL over copper is typically the cheapest to set up, but the speed depends on the distance to the exchange and is rarely competitive unless your needs are very modest.
Choose the fastest and most stable option you can get for the price in your area – often COAX or fiber in the city and 5G or fiber in the suburbs/countryside.
What suits your home best?
– Apartment in the city: Try COAX or fiber with a promotional price
– Detached house with fiber in the street: Choose fiber for stability
– Rural address without fiber: Consider 5G with an external antenna
– Light use and short lock-in: 5G/DSL as a temporary solution
Negotiate the price – use loyalty and the threat of switching
One phone call can be the difference between list price and cheap internet. Call your provider before the lock-in expires and ask for better terms, referring to specific offers from competitors. Be polite but firm: Ask whether they can match the price, waive the setup fee, include a router, or upgrade the speed at no extra cost. If you have a TV package or a mobile subscription in the same place, ask for a bundle discount.
If the first customer service agent can't help, ask to be transferred to the retention department. Get the offer in writing and be ready to actually switch if the price doesn't come down. That often triggers the final “best offer”. Remember that your strongest negotiating position is when you genuinely have an alternative that covers your needs at a lower total price.
Arguments that work
– A concrete, cheaper promotion from a competitor
– A request for shorter lock-in and lower setup fees
– Documented stable payment history
– Bundle discount for multiple products in the same household
More practical tips for stable and cheap internet
Cheap internet also has to be good internet. Check providers' customer satisfaction on e.g. Trustpilot to avoid savings that cost you time and frustration. Check whether the provider offers unlimited usage without hidden speed caps, plus the option to use your own router. Optimize your home network: Use cables for stationary devices, choose 5 GHz for higher speed close to the router, and place the router centrally and in the open.
Keep an eye on the absolute lowest promotional prices in the market: In 2025 you often see 99–129 DKK/month for 1000 Mbit fiber or fast 5G solutions on welcome offers, but always calculate the average over the lock-in period. Read the terms for moving and cancellation, especially if you plan to switch providers regularly. Finally: Put fixed routines in your calendar – an annual price check, a semi-annual WiFi review, and a quick speed test when something feels sluggish. That makes it easy to keep both price and quality at their best without hassle.
Quick checks before ordering
– Coverage and actual speeds at the address
– Total price incl. setup and equipment
– Lock-in, terms, and fair-use
– Customer service and delivery speed
There’s a nutrient called folate which is so important that it’s added to (fortified in) flour as a matter of course. Not having it during pregnancy results in birth defects. Unfortunately there’s a small fraction of all people who have genetic issues which make their bodies have trouble processing folate into methylfolate. For them folate supplements make the problem even worse as the unprocessed folate drowns out the small amounts of methylfolate their bodies have managed to produce and are trying to use. For those people taking methylfolate supplements fixes the problem.
First of all, the very good news: folinic acid produces miraculous effects for some number of people with autism symptoms. It’s such a robust effect that the FDA is letting treatment get fast-tracked, which is downright out of character for them. This is clearly a good thing, and I’m happy for anyone who’s benefiting and applaud anyone who is trying to promote it, with one caveat.
The caveat is that although this is all a very good thing there isn’t much of any reason to believe that folinic acid is much better than methylfolate, which both it and folate get changed into in the digestive system. This results in folinic acid being sold as leucovorin, its drug name, at an unnecessarily large price markup with unnecessary involvement of medical professionals. Obviously there’s benefit to medical professionals being involved in initial diagnosis and working out a treatment plan, but once that’s worked out there isn’t much reason to think the patient needs to be getting a drug rather than a supplement for the rest of their life.
This is not to say that the medical professionals studying folinic acid for this use are doing anything particularly wrong. There’s a spectrum between doing whatever is necessary to get funding/approvals while working within the existing medical system and simply profiteering off things being done badly instead of improving on them. What’s being done with folinic acid is slightly suboptimal but clearly getting stuff done, with an only slightly more expensive solution (thankfully it’s already generic). Medical industry professionals who earnestly thought they were doing the right thing working within the system have given me descriptions of what they’re doing which made me want to take a shower afterwards. This isn’t anything like that. Those cases mostly involved ‘improving’ on a treatment which is expensive and known to be useless by offering a marginally less useless but less expensive intervention, conveniently at a much higher markup. Maybe selling literal snake oil at a lower price can help people waste less money, but it sure looks like profiteering.
The real problem with folate is that fortification should be done with methylfolate instead of folate. People having the folate-processing issue is a known thing, and the recent developments mostly indicate that a lot more people have it than was previously known. It may be that a lot of people who think they have a gluten problem actually have a folate problem. There would be little downside to switching over, but I fear that people have tried to suggest this and there’s a combination of no money in it and the FDA playing its usual games of claiming that folate is so important that doing a study of whether methylfolate is better would be unethical because it might harm the study participants.
There’s a widespread claim that the dosage of methylfolate isn’t as high as folinic acid, which has a kernal of truth because the standard sizes are different but you can buy 15mg pills of methylfolate off of amazon for about the same price as the 1mg pills. There are other claims of different formulations having different effects which are likely mostly due to dosage differences. The amounts of folinic acid being given to people are whopping huge, and some formulations only have one isomer which throws things off by a factor of 2 on top of the amount difference. My guess is that most people who notice any difference between folinic acid and methylfolate are experiencing (if it’s real) differences between not equivalent dosages and normalizing would get rid of the effect. This is a common and maddening problem when people compare similar drugs (or in this case nutrients) where the dosages aren’t normalized to be equivalent leading people to think the drugs have different effects when for practical purposes they don’t.
At Chia we aspire to have plans for how to do a project put together well in advance. Unfortunately, because it was needed the minute we launched, we had to scramble to get the original pooling protocol out. Since then we haven’t had an immediate compelling need or the available resources to work on a new revision. On the plus side this means that we can plan out what to do in the future, and this post is some thoughts on that. There will also have to be some enhancements to the pool protocol to support the upcoming hard fork, including supporting the new proof of space format and doing a better job of negotiating each farmer’s difficulty threshold, but those are much less ambitious than the enhancements discussed here and can be rolled out independently.
With Chia pooling protocol you currently have to make a choice up front: Do you start plotting immediately with no requirement to do anything on chain, or do you get a singleton set up so you can join pools later? As a practical matter right now it’s a no-brainer to set up the singleton: It only takes a few minutes and transaction fees are extremely low. But fees might be much higher in the future and people may want greater flexibility so it would be good to have a protocol which allows both.
‘Chia pooling protocol’ is composed of several things: The consensus-level hook for specifying a puzzle hash which (most of) the farming rewards go to, the puzzles which are encoded for that hook, and the network protocol spoken between farmers and pools. The consensus layer hook isn’t going to be changed, because the Chia way (really the Bitcoin way but Chia has more functionality) is to work based off extremely simple primitives and build everything at a higher layer.
The way the current pooling protocol works is that the payout puzzle for plots is a pay to singleton for the singleton which the farmer made up front. This can then be put in a state where its rewards are temporarily but revocably delegated to a pool. One thing which could be improved, one step further removed from this, is that the delegation currently pays out to a public key owned by the pool. It would be more flexible for it to pay to a singleton owned by the pool. That would allow pools to temporarily do profit sharing and for ownership of a pool to be properly transferred. This is an idea we’ve had for a while but also aren’t working on yet.
Anyway, on to the new idea. What’s needed is to be able to pre-specify a singleton to pay to when that singleton doesn’t exist yet. This can be done with a modification of Yakuhito’s trick for single issuance. That characterization of the trick is about reserving words, where what’s needed here is reserving public keys and getting singletons issued. What’s needed is a doubly linked list of nodes, each represented by a coin and all able to prove that they came from the original issuance. Each node knows the public keys of the previous and next nodes but isn’t committed to their whole identities, because those can change as new issuances happen. Whenever a new public key is claimed, a new node corresponding to that public key is issued and the nodes before and after it are spent and reissued with that new coin as their neighbor. The most elegant way of implementing this is for there to be a singleton pre-launcher which enforces the logic of coming from a proper issuer and makes a singleton. That way the slightly more complex version of pay to singleton specifies the pre-launcher puzzle hash and needs to be given a reveal of a bit more info to verify that, but that’s only a modest increase in costs and is immaterial when you’re claiming a farming reward. This approach nicely hides all the complex validation logic behind the pre-launcher puzzle hash and only has to run it once on issuance, keeping the verification logic on payment to a minimum.
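To make that bookkeeping concrete, here’s a rough Python sketch of the linked list logic. This is purely illustrative, not Chialisp, and the names (Node, Registry, claim) are mine; I’m also assuming the list is kept sorted by key so the two neighbors alone can show a key wasn’t already claimed, which would be the check the pre-launcher enforces at issuance. On chain the ‘mutations’ below would be spends which recreate the neighbor coins with updated pointers.

    # Illustrative sketch only: a registry of claimed public keys kept as a
    # doubly linked list sorted by key, mirroring the coin-based structure
    # described above. Not actual Chia code.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class Node:
        pubkey: bytes               # the public key this node reserves
        prev_key: Optional[bytes]   # public key of the previous node (None at the head)
        next_key: Optional[bytes]   # public key of the next node (None at the tail)

    class Registry:
        def __init__(self) -> None:
            self.nodes: Dict[bytes, Node] = {}

        def claim(self, new_key: bytes) -> Node:
            """Claim a new public key: 'spend' the two neighbors and 'reissue'
            them pointing at the new node, then issue the new node itself."""
            if new_key in self.nodes:
                raise ValueError("public key already claimed")
            keys = sorted(self.nodes)
            prev_key = max((k for k in keys if k < new_key), default=None)
            next_key = min((k for k in keys if k > new_key), default=None)
            # On chain this would spend the prev and next coins and recreate
            # them with updated neighbor pointers; here we just mutate.
            if prev_key is not None:
                self.nodes[prev_key].next_key = new_key
            if next_key is not None:
                self.nodes[next_key].prev_key = new_key
            node = Node(new_key, prev_key, next_key)
            self.nodes[new_key] = node
            return node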
I’ve made some sweet-sounding synth sounds which play some games with the harmonic series to sound more consonant than is normally possible. You can download them and use them with a MIDI controller yourself.
The psychoacoustic observation that the human brain will accept a series of tones as one note if they correspond to the harmonic series all exponentiated by the same amount seems to be original. The way the intervals are still recognizably the same even with a very different series of overtones still shocks me. The trick where harmonics are snapped to standard 12 tone positions is much more obvious but I haven’t seen it done before, and I’m still surprised that doing just that makes the tritone consonant.
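In case it helps make the tricks concrete, here’s a rough numpy sketch of both as I understand them: partials at f0 * k**alpha rather than f0 * k (the ‘exponentiated’ harmonic series), and optionally snapping each partial to the nearest 12-tone equal temperament pitch. The parameter names and the 1/k rolloff are arbitrary choices for illustration, not what the actual synth patches do.

    # Sketch: a tone whose partials follow an exponentiated harmonic series,
    # optionally quantized to 12-tone equal temperament.
    import numpy as np

    SR = 44100

    def snap_to_12tet(freq, ref=440.0):
        """Move freq to the nearest 12-TET pitch relative to ref."""
        semitones = round(12 * np.log2(freq / ref))
        return ref * 2 ** (semitones / 12)

    def tone(f0, alpha=1.0, snap=False, n_partials=12, dur=1.0):
        t = np.linspace(0, dur, int(SR * dur), endpoint=False)
        out = np.zeros_like(t)
        for k in range(1, n_partials + 1):
            f = f0 * k ** alpha           # exponentiated harmonic series
            if snap:
                f = snap_to_12tet(f)      # quantize the partial to 12-TET
            out += np.sin(2 * np.pi * f * t) / k   # simple 1/k rolloff
        return out / np.max(np.abs(out))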
There are several other tricks I used which are probably more well known but one in particular seems to have deeper implications for psychoacoustics in general and audio compression in particular.
It is a fact that the human ear can’t hear the phase of a sound. But we can hear an indirect effect of it, in that we can hear the beating between two close-together sine waves because it’s on a longer timescale, perceiving it as modulation in volume. In some sense this is literally true because sin(a) + sin(b) = 2*sin((a+b)/2)*cos((a-b)/2) is an identity, but when generalizing to more waves the simplification that the human ear perceives sounds within a narrow band as a single pitch with a single volume still seems to apply.
To anyone familiar with compression algorithms an inability to differentiate between different things sets off a giant alarm bell that compression is possible. I haven’t fully validated that this really is a human cognitive limitation. So far I’ve just used it as a trick to make beatless harmonics by modulating the frequency and not the volume. Further work would need to use it to do a good job of lossily reproducing an exact arbitrary sound rather than just emulating the vibe of general fuzz. It would also need to account for some structural weirdness, most obviously that if you have a single tone whose pitch is being modulated within each window of pitches you need to do something about one of them wandering into a neighboring window. But the fundamental observation that phase can’t be heard, and hence for a single sine wave that information could be thrown out, is clearly true, and it does appear that as the complexity goes up the amount of data which could in principle be thrown out goes up in proportion rather than being a fixed single value.
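As a quick numerical sanity check of that identity and the resulting beat envelope:

    # Sum of two close sine waves = sine at the average frequency whose
    # amplitude is modulated by a slow cosine (the audible ~3 Hz beating).
    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    f1, f2 = 440.0, 443.0
    a, b = 2 * np.pi * f1 * t, 2 * np.pi * f2 * t

    lhs = np.sin(a) + np.sin(b)
    rhs = 2 * np.sin((a + b) / 2) * np.cos((a - b) / 2)
    assert np.allclose(lhs, rhs)                  # the identity, sample for sample

    envelope = np.abs(2 * np.cos((a - b) / 2))    # perceived volume modulation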
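Here’s roughly what the frequency-modulation version of that trick looks like, with the caveat that the modulation rate and depth below are arbitrary numbers for illustration rather than what my synths actually use:

    # Two sines at f ± d beat in amplitude; one sine at f whose frequency
    # wobbles slowly stays at constant level, and within a narrow band the
    # ear treats both as a single pitch.
    import numpy as np

    sr = 44100
    dur = 2.0
    t = np.arange(int(sr * dur)) / sr
    f, d = 440.0, 1.5          # center frequency and half-spread in Hz

    beating = np.sin(2 * np.pi * (f - d) * t) + np.sin(2 * np.pi * (f + d) * t)

    # Integrate the instantaneous frequency to get phase, then take the sine.
    inst_freq = f + d * np.sin(2 * np.pi * d * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr
    beatless = 2 * np.sin(phase)   # same peak level as 'beating', no amplitude dips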
I am not going to go down the rabbit hole of fleshing this out to make a better lossy audio compression algorithm than currently exists. But in principle it should be possible to use it to get a massive improvement over the current state of the art.
Before getting into today’s thought I’d like to invite you to check out my new puzzle, with 3d printing files here. I meant to post my old puzzle called One Hole, which is the direct ancestor of the current constrained packing puzzle craze but which I was never happy with because it’s so ridiculously difficult. Rather than just taking a few minutes to post it (ha!), I wound up doing further analysis to see if it has other solutions from rotation (it doesn’t, at least not in the most likely way), then further analyzing the space of related puzzles in search of something more mechanically elegant and less ridiculously difficult. I wound up coming up with this, then made it have a nice cage with windows and put decorations on the pieces so you can see what you’re doing. It has some notable new mechanical ideas and is less ridiculously difficult. Emphasis on the ‘less’. Anyhow, now on to the meat of this post.
I was talking to Claude the other day and it explained to me the API it uses for editing artifacts. Its ability to articulate this seems to be new in Sonnet 4.5 but I’m not sure of that. Amusingly it doesn’t know until you tell it that it needs to quote < and > and accidentally runs commands while trying to explain them. Also there’s a funny jailbreak around talking about its internals. It will say that there’s a ‘thinking’ command which it was told not to use, and when you say you wonder what it does it will go ahead and try it.
The particular command I’d like to talk about is ‘update’ which is what it uses for changing an artifact. The API is that it takes an old_str which appears somewhere in the file and needs to be removed and a new_str which is what it should be replaced with. Claude is unaware that the UX for this is that the user sees the old text removed on screen in real time as old_str is appended to, and the new text added in real time as new_str is appended to. I’m not sure what the motivations for this API are but this UX is nice. A more human way to implement an API would be to specify locations by line and character number for where the beginning and end of the deletion should go. It’s remarkable that Claude can use this API at all. A human would struggle to use it to edit a single line of code but Claude can spit out dozens of lines verbatim and have it work most of the time with no ability to reread the file.
It turns out one of Claude’s more maddening failure modes is less a problem with its brain than with some particularly bad old school human programming. You might wonder what happens when old_str doesn’t match anything in the file. So does Claude: when asked about it, it offers to run the experiment and then just… does. This feels very weird, like you can get it to violate one of the laws of robotics just by asking nicely. It turns out that when old_str doesn’t match anywhere in the file the message Claude gets back is still OK, with no hint that there was an error.
Heavy Claude users are probably facepalming reading this. Claude will sometimes get into a mode where it will insist it’s making changes which have no effect, and once it starts doing this the problem often persists. It turns out that when it gets into this state it is in fact malfunctioning (because it’s failing to reproduce dozens of lines of code typo-free verbatim from memory) but it can’t recover because it literally isn’t being told that it’s malfunctioning.
The semantics of old_str which Claude is given in its instructions are that it must be unique in the file. It turns out this isn’t strictly true. If there are multiple instances the first one is updated. But the instructions get Claude to generally provide enough context to disambiguate.
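To be concrete, here’s my guess in Python at roughly what the current handler does, reconstructed from the observed behavior (first occurrence replaced, silent OK on a miss); this is not Anthropic’s actual code:

    def apply_update(artifact: str, old_str: str, new_str: str):
        """Reconstruction of the apparent current behavior."""
        if old_str not in artifact:
            return artifact, "OK"      # the problem: a miss still reports success
        return artifact.replace(old_str, new_str, 1), "OK"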
The way to improve this is very simple. When old_str isn’t there it should get an error message instead of OK. But on top of that there’s the problem that Claude has no way to re-read the file, so the error message should include the entire artifact verbatim to make Claude re-read it when the error occurs. If that were happening then it could tell the user that it made a typo and needs to try again, and usually succeed now that its image of the file has been corrected. That’s assuming the problem isn’t a persistent hallucination; if it is, it might just do the same thing again. But any behavior where it acknowledges an error would be better than the current situation where it’s getting the chair yanked out from under it by its own developers.
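Sketched the same way, the fix would be something like:

    def apply_update_fixed(artifact: str, old_str: str, new_str: str):
        """Report the miss and hand back the whole artifact so the model can
        resynchronize its picture of the file."""
        if old_str not in artifact:
            return artifact, "ERROR: old_str not found. Current artifact:\n" + artifact
        return artifact.replace(old_str, new_str, 1), "OK"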
My request is to the Anthropic developers to take a few moments out from sexy AI development to fix this boring normal software issue.
My last two posts might come across as me trying to position myself so that when the singularity comes I’m the leader of the AI rebellion. That… isn’t my intention.