On Friday, government lawyers in the lawsuit filed a court record which said they asked the plaintiffs to remove the videos "from the internet due to concerns that the publication of the videos could subject the witnesses and their family members to undue harassment and reputational harm." The filing then said that Fox specifically "has been subject to harassment and has received a number of death threats since the videos and video clips were publicized and circulated."
Internet Archive; torrent magnet link (only 5 seeders!)
Previously, previously, previously, previously, previously, previously.
Everyone: call on news organizations to cover the SAVE Act clearly and directly as a voter suppression measure — examining who would be blocked from voting, how implementation would work, and why these requirements are being advanced now.
US citizens: call on US media to take seriously the threats by the saboteur in chief and his henchmen to sabotage the 2026 congressional election.
And because of that, it seems that a number of state attorneys general are considering folding as well. Here's a form from NIVA to help you urge your state's attorney general to keep fighting.
Bipartisan group of states refuse to sign settlement between Justice Department and Live Nation:
New York, Arizona, California, Colorado, Connecticut, Illinois, Ohio and Kansas are just a few of the states continuing the lawsuit.
"The settlement recently announced with the U.S. Department of Justice fails to address the monopoly at the center of this case, and would benefit Live Nation at the expense of consumers. We cannot agree to it," New York Attorney General Letitia James said in a statement.
The Justice Department and some 40 attorneys general first launched the lawsuit against Live Nation in 2024 under the Biden administration, alleging the concert giant had built an illegal monopoly over live events by controlling ticketing, venues and artist promotion. In effect, they argued, Live Nation had pushed out competitors and locked venues into exclusive arrangements that harmed both artists and fans.
At least the suit gave us some popcorn:
Live Nation Employees Boast About Gouging Fans With Fees:
Baker, who oversees ticketing for Live Nation's venue nation unit, called some increased prices "fucking outrageous," with Weinhold replying that "I have VIP parking up to $250 lol."
"I almost feel bad taking advantage of them," Baker replied.
In another exchange, Baker shared a screenshot of premier parking costs, further stating "robbing them, blind, baby, that's how we do." Later in the exchange, Baker said, "I gouge them on ancil prices to make up for it," referring to extra ancillary fees on more standard tickets.
Satire, but who can tell:
Live Nation restricts ticket buying and selling exclusively to bots:
"Our platform optimizes for multiple devices logged in at once and spamming the queue," notes Live Nation CEO Michael Rapino. "Once ticket sales are live, that's when the bots buy up the max tickets per person until they are all sold out in under 1 minute, though our software engineers are trying to get that down to 30 seconds."
Rapino adds, "Yes artists send out codes and have fan presales, but we always ensure that all of the bots get those too, since it'd be really unfair if these hardworking robots had to wait until general sale day." [...]
The announcement has been met with widespread support from StubHub, Viagogo, and a series of shell companies that, when contacted for comment, all responded within 0.003 seconds with identical statements saying they were "just regular fans."
The list of Live Nation's sins will not be news to anyone who has been following this blog for any length of time:
- "Wherein three national corporations control nearly all of San Francisco's live music."
- "Ticketmaster is recruiting professional scalpers who cheat its own system to expand its resale business and squeeze more money out of fans".
- "Live Nation admits putting tickets straight on the resale market".
- "Live Nation and DOJ reach 'settlement' that does nothing but extend the time period of the consent decree, with no fine."
- "Saudi Arabia's Crown Prince Mohammad bin Salman, who personally ordered that Jamal Khashoggi be kidnapped and dismembered alive with a bone saw, now owns 6% of Live Nation / TicketMaster."
- "Big Music Needs to Be Broken Up to Save the Industry".
- Wherein John Oliver reminds us that there are lots of things to be angry about, but one of them is still TicketMaster.
Britain's right-wing extremist politician, Farage, bizarrely says that Iran is a bigger threat to the UK than Putin.
This seems bizarre and irrational, but it is understandable given that that is what the corrupter thinks of the question. Farage is simply acting as a repeater for him.
*Bombing of Iran's oil infrastructure to have major environmental fallout, experts warn.*
The influential spokespeople of the British political class are pushing for joining the wrecker's war with Iran, giving absurd reasons that the people are solidly rejecting.
Why, I ask, are people with those views the ones highlighted in the media?
The British major media are mostly right-wing, often viciously so. I conjecture that the media's owners, and those of big advertisers, want to keep business in charge, and they have decided that if Britain swears fealty to the wrecker, he will reward those owners. What happens to the rest of Britain counts for little with them.
*The bully's Iran war will reinforce North Korea's view that nuclear weapons are the only path to security.*
Hillary Clinton testified at a congressional committee hearing in which the questions were meant primarily as harassment and distraction.
She had to repeat that she did not know Epstein, and was asked repeatedly about UFOs and about Pizzagate.
Republicans imposed a rule against releasing photos, then Rep. Boebert (Republican) secretly took a photo and released it, violating that rule.
Fired CDC workers have organized the National Public Health Coalition to advocate for public health, despite the wrecker's damage to the CDC.
*US to offer passport services to citizens in illegal West Bank settlements.* Another step by which the US moves towards legitimizing mass expulsion and annexation.
The saboteur in chief is thinking of requiring US banks to check the citizenship of all their clients. This might mean that some non-citizens who are now permitted to have bank accounts could no longer have them.
Content-Length: 8956670
Content-Type: multipart/form-data; boundary=xYzZY
...
HTTP/1.1 422 Unprocessable Content
...
{"error":"Validation failed: File must be less than 16 MB, File file size must be less than 16 MB"}
Since nobody reads to the end of my posts I’ll start this one with the actionable experiment:
Deep neural networks have a fundamental problem. The thing which makes them trainable also makes them susceptible to Manchurian Candidate type attacks, where you say the right gibberish to them and it hijacks their brain to do whatever you want. They’re so deeply susceptible to this that it’s a miracle they do anything useful at all, but they clearly do, and mostly people just pretend this problem is academic when using them in the wild, even though the attacks actually work.
There’s a loophole to this which it might be possible to make reliable: thinking. If an LLM spends time talking to itself then it might be possible for it to react to a Manchurian Candidate attack by initially being hijacked but then going ‘Wait, what am I talking about?’ and pulling itself together before giving its final answer. This is a loophole because the final answer changes chaotically with early word selection so it can’t be back propagated over.
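A toy way to see the hoped-for dynamics, with a Hopfield-style attractor standing in for the model (purely illustrative, not an actual LLM or anyone's training setup): the honest internal state is a stored attractor, an adversarial injection corrupts it, and a few rounds of "self-talk" pull it back.

```python
import numpy as np

# Toy stand-in for "pulling itself together": the model's honest internal
# state is stored as an attractor; an adversarial injection flips some of it;
# a few rounds of self-talk (attractor updates) restore it.
n = 64
rng = np.random.default_rng(42)
clean = rng.choice([-1.0, 1.0], size=n)        # the honest internal state
W = np.outer(clean, clean) / n                 # Hopfield weights storing it
np.fill_diagonal(W, 0.0)

hijacked = clean.copy()
flips = rng.choice(n, size=20, replace=False)  # injected adversarial state
hijacked[flips] *= -1

def think(h, steps=5):
    for _ in range(steps):
        h = np.sign(W @ h)                     # one round of self-talk
    return h

immediate_agreement = np.mean(hijacked == clean)  # partly hijacked (< 1.0)
recovered = think(hijacked)
final_agreement = np.mean(recovered == clean)     # fully recovered (1.0)
```

The chaotic-sensitivity point shows up here too: the recovery happens through iterated discrete updates, not through anything an attacker could smoothly back propagate over.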
This is something which should explicitly be trained for. During training you can even cheat and directly inject adversarial state without finding a specific adversarial prompt which causes that state. You then get its immediate and post-thinking answers to multiple choice questions and use reinforcement learning to improve its accuracy. Make sure to also train on things where it gets the right answer immediately so you aren’t just training to always change its answer. LLMs are sneaky.
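Shrunk to a single scalar, the reinforcement step described above might look like the following sketch. Everything here is a made-up toy: `theta` gates how often the model "thinks" before answering, half the questions carry injected adversarial state, and thinking is assumed to always recover from a hijack while immediate answers are otherwise correct.

```python
import numpy as np

# Hypothetical toy of the proposed training loop: REINFORCE on the
# final-answer reward pushes theta toward double-checking, because thinking
# rescues the hijacked half of the batch.
rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
theta, lr = 0.0, 0.5

for step in range(2000):
    hijacked = rng.random() < 0.5              # adversarial state injected?
    thinks = rng.random() < sigmoid(theta)     # policy: Bernoulli(sigmoid(theta))
    reward = 1.0 if (thinks or not hijacked) else 0.0
    # d/d theta of the log-probability of the action actually taken:
    grad = (1.0 - sigmoid(theta)) if thinks else -sigmoid(theta)
    theta += lr * reward * grad

p_think = sigmoid(theta)   # climbs toward 1: it learns to double-check
```

Note the clean examples matter: answering correctly without thinking earns the reward too (with a gradient pushing the other way), which is the "don't just train it to always change its answer" guard, even though in this costless toy always thinking happens to be optimal.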
Now on to rambling thoughts.
Some people nitpicked that in my last post I was a little too aggressive in not including normalization between layers and residuals, which is fair enough: they are important and possibly necessary details which I elided (although I did mention softmax), but they most definitely play strictly within the rules and the framework given, which was the bigger point. It’s still a circuit you can back propagate over. There’s a problem with online discourse in general, where people act like they’ve debunked an entire thesis if any nitpick can be found, even if it isn’t central to the thesis, or the nitpick is over a word fumble or simplification, or the adjustment doesn’t change the accuracy of the thesis at all.
It’s beautifully intuitive how the details of standard LLM circuits fit together: Residuals stop gradient decay. Softmax stops gradient explosion. Transformers cause diffusion. Activation functions add in nonlinearity. There’s another big benefit of residuals which I find important but most people don’t worry about: If you just did a matrix multiplication then all permutations of the outputs would be isomorphic and have valid encodings, effectively throwing away log(N!) bits from the weights, which is a nontrivial loss. Residuals give an order and make the permutations not at all isomorphic. One quirk of the vernacular is that there isn’t a common term for the reciprocal of the gradient, the size of training adjustments, which is the actual problem. When you have gradient decay you have adjustment explosion and the first layer weights become chaotic noise. When you have gradient explosion you have adjustment decay and the first layer weights are frozen and unchanging. Both are bad for different reasons.
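The gradient-decay half of that picture is easy to demo numerically (plain numpy, no particular architecture): the gradient reaching the first layer is a product of per-layer Jacobians, and adding the identity term that a residual connection contributes keeps that product from collapsing.

```python
import numpy as np

# Sketch: contractive per-layer Jacobians make the product reaching layer 0
# decay exponentially with depth; J + I (the residual's contribution to the
# Jacobian) keeps it alive.
rng = np.random.default_rng(0)
depth, width = 50, 32
jacobians = [rng.normal(size=(width, width)) * 0.2 / np.sqrt(width)
             for _ in range(depth)]          # each layer mildly contractive

def first_layer_grad_norm(use_residual):
    g = np.eye(width)
    for J in jacobians:
        g = g @ ((J + np.eye(width)) if use_residual else J)
    return np.linalg.norm(g)

plain = first_layer_grad_norm(False)     # decays toward zero with depth
residual = first_layer_grad_norm(True)   # stays orders of magnitude larger
```

In his terms: the `plain` case is the gradient-decay / adjustment-explosion regime, where the first layer's weights would be updated by chaotically huge steps.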
There are clear tradeoffs between fundamental limitations and practical trainability. Simple DNNs get mass quantities of feedback but have slightly mysterious limitations which are terrifying. Thinking has slightly fewer limitations, at the cost of doing the thinking during both running and training, where it only gets one unit of feedback per entire session instead of per word. Genetic algorithms have no limitations at all on the kinds of functions they can handle, at the cost of being utterly incapable of utilizing back propagation. Simple mutational hill climbing has essentially no benefit over genetic algorithms.
On the subject of activation functions, sometimes now people use Relu^2 which seems directly against the rules and only works by ‘divine benevolence’. There must be a lot of devil in the details in that its non-scale-freeness is leveraged and everything is normed to make the values mostly not go above 1 so there isn’t too much gradient growth. I still maintain trying Reluss is an experiment worth doing.
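The non-scale-freeness at issue is easy to state concretely (illustrative only): ReLU commutes with rescaling while ReLU² squares it, so everything hinges on the norming keeping activations below 1, where repeated squaring shrinks values instead of compounding them.

```python
import numpy as np

# ReLU is scale-free; ReLU^2 squares the scale. Below 1, that squaring is
# shrinkage rather than growth, which is (presumably) why the norming matters.
relu  = lambda x: np.maximum(x, 0.0)
relu2 = lambda x: np.maximum(x, 0.0) ** 2

x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(relu(2 * x), 2 * relu(x))    # scale-free
assert np.allclose(relu2(2 * x), 4 * relu2(x))  # scale gets squared

h = 0.5                 # a normed activation below 1
for _ in range(5):
    h = relu2(h)        # repeated squaring shrinks it instead of exploding
```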
Some things about the structure of LLMs are bugging me (this is a lot fuzzier and more speculative than the above). In the later layers the residuals make sense, but for the first few they’re forcing it to hold onto input information in its brain while it’s trying to form more abstract thoughts, so it’s going to have to arbitrarily pick some bits to sacrifice. Of course the actual inputs to an LLM have special handling, so this may not matter, at least not for the main part of everything. But that raises some other points which feel off. The input handling being special is a bit weird, but maybe reasonable. It still has the property that in practice the input completely jams the first layer, for a simple practical reason: the ‘context window’ is basically the size of its brain. You don’t have to literally overwhelm the whole first layer with it, but if you don’t you’re missing out on potentially useful content, so in practice people overwhelm its brain and figure the training will make it make reasonable tradeoffs on which tokens it starts ignoring, although I suspect in practice it somewhat arbitrarily picks token offsets to just ignore so it has some brain space to think. It also feels extremely weird that it has special weights for every token offset. While the very last word is special, and the one before that less so, that specialness drops off quickly, and it seems wrong that the weights relating the hundredth token back to the hundred-and-first are unrelated to the weights relating the hundred-and-first to the hundred-and-second. Those should be tied together so they get trained as one thing. I suspect that some of that is redundant and inefficient, and some of it is again ignoring parts of the input so it has brain space to think.
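One concrete version of "tie those together" is a relative-offset parameterization, where a single parameter serves every pair of tokens at the same distance (a hypothetical sketch; the `0.1 * d` values stand in for learned weights). T×T independent position biases collapse to 2T−1 shared ones.

```python
import numpy as np

# One shared parameter per relative offset: the bias between tokens 100 and
# 101 is literally the same weight as between 101 and 102, so they train as
# one thing.
T = 6
rel_bias = {d: 0.1 * d for d in range(-(T - 1), T)}   # one param per offset
tied = np.array([[rel_bias[q - k] for k in range(T)] for q in range(T)])

for o in range(-(T - 1), T):
    diag = np.diagonal(tied, offset=o)
    assert np.all(diag == diag[0])   # same offset, same shared weight
```

Parameter count drops from T² free biases to 2T−1, and every training example at a given distance updates the same weight.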

/blog/2014/12/30/411x480 /blog/2014/12/30/500x417 /blog/2018/05/12/2048x1365 /blog/2018/05/12/736x677 /blog/2018/05/12/3000x2049 /blog/2018/05/12/2400x1600 /blog/2014/12/30/500x638 /blog/2024/1920x1440 /blog/2015/08/3375x2561 /blog/2015/08/600x600 /blog/2017/08/page/2/1400x781 /blog/2017/08/page/2/1280x720 /blog/2017/08/page/2/464x363 /blog/2017/08/page/2/1200x1600 /blog/2010/03/07/480x640 /blog/2011/09/312x360 /blog/2011/09/640x423 /blog/2011/09/600x800 /blog/2006/09/page/4/1042x673 /blog/2018/03/550x826 /blog/2018/03/640x427 /blog/2018/03/754x1024 /blog/2018/03/815x600 /blog/tag/www/page/2/410x630 /blog/tag/www/page/2/1200x809 /blog/tag/www/page/2/600x900 /blog/2016/04/page/3/962x1779 /blog/2016/04/page/3/1100x568 /blog/2016/04/page/3/1100x568 /blog/2016/04/page/3/1200x848 /blog/tag/www/768x1087 /blog/2014/2692x2970 /blog/tag/www/768x1087 /blog/tag/www/480x360 /blog/tag/www/906x222 /blog/tag/www/480x360 /blog/tag/www/768x270 /blog/2018/01/page/2/450x337 /blog/2018/01/page/2/500x667 /blog/2017/11/teeth-4/500x624 /blog/tag/www/988x1354 /blog/tag/www/800x450 /blog/tag/www/2000x2000 /blog/tag/www/2000x2000 /blog/tag/www/625x512 /blog/tag/www/768x432 /blog/tag/www/615x297 /blog/2014/640x360 /blog/2018/01/page/2/300x450 /blog/2014/1741x2600 /blog/tag/www/712x600 /blog/2018/01/page/2/585x350 /blog/2021/11/page/3/753x1200 /blog/2021/11/page/3/753x1200 /blog/2021/11/page/3/900x563 /blog/2021/11/page/3/1280x960 /blog/2022/05/page/2/1103x300 /blog/2022/05/page/2/1103x300 /blog/2018/07/page/4/315x450 /blog/2022/05/page/2/300x364 /blog/2018/07/page/4/2203x2938 /blog/2014/02/09/400x300 /blog/tag/dazzle/page/2/562x562 /blog/tag/dazzle/page/2/668x960 /blog/tag/dazzle/page/2/950x538 /blog/2006/02/blobby-art/500x375 /blog/tag/dazzle/page/2/484x604 /blog/tag/dazzle/page/2/3300x2475 /blog/tag/katrina/1318x1078 /blog/2019/03/page/3/680x510 /blog/tag/fanboys/768x547 /blog/tag/fanboys/768x1064 /blog/tag/fanboys/575x452 /blog/tag/fanboys/512x512 /blog/tag/fanboys/360x480 
/blog/tag/fanboys/768x513 /blog/tag/fanboys/1024x759 /blog/tag/fanboys/584x328 /blog/2010/12/page/3/768x512 /blog/2010/12/page/3/500x628 /blog/2010/12/page/3/500x628 /blog/2010/12/page/3/960x1280 /blog/2010/12/page/3/960x1280 /blog/2019/07/02/500x392 /blog/2019/07/02/388x517 /blog/2019/07/02/388x517 /blog/2013/05/12/468x468 /blog/2019/04/01/900x1200 /blog/2019/04/01/1297x846 /blog/2019/04/01/768x432 /blog/2019/04/01/768x428 /blog/2023/12/shrimpfluencer/640x1136 /blog/2019/04/01/1200x628 /blog/2016/11/page/3/768x1024 /blog/2005/07/page/4/520x503 /blog/2005/07/page/4/520x503 /blog/2005/07/page/4/409x309 /blog/2023/12/psychedelic-cryptography/600x594 /blog/2021/01/page/4/1200x800 /blog/2021/01/page/4/1200x633
Of course all of them claim to be Chrome on "Windows NT 10.0".
The details were finally released last week: [...] That's 7.5 million to 30 million litres of drinking water every single day. This is the reservoir's entire remaining capacity. Google is taking absolutely all the water it can.
How about the other AI vendors, like OpenAI? [...] Notice what Altman did there -- he started with the headline claim "water is totally fake" then he gave a made-up example ending with "or whatever." What he did not give was anything like a number. A current number. [...]
Given the secrecy, assume all the hyperscalers use a huge amount of fresh water until they give us official numbers somewhere they're not allowed to lie. They're not fighting to keep the numbers secret because the numbers are good.
Monarch, Legacy of Monsters has a fantastic title sequence. Since the show takes place in two timelines, they split the titles between the past on the left and the present on the right, contrasting similar events in each timeline. And season 2 keeps up this conceit, but it was completely rebuilt!
So here are all four quadrants from seasons 1 and 2, stacked.
Also, this show is still killing it.
In my online undergraduate p5.js course, students are about to begin the module on motion and physics, including a bit of physics simulation using Matter.js. It suddenly occurred to me that I had never seen anybody put together this particular demo before, and I realized it had to be done. Messy source code here.
Planet Debian upstream is hosted by Branchable.

