Richard Stallman
Australian censorship thugs seized art posters

Australian censorship thugs seized *art posters depicting [the persecutor, the muskrat], Putin and Netanyahu in Nazi uniforms.*

Because Nazi symbols stand for hate, it is now perhaps prohibited to condemn acts and policies embodying hatred by comparing them to Nazis. This would be good for a laugh at the cops' expense, if it weren't seriously dangerous.

Richard Stallman
Biometric identification company

A company that does biometric identification (on behalf of LinkedIn) collects 12 kinds of biometrics and consults 6 kinds of government and commercial databases. It then farms out processing of face data to 17 other companies.

Those companies are in North America, so in practice they can ignore the EU's "data protection" regulations, and the US government can seize any and all of that data.

This example shows the concrete meaning of dis-services operating in cloudy servers. Don't trust them! I never do.

Of course, I would never ask to be identified on LinkedIn. I do not use LinkedIn at all, because it too is cloudy processing, not to mention the other unjust things it does to its users. But LinkedIn is just one example, and the broad conclusion applies to nearly any business that wants personal data from you.

Richard Stallman
Increase in hot dry days

*Study finds global increase in hot, dry days ideal for wildfires.*

Over the last 45 years, the frequency of such days has increased from around 20 days per year to around 60 days per year. About 60% of that (36 days per year) is caused by fossil fuels, but some of the rest is caused by other human activities such as deforestation.

As long as we keep driving global heating, we will increase our vulnerability to fire. This is why I reject the idea that today's fires, floods and storms are "natural disasters".

A secondary problem of this increased temperature level is damage to coffee cultivation in the main coffee-producing countries.

Richard Stallman
Putin hates Telegram

Putin hates Telegram, and is trying to arrange Russian criminal charges against its developer, Pavel Durov.

This leads me to suspect that he is planning a military-style attack against Durov or the people who operate the servers behind Telegram.

Bram Cohen
AI thoughts

Since nobody reads to the end of my posts I’ll start this one with the actionable experiment:

Deep neural networks have a fundamental problem. The thing which makes them trainable also makes them susceptible to Manchurian Candidate-type attacks, where you say the right gibberish to them and it hijacks their brain to do whatever you want. They’re so deeply susceptible to this that it’s a miracle they do anything useful at all, but they clearly do, and mostly people just pretend this problem is academic when using them in the wild, even though the attacks actually work.

There’s a loophole to this which it might be possible to make reliable: thinking. If an LLM spends time talking to itself, then it might be possible for it to react to a Manchurian Candidate attack by initially being hijacked but then going ‘Wait, what am I talking about?’ and pulling itself together before giving its final answer. This is a loophole because the final answer changes chaotically with early word selection, so it can’t be backpropagated over.

This is something which should be explicitly trained for. During training you can even cheat and directly inject adversarial state, without finding a specific adversarial prompt which causes that state. You then get its immediate and post-thinking answers to multiple-choice questions and use reinforcement learning to improve its accuracy. Make sure to also train on things where it gets the right answer immediately, so you aren’t just training it to always change its answer. LLMs are sneaky.
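
A minimal sketch of that experiment (assuming PyTorch; ToyLM and inject_adversarial_state are hypothetical stand-ins, and plain cross-entropy stands in here for the reinforcement signal):

    import torch
    import torch.nn as nn

    class ToyLM(nn.Module):
        """Tiny stand-in for an LLM: a hidden state, a 'thinking' step that
        updates that state, and a multiple-choice answer head."""
        def __init__(self, hidden=64, n_answers=4):
            super().__init__()
            self.think = nn.GRUCell(hidden, hidden)   # the model talking to itself
            self.head = nn.Linear(hidden, n_answers)  # multiple-choice answer head

        def forward(self, state, think_steps=0):
            for _ in range(think_steps):
                state = self.think(state, state)
            return self.head(state)

    def inject_adversarial_state(state, strength=3.0):
        # The "cheat" from the text: perturb the hidden state directly instead
        # of searching for an adversarial prompt that causes that state.
        return state + strength * torch.randn_like(state)

    model = ToyLM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(1000):
        state = torch.randn(32, 64)          # stand-in for clean encoded input
        answer = torch.randint(0, 4, (32,))  # correct multiple-choice answers
        if step % 2 == 0:
            # Attack only half the batches, so the model isn't just being
            # trained to always change its answer.
            state = inject_adversarial_state(state)
        immediate = model(state, think_steps=0)   # answer before thinking
        recovered = model(state, think_steps=8)   # answer after thinking
        loss = (nn.functional.cross_entropy(immediate, answer)
                + nn.functional.cross_entropy(recovered, answer))
        opt.zero_grad()
        loss.backward()
        opt.step()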

Now on to rambling thoughts.

Some people nitpicked that in my last post I was a little too aggressive in not including normalization between layers and residuals. Fair enough: they are important and possibly necessary details which I elided (although I did mention softmax), but they most definitely play strictly within the rules and the framework given, which was the bigger point. It’s still a circuit you can backpropagate over. There’s a problem with online discourse in general, where people act like they’ve debunked an entire thesis if any nitpick can be found, even if it isn’t central to the thesis, or the nitpick is over a word fumble or a simplification, or the adjustment doesn’t change the accuracy of the thesis at all.

It’s beautifully intuitive how the details of standard LLM circuits fit together: residuals stop gradient decay, softmax stops gradient explosion, transformers cause diffusion, and activation functions add nonlinearity. There’s another big benefit of residuals which I find important but most people don’t worry about: if you just did a matrix multiplication, then all permutations of the outputs would be isomorphic and have valid encodings, effectively throwing away log(N!) bits from the weights, which is a nontrivial loss. Residuals impose an order and make the permutations not at all isomorphic.

One quirk of the vernacular is that there isn’t a common term for the reciprocal of the gradient, the size of training adjustments, which is the actual problem. When you have gradient decay you have adjustment explosion, and the first-layer weights become chaotic noise. When you have gradient explosion you have adjustment decay, and the first-layer weights are frozen and unchanging. Both are bad, for different reasons.
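
A toy probe of the residual claim (a sketch, assuming PyTorch, not anyone’s real training code): measure the gradient that reaches the input of a deep relu stack, with and without residual connections. Without residuals the first-layer gradient shrivels as depth grows; with them it doesn’t decay.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def grad_at_input(depth, residual):
        layers = [nn.Linear(64, 64) for _ in range(depth)]
        x = torch.randn(1, 64, requires_grad=True)
        h = x
        for layer in layers:
            out = torch.relu(layer(h))
            h = h + out if residual else out   # residual vs plain stack
        h.sum().backward()
        return x.grad.abs().mean().item()      # gradient reaching layer one

    for depth in (2, 8, 32):
        print(depth, grad_at_input(depth, False), grad_at_input(depth, True))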

There are clear tradeoffs between fundamental limitations and practical trainability. Simple DNNs get mass quantities of feedback but have slightly mysterious limitations which are terrifying. Thinking has slightly fewer limitations, at the cost of doing the thinking during both running and training, where it only gets one unit of feedback per entire session instead of per word. Genetic algorithms have no limitations at all on the kinds of functions they can handle, at the cost of being utterly incapable of utilizing backpropagation. Simple mutational hill climbing has essentially no benefit over genetic algorithms.

On the subject of activation functions, people sometimes now use Relu^2, which seems directly against the rules and only works by ‘divine benevolence’. There must be a lot of devil in the details, in that its non-scale-freeness is leveraged and everything is normed to keep the values mostly below 1 so there isn’t too much gradient growth. I still maintain trying Reluss is an experiment worth doing.
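
A tiny illustration (assuming PyTorch) of why the norming matters: values above 1 explode under repeated squaring, so iterating relu^2 bare makes the scale run away, while interleaving a norm reins it back in.

    import torch
    import torch.nn.functional as F

    def relu2(x):
        return torch.relu(x).square()

    x = torch.randn(1, 64)
    raw, normed = x, x
    for _ in range(12):
        raw = relu2(raw)                              # bare: scale runs away
        normed = relu2(F.layer_norm(normed, (64,)))   # normed: stays tame
    print(raw.abs().max().item(), normed.abs().max().item())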

Some things about the structure of LLMs are bugging me (this is a lot fuzzier and more speculative than the above). In the later layers the residuals make sense, but for the first few they’re forcing it to hold onto input information in its brain while it’s trying to form more abstract thoughts, so it’s going to have to arbitrarily pick some bits to sacrifice. Of course the actual inputs to an LLM have special handling, so this may not matter, at least not for the main part of everything.

But that raises some other points which feel off. The input handling being special is a bit weird, but maybe reasonable. It still has the property that in practice the input completely jams the first layer, for a simple practical reason: the ‘context window’ is basically the size of its brain. You don’t have to literally overwhelm the whole first layer with it, but if you don’t you’re missing out on potentially useful content, so in practice people overwhelm its brain and figure the training will make it make reasonable tradeoffs about which tokens it starts ignoring. I suspect that in practice it somewhat arbitrarily picks token offsets to just ignore so it has some brain space to think.

It also feels extremely weird that it has special weights for every token offset. While the very last word is special, and the one before that less so, that specialness drops off quickly, and it seems wrong that the weights relating the hundredth token back to the hundred-and-first are unrelated to the weights relating the hundred-and-first to the hundred-and-second. Those should be tied together so they get trained as one thing. I suspect that some of that is redundant and inefficient, and some of it is again ignoring parts of the input so it has brain space to think.
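
One standard way to tie those offset weights together is a relative-position bias: a single learned parameter per relative offset, shared across all absolute positions (roughly what T5-style relative attention biases do). A sketch, with names of my own invention:

    import torch
    import torch.nn as nn

    class RelativeBias(nn.Module):
        def __init__(self, max_offset=512):
            super().__init__()
            # One learned weight per relative offset: "100 tokens back" is the
            # same parameter everywhere, so it gets trained as one thing.
            self.bias = nn.Parameter(torch.zeros(max_offset))

        def forward(self, seq_len):
            pos = torch.arange(seq_len)
            offsets = (pos[:, None] - pos[None, :]).clamp(0, self.bias.numel() - 1)
            return self.bias[offsets]   # (seq_len, seq_len) additive bias

    # Added to attention logits before softmax:
    scores = torch.randn(8, 8) + RelativeBias()(8)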

jwz (Jamie Zawinski)
Today in scrapers
Honestly one of the most offensive things about these AI scraper bots is how bad at their jobs they are. Look at these 404s from the last 6 hours and despair:

/blog/2014/12/30/411x480 /blog/2014/12/30/500x417 /blog/2018/05/12/2048x1365 /blog/2018/05/12/736x677 /blog/2018/05/12/3000x2049 /blog/2018/05/12/2400x1600 /blog/2014/12/30/500x638 /blog/2024/1920x1440 /blog/2015/08/3375x2561 /blog/2015/08/600x600 /blog/2017/08/page/2/1400x781 /blog/2017/08/page/2/1280x720 /blog/2017/08/page/2/464x363 /blog/2017/08/page/2/1200x1600 /blog/2010/03/07/480x640 /blog/2011/09/312x360 /blog/2011/09/640x423 /blog/2011/09/600x800 /blog/2006/09/page/4/1042x673 /blog/2018/03/550x826 /blog/2018/03/640x427 /blog/2018/03/754x1024 /blog/2018/03/815x600 /blog/tag/www/page/2/410x630 /blog/tag/www/page/2/1200x809 /blog/tag/www/page/2/600x900 /blog/2016/04/page/3/962x1779 /blog/2016/04/page/3/1100x568 /blog/2016/04/page/3/1100x568 /blog/2016/04/page/3/1200x848 /blog/tag/www/768x1087 /blog/2014/2692x2970 /blog/tag/www/768x1087 /blog/tag/www/480x360 /blog/tag/www/906x222 /blog/tag/www/480x360 /blog/tag/www/768x270 /blog/2018/01/page/2/450x337 /blog/2018/01/page/2/500x667 /blog/2017/11/teeth-4/500x624 /blog/tag/www/988x1354 /blog/tag/www/800x450 /blog/tag/www/2000x2000 /blog/tag/www/2000x2000 /blog/tag/www/625x512 /blog/tag/www/768x432 /blog/tag/www/615x297 /blog/2014/640x360 /blog/2018/01/page/2/300x450 /blog/2014/1741x2600 /blog/tag/www/712x600 /blog/2018/01/page/2/585x350 /blog/2021/11/page/3/753x1200 /blog/2021/11/page/3/753x1200 /blog/2021/11/page/3/900x563 /blog/2021/11/page/3/1280x960 /blog/2022/05/page/2/1103x300 /blog/2022/05/page/2/1103x300 /blog/2018/07/page/4/315x450 /blog/2022/05/page/2/300x364 /blog/2018/07/page/4/2203x2938 /blog/2014/02/09/400x300 /blog/tag/dazzle/page/2/562x562 /blog/tag/dazzle/page/2/668x960 /blog/tag/dazzle/page/2/950x538 /blog/2006/02/blobby-art/500x375 /blog/tag/dazzle/page/2/484x604 /blog/tag/dazzle/page/2/3300x2475 /blog/tag/katrina/1318x1078 /blog/2019/03/page/3/680x510 /blog/tag/fanboys/768x547 /blog/tag/fanboys/768x1064 /blog/tag/fanboys/575x452 /blog/tag/fanboys/512x512 /blog/tag/fanboys/360x480 /blog/tag/fanboys/768x513 /blog/tag/fanboys/1024x759 /blog/tag/fanboys/584x328 /blog/2010/12/page/3/768x512 /blog/2010/12/page/3/500x628 /blog/2010/12/page/3/500x628 /blog/2010/12/page/3/960x1280 /blog/2010/12/page/3/960x1280 /blog/2019/07/02/500x392 /blog/2019/07/02/388x517 /blog/2019/07/02/388x517 /blog/2013/05/12/468x468 /blog/2019/04/01/900x1200 /blog/2019/04/01/1297x846 /blog/2019/04/01/768x432 /blog/2019/04/01/768x428 /blog/2023/12/shrimpfluencer/640x1136 /blog/2019/04/01/1200x628 /blog/2016/11/page/3/768x1024 /blog/2005/07/page/4/520x503 /blog/2005/07/page/4/520x503 /blog/2005/07/page/4/409x309 /blog/2023/12/psychedelic-cryptography/600x594 /blog/2021/01/page/4/1200x800 /blog/2021/01/page/4/1200x633

Of course all of them claim to be Chrome on "Windows NT 10.0".

Previously, previously, previously, previously.

jwz (Jamie Zawinski)
How much water do the data centres use? It's a secret.
Do you want Immortan Joes? Because this is how you get Immortan Joes.

Roanoke gets its drinking water from Carvins Cove Reservoir. The locals tried to find out just how much water Google would be taking. But Google wanted the water and power numbers kept secret.

The details were finally released last week: [...] That's 7.5 million to 30 million litres of drinking water every single day. This is the reservoir's entire remaining capacity. Google is taking absolutely the limit of all the water they can.

How about the other AI vendors, like OpenAI? [...] Notice what Altman did there -- he started with the headline claim "water is totally fake" then he gave a made-up example ending with "or whatever." What he did not give was anything like a number. A current number. [...]

Given the secrecy, assume all the hyperscalers use a huge amount of fresh water, until they give us official numbers somewhere they're not allowed to lie. They're not fighting to keep the numbers secret because the numbers are good.

Previously, previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
Monarch title sequence
Following up on my obsession with title sequences... As I've said before, I find the decisions about what stories they choose to tell, and under such constraints, fascinating. So every time a new season of a show starts, I pick them apart to see what subtle changes they made.

Monarch: Legacy of Monsters has a fantastic title sequence. Since the show takes place in two timelines, they split the titles between the past on the left and the present on the right, contrasting similar events in each timeline. Season 2 keeps up this conceit, but the sequence was completely rebuilt!

So here are all four quadrants from seasons 1 and 2, stacked.

Also, this show is still killing it.

jwz (Jamie Zawinski)
That one XKCD thing, now interactive
This is so much fun... Craig S. Kaplan:

In my online undergraduate P5.js course, students are about to begin the module on motion and physics, including a bit of physics simulation using Matter.js. It suddenly occurred to me that I had never seen anybody put together this particular demo before, and I realized it had to be done. Messy source code here.

Previously, previously, previously, previously, previously, previously.

jwz (Jamie Zawinski)
Linux Xft Unicode fonts
Dear Lazyweb, can someone show me a straightforward example of an X11 program calling XftDrawStringUtf8 that succeeds in displaying Japanese characters? On Debian 13 with "fonts-noto" installed, "lxterminal" can do it but XScreenSaver still can't seem to display anything more complicated than Cyrillic.

E.g. "unicrud --block Katakana".

The actual XFT font I get from XftFontOpenXlfd("-*-sans serif-bold-r-*-*-*-180-*-*-*-*-*-*") is

Noto Sans-300 :familylang=en :style=Bold :stylelang=en :fullname=Noto Sans Bold :fullnamelang=en :slant=0 :weight=200 :width=100 :pixelsize=401.899 :foundry=GOOG :antialias=True :hintstyle=1 :hinting=True :verticallayout=False :autohint=False :globaladvance=True :file=/usr/share/fonts/truetype/noto/NotoSans-Bold.ttf :index=0 :outline=True :scalable=True :dpi=96.4557 :rgba=5 :scale=1 :minspace=False :fontversion=131334 :capability=otlayout\:DFLT otlayout\:cyrl otlayout\:grek otlayout\:latn :fontformat=TrueType :embolden=False :embeddedbitmap=True :decorative=False :lcdfilter=1 :namelang=en :prgname=unicrud :postscriptname=NotoSans-Bold :color=False :symbol=False :variable=False :fonthashint=True :order=0 :namedinstance=False :fontwrapper=SFNT

Previously, previously, previously.

jwz (Jamie Zawinski)
Palantir Sues Swiss Magazine For Accurately Reporting That The Swiss Government Didn't Want Palantir
By the way, I have just been informed that "Peter Thiel" is an anagram for "Hitler Pete".

The articles, produced in collaboration with the investigative collective WAV, detailed a years-long, multi-ministry charm offensive by Palantir to sell its software to Swiss federal authorities. The campaign was, by all accounts, a comprehensive failure. Swiss agencies rejected Palantir at least nine times, with concerns ranging from data sovereignty to reputational risk to the simple fact that nobody needed the product. [...]

So how does a sophisticated data intelligence company respond to well-sourced investigative journalism based on official government documents?

By suing the journalists, of course.

Previously, previously, previously, previously, previously, previously, previously.

Bram Cohen
There's Only One Idea In AI

In 1995 someone could have written a paper which went like this (using modern vernacular) and advanced the field of AI by decades:

The central problem with building neural networks is training them when they’re deeper than two layers, due to gradient explosion and gradient decay. You can get around this problem by building a neural network which has N values at each layer, which are multiplied by an NxN matrix of weights and have relu applied to them afterwards. This makes the derivative of effects on the last layer proportionate to the effects on the first layer, no matter how deep the neural network is. This represents a quirky family of functions whose theoretical limitations are mysterious but which demonstrably works well for simple problems in practice. As computers get faster it will be necessary to use sub-quadratic structures for the layers.
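
For concreteness, here is roughly the network that hypothetical paper describes, in a minimal modern sketch (assuming PyTorch): N values per layer, an NxN weight matrix per layer, relu after each, trained end to end by backpropagation.

    import torch
    import torch.nn as nn

    N, DEPTH = 64, 16
    layers = []
    for _ in range(DEPTH):
        layers += [nn.Linear(N, N), nn.ReLU()]   # NxN weights, then relu
    net = nn.Sequential(*layers, nn.Linear(N, 1))

    opt = torch.optim.SGD(net.parameters(), lr=1e-2)
    x = torch.randn(256, N)
    y = x[:, :1].abs()                 # an arbitrary toy target
    for _ in range(200):
        loss = nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(loss.item())                 # trains despite the depth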

History being the quirky thing that it is, what actually happened is that decades later the seminal paper on those sub-quadratic structures happened to stumble across making everything sublinear, and as a result people are confused as to which is actually the core insight. But the structure holds: in a deep neural network, you stick to relu, softmax, sigmoid, sin, and other sublinear functions, and you can magically train neural networks no matter how deep they are.
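
A quick check (assuming PyTorch) of the ‘rule’ for three of the listed functions: relu, sigmoid and sin all have local slope of magnitude at most 1, so chaining them can never multiply gradients up, only down. (Softmax is multivariate, so it doesn’t fit this one-liner.)

    import torch

    for f in (torch.relu, torch.sigmoid, torch.sin):
        x = torch.linspace(-5.0, 5.0, 1001, requires_grad=True)
        f(x).sum().backward()
        print(f.__name__, x.grad.abs().max().item())   # never above 1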

There are two big advantages which digital brains have over ours: first, they can be copied perfectly for free, and second, as long as they haven’t diverged too much, the results of training them can be copied from one to another. Instead of a million individuals with 20 years’ experience you get a million copies of one individual with 20 million years of experience. The amount of training data we humans need to become useful is minuscule compared to what current AI needs, but AI has the advantage of sheer scale.

jwz (Jamie Zawinski)
Let Friction Ring
Dear Lazyweb,

I have this pulley wheel, 50mm inside diameter, 4mm groove. I need a rubber traction ring to go inside it. I cannot find anyone who will sell this to me.

The ring must be flat or concave, not round like a typical gasket-seal O-ring, or the string it's pulling will just slide off the track.

Alternately, any similar-sized metal pulley wheel that comes with a friction surface pre-attached, 8mm axle hole with set screw.

I have tried coating it with Sugru, but that is too soft and wears off after not-very-long.


Update: If you're going to say "why don't you just" or "have you searched for" without a purchase link to a product of the correct size, please know that you are not helping.


Previously.
