This feed omits posts by rms. Just 'cause.

Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face:

Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.

Luckily mutter does have a configuration for this, though it is not exposed in the GNOME Settings (yet). But you, my $age $jedirank, can access this via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details and you need to know about gsettings relocatable schemas.

Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:

  <monitors version="2">
   <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1</scale>
      <monitor>
        <monitorspec>
          <connector>DP-2</connector>
          <vendor>DEL</vendor>              <--- this one
          <product>DELL S2722QC</product>   <--- this one
          <serial>59PKLD3</serial>          <--- and this one
        </monitorspec>
        <mode>
          <width>3840</width>
          <height>2160</height>
          <rate>59.997</rate>
        </mode>
      </monitor>
    </logicalmonitor>
    <logicalmonitor>
      <x>928</x>
      <y>2160</y>
      <scale>1</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>eDP-1</connector>
          <vendor>IVO</vendor>
          <product>0x057d</product>
          <serial>0x00000000</serial>
        </monitorspec>
        <mode>
          <width>1920</width>
          <height>1080</height>
          <rate>60.010</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
  </monitors>
  
Well, so we know the monitor details we want. Note there are two monitors listed here, in this case I want to map the touchscreen to the external Dell monitor. Let's move on to gsettings.
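If you'd rather not eyeball the XML by hand, here's a quick Python sketch that pulls the triplet out of monitors.xml (the function name and connector argument are mine, not anything mutter or GNOME provides):

```python
import xml.etree.ElementTree as ET

def monitor_triplet(xml_text, connector):
    """Return [vendor, product, serial] for the monitor on the given connector."""
    root = ET.fromstring(xml_text)
    for spec in root.iter("monitorspec"):
        if spec.findtext("connector") == connector:
            return [spec.findtext(tag) for tag in ("vendor", "product", "serial")]
    raise LookupError(f"no monitor on connector {connector}")
```

Feed it the contents of $HOME/.config/monitors.xml and the connector you care about (DP-2 above) and you get back exactly the triplet gsettings wants.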

gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:


    $ gsettings list-recursively
    ... lots of output ...
    $ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
    $ gsettings range org.gnome.desktop.peripherals.touchpad click-method
    enum
    'default'
    'none'
    'areas'
    'fingers'
  
Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema" and that also requires specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:
  /org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
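The lowercasing is the easy bit to get wrong, so here's a trivial, purely illustrative Python helper for building that path (not part of any GNOME API):

```python
def touchscreen_path(usb_vendor_id, usb_product_id):
    """Build the relocatable-schema path; mutter wants the IDs in lowercase."""
    return ("/org/gnome/desktop/peripherals/touchscreens/"
            f"{usb_vendor_id.lower()}:{usb_product_id.lower()}/")
```

So `touchscreen_path("04F3", "2D4A")` produces the path shown above, lowercased.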
In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:
  $ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
  org.gnome.desktop.peripherals.touchscreen output ['', '', '']
Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:
 
   $ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"
Note the quotes so your shell doesn't misinterpret things.

And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.

[1] Probably the one that most commonly takes effect since it's the vast vast majority of devices

Posted Tue Mar 12 04:33:00 2024 Tags:
When I hear someone say "a Unix timestamp is in GMT" I die a little inside. It is muddy thinking that leads to many of the problems that plague this modern world.

A time_t does not have a time zone at all. It is a point in time, a scalar value. A 'struct tm' is a point in spacetime, a vector value.

If you are trying to express a point in time to a human, you could do that by saying "1,710,111,386 seconds after the Epoch", but while precise, it's not very readable, so instead you might choose to convert that from a point in time to a point in spacetime instead, and say "2024-03-10 15:56:26 PDT". But those two are not the same thing. You converted a scalar to a vector by picking an arbitrary position in space to attach to it.
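For the pedants following along at home, here's that exact scalar-to-vector conversion in Python, using the timestamp from above (zone names are standard IANA identifiers):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

t = 1710111386  # a scalar: seconds after the Epoch, no zone attached

# The same point in time, pinned to two different points in space:
print(datetime.fromtimestamp(t, ZoneInfo("America/Los_Angeles")))  # 2024-03-10 15:56:26-07:00
print(datetime.fromtimestamp(t, timezone.utc))                     # 2024-03-10 22:56:26+00:00
```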

The Unix Epoch, the point in time when a time_t is numerically zero, is commonly defined as being at midnight on a particular date in England, but it could just as easily have been defined as having been at 4PM on a particular date in California, or as some number of nanoseconds since the Big Bang. There's nothing "GMT" or "UTC" about it, except that when converted to a human-readable string situated in England, that text has some extra zeroes in it and humans find zeroes comforting.

The number of seconds since that point in time does not change depending on my point in space. (Mostly.)

("GMT" is a time zone and "UTC" is not, but both are points in spacetime where that point happens to be the Royal Observatory. The Platinum-Iridium Reference Zero is of course stored in an evacuated vault in the basement of Pavillon de Breteuil.)

This has been a Public Service Announcement from The Paladin of the Lost Hour.


Posted Sun Mar 10 23:48:09 2024 Tags:

As a 3d printing hobbyist with opinions about the strength of different types of materials I’ve sometimes had discussions with people with Real Engineering Degrees who feel the need to mansplain to me trivia about the different types of strength and how I’m using the terms all wrong. It might sound weird that I’m being dismissive of people who have an actual degree and are using real scientific jargon, but this is a case of knowing a lot of trivia while missing the underlying point.

Within engineering there’s a range of measurement types between ‘fundamental physics’ and ‘empirical rules of thumb’, with weight on one extreme and coefficient of friction on the other. It isn’t that weight has no artifacts: Objects are affected by moving them close to other objects, and when in orbit the centrifugal effect changes it a lot, but it’s so close to the fundamental physics concept of mass that we’ve relied on it for making very precise scales since antiquity. Coefficient of friction on the other hand is a vague concept about ‘how well these two things mash into each other’, strongly affected by how long they’ve been meshed together, how much force has been pushed into them orthogonally, whether they’ve been moved already, the phase of the moon, and the purity of the soul of the person conducting the experiment. When building things which rely on coefficient of friction we run the numbers, add an order of magnitude to it, and then test the real thing to find out when it actually breaks.


Measures of hardness are much closer to coefficient of friction than mass. Take the Mohs scale, which is literally a metric of who beats who in a scratching contest. It’s more meaningful than IQ scores, but directly comparable to Chess ratings, which are at almost exactly the same point on the metric accuracy scale (but to their credit both are closer to real than the metric accuracy scale itself).

For measuring strength there’s all manner of terms used, but they’re all different benchmarks based off what a particular measurement happens to say. The Shore measurement scale literally specifies the size and shape of objects to attempt to jam into the material and then measure penetration at different pressures. The different shapes have a fair amount of correlation but often deviate based on their size and spikyness. There’s no platonic ideal being measured here, it’s just an empirical value.

When you bend a ‘strong’ object it tends to snap back to where it was when you’re done but there are a lot of things which could happen during bending. Maybe it got little microfractures from the bending which build up at some schedule if you bend it repeatedly. Maybe it underwent some amount of plastic deformation. If it did maybe it lost some of its strength, or maybe it will stay a bit bent permanently. Maybe it undergoes plastic deformation slowly, and will snap back varying amounts based on how long you keep it bent, possibly on a schedule which doesn’t look very linear. The details are so varied and hard to measure that most of the measures of strength simply ignore the amount of material failure which happened during the test. This is an expedient but dubious approach. John Henry only technically defeated the steam engine. He died with a hammer in his hand. By any reasonable standard the steam engine won.

A material can be ‘stiff’, meaning it’s hard to bend in the first place, and it can be ‘tough’, meaning it doesn’t undergo much damage when it’s bent. PLA is stiff but not tough and tends to fail catastrophically. TPU is tough but not stiff. Nylon is both stiff and tough, but not as stiff as PLA or as tough as TPU. In general PLA+ is PLA with something added which makes it less stiff but more tough so it doesn’t undergo catastrophic failure. The downside is that it then undergoes material failure much sooner. This makes it do better on strength tests while making it a worse material for making real things out of. For some niche safety related applications it’s important that things fail visibly but not catastrophically, but for the vast majority of practical applications you don’t care how gracefully things fail, you care about them not failing in the first place. For that you need stiffness, and as boring of a result as it is PLA wins the stiffness competition against the other common and even not so common 3d printing materials. If your parts are failing you should design them more robustly not try to switch to some unobtanium printing material.

With that very disappointing result out of the way, the question is, what if you want to find some material which really is stronger than PLA? Having a 3d printer which could work with solder would be awesome, but there aren’t any of those on the market right now and I don’t know what it would take to make such a thing. Short of that the best material is… PLA. Even within PLA there are different levels of quality based on how long the chains are at the molecular level and how knotted up in each other they are, but as you may have gathered from the tirade about PLA+ above, the PLA vendors are less than up front about the quality of their material and it isn’t possible to simply buy higher quality PLA right now. Within what’s available now you can use PLA a bit better. If you print in a warm chamber and only slowly cool it down once printing is done you’ll get some annealing in and have a stronger final product which gets soft at a higher temperature. Ideally you’d repeatedly reheat the entire chamber to the temperature you’d properly anneal at and cool it back down again after every layer, which would result in an amazing quality product but take forever.


Posted Sat Mar 9 21:29:27 2024 Tags:

There are two arguments about Bitcoin inscriptions: One is that it’s playing completely within the rules of Bitcoin and is fine. The other is that it’s massively increasing the costs of running full nodes and hence bad. Both are technically correct.

Rather than get moralizing and playing dumb cat and mouse games about letting them into blocks, which is what’s happening now, it would be far better for Bitcoin development to put together a soft fork to create a vbytes cost for UTXO creation. That should have been in there in the first place. The downside is that this reduces chain throughput even for people who have nothing to do with it. But they’ll get more space by having fewer inscriptions to compete with. And the vbytes cost necessary to keep UTXO size increase under control is small enough that there’s a sweet spot which doesn’t do much damage, nowhere near knocking down chain capacity to what it was pre-segwit.

If Bitcoin development were healthy the details of this would be getting hashed out in polite, highly technical, and frankly kind of boring discussions, but it isn’t. Back in 2015 such things got railroaded through so fast that even the core devs were alarmed at how much it smacked of centralized control, but now there’s the opposite problem, where even simple obvious fixes can’t get in.

Posted Fri Mar 8 20:22:56 2024 Tags:

Please enjoy jwz mixtape 244.

It has been quite a while since the last one. As I have lamented before, my ingestion of new music has slowed to a trickle. Please send links to music video blogs that have RSS feeds.

Posted Wed Mar 6 20:06:29 2024 Tags:
I regularly replace old links with archive.org links using my Waybackify tool. But it often fails because an error or login page got archived instead. Here's one of my favorites.

Has anyone written a tool that does a decent job of detecting when an archived page is actually an error page?

Please, I beg of you, note that I did not ask, "Do you have any suggestions on how one might write such a tool." I also did not ask whether you think the task is easy, hard, or impossible.


Update: The answer is "No, nobody has done a decent job of that."

Adding myself to the list of people who have not done a decent job of that, I have added an indecent, halfassed "soft 404" detector to Waybackify that detects (some) unusable snapshots of Facebook and Twitter, and when found, tries several adjacent earlier and later snapshots until it finds one that works. My test case was the 8,000+ archived Facebook links to DNA Lounge performers. It works by matching how Facebook happened to translate the phrase "Security Check Required" into a dozen human languages.
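A minimal sketch of that kind of detector (function name mine; only the English phrase here, where the real thing matches Facebook's translations into a dozen languages):

```python
# Phrases whose presence marks an archived snapshot as an unusable "soft 404".
SOFT_404_PHRASES = (
    "security check required",  # Facebook's login-wall title, English only here
)

def is_soft_404(html):
    """True if this snapshot body looks like an error/login page, not content."""
    body = html.lower()
    return any(phrase in body for phrase in SOFT_404_PHRASES)
```

A snapshot that matches would get skipped, and adjacent earlier/later snapshots tried instead.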

In an ideal world, someone who works for Internet Archive would run a similar query internally and simply delete all of those bad crawls, since they convey no information other than "this snapshot failed but our system didn't notice that".



Posted Tue Mar 5 03:34:08 2024 Tags:

At the start of the last pandemic I happened to have a huge bag of flour and a single packet of yeast and wound up living off homemade bread with a yeast culture I kept alive for a while. This is a pretty good plan for having a stockpile of food around for such an emergency, with dried rice, beans, and pasta being other good options, along with all manner of canned goods.

The good news from the experiences of the last pandemic is that although supply chains may be disrupted there will always be plenty of food. It may be necessary to drive around in tanks to crush the zombies on the road, but we’ll still have Door Dash via tank, because we aren’t savages.


Still, it’s fun to consider what one would do to survive if all one had was an apartment or possibly a suburban lawn to get food off of. For basic calories there’s a clear winner: potatoes. Potatoes are the uber crop, able to be grown trivially and producing mass amounts of calories. They aren’t terribly nutritious, but they’ll keep you alive for a long time.

If things go long enough you’ll need a source of protein. We humans can eat practically any animal, but most of them have issues which make their domestication problematic. There are some animals which can be raised trivially under just the right conditions, for example sometimes you can build an artificial berm in the ocean and simply pick up oysters from it as they grow naturally, but for the most part the standard animals raised for meat are by far the best in terms of ease of raising and yield of meat. Chickens are a ridiculous outlier in terms of yield but aren’t terribly conducive to keeping indoors. Crickets are another huge outlier in productivity and in principle can be ranched indoors but the equipment for that isn’t terribly common.

For ease of raising them indoors with occasionally letting them outside to graze there’s a clear winner: guinea pigs. The biggest problem would be people being able to bring themselves to slaughter them. Rabbits are close behind with similar slaughtering issues. Other reasonably easy animals which are bigger include goats and pigs. You’d be basically raising a petting zoo for food, which is good for having cute animals around but bad for getting attached to them.


Posted Mon Mar 4 19:40:47 2024 Tags:
San Francisco political support data shows true alignment:

San Francisco (and much of the Bay Area) has a curious political idiosyncrasy where brand-name Republican candidates and issues get so little traction among voters as to be considered irrelevant. Yet, many Republican ideals and beliefs do receive substantial support from locals.

This dynamic causes practically-Republican organizations to outwardly re-code and re-brand as something -- anything -- other than GOP. Popular self-descriptions include Moderate or middle-of-the-road Democrat. However, this analysis is motivated by ignoring these pretenses and looking exclusively at indicated issue recommendations / endorsements, and understanding how similarly vs. how differently each political organization is acting (a behaviorist / empirical approach).


Posted Sun Mar 3 21:36:07 2024 Tags:
Code of Matt (my favorite is this one):

Street lighting in Paris, France was inadvertently turned off at midnight at the start of February 29th, according to reporting by Le Parisien, a French daily newspaper. The operator, Cielis, told the reporter that the problem was linked to a programming fault related to the leap day. It took several hours for lighting to be manually restored.


Posted Sun Mar 3 19:09:21 2024 Tags:
Waymo can now charge for robotaxi rides in LA and on San Francisco freeways:

Last month, the CPUC's Consumer Protection and Enforcement Division suspended Waymo's application to expand its robotaxi service in Los Angeles and San Mateo counties for up to 120 days to provide extra time for review. [...] The five protests came from the city of South San Francisco, the county of San Mateo, the Los Angeles Department of Transportation, the San Francisco County Transportation Authority and the San Francisco Taxi Workers Alliance.

Just brazenly saying the quiet part out loud, as usual:

"We will, as we did in San Francisco, expand our service before we start charging," she said. "And I mean, we sort of show up and you get to experience this for a couple of months or several months without paying. And then we have that moment of truth, which we went through in San Francisco, which is we start charging, and then we figure out how many people [have] really integrated it into their lives. What's the price point they're willing to pay?"

Let's not forget that these companies are still immune from prosecution when one of their remotely-operated drones commits a moving violation, up to and including a killing. And that Waymo's owner Google has stated in court filings that it is good for business if their competitors' cars kill more people.


Posted Sat Mar 2 20:30:43 2024 Tags:
April Fools day is for losers, it's worse than Santacon, but Leap Day! Well. Nobody* noticed my hilarious Leap Day prank. For the last two days, March 1st was displayed on our calendar and flyers as February 30th.

I realize this joke is extremely niche, but I expected at least that one of our promoters would be freaking out and my phone would be blowing up. But it didn't happen!

CALENDAR FACT! Did you know that the extra day in February is not the last day of the month? No! The extra day is February 24th, also known as Bissextus, the "Second Sixth". Prior to the 15th century, the last day of February was still the 28th, but the 24th happened twice. And honestly, I think we should still do it that way, just out of pure Chaos Monkey spirit.

Due to various youthful indiscretions, Calendrical Calculations has ended up being one of my Dream Jeopardy Categories. Ask me about Easter.**

Anyway, when I was hacking Ye Olde Webbe Syte to do this thing my first approach was to just wrap localtime and make it lie about mon and dotm but then I discovered that strftime was having none of those shenanigans and would just convert Feb 30 to Mar 1, so hey, good for you, little POSIX, for being resilient in the face of crap input. Totally unexpected!
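Python's time module wraps the same POSIX machinery, and mktime(3) is where the normalization happens, so you can watch it eat the shenanigans (a sketch, not the actual site code):

```python
import time

# Hand mktime(3) a struct tm claiming it's February 30th at noon...
fake_feb_30 = (2024, 2, 30, 12, 0, 0, 0, 0, -1)
normalized = time.localtime(time.mktime(fake_feb_30))

# ...and it quietly normalizes the crap input to March 1st:
print(time.strftime("%Y-%m-%d", normalized))  # 2024-03-01
```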


* Ok, to be fair, one person noticed. One person.
** Please do not ask me about Easter.

Posted Sat Mar 2 19:18:20 2024 Tags:
You may have noticed that the DNA Lounge calendar pages have hyperlinks upon the artists' names, linking to their World Wide Web Home Pages. Over ten thousand of them.

Why? Why do I bother doing this? I think I'm going to stop.

Part of our event submission process is gathering those links, and automatically validating that they are not 404, and that has gotten much harder lately because most of the time the various Zuckerweb properties won't even let you check the existence of a URL without being logged in, let alone through an API.

"Just stop validating them and assume they are correct? They just copied and pasted, right?" You'd be surprised. The reason I added the validation is that it seems that half the time, they manually transcribe the URLs from a wet napkin written in crayon.

So why have the links at all? In the olden days, my thinking was, "Someone might click the link, learn more about the artist, and then decide to buy a ticket to the show. Commerce!"

I don't think anyone does that. I think I'm just driving traffic to billionaires for no reason. They sure don't reciprocate.

Everything is terrible.



Posted Fri Mar 1 03:04:06 2024 Tags:
  1. Go full Lisa Frankenstein, e.g. 80s as understood by TikTok kids.
  2. Barbiemancer.
  3. Muppets.
  4. Jodorowsky's Dune's TRON.

I really think those are your only options.

I joked earlier that I hope Jared Leto is in the new Neuromancer, because the remake of The Crow only cast a Jared Leto Joker impersonator and that leaves so much barrel-bottom un-scraped. But I have been informed that Jared Leto is currently on the hook to ruin the new TRON movie, and we can only rely upon one man to ruin so much.

I assume they're just going to make it look like Altered Carbon, which is to say, a vision of the future that has remained unchanged from the racks of a mall Hot Topic in 1992.

Which is why the Lisa Frank version is my first choice: Neuromancer should be a period piece, because Cyberpunk is of the past, just as (even modern) industrial music is now properly retconned to be a subclass of "90s synthpop".

In any future cyberpunk media, I don't want to see New Rocks and vinyl tights. I will only accept fashions such as the following:

Posted Thu Feb 29 17:17:36 2024 Tags:
Deborah Pickett:

Kazakhstan, which has not observed daylight saving for 20 years, is turning the clocks back at 0:00 on 1 March 2024, to 23:00 on leap day February 29 2024.

Definitely playing time zones on Hard Mode, Kazakhstan. Thanks for doing some QA on stacked edge cases for us all!

chx:

Right but it's not daylight savings, the change is permanent: Kazakhstan will change from using two time zones to observing only one time zone in the entire territory: UTC+5.


Posted Thu Feb 29 16:04:09 2024 Tags:

Back when I was 12 years old I got obsessed with solving the Rubik’s Cube and eventually figured out a method for it. Normally these sorts of solutions are just sort of bad, but because I hadn’t figured out commutators as a concept there aren’t any inverse sequences done and everything is ‘forward’, resulting in a very unusual and hopefully interesting approach.

The ground rules I was playing with were that I didn’t want any external help whatsoever, which for some reason included using a pen and paper to make notes. I felt I’d been given a bit of a spoiler by being told about layer by layer. Because of this the method doesn’t so much give sequences of things to do but notes as to what to do in different situations which feed into each other nicely. Most of these were found by trial and error. Anyhow, here’s the instructions:

  1. Solve the bottom cross. This is done intuitively as in most solutions.

  2. Solve the bottom corners. Again this is done intuitively.

  3. Solve the middle edges. This is done with the sequence FU2F’U2L’U’L which puts the FU edge into FL and its mirror image F’U2FU2RUR’ which puts the FU edge into FR. To use them position an edge which needs to go in the middle so that its top sticker matches the front center sticker and then use the sequence starting with F if it needs to go in the FL position or F’ if it needs to go in the FR position. If none of the edges on the top go in the middle but there are still edges in the middle positioned wrong use one of those sequences to put one of the edges on the top into the position of one of the misplaced edges in the middle and proceed from there.

  4. Get the top edges positioned and oriented. The basic move for this is to use one of the sequences in step 3 to move an edge from the top layer to the middle layer then use the other-handed sequence to fix it. Starting this with an F’ is a ‘left break’ and starting it with an F is a ‘right break’. Each case is defined by a first move to do; then you forget about what you just did and go back to the beginning of this step with a new observation of where everything is. These cases feed into each other in a way which is guaranteed to always eventually solve the top edges. The different cases are as follows:

    1. If none of the top edges are in the correct orientation do a left break.

    2. If all of the top edges are in the correct orientation rotate the top until either all four are in correct positions or exactly two are in correct positions. If all of them are positioned correctly you’re ready for the next step, otherwise

      1. If the two incorrectly positioned edges are opposite each other position the cube so that one of the correctly oriented but wrongly positioned edges is in the front and do a left break.

      2. Otherwise the two wrongly positioned edges will be next to each other. Position the cube as a whole such that one of them is in the front and the other is on the left and do a left break.

    3. If exactly two of the top edges are correctly oriented and they’re adjacent to each other then rotate the top as few units as possible so that at least one of the edges is in the correct position (in the case of at least one of the correctly oriented top edges already being in the right position this means don’t rotate the top at all). Then position the cube such that the remaining incorrectly positioned edge is in the front. If there is no such edge orient the cube so that either of the correctly oriented edges is in the front. Then do a break in the direction of the other correctly oriented edge.

    4. If exactly two of the top edges are correctly oriented and they’re opposite each other then if neither of those edges is correctly positioned do a U2 to get at least one of them correctly positioned. If even that wouldn’t correctly position either of them then rotate the top until at least one of them is correctly positioned. Then:

      1. If both of those edges are in the correct position orient the cube as a whole so that one of them is in the front and do a left break.

      2. Otherwise orient the cube as a whole so that the correctly oriented but incorrectly positioned top edge is in the front and do a break in the direction where the color of the sticker on the top edge matches the color of the front center sticker.

  5. Position the top corners. To cycle UFR → UBR → UBL do a left break and start over. To cycle UFL → UBL → UBR do a right break and start over. It’s always possible to position the corners using these sequences at most twice.

  6. Orient the top corners. The rotation sequence used for this turns UFR clockwise and UBR counterclockwise. It’s done by doing a left break and starting over. (This kicks off a cycle done in step 5 which you’ll then fix doing step 5 again.) To use this to solve the orientations in the fewest steps:

    1. If all four of the top corners are oriented wrong do the rotation sequence so only two or three of the top corners are oriented wrong.

    2. If three of the top corners are oriented wrong do the rotation sequence so that only two adjacent corners are oriented wrong.

    3. If two non-adjacent corners are oriented wrong do the rotation sequence so that two adjacent corners are oriented wrong.

    4. If only two adjacent corners are oriented wrong then if it’s possible to finish solving with a single use of the rotation move do that. Otherwise redesignate the side face they’re both adjacent to as the top and you’ll be able to finish the solve with a single use of the rotation move.


Posted Wed Feb 14 23:06:19 2024 Tags:
As was recently announced, the Linux kernel project has been accepted as a CVE Numbering Authority (CNA) for vulnerabilities found in Linux. This is part of a trend of more open source projects taking over the haphazard assignment of CVEs against their project by becoming a CNA, so that no other group can assign CVEs without their involvement. Here’s the curl project doing much the same thing for the same reasons.
Posted Tue Feb 13 00:00:00 2024 Tags:

This paper has a very impressive result: They got a neural network based program to play chess at grandmaster level with no search whatsoever. While technically speaking there’s plenty of opportunity for tactical analysis to be happening here, it’s all going on within the neural network and not in custom written search code.

The approach used is to take a corpus of chess positions, evaluate them using Stockfish (the world’s strongest custom chess engine) and then train a neural network to replicate those evaluations. This sounds like a simple idea, but it’s hard to get the details right and it requires a lot of computational power to actually do it.

The next step for this is to make it a ‘zero’ engine, meaning that it doesn’t have any training based off custom written software. First you start with the engine being random. Then you plug that in as the evaluation function of an alpha-beta pruner whose board evaluation is augmented to detect when the game has been won. That will spit out board evaluations which, while still very poor quality, are more accurate than random numbers. You then train a new neural network on those numbers. Then you switch to the other corpus of chess positions (you need to alternate between two of them) and repeat the process again. With each generation the neural network will become stronger until it hits the same level it got with the non-zero approach.
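A minimal sketch of that generational loop, with the expensive parts as trivial stand-ins (the function names and structure are mine, not the paper’s; a real version would search with the network as its leaf evaluator and actually fit a network to the labels):

```python
import random

def alpha_beta_eval(position, net, depth=2):
    # Stand-in: a real engine would run alpha-beta search using the
    # network at the leaves and score won/lost positions exactly.
    return net(position)

def train(labels):
    # Stand-in for fitting a new network to (position, value) pairs;
    # here we just memorize them.
    return lambda pos: labels.get(pos, 0.0)

def bootstrap(corpora, generations=4):
    net = lambda pos: random.uniform(-1, 1)   # generation 0: random
    for g in range(generations):
        corpus = corpora[g % 2]               # alternate the two corpora
        labels = {p: alpha_beta_eval(p, net) for p in corpus}
        net = train(labels)                   # next generation's network
    return net
```

The alternation between two position corpora is there so a generation is never trained on labels derived from the very positions it just memorized.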

Using positions from human or Stockfish games for the corpus of positions should work fine but it seems more appropriate to have the engine generate positions for itself. A reasonable approach to that would be each time an engine is generated it plays a set of games against itself and the positions from those are the alternate set of positions to be used in the generation after the next one.

Posted Mon Feb 12 04:08:11 2024 Tags:

What I’d always believed (and assume I’d been told) was that the way to improve at Chess was by learning to be systematic in analysis, acquiring an understanding of deep positional principles and a systematic, disciplined approach to manually doing alpha-beta pruning yourself when confronted with a position, and starting with long time controls and working your way down as things become more automatic. What you don’t want to do is learn a bunch of silly little tricks which are only rarely applicable, and do so at quick time controls which aren’t developing much of any discipline.

That advice is about 80% wrong. There’s some utility to basic positional formulas like improved piece weights, and of course looking ahead multiple moves is a core skill, especially at long time controls. But most of the rest of the advice is wrong, based more on a fantasy of what Chess is all about than what it actually is. In reality Chess, first and foremost, is a game of tactics. A reasonable definition of a game being ‘tactical’ is ‘if you slap together a crude manually designed board evaluation algorithm and an alpha-beta pruner it will obliterate even very strong humans’. Another way to think about it is how close the game is to a wordfind. Chess is obviously not a wordfind, but it is a tacticsfind. At each move the player finds the best tactic they can, resulting in a stronger or weaker position. How good of a tactic they find is directly correlated to how many moves they’re looking into the future.
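That “crude eval plus alpha-beta” combination fits in a dozen lines. Here’s a toy of my own devising: positions are nested lists, numeric leaves are the crude static evaluation (from the point of view of the player to move at that leaf), and the pruner is plain negamax alpha-beta.

```python
def alpha_beta(node, alpha=float("-inf"), beta=float("inf")):
    """Negamax alpha-beta over a tree of nested lists; numeric
    leaves are static evaluations for the side to move there."""
    if isinstance(node, (int, float)):
        return node                        # leaf: crude evaluation
    best = float("-inf")
    for child in node:
        best = max(best, -alpha_beta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:                  # opponent won't allow this line
            break
    return best
```

With the tree `[[3, 5], [2, 9]]` the root player can guarantee 3, and the second branch is cut off as soon as the 2 is seen: the opponent already has a refutation, so the 9 is never examined.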


Of course as one gets stronger seeing basic tactics becomes more of a given. Probably around 2300 playing strength the main determiner of Chess playing strength switches from how good you are at tactics to how good is your opening preparation. (Sorry, couldn’t resist.) But at my level the differences in board value caused by subtle positional differences are so small compared to swings caused by tactical misses that they’re a bit lost in the noise. What I’ve been getting a lot of benefit from is using this tactics trainer for a few minutes a day (mostly instead of checking social media). The key to getting improvement out of it is not to spend lots of time on each puzzle but to give yourself only a few seconds for each one, then play through the solution when you miss it, come up with an explanation of how you could have figured it out, and commit that to memory. The more of them you get through, the more you’ll remember. Yes it’s only teaching you silly tricks, but that’s most of what Chess is. Positional concepts themselves are often just acknowledgement of tactical potential which isn’t quite manifesting yet, like noticing when pieces aren’t defended even when nothing’s attacking them, or they’re attacked even when they’re currently adequately defended.

Even with only a few seconds to evaluate each position it’s possible to do quite a bit of lookahead, particularly once you get good at pulling out candidate moves. Where human lookahead deviates a lot from slavish alpha-beta pruning is that, especially in very tactical positions, you often see something which almost works which then suggests something you didn’t initially consider as a candidate move, especially if it involves distracting an enemy piece so it’s no longer defending something or in the way of something. It’s also common for something to be an obvious candidate move a few moves down and the better move order to be the less obvious one.

My own natural skill level at Chess isn’t very high, owing to a serious lack of visual memory. Okay okay, it’s a low natural skill level on the scale of mathematicians, and I can ‘visualize’ some things very well but it’s all 3d not 2d and not very useful for 2d board games. In any case, I’ve played enough Chess in my life that I should be a lot stronger than I am. But after doing tactics training for a few minutes a day for a few months my estimated strength by tactics hovers around 2200 or so, depending on how long I take for each one, how tired I am, and my current caffeine and blood sugar levels. Playing strength seems to be around 1750 or so. It’s weaker because I’ve barely been playing actual chess games, resulting in my time management being awful, and also my consistency at easier tactics isn’t what I’d like. It would be nice if the tactics trainer threw in a lot more ‘easy’ problems so there were strong rewards for getting easy problems right almost all the time instead of being so focused on getting hard problems right half the time. It would also be nice if it were more consistent at keeping the problem going through the hard to see move, always forcing you to play the hard to find move even when it’s a weaker line for the opponent. But tactics trainers are hard to write, and that’s by far the best one I know of.

Around 2300 is where tactics puzzles seem to switch over to needing some positional understanding, in that the resulting position from the right series of moves is superior for some subtle positional reason instead of conferring a clear material advantage. It’s interesting watching Chess commentary after gaining some basic skills. It becomes very obvious which commentators are strong players themselves and which are weak players using Stockfish, because the weak ones completely skip over obvious-to-strong-player variants which don’t work out in the end for very involved tactical reasons.

Next you’ll probably ask whether cheesy math team problems are better for improving at mathematics than learning actual deep concepts. To that I say that teaching integration should skip ahead to Laplace transforms and not waste all that time on techniques which get subsumed by it.


Posted Sat Feb 3 03:14:45 2024 Tags:

This is a follow-up from our Spam-label approach, but this time with MOAR EMOJIS because that's what the world is turning into.

Since March 2023 projects could apply the "Spam" label on any new issue and have a magic bot come in and purge the user account plus all issues they've filed, see the earlier post for details. This works quite well and gives every project member the ability to quickly purge spam. Alas, pesky spammers are using other approaches to trick Google into indexing their pork [1] (because at this point I think all this crap is just SEO spam anyway). Such as commenting on issues and merge requests. We can't apply labels to comments, so we found a way to work around that: emojis!

In GitLab you can add "reactions" to issue/merge request/snippet comments and in recent GitLab versions you can register for a webhook to be notified when that happens. So what we've added to the gitlab.freedesktop.org instance is support for the :do_not_litter: (🚯) emoji [2] - if you set that on a comment the author of said comment will be blocked and the comment content will be removed. After some safety checks of course, so you can't just go around blocking everyone by shotgunning emojis into gitlab. Unlike the "Spam" label this does not currently work recursively so it's best to report the user so admins can purge them properly - ideally before setting the emoji so the abuse report contains the actual spam comment instead of the redacted one. Also note that there is a 30 second grace period to quickly undo the emoji if you happen to set it accidentally.
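The decision the bot has to make for each reaction event boils down to a few checks. This is a hedged sketch of mine, not the actual freedesktop.org code: the payload fields are simplified stand-ins for GitLab's emoji webhook payload, and the real bot additionally handles the grace-period timer and the actual blocking/redaction API calls around this decision.

```python
GRACE_PERIOD = 30  # seconds the reporter gets to undo an accidental emoji

def decide(payload, is_member):
    """Return the action for one award-emoji webhook event.

    `payload` is a simplified stand-in for the webhook payload;
    `is_member` says whether a user id belongs to the project.
    """
    if payload["emoji"] != "do_not_litter":
        return "ignore"
    if not is_member(payload["reporter_id"]):
        return "ignore"            # only project members may report
    if is_member(payload["comment_author_id"]):
        return "ignore"            # never purge a fellow project member
    return "block_author_and_redact_comment"
```

The safety checks are what keep emoji-shotgunning harmless: a non-member setting the emoji does nothing, and members can't purge each other.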

Note that for purging issues, the "Spam" label is still required, the emojis only work for comments.

Happy cleanup!

[1] or pork-ish
[2] Benjamin wanted to use :poop: but there's a chance that may get used for expressing disagreement with the comment in question

Posted Mon Jan 29 07:58:00 2024 Tags:

I’ve recently started teaching juggling with a new approach which seems to be very effective, often getting someone competently juggling in only a single session. Because I want more people to learn how to juggle I’ll now explain it in the hopes that others start using this approach to good effect as well.

The problem with learning juggling (and with going to higher numbers) is that it’s too big of a leap. What you want are ‘stepping stones’: patterns which are within what the student can do or close to it but challenging enough that they’re getting useful feedback and improving from trying to do them. Ideally you’d have a low gravity chamber for people to juggle in while practicing and gradually increase the gravity level until it was Earth normal. Sadly physics doesn’t allow for that. Maybe having people roll balls up an angled board, gradually increasing the angle until it was vertical and then removing it completely, would work well, but I haven’t tried that.


The difficulty of a juggling pattern has to do with the ratio of balls to hands. Since the vast majority of people only have two hands and their feet are fairly useless for juggling, increasing the number of hands is not an option. If you go down from 3 to 2 balls it’s no longer juggling, it’s just holding balls, and since there’s no integer between 2 and 3 [citation needed] there’s no easier pattern available.

But there’s a loophole! You can do a pattern which the beginner does in collaboration with an expert. This allows for a lower ratio of balls to hands by adding more hands, allows the beginner to only use one hand, and averages out the skill of the jugglers to be between the expert and the beginner, which allows for harder patterns than the beginner could do alone. Not only does this allow for stepping stone patterns but it’s very good motivationally, because beginners can participate in a legitimate juggling pattern right off the bat.

The first pattern to do is four balls and three hands with the beginner doing one of the three hands. It’s best to start with throwing a single ball through its orbit to get a feel for what the throws are. (This applies to all the later patterns as well, so pretend I repeat the advice about starting with one ball for each one.) To make later patterns easier the student should make inside throws. They should stand face to face with the teacher and throw from their right hand to the teacher’s right hand, which then throws to the teacher’s left hand, and finally back to the student’s right hand. It’s easiest to start with the student holding two balls and having them initiate the pattern. This also gets them used to starting with two balls, which they’ll eventually have to do when juggling by themselves. After they get the hang of their right hand you can switch to their left hand, where the ball goes from the student’s left hand to the teacher’s left hand, then the teacher’s right hand, and finally back to the student’s left hand. For left handed students the other order should be used.

The next pattern up is five balls and four hands, which is an even lower ratio of balls to hands but does require the student to use both hands. The student should do the inside throw half of the pattern because inside throws are what they’ll need to do for juggling by themselves. The pattern is that balls go from the student’s right hand to the teacher’s right hand, then the student’s left hand, then the teacher’s left hand, and finally back to the student’s right hand. It’s easiest to start with the student having two balls in their right hand and one in their left and initiating the first throw. (Or starting with two in their left and one in their right if they’re left handed.)

Almost at the end is pair juggling with three balls and two hands where the student does the right hand and the teacher does the left, then the mirror image pattern where the student does the left hand and the teacher does the right. Finally the student has mastered all the elements which need to be put together and they’re ready to try full-blown juggling by themselves. The build-up patterns have covered the elements of it enough that some people with no athletic backgrounds can manage to get in good runs after less than an hour of training.


Posted Fri Jan 26 23:19:04 2024 Tags: