This feed omits posts by rms. Just 'cause.

MIME-Version: 1.0
From: ██████ <███@dnalounge.com>
Date: Tue, 28 Jan 2025 11:17:21 -0800
X-Gm-Features: AWEUYZl████████_█████-███████_█████VNcM
Subject: Re: ████████████████████████████
To: Jamie Zawinski <jwz@dnalounge.com>
Content-Type: multipart/alternative; boundary="0000000000006c4c9e062cc90b73"

--0000000000006c4c9e062cc90b73
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

=F0=9F=91=8D

█████████ reacted via Gmail
< https://www.google.com/gmail/about/ um=3Det&utm_campaign=3Demojireactionemail#app>

--0000000000006c4c9e062cc90b73
Content-Type: text/vnd.google.email-reaction+json; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

{
  "emoji": "=F0=9F=91=8D",
  "version": 1
}
--0000000000006c4c9e062cc90b73
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div><p style=3D"font-size:50px;margin-top:0;margin-bottom:0">=F0=9F=91=8D<=
/p><p style=3D"margin-top:10px;margin-bottom:0">█████████ reacted via <=
a href=3D"https://www.google.com/gmail/about/ &amp;utm_medium=3Det&amp;utm_campaign=3Demojireactionemail#app">Gmail</a></=
p></div>

--0000000000006c4c9e062cc90b73--

Previously, previously, previously, previously, previously, previously, previously.

Posted Tue Jan 28 20:15:57 2025 Tags:
  • Afraid (2024): A not-bad entry in the Haunted Home Assistant genre. If you're like me, the first hurdle you have to get over for a movie like this is the idea that artificial general intelligence is possible at all (it is not), but I also watch movies about vampires, zombies and werewolves and that's fine. The difference is that there are not currently grifters manipulating the economy with their fucking promises about werewolf futures. Anyway, viewed as a haunted house fantasy it works pretty well.

    But an altogether better (and less mean-spirited) version of this story is Cat Pictures Please by Naomi Kritzer, which was expanded into the novels Catfishing on CatNet and Chaos on CatNet.

  • Levels (2024): But what if it's all a simulation, man? It's ok. Rodney from Stargate is in it.

  • Never Let Go (2024): Halle Berry lives in the woods with her kids after some kind of Rapture Apocalypse, but once you figure out (it's not hard to figure out) that actually she's just crazy as shit, it's just cruelty-porn.

  • My Old Ass (2024): Girl gets high and hallucinates her older self who is Aubrey Plaza. It is charming and funny.

  • Doc of Chucky (2024): Look, I hesitate to recommend that you watch a FIVE HOUR documentary covering every Chucky movie, but I did and I do not regret it. Previously.

  • Dark Matter (2024): Not to be confused with the Canadian space-amnesia trash of the same name, this is a Sliders kind of deal where a guy is lost in the multiverse and trying to get home. It's good! It sticks the landing, and does some pretty wild things with the premise.

  • Heretic (2024): Two teenage Mormon missionary girls debate theology with psychopath Hugh Grant. It's pretty talky but also scary.

  • Earth Abides (2024): I wrote the following having watched the first 4 episodes: I feel like this show looked at Walking Dead and made the (completely accurate) observation that none of the problems in that show were zombies, all of the problems were that every one of these people is just a massive fucking asshole. So this is what I guess you have to call "Cozy Apocalypse". Everything is so low stakes. Anyway, it's fine.

    Then in episode 5, their Negan shows up, and it turns into the same old crap.

  • Times Square (1980): A couple of 13-year-old girls break out of a mental hospital, go completely feral, and start a cult. Tim Curry is a voyeur radio DJ. How had I never heard of this movie? It's extremely punk rock and has a great soundtrack.

  • Landman (2024): If you were interested in the answers to two questions: "How do oil billionaires feel about climate change, alternative energy and OSHA regulations?" and "Given a massive budget and 10 hours to do so, how would an incel write female characters?", then look no further. Take a little brain-vacation to the worst place in the world with the worst people in the world, written cartoonishly. Enjoy.

  • Alice in Wonderland (1933): This is a trip. Pretty impressive live-action effects for 1933! The costumes are absolute nightmare fuel.

  • Mononoke The Movie, Phantom in the Rain (2024): The plot snaps back and forth between "watching paint dry" (slaves arguing about manners and propriety) and "utterly incomprehensible" (gay elf fights the wallpaper?), but the animation style is like nothing I've ever seen before. It looks like watercolors on top of wheat paste. It's no Spiderverse, but it is incredibly visually dense, and a fascinating look. Apparently it's part 1. Apparently they think there's more to tell here.

  • Star Trek Section 31 (2025): I've got good news for you! Enterprise S02 is no longer the worst Star Trek!

    "Hey, we're making a Section 31 show!"

    "Oh, 'Special Circumstances', I love it! This will be an exploration of the dark underbelly of Utopia, what it does to preserve itself? Since Trek has always largely been 'powerful people sitting around a conference table arguing about ethics', this will be a rebuttal to that, or at least a counterpoint? This will be Trek's Andor?"

    "No, I thought we'd do a comedy heist show for children instead. Like Skeleton Crew but worse in every way. With fart jokes."

    "Wow. Wow wow wow wow."

  • The OA (2016): I loved this the first time, but on a rewatch, I'm upgrading that to "obsessed with". It's so completely bonkers. Apparently they had a five season plan to keep escalating this madness, but to the shock of nobody, Netflix killed them after two. I'll say: "People who liked Starfish (2018) will also like." I am that people.

    I wanted someone to tell me, "Oh yeah, Marling and Batmanglij made a few other completely batshit scifi-magical-surreal shows too" but that appears to not be the case. The East and Sound of My Voice are both culty and show some of their obsessions, but don't have the high weirdness of The OA. (And A Murder at the End of the World was Muskian trash.)

  • NOS4A2 (2019): I wasn't crazy about this the first time around -- it starts pretty slowly, and I guess I wasn't feeling it -- but on a rewatch, I love this. Well-drawn characters, and nicely creepy. It keeps feeling like it's about to make Stephen King-like bad decisions but does not.

Complaining about YouTube is a waste ("it is a video archive in the same sense that a supermarket is a Food Museum") but I can't help myself sometimes:

So, amongst my many scripts is one that tells me when links have gone bad on this blog. By far, the majority of those link suicides are the ones linking to YouTube trailers in these review posts. I always link the movie to its official trailer, typically to the one posted by the studio's official account. And very often, after around two years, they delete the video. Not just unlisting, but deletion.

And then I dutifully remove the link, rather than searching for a new one, because if they're not interested in you seeing their show, I guess I'm not either. It is so dumb. But welcome to oblivion, if that's how you want it.

(Some of you are already warming up to tell me your Well Actually theories about why some studio marketing choad might think that this is a good idea, and I implore you to keep your very smart theory to yourself.)

Previously.

Posted Mon Jan 27 22:05:42 2025 Tags:

Board games are said to have ‘position’, which is about the longer term implications of what’s happening on the board, and ‘tactics’, which is about immediate consequences. Let’s consider a game which is purely tactical. In this game the two sides alternate picking a bit which is added to a string, and after they’ve both moved 64 times the secure hash of the string is calculated and that’s used to pick the winner. I suggest 64 as the number of moves because it’s cryptographic in size, so the initial moves will have unclear meanings which become clearer towards the end of the game.

The first question to ask about this is what are the chances that the first player to move will have a theoretical win, assuming both sides have unlimited computational capability. It turns out if the chances are greater or less than a certain special value then the probability of one particular side having the win goes up rapidly as you get further from the end of the game. If the win probability is set to exactly that value then the winning chances remain stable as you calculate backwards. Calculating this value is left as an exercise to the reader. The interesting thing is that the value isn’t 50%, in fact it’s fairly partisan, which raises the question of whether the level of advantage for white in Chess is set about right to optimize for it being a tense game.
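
Here’s a minimal sketch (my illustration; the post leaves the value as an exercise) of that backward induction in Rust. If a position is a win for the side to move with probability p, then one ply earlier the mover wins exactly when at least one of the two successors is a loss for the player moving there, so p' = 1 - p². The stable value solves p = 1 - p², and starting anywhere else the win chances for the side to move swing rapidly toward 0 or 1, alternating ply to ply (i.e. heading one way for one particular side):

  fn main() {
      // One ply back from positions that are mover-wins with probability p,
      // the side to move wins iff at least one of its two successors is a
      // loss for the opponent: p' = 1 - p^2.
      let stable = (5.0_f64.sqrt() - 1.0) / 2.0; // solves p = 1 - p^2, ~0.618
      for p0 in [0.5, stable, 0.7] {
          let mut p = p0;
          for _ in 0..12 {
              p = 1.0 - p * p; // step one ply further from the end
          }
          println!("terminal win prob {:.4} -> {:.4} twelve plies out", p0, p);
      }
  }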

There are other variants possible. The number of possible plays could be more than 2, or somewhat variable, since you might have a chance of making the opponent skip their turn so you go again. This would allow the ‘effective’ fanout to be something non-integer, but it’s an interesting question whether there’s a way for it to be less than 2.

There’s a variant where, instead of there being a fixed number of moves in a game, after each move the player who just moved has some probability of winning (or losing). It isn’t obvious whether any win probability guarantees a 100% probability that the game is winnable by one side or the other. It seems like that should be a foundational result in computer science, but I’m unfamiliar with it.

In practice of course analyzing this sort of game is constrained by computational ability. That can be ‘emulated’ by assuming that the outcomes are truly random and that there’s an oracle which can be accessed a set number of times on one’s turn to say who wins/whether a player wins in a given position. There are a lot of variants possible based on the number of queries the sides have, whether there’s optionality in it, and whether you can think on the opponent’s time. It feels like optimal play is slightly randomized. Intuitively, if one player has more thinking time than the other then the weaker player needs to mix things up a bit so their analysis isn’t just a subset of what the opponent is seeing. But this is a wild guess. Real analysis of this sort of game would be very interesting.

Posted Sat Jan 25 23:26:36 2025 Tags:
There are people out there apparently seriously proposing to spend over thirty million dollars, and an uncountable amount of labor, to adversarially interoperate with Blue Sky.

  • Blue Sky has thus far duped the stenographers of the tech media into believing that it is an "open" protocol that will interoperate "some day".

  • This is a lie. They will never do this. It will never be in their financial interest.

  • If someone else does it for them "for free", they will just take credit for it and their lie will never be exposed. You might as well donate directly to their marketing department.

  • By doing this, these well-meaning idiots will have worked as volunteer labor providing free material support to a for-profit corporation composed entirely of fascists, cryptogrifters and TESCREAL lunatics.

  • Once the Blue Sky folks decide that interoperability is bad actually, they can just unilaterally turn it off. Remember when Google and Facebook strangled Jabber / XMPP by just deciding "Nah, we're not gonna federate any more"? Pepperidge Farms remembers.

"Oh no, but those poor Blue Sky users, when the service becomes even more terrible they might lose all their posts. We must save them from their own bad decisions!"

Oh dear. How sad. Nevermind.

It's Cory. This time the guy holding court is Cory.

Sorry, but you are just stultifyingly wrong on this one. Welcome them to the oblivion of their own making. Nothing of value will have been lost.

BTW, the Kickstarter for Pixelfed and Loops, the actually-open, ActivityPub based clones of Instagram and TikTok, just hit $35,000 in its first 13 hours. The creator of Mastodon just ceded control to a new non-profit. That's how you do it. By spending money to build free things for the public interest out in the open, not by tithing your money, labor and attention directly to VC-funded for-profit corporations.

Previously, previously, previously, previously, previously, previously.

Posted Wed Jan 22 20:50:42 2025 Tags:
BREAKING:

I will be ignoring the internet, and specifically "the news", on Monday, Jan 20, 2025. Whatever stupid shit happens, I will learn about in abbreviated digest form on Tuesday, rather than following along with whatever mortifying nonsense happens in realtime elevated-heart-rate horror. If it is important, it will come back.

This is self-care and I strongly urge you to consider it.

Posted Mon Jan 20 07:10:35 2025 Tags:
Word of advice, maybe also try to be more masculine. See if you can combine that with obsequiousness!

NetBlocks:

TikTok has deactivated its own service in the US [...] there are no indications of widespread network level restrictions imposed by internet providers at the present time.

Pictured to the right: TikTok itself as a self-portrait. (Via StickTok.)

Also, this is pretty funny -- I happened to restore my iPad from backup tonight because it was being stupid and this is what happens if you click on the already downloaded TikTok app. It seemingly re-downloads the whole app every time you click on it, and then refuses to run it.


Update: So this was a performative suicide attempt! Can you get a 5150 hold for a corporation? Corporations being people and all.


Previously, previously, previously, previously, previously, previously.

Posted Sun Jan 19 05:43:23 2025 Tags:
After all these years, I am still baffled at how terrible Apple is at syncing.

Truly, it is shocking. How are they so bad at this?

I have 3 Apple devices right next to each other and the chance that all three of them will have the same set of items on the Safari Reading List is essentially nil. And there is no way to tell any of them to reload it.

The thousands and thousands of pages of people offering cargo-cult advice for how to fix this ricochet between "have you tried rebooting" and "turn off iCloud Safari sync then turn it back on" but all that does is make all of your reading list items and bookmarks disappear from that device and never come back.

It's one bookmark, Michael, how long could it take to sync? Ten years?

Previously, previously, previously, previously.

Posted Sun Jan 19 04:31:37 2025 Tags:
I knew one day I'd have to watch powerful men burn the world down -- I just didn't expect them to be such losers.

I knew that one day we might have to watch as capitalism and greed and bigotry led to a world where powerful men, deserving or not, would burn it all down. What I didn't expect, and don't think I could have foreseen, is how incredibly cringe it would all be. I have been prepared for evil, for greed, for cruelty, for injustice -- but I did not anticipate that the people in power would also be such huge losers. [...]

Climate crises keep coming, genocides continue, women keep getting murdered, art is being strangled to death by AI, bigotry is on the rise, social progress is being rolled back ... AND these men insist on being cringe? It's a rotten cherry on top. This combination of evil and embarrassment is a unique horror, one that science fiction has failed to prepare us for.

Previously, previously, previously, previously, previously, previously, previously.

Posted Sat Jan 18 20:33:23 2025 Tags:
Looking at some claims that quantum computers won't work. #quantum #energy #variables #errors #rsa #secrecy
Posted Sat Jan 18 17:45:19 2025 Tags:
We first started selling online tickets in 2005. We first started accepting credit cards at the bar in 2012. And if I could send a message through time to decades-ago past-me (and if for some reason it couldn't just be a Biff Tannen gambit) then that message would be: "CASH ONLY FOREVER, NEVER DO CREDIT CARDS OR ONLINE SALES". (And if I get a second sentence, "Don't date ████ or ████.")

Upsides to this plan:

  • Spending zero hours of my life learning about payment processors.
  • No chargebacks ever.
  • My annual budget for point-of-sale iPads would have been $0, for over a decade.
  • We could just not have WiFi instead of having 30+ access points that exist solely to keep all of those stupid tablets working. I could have spent zero hours and zero dollars on that, for over a decade.
  • We own ATMs that can convert cards into cash, and instead of us paying a transaction fee, we charge a transaction fee. It's like Itchy & Scratchy Money but more fun.
  • It would be possible to ██ █████ █ ██ █████ █ ██ ██ ██ ██ ███ ██ ███.

Fuck dem banks.

Downsides to this plan:

  • We make money from service fees on advance tickets.
  • Some people buy advance tickets then don't show up and we keep their money.
  • It is helpful to have an early indication of how many people are going to show up. (But these days nearly everybody buys their tickets day-of, so the predictive power of advance tickets is not what it once was.)
  • Possibly drunk people are more profligate with their spending when it's not real money. [citation needed]
  • Jackass promoters would insist on undermining all of this by selling online tickets through third party middlemen, because TicketBastard sold them on a pack of lies about "reach" and "engagement".

People would say, "Yeah, DNA Lounge is that weird place that is still cash only. Some kinda off-grid Luddites, I guess." I could have lived with that rep.


And for those of you winding up to tell me "WELL ACTUALLY, I would find that mildly inconvenient", I want you to understand that you have been heard and also that I do not have a time machine.

And also that if I did have a time machine you wouldn't know because this would have already happened.

Posted Sat Jan 18 01:02:45 2025 Tags:
Today I added an infinite-nonsense honeypot to my web site just to fuck with LLM scrapers, based on a "spicy autocomplete" program I wrote about 30 years ago. Well-behaved web crawlers will ignore it, but those "AI" people.... well, you know how they are.

I'm intentionally not linking to the honeypot from here, for reasons, but I'll bet you can find it pretty easily (and without guessing URLs.)

It's kinda funny.


Previously, previously, previously, previously, previously, previously, previously, previously.

Posted Thu Jan 16 04:56:19 2025 Tags:

You might notice Katy is having a bit of an interdimensional anomaly here. I implemented this as an experiment because it’s different from any color cycling effect I’ve ever seen before. Normally in color cycling there’s one position which is true, then the hues rotate until all of them are swapped, then they keep rotating until they’re back to true. In this one at any given moment there are two opposite hues which are true and the ones at 90 degrees from those are swapped, and the color cycling effect is rotating which angle is true. It’s also doing a much better job of keeping luminance constant due to using okhsl.
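
One way to read that effect (a guess on my part, not taken from the linked code): it’s a reflection of hue about a rotating axis. Hues on the axis and directly opposite it stay true, hues 90 degrees away trade places, and the animation rotates the axis:

  // Reflect hue h (degrees) about the "true axis" a. Hues at a and a + 180
  // are fixed points; hues 90 degrees off the axis swap with each other.
  fn reflect_hue(h: f32, a: f32) -> f32 {
      (2.0 * a - h).rem_euclid(360.0)
  }

  fn main() {
      let a = 30.0; // the current true axis; animate this to get the cycling
      for h in [30.0, 210.0, 120.0, 300.0] {
          println!("{:5.1} -> {:5.1}", h, reflect_hue(h, a));
      }
      // 30 and 210 map to themselves; 120 and 300 swap.
  }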

Code is here. It’s leaning on libraries for most of the work, but I did write some code to dither just the low order bits of the RGB values. That’s a technique which should be used more often. This effect would also work on animated video. You could even adjust the angle as a directorial trick, to draw the viewer’s eye towards particular things by making their color true.
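
For the curious, a minimal sketch of low-order-bit dithering (my reconstruction, not the linked code): when collapsing a 16-bit channel to 8 bits, round up with probability proportional to the discarded low byte instead of truncating, so the output averages out to the 16-bit value:

  // Reduce a 16-bit channel to 8 bits, dithering the discarded low byte.
  fn dither_channel(v16: u16, noise: f64) -> u8 {
      let hi = (v16 >> 8) as u8;              // the 8 bits we keep
      let frac = (v16 & 0xff) as f64 / 256.0; // the discarded fraction
      if noise < frac && hi < 255 { hi + 1 } else { hi }
  }

  fn main() {
      // Tiny LCG so the sketch has no dependencies; noise is uniform in [0, 1).
      let mut state: u64 = 0x243f_6a88_85a3_08d3;
      let mut rand = move || {
          state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
          (state >> 11) as f64 / (1u64 << 53) as f64
      };
      // A value a quarter step above an 8-bit level should round up ~25% of the time.
      let v16 = (200u16 << 8) | 64;
      let ups = (0..10_000).filter(|_| dither_channel(v16, rand()) == 201).count();
      println!("rounded up {} of 10000 times (~2500 expected)", ups);
  }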

(Now that I think about it, low order bit dithering could be improved by using error in the okhsl gamut. It could also be improved by other diffusion techniques, which in turn can be further improved by dynamically choosing which neighboring pixel most wants to have error in the opposite direction already. I’m going to exercise some self-control and not implement any of this, but you most definitely should pick it up where I left off. All video manipulation should be done in 16 bit color the entire time and only dithered down to 8 bit on final display.)

As a bonus, I also simplified the color swatches I gave previously into two separate ones, for light and dark backgrounds. Files are here and here.

All of the above is done within the limitations of the sRGB color space. The sRGB standard kind of sucks. It’s based on the very first color television ever made, in 1954, and the standardization which came later made it consistent but not broader. Now that OLED is getting used everywhere my expectation is that things are going to start supporting Rec2100 under the hood, and once that becomes ubiquitous new content will be produced in formats which support that extra color depth. It’s going to take a few years.

Posted Fri Dec 27 15:32:45 2024 Tags:

This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.

Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.

Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, surprisingly, reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, only the maintainer can remove it. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.

So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).

So for the foreseeable future libinput will follow this pattern:

  • Reporter files an issue
  • Maintainer looks at it, posts a comment requesting some information, closes the bug
  • Reporter attaches information, re-opens bug
  • Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label

Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing, the result of this approach is that an open bug is a bug that is actionable for a maintainer.

This process should work (in libinput at least); all it requires is for reporters to not get grumpy about issues being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.

[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present

Posted Wed Dec 18 03:21:00 2024 Tags:

Different sports have different attitudes to rules changes. Most competitive sports get constant rules tweaks (except for baseball, which seems dead set on achieving cultural irrelevance). Track and field tries to keep consistent rules over time so performances in different years are comparable. Poker has a history of lots of experimentation but has mostly settled on a few main variants over the last few decades. Go never changes the rules at all (except for tweaking the scoring system out of necessity but still not admitting it. That’s a story for another day). Chess hasn’t changed the rules for a long time, and it’s a problem.

I’m writing this right after a new world chess champion has been crowned. While the result was better than previous championships in that it succeeded in crowning an unambiguous world champion, it failed at two bigger goals: Being exciting and selecting a winner who’s widely viewed as demonstrating that they’re the strongest among all the competitors.

The source of the lack of excitement is no secret: Most of the games were draws. Out of the 14 games, 5 were decisive. This also creates a problem for the accuracy of the result. In a less drawish game a match with 6 games of which only one was drawn would have equal statistical significance (assuming it wasn’t overly partisan). In fact it’s even worse than it appears on its face. The candidates tournament, which selected the challenger, was a 7-person double round robin where the winner scored 9/14 and three other competitors scored 8.5/14. Picking a winner based on such a close result has hardly any significance at all, and that format means that unless one of the competitors was ludicrously better than the others such a close result was expected.

If the format were instead that the top four finishers from the candidates tournament played single elimination matches against each other, then the eventual result would be viewed with far more authority. Partially the problem here is that this is just a badly designed tournament, but some of the reasoning behind this format comes down to the drawishness: such matches would be long and arduous and not much fun, as they were in the past.

Some previous FIDE title tournaments were far worse, following a very misguided idea that making the results random would lead to more excitement. That makes sense in sports like Soccer, where there are few teams and it’s important that all of them have a shot, but the ethos of Chess is that the better player should consistently win, and adding randomness can easily lead to never seeing the same player win twice.

This leads to the question of how the rules of chess could be modified to not have so many draws. Most proposed variations suffer from being far too alien to chess players to be taken seriously, but there are two approaches which are, or should be, taken seriously which I’d like to highlight: Fischer random and the variants explored by Kramnik. In Fischer random a starting position is selected with the pieces in the back rank scrambled randomly. In the latter, Kramnik, a former world champion, suggested game variants, and the Deepmind team trained an AI on them and measured the draw rate and partisanship of all of them based on how that engine did in self-play games. (Partisanship is the degree to which the game favors one player over the other.) I love this methodology.

Of the variants tried I’d like to highlight two of the best ones. One is ‘torpedo’, in which pawns can move two squares at once from any position, not just the starting one. The other is no-castle, which is exactly what it sounds like. No-castle has the benefits that it gets rid of the most complex and confusing chess rules and that it’s just a change to the opening position; in fact it’s a position reachable from the standard Chess opening position. (For some reason people don’t do no-castle Fischer Random tournaments, which seems ridiculous. Might as well combine the best ideas.)

Both no-castle and torpedo have about the same level of partisanship as regular chess, which may be a good thing for reasons I really ought to do a separate blog post about. They also both do a good job of making the game less drawish. The reason for this is, in my opinion, basically the same, or at least two sides of the same coin: Torpedo makes the pawns stronger, so they’re more likely to promote and decide the game. No-castle nerfs the king, so it’s more likely to get captured. Of these two approaches torpedo feels far more alien to regular chess than no-castle does. My proposal to make Chess even more non-drawish is to nerf the king even more: Make it so the king can’t move diagonally. Notably this would cause even king versus king endgames to be decisive. It would also result in a lot more exciting attacks because the king would be so poorly defended. This needs extensive play testing to be taken seriously, but repeating the Deepmind experiment is vastly easier now that AI has come so far, and it would be great if Chess or at least a very Chess-like game could have a world championship which was much more exciting and meaningful.

Posted Fri Dec 13 06:21:54 2024 Tags:

Before I get into the meat of this rant, I’d like to say that my main point is not that giving monthly numbers is incompetent, but I will get that out of the way now. There’s this ongoing deep mystery of the universe: “Why does our monthly revenue always drop in February?” It’s because it has fewer days in it, ya dumbass. What you should be using is average daily numbers, with months as the time intervals of measurement and longer term stats weighted by the lengths of the months. That’s still subject to some artifacts but they’re vastly smaller and hard to avoid. (Whoever decided to make days and years not line up properly should be fired.)

Even if you aren’t engaged in that bit of gross incompetence, it’s still the case that not only year over year but any fixed time interval delta is something you should never, ever use. This includes measuring historical inflation, the performance of a stock, and the price of tea in China. You may find this surprising because the vast majority of graphs you ever see present data in exactly this way, but it’s true and is a huge problem.

Let’s consider the criteria we want out of a smoothing algorithm:

  1. There shouldn’t be any weird artifacts in how the data is presented

  2. There should be minimal parameterization/options for p-hacking

  3. Only the data from a fixed window should be considered

  4. Data from the future should not be considered

So what’s wrong with straightforwardly applying even weighting across the entire time interval? The issue is that it grants a special and extreme cutoff status to two very specific points in time: Right now and the beginning of the interval. While right now is hard to do anything about (but more on that later), granting special status to an arbitrary point in the past is just wrong. For example, here’s a year over year weighted average in a scenario where there was a big spike one month, which is the most artifact-laden scenario:

What happened exactly a year after the spike? Absolutely nothing; it’s an artifact of the smoothing algorithm used. You could argue that by picking a standard interval to use the amount of possible p-hacking is mitigated, and that’s true. But the effects are still there, people can always decide to show the data at all only when the artifact goes the way they want, and it’s extremely hard to standardize enough to avoid trivial p-hacks. For example it’s viewed as acceptable to show year to date, and that may hit a different convenient cutoff date.

(The graphs in this post all use weighted geometric means and for simplicity assume that all months are the same length. As I said at the top you should take into account month lengths with real world data but it’s beside the point here.)

What you should be using is linear weighted moving average, which instead of applying an even weighting to every data point has the weighting go down linearly as things go into the past, hitting zero at the beginning of the window. Because the early stuff is weighted less the size of the window is sort of narrower than with constant weighting. To get roughly apples to apples you can set it so that the amount of weight coming from the last half year is the same in either case, which corresponds to the time interval of the linear weighting being multiplied by the square root of 2, which is roughly 17 months instead of 12. Here’s what it looks like with linear weighted averages:

As you can see the LWMA doesn’t have a sudden crash at the end and much more closely models how you would experience the spike as you lived through it. There’s still some p-hacking which can be done: for example you can average over a month or ten years instead of one year if that tells a story you prefer, but the effects of changing the time interval used are vastly less dramatic. Changing it by a single month will rarely have any noticeable effect at all, while doing the same with a strict cutoff will matter fairly often.
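
Here’s a minimal LWMA sketch (mine; it uses an arithmetic mean where the graphs in this post use weighted geometric means, and it ignores month lengths as discussed above), fed a flat series with a one-month spike to show that nothing special happens a window-length later:

  // Linearly weighted moving average: the newest point in the window gets
  // weight `window`, falling off linearly to weight 1 at the window's edge.
  fn lwma(data: &[f64], window: usize) -> Vec<f64> {
      (0..data.len())
          .map(|i| {
              let (mut num, mut den) = (0.0, 0.0);
              for j in i.saturating_sub(window - 1)..=i {
                  let w = (window - (i - j)) as f64;
                  num += w * data[j];
                  den += w;
              }
              num / den
          })
          .collect()
  }

  fn main() {
      let mut data = vec![100.0; 36];
      data[12] = 400.0; // one-month spike
      for (i, v) in lwma(&data, 17).iter().enumerate().step_by(6) {
          println!("month {:2}: {:.1}", i, v); // the spike decays smoothly, no cliff
      }
  }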

Stock values in particular are usually shown as deviations from a single point at the beginning of a time period, which is all kinds of wrong. Much more appropriate would be to display it in the same way as inflation: annual rate of return smoothed out using LWMA over a set window. That view is much less exciting but much more informative.

In defense of the future

Now I’m going to go out on a limb and advocate for something more speculative. What I said above should be the default, what I’m about to say should be done at least sometimes, but for now it’s done hardly ever.

Of the four criteria at the top, the dodgiest one by far is that you shouldn’t use data from the future. When you’re talking about the current time you don’t have much choice in the matter because you don’t know what will happen in the future, but when looking at past data it makes sense to retroactively take what happened later into account when evaluating what happened at a particular time. This is especially true of things which have measurement error, both from noise in measurement and noise in the arrival time of effects. Using both before and after data simply gives better information. What it sacrifices is the criterion that you don’t retroactively change how you view past data after you’ve already presented it. While there’s some benefit to revisiting the drama as you experienced it at the time, that shouldn’t always be more important than accuracy.

Here’s a comparison of how a rolling linearly weighted average which uses data from the future would show things in real time versus in retrospect:

(This assumes a window of 17 months centered around the current time and data from the future elided, so the current value is across a window of 9 months.)
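
A sketch of that centered variant (same toy data as before, assumptions mine): the weight peaks at the current point and falls off linearly in both directions, and near the present the future half of the window simply doesn’t exist yet:

  // Centered LWMA: weight peaks at index i and falls off linearly to zero
  // just past `half` points on either side; missing future data is skipped.
  fn centered_lwma(data: &[f64], half: usize, i: usize) -> f64 {
      let (mut num, mut den) = (0.0, 0.0);
      for j in i.saturating_sub(half)..=(i + half).min(data.len() - 1) {
          let w = (half + 1 - i.abs_diff(j)) as f64;
          num += w * data[j];
          den += w;
      }
      num / den
  }

  fn main() {
      let mut data = vec![100.0; 36];
      data[12] = 400.0;
      // In retrospect the spike is smoothed on both sides; in real time
      // (only data up to the spike exists yet) it reads quite differently.
      println!("retrospective: {:.1}", centered_lwma(&data, 8, 12));
      println!("real time:     {:.1}", centered_lwma(&data[..=12], 8, 12));
  }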

The after-the-fact smoothing is much more reasonable. It’s utilizing 20/20 hindsight, which is both its strength and its weakness. The point is, sometimes that’s what you want.

Posted Mon Dec 2 17:25:59 2024 Tags:

The book ‘Manchild in the Promised Land’ is a classic of American literature which traumatized me when I was assigned to read it in middle school. It is a literarily important work, using a style of hyperrealism which was innovative at the time and eventually led to reality television and YouTube channels which have full staffs but go out of their way to make it appear that the whole thing was produced by a single person. In parallel it was also part of the rise of the blaxploitation genre. The author and most of the book’s ardent followers claim that it proves that black people in the US are the way they are due to oppression by white people. It’s also been claimed to be oppression porn, and to glorify drug use and criminality. I’ll get to whether those things are true, but first there are some things I need to explain.

The book is an autobiography about the author’s growing up in Harlem, how he was involved in all manner of criminality when he was younger, and how he eventually managed to get out of it, go to law school, and have a respectable career. What struck me when I was younger was that it was the first depiction of the interactions between men and women I’d ever seen which wasn’t propaganda bullshit. Back in the 80s the depictions of dating in sitcoms and movies were stagey and dumb, but worse than that they’d alternate between the Christian moralizing and the juvenile fantasizing of the writers. Even back then I could see transparently through them. This book was different. It depicted people in actually uncomfortable situations, doing things you weren’t supposed to talk about, and having normal human reactions. The thing which caused damage to my impressionable young mind was that most of these interactions involved pimps and hos and presented a very dark side of humanity.

There’s something I now need to admit. I haven’t been able, and most likely won’t be able, to force myself to finish re-reading this book. I know that’s a bit hypocritical when writing a review, but there’s something I realized early on re-reading it which recontextualized everything in it and made me unable to cope with reading it any more. And that is that it’s all made up.

It was the homophobia which gave it away. All the gay people are presented as aggressive creepazoids, with not much motivation other than to fulfill the role of creepazoid in this literary universe of general oppression. Such people do exist in real life, but they tend to have the good sense to not openly go after people they don’t know and have no reason to think will be receptive, especially back when homosexuality was downright illegal and beating someone up for being gay wouldn’t land you in any trouble at all (the story is set in the 1940s). The logical conclusion is that the author didn’t know any real out gay people and was making them up as characters in a story using a common trope of the time. Looking into the author more, everything falls apart. He comes across as a dweeb in interviews, not someone anyone would take seriously as a pimp. None of the characters other than him seem to ever have been mentioned anywhere. He conveniently claims to have been a lawyer but then stopped practicing because he could make more money giving talks, but there doesn’t seem to be evidence that he ever practiced or passed the bar, attended law school, or got into law school in the first place. (I’m guessing he did some but not all of those.) Most ridiculously, the timing doesn’t work: Either he was hustling soldiers on leave from WWII when he was eight years old, or that was somebody else’s story. In an interview with NPR they said it was a novel but written as an autobiography, which is surprising given that nearly everything referencing it claims unambiguously that it’s simply an autobiography. My guess is that they did some actual journalism and politely let him fall back on ‘it’s a novel’ when they called him out on his lies.

One may ask whether this matters. Does a work of art’s meaning or import depend on the artist who created it? In general I lean towards saying no, that the truth of even a claimed personal experience is less important than whether that experience is prototypical of that of many others. But in this case the context and meaning are so completely changed based on whether it’s real that I have to go the other way. If it’s a real story, it’s about someone who was once a pimp, came to the realization that hos are real people too, changed his life around, and is bringing a laudable message of inclusion. If it’s made up, then it’s a loser fantasizing about having been a pimp. That doesn’t mean that this isn’t an important and influential book, but it does mean that if you’re going to teach it you should include the context that it’s a seminal work of incel literature.

All that said, being a work of fiction doesn’t mean that there’s nothing about reality which can be gleaned from a work. I truly believe that the stories in this book were ones told to or observed by the author involving older, cooler boys, and were either true or at least demonstrations that such stories could win you social status. A lot of it jibes with things which I personally witnessed in the 80s and 90s growing up within walking distance of where the book is set.

So now let’s get to critiques of the book and whether they’re true or not. The first question at hand is: Does it show that black Americans are kept where they are via oppression from white Americans? Well… not really. It’s complicated. A lot of the white people in it are very personally charitable to the author, helping him get through school and better himself. That’s the opposite of oppression. Where it does show a lot of oppression is from the police, both in the form of under- and over-policing. Police brutality is commonplace, but the cops won’t show up when you call for them and actually need them. This is definitely a real problem, thankfully less so now than the better part of a century ago, but there’s still a default assumption in the US that when the cops show up they’ll make the situation worse. Maybe over the course of my lifetime that’s been downgraded from an assumption to a fear, but it’s still a very real hangover, and the war on drugs isn’t done yet. What the book most definitely shows is the black community oppressing itself. People in the ghetto are mostly interacting with other people in the ghetto, so it’s to be expected that crimes which happen there also have victims from there. The depictions of drug dealers intentionally getting potential customers addicted and of men forcing women into prostitution are particularly disturbing, but are things which actually happen.

That leads to the other big question, which is whether this book glorifies criminality. Yes it glorifies criminality! Half of it is the author fantasizing about being a badass pimp! He claims cocaine isn’t addictive! Gang rape is portrayed as a rite of passage! (The one thing which the author seems to really not like is heroin. This was before crack.) What’s puzzling is how this book is held up as a paragon showing the greatness of black culture. Saying ‘There’s a lot of glorification of criminality and misogyny in black culture and that’s a bad thing’ is something one isn’t allowed to say in polite company, or can only be said by black academics who obfuscate it with layers of jargon. While I understand getting defensive about that, it might behoove people who want to avoid the issue to not outright promote works which are emblematic of those trends (by, say, making them assigned reading in school). It doesn’t help to fall back on ‘this has a deep meaning which only black people can understand’ when the obvious problems are pointed out, as the author had a penchant for doing in interviews.

This brings me to the most central and currently culturally relevant aspect of what’s real in this book. The (very different, not at all bullshit) book ‘We Have Never Been Woke’ by Musa al-Gharbi makes a compelling case for the claim that the woke worldview is a largely self-serving one of intellectual elites who claim to speak for underprivileged people but don’t. That book is from the point of view of someone who is himself cringily woke and can point out the hypocrisy and disconnect from that end. The other side of it is that when the underprivileged people woke claims to speak for say that it doesn’t speak for them, you should believe them. This book is an example of that. So is all of gangsta rap. It is not productive to pretend a culture is something it is not just because you think it should be different. And there I think Manchild gets some amount of redemption: It claims to be a work which shines a light on cold, hard, unpleasant reality, and for all its faults it does, only not with the meaning the author intended.

Posted Tue Nov 26 21:04:24 2024 Tags:

A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.

Both crates were intentionally written with minimal dependencies: they currently only depend on thiserror, and arguably even that dependency can be removed.

HID Usage Tables (HUT)

As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:

  let gd_x = GenericDesktop::X;
  let usage_page = gd_x.usage_page();
  assert!(matches!(usage_page, UsagePage::GenericDesktop));
  
Or the more likely need: convert from a numeric page/id tuple to a named usage.
  let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
  println!("Usage is {}", usage.name());
  
90% of this crate is the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values, but the actual functionality is relatively simple.

hidreport - Report Descriptor parsing

The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are a HID report descriptor read from the device (or sysfs); we can do this:

  let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
  
I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
   let input_report_bytes = read_from_device();
   let report = rdesc.find_input_report(&input_report_bytes).unwrap();
   let field = report.fields().first().unwrap();
   match field {
       Field::Variable(var) => {
           let val: u32 = var.extract(&input_report_bytes).unwrap().into();
           println!("Field {:?} is of value {}", field, val);
       },
       _ => {}
   }
  
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.

hid-recorder

The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools, and now we have the third version written in Rust. Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates.

$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01,                    // Usage Page (Generic Desktop)              0
# 0x09, 0x02,                    // Usage (Mouse)                             2
# 0xa1, 0x01,                    // Collection (Application)                  4
# 0x05, 0x01,                    //   Usage Page (Generic Desktop)            6
# 0x09, 0x02,                    //   Usage (Mouse)                           8
# 0xa1, 0x02,                    //   Collection (Logical)                    10
# 0x85, 0x1a,                    //     Report ID (26)                        12
# 0x09, 0x01,                    //     Usage (Pointer)                       14
# 0xa1, 0x00,                    //     Collection (Physical)                 16
# 0x05, 0x09,                    //       Usage Page (Button)                 18
# 0x19, 0x01,                    //       UsageMinimum (1)                    20
# 0x29, 0x05,                    //       UsageMaximum (5)                    22
# 0x95, 0x05,                    //       Report Count (5)                    24
# 0x75, 0x01,                    //       Report Size (1)                     26
... omitted for brevity
# 0x75, 0x01,                    //     Report Size (1)                       213
# 0xb1, 0x02,                    //     Feature (Data,Var,Abs)                215
# 0x75, 0x03,                    //     Report Size (3)                       217
# 0xb1, 0x01,                    //     Feature (Cnst,Arr,Abs)                219
# 0xc0,                          //   End Collection                          221
# 0xc0,                          // End Collection                            222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
#    Report size: 80 bits
#  |   Bit:    8       | Usage: 0009/0001: Button / Button 1                          | Logical Range:     0..=1     |
#  |   Bit:    9       | Usage: 0009/0002: Button / Button 2                          | Logical Range:     0..=1     |
#  |   Bit:   10       | Usage: 0009/0003: Button / Button 3                          | Logical Range:     0..=1     |
#  |   Bit:   11       | Usage: 0009/0004: Button / Button 4                          | Logical Range:     0..=1     |
#  |   Bit:   12       | Usage: 0009/0005: Button / Button 5                          | Logical Range:     0..=1     |
#  |   Bits:  13..=15  | ######### Padding                                            |
#  |   Bits:  16..=31  | Usage: 0001/0030: Generic Desktop / X                        | Logical Range: -32767..=32767 |
#  |   Bits:  32..=47  | Usage: 0001/0031: Generic Desktop / Y                        | Logical Range: -32767..=32767 |
#  |   Bits:  48..=63  | Usage: 0001/0038: Generic Desktop / Wheel                    | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
#  |   Bits:  64..=79  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Input Report -------
# Report ID: 31
#    Report size: 24 bits
#  |   Bits:   8..=23  | Usage: 000c/0238: Consumer / AC Pan                          | Logical Range: -32767..=32767 | Physical Range:     0..=0     |
# ------- Feature Report -------
# Report ID: 18
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: 0001/0048: Generic Desktop / Resolution Multiplier    | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  12..=15  | ######### Padding                                            |
# ------- Feature Report -------
# Report ID: 23
#    Report size: 16 bits
#  |   Bits:   8..=9   | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bits:  10..=11  | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range:     0..=1     | Physical Range:     1..=12    |
#  |   Bit:   12       | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range:     0..=1     | Physical Range:     0..=0     |
#  |   Bits:  13..=15  | ######### Padding                                            |
##############################################################################
# Recorded events below in format:
# E: .  [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
#                Button 1:     0 | Button 2:     0 | Button 3:     0 | Button 4:     0 | Button 5:     0 | X:     5 | Y:     0 |
#                Wheel:     0 |
#                AC Pan:     0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
  
Posted Tue Nov 19 01:54:00 2024 Tags:

In this weekend’s edition of ‘Bram gets nerd sniped by something ridiculous so makes a blog post about it to make it somebody else’s problem’, Mark Rober said something about ‘A Lava Lamp made out of real lava’. Unfortunately he just poured lava on a regular lava lamp to destroy it, but this does raise the question of whether you could have a real lava lamp which uses a molten salt instead of water.

First the requirements. The lamp is made of three substances: the lamp itself, the ‘liquid’ inside, and the ‘solid’ inside. The lamp must be transparent and remain solid across the range of temperatures used. The ‘liquid’ must be solid at room temperature, become liquid at a high but not too high temperature, and be transparent in its liquid phase. It should also be opaque in its solid phase to give a cool reveal of what the thing does as it heats up, but that’s hard to avoid anyway. The ‘solid’ should have a melting point higher than the ‘liquid’ but not so high that it softens the lamp, and it should be opaque. The density of the ‘solid’ should be just barely below that of the ‘liquid’ in its melted form and just barely above in its solid form, to give it that distinctive lava lamp buoyancy effect. The ‘solid’ and ‘liquid’ should not react with each other or stick to the lamp or decompose over time.

That was a lot of requirements, but it does seem to be possible to meet them. The choice for the lamp is obvious: Borosilicate glass. That’s physically strong, transparent, can withstand big temperature changes (due to low thermal expansion) and is chemically inert. All the same reasons why it’s ideal for cookware. It doesn’t get soft until over 800C, so the melting points of the other materials should be well below that.

For the ‘liquid’ there also turns out to only be one real option: Zinc Chloride. That’s transparent and has a melting point of 290C and a density of 2.9 (it’s also opaque at room temperature). The other transparent salts aren’t dense enough.

For the ‘solid’ there once again only seems to be one option: Boron Trioxide. That has a melting point of 450C and a density of 2.46. Every other oxide has a density which is way too high, but this one overshoots it a bit. It’s much easier to get the densities closer together by mixing the Boron Trioxide with something heavy than by mixing the Zinc Chloride with something light, so some Lead(II) oxide can be mixed in. That has a density of 9.53, so not much of it is needed, and a melting point of 888C, so the combined melting point will still be completely reasonable. (Due to eutectic-type effects it might be barely higher at all.) It should also add some color, possibly multiple ones, because the colors formed depend on how it cools. Bismuth(III) oxide should also work and may be a bit more colorful.
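
Roughly how much Lead(II) oxide? Under a naive ideal-mixing assumption (mine; real glass mixtures won’t add volumes exactly), getting just under the Zinc Chloride figure takes only a few percent by volume:

  fn main() {
      let rho_b2o3 = 2.46; // g/cm^3, from above
      let rho_pbo = 9.53;  // g/cm^3, from above
      let target = 2.85;   // just below the 2.9 given for Zinc Chloride
      // Volume fraction f solving f * rho_pbo + (1 - f) * rho_b2o3 = target.
      let f = (target - rho_b2o3) / (rho_pbo - rho_b2o3);
      println!("PbO volume fraction: {:.1}%", 100.0 * f); // ~5.5%
  }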

I’m crossing my fingers a bit on these things not reacting but given that they’re glasses and salts it seems reasonable. The glasses may have a bit of a tendency to stick to each other. Hopefully not so much because one is a solid at these temperatures and the other is a liquid, but it’s probably a good idea to coat the top and bottom of the insides of the lamp with Silicon and to use an overall shape where the pieces inside never come close to the walls, in particular having an inverted cone shape at the bottom and a similar tapering at the top. The whole lamp should also be sealed because oxygen and water might react at the high temperatures reached, and there should be an argon bubble at the top because there is some expansion and contraction going on. Those same concerns apply to regular lava lamps which explains a lot about how they’re shaped.

Anyone who wants to should feel free to try this build. You don’t need any more permission from me. I’d like to see it happen and don’t have the time to spend on building it myself.

Posted Sun Nov 17 04:53:55 2024 Tags:

Let’s say you happen to need to color code something and want RGB values corresponding to all the common color names. Let’s further say that you’re an obsessive maniac and would like to avoid losing your sanity studying color theory just for the sake of this one silly task. How do you do it? Easy: leverage the work I already did for you. Here’s the chart, followed by caveats and methodology:

First the caveats: The level of saturation of these colors varies a lot mostly because sRGB sucks and your monitor can only display faded versions of some of them. (For colorblind accessibility it might be a good idea to use slightly lower saturation.) A luminance high enough to make yellow have decent saturation washes out other things so this was set to a consistent level which is a reasonable compromise. (This isn’t the fault of sRGB, it’s a limitation of human eyes.) Purple hues at angles 300 and 320 are both things which my eyes accept as the single ideal of Purple and don’t realize are two different things until I see them next to each other. The value given is midway between. Reasonable descriptions of them are ‘Royal Purple’ and ‘Common Purple’. They have an analogous relationship to the one between Blue and Cyan.

The methodology behind this first has to answer the question ‘What is a color?’ For the purposes of this exercise we’ll just pretend that hues are colors. The next question is why particular positions in the hue continuum count as colors. Hues are a twisting road. In particular places the road bends, making a gradient crossing over it look not like a straight line. The places where those bends happen we call colors. Which bends get a name is dependent on where you set the threshold and cultural factors. The exact point where the bend happens is also hard to define exactly. I located them by the highly scientific process of picking them out with my own two eyes.

There’s a standard statement of what the common color words are in English to which I’m adding Cyan and Pink. Cyan is a proper name for what’s usually called ‘Light Blue’, a name which makes no sense because both Cyan and Blue can appear at any amount of luminance. It may be that Cyan is denied a proper common name because our displays can barely show it. The biggest limitation of sRGB by far is that it can only display Cyan with poor saturation. Pink I put in both because it’s a primary color (as is Cyan) and because it’s a very common color word in English, mostly denied its proper place out of an absurd bit of misogyny and homophobia. It’s especially funny that in printing Pink is euphemized as ‘Magenta’ even though the shade which is used is a light one and the common usage of ‘Magenta’ is to refer to a darker shade.

One important thing to note is that the primary colors, which have sRGB values set to 00 and FF, are NOT on ideal color hues. Those colors correspond to the most saturated things your display can show, which is important, but the positioning of RGB was selected first and foremost for them to be 120 degrees from each other to maximize how much color display could be done with only three phosphors. They happen to be very close to the true ideals but not exactly.

To pick out exact shades I used an OKHSL color picker with saturation 100% and light and dark luminance at 65% and 43%. There are still a few artifacts of OKHSL, in particular its hues bend a tiny bit in the darker shades. To compensate I picked out color angles which are good at both high and low luminance, mostly resulting in Cyan being shifted over because in darker shades Teal is elbowing it out of the way. One thing OKHSL does NOT do is maintain the property that two colors which are opposite each other in space are true opposites, which is annoying because oppositeness is one of the most objective things you can say about colors. (That could probably be improved by having brightness correction be done by cubing a single value instead of three values separately but I personally am not going to put together such a proposal.)
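
If you want to reproduce the picking programmatically, here’s a sketch (assuming the Rust palette crate’s Okhsl support; the hue angles are just the two purples mentioned above plus their midpoint, not the full chart):

  use palette::{FromColor, Okhsl, Srgb};

  fn main() {
      // Saturation 100%, luminance 65% (light) and 43% (dark), as above.
      for &(name, hue) in &[("royal purple", 300.0_f32),
                            ("midpoint", 310.0),
                            ("common purple", 320.0)] {
          for &(bg, l) in &[("light", 0.65_f32), ("dark", 0.43)] {
              let rgb: Srgb<f32> = Srgb::from_color(Okhsl::new(hue, 1.0, l));
              let (r, g, b) = rgb.into_format::<u8>().into_components();
              println!("{:13} ({:5}): #{:02x}{:02x}{:02x}", name, bg, r, g, b);
          }
      }
  }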

Annoyingly the human perceptual system doesn’t see fit to put color angles with canonical names opposite each other, instead placing them roughly evenly but with seemingly random noise added. This of course creates problems for color wheels, which want both to show what colors are opposites and what the color names are.

Posted Sun Nov 10 07:07:22 2024 Tags: