Posted Sun Aug 25 00:00:00 2019 Tags:

Iran has jailed another Iranian citizen who works for the UK government.

I feel for Tyson, and for Amiri; but when he demands that the British government do something to free Amiri, he mistakes the UK for the great power that it once was. Bogus Johnson tried demanding that Iran free Nazanin Zaghari-Ratcliffe, and Iran responded with "You can't make me, nyah nyah!"

Her husband also wants to meet with officials to demand they "do something" more. But that, too, would be futile.

Imprisoning people as pawns is cruel and unjust, when Iran does it and when China does it. But there is nothing the UK can do to prevent it.

The only way the UK government could help free Amiri is with quiet, slow diplomacy. But first the UK should stop supporting the conman's plans for war with Iran.

Posted Sun Aug 25 00:00:00 2019 Tags:

Use of e-cigarettes does not lead Americans to smoke. On the contrary, a lot fewer Americans smoke now than in 2002.

Posted Sun Aug 25 00:00:00 2019 Tags:

It has become so hot in the UK that a cycad can reproduce there.

Posted Sun Aug 25 00:00:00 2019 Tags:

Hunting invasive pythons is becoming a passionate hobby for some in Florida.

It behooves us to think of protecting the Everglades from being inundated, too.

Posted Sun Aug 25 00:00:00 2019 Tags:

Melbourne, Australia, is losing valuable water each year by cutting down mature trees — because the young trees that replace them soak up more water.

Posted Sun Aug 25 00:00:00 2019 Tags:

Americans who oppose abortion rights tend to be against women's rights and women's equality in general.

Their obsession with the "life" of a fetus is a rationalization.

Posted Sun Aug 25 00:00:00 2019 Tags:

Planned Parenthood's decision to refuse to parrot antiabortionists' lies was the only moral choice, and the only honorable choice.

Posted Sun Aug 25 00:00:00 2019 Tags:

The bully says he will keep refugee minors and families in prison with no time limit.

Posted Sun Aug 25 00:00:00 2019 Tags:

16 million Americans will be directed to vote in 2020 on machines with no paper ballot. This is fundamentally untrustworthy.

Posted Sun Aug 25 00:00:00 2019 Tags:

The Internet has gotten too big.

Growing up, I, like many computery people of my generation, was an idealist. I believed that better, faster communication would be an unmitigated improvement to society. "World peace through better communication," I said to an older co-worker, once, as the millennium was coming to an end. "If people could just understand each other's points of view, there would be no reason for them to fight. Government propaganda will never work if citizens of two warring countries can just talk to each other and realize that the other side is human, just like them, and teach each other what's really true."

[Wired.com has an excellent article about this sort of belief system.]

"You have a lot to learn about the world," he said.

Or maybe he said, "That's the most naive thing I've ever heard in my entire life." I can't remember exactly. Either or both would have been appropriate, as it turns out.

What actually happened

There's a pattern that I don't see talked about much, but which seems to apply in all sorts of systems, big and small, theoretical and practical, mathematical and physical.

The pattern is: the cheaper interactions become, the more intensely a system is corrupted. The faster interactions become, the faster the corruption spreads.

What is "corruption?" I don't know the right technical definition, but you know it when you see it. In a living system, it's a virus, or a cancer, or a predator. In a computer operating system, it's malware. In password authentication, it's phishing attacks. In politics, it's lobbyists and grift. In finance, it's high-frequency trading and credit default swaps. In Twitter, it's propaganda bots. In earth's orbit, it's space debris and Kessler syndrome.

On the Internet, it's botnets and DDoS attacks.

What do all these things have in common? That they didn't happen when the system was small and expensive. The system was just as vulnerable back then - usually even more so - but it wasn't worth it. The corruption didn't corrupt. The attacks didn't evolve.

What do I mean by evolve? Again, I don't know what technical definition to use. The evolutionary process of bacteria isn't the same as the evolutionary process of a spam or phishing attack, or malware, or foreign-sponsored Twitter propaganda, or space junk in low-earth orbit. Biological infections evolve, we think, by random mutation and imperfect natural selection. But malware evolves when people redesign it. And space junk can turn violent because of accidental collisions; not natural, and yet the exact opposite of careful human design.

Whatever the cause, the corruption happens in all those cases. Intelligent human attackers are only one way to create a new corruption, but they're a fast, persistent one. The more humans you connect, the more kinds of corruption they will generate.

Most humans aren't trying to create corruption. But it doesn't matter, if a rare corruption can spread quickly. A larger, faster, standardized network lets the same attack offer bigger benefits to the attacker, without increasing cost.

Diversity

One of the natural defenses against corruption is diversity. There's this problem right now where supposedly the most common strain of bananas is dying out because they are all genetically identical, so the wrong fungus at the right time can kill them all. One way to limit the damage would be to grow, say, 10 kinds of bananas; then when there's a banana plague, it'll only kill, say, 10% of your crop, which you can replace over the next few years.

That might work okay for bananas, but for human diseases, you wouldn't want to be one of the unlucky 10%. For computer viruses, maybe we can have 10 operating systems, but you still don't want to be the unlucky one, and you also don't want to be stuck with the 10th best operating system or the 10th best browser. Diversity is how nature defends against corruption, but not how human engineers do.

In fact, a major goal of modern engineering is to destroy diversity. As Deming would say, reduce variation. Find the "best" solution, then deploy it consistently everywhere, and keep improving it.

When we read about adversarial attacks on computer vision, why are they worse than Magic Eye drawings or other human optical illusions? Because they can be perfectly targeted. An optical illusion street sign would only fool a subset of humans, only some of the time, because each of our neural nets is configured differently from everyone else's. But every neural net in every car of a particular brand and model will be susceptible to exactly the same illusion. You can take a perfect copy of the AI, bring it into your lab, and design a perfect illusion that fools it. Subtracting natural diversity has turned a boring visual attack into a disproportionately effective one.

The same attacks work against a search engine or an email spam filter. If you get a copy of the algorithm, or even query it quickly enough and transparently enough in a black box test, you can design a message to defeat it. That's the SEO industry and email newsletter industry, in a nutshell. It's why companies don't want to tell you which clause of their unevenly-enforced terms of service you violated; because if you knew, you'd fine tune your behaviour to be evil, but not quite evil enough to trip over the line.

It's why human moderators still work better than computer moderators: because humans make unpredictable mistakes. It's harder to optimize an attack against rules that won't stay constant.

...but back to the Internet

I hope you didn't think I was going to tell you how to fix Twitter and Facebook and U.S. politics. The truth is, I have no idea at all. I just see that the patterns of corruption are the same. Nobody bothered to corrupt Twitter and Facebook until they got influential enough to matter, and then everybody bothered to corrupt them, and we have no defense. Start filtering out bots - which of course you must do - and people will build better bots, just like they did with email spam and auto-generated web content and CAPTCHA solvers. You're not fighting against AI, you're fighting against I, and the I is highly incentivized by lots and lots of money and power.

But, ok, wait. I don't know how to fix giant social networks. But I do know a general workaround to this whole class of problem: slow things down. Choose carefully who you interact with. Interact with fewer people. Make sure you are certain which people they are.

If that sounds like some religions' advice about sex, it's probably not a coincidence. It's also why you shouldn't allow foreigners to buy political ads in your country. And why local newspapers are better than national ones. And why "free trade" goes so bad, so often, even though it's also often good. And why doctors need to wash their hands a lot. (Hospital staff are like the Internet of Bacteria.)

Unfortunately, this general workaround translates into "smash Facebook" or "stop letting strangers interact on Twitter," which is not very effective because a) it's not gonna happen, and b) it would destroy lots of useful interactions. So like I said, I've got nothing for you there. Sorry. Big networks are hard.

But Avery, the Internet, you said

Oh right. Me and my cofounders at Tailscale.io have been thinking about a particular formulation of this problem. Let's forget about Internet Scale problems (like giant social networks) for a moment. The thing is, only very few problems are Internet Scale. That's what makes them newsworthy. I hate to be the bearer of bad news, but chances are, your problems are not Internet Scale.

Why is it so hard to launch a simple web service for, say, just your customers or employees? Why did the Equifax breach happen, when obviously no outsiders at all were supposed to have access to Equifax's data? How did the Capital One + AWS hack happen, when Capital One clearly has decades of experience with not leaking your data all over the place?

I'll claim it again... because the Internet is too big.

Equifax's data was reachable from the Internet even though it should have only been accessible to a few support employees. Capital One's data surely used to be behind layers and layers of misconfigured firewalls, unhelpful proxy servers, and maybe even pre-TCP/IP legacy mainframe protocols, but then they moved it to AWS, eliminating that diversity and those ad-hoc layers of protection. Nobody can say modernizing their systems was the wrong choice, and yet the result was the same result we always get when we lose diversity.

AWS is bananas, and AWS permission bug exploits are banana fungus.

Attackers perfect their attack once, try it everywhere, scale it like crazy.

Back in the 1990s, I worked with super dumb database apps running on LANs. They stored their passwords in plaintext, in files readable by everyone. But there was never a (digital) attack on the scale of 2019 Capital One. Why?

Because... there was no Internet. Well, there was, but we weren't on it. Employees, with a bit of tech skill, could easily attack the database, and surely some got into some white collar crime. And you occasionally heard stories of kids "hacking into" the school's grading system and giving themselves an A. I even made fake accounts on a BBS or two. But random people in foreign countries didn't hack into your database. And the kids didn't give A's to millions of other kids in all the other schools. It wasn't a thing. Each corruption was contained.

Here's what we've lost sight of, in a world where everything is Internet scale: most interactions should not be Internet scale. Most instances of most programs should be restricted to a small set of obviously trusted people. All those people, in all those foreign countries, should not be invited to read Equifax's PII database in Argentina, no matter how stupid the password was. They shouldn't even be able to connect to the database. They shouldn't be able to see that it exists.

It shouldn't, in short, be on the Internet.

On the other hand, properly authorized users, who are on the Internet, would like to be able to reach it from anywhere. Because requiring all the employees to come to an office location to do their jobs ("physical security") seems kinda obsolete.

That leaves us with a conundrum, doesn't it?

Wouldn't it be nice though? If you could have servers, like you did in the 1990s, with the same simple architectures as you used in the 1990s, and the same sloppy security policies (er, developer freedom) as you had in the 1990s, but somehow reach them from anywhere? Like... a network, but not the Internet. One that isn't reachable from the Internet, or even addressable on the Internet. One that uses the Internet as a substrate, but not as a banana.

That's what we're working on.


Literary Afterthoughts

I'm certainly not the first to bring up all this. Various sci-fi addresses the problem of system corruption due to excess connectivity. I liked A Fire Upon the Deep by Vernor Vinge, where some parts of the universe have much better connectivity than others and it doesn't go well at all. There's also the Rifters Trilogy by Peter Watts, in which the Internet of their time is nearly unusable because it's cluttered with machine-generated garbage.

Still, I'd be interested in hearing about any "real science" on the general topic of systems corruption at large scales with higher connectivity. Is there math for this? Can we predict the point at which it all falls apart? Does this define an upper limit on the potential for hyperintelligence? Will this prevent the technological singularity?

Logistical note

I'm normally an East Coast person, but I'll be visiting the San Francisco Bay area from August 26-30 to catch up with friends and talk to people about Tailscale, the horrors of IPv6, etc. Feel free to contact me if you're around and would like to meet up.

Posted Mon Aug 19 07:39:48 2019 Tags:

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used (mutt), but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is (aerc) which was started by (Drew DeVault). It is a terminal-based email client written in Go, and relies on a number of other Go libraries to handle the “grungy” work of dealing with IMAP, email parsing, and the other fun free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it’s just that mutt sucks less than everything else (that’s literally their motto).

I did a (basic overview of how I apply patches to the stable kernel trees quite a few years ago) but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow for the stable tree is much different from the new-patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. The first is for everything that is sent directly to me (either with To: or Cc:), as well as a few mailing lists that I read in full because I am a maintainer of those subsystems (like (USB), or (stable)). The second inbox consists of other mailing lists that I do not read every message of, but review as needed and can reference when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, so I have a local copy to search when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep local copies of, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using (mbsync), which works really well and is much faster than (offlineimap), which I used for many, many years but which ends up being really slow when you do not live on the same continent as the mail server. (Luis’s) recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
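
For anyone curious, the basic shape of an mbsync configuration is small. Here is a minimal sketch of an ~/.mbsyncrc; the host, user, and account names are made-up placeholders, not my real setup:

IMAPAccount kernel
Host imap.example.com
User me
PassCmd "pass show mail/kernel"
SSLType IMAPS

IMAPStore kernel-remote
Account kernel

MaildirStore kernel-local
Path ~/mail/
Inbox ~/mail/INBOX

# Far/Near needs a recent isync; older versions call these Master/Slave.
Channel inbox
Far :kernel-remote:INBOX
Near :kernel-local:INBOX
Create Near
SyncState *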

Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up:

What I spend most of the time in is my “main” mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up:

Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • not read yet
  • read and left in INBOX as I need to do something “soon” about it
  • read and it is a patch to do something with

Everything that does not require a response, or that I have already responded to, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours, and sort things out by either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don’t achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

  • filter the INBOX by a pattern so that I only see one “type” of message at a time (more below)
  • read an email
  • write an email
  • respond to existing email, and use vim as an editor as my hands have those key bindings burned into them.
  • delete an email
  • save an email to one of two mboxes with a press of a few keystrokes
  • bulk delete/archive/save emails all at once

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox down to one kind of message at a time so I can process everything of that type at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
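
For example, one such limit pattern (not necessarily the exact one I type, but the idea) shows only the messages addressed to the linux-usb list, using mutt’s ~C pattern, which matches the To: and Cc: headers:

l ~C linux-usb@vger.kernel.org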

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

Any message that I want to save to the todo directory I can handle with a two-keystroke sequence, .t, which saves the message there automatically.

Again, that’s a binding I set up years ago, , jumps to the specific mbox, and . copies the message to that location.
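
As a sketch of what such bindings can look like in a muttrc (the exact lines I use differ a bit), assuming the todo and stable maildirs live under the mail folder:

macro index .t "<copy-message>=todo<enter>"   "copy message to the todo mbox"
macro index .s "<copy-message>=stable<enter>" "copy message to the stable mbox"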

Now you see why using mutt is not exactly obvious: those bindings are not part of the default configuration, and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:

mutt
,t
l staging
T
s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so:

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)

s
l erofs
T
s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox, let’s open it up and review them, and save the patches that look good enough to apply to another mbox:

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their reviewed-by lines, and then edit the original patch and add those lines to the patch:

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases, edit the patch itself when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for “can edit diff files by hand and have them still work”).

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways, using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or I can apply them within mutt individually by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree:

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Being able to run something else with a message, without having to leave the email client, is very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

  • collect patches by a common theme
  • filter the patches down to a smaller subset
  • review them manually and respond if there are problems
  • save “good” patches off to apply
  • apply the good patches
  • jump back to the first step

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called (quilt). Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago; it is currently maintained by Jean Delvare and has been rewritten in Perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it as it allows you to reorder, drop, add in the middle of the series, and manipulate patches in all sorts of ways, as well as create new patches directly. I do this for the stable tree as lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and other changes that with git, would require lots of rebasing.
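
As a rough illustration (the patch names here are made up), those kinds of series manipulations map onto quilt commands like these:

quilt delete some-rejected-fix.patch    # drop a patch that reviewers said should not be applied
quilt import ../new-prerequisite.patch  # slot a new patch in after the current position in the series
quilt push -a                           # apply the whole series on top of the base release
quilt refresh                           # regenerate the topmost patch after hand edits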

Rebasing a git tree does not work when you have developers working “down” from your tree. We usually have the rule in kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry.) That queue can always be found (here).

I do create a linux-stable-rc git tree that is constantly rebased based on the stable queue for those who run test systems that can not handle quilt patches. That tree is found (here) and should not ever be used by anyone for anything other than automated testing. See (this email) for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches to verify that each really is something that should be applied to the stable tree, and determine what order to apply the patches in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message and determines which stable trees, if any, contain the commit that this patch fixes.
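
A stripped-down sketch of the idea behind that script (not the actual script; the path to the clone of Linus's tree is illustrative) looks something like this:

#!/bin/sh
# Read a patch email on stdin, pull the commit id out of its Fixes: tag,
# and report which release first contained the commit being fixed.
fixes=$(grep -i -m1 '^Fixes:' | awk '{print $2}')
[ -n "$fixes" ] || { echo "no Fixes: tag found" >&2; exit 1; }
# Any stable tree at or after this release needs the fix backported.
git -C ~/linux/linus describe --contains "$fixes"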

In this example, the patch should only go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.

To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined “twice” because you can have different key bindings when you are looking at a mailbox’s index of all messages than when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus’s tree, then runs a different script that takes the commit id, exports the patch associated with that id, edits the message to add my signed-off-by to the patch, and drops me into my editor to make any tweaks that might be needed (sometimes files get renamed so I have to do that by hand, and it gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly as I have better syntax highlighting and can search and review the text better).

If all goes well, I save the file and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for that specific kernel version. To do all of this I had to spawn a separate terminal window as mutt does fun things to standard input/output when piping messages to a script, and I couldn’t ever figure out how to do this all without doing the extra spawn process.
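
The real script handles far more than this, but the core of the idea is roughly the following sketch (the paths and the single hard-coded queue are illustrative only):

#!/bin/sh
# Find the upstream commit id mentioned in the message, export that commit
# from a clone of Linus's tree, let me add my signed-off-by and tweak the
# patch in an editor, then add it to one of the stable patch queues.
sha=$(grep -o -m1 -E '[0-9a-f]{40}' | head -n1)
git -C ~/linux/linus format-patch -1 --stdout "$sha" > /tmp/backport.patch
# (in practice this editor step runs in the separate terminal window
# mentioned above, because of what mutt does to stdin/stdout)
${EDITOR:-vim} /tmp/backport.patch
cp /tmp/backport.patch ~/linux/stable/stable-queue/queue-4.19/my-backport.patch
echo my-backport.patch >> ~/linux/stable/stable-queue/queue-4.19/series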

Here it is in action, as a video as (asciinema) can’t show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt this way for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" <  ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird also can do this, I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • work with local mbox and maildir folders easily
  • open huge mbox and maildir folders quickly.
  • custom key bindings for any command. Sane defaults are always good, but everyone is used to a previous program, and retraining fingers can be hard.
  • create new key bindings for common tasks (like save a message to a specific mbox)
  • easily filter messages based on various things. Full regexes are not needed; see the PATTERNS section of ‘man muttrc’ for examples of what people have come up with over the years as being needed by an email client.
  • when sending/responding to an email, bring it up in the editor of my choice, with full headers. I know aerc already uses vim for this, which is great, as that makes it easy to send patches or include other files directly in an email body.
  • edit a message directly from the email client and then save it back to the local mbox it came from
  • pipe the current message to an external program

That’s what I use for kernel development.

Oh, I forgot:

  • handle gpg encrypted email. Some mailing lists I am on send everything encrypted with a mailing list key which is needed to both decrypt the message and to encrypt messages sent to the list. SMIME could be used if GPG can’t work, but both of them are probably equally horrid to code support for, so I recommend GPG as that’s probably used more often.

Bonus things that I have grown to rely on when using mutt are:

  • handle displaying html email by piping to an external tool like w3m (a rough sketch of this follows the list)
  • send a message from the command line with a specific subject and a specific attachment if needed.
  • specify the configuration file to use as a command line option. It is usually easier to have one configuration file for a “work” account, and another one for a “personal” one, with different email servers and settings provided for both.
  • configuration files that can include other configuration files. Mutt allows me to keep all of my “core” keybindings in one config file and just specific email server options in separate config files, allowing me to make configuration management easier.
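
For the html handling mentioned above, the usual recipe is a mailcap entry plus an auto_view line, roughly like this:

In ~/.mailcap:

text/html; w3m -dump -T text/html %s; copiousoutput

And in the muttrc:

auto_view text/html
alternative_order text/plain text/html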

If you have made it this far, and you aren’t writing an email client, that’s amazing; it must be a slow news day, which is a good thing. I hope this writeup helps show others how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to a state where they too can do all of this. Competition is good, and maybe aerc can get there someday.

Posted Wed Aug 14 11:00:44 2019 Tags:

Of course, "O'Blivion" was not the name I was born with. That's my television name. Soon, all of us will have special names -- names designed to cause the cathode ray tube to resonate.
This is the hero we deserve.

Henrico County, Va. -- Nearly a year after a Glen Allen neighborhood woke up to find dozens of vintage box television sets sitting on their front porches, the strange circumstance has happened again. [...]

Lt. Matt Pecka with Henrico Police said more than one person "wearing a mask resembling a television" dropped off the TVs to "the majority of homes throughout the community." [...] Henrico Police and Solid Waste Divisions worked together to remove the more than 50 TVs.


Posted Tue Aug 13 20:13:29 2019 Tags:
4th Amendment Crop Top

The patterns on the goods in this shop are designed to trigger Automated License Plate Readers, injecting junk data into the systems used by the State and its contractors to monitor and track civilians and their locations.


Posted Sun Aug 11 03:44:47 2019 Tags:
It's time for The Guardian's Best of the Bay again, so go validate us, ok? Applicable categories include:

  • Best Late-Night Restaurant: DNA Pizza
  • Best Pizza: DNA Pizza
  • Best Overall Bar: DNA Lounge
  • Best Performance Space: DNA Lounge
  • Best Live Music Venue: DNA Lounge
  • Best Nightclub: DNA Lounge
  • Best Dance Party: Bootie SF, So Stoked, Wasted, Sequence...
  • Best Burlesque: Hubba Hubba Revue
  • Best Kids' Event Or Venue: So Stoked

Because a few people asked for them, we put our pizza window posters on sale in the store. These are the 11"x17" versions, and are a paltry $15 each! Get 'em while they're "hot"?

"Someone took another one of our soap dispenser handles as a trophy" is a thing that I have to say on the regular. Whyyyy? Do you put it on the shelf with your collection of empty beer bottles?

It's been a little while since I posted a photo gallery round-up, so here we go...

Acid Rain
Too Far Gone
Sequence: Dack Janiels
Sequence: Monxx
Afton Presents

Smash-Up Derby
LGBooTie Pride
Dead Souls
Hubba Hubba State Fair
Burlesque Nation

Sad Boy Show Out
So Stoked: Candy Rave
So Stoked For Pride
So Stoked: Summer of Love
Cocktail Robotics Grand Challenge

Posted Sat Aug 10 00:31:00 2019 Tags:
