
A few minutes ago here at chez Raymond, my friend John Desmond says: “So, have you heard about the new Iraqi national anthem?”

I said “Uh, OK, I’m braced for this. What about it?”

He said “In the good old Sumer time.”

I pointed a finger at him and said “You’re Akkad!”

Yes. Yes, we probably do both deserve a swift kicking.

Posted Wed Mar 22 04:03:23 2017 Tags:

Maintaining cvs-fast-export is, frankly, a pain in the ass. Parts of the code I inherited are head-achingly opaque. CVS repositories are chronically prone to malformations that look like bugs in the tool and/or can’t be adapted to in any safe way. Its actual bugs are obscure and often difficult to fix – the experience is not unlike groping for razor-blades in the dark. But people expect cvs-fast-export to “just work” anyway and don’t know enough about what a Zeno’s tarpit the domain problem is to be properly grateful when it does.

Still I persevere. Somebody has to; the thought of vital code being trapped in CVS is pretty nervous-making if you know everything that can go wrong with it.

This release fixes a bug introduced by an incorrect optimization hack in 2014. It should only have affected you if you tried to use the -c option.

If you use this at a place that pays developers, please have your organization contribute to my Patreon page. Some of my projects are a pleasure to do for free; this one is grubby, hard work.

Posted Mon Mar 20 11:02:52 2017 Tags:

Latest version, as usual, here.

New stuff: Note just how crazily heterogeneous the six-bit character sets were. FTP. Ctrl-V on Unix systems. A correction about uu{de|en}code. Timeline updates for ’74 and ’77.

The pace of submissions continues to slow.

Posted Sat Mar 18 15:18:23 2017 Tags:

When you've emptied a cupboard, put masking tape across it, ideally in a colour that's highly visible. This way you immediately see which ones are finished and which still need attention. You won't keep opening the cupboard a million times to check, and after the move it takes mere seconds to undo.

Posted Fri Mar 17 02:00:00 2017 Tags:

Yes, to a certain segment of the population I suppose I define myself as a relic of ancient times when I insist that one can write good and absorbing computer games that don’t have a GUI – that throw down old-school in a terminal emulator.

Today I’m shipping a new release of the game greed – which is, I think, one of the better arguments for this proposition. Others include roguelike dungeon crawlers (nethack, angband, moria, larn), VMS Empire, the whole universe of text adventure games that began with ADVENT and Zork, and Super Star Trek.

I maintain a bunch of these old games, including an improved version of the BSD Battleships game and even a faithful port of the oldest of them all: wumpus, which I let you play (if you want) in a mode that emulates the awful original BASIC interface, all-caps as far as the eye can see.

Some of these I keep alive only because somebody ought to; they’re the heritage grain of computer gaming, even if they look unimpressive to the modern eye. But others couldn’t really be much improved by a GUI; greed, in particular, is like that. In fact, if you ranked heritage terminal games by how little GUIfication would improve them, I think greed would probably be right at the top (perhaps sharing that honor with ski). That in itself makes greed a bit interesting.

Much has been gained by GUIfying games; I have my own favorites in that style, notably Civilization II and Spaceward Ho! and Battle For Wesnoth (on which I was a developer for years). But the very best terminal games retain, I think, a distinct charm of their own.

Some of them (text adventures, roguelikes) work, I think, the way a novel does, or Scott McCloud taught us minimalist cartooning does; they engage the user’s own imagination as a peripheral, setting up a surprisingly strong interaction between the user’s private imagery and the bare elements of the game. At their best, such games (like novels) can have a subtle imaginative richness that goes well beyond anything this week’s graphical splatterfest offers.

More abstract puzzle games like greed don’t quite do that. What they offer instead is some of the same appeal as tiling window managers. In these games there is no waste, no excess, no bloat, no distraction; it’s all puzzle value all the way down. There’s a bracing quality about that.

Ski is kind of hermaphroditic that way. You can approach it as a cartoon (Aieee! Here comes the Yeti! Flee for your life!) or as a pure puzzle game. It works either way.

Finally, maybe it’s just me, but one thing I think these old-school terminal games consistently do better than their modern competition is humor. This is probably the McCloud effect again. I’ve laughed harder at, and retained longer, the wry turns of phrase from classic text adventures than any sight gag I’ve ever seen in a GUI game.

So, enjoy. It’s an odd and perhaps half-forgotten corner of our culture, but no less valuable for that.

UPDATE: I probably shouldn’t have described wumpus (1972) as “the oldest of them all”, because there were a few older games for teletypes like Hammurabi, aka Hamurabi (with a single ‘m’) aka The Sumer game from 1968. But wumpus is the oldest one that seems to be live in the memory of the hacker culture; only SPACEWAR (1961) has a longer pedigree, and it’s a different (vector graphics) kind of thing.

Posted Thu Mar 16 12:19:04 2017 Tags:

An important part of the Way of Unix is to try to tackle large problems with small, composable tools. This goes with a tradition of using line-oriented textual streams to represent data. But…you can’t always do either. Some kinds of data don’t serialize to text streams well (example: databases). Some problems are only tractable to large, relatively monolithic tools (example: compiling or interpreting a programming language).

Can we say anything generatively useful about where the boundary is? Anything that helps us do the Way of Unix better, or at least helps us know when we have no recourse but to write something large?

Yes, in fact we can. Back in 2015 I was asked why reposurgeon, my editor for version-control repositories, is not written as a collection of small tools rather than as a relatively large interpreter for a domain-specific language. I think the answer I gave then generalizes to a deeper insight, and a productive set of classifications.

(Part of the motivation for expanding that comment into this essay now is that I recently got mail from a reposurgeon user who shipped me several good fix patches and complimented the tool as a good example of the Way of Unix. I actually had to pause to think about whether he was right or not, because reposurgeon is certainly not a small program, either in code bulk or internal complexity. At first sight it doesn’t seem very Unixy. But he had a point, as I hope to show.)

Classic Unix line-oriented text streams have what I’m going to call semantic locality. Consider as an example a Unix password file. The semantic boundaries of the records in it – each one serializing a user’s name, numeric user ID, home directory, and various other information – correspond nicely to the syntactic boundary at each end of line.

Semantic locality means you can do a lot by looking at relatively small pieces (like line-at-a-time) using simple state machines or parsers. Well-designed data serializations tend to have this property even when they’re not textual, so you can do Unix-fu tricks on things like the binary packed protocol a uBlox GPS ships.
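To make this concrete, here’s a minimal sketch in Python of the kind of one-record-per-line processing that good semantic locality makes trivial (assuming the standard seven-field passwd layout):

    # Each line of /etc/passwd is a complete record, so a simple
    # line-at-a-time loop needs no context from neighboring lines.
    with open("/etc/passwd") as f:
        for line in f:
            name, _pw, uid, _gid, _gecos, home, shell = line.rstrip("\n").split(":")
            if int(uid) >= 1000:  # ordinary accounts on most modern Linuxes
                print(f"{name}: uid={uid} home={home} shell={shell}")

No parser state survives across line boundaries; that is semantic locality at work.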

Repository internals are different. A lot of the most important information – for example, the DAG structure of the history – is extremely de-localized; you have to do complicated and failure-prone operations on an entire serialized dump of the repository (assuming you can get such a thing at all!) to recover it. You can’t do more than the very simplest transforms on the de-localized data without knowing a lot of fragile context.

(Another, probably more familiar example of a data structure with poor semantic locality is a database dump. It may be possible to express individual records in tables in a representation that has good locality, but the web of relationships between tables is nowhere expressed in a way that is local for parsing.)

Now, if you are a really clever Unix hacker, the way you deal with the problem of representing and editing repositories is by saying “Fsck it. I’m not going to deal with repository internals at all, only lossless textual serializations of them.” Voila, reposurgeon! All your Unix-fu is suddenly relevant again. You exile your serialization/deserialization logic into stream exporters and importers which have just one extremely well-defined job, just as the Way of Unix prescribes.

Inside those importer/exporter tools…Toto, you’re not in Unix-land anymore, at least not as far as the gospel of small separable tools is concerned. That’s OK; by using them to massage the repository structures into a shape with better semantic locality you’ve made the conceptually hard part (the editing operations) much easier to specify and implement. You can’t get all the way to line-oriented text streams, but you can get close enough that ed(1), the ancient Unix line-oriented text editor, makes a good model for reposurgeon’s interface.

To sharpen this point, consider repocutter. This companion tool in the reposurgeon distribution takes advantage of the fact that Subversion itself can serialize a repository into a textual dumpfile. There’s a repertoire of useful operations that repocutter can perform on these dumpfiles; notably, one of them is dissecting a multi-project Subversion repository dump to get out any one of the project histories in a dumpfile of its own. While repocutter has a more limited repertoire than reposurgeon, it does behave like a classic Unix filter.

Stepping back from the details of reposurgeon, we can use it as a paradigmatic case for a couple of more general observations that explain and generalize traditional Unix practice.

First: semantic locality aids decomposability. Whether you get to write a flock of small separable tools or have to do one large one is largely controlled by whether your data structure has a lossless representation with good semantic locality or not.

Or, to put it more poetically, you can carve a data structure at its joints only if there’s a representation with joints to carve.

Second: There’s almost nothing magic about line-oriented text streams other than their good semantic locality. (I say “almost” only because eyeball friendliness and the ability to edit them with unspecialized tools also matter.)

Third: Because semantic locality aids decomposability, part of the Way of Unix is to choose data structures and data formats that maximize semantic locality (under the constraint that you have to represent the data’s entire ontology).

That third point is the philosophical generalization of “jam it into a line-oriented text-stream representation”; it’s why that works, when it works.

Fourth: When you can transform a data structure or representation to a form with better semantic locality, you can collect gains in decomposability.

That fourth point is the main trick that reposurgeon pulls. I had more to say about this as a design strategy in Automatons, judgment amplifiers, and DSLs.

Posted Wed Mar 15 01:36:36 2017 Tags:

Most hackers know how the twos-complement representation of binary numbers works, and are at least aware that there was an older representation called “ones-complement” in which you negated a binary number by inverting each bit.
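For readers who have never seen it in action, here’s a quick illustrative sketch in Python, using 8-bit values for brevity:

    MASK = (1 << 8) - 1  # 8-bit word

    def ones_complement_negate(x):
        # Ones-complement negation: invert every bit.
        return x ^ MASK

    def twos_complement_negate(x):
        # Twos-complement negation: invert every bit, then add one.
        return (~x + 1) & MASK

    print(f"{ones_complement_negate(5):08b}")  # 11111010 = -5 in ones-complement
    print(f"{twos_complement_negate(5):08b}")  # 11111011 = -5 in twos-complement
    # Note that ones-complement has two zeros: 00000000 (+0) and 11111111 (-0).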

This came up on the NTPsec development list recently, with a question about whether we might ever have to port to a non-twos-complement machine. To my utter, gob-smacked astonishment, it turns out ones-complement systems still exist – though, thankfully, not as an issue for us.

I thought I could just mumble something about the CDC 6600 and be done, but if you google “one’s-complement machines” you’ll find that Unisys still ships a series of machines with the brand “Clear-Path Dorado” (latest variant introduced 2015) that are emulations of their old 1100-series mainframes running over Intel Xeon hardware – and these have one’s-complement arithmetic.

This isn’t a practical port blocker for NTPsec, as NTP will never run over the batch OS on these things – it’s about as POSIX-compatible as the Bhagavad-Gita. It’s just weird and interesting that ones-complement machines survive in any form at all.

And a bit personal for me. My father was a programmer at Univac in the 1950s and early ’60s. He was proud of his work. My very first interaction with a computer ever was getting to play a very primitive videogame on the oscilloscope-based video console of a Univac 1108. This was in 1968. I was 11 years old, and my game machine cost $8M and took up the entire ground floor of an office building in Rome, Italy.

Other than the 1100, the ones-complement machines Wikipedia mentions (LINC, PDP-1, and CDC6600) are indeed all long dead. There was a ones-complement “CDC Cyber” series as late as 1989, but again this was never going to implement POSIX.

About other competitors to twos-complement there is less to say. Some of them are still used in floating-point representations, but I can find no evidence that sign-magnitude or excess-k notation have been used for integers since the IBM 7090 in 1959.

There’s a comp.lang.std.c article from 1993 that argues in some technical detail that a C compiler is not practical on ones-complement hardware because too many C idioms have twos-complement assumptions baked in. The same argument would apply to sign-magnitude and excess-k.

UPDATE: It seems that Unisys is the graveyard of forgotten binary formats. I have a report that its Clear-Path Libra machines, emulating an ancient Burroughs stack machine architecture, use sign-magnitude representation of integers.

Posted Sun Mar 12 22:20:01 2017 Tags:

Was looking at using zstd for backup, and wanted to see the effect of different compression levels. I backed up my (built) bitcoin source, which is a decent representation of my home directory, but only weighs in at 2.3GB. zstd -1 compressed it 71.3%, zstd -22 compressed it 78.6%, and here’s a graph showing runtime (on my laptop) and the resulting size:

[Graph: zstandard compression (bitcoin source code, object files and binaries) – times and resulting sizes]

For this corpus, sweet spots are 3 (the default), 6 (2.5x slower, 7% smaller), 14 (10x slower, 13% smaller) and 20 (46x slower, 22% smaller). Spreadsheet with results here.
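If you want to repeat the experiment on your own corpus, a sketch along these lines will do; the tarball path and level list here are illustrative assumptions, not the exact method used above:

    # Compress one tarball at several zstd levels, timing each run.
    import os, subprocess, time

    SRC = "home-backup.tar"  # hypothetical pre-built tar of the tree under test
    for level in (1, 3, 6, 14, 20, 22):
        out = f"/tmp/backup-{level}.zst"
        start = time.time()
        subprocess.run(["zstd", "--ultra", f"-{level}", "--force", "-o", out, SRC],
                       check=True)
        elapsed = time.time() - start
        ratio = 100 * (1 - os.path.getsize(out) / os.path.getsize(SRC))
        print(f"level {level:2}: {elapsed:6.1f}s, {ratio:.1f}% smaller")

(--ultra unlocks the levels above 19.)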

Posted Thu Mar 9 00:53:39 2017 Tags:

This morning I stumbled over a comment from last September that I somehow missed replying to at the time. I suspect it’s something more than one of my readers has wondered about, so here goes…

Edward Cree wrote:

If I’m really smart enough to impress esr, I feel like I ought to be doing more with myself than toy projects, games, and an obscure driver. It’s not that I’m failing to change the world, it’s that I’m not even trying. (Not for want of causes, either; there are plenty of things I’d change about the world if I could, and I suspect esr would approve of most of them.)

Obviously without Eric’s extroversion I won’t be as influential as him, but… dangit, Eric, what’s your trick? You make having a disproportionate effect on the course of history look easy! Why can I never find anything important to hack on?

There are several reasons people get stuck this way. I’ve experienced some of them myself. I’ve seen others.

If this sounds like you, dear reader, the first question to ask yourself is whether you are so attached to having a lot of potential that you fear failing in actuality. I don’t know Edward’s age, but I’ve seen this pattern in a lot of bright young people; it manifests as a lot of project starts that are potentially brilliant but a failure to follow through to the point where you ship something that has to meet a reality test. Or in an opposite way: as self-constraining to toy projects where the risk of failure is low.

So my first piece of advice is this: if you want to have “a disproportionate effect on the course of history”, the first thing you need to do is give yourself permission to fail – as long as you learn something from every failure, and are ready to keep scaling up your bets after success.

The second thing you need to do is finish something and ship it. No, more than that. You need to make finishing and shipping things a habit, something you do routinely. There are things that can be made to look easy only by cultivating a lot of self-discipline and persistence. This is one of them.

(The good news is that once you get your self-discipline to the required level it won’t feel like you have to flog yourself any more. It’ll just be habit. It’ll be you.)

Another thing you need to do is actually pay attention to what’s going on around you, at every scale. 99% of the time, you find important things to hack on by noticing possibilities other people have missed. The hard part here is seeing past the blinding assumptions you don’t know you have, and the hard part of that is being conscious of your assumptions.

Here’s my favorite example of this from my own life. After I described the many-eyeballs-make-bugs-shallow effect, I worried for years at the problem of why nobody in the hacker culture had noticed it sooner. After all, I was describing what was already a decades-old folk practice in a culture not undersupplied with bright people – why didn’t I or anybody else clue in faster?

I remember vividly the moment I got it. I was pulling on my pants in a hotel in Trondheim, Norway, idly chewing over this question yet again. It was because we all thought we knew why we were simultaneously innovating and achieving low error rates – we had an unexamined, unconscious explanation that suited us and we never looked past it.

That assumption was this: hackers write better software because we are geniuses, or at least an exceptionally gifted and dedicated elite among programmers. Our culture successfully recruits and selects for this.

The insidious thing about this explanation is that it’s not actually false. We really are an exceptionally gifted elite. But as long as you don’t know that you’re carrying this assumption, or know it and fail to look past it because it makes you feel so good, it will be nearly impossible to notice that something else is going on – that the gearing of our social machine matters a lot, and is an evolved instrument to maximize those gifts.

There’s an old saw that it’s not the things you don’t know that hurt you, it’s the things you think you know that ain’t so. I’m amplifying that: it’s the things you don’t know you think that hurt you the most.

It’s not enough to be rigorous about questioning your assumptions once you’ve identified them. The subtler work is noticing you have them. So when you’re looking for something important to hack on, the question to learn to ask is: what important problems is everybody, including you, seeing right past? Pre-categorizing and dismissing?

There’s a kind of relaxed openness to what is, a seeing past preconceptions, that is essential to creativity. We all half-know this; it’s why hackers resonate so strongly with Zen humor. It’s in that state that you will notice the problems that are really worth your effort. Learn to go there.

As for making it look easy…it’s only easy in the same way that mastery always makes a skill look easier than it is. When someone like John Petrucci or Andy Timmons plays a guitar lick with what looks like simple, effortless grace, you’re not seeing the years of practice and effort they put into getting to where that fluency and efficiency is natural to them.

Similarly, when you see me doing things with historical-scale consequences and making it look easy, you’re not seeing the years of practice and effort I put in on the component skills (chopping wood, drawing water). Learning to write well. Learning to speak well. Getting enough grasp on what makes people tick that you know how to lead them. Learning enough about your culture that you can be a prophet, speak its deepest yearnings and its highest aspirations to it, bringing to consciousness what was unconscious before. These are learnable skills – almost certainly anyone reading this is bright enough to acquire them – but they’re not easy at all.

Want to change the world? It’s doable. It’s not magic. Be aware. Be courageous. And will it – want it enough that you accept your failures, learn from them, and never stop pushing.

Posted Wed Mar 8 21:18:52 2017 Tags:

I haven’t announced a reposurgeon release on the blog in some time because recent releases have mostly been routine stuff and bugfixes. But today we have a feature that many will find interesting: reposurgeon can now read BitKeeper repositories. This is its first new version-control system since Monotone was added in mid-2015.

Those of you who remember the BitKeeper flap of 2005 might assume some fearsome magic was required, but no. BitKeeper has gone open source and now has a “bk fast-export” subcommand, so adding read-side support was pretty trivial. In theory the write-side support ought to work – there’s also a “bk fast-import” that reposurgeon uses – but the importer does not work well. It doesn’t seem to handle tag declarations, and I have core-dumped it during basic testing. I expect this will be fixed eventually; BitMover has a business incentive to make imports easy, after all.

While reposurgeon has your attention, I guess I should mention another recent development. The svncutter tool that I wrote back around 2009 is back, as “repocutter”, and now part of the reposurgeon distribution with improved documentation. There are some cases of Subversion repositories holding multiple projects that it turns out are better handled by slicing them apart with repocutter than by trying to do the entire dissection in reposurgeon.

Yes, there are still some pending bugs in weird Subversion cases. I decided to ship a release anyway, deferring those, because read support for BitKeeper seemed important enough. I believe that makes reposurgeon’s coverage of Unix-hosted version-control systems about as complete as is technically possible.

Fear the reposturgeon!

Posted Mon Mar 6 10:28:41 2017 Tags:

The newest version of Things Every Hacker Once Knew is only a minor update.

There’s material on SIGHUP; six-bit characters on 36-bit machines; a correction that XMODEM required 8 bits; and why screensavers are called that.

New submissions are ramping down; I don’t expect to need to issue another update of this for some time.

Posted Thu Mar 2 11:52:37 2017 Tags:

I had a very powerful experience recently. I found my love of jazz again. Here’s the recording that did it: Simon Phillips & Protocol + Ndugu Chancler + Billy Ward: Biplane to Bermuda.

My teens and twenties were an exciting time to be a jazz fan. I fell in love with the first wave of jazz fusion experiments in the 1970s by groups like Weather Report, Return to Forever, Mahavishnu Orchestra, Billy Cobham.

My relationship to older forms of the genre had been ambivalent. I generally liked jazz from before it began to strive for artsiness – primitive forms like Dixieland and swing and the brass-centric music of Preservation Hall. On the other hand, I often found the “art” jazz of the 50s and 60s excessively involuted and unlistenable.

It seemed to me like the fusionists of the ’70s, borrowing from rock and pop-funk and world music, had rediscovered the vigor of early jazz with a wider range of rhythms, textures, tone colors – and a willingness to take chances, push boundaries.

And dear Goddess I loved the results – albums like Return to Forever’s Romantic Warrior, Weather Report’s Heavy Weather, Billy Cobham’s Spectrum, Herbie Hancock’s Headhunters, Stanley Clarke’s School Days. Forty years later I still cherish those recordings.

But then in the late ’80s something shifted. The harbinger, though I didn’t know it at the time, was Weather Report’s breakup in ’86. Somehow the magic went away. It seemed to me that jazz lost its sense of adventure. What I could hear out there mostly seemed to retreat into bland elevator music and overly-reverent recreations of historical styles.

The best I could find were the likes of Pat Metheny and Spyro Gyra – pleasant enough listening, but…wan, almost bloodless. Safe. Too safe. And let us draw a kindly veil over the terrifying blandness of most of the rest of “smooth jazz”.

So I moved sideways into jazz-influenced prog-metal, artists like Derek Sherinian and Planet X and Liquid Tension Experiment. And that was good too; wonderful music, intricate fire.

But I missed jazz as a living genre – I missed polyrhythms and wailing saxes and the uses of silence and things that derivatives of rock could not quite bring themselves to do. Occasionally Pandora would throw up a track that partly brought back the magic, like Bill Frisell’s White Fang or various stuff by Niacin. But these were few and far between.

And then Pandora’s algorithms figured out that I might like Simon Phillips and Protocol. And I did. Took me a while to notice that a lot of the good newer tracks it was rotating in were by the same outfit. I think the track that forced me to sit up and take notice was Manganese.

Then I found the live Biplane to Bermuda, and I listened, and – I mean this – I nearly cried. This was what I’d been missing for a very long time. Not just the style and virtuosity of the music – Andy Timmons’s astonishing understated doubling on guitar with Everette Harp’s sax, Phillips’s and Chancler’s polyrhythms – but the sense that this was not a museum piece. These were players still asking questions, still pushing, still taking chances.

This is jazz I can feel passionate about again, jazz that rewards repeated listens and invites me into subtle depths of phrasing, rhythm, and expression.

Damn it’s good to have that back. I don’t think I knew how much I missed it, before.

Posted Sun Feb 26 21:56:38 2017 Tags:

I encourage all of you to either listen to or read the transcript of Terry Gross' Fresh Air interview with Joseph Turow about his book “The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy, And Define Your Power”.

Now, most of you who read my blog know the difference between proprietary and Free Software, and the difference between a network service and software that runs on your own device. I want all of you who have a good understanding of that to do a simple thought experiment:

How many of the horrible things that Turow talks about can happen if there is no proprietary software on your IoT or mobile devices?

AFAICT, other than the facial recognition in the store itself that he talked about in Russia, everything he talks about would be mitigated or eliminated completely as a threat if users could modify the software on their devices.

Yes, universal software freedom will not solve all the world's problems. But it does solve a lot of them, at least with regard to the bad things the powerful want to do to us via technology.

(BTW, the blog title is a reference to Philip K. Dick's Minority Report, which includes a scene about systems reading people's eyes to target-market to them. It's not the main theme of that particular book, though… Dick was always going off on tangents in his books.)

Posted Tue Feb 14 03:30:00 2017 Tags:

There are a lot of problems in our society, and particularly in the USA, right now, and plenty of charities that need our support. The reason I continue to focus my work on software freedom is simply because there are so few focused on the moral and ethical issues of computing. Open Source has reached its pinnacle as an industry fad, and with it, a watered-down message: “having some of the source code for some of your systems some of the time is so great, why would you need anything more?”. Universal software freedom is, however, further from reality than it was even a few years ago. At least a few of us, in my view, must focus on that cause.

I did not post many blog posts about this in 2016. There was a reason for that — more than any other year, work demands at Conservancy have been constant and unrelenting. I enjoy my work, so I don't mind, but blogging becomes low priority when there is a constant backlog of urgent work to support Conservancy's mission and our member projects. It's not just Conservancy's mission, of course, it's my personal one as well.

For our 2016 fundraiser, I wrote last year a blog post entitled “Do You Like What I Do For a Living?”. Last year, so many of you responded that it not only made it possible for me to continue that work for one more year, but we were able to add our colleague Brett Smith to our staff, which brought Conservancy to four full-time staff for the first time. We added a few member projects (and are moving through that queue to add more in 2017), and sure enough — the new work plus the backlog of work waiting for another staffer filled Brett's queue just as mine, Karen's, and Tony's already were.

The challenge now is sustaining this staffing level. Many of you came to our aid last year because we were on the brink of needing to reduce our efforts (and staffing) at Conservancy. Thanks to your overwhelming response, we not only endured, but we were able to add one additional person. As expected, though, needs of our projects increased throughout the year, and we again — all four of us full-time staff — must work to our limits to meet the needs of our projects.

Charitable donations are a voluntary activity, and as such they have a special place in our society and culture. I've talked a lot about how Conservancy's Supporters give us a mandate to carry out our work. Those of you that chose to renew your Supporter donations or become new Supporters enable us to focus our full-time efforts on the work of Conservancy.

On the signup and renewal page, you can read about some of our accomplishments in the last year (including my recent keynote at FOSDEM, an excerpt of which is included here). Our work does not follow fads, and it's not particularly glamorous, so only dedicated Supporters like you understand its value. We don't expect to get large grants to meet the unique needs of each of our member projects, and we certainly don't expect large companies to provide very much funding unless we cede control of the organization to their requests (as trade associations do). Even our most popular program, Outreachy, is attacked by a small group of people who don't want to see the status quo of privileged male domination of Open Source and Free Software disrupted.

Supporter contributions are what make Conservancy possible. A year ago, you helped us build Conservancy as a donor-funded organization and stabilize our funding base. I now must ask that you make an annual commitment to renewal — either by renewing your contribution now or becoming a monthly supporter, or, if you're just learning about my work at Conservancy from this blog post, reading up on us and becoming a new Supporter.

Years ago, when I was still only a part-time volunteer at Conservancy, someone who disliked our work told me that I had “invented a job of running Conservancy”. He meant it as an insult, but I take it as a compliment with pride. In fact, between me and my colleague (and our Executive Director) Karen Sandler, we've “invented” a total of four full-time jobs and one part-time one to advance software freedom. You helped us do that with your donations. If you donate again today, your donation will be matched to make the funds go further.

Many have told me this year that they are driven to give to other excellent charities that fight racism, work for civil and immigration rights, and other causes that seem particularly urgent right now. As long as there is racism, sexism, murder, starvation, and governmental oppression in the world, I cannot argue that software freedom should be made a priority above all of those issues. However, even if everyone in our society focused on a single, solitary cause that we agreed was the top priority, it's unlikely we could make quicker progress. Meanwhile, if we all single-mindedly ignore less urgent issues, they will, in time, become so urgent they'll be insurmountable by the time we focus on them.

Industrialized nations have moved almost fully to computer automation for nearly every daily task. If you question this fact, try to do your job for a day without using any software at all, or anyone using software on your behalf, and you'll probably find it impossible. Then, try to do your job using only Free Software for a day, and you'll find, as I have, that tasks that should take only a few minutes take hours when you avoid proprietary software, and some are just impossible. There are very few organizations that are considering the long-term implications of this slowly growing problem and making plans to build the foundations of a society that doesn't have that problem. Conservancy is one of those few, so I hope you'll realize the long-term value of our lifelong work to defend and expand software freedom, and donate.

Posted Mon Feb 13 15:20:00 2017 Tags:

libinput has a couple of features that 'automagically' work on touchpads, such as disable-while-typing, disabling the touchpad when the lid switch triggers, and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput needs to know whether a touchpad is internal or external. Initially we had code in libinput to detect this, but eventually moved the decision to the ID_INPUT_TOUCHPAD_INTEGRATION property, now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it trivial to override locally where the current rules are insufficient (and until the hwdb is fixed, thanks for filing a bug). We still have the fallback code in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal.
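If you'd rather check from code than from the shell, a short sketch using the pyudev package works too; the event node path here is an assumption, substitute your touchpad's:

    # Query udev's verdict on a touchpad's integration from Python.
    import pyudev

    ctx = pyudev.Context()
    dev = pyudev.Devices.from_device_file(ctx, "/dev/input/event4")
    # Prints "internal", "external", or None if the property is unset.
    print(dev.properties.get("ID_INPUT_TOUCHPAD_INTEGRATION"))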

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.

Posted Fri Feb 10 00:27:00 2017 Tags:

My New Year’s resolution is to restart blogging.

Bufferbloat is the most common underlying cause of variable, bad performance on the Internet; gamers call it “lag”.

Trying to steer anything the size of the Internet in a better direction is slow and difficult at best. From the time changes in the upstream operating systems are complete to when consumers can buy new product is typically four years, a delay caused by the broken and insecure ecosystem in the embedded device market. Chip vendors, box vendors, I’m looking at you… So much of what is now finally appearing in the market is based on work that is often four years old. Market pull may do what push has not.

The fq_codel & cake work going on in the bufferbloat project is called SQM – “smart queue management.”

See What to do About Bufferbloat for general information, and the DSLReports Speedtest makes it easy to test for bufferbloat. New commercial products are becoming increasingly available; here are some of them.

Evenroute IQrouter

First up, I’d like to call out the Evenroute IQrouter, which has a variant of SQM that deals with “sag”. DSL users have often suffered more than other broadband users, due to bad bloat in the modems compounded by minimal bandwidth, so the DSL version of the IQrouter is particularly welcome. DSL ISPs seem to have a tendency (seemingly more often than ISPs using other technologies) to under-provision their backhaul, causing “sag” at different times of the day and week. This makes the static configuration techniques we’ve used in LEDE/OpenWrt SQM ineffective, as you have to give away too much bandwidth if a fixed bandwidth is used. I love the weasel words “up to” some speed used by many ISPs. It is one thing for your service to degrade for a period of days or weeks while an ISP takes action to provision more bandwidth to an area; it is another for your bandwidth to routinely vary by large factors for weeks, months, and years.

I sent a DSL Evenroute IQrouter to my brother in Pennsylvania recently and arranged for one for a co-worker; they are working well, and Rich Brown has had similarly good experiences. Evenroute has been working hard to make the installation experience easy. Best yet, the IQrouter is autoconfiguring and figures out for you what to do in the face of “sag” in your Internet service, something that may be a “killer feature” if you suffer lots of “sag” from your ISP. The IQrouter is therefore the first “out of the box” device I can recommend to almost anyone, rather than just my geek friends.

The IQrouter does not yet have the very recent, wonderful WiFi results of Toke and Dave (more about this coming in a separate post), but it has the capability for over-the-air updates, and one hopes debloated WiFi and ATF will come to it reasonably soon. The new WiFi stack is just going upstream into Linux and LEDE/OpenWrt as I write this post. DSL users seldom have enough bandwidth for the WiFi hop to be the bottleneck, so the WiFi work is much more important for cable and fiber users at higher bandwidths than for DSL users stuck at low bandwidth.

Ubiquiti Edgerouter

I’ve bought a Ubiquiti Edgerouter X on the recommendation of Dave Taht but have not yet put it into service. Router performance can be an issue on high-end cable or fiber service. It is strictly an Ethernet router, lacking WiFi interfaces; but in my house, where the wiring is down in the basement, that’s what I need. The Edgerouter starts at around $50; the PoE version I bought is around $75.

The Edgerouter story is pretty neat – Dave Taht did the backport 2? years back. Ubiquiti’s user community jumped all over it and polished it up, adding support to their conf tools and GUI, and Ubiquiti recognized what they had and shipped it as part of their next release.

SQM is available in recent releases of Ubiquiti’s Edgerouter firmware. SQM itself is easy to configure. The Edgerouter overall requires considerable configuration before it is useful in the home environment, however, and its firmware web interface is aimed at IT people rather than most home users. I intend it to replace my primary router, a TP-Link Archer C7v2, someday soon, as it is faster than the TP-Link – which matters, since Comcast keeps increasing my bandwidth without asking me. I wish the Ubiquiti had a “make me into a home router” wizard that would make it immediately usable for most people, as its price is low enough for some home users to be interested in it. I believe one can install LEDE/OpenWrt on the Edgerouter, which I may do if I find its IT-staff-oriented web interface too unusable.

LEDE/OpenWrt and BSD for the Geeks

If you are adventurous enough to reflash firmware, anything runnable on OpenWrt/LEDE from the last few years has SQM available; take the new LEDE release, now in testing, for a spin. If your router has an Ath9k WiFi chip (or a later version of the Ath10k WiFi chip), or you buy a new router with the right chips in it, you can play with the new WiFi goodness now in LEDE (noted above). There is a very wide variety of home routers that can benefit from reflashing. Its web UI is tolerably decent, better than those of many commercial vendors I have seen.

WiFi chip vendors should take careful note of the stupendous improvements available in the Linux mac80211 framework for bufferbloat elimination and air time fairness. If you don’t update to the new interfaces and get your code into LEDE, you’re going to be at a great disadvantage to Atheros in the market.

dd-wrt, asuswrt, and ipfire all long ago added support for SQM. It will be interesting to see how long it takes them to pick up the stunning WiFi work.

The pcengines APU2 is a good “DIY” router for higher speeds. Dave has not tried LEDE on it yet, but will. He presently runs Ubuntu on it…

BSD users recently got fq_codel in opnsense, so the BSD crowd are making progress.

Other Out of the Box Devices

The Turris Omnia is particularly interesting for very fast broadband service and can run LEDE as well; unfortunately, it seems to be available only in Europe at this time. We think the Netduma router has SQM support, though it is not entirely clear what they’ve done; it is a bit pricey for my taste, and I don’t happen to know anyone who has one.

Cable Modems

Cable users may find that upgrading to a new DOCSIS 3.1 modem is helpful (though that does not solve WiFi bufferbloat). The new DOCSIS 3.1 standard requires AQM. While I don’t believe PIE is anywhere near as good as fq_codel (it lacks flow queuing), the DOCSIS 3.1 standard at least requires an AQM, and PIE should help and does not require manual upstream bandwidth tuning. Maybe someday we’ll find some fq_codel (or fq_pie) based cable modems. Here’s hoping…

Under the Covers, Hidden

Many home router vendors make bold claims that they have proprietary cool features, but these are usually smoke and mirrors. Wireless mesh devices without bufferbloat reduction are particularly suspect and most likely to require manual RF engineering beyond most users’ abilities. They require very high signal strength and transfer rates to avoid the worst of bufferbloat. Adding lots more routers without debloating, and without simultaneously attacking transmit power control, is a route to WiFi hell for everyone. The LEDE release is the first to have the new WiFi bits needed to make wireless mesh more practical. No one we know of has been working on minimizing transmit power to reduce interference between mesh nodes. So we are very skeptical of these products.

There are now a rapidly increasing number of products out there with SQM goodness under the covers, sometimes implemented well, and sometimes not so well, and more as the months go by.

One major vendor put support for fq_codel/SQM under the covers of one product using a tradename, promptly won an award, but then started using that tradename on inferior products in their product line that did not have real queue management. I therefore can’t vouch for any product-line tradename whose vendor does not publicly acknowledge how it works and that the tradename really means SQM under the covers. Once burned, three times shy. That product therefore does not deserve a mention, due to the behavior of the vendor. “Bait and switch” is not what anyone needs.

Coming Soon…

We have wind of a number of vendors’ plans that have not quite reached the market, but it is up to them to announce their products.

If you find new products or ISPs that do really well, let us know, particularly if they actually say what they are doing. We need to start some web pages to keep track of commercial products.


Posted Thu Feb 2 08:00:00 2017 Tags:

I merged a patchset from James Ye today to add support for switch events to libinput, specifically: lid switch events. This feature is scheduled for libinput 1.7.

First, what are switches and how are they different to keys? A key's state is transient with a neutral state of "key is up". The state itself is expected to change frequently. Switches don't always have a defined logical neutral state and the state changes only infrequently. This requires different handling in applications and thus libinput exposes a new interface (and capability) for switches.

The interface itself is trivial. A switch event has two properties, the switch type (e.g. "lid") and the switch state (on/off). See the libinput-debug-events source code for simple code that prints the state and type.
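If you want to watch the raw kernel side of this rather than the libinput API, here's a sketch using the python-evdev package; the device path is an assumption, since lid switches usually sit on a platform device:

    # Read raw lid-switch events from the kernel (not the libinput API).
    import evdev

    dev = evdev.InputDevice("/dev/input/event0")  # whichever node exposes EV_SW
    for event in dev.read_loop():
        if event.type == evdev.ecodes.EV_SW and event.code == evdev.ecodes.SW_LID:
            print("lid closed" if event.value else "lid open")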

In libinput, we generally try to restrict ourselves to the cases we know how to handle. So in the first iteration, we'll support a single switch event: the lid switch. This is the toggle that changes when you close the lid on your laptop.

But libinput uses this internally too: touchpads are disabled automatically whenever the lid is closed. Indeed, this functionality was the main motivation for this patchset. On a number of devices, we get ghost touches when the lid is closed. Even though the touchpad is unreachable by the user, interference with the screen still causes events, moving the pointer in unexpected ways and generally being a nuisance. Some trackpoints suffer from the same issue. But now that libinput knows about the lid switch it can transparently disable the touchpad whenever the lid is closed and thus discard the events.

Lid switches on some devices are unreliable. There are some devices where the lid is permanently closed and other devices where the lid can be closed, but we'll never see the open event. So we change behaviour based on a few factors. After all, no-one likes a dysfunctional touchpad because the lid switch is broken (if you do, seek help). For one, whenever we detect keyboard events while in the logically closed state we'll assume that the lid is open after all and adjust state accordingly. Unless the lid switch is reliable, we don't sync the initial state. That's annoying for those who start libinput in closed mode, but it filters out all devices that set the lid switch to "on" and then never change again. On the Surface 3 devices we go even further: we know those devices need a bit of hand-holding. So whenever we detect activity on the keyboard, we also write the EV_SW/SW_LID state to the device node, thus updating the kernel to be correct again (and thus helping everyone else who may be listening).

The exact behaviours will likely change slightly over time as we have to deal with corner-cases one-by-one. But meanwhile, it's even easier for compositors to listen to switch events and users don't have to deal with ghost touches anymore. Many thanks to James Ye for implementing this.

Posted Wed Feb 1 04:51:00 2017 Tags:

In order to read events and modify devices, libinput needs a file descriptor to the /dev/input/event node. But those files are only accessible by the root user. If libinput were to open these directly, we would force any process that uses libinput to have sufficient privileges to open those files. But these days everyone tries to reduce a process's privileges wherever possible, so libinput simply delegates opening and closing the file descriptors to the caller.

The functions to create a libinput context take a parameter of type struct libinput_interface. This is a non-opaque struct with two function pointers: "open_restricted" and "close_restricted". Whenever libinput needs to open or close a file, it calls the respective function. For open_restricted() libinput expects the caller to return an fd with the given flags.

In the simplest case, a caller can merely call open() and close(). This is what the debugging tools do (and the test suite). But obviously this means you have to run those as root. The main wayland compositors (weston, mutter, kwin, ...) instead forward the request to systemd-logind. That then opens the event node and returns the fd which is passed to libinput. And voila, the compositors don't need to run as root, libinput doesn't have to know how the fd is opened and everybody wins. Plus, logind will mute the fd on VT-switch, so we can't leak keyboard events.
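The real interface is a C struct of two function pointers, but the shape of the delegation pattern is easy to show; here's an illustrative transliteration into Python, not a real binding:

    # Delegation pattern: the library never opens files itself; the
    # caller supplies the callbacks and keeps the privileges.
    import os

    def open_restricted(path, flags):
        # Simplest possible caller: a plain open(). A compositor would
        # forward this request to systemd-logind instead.
        return os.open(path, flags)

    def close_restricted(fd):
        os.close(fd)

    fd = open_restricted("/dev/null", os.O_RDONLY)  # stand-in for an event node
    close_restricted(fd)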

In the X.org case it's a combination of the two. When the server runs with systemd-logind enabled, it will open the fd before the driver initialises the device. During the init stage, libinput asks the xf86-input-libinput driver to open the device node. The driver forwards the request to the server which simply returns the already-open fd. When the server runs without systemd-logind, the server opens the file normally with a standard open() call.

So in summary: you can easily run libinput without systemd-logind but you'll have to figure out how to get the required privileges to open device nodes. For anything more than a test or debugging program, I recommend using systemd-logind.

Posted Mon Jan 30 00:00:00 2017 Tags:

We're in the middle of the 1.7 development cycle and one of the features merged already is support for "wheel tilt", i.e. support for devices that don't have a separate horizontal wheel but instead rely on a tilt motion for horizontal scroll events. Now, the way this is handled in the kernel is that the events are sent via REL_WHEEL (or REL_HWHEEL) so we don't actually need special code in libinput to handle tilt. But libinput tries to make sense of input devices so the upper layers have a reliable base to build on - and that's why we need tilt-wheels to be handled.

For 'pointer axis' events (i.e. scroll events) libinput provides scroll sources. These specify how the scroll event was generated, allowing a caller to handle things accordingly. A finger-based scroll for example can trigger kinetic scrolling while a mouse wheel would not usually do so. The value for a pointer axis is also dependent on the scroll source - for continuous/finger-based scrolling the value is in pixels. For a mouse wheel, the value is in degrees. This obviously doesn't work for a tilt event because degrees don't make sense in this context. So the new axis source is just that, an indicator that the event was caused by a wheel tilt rather than a rotation. Its value matches the default wheel rotation (i.e. 15 degrees) just to make using it easier.
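For a caller, that means dispatching on the axis source before interpreting the value. A sketch, with invented names rather than the real enum values:

    # Interpret a scroll value according to its axis source.
    WHEEL_DEGREES_PER_CLICK = 15.0  # the default wheel rotation

    def handle_scroll(source, value):
        if source in ("wheel", "wheel-tilt"):
            # Discrete sources report degrees; convert to logical clicks.
            print(f"{value / WHEEL_DEGREES_PER_CLICK:+.0f} click(s) via {source}")
        else:
            # Finger/continuous sources report pixels; kinetic scrolling is ok.
            print(f"{value:+.1f} px via {source}")

    handle_scroll("wheel-tilt", 15.0)  # one tilt unit
    handle_scroll("finger", 12.5)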

Of course, a device won't tell us whether it provides a proper wheel or just tilt. So we need a hwdb property and I've added that to systemd's repo. To make this work, set the MOUSE_WHEEL_TILT_HORIZONTAL and/or MOUSE_WHEEL_TILT_VERTICAL property on your hardware and you're off. Yay.

Patches for the wayland protocol have been merged as well, so this is/will be available to wayland clients.

Posted Thu Jan 26 05:03:00 2017 Tags: