The lower-post-volume people behind the software in Debian.

Today the Netherlands celebrates King's Day. To honor this tradition, the Dutch embassy in San Francisco invited me to give a "TED talk" to an audience of Dutch and American entrepreneurs. Here's the text I read to them. Part of it is the tl;dr of my autobiography; part of it is about the significance of programming languages; part of it is about Python's big idea. Leve de koning! (Long live the king!)

Python: a programming language created by a community

Excuse my ramblings. I’ll get to a point eventually.

Let me introduce myself. I’m a nerd, a geek. I’m probably somewhere on the autism spectrum. I’m also a late bloomer. I graduated from college when I was 26. I was 45 when I got married. I’m now 60 years old, with a 14-year-old son. Maybe I just have a hard time with decisions: I’ve lived in the US for over 20 years and I am still a permanent resident.

I'm no Steve Jobs or Mark Zuckerberg. But at age 35 I created a programming language that got a bit of a following. What happened next was pretty amazing. But I'll get to that.

At age 10 my parents gave me an educational electronics kit. The kit was made by Philips, and it was amazing. At first I just followed the directions and everything worked; later I figured out how to design my own circuits. My prized possessions were the kit's three (!) transistors.

I took one of my first electronics models, a blinking light, to show and tell in 5th grade. It was a total dud: nobody cared or understood its importance. I think that's one of my earliest memories of finding myself a geek: until then I had just been a quiet quick learner.

In high school I developed my nerdiness further: I hung out with a few other kids interested in electronics, and during physics class we sat in the back of the class discussing NAND gates while the rest of the class was still figuring out Ohm's law.

Fortunately our physics teacher had figured us out: he employed us to build a digital timer that he used to demonstrate the law of gravity to the rest of the class. It was a great project and showed us that our skills were useful. The other kids still thought we were weird: it was the seventies and many were into smoking pot and rebelling; another group was already preparing for successful careers as doctors or lawyers or tech managers. But they left me alone, I left them alone, and I graduated as one of the best of my year.

After high school I went to the University of Amsterdam: It was close to home, and to a teen growing up in the Netherlands in the seventies, Amsterdam was the only cool city. (Yes, the student protests of 1968 did touch me a bit.) Much to my high school physics teacher's surprise and disappointment, I chose to major in math, not physics. But looking back I think it didn’t matter.

In the basement of the science building was a mainframe computer, and it was love at first sight. Card punches! Line printers! Batch jobs! More to the point, I quickly learned to program, in languages with names like Algol, Fortran and Pascal. Mostly forgotten names, but highly influential at the time. Soon I was, again, sitting in the back of class, ignoring the lecture, correcting my computer programs. And why was that?

In that basement, around the mainframe, something amazing was happening. There was a loosely-knit group of students and staff with similar interests, and we exchanged tricks of the trade. We shared subroutines and programs. We united in our alliances against the mainframe staff, especially in the endless cat-and-mouse games over disk space. (Disk space was precious in a way you cannot understand today.)

But the most important lesson I learned was about sharing: while most of the programming tricks I learned there died with the mainframe era, the idea that software needs to be shared is stronger than ever. Today we call it open source, and it’s a movement. Hold that thought!

At the time, my immediate knowledge of the tricks of the trade seemed to matter most though. The mainframe’s operating system group employed a few part-time students, and when they posted a vacancy, I applied, and got the job. It was a life-changing event! Suddenly I had unlimited access to the mainframe (no more fighting for space or terminals), plus access to the source code for its operating system, and dozens of colleagues who showed me how all that stuff worked.

I now had my dream job, programming all day, with real customers: other programmers, the users of the mainframe. I stalled my studies and essentially dropped out of college, and I would not have graduated if not for my enlightened manager and a professor who hadn't given up on me. They nudged me towards finishing some classes and pulled some strings, and eventually, with much delay, I did graduate. Yay!

I immediately landed a new dream job that would not have been open to me without that degree. I had never lost my interest in programming languages as an object of study, and I joined a team building a new programming language — not something you see every day. The designers hoped their language would take over the world, replacing Basic.

It was the eighties now, and Basic was the language of choice for a new generation of amateur programmers, coding on microcomputers like the Apple II and the Commodore 64. Our team considered the Basic language a pest that the world should be rid of. The language we were building, ABC, would "stamp out Basic", according to our motto.

Sadly, for a variety of reasons, our marketing (or perhaps our timing) sucked, and after four years, ABC was abandoned. Since then I've spent many hours trying to understand why the project failed, despite its heart being so clearly in the right place. Apart from being somewhat over-engineered, my best answer is that ABC died because there was no internet in those days, and as a result there could not be a healthy feedback loop between the makers of the language and its users. ABC’s design was essentially a one-way street.

Just half a decade later, when I was picking through ABC’s ashes looking for ideas for my own language, that missing feedback loop was one of the things I decided to improve upon. “Release early, release often” became my motto (freely after the old Chicago Democrats’ encouragement, “vote early, vote often”). And the internet, small and slow as it was in 1990, made it possible.

Looking back 25 years, the Internet and the Open Source movement (a.k.a. Free Software) really did change everything. Plus something called Moore's Law, which makes computers faster every year. Together, these have entirely changed the interaction between the makers and users of computer software. It is my belief that these developments (and how I managed to make good use of them) have contributed more to the success of “my” programming language than my programming skills and experience, no matter how awesome.

It also didn't hurt that I named my language Python. This was a bit of unwitting marketing genius on my part. I meant to honor the irreverent comedic genius of Monty Python's Flying Circus, and back in 1990 I didn't think I had much to lose. Nowadays, I'm sure "brand research" firms would be happy to charge you a very large fee to tell you exactly what complex of associations this name tickles in the subconscious of the typical customer. But I was just being flippant.

I have promised the ambassador not to bore you with a technical discussion of the merits of different programming languages. But I would like to say a few things about what programming languages mean to the people who use them: programmers. Typically when you ask a programmer to explain to a lay person what a programming language is, they will say that it is how you tell a computer what to do. But if that was all, why would they be so passionate about programming languages when they talk among themselves?

In reality, programming languages are how programmers express and communicate ideas, and the audience for those ideas is other programmers, not computers. The reason: the computer can take care of itself, but programmers are always working with other programmers, and poorly communicated ideas can cause expensive flops. In fact, ideas expressed in a programming language also often reach the end users of the program: people who will never read or even know about the program, but who nevertheless are affected by it.

Think of the incredible success of companies like Google or Facebook. At the core of these are ideas: ideas about what computers can do for people. To be effective, an idea must be expressed as a computer program, using a programming language. The language that is best to express an idea will give the team using that language a key advantage, because it gives the team members — people! — clarity about that idea. The ideas underlying Google and Facebook couldn't be more different, and indeed these companies' favorite programming languages are at opposite ends of the spectrum of programming language design. And that’s exactly my point.

True story: The first version of Google was written in Python. The reason: Python was the right language to express the original ideas that Larry Page and Sergey Brin had about how to index the web and organize search results. And they could run their ideas on a computer, too!

So, in 1990, long before Google and Facebook, I made my own programming language, and named it Python. But what is the idea of Python? Why is it so successful? How does Python distinguish itself from other programming languages? (Why are you all staring at me like that? :-)

I have many answers, some quite technical, some from my specific skills and experience at the time, some just about being in the right place at the right time. But I believe the most important idea is that Python is developed on the Internet, entirely in the open, by a community of volunteers (but not amateurs!) who feel passion and ownership.

And that is what that group of geeks in the basement of the science building was all about.

Surprise: Like any good inspirational speech, the point of this talk is about happiness!

I am happiest when I feel that I'm part of such a community. I’m lucky that I can feel it in my day job too. (I'm a principal engineer at Dropbox.) If I can't feel it, I don't feel alive. And so it is for the other community members. The feeling is contagious, and there are members of our community all over the world.

The Python user community is formed of millions of people who consciously use Python, and love using it. There are active members organizing Python conferences — affectionately known as PyCons — in faraway places like Namibia, Iran, Iraq, even Ohio!

My favorite story: A year ago I spent 20 minutes on a video conference call with a classroom full of faculty and staff at Babylon University in southern Iraq, answering questions about Python. Thanks to the efforts of the audacious woman who organized this event in a war-ridden country, students at Babylon University are now being taught introductory programming classes using Python. I still tear up when I think about the power of that experience. In my wildest dreams I never expected I’d touch lives so far away and so different from my own.

And on that note I'd like to leave you: a programming language created by a community fosters happiness in its users around the world. Next year I may go to PyCon Cuba!
Posted Wed Apr 27 17:17:00 2016 Tags:

Because people do in fact drop money in my PayPal and Patreon accounts, I think a decent respect to the opinions of mankind requires that I occasionally update everyone on where the money goes. First in an occasional series.

Recently I’ve been buying Raspberry Pi GPS HATs (daughterboards with a GPS and real-time clock) to go with the Raspberry Pi 3 Dave Taht dropped on me. Yesterday morning a thing called an Uputronics GPS Extension Board arrived from England. A few hours ago I ordered a cheap Chinese thing obviously intended to compete with the Adafruit GPS HAT I bought last week.

The reason is that I’m working up a very comprehensive HOWTO on how to build a Stratum 1 timeserver in a box. Not content to merely build one, I’m writing a sheaf of recipes that includes all three HATs I’ve found and (at least) two revisions of the Pi.
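For the curious, the heart of any such recipe is pointing ntpd at the shared-memory segments gpsd exports. A sketch of the ntp.conf fragment (classic ntpd refclock syntax; the time1 fudge value needs per-build calibration, and unit 1 only exists if the PPS line is wired up):

```
# gpsd feeds ntpd through shared-memory "driver 28" units.
# Unit 0 carries the jittery serial time; noselect keeps it advisory.
server 127.127.28.0 minpoll 4 noselect
fudge  127.127.28.0 time1 0.0 refid GPS
# Unit 1 is the PPS pulse, good to microseconds when hooked up.
server 127.127.28.1 minpoll 4 prefer
fudge  127.127.28.1 refid PPS
```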

What makes this HOWTO different from various build pages on this topic scattered around the Web? In general, the ones I’ve found are well-intended but poorly written. They make too many assumptions, they’re tied to very specific hardware types, they skip “obvious” steps, they leave out diagnostic details about how to tell things are going right and what to do when things go wrong.

My goal is to write a HOWTO that can be used by people who are not Linux and NTP experts – basically, my audience is anyone who could walk into a hackerspace and not feel utterly lost.

Also, my hope is that by not being tightly tied to one parts list this HOWTO will help people develop more of a generative understanding of how you compose a build recipe, and develop their own variations.

I cover everything, clear down to how to buy a case that will fit a HAT. And this work has already had some functional improvements to GPSD as a side effect.

I expect it might produce some improvements in NTPsec as well – our program manager, A&D regular Mark Atwood, has been smiling benignly on this project. Mark’s plan is to broadcast this thing to a hundred hackerspaces and recruit the next generation of time-service experts that way.

Three drafts have already circulated to topic experts. Progress will be interrupted for a bit while I’m off at Penguicon, but 1.0 is likely to ship within two weeks or so.

And it will ship with the recipe variations tested. Because that’s what I do with your donations. If this post stimulates a few more, I’ll add an Odroid C2 (Raspberry Pi workalike with beefier hardware) to the coverage; call it a stretch goal.

Posted Wed Apr 27 11:12:03 2016 Tags:

This is an entirely silly post about the way I name the machines in my house, shared for the amusement of my regulars.

The house naming theme is “comic mythical beasts”.

My personal desktop machine is always named “snark”, after Lewis Carroll’s “Hunting of the”. This has been so since long before adj. “snarky” and vi. “to snark” entered popular English around the turn of the millennium. I do not find the new layer of meaning inappropriate.

Currently snark is perhaps better known as the Great Beast of Malvern, but whereas “snark” describes its role, “Beast” refers to the exceptional capabilities of this particular machine.

One former snark had two Ethernet ports. Its alias through the second IP address was, of course, “boojum”.

My laptop is always “golux”, from James Thurber’s The Thirteen Clocks.

The bastion host (mail and DNS server) is always “grelber”, after the insult-spewing Grelber from the Broom Hilda comic strip. It’s named not for the insults but because Grelber is depicted as a lurking presence inside a hollow log with a mailbox on the top.

Cathy’s personal desktop machine is always “minx” after a pretty golden-furred creature from Infocom’s classic Zork games, known for its ability to sniff out buried chocolate truffles.

The router is “quintaped”, a five-legged creature supposed to live on a magically concealed island in the Potterverse. Because it has 5 ports, you see.

The guest machine in the basement (distinct from the mailserver) is “hurkle” after the title character in Theodore Sturgeon’s The Hurkle Is A Happy Beast (1949).

For years we had a toilet-seat Mac (iBook) I’d been given as a gift (it’s long dead now). We used it as a gaming machine (mainly “Civilization II” and “Spaceward Ho”). It was “billywig”, also from the Potterverse.

I have recently acquired 3 Raspberry Pis (more about this in a future post). The only one of them now in use is currently named “whoville”, but that is likely to change as I have just decided the sub-namespace for Pis will be Dr. Seuss creatures – lorax, sneetch, zax, grinch, etc.

That is all.

Posted Sun Apr 24 05:56:55 2016 Tags:
You don’t need an Uber, you don’t need a cab (via Casey Bisson CC BY-NC-SA 2.0)

NetworkManager 1.2 was released yesterday, and it’s already built for Fedora (24 and rawhide), a release candidate is in Ubuntu 16.04, and it should appear in other distros soon too.  Lubo wrote a great post on many of the new features, but there are too many to highlight in one post for our ADD social media 140-character tap-tap generation to handle.  Ready for more?

indicator menus

Wayland is coming, and it doesn’t support the XEmbed status icons like nm-applet creates.  Desktop environments also want more control over how these status menus appear.  While KDE and GNOME both provide their own network status menus, Ubuntu, XFCE, and LXDE use nm-applet.  How do they deal with the lack of XEmbed and status icons?

Ubuntu has long patched nm-applet to add App Indicator support, which exposes the applet’s menu structure as D-Bus objects to allow the desktop environment to draw the menu just like it wants.  We enhanced the GTK3 support in libdbusmenu-gtk to handle nm-applet’s icons and then added an indicator mode to nm-applet based off Ubuntu’s work.  We’ve made packagers’ lives easier by building both modes into the applet simultaneously and allowing them to be switched at runtime.

IP reconfiguration

Want to add a second IP address or change your DNS servers right away?  With NetworkManager 1.2 you can now change the IP configuration of a device through the D-Bus interface or nmcli without triggering a reconnect.  This lets network UIs like KDE or GNOME control-center apply changes you make to network configuration immediately, without interrupting your network connection.  That might take a cycle or two to show up in your favorite desktop environment, but the basis is there.

802.1x/WPA Enterprise authentication

An oft-requested feature was the ability to use certificate domain suffix checking to validate an authentication server.  While NetworkManager has supported certificate subject checking for years, this has limitations and isn’t as secure as domain suffix checking.  Both these options help prevent man-in-the-middle attacks where a rogue access point could masquerade as your normal secure network.  802.1x authentication is still too complicated, and we hope to greatly simplify it in upcoming releases.
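As a sketch of what the new option looks like on disk (assuming NetworkManager's keyfile plugin; the EAP method and the domain are placeholders, not defaults):

```
[802-1x]
eap=peap;
identity=alice
phase2-auth=mschapv2
# New in 1.2: reject any authentication server whose certificate
# identity does not end in this domain suffix.
domain-suffix-match=radius.example.com
```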

Interface stacking

While NM has always been architected to allow bridges-on-bonds-on-VLANs, there were some internal issues that prevented these more complicated configurations from working.  We’ve fixed those bugs, so now layer-cake network setups work in a flash!  Hopefully somebody will come up with a fancy drag-n-drop UI based off Minecraft or CandyCrush with arbitrary interface trees.  Maybe it’ll even have trophies when you finally get a Level 48 active-backup bond.

Old Stable Series

Now that 1.2 is out, the 1.0 series is in maintenance mode.  We’ll fix bugs and any security issues that come up, but typically won’t add new features.  Backporting from 1.2 to 1.0 will be even more difficult due to the removal of dbus-glib, a major feature of the 1.2 release.  If you’re on 1.0, 0.9.10, or (gasp!) 0.9.8 I’d urge you to upgrade, and I think you’ll like what you see!

Posted Thu Apr 21 18:07:22 2016 Tags:

When we released graphics tablet support in libinput earlier this year, only tablet tools were supported. So while you could use the pen normally, the buttons, rings and strips on the physical tablet itself (the "pad") weren't detected by libinput and did not work. I have now merged the patches for pad support into libinput.

The reason for the delay was simple: we wanted to get it right [1]. Pads have a couple of properties that tools don't have and we always considered pads to be different to pens and initially focused on a more generic interface (the "buttonset" interface) to accommodate for those. After some coding, we have now arrived at a tablet pad-specific interface instead. This post is a high-level overview of the new tablet pad interface and how we intend it to be used.

The basic sign that a pad is present is when a device has the tablet pad capability. Unlike tools, pads don't have proximity events, they are always considered in proximity and it is up to the compositor to handle the focus accordingly. In most cases, this means tying it to the keyboard focus. Usually a pad is available as soon as a tablet is plugged in, but note that the Wacom ExpressKey Remote (EKR) is a separate, wireless device and may be connected after the physical pad. It is up to the compositor to link the EKR with the correct tablet (if there is more than one).

Pads have three sources of events: buttons, rings and strips. Rings and strips are touch-sensitive surfaces and provide absolute values - rings in degrees, strips in normalized [0.0, 1.0] coordinates. Similar to pointer axis sources we provide a source notification. If that source is "finger", then we send a terminating out-of-range event so that the caller can trigger things like kinetic scrolling.
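To make the finger-lift behavior concrete, here is a rough sketch (Python for illustration, not the actual libinput C API) of how a caller might consume ring events, treating a negative position as the terminating out-of-range event:

```python
# Illustrative model of ring-event handling: positions are in degrees,
# and a negative position is the terminating out-of-range event sent
# when a finger lifts off the ring.

def process_ring_events(events):
    """events: iterable of (position_degrees, source) tuples.

    Returns the last finger position seen before lift-off (the point
    where a caller could kick off kinetic scrolling), or None if the
    interaction never terminated.
    """
    last = None
    for position, source in events:
        if position < 0:
            # Terminating event: only a finger source should trigger
            # kinetic scrolling.
            if source == "finger" and last is not None:
                return last
        else:
            last = position
    return None
```

A real compositor would track velocity rather than just the last position, but the shape of the event stream is the same.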

Buttons on a pad are ... different. libinput usually re-uses the Linux kernel's include/input.h [2] event codes for buttons and keys. But for the pad we decided to use plain sequential button numbering, starting at index 0. So rather than a semantic code like BTN_LEFT, you'd simply get a button 0 event. The reasoning behind this is a caveat in the kernel evdev API: event codes have semantic meaning (e.g. BTN_LEFT) but buttons on a tablet pad don't have those meanings. There are some generic event ranges (e.g. BTN_0 through to BTN_9) and the Wacom tablets use those, but once you have more than 10 buttons you leak into other ranges. The ranges are simply too narrow, so we end up with seemingly different buttons even though all buttons are effectively the same. libinput's pad support undoes that split and combines the buttons into a simple sequential range and leaves any semantic mapping of buttons to the caller. Together with libwacom, which describes the location of the buttons, a caller can get a relatively good idea of what the layout looks like.
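The renumbering described above can be illustrated with a toy model (assumed event-code values for a hypothetical pad, not libinput's actual implementation):

```python
# Toy model of the pad button renumbering. BTN_0..BTN_9 occupy
# 0x100..0x109 in include/input.h; a pad with more than 10 buttons
# "leaks" into unrelated ranges (here BTN_LEFT/BTN_RIGHT, 0x110/0x111).
BTN_0, BTN_LEFT, BTN_RIGHT = 0x100, 0x110, 0x111
PAD_CODES = [BTN_0 + i for i in range(10)] + [BTN_LEFT, BTN_RIGHT]

def sequential_button_number(code):
    """Collapse the scattered kernel codes into one 0-based sequence."""
    return sorted(PAD_CODES).index(code)
```

With this mapping the twelfth physical button is simply "button 11", regardless of which kernel range its code happened to land in.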

Mode switching is a commonly expected feature on tablets. One button is designated as the mode switch button and toggles all other buttons between the available modes. On the Intuos Pro series tablets, that button is usually the button inside the ring. Button mapping, and thus mode switching, is however a feature we leave up to the caller: if you're working on a compositor, you will have to implement mode switching there.

Other than that, pad support is relatively simple and straightforward and should not cause any big troubles.

[1] or at least less wrong than in the past
[2] They're actually linux/input-event-codes.h in recent kernels

Posted Mon Apr 18 07:14:00 2016 Tags:

This year’s meatspace party for blog regulars and friends will be held at Penguicon 2016 on Friday, April 29, beginning at 10PM.

UPDATE: Pushed back an hour because the original start time conflicted with the time slot assigned for my “Ask Me Anything” event.

The venue is the Southfield Westin hotel in Southfield, Michigan. It’s booked solid already; we were only able to get a room there Friday night, and will be decamping to the Holiday Inn Express across the parking lot on Saturday. They still have rooms, but I suggest making reservations now.

The usual assortment of hackers, anarchists, mutants, mad scientists, and for all I know covert extraterrestrials will be attending the A&D party. The surrounding event is worth attending in itself and will be running Friday to Sunday.

Southfield is near the northwestern edge of the Detroit metro area and is served by the Detroit Metropolitan Airport (code DTW).

Penguicon is a crossover event: half science-fiction convention, half open-source technical conference. Terry Pratchett and I were the co-guests-of-honor at Penguicon I back in 2003 and I’ve been back every year since.

If you’ve never been to an SF con, you have no idea how much fun this can be. A couple thousand unusually intelligent people well equipped with geek toys and costumes and an inclination to party can generate a lot of happy chaos, and Penguicon reliably does. If you leave Monday without having made new friends, you weren’t trying.

Things I have done at Penguicon: Singing. Shooting pistols. Tasting showcased exotic foods. Getting surprise-smooched by attractive persons. Swordfighting. Playing strategy games. Junkyard Wars. Participating in a Viking raid (OK, it turned into a dance-off). Punning contests. And trust me, you have never been to parties with better conversation than the ones we throw.

Fly in Thursday night (the 28th) if you can because Geeks With Guns (the annual pistol-shooting class founded by yours truly and now organized by John D. Bell) is early Friday afternoon and too much fun to miss.

Posted Sun Apr 17 08:52:13 2016 Tags:

About five years ago I reacted to a lot of hype about the impending death of the personal computer with an observation and a prediction. The observation was that some components of a computer have to be the size they are because they’re scaled to human dimensions – notably screens, keyboards, and pointing devices. Wander outside certain size extrema and you get things like smartphone keyboards that are only good for limited use.

However, what we normally think of as the heart of a computer – the processing and storage – isn’t like this. It can get arbitrarily small without impacting usability at all. Consequently, I predicted a future in which people would carry around powerful computing nodes descended from smartphones and walk them to docking stations bundling a screen, a pointing device, and a real keyboard when they need to get real work done.

We’ve now reached an interesting midway point on that road. The (stationary) computers I use are in the process of bifurcating into two classes: one quite large, one very small. I qualify that with “stationary” because laptops are an exception for reasons which, if not yet obvious, will be in a few paragraphs.

The “large” class is exemplified in my life by the Great Beast of Malvern: my “desktop” system, except that it’s really more like a baby supercomputer optimized for fast memory access to extremely large data sets (as in, surgery on large version-control repositories). This is more power than a typical desktop user would know what to do with, by a pretty large margin – absurd overkill for just running an office suite or video editing or gaming or whatever.

My other two stationary production machines are, as of yesterday, a fanless mini-ITX box about the size of a paperback book and a credit-card-sized Raspberry Pi 3. They arrived on my doorstep around the same time. The mini-ITX box was a planned replacement for the conventional tower PC I had been using as a mailserver/DNS/bastion host, because I hate moving parts and want to cut my power bills. The Pi was serendipitous, a surprise gift from Dave Taht who’s trying to nudge me into improving my hardware hacking.

(And so I shall; tomorrow I expect to solder a header onto an Adafruit GPS hat, plug it into the Pi, and turn the combination into a tiny Stratum 1 NTP test machine.)

And now I have three conventional tower PCs in my living room (an old mailserver and two old development workstations) that I’m trying to get rid of – free to good home, you must come to Malvern to get them. Because they just don’t make sense as service machines any more. Fanless small-form-factor systems are now good enough to replace almost any computer with functional requirements less than those of a Great-Beast-class monster.

My wife still has a tower PC, but maybe not for long. Hers could easily be replaced by something like an Intel NUC – Intel’s sexy flagship small-form-factor fanless system, now cheap enough on eBay to be price-competitive with a new tower PC. And no moving parts, and no noise, and less power draw.

I have one tower PC left – the recently decomissioned mailserver. But the only reason I’m keeping it is as a courtesy for basement guests – it’ll be powered down when we don’t have one. But I am seriously thinking of replacing it with another Raspberry Pi set up as a web kiosk.

I still have a Thinkpad for travel. When you have to carry your peripherals with you, it’s a compromise that makes sense. (Dunno what I’m going to do when it dies, either – the quality and design of recent Thinkpads has gone utterly to shit. The new keyboards are particularly atrocious.)

There’s a confluence of factors at work here. Probably the single most important is cheap solid-state drives. Without SSDs, small-form-factor systems were mostly cute technology demonstrations – it didn’t do a lot of practical good for the rest of the computing/storage core to be a tiny SBC when it had to drag around a big, noisy hunk of spinning rust. With SSDs everything, including power draw and noise and heat dissipation, scales down in better harmony.

What it adds up to for me is that midrange PCs are dead. For most uses, SFF (small-form-factor) hardware has reached a crossover point – their price per unit of computing is now better.

Next, these SFF systems get smaller and cooler and merge with smartphone technology. That’ll take another few years.

Posted Fri Apr 15 12:10:06 2016 Tags:

I don't actually support everyone in every bathroom

this is from a distant facebook friend whose heart is in the right place and was making a better point on a different topic, but it got me to thinking.


Posted Thu Apr 14 00:52:15 2016 Tags:

Once upon a time, free-trade agreements were about just that: free trade. You abolish your tariffs and import restrictions, I’ll abolish mine. Trade increases, countries specialize in what they’re best equipped to do, efficiency increases, price levels drop, everybody wins.

Then environmentalists began honking about exporting pollution and demanded what amounted to imposing First World regulation on Third World countries who – in general – wanted the jobs and the economic stimulus from trade more than they wanted to make environmentalists happy. But the priorities of poor brown people didn’t matter to rich white environmentalists who already had theirs, and the environmentalists had political clout in the First World, so they won. Free-trade agreements started to include “environmental safeguards”.

Next, the labor unions, frightened because foreign workers might compete down domestic wages, began honking about abusive Third World labor conditions about which they didn’t really give a damn. They won, and “free trade” agreements began to include yet more impositions of First World pet causes on Third World countries. The precedent firmed up: free trade agreements were no longer to be about “free” trade, but rather about managing trade in the interests of wealthy First Worlders.

Today there’s a great deal of angst going on in the tech community about the Trans-Pacific Partnership. Its detractors charge that a “free-trade” agreement has been hijacked by big-business interests that are using it to impose draconian intellectual-property rules on the entire world, criminalize fair use, obstruct open-source software, and rent-seek at the expense of developing countries.

These charges are, of course, entirely correct. So here’s my question: What the hell else did you expect to happen? Where were you idiots when the environmentalists and the unions were corrupting the process and the entire concept of “free trade”?

The TPP is a horrible agreement. It’s toxic. It’s a dog’s breakfast. But if you stood meekly by while the precedents were being set, or – worse – actually approved of imposing rich-world regulation on poor countries, you are partly to blame.

The thing about creating political machinery to fuck with free markets is this: you never get to be the last person to control it. No matter how worthy you think your cause is, part of the cost of your behavior is what will be done with it by the next pressure group. And the one after that. And after that.

The equilibrium is that political regulatory capability is hijacked for the use of the pressure group with the strongest incentives to exploit it. Which generally means, in Theodore Roosevelt’s timeless phrase, “malefactors of great wealth”. The abuses in the TPP were on rails, completely foreseeable, from the first time “environmental standards” got written into a trade agreement.

That’s why it will get you nowhere to object to the specifics of the TPP unless you recognize that the entire context in which it evolved is corrupt. If you want trade agreements to stop being about regulatory carve-outs, you have to stop tolerating that corruption and get back to genuinely free trade. No exemptions, no exceptions, no sweeteners for favored constituencies, no sops to putatively noble causes.

It’s fine to care about exporting pollution and child labor and such things, but the right way to fix that is by market pressure – fair trade labeling, naming and shaming offenders, that sort of thing. If you let the politicians in they’ll do what they always do: go to the highest bidder and rig the market in its favor. And then you will get screwed.

Application of this principle to domestic policy is left as an easy exercise for the reader.

Posted Tue Apr 12 14:33:48 2016 Tags:

I’ve been implementing segregated witness support for c-lightning; it’s interesting that there’s no address format for the new form of addresses.  There’s a segregated-witness-inside-p2sh which uses the existing p2sh format, but if you want raw segregated witness (which is simply a “0” followed by a 20-byte or 32-byte hash), the only proposal is BIP142 which has been deferred.

If we’re going to have a new address format, I’d like to make the case for shifting away from bitcoin’s base58 (eg. 1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2):

  1. base58 is not trivial to parse.  I used the bignum library to do it, though you can open-code it as bitcoin-core does.
  2. base58 addresses are variable-length.  That makes webforms and software mildly harder, but also eliminates a simple sanity check.
  3. base58 addresses are hard to read over the phone.  Greg Maxwell points out that the upper and lower case mix is particularly annoying.
  4. The 4-byte SHA check is not guaranteed to catch the most common forms of error: transposed or single incorrect letters. It’s still pretty good, though (a 1 in 4 billion chance of a random error passing).
  5. At around 34 letters, it’s fairly compact (36 for the BIP141 P2WPKH).

This is my proposal for a generic replacement (thanks to CodeShark for generalizing my previous proposal) which covers all possible future address types (as well as being usable for current ones):

  1. Prefix for type, followed by colon.  Currently “btc:” or “testnet:”.
  2. The full scriptPubkey using base 32 encoding as per http://philzimmermann.com/docs/human-oriented-base-32-encoding.txt.
  3. At least 30 bits for crc64-ecma, up to a multiple of 5 to reach a letter boundary.  This covers the prefix (as ascii), plus the scriptPubKey.
  4. The final letter is the Damm algorithm check digit of the entire previous string, using this 32-way quasigroup. This protects against single-letter errors as well as single transpositions.
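For concreteness, the base-32 step alone might look like this: a minimal Python sketch using the alphabet the example addresses below appear to use (the CRC and Damm check-digit steps are omitted, so this is illustrative only).

```python
# Sketch of the base-32 encoding step, using Zooko's human-oriented
# alphabet. This is not a full address encoder: no CRC, no check digit.
ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32_encode(data: bytes) -> str:
    """Encode bytes MSB-first, 5 bits per output letter, zero-padded."""
    bits = "".join(f"{byte:08b}" for byte in data)
    bits += "0" * (-len(bits) % 5)  # pad to a multiple of 5 bits
    return "".join(ZBASE32[int(bits[i:i + 5], 2)]
                   for i in range(0, len(bits), 5))

# A 20-byte P2WPKH hash yields 32 letters (160 bits / 5)
print(len(zbase32_encode(bytes(20))))  # → 32
```

A 32-byte P2WSH hash encodes to 52 letters, which together with the 4-letter prefix, CRC letters, and check digit lands in the ballpark of the lengths quoted below.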

These addresses look like btc:ybndrfg8ejkmcpqxot1uwisza345h769ybndrrfg (41 digits for a P2WPKH) or btc:yybndrfg8ejkmcpqxot1uwisza345h769ybndrfg8ejkmcpqxot1uwisza34 (60 digits for a P2WSH) (note: neither of these has the correct CRC or check letter, I just made them up).  A classic P2PKH would be 45 digits, like btc:ybndrfg8ejkmcpqxot1uwisza345h769wiszybndrrfg, and a P2SH would be 42 digits.

While manually copying addresses is something which should be avoided, it does happen, and the cost of making them robust against common typographic errors is small.  The CRC is a good idea even for machine-based systems: it will let through less than 1 in a billion mistakes.  Distinguishing which blockchain is a nice catchall for mistakes, too.

We can, of course, bikeshed this forever, but I wanted to anchor the discussion with something I consider fairly sane.

Posted Fri Apr 8 01:50:56 2016 Tags:

The British have a phrase, “too clever by half”. It needs to go global, especially among hackers. It can have any of several closely related meanings: the one I mean to focus on here has to do with overconfidence in one’s intelligence or skill, and the particular bad consequences that can have. It’s related to Nassim Taleb’s concept of a “fragilista”.

This came up recently when I posted about building a new mailserver out of a packaged fanless mini-ITX system. My stated goal was to reduce my mailserver’s power dissipation in order to (eventually) collect a net savings on my utility bill.

Certain of my commenters immediately crapped all over this idea, describing it as overkill and insisting that I ought to be using something with even lower power draw; the popular suggestion was a Raspberry Pi. It was when I objected to the absence of a battery-backed-up RTC (real-time clock) on the Pi that the real fun started.

The pro-Pi people airily dismissed this objection. One observed that you can get an RTC hat for the Pi. Some others waxed sarcastic about the maintainer of GPSD and the NTPsec tech lead not trusting his own software; a GPS to supply time, or NTP to take time corrections over the net, should (they claim) be a perfectly adequate substitute for an RTC.

And so they would be…under optimal conditions, with everything working perfectly, and a software bridge that hasn’t been written yet. Best case would be that your GPS hat has a solid satellite lock when you boot and sets the system clock within the first second. Only, oops, GPSD as it is doesn’t actually have the capability to set the system clock directly. It has to go through ntpd or chrony.

So now you have to have a time service daemon installed, properly configured, and running for the timestamps on your system logs to look sane. Well, unless your GPS doesn’t have sat lock. Or you’re booting without a network connection for diagnostic or fault isolation reasons. Now your cleverness has gotten you nowhere; your machine could believe it’s near 0 in the Unix epoch (Midnight, January 1st 1970) for an arbitrary amount of time.

Why is this a problem? One very mundane reason is that logfile analyzers don’t necessarily deal well with large jumps in the system clock, like the one that will happen when the system time finally gets set; that will bite you if you have to troubleshoot boot-time behavior later. Another is cron jobs firing inappropriately. Yet another is that the implementations of various network protocols can get confused by large time skew, even if they’re formally supposed to be able to handle it.

And I left out the fact that outright setting the system clock isn’t normal behavior for an NTP daemon, either. What it’s actually designed to do is collect small amounts of drift by speeding up or slowing down the clock until system time matches NTP time. And why is it designed to do this? If you guessed “because too many applications get upset by jumping time” you get a prize.

You can certainly tell an NTP daemon to set time rather than skewing the clock rate. But you do have to tell it to do that. This is a configuration knob that can be gotten wrong.

Are we perhaps beginning to see the problem here?

Engineering is tradeoffs. When you optimize for one figure of merit (like low cost) you are likely to end up pessimizing another, like proliferating possible failure modes. This is especially likely if an economy measure like leaving out an RTC requires interlocking compensations like having a GPS hat and configuring your time-service daemon exactly right.

The “too clever by half” mindset often wants to optimize demonstrating its own cleverness. This, of course, is something hackers are particularly prone to. It can be a virtue of sorts when you’re doing exploratory design, but not when you’re engineering a production system. I’m not the first person to point out that if you write code that’s as clever as you can manage, it’s probably too tricky for you to debug.

A particularly dangerous form of too clever by half is when you assume that you are smart enough for your design to head off all failure modes. This is the mindset Nassim Taleb calls “fragilista” – the overconfident planner who proliferates complexity and failure modes and gets blindsided when his fragile construct collides with messy reality.

Now I need to introduce the concept of an incident pit. This is a term invented by scuba divers. It describes a cascade that begins with a small thing going wrong. You try to fix the small thing, but the fix has an unexpected effect that lands you in more trouble. You try to fix that thing, don’t get it quite right, and are in bigger trouble. Now you’re under stress and maybe not thinking clearly. The next mistake is larger… A few iterations of this can kill a diver.

The term “incident pit” has been adopted by paramedics and others who have to make life-critical decisions. A classic XKCD cartoon, “Success”, captures how this applies to hardware and software engineering:

The XKCD cartoon

Too clever by half lands you in incident pits.

How do you avoid these? By designing to avoid failure modes. This is why “KISS” (“Keep It Simple, Stupid”) is an engineering maxim. Buy the RTC to foreclose the failure modes of not having one. Choose a small-form-factor system your buddy Phil the expert hardware troubleshooter is already using rather than novel hardware neither of you knows the ins and outs of.

Don’t get cute. Well, not unless your actual objective is to get cute – if I didn’t know that playfulness and deliberately pushing the envelope has its place I’d be a piss-poor hacker. But if you’re trying to bring up a production mailserver, or a production anything, cute is not the goal and you shouldn’t let your ego suck you into trying for the cleverest possible maneuver. That way lie XKCD’s sharks.

Posted Thu Apr 7 17:07:19 2016 Tags:

Most days, at least one of the bugs I deal with requests something along the lines of "just add $FOO as a config option". In this post, I'll explain why this is usually a bad solution. First, read http://www.islinuxaboutchoice.com/ and keep those arguments in mind. Generally, there are two groups of configuration options - hardware options and user options. Hardware options are those that deal with specific quirks needed on some hardware, but not on other hardware. User options are those that deal with user preferences such as tapping or two-finger vs. edge scrolling.

In the old synaptics driver, we added options whenever something new came up and we tried to make those options generic. This was a big mistake. The driver now has over 70 configuration options resulting in a test matrix with a googolplex of combinations. In other words, it's completely untestable. To make a device work users often have to find the right combination of options from somewhere, write out a root-owned config file and then hope this works. Why do we still think this is acceptable? Even worse: some options are very specific to hardware but still spread in user forum examples like an STD during spring break.

In libinput, we're having none of that. When hardware doesn't work we expect a user to file a bug, we get it fixed upstream for the specific model and thus automatically fix it for all users of that device. We're leaning heavily on udev's hwdb which we have extended to correct devices when the firmware announces wrong information. This has the advantage that there is only one authoritative source of quirks a device needs to work. And we can update this as time goes by without having to worry about stale configuration options. One good example here is the custom acceleration profile that Lenovo X230 touchpads have in libinput. All in all, there is little pushback for the lack of hardware-specific configuration options and most users are fine with it once they accept the initial waiting period to get the patch into their distribution.
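For flavor, a hwdb quirk is just a match pattern plus properties. The fragment below is illustrative; the match string and property name are modeled on libinput's model-quirks file and may not be exact:

```
# 90-libinput-model-quirks.hwdb (illustrative fragment; match pattern
# and property name are assumptions, not copied from the real file)
libinput:name:*SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO:*pvrThinkPadX230*
 LIBINPUT_MODEL_LENOVO_X230=1
```

Once such an entry lands upstream, every user of that touchpad model gets the quirk automatically, with no local configuration at all.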

User-specific options are more contentious. In our opinion, some features should be configurable and others should not. Where to draw that line is of course quite undefined. For example, tapping on or off was one of the first configuration options available and that was never a cause for arguments either way (except whether the default should be on or off). Other options are more contentious. Clickpad software buttons are always on the bottom edge and their size is hardcoded (synaptics allowed almost free positioning of those buttons). Other features such as changing a two-finger tap to some other button event is not supported at all in libinput. This effectively comes down to cost. You see, whenever you write "it's just 5 lines of code to make this an option", what I think is "once the patch is reviewed and applied, I'll spend two days to write test cases and documentation. I'll need to handle any bug reports related to this, and I'm expected to make sure this option works indefinitely. Any addition of another feature may conflict with this option, so I need to make sure the right combination is possible and test cases are written." So your work ends after writing a 5 line patch, my work as maintainer merely starts. And unless it pays off long-term, the effort is not worth it. Some features make that cut, others don't if they are too much of a niche feature.

All this is of course nothing new and every software project needs to make these decisions. Input isn't even a special case here, it pales in comparison with e.g. the decisions UI designers need to make. However, in FOSS we have a tendency to think that because something is possible, it should be done. Legally, you have freedom to do almost anything with the software, so you can maintain a local fork of libinput with that extra feature applied. If that isn't acceptable, why would it be acceptable to merge the patch and expect others to shoulder the costs?

Posted Thu Apr 7 00:11:00 2016 Tags:

here is my open letter to amazon.com, concerning audible.com, sent in response to a survey i received after finishing a chat session with a customer service representative:


Posted Tue Apr 5 23:06:35 2016 Tags:

I just pushed a patch to libinput master to enable a middle button on the clickpad software buttons. Until now, our stance was that clickpads only get a left and right software button, split at the 50% mark. The reasoning is simple: touchpads only have markings for left and right buttons (if any!) and the middle button's extents are not easily discoverable if there is no visual or haptic feedback. A middle button event could however be triggered through middle button emulation, i.e. by clicking the touchpad with a finger on the left and right software button area (see the instructions here).

This is nice in theory but, as usual, reality gets in the way. Most interactions with the middle button are quick and short-lived, i.e. clicking the button once to paste. This interaction is what many touchpads are spectacularly bad at. For middle button emulation to be handled correctly, both fingers must be registered before the physical button press. The scanout rate on a touchpad is often too low and on touchpads with extremely light resistance like the T440 it's common to register the physical click before we know that there's a second finger on the touchpad. But even on a T450 and an X220 with much higher clickpad resistance I barely managed to get above 7 out of 10 correctly registered middle button events. That is simply not good enough.

So the patch I just pushed out to master enables a middle software button between the left and the right button. The exact width of the button scales with the touchpad but it's usually around 20-25mm and it's centered on the touchpad so despite the lack of visual or haptic guides it should be reliable to hit. The new behaviour is hard-coded and for now middle button emulation continues to work on touchpads. In the future, I expect I will remove middle button emulation on touchpads or at least disable it by default.

The middle button will be available in libinput 1.3.

Posted Tue Apr 5 23:04:00 2016 Tags:

This may be the week the SJWs lost it all…or, at least, their power to bully people in the hacker culture and the wider tech community.

Many of you probably already know about the LambdaConf flap. In brief: LambdaConf, a technical conference on functional programming, accepted a presentation proposal about a language called Urbit, from a guy named Curtis Yarvin. I’ve looked at Urbit: it is very weird, but rather interesting, and certainly a worthy topic for a functional programming conference.

And then all hell broke loose. For Curtis Yarvin is better known as Mencius Moldbug, author of eccentric and erudite political rants and a focus of intense hatred by humorless leftists. Me, I’ve never been able to figure out how much of what Moldbug writes he actually believes; his writing seems designed to leave a reader guessing as to whether he’s really serious or executing the most brilliantly satirical long-term troll-job in the history of the Internet.

A mob of SJWs, spearheaded by a no-shit self-described Communist named Jon Sterling, descended on LambdaConf demanding that they cancel Yarvin’s talk, pretending that he (rather than, say, the Communist) posed a safety threat to other conference-goers. The conference’s principal organizers, headed up by one John de Goes, quite properly refused to cancel the talk, observing that Yarvin was there to talk about his code and not his politics.

I think they conceded too much to the SJWs, actually, by asking Yarvin to issue a statement about his views on violence. Nobody asked Jon Sterling whether he was down with that whole liquidation of the kulaks thing, after all, and if a Communist who likes to tweet about sending capitalists to “hard labor in the North” gets a pass it is not easy to see why any apologia was required from a man with no history of advocating violence at all.

But, ultimately, they did make the right decision: to judge Yarvin’s talk proposal by its technical merit alone. This is the hacker way.

The SJWs then attempted to pressure LambdaConf’s sponsors into withdrawing their support so the conference would have to be canceled. Several sponsors withdrew (I don’t know details about who; my sources for this part are secondhand).

So far, so wearily familiar – Marxist thugs versus free expression, with free expression’s chances not looking so hot. But here’s where the story gets good. Meredith Patterson and her friends at the blog Status 451 organized a counterpunch. They launched an IndieGoGo campaign Save LambdaConf …and an open society.

I got wind of this a bit less than two days ago and posted to G+ asking all 20K of my followers to chip in, something I’ve never done before. Because, like Merry, I understand that this wasn’t actually about Mencius Moldbug at all – it was about opposing a power play by the political-correctness police. The IndieGoGo campaign was our chance to strike back for liberty.

A day later it was fully funded. ClarkHat’s victory lap makes great reading.

I replied to congratulate ClarkHat: “@ClarkHat I don’t often ask my 20K G+ followers to support a crowdfunder, but when I do it’s hoping for a victory like this one.” And today I have 21K followers.

The hacker community has spoken, and it put its money where its mouth is, too. Now we know how to stop the SJWs in their tracks – fund what they denounce, make their hatred an asset, repeatedly kerb-stomp them with proof that their hate campaigns will be countered by the overwhelming will of the people and communities they thought they had bullied into submission.

I’m proud of my community for stepping up. I hope Sir Tim Hunt and Brendan Eich and Matt Taylor and other past victims of PC lynch mobs are smiling tonight. The SJWs’ preference-falsification bubble has popped; with a little work and a few more rounds of demonstration we may be able to prevent future lynchings entirely.

Posted Mon Apr 4 03:08:53 2016 Tags:

For at least five years now I’ve been telling myself that, as nifty as it would be to play with the hardware, I really shouldn’t spend money on a small-form-factor PC.

This was not an easy temptation to resist, because I found little systems like the Intel NUC fascinating. I’d look over the specs for things like that in on-line stores and drool. Replacing a big noisy PC seemed so attractive…but I always drew back my hand, because that hardware came with a premium pricetag and I already have working kit.

Then, tonight, I’m over at my friend Phil Salkie’s place. Phil is a hardware and embedded-programming guy par excellence; I know he builds small-form-factor systems for industrial applications. And tonight he’s got a new toy to show off, a Taiwanese mini-ITX box called a Jetway.

He says “$79 on Amazon”, and I say “I’ve thought about replacing my mailserver with something like that, but could never cost-justify it.” Phil looks at me and says “You should. These things lower your electric bills – it’ll pay itself off inside of a year.”

Oh. My. Goddess. Why didn’t I think of that?

Because of course he’s right. A fanless low-power design doesn’t constantly dissipate 150 watts or more. Especially not if you drop an SSD in it so it’s not spinning rust constantly. There’s going to be a break-even point past which your drop in power consumption pays off the up-front cost.

Now to be fair to my own previous hesitation, it might be that the payback period was too long to be more than a theoretical justification until quite recently. But SSDs have been dropping in price pretty dramatically of late and when the base cost of the box is $79 you don’t have to collect a lot of savings per month to keep the payoff time below 12 of them.
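The break-even arithmetic is simple. Here's a sketch with hypothetical numbers; the new box's draw and the electricity rate are assumptions, not measurements:

```python
# Back-of-envelope payback estimate. The wattages and rate below are
# assumptions for illustration; only the hardware cost is from the post.
old_watts, new_watts = 150, 15    # assumed: tower PC vs. fanless box + SSD
rate_per_kwh = 0.13               # assumed US residential rate, $/kWh
hardware_cost = 217.0             # box + SSD + RAM, as quoted

saved_kwh_per_month = (old_watts - new_watts) * 24 * 30 / 1000
saved_dollars_per_month = saved_kwh_per_month * rate_per_kwh
payback_months = hardware_cost / saved_dollars_per_month
print(f"~{payback_months:.0f} months to break even")
```

With those assumptions the payback lands around a year and a half; at the bare $79 box price it would indeed be well under a year, as Phil said.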

I’m expecting the new hardware (which I have mentally dubbed “the microBeast”) to arrive in two days. I ended up spending a bit more than that $79 to get a 250GB SSD; with the DDR3 RAM the whole thing came to $217. This is pretty close to what I’d pay for yet another generic tower PC at my local white-box emporium, maybe a bit less – itself a sign that the crossover point on these things has indeed arrived.

I could have gone significantly cheaper with a conventional laptop drive, but I decided to spend a bit more up front to pull the power dissipation and longer-term savings as low as possible. Besides, I like quiet systems; the “no bearing noise” feature seemed attractive.

I’ll take notes on the microBeast installation and probably post a report here when I have it done.

Posted Sun Apr 3 06:33:33 2016 Tags:

Hi, I was one of the authors/bikeshedders of BIP9, which Pieter Wuille recently refined (and implemented) into its final form.  The bitcoin core plan is to use BIP9 for activations from now on, so let’s look at how it works!

Some background:

  • Blocks have a 32-bit “version” field.  If the top three bits are “001”, the other 29 bits represent possible soft forks.
  • BIP9 uses the same 2016-block periods (roughly 2 weeks) as the difficulty adjustment does.

So, let’s look at BIP68 & 112 (Sequence locks and OP_CHECKSEQUENCEVERIFY) which are being activated together:

  • Every soft fork chooses an unused bit: these are using bit 1 (not bit 0), so expect to see blocks with version 536870914.
  • Every soft fork chooses a start date: these use May 1st, 2016, and time out a year later if it fails.
  • Every period, we look back to see if 95% have a bit set (75% for testnet).
    • If so, and that bit is for a known soft fork, and we’re within its time window, that soft fork is locked in: it will activate after another 2016 blocks, giving the stragglers time to upgrade.
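The signalling check itself is tiny. Here's an illustrative sketch (mine, not the bitcoin-core implementation):

```python
# BIP9-style signalling check: top 3 bits of the version must be 001,
# and the soft fork's chosen bit must be set.
TOP_BITS = 0b001

def signals(version: int, bit: int) -> bool:
    """True if a block's version field signals the given soft-fork bit."""
    return (version >> 29) == TOP_BITS and (version >> bit) & 1 == 1

print(signals(536870914, 1))  # → True: 0x20000002 sets bit 1
print(signals(536870914, 0))  # → False
```

An old-style version number like 4 fails the top-bits test, so it can never be mistaken for a BIP9 signal.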

There are also two alerts in the bitcoin core implementation:

  • If at any stage 50 of the last 100 blocks have unexpected bits set, you get Warning: Unknown block versions being mined! It’s possible unknown rules are in effect.
  • If we see an unknown softfork bit activate: you get Warning: unknown new rules activated (versionbit X).

Now, when could the OP_CSV soft forks activate? bitcoin-core will only start setting the bit in the first period after the start date, so somewhere between 1st and 15th of May[1], then will take another period to lock-in (even if 95% of miners are already upgraded), then another period to activate.  So early June would be the earliest possible date, but we’ll get two weeks notice for sure.

The Old Algorithm

For historical purposes, I’ll describe how the old soft-fork code worked.  It used version as a simple counter, eg. 3 or above meant BIP66, 4 or above meant BIP65 support.  Every block, it examined the last 1000 blocks to see if more than 75% had the new version.  If so, then the new softfork rules were enforced on new version blocks: old version blocks would still be accepted, and use the old rules.  If more than 95% had the new version, old version blocks would be rejected outright.
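As a sketch (simplified and illustrative, not the actual bitcoin-core code), the old rule was:

```python
# Old soft-fork counting rule: look at the last 1000 block versions and
# apply the 75% / 95% thresholds described above.
def old_softfork_status(last_1000_versions, new_version):
    count = sum(1 for v in last_1000_versions if v >= new_version)
    enforce = count > 750      # 75%: enforce new rules on new-version blocks
    reject_old = count > 950   # 95%: reject old-version blocks outright
    return enforce, reject_old

print(old_softfork_status([4] * 800 + [3] * 200, 4))  # → (True, False)
```

Because the version field was a simple counter, every node had to re-scan the window on each block, and there was no way to signal one fork without implying support for all its predecessors.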

I remember Gregory Maxwell and other core devs stayed up late several nights because BIP66 was almost activated, but not quite.  And as a miner there was no guarantee on how long before you had to upgrade: one smaller miner kept producing invalid blocks for weeks after the BIP66 soft fork.  Now you get two weeks’ notice (probably more if you’re watching the network).

Finally, this change allows for miners to reject a particular soft fork without rejecting them all.  If we’re going to see more contentious or competing proposals in the future, this kind of plumbing allows it.

Hope that answers all your questions!


 

[1] It would be legal for an implementation to start setting it on the very first block past the start date, though it’s easier to only think about version bits once every two weeks as bitcoin-core does.

Posted Fri Apr 1 01:28:22 2016 Tags:

The people at Netdev 1.1 have posted the recording of my talk (24 minutes) on Youtube.

I personally don't really like watching talks online. In case you're like me (or want additional detail that didn't fit in 24 minutes), you can also download my slides (pdf), including extensive speaker notes.

Posted Mon Mar 28 05:24:06 2016 Tags:

Over on G+, Peter da Silva wrote: ‘I just typoed “goatee” as “gloatee” and now I’m wondering why it wasn’t always spelled that way.’ #evilviziersrepresent #muahaha

The estimable Mr. da Silva is sadly in error. I played the evil vizier in the first run of the Arabian Nights LARP back in 1987. No goatee, and didn’t gloat even once, was much too busy being efficiently cruel and clever.

What, you think this sort of thing is just fun and games? Despotic oriental storybook kingdoms don’t run themselves, you know. That takes functionaries. Somebody gotta keep the wheels turning while that overweight good-for-nothing Caliph lounges on his divan smoking bhang and being fanned by slavegirls. Or being bhanged by slavegirls and smoking his divan. Whatever.

A thankless job it is too. You keep everything prosperous and orderly with a bare minimum of floggings, beheadings, castrations, and miscreants torn apart by camels, and your reward is a constant stream of idiot heroes with oversized scimitars trying to slit your weasand. With the Caliph’s daughter looking all starry-eyed as they try it on – now there’s a girl who’s way too impressed by an oversized, er, scimitar.

Now if you’ll excuse me I need to go see a man about a lamp.

Posted Sun Mar 27 19:20:47 2016 Tags:
Common practice is to cite webpages for the software, or sometimes even to just list the name. I do not understand why scholars do not en masse look up the research papers that are associated with the software. As a reviewer of research papers I often have to advise authors to revise their manuscript accordingly, but I think this is something that should be caught by the journal itself. Fact is, not all reviewers seem to check this.

If publishers also start taking this seriously, we will one day have citation metrics for software like we have for research papers and increasingly for data (see also this brief idea). You can support this by assigning DOIs to software releases, e.g. using ZENODO. This list on our research group's webpage shows some of the software releases:


My advice for citing software thus goes a bit beyond what is traditionally requested of authors:

  1. cite the journal article(s) for the software that you use
  2. cite the specific software release version using ZENODO (or compatible) DOIs

This tweet gives some advice about citing software, and triggered this blog post:
Citations inside software
Daniel Katz goes a step further and asks how we should add citations inside software. After all, software reuses knowledge too, stands on algorithmic shoulders, and this can be a lot. This is something I can relate to a lot: if you write a cheminformatics software library, you use a ton of algorithms, all of which are written up somewhere. Joerg Wegner did this too in his JOELib, and we adopted this idea for the Chemistry Development Kit.

So, the output looks something like:


(Yes, I spot the missing page information. But rather than the information being missing, it's more that this was an online-only journal, and the renderer cannot handle that well. BTW, here you can find this paper; it was my first first-author paper.)

However, at a Java source code level it looks quite different:


The build process is taking advantage of the JavaDoc taglet API and uses a BibTeXML file with the literature details. The taglet renders it to full HTML as we saw above.

Bioclipse does not use this in the source code, but does have the equivalent of a CITATION file: the managers, that extend the Python, JavaScript, and Groovy scripting environments with domain specific functionality (well, read the paper!). You can ask in any of these scripting languages about citation information:

    > doi bridgedb

This will open the webpage of the cited article (which sometimes opens in Bioclipse, sometimes in an external browser, depending on how it is configured).

At a source code level, this looks like:


So, here are my few cents. Software citation is important!
Posted Sun Mar 27 09:01:00 2016 Tags: