
Union syntax

(I'm trying to do this as a quick post in response to some questions I received on this topic. I realize this will probably reopen the whole discussion about the best syntax for types, but sorry folks, PEP 484 was accepted nearly a year ago, after many months of discussions and hundreds of messages. It's unlikely that any idea you can think of here would be new. This post just explains the rationale of one particular decision and tries to put it in some context.)
I've heard some grumbling about the union syntax in PEP 484: Union[X, Y, Z] (where X, Y and Z are arbitrary type expressions). In the past people have suggested X|Y|Z for this, or (X, Y, Z) or {X, Y, Z}. Why did we go with the admittedly clunkier Union[X, Y, Z]?

First of all, despite all the attention drawn to it, unions are actually a pretty minor feature, and you shouldn't be using them much. So you also shouldn't care that much.

Why not X|Y|Z?

This won't fly because we want compatibility with versions of Python 3 that were already frozen (see below). We want to be able to express e.g. a union of int and str, which under this notation would be written as int|str. But for that to fly we'd have to modify the builtin 'type' class to implement __or__ -- and that wouldn't fly on already-frozen Python versions. Supporting X|Y only for types (like List) imported from the typing module and some other notation for builtin types would only sow confusion. So X|Y|Z is out.
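To make the rejected idea concrete, here is a toy sketch (hypothetical classes, not the typing module's real implementation) of why X|Y could only have worked for objects the typing module controls:

```python
# Toy sketch: a class we control can implement __or__ ourselves.
class TypeExpr:
    def __init__(self, name):
        self.name = name

    def __or__(self, other):
        # Accept either another TypeExpr or a plain class like int.
        other_name = other.name if isinstance(other, TypeExpr) else other.__name__
        return TypeExpr('Union[%s, %s]' % (self.name, other_name))

List = TypeExpr('List')
Dict = TypeExpr('Dict')

assert (List | Dict).name == 'Union[List, Dict]'  # fine: List is ours
assert (List | int).name == 'Union[List, int]'    # fine: left operand is ours

# But int | str dispatches to the builtin type's __or__, which frozen
# releases can't grow.
```

(History vindicated the idea eventually: Python 3.10's PEP 604 added exactly this `__or__` to type, so int | str is legal there, but that option wasn't available for the already-released versions PEP 484 targeted.)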

Why not {X, Y, Z}?

That's the set with elements X, Y and Z, using the builtin set notation. We can usefully consider types to be sets of values, and this makes a union a set of values too (that's why it's called union :-).

However, {X, Y, Z} confuses the set of types with the set of values, which I consider a mortal sin. This would just cause endless confusion.

This notation would also confuse things when taking the union of several classes that overlap, e.g. if we have classes B and C, where C inherits from B, then the union of B and C is just B. But the builtin set doesn't see it that way. In contrast, the X|Y notation could actually solve this (since in principle we could overload __or__ to do whatever we want), and the Union[] operator ("functor"?) from PEP 484 indeed solves this -- in this example Union[B, C] returns the (non-union) type B, both in the type checker and at runtime.

Why not (X, Y, Z)?

That's the tuple (X, Y, Z). It has the same disadvantages as {X, Y, Z}, but at least it has the advantage of being similar to how unions are expressed as arguments to isinstance(), for example isinstance(x, (int, str, list)) or isinstance(x, (Sequence, Mapping)). (Similarly the except clause: try: ... / except (KeyError, IndexError): ...)

Another problem with tuples is that the tuple syntax is already overloaded in so many ways that it would be confused with other uses even more easily. One particular confusion would be other generic types, for which we'd still want to use square brackets. (You can't really beat Iterable[int] for clarity if you have an iterable of integers. :-) Suppose you have a sequence of values that could be integers or strings. In PEP 484 notation we write this as Sequence[Union[int, str]]. Using the tuple notation we'd want to write this as Sequence[(int, str)]. But it turns out that the __getitem__ overload on the metaclass can't tell the difference between Sequence[(int, str)] and Sequence[int, str] -- and we would like to reject the latter as a mistake since Sequence[] is a generic class over a single parameter. (An example of a generic class over two parameters would be Mapping[K, V].) Disambiguating all this would place us on very thin ice indeed.
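The ambiguity is easy to demonstrate with a small metaclass sketch (hypothetical names, not the typing module's code). Python turns a comma-separated subscript into a tuple before __getitem__ ever sees it:

```python
class Meta(type):
    def __getitem__(cls, item):
        return item  # echo exactly what the metaclass receives

class Seq(metaclass=Meta):
    pass

# Both spellings deliver the identical tuple to __getitem__:
assert Seq[(int, str)] == (int, str)
assert Seq[int, str] == (int, str)
```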

The nail in this idea's coffin is the competing idea of using (X, Y, Z) to indicate a tuple with three items, with respective types, X, Y and Z. At first sight this seems an even better use of the tuple syntax than unions would be, and tuples are way more common than unions. But it runs afoul of the same problems with Foo[(X, Y)] vs. Foo[X, Y]. (Also, there would be no easy way to describe what PEP 484 calls Tuple[X, ...], i.e. a variable-length tuple with uniform item type X.)
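For comparison, PEP 484's actual tuple notation keeps the square brackets, which avoids the ambiguity entirely:

```python
from typing import Tuple

Point = Tuple[int, int]   # a tuple of exactly two ints
Names = Tuple[str, ...]   # a variable-length tuple with uniform item type str

point: Point = (3, 4)
names: Names = ('ann', 'bob', 'cid')
```

At runtime these annotations are ordinary typing objects; only a type checker enforces them.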

PS. Why support old Python 3 versions?

The reason for supporting older versions is adoption. Only a relatively small crowd of early adopters can upgrade to the latest Python version as soon as it's out; the rest of us are stuck on older versions (even Python 2.7!).

So for PEP 484 and the typing module, we wanted to support 3.2 and up -- we chose 3.2 because it's the newest Python 3 supported by some older but still popular Ubuntu and Debian distributions. (Also, 3.0 and 3.1 were too immature at their time of release to ever have a large following.)

There's a typing package that you can install easily using pip, and this defines all sorts of useful things for typing, from Any and Union to generic versions of List and Sequence. But such a package can't modify existing builtins like int or list.

(Eventually we also added Python 2.7 support, using type comments for function signatures.)
Posted Wed May 18 18:55:00 2016 Tags:

Type annotations for fspath

Python 3.6 will have a new dunder protocol, __fspath__(), which should be supported by classes that represent filesystem paths. Examples of such classes are the pathlib.Path family and os.DirEntry (returned by os.scandir()).

You can read more about this protocol in the brand new PEP 519. In this blog post I’m going to discuss how we would add type annotations for these additions to the standard library.

I’m making frequent use of AnyStr, a quite magical type variable predefined in the typing module. If you’re not familiar with it, I recommend reading my blog post about AnyStr. You may also want to read up on generics in PEP 484 (or read mypy’s docs on the subject).

Adding os.scandir() to the stubs for os

For practice, let’s see if we can add something to the stub file for os. As of this writing there’s no typeshed information for os.scandir(), which I think is a shame. I think the following will do nicely. Note how we only define DirEntry and scandir() for Python versions >= 3.5. (Mypy doesn’t support this yet, but it will soon, and the example here still works — it just doesn’t realize scandir() is only available in Python 3.5.) This could be added to the end of stdlib/3/os/__init__.pyi:

import sys
from typing import Generic, AnyStr, overload, Iterator

if sys.version_info >= (3, 5):

    class DirEntry(Generic[AnyStr]):
        name = ...  # type: AnyStr
        path = ...  # type: AnyStr
        def inode(self) -> int: ...
        def is_dir(self, *, follow_symlinks: bool = ...) -> bool: ...
        def is_file(self, *, follow_symlinks: bool = ...) -> bool: ...
        def is_symlink(self) -> bool: ...
        def stat(self, *, follow_symlinks: bool = ...) -> stat_result: ...

    @overload
    def scandir() -> Iterator[DirEntry[str]]: ...
    @overload
    def scandir(path: AnyStr) -> Iterator[DirEntry[AnyStr]]: ...

Deconstructing this a bit, we see a generic class (that’s what the Generic[AnyStr]  base class means) and an overloaded function.  The scandir() definition uses @overload because it can also be called without arguments. We could also write it as follows; it’ll work either way:

    @overload
    def scandir(path: str = ...) -> Iterator[DirEntry[str]]: ...
    @overload
    def scandir(path: bytes) -> Iterator[DirEntry[bytes]]: ...

Either way there really are three ways to call scandir() , all three returning an iterable of DirEntry objects:

  • scandir() -> Iterator[DirEntry[str]] 
  • scandir(str) -> Iterator[DirEntry[str]] 
  • scandir(bytes) -> Iterator[DirEntry[bytes]] 
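You can observe those parameterizations at runtime on Python 3.5 or later (the stdlib directory is used here only because it is guaranteed to be non-empty):

```python
import os

lib_dir = os.path.dirname(os.__file__)  # the stdlib directory: never empty
str_entry = next(iter(os.scandir(lib_dir)))
bytes_entry = next(iter(os.scandir(os.fsencode(lib_dir))))

# A str argument yields DirEntry[str]; a bytes argument yields DirEntry[bytes].
assert isinstance(str_entry.name, str) and isinstance(str_entry.path, str)
assert isinstance(bytes_entry.name, bytes) and isinstance(bytes_entry.path, bytes)
```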

Adding os.fspath()

Next I’ll show how to add os.fspath() and how to add support for the __fspath__()  protocol to DirEntry .

PEP 519 defines a simple ABC (abstract base class), PathLike, with one method, __fspath__(). We need to add this to the stub for os, as follows:

class PathLike(Generic[AnyStr]):
    def __fspath__(self) -> AnyStr: ...

That’s really all there is to it (except for the sys.version_info  check, which I’ll leave out here since it doesn’t really work yet). Next we define os.fspath() , which wraps this protocol. It’s slightly more complicated than just calling its argument’s __fspath__()  method, because it also handles strings and bytes. So here it is:

@overload
def fspath(path: PathLike[AnyStr]) -> AnyStr: ...
@overload
def fspath(path: AnyStr) -> AnyStr: ...
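At runtime, os.fspath() roughly does the following. This is a sketch reconstructed from PEP 519's description, not CPython's actual implementation, and FakePath is a made-up class for illustration:

```python
def fspath(path):
    """Rough sketch of os.fspath() per PEP 519 (not CPython's code)."""
    if isinstance(path, (str, bytes)):
        return path  # strings and bytes pass through unchanged
    # Look up __fspath__ on the type, the way dunder protocols are resolved:
    dunder = getattr(type(path), '__fspath__', None)
    if dunder is None:
        raise TypeError('expected str, bytes or os.PathLike object, not '
                        + type(path).__name__)
    return dunder(path)

class FakePath:
    def __fspath__(self):
        return '/tmp/example'

assert fspath('a/b') == 'a/b'
assert fspath(FakePath()) == '/tmp/example'
```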

Easy enough! Next we update the definition of DirEntry. That’s easy too — in fact we only need to make it inherit from PathLike[AnyStr]; the rest is the same as the definition I gave above:

class DirEntry(PathLike[AnyStr], Generic[AnyStr]):
    # Everything else unchanged!

The only slightly complicated bit here is the extra base class Generic[AnyStr]. This seems redundant, and in fact PEP 484 says we can leave it off, but mypy doesn’t support that yet, and it’s quite harmless — it just rubs it in mypy’s face that this is a generic class of one type variable (the by-now famous AnyStr).

Finally we need to make a similar change to the stub for pathlib. Again, all we need to do is make PurePath inherit from PathLike[str], like so:

from os import PathLike

class PurePath(PathLike[str]):
    # Everything else unchanged!

However, here we don’t add Generic, because this is not a generic class! It inherits from PathLike[str], which is quite un-generic, since it’s PathLike specialized for just str.

Note that we don’t actually have to define the __fspath__()  method in these stubs — we’re not supposed to call them directly, and stubs don’t provide implementations, only interfaces.

Putting it all together, we see that it’s quite elegant:

for a in os.scandir('.'):
    b = os.fspath(a)
    # Here, the typechecker will know that the type of b is str!

The derivation that b has type str is not too complicated: first, os.scandir('.') has a str argument, so it returns an iterator of DirEntry objects parameterized with str, which we write as DirEntry[str]. Passing this DirEntry[str] to os.fspath() then selects the first of that function’s two overloads (the one with PathLike[AnyStr]), since it doesn’t match the second one (DirEntry doesn’t inherit from AnyStr, because it’s neither str nor bytes). Further, the AnyStr type variable in PathLike[AnyStr] is solved to stand for just str, because DirEntry[str] inherits from PathLike[str]. This is the specialized version of what the code says: DirEntry[AnyStr] inherits from PathLike[AnyStr].
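The same derivation can be checked concretely at runtime on Python 3.6+ (the stdlib directory is used here just because it is never empty):

```python
import os

lib_dir = os.path.dirname(os.__file__)  # guaranteed non-empty directory
entry = next(iter(os.scandir(lib_dir)))

b = os.fspath(entry)
assert isinstance(b, str)           # DirEntry[str] -> str, matching the derivation
assert os.fspath('x/y') == 'x/y'    # plain strings pass through unchanged
```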

Okay, so maybe that last paragraph was intermediate or advanced. And maybe it could be expanded. Maybe I’ll write another blog about how type inference works, but there’s a lot on that topic, and other authors have probably already written better introductory material about generics (in other languages, though).

Making things accept PathLike

There’s a bit of cleanup work that I’ve left out. PEP 519 says that many stdlib functions that currently take strings for pathnames will be modified to also accept PathLike . For example, here’s how the signatures for os.scandir()  would change:

def scandir() -> Iterator[DirEntry[str]]: ...
def scandir(path: AnyStr) -> Iterator[DirEntry[AnyStr]]: ...
def scandir(path: PathLike[AnyStr]) -> Iterator[DirEntry[AnyStr]]: ...

The first two entries are unchanged; I’ve just added a third overload. (Note that the alternative way of defining scandir() would require more changes — an indication that this way is more natural.)

I also tried doing this with a union:

def scandir() -> Iterator[DirEntry[str]]: ...
def scandir(path: Union[AnyStr, PathLike[AnyStr]]) -> Iterator[DirEntry[AnyStr]]: ...

But I couldn’t get this to work, so the extra overload is probably the best we can do. Quite a few functions will require a similar treatment, sometimes introducing overloading where none exists today (but that shouldn’t hurt anything).

A note about pathlib : since it only deals with strings, its methods (the ones that PEP 519 says should be changed anyway) should use PathLike[str]  rather than PathLike[AnyStr] .
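A quick sanity check of the pathlib side (PurePosixPath is used so the separator is predictable on any platform):

```python
import os
import pathlib

p = pathlib.PurePosixPath('src/main.py')
assert isinstance(p, os.PathLike)      # PurePath satisfies the PathLike ABC
assert os.fspath(p) == 'src/main.py'   # __fspath__() returns str, never bytes
```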


(Thanks for comments on the draft to Stephen Turnbull, Koos Zevenhoven, Ethan Furman, and Brett Cannon.)
Posted Wed May 18 14:06:00 2016 Tags:

The AnyStr type variable

I was drafting a blog post on how to add type annotations for the new __fspath__()  protocol (PEP 519) when I realized that I should write a separate post about AnyStr . So here it is.

A simple function on strings

Let’s write a function that surrounds a string in parentheses. We’ll put it in a file named demo.py:

def parenthesize(s):
    return '(' + s + ')'

It works, too:

>>> from demo import parenthesize
>>> print(parenthesize('hola'))
(hola)

Of course, if you pass it something that’s not a string it will fail:

>>> parenthesize(42)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "demo.py", line 2, in parenthesize
TypeError: Can't convert 'int' object to str implicitly

Adding type annotations

Using PEP 484 type annotations we can clarify our little function’s signature:

def parenthesize(s: str) -> str:
    return '(' + s + ')'

Nothing to it, right? Even if you’ve never heard of PEP 484 before you can guess what this means. (Note that PEP 484 also says that the runtime behavior is unchanged. The calls I showed above will still have exactly the same effect, including the TypeError raised by parenthesize(42) .)

Polymorphic functions

Now suppose this is actually part of a networking app and we need to be able to parenthesize byte strings as well as text strings. Here’s how you’d implement that:

def parenthesize(s):
    if isinstance(s, str):
        return '(' + s + ')'
    elif isinstance(s, bytes):
        return b'(' + s + b')'
    else:
        raise TypeError(f"That's not a string, it's a {type(s)}")  # See PEP 498

With a fancy word we call that a polymorphic function. How do you write a signature for such a function? For the answer we have to dive a little deeper into PEP 484. It defines a nifty operator named Union  that lets us state that a type can be either this or that (or something else). In our case, it’s either str  or bytes , so we can write it like this:

from typing import Union

def parenthesize(s: Union[str, bytes]) -> Union[str, bytes]:
    if isinstance(s, str):
    # Etc.

Now let’s write a little main program with a bug, to show off the type checker:

from demo import parenthesize

a = parenthesize('hello')
b = parenthesize(b'hola')
c = a + b  # <-- bug here

When we try to run this, the two parenthesize()  calls work fine (yay polymorphism!) but we get a TypeError on the last line:

$ python3 
Traceback (most recent call last):
  File "", line 5, in
    c = a + b  # <-- bug here
TypeError: Can't convert 'bytes' object to str implicitly

The reason should be pretty obvious: in Python 3 you can’t mix bytes and str objects. And when we type-check this program using mypy we indeed get a type error:

$ mypy error: Unsupported operand types for + (likely involving Union)

Debugging the bug

So let’s try a program without a bug:

from demo import parenthesize

a = parenthesize('hello')
b = parenthesize('hola')
c = a + b  # <-- no bug here

Run it and it works great:

$ python3

So the type checker should be happy too, right?

$ mypy error: Unsupported operand types for + (likely involving Union)

Whoops! The same error. What happened? Of course, I set you up, so I can explain something about type checking.

The trouble with tribbles unions

The type checker takes the signature at face value, so that when checking the call, it infers the type Union[str, bytes]  for every call to parenthesize() , regardless of what the arguments are. This is because, for most functions of even modest complexity, a type checker doesn’t understand enough about what’s going on in the function body, so it just has to believe the types in the signature (even though in this particular case it would probably be easy enough to do better).

In our test program the types of a  and b  are both inferred to be exactly what parenthesize()  claims to return, i.e., both variables have the type Union[str, bytes] . The type checker then analyzes the expression a + b , and for this it discovers a problem: if a is either str or bytes, and so is b , then the +  operator may be invoked on any of these combinations of types: str + str , str + bytes , bytes + str , or bytes + bytes . But only the first and the last are valid! In Python 3, str + bytes  or bytes + str  are invalid operations.

Aside: even in Python 2, those two are suspect: while 'x' + u'y' indeed works (returning u'xy'), other combinations will raise UnicodeDecodeError, e.g.:

>>> 'Franç' + u'ois'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 4:
ordinal not in range(128)

Anyway, the type checker doesn’t like this business, and it rejects operations on Unions where some combinations are invalid. What can we do instead?

Function overloading

One option would be function overloading. PEP 484 defines a magical decorator, @overload , which lets us get around this problem. We could write something like this:

from typing import overload

@overload
def parenthesize(s: str) -> str: ...
@overload
def parenthesize(s: bytes) -> bytes: ...

This tells the type checker that if the argument is a str , the return value is also a str , and similarly for bytes . Unfortunately @overload  is only allowed in stub files, which are a kind of interface definition files that show a type checker the signatures of a module’s contents without giving the implementation.

Type variables

Fortunately there’s an even better way, using type variables. This is how it goes:

from typing import TypeVar

S = TypeVar('S')

def parenthesize(s: S) -> S:
    if isinstance(s, str):
        return '(' + s + ')'
    elif isinstance(s, bytes):
        return b'(' + s + b')'
    else:
        raise TypeError("That's not a string, dude! It's a %s" % type(s))

Well… Almost. Our program (unchanged from above) now gets a clean bill of health, but when we type-check this version we get errors on both return lines:

note: In function "parenthesize":
error: Incompatible return value type: expected S`-1, got builtins.str
error: Incompatible return value type: expected S`-1, got builtins.bytes

This is a bit hard to fathom, but the fix is what I was leading up to anyway, so I’ll reveal it now:

from typing import TypeVar

S = TypeVar('S', str, bytes)

def parenthesize(s: S) -> S:
    if isinstance(s, str):
        return '(' + s + ')'
    elif isinstance(s, bytes):
        return b'(' + s + b')'
    else:
        raise TypeError("That's not a string, dude! It's a %s" % type(s))

The only changed line is this one:

S = TypeVar('S', str, bytes)

This notation is called a type variable with value restriction. Yes, it’s a mouthful; we sometimes also call it a constrained type variable. S is a type variable restricted to a set of types. It also has the advantage of telling the type checker that types other than str or bytes are not acceptable. Without that, a call like this would have been considered valid:

x = parenthesize(42)

because the original type variable (without the restrictions) doesn't tell mypy that this is a bad idea.

In fact, this particular use case (a type variable constrained to str or bytes) is so commonly needed that it's predefined in the typing module, and all we have to do is import it:

from typing import AnyStr

def parenthesize(s: AnyStr) -> AnyStr:
    # Etc. -- trust me, it works!
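Spelled out in full, so you can actually run it, here is that AnyStr version with the same body as before:

```python
from typing import AnyStr

def parenthesize(s: AnyStr) -> AnyStr:
    if isinstance(s, str):
        return '(' + s + ')'
    elif isinstance(s, bytes):
        return b'(' + s + b')'
    else:
        raise TypeError("That's not a string, dude! It's a %s" % type(s))

assert parenthesize('hola') == '(hola)'
assert parenthesize(b'hola') == b'(hola)'
```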

Real-world use of AnyStr

In fact, this is how many polymorphic functions in the os and os.path modules are defined. For example, in the stubs for these modules we find definitions like the following:

def link(src: AnyStr, link_name: AnyStr) -> None: ...

and also this:

def split(path: AnyStr) -> Tuple[AnyStr, AnyStr]: ...

These show us a bit more of the power of type variables: the signature for link()  indicates that either both arguments must be str  or both must be bytes ; split()  demonstrates that the type variable may also occur in more complex constructs: splitting a str returns a tuple of two str objects, while splitting bytes returns a tuple of two bytes  objects.
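The split() behavior is easy to check at runtime; posixpath (the POSIX flavor behind os.path on Unix) is used so the example is platform-independent:

```python
import posixpath

# str in -> pair of str out; bytes in -> pair of bytes out.
assert posixpath.split('/usr/lib/python') == ('/usr/lib', 'python')
assert posixpath.split(b'/usr/lib/python') == (b'/usr/lib', b'python')
```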

That’s all I wanted to share about AnyStr . Thanks for comments on the draft to Stephen Turnbull, Koos Zevenhoven, Ethan Furman, and Brett Cannon.

Posted Tue May 17 16:53:00 2016 Tags:

Recently Dave Täht wrote a blog post investigating latency and WiFi scanning and came across NetworkManager’s periodic scan behavior.  When a WiFi device scans it obviously must change from its current radio channel to other channels and wait for a short amount of time listening for beacons from access points.  That means it’s not passing your traffic.

With a bad driver it can sometimes take 20+ seconds and all your traffic gets dropped on the floor.

With a good driver scanning takes only a few seconds and the driver breaks the scan into chunks, returning to the associated access point’s channel periodically to handle pending traffic.  Even with a good driver, latency-critical applications like VOIP or gaming will clearly suffer while the WiFi device is listening on another channel.

So why does NetworkManager periodically scan for WiFi access points?


Roaming

Whenever your WiFi network has multiple access points with the same SSID (or a dual-band AP with a single SSID) you need roaming to maintain optimal connectivity and speed.  Jumping to a better AP requires that the device know what access points are available, which means doing a periodic scan like NetworkManager does every 2 minutes.  Without periodic scans, the driver must scan at precisely the worst moment: when the signal quality is bad, and data rates are low, and the risk of disconnecting is higher.

Enterprise WiFi setups make the roaming problem much worse because they often have tens or hundreds of access points in the network and because they typically use high-security 802.1x authentication with EAP.  Roaming with 802.1x introduces many more steps to the roaming process, each of which can fail the roaming attempt.  Strategies like pre-authentication and periodic scanning greatly reduce roaming errors and latency.

User responsiveness and Location awareness

The second reason for periodic scanning is to maintain a list of access points around you for presentation in user interfaces and for geolocation in browsers that support it.  Up until a couple years ago, most Linux WiFi applets displayed a drop-down list of access points that you could click on at any time.  Waiting for 5 to 15 seconds for a menu to populate or ‘nmcli dev wifi list’ to return would be annoying.

But with the proliferation of WiFi (often more than 30 or 40 if you live in a flat) those lists became less and less useful, so UIs like GNOME Shell moved to a separate window for WiFi lists.  This reduces the need for a constantly up-to-date WiFi list and thus for periodic scanning.

To help support these interaction models and click-to-scan behaviors like those of Mac OS X or Maemo, NetworkManager long ago added a D-Bus API method to request an out-of-band WiFi scan. While it’s pretty trivial to use this API to initiate geolocation or to refresh the WiFi list based on specific user actions, I’m not aware of any clients using it well. GNOME Shell only requests scans when the network list is empty, and plasma-nm only does so when the user clicks a button. Instead, UIs should simply request scans periodically while the WiFi list is shown, removing the need for yet another click.


If you don’t care about roaming, and I’m assuming David doesn’t, then NetworkManager offers a simple solution: lock your WiFi connection profile to the BSSID of your access point.  When you do this, NetworkManager understands that you do not want to roam and will disable the periodic scanning behavior.  Explicitly requested scans are still allowed.
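With nmcli that looks something like this (a configuration sketch; "home-wifi" and the BSSID are placeholders for your own connection name and AP address):

```shell
# Pin the profile to one AP; NetworkManager then skips periodic
# background scans for this connection. Replace both placeholder values.
nmcli connection modify home-wifi 802-11-wireless.bssid 00:11:22:33:44:55
```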

You can also advocate that your favorite WiFi interface add support for NetworkManager’s RequestScan() API method and begin requesting periodic scans when WiFi lists are shown or when your browser uses geolocation.  When most do this, perhaps NetworkManager could be less aggressive with its own periodic scans, or perhaps remove them altogether in favor of a more general solution.

That general solution might involve disabling periodic scanning when the signal strength is extremely good and scanning more aggressively when the signal strength drops below a threshold.  But signal strength drops for many reasons, like turning on a microwave, closing doors, turning on Bluetooth, or even walking to the next room, and triggering a scan then still interrupts your VOIP call or low-ping headshot.  This also doesn’t help people who aren’t close to their access point, who would hit the same scanning problem David talks about in the basement, but not in the bedroom.

Another idea would be to disable periodic scanning when latency critical applications are active, but this requires that these applications consistently set the IPv4 TOS field or use the SO_PRIORITY socket option.  Few do so.  This also requires visibility into kernel mac80211 queue depths and would not work for proprietary or non-mac80211-based drivers.  But if all the pieces fell into place on the kernel side, NetworkManager could definitely do this while waiting for applications and drivers to catch up.

If you’ve got other ideas, feel free to propose them.

Posted Mon May 16 17:43:57 2016 Tags:

I've posted in the past about the Oracle vs. Google case. I'm for the moment sticking to my habit of only commenting when there is a clear court decision. Having been through litigation as the 30(b)(6) witness for Conservancy, I'm used to court testimony and why it often doesn't really matter in the long run. So much gets said by both parties in a court case that it's somewhat pointless to begin analyzing each individual move, unless it's for entertainment purposes only. (It's certainly as entertaining as most TV dramas, really, but I hope folks who are watching step-by-step admit to themselves that they're just engaged in entertainment, not actual work. :)

I saw a lot go by today with various people as witnesses in the case. About the only part that caught my attention was that Classpath was mentioned over and over again. But that's not for any real salient reason, only because I remember so distinctly, sitting in a little restaurant in New Orleans with RMS and Paul Fisher, talking about how we should name this yet-to-be-launched GNU project “$CLASSPATH”. My idea was that it was a shell variable that would expand to /usr/lib/java, so, in my estimation, it was a way to name the project “User Libraries for Java” without having to say the words. (For those of you that were still children in the 1990s, trademark aggression by Sun at the time on their word mark for “Java” was fierce; it was worse than the whole problem with the Unix trademark, which led in turn to the GNU name.)

But today, as I saw people all over the Internet quoting judges, lawyers and witnesses saying the word “Classpath” over and over again, it felt a bit weird to think that, almost 20 years ago sitting in that restaurant, I could have said something other than Classpath and the key word in Court today might well have been whatever I'd said. Court cases are, as I said, dramatic, and as such, it felt a little like having my own name mentioned over and over again on the TV news or something. Indeed, I felt today like I had some really pointless, one-time-use superpower that I didn't know I had at the time. I now further have this feeling of: “darn, if I knew that was the one thing I did that would catch on this much, I'd have tried to do or say something more interesting”.

Naming new things, particularly those that have to replace other things that are non-Free, is really difficult, and, at least speaking for myself, I definitely can't tell when I suggest a name whether it is any good or not. I actually named another project, years later, that could theoretically get mentioned in this case, Replicant. At that time, I thought Replicant was a much more creative name than Classpath. When I named Classpath, I felt it was a somewhat obvious corollary to the “GNU's Not Unix” line of thinking. I also recall distinctly that I really thought the name lost all its cleverness when the $ and the all-caps were dropped, but RMS and others insisted on that :).

Anyway, my final message today is to the court transcribers. I know from chatting with the court transcribers during my depositions in Conservancy's GPL enforcement cases that technical terminology is really a pain. I hope that the term I coined that got bandied about so much in today's testimony was not annoying to you all. Really, no one thinks about the transcribers in all this. If we're going to have lawsuits about this stuff, we should name stuff with the forethought of making their lives easier when the litigation begins. :)

Posted Sat May 14 01:14:54 2016 Tags:

While most of the NTPsec team was off at Penguicon, the NTP Classic people shipped a release patched for eleven security vulnerabilities in their code. Which might have been pretty embarrassing, if those vulnerabilities were in our code, too. People would be right to wonder, given NTPsec’s security focus, why we didn’t catch all these sooner.

In fact, we actually did pre-empt most of them. The attack surface that eight of these eleven security bugs penetrate isn’t present at all in NTPsec. The vulnerabilities were in bloat and obsolete features we’ve long since removed, like the Mode 7 control channel.

I’m making a big deal about this because it illustrates a general point. One of the most effective ways to harden your code against attack – perhaps the most effective – is to reduce its attack surface.

Thus, NTPsec’s strategy all along has centered on aggressive cruft removal. This strategy has been working extremely well. Back in January our 0.1 release dodged two CVEs because of code we had already removed. This time it was eight foreclosed – and I’m pretty sure it won’t be the last time, either. If only because I ripped out Autokey on Sunday, a notorious nest of bugs.

Simplify, cut, discard. It’s often better hardening than anything else you can do. The percentage of NTP Classic code removed from NTPsec is up to 58% now, and could easily hit 2/3rds before we’re done.

Posted Thu May 5 03:16:04 2016 Tags:

A recurring question I encounter is whether uinput or evdev should be the approach to implement some feature the user cares about. This question is unfortunately wrongly framed, as uinput and evdev have no real overlap and work independently of each other. This post outlines what the differences are. Note that "evdev" here refers to the kernel API, not to the X.Org evdev driver.

First, the easy flowchart: do you have to create a new virtual device that has a set of specific capabilities? Use uinput. Do you have to read and handle events from an existing device? Use evdev. Do you have to create a device and read events from that device? You (probably) need two processes, one doing the uinput bit, one doing the evdev bit.

Ok, let's talk about the difference between evdev and uinput. evdev is the default input API that all kernel input device nodes provide. Each device provides one or more /dev/input/eventN nodes that a process can interact with. This usually means checking a few capability bits ("does this device have a left mouse button?") and reading events from the device. The events themselves are in the form of struct input_event, defined in linux/input.h, and consist of an event type (relative, absolute, key, ...) and an event code specific to the type (x axis, left button, etc.). See linux/input-event-codes.h for a list, or linux/input.h in older kernels.

Specific to evdev is that events are serialised: framed by events of type EV_SYN and code SYN_REPORT. Anything before a SYN_REPORT should be considered one logical hardware event. For example, if you receive an x and y movement within the same SYN_REPORT frame, the device has moved diagonally.
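The SYN_REPORT framing is easy to see in code. Below is a sketch that decodes raw struct input_event data in Python; the 'llHHi' layout (timeval as two signed longs, then u16 type, u16 code, s32 value) assumes a 64-bit Linux and is the main assumption of this example:

```python
import struct

EVENT_FORMAT = 'llHHi'   # assumed 64-bit layout of struct input_event
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_SYN, EV_REL = 0x00, 0x02
SYN_REPORT, REL_X, REL_Y = 0, 0, 1

def frames(buf):
    """Group raw events into logical frames delimited by SYN_REPORT."""
    out, current = [], []
    for off in range(0, len(buf), EVENT_SIZE):
        _sec, _usec, etype, code, value = struct.unpack_from(EVENT_FORMAT, buf, off)
        if etype == EV_SYN and code == SYN_REPORT:
            out.append(current)
            current = []
        else:
            current.append((etype, code, value))
    return out

# A diagonal movement: x and y deltas inside one SYN_REPORT frame.
raw = (struct.pack(EVENT_FORMAT, 0, 0, EV_REL, REL_X, 5) +
       struct.pack(EVENT_FORMAT, 0, 0, EV_REL, REL_Y, -3) +
       struct.pack(EVENT_FORMAT, 0, 0, EV_SYN, SYN_REPORT, 0))
assert frames(raw) == [[(EV_REL, REL_X, 5), (EV_REL, REL_Y, -3)]]
```

In real code you'd read such buffers from an open /dev/input/eventN node (or, better, let libevdev do the parsing for you).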

Any event coming from the physical hardware goes into the kernel's input subsystem and is converted to an evdev event that is then available on the event node. That's pretty much it for evdev. It's a fairly simple API but it does have some quirks that are not immediately obvious so I recommend using libevdev whenever you actually need to communicate with a kernel device directly.

uinput is something completely different. uinput is a kernel device driver that provides the /dev/uinput node. A process can open this node, write a bunch of custom commands to it, and the kernel then creates a virtual input device. That device, like all others, presents a /dev/input/eventN node. Any event written to the /dev/uinput node will re-appear in that /dev/input/eventN node, and a device created through uinput looks pretty much like a physical device to a process. You can detect uinput-created virtual devices, but usually a process doesn't need to care, so all the common userspace (libinput, Xorg) doesn't bother. The evemu tool is one of the most commonly used applications built on uinput.

Now, there is one thing that may cause confusion: to set up a uinput device you use the familiar evdev type/code combinations (followed by a couple of uinput-specific ioctls), and events written to uinput also use the struct input_event form, so looking at uinput code one can easily mistake it for evdev code. Nevertheless, the two serve completely different purposes. As with evdev, I recommend using libevdev to initialise uinput devices; libevdev has a couple of uinput-related functions that make life easier.

Below is a basic illustration of how things work together. The physical devices send their events through the event nodes and libinput is a process that reads those events. evemu talks to the uinput module and creates a virtual device which then too sends events through its event node - for libinput to read.

Posted Wed May 4 23:42:00 2016 Tags:
It is not exactly a new feature, but I find myself using the --sort option of git branch a lot more often than before these days.

The way I work is that I review patches that were mailed in during the previous night (my time) in the morning. For each promising new topic, I decide where the topic should eventually be merged (some are fixes that should go to older maintenance tracks, some are new features that we will not want to merge to the maintenance tracks), create a dedicated topic branch for it, apply these patches, re-review them once more and then test the changes in isolation. Each existing topic that is redone in response to previous reviews is handled the same way: its branch is rewound and the new round of patches is applied instead.

After accumulating the new and updated topics that way without integrating them with anything else, I often forget how many topics need to be integrated into the test branches (i.e. jch and pu), so I do this:

$ git branch --no-merged pu --sort=-committerdate

This lists the topic branches that are not part of pu, which is the branch that is supposed to contain all the testable things, and sorts them according to the commit date (i.e. the time I last touched it) of the tip of each topic branch. There often are topics that were once picked up but turned out to be not ready even for the pu branch, and were left around unmerged as a reminder to myself (otherwise I'll forget to ping their authors about them); they sink toward the older part of the output, while the freshly created and updated ones float to the top. This reminds me of the topics from the day that I need to reintegrate before starting the integration testing.

The --sort option appeared first in Git 2.7.0.

Another command that I use often these days is Michael Haggerty's when-merged script, available in his repository at GitHub. After finding a problematic line in the source and identifying the exact commit that introduced the line by using git blame, I can see when it landed in the mainline by doing this:

$ git when-merged $that_problematic_commit master | git name-rev --stdin

This gives the merge commit that brought the commit into the mainline as part of a topic; after that, it is just a matter of turning it into a revision name to find the oldest maintenance track that needs to be fixed, which is partially done by passing the output through the name-rev filter.

Posted Tue May 3 21:50:00 2016 Tags:

I had a great time at Penguicon 2016, including face time with a lot of the people who help out on my various projects. There are a couple of thoughts that kept coming back to me during these conversations. One is “It is good, having so many impressively competent friends.”

The other is that without me consciously working at it, an amazing support network has sort of materialized around me – people who believe in the various things I’m trying to do and encourage them by throwing hardware and money and the occasional supportive cheer at me.

Because I didn’t consciously try to recruit these people, it’s easy for me to miss how collectively remarkable they are and how much they contribute until several of them concentrate in one place as happened at Penguicon.

Where I thought: “I’ve been taking these people a bit for granted. I should do better.”

So here, in no particular order, is a (partial) list of people who are really helping. It focuses on those who were at Penguicon and are A&D regulars, so I may have left off some people that would belong on a more complete list.

John D. Bell: I hate having to do system administration and am not very good at it. John is the competent sysadmin I run to for help; he gives generously of his time and friendship. He’s also the person who accidentally started the cascade of events that resulted in the building of the Great Beast of Malvern.

Jay Maynard: My DNS zone secondary and the person I go to specifically for DNS help, because I touch it so seldom that I invariably forget all the fiddly details.

Mark Atwood: My project manager on NTPsec, he who makes my paychecks flow. He wouldn’t make this list if he were just some random corporate functionary set by CII to watch how their money is being spent; he volunteered to PM at least in part because he respects me, likes the work I do and thinks it good to help. Thus, not only does he do as comprehensive a job as I judge is possible of shielding me from the politics around my work and my funding, he smiles benignly when I wander off to work on things like reposurgeon or the Practical Python Porting HOWTO for a while. He even warns me against the dangers of overwork. I’m not sure what more one could ask of a manager, but I’m sure I’ve never had a better one.

Dave Taht: Dave…starts things. Like tossing a Raspberry Pi 3 dev kit at me, confident that though he might not be able to predict exactly what I’d do with it, I’d do something interesting. (He wasn’t wrong.) Dave is constantly pushing me, gently and constructively, to learn and think a bit outside my comfort zone. He’s one of the very best friends I have.

Phil Salkie: He who taught me how to solder (he’s a top-flight industrial troubleshooter of the hands-on kind by day) and is gradually inculcating in me the hardware-integration skills required for me to work with things like SBCs. Takes a lively, intelligent observer’s interest in many of my projects, and often has useful things to say about them.

Jason Azze: It took me longer to notice Jason than I should have, because he doesn’t draw attention to himself. His style is to lurk around the edges of my projects quietly doing useful things, often involving buildbots.

Sanjeev Gupta: Another frequent lurker, with a particularly good hand for criticizing and improving documentation.

Gary E. Miller: If I were an evil overlord, Gary would be my trusty henchman. He’s been my chief lieutenant on GPSD for years, and told me that he likes having me in the #1 spot so he doesn’t have to do it. Tends to follow me around to other projects; a once and probably future NTPsec dev. His excellent low-level troubleshooting skills complement my systems-architect view of things perfectly.

Susan Sons: Susan is an InfoSec specialist who worries, very constructively, about my security. She’s good at it. She was also the person originally responsible for pulling me into NTP development.

Wendell Wilson: Builder of the Great Beast, and another guy who tends to drop hardware on me to see what I’ll do with it. Takes time out of a very busy life as an engineer/entrepreneur to make sure I have sharp tools and the blades stay properly whetted.

All you fanboys out there: These people give me a gift I value much more than adulation. They engage me as equals to a fallible human being, think about what might make me less hassled and/or more productive, and then do it. This is good, because it means I get to solve more and harder problems for everybody’s benefit.

Last, I cannot fail to mention my wife, Catherine Raymond. It’s certainly expected for a wife to support her husband, but Cathy goes well beyond “That’s nice, dear” by being actively engaged with my life among the geeks. She befriends my peers and followers and shares their jokes, not merely tolerating but often enjoying their eccentricities. The people in my support network like her, too, and that actually matters in pulling it together.

When I think of it, it’s like I have a small but remarkably capable army around me. I’m making a resolution to be more appreciative of people who sign up for that. Yes, they all have good reasons of their own; people who believe that teaching me things and helping me can have far-reaching consequences that they will enjoy are, on past evidence, quite right to bet that way in their own interests. Still doesn’t mean I should take them for granted.

Posted Tue May 3 18:06:43 2016 Tags:
Today the Netherlands celebrates King's Day. To honor this tradition, the Dutch embassy in San Francisco invited me to give a "TED talk" to an audience of Dutch and American entrepreneurs. Here's the text I read to them. Part of it is the tl;dr of my autobiography; part of it is about the significance of programming languages; part of it is about Python's big idea. Leve de koning! (Long live the king!)

Python: a programming language created by a community

Excuse my ramblings. I’ll get to a point eventually.

Let me introduce myself. I’m a nerd, a geek. I’m probably somewhere on the autism spectrum. I’m also a late bloomer. I graduated from college when I was 26. I was 45 when I got married. I’m now 60 years old, with a 14-year-old son. Maybe I just have a hard time with decisions: I’ve lived in the US for over 20 years and I am still a permanent resident.

I'm no Steve Jobs or Mark Zuckerberg. But at age 35 I created a programming language that got a bit of a following. What happened next was pretty amazing. But I'll get to that.

At age 10 my parents gave me an educational electronics kit. The kit was made by Philips, and it was amazing. At first I just followed the directions and everything worked; later I figured out how to design my own circuits. My prized possessions were the kit's three (!) transistors.

I took one of my first electronics models, a blinking light, to show and tell in 5th grade. It was a total dud: nobody cared or understood its importance. I think that's one of my earliest memories of finding myself to be a geek: until then I had just been a quiet, quick learner.

In high school I developed my nerdiness further: I hung out with a few other kids interested in electronics, and during physics class we sat in the back of the class discussing NAND gates while the rest of the class was still figuring out Ohm's law.

Fortunately our physics teacher had figured us out: he employed us to build a digital timer that he used to demonstrate the law of gravity to the rest of the class. It was a great project and showed us that our skills were useful. The other kids still thought we were weird: it was the seventies and many were into smoking pot and rebelling; another group was already preparing for successful careers as doctors or lawyers or tech managers. But they left me alone, I left them alone, and I graduated as one of the best of my year.

After high school I went to the University of Amsterdam: It was close to home, and to a teen growing up in the Netherlands in the seventies, Amsterdam was the only cool city. (Yes, the student protests of 1968 did touch me a bit.) Much to my high school physics teacher's surprise and disappointment, I chose to major in math, not physics. But looking back I think it didn’t matter.

In the basement of the science building was a mainframe computer, and it was love at first sight. Card punches! Line printers! Batch jobs! More to the point, I quickly learned to program, in languages with names like Algol, Fortran and Pascal. Mostly forgotten names, but highly influential at the time. Soon I was, again, sitting in the back of class, ignoring the lecture, correcting my computer programs. And why was that?

In that basement, around the mainframe, something amazing was happening. There was a loosely-knit group of students and staff with similar interests, and we exchanged tricks of the trade. We shared subroutines and programs. We united in our alliances against the mainframe staff, especially in the endless cat-and-mouse games over disk space. (Disk space was precious in a way you cannot understand today.)

But the most important lesson I learned was about sharing: while most of the programming tricks I learned there died with the mainframe era, the idea that software needs to be shared is stronger than ever. Today we call it open source, and it’s a movement. Hold that thought!

At the time, though, my immediate knowledge of the tricks of the trade seemed to matter most. The mainframe’s operating system group employed a few part-time students, and when they posted a vacancy, I applied, and got the job. It was a life-changing event! Suddenly I had unlimited access to the mainframe (no more fighting for space or terminals), plus access to the source code for its operating system, and dozens of colleagues who showed me how all that stuff worked.

I now had my dream job, programming all day, with real customers: other programmers, the users of the mainframe. I stalled my studies and essentially dropped out of college, and I would not have graduated if not for my enlightened manager and a professor who hadn't given up on me. They nudged me towards finishing some classes and pulled some strings, and eventually, with much delay, I did graduate. Yay!

I immediately landed a new dream job that would not have been open to me without that degree. I had never lost my interest in programming languages as an object of study, and I joined a team building a new programming language — not something you see every day. The designers hoped their language would take over the world, replacing Basic.

It was the eighties now, and Basic was the language of choice for a new generation of amateur programmers, coding on microcomputers like the Apple II and the Commodore 64. Our team considered the Basic language a pest that the world should be rid of. The language we were building, ABC, would "stamp out Basic", according to our motto.

Sadly, for a variety of reasons, our marketing (or perhaps our timing) sucked, and after four years, ABC was abandoned. Since then I've spent many hours trying to understand why the project failed, despite its heart being so clearly in the right place. Apart from being somewhat over-engineered, my best answer is that ABC died because there was no internet in those days, and as a result there could not be a healthy feedback loop between the makers of the language and its users. ABC’s design was essentially a one-way street.

Just half a decade later, when I was picking through ABC’s ashes looking for ideas for my own language, that missing feedback loop was one of the things I decided to improve upon. “Release early, release often” became my motto (freely after the old Chicago Democrats’ encouragement, “vote early, vote often”). And the internet, small and slow as it was in 1990, made it possible.

Looking back 25 years, the Internet and the Open Source movement (a.k.a. Free Software) really did change everything. Plus something called Moore's Law, which makes computers faster every year. Together, these have entirely changed the interaction between the makers and users of computer software. It is my belief that these developments (and how I managed to make good use of them) have contributed more to the success of “my” programming language than my programming skills and experience, no matter how awesome.

It also didn't hurt that I named my language Python. This was a bit of unwitting marketing genius on my part. I meant to honor the irreverent comedic genius of Monty Python's Flying Circus, and back in 1990 I didn't think I had much to lose. Nowadays, I'm sure "brand research" firms would be happy to charge you a very large fee to tell you exactly what complex of associations this name tickles in the subconscious of the typical customer. But I was just being flippant.

I have promised the ambassador not to bore you with a technical discussion of the merits of different programming languages. But I would like to say a few things about what programming languages mean to the people who use them: programmers. Typically when you ask a programmer to explain to a lay person what a programming language is, they will say that it is how you tell a computer what to do. But if that were all, why would they be so passionate about programming languages when they talk among themselves?

In reality, programming languages are how programmers express and communicate ideas, and the audience for those ideas is other programmers, not computers. The reason: the computer can take care of itself, but programmers are always working with other programmers, and poorly communicated ideas can cause expensive flops. In fact, ideas expressed in a programming language also often reach the end users of the program: people who will never read or even know about the program, but who nevertheless are affected by it.

Think of the incredible success of companies like Google or Facebook. At the core of these are ideas: ideas about what computers can do for people. To be effective, an idea must be expressed as a computer program, using a programming language. The language that is best to express an idea will give the team using that language a key advantage, because it gives the team members — people! — clarity about that idea. The ideas underlying Google and Facebook couldn't be more different, and indeed these companies' favorite programming languages are at opposite ends of the spectrum of programming language design. And that’s exactly my point.

True story: The first version of Google was written in Python. The reason: Python was the right language to express the original ideas that Larry Page and Sergey Brin had about how to index the web and organize search results. And they could run their ideas on a computer, too!

So, in 1990, long before Google and Facebook, I made my own programming language, and named it Python. But what is the idea of Python? Why is it so successful? How does Python distinguish itself from other programming languages? (Why are you all staring at me like that? :-)

I have many answers, some quite technical, some from my specific skills and experience at the time, some just about being in the right place at the right time. But I believe the most important idea is that Python is developed on the Internet, entirely in the open, by a community of volunteers (but not amateurs!) who feel passion and ownership.

And that is what that group of geeks in the basement of the science building was all about.

Surprise: Like any good inspirational speech, the point of this talk is about happiness!

I am happiest when I feel that I'm part of such a community. I’m lucky that I can feel it in my day job too. (I'm a principal engineer at Dropbox.) If I can't feel it, I don't feel alive. And so it is for the other community members. The feeling is contagious, and there are members of our community all over the world.

The Python user community is formed of millions of people who consciously use Python, and love using it. There are active members organizing Python conferences — affectionately known as PyCons — in faraway places like Namibia, Iran, Iraq, even Ohio!

My favorite story: A year ago I spent 20 minutes on a video conference call with a classroom full of faculty and staff at Babylon University in southern Iraq, answering questions about Python. Thanks to the efforts of the audacious woman who organized this event in a war-ridden country, students at Babylon University are now being taught introductory programming classes using Python. I still tear up when I think about the power of that experience. In my wildest dreams I never expected I’d touch lives so far away and so different from my own.

And on that note I'd like to leave you: a programming language created by a community fosters happiness in its users around the world. Next year I may go to PyCon Cuba!
Posted Wed Apr 27 17:17:00 2016 Tags:

Because people do in fact drop money in my PayPal and Patreon accounts, I think a decent respect to the opinions of mankind requires that I occasionally update everyone on where the money goes. First in an occasional series.

Recently I’ve been buying Raspberry Pi GPS HATs (daughterboards with a GPS and real-time clock) to go with the Raspberry Pi 3 Dave Taht dropped on me. Yesterday morning a thing called an Uputronics GPS Extension Board arrived from England. A few hours ago I ordered a cheap Chinese thing obviously intended to compete with the Adafruit GPS HAT I bought last week.

The reason is that I’m working up a very comprehensive HOWTO on how to build a Stratum 1 timeserver in a box. Not content to merely build one, I’m writing a sheaf of recipes that includes all three HATs I’ve found and (at least) two revisions of the Pi.

What makes this HOWTO different from various build pages on this topic scattered around the Web? In general, the ones I’ve found are well-intended but poorly written. They make too many assumptions, they’re tied to very specific hardware types, they skip “obvious” steps, they leave out diagnostic details about how to tell things are going right and what to do when things go wrong.

My goal is to write a HOWTO that can be used by people who are not Linux and NTP experts – basically, my audience is anyone who could walk into a hackerspace and not feel utterly lost.

Also, my hope is that by not being tightly tied to one parts list this HOWTO will help people develop more of a generative understanding of how you compose a build recipe, and develop their own variations.

I cover everything, clear down to how to buy a case that will fit a HAT. And this work has already had some functional improvements to GPSD as a side effect.

I expect it might produce some improvements in NTPsec as well – our program manager, A&D regular Mark Atwood, has been smiling benignly on this project. Mark’s plan is to broadcast this thing to a hundred hackerspaces and recruit the next generation of time-service experts that way.

Three drafts have already circulated to topic experts. Progress will be interrupted for a bit while I’m off at Penguicon, but 1.0 is likely to ship within two weeks or so.

And it will ship with the recipe variations tested. Because that’s what I do with your donations. If this post stimulates a few more, I’ll add an Odroid C2 (Raspberry Pi workalike with beefier hardware) to the coverage; call it a stretch goal.

Posted Wed Apr 27 11:12:03 2016 Tags:

This is an entirely silly post about the way I name the machines in my house, shared for the amusement of my regulars.

The house naming theme is “comic mythical beasts”.

My personal desktop machine is always named “snark”, after Lewis Carroll’s “Hunting of the”. This has been so since long before adj. “snarky” and vi. “to snark” entered popular English around the turn of the millennium. I do not find the new layer of meaning inappropriate.

Currently snark is perhaps better known as the Great Beast of Malvern, but whereas “snark” describes its role, “Beast” refers to the exceptional capabilities of this particular machine.

One former snark had two Ethernet ports. Its alias through the second IP address was, of course, “boojum”.

My laptop is always “golux”, from James Thurber’s The Thirteen Clocks.

The bastion host (mail and DNS server) is always “grelber”, after the insult-spewing Grelber from the Broom Hilda comic strip. It’s named not for the insults but because Grelber is depicted as a lurking presence inside a hollow log with a mailbox on the top.

Cathy’s personal desktop machine is always “minx” after a pretty golden-furred creature from Infocom’s classic Zork games, known for its ability to sniff out buried chocolate truffles.

The router is “quintaped”, a five-legged creature supposed to live on a magically concealed island in the Potterverse. Because it has 5 ports, you see.

The guest machine in the basement (distinct from the mailserver) is “hurkle” after the title character in Theodore Sturgeon’s The Hurkle Is A Happy Beast (1949).

For years we had a toilet-seat Mac (iBook) I’d been given as a gift (it’s long dead now). We used it as a gaming machine (mainly “Civilization II” and “Spaceward Ho”). It was “billywig”, also from the Potterverse.

I have recently acquired 3 Raspberry Pis (more about this in a future post). The only one of them now in use is currently named “whoville”, but that is likely to change as I have just decided the sub-namespace for Pis will be Dr. Seuss creatures – lorax, sneetch, zax, grinch, etc.

That is all.

Posted Sun Apr 24 05:56:55 2016 Tags:
You don’t need an Uber, you don’t need a cab (via Casey Bisson CC BY-NC-SA 2.0)

NetworkManager 1.2 was released yesterday, and it’s already built for Fedora (24 and rawhide), a release candidate is in Ubuntu 16.04, and it should appear in other distros soon too. Lubo wrote a great post on many of the new features, but there are too many to highlight in one post for our ADD social-media 140-character tap-tap generation to handle. Ready for more?

indicator menus

Wayland is coming, and it doesn’t support the XEmbed status icons that nm-applet creates. Desktop environments also want more control over how these status menus appear. While KDE and GNOME both provide their own network status menus, Ubuntu, XFCE, and LXDE use nm-applet. How do they deal with the lack of XEmbed and status icons?

Ubuntu has long patched nm-applet to add App Indicator support, which exposes the applet’s menu structure as D-Bus objects to allow the desktop environment to draw the menu just like it wants. We enhanced the GTK3 support in libdbusmenu-gtk to handle nm-applet’s icons and then added an indicator mode to nm-applet based on Ubuntu’s work. We’ve made packagers’ lives easier by building both modes into the applet simultaneously and allowing them to be switched at runtime.

IP reconfiguration

Want to add a second IP address or change your DNS servers right away? With NetworkManager 1.2 you can now change the IP configuration of a device through the D-Bus interface or nmcli without triggering a reconnect. This lets network UIs like KDE or GNOME control-center apply changes you make to network configuration immediately, without interrupting your network connection. That might take a cycle or two to show up in your favorite desktop environment, but the basis is there.

802.1x/WPA Enterprise authentication

An oft-requested feature was the ability to use certificate domain suffix checking to validate an authentication server. While NetworkManager has supported certificate subject checking for years, this has limitations and isn’t as secure as domain suffix checking. Both these options help prevent man-in-the-middle attacks where a rogue access point masquerades as your normal secure network. 802.1x authentication is still too complicated, and we hope to greatly simplify it in upcoming releases.

Interface stacking

While NM has always been architected to allow bridges-on-bonds-on-VLANs, there were some internal issues that prevented these more complicated configurations from working.  We’ve fixed those bugs, so now layer-cake network setups work in a flash!  Hopefully somebody will come up with a fancy drag-n-drop UI based off Minecraft or CandyCrush with arbitrary interface trees.  Maybe it’ll even have trophies when you finally get a Level 48 active-backup bond.

Old Stable Series

Now that 1.2 is out, the 1.0 series is in maintenance mode. We’ll fix bugs and any security issues that come up, but typically won’t add new features. Backporting from 1.2 to 1.0 will be even more difficult due to the removal of dbus-glib, a major feature of the 1.2 release. If you’re on 1.0, 0.9.10, or (gasp!) 0.9.8, I’d urge you to upgrade, and I think you’ll like what you see!

Posted Thu Apr 21 18:07:22 2016 Tags:

When we released graphics tablet support in libinput earlier this year, only tablet tools were supported. So while you could use the pen normally, the buttons, rings and strips on the physical tablet itself (the "pad") weren't detected by libinput and did not work. I have now merged the patches for pad support into libinput.

The reason for the delay was simple: we wanted to get it right [1]. Pads have a couple of properties that tools don't have, and we always considered pads to be different from pens, so we initially focused on a more generic interface (the "buttonset" interface) to accommodate them. After some coding, we have now arrived at a tablet-pad-specific interface instead. This post is a high-level overview of the new tablet pad interface and how we intend it to be used.

The basic sign that a pad is present is when a device has the tablet pad capability. Unlike tools, pads don't have proximity events, they are always considered in proximity and it is up to the compositor to handle the focus accordingly. In most cases, this means tying it to the keyboard focus. Usually a pad is available as soon as a tablet is plugged in, but note that the Wacom ExpressKey Remote (EKR) is a separate, wireless device and may be connected after the physical pad. It is up to the compositor to link the EKR with the correct tablet (if there is more than one).

Pads have three sources of events: buttons, rings and strips. Rings and strips are touch-sensitive surfaces and provide absolute values - rings in degrees, strips in normalized [0.0, 1.0] coordinates. Similar to pointer axis sources we provide a source notification. If that source is "finger", then we send a terminating out-of-range event so that the caller can trigger things like kinetic scrolling.

Buttons on a pad are ... different. libinput usually re-uses the Linux kernel's include/input.h event codes [2] for buttons and keys. But for the pad we decided to use plain sequential button numbering, starting at index 0. So rather than a semantic code like BTN_LEFT, you'd simply get a button 0 event. The reasoning behind this is a caveat in the kernel evdev API: event codes have semantic meaning (e.g. BTN_LEFT), but buttons on a tablet pad don't have those meanings. There are some generic event ranges (e.g. BTN_0 through to BTN_9) and the Wacom tablets use those, but once you have more than 10 buttons you leak into other ranges. The ranges are simply too narrow, so we end up with seemingly different buttons even though all buttons are effectively the same. libinput's pad support undoes that split and combines the buttons into a simple sequential range, leaving any semantic mapping of buttons to the caller. Together with libwacom, which describes the location of the buttons, a caller can get a relatively good idea of what the layout looks like.

Mode switching is a commonly expected feature on tablets. One button is designated as the mode switch button and toggles all other buttons between the available modes. On the Intuos Pro series tablets, that button is usually the button inside the ring. Button mapping, and thus mode switching, is however a feature we leave up to the caller; if you're working on a compositor, you will have to implement mode switching there.

Other than that, pad support is relatively simple and straightforward and should not cause any big troubles.

[1] or at least less wrong than in the past
[2] They're actually linux/input-event-codes.h in recent kernels

Posted Mon Apr 18 07:14:00 2016 Tags:

This year’s meatspace party for blog regulars and friends will be held at Penguicon 2016 on Friday, April 29, beginning at 10PM (originally 9PM).

UPDATE: Pushed back an hour because the original start time conflicted with the time slot assigned for my “Ask Me Anything” event.

The venue is the Southfield Westin hotel in Southfield, Michigan. It’s booked solid already; we were only able to get a room there Friday night, and will be decamping to the Holiday Inn Express across the parking lot on Saturday. They still have rooms, but I suggest making reservations now.

The usual assortment of hackers, anarchists, mutants, mad scientists, and for all I know covert extraterrestrials will be attending the A&D party. The surrounding event is worth attending in itself and will be running Friday to Sunday.

Southfield is near the northwestern edge of the Detroit metro area and is served by the Detroit Metropolitan Airport (code DTW).

Penguicon is a crossover event: half science-fiction convention, half open-source technical conference. Terry Pratchett and I were the co-guests-of-honor at Penguicon I back in 2003 and I’ve been back every year since.

If you’ve never been to an SF con, you have no idea how much fun this can be. A couple thousand unusually intelligent people well equipped with geek toys and costumes and an inclination to party can generate a lot of happy chaos, and Penguicon reliably does. If you leave Monday without having made new friends, you weren’t trying.

Things I have done at Penguicon: Singing. Shooting pistols. Tasting showcased exotic foods. Getting surprise-smooched by attractive persons. Swordfighting. Playing strategy games. Junkyard Wars. Participating in a Viking raid (OK, it turned into a dance-off). Punning contests. And trust me, you have never been to parties with better conversation than the ones we throw.

Fly in Thursday night (the 28th) if you can because Geeks With Guns (the annual pistol-shooting class founded by yours truly and now organized by John D. Bell) is early Friday afternoon and too much fun to miss.

Posted Sun Apr 17 08:52:13 2016 Tags:

About five years ago I reacted to a lot of hype about the impending death of the personal computer with an observation and a prediction. The observation was that some components of a computer have to be the size they are because they’re scaled to human dimensions – notably screens, keyboards, and pointing devices. Wander outside certain size extrema and you get things like smartphone keyboards that are only good for limited use.

However, what we normally think of as the heart of a computer – the processing and storage – isn’t like this. It can get arbitrarily small without impacting usability at all. Consequently, I predicted a future in which people would carry around powerful computing nodes descended from smartphones and walk them to docking stations bundling a screen, a pointing device, and a real keyboard when they need to get real work done.

We’ve now reached an interesting midway point on that road. The (stationary) computers I use are in the process of bifurcating into two classes: one quite large, one very small. I qualify that with “stationary” because laptops are an exception for reasons which, if not yet obvious, will be in a few paragraphs.

The “large” class is exemplified in my life by the Great Beast of Malvern: my “desktop” system, except that it’s really more like a baby supercomputer optimized for fast memory access to extremely large data sets (as in, surgery on large version-control repositories). This is more power than a typical desktop user would know what to do with, by a pretty large margin – absurd overkill for just running an office suite or video editing or gaming or whatever.

My other two stationary production machines are, as of yesterday, a fanless mini-ITX box about the size of a paperback book and a credit-card-sized Raspberry Pi 3. They arrived on my doorstep around the same time. The mini-ITX box was a planned replacement for the conventional tower PC I had been using as a mailserver/DNS/bastion host, because I hate moving parts and want to cut my power bills. The Pi was serendipitous, a surprise gift from Dave Taht who’s trying to nudge me into improving my hardware hacking.

(And so I shall; tomorrow I expect to solder a header onto an Adafruit GPS hat, plug it into the Pi, and turn the combination into a tiny Stratum 1 NTP test machine.)

And now I have three conventional tower PCs in my living room (an old mailserver and two old development workstations) that I’m trying to get rid of – free to good home, you must come to Malvern to get them. Because they just don’t make sense as service machines any more. Fanless small-form-factor systems are now good enough to replace almost any computer with functional requirements less than those of a Great-Beast-class monster.

My wife still has a tower PC, but maybe not for long. Hers could easily be replaced by something like an Intel NUC – Intel’s sexy flagship small-form-factor fanless system, now cheap enough on eBay to be price-competitive with a new tower PC. And no moving parts, and no noise, and less power draw.

I have one tower PC left – the recently decommissioned mailserver. The only reason I’m keeping it is as a courtesy for basement guests – it’ll be powered down when we don’t have one. Even so, I am seriously thinking of replacing it with another Raspberry Pi set up as a web kiosk.

I still have a Thinkpad for travel. When you have to carry your peripherals with you, it’s a compromise that makes sense. (Dunno what I’m going to do when it dies, either – the quality and design of recent Thinkpads has gone utterly to shit. The new keyboards are particularly atrocious.)

There’s a confluence of factors at work here. Probably the single most important is cheap solid-state drives. Without SSDs, small-form-factor systems were mostly cute technology demonstrations – it didn’t do a lot of practical good for the rest of the computing/storage core to be a tiny SBC when it had to drag around a big, noisy hunk of spinning rust. With SSDs everything, including power draw and noise and heat dissipation, scales down in better harmony.

What it adds up to for me is that midrange PCs are dead. For most uses, SFF (small-form-factor) hardware has reached a crossover point: its price per unit of computing is now better.

Next, these SFF systems get smaller and cooler and merge with smartphone technology. That’ll take another few years.

Posted Fri Apr 15 12:10:06 2016 Tags:

I don't actually support everyone in every bathroom

this is from a distant facebook friend whose heart is in the right place and was making a better point on a different topic, but it got me to thinking.

read more

Posted Thu Apr 14 00:52:15 2016 Tags:

Once upon a time, free-trade agreements were about just that: free trade. You abolish your tariffs and import restrictions, I’ll abolish mine. Trade increases, countries specialize in what they’re best equipped to do, efficiency increases, price levels drop, everybody wins.

Then environmentalists began honking about exporting pollution and demanded what amounted to imposing First World regulation on Third World countries who – in general – wanted the jobs and the economic stimulus from trade more than they wanted to make environmentalists happy. But the priorities of poor brown people didn’t matter to rich white environmentalists who already had theirs, and the environmentalists had political clout in the First World, so they won. Free-trade agreements started to include “environmental safeguards”.

Next, the labor unions, frightened because foreign workers might compete down domestic wages, began honking about abusive Third World labor conditions about which they didn’t really give a damn. They won, and “free trade” agreements began to include yet more impositions of First World pet causes on Third World countries. The precedent firmed up: free trade agreements were no longer to be about “free” trade, but rather about managing trade in the interests of wealthy First Worlders.

Today there’s a great deal of angst going on in the tech community about the Trans-Pacific Partnership. Its detractors charge that a “free-trade” agreement has been hijacked by big-business interests that are using it to impose draconian intellectual-property rules on the entire world, criminalize fair use, obstruct open-source software, and rent-seek at the expense of developing countries.

These charges are, of course, entirely correct. So here’s my question: What the hell else did you expect to happen? Where were you idiots when the environmentalists and the unions were corrupting the process and the entire concept of “free trade”?

The TPP is a horrible agreement. It’s toxic. It’s a dog’s breakfast. But if you stood meekly by while the precedents were being set, or – worse – actually approved of imposing rich-world regulation on poor countries, you are partly to blame.

The thing about creating political machinery to fuck with free markets is this: you never get to be the last person to control it. No matter how worthy you think your cause is, part of the cost of your behavior is what will be done with it by the next pressure group. And the one after that. And after that.

The equilibrium is that political regulatory capability is hijacked for the use of the pressure group with the strongest incentives to exploit it. Which generally means, in Theodore Roosevelt’s timeless phrase, “malefactors of great wealth”. The abuses in the TPP were on rails, completely foreseeable, from the first time “environmental standards” got written into a trade agreement.

That’s why it will get you nowhere to object to the specifics of the TPP unless you recognize that the entire context in which it evolved is corrupt. If you want trade agreements to stop being about regulatory carve-outs, you have to stop tolerating that corruption and get back to genuinely free trade. No exemptions, no exceptions, no sweeteners for favored constituencies, no sops to putatively noble causes.

It’s fine to care about exporting pollution and child labor and such things, but the right way to fix that is by market pressure – fair trade labeling, naming and shaming offenders, that sort of thing. If you let the politicians in they’ll do what they always do: go to the highest bidder and rig the market in its favor. And then you will get screwed.

Application of this principle to domestic policy is left as an easy exercise for the reader.

Posted Tue Apr 12 14:33:48 2016 Tags:

I’ve been implementing segregated witness support for c-lightning; it’s interesting that there’s no address format for the new form of addresses.  There’s a segregated-witness-inside-p2sh which uses the existing p2sh format, but if you want raw segregated witness (which is simply a “0” followed by a 20-byte or 32-byte hash), the only proposal is BIP142 which has been deferred.

If we’re going to have a new address format, I’d like to make the case for shifting away from bitcoin’s base58 (eg. 1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2):

  1. base58 is not trivial to parse.  I used the bignum library to do it, though you can open-code it as bitcoin-core does.
  2. base58 addresses are variable-length.  That makes webforms and software mildly harder, but also eliminates a simple sanity check.
  3. base58 addresses are hard to read over the phone.  Greg Maxwell points out that the upper and lower case mix is particularly annoying.
  4. The 4-byte SHA check does not guarantee to catch the most common forms of error – transposed or single incorrect letters – though it’s pretty good (a 1 in 4 billion chance of a random error passing).
  5. At around 34 letters, it’s fairly compact (36 for the BIP141 P2WPKH).
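To make point 1 concrete, here is a minimal Base58Check decoder in Python – the bignum conversion plus the double-SHA256 checksum that point 4 refers to. The alphabet and checksum scheme are Bitcoin’s standard ones; the test address is the well-known genesis-block address:

```python
import hashlib

# Bitcoin's base58 alphabet: 0, O, I and l are omitted to avoid confusion.
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_decode(addr):
    """Decode a Base58Check address; raises ValueError on a bad checksum."""
    n = 0
    for ch in addr:                      # base58 decoding is bignum base conversion
        n = n * 58 + B58_ALPHABET.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # Each leading '1' in the address encodes a literal zero byte.
    pad = len(addr) - len(addr.lstrip("1"))
    payload = b"\x00" * pad + body
    data, checksum = payload[:-4], payload[-4:]
    if hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4] != checksum:
        raise ValueError("bad checksum")
    return data                          # version byte + 20-byte hash160

# The genesis-block address decodes to 21 bytes: version 0x00 + hash160.
decoded = b58check_decode("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
print(len(decoded), decoded[0])   # -> 21 0
```

Note that nothing in the decoder tells you the length in advance – the variable-length complaint in point 2 – and the checksum check is all-or-nothing rather than error-locating.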

This is my proposal for a generic replacement (thanks to CodeShark for generalizing my previous proposal) which covers all possible future address types (as well as being usable for current ones):

  1. Prefix for type, followed by colon.  Currently “btc:” or “testnet:”.
  2. The full scriptPubkey using base 32 encoding as per
  3. At least 30 bits for crc64-ecma, up to a multiple of 5 to reach a letter boundary.  This covers the prefix (as ascii), plus the scriptPubKey.
  4. The final letter is the Damm algorithm check digit of the entire previous string, using this 32-way quasigroup. This protects against single-letter errors as well as single transpositions.
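The sample addresses below spell out the alphabet in order, which matches z-base-32 (“ybndrfg8…”), so I’ll assume that is the base-32 encoding meant in step 2. A minimal encoder for it, as a sketch (the Damm check digit in step 4 needs the specific quasigroup table, which isn’t reproduced here):

```python
# z-base-32 alphabet, inferred from the sample addresses below.
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zb32_encode(data):
    """Encode bytes 5 bits at a time, MSB first, zero-padding the tail."""
    bits = "".join(f"{byte:08b}" for byte in data)
    bits += "0" * (-len(bits) % 5)               # pad to a letter boundary
    return "".join(ZB32[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

print(zb32_encode(b"\x00"))           # -> "yy"
# A 20-byte P2WPKH hash becomes 32 letters: 160 bits / 5 bits per letter.
print(len(zb32_encode(bytes(20))))    # -> 32
```

The fixed 5-bits-per-letter mapping is what makes the lengths below predictable per address type, restoring the sanity check that variable-length base58 loses.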

These addresses look like btc:ybndrfg8ejkmcpqxot1uwisza345h769ybndrrfg (41 digits for a P2WPKH) or btc:yybndrfg8ejkmcpqxot1uwisza345h769ybndrfg8ejkmcpqxot1uwisza34 (60 digits for a P2WSH) (note: neither of these has the correct CRC or check letter, I just made them up).  A classic P2PKH would be 45 digits, like btc:ybndrfg8ejkmcpqxot1uwisza345h769wiszybndrrfg, and a P2SH would be 42 digits.

While manually copying addresses is something which should be avoided, it does happen, and the cost of making them robust against common typographic errors is small.  The CRC is a good idea even for machine-based systems: it will let through less than 1 in a billion mistakes.  Distinguishing which blockchain is a nice catchall for mistakes, too.

We can, of course, bikeshed this forever, but I wanted to anchor the discussion with something I consider fairly sane.

Posted Fri Apr 8 01:50:56 2016 Tags:

The British have a phrase, “too clever by half”. It needs to go global, especially among hackers. It can have any of several closely related meanings: the one I mean to focus on here has to do with overconfidence in one’s intelligence or skill, and the particular bad consequences that can have. It’s related to Nassim Taleb’s concept of a “fragilista”.

This came up recently when I posted about building a new mailserver out of a packaged fanless mini-ITX system. My stated goal was to reduce my mailserver’s power dissipation in order to (eventually) collect a net savings on my utility bill.

Certain of my commenters immediately crapped all over this idea, describing it as overkill and insisting that I ought to be using something with even lower power draw; the popular suggestion was a Raspberry Pi. It was when I objected to the absence of a battery-backed-up RTC (real-time clock) on the Pi that the real fun started.

The pro-Pi people airily dismissed this objection. One observed that you can get an RTC hat for the Pi. Some others waxed sarcastic about the maintainer of GPSD and the NTPsec tech lead not trusting his own software; a GPS to supply time, or NTP to take time corrections over the net, should (they claim) be a perfectly adequate substitute for an RTC.

And so they would be…under optimal conditions, with everything working perfectly, and a software bridge that hasn’t been written yet. Best case would be that your GPS hat has a solid satellite lock when you boot and sets the system clock within the first second. Only, oops, GPSD as it is doesn’t actually have the capability to set the system clock directly. It has to go through ntpd or chrony.

So now you have to have a time service daemon installed, properly configured, and running for the timestamps on your system logs to look sane. Well, unless your GPS doesn’t have sat lock. Or you’re booting without a network connection for diagnostic or fault isolation reasons. Now your cleverness has gotten you nowhere; your machine could believe it’s near time 0 in the Unix epoch (midnight, January 1st, 1970) for an arbitrary amount of time.

Why is this a problem? One very mundane reason is that logfile analyzers don’t necessarily deal well with large jumps in the system clock, like the one that will happen when the system time finally gets set – a real nuisance if you have to troubleshoot boot-time behavior later. Another is cron jobs firing inappropriately. Yet another is that the implementations of various network protocols can get confused by large time skew, even if they’re formally supposed to be able to handle it.

And I left out the fact that outright setting the system clock isn’t normal behavior for an NTP daemon, either. What it’s actually designed to do is collect small amounts of drift by speeding up or slowing down the clock until system time matches NTP time. And why is it designed to do this? If you guessed “because too many applications get upset by jumping time” you get a prize.

You can certainly tell an NTP daemon to set time rather than skewing the clock rate. But you do have to tell it to do that. This is a configuration knob that can be gotten wrong.
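With chrony, for instance, stepping is exactly such an opt-in knob; a fragment like this (thresholds illustrative) tells it to step the clock only during the first few updates after boot and to slew thereafter:

```
# /etc/chrony.conf (illustrative fragment)
# Step the clock if the offset exceeds 1 second, but only during the
# first 3 clock updates after startup; after that, always slew.
makestep 1.0 3
```

ntpd’s rough equivalent is the -g option, which permits a single large initial correction that would otherwise trip its panic threshold. Either way, forget or mistype it and you’re back to a machine that thinks it’s 1970.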

Are we perhaps beginning to see the problem here?

Engineering is tradeoffs. When you optimize for one figure of merit (like low cost) you are likely to end up pessimizing another, like proliferating possible failure modes. This is especially likely if an economy measure like leaving out an RTC requires interlocking compensations like having a GPS hat and configuring your time-service daemon exactly right.

The “too clever by half” mindset often wants to optimize demonstrating its own cleverness. This, of course, is something hackers are particularly prone to. It can be a virtue of sorts when you’re doing exploratory design, but not when you’re engineering a production system. I’m not the first person to point out that if you write code that’s as clever as you can manage, it’s probably too tricky for you to debug.

A particularly dangerous form of too clever by half is when you assume that you are smart enough for your design to head off all failure modes. This is the mindset Nassim Taleb calls “fragilista” – the overconfident planner who proliferates complexity and failure modes and gets blindsided when his fragile construct collides with messy reality.

Now I need to introduce the concept of an incident pit. This is a term invented by scuba divers. It describes a cascade that begins with a small thing going wrong. You try to fix the small thing, but the fix has an unexpected effect that lands you in more trouble. You try to fix that thing, don’t get it quite right, and are in bigger trouble. Now you’re under stress and maybe not thinking clearly. The next mistake is larger… A few iterations of this can kill a diver.

The term “incident pit” has been adopted by paramedics and others who have to make life-critical decisions. A classic XKCD cartoon, “Success”, captures how this applies to hardware and software engineering:

[Image: the XKCD cartoon “Success”]

Too clever by half lands you in incident pits.

How do you avoid these? By designing to avoid failure modes. This is why “KISS” (“Keep It Simple, Stupid”) is an engineering maxim. Buy the RTC to foreclose the failure modes of not having one. Choose a small-form-factor system your buddy Phil the expert hardware troubleshooter is already using rather than novel hardware neither of you knows the ins and outs of.

Don’t get cute. Well, not unless your actual objective is to get cute – if I didn’t know that playfulness and deliberately pushing the envelope has its place I’d be a piss-poor hacker. But if you’re trying to bring up a production mailserver, or a production anything, cute is not the goal and you shouldn’t let your ego suck you into trying for the cleverest possible maneuver. That way lie XKCD’s sharks.

Posted Thu Apr 7 17:07:19 2016 Tags: