
How big should your queue be, and what should you do when it fills up? Many times, we implement or even deploy a networking system before we have answered those questions. Luckily, recent research has given us some guidelines. Here's what we know:
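
One note before the list: several items below refer to a diagram of a mux/demux pipeline. From those references, the picture is roughly this: several input queues i feed a mux into a single queue x, which drains into a second queue y, which a demux fans out into several output queues o:

i1, i2, i3 -> [mux] -> x -> y -> [demux] -> o1, o2, o3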

  1. There are two ways to deal with a full queue: drop packets, or throttle incoming packets (backpressure).

  2. Most network queues are mostly empty. The only exceptions are caused by burstiness (uneven incoming packet rates), and bottlenecks (exactly one slowest queue per network path).

  3. Queues exist only to handle burstiness. If traffic flowed at a constant rate, every queue would be empty except the bottleneck, which would be full (bufferbloat), no matter its limit.

    • Corollary: limit queue length to a statistically large burst (e.g. the 99th percentile).
  4. Backpressure may be implicit. For example, if an ethernet driver is slow to pull packets from the Linux netdev queue x into the device's ring buffer y, then there is implicit backpressure from y to x even though no signal is sent to x.

  5. Dropping packets should be intentional. An ethernet device will tail-drop incoming packets if its (usually fixed-size) receive ring buffer is exhausted, leaving software no control over what to drop. To take control, we could add a big software queue and unload the device's receive ring buffer to it as quickly as possible.

  6. Burstiness can come from inside a router or switch. Examples: It's more CPU-efficient to process 100 packets at a time instead of one. TSO/GSO are getting popular, and can produce, e.g., bursts of 40+ packets from a single 64k buffer. (Pacing leaves time between 64k bursts.) Frame aggregation does the same on wifi. (Wifi nodes can negotiate reduced aggregation sizes if their buffers are small.)

  7. Use backpressure when muxing. In our diagram, from i to x, it is best to use backpressure from x to i so that i can drop packets from individual queues.

    • Prioritized handling of incoming packets is equivalent to muxing. Multiple priority queues are drained by a single CPU, creating an implicit single queue. Draining them too slowly creates implicit backpressure.
  8. Drop packets when demuxing. From y to o, it's best to drop packets at o rather than use backpressure from o to y. Otherwise, a single full output queue could starve the others by stopping y. (Example: imagine if a full queue to a 1 Mbps wifi station could stop traffic to a 100 Mbps wifi station. Some Linux wifi drivers suffer from this.)

    • Prioritizing outgoing packets is demuxing. You must prioritize first, then drop packets from the low-priority queues. Prioritizing too slowly creates backpressure on all packets, which would punish high priority packets.
  9. Tail drop is worst drop. There are several variants of AQM (active queue management) with different tradeoffs, but almost all are better than dropping the newest packet when a queue is full. Even the opposite ("head drop") is better in many cases. (Later TCP ACKs encompass all the information from previous ACKs, so if you have to drop one, it might as well be the oldest one.) CoDel is a more refined AQM. Most AQMs are the same speed or only slightly slower than tail drop. (A minimal head-drop sketch follows this list.)

  10. Some queues are way too short, and you have to design around it. For example, consumer-grade ethernet switches might have only one or two packets of queue space per port. Burstiness must be artificially reduced in order for these switches to protect traffic when busy. This leads to inventions like TCP pacing and TCP BBR.

  11. Ethernet "pause" frames usually make things worse. A general-purpose ethernet switch looks like our diagram: a mux and a demux chained together. At the mux, you want backpressure on the individual queues i. At the demux, you want to drop packets when queues o get full. Unfortunately, the latter prevents the former; since o drops packets, x and y are always empty, so there is no backpressure to apply to i. Unfortunately that's correct behaviour. Imagine incoming port i1 wants to send traffic alternately to ports o1 and o2. It would be wrong to pause i1 just because o1 is full, because that will starve o2. There is no "correct" solution except adequate buffering (for transients) on o1 and o2.

    • "Pause" frames work when the bottleneck is x or y rather than o. For example, we connected the input to a 400 Mbps MoCA chip to one output port on our switch via gigabit RGMII. The MoCA chip may be connected to many other MoCA targets, but it is not a demux: all traffic goes out on the shared MoCA bus, which acts like queue y in our diagram. The MoCA chip should apply backpressure so that better decisions can be made at i.

      (However, it would be wrong to further propagate backpressure from individual MoCA targets back to y. The MoCA bus itself acts as an implicit zero-queue demux o; individual targets will have to drop packets if they can't handle them.)

  12. "Fair" queueing is undefinable, but we have to try. All of the above helps get you to an optimal point where you eventually need to drop packets at i or o. Unfortunately, deciding which streams to drop is impossible in the general case. In the simple example of a server where i1 is very heavy but i2 has only a trickle of traffic, and queue space is exhausted, should you drop traffic from i1? What if i1 has 100 users behind a NAT but i2 has only one user? There is no answer that works in all situations.

(Sorry this is so dense. I challenged myself to make this into a "two pager" that I could send to people at work. But sometimes two pages is harder to read than five pages...) https://www.mcsweeneys.net/articles/lesser-known-trolley-problem-variations

Posted Fri Aug 11 08:08:36 2017 Tags:

Last November I went to an IETF meeting for the first time. The IETF is an interesting place; it seems to be about 1/3 maintenance grunt work, 1/3 extending existing stuff, and 1/3 blue sky insanity. I attended mostly because I wanted to see how people would react to TCP BBR, which was being presented there for the first time. (Answer: mostly positively, but with suspicion. It kinda seemed too good to be true.)

Anyway, the IETF meetings contain lots and lots of presentations about IPv6, the thing that was supposed to replace IPv4, which is what the Internet runs on. (Some would say IPv4 is already being replaced; some would say it has already happened.) Along with those presentations about IPv6, there were lots of people who think it's great, the greatest thing ever, and they're pretty sure it will finally catch on Any Day Now, and IPv4 is just a giant pile of hacks that really needs to die so that the Internet can be elegant again.

I thought this would be a great chance to really try to figure out what was going on. Why is IPv6 such a complicated mess compared to IPv4? Wouldn't it be better if it had just been IPv4 with more address bits? But it's not, oh goodness, is it ever not. So I started asking around. Here's what I found.

Buses ruined everything

Once upon a time, there was the telephone network, which used physical circuit switching. Essentially, that meant moving connectors around so that your phone connection was literally just a very long wire ("OSI layer 1"). A "leased line" was a very long wire that you leased from the phone company. You would put bits in one end of the wire, and they'd come out the other end, a fixed amount of time later. You didn't need addresses because there was exactly one machine at each end.

Eventually the phone company optimized that a bit. Time-division multiplexing (TDM) and "virtual circuit switching" was born. The phone company could transparently take the bits at a slower bit rate from multiple lines, group them together with multiplexers and demultiplexers, and let them pass through the middle of the phone system using fewer wires than before. Making that work was a little complicated, but as far as we modem users were concerned, you still put bits in one end and they came out the other end. No addresses needed.

The Internet (not called the Internet at the time) was built on top of this circuit switching concept. You had a bunch of wires that you could put bits into and have them come out the other side. If one computer had two or three interfaces, then it could, if given the right instructions, forward bits from one line to another, and you could do something a lot more efficient than a separate line between each pair of computers. And so IP addresses ("layer 3"), subnets, and routing were born. Even then, with these point-to-point links, you didn't need MAC addresses, because once a packet went into the wire, there was only one place it could come out. You used IP addresses to decide where it should go after that.

Meanwhile, LANs got invented as an alternative. If you wanted to connect computers (or terminals and a mainframe) together at your local site, it was pretty inconvenient to need multiple interfaces, one for each wire to each satellite computer, arranged in a star configuration. To save on electronics, people wanted to have a "bus" network (also known as a "broadcast domain," a name that will be important later) where multiple stations could just be plugged into a single wire, and talk to any other station plugged into the same wire. These were not the same people as the ones building the Internet, so they didn't use IP addresses for this. They all invented their own scheme ("layer 2").

One of the early local bus networks was arcnet, which is dear to my heart (I wrote the first Linux arcnet driver and arcnet poetry way back in the 1990s, long after arcnet was obsolete). Arcnet layer 2 addresses were very simplistic: just 8 bits, set by jumpers or DIP switches on the back of the network card. As the network owner, it was your job to configure the addresses and make sure you didn't have any duplicates, or all heck would ensue. This was kind of a pain, but arcnet networks were usually pretty small, so it was only kind of a pain.

A few years later, ethernet came along and solved that problem once and for all, by using many more bits (48, in fact) in the layer 2 address. That's enough bits that you can assign a different (sharded-sequential) address to every device that has ever been manufactured, and not have any overlaps. And that's exactly what they did! Thus the ethernet MAC address was born.

Various LAN technologies came and went, including one of my favourites, IPX (Internetwork Packet Exchange, though it had nothing to do with the "real" Internet) and Netware, which worked great as long as all the clients and servers were on a single bus network. You never had to configure any addresses, ever. It was beautiful, and reliable, and worked. The golden age of networking, basically.

Of course, someone had to ruin it: big company/university networks. They wanted to have so many computers that sharing 10 Mbps of a single bus network between them all became a huge bottleneck, so they needed a way to have multiple buses, and then interconnect - "internetwork," if you will - those buses together. You're probably thinking, of course! Use the Internet Protocol for that, right? Ha ha, no. The Internet protocol, still not called that, wasn't mature or popular back then, and nobody took it seriously. Netware-over-IPX (and the many other LAN protocols at the time) were serious business, so as serious businesses do, they invented their own thing(s) to extend the already-popular thing, ethernet. Devices on ethernet already had addresses, MAC addresses, which were about the only thing the various LAN protocol people could agree on, so they decided to use ethernet addresses as the keys for their routing mechanisms. (Actually they called it bridging and switching instead of routing.)

The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical. That means the "bridging table" is not as nice as a modern IP routing table, which can talk about the route for a whole subnet at a time. In order to do efficient bridging, you had to remember which network bus each MAC address could be found on. And humans didn't want to configure each of those by hand, so it needed to figure itself out automatically. If you had a complex internetwork of bridges, this could get a little complicated. As I understand it, that's what led to the spanning tree poem, and I think I'll just leave it at that. Poetry is very important in networking.

Anyway, it mostly worked, but it was a bit of a mess, and you got broadcast floods every now and then, and the routes weren't always optimal, and it was pretty much impossible to debug. (You definitely couldn't write something like traceroute for bridging, because none of the tools you need to make it work - such as the ability for an intermediate bridge to even have an address - exist in plain ethernet.)

On the other hand, all these bridges were hardware-optimized. The whole system was invented by hardware people, basically, as a way of fooling the software, which had no idea about multiple buses and bridging between them, into working better on large networks. Hardware bridging means the bridging could go really really fast - as fast as the ethernet could go. Nowadays that doesn't sound very special, but at the time, it was a big deal. Ethernet was 10 Mbps, because you could maybe saturate it by putting a bunch of computers on the network all at once, not because any one computer could saturate 10 Mbps. That was crazy talk.

Anyway, the point is, bridging was a mess, and impossible to debug, but it was fast.

Internet over buses

While all that was happening, those Internet people were getting busy, and were of course not blind to the invention of cool cheap LAN technologies. I think it might have been around this time that the ARPANET got actually renamed to the Internet, but I'm not sure. Let's say it was, because the story is better if I sound confident.

At some point, things progressed from connecting individual Internet computers over point-to-point long distance links, to the desire to connect whole LANs together, over point-to-point links. Basically, you wanted a long-distance bridge.

You might be thinking, hey, no big deal, why not just build a long distance bridge and be done with it? Sounds good, doesn't work. I won't go into the details right now, but basically the problem is congestion control. The deep dark secret of ethernet bridging is that it assumes all your links are about the same speed, and/or completely uncongested, because they have no way to slow down. You just blast data as fast as you can, and expect it to arrive. But when your ethernet is 10 Mbps and your point-to-point link is 0.128 Mbps, that's completely hopeless. Separately, the idea of figuring out your routes by flooding all the links to see which one is right - this is the actual way bridging typically works - is hugely wasteful for slow links. And sub-optimal routing, an annoyance on local networks with low latency and high throughput, is nasty on slow, expensive long-distance links. It just doesn't scale.

Luckily, those Internet people (if it was called the Internet yet) had been working on that exact set of problems. If we could just use Internet stuff to connect ethernet buses together, we'd be in great shape.

And so they designed a "frame format" for Internet packets over ethernet (and arcnet, for that matter, and every other kind of LAN).

And that's when everything started to go wrong.

The first problem that needed solving was that now, when you put an Internet packet onto a wire, it was no longer clear which machine was supposed to "hear" it and maybe forward it along. If multiple Internet routers were on the same ethernet segment, you couldn't have them all picking it up and trying to forward it; that way lies packet storms and routing loops. No, you had to choose which router on the ethernet bus is supposed to pick it up. We can't just use the IP destination field for that, because we're already using that for the final destination, not the router destination. Instead, we identify the desired router using its MAC address in the ethernet frame.

So basically, to set up your local IP routing table, you want to be able to say something like, "send packets to IP address 10.1.1.1 via the router at MAC address 11:22:33:44:55:66." That's the actual thing you want to express. This is important! Your destination is an IP address, but your router is a MAC address. But if you've ever configured a routing table, you might have noticed that nobody writes it like that. Instead, because the writers of your operating system's TCP/IP stack are stubborn, you write something like "send packets to IP address 10.1.1.1 via the router at IP address 192.168.1.1."

In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 shows up nowhere in the packet; it's just an abstraction at the human level.

To do that pointless intermediate step, you need to add ARP (address resolution protocol), a simple non-IP protocol whose job it is to convert IP addresses to ethernet addresses. It does this by broadcasting to everyone on the local ethernet bus, asking them all to answer if they own that particular IP address. If you have bridges, they all have to forward all the ARP packets to all their interfaces, because they're ethernet broadcast packets, and that's what broadcasting means. On a big, busy ethernet with lots of interconnected LANs, excessive broadcasts start becoming one of your biggest nightmares. It's especially bad on wifi. As time went on, people started making bridges/switches with special hacks to avoid forwarding ARP as far as it's technically supposed to go, to try to cut down on this problem. Some devices (especially wifi access points) just make fake ARP answers to try to help. But doing any of that is a hack, albeit sometimes a necessary hack.
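
For reference, the entire ARP exchange fits in one small broadcast frame. A sketch of the IPv4-over-ethernet payload layout from RFC 826 (multi-byte fields are big-endian on the wire; the struct is only illustrative):

    #include <stdint.h>

    /* ARP payload for IPv4-over-ethernet (RFC 826). It rides in an ethernet
     * frame with EtherType 0x0806, sent to the broadcast MAC
     * ff:ff:ff:ff:ff:ff, so every bridge must flood it everywhere. */
    struct arp_ipv4 {
        uint16_t htype;     /* 1 = ethernet */
        uint16_t ptype;     /* 0x0800 = IPv4 */
        uint8_t  hlen;      /* 6 = MAC address length */
        uint8_t  plen;      /* 4 = IPv4 address length */
        uint16_t oper;      /* 1 = request ("who has?"), 2 = reply ("I do") */
        uint8_t  sha[6];    /* sender MAC */
        uint8_t  spa[4];    /* sender IP */
        uint8_t  tha[6];    /* target MAC (all zeroes in a request) */
        uint8_t  tpa[4];    /* target IP (the address being asked about) */
    } __attribute__((packed));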

Death by legacy

Time passed. Eventually (and this actually took quite a while), people pretty much stopped using non-IP protocols on ethernet at all. So basically all networks became a physical wire (layer 1), with multiple stations on a bus (layer 2), with multiple buses connected over bridges (gotcha! still layer 2!), and those inter-buses connected over IP routers (layer 3).

After a while, people got tired of manually configuring IP addresses, arcnet style, and wanted them to auto-configure, ethernet style, except it was too late to literally do it ethernet style, because a) the devices had already been manufactured with ethernet addresses, not IP addresses, and b) IP addresses were only 32 bits, which is not enough to just manufacture them forever with no overlaps, and c) just assigning IP addresses sequentially instead of using subnets would bring us back to square one: it would just be ethernet over again, and we already have ethernet.

So that's where bootp and DHCP came from. Those protocols, by the way, are special kinda like ARP is special (except they pretend not to be special, by technically being IP packets). They have to be special, because an IP node has to be able to transmit them before it has an IP address, which is of course impossible, so it just fills the IP headers with essentially nonsense (albeit nonsense specified by an RFC), so the headers might as well have been left out. (You know these "IP" headers are nonsense because the DHCP server has to open a raw socket and fill them in by hand; the kernel IP layer can't do it.) But nobody would feel nice if they were inventing a whole new protocol that wasn't IP, so they pretended it was IP, and then they felt nice. Well, as nice as one can feel when one is inventing DHCP.

Anyway, I digress. The salient detail here is that unlike real IP services, bootp and DHCP need to know about ethernet addresses, because after all, it's their job to hear your ethernet address and assign you an IP address to go with it. They're basically the reverse of ARP, except we can't say that, because there's a protocol called RARP that is literally the reverse of ARP. Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.

The point of all this is that ethernet and IP were getting further and further intertwined. They're nowadays almost inseparable. It's hard to imagine a network interface (except ppp0) without a 48-bit MAC address, and it's hard to imagine that network interface working without an IP address. You write your IP routing table using IP addresses, but of course you know you're lying when you name the router by IP address; you're just indirectly saying that you want to route via a MAC address. And you have ARP, which gets bridged but not really, and DHCP, which is an IP packet but is really an ethernet protocol, and so on.

Moreover, we still have both bridging and routing, and they both get more and more complicated as the LANs and the Internet get more and more complicated, respectively. Bridging is still, mostly, hardware based and defined by IEEE, the people who control the ethernet standards. Routing is still, mostly, software based and defined by the IETF, the people who control the Internet standards. Both groups still try to pretend the other group doesn't exist. Network operators basically choose bridging vs routing based on how fast they want it to go and how much they hate configuring DHCP servers, which they really hate very much, which means they use bridging as much as possible and routing when they have to.

In fact, bridging has gotten so completely out of control that people decided to extract the layer 2 bridging decisions out completely to a higher level (with configuration exchanged between bridges using a protocol layered over IP, of course!) so it can be centrally managed. That's called software-defined networking (SDN). It helps a lot, compared to letting your switches and bridges just do whatever they want, but it's also fundamentally silly, because you know what's software defined networking? IP. It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, IPv4 was initially too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring DHCP really is a huge pain, so network operators just learned how to bridge bigger and bigger things. And nowadays big data centers are basically just SDNed, and you might as well not be using IP in the data center at all, because nobody's routing the packets. It's all just one big virtual bus network.

It is, in short, a mess.

Now forget I said all that...

Great story, right? Right. Now pretend none of that happened, and we're back in the early 1990s, when most of that had in fact already happened, but people at the IETF were anyway pretending that it hadn't happened and that the "upcoming" disaster could all be avoided. This is the good part!

There's one thing I forgot to mention in that big long story above: somewhere in that whole chain of events, we completely stopped using bus networks. Ethernet is not actually a bus anymore. It just pretends to be a bus. Basically, we couldn't get ethernet's famous CSMA/CD to keep working as speeds increased, so we went back to the good old star topology. We run bundles of cables from the switch, so that we can run one cable from each station all the way back to the center point. Walls and ceilings and floors are filled with big, thick, expensive bundles of ethernet, because we couldn't figure out how to make buses work well... at layer 1. It's kinda funny actually when you think about it. If you find sad things funny.

In fact, in a bonus fit of insanity, even wifi - the ultimate bus network, right, where literally everybody is sharing the same open-air "bus" - is almost universally used in a mode, called "infrastructure mode," which simulates a giant star topology. If you have two wifi stations connected to the same access point, they don't talk to each other directly, even when they can hear each other just fine. They send a packet to the access point, but addressed to the MAC address of the other node. The access point then bounces it back out to the destination node.

HOLD THE HORSES LET ME JUST REVIEW THAT FOR YOU. There's a little catch there. When node X wants to send to Internet node Z, via IP router Y, via wifi access point A, what does the packet look like? Just to draw a picture, here's what we want to happen:

X -> [wifi] -> A -> [wifi] -> Y -> [internet] -> Z

Z is the IP destination, so obviously the IP destination field has to be Z. Y is the router, which we learned above that we specify by using its ethernet MAC address in the ethernet destination field. But in wifi, X can't just send out a packet to Y, for various reasons (including that they don't know each other's encryption keys). We have to send to A. Where do we put A's address, you might ask?

No problem! 802.11 has a thing called 3-address mode. They add a third ethernet MAC address to every frame, so they can talk about the real ethernet destination, and the intermediate ethernet destination. On top of that, there are bit fields called "to-AP" and "from-AP," which tell you if the packet is going from a station to an AP, or from an AP to a station, respectively. But actually they can both be true at the same time, because that's how you make wifi repeaters (APs send packets to APs).

Speaking of wifi repeaters! If A is a repeater, it has to send back to the base station, B, along the way, which looks like this:

X -> [wifi] -> A -> [wifi-repeater] -> B -> [wifi] -> Y -> [internet] -> Z

X->A uses three-address mode, but A->B has a problem: the ethernet source address is X, and the ethernet destination address is Y, but the packet on the air is actually being sent from A to B; X and Y aren't involved at all. Suffice it to say that there's a thing called 4-address mode, and it works pretty much like you think.

(In 802.11s mesh networks, there's a 6-address mode, and that's about where I gave up trying to understand.)
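
To make the 3- and 4-address business above more concrete, here is a simplified sketch of the 802.11 data-frame header (QoS and mesh fields omitted; this is not a complete or authoritative definition):

    #include <stdint.h>

    /* Simplified 802.11 data-frame header. How the address fields are
     * interpreted depends on the to-AP / from-AP bits in frame_control:
     *   to-AP=1, from-AP=0 (station -> AP):  addr1=AP, addr2=source, addr3=final dest
     *   to-AP=0, from-AP=1 (AP -> station):  addr1=dest, addr2=AP, addr3=original source
     *   to-AP=1, from-AP=1 (4-address mode): addr1=receiver, addr2=transmitter,
     *                                        addr3=final dest, addr4=original source */
    struct ieee80211_hdr {
        uint16_t frame_control;   /* includes the to-AP and from-AP bits */
        uint16_t duration_id;
        uint8_t  addr1[6];
        uint8_t  addr2[6];
        uint8_t  addr3[6];
        uint16_t seq_ctrl;
        uint8_t  addr4[6];        /* only present when to-AP and from-AP are both set */
    } __attribute__((packed));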

Avery, I was promised IPv6, and you haven't even mentioned IPv6

Oh, oops. This post went a bit off the rails, didn't it?

Here's the point of the whole thing. The IETF people, when they were thinking about IPv6, saw this mess getting made - and maybe predicted some of the additional mess that would happen, though I doubt they could have predicted SDN and wifi repeater modes - and they said, hey wait a minute, stop right there. We don't need any of this crap! What if instead the world worked like this?

  • No more physical bus networks (already done!)
  • No more layer 2 internetworks (that's what layer 3 is for)
  • No more broadcasts (layer 2 is always point-to-point, so where would you send the broadcast to? replace it with multicast instead)
  • No more MAC addresses (on a point-to-point network, it's obvious who the sender and receiver are, and you can do multicast using IP addresses)
  • No more ARP and DHCP (no MAC addresses, so no need to map IP addresses to MAC addresses)
  • No more complexity in IP headers (so you can hardware accelerate IP routing)
  • No more IP address shortages (so we can go back to routing big subnets again)
  • No more manual IP address configuration except at the core (and there are so many IP addresses that we can recursively hand out subnets down the tree from there)

Imagine that we lived in such a world: wifi repeaters would just be IPv6 routers. So would wifi access points. So would ethernet switches. So would SDN. ARP storms would be gone. "IGMP snooping bridges" would be gone. Bridging loops would be gone. Every routing problem would be traceroute-able. And best of all, we could drop 12 bytes (source/dest ethernet addresses) from every ethernet packet, and 18 bytes (source/dest/AP addresses) from every wifi packet. Sure, IPv6 adds an extra 24 bytes of address (vs IPv4), but you're dropping 12 bytes of ethernet, so the added overhead is only 12 bytes - pretty comparable to using two 64-bit IP addresses but having to keep the ethernet header. The idea that we could someday drop ethernet addresses helped to justify the oversized IPv6 addresses.

It would have been beautiful. Except for one problem: it never happened.

Requiem for a dream

One person at work put it best: "layers are only ever added, never removed."

All this wonderfulness depended on the ability to start over and throw away the legacy cruft we had built up. And that is, unfortunately, pretty much impossible. Even if IPv6 hits 99% penetration, that doesn't mean we'll be rid of IPv4. And if we're not rid of IPv4, we won't be rid of ethernet addresses, or wifi addresses. And if we have to keep the IEEE 802.3 and 802.11 framing standards, we're never going to save those bytes. So we will always need the "IPv6 neighbour discovery" protocol, which is just a more complicated ARP. Even though we no longer have bus networks, we'll always need some kind of simulator for broadcasts, because that's how ARP works. We'll need to keep running a local DHCP server at home so that our obsolete IPv4 light bulbs keep working. We'll keep needing NAT so that our obsolete IPv4 light bulbs can keep reaching the Internet.

And that's not the worst of it. The worst of it is we still need the infinite abomination that is layer 2 bridging, because of one more mistake the IPv6 team forgot to fix. Unfortunately, while they were blue-skying IPv6 back in the 1990s, they neglected to solve the "mobile IP" problem. As I understand it, the idea was to get IPv6 deployed first - it should only take a few years - and then work on it after IPv4 and MAC addresses had been eliminated, at which time it should be much easier to solve, and meanwhile, nobody really has a "mobile IP" device yet anyway. I mean, what would that even mean, like carrying your laptop around and plugging into a series of one ethernet port after another while you ftp a file? Sounds dumb.

The killer app: mobile IP

Of course, with a couple more decades of history behind us, now we know a few use cases for carrying around a computer - your phone - and letting it plug into one ethernet port, er, wireless access point, after another. We do it all the time. And with LTE, it even mostly works! With wifi, it works sometimes. Good, right?

Not really, because of the Internet's secret shame: all that stuff only works because of layer 2 bridging. Internet routing can't handle mobility - at all. If you move around on an IP network, your IP address changes, and that breaks any connections you have open.

Corporate wifi networks fake it for you, bridging their whole LAN together at layer 2, so that the giant central DHCP server always hands you the same IP address no matter which corporate wifi access point you join, and then gets your packets to you, with at most a few seconds of confusion while the bridge reconfigures. Those newfangled home wifi systems with multiple extenders/repeaters do the same trick. But if you switch from one wifi network to another as you walk down the street - like if there's a "Public Wifi" service in a series of stores - well, too bad. Each of those gives you a new IP address, and each time your IP address changes, you kill all your connections.

LTE tries even harder. You keep your IP address (usually an IPv6 address in the case of mobile networks), even if you travel miles and miles and hop between numerous cell towers. How? Well... they typically just tunnel all your traffic back to a central location, where it all gets bridged together (albeit with lots of firewalling) into one super-gigantic virtual layer 2 LAN. And your connections keep going. At the expense of a ton of complexity, and a truly embarrassing amount of extra latency, which they would really like to fix, but it's almost impossible.

Making mobile IP actually work [1]

So okay, this has been a long story, but I managed to extract it from those IETF people eventually. When we got to this point - the problem of mobile IP - I couldn't help but ask. What went wrong? Why can't we make it work?

The answer, it turns out, is surprisingly simple. The great design flaw was in how the famous "4-tuple" (source ip, source port, destination ip, destination port) was defined. We use the 4-tuple to identify a given TCP or UDP session; if a packet has those four fields the same, then it belongs to a given session, and we can deliver it to whatever socket is handling that session. But the 4-tuple crosses two layers: internetwork (layer 3) and transport (layer 4). If, instead, we had identified sessions using only layer 4 data, then mobile IP would have worked perfectly.

Let's do a quick example. X port 1111 is talking to Y port 80, so it sends a packet with 4-tuple (X,1111,Y,80). The response comes back with (Y,80,X,1111), and the kernel delivers it to the socket that generated the original packet. When X sends more packets tagged (X,1111,Y,80), then Y delivers them all to the same server socket, and so on.

Then, if X hops IP addresses, it gets a new name, say Q. Now it'll start sending packets with (Q,1111,Y,80). Y has no idea what that means, and throws it away. Meanwhile, if Y sends packets tagged (Y,80,X,1111), they get lost, because there is no longer an X to receive them.

Imagine now that we tagged sockets without reference to their IP address. For that to work, we'd need much bigger port numbers (which are currently 16 bits). Let's make them, say, 128 or 256 bits, some kind of unique hash.

Now X sends out packets to Y with tag (uuid,80). Note, the packets themselves still contain the (X,Y) addressing information, down at layer 3 - that's how they get routed to the right machine in the first place. But the kernel doesn't use the layer 3 information to decide which socket to deliver to; it just uses the uuid. The destination port (80 in this case) is only needed to initiate a new session, to identify what service you want to connect to, and can be ignored or left out after that.

For the return direction, Y's kernel caches the fact that packets for (uuid) go to IP address X, which is the address it most recently received (uuid) packets from.

Now imagine that X changes addresses to Q. It still sends out packets tagged with (uuid,80), to IP address Y, but now those packets come from address Q. Machine Y receives the packet, matches it to the socket associated with (uuid), notes that the packets for that socket are now coming from address Q, and updates its cache. Its return packets can now be sent, tagged as (uuid), back to Q instead of X. Everything works! (Modulo some care to prevent connection hijacking by impostors. [2])
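
Here's a minimal sketch of what that uuid-keyed lookup could look like; the session table and function names are invented for illustration and don't correspond to any real kernel API:

    #include <stdint.h>
    #include <string.h>

    struct ipaddr { uint8_t bytes[16]; };     /* v4-mapped or v6, doesn't matter here */

    /* One entry per session: keyed by the 256-bit uuid only, with the peer
     * address cached purely so we know where to send replies. */
    struct session {
        uint8_t       uuid[32];     /* 256-bit session identifier */
        struct ipaddr last_peer;    /* where this uuid was last heard from */
        void         *socket;       /* whatever owns the session */
    };

    #define MAX_SESSIONS 1024
    static struct session table[MAX_SESSIONS];
    static int nsessions;

    /* Deliver an incoming packet: match on uuid, never on the source address,
     * and update the cached peer so replies follow the sender's moves. */
    struct session *deliver(const uint8_t uuid[32], struct ipaddr from)
    {
        for (int i = 0; i < nsessions; i++) {
            if (memcmp(table[i].uuid, uuid, 32) == 0) {
                table[i].last_peer = from;    /* X just became Q? fine. */
                return &table[i];
            }
        }
        return NULL;                          /* unknown session: drop */
    }

Return traffic simply goes to last_peer, which is exactly the cache-update behaviour described above (leaving out, for brevity, the anti-hijacking handshake from footnote [2]).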

There's only one catch: that's not how UDP and TCP work, and it's too late to update them. Updating UDP and TCP would be like updating IPv4 to IPv6; a project that sounded simple, back in the 1990s, but decades later, is less than half accomplished (and the first half was the easy part; the long tail is much harder).

The positive news is we may be able to hack around it with yet another layering violation. If we throw away TCP - it's getting rather old anyway - and instead use QUIC over UDP, then we can just stop using the UDP 4-tuple as a connection identifier at all. Instead, if the UDP port number is the "special mobility layer" port, we unwrap the content, which can be another packet with a proper uuid tag, match it to the right session, and deliver those packets to the right socket.

There's even more good news: the experimental QUIC protocol already, at least in theory, has the right packet structure to work like this. It turns out you need unique session identifiers (keys) anyhow if you want to use stateless packet encryption and authentication, which QUIC does. So, perhaps with not much work, QUIC could support transparent roaming. What a world that would be!

At that point, all we'd have to do is eliminate all remaining UDP and TCP from the Internet, and then we would definitely not need layer 2 bridging anymore, for real this time, and then we could get rid of broadcasts and MAC addresses and SDN and DHCP and all that stuff.

And then the Internet would be elegant again.

[1] Edit 2017-08-16: It turns out that nothing in this section requires IPv6. It would work fine with IPv4 and NAT, even roaming across multiple NATs.

[2] Edit 2017-08-15: Some people asked what "some care to prevent connection hijacking" might look like. There are various ways to do it, but the simplest would be to do something like the SYN-ACK-SYNACK exchange TCP does at connection startup. If Y just trusts the first packet from the new host Q, then it's too easy for any attacker to take over the X->Y connection by simply sending a packet to Y from anywhere on the Internet. (Although it's a bit hard to guess which 256-bit uuid to fill in.) But if Y sends back a cookie that Q must receive and process and send back to Y, that ensures that Q is at least a man-in-the-middle and not just an outside attacker (which is all TCP would guarantee anyway). If you're using an encrypted protocol (like QUIC), the handshake can also be protected by your session key.

Posted Thu Aug 10 11:32:26 2017 Tags:

I just gave $100 to James Damore’s official fundraiser.

Damore, for any of you who have been hiding under a rock, is the guy who wrote a completely sane and reasonable memorandum, objecting on principled and scientific grounds to the assumptions behind “diversity”.

He’s been fired and is, of course, the target of a full-blown SJW rage-mob.

The full version of the memo is here. Note that much of the negative public discussion seems to have been based on redacted versions from which references and charts were omitted.

Please give generously. Because the thought police must be stopped.

Posted Wed Aug 9 04:00:12 2017 Tags:

Six months ago, I wrote Hey, Democrats! We need you to get your act together!, a plea to the opposition to get its act together.

A month ago, a Democratic activist attempting a mass political assassination shot Steve Scalise through the hip. Today, Gallup’s job creation index stands at +37 for July, a record high.

In my previous post I stayed away from values arguments about policy and considered only the practical politics of the Democrats’ positioning. I will continue that here.

In brief: Democrats, when you’re in a hole, stop digging!

How has the Democratic party’s self-destruction been pursued since Trump’s election? Let me count some of the ways…

The new party vice-chair, Keith Ellison, has a notorious history of anti-white racism and anti-Semitism, about which the Republicans are now carefully holding fire; it is certain to be hung around the party’s neck in the 2018 midterms. A few quotes from his days in the Nation of Islam are all it’s going to take. Anyone who doesn’t expect this to tip the balance to the GOP in a couple of flyover-country states is delusional.

At a time when 59% of Americans (including 74% of independent swing voters) favor President Trump’s proposed immigration restrictions, Democrats are doubling down on support for “sanctuary city” laws.

Americans’ trust in mass media has sunk to a new low; in late May, a Republican Congressional candidate won a special election less than 24 hours after body-slamming a reporter. Yet Democratic reliance on media partisans to make its political case has been increasing rather than decreasing since Election Night, often in bizarre and theatrical ways (cue CNN co-host Kathy Griffin’s display of a mock severed head of the President).

Do you want more Trump? Because this is how you get more Trump.

Dammit, Democrats, your country still needs an opposition that is smarter than this! But every time you temporarily abandon one suicidal obsession (like, say, gun control) you seem to latch onto another, like hamstringing the ICE. News flash: even legal immigrants to the U.S. are overwhelmingly in favor of an illegal-immigration crackdown.

Whether that’s good policy by some abstract technocratic measure is not the point here; the point is that you are choosing to fight Trump on an issue where public opinion is already heavily on his side. You can’t win that kind of fight with him; he’s way too good at making you look like out of touch let-them-eat-cake elitists even when you have a case.

You need to reconnect with the Middle Americans that are on Trump’s train. If the election should have taught you anything, it’s that the way to do that does not go through endless establishment-media tirades and celebrity endorsements and moralistic scolding about the deplorability of anyone in a MAGA hat. Yet this behavior is the lesser half of your post-election mistakes.

The greater half is your embrace of radicals advocating and in many cases practicing political violence. BAMN. BLM. Linda Sarsour. Antifa. The voters you lost think that these people and organizations are their enemies, and they’re not wrong, and even if they were wrong the perception is what matters. You can have By Any Means Necessary in your coalition, or you can have the kind of people who attend Rotary meetings in the Midwest. You can’t have both.

Related: Every time Democrats are seen screaming and cursing and acting out in public, Trump wins. It’s no good pointing out that Trump himself is vulgar, boorish, profane, and often infantile in his presentation; his voters think “he fights!” and have already priced in his visible character defects. When you fail to look like the adults in the room, you don’t hurt Trump; instead, you disqualify yourselves from being seen as a better alternative.

But there is a mistake that may be even worse (though subtler) than playing footsie with violent radicals. And that is believing that only your messaging needs to change.

Every time I hear a Democrat saying anything equivalent to “if we’d just gotten our message out better…” I wince. No. You didn’t lose because the people of the majority of the states in the Electoral College failed to understand your program; given the spin of a largely Democratic-leaning establishment media, there is no way they can’t have seen its best possible face in endless repetition.

What you have to process is that they did understand…and rejected it. Continuing to believe that you merely suffered from bad messaging is an excuse that will only prevent the real self-examination you need to go through.

I have watched in vain for any sign of that self-examination. I’ve seen no more than occasional flashes of humility from the Democratic leadership, always rapidly sniffed out by a shrill replay of talking points that are anything but humble.

Given the friction costs of substantive change, you’re running out of time before the 2018 midterms. And, as I began this post by noting, Trump has seized the high ground by actually moving on pocketbook issues. You may not think “regulatory relief” is a big deal, but that and the high-profile commitments to in-U.S. manufacturing by outfits like Foxconn matter a lot to an electorate who has seen way too many blighted small towns and deserted malls.

Finally: in my earlier post I noted that you need to banish the words “racism” and “sexism” from your vocabulary, and take a strap to any Democratic partisan who uses the phrase “angry white man”. I observed that these tags are traps that impede your ability to speak or even think in terms Trump’s base won’t reject as demonizing and toxic. Well…in the ensuing six months, it has been easy to identify two more such labels.

Those are “nationalist” (especially in the compound “white nationalist”) and “alt-right”. As I’ve explained before, when you talk about the alt-right, you create the thing you fear. And in order to win against Trump, you cannot repudiate “nationalism” – that won’t fly, not outside the deepest blue of blue-tribe areas. No. To win against Trump, you need to take nationalism away from him.

Posted Mon Aug 7 15:47:12 2017 Tags:

I’m back from vacation – World Boardgaming Championships, where this year I earned laurels in Ticket To Ride and Terra Mystica.

Catching up on some releases I needed to do:

* Open Adventure 1.3: Only minor bugfixes in this one; it’s pretty stable now. We have 100% coverage in the test suite now, an achievement I’ll probably write about in a future post.

* ascii 1.18: By popular demand, this can now generate a 4×16 as well as a 16×4 table. This is especially useful in conjunction with the new -b option to display binary code points.

* Things Every Hacker Once Knew: With new sections on the slow birth of distributed development and the forgotten history of early bitmap displays.

Posted Wed Aug 2 03:06:30 2017 Tags:

A few days ago, I pushed code for button debouncing into libinput, scheduled for libinput 1.9. What is button debouncing you ask? Well, I'm glad you asked, because otherwise typing this blog post would've been a waste of time :)

Over in Utopia, when you press the button on a device, you get a press event from the hardware. When you release said button, you get a release event from the hardware. Together, they form the button click interaction we have come to learn and love over the last couple of decades. Life is generally merry and the sunshine to rainbow to lollipop ratio is good. Meanwhile, over here in the real world, buttons can be quite dodgy, don't always work like they're supposed to, lollipops are unhealthy and boy, have you seen that sunburn the sunshine gave me? One way buttons may not work is that they can lose contact for a fraction of a second and send release events even though the button is being held down. The device usually detects that the button is still down in the next hardware cycle (~8ms on most devices) and thus sends another button press.

For us, there are not a lot of hints that this is bad hardware besides the timestamps. It's not possible for a user to really release and press a button within 8ms, so we can take this as a signal for dodgy hardware. But at least that's something. All we need to do is ignore the release event (and the bounce press that follows) and only send the release when the button is released properly. This requires timeouts and delaying the event, something we generally want to avoid unless absolutely necessary. So the behaviour libinput has now is button debouncing enabled, but inactive, on all devices. We monitor button release and button press timestamps, but otherwise leave the events as-is, so no delays are introduced. Only if a device sends release/press combos with infeasibly short gaps do we activate button debouncing. Once active, we filter all button release events and instead set a timer. Once the timer expires, we send the button release event. But if at any time before then another button press is detected, the scheduled release is discarded, the button press is filtered, and no event is sent. Thus, we paper over the release/press combo the hardware gives us, and to anyone using libinput it will look like the button was held down for the whole time.
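
Sketched in C (this is only an illustration of the approach described above, not libinput's actual implementation, and the threshold values and helper names are made up):

    #include <stdint.h>

    /* Stand-ins for the surrounding event loop; purely illustrative. */
    extern void send_press(uint64_t t);
    extern void send_release(uint64_t t);
    extern void timer_set(uint64_t ms);
    extern void timer_cancel(void);

    #define BOUNCE_THRESHOLD_MS 25    /* "impossibly fast" release-to-press gap */
    #define DEBOUNCE_TIMEOUT_MS 25    /* how long a release is held back */

    static int      debouncing_active;   /* set once dodgy hardware is detected */
    static int      release_pending;     /* a filtered release waits on the timer */
    static uint64_t last_release_ms;

    void on_button_press(uint64_t now_ms)
    {
        if (release_pending) {
            /* Bounce: cancel the scheduled release and swallow this press,
             * so the client sees one continuous button-down. */
            timer_cancel();
            release_pending = 0;
            return;
        }
        if (now_ms - last_release_ms < BOUNCE_THRESHOLD_MS)
            debouncing_active = 1;        /* hardware is dodgy; start filtering */
        send_press(now_ms);
    }

    void on_button_release(uint64_t now_ms)
    {
        last_release_ms = now_ms;
        if (!debouncing_active) {
            send_release(now_ms);         /* passive mode: no delay introduced */
            return;
        }
        release_pending = 1;              /* hold the release until the timer fires */
        timer_set(DEBOUNCE_TIMEOUT_MS);
    }

    void on_timer_expired(uint64_t now_ms)
    {
        if (release_pending) {
            release_pending = 0;
            send_release(now_ms);         /* no bounce followed; the release was real */
        }
    }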

There's one downside with this approach - the very first button debounce to happen on a device will still trigger an erroneous button release event. It remains to be seen whether this is a problem in real-world scenarios. That's the cost of having it as an auto-enabling feature rather than an explicit configuration option.

If you do have a mouse that suffers from button bouncing, I recommend you try libinput's master branch and file any issues if the debouncing doesn't work as it should. Might as well get any issues fixed before we have a release.

Posted Thu Jul 27 11:52:00 2017 Tags:
An effort to clean up several messes simultaneously. #rng #forwardsecrecy #urandom #cascade #hmac #rekeying #proofs
Posted Sun Jul 23 13:37:46 2017 Tags:
News regarding the SUPERCOP benchmarking system, and more recommendations to NIST. #benchmarking #supercop #nist #pqcrypto
Posted Wed Jul 19 18:15:13 2017 Tags:

This might become a new major section in Things Every Hacker Once Knew, but before I merge it I’m putting it out for comment and criticism.

Nowadays we take for granted a public infrastructure of distributed version control and a lot of practices for distributed teamwork that go with it – including development teams that never physically have to meet. But these tools, and awareness of how to use them, were a long time developing. They replace whole layers of earlier practices that were once general but are now half- or entirely forgotten.

The earliest practice I can identify that was directly ancestral was the DECUS tapes. DECUS was the Digital Equipment Corporation User Group, chartered in 1961. One of its principal activities was circulating magnetic tapes of public-domain software shared by DEC users. The early history of these tapes is not well-documented, but the habit was well in place by 1976.

One trace of the DECUS tapes seems to be the README convention. While it entered the Unix world through USENET in the early 1980s, it seems to have spread there from DECUS tapes. The DECUS tapes begat the USENET source-code groups, which were the incubator of the practices that later became “open source”. Unix hackers used to watch for interesting new stuff on comp.sources.unix as automatically as they drank their morning coffee.

The DECUS tapes and the USENET sources groups were more of a publishing channel than a collaboration medium, though. Three pieces were missing to fully support that: version control, patching, and forges.

Version control was born in 1972, though SCCS (Source Code Control System) didn’t escape Bell Labs until 1977. The proprietary licensing of SCCS slowed its uptake; one response was the freely reusable RCS (Revision Control System) in 1982.

The first real step towards across-network collaboration was the patch(1) utility in 1984. The concept seems so obvious now that even hackers who predate patch(1) have trouble remembering what it was like when we only knew how to pass around source-code changes as entire altered files. But that’s how it was.

Even with SCCS/RCS/patch the friction costs of distributed development over the Internet were still so high that some years passed before anyone thought to try it seriously. I have looked for, but not found, examples earlier than nethack. This was a roguelike game launched in 1987. Nethack developers passed around whole files – and later patches – by email, sometimes using SCCS or RCS to manage local copies. (I was an early nethack devteam member. I did not at the time understand how groundbreaking what we were doing actually was.)

Distributed development could not really get going until the third major step in version control. That was CVS (Concurrent Versions System) in 1990, the oldest VCS still in wide use at the time of writing in 2017. Though obsolete and now half-forgotten, CVS was the first version-control system to become so ubiquitous that every hacker once knew it. CVS, however, had significant design flaws; it fell out of use rapidly when better alternatives became available.

Between around 1989 and the breakout of mass-market Internet in 1993-1994, fast Internet became available enough to hackers that distributed development in the modern style began to become thinkable. The next major steps were not technical changes but cultural ones.

In 1991 Linus Torvalds announced Linux as a distributed collaborative effort. It is now easy to forget that early Linux development used the same patch-by-email method as nethack – there were no public Linux repositories yet. The idea that there ought to be public repositories as a normal practice for major projects wouldn’t really take hold until after I published “The Cathedral and the Bazaar” in 1997. While CatB was influential in promoting distributed development via shared public repositories, the technical weaknesses of CVS were in hindsight probably an important reason this practice did not become established sooner and faster.

The first dedicated software forge was not spun up until 1999. That was SourceForge, still extant today. At first it supported only CVS, but it sped up the adoption of the (greatly superior) Subversion, launched in 2000 by a group of former CVS developers.

Between 2000 and 2005 Subversion became ubiquitous common knowledge. But in 2005 Linus Torvalds invented git, which would fairly rapidly obsolesce all previous version-control systems and is a thing every hacker now knows.

Questions for reviewers:

(1) Can anyone identify a conscious attempt to organize a distributed development team before nethack (1987)?

(2) Can anyone tell me more about the early history of the DECUS tapes?

(3) What other questions should I be asking?

Posted Wed Jul 19 12:02:10 2017 Tags:

The craft of programming is not a thing easily taught. It’s not so much that the low level details like language syntaxes are difficult to convey, it’s more that (as I’ve written before) “the way of the hacker is a posture of mind”.

The posture of mind is more essential than the details. I only know one way to teach that, and it looks like this…

19:51:23 esr | You know, at some point you should build Open Adventure and play it. For that geek heritage experience, like admiring Classical temple friezes.

ianbruene | note to self: play advent

19:53:56 esr | I actually think this should count as (a very minor) part of your training, though I’m not sure I can fully articulate why. Mumble mumble something mimesis and mindset. It was written by two guys with the mindset of great hackers. If playing that game gets you inside their heads even a little bit, you’ll have gained value…

19:55:08 ianbruene | I already had a flag set to fix the non-human-readable save problem someday, if no one else got to it first. Kind of hard to do that without playing at least *some* of the game.

19:55:15 esr | :-)

19:55:29 ianbruene | (non human readable saves irk me)

19:55:35 esr | An excellently chosen exercise, apprentice!

19:57:57 esr | Since you’ve brought it up, let’s think through some of the tradeoffs.

19:58:16 ianbruene |

19:59:25 esr | You are right, in general, to dislike non-eyeball-friendly save formats, especially for small-scale applications.

19:59:54 esr | But: There are two reasons to like the way it’s done now.

20:00:27 esr | Care to guess at them? You can ask me questions about the code context to develop your answer.

20:00:12 ianbruene | anti-cheat being the obvious one

20:00:48 * | ianbruene pulls up advent page

20:00:51 esr | Right – it’s a low barrier, but not a completely useless one.

20:02:02 esr | That is, it hits a nice sweet spot between “too easily editable” and “putting in extra work for something that in principle can always be circumvented no matter what you do”.

20:02:19 esr | Speed bump.

20:02:31 esr | That’s reason #1. What’s #2?

20:02:49 esr | (Don’t overthink it.)

20:03:16 ianbruene | ok, just seeing the first couple functions the comments mention twiddling a few things to prevent savescumming. falls under anti-cheat I assume?

20:03:24 esr | Yes.

20:03:44 esr | No, there’s something else much more basic.

20:04:29 ianbruene | um…. well restore() is just a memcopy……. that seems almost too simple

20:04:41 esr | You’re warm.

20:04:43 ianbruene | also incredibly fragile

20:05:20 esr | I guess you’re close enough.

20:05:36 esr | Reason #2: IT’S *SIMPLE*.

20:06:17 esr | Not a consideration to be despised. Smart programmers do as little work as possible on unimportant things, so they can save brainpower and effort for important things.

20:06:43 ianbruene | am I right in thinking that this is considered ok because 1. very early: the field hadn’t really developed yet

20:07:07 ianbruene | and 2. no need for interop, or devastating if data lost

20:07:21 ianbruene | unlike a certain *cough* MS Word *cough*

20:07:32 esr | Pretty much correct.

20:07:45 esr | However, note 2 subtle points:

20:08:09 ianbruene | ok, *readjusts utility function for memdump fileformat tradeoff*

20:08:20 esr | 1. Format is rather less fragile than you think (I’ll explain that).

20:08:33 ianbruene | I only knew of the ludicrous example of the MS formats previously

20:09:27 esr | 2. The FORTRAN save/restore code was really nasty and complicated. It doesn’t get as simple as it now is unless you have fread/fwrite and a language with structs.

20:10:07 esr | Now, why this isn’t as bad a choice as you think:

20:10:31 ianbruene | (pre-guess: everything is pre-swizzled)

20:11:56 esr | No. It’s because any processor this is likely to run on uses the same set of struct-member aligment rules, what’s called “self-aligned” members. So padding won’t bite you, just endianness and word size.

20:12:54 ianbruene | *blink* oh…. another win from the intervening steamroller of standardization

20:13:17 esr | Precisely, another steamroller win.

(Editor’s note: ianbruene is aware that at the time the original ADVENT was written, greater diversity in processor architectures meant that the structure member alignment rules were more variable and more difficult to predict. So you took a harder portability hit from using a structure dump as a save format then.)

20:12:19 esr | Wait, I want to give you a pointer

20:12:58 esr | Read this, if you haven’t: http://www.catb.org/esr/structure-packing/

20:13:26 ianbruene | read it before, been quite a while (and didn’t have anything to put it into practice)

20:14:22 esr | So, reread with this save format in mind. Go through the reasoning to satisfy yourself about what I just claimed.

20:14:32 esr | It won’t take long.

20:16:06 esr | Now, this does *not* mean a memory dump would be a good format for anything much more complex than this game state. We’re sort of just below the max-complexity threshold here.

20:16:44 esr | And we do get screwed by endianness and word-size differences.

20:17:39 esr | But…let’s get real, how often are these save files going to move between machines? This is not data with a long service lifetime!
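
(Editor’s note: for readers who have not seen it, this is roughly what the “stupid fread/fwrite” approach looks like. A minimal sketch only; the struct members and file handling below are illustrative placeholders, not Open Adventure’s actual game state or code.)

    #include <stdio.h>

    /* Illustrative only -- not Open Adventure's actual structure. */
    struct game_state {
        long turns;
        long score;
        long location;
        long objects[100];
    };

    /* Save: the whole struct goes to disk in one call... */
    int save_game(const char *path, const struct game_state *g)
    {
        FILE *fp = fopen(path, "wb");
        if (fp == NULL)
            return -1;
        size_t n = fwrite(g, sizeof(*g), 1, fp);
        fclose(fp);
        return n == 1 ? 0 : -1;
    }

    /* ...and restore is the mirror image.  On self-aligned architectures
     * the only portability hazards are endianness and word size. */
    int restore_game(const char *path, struct game_state *g)
    {
        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            return -1;
        size_t n = fread(g, sizeof(*g), 1, fp);
        fclose(fp);
        return n == 1 ? 0 : -1;
    }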

20:19:49 esr | OK. Continuation of exercise:

20:20:28 esr | What’s the simplest way you can think of to design an eyeballable save format?

20:21:14 ianbruene | *thinks* (given that I know nothing of the internal structure of advent)

20:22:45 esr | You don’t need to. Look at the structure definition.

20:23:47 * | ianbruene looking up struct

20:28:34 ianbruene | well *one* simple way of doing it would be to do a (I forget what the format is called) var=value\n format, with the save function being a giant printf of doom and the load function being a giant switch of doom.

20:28:44 ianbruene | I don’t think that is *the* simple one though

20:30:06 esr | That sort of thing is generally called a “keyword/value” format. It is the most obvious choice here. Can you think of a simpler one?

20:30:42 esr | (I’m not sure I can.)

20:31:16 ianbruene | ok, we know the struct “shape”, could arrange for all of type X, all of type Y, etc. to be in contiguous spans. Sequence either hardcoded or using a build time generator for the var names. Hmmmmmmmm….. while it has a certain elegance it seems brittle and complex

20:31:25 esr | Yes. It would be that.

20:31:38 esr | Pretty classic example of “too clever by half”, there.

20:33:35 ianbruene | ok, ignore assuming a shape, it would be *possible* for a code generator to simply look at the struct and create a pair of load/save functions from it, using either the internal names, or special comments in the definition

20:33:59 ianbruene | I don’t think the format itself can get any simpler than key=value though

20:34:12 ianbruene | there isn’t much complex structure in the save.

20:34:21 esr | I think you’re right.

20:34:33 ianbruene | it isn’t a bunch of logical blocks of different rooms and characters

20:35:09 ianbruene | if there were there would be useful tradeoffs in how you grouped things

20:35:45 esr | Good! That was a sound insight, there.

20:34:59 esr | Now, you’ve correctly described a way to implement dump as a giant printf.

20:35:27 esr | Do you as yet know enough C to sketch restore?

20:35:39 ianbruene | *thinks*

20:36:24 ianbruene | ok, the template I’m thinking is similar to the packet handling code for mode 6 (python side). but it is more complicated due to C

20:37:29 ianbruene | read until get a token, slice off the token, feed the token into the Switch of Doom

20:37:47 ianbruene | the SoD sets any vars it gets tokens for

20:38:07 ianbruene | if the file is well formed you get all the data you need

20:39:49 esr | Alas, you’ll find actually doing restore in C is a PITA for a couple reasons. One is that you can’t switch on a string’s content, only its start address – C switch only accepts scalars.

20:40:31 ianbruene | grrrr, so you have a big ugly set of str compares in if statements

20:40:37 esr | Indeed, you’re going to write a big fscking if () with a whole bunch of strcmp() guards.

(Editor’s note: There’s another way to do it, driven from iterating through a table of struct initializers, that would be slightly more elegant but no simpler.)

20:40:38 ianbruene | I forgot about that

20:41:29 ianbruene | this is something where your code style becomes *very* important or it will be an ugly, incomprehensible mess

20:41:45 esr | Yes. Now you begin to see why I went to stupid fread/fwrite and stayed there.
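
(Editor’s note: for concreteness, here is a minimal sketch of the keyword/value alternative under discussion, using a hypothetical three-field struct rather than Open Adventure’s real one: the “giant printf of doom” for saving, and the chain of strcmp() guards that C forces on you for restoring.)

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only -- not Open Adventure's actual structure. */
    struct game_state {
        long turns;
        long score;
        long location;
    };

    /* Save: the "giant printf of doom", one key=value line per field. */
    void save_text(FILE *fp, const struct game_state *g)
    {
        fprintf(fp, "turns=%ld\n", g->turns);
        fprintf(fp, "score=%ld\n", g->score);
        fprintf(fp, "location=%ld\n", g->location);
    }

    /* Restore: C can't switch on strings, so it becomes strcmp() guards. */
    void restore_text(FILE *fp, struct game_state *g)
    {
        char key[64];
        long value;

        while (fscanf(fp, "%63[^=]=%ld\n", key, &value) == 2) {
            if (strcmp(key, "turns") == 0)
                g->turns = value;
            else if (strcmp(key, "score") == 0)
                g->score = value;
            else if (strcmp(key, "location") == 0)
                g->location = value;
            /* unknown keys are silently ignored */
        }
    }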

20:42:31 ianbruene | and the obvious way to do it in something like python (magic introspection to class elements or dict keys) doesn’t work here

20:43:03 esr | Right. Replacing this binary dump with something clean and textual is not a terrible idea, but really only justifies itself as a finger exercise for a trainee, like playing scales to learn an instrument.

20:43:17 ianbruene | I see

20:43:39 ianbruene | hence why you mentioned in the blog that it was very low priority

20:43:46 esr | Right. The absence of introspection is the other lack in C that makes it a PITA.

20:44:17 esr | And you’ve extracted most of the value of the finger exercise by thinking through the design issues.

20:45:02 ianbruene | when you get to do it introspection is a gigantic win, makes it difficult to remember how bad it is when you don’t get it

20:45:08 esr | Yes.

20:46:05 esr | Those of us who started in LISP learned this early. Took forty years for the rest of the world to catch up, and they’re only getting there now.

20:46:40 ianbruene | I have done only the barest toying with lisp, barely even hello world level

20:47:00 ianbruene | but even that (coupled with some reading of a lisp book) changed the way I thought

20:47:35 ianbruene | plenty of times I’ve hit a snag of “this would be *so much easier* if I could do a lisp macro in python”

20:47:52 esr | Indeed.

20:48:33 ianbruene | incidentally, has GvR used lisp at all? the impression I’ve heard is that he doesn’t like the lispy features?

20:49:11 esr | He doesn’t. Back in the late ’90s I practically had to arm-wrestle him into not killing lambdas.

20:49:45 ianbruene | [insert rant against anyone who thinks lambdas are useless]

20:51:37 esr | I think I might edit this dialog into a blog post. Start it with the Heinlein quote about the ideal university: a log with a teacher on one end and a student on the other.

20:51:51 * | ianbruene grins

The foregoing was transcribed from IRC and lightly edited to fix typos, fill out sentence fragments, and complete 80%-articulated ideas we mutually glarked from context. A few exchanges have been slightly reordered.

Posted Mon Jul 17 04:09:19 2017 Tags:

An interesting math question is: for a given number of dimensions, how many different points can be placed in that many dimensions such that the angle formed by every triplet of them is acute? This paper shows a dramatic new result with an explicit construction of (2^(1/2))^d points, which is about 1.41^d. I’ve managed to improve this to (6^(1/5))^d, which is about 1.43^d.

Those of you not familiar with it already should read about dot products before proceeding. The important points for this result are: If the dot product is positive the angle is acute. The dot product across multiple dimensions is the sum of the products for each of the dimensions individually, and if one of the vector lengths in a dot product is very small then the dot product is very small as well.
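
To restate that test concretely (nothing new here, just the criterion spelled out): for a triplet of points a, b, c, the angle at vertex b is acute exactly when

    (a - b) · (c - b) = Σ_i (a_i - b_i)(c_i - b_i) > 0

and that is the check applied to every case below.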

My result is based on a recurrence which adds five dimensions and multiplies the number of points by six. It’s done like so:

Some explanation for this diagram is in order. The points a, b, c, p, q, and r all correspond to the same old point. The leftmost picture is the first and second new dimensions, the middle one the third and fourth, and the right one the fifth. The value ε1 is very small compared to all values which previously appeared and all normal functions on them. The value ε2 is very small compared to ε1 and this continues to ε5. (Just pretend I drew those as epsilons instead of es. My diagramming skills are limited.)

A more numeric way of defining the values for the new dimensions is as follows:

point   1                        2                        3                4                5
a       cos(θ1) * ε1             sin(θ1) * ε1             cos(θ3) * ε5     sin(θ3) * ε5     ε4
b       cos(θ1) * ε1             sin(θ1) * ε1             -cos(θ3) * ε5    -sin(θ3) * ε5    ε4
c       cos(θ1) * ε1 * (1-ε3)    sin(θ1) * ε1 * (1-ε3)    cos(θ2) * ε2     sin(θ2) * ε2     0
p       -cos(θ1) * ε1            -sin(θ1) * ε1            cos(θ5) * ε5     sin(θ5) * ε5     ε4
q       -cos(θ1) * ε1            -sin(θ1) * ε1            -cos(θ5) * ε5    -sin(θ5) * ε5    ε4
r       -cos(θ1) * ε1 * (1-ε3)   -sin(θ1) * ε1 * (1-ε3)   cos(θ4) * ε2     sin(θ4) * ε2     0

Those thetas are all angles. Their exact values don’t matter much, but they need to not be too close to each other compared to ε. Also θ2 and θ4 need to be between 0 and π/2 and not too close to the ends of that range.

In each triplet of points, either all three correspond to different old points, or two correspond to the same old point and the third to a second one, or all three correspond to the same old point. In the case where all three correspond to different old points, their angle will be acute because the positions have only been changed by epsilon. The other cases can be straightforwardly enumerated. Because there are only two types of points, there are only two cases for the single external point to consider, labelled x and y in the diagram above. Because the last value of x can be positive or negative, there are two positions labelled for it.

To check if each angle is acute, the dot product is calculated and checked to see if it’s positive. Rather than calculate these values exactly, it’s sufficient to check what the smallest epsilon each value is multiplied by and whether it’s positive or possibly negative. If the sum of the different dimensions is a positive larger epsilon (that is, one with a smaller subscript) then the dot product is positive and the angle is acute. Because many of the cases are very similar they can be consolidated down, so multiple endpoints are listed in some of the cases of the chart of all cases below. In cases where a value in one set of dimensions overwhelms all values which come after it, those values are skipped.

vertex  end point  end point  1 & 2  3 & 4  5
a       b          x          0      ε5     0
a       b          y          0      ε5     ε4
a       c          x          ε3     ε5
a       c          y          ε3     ε2
a       pqr        xy         ε1
c       ab         x          ε3     ε2
c       ab         y          ε3     ε2
c       pqr        xy         ε1
a       b          cr         0      ε5     ε4
a       b          pq         0      ε5     0
a       c          pq         ε3     ε5
a       pqr        pqr        ε1
c       abpq       bpq        ε3     ε2
c       abpq       r          ε3     ε2

Note that the extra restrictions on θ2 and θ4 are important in the CAY case (where A is the vertex).

In case you’re wondering how I came up with this: I can’t visualize in five dimensions any better than you can, but I’m pretty good at visualizing in three, so I worked out a recurrence which adds three dimensions and multiplies the number of points by three, and which almost works. That’s dimensions three through five here. I used one of the recurrences from the previous work for dimensions one and two, and moved the c and r points ‘down’ in the first two dimensions to fix the case which was broken, specifically CAX.

I’m quite certain this result can be improved on, although it’s hard for my poor little human brain to work out the more complicated cases. My guess is that it trends towards the absolute maximum of the nth root of n, which is attained at n = e (that’s e the base of the natural log, not a miswritten epsilon). I conjecture that this is tight, so no matter how small ε is, (e^(1/e) – ε)^d < acute(d) < (e^(1/e) + ε)^d for sufficiently large d.

Posted Mon Jul 17 02:32:42 2017 Tags:

The Colossal Cave Adventure restoration is pretty much done now. One thing we’re still working on is getting test coverage of the last few corners in the code. Because when you’re up to 99.7% the temptation to push for that last 0.3% is really strong even if the attempt is objectively fairly pointless.

What’s more interesting is the technique one of our guys came up with for getting us above about 85% coverage. After that point it started to get quite difficult to hand-craft test logs to go to the places in the code that still hadn’t been exercised.

But NHorus, aka Petr Vorpaev, is expert at fuzz testing; we’ve been using American Fuzzy Lop, a well-designed, well-documented, and tasteful tool that I highly recommend. And he had an idea.

Want to get a test log that hits a particular line? Insert an abort() call right after it and rebuild. Then unleash the fuzzer. If you’ve fed it a good test corpus that gets somewhere near your target, it will probably not take long for the fuzzer to random-walk into your abort() call and record that log.
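
For illustration, a hypothetical sketch of the trick (rare_branch() here stands in for whatever predicate guards the code you still can’t reach; none of this is from the Adventure code base):

    #include <stdlib.h>

    /* Hypothetical predicate guarding a still-uncovered code path. */
    extern int rare_branch(const char *input);

    void handle(const char *input)
    {
        if (rare_branch(input)) {
            /* ...the code you still need a covering test for... */
            abort();   /* temporary: any fuzz input that reaches this line
                          becomes a crash, so the fuzzer saves its log */
        }
    }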

Then watch your termination times. For a while we’d generally get a result within hours, but we eventually hit a break after which the fuzzer would run for days without result. That knee in the curve is your clue that the fuzzer has done everything it can.

I dub this technique “fuzzbombing”. I think it will generalize well.

Posted Wed Jul 12 18:49:09 2017 Tags:

For the last year or so I have been deliberately experimenting with a psychoactive, nootropic drug.

You have to know me personally (much better than most of my blog audience does) to realize what a surprising admission this is. I’ve been a non-smoking teetotaller since I was old enough to form the decision. I went through college in the 1970s, the heyday of the drug culture, without so much as toking a joint. I have been open with my friends about having near enough relatives with substance-abuse problems that I suspect I have a genetic predisposition to those that I am very wary of triggering. And I have made my disgust at the idea of being controlled by a substance extremely plain.

Nevertheless, I have good reasons for the experiment. The drug, modafinil (perhaps better known by the trade name Provigil) has a number of curious and interesting properties. I’m writing about it because while factual material on effects, toxicology, studies and so forth is easy to find, I have yet to see useful written advice about why and how to use the drug covering any but the narrowest medical applications.

Before I continue, a caveat that may save both your butt and mine. In the U.S., modafinil is a Schedule IV restricted drug, illegal to use without a prescription. I use it legally. I do not – repeat, do not – advise anyone to use modafinil illegally. I judge the legal restriction is absurd – there are lots of over-the-counter drugs that are far more dangerous (ibuprofen will do as an example) – but the law is the law and the drug cops can flatten you without a thought.

Another caveat: Your mileage may vary. This is a field report from one user that is consistent with the clinical studies and other large-scale evidence, but reactions to drugs can be highly idiosyncratic. Proceed with caution and skepticism and self-monitor carefully. You only have one neurochemistry and you won’t like what happens if you break it.

The first thing I should cover is why I get to use this drug legally. Many of you know that I have congenital spastic palsy. Our first weird fact about modafinil is that it is the only drug – indeed, the only medical intervention of any kind – that has improved the motor control of CP patients in controlled trials. This is a more interesting effect than drugs like Baclofen that are merely muscle relaxants.

Yes, modafinil helps me walk better – that’s why I can get it prescribed. I’m not actually very impaired as a normal thing, but it is of medically significant use in the circumstances when I am worst off – mainly, when I’m fatigued or under stress. Modafinil greatly reduces the likelihood that when I am tired or strained I will become extremely clumsy, barge into things, break them, and hurt myself.

Nobody knows how this works. (This is a phrase you will hear a lot in descriptions of modafinil effects.) CP is thought to be actual damage to the brain’s motor control circuitry. For a drug to be able to counteract or override that is bizarre – like observing a broken watch to work better when you pour butter into it – and nobody has any plausible theory about it.

Eppure, si muove – and yet, it moves. It really works. I have no explanation.

Modafinil has a much better known role as a wakefulness and pro-concentration drug. While on it, you will feel much less need or desire to sleep, and your ability to single-focus concentrate will go way, way up. Furthermore, there is no crash when the drug effect dissipates. And clinical studies indicate the drug has very low toxicity and addictive potential.

Nobody knows how this works. Pretty much all other wakefulness drugs exact a heavy price – they’re toxic, addictive, psychotogenic, derange your sleep cycle, and known or suspected to have baleful long-term effects on your neurochemistry even if you don’t have one of those short-term bad things land on you.

How do we know the drug is benign? Because there is actually decades of field experience with modafinil (and adrafinil, a precursor drug that metabolizes into modafinil) in various military organizations. These drugs were first seriously investigated as go pills for pilots and SpecOps personnel on long-duration missions. They proved an effective (and unexpectedly non-injurious) replacement for dexedrine and other amphetamines.

This isn’t to say the military experience didn’t reveal any problems at all. I am reliably informed by a source close to the field military that in a few cases heavy modafinil users developed a kind of manic Superman syndrome – neglecting self-maintenance, going without sleep for months, overdriving themselves and their units into psychological and physical collapse. Later in this report I will suggest some lessons to be drawn from these incidents.

The things I read about the military experience were a major factor in convincing me to try modafinil. Nothing less than decades of toxic/addictive/psychotogenic effects not showing up at a level above statistical noise would have done for me, personally; I’m extreeeemely protective of my gray matter. I didn’t know of the Superman-syndrome outliers when I began using it; had I known, I would have found it obvious that this is not a problem with modafinil itself but with stupid people interpreting the drug as a license to be stupider.

I should also note that there have been a very few (as in about 5 cases per million) reports of a very nasty necrotic skin disorder called “Stevens-Johnson syndrome” being brought on by modafinil. But this also occurs as an equally rare side effect of a wide range of other drugs, and the incidence pattern suggests to me that the victims have a rare karyotype that responds badly to all of them. Just as stupid people gonna stupid, fragile people gonna fragile. This means less than meets the eye.

(After writing the above I learned that my conjecture is correct. There’s a list of SNPs associated with susceptibility to Stevens-Johnson. The source also lists classes of other drugs that are likely to trigger it: antibiotics, analgesics, cough and cold medication, NSAIDs, antiepileptics, antigout drugs, cocaine and phenytoin.)

There has been an even tinier collection of reports (as in, countable on the fingers of one hand) of psychotic breaks in people introduced to modafinil with no previous history of mental illness. Again, the incidence pattern creates a strong suspicion that these were fragile people whose neurotransmitter balance could have been messed with by any number of stimuli, and that modafinil was an accidental but not essential cause here.

Earlier I said “you will feel much less need or desire to sleep”, and that is true. However, modafinil does not actually abolish physical fatigue. What it does is (a) reduce your sleep need by about 2/3rds, sometimes more, and (b) sever the link from physical fatigue to drowsiness, distraction, and brain-fog. If you pay attention to your body while on the drug, you will notice that (after a longer time than usual) it is getting sore and clumsy from physical fatigue – but that will present as a sort of neutral mechanical fact that affects nothing about your mental state; your mind will stay clear, sharp, and focused until the dose dissipates.

Now to the last big thing about modafinil: there is clinical evidence of significant increases in IQ while on the drug. To what extent this can be separated from the large boost in ability to single-focus is not clear, and one 2005 study found a boost effect that decreased with increasing IQ. However, recent studies and a 2016 meta-analysis suggest a stronger and more consistent effect than did earlier ones, with significant gains in both executive function and learning capacity. But nobody knows how this works.

I can’t say as much about this from personal experience as I’d like to, because I don’t know how I’d tell if my IQ were elevated. It’s certainly not something one can notice as easily as “Hey! I can really concentrate.” Also, if it’s really true that the effect decreases with increasing baseline IQ, I’d be poorly positioned to notice it.

However: there is practical field evidence that backs up the more positive studies. I am reliably informed that demand for modafinil from STEM students at top universities and people in cognitively-demanding jobs has created a large underground around the drug. If this is true, the drug cops must be practicing benign neglect; to date, modafinil-related criminal charges in the U.S. can be counted on the fingers of one hand and all seem to have been a sort of decorative garnish on more serious indictments.

(Of course, this cannot be relied on to continue. Moral panics have been ginned up on even slighter causes in the past and doubtless will be again.)

I will also say that I notice some differences in my affect while on the drug that are at least consistent with it jacking up my IQ. It makes me feel calm, cerebral, and in control – the exact opposite of the jittery, volatile effect from caffeine or (I’m told) other conventional stimulants. Emotions aren’t gone but they’re a little damped, a little muted. Except, interestingly enough, for my sense of humor; that is fully operational or possibly even enhanced.

Related point: I find the onset bump when the modafinil crosses my blood-brain barrier quite noticeable.

(I’ve heard that one of the commoner street names for modafinil is “zombie”. That makes a lot of sense if you think about how that slight damping of emotional swings is going to register to a person who lives in their emotions most of the time and barely even knows what “calm and cerebral” is like.)

Again, I’m probably not the best person to report on changes in affect. While doing web searches related to this topic I discovered several articles on the theme “ZOMG makes me a different person!”. That’s not my experience at all – rather, modafinil makes me more like me. It chemically pegs my affect to the same place I go naturally when I’m at the top of my game.

And am I actually more productive? Oh hell yes. I can tell by the amount of code and text I get written while on modafinil. Many users report productivity outside their normal range; I don’t get that, but I do get consistent performance at or near my normal peak level for as long as I’m on it.

You may be thinking modafinil sounds too good to be true. You have company; everybody who knows anything about drugs that mess with neurotransmitter balance has the same reaction when they learn the facts. Nobody knows how this works.

Before getting to management strategies, I will report another thing: Some but not all modafinil users develop a tolerance to the drug and require increasing doses to collect the effects they want. Odds of developing tolerance seem, unsurprisingly, to increase with frequency of use.

For the rest of this report, I am going to assume that you are either a U.S. citizen with a narrowly valid medical reason to use modafinil (such as spasticity or narcolepsy) and a legal prescription, or you live in a non-U.S. jurisdiction that does not restrict the drug, or you have other means of legal access. I will further assume that you want to maximize the nootropic and other benefits of modafinil while minimizing the risks.

Let’s inventory the risks:

First: Stevens-Johnson syndrome.

Second: Lifestyle dependency. While the clinical studies suggest very low potential for either physical or psychological addiction, you don’t want to go anywhere near subtler, functional versions of addiction either.

Third: Acquired tolerance requiring increasing doses.

Fourth: Self-damage through ignoring physiological cues partially suppressed by the drug (manic-Superman syndrome would be the extreme example of this).

Now the mitigation methods:

Against Stevens-Johnson, don’t have a fragile karyotype. OK, there’s not much you can do to prevent that. So learn the symptoms of Stevens-Johnson syndrome and if you think they’re developing, stop taking modafinil and see your doctor immediately. One bit of good news is that you only have to pass this gate once – if the syndrome isn’t triggered the first time you take the drug, it isn’t going to be triggered on second and later doses.

To avoid lifestyle dependency, plan your modafinil use around specific, non-recurring, slightly unusual challenges. Like: you need to not be clumsy and fatigued for a particular martial-arts test. Or, you need to put two or three working days’ worth of peak effort into a project all at once.

To avoid tolerance buildup, don’t use it often. I seem to have a steady rate of about one 200mg dose a month. I’m most likely to use it to handle unusual events where I want to be functioning at peak and perhaps expect to get less sleep than normal – SF conventions, for example. I will, as mentioned, also take it before a kung fu test for prophylaxis against palsy effects.

To avoid self-damage, self-monitor. In particular, stay aware of your physical fatigue level. Sometimes when I’m on modafinil and my muscles start getting fatigue-sore after hour 20 or so, I take a hot shower and a short nap to make the muscles happier even though my brain doesn’t need the rest yet.

That’s it, really. The last part, self-monitoring, is I think the most important. The drug will expand the envelope of what you can do; take those gains but treat yourself gently – no need to push the expanded envelope to collapse.

Modafinil is actually a mix of two enantiomers, only one of which is active. Once I use up my last few doses, I will be switching to a variant called armodafinil that is just the purified active enantiomer. I’m told it has a gentler onset and a longer dwell time.

A use for which I can certify it is combating fatigue on long drives. The effect I have seen in this application is so dramatic and benign compared to riskier drugs like amphetamines that I think this is in itself a sufficient argument for making modafinil and its variants over-the-counter drugs rather than prescription – they would rapidly displace much more harmful substances and probably significantly decrease highway fatalities.

The drug also has much to recommend it for medical personnel, search & rescue people, police, and anyone else who has to work odd shifts under potential stress. The calming, anti-jitter effect is significant here and an improvement over large doses of caffeine, which promotes wakefulness without being particularly pro-concentration.

Finally, of course, there’s flow-state maintenance for programmers. Frankly, I don’t understand why steady use of modafinil is not already so dirt-common among people who code for a living that everyone takes it for granted. The pro-concentration effect is hugely helpful for productivity, and after a year of use I have experienced no downside at all, not even the jitters and wakefulness I would expect from deploying caffeine for similar purpose.

Nevertheless, I’m still wary of taking it more often, because I don’t want to develop that lifestyle dependency. On the other hand, I’ve seen a reason I might want to relax about this more as I get older. A recent study out of Italy suggests that modafinil improves centrality of neural function in elderly people, in effect mitigating or even partly reversing the effects of physical senescence on the brain.

Er, so, anti-senescence on top of everything else? Seems way too good to be true. But the positive results keep rolling in. I shall continue experimenting, self-monitoring, and perhaps occasionally reporting on it here.

Posted Mon Jul 10 21:29:11 2017 Tags:

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit/s
  • Average download speed in November 2014 was 22.8Mbit/s
  • Average download speed in November 2016 was 36.2Mbit/s
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit/s
  • Average upload speed in November 2016 was 4.3Mbit/s

So in the last two years, we’ve seen 26% increase in download speed, and 22% increase in upload, bringing us down from 36/37% to 33% over the 8 years. The divergence of download and upload improvements is concerning (I previously assumed they were the same, but we have to design for the lesser of the two for a peer-to-peer system).
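
(For anyone checking the arithmetic, those per-annum figures are compound annual growth rates derived from the measurements above:)

    download, 2014 to 2016: (36.2 / 22.8)^(1/2) - 1 ≈ 26% per annum
    upload,   2014 to 2016: (4.3 / 2.9)^(1/2) - 1 ≈ 22% per annum
    download, 2008 to 2016: (36.2 / 3.6)^(1/8) - 1 ≈ 33% per annum
    upload,   2008 to 2016: (4.3 / 0.43)^(1/8) - 1 ≈ 33% per annum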

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1-2017 report.

  • Annual increase in global average speed, Q1 2015 – Q1 2016: 23%
  • Annual increase in global average speed, Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.
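
(That 19% is the geometric mean of the two annual figures: sqrt(1.23 × 1.15) - 1 ≈ 0.19.)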

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: over the last 9 years the Akamai numbers suggest the US has increased by 19% per annum, and the UK by almost 21%. The gloss seems to be coming off the UK fixed-broadband numbers, but they still show a 22% upload increase over the last two years. Even Australia and the Philippines have managed almost 21%.

Posted Thu Jul 6 10:01:47 2017 Tags:

A G+ follower pointed me at Note on Homesteading the Noosphere by Martin Sústrik. He concludes by saying this:

In short: Labeling open source communities as gift cultures is not helpful. It just muddles the understanding of what’s actually going on. However, given that they are not exchange economies either, they probably deserve a name of their own, say, “reputation culture”.

I’m going to start by saying that I wish I’d seen a lot more criticism this intelligent. It bothers me that in 20 years nobody seems to have refuted or seriously improved on my theories – I see this as a problem, both for the study of hacker culture and in the field of anthropology.

That said, I think Sústrik gets a couple of things wrong here. And I don’t want them to obscure the large thing he’s gotten right.

First (possible) mistake: I have not observed that, as a matter of language, the term “gift culture” is as hard-edged and specific as he thinks it is. There’s a way we could both be right, though – it might be that terminology has shifted since I wrote HtN. Possibly this came about as part of the revival of interest in the concept that I seem to have stimulated.

But: one piece of evidence that anthropologists are still using “gift culture” in the inclusive sense Sústrik criticizes me for employing is that Sústrik himself feels, at the end of the article, that he needs to propose a contrasting term rather than citing one that is already established.

This so far is all about map rather than territory. As a General Semanticist I know better than to get over-invested in it.

Here’s the territory issue: Sústrik is not quite right about expectations of direct reciprocal exchange not being a shaping force. True enough that they aren’t salient at the macrolevel the way they were among the Kwakaka’wakwe. But if I download a piece of open source, and it’s useful to me, and I find a bug in it, I do indeed feel a reciprocal obligation to the project owner (not just an attenuated feeling about the culture in general) to gin up a fix patch if it is at all within my capability to do so – an obligation that rises in proportion to the value of his/her gift.

I should also point out that the cultures Sústrik thinks are paradigmatic for his strict sense of “gift culture” are mixed in the other direction. There is certainly an element of generalized reputation-seeking in the way individual Kwakaka’wakwe discharged their debts. There, and in the New Guinea Highlands, the “big man” is seen to have high status by virtue of his generosity – he overpays, on the material level, to buy reputation.

In the terms Sústrik wants to use, open-source culture is reputation-driven at both macro-level and microlevel, and also sometimes driven by gift reciprocation in his strict sense at microlevel. The macro-level reputation-seeking and micro-level gift reciprocity feed and reinforce each other.

This brings me to the large thing that Sústrik gets right. I think his distinction between “gift” and “reputation” cultures is fruitful – both testable and predictively useful. While I’m still skeptical about it being in general use among anthropologists, I rather hope I’m mistaken about that – better if it were.

Yes, real-world cultures are probably never pure examples of one or the other. But differentiating the mechanisms – and observing that the Kwakaka’wakwe and hacker culture are near opposing ends of the spectrum in how they combine – that is certainly worthwhile.

As a minor point, Sústrik is also quite right about reciprocal licenses being a red herring in this discussion. But I think he has the reason for their irrelevance mostly wrong. The important fact is they’re not mainly intended to regulate in-group behavior; they’re mainly a lever on the behavior of outsiders coming into contact with the hacker culture.

(It was actually my wife Cathy – a pretty sharp-eyed observer herself, and not coincidentally a lawyer – who brought this to my attention.)

Bottom line, however, is that this was high-quality criticism that got its most central point right. In fact, if I were writing HtN today I would use – and argue for – Sústrik’s distinction myself.

Posted Wed Jul 5 18:43:15 2017 Tags:
Figure from the article. CC-BY.
One of the projects I worked on at Karolinska Institutet with Prof. Grafström was the idea of combining transcriptomics data with dose-response data, because we wanted to know if there was a relation between the structures of chemicals (drugs, toxicants, etc.) and how biological systems react to them. Basically, this tests the whole idea behind quantitative structure-activity relationship (QSAR) modeling.

Using data from the Connectivity Map (Cmap, doi:10.1126/science.1132939) and NCI60, we set out to do just that. My role in this work was to explore the actual structure-activity relationship. The Chemistry Development Kit (doi:10.1186/s13321-017-0220-4) was used to calculate molecular descriptors, and we used various machine learning approaches to explore possible regression models. The bottom line was that it is not possible to correlate the chemical structures with the biological activities. We explored the reason and ascribe this to the high diversity of the chemical structures in the Cmap data set. In fact, the chemicals in that study were selected based on chemical diversity. All the details can be found in this new paper.

It's important to note that these findings do not invalidate the QSAR concept; it is just that, very unfortunately for this purpose, the compounds were selected in a way that makes exploration of the idea impossible, by design.

However, using the transcriptomics data and a method developed by Juuso Parkkinen, it is possible to find multivariate patterns. In fact, what we saw is more than is presented in this paper, as we have not yet been able to back further findings with supporting evidence. This paper, however, presents experimental confirmation that predictions based on this component model, coined the Predictive Toxicogenomics Gene Space, actually make sense. Biological interpretation is presented using a variety of bioinformatics analyses, but a full mechanistic description of the components is yet to be developed. My expectation is that we will be able to link these components to key events in biological responses to exposure to toxicants.

 Kohonen, P., Parkkinen, J. A., Willighagen, E. L., Ceder, R., Wennerberg, K., Kaski, S., Grafström, R. C., Jul. 2017. A transcriptomics data-driven gene space accurately predicts liver cytopathology and drug-induced liver injury. Nature Communications 8. 
https://doi.org/10.1038/ncomms15932
Posted Wed Jul 5 09:31:00 2017 Tags:
Replacing your router: Vendor A broken for 10% of users, Vendor B broken for 10%, P(both A and B broken) = 10% × 10% = 1%. Replacing your router (or firmware) almost always fixes your problem.

Adding a wifi extender: Router A working for 90% of users, Router B working for 90%, P(both A and B working) = 90% × 90% = 81%. Adding an additional router almost always makes things worse.

All wireless networks, both LTE and mesh, go down sometimes, but I'm willing to bet that your wifi network is flakier than your phone's LTE connection. At Battlemesh v10, we were all sitting in a room with dozens of experimental misconfigured wifi routers offering open networks that may or may not ever successfully route back to the real Internet. What makes a network reliable or unreliable?

After a few years of messing with this stuff (and being surrounded by tons of engineers working on other distributed systems problems, which turn out to all have similar constraints), I think I can summarize it like this. Distributed systems are more reliable when you can get a service from one node OR another. They get less reliable when a service depends on one node AND another. And the numbers combine multiplicatively, so the more nodes you have, the faster it drops off.
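
To put numbers on that (where p_i is the probability that node i is working, and failures are assumed independent):

    P(service that needs every node)       = p1 × p2 × ... × pn                      (AND: successes multiply, so reliability drops as nodes are added)
    P(service that needs at least one node) = 1 - (1 - p1) × (1 - p2) × ... × (1 - pn)  (OR: failures multiply, so reliability climbs as nodes are added)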

For a non-wireless example, imagine running a web server with a database. If those are on two computers (real or virtual), then your web app goes down if you don't have the web server AND the database server working perfectly. It's inherently less reliable than a system that requires a web server, but does not require a database. Conversely, imagine you arrange for failover between two database servers, so that if one goes down, we switch to the other one. The database is up if the primary server OR the secondary server is working, and that's a lot better. But it's still less reliable than if you didn't need a database server at all.

Let's take that back to wifi. Imagine I have a wifi router from vendor A. Wifi routers usually suck, so for the sake of illustration, let's say it's 90% reliable, and for simplicity, let's define that as "it works great for 90% of customers and has annoying bugs for 10%." 90% of customers who buy a vendor A router will be happy, and then never change it again. 10% will be unhappy, so they buy a new router - one from vendor B. That one also works for 90% of people, but if the bugs are independent, it'll work for a different 90%. What that means is, 90% of the people are now using vendor A, and happy; 90% of 10% are now using vendor B, and happy. That's a 99% happiness rate! Even though both routers are only 90% reliable. It works because everyone has the choice between router A OR router B, so they pick the one that works and throw away the other.

This applies equally well to software (vendor firmware vs openwrt vs tomato) or software versions (people might not upgrade from v1.0 to v2.0 unless v1.0 gave them trouble). In our project, we had a v1 router and a v2 router. v1 worked fine for most people, but not all. When v2 came out, we started giving out v2 routers to all new customers, but also to v1 customers who complained that their v1 router had problems. When we drew a graph of customer satisfaction, it went up right after the v2 release. Sweet! (Especially sweet since the v2 router was my team's project :)). Upgrade them all, right?

Well, no, not necessarily. The problem was we were biasing our statistics: we only upgraded v1 users with problems to v2. We didn't "upgrade" v2 users with problems (of course there were some) to v1. Maybe both routers were only 90% reliable; the story above would have worked just as well in reverse. The same phenomenon explains why some people switch from openwrt to tomato and rave about how much more reliable it is, and vice versa, or Red Hat vs Debian, or Linux vs FreeBSD, etc. This is the "It works for me!" phenomenon in open source; simple probability. You only have an incentive to switch if the thing you have is giving you a problem, right now.

But the flip side of the equation is also true, and that matters a lot for mesh. When you set up multiple routers in a chain, now you depend on router A AND router B to both work properly, or your network is flakey. Wifi is notorious for this: one router accepts connections, but acts weird (eg. doesn't route packets), and clients still latch onto that router, and it ruins it for everyone. As the number of mesh nodes increases, the probability of this happening increases fast.

LTE base stations also have reliability problems, of course - plenty of them. But they usually aren't arranged in a mesh, and a single LTE station usually covers a much larger area, so there are fewer nodes to depend on. Also, each LTE node is typically "too big to fail" - in other words, it will annoy so many people, so quickly, that the phone company will need to fix it fast. A single mesh node being flakey might affect only a smaller region of space, so that everyone passing through that area would be affected, but most of the time, they aren't. That leads to a vague impression of "wifi meshes are flakey and LTE is reliable", even if your own mesh link is working most of the time. It's all a game of statistics.

Solution: the buddy system

Let your friend tell you if you're making an ass of yourself: Router A working for 90%, Router B working for 90%, P(either A or B working) = 1 - (1 - 0.9) × (1 - 0.9) = 99%.

In the last 15 years or so, distributed systems theory and practice have come a long way. We now, mostly, know how to convert an AND situation into an OR situation. If you have a RAID5 array, and one of the disks dies, you take that disk out of circulation so you can replace it before the next one dies. If you have a 200-node nosql database service, you make sure nodes that fail stop getting queries routed to them so that the others can pick up the slack. If one of your web servers gets overloaded running Ruby on Rails bloatware, your load balancers redirect traffic to one of the nodes that's less loaded, until the first server catches up.

So it should be with wifi: if your wifi router is acting weird, it needs to be taken out of circulation until it's fixed.

Unfortunately, it's harder to measure wifi router performance than database or web server performance. A database server can easily test itself; just run a couple of queries and make sure its request socket is up. Since all your web servers are accessible from the Internet, you can have a single "prober" service query them all one by one to make sure they're working, and reboot the ones that stop. But by definition, not all your wifi mesh nodes are accessible via direct wifi link from one place, so a single prober isn't going to work.

Here's my proposal, which I call the "wifi buddy system." The analogy is if you and some friends go to a bar, and you get too drunk, and start acting like a jerk. Because you're too drunk, you don't necessarily know you're acting like a jerk. It can be hard to tell. But you know who can tell? Your friends. Usually even if they're also drunk.

Although by definition, not all your mesh nodes are reachable from one place, you can also say that by definition, every mesh node is reachable by at least one other mesh node. Otherwise it wouldn't be a mesh, and you'd have bigger problems. That gives us a clue for how to fix it. Each mesh node should occasionally try to connect up to one or more nearby nodes, pretending to be an end user, and see if it can route traffic or not. If it can, then great! Tell that node it's doing a great job, keep it up. If not, then bad! Tell that node it had better get back on the wagon. (Strictly speaking, the safest way to implement this is to send only "you're doing great" messages after polling. A node that is broken might not be capable of receiving "you're doing badly" messages. You want a watchdog-like system that resets the node when it doesn't get a "great!" message within a given time limit.)

In a sufficiently dense mesh - where there's always two or more routes between a given pair of nodes - this converts AND behaviour to OR behaviour. Now, adding nodes (ones that can decommission themselves when there's a problem) makes things more reliable instead of less.

That gives meshes an advantage over LTE instead of a disadvantage: LTE has less redundancy. If a base station goes down, a whole area loses coverage and the phone company needs to rush to fix it. If a mesh node goes down, we route around the problem and fix it at our leisure later.

A little bit of math goes a long way!

Not enough for you?

You can see my complete slides (pdf) about consumer wifi meshes (including detailed speaker notes) from Battlemesh v10 in Vienna, or watch my talk on Youtube:

Previously: my talk on wifi data collection and analytics.

Footnote

[1] These so-called "laws" are a special case of more general and thus more useful distributed systems theorems. But this is the Internet, so I picked my one special case and named it after myself. Go ahead, try and stop me.

Posted Wed Jul 5 05:53:32 2017 Tags:

Hello everyone! After many years of using a massively hacked-up version of dcoombs's NITLog page generator to host these pages, I've become, er, concerned, about the security of the ancient version of PHP I was using to host it. I tossed the whole thing out and rewrote it in python. The new thing doesn't really have a name, but let's call it PyNitLog.

My, how the web has moved on in the last 13 years or so! Now we can do all that stuff we used to do, but in a total of 298 lines of actually manageable code, including templates, and with hardly any insane regexes. Most of the credit goes to the python tornado library, which makes it really easy to write small, fast, secure web apps, even if you're super lazy.

Please let me know if anything explodes.

Posted Wed Jul 5 05:52:24 2017 Tags:

I (finally!) merged a patchset to detect palms based on pressure into libinput. This should remove a lot of issues that our users have seen with accidental pointer movement. Palm detection in libinput previously used two approaches: disable-while-typing and an edge-based approach. The former simply ignores touchpad events while keyboard events are detected, the latter ignores touches that happen in the edge zones of the touchpad where real interaction is unlikely. Both approaches have the obvious disadvantages: they're timeout- and location-dependent, causing erroneous pointer movements. But their big advantage is that they work even on old touchpads where a lot of other information is unreliable. Touchpads are getting better, so it's time to make use of that.

The new feature is relatively simple: libinput looks at per-touch pressure and if that pressure hits a given threshold, the touch is regarded as palm. Once a palm, that touch will be ignored until touch up. The threshold is intended to be high enough that it cannot easily be hit. At least on the touchpads I have available for testing, I have to go through quite some effort to trigger palm detection with my finger.
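
The logic is simple enough to sketch. This is only an illustration of the behaviour described above, not libinput's actual code, and the threshold value is made up (in practice it comes from the hwdb):

    /* Illustrative sketch, not libinput source. */
    struct touch {
        int pressure;
        int is_palm;   /* sticky until touch up */
    };

    /* Hardware-dependent in reality; hence the per-device hwdb entries. */
    #define PALM_PRESSURE_THRESHOLD 130

    static int touch_is_ignored(struct touch *t)
    {
        if (t->pressure > PALM_PRESSURE_THRESHOLD)
            t->is_palm = 1;                 /* flag once, then keep ignoring this touch */
        return t->is_palm;
    }

    static void touch_up(struct touch *t)
    {
        t->is_palm = 0;                     /* the palm flag is cleared when the touch ends */
    }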

Pressure on touchpads is unfortunately hardware-dependent and we can expect most laptops to have different pressure thresholds. For our users this means that the feature won't immediately work perfectly, it will require a lot of hwdb entries. libinput now ships a libinput measure touchpad-pressure tool to experiment with the various pressure thresholds. This makes it easy to figure out the right pressure threshold and submit a bug report (or patch) for libinput to get the pressure threshold updated. The documentation for this tool is available as part of libinput's online documentation.

TLDR: if libinput seems to misdetect touches as palms, figure out the right threshold with libinput measure touchpad-pressure and file a bug report so we can merge this into our hwdb.

Posted Tue Jul 4 04:43:00 2017 Tags:

It's fortunately more common in Free Software communities today to properly value contributions from non-developers. Historically, though, contributions from developers were often overvalued and contributions from others grossly undervalued. One person trailblazed as (likely) the earliest non-developer contributor to software freedom. His name was Robert J. Chassell — called Bob by his friends and colleagues. Over the weekend, our community lost Bob after a long battle with a degenerative illness.

I am one of the few of my generation in the Free Software community who had the opportunity to know Bob. He was already semi-retired in the late 1990s when I first became involved with Free Software, but he enjoyed giving talks about Free Software and occasionally worked the FSF booths at events where I had begun to volunteer in 1997. He was the first person to offer mentorship to me as I began the long road of becoming a professional software freedom activist.

I regularly credit Bob as the first Executive Director of the FSF. While he technically never held that title, he served as Treasurer for many years and was the de-facto non-technical manager at the FSF for its first decade of existence. One need only read the earliest issues of the GNU's Bulletin to see just a sampling of the plethora of contributions that Bob made to the FSF and Free Software generally.

Bob's primary forte was as a writer and he came to Free Software as a technical writer. Having focused his career on documenting software and how it worked to help users make the most of it, software freedom — the right to improve and modify not only the software, but its documentation as well — was a moral belief that he held strongly. Bob was an early member of the privileged group that now encompasses most people in industrialized society: a non-developer who sees the value in computing and the improvement it can bring to life. However, Bob's realization that users like him (and not just developers) faced detrimental impact from proprietary software remains somewhat rare, even today. Thus, Bob died in a world where he was still unique among non-developers: fighting for software freedom as an essential right for all who use computers.

Bob coined a phrase that I still love to this day. He said once that the job that we must do as activists was “preserve, protect and promote software freedom”. Only a skilled writer such as he could come up with such a perfectly concise alliteration that nevertheless rolls off the tongue without stuttering. Today, I pulled up an email I sent to Bob in November 2006 to tell him that (when Novell made their bizarre software-freedom-unfriendly patent deal with Microsoft) Novell had coopted his language in their FAQ on the matter. Bob wrote back: I am not surprised. You can bet everything [we've ever come up with] will be used against us. Bob's decade-old words are prophetic when I look at the cooption we now face daily in Free Software. I acutely feel the loss of his insight and thoughtfulness.

One of the saddest facts about Bob's illness, Progressive Supranuclear Palsy, is that his voice was quite literally lost many years before we lost him entirely. His illness made it nearly impossible for him to speak. In the late 1990s, I had the pleasure of regularly hearing Bob's voice, when I accompanied Bob to talks and speeches at various conferences. That included the wonderful highlight of his acceptance speech of GNU's 2001 achievement award from the USENIX Association. (I lament that no recordings of any of these talks seem to be available anywhere.) Throughout the early 2000s, I would speak to Bob on the telephone at least once a month; he would offer his sage advice and mentorship in those early years of my professional software freedom career. Losing his voice in our community has been a slow-moving tragedy as his illness has progressed. This weekend, that unique voice was lost to us forever.


Bob, who was born in Bennington, VT on 22 August 1946, died in Great Barrington, MA on 30 June 2017. He is survived by his sister, Karen Ringwald, and several nieces and nephews and their families. A memorial service for Bob will take place at 11 am, July 26, 2017, at The First Congregational Church in Stockbridge, MA.

In the meantime, the best I can suggest is that anyone who would like to posthumously get to know Bob please read (what I believe was) the favorite book that he wrote, An Introduction to Programming in Emacs Lisp. Bob was a huge advocate of non-developers learning “a little bit” of programming — just enough to make their lives easier when they used computers. He used GNU Emacs from its earliest versions and I recall he was absolutely giddy to discover new features, help document them, and teach them to new users. I hope those of you that both already love and use Emacs and those who don't will take a moment to read what Bob had to teach us about his favorite program.

Posted Tue Jul 4 01:02:36 2017 Tags: