
Sometimes you find performance improvements in the simplest places. Last night I improved the time-stepping precision of NTP by a factor of up to a thousand. With a change of less than 20 lines.

The reason I was able to do this is because the NTP code had not caught up to a change in the precision of modern computer clocks. When it was written, you set time with settimeofday(2), which takes a structure containing seconds and microseconds. But modern POSIX-conformant Unixes have a clock_settime(2) which takes a structure containing seconds and nanoseconds.

Internally, NTP represents times to a precision of under a nanosecond. But because the code was built around the old settimeofday(2) call, until last night it rounded to the nearest microsecond too soon, throwing away precision which clock_settime(2) was capable of passing to the system clock.

Once I noticed this it was almost trivial to fix. The round-off only has to happen if your target platform only has settimeofday(2). Moving it into the handler code for that case, and changing one argument-structure declaration, sufficed.
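The shape of the change is easier to see in code than in prose. Here's a minimal sketch of the idea (not the actual NTPsec patch; HAVE_CLOCK_SETTIME is just a stand-in for whatever symbol the build system sets): carry the correction as a struct timespec all the way down, and round to microseconds only inside the legacy fallback.

#include <time.h>
#include <sys/time.h>

/* Sketch only: step the clock from a nanosecond-precision timespec. */
static int step_clock(const struct timespec *ts)
{
#ifdef HAVE_CLOCK_SETTIME       /* assumed config symbol: platform has clock_settime(2) */
    /* Modern POSIX path: nanoseconds pass straight through to the kernel. */
    return clock_settime(CLOCK_REALTIME, ts);
#else
    /* Legacy path: round to microseconds here, at the last possible moment. */
    struct timeval tv;

    tv.tv_sec  = ts->tv_sec;
    tv.tv_usec = (ts->tv_nsec + 500) / 1000;
    if (tv.tv_usec >= 1000000) {
        tv.tv_sec++;
        tv.tv_usec -= 1000000;
    }
    return settimeofday(&tv, NULL);
#endif
}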

Now, in practice this is not going to yield a full thousand-fold improvement in stepping accuracy, because you can’t get clock sources that accurate. (Well, not unless you’re a national time authority and can afford a cesium-fountain clock.) This change only helps to the extent that your time-server is actually delivering corrections with sub-microsecond accuracy; otherwise those extra bits will be plain noise.

You won’t get that accuracy from a plain GPS, which is seriously wobbly in the 100-millisecond range. Nor from a GPS with 1PPS, which delivers around one microsecond accuracy. But plug in a GPS-conditioned oscillator (GPSDO) and now you’re talking. These commonly have accuracy in about the 100-nanosecond range, so we can expect computing in nanoseconds to actually pass through an order of magnitude in stepping precision.

Pretty good for a 20-line change!

What are our lessons for today?

First…roundoff is insidious. You should always compute at the highest available precision and round off, when you have to, at the latest possible moment. I knew this and had a to-do item in my head to change as many instances of the old struct timeval (microsecond precision) to struct timespec (nanosecond precision) as possible. This is the first place in the NTP code I’ve found that it makes a provable difference. I’ll be hunting others.
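The mechanical part of that migration is trivial; the discipline is in doing it everywhere and narrowing back down (with rounding) only at the edges. A tiny sketch of the lossless direction:

#include <sys/time.h>
#include <time.h>

/* Widening a legacy microsecond timestamp to nanoseconds loses nothing. */
static void timeval_to_timespec(const struct timeval *tv, struct timespec *ts)
{
    ts->tv_sec  = tv->tv_sec;
    ts->tv_nsec = tv->tv_usec * 1000;
}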

Second…you really ought to beat the dust out of your code every couple of years even if it’s working. Because APIs will improve on you, and if you settle for a quick least-effort shim you may throw away significant functional gains without realizing it. A factor of ten is not bupkis, and this one was stupid-easy to collect; I just had to be paying attention. Clearly the NTP Classic maintainers were not.

So, this is my first non-security-related functional improvement in NTP. To be followed by many others, I hope.

Posted Fri Oct 9 11:57:27 2015 Tags:

The following is a comment I just filed on FCC Docket 15-170, “Amendment of Parts 0, 1, 2, 15, and 18 of the Commission’s Rules et al.”

Thirty years ago I had a small hand in the design of the Internet. Since then I’ve become a senior member of the informal collegium that maintains key pieces of it. You rely on my code every time you use a browser or a smartphone or an ATM. If you ever ride in a driverless car, the nav system will critically depend on code I wrote, and Google Maps already does. Today I’m deeply involved in fixing Internet time service.

I write to endorse the filings by Dave Taht and Bruce Perens (I gave Dave Taht a bit of editorial help). I’m submitting an independent comment because while I agree with the general thrust of their recommendations I think they may not go far enough.

The present state of router and wireless-access-point firmware is nothing short of a disaster with grave national-security implications. I know of people who say they could use firmware exploits to take down targeted and arbitrarily large swathes of the public Internet. I believe them because I’m pretty sure I could figure out how to do that myself in three weeks or so if I wanted to.

So far we have been lucky. The specialized technical knowledge required for Internet disruption on a massive scale is mostly confined to a small cadre of old hands like Vint Cerf and Dave Taht and myself. *We* don’t want to disrupt the internet; we created it and we love it. But the threat from others not so benign is a real and present danger.

Cyberwarfare and cyberterrorism are on the rise, with no shortage of malefactors ready to employ them. The Communist Chinese are not just a theoretical threat, they have already run major operations like the OPM crack. Add the North Koreans, the Russians, and the Iranians to a minimum list of those who might plausibly acquire the know-how to turn our own infrastructure against us in disastrous ways.

The effect of locking down router and WiFi firmware as these rules contemplate would be to lock irreparably in place the bugs and security vulnerabilities we now have. To those like myself who know or can guess the true extent of those vulnerabilities, this is a terrifying possibility.

I believe there is only one way to avoid a debacle: mandated device upgradeability and mandated open-source licensing for device firmware so that the security and reliability problems can be swarmed over by all the volunteer hands we can recruit. This is an approach proven to work by the Internet ubiquity and high reliability of the Linux operating system.

In these recommendations I go a bit beyond where Taht and Perens are willing to push. Dave Taht is willing to settle for a mandate of *inspectable* source without a guarantee of permission to modify and redistribute; experience with such arrangements warns me that they scale poorly and are usually insufficient. Bruce Perens is willing to settle for permitting/licensing requirements which I believe would be both ineffective and suppressive of large-scale cooperation.

The device vendors aren’t going to solve the security and reliability problem, because they can’t profit from solving it and they’re generally running on thin margins as it is. Thus, volunteer hackers like myself (and thousands of others) are the only alternative.

We have the skill. We have the desire. We have a proud tradition of public service and mutual help. But you have to *let us do it* – and, to the extent it is in your remit, you have to make the device vendors let us do it.

There is precedent. Consider the vital role of radio hams in coordinating disaster relief. The FCC understands that it is in the public interest to support and enable their voluntarism. In an Internetted age, enabling our voluntarism is arguably even more important.

Mandated device upgradeability. Mandated open source for firmware. It’s not just a good idea, it should be the law.

Posted Wed Oct 7 18:58:54 2015 Tags:

One of my followers on G+ asked me to comment on a Vox article, “What no politician wants to admit about gun control”.

I’ve studied the evidence, and I don’t believe the effect of the Australian confiscation on homicides was significant.  You can play games with statistics to make it look that way, but they are games.

As for the major contention of the article, it’s simply wrong.  80% of U.S. crime, including gun violence, is associated with the drug trade and happens in urban areas where civil order has partially or totally collapsed.

Outside those areas, the U.S. looks like Switzerland or Norway – lots of guns, very little crime.  Those huge, peaceful swathes of high-gun-ownership areas show that our problem is not too many guns, it’s too many criminals.

The reason nobody at Vox or anywhere in the punditosphere wants to admit this is because of the racial angle.  The high-crime, high-violence areas of the U.S. are populated by blacks (who, at 12.5% of the population, commit 50% of index violent crimes).  The low-crime, lots-of-guns areas are white.

The predictively correct observation would be that in the U.S., lots of legal weapons owned by white people doesn’t produce high levels of gun violence any more than they do in Switzerland or Norway.   The U.S. has extraordinarily high levels of gun violence because American blacks (and to a lesser extent American Hispanics and other non-white, non-Asian minorities) are extraordinarily lawless.  As in, Third-World tribal badlands levels of lawless.

Nobody wants to be honest about this except a handful of evil scumbag racists (and me).  Thus, the entire policy discussion around U.S. firearms is pretty much fucked from the word go.

What would I do about it? Well, since I’m not an evil scumbag racist and in fact believe all laws and regulations should be absolutely colorblind, I would start by legalizing all drugs.  Then we could watch gun violence drop by 80% and look for the next principal driver.

Posted Tue Oct 6 17:01:18 2015 Tags:

[ A version of this blog post was crossposted on Conservancy's blog. ]

Would software-related scandals, such as Volkswagen's use of proprietary software to lie to emissions inspectors, cease if software freedom were universal? Likely so, as I wrote last week. In a world where regulations mandate distribution of source code for all the software in all devices, and where no one ever cheats on that rule, VW would need means other than software to hide their treachery.

Universal software freedom is my lifelong goal, but I realized years ago that I won't live to see it. I suspect that generations of software users will need to repeatedly rediscover and face the harms of proprietary software before a groundswell of support demands universal software freedom. In the meantime, our community has invented semi-permanent strategies, such as copyleft, to maximize software freedom for users in our current mixed proprietary and Free Software world.

In the world we live in today, software freedom can impact the VW situation only if a few complex conditions are met. Let's consider the hypothetical series of events, in today's real world, that would have been necessary for Open Source and Free Software to have stopped VW immediately.

First, VW would have created a combined or derivative work of software with a copylefted program. While many cars today contain Linux, which is copylefted, I am not aware of any cars that use Linux outside of the on-board entertainment and climate control systems. The VW software was not part of those systems, and VW engineers almost surely wrote the emissions testing mode code from scratch. Even if they included some non-copylefted Open Source or Free Software in it, those licenses don't require disclosure of any source code; VW's ability to conceal its bad actions with non-copylefted code is roughly identical to the situation with the proprietary VW code before us. As a thought experiment, though, let's pretend that VW based the nefarious code on Linux by writing a proprietary Linux module to trick the emissions testing systems.

In that case, VW would have violated the GPL. But that alone is far from enough to ensure anyone would catch VW. Indeed, GPL violations remain very prevalent, and only one organization enforces the GPL for Linux (full disclosure: that's Software Freedom Conservancy, where I work). That organization has such limited enforcement resources (only three people on staff, and enforcement is one of many of our programs) that I suspect years would pass before Conservancy had the resources to pursue the violation; Conservancy currently has hundreds of Linux GPL violations queued for action. Even once opened, most GPL violations take years to resolve. As an example, we are currently enforcing the GPL against one auto manufacturer who has Linux in their car. We've already spent hundreds of hours, and the company to date continues to fail in their GPL compliance efforts. Admittedly, it's highly unlikely that particular violator has a GPL-violating Linux module specifically designed to circumvent automotive regulations. However, after enforcing the GPL in that case for more than two years, I still don't have enough data about their use of Linux to even know which proprietary Linux modules are present — let alone whether those modules are nefarious in any way other than violating Linux's license.

Thus, in today's world, a “software freedom solution” to prevent the VW scandal must meet unbelievable preconditions: (a) VW would have to base all its software on copylefted Open Source and Free Software, and (b) an organization with a mission to enforce copyleft for the public good would require the resources to find the majority of GPL violators and ensure compliance in a timely fashion. This thought experiment quickly shows how much more work remains to advance and defend software freedom. While requirements of source code disclosure, such as those in copyleft licenses, are necessary to assure the benefits of software freedom, they cannot operate unless someone exercises the offers for source and looks at the details.

We live in a world where most of the population accepts proprietary software as legitimate. Even major trade associations in the Open Source community, such as the OpenStack Foundation and the Linux Foundation, laud companies who make proprietary software, as long as they adopt and occasionally contribute to some Free Software too. Currently, it feels like software freedom is winning, because the overwhelming majority in the software industry believe Open Source and Free Software are useful and superior in some circumstances. Furthermore, while I appreciate the aspirational ideal of voluntary Open Source, I find in my work that so many companies, just as VW did, will cheat against important social-good policies unless someone watches and regulates. Mere adoption of Open Source won't work alone; we only yield the valuable results of software freedom if software is copylefted and someone upholds that copyleft.

Indeed, just as it has been since the 1980s, very few people believe that software freedom is of fundamental importance for all software users. Scandals, like VW's use of proprietary software to hide other bad acts, might slowly change opinions, but one scandal is rarely enough to permanently change public opinion. I therefore encourage those who support software freedom to take this incident as inspiration for a stronger stance, and to prepare yourselves for the long haul of software freedom advocacy.

Posted Mon Sep 28 19:00:00 2015 Tags:

I’ve written a tool to assist intrepid code archeologists trying to comprehend the structure of ancient codebases. It’s called ifdex, and it comes with a backstory. Grab your fedora and your bullwhip, we’re going in…

One of the earliest decisions we made on NTPsec was to replace its build system. It had become so difficult to understand and modify that we knew it would be a significant drag on development.

Ancient autoconf builds tend to be crawling horrors and NTP’s is an extreme case – 31KLOC of kludgy macrology that defines enough configuration symbols to make getting a grasp on its interface with the codebase nigh-impossible even when you have a config.h to look at. And that’s a problem when you’re planning large changes!

One of our guys, Amar Takhar, is an expert on the waf build system. When he tentatively suggested moving to that I cheered the idea resoundingly. Months later he was able to land a waf recipe which, while not complete, would at least produce binaries that could be live-tested.

When I say “not complete” I mean that I could tell that there were configuration #defines in the codebase that the waf build never set. Quite a few of them – in some cases fossils that the autoconf build didn’t touch either, but in others … not. And these unreached configuration knobs tended to get lost amidst a bunch of conditional guards looking at #defines set by system headers and the compiler.
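To give a feel for why these were hard to spot, here's a made-up fragment in the style of what you find in the tree (illustrative only, not quoted from the NTP sources): a build-system knob buried under a compiler predefine and a system-header macro, all in one guard stack.

/* Illustrative only -- not quoted from the NTP codebase. */
#include <fcntl.h>      /* O_NONBLOCK comes from a system header */

#ifdef HAVE_KERNEL_PLL                          /* configuration knob from the build system */
# if defined(__GNUC__) && !defined(__clang__)   /* compiler predefines */
#  ifdef O_NONBLOCK                             /* system-header macro */
/* ...code the waf recipe may or may not ever reach... */
#  endif
# endif
#endif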

And we’re not talking a handful or even dozens. I eventually counted over 670 distinct #defines being used in #if/#ifdef/#ifndef/#elif guards – 2430 such guards in all, as A&D regular John D. Bell pointed out in a comment on my last post. I needed some way to examine these and sort them into groups – this is from a system header, that’s a configuration knob, and over there is something else…

So I wrote an analyzer. It parses every compile-time conditional in a code tree for symbols, then reports them either as a bare list or GCC-like file/line error messages that you can step through with Emacs compilation mode.

To reduce noise, it knows about a long list of guard symbols (almost 200 of them) that it should normally ignore – things like the __GNUC__ symbol that GCC predefines, or the O_NONBLOCK macro used by various system calls.

The symbols are divided into groups that you can choose to ignore individually with a command-line option. So, if you want to ignore all standardized POSIX macros in the list but see anything OS-dependent, you can do that.

Another important feature is that you can build your own exclusion lists, with comments. The way I’m exploring the jungle of NTP conditionals is by building a bigger and bigger exclusion list describing the conditional symbols I understand. Eventually (I hope) the report of unknown symbols will shrink to empty. At that point I’ll know what all the configuration knobs are with certainty.

As of now I have knocked out about 300 of them and have 373 to go. That’s most of a week’s work, even with my spiffy new tool. Oh well, nobody ever said code archeology was easy.

Posted Thu Sep 24 19:07:34 2015 Tags:

I’ve been pretty quiet on what’s going on with NTPsec since posting Yes, NTPsec is real and I am involved just about a month ago. But it’s what I’m spending most of my time on, and I have some truly astonishing success to report.

The fast version: in three and a half months of intensive hacking, since the NTP Classic repo was fully converted to Git on 6 June, the codebase is down to 47% of its original size. Live testing on multiple platforms seems to indicate that the codebase is now solid beta quality, mostly needing cosmetic fixes and more testing before we can certify it production-ready.

Here’s the really astonishing number…

In fifteen weeks of intensive hacking, code cleanup, and security-hardening changes, the number of user-visible bugs we introduced was … one (1). Turns out that during one of my code-hardening passes, when I was converting as many flag variables as possible to C99 bool so static analyzers would have more type constraint information, I flipped one flag initialization. This produced two different minor symptoms (strange log messages at startup and incorrect drift-statistics logging).
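For anyone wondering how a one-character slip like that survives, here's a hypothetical reconstruction (not the actual diff; the flag name is invented):

#include <stdbool.h>
#include <stdio.h>

/* Before: old-style integer flag, zero meaning "off".
 *     int stats_enabled = 0;
 * After the C99 bool conversion -- one moment of inattention flips the default: */
static bool stats_enabled = true;   /* should have been false */

int main(void)
{
    if (stats_enabled)
        printf("logging drift statistics\n");   /* hence the two odd symptoms */
    return 0;
}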

Live testing revealed two other bugs, one of which turned out to be a build-system issue and the other some kind of Linux toolchain problem with glibc or pthreads that doesn’t show up under FreeBSD, so it doesn’t really count.

Oh, and that build system bug? Happened while we were reducing the build from 31KLOC of hideous impacted autotools cruft to a waf recipe that runs at least an order of magnitude faster and comes in at a whole 900 lines. Including the build engine.

For those of you who aren’t programmers, just two iatrogenic bugs after fifteen weeks of hacking on a 227-thousand-line codebase full of gnarly legacy crud from as far back as the 1980s – and 31KLOC more of autotools hair – is a jaw-droppingly low, you’ve-got-to-be-kidding-me, this-never-happens error rate.

This is the point at which I would normally make self-deprecating noises about how good the other people on my team are, especially because in the last week and a half they really have been. But for complicated and unfortunate reasons I won’t go into, during most of that period I was coding effectively alone. Not by choice.

Is it bragging when I say I didn’t really know I was that good? I mean, I thought I might be, and I’ve pulled off some remarkable things before, and I told my friends I felt like I was doing the best work of my life on this project, but looking at those numbers leaves me feeling oddly humbled. I wonder if I’ll ever achieve this kind of sustained performance again.

The August release announcement was way premature (see complicated and unfortunate reasons I won’t go into, above). But. Two days ago I told the new project manager – another A&D regular, Mark Atwood – that, speaking as architect and lead coder, I saw us as being one blocker bug and a bunch of cosmetic stuff from a release I’d be happy to ship. And yesterday the blocker got nailed.

I think what we have now is actually damn good code – maybe still a bit overengineered in spots, but that’s forgivable if you know the history. Mostly what it needed was to have thirty years of accumulated cruft chiseled off of it – at times it was such an archeological dig that I felt like I ought to be coding with a fedora on my head and a bullwhip in hand. Once I get replicable end-to-end testing in place the way GPSD has, it will be code you can bet your civilizational infrastructure on. Which is good, because you probably are going to be doing exactly that.

I need to highlight one decision we made early on and how much it has paid off. We decided to code to a POSIX.1-2001/C99 baseline and ruthlessly throw out support for legacy OSes that didn’t meet that. Partly this was informed by my experience with GPSD, from which I tossed out all the legacy-Unix porting shims in 2011 and never got a this-doesn’t-port complaint even once afterwards – which might impress you more if you knew how many weird-ass embedded deployments GPSD has. Tanks, robot submarines, you name it…

I thought that commitment would allow us to chisel off 20% or so of the bulk of the code, maybe 25% if we were lucky.

This morning it was up to 53%! And we’re not done. If reports we’ve been hearing of good POSIX conformance in current Windows are accurate, we may soon have a working Windows port and be able to drop most of another 6 KLOC.

(No, I won’t be doing the Windows port. It’ll be Chris Johns of the RTEMS project behind that, most likely.)

I don’t have a release date yet. But we are starting to reach out to developers who were not on the original rescue team. Daniel Franke will probably be the first to get commit rights. Public read-only access to the project repo will probably be made available some time before we ship 1.0.

Why didn’t we open up sooner? I’m just going to say “politics” and leave it at that. There were good reasons. Not pleasant ones, but good ones – and don’t ask because I’m not gonna talk about it.

Finally, a big shout-out to the Core Infrastructure Initiative and the Linux Foundation, who are as of about a month ago actually (gasp!) paying me to work on NTPsec. Not enough that I don’t still have some money worries, because Cathy is still among the victims-of-Obamacare unemployed, but enough to help. If you want to help and you haven’t already, there’s my Patreon page.

I have some big plans and the means to make them happen. The next six months should be good.

Posted Wed Sep 23 11:44:22 2015 Tags:

The issue of software freedom is, not surprisingly, not mentioned in the mainstream coverage of Volkswagen's recent use of proprietary software to circumvent important regulations that exist for the public good. Given that Volkswagen is an upstream contributor to Linux, it's highly likely that Volkswagen vehicles have Linux in them.

Thus, we have a wonderful example of how much we sacrifice at the altar of “Linux adoption”. While I'm glad for some Free Software to appear in products rather than none, I also believe that, too often, our community happily accepts the idea that we should gratefully laud a company that includes a bit of Free Software in its product, and gives a little code back, even if most of what they do is proprietary software.

In this example, a company poisoned people and our environment with out-of-compliance greenhouse gas emissions, and hid their tracks behind proprietary software. IIUC, the EPA had to use an (almost literal) analog hole to catch these scoundrels.

It's not that I'm going to argue that end users should modify the software that verifies emissions standards. But if end users could extract these binaries from the physical device, recompile the source, and verify the binaries match, someone would have discovered this problem immediately when the models drove off the lot.

So, why does no one demand this? To me, this feels like Diebold and voting machines all over again. So tell me, voters' rights advocates who claimed proprietary software was fine as long as you could get voter-verified paper records: how are we going to “paper verify” our emissions testing?

Software freedom is the only solution to problems that proprietary software creates. Sadly, opposition to software freedom is so strong, nearly everyone will desperately try every other (failing) solution first.

Posted Wed Sep 23 02:00:00 2015 Tags:

As promised, we now have a version of our IDE powered by Roslyn, Microsoft's open-sourced C# compiler as a service.

When we did the port we found various leaks in the IDE that were made worse by Roslyn, so we decided to take the time and fix those leaks, and optimize our use of Roslyn.

Next Steps

We want to get your feedback on how well it works, and we want to hear about any problems you are running into. Once we feel that there are no regressions, we will make this part of the default IDE.

While Roslyn is very powerful, this power comes with a memory consumption price tag. The Roslyn edition of Xamarin Studio will use more memory.

We are working to reduce Roslyn's and Xamarin Studio's memory usage in future versions.

Posted Mon Sep 21 22:17:12 2015 Tags:

shimulate, vt.: To insert a shim into code so it simulates a desired standardized ANSI/POSIX facility under a deficient operating system. First used of implementing clock_gettime(2) under Mac OS X, in the commit log of ntpsec.
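For the curious, a shim of that kind looks roughly like this (a minimal sketch, not the actual ntpsec commit; Mac OS X at the time lacked clock_gettime(2), so the shim fakes the nanosecond interface on top of gettimeofday(2) and loses the extra precision):

#include <sys/time.h>
#include <time.h>

#ifndef CLOCK_REALTIME          /* no clock_gettime(2) on this platform */
#define CLOCK_REALTIME 0
typedef int clockid_t;

/* Shimulated clock_gettime: nanosecond interface, microsecond reality. */
static int clock_gettime(clockid_t clk_id, struct timespec *ts)
{
    struct timeval tv;

    (void)clk_id;               /* only CLOCK_REALTIME is emulated here */
    if (gettimeofday(&tv, NULL) != 0)
        return -1;
    ts->tv_sec  = tv.tv_sec;
    ts->tv_nsec = tv.tv_usec * 1000;
    return 0;
}
#endif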

I checked first use by Googling.

Posted Thu Sep 17 21:11:03 2015 Tags:

[ This post was cross-posted on Conservancy's blog. ]

In this post, I discuss one example of how a choice for software freedom can cause many strange problems that others will dismiss. My goal here is to explain in gory detail how proprietary software biases in the computing world continue to grow, notwithstanding Open Source ballyhoo.

Two decades ago, nearly every company, organization, entity, and tech-minded individual ran their own email server. Generally speaking, even back then, nearly all the software for both MTAs and MUAs was Free Software [0]. MTAs are the mail transport agents — the complex software that moves email around from one Internet domain to another. MUAs are the mail user agents, sometimes called mail clients — the local programs with which users manipulate their own email.

I've run my own MTA since around 1993: initially with sendmail, then with exim for a while, and with Postfix since 1999 or so. Also, everywhere I've worked throughout my entire career since 1995, I've either been in charge of — or been the manager of the person in charge of — the MTA installation for the organization where I worked. In all cases, that MTA has always been Free Software, of course.

However, the world of email has changed drastically during that period. The most notable change in the email world is the influx of massive amounts of spam, which has been used as an excuse to implement another disturbing change. Slowly but surely, email service — both the MTA and the MUA — has been outsourced for most organizations. Specifically, either (a) organizations run proprietary software on their own computers to deal with email and/or (b) people pay a third party to run proprietary and/or trade-secret software on their behalf to handle the email services. Email, generally speaking, isn't handled by Free Software all that much anymore.

This situation became acutely apparent to me earlier this month when Conservancy moved its email server. I had plenty of warning that the move was needed [1], and I'd set up a test site on the new server. We sent and received some of our email for months (mostly mailing list traffic) using that server, configured with a different domain. When the shut-off day came, I moved Conservancy's email over officially. All looked good: I had a current Debian, with a new version of Postfix and Dovecot on a speedier host, and with better spam protection settings in Postfix and better spam filtering with a newer version of SpamAssassin. All was going great, thanks to all those great Free Software projects — until the proprietary software vendors threw a spanner in our works.

For reasons that we'll never determine for sure [2], the IPv4 number that our new hosting provider gave us was already listed on many spam blacklists. I won't debate the validity of various blacklists here, but the fact is, for nearly every public-facing, pure-blacklist-only service, delisting is straightforward, takes about 24 hours, and requires at most answering some basic questions about your domain name and answering a captcha-like challenge. These services, even though some are quite dubious, are not the center of my complaint.

The real peril comes from third-party email hosting companies. These companies have arbitrary, non-public blacklisting rules. More importantly, they are not merely blacklist maintainers; they are MTA (and in some cases, even MUA) providers who sell their proprietary and/or trade-secret hosted solutions as a package to customers. Years ago, the idea of giving up that much control of what happens to your own email would have been considered unbelievable. Today, it's commonplace.

And herein lies the fact that is obvious to most software freedom advocates but indiscernible to most email users. As a Free Software user, with your own MTA on your own machine, your software only functions if everyone else respects your right to run that software yourself. Furthermore, if the people you want to email are fully removed from their hosting service, they won't realize or understand that their hosting site might block your emails. These companies have their customers fully manipulated to oppose your software freedom. In other words, you can't appeal to those customers (the people you want to email), because you're likely the only person to ever raise this issue with them (i.e., unless they know you very well, they'll assume you're crazy). You're left begging to the provider, with whom you have no business relationship, to convince them that their customers want to hear from you. Your voice rings out indecipherable from the spammers who want the same permission to attack their customers.

The upshot for Conservancy? For days, Microsoft told all its customers that Conservancy is a spammer; Microsoft did it so subtly that the customers wouldn't even believe it if we told them. Specifically, every time I or one of my Conservancy colleagues emailed organizations using Microsoft's “Exchange Online”, “Office 365” or similar products to host email for their domain [4], we got the following response:

        Sep  2 23:26:26 pine postfix/smtp[31888]: 27CD6E12B: to=,[]:25, delay=5.6, delays=0.43/0/0.16/5, dsn=5.7.1, status=bounced (host[] said: 550 5.7.1 Service unavailable; Client host [] blocked using FBLW15; To request removal from this list please forward this message to (in reply to RCPT TO command))

Oh, you ask, did you forward your message to the specified address? Of course I did; right away! I got back an email that said:

Hello ,

Thank you for your delisting request SRXNUMBERSID. Your ticket was received on (Sep 01 2015 06:13 PM UTC) and will be responded to within 24 hours.

Once we passed the 24 hour mark with no response, I started looking around for more information. I also saw a suggestion online that calling is the only way to escalate one of those tickets, so I phoned 800-865-9408 and gave V-2JECOD my ticket number, and she told me that I could only raise these issues with the “Mail Flow Team”. She put me on hold for them, and told me that I was number 2 in the queue for them so it should be a few minutes. I waited on hold for just under six hours. I finally reached a helpful representative, who said the ticket was at the lowest level of escalation available (he hinted that it would take weeks to resolve at that level, which is consistent with other comments about this problem I've seen online). The fellow on the phone agreed to escalate it to the highest priority available, and said that within four hours, Conservancy should be delisted. Thus, ultimately, I did resolve these issues after about 72 hours. But I'd spent about 15 hours all told researching various blacklists, email hosting companies, and their procedures [3], and that was after I'd already carefully configured our MTA and DNS to be very RFC-compliant (which is complicated and confusing, but absolutely essential to stay off these blacklists once you're off).

Admittedly, this sounds like a standard Kafkaesque experience with a large company that almost everyone in post-modern society has experienced. However, it's different in one key way: I had to convince Microsoft to allow me to communicate with their customers who are paying Microsoft for proprietary and/or trade-secret software and services, ostensibly to improve efficiency of their communications. Plus, since Microsoft, by the nature of their so-called spam blocking, doesn't inform their customers whom they've blocked, I and my colleagues would have just sounded crazy if we'd asked our contacts to call their provider instead. (I actually considered this, and realized that we might negatively impact relationships with professional contacts.)

These problems do reduce email software freedom by network effects. Most people rely on third-party proprietary email software from Google, Microsoft, Barracuda, or others. Therefore, most people don't exercise any software freedom regarding email services. As exercising software freedom for email slowly becomes rarer and rarer (rather than the norm it once was), society slowly but surely pegs those who do exercise software freedom as “random crazy people”.

There are a few companies who are seeking to do email hosting in a way that respects your software freedom. The real test of such companies is if someone technically minded can get the same software configured on their own systems, and have it work the same way. Yet, in most cases, you go to one of these companies' Github pages and find a bunch of stuff pushed public, but limited information on how to configure it so that it functions the same way the hosted service does. RMS wrote years ago that Free Software cannot properly succeed without Free Documentation, and in many of these hosting cases, the hosting company is using fully upstreamed Free Software but has configured the software in a way that is difficult to stumble upon by oneself. (For that reason, I'm committing to writing up tutorials on how Conservancy configured our mail server, so at least I'll be part of the solution instead of part of the problem.)

BTW, as I dealt with all this, I couldn't help but think of John Gilmore's activism efforts regarding open mail relays. While I don't agree with all of John's positions on this, his fundamental position is right: we must oppose companies who think they know better how we should configure our email servers (or on which IP numbers we should run those servers). I'd add a corollary that there's a serious threat to software freedom, at least with regard to email software, if we continue to allow such top-down control of the once beautifully decentralized email system.

The future of software freedom depends on issues like this. Imagine someone who has just learned that they can run their own email server, or bought some Free Software-based plug computing system that purports to be a “home cloud” service with email. There's virtually no chance that such users would bother to figure all this out. They'd see their email blocked, declare the “home cloud” solution useless, and would just get a,, or some other third-party email account. Thus, I predict that software freedom that we once had, for our MTAs and MUAs, will eventually evaporate for everyone except those tiny few who invest the time to understand these complexities and fight the for-profit corporate power that curtails software freedom. Furthermore, that struggle becomes Sisyphean as our numbers dwindle.

Email is the oldest software-centric communication system on the planet. The global email system serves as a canary in the coalmine regarding software freedom and network service freedom issues. Frighteningly, software now controls most of the global communications systems. How long will it be before mobile network providers refuse to terminate PSTN calls or SMS's sent from devices running modified Android firmwares like Replicant? Perhaps those providers, like large email providers, will argue that preventing robocalls (the telephone equivalent of SPAM) necessitates such blocking. Such network effects place so many dystopias on software freedom's horizon.

I don't deny that every day, there is more Free Software existing in the world than has ever existed before — the P.T. Barnums of Open Source have that part right. The part they leave out is that, each day, their corporate backers make it a little more difficult to complete mundane tasks using only Free Software. Open Source wins the battle while software freedom loses the war.

[0] Yes, I'm intimately aware that Elm's license was non-free, and that the software freedom of PINE's license was in question. That's slightly relevant here but mostly orthogonal to this point, because Free Software MUAs were still very common then, and there were (ultimately successful) projects to actively rewrite the ones whose software freedom was in question.

[1] For the last five years, one of Conservancy's Directors Emeriti, Loïc Dachary, has donated an extensive amount of personal time and in-kind donations, providing a Cloud server for Conservancy to host its three key servers, including the email server. The burden of maintaining this for us became too time-consuming (very reasonably), and Loïc asked us to find another provider. I want, BTW, to thank Loïc for his years of volunteer work maintaining infrastructure for us; he provided this service for much longer than we could have hoped! Loïc also gave us plenty of warning that we'd need to move. None of these problems are his fault in the least!

[2] The obvious supposition is that, because IPv4 numbers are so scarce, this particular IP number was likely used previously by a spammer who was shut down.

[3] I of course didn't count the time on phone hold, as I was able to do other work while waiting, but less efficiently because the hold music was very distracting.

[4] If you want to see if someone's domain is a Microsoft customer, see if the MX record for their domain (say, points to

Posted Tue Sep 15 23:30:16 2015 Tags:

Many modern mice have the ability to store profiles, customize button mappings and actions, and switch between several hardware resolutions. A number of those mice are targeted at gamers, but the features are increasingly common in standard mice. Under Linux, support for these devices is spotty, though there are a few projects dedicated to supporting parts of the available device range. [1] [2] [3]

Benjamin Tissoires and I started a new project: libratbag. libratbag is a library to provide a generic interface to these mice, enabling desktop environments to provide configuration tools without having to worry about the device model. As of the time of this writing, we have partial support for the Logitech HID++ 1.0 (G500, G5) and HID++ 2.0 protocols (G303), the Etekcity Scroll Alpha and the Roccat Kone XTD. Thomas H. P. Anderson already added the G5, G9 and the M705.

git clone

The internal architecture is fairly simple: behind the library's API we have a couple of protocol-specific drivers that access the mouse. The drivers match a specific product/vendor ID combination and load the data from the device; the library then exports it to the caller as a struct ratbag_device. Each device has at least one profile, each profile has a number of buttons and at least one resolution. Where possible, the resolutions can be queried and set, and the buttons likewise can be queried and set for different functions. If the hardware supports it, you can map buttons to other buttons, assign macros, or special functions such as DPI/profile switching. The main goal of libratbag is to unify access to the devices so a configuration application doesn't need different libraries per hardware model. Especially short-term, we envision using some of the projects listed above through custom backends.

We're at version 0.1 at the moment, so the API is still subject to change. It looks like this:

#include <libratbag.h>

struct ratbag *ratbag;
struct ratbag_device *device;
struct ratbag_profile *p;
struct ratbag_button *b;
struct ratbag_resolution *r;

ratbag = ratbag_create_context(...);
device = ratbag_device_new_from_udev(ratbag, udev_device);

/* retrieve the first profile */
p = ratbag_device_get_profile(device, 0);

/* retrieve the first resolution setting of the profile */
r = ratbag_profile_get_resolution(p, 0);
printf("The first resolution is: %ddpi @ %dHz\n",
       ratbag_resolution_get_dpi(r),
       ratbag_resolution_get_report_rate(r));


/* retrieve the fourth button */
b = ratbag_profile_get_button(p, 4);

if (ratbag_button_get_action_type(b) == RATBAG_BUTTON_ACTION_TYPE_SPECIAL &&
    ratbag_button_get_special(b) == RATBAG_BUTTON_ACTION_SPECIAL_RESOLUTION_UP)
        printf("button 4 selects next resolution");


For testing and playing around with libratbag, we have a tool called ratbag-command that exposes most of the library:

$ ratbag-command info /dev/input/event8
Device 'BTL Gaming Mouse'
Capabilities: res profile btn-key btn-macros
Number of buttons: 11
Profiles supported: 5
Profile 0 (active)
0: 800x800dpi @ 500Hz
1: 800x800dpi @ 500Hz (active)
2: 2400x2400dpi @ 500Hz
3: 3200x3200dpi @ 500Hz
4: 4000x4000dpi @ 500Hz
5: 8000x8000dpi @ 500Hz
Button: 0 type left is mapped to 'button 1'
Button: 1 type right is mapped to 'button 2'
Button: 2 type middle is mapped to 'button 3'
Button: 3 type extra (forward) is mapped to 'profile up'
Button: 4 type side (backward) is mapped to 'profile down'
Button: 5 type resolution cycle up is mapped to 'resolution cycle up'
Button: 6 type pinkie is mapped to 'macro "": H↓ H↑ E↓ E↑ L↓ L↑ L↓ L↑ O↓ O↑'
Button: 7 type pinkie2 is mapped to 'macro "foo": F↓ F↑ O↓ O↑ O↓ O↑'
Button: 8 type wheel up is mapped to 'wheel up'
Button: 9 type wheel down is mapped to 'wheel down'
Button: 10 type unknown is mapped to 'none'
Profile 1

And to toggle/query the various settings on the device:

$ ratbag-command dpi set 400 /dev/input/event8
$ ratbag-command profile 1 resolution 3 dpi set 800 /dev/input/event8
$ ratbag-command profile 0 button 4 set action special doubleclick

libratbag is in a very early state of development. There are a bunch of FIXMEs in the code, the hardware support is still spotty, and we'll appreciate any help we can get, especially with the hardware driver backends. There's a TODO in the repo for some things that we already know need changing. Feel free to browse the repo on github and drop us some patches.

Eventually we want this to be integrated into the desktop environments, either in the respective control panels or in a standalone application. libratbag already provides SVGs for some devices we support but we'll need some designer input for the actual application. Again, any help you want to provide here will be much appreciated.

Posted Tue Sep 15 23:10:00 2015 Tags:

Who are the programmers who are famous for doing programmer things?

I’m wondering about this because my wife Cathy asked me a simple question last night, and I realized I didn’t have an answer to it. “Are you,” she asked, “the most famous programmer in the world?”

This was a question which I had, believe it or not, never thought about before. But it’s a reasonable one to ask, given recent evidence – notably, the unexpected success of my Patreon page. This is relevant because Patreon is mainly an arts-funding site – it’s clearly not designed for or by techies.

There are a couple of obvious ways to get this question wrong. One is to point at the likes of, say, Donald Knuth – towering reputation among programmers, but no visibility outside CS and math academia.

Another is to point at programmers who are famous, but not for being programmers. Bill Gates, for example, is famous for running Microsoft, not for code he’s written, or (one acceptable metalevel up) anything he’s written about code.

A third error would be to point at people like Kevin Mitnick or Aaron Swartz, or anyone else who’s famous because of their role in a high-profile cracking incident.

My immediate reaction was to try to think of other programmers who are both famous for doing programmer things and have name recognition outside the population of programmers itself. So…the first person I thought of was actually John Carmack.

“What?” I hear you say. “What about Richard Stallman? Or Linus Torvalds?” Good question. But I think the insider perspective of hackers and programmers is unhelpful here. Yes, these guys are heroes to us…but Carmack has fame for writing code among people who have never written a line of code in their lives, because of Doom. So do I, for different reasons – I’ve even had my photo in People magazine with a keyboard in my hand.

Mind you, it’s not important to me whether I’m the world’s most famous programmer – fame is merely an instrument I picked up for tactical reasons and have since discovered is pretty adhesive even when you no longer want it. But my anthropologist head finds the more general question interesting. Besides, I’d like to have a better answer to give my wife.

Who are the programmers who are famous as programmers? How did they get that way? Are there any useful generalizations to be made about the group? Discuss in comments.

Posted Tue Sep 15 13:10:29 2015 Tags:

(This copies a comment I left on Derek Lowe’s blog at Science Magazine.)

I was the foundational theorist of open-source software development back in the 1990s, and have received a request to respond to your post on open-source pharma.

Is there misplaced idealism and a certain amount of wishful thinking in the open-source pharma movement? Probably. Something I often find myself pointing out to my more eager followers is that atoms are not bits; atoms are heavy, which means there are significant limiting factors of production other than human attention, and a corresponding problem of capital costs that is difficult to make go away. And I do find people who get all enthusiastic and ignore economics rather embarrassing.

On the other hand, even when that idealism is irrational it is often a useful corrective against equally irrational territoriality. I have observed that corporations have a strong, systemic hunker-down tendency to overprotect their IP, overestimating the amount of secrecy rent they can collect and underestimating the cost savings and additional options generated by going open.

I doubt pharma companies are any exception to this; when you say “the people who are actually spending their own cash to do it have somehow failed to realize any of these savings, because Proprietary” as if it’s credulous nonsense, my answer is “Yes. Yes, in fact, this actually happens everywhere”.

Thus, when I have influence I try to moderate the zeal but not suppress it, hoping that the naive idealists and the reflexive hunker-downers will more or less neutralize each other. It would be better if everybody just did sound praxeology, but human beings are not in general very good at that. Semi-tribalized meme wars fueled by emotional idealism seem to be how we roll as a species. People who want to change the world have to learn to work with human beings as they are, not as we’d like them to be.

If you’re not inclined to sign up with either side, I suggest pragmatically keeping your eye on the things the open-source culture does well and asking if those technologies and habits of thought can be useful in drug discovery. Myself, I think the long-term impact of open data interchange formats and public, cooperatively-maintained registries of pre-competitive data could be huge and is certainly worth serious investment and exploration even in the most selfish ROI terms of every party involved.

The idealists may sound a little woolly at times, but at least they understand this possibility and have the cultural capital to realize it – that part really is software.

Then…we see what we can learn. Once that part of the process has been de-territorialized, options to do analogous things at other places in the pipeline may become more obvious.

P.S: I’ve been a huge fan of your “Things I Won’t Work With” posts. More, please?

Posted Tue Sep 8 17:29:12 2015 Tags:

This past June, Apple announced that WatchOS 2 applications would have to be submitted using LLVM BitCode. The idea being that Apple could optimize your code as new optimizations are developed or new CPU features are introduced, and users would reap the benefits without requiring the developer to resubmit their applications.

BitCode is a serialized version of the low-level intermediate representation used by LLVM.

WatchOS 2 requires pure BitCode to be submitted. That is, BitCode that does not contain any machine code blobs. iOS supports mixed mode BitCode, that is, BitCode that contains both the LLVM intermediate representation code, and blobs of machine code.

While Mono has had an LLVM backend for a long time, generating pure BitCode posed a couple of challenges for us.

First, Mono's LLVM backend does not cover all the generated code. There were some corner cases that we handled with Mono's old code generator. Also, Mono uses hand-written assembly language code in various places (lots of small optimizations involving generics code sharing, method dispatch and other things like that). This poses a problem for WatchOS.

Secondly, Mono uses a modified version of LLVM that adds support for many .NET idioms. In particular, our changes to LLVM produce the necessary information to support .NET-style exception handling [1].

We spent the summer adapting Mono to produce vanilla LLVM BitCode. This includes removing our hand-tuned machine code, as well as devising a new system for exception handling that works in this context. Sadly, the exception handling is not as efficient as the one that we got with our modified LLVM.

Hopefully, we will be able to upstream our changes for better exception handling for .NET-like languages to LLVM in the future and get some of the performance back.


[1] Vanilla LLVM exception support requires exceptions to be explicit. In .NET some exceptions happen implicitly, for example, when dereferencing a null pointer, or dividing by zero.

Posted Wed Sep 2 22:32:39 2015 Tags:

I created a Patreon page just before leaving for vacation on 2 Aug. The background to this is that while I’m now getting some regular bucks for working on NTPsec, it’s not a lot. Royalties from my books have been dwindling and my wife Cathy isn’t making all that much from legal contract gigs that are all she can get since Obamacare costs killed her full-time law job. Add the fact that our eight-year-old car has developed problems that would cost more to fix than its book value, and the house needs a new roof, and it’s looking pretty broke out.

(Yes, we do have some savings and stock. No, I don’t want to tap them, because thank you I do not want to be on the dole or dead of starvation and exposure when I’m really old. And, even if doing so weren’t in conflict with my values, counting on the U.S. government to be able to keep me in enough money for food and shelter in 2035 or so would be deeply stupid.)

Rather to my surprise, the page has attracted a whole 67 patrons and $409.17 in monthly pledges. I wasn’t actually expecting Patreon to yield that much, as it seems to be strongly oriented towards the beret-and-nose-ring crowd. It’ll help, some.

All this is by way of explaining why from now on my new-release announcements will probably be going to Patreon, mostly, rather than here on the blog. They’re the closest I have to the kind of artistic content releases the Patreon base seems to expect, and do seem to actually produce upticks in pledges.

My first month’s Patreon pledges will just about cover the additional 32GB of memory I dropped into the Great Beast in order to work on GCC’s Subversion-to-Git conversion. It’s not going to make much of a dent in (for example) medical insurance, thanks to the statist bastards who’ve driven health-care costs into the stratosphere through their incessant meddling. (They cost my wife her job, too – that’s been psychologically rough on her.)

If you enjoy this blog, please pledge generously. Whether you file the cost under “entertainment” or “keep an infrastructure gnome fed so your civilization will keep working”, a little help from each one of a lot of people can go a long way.

Posted Wed Sep 2 00:49:00 2015 Tags:

There are four major components of Xamarin's platform product: the Android SDK, the iOS SDK, our Xamarin Studio IDE and our Visual Studio extension.

In the past, we used to release each component independently, but last year we realized that developing and testing each component against the other ones was getting too expensive and too slow, and was introducing gratuitous errors.

So we switched to a new style of releases where all the components ship together at the same time. We call these cycles.

We have been tuning the cycle releases. We started with time-based releases on a monthly basis, with the idea that any part of the platform that wanted to be released could catch one of these cycles, or wait for the next cycle if they did not have anything ready.

While the theory was great, the internal dependencies of these components were difficult to break, so our cycles started taking longer and longer.

On top of the cycles, we would always prepare builds for new versions of Android and iOS, so we could do same-day releases of the stacks. These are developed against our current stable cycle release, and shipped when we need to.

We are now switching to feature-based releases. This means that we are now waiting for features to be stable, with long preview periods to ensure that no regressions are introduced.

Because feature-based releases can take as long as needed to ship a feature, we have introduced Service Releases on top of our cycles.

Our Current Releases

To illustrate this scenario, let me show what our current platform looks like.

We released our Cycle 5 to coincide with the Build conference, back on April 29th. This was our last timed release (we call this C5).

Since then we have shipped three service releases which contain important bug fixes and minor features (C5-SR1, SR2 and SR3), with a fourth being cooked in the oven right now (C5-SR4).

During this time, we have issued parallel previews of Android M and iOS 9 support, those are always built on top of the latest stable cycle. Our next iOS 9 preview for example, will be based on the C5-SR4.

We just branched all of our products for the next upgrade to the platform, Cycle 6.

This is the cycle that is based on Mono 4.2.0 and which contains a major upgrade to our Visual Studio support for iOS and plenty of improvements to Xamarin Studio. I will cover some of my favorite features in Cycle 6 in future posts.

Posted Tue Sep 1 15:02:45 2015 Tags:

This is an update on our efforts to upgrade the TLS stack in Mono.

You can skip to the summary at the end if you do not care about the sausage making details.

Currently, TLS is surfaced in a few places in the .NET APIs:

  • By the SslStream class, which is a general-purpose class that can be used to turn any bidirectional stream into a TLS-powered stream. This class is what currently powers the web client in Mono.
  • By the HttpWebRequest class, which provides .NET's HTTP client. This in turn is the foundation for the modern HttpClient, for both the WCF and WebServices stacks, as well as for the quick-and-dirty WebClient API.

HttpClient is in particular interesting, as it allows for different transports to be provided for it. The default implementation in .NET 4.5 and Mono today is to use an HttpWebRequest-based implementation. But on Windows 10, the implementation is replaced with one that uses WinRT's HTTP client.

Microsoft is encouraging developers to abandon HttpWebRequest and instead adopt HttpClient, as it is both async-friendly and can use the best available transport on a given platform. More on this in a second.
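
As a rough illustration of the difference, here is a minimal sketch contrasting the two APIs (the URL is a placeholder; error handling omitted):

    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class HttpApiSketch
    {
        // The classic, synchronous HttpWebRequest pattern.
        static string FetchOld(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }

        // The HttpClient pattern: async, and the handler passed to the
        // constructor decides which transport actually carries the request.
        static async Task<string> FetchNewAsync(string url)
        {
            using (var http = new HttpClient(new HttpClientHandler()))
                return await http.GetStringAsync(url);
        }
    }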

Mono's Managed TLS

Mono currently only supports TLS 1.0.

This is the stack that powers SslStream and HttpWebRequest.

Last year we started an effort to bring managed implementations of TLS 1.1 and TLS 1.2. Given how serious security has become and how many holes have been found in existing implementations, we built this with an extensive test suite to check for conformance and to avoid the common exploits that arise from TLS implementation mistakes. This effort is still under development, and you can see where it currently lives in the mono-tls module.

This will give us complete TLS support for the entire stack, but this work is still going to take a few months to audit.

Platform Specific HttpClients

Most uses of TLS today are via the HTTP protocol, not over custom TLS streams. This means that it is more important to get an HTTP client that supports a brand-new TLS stack than it is to provide the SslStream code.

We want to provide native HttpClient handlers for all of Mono's supported platforms: Android, iOS, Mac, Linux, BSD, Unix and Windows.

On iOS: Xamarin.iOS today already ships a native handler, the CFNetworkHandler, which is powered by Apple's CFNetwork stack. In recent years Apple has improved its networking stack, and I now strongly recommend using Paul Betts's fantastic ModernHttpClient, which uses iOS's brand-new NSUrlSession and uses OkHttp on Android.

On Android: in the short term, we recommend adopting ModernHttpClient from Paul Betts (bonus points: the same component works on iOS with no changes). In the long term, we will change the default handler to use the Android Java client.

In both cases, you end up with HTTP 2.0 capable clients for free.
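
The swap itself is a one-liner; a minimal sketch, assuming ModernHttpClient's NativeMessageHandler (the handler type the package exposes):

    using System.Net.Http;
    using System.Threading.Tasks;
    using ModernHttpClient;

    class NativeHandlerSketch
    {
        static async Task<string> FetchAsync(string url)
        {
            // Routes through NSUrlSession on iOS and OkHttp on Android;
            // the calling code is the same HttpClient API on both.
            using (var http = new HttpClient(new NativeMessageHandler()))
                return await http.GetStringAsync(url);
        }
    }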

But this still leaves Linux, Windows and other assorted operating systems without a regular transport.

For those platforms, we will be adopting the CoreFX handlers, which on Unix are powered by the libcurl library.

This still leaves HttpWebRequest and everything built on top of it running on top of our TLS stack.

Bringing Microsoft's SslStream and HttpWebRequest to Mono

While this is not really TLS related, we wanted to bring Microsoft's implementations of those two classes to Mono, as they would fix many odd corner cases in the API, and address limitations in our stack that do not exist in Microsoft's implementation.

But the code is tightly coupled to native Windows APIs which makes the adoption of this code difficult.

We have built an adaptation layer that will allow us to bring Microsoft's code and use Mono's Managed TLS implementation.

SslStream backends

Our original effort focused on a purely managed implementation of TLS because we want to ensure that the TLS stack works the same way on all available platforms. It also means that all of the .NET code that expects to control every knob of your secure connection (pinning certificates or validating your own chains, for example) will work.

That said, in many cases developers do not need these capabilities, and in fact on Xamarin.iOS we cannot even provide the functionality, as the OS does not give users access to the certificate chains.
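
For reference, this is the kind of knob we mean; a minimal certificate-pinning sketch using the classic ServicePointManager callback consulted by HttpWebRequest-based code (the thumbprint is a placeholder):

    using System.Net;

    static class PinningSketch
    {
        public static void InstallPin(string expectedThumbprint)
        {
            // Reject any server certificate whose hash does not match the pin.
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, certificate, chain, sslPolicyErrors) =>
                    certificate != null &&
                    certificate.GetCertHashString() == expectedThumbprint;
        }
    }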

So we are going to be developing at least two separate SslStream implementations. For Apple systems, we will be implementing a version on top of Apple's SSL stack, and for other systems we will be developing an implementation on top of Amazon's new SSL library, or the popular OpenSSL variant of the day.

These have the advantage that we do not need to maintain the code ourselves, that we benefit from third parties doing all the hard security work, and that they will be suitable for most uses.

For those rare cases where you want to handle connections manually, you will have to wait for Mono's new TLS implementation to land.

In Summary

Android, Mac and iOS users can get the latest TLS for HTTP workloads using ModernHttpClient. Mac/iOS users can use the built-in CFNetworkHandler as well.

Soon: OpenSSL/AppleSSL-based transports will be available in Mono (post Mono 4.2).

Soon: Advanced .NET SSL use-case scenarios will be supported with Mono's new mono-tls stack.

Soon: HttpWebRequest and SslStream stacks will be replaced in Mono with Microsoft's implementations.

Posted Thu Aug 27 16:32:32 2015 Tags:

When I built the Great Beast of Malvern, it was intended for surgery on large repositories. The specific aim in view was to support converting the NetBSD CVS to git, but that project is stalled because the political process around NetBSD’s decision about when to move seems to have seized up. I’ve got the hardware and software ready when they’re ready to move.

Now I have another repo-conversion job in the offing – and it does something I thought I’d never see. The working set exceeds 32GB! For comparison, the working set of the entire NetBSD conversion tops out at about 18GB.

What, you might well ask, can possibly have a history that huge? And the answer is…GCC. Yes, they’re looking to move from Subversion to git. And this is clearly a job for the Great Beast, once the additional 32GB I just ordered from Newegg arrives.

Posted Tue Aug 25 23:25:54 2015 Tags:

Thanks to edmundedgar on reddit I have some more accurate data with which to update my previous bandwidth growth estimation post: OFCOM UK’s November 2014 report on average broadband speeds.  Whereas Akamai’s numbers could be dragged down by the increase in mobile connections, this directly measures actual broadband speeds.

Extracting the figures gives:

  1. Average download speed in November 2008 was 3.6Mbit/s
  2. Average download speed in November 2014 was 22.8Mbit/s
  3. Average upload speed in November 2014 was 2.9Mbit/s
  4. Average upload speed in November 2008 to April 2009 was 0.43Mbit/s

So in 6 years, downloads went up by 6.333 times, and uploads went up by 6.75 times.  That’s an annual increase of 36% for downloads and 37% for uploads; that’s good, as it implies we can use download speed factor increases as a proxy for upload speed increases (as upload speed is just as important for a peer-to-peer network).

This compares with the Akamai UK numbers from my previous post: 3.526Mbit in Q4 2008 and 10.874Mbit in Q4 2014, only a factor of 3.08 (about 21% per annum).  Given how close Akamai’s numbers were to OFCOM’s in November 2008 (a year after the iPhone’s UK release, but probably too early for mobile to have a significant effect), it’s reasonable to assume that mobile accounts for a large part of this difference.

If we assume Akamai’s numbers reflected real broadband rates prior to November 2008, we can also use them to extend the OFCOM data back a year: this is important, since according to Akamai there was almost no bandwidth growth from Q4 2007 to Q4 2008, so ignoring that period gives a rosier picture than my last post, and smells of cherrypicking data.

So, let’s say the UK went from 3.265Mbit in Q4 2007 (Akamai numbers) to 22.8Mbit in Q4 2014 (OFCOM numbers).  That’s a factor of 6.98, or 32% increase per annum for the UK. If we assume that the US Akamai data is under-representing Q4 2014 speeds by the same factor (6.333 / 3.08 = 2.056) as the UK data, that implies the US went from 3.644Mbit in Q4 2007 to 11.061 * 2.056 = 22.74Mbit in Q4 2014, giving a factor of 6.24, or 30% increase per annum for the US.
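
A quick re-derivation of those growth rates from the figures above (the annualised rate is just the growth factor raised to the power 1/years, minus one):

    using System;

    class GrowthSketch
    {
        // Compound annual growth rate from start speed to end speed over N years.
        static double AnnualRate(double start, double end, int years) =>
            Math.Pow(end / start, 1.0 / years) - 1.0;

        static void Main()
        {
            // UK: Akamai Q4 2007 -> OFCOM Q4 2014.
            Console.WriteLine(AnnualRate(3.265, 22.8, 7));   // ~0.32, i.e. 32% per annum
            // US: Akamai Q4 2007 -> adjusted Q4 2014 estimate.
            Console.WriteLine(AnnualRate(3.644, 22.74, 7));  // ~0.30, i.e. 30% per annum
        }
    }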

As stated previously, China is now where the US and UK were 7 years ago, suggesting they’re a reasonable model for future growth for that region.  Thus I revise my bandwidth estimates; instead of 17% per annum this suggests 30% per annum as a reasonable growth rate.

Posted Sat Aug 15 04:54:38 2015 Tags:

There’s a significant debate going on at the moment in the Bitcoin world; there’s a great deal of information and misinformation, and it’s hard to find a cogent summary in one place.  This post is my attempt, though I already know that it will cause me even more trouble than that time I foolishly entitled a post “If you didn’t run code written by assholes, your machine wouldn’t boot”.

The Technical Background: 1MB Block Limit

The bitcoin protocol is powered by miners, who gather transactions into blocks, producing a block every 10 minutes (but it varies a lot).  They get a 25 bitcoin subsidy for this, plus whatever fees are paid by those transactions.  This subsidy halves every 4 years: in about 12 months it will drop to 12.5.

Full nodes on the network check transactions and blocks, and relay them to others.  There are also lightweight nodes which simply listen for transactions which affect them, and trust that blocks from miners are generally OK.

A normal transaction is 250 bytes, and there’s a hard-coded 1 megabyte limit on the block size. This limit was introduced years ago as a quick way of preventing a miner from flooding the young network, though the original code could only produce 200kb blocks, and the reference code still defaults to a 750kb limit.
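
Those numbers imply a hard ceiling on throughput; here is a back-of-the-envelope sketch (taking the block interval as the nominal 10 minutes):

    using System;

    class BlocksizeSketch
    {
        static void Main()
        {
            const int blockBytes = 1000000;   // 1 MB block size limit
            const int txBytes = 250;          // typical transaction size
            const int blockSeconds = 600;     // ~10 minutes per block

            int txPerBlock = blockBytes / txBytes;                   // ~4000
            double txPerSecond = (double)txPerBlock / blockSeconds;  // ~6.7
            Console.WriteLine("{0} tx/block, {1:F1} tx/s", txPerBlock, txPerSecond);
        }
    }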

In the last few months there have been increasing runs of full blocks, causing backlogs for a few hours.  More recently, someone deliberately flooded the network with normal-fee transactions for several days; any transactions paying lower fees than those had to wait hours to be processed.

There are 5 people who have commit access to the bitcoin reference implementation (aka. “bitcoin-core”), and they vary significantly in their concerns on the issue.

The Bitcoin Users’ Perspective

From the bitcoin users’ perspective, blocks should be infinite, and fees zero or minimal.  This is the basic position of respected (but non-bitcoin-core) developer Mike Hearn, and has support from bitcoin-core ex-lead Gavin Andresen.  They work on the wallet and end-user side of bitcoin, and they see the issue as the most urgent.  In an excellent post arguing why growth is so important, Mike raises the following points, which I’ve paraphrased:

  1. Currencies have network effects. A currency that has few users is simply not competitive with currencies that have many.
  2. A decentralised currency that the vast majority can’t use doesn’t change the amount of centralisation in the world. Most people will still end up using banks, with all the normal problems.
  3. Growth is a part of the social contract. It always has been.
  4. Businesses will only continue to invest in bitcoin and build infrastructure if they are assured that the market will grow significantly.
  5. Bitcoin needs users, lots of them, for its political survival. There are many people out there who would like to see digital cash disappear, or be regulated out of existence.

At this point, it’s worth mentioning another bitcoin-core developer: Jeff Garzik.  He believes that the bitcoin userbase has been promised that transactions will continue to be almost free.  When a request to change the default mining limit from 750kb to 1M was closed by the bitcoin lead developer Wladimir van der Laan as unimportant, Jeff saw this as a symbolic moment.

What Happens If We Don’t Increase Soon?

Mike Hearn has a fairly apocalyptic view of what would happen if blocks fill.  That was certainly looking likely when the post was written, but due to episodes where the blocks were full for days, wallet designers are (finally) starting to estimate fees for timely processing (miners process higher-fee transactions first).  Some wallets and services didn’t even have a way to change the fee setting, leaving users stranded during high-volume events.

It now seems that bursts of full blocks will arrive with increasing frequency; proposals to let users bump fees afterwards if required are now fairly mature, which (if all goes well) could make for a fairly smooth transition from the current “fees are tiny and optional” mode of operation to a “there will be a small fee” mode.

But even if this rosy scenario comes true, it avoids the bigger question of how high fees can become before bitcoin becomes useless.  1c?  5c?  20c? $1?

So What Are The Problems With Increasing The Blocksize?

In a word, the problem is miners.  As mining has transitioned from a geek pastime, through semi-hobbyist setups, to large operations with cheap access to power, it has become more concentrated.

The only difference between bitcoin and previous cryptocurrencies is that instead of a centralized “broker” to ensure honesty, bitcoin uses an open competition of miners. Given bitcoin’s endurance, it’s fair to count this as a vital property of bitcoin.  Mining centralization is the long-term concern of another bitcoin-core developer (and my coworker at Blockstream), Gregory Maxwell.

Control over half the block-producing power and you control who can use bitcoin, and can cheat anyone not running a full node themselves.  Control over 2/3, and you can force a rule change on the rest of the network by stalling it until enough people give in.  Central control is also a single point at which to shut the network down; that lets others apply legal or extra-legal pressure to restrict the network.

What Drives Centralization?

Bitcoin mining is more efficient at scale. That was to be expected[7]. However, the concentration has come much faster than expected because of the invention of mining pools.  These pools tell miners what to mine, in return for a small (or in some cases, zero) share of profits.  Pools save setup costs, they’re easy to use, and miners get more regular payouts.  This has caused bitcoin to reel from one centralization crisis to another over the last few years; the number of full nodes has fallen precipitously by some measures[5] and continues to decline[6].

Consider the plight of a miner whose network is further away from most other miners.  They find out about new blocks later, and their blocks get built on later.  Both these effects cause them to create blocks which the network ignores, called orphans.  Some orphans are the inevitable consequence of miners racing for the same prize, but the orphan problem is not symmetrical.  Being well connected to the other miners helps, but there’s a second effect: if you discover the previous block, you’ve a head-start on the next one.  This means a pool which has 20% of the hashing power doesn’t have to worry about delays at all 20% of the time.

If the orphan rate is very low (say, 0.1%), the effect can be ignored.  But as it climbs, the pressure to join a pool (the largest pool) becomes economically irresistible, until only one pool remains.
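
To see how sharply this bites, here is a toy model assuming Poisson block arrivals (one per 10 minutes on average): the chance that someone else finds a block during your propagation delay of d seconds is roughly 1 - exp(-d/600), and a pool with share p of the hash power skips that race entirely the fraction p of the time, since it mined the previous block itself.

    using System;

    class OrphanSketch
    {
        // Rough orphan risk for a miner with the given propagation delay and
        // share of total hash power (toy model, exponential block arrivals).
        static double OrphanRisk(double delaySeconds, double poolShare) =>
            (1.0 - poolShare) * (1.0 - Math.Exp(-delaySeconds / 600.0));

        static void Main()
        {
            Console.WriteLine(OrphanRisk(5, 0.001));  // tiny miner, 5s delay   -> ~0.8%
            Console.WriteLine(OrphanRisk(5, 0.20));   // 20% pool, 5s delay     -> ~0.7%
            Console.WriteLine(OrphanRisk(30, 0.001)); // tiny miner, 30s delay  -> ~4.9%
            Console.WriteLine(OrphanRisk(30, 0.20));  // 20% pool, 30s delay    -> ~3.9%
        }
    }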

Larger Blocks Are Driving Up Orphan Rates

Large blocks take longer to propagate, increasing the rate of orphans.  This has been happening as blocks increase.  Blocks with no transactions at all are smallest, and so propagate fastest: they still get a 25 bitcoin subsidy, though they don’t help bitcoin users much.

Many people assumed that miners wouldn’t overly centralize, lest they cause a clear decentralization failure and drive the bitcoin price into the ground.  That assumption has proven weak in the face of climbing orphan rates.

And miners have been behaving very badly.  Mining pools orchestrate attacks on each other with surprising regularity; DDOS and block withholding attacks are both well documented[1][2].  A large mining pool used its power to double-spend and steal thousands of bitcoin from a gambling service[3].  When this was noticed, they blamed a rogue employee.  No money was returned, nor was any legal action taken.  It was hoped that miners would leave for another pool as it approached majority share, but that didn’t happen.

If large blocks can be used as a weapon by larger miners against small ones[8], it’s expected that they will be.

More recently (and quite by accident) it was discovered that over half the mining power isn’t verifying the transactions in the blocks it builds upon[4].  The pools did this in order to reduce orphans, and one large pool is still doing so.  This is a problem because lightweight bitcoin clients work by assuming that anything in the longest chain of blocks is good; this is how the original bitcoin paper anticipated that most users would interact with the system.

The Third Side Of The Debate: Long Term Network Funding

Before I summarize, it’s worth mentioning the debate beyond the current debate: long-term network funding.  The minting of new coins decreases with time; the plan of record (as suggested in the original paper) is that total transaction fees will rise to replace the current mining subsidy.  The schedule for this is unknown, and so far the transition has not happened: free transactions still work.

The block subsidy as I write this is about $7000.  If nothing else changes, miners would want $3500 in fees in 12 months when the block subsidy halves, or about $2 per transaction.  That won’t happen; miners will simply lose half their income.  (Perhaps eventually they form a cartel to enforce a minimum fee, causing another centralization crisis? I don’t know.)

It’s natural for users to try to defer the transition as long as possible, and the practice in bitcoin-core has been to aggressively reduce the default fees as the bitcoin price rises.  Core developers Gregory Maxwell and Pieter Wuille feel that signal was a mistake; that fees will have to rise eventually and users should not be lulled into thinking otherwise.

Mike Hearn in particular has been holding out the promise that it may not be necessary: that some users would offer to pay more so other users can continue to pay less.  On this he is not widely supported.

It’s worth noting that some bitcoin businesses rely on the current very low fees and don’t want to change; I suspect this adds bitterness and vitriol to many online debates.


The bitcoin-core developers who deal most with users feel that bitcoin needs to expand quickly or die, that letting fees emerge now will kill expansion, and that the infrastructure will improve over time if it has to.

Other bitcoin-core developers feel that bitcoin’s infrastructure is dangerously creaking, that fees need to emerge anyway, and that if there is a real emergency a blocksize change could be rolled out within a few weeks.

At least until this is resolved, don’t count on future bitcoin fees being insignificant, nor promise others that bitcoin has “free transactions”.

[1] “Bitcoin Mining Pools Targeted in Wave of DDOS Attacks” Coinbase 2015

[2] “Block Withholding Attacks – Recent Research” N T Courtois 2014

[3] “GHash.IO and double-spending against BetCoin Dice” mmtech et al. 2013

[4] “Questions about the July 4th BIP66 fork”

[5] “350,000 full nodes to 6,000 in two years…” P Todd 2015

[6] “Reachable nodes during the last 365 days.”

[7] “Re: Scalability and transaction rate” Satoshi 2010

[8] “[Bitcoin-development] Mining centralization pressure from non-uniform propagation speed” Pieter Wuille 2015

Posted Tue Aug 4 02:32:27 2015 Tags: