The lower-post-volume people behind the software in Debian.

I needed a break from serious work yesterday, so SRC now speaks SCCS as well as RCS. This wasn’t difficult; I had SRC carefully factored in anticipation of this back when I originally wrote it.

I can’t say I think this feature will actually be useful; SCCS is pretty primitive, and the SRC support has some annoying limitations as a result. But some hacks you do just because you can, and this is one of them.

For those of you who are perversely curious: the SCCS back end doesn’t support tags or branching. It would be theoretically possible to support branching, but the SCCS branch names, being native, wouldn’t have any recognizable relationship to SRC revisions.

Also, due to the sccs prs format being rather misdesigned, SRC only sees SCCS comments up to the first empty line in them. This matters for amend, log, and fast-export. This could be fixed with a bit more cleverness in the log-dump parser; perhaps in a future release.

If you don’t feel the giggle value of taking the most ancient, crufty version-control system EVER and wrapping it in a really slick modern UI, just because you can…well, you’re probably not a hacker. So sad for you; go have a life or something.

I’m actually a little sorry there aren’t any other single-file version-control systems as ancient as RCS and SCCS. Two back ends are probably all SRC will ever have.

Posted Fri Feb 5 16:29:21 2016 Tags:

Some years ago I happened across a fascinating book titled Wicked River: The Mississippi When It Last Ran Wild. If you have any fondness for Mark Twain (as I do – I own and have read the complete works), you need to read this book. The book is an extended argument that Twain’s late-Victorian portraits of river life were a form of rosy-filtered nostalgia for a pre-Civil-War reality that was quite a bit more wild, colorful, squalid, violent, and bizarre.

The river is tamed now, corseted by locks and levees, surrounded by a settled society. It was already nearly thus when Twain was writing. But in and before Huck Finn’s time (roughly the 1840s) it had been a frontier full of strivers, mad men, bad men, and epic disasters like the New Madrid earthquakes.

Random evocative detail: there was a river pirate named John Murrell who operated out of a section of the river called Nine Mile Reach and often masqueraded as a traveling preacher; his gang was called the “Mystic Clan”, and it was for years believed that he had a master plan to foment a general slave insurrection. But that last bit may have been fabrication by a con man with a book to sell. Whether true or not, it led to riots in various cities and a mob attempt to expel all gamblers from Vicksburg based on a rumor that some of them had been part of the plot.

The book is full of you-couldn’t-make-this-stuff-up stories like that. And then, much more recently, I learned about Abe Lincoln’s flatboat voyage.

In 1828, at the age of 19, young Abraham Lincoln was the junior member of a two-man crew that made a flatboat trading voyage down the Mississippi from Indiana to New Orleans. This was not a particularly unusual thing for the sons of upriver farmers to do in those days, at least not if they were thought resourceful enough to be trusted with the trade goods of a whole town. Abe must have done well, because his town sent him south again in 1831.

These voyages traded everything the upriver farmers could sell – including the wood in the flatboats, which were generally broken up in New Orleans – for cash money and “civilized” goods that couldn’t be made in the frontier country. For so Indiana was at the time; the last Indian war in the territory was not twenty years gone when Lincoln launched.

Lincoln never wrote about the voyage itself in detail, so we only know a few specific things from other remarks and the chronology. It was his first time away from home. The boat was at one point attacked by escaped slaves, and the attack was fought off. What Lincoln saw in the New Orleans slave markets kindled in him a hatred of slavery, and could be regarded as the lighting of the fuse that would eventually detonate in the Emancipation Proclamation.

There is a nonfiction book, Lincoln in New Orleans, which meticulously reconstructs the world surrounding that 1828 voyage – what he would have seen, where he must have gone, what the flatboats traded, how it all worked. The book even contains instructions on how to build a flatboat.

OK, so here is what I want to make of this. “Wicked River”, the movie. A big old-fashioned historical adventure film with modern production values. The elevator pitch is “Abraham Lincoln fights the river pirates”, but the idea isn’t to produce a comic book like that vampire hunter movie – rather, to stick to the sorts of things we know that a 19-year-old boy on the wicked river in 1828 really could have gotten mixed up in, and probably did.

River pirates, con-men, grifters, escaped slaves, black-powder pistols, bad weather, the Mississippi itself – no shortage of scrapes for Abe to get into along the way, and plenty of options to foreshadow the leader he would someday become. Yes, we can drop in homages to Mark Twain; why ever not? And at the end, the farm boy comes to New Orleans – gorgeous, bawdy, decadent New Orleans – and something he sees in the slave markets changes him forever.

Barring incompetence in the production pipeline, how is it even possible for this not to be box-office gold? More than that; this doesn’t have to be a glittery Michael Bay trashfest, it could have substance and heart, even a profound moral center. I think it could be not just fun, not just good, but genuinely great cinema – something everyone involved would be proud of for the rest of their lives.

Posted Sat Jan 30 10:48:16 2016 Tags:
Some people don’t give a lot of thought to the garage door manufacturers out in the world. Most homes already have a garage door when you move in, and people don’t think much about it until they have to repair or replace it. A garage door is more than just ornamental, though. It needs to be sturdy, functional, and protective as well. Let’s look at some elements of good garage doors from high-quality garage door manufacturers:

Durability
A good garage door is going to be made to last. It should be able to withstand being opened and closed on a regular basis. How regular? At least daily, and it should open and close smoothly and safely as well. Most of today’s garage doors are on cables so that the door can swing upward and fold in overhead. This hides the door when it’s open; it practically disappears. A garage door needs to be durable enough to withstand constant opening and closing, sometimes rough handling.

Sure, you may have to replace certain parts on occasion, such as cables, windows, or insulation. Depending on the style and size of the door, the costs will vary, but if you purchase a well-made door, your expenditures should be infrequent and minimal. If you do have to make a major investment, consider it an investment in the value of your home and make an informed choice.

Weather Resistance
Garage door manufacturers need to consider that their doors are exposed to the elements: rain, wind, hail, maybe even a tornado or hurricane. Weather resistance is vital for a garage door. The door should also have a decent seal so that the contents of the garage stay safe and dry. A lot of people store more than just their cars in the garage, so protection from the elements is vital.

Garages serve as workshops, sometimes as laundry rooms, as storage space, and as a place to park a car or two. Therefore, weather resistance is important. Your garage may never be as warm as the inside of your house, but a good seal and a weather-resistant garage door will help.

Attractive – Curb Appeal
The door to your garage should give your home curb appeal. As much as it has to be strong enough to withstand frequent opening and closing, you want it to look good as well. After all, it often takes up a large area on the very front of your home.
Not all garages are built into a home, and not all face the curb; various styles and configurations exist. For that reason, there are quite a few garage door makers out there. Some homeowners shopping for replacement parts, or for replacement doors altogether, will look at the best price alone. It’s much more advantageous to you, as a homeowner or commercial building owner, to also look at other factors, such as warranty, reputation, reviews from other consumers who have bought the product, and curb appeal as well.

A garage door can be functional, attractive, durable, and reasonably priced. Learn more about garage door manufacturers and their reputations to help you make the best choice for your home. Now let’s take a look at the installation process.

Garage Door Opener Installation
Installing a garage door opener can make your life much simpler and safer: the opener keeps unwanted people out while also protecting loved ones around the door. A garage door is often quite heavy, and if nothing keeps it from closing on its own, it could fall and seriously injure someone. Performing a garage door opener installation is not at all complicated, though many people lean toward hiring a professional to take care of it. If you're installing a new door, it would be wise to have the Austin area garage door company install your automatic opener at the same time.


But for those who would rather do the garage door opener installation themselves, here are the steps required.

Before starting your garage door opener installation, you will need to gather the following items and check a few things. Make sure that you have an electrical outlet located where the opener motor will be installed, as you will need one within three feet of the motor. You will also need an electrical wire that runs from the motor location to a separate location where the main garage door switch will be mounted. If your system includes safety sensors and/or a key panel on the garage door frame, you may need electrical wires run to them as well.

As far as tools needed for the garage door opener installation, you will more than likely need all of the following: the garage door opener itself, a drill, a hammer, a pencil, a stepladder, a measuring tape, screwdrivers, drill bits, a 2x6 board, pliers, and safety glasses. Remember: if you're unsure about attempting the process on your own, call a professional. If you decide to do this on your own, we are not responsible for any injuries.

Installation Steps:
Step 1 - With the garage door open, measure the distance between the top of the opened door and the ceiling. The track for your garage door opener must be a minimum of two and a half inches above the opened door. Now close the door and mark the wall above the door at two and a half inches. This mark will be the bottom of your header bracket. Now install a 2x6 on the center line of the garage door, from the ceiling down to the top of the garage door frame. This board is where you attach the track that carries the chain.

Step 2 - Attach the door bracket to the garage door as per the factory recommended location. Certain companies supply extra brackets to reinforce the garage door, or you may need to reinforce the garage door yourself. Now, you can attach the header bracket to the 2x6 that you added above the garage door. Make sure that you have enough clearance for the garage door.

Step 3 - Assemble the garage door opener as per the provided instructions. After this is completed, insert the end of the track into the bracket that you just installed on the 2x6. Now raise the motor end of the opener and rest it on top of the stepladder so that you can measure exactly where it needs to sit without holding it.

Step 4 - Mount the motor to the ceiling per the recommended method from the manufacturer. Make sure that the track that has the chain runs at a 90 degree angle from the garage door. Next, attach the door arm that is on the track to the bracket that is on the garage door.

Step 5 - If your system has safety sensors, you can now install them on each side of the garage door. They usually need to be installed about six inches from the floor, but anywhere from five to ten inches should be fine. Just make sure that both sides are equal. Now connect the wires to your sensors and to the key panel on the garage door frame (if needed).

Step 6 - Plug in the garage door opener and test the system. If the garage door opener functions correctly, you can move on to adjusting the limit switches. As per your manufacturer's instructions, adjust the open and close limits with a screwdriver.

Now, your garage door opener installation is complete, and you can enjoy the ease and convenience of having a garage door opener. Once you have had the pleasure of using a garage door opener, you will never be able to go back to not having one.
Posted Fri Jan 29 19:52:22 2016 Tags:

If you were reading A&D a year ago, you may recall that I invented a new version-control system to occupy an odd little niche that none of the existing ones serve very well.

Well, actually, it’s a shell around a very old version-control system that makes a reasonably fast version-storage manager but has a crappy UI. Thus, SRC – RCS reloaded, with a mission to serve cases where you don’t want per-directory changesets but prefer each file to have its own separate change history. Like a directory full of separate FAQs, or your ~/bin full of little scripts.

SRC gives you a modern UI in the svn/hg/git style (but much, much simpler than git’s) and lockless operation. It has full embedded documentation and an Emacs VC backend. If your little project goes multi-file, you can instantly fast-export to git.

Today I shipped Version 1.0. This could have happened sooner, but I’ve been focusing on NTPsec pretty hard in the last year. There was one odd bug in the behavior of multi-file commands that I just hadn’t got around to fixing. (Yes, you can do multi-file commands, but the files still have separate histories.)

The whole thing is just 2KLOC of Python, and that’s with the rather extensive embedded documentation. The sort of person who frequents this blog might find the FAQ entertaining.

Posted Tue Jan 26 05:29:18 2016 Tags:

[ This blog was crossposted on Software Freedom Conservancy's website. ]

I've had the pleasure and the privilege, for the last 20 years, to be either a volunteer or employee of the two most important organizations for the advance of software freedom and users' rights to copy, share, modify and redistribute software. In 1996, I began volunteering for the Free Software Foundation (FSF) and worked as its Executive Director from 2001–2005. I have continued as a volunteer for the FSF since then, and now serve as a volunteer on FSF's Board of Directors. I was also one of the first volunteers for Software Freedom Conservancy when we founded it in 2006, and I was the primary person doing the work of the organization as a volunteer from 2006–2010. I've enjoyed having a day job as a Conservancy employee since 2011.

These two organizations have been the center of my life's work. Between them, I typically spend 50–80 hours every single week doing a mix of paid and volunteer work. Both my hobby and my career are advancing software freedom.

I choose to give my time and work to these organizations because they provide the infrastructure that makes my work possible. The Free Software community has shown that the work of many individuals, who care deeply about a cause and cooperate toward a common goal, has an impact greater than any individual can ever have working separately. The same is often true for cooperating organizations: charities, like Conservancy and the FSF, that work together amplify their impact beyond the expected.

Both Conservancy and the FSF pursue specific and differing approaches and methods to the advancement of software freedom. The FSF is an advocacy organization that raises awareness about key issues that impact the future of users' freedoms and rights, and finds volunteers and pays staff to advocate about these issues. Conservancy is a fiscal sponsor, which means one of our key activities is operational work, meeting the logistical and organizational needs of volunteers so they can focus on the production of great Free Software and Free Documentation. Meanwhile, both Conservancy and the FSF dedicate themselves to sponsoring software projects: the FSF through the GNU project, and Conservancy through its member projects. And, most importantly, both charities stand up for the rights of users by enforcing and defending copyleft licenses such as the GNU GPL.

Conservancy and the FSF show in concrete terms that two charities can work together to increase their impact. Last year, our organizations collaborated on many projects: we responded to the proposed FCC rule changes for wireless devices, jointly handled a GPL enforcement action against Canonical, Ltd., published the principles of community-oriented GPL enforcement, and continued our collaboration on copyleft.org. We're already discussing lots of ways that the two organizations can work together in 2016!

I'm proud to give so much of my time and energy to both these excellent organizations. But I also give my money as well: I was the first person in history to become an Associate Member of the FSF (back in November 2002), and have gladly paid my monthly dues since then. Today, I also signed up as an annual Supporter of Conservancy, because I want to ensure that Conservancy meets its current pledge match — the next 215 Supporters who sign up before January 31st will double their donation via the match.

For just US$20 each month, you can make sure the excellent work of both these organizations continues. This is quite a deal: if you are an employed, university-educated professional living in the industrialized world, US$20 is probably the amount you'd easily spend on meals at restaurants or other luxuries. Isn't it an even better luxury to know that these two organizations can employ a year's worth of effort standing up for your software freedom in 2016? You can make a real difference by making your charitable contribution to these two organizations today:

Please don't wait: both fundraising deadlines are just six days away!

Posted Mon Jan 25 18:00:16 2016 Tags:

libinput 1.1.5 has a change in how we deal with semi-mt touchpads: we cease interpreting the touch points and instead rely on the single touch position and the BTN_TOOL_* flags to detect multi-finger interaction. For most of you this will have little effect, even if you have a semi-mt touchpad. As a reminder: semi-mt touchpads are those that can detect the bounding box of two-finger interactions but cannot identify which finger is which. This introduces some ambiguity: a pair of touch points at x1/y1 and x2/y2 could equally be a physical pair of touches at x1/y2 and x2/y1. More importantly, we found issues with semi-mt touchpads that go beyond the ambiguity and reduce the usability of the touch points.

Some devices have an extremely low resolution when two fingers are down (see Bug 91135); the data is little better than garbage. We have had 2-finger scrolling disabled on these touchpads since before libinput 1.0. More recently, Bug 93583 showed that some semi-mt touchpads do not assign positions to some fingers, especially when three fingers are down. This results in touches defaulting to position 0/0, which triggers palm detection or results in scroll jumps, neither of which is helpful. Other semi-mt touchpads report a straightforward 0/0 as position data and don't update until several events later (see Red Hat Bug 1295073). libinput is not particularly suited to handle this, and even if it were, the touchpad's reaction to a three-finger tap would be noticeably delayed.

In light of these problems, and since they affect all three big semi-mt touchpad manufacturers, we decided to drop back and handle semi-mt touchpads as single-finger touchpads with extra finger capability. This means we track only one touch point but still detect two- and three-finger interactions. Two-finger scrolling is still possible, and so are two- and three-finger tapping and the clickfinger behaviour. What isn't possible anymore are pinch gestures, and some of the built-in palm detection is deactivated. As mentioned above, this is unlikely to affect you too much, but if you're wondering why gestures don't work on your semi-mt device: the data is garbage.
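
For the curious, the effect of the change is easy to picture at the evdev level. Here is a rough Python sketch using the python-evdev bindings (my illustration only; libinput itself is C and does considerably more): it tracks the single reported touch position and derives the finger count purely from the BTN_TOOL_* flags, as described above. The device node is hypothetical.

```python
# Rough sketch, not libinput's actual implementation: track one touch
# position and derive the finger count from BTN_TOOL_* flags, which is
# how semi-mt touchpads are now handled.
from evdev import InputDevice, ecodes

dev = InputDevice('/dev/input/event7')  # hypothetical device node

FINGER_COUNT = {
    ecodes.BTN_TOOL_FINGER: 1,
    ecodes.BTN_TOOL_DOUBLETAP: 2,
    ecodes.BTN_TOOL_TRIPLETAP: 3,
}

fingers, x, y = 0, 0, 0
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY and event.code in FINGER_COUNT:
        # value 1 means the tool became active, 0 means it was released
        fingers = FINGER_COUNT[event.code] if event.value else 0
    elif event.type == ecodes.EV_ABS:
        if event.code == ecodes.ABS_X:
            x = event.value
        elif event.code == ecodes.ABS_Y:
            y = event.value
    elif event.type == ecodes.EV_SYN:
        print(f"{fingers} finger(s), tracked position {x},{y}")
```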

Posted Mon Jan 25 05:24:00 2016 Tags:

My colleague Jason Smith has shared his views on what developers should use when trying to share code between projects. Should you go with a Shared Project or a Portable Class Library (PCL) in the world of Xamarin.Forms?

He hastily concludes that you should go with PCLs (pronounced Pickles).

For me, the PCL is just too cumbersome for most uses. It is like using a cannon to kill a fly. It imposes too many limitations (a limited API surface) and forces you to jump through hoops to achieve some very basic tasks.

PCLs, when paired with NuGet packages, are unmatched. Frameworks and library authors should continue to deliver these, because they have a low adoption barrier and in general bring smiles and delight to their users.

But for application developers, I stand firmly on the opposite side of Jason.

I am a fan of simplicity. The simpler the technology, the easier it is for you to change things. And when you are building mobile applications, chances are you will want to make sweeping changes and make changes continuously, and these are just not compatible with the higher bar required by PCLs.

Jason does not like #if statements in his shared code. But this is not the norm, it is the exception. And not only is it an exception, careful use of partial classes in C# makes it a non-issue.

Plugging in a platform-specific feature does not need an #if block: all you have to do is isolate the functionality into a single method, and have each platform that consumes the code implement that one method. This is the same elegant idea that makes the Linux kernel source code such a pleasure to use - specific features are plugged in, not #ifdefed.
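
The pattern is easy to sketch. The post's context is C# partial classes, but the plug-a-method idea is language-independent; here is a minimal Python illustration of it (module and function names are hypothetical):

```python
# shared/report.py -- shared application logic, with no #if blocks.
# It calls one function that each platform must supply.
from platform_impl import show_alert  # hypothetical per-platform module

def submit_report(text: str) -> None:
    # ...shared validation, formatting, and upload logic goes here...
    show_alert(f"Report submitted ({len(text)} chars)")

# A platform_impl module on iOS would wrap a native alert view; on
# Android, a toast or dialog. The shared code never branches on the
# platform; the build simply picks which platform_impl is included.
```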

If you are an application developer, go with Shared Projects for your shared code. And now that we support them for F#, there is no reason not to adopt them.

Posted Fri Jan 22 17:00:34 2016 Tags:

This question turns up a lot, on the IRC channel, mailing lists, forums, your local Stammtisch and at weddings. The correct answer is: this is the wrong question. And I'll explain why in this post. Note that I'll be skipping over a couple of technical bits; if you notice those then you're probably not the person that needs to ask the question in the first place.

On your current Linux desktop, right now, you have at least three processes running: the X server, a window manager/compositor and your web browser. The X server is responsible for rendering things to the screen and handling your input. The window manager is responsible for telling the X server where to render the web browser window. Your web browser is responsible for displaying this post. The X server and the window manager communicate over the X protocol, the X server and the web browser do so too. The browser and the window manager communicate through X properties using the X server as a middle man. That too is done via the X protocol. Note: This is of course a very simplified view.

Wayland is a protocol and it replaces the X protocol. Under Wayland, you only need two processes: a compositor and your web browser. The compositor is effectively equivalent to the X server and window manager merged into one thing, and it communicates with the web browser over the Wayland protocol. For this to work you need the compositor and the web browser to be able to understand the Wayland protocol.

This is why the question "is wayland ready yet" does not make a lot of sense. Wayland is the communication protocol and says very little about the implementation of the two sides that you want to communicate.

Let's assume a scenario where we all decide to switch from English to French because it sounds nicer and English was designed in the 80s when ASCII was king so it doesn't support those funky squiggles that the French like to put on every second character. In this scenario, you wouldn't ask "Is French ready yet?" If no-one around you speaks French yet, then that's not the language not being ready, the implementation (i.e. the humans) aren't ready. Maybe you can use French in a restaurant, but not yet in the supermarket. Maybe one waiter speaks both English and French, but the other one French only. So whether you can use French depends very much on the situation. But everyone agrees that eventually we'll all speak French, even though English will hang around for ages until it finally falls out of use. And those squiggles are so cute!

Wayland is the same. The protocol is stable and has been for a while. But not every compositor and/or toolkit/application speak Wayland yet, so it may not be sufficient for your use-case. So rather than asking "Is Wayland ready yet", you should be asking: "Can I run GNOME/KDE/Enlightenment/etc. under Wayland?" That is the right question to ask, and the answer is generally "It depends what you expect to work flawlessly." This also means "people working on Wayland" is often better stated as "people working on Wayland support in ....".

An exception to the above: Wayland as a protocol defines what you can talk about. As a young protocol (compared to X with 30 years worth of extensions) there are things that should be defined in the protocol but aren't yet. For example, Wacom tablet support is currently missing. Those are the legitimate cases where you can say Wayland isn't ready yet and where people are "working on Wayland". Of course, once the protocol is agreed on, you fall back to the above case: both sides of the equation need to implement the new protocol before you can make use of it.

Update 25/01/15: Matthias' answer to Is GNOME on Wayland ready yet?

Posted Fri Jan 22 06:07:00 2016 Tags:

Once upon a time, back during the Age of Exploration, there was a marvellous practice called the “silent trade”. It was a solution to a serious coordination problem between groups who had no languages in common, or distrusted each other so much that they refused to come within range of each others’ weapons.

What makes it marvellous is that it constituted experimental proof of the existence of universal, objective ethical principles sufficient to build cooperation among hostile parties.

Here’s how it worked. One party is, say, the captain of a Portuguese merchant ship on the Gold Coast. He wants some of that gold, but nobody speaks the local language. Furthermore, there are disquieting reports from the few survivors that Europeans who venture out of sight of a ship’s guns tend to get eaten.

The other party is a local chief onshore. He has the same problem; he wants cloth and beads and metal knives, but he doesn’t speak the traders’ language. Furthermore Europeans have magical bangsticks that kill at a distance, and there are disquieting reports that tribes who were too welcoming got massacred by gold-seeking adventurers.

So, the captain brings his ship inshore and a heavily armed away party makes several piles of different kinds of trade goods on the beach. When they’re back on the ship, the vessel fires a cannon and retreats far enough offshore to not be a prompt threat.

The cannonshot attracts, as it is meant to, the attention of the natives. They come out of the jungle, eye the various piles, and bring out their trade goods: gold, ivory, and whatnot. A pile of goods goes next to each of the traders’ piles. The natives withdraw from the beach.

Now the ship comes back inshore. Traders eye the piles and decide which exchanges they’ll take. Carrying away the native stuff nearest a pile of trade goods signals consent for the natives to take that pile and that pile only. Leaving a pile in place, or splitting it, signals wanting a better offer. Withdrawing a pile says that the native stuff nearest it is uninteresting.

Adjustments made, the ship withdraws again. Now it is the natives’ turn to evaluate the new state of the trade and re-adjust their own piles. The same rules apply: carrying a pile of trade goods away tells the Europeans they can have what the natives had offered for it, leaving a pile in place invites the Europeans to bid up, and withdrawing a pile says the trade goods offered for it are not interesting.

Sometimes, one side might split up one of the other side’s piles, putting goods near one part but not the other; this is a way to say “some of these goods are interesting, but not the others”. Various other elaborations are recorded.

The process would continue until a cycle during which neither side altered its piles or one side withdrew them all. This signaled the close of trade.

The scenario I’ve described is unusual in that the traders and natives might actually get in visual range of each other. In some important instances, such as the long-running salt-for-gold trade between the North African coast and sub-Saharan West Africa, that never happened; coordination was entirely by drum signal. Silent trade was also reported in the 6th century CE on the east coast of Africa, between Indians and Arabs on one side and interior tribes on the other.

The silent trade flourished in Africa from classical times (Herodotus reported Carthaginians engaging in silent barter with West Africans) to about 1500 CE. It fell into disuse only when enough cultural contact developed with the interior tribes for mutual language acquisition.

The most interesting observations about the silent trade begin with the fact that, as far as the historical record can see, nobody ever cheated – or, if they did, it was an unusual and sporadic phenomenon that failed to disrupt the exchanges.

Neither the traders nor the natives nor any third party had the ability to enforce honest dealing. No force could be used, and nobody was punished except by the termination of the trade. Yet self-interest policed the process quite effectively.

The silent trade, when historians think of it at all, is usually considered a trivial bit of exotica, a footnote to the sweep of history. But it is much more than that; it is a demonstration of objective universal ethics. To explain this, I need to introduce, or remind you of, a couple of related concepts.

Those of you familiar with Robert Axelrod’s studies of the iterated Prisoner’s Dilemma will recognize a theme here; parties in the silent trade faced an iterated cooperate-or-defect choice (where defecting would have been to simply run off with the other side’s goods) and settled into a stable tit-for-tat exchange.

In that exchange, both parties achieve what game theorists call “positive-sum” interactions – both are better off than if the exchange had never taken place. (This is distinct from zero-sum interactions, in which one party gains but the other loses, and negative-sum interactions in which both parties lose.)
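
A toy simulation makes the point concrete. This is my illustration with conventional Axelrod-style payoff values, not anything from the historical record: two tit-for-tat players who start by cooperating lock into the all-cooperate, positive-sum path.

```python
# Toy iterated Prisoner's Dilemma. Payoffs are (mine, theirs) per round;
# C = cooperate (trade honestly), D = defect (run off with the goods).
PAYOFFS = {('C', 'C'): (3, 3),   # positive-sum: both gain
           ('C', 'D'): (0, 5),   # sucker's payoff vs. temptation
           ('D', 'C'): (5, 0),
           ('D', 'D'): (1, 1)}   # mutual defection: trade collapses

def tit_for_tat(history):
    # Cooperate first; afterwards, mirror the other side's last move.
    return 'C' if not history else history[-1][1]

a_hist, b_hist, a_total, b_total = [], [], 0, 0
for _ in range(10):
    a, b = tit_for_tat(a_hist), tit_for_tat(b_hist)
    pa, pb = PAYOFFS[(a, b)]
    a_total += pa; b_total += pb
    a_hist.append((a, b)); b_hist.append((b, a))

print(a_total, b_total)  # 30 30: the stable cooperative equilibrium
```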

Now I want to introduce the notion of a Schelling point. This, due to the economist Thomas Schelling, is “a solution that people will tend to use in the absence of communication, because it seems natural, special, or relevant to them.” My favorite example is the effect rivers have on political geography. Two hostile, non-communicating tribes separated by usable land is a recipe for a frontier war, but if a river runs between them both are likely to accept it as a natural boundary and confine their warfare to punishing violations.

Finally I want to exhibit the idea of Lorenzian incomplete aggression. The naturalist Konrad Lorenz famously observed that animals who cannot communicate with language nevertheless express “I could hurt you, but I choose not to” with aggressive behavior that is deliberately interrupted or misdirected short of actual damage. Anyone who has ever seen children roughhousing, or experienced the kind of solid friendship that martial artists can develop after a hard but clean bout of sparring, knows this works in humans too.

Several Schelling points are clear on examination of the silent trade. One is that it takes place at boundaries. By putting goods on the beach with an armed party and then withdrawing, rather than pushing into the jungle to hunt game or find trading partners, the traders did not merely reduce their chances of being eaten, they combined use of a Schelling point with uncompleted aggression.

The rules of the silent trade embody at least two important ethical principles; nonaggression and voluntary reciprocal exchange. If we are asking whether these principles are universal and objective, what better evidence could we ask for than to have seen them mutually agreed on by different groups of humans without the ability to even speak to each other (let alone shared cultural assumptions) and then sustained down the generations for over a thousand years?

The silent trade gives us grounds for a very strong claim: there is a universal objective ethics, and its building blocks include (a) nonaggression, (b) Schelling points, (c) honesty, and (d) voluntary reciprocal exchange. Or to put it more simply, “Do as you would be done by.” – the Golden Rule.

I have no doubt that the behavior of everyone in the Gold Coast story I told above seems natural to the reader. This is because we are actually neurologically wired to participate in universal ethics. Some equivalent of the Golden Rule is live in every human culture, and one of the first results to emerge from evolutionary psychology in the 1990s is that humans seem to come equipped with a cheater-detection module – our performance on logic problems improves when they are framed as questions about whether someone is violating reciprocity.

It would not be stretching a point very far to say that the silent trade is in our DNA; its building blocks, such as the ability to recognize Schelling points and uncompleted aggression, certainly are. Through the lens of the silent trade, we can begin to see universal ethics as a culture-independent evolved behavior that solves a universal problem – how to achieve and maintain positive-sum cooperation.

This realization challenges several common beliefs, including cultural relativism and Hume’s guillotine – the notion that you can never derive an “ought” (normative moral or ethical statement) from an “is” (a fact about the world). How these falsehoods became so entrenched would be a topic for several more essays, but my point here is the silent trade helps us see past them – not just with a lot of argument and theory but in a practical, concrete, empirical way.

While I will not try to develop the argument here, the reader should consider this proposition: that the only ethical claims we should accept are universal (that is, agreements that could be reached by parties that cannot coerce or even at the limit communicate with each other), and that all other ethical claims are invalid, actually damage the prospects for sustained positive-sum cooperation, and should be discarded.

Posted Wed Jan 20 00:56:31 2016 Tags:

In light of recent general confusion between X.Org the technical project and X.Org the Foundation, here's a little overview.

X.Org the project

X.Org is the current reference implementation of the X Window System, which has been around since the mid-80s. Its most prominent members are the X server and the related drivers, but we put a whole bunch of other things under the same umbrella, e.g. mesa, drm, and - yes - wayland. Like most free software projects it is loosely organised and very few developers are involved in everything; everybody has their niche. If you're running Linux or a BSD and you can see a desktop environment in front of you, X.Org the technical project is somewhere in that stack.

X.Org the Foundation

The foundation is a non-profit organisation tasked with the stewardship of the X Window System, particularly the X.Org implementation. The most important thing is: the X.Org Foundation does not control the technical direction, it acts in a supporting role only. X.Org has 501(c)(3) status in the US, which means that donations are tax-deductible (though we haven't collected donations in years). It also means that how we can spend money is very restricted. These days the Foundation's supporting roles are largely: sponsoring the annual X Developers Conference (XDC), providing travel sponsorship to XDC attendees, and being the organisation that participates in the Google Summer of Code. Oh, and did I mention that the X.Org Foundation does not control the technical direction?

What does it matter?

The difference matters, especially for well-nuanced and thought-out statements like "X must die" in response to articles about the X.Org Foundation. If you want the Foundation to cease to exist, you're essentially saying "XDC and X.Org's GSoC participation must die". Given that a significant percentage of those two are now Wayland-related, that may have some unintended side-effects. If you want the technical project to die, it may likewise be wise to consider the side-effects: Wayland isn't quite ready yet, and much of the work that is done under the umbrella of X benefits Wayland (libinput, graphics driver work, etc.).

Now if you excuse me, there's a windmill that needs tilting at. Rocinante, where are you?

Posted Tue Jan 19 01:42:00 2016 Tags:

I struck a small blow for better security today.

It started last night on an IRC channel with A&D regular Susan Sons admonishing the regulars to rotate their ssh keys regularly – that is, generate and export new key pairs so that if someone cracks the crypto on one out of your sight it won’t be replayable forever.

This is one of those security tasks that doesn’t get done often enough because it’s a fiddly pain in the ass. But (I thought to myself) I have a tool that reduces the pain. Maybe I should try to eliminate it? And started hacking.

The tool was, until yesterday, named ssh-installkeys. It’s a script wrapper written in Python that uses a Python expect engine to log in to remote sites and install (or remove) ssh public keys. What makes it useful is that it remembers a lot of annoying details, like fixing file and directory permissions so your ssh server won’t see a potential vulnerability and complain. Also, unlike some competing tools, it only requires you to enter your password once per update.

Some time ago I taught this code to log its installations in a config file so you have a record of where you have remote-installed keys. I realized that with a little work this meant I could support a rotate option – mass-install new keys on every site you have recorded. And did that.

I’ve been meaning for some time to change the tool’s name; ssh-installkeys is too long and clumsy. So it’s now sshexport. I also updated it to know about, and generate, ed25519 keys (that being the new hotness in ssh crypto).
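
For illustration, the whole rotation workflow can be sketched with stock OpenSSH tools in a few lines of Python. This is not sshexport's actual code (which drives ssh through an expect engine and does the permission fixing described above); the config path and host-list format are hypothetical.

```python
# Standalone sketch of the key-rotation workflow, assuming stock
# OpenSSH tools; sshexport's real implementation differs.
import json
import subprocess
from pathlib import Path

KEYFILE = Path.home() / '.ssh' / 'id_ed25519'
SITES = Path.home() / '.config' / 'sshexport.json'  # hypothetical record

# 1. Generate a fresh ed25519 keypair (the new hotness in ssh crypto).
#    Assumes KEYFILE doesn't exist yet; a real tool would write to a
#    fresh name and swap it into place.
subprocess.run(['ssh-keygen', '-t', 'ed25519', '-f', str(KEYFILE),
                '-N', '', '-C', 'rotated-key'], check=True)

# 2. Mass-install the new public key on every recorded site.
for host in json.loads(SITES.read_text()):
    subprocess.run(['ssh-copy-id', '-i', str(KEYFILE) + '.pub', host],
                   check=True)
    # A real tool would also remove the old key from authorized_keys
    # and fix file and directory permissions, as sshexport does.
```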

In order to reduce the pain, sshexport can now store your passwords in its list of recorded sites, so you only have to enter the password the first time you install keys, and all later rotations are no-hands operations. This doesn’t actually pose much additional security risk, because by hypothesis anyone who can read this file has read access to your current private ssh keys already. The correct security measure is whatever you already do to protect other sensitive data in your dot directories, like GPG directories and web passwords stored by your browser. I use drive encryption.

The result is pretty good. Not perfect; the big missing feature is that it doesn’t know how to update your keys on sites like GitLab. That would take a custom method for each such site, probably implemented with curl. Perhaps in a future release.

Posted Mon Jan 18 18:56:40 2016 Tags:

Almost since the beginning of time, NetworkManager kept an internal list of access points found in the last 3 scans.  Since the background scans were triggered at least every two minutes, an access point could stay in the list for up to 6 minutes.  This was a compromise between mobility, unreliable drivers, and an unreliable medium (e.g., air).  Even when you’re not moving, the closest access point may not show up in every scan.  So NetworkManager attempted to compensate by keeping access points around for a longer time.

Obviously that approach has problems if you’re driving, on a train, or on a bus.  You can end up with a huge list of access points that are clearly no longer in range.  If you turn off an access point, it could stay in the list for a long time.

Ubuntu contributed a patch that exposes the “last seen time” for each access point, which allows the user-interface to decide for itself which access points to show.  A location service (like Firefox or Geoclue) may want a more complete list of access points than the live Wi-Fi network list does, for example, which is why NetworkManager keeps the list in the first place instead of only showing the results of the most recent (and potentially unreliable) scan.

But in the end this behavior needed to change, and with recent versions of wpa_supplicant it was possible to make NetworkManager’s scanning behavior better.  The supplicant also maintains a scan list, from which NetworkManager built its own.  Wouldn’t it be great if there was one list instead of two?

So we threw away the internal NetworkManager list and just followed the supplicant’s list.  When the supplicant decides that an access point is no longer visible, NetworkManager removes it too.  This works better because the supplicant has more information than NetworkManager does and can make smarter decisions.  NetworkManager tweaks the supplicant’s behavior through the BSSExpireAge and BSSExpireCount properties so that any access point seen more than 4 minutes ago, or not seen in the past two scans, will be dropped.
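
Those two knobs are ordinary D-Bus properties on the supplicant's interface object, so the tuning is easy to demonstrate from a script. A minimal dbus-python sketch follows; the interface object path is hypothetical, and NetworkManager of course does this internally rather than via a script like this.

```python
# Sketch with dbus-python: set wpa_supplicant's BSS expiry knobs the
# way NetworkManager 1.2 does internally. The interface object path
# below is hypothetical; query the supplicant for the real one.
import dbus

bus = dbus.SystemBus()
iface_obj = bus.get_object('fi.w1.wpa_supplicant1',
                           '/fi/w1/wpa_supplicant1/Interfaces/0')
props = dbus.Interface(iface_obj, 'org.freedesktop.DBus.Properties')

WPAS_IFACE = 'fi.w1.wpa_supplicant1.Interface'
# Drop access points last seen more than 4 minutes ago...
props.Set(WPAS_IFACE, 'BSSExpireAge', dbus.UInt32(240))
# ...or not seen in the past two scans.
props.Set(WPAS_IFACE, 'BSSExpireCount', dbus.UInt32(2))
```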

When scans happen more often, like when a Wi-Fi network list is displayed, the two-scan limit removes access points after 20 or 30 seconds in the best case.  The supplicant performs background scans to facilitate faster roaming, which can be triggered on signal strength and which also help remove old access points when they are out of range.

Tracking the Current BSS

Along with the scanning cleanup, NetworkManager delegates tracking the access point you’re currently associated with to wpa_supplicant’s CurrentBSS property.  Previously NetworkManager periodically asked the Wi-Fi driver what the current access point was, but this was inconsistently implemented between drivers and required hacky workarounds to smooth out intermittent results.

The supplicant’s CurrentBSS property tracks the access point the supplicant wants to be associated with, not what the driver currently is associated with, but these are almost always the same thing, and there’s no point in telling the user that they are momentarily disconnected from their access point during a scan when there is no actual interruption in traffic due to 802.11 protocol mechanisms like powersave buffering.  This was another huge cleanup in the NetworkManager codebase.

Death of dbus-glib

Finally, along with these changes, all communication with wpa_supplicant was switched to use GDBus instead of the old, deprecated, and unmaintained dbus-glib library.  This advances our goal of removing all use of dbus-glib from NetworkManager, which was one of the first major users of the library in 2004, and is likely the last one too.  GDBus provides much better integration with glib and the GIO model, is fully supported, and has a saner API.

And even better, through the painstaking work of Dan Winship, Jirka Klimes, Thomas Haller, Lubomir Rintel, me, and others, all of NetworkManager 1.2 was ported to GDBus without changing our public D-Bus API.  Welcome to the future!

Posted Mon Jan 18 16:54:04 2016 Tags:

An underappreciated fact about U.S. Constitutional law is that it recognizes sources of authority prior to the U.S. Constitution itself. It is settled law that the Bill of Rights, in particular, does not confer rights, it only recognizes “natural rights” which pre-exist the Bill of Rights and the Constitution and which – this is the key point – cannot be abolished by amending the Constitution.

What is the nature of these “natural rights”? The Founding Fathers of the U.S. spoke of them as an endowment by the Creator in the Declaration of Independence. This is, in modern terms, a much less religious statement than meets the eye. At the time the Declaration was written, many forward-thinking intellectuals (influenced by a now-largely-extinct movement called Deism) used the terms “God” and “Natural Law” almost interchangeably. (I have written before about how later waves of religious revival have obscured this point.)

In modern terms, we can think of “natural rights” as the political and social rules which are required to sustain “life, liberty, and the pursuit of happiness”, and derive them not from religion but from game-theoretic analysis of the behavior of competing agents in a political system.

The theorists of English Republicanism in the century and a half before the Declaration of Independence did not have the language of economics or game theory, but they developed a pretty firm grasp on the theory of natural rights by studying the historical failure modes of various political systems.

The English Republican defenses of (for example) the right to free speech were very simple: if these are not the rules of your polity, your polity will come to a bad end in tyranny and chaos and great suffering. In modern terms, they were seeking stable cooperative equilibria under the recognition that most possible sets of political rules do not yield it.

This was the thinking behind the U.S. Constitution, in general, and the Bill of Rights in particular. Because natural rights are a consequence of natural law, no law can abrogate them. Laws which intend to abrogate them are contrary to the purpose of law itself, which is to sustain a stable cooperative equilibrium in which humans can flourish, and therefore no one is bound to obey them.

This is written into black-letter law in the U.S. about even the most contentious of the ten articles of the Bill of Rights, the Second Amendment. In United States v. Cruikshank, 92 U. S. 542, 553 (1876), the Supreme Court said of the individual right to bear arms “[t]his is not a right granted by the Constitution. Neither is it in any manner dependent upon that instrument for its existence.” This language was quoted and reaffirmed in the 2008 District of Columbia v. Heller decision.

Neither is it in any manner dependent upon that instrument for its existence. This is English Republicanism’s theory of natural rights limiting not merely what the law can do, but what amendments to the Constitution can do. The right to bear arms (and the right to free speech, and the other rights recognized by the first ten amendments) are not conditional; they are not grants made by law, government, or the Constitution that can be withdrawn by amending these institutions. They are prior to all this apparatus.

Switching back to a game-theoretical perspective, we can to some extent discover the meaning and extent of these rights by investigating their consequences. We can ask of rival interpretations of edge cases around these rights whether they support or hinder stable cooperative equilibrium. What we cannot do is pretend that the broad thrust of these rights is negotiable without fundamentally repudiating the entirety of the American system clear back to the Constitution and its pre-Constitutional foundations.

This has mainly been an essay about the meaning of “natural rights” and the relationship between law, philosophy, and the Constitution. But I mean to give it teeth by addressing one current political issue: could the First or Second Amendments be, in any meaningful sense, repealed? Can any legal or Constitutional process abolish the individual rights to free speech and to bear arms?

It should be clear from the foregoing that the answer is “no”. Amendment of the Constitution cannot abolish a right that was not granted by the Constitution in the first place. People who fail to grasp this understand neither the law, nor the Constitution, nor the Constitution’s ethical foundations.

Posted Sun Jan 17 12:28:48 2016 Tags:

Back in 2008, I wrote about getline.cs, a single-file command line editor for shell applications. It included Emacs key bindings, history, customizable completion and incremental search. It is equivalent to GNU's readline library, except it is implemented in a single C# file.

I recently updated getline.cs to add popup-based completion and C# heuristics for when to automatically trigger code completion. This is what it looks like when used in Mono's C# REPL on the command line:

Posted Thu Jan 14 14:30:10 2016 Tags:

A&D regulars will probably not be much surprised to learn that I’m something of a topic expert on the history of the duel of honor. This came up over on Slate Star Codex recently when I answered a question about the historical relationship of the duel of honor with street violence.

I’ve read all the scholarship on the history of dueling I can find in English. There isn’t much, and what there is mostly doesn’t seem to me to be very good. I’ve also read primary sources like dueling codes, and paid a historian’s attention to period literature.

I’m bringing this up now because I want to put a stake in the ground. I have a personal theory about why Europo-American dueling largely (though not entirely) died out between 1850 and 1900 that I think is at least as well justified as the conventional account, and I want to put it on record.

First, the undisputed facts: dueling began a steep decline in the early 1840s and was effectively extinct in English-speaking countries by 1870, with a partial exception for American frontier regions where it lasted two decades longer. Elsewhere in Europe the code duello retained some social force until World War I.

This was actually a rather swift end for a body of custom that had emerged in its modern form around 1500 but had roots in the judicial duels of the Dark Ages a thousand years before. The conventional accounts attribute it to a mix of two causes: (a) a broad change in moral sentiments about violence and civilized behavior, and (b) increasing assertion of a state monopoly on legal violence.

I don’t think these factors were entirely negligible, but I think there was something else going on that was at least as important, if not more so, and has been entirely missed by (other) historians. I first got to it when I noticed that the date of the early-Victorian law forbidding dueling by British military officers – 1844 – almost coincided with (following by perhaps a year or two) the general availability of percussion-cap pistols.

The dominant weapons of the “modern” duel of honor, as it emerged in the Renaissance from judicial and chivalric dueling, had always been swords and pistols. To get why percussion-cap pistols were a big deal, you have to understand that loose-powder pistols were terribly unreliable in damp weather and had a serious charge-containment problem that limited the amount of oomph they could put behind the ball.

This is why early-modern swashbucklers carried both swords and pistols; your danged pistol might very well simply not fire after exposure to damp northern European weather. It’s also why percussion-cap pistols, which seal the powder priming charge inside a brass cap, were first developed for naval use, the prototype being Sea Service pistols of the Napoleonic era. But there was a serious cost issue with those: each cap had to be made by hand at eye-watering expense.

Then, in the early 1840s, enterprising gunsmiths figured out how to mass-produce percussion caps with machines. And this, I believe, is what actually killed the duel. Here’s how it happened…

First, the availability of all-weather pistols put an end to practical swordfighting almost immediately. One sidearm would do rather than two. Second, dueling pistols suddenly became tremendously more reliable and somewhat more lethal. When smokeless powder became generally available in the 1880s they took another jump upwards in lethality.

Moral sentiments and state power may have been causes, but I am pretty convinced that they had room to operate because a duel of honor in 1889 was a far more dangerous proposition than it had been in 1839. Swords were effectively out of play by the latter date, pistols no longer sputtered in bad weather (allowing seconds to declare that “honor had been satisfied”) and the expected lethality of a bullet hit had gone way up due to the increased velocity of smokeless-powder rounds.

There you have it. Machine manufacture of percussion caps and the deployment of smokeless powder neatly bookend the period of the decline of the duel. I think this was cause, not coincidence.

Posted Tue Jan 12 14:05:23 2016 Tags:

I released reposurgeon 3.30 today. It has been five years and a month since the first public release.

In those five years, the design concept seems to have proved out very well, finding use in many repository conversions. But the project exhibits an unusual sociology; I don’t get lots of casual contributors, only a few exceptional ones.

Your typical open-source project sees a sort of exponential distribution in which small fix patches from people who stop by only once are common, single feature-sized patches less so, and complex sustained work that reimagines entire subsystems is rare. There’s an obvious inverse relation between frequency and complexity scale. At intermediate and higher complexity scales you often get regular contributors who do extended work on different things over time. GPSD is like this.

On reposurgeon I see an entirely different pattern. Casual patches are rare to nonexistent. For long stretches of time I have no active collaborators at all. Then a hacker will appear out of the void and begin contributing very clever patches. He (no shes yet) will draw closer to the project, and for a few days or weeks we’ll be in an intense collaborative mode tossing ideas and patches back and forth. Some complex series of features will be implemented.

Then, his particular feature-lust fulfilled, said hacker will quietly vanish into the interstellar darkness never to be seen again, like some comet on a hyperbolic trajectory after a pass near the Sun. Never yet has there been more than one cometary hacker at a time.

OK, I exaggerate slightly. The project has some semi-regular hangers-on in the #reposurgeon channel (one of them is A&D commenter Mike Swanson). But those people tend to be power users rather than major code contributors; the pattern of large code drops by people who appear, do work that impresses the hell out of me, and then vanish, still dominates code contributions.

My wife Cathy called this one right when I remarked on it. Most people never use reposurgeon more than once, but the hands that find it are disproportionately likely to be very skilled ones. All of my half-dozen or so cometary contributors have been damn good hackers even by my elevated standards, careful and imaginative and tasteful. When people like this detect a deficiency in a tool, they fix it – and their idea of “easy” fixes would daunt lesser mortals.

It’d be nice if some of these hackers would stick around, because I love collaborating with people that bright, but oh well. They’re as in demand as only the capable can be. And at the end of the day, there are much worse things you can say of a software project than “it attracts high-quality work from high-quality people, er, even if they don’t tend to stick around”.

Posted Mon Jan 11 04:37:33 2016 Tags:

I have probably spent more time dealing with the implications and real-world scenarios of copyleft in the embedded device space than anyone. I'm one of a very few people charged with the task of enforcing the GPL for Linux, and it's been well-known for a decade that GPL violations on Linux occur most often in embedded devices such as mobile hand-held computers (aka “phones”) and other such devices.

This experience has left me wondering if I should laugh or cry at the news coverage and pundit FUD that has quickly come forth from Google's decision to move from the Apache-licensed Java implementation to the JDK available from Oracle.

As some smart commenters like Bob Lee have said, there is already at least one essential part of Android, namely Linux itself, licensed as pure GPL. I find it both amusing and maddening that respondents use widespread GPL violation by chip manufacturers as some sort of justification for why Linux is acceptable, but Oracle's JDK is not. Eventually (slowly but surely), GPL enforcement will adjudicate the widespread problem of poor Linux license compliance — one way or the other. But, that issue is beside the point when we talk of the licenses of code running in userspace. The real issue there is two-fold.

First, if you think the ecosystem shall collapse because “pure GPL has moved up the Android stack” and “it will soon virally infect everyone” with copyleft (as you anti-copyleft folks love to say), your fears are just unfounded. Those of us who worked in the early days of reimplementing Java in copyleft communities thought carefully about just this situation. At the time, remember, Sun's Java was completely proprietary, and our goal was to wean developers off Sun's implementation to use a Free Software one. We knew, just as the early GNU developers knew with libc, that a fully copylefted implementation would gain few adopters. So, the earliest copyleft versions of Java were under an extremely weak copyleft called the “GPL plus the Classpath exception”. Personally, I was involved as a volunteer in the early days of the Classpath community; I helped name the project and design the Classpath exception. (At the time, I proposed we call it the “Least GPL”, since the Classpath exception carves so many holes in strong copyleft that it's less of a copyleft than even the Lesser GPL and probably the Mozilla Public License, too!)

But, what does the Classpath exception from GNU's implementation have to do with Oracle's JDK? Well, Sun, before Oracle's acquisition, sought to collaborate with the Classpath community. Those of us who helped start Classpath were excited to see the original proprietary vendor seek to release their own formerly proprietary code and want to merge some of it with the community that had originally formed to replace their code with a liberated alternative.

Sun thus released much of the JDK under “GPL with Classpath exception”. The reasons were clearly explained (URL linked is an archived version of what once appeared on Sun's website) on their collaboration website for all to see. You see the outcome of that in many files in the now-infamous commit from last week. I strongly suspect Google's lawyers vetted what was merged to make sure that the Android Java SDK fully gets the appropriate advantages of the Classpath exception.

So, how is incorporating Oracle's GPL-plus-Classpath-exception'd JDK different from having an Apache-licensed Java userspace? It's not that much different! Android redistributors already have strong copyleft obligations in kernel space, and, remember, Webkit is LGPL'd: there are already weak-copyleft compliance obligations floating around Android, too. So, if a redistributor is already meeting those, it's not much more work to meet the even weaker requirements now added to the incorporated JDK code. I urge you to ask anyone who says that this change will have any serious impact on licensing obligations and analysis for Android redistributors to please prove their claim with an actual example of a piece of code added in that commit under pure GPL that will combine in some way with Android userspace applications. I admit I haven't dug through the commit to prove the negative, but I'd be surprised if some Google engineers didn't do that work before the commit happened.

You may now ask yourself if there is anything of note here at all. There's certainly less here than most are saying about it. In fact, a Java industry analyst (with more than a decade of experience in the area) told me that he believed the decision was primarily technical. Authors of userspace applications on Android (apparently) seek a newer Java language implementation, and given that there was a reasonably licensed Free Software one available, Google made a technical switch to the superior codebase, as it gives API users technically what they want while also reducing maintenance burden. This seems very reasonable. While it's less shocking than what the pundits say, technical reasons probably were the primary impetus.

So, for Android redistributors, are there any actual licensing risks to this change? The answer there is undoubtedly yes, but the situation is quite nuanced, and again, the problem is not as bad as the anti-copyleft crowd says. The Classpath exception grants very wide permissions. Nevertheless, some basic copyleft obligations can remain, albeit in a very weak-copyleft manner. It is possible to violate that weak copyleft, particularly if you don't understand the licensing of all third-party materials combined with the JDK. Still, since you must comply with Linux's license to redistribute Android, complying with the Classpath exception'd stuff will require only a simple afterthought.

Meanwhile, Sun's (now Oracle's) JDK is likely nearly 100% copyright-held by Oracle. I've written before about the dangers of the consolidation of a copylefted codebase with a single for-profit, commercial entity. I've even pointed out that Oracle specifically is very dangerous in its methods of using copyleft as an aggression.

Copyleft is a tool, not a moral principle. Tools can be used incorrectly with deleterious effect. As an analogy, I'm constantly bending paper clips to press those little buttons on electronic devices, and afterwards, the tool doesn't do what it's intended for (hold papers together); it's bent out of shape and only good for the new, dubious purpose, better served by a different tool. (But, the paper clip was already right there on my desk, you see…)

Similarly, while organizations like Conservancy use copyleft in a principled way to fight for software freedom, others use it in a manipulative, drafter-unintended way to extract revenue with no intention of standing up for users' rights. We already know Oracle likes to use the GPL this way, and I really doubt that Oracle will sign a pledge to follow Conservancy's and FSF's principles of GPL enforcement. Thus, we should expect Oracle to aggressively enforce against downstream Android manufacturers who fail to comply with “GPL plus Classpath exception”. Of course, Conservancy's GPL Compliance Project for Linux developers may also enforce, if the violation extends to Linux as well. But, Conservancy will follow those principles and prioritize compliance and community goodwill. Oracle won't. But to say this means that Oracle has “its hooks” in Android makes no sense. They have as many hooks as any of the other thousands of copyright holders of copylefted material in Android. If anything, this is just another indication that we need more of those copyright holders to agree with the principles, and we should shun codebases where only one for-profit company holds copyright.

Thus, my conclusion about this situation is quite different from that of the pundits and link-bait news articles. I speculate that Google weighed a technical decision against its own copyleft compliance processes, determined that its compliance efforts on Android would succeed, and concluded that it could therefore easily benefit technically from the better code. However, for those many downstream redistributors of Android who fail at license compliance already, the ironic outcome is that you may finally find out how friendly and reasonable Conservancy's Linux GPL enforcement truly is, once you compare it with GPL enforcement from a company like Oracle, which holds avarice, not software freedom, as its primary moral principle.

Finally, the bigger problem in Android with respect to software freedom is that the GPL is widely violated on Linux in Android devices. If this change causes Android redistributors to reevaluate their willful ignorance of GPL's requirements, then some good may come of it all, despite Oracle's expected nastiness.

Update on 2016-01-06: I specifically didn't mention the lawsuit above because I don't actually think this whole situation has much to do with the lawsuit, but if folks do want to read my analysis of the Oracle v. Google lawsuit, these are my posts on it in reverse chronological order: [0], [1], [2], [3]. I figured I should add these links given that all the discussion on at least one forum discussing this blog post is about the lawsuit.

Posted Wed Jan 6 01:31:01 2016 Tags:

One problem of filling blocks is that transactions with too-low fees will get “stuck”; I’ve read about such things happening on Reddit.  Then one of my coworkers told me that those he looked at were simply never broadcast properly, and broadcasting them manually fixed it.  Which led both of us to wonder how often it’s really happening…

My approach is to look at the last 2 years of block data and make a simple model (sketched in code after the list):

  1. I assume the tx is not a priority tx (some miners reserve space for these; default 50k).
  2. I judge the “minimum feerate to get into a block” as the smallest feerate for any transaction after the first 50k beyond the coinbase (this is an artifact of how bitcoin core builds blocks: priority area first).
  3. I assume the tx won’t be included in “empty” blocks with only a coinbase or a single non-coinbase tx (SPV mining); their feerate is “infinite”.
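
For concreteness, here’s a minimal Python sketch of that model.  It’s my own illustration, not the code used for the analysis, and it assumes each block has already been reduced to an ordered list of (fee, size) pairs with the coinbase first; recovering fees from raw block data (which requires looking up each input’s previous output) is the part omitted here.  The helper name min_feerate is mine:

    PRIORITY_AREA = 50_000  # bytes miners reserve for priority txs (rule 1)

    def min_feerate(block_txs):
        """Rule 2: the smallest fee-per-byte of any transaction after the
        first 50k bytes beyond the coinbase.  Rule 3: "empty" blocks (a
        coinbase plus at most one other tx) count as infinite."""
        txs = block_txs[1:]               # skip the coinbase
        if len(txs) <= 1:
            return float('inf')           # SPV-mined / near-empty block
        used, rates = 0, []
        for fee, size in txs:
            if used >= PRIORITY_AREA:     # past the priority area
                rates.append(fee / size)
            used += size
        # A block with under 50k of non-priority transactions excluded
        # nobody, so any feerate would have gotten in.
        return min(rates) if rates else 0.0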

Now, what feerate do we assume?  The default “dumb wallet” fee is 10000 satoshi per kilobyte: bitcoin core doesn’t do this pro-rata, so a median 300-byte transaction still pays 10000 satoshi by default (a fee-per-byte of 33.33).  The worst case is a transaction of exactly 1000 bytes (or a wallet which does pro-rata fees), which would have a fee-per-byte of 10.

So let’s consider the last two years (since block 277918), counting how many blocks in a row we see with a minimum fee-per-byte above 33.33, and how many with a minimum feerate above 10.
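
A minimal sketch of that count, again my own illustration: given the per-block minimum feerates (say, from the hypothetical min_feerate above, oldest block first), the maximal runs of blocks above a threshold are exactly the possible delays, in blocks, for a transaction paying that feerate:

    def run_lengths(feerates, threshold):
        """Lengths of maximal runs of consecutive blocks whose minimum
        feerate exceeds threshold, i.e. how many blocks a transaction
        paying exactly threshold satoshi per byte could sit unconfirmed."""
        runs, current = [], 0
        for rate in feerates:
            if rate > threshold:
                current += 1
            elif current:
                runs.append(current)
                current = 0
        if current:
            runs.append(current)
        return runs

    # Thresholds from the fee discussion above: 10000/300 = 33.33 for the
    # median default-fee tx, 10000/1000 = 10 for the worst case.  So e.g.
    # max(run_lengths(min_rates, 10000/300)) would be the longest delay a
    # median-size default-fee transaction could have seen.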

Conclusion

In the last two years you would never have experienced a delay of more than 10 blocks for a median-size transaction with a 10,000 satoshi fee.

For a 1000-byte transaction paying the same fee, you would have experienced a 10-block delay 0.7% of the time, with a 20+ block delay on eight occasions, the worst being a 26-block delay at block 382918 (just under 5 hours).  But note that this fee was also insufficient for inclusion in 40% of blocks over those two years; if your wallet is generating such transactions without warning you, it’s time to switch wallets!

Stuck low-fee transactions are not a real user problem yet.  It’s good to see adoption of smarter wallets, though, because stuck transactions are expected to become a real problem in the near future…

Posted Sun Jan 3 22:24:43 2016 Tags:

After 20 years of evading joining the NRA, I finally did it last week.

I’ve never been a huge fan of the NRA because, despite the fearsome extremist image the mainstream media tries to hang on it, the NRA is actually rather squishy about gun rights. A major symptom of this is its lack of interest in pursuing Second Amendment court cases. Alan Gura, the civil-rights warrior who fought Heller vs. DC and several other key cases to a successful conclusion, was funded not by the NRA but by the Second Amendment Foundation. Also, in the past, the NRA has been too willing to acquiesce to unconstitutional legislation like the 1986 ban on sales of new automatic weapons to civilians.

So, you might well ask: why am I joining an organization I’m dubious about now, when the gun-rights cause seems to be winning? Popular support for Second Amendment rights is at record highs in the polls, a record seven states now have constitutional carry (no permit requirement), and Texas just became the 45th state to legalize open carry last week…why am I joining an organization I’ve characterized as squishy?

I joined because the state-worshiping thugs on the other side are doubling down, and they still own most of the media and the machinery of the Federal government. After decades of pretending that they only wanted soi-disant “common-sense” legislation aimed at specific problems around the edges of gun policy, the Democratic Party is now openly talking of outright gun confiscation. The usual suspects in the national press are obediently amplifying their propaganda.

Some things you do for substantive effect – giving money to the SAF so Alan Gura can win another case is like that. Some things you do less for effect than as a signal of pushback intended to create political momentum and demoralize the other side; joining the NRA is like that.

Meanwhile, I think my sentimental favorite gun-rights organization is still Jews for the Preservation of Firearms Ownership. Because they have the motto that truly says it all to those with any sense of history: “Never again”.

Posted Sun Jan 3 19:55:30 2016 Tags:

[ This post was crossposted on Conservancy's website. ]

I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

A culture like that needs a huge group to rise to fruition, and there are no specific actions that can ensure creation of a multi-generational project like Debian. But, there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he made the early decisions right.

Ian cared about Free Software and wanted to make something useful for the community. For a time in Debian's earliest history, he teamed up with the FSF to help Debian in its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

The Free Software community remains relatively young. Thus, the loss of our community members jars us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché to say, but I say anyway that we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed no one in the Free Software community knew who I was then. Yet, the mere fact that I stayed late at a conference to attend the Debian BoF was enough for him — enough for him to even invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains for me unforgettable. I hope to watch that nature flourish in our community for the remainder of all our lives.

Posted Wed Dec 30 23:00:00 2015 Tags: