(reprinted from USENIX ;login: November/December 1992)
Have you seen Modula-3?
MODULE Main; (* hello1.m3 *)
IMPORT Stdio, Wr;
BEGIN
  Wr.PutText(Stdio.stdout, "Hello, world.\n");
END Main.
I’m not, in general, a fan of David Drake’s writing; most of his output is grimmer and far more carnographic than I care to deal with. I’ve made an exception for his RCN series because they tickle my fondness for classic Age-of-Sail adventure fiction and its pastiches, exhibiting Drake’s strengths (in particular, his deep knowledge of history) while dialing back on the cruelty and gore.
Drake’s sources are no mystery to anyone who has read Patrick O’Brian’s Aubrey-Maturin series; Daniel Leary and his companion-in-arms Adele Mundy are obvious takes on the bumptious Jack Aubrey and physician/naturalist/spy Stephen Maturin. Drake expends great ingenuity in justifying a near-clone of the Napoleonic-era British Navy in a far future with FTL drives. And to his credit, the technological and social setting is better realized than in most exercises of this kind. It compares well in that respect to, for example, David Weber’s Honor Harrington sequence.
The early books in the RCN series, accordingly, seemed fresh and inventive. Alas, in this tenth installment the series is losing its wind. We’ve already seen a couple of variations of the plot; Daniel and Adele traipse off in the Princess Cecile on a sort-out-the-wogs mission backed by Cinnabar’s spooks. In a wry nod to another genre trope, they’re looking for buried treasure.
The worldbuilding remains pretty good, and provided most of the few really good moments in this novel. Alas, as the action ground on I found the characters’ all-too-familiar tics wearing on me – Adele’s nihilistic self-loathing, Daniel’s cheerful shallow bloody-mindedness, Hogg’s bumpkin shtick, Miranda the ever-perfect girlfriend. The cardboard NPCs seem flatter than ever. The series always had strong elements of formula, but now Drake mostly seems to be just repeating himself. Even the battle scenes are rather perfunctory.
This is not a book that will draw in people who aren’t fans of its prequels. I’ll read the next one, but if it isn’t dramatically improved I’m done. Perhaps Drake is tiring of the premises; it may be time for him to bring things to a suitably dramatic close.
The team has put together some beautiful getting started documentation for our iOS User Interface Designer.
In particular, check a couple of hot features on it:
We have been working with a few PlayStation 4 C# lovers for the last few months. The first PS4 powered by Mono and MonoGame was TowerFall:
We are very excited about the upcoming Transistor, by the makers of Bastion, coming out on May 20th:
Mono on the PS4 is based on a special branch of Mono that was originally designed to support static compilation for Windows's WinStore applications. Kind of not very useful anymore, since Microsoft just shipped static compilation of .NET at BUILD. Still, there is no wasted effort in Mono land!
Attention, world, especially Vladimir Putin.
No one believes that the so-called "pro-Russian militants" in Sevastopol or Sloviansk weren't Russian soldiers.
Nor does anyone believe that the election results in Crimea were representative.
The double-speak is making me think again of "pravda". Bad times.
This is a followup to my previous post, Strong SSL/TLS Cryptography in Apache and Nginx.
Perhaps hard to tell given how many users remain, but Windows XP reached its end of life on 8 April 2014. This means no more support, updates, or bug fixes—not even of critical security flaws. Windows XP use has been dwindling, but its end-of-life provides an excellent opportunity to consider removing support for it from your applications and websites.
Dropping Windows XP support provides particularly interesting results for SSL/TLS configurations, as most of the compromises made in a typical cipher suite list exist to support old versions of Internet Explorer on Windows XP. Since those users are now even more of a walking botnet and malware infestation, we needn't continue to support them to the detriment of the rest of the Internet.
And what changes can we make? In my previous cryptography guide, I advocate disabling SSLv3 support, which breaks Internet Explorer 6 on Windows XP, but prevents a downgrade attack for everyone else. If we're willing to drop support for all versions of Internet Explorer on Windows XP†, we can accomplish two other goals:
- Only support Perfect Forward Secrecy, offering no cipher suites without forward security.
- Only support modern ciphers. Currently this just means AES (in both CBC and GCM mode) but in the future will include ChaCha20+Poly1305‡.
To make these changes, follow my previous guide but use this cipher suite ordering for Apache:
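(The exact string from the original guide is not reproduced in this copy; the directives below are only an illustrative sketch, using OpenSSL cipher-string syntax, that matches the rules stated later in the post: PFS only, AES only, ECDHE preferred over DHE, GCM over CBC, larger keys and digests first.)

```apache
# Sketch only, not necessarily the post's exact string.
SSLHonorCipherOrder On
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!aNULL:!eNULL:!MD5:!DSS
```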
And this cipher suite ordering for Nginx:
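(Again, this is an illustrative sketch rather than the post's exact string; it uses the same OpenSSL-style cipher string, which Nginx accepts directly.)

```nginx
# Sketch only, not necessarily the post's exact string.
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!aNULL:!eNULL:!MD5:!DSS';
```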
With the current version of OpenSSL, this yields the following ciphers, in descending order of preference:
This is a small, focused list, with absolutely no compromises for security, obeying the following rules:
- Only support PFS. We favor ECDHE over DHE as the former is less resource intensive, but we support both.
- Only support modern ciphers, which currently is just AES-CBC and AES-GCM. We favor GCM mode over CBC mode as the former is more efficient and not susceptible to the BEAST attack.
- Favor 256-bit key size over 128 but support nothing smaller.
- Support SHA-2 and SHA, nothing else. Prefer SHA-2 over SHA. For SHA-2, prefer 384-bit digests over 256-bit.
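You can check how any candidate cipher string expands on your own OpenSSL build before deploying it. The string below is my illustration matching the rules above, not necessarily the post's exact configuration; Python's ssl module will expand it:

```python
import ssl

# Hypothetical PFS-only, AES-only cipher string (illustration only).
CIPHERS = ("ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:"
           "ECDH+AES128:DH+AES:!aNULL:!eNULL:!MD5:!DSS")

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers(CIPHERS)

# Print the negotiable suites in preference order.
for c in ctx.get_ciphers():
    print(c["name"])
```

Note that on newer OpenSSL builds the list will also include the TLS 1.3 suites, which are configured separately from this string.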
With this cipher suite ordering, Chrome and Firefox will both use TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256—a mighty fine choice—but even your least-favored cipher, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, provides forward security and a strong cipher.
For all your hard effort, this will earn you an "A+" grade and near-perfect SSL Labs Rating:
As before, you cannot do better without silly compromises, such as only supporting TLS 1.2, which would earn you a 100 in "Protocol Support," but then only Chrome and Firefox 27 could access your site.
† Which likely just means the addition of IE 7 and 8.
‡ Indeed, I'm not thrilled to recommend only one cipher. Even if AES were perfect, we ought to have choice. I believe ChaCha20+Poly1305 is an excellent alternative. It is currently supported by Chrome but is not yet in OpenSSL. Once in the latter I will update my recommendations.
This is an update for friends of Sugar only; you know who you are.
Sugar may not have much time left. She’s been losing weight rapidly the last couple weeks, her appetite is intermittent, and she’s been having nausea episodes. She seems remarkably cheerful under the circumstances and still likes human company as much as ever, but … she really does seem old and frail now, which wasn’t so as recently as her 21st birthday in early February.
We’re bracing ourselves. If the rate she’s fading doesn’t change I think we’re going to have to euthanize her within six weeks or so. Possibly sooner. Possibly a lot sooner.
Sugar’s had a good long run. We’ll miss her a lot, but Cathy and I are both clear that it is our duty not to see her suffering prolonged unnecessarily.
If you’re a friend of Sugar and have any option to visit here to say your farewells, best do it immediately.
If you’ve somehow read this far without having met Sugar: I don’t normally blog about strictly personal things, but Sugar is a bit of an institution. She’s been a friend to the many hackers who have guested in our storied basement; I’ve seen her innocent joyfulness light up a lot of faces. We’re not the only people who will be affected by losing her.
The Heartbleed bug made the Washington Post. And that means it’s time for the reminder about things seen versus things unseen that I have to re-issue every couple of years.
Actually, this time around I answered it in advance, in an Ask Me Anything on Slashdot just about exactly a month ago. The following is a lightly edited and somewhat expanded version of that answer.
I actually chuckled when I read a rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
Friends don’t let friends run factory firmware. You really do not want to be relying on anything less audited than OpenWRT or one of its kindred (DD-WRT, or CeroWRT for the bleeding edge). And yet the next time any security flaw turns up in one of those open-source projects, we’ll see a replay of the movie with yet another round of squawking about open source not working.
Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…
Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to a closed-source implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation is known to exist. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.
The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix patch.
The reason for this is that the business models for closed-source software pretty much require software updates to be an expensive, high-friction process hedged about with fees, approval requirements, and legal restrictions. Not like open-source-land, where we can ship a fix minutes after it’s compiled and tested because nobody is trying to collect rent on it.
Sunlight remains the best disinfectant. Open source is no guarantee of perfect results, but every controlled comparison that has been tried has shown that closed source is generally worse.
Finally and in 2014 perhaps most tellingly…if the source of the code you rely on is closed, how do you know your vendor hasn’t colluded with some spy shop to install a back door?
The Ring Of Fire books are a mixed bag. Sharecropped by many authors, ringmastered by Eric Flint, they range from plodding historical soap opera to sharp, clever entertainments full of crunchy geeky goodness for aficionados of military and technological history.
When Flint’s name is on the book you can generally expect the good stuff. So it proves in the latest outing, 1636: Commander Cantrell in the West Indies, a fun ride that’s (among other things) an affectionate tribute to C.S. Forester’s Hornblower novels and age-of-sail adventure fiction in general. (Scrupulously I note that I’m personally friendly with Flint, but this is exactly because he’s good at writing books I like.)
It is 1636 in the shook-up timeline birthed by the town of Grantville’s translocation to the Thuringia of 1632. Eddie Cantrell is a former teenage D&D player from uptime who became a peg-legged hero of the Baltic War and then husband of the not-quite-princess Ann-Catherine of Denmark. Now the United States of Europe is sending him to the Caribbean with an expeditionary force, Flotilla X-Ray, to seize the island of Trinidad from the Spanish and harvest oil desperately needed by Grantville’s industry.
But it’s not a simple military mission. There are tensions among the factions in the allied fleet – the United States of Europe, the Danes, the Dutch, and a breakaway Spanish faction in the Netherlands. And the Wild Geese – exiled Irish mercenaries under the charismatic Earl Tyrconnell – have their own agenda. Cardinal Richelieu’s agents are maneuvering against the whole enterprise. And as the game opens, nobody in the fleet knows about the desperate, hidden Dutch refugee colony on Eustatia…
If the book has a fault, it’s that authors Flint and Gannon love their intricate wheels-within-wheels plotting and elaborate political intrigue a little bit too much. It’s fun to watch those gears turning for a while, but even readers who (like me) relish that sort of thing may find themselves getting impatient for stuff to start blowing up already by thirty chapters in.
No fear, we do get our rousing sea battles. With novel twists, because the mix of Grantville’s uptime technology with the native techniques of the 1600s takes tactics in some strange directions. I particularly chuckled at the descriptions of captive hot-air balloons being used as ship-launched observation platforms, a workable expedient never tried in our history. As usual, Flint (a former master machinist) writes with a keen sense of how applied technology works – and, too often, fails.
If some of the character developments and romantic pairings are maybe a little too easy to see coming, well, nobody reads fiction like this for psychological depth or surprise. It’s a solid installment in the ongoing series. Oh, and with pirates too. Arrr. I’ll read the next one.
Last week, Microsoft open sourced Roslyn, the .NET Compiler Platform for C# and VB.
Roslyn is an effort to create a new generation of compilers written in managed code. In addition to the standard batch compiler, it contains a compiler API that can be used by all kinds of tools that want to understand and manipulate C# source code.
Roslyn is the foundation that powers the new smarts in Visual Studio and can also be used for static analysis, code refactoring or even to smartly navigate your source code. It is a great foundation that tool developers will be able to build on.
I had the honor of sharing the stage with Anders Hejlsberg when he published the source code, and showed both Roslyn working on a Mac with Mono, as well as showing the very same patch that he demoed on stage running on Mono.
Roslyn on Mono
The source code as released contains some C# 6.0 features so the patches add a bootstrapping phase, allowing Roslyn to be built with a C# 5.0 compiler from sources. There are also a couple of patches to deal with paths (Windows vs Unix paths) as well as a Unix Makefile to build the result.
Sadly, Roslyn's build script depends on a number of features of MSBuild that neither Mono nor MonoDevelop/Xamarin Studio currently support, but we hope we can address that in the future. For now, we will have to maintain a Makefile-based system to use Roslyn.
Our patches no longer apply to the tip of Roslyn master, as Roslyn is under very active development. We will be updating the patches and tracking Roslyn master on our fork moving forward.
Currently Roslyn generates debug information using a Visual Studio native library. So the /debug switch does not work. We will be providing an alternative implementation that uses Mono's symbol writer.
Adopting Roslyn: Mono SDK
Our goal is to keep track of Roslyn as it is being developed and, when it is officially released, to bundle Roslyn's compilers with Mono.
In addition, this will provide an up-to-date and compliant Visual Basic.NET compiler for Unix platforms.
Our current plan is to keep both compilers around; we will implement the various C# 6.0 features in Mono's C# compiler.
There are a couple of reasons for this. Our batch compiler has been fine tuned over the years, and for day-to-day compilation it is currently faster than the Roslyn compiler.
The second one is that our compiler powers our Interactive C# Shell and we are about to launch something very interesting with it. This functionality is not currently available on the open sourced Roslyn stack.
In addition, we plan on distributing the various Roslyn assemblies to Mono users, so they can build their own tools on top of Roslyn out of the box.
Adopting Roslyn: MonoDevelop/Xamarin Studio
Roslyn really shines for use in IDEs.
We have started an effort to adopt Roslyn in MonoDevelop/Xamarin Studio. This means that the underlying NRefactory engine will also adopt Roslyn.
This is going to be a gradual process, and during the migration the goal is to keep using both Mono's C# compiler as a service engine and bit by bit, replace with the Roslyn components.
We are evaluating various areas where Roslyn will have a positive impact. The plan is to start with code completion, and later on support the full spectrum of features that NRefactory provides (from refactoring to code generation).
While not related to Roslyn, I figured it was time to share this.
For the last couple of months, the ECMA C# committee has been working on updating the spec to reflect C# 5. And this time around, the spec benefits from having two independent compiler implementations.
Mono Project and Roslyn
Our goal is to contribute fixes to the Roslyn team to make sure that Roslyn works great on Unix systems, and hopefully to provide bug reports and bug fixes as time goes by.
We are very excited about the release of Roslyn; it is an amazing piece of technology and one of the most sophisticated compiler designs available. It is a great place to learn good C# idioms and best practices, and a great foundation for tooling for C# and VB.
Thanks to everyone at Microsoft that made this possible, and thanks to everyone on the Roslyn team for starting, contributing and delivering such an ambitious project.
Roslyn uses a few tracing APIs that were not available on Mono, so you must use a newer version of Mono to build Roslyn.
We even include the patch to add French quotes that Anders demoed. Make sure to skip that patch if you don't want it.
 From Michael Hutchinson:
- There are references of the form:
<Reference Include="Microsoft.Build, Version=$(VisualStudioReferenceAssemblyVersion), Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
This is a problem because our project system tries to load project references verbatim from the project file, instead of evaluating them from the MSBuild engine. This would be fixed by one of the MSBuild integration improvements I've proposed.
- There's an InvalidProjectFileException error from the xbuild engine when loading one of the targets files that's imported by several of the code analysis projects, VSL.Settings.targets. I'm pretty sure this is because it uses MSBuild property functions, an MSBuild 4.0 feature that xbuild does not support.
- They use the AllowNonModulatedReference metadata on some references, and it's completely undocumented, so I have no idea what it does and what problems might be caused by not handling it in xbuild.
- One project can't be opened because it's a VS Extension project. I've added the GUID and name to our list of known project types so we show a more useful error message.
- A few of the projects depend on Microsoft.Build.dll, and Mono does not have a working implementation of it yet. They also reference other MSBuild assemblies which I know we have not finished.
Roslyn is much better at error recovery and has much more comprehensive support for code completion than Mono's C# compiler does. It also has much better support for dealing with incremental changes than we do.
They use private everywhere, and that is just plain ugly.
We will find out a way of selecting which compiler to use, mcs (Mono's C# Compiler) or Roslyn.
Some weeks ago I was tremendously amused by a report of an exchange in which a self-righteous vegetarian/vegan was attempting to berate somebody else for enjoying Kentucky Fried Chicken. I shall transcribe the exchange here:
> There is nothing sweet or savory about the rotting carcass of a chicken twisted and crushed with cruelty. There is nothing delicious about bloodmouth carnist food. How does it feel knowing your stomach is a graveyard

I'm sorry, but you just inadvertently wrote the most METAL description of eating a chicken sandwich in the history of mankind.

MY STOMACH IS A GRAVEYARD
NO LIVING BEING CAN QUENCH MY BLOODTHIRST
I SWALLOW MY ENEMIES WHOLE
ESPECIALLY IF THEY'RE KENTUCKY FRIED
I am no fan of KFC; I find it nasty and overprocessed. However, I found the vegan rant richly deserving of further mockery, especially after I did a little research and discovered that the words “bloodmouth” and “carnist” are verbal tokens for an entire ideology.
First thing I did was notify my friend Ken Burnside, who runs a T-shirt business, that I want a “bloodmouth carnist” T-shirt – a Spinal-Tap-esque parody of every stupid trash-metal tour shirt ever printed. With flaming skulls! And demonic bat-wings! And umlauts! Definitely umlauts.
Once Ken managed to stop laughing we started designing. Several iterations, a phone call, and a flurry of G+ messages later, we had the Bloodmouth Carnist T-shirt. Order yours today!
By the way, the skull on that shirt is me, sort of. Ken asked me to supply a photo reference, so my wife and I went to a steakhouse and she snapped a picture of me grinning maniacally over a slab of prime rib. For SCIENCE!
This had consequences. An A&D regular challenged me in private mail to explain why my consequentialist ethics don’t require me to be a vegetarian.
I broadly agree with Sam Harris’s position in The Moral Landscape that the ground of ethics has to be the minimization of pain. But I add to this that for pain to be of consequence to me it needs to be have an experiencer who is at least potentially part of a community of reciprocal trust with me. Otherwise I would be necessarily paralyzed by guilt at killing bacteria every time I breathe.
The community of (potential) reciprocal trust includes all humans, possibly excepting a tiny minority of the criminally insane. It presumptively includes extraterrestrial sophonts, if we ever discover those. I think it is prudent and conservative (in the best sense of that term) to include borderline and near-borderline sophonts like higher primates, elephants, whales, dolphins, and squid. In principle it includes any animal that can solve the other-minds problem – which probably includes some of the brighter birds. I think this category can be roughly delimited using the mirror test.
For different reasons, the community of trust includes non-sophont human commensals. My cat, Sugar, for example, who shows only dim and occasional flashes of behavior that might indicate she models other minds, but has a strong mutual-trust relationship with my wife and myself. We know what to expect of each other; we like each other. This is a kind of reciprocity with ethical significance even though the cat is not sophont.
Another way to put this is to remember the Golden Rule, “Do as you would be done by” and ask: what animals have the ability to follow it, the right kind of informational complexity required to support it?
Cows, pigs, chickens, and fish are not part of my potential community of trust. They don’t have minds capable of it – the informational complexity required doesn’t seem to be there at all (though suspicions have occasionally been raised about pigs; I’ll revisit this point). Thus, their deaths are not intrinsically ethically significant to me, any more than harvesting a head of lettuce is.
Cruelty is a different matter. I think we ought not engage in cruelty because it is damaging and coarsening; people who make a habit of being cruel to non-sophonts are more likely to become cruel and dangerous to sophonts as well. Thus, merely killing a food animal is ethically neutral, but careless cruelty towards one is wrong and deliberate cruelty is evil.
(Nevertheless, I report that the above vegan rant inculcated in me a desire to stomp into a roomful of vegans and demand my food “twisted and crushed with cruelty”. I really don’t like it when people try to jerk me around by my sensibilities as though I’m some kind of idiot who is unreachable by reasoned argument. I find it insulting and want to punch back.)
These criteria could interact in interesting ways, and there are edge cases that need more investigation. I think I would have to stop eating pork if pigs could count the way that (for example) crows can – some pigs reportedly come close enough to passing the mirror test to worry me a little. I can readily imagine that pigs bred for intelligence might come near enough to sophont to be taboo to me. On the other hand, a friend who grew up on a hog farm assures me that pigs bred for meat are stone-stupid; according to her, it’s only wild pigs I should be even marginally concerned about.
Otters are another interesting case; they seem very playful and intelligent in the wild, occasionally use tools, and can form affectionate bonds with humans. I’d very much like to see them mirror-tested; in the meantime I’m quite willing to give them the benefit of the doubt and consider them taboo for killing and eating.
There you have it. The bloodmouth carnist theory of animal rights. Now if you’ll excuse me I’m going to go have a roast beef sandwich for lunch.
My younger son, Oscar, asked me to put bananas into the lamb curry I was planning to cook. Which inspired this:
Diced leg of lamb
Fry the onions in the ghee. Add ginger and ground spices and fry for a minute more, then add the diced lamb and brown. Add the raisins, banana (sliced), dried apricot (roughly chopped) and lemon (cut into eighths, including skin) and some yoghurt. Cook on a medium heat until the yoghurt begins to dry out, then add some more. Repeat a couple of times (I used most of a 500ml tub of Greek yoghurt). Salt to taste. Eat. The lemon is surprisingly edible.
I served it with saffron rice and dal with aubergines.
Today I have released Redshift 1.9 containing a bunch of small improvements and bug fixes. The most notable improvement is the switch to an improved color correction contributed by Ingo Thies. You may want to experiment with the temperature parameters after updating to find the best setting. In addition, a DRM adjustment method has been added, so it is now possible to apply redness to the Linux console even when X is not running. This method has to be explicitly selected using
The full release notes are listed below.
- Use improved color scheme provided by Ingo Thies.
- Add drm driver which will apply adjustments on Linux consoles (Mattias Andrée).
- Remove deprecated GNOME clock location provider.
- Set proc title for redshift-gtk (Linux/BSD) (Philipp Hagemeister).
- Show current temperature, location and status in GUI.
- Add systemd user unit files so that redshift can be used with systemd as a session manager (Henry de Valence).
- Use checkbox to toggle Redshift in GUI (Mattias Andrée).
- Gamma correction is applied after brightness and temperature (Mattias Andrée).
- Use XDG Base Directory Specification when looking for configuration file (Mattias Andrée).
- Load config from %LOCALAPPDATA%\redshift.conf on Windows (TingPing).
- Add RPM spec for Fedora in contrib.
- redshift-gtk has been ported to Python3 and new PyGObject bindings for Python.
In addition to the changes included in this release there are also a lot of changes in the pipeline being prepared for the next release. Mattias Andrée is currently working on support for applying distinct adjustments to different displays or outputs, and also some other related improvements (#44). We are also currently having a discussion about switching Redshift to become a D-Bus service such that a more advanced GUI can be implemented (#54). This could potentially also allow for much more customization of the color temperature, time of sunset/sunrise, etc. If you have any comments on these features, please let us know on GitHub.
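Since the post suggests experimenting with the temperature parameters after updating, here is a minimal sketch of a redshift.conf; the keys follow Redshift's standard configuration format, but the values are only examples, not recommendations:

```ini
; Minimal example ~/.config/redshift.conf (values are illustrative)
[redshift]
temp-day=5700
temp-night=3500
; Explicitly select the new DRM method for the Linux console:
adjustment-method=drm
```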
When I heard that Brendan Eich had been forced to resign his new job as CEO at Mozilla, my first thought was “Congratulations, gay activists. You have become the bullies you hate.”
On reflection, I think the appalling display of political thuggery we’ve just witnessed demands a more muscular response. Eich was forced out for donating $1000 to an anti-gay-marriage initiative? Then I think it is now the duty of every friend of free speech and every enemy of political bullying to pledge not only to donate $1000 to the next anti-gay-marriage initiative to come along, but to say publicly that they have done so as a protest against bullying.
This is my statement that I am doing so. I hope others will join me.
It is irrelevant whether we approve of gay marriage or not. The point here is that bullying must have consequences that deter the bullies, or we will get more of it. We must let these thugs know that they have sown dragon’s teeth, defeating themselves. Only in this way can we head off future abuses of similar kind.
And while I’m at it – shame on you, Mozilla, for knuckling under. I’ll switch to Chrome over this, if it’s not totally unusable.
Over at my day job we've created a Newly Observed Domains service which tracks domain first sightings and packages them up in various ways that can be used to determine network reputation. As in most advanced DNS-related technologies, my home and guests and family are
“Open Source as Last Resort” appears to be popular this week. First, Canonical, Ltd. will finally liberate the UbuntuOne server-side code, but only after abandoning it entirely. Second, Microsoft announced a plan to release its .NET compiler platform, Roslyn, under the Apache License, spinning it off into an (apparent, based on description) 501(c)(6) organization called the Dot Net Foundation.
This strategy is pretty bad for software freedom. It gives fodder to the idea that “open source doesn't work”, because these projects are likely to fail (or have already failed) when they're released. (I suspect, although I don't know of any studies on this, that) most software projects, like most start-up organizations, fail in the first five years. That's true whether they're proprietary software projects or not.
But using code liberation as a last-ditch attempt to gain interest in a failing codebase only gives a bad name to the licensing and community-oriented governance that creates software freedom. I therefore think we should not laud these sorts of releases, even though they liberate more code. We should call them what they are: too little, too late. (I said as much in the five-year-old bug ticket where community members have been complaining that UbuntuOne server-side is proprietary.)
Finally, a note on using a foundation to attempt to bolster a project community in these cases:
I must again point out that the type of organization matters greatly. Those who are interested in the liberated .NET codebase should be asking Microsoft if they're going to form a 501(c)(6) or a 501(c)(3) (and I suspect it's the former, which bodes badly).
I know some in our community glibly dismiss this distinction as some esoteric IRS issue, but it really matters with regard to how the organization treats the community. 501(c)(6) organizations are trade associations that serve for-profit businesses. 501(c)(3)'s serve the public at large. There's a huge difference in their behavior and activities. While it's possible for a 501(c)(3) to fail to serve all the public's interest, it's corruption when they so fail. When 501(c)(6)'s serve only their corporate members' interest, possibly to the detriment of the public, those 501(c)(6) organizations are just doing the job they are supposed to do — however distasteful it is.
Note: I said “open source” on purpose in various places in this post. I'm specifically using that term because these companies' actions are clearly not in the spirit of software freedom, nor even inspired by it; they are pure and simple strategy decisions.
A note from the publisher says Jeremy Rifkin himself asked them to ship me a copy of his latest book, The Zero Marginal Cost Society. It’s obvious why: in writing about the economics of open-source software, he thinks I provided one of the paradigmatic cases of what he wants to write about – the displacement of markets in scarce goods by zero-marginal-cost production. Rifkin’s book is an extended argument that this is a rising trend which will soon obsolesce not just capitalism as we have known it, but many forms of private property as well.
Alas for Mr. Rifkin, my analysis of how zero-marginal-cost reproduction transforms the economics of software also informs me why that logic doesn’t obtain for almost any other kind of good – why, in fact, his general thesis is utterly ridiculous. But plain common sense refutes it just as well.
Here is basic production economics: the cost of a good can be divided into two parts. The first is the setup cost – the cost of assembling the people and tools to make the first copy. The second is the incremental – or, in a slight abuse of terminology, “marginal” – cost of producing unit N+1 after you have produced the first copy.
In a free market, normal competitive pressure pushes the price of a good towards its marginal cost. It doesn’t get there immediately, because manufacturers need to recoup their setup costs. It can’t stay below marginal cost, because if it did, the manufacturer would lose money on every sale and the business would crash.
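The arithmetic here is simple enough to sketch. This toy model (all numbers are invented for illustration, not drawn from the text) shows why competition can push an information good's price toward zero while a physical good's price has a hard floor at its marginal cost:

```python
# Toy model of setup vs. marginal cost. The break-even unit price is
# the marginal cost plus the setup cost amortized over the run.

def breakeven_price(setup_cost, marginal_cost, units):
    """Lowest sustainable price per unit for a given production run."""
    return marginal_cost + setup_cost / units

# A physical good: $100,000 of setup, $5 of marginal cost, 10,000 units.
physical = breakeven_price(100_000, 5.0, 10_000)      # 15.0 per unit

# An information good: same setup cost, ~zero marginal cost.
digital = breakeven_price(100_000, 0.0, 10_000)       # 10.0 per unit

# As volume grows, price is squeezed toward marginal cost: for the
# digital good that floor is zero, but it is approached, never crossed.
digital_at_scale = breakeven_price(100_000, 0.0, 10_000_000)  # 0.01
```

The point of the sketch is the asymptote: no matter how large the run, the physical good's price never falls below 5.0, while the digital good's price falls without bound toward zero.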
In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods – software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.
In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own costly instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.
Fifteen years ago I pointed out in my paper The Magic Cauldron that the pricing models for most proprietary software are economically insane. If you price software as though it were (say) consumer electronics, you either have to stiff your customers or go broke, because the fixed lump of money from each unit sale will always be overrun by the perpetually-rising costs of technical support, fixes, and upgrades.
I said “most” because there are some kinds of software products that are short-lived and have next to no service requirements; computer games are the obvious case. But if you follow out the logic, the sane thing to do for almost any other kind of software usually turns out to be to give away the product and sell support contracts. I was arguing this because it knocks most of the economic props out from under software secrecy. If you can sell support contracts at all, your ability to do so is very little affected by whether the product is open-source or closed – and there are substantial advantages to being open.
Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.
First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.
Second, even in software – with those strong secondary markets – open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.
There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true – which is exactly why capitalists can make a lot of money by substituting capital goods for labor.
These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
An even more basic refutation of Rifkin is: food. Most of the factors of production that bring (say) an ear of corn to your table have a cost floor well above zero. Even just the transportation infrastructure required to get your ear of corn from farm to table requires trillions of dollars of capital goods. Atoms are heavy. Not even “near-zero” marginal cost will ever happen here, let alone zero. (Late in the book, Rifkin argues for a packetized “transportation Internet” – a good idea in its own terms, but not a solution because atoms will still be heavy.)
It is essential to Rifkin’s argument that he constantly fudges the distinction between “zero” and “near zero” in marginal costs. Not only does he wish away capital expenditure, he tries to seduce his readers into believing that “near” can always be made negligible. Most generally, Rifkin’s take on production economics calls to mind the famous Orwell quote: “One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.”
But even putting all those mistakes aside, there is another refutation of Rifkin. In his brave impossible new world of zero marginal costs for goods, who is going to fix your plumbing? If Rifkin tries to negotiate price with a plumber on the assumption that the plumber’s hours after zero have zero marginal cost, he’ll be in for a rude awakening.
The book is full of other errors large and small. The particular offence for which I knew Rifkin before this book – wrong-headed attempts to apply the laws of thermodynamics to support his desired conclusions – reappears here. As usual, he ignores the difference between thermodynamically closed systems (which must experience an overall increase in entropy) and thermodynamically open systems in which a part we are interested in (such as the Earth’s biosphere, or an economy) can be counter-entropic by internalizing energy from elsewhere into increased order. This is why and how life exists.
Another very basic error is Rifkin’s failure to really grasp the most important function of private property. He presents it only as a store of value and a convenience for organizing trade, one that accordingly becomes less necessary as marginal costs go towards zero. But even if atoms were weightless and human attention free, property would still function as a definition of the sphere within which the owner’s choices are not interfered with. The most important thing about owning land (or any rivalrous good, clear down to your toothbrush) isn’t that you can sell it, but that you can refuse intrusions by other people who want to rivalrously use it. When Rifkin notices this at all, he thinks it’s a bad thing.
The book is a blitz of trend-speak. Thomas Kuhn! The Internet of Things! 3D printing! Open source! Big data! Prosumers! But underneath the glossy surface are gaping holes in the logic. And the errors follow a tiresomely familiar pattern. What Rifkin is actually retailing, whether he consciously understands it that way or not (and he may not), is warmed-over Marxism – hostility to private property, capital, and markets perpetually seeking a rationalization. The only innovation here is that for the labor theory of value he has substituted a post-labor theory of zero value that is even more obviously wrong than Marx’s.
All the indicia of cod-Marxism are present. False identification of capitalism with vertical integration and industrial centralization: check. Attempts to gin up some sort of an opposition between voluntary but non-monetized collaboration and voluntary monetized trade: check. Valorizing nifty little local cooperatives as though they actually scaled up: check. Writing about human supercooperative behavior as though it falsifies classical and neoclassical economics: check. At times in this book it’s almost as though Rifkin is walking by a checklist of dimwitted cliches, ringing them like bells in a carillon.
Perhaps the most serious error, ultimately, is the way Rifkin abuses the notion of “the commons”. This has a lot of personal weight for me, because I have lived in and helped construct a hacker culture that maintains a huge software commons and continually pushes for open, non-proprietary infrastructure. I have experienced, recorded, and in some ways helped create the elaborate network of manifestos, practices, expectations, how-to documents, institutions, and folk stories that sustains this commons. I think I can fairly claim to have made the case for open infrastructure as forcefully and effectively as anyone who has ever tried to.
Bluntly put, I have spent more than thirty years actually doing what Rifkin is glibly intellectualizing about. From that experience, I say this: the concept of “the commons” is not a magic wand that banishes questions about self-determination, power relationships, and the perils of majoritarianism. Nor is it a universal solvent against actual scarcity problems. Maintaining a commons, in practice, requires more scrupulousness about boundaries and respect for individual autonomy rather than less. Because if you can’t work out how to maximize long-run individual and joint utility at the same time, your commons will not work – it will fly apart.
Though I participate in a huge commons and constantly seek to extend it, I seldom speak of it in those terms. I refrain because I find utopian happy-talk about “the commons” repellent. It strikes me as at best naive and at worst quite sinister – a gauzy veil wrapped around clapped-out collectivist ideologizing, and/or an attempt to sweep the question of who actually calls the shots under the rug.
In the open-source community, all our “commons” behavior ultimately reduces to decisions by individuals, the most basic one being “participate this week/day/hour, or not?” We know that it cannot be otherwise. Each participant is fiercely protective of the right of all others to participate only voluntarily and on terms of their own choosing. Nobody ever says that “the commons” requires behavior that individuals themselves would not freely choose, and if anyone ever tried to do so they would be driven out with scorn. The opposition Rifkin wants to find between Lockean individualism and collaboration does not actually exist, and cannot.
Most of us also understand, nowadays, that attempts to drive an ideological wedge between our commons and “the market” are wrong on every level. Our commons is in fact a reputation market – one that doesn’t happen to be monetized, but which has all the classical behaviors, equilibria, and discovery problems of the markets economists usually study. It exists not in opposition to monetized trade, free markets, and private property, but in productive harmony with all three.
Rifkin will not have this, because for the narrative he wants these constructions must conflict with each other. To step away from software for an instructive example of how this blinds him, the way Rifkin analyzes the trend towards automobile sharing is perfectly symptomatic.
He tells a framing story in which individual automobile ownership has been a central tool and symbol of individual autonomy (true enough), then proposes that the trend towards car-sharing is therefore necessarily a willing surrender of autonomy. The actual fact – that car-sharing is popular mainly in urban areas because it allows city-dwellers to buy more mobility and autonomy at a lower capital cost – escapes him.
Car sharers are not abandoning private property, they’re buying a service that prices personal cars out of some kinds of markets. Because Rifkin is all caught up in his own commons rhetoric, he doesn’t get this and will underestimate what it takes for car sharing to spread out of cities to less densely populated areas where it has a higher discovery and coordination cost (and the incremental value of individual car ownership is thus higher).
The places where open source (or any other kind of collaborative culture) clashes with what Rifkin labels “capitalism” are precisely those where free markets have been suppressed or sabotaged by monopolists and would-be monopolists. In the case of car-sharing, that’s taxi companies. For open source, it’s Microsoft, Apple, the MPAA/RIAA and the rest of the big-media cartel, and the telecoms oligopoly. Generally there is explicit or implicit government market-rigging in play behind these – which is why talking up “the commons” can be dangerous, tending to actually legitimize such political power grabs.
It is probably beyond hope that Jeremy Rifkin himself will ever understand this. I write to make it clear to others that he cannot recruit the successes of open-source software for the anti-market case he is trying to make. His grasp of who we are, his understanding of how to make a “commons” function at scale, and his comprehension of economics in general are all fatally deficient.
(Note: this is not a technical blog entry, as the overwhelming majority of articles on my blog are, so if you really don't care about anything other than technical topics, feel free to skip this posting altogether.)
The term "hunter-gatherer" gets used quite a bit when talking about our deep past, in particular when referring to human societies from the late stone age, or Neolithic. For some people, however, the past is not so long ago. A few tribes still practice the hunter-gatherer lifestyle, and even in "modern" societies a fair amount of hunting and gathering goes on.
In that spirit, this weekend we went into the forest to collect wild garlic with some friends. Also known as ramsons in English or bärlauch in German, this member of the chive family has a wonderful, mild garlic flavor and odor. The bulbs lay dormant in the soil until early spring when they erupt into a carpet of pungent edible leaves before flowering. After a few weeks, the plants disappear from view again until the next spring.
We picked a few kilograms of the leaves from the forest floor over the course of about an hour, though we certainly were not hurrying. The canopy of trees stretched out above us on a wonderfully sunny spring day as we talked and picked. There was so much of it in every direction that you couldn't even tell we'd been there after we left, despite our hauling away two very large sacks. While you can buy bärlauch in the grocery store here (at 35 CHF per kilogram), you miss out on the forest adventure and the socializing.
So what does one do with this vegetable?
Well, when we got back home I dropped some potatoes into a pot and in short order (and one rather messy kitchen table later) we had a wonderful wild garlic and potato gnocchi. After a quick dip in boiling water, the fluffy little pillows were finished in a frying pan with butter and a handful of peas before being served with a trail of sauteed mushrooms and cedar nuts, a vine of roasted cherry tomatoes and grated parmesan. This was prefaced by a salad of mixed spring greens, tomatoes and toasted pine nuts in a simple vinaigrette, cut through with thinly sliced ribbons of our freshly harvested bärlauch leaves. Very foresty food. Yum.
Even after this feast, which fed four adults and two children, we still had a large amount of leaves to do something with. So on Sunday I made large, round ravioli, ~10cm in diameter, filled with generously large balls of ricotta and bärlauch leaves which had been sweated down with finely diced shallots and a dash of white wine.
We also made a large container of wild garlic pesto, replacing the traditional basil with the bärlauch leaves. This is the ultimate in simple uses: you blend up the leaves with some pine nuts, stir in copious amounts of olive oil until it looks like pesto and then drop in a bunch of parmesan cheese to taste. We had some bärlauch pesto with dinner last night, but most of it will end up in jars for use later in the year.
Other ideas for wild garlic while it is still fresh and plentiful in the forest include using it in soups, quiches, pasta dough and sauces for spätzle, knöpfli and other starchy staples. The omnivores among us may want to pair it with rich red meats (stuffed, marinated or wrapped) or subtle fish (I bet the pesto would go just amazingly with a variety of white fish).
Soon, though, the wild garlic will be gone and we will move on to the next seasonal food. The variety that seasonal eating brings, and the expectation that builds up all year when a given ingredient you absolutely love isn't in season, is just wonderful. It invites creativity into our cooking and adds excitement to the table.
There's another, perhaps even more important, bonus that comes along with food you harvest yourself, particularly from the wild: it connects us in a very direct and personal way to the planet we live on and the ecosystem we share it with by reminding us of the cycles of nature that we rely on for our day to day existence. Without food, we cease to be, and our food comes from the earth and its bodies of water. When we are reminded of that, it puts a lot of other things into a more tidy perspective.
So while most of the planet certainly does not rely on hunting and gathering to survive, doing a little bit of it now and again may not be the worst of ideas ...
If you hunt and gather, leave a comment below about your favorite ingredients and the dishes you love them in!
Today, Conservancy announced the addition of Karen Sandler to our management team. This addition to our staff will greatly improve Conservancy's ability to help its many member projects.
This outcome is one I've been working towards for a long time. I've focused for at least a year on fundraising for Conservancy in hopes that we could hire a third full-time staffer. For the last few years, I've been doing basically two full-time jobs, since I've needed to give my personal attention to virtually everything Conservancy does. This obviously doesn't scale, so my focus has been on increasing capacity at Conservancy to serve more projects better.
I (and the entire Board of Directors of Conservancy) have often worried that if I were to disappear, leave Conservancy, or otherwise just drop dead, Conservancy might not survive without me. Such heavy reliance on one person is a bug, not a feature, in an organization. That's why I worked so hard to recruit Karen Sandler as Conservancy's new Executive Director. Admittedly, she helped create Conservancy and has been involved since its inception. But having her full-time on staff is a great step forward: there's no single point of failure anymore.
It's somewhat difficult for me to relinquish some of my personal control over Conservancy. I have been mostly responsible for building Conservancy from a small unstaffed “thin” fiscal sponsor into a “full-service” fiscal sponsor that provides virtually any work that a Free Software project requests. Much of that has been thanks to my work, and it's tough to let someone else take that over.
However, handing off the Executive Director position to Karen specifically made this transition easy. Put simply, I trust Karen, and I recruited her personally to take over (one of) my job(s). She really believes in software freedom in the way that I do, and she's taught me at least half the things I know about non-profit organizational management. We've collaborated on so many projects and have been friends and colleagues — through both rough and easy times — for nearly a decade. While I think I'm justified in saying I did a pretty good job as Conservancy's Executive Director, Karen will do an even better job than I did.
I'm not stepping aside completely from Conservancy management, though. I'm continuing in the role of President and I remain on the Board of Directors. I'll be involved with all strategic decisions for the organization, and I'll be the primary manager for a few of Conservancy's program activities, including at least the non-profit accounting project and Conservancy's license enforcement activities. My primary staff role, however, will now be under the title “Distinguished Technologist” — a title we borrowed from HP. The basic idea behind this job at Conservancy is that my day-to-day work helps the organization understand the technology of Free Software and how it relates to Conservancy's work. As an initial matter, I suspect that my focus for the next few years is going to be the non-profit accounting project, since that's the most urgent place where Free Software is inadequately providing technological solutions for Conservancy's work. (Now, more than ever, I urge you to donate to that campaign, since it will become a major component of funding my day-to-day work.)
I'm somewhat surprised that, even in the six hours since this announcement, I've already received emails from Conservancy member project representatives worded as if they expect they won't hear from me anymore. While, indeed, I'll cease to be the front-line contact person for issues related to Conservancy's work, Conservancy and its operations will remain my focus. Karen and I plan a collaborative management style for the organization, so I suspect for many things, Karen will brief me about what's going on and will seek my input. That said, I'm looking forward to a time very soon when most Conservancy management decisions won't primarily be mine anymore. I'm grateful for Karen, as I know that the two of us running Conservancy together will make a great working environment for both of us, and I really believe that she and I as a management team are greater than the sum of our parts.
When I have to explain how real hackers differ from various ignorant media stereotypes about us, I’ve found that one of the easiest differences to explain is transparency vs. anonymity. Non-techies readily grasp the difference between showing pride in your work by attaching your real name to it versus hiding behind a concealing handle. They get what this implies about the surrounding subcultures – honesty vs. furtiveness, accountability vs. shadiness.
One of my regular commenters is in the small minority of hackers who regularly uses a concealing handle. Because he pushed back against my assertion that this is unusual, counter-normative behavior, I set a bit that I should keep an eye out for evidence that would support a frequency estimate. And I’ve found some.
Recently I’ve been doing reconstructive archeology on the history of Emacs, the goal being to produce a clean git repository for browsing the entire history (yes, this will become the official repo after 24.4 ships). This is a near-unique resource in a lot of ways.
One of the ways is the sheer length of time the project has been active. I do not know of any other open-source project with a continuous revision history back to 1985! The size of the contributor base is also exceptionally large, though not uniquely so – no fewer than 574 distinct committers. And, while it is not clear how to measure centrality, there is little doubt that Emacs remains one of the hacker community’s flagship projects.
This morning I was doing some minor polishing of the Emacs metadata – fixing up minor crud like encoding errors in committer names – and I made a list of names that didn’t appear to map to an identifiable human being. I found eight, of which two are role-based aliases – one for a dev group account, one for a build engine. That left six unidentified individual contributors (I actually shipped 8 to the emacs-devel list, but two more turned out to be readily identifiable within a few minutes after that).
I’m looking at this list of names, and I thought “Aha! Handle frequency estimation!”
That’s a frequency of just about exactly 1% for IDs that could plausibly be described as concealing handles in commit logs. That’s pretty low, and a robust difference from the cracker underground in which 99% use concealing handles. And it’s especially impressive considering the size and time depth of the sample.
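The figure quoted above follows from the two numbers already given in the post (574 distinct committers, 6 unidentified individual contributors); a trivial check:

```python
# Back-of-the-envelope check of the handle frequency cited above.
committers = 574   # distinct committers in the Emacs history
concealing = 6     # IDs plausibly describable as concealing handles

frequency = concealing / committers
print(f"{frequency:.1%}")  # -> 1.0%
```

That comes out to just over one percent, matching the "just about exactly 1%" figure in the text.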
And at that, this may be an overestimate. As many as three of those IDs look like they might actually be display handles – habitual nicknames that aren’t intended as disguise. That is a relatively common behavior with a very different meaning.