So, here you are in your starship, happily settling into orbit around an Earthlike world you intend to survey for colonization. You start mapping, and are immediately presented with a small but vexing question: which rotational pole should you designate as ‘North’?
There are a surprisingly large number of ways one could answer this question. I shall wander through them in this essay, which is really about the linguistic and emotive significance of compass-direction words as humans use them. Then I shall suggest a pragmatic resolution.
First and most obviously, there’s magnetic north. Our assumption ‘the planet is Earthlike’ entails a nice strong magnetic field to keep local carbon-based lifeforms from getting constantly mutated into B-movie monsters by incoming charged particles. Magnetic north is probably going to be much closer to one pole than the other; we could call that ‘North’.
Then there’s spin-axis north. This is the assignment that makes north relate to the planet’s rotation the same way it does on Earth – that is, it implies the sun setting in the west rather than the east. Not necessarily the same as magnetic north; I don’t know of any reason to think planetary magnetic fields have a preferred relationship to the spin axis.
Next, galactic north. Earth’s orbital plane is inclined about 60° from the rotational plane of the Milky Way, which defines the Galaxy’s spin-axis directions; these have been labeled “Galactic North” and “Galactic South” in accordance with the Earth rotational poles they most closely match. On our new planet we could flip this around and define planetary North so it matches Galactic North.
Finally there’s habitability north. This one is fuzzier. More than three-quarters of Earth’s population lives in places where north is colder and south is warmer. We might want to choose ‘North’ to preserve that relationship, which is embedded pretty deeply in the language and folklore of most of Earth’s cultures. Thus, ‘North’ should be the hemisphere with the most habitable land. (Or, if you’re taking a shorter-term view, the hemisphere in which you drop your first settlement. But let’s ignore that complication for now.)
If all four criteria coincide, happiness. But how likely is that? They’re probably distributed randomly with respect to each other, which means the other three criteria each have an even chance of matching the first – so we’ll get perfect agreement on only about one in every eight exoplanets.
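The odds are easy to estimate by simulation. This sketch assumes only that each of the four criteria picks a pole independently at random, and counts how often all four agree:

```shell
# Monte Carlo check: four pole assignments, each an independent fair
# coin flip; count how often all four agree. Expect about 1/8 = 0.125.
awk 'BEGIN {
  srand(7); n = 200000; hits = 0
  for (i = 0; i < n; i++) {
    ref = int(2 * rand())     # pole chosen by the first criterion
    ok = 1
    for (j = 0; j < 3; j++)   # the other three criteria
      if (int(2 * rand()) != ref) ok = 0
    hits += ok
  }
  printf "all four agree in %.3f of trials\n", hits / n
}'
```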
But not all these criteria are equally important. Magnetic North really only matters to geophysicists and compass-makers. Galactic North is probably interesting only to stargazers.
I think we have a clear winner if spin-axis north coincides with habitability north. This choice will preserve continuity of language pretty well. If they’re opposite, and galactic north coincides with magnetic north, that’s a tiebreaker. If the tiebreakers don’t settle it, I’d go with spin-axis north.
But reasonable people could differ on this. Discuss; maybe we could submit a proposal to the IAU.
My patch reviewing workflow looks like:
- run the test suite and capture the number of JUnit Errors and Fails
- apply the patch and check if things still compile
- run the test suite and capture the number of JUnit Errors and Fails
- compare the number of Errors and Fails before and after
- check if JavaDoc is in order
- check if there is new unit testing where appropriate
- check for new PMD issues
In commands, that workflow looks like:
- mvn clean compile test -Dmaven.test.failure.ignore=true
- cat */*/target/surefire-reports/* | grep "Tests run" | sed -e "s/, Time elapsed.* /\|/" | sort -t'|' -k2 > prepatch.txt
- git am / git cherry-pick
- repeat steps 1 and 2, and save as postpatch.txt
- diff -u prepatch.txt postpatch.txt
- repeat steps 1-5, if needed.
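Wrapped up as a script, the loop looks something like the sketch below. The function names and the file "patch.mbox" are placeholders, not part of any real tool; the report-mangling pipeline is the same one used above.

```shell
# Sketch of the review loop; "patch.mbox" and the function names are
# placeholders.

# Normalize surefire summary lines: strip the timing (so runs compare
# cleanly), leaving "counts|testclass", sorted by test class.
normalize() {
  grep "Tests run" | sed -e "s/, Time elapsed.* /|/" | sort -t'|' -k2
}

# Build, run the tests, and capture the per-class results into $1.
snapshot() {
  mvn clean compile test -Dmaven.test.failure.ignore=true
  cat */*/target/surefire-reports/* | normalize > "$1"
}

# The loop itself:
#   snapshot prepatch.txt
#   git am patch.mbox            # or: git cherry-pick <commit>
#   snapshot postpatch.txt
#   diff -u prepatch.txt postpatch.txt
```

In the diff, new '+' lines with nonzero Failures or Errors point at regressions introduced by the patch.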
diff -u prepatch.txt postpatch.txt
--- prepatch.txt 2014-03-08 11:41:13.520240111 +0100
+++ postpatch.txt 2014-03-08 12:59:21.022609259 +0100
@@ -3,6 +3,14 @@
Tests run: 22, Failures: 1, Errors: 0, Skipped: 0|org.openscience.cdk.atomtype.ReactionStructuresTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0|org.openscience.cdk.CDKTest
Tests run: 10, Failures: 1, Errors: 0, Skipped: 0|org.openscience.cdk.formula.rules.IsotopePatternRuleTest
+Tests run: 15, Failures: 0, Errors: 10, Skipped: 0|org.openscience.cdk.graph.CyclesTest
+Tests run: 14, Failures: 0, Errors: 14, Skipped: 0|org.openscience.cdk.graph.EdgeShortCyclesTest
+Tests run: 12, Failures: 0, Errors: 12, Skipped: 0|org.openscience.cdk.graph.EssentialCyclesTest
+Tests run: 31, Failures: 0, Errors: 18, Skipped: 0|org.openscience.cdk.graph.InitialCyclesTest
+Tests run: 14, Failures: 0, Errors: 12, Skipped: 0|org.openscience.cdk.graph.MinimumCycleBasisTest
+Tests run: 14, Failures: 0, Errors: 12, Skipped: 0|org.openscience.cdk.graph.RelevantCyclesTest
+Tests run: 13, Failures: 0, Errors: 11, Skipped: 0|org.openscience.cdk.graph.TripletShortCyclesTest
+Tests run: 14, Failures: 0, Errors: 14, Skipped: 0|org.openscience.cdk.graph.VertexShortCyclesTest
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0|org.openscience.cdk.io.cml.QSARCMLRoundTripTest
Tests run: 14, Failures: 5, Errors: 0, Skipped: 0|org.openscience.cdk.modeling.builder3d.ForceFieldConfiguratorTest
Tests run: 15, Failures: 1, Errors: 0, Skipped: 0|org.openscience.cdk.qsar.descriptors.atomic.AtomDegreeDescriptorTest
mvn compile -DskipTests
That is the title of a paper attempting to explain (away) the 17-year nothing that happened while CAGW models were predicting warming driven by increasing CO2. CO2 increased. Measured GAT did not.
Here’s the money quote: “The most recent climate model simulations used in the AR5 indicate that the warming stagnation since 1998 is no longer consistent with model projections even at the 2% confidence level.”
That is an establishment climatologist’s cautious scientist-speak for “The IPCC’s anthropogenic-global-warming models are fatally broken. Kaput. Busted.”
I told you so. I told you so. I told you so!
I even predicted it would happen this year, yesterday on my Ask Me Anything on Slashdot. This wasn’t actually brave of me: the Economist noticed that the GAT trend was about to fall to worse than 5% fit to the IPCC models six months ago.
Here is my next prediction – and remember, I have been consistently right about these. The next phase of the comedy will feature increasingly frantic attempts to bolt epicycles onto the models. These epicycles will have names like “ENSO”, “standing wave” and “Atlantic Oscillation”.
All these attempts will fail, both predictively and retrodictively. It’s junk science all the way down.
This plugin porting is being done in a sprinter-plugins repository. With the plugins in this new repository, the Frameworks 5 dependencies have been dropped from the sprinter repository which now only requires QtCore, QtGui, QtNetwork and QtDeclarative. Note the lack of QtWidgets! There is a TODO file in sprinter-plugins that has a nice list of plugins left to do, and we are working on them one at a time in branches which get merged down into master once the plugin is fully ported and functional.
Stability and API tweaks
Plugin metadata
One of the issues I ran into was using Qt's new JSON-based plugin metadata system and the KService framework's idea of how this same data is represented. Up until now, KDE applications have used .desktop files to register plugins, but the JSON metadata which is baked right into the plugin's library file (*.so) itself is more convenient and robust (as there is no .desktop file to lose). Sebastian Kügler had written a small tool to translate plugin .desktop files into .json files which provided the necessary starting point. It had some (known) limitations, however; in particular, it did not handle translations. It also carries some heavy dependencies from Frameworks 5 that shouldn't really be necessary. So I spent some time looking at what could be possible and came up with a small tool that generates json like this:
"David Edmundson ",
"Emmanuel Pescosta "
.. lots of translations ..
"Comment": "Open devices and folder bookmarks",
.. lots of translations ..
Note that applications can have their own blocks (in this case, "Sprinter") as it is entirely free-form JSON. The top-level PluginInfo block is what will end up getting standardized for KDE style plugins. This has allowed Sprinter to move information about plugins into the JSON, which means they can be filtered based on things like "what kind of results does this plugin provide" without actually loading the plugin.
We will be migrating from .desktop files to these json files across KDE's codebase as applications move to Frameworks 5, but this will not happen immediately as there are quite a few steps to get through first. I need to generalize the tool I wrote for Sprinter, adjust KPluginInfo internals a bit, massage the desktoptojson cmake macro and work with the translation team coordinator(s) to work the .json files into the workflow automation that currently is used with .desktop files. The JSON tool will be usable both as a one-time migration helper, turning .desktop files into .json files so that developers don't have to do this by hand, and as an incremental translation updater so that new translations get merged in.
Performance wise, parsing these big blocks of JSON is nice and fast, with my laptop handling well over 1000 plugins per second. Reading the string data from random plugin files on disk can be quite slow, however, particularly with cold caches and rotating disks. To resolve that issue, David Faure will be working on a tool that we need anyway for application .desktop files that generates an incremental cache at installation time. This will be replacing kbuildsycoca and is being coordinated with other projects via freedesktop.org, even. One cache to rule them all.
Next Milestone: v0.2
In film making, it is better to show than to tell the audience what is going on. Great films have actors communicating emotion in their body language, facial expressions and how they deliver their lines. Less great films have scripts in which characters say things like "Wow, Joe, you sure are angry right now!"
There's an echo of this principle when promoting products, though I'll focus here on the context of Free software: it is most effective to demonstrate how the product you are highlighting is great. Show the benefit, rather than explain the mechanism. But above all else: avoid the temptation to spend time describing why the alternatives are not great at those same things.
Most people are smart enough to draw comparisons on their own without them being explicitly written out for them. Most people will read a positive article and come to conclusions like, "Oh hey, that software is great at .. hm.. what I'm using isn't great at that.. interesting.. maybe I should check this out".
Humans are actually pretty good at reaching those kinds of interpretations on their own. Just as we can read emotion into non-verbal communication, even when it's completely make-believe in a movie, most people are good at comparing what we have with what we are observing. Some people struggle with such things, but the majority do not.
Best of all, almost nobody is daft enough to complain that you're being positive about things. (Assuming you don't misrepresent the product, of course .. but that's a different issue.) However, when one spends time talking about the negatives of other products when the intent is to communicate positive things about the topic product, the audience tends to pay more attention to the negative parts. They are likely to become distracted from the intended topic, and the result is often even more distracting (and draining) rebuttals.
By way of example: in an article that I read today about a Free software product (in theory, at least), I counted 6 paragraphs about the topic product ... out of 25. The other 19 paragraphs were about other software products. Let that sink in for a moment: 19 to 6. 76%. Three quarters.
In those 19 paragraphs, the author mentioned 5 different competing products. Few, if any, of their audience use all 5 of those products, so the article was off-topic for most of the audience at least part of the time. The topic product was not highlighted; it drowned in a sea of 6 products. 5 to 1. 83%. Five sixths.
Instead of spreading the awesomeness of the title product, the author ended up writing about other products in a negative light. In doing so, they felt the need to offer caveats and observations about how people react to negative comparison pieces (this was obviously not their first stroll down this path), which only further distracted from the intended topic.
If you want to avoid "a flood of [negative] email/comments", as the article's author lamented, here are three simple suggestions for both promotional and critique pieces:
- Unless the article is an objective comparison piece, such as reporting the results of a rigorous benchmark, focus on a limited number of products. One, in fact, is a great number.
- Demonstrate the attributes of the product, rather than claim or explain
- Let the audience create the comparisons in their own mind
The responses to my previous post, on the myth of the fall, brought out a lot of half-forgotten lore about pre-open-source cultures of software sharing.
Some of these remain historically interesting, but hackers talking about them display the same tendency to back-project present-day conditions I was talking about in that post. As an example, one of my regular commenters inferred (correctly, I think) the existence of a software-sharing community around ESPOL on the B5000 in the mid-1960s, but then described it as “proto-open-source”.
I think that’s an easy but very misleading description to land on. In the rest of this post I will explain why, and propose terminology that I think makes a more useful set of distinctions. This isn’t just a historical inquiry, but relevant to some large issues of the present and future.
For those of you who came in late, the B5000 was an early-to-mid-1960s Burroughs mainframe that had a radically unusual trait for the period; its OS was written not in assembler but in a high-level language, a dialect of ALGOL called ESPOL that was extended so it could peek and poke the machine hardware.
B5000 sites could share source-code patches for their operating system, the MCP or Master Control Program (yes, Tron fans, it was really called that!) that were written in a high-level language and thus relatively easy to modify. To the best of my knowledge, this is the only time such a thing was done pre-Unix.
But. Like the communities around SHARE (IBM mainframe users) and DECUS (DEC minicomputers) in the 1960s and 1970s, whatever community existed around ESPOL was radically limited by its utter dependence on the permissions and APIs that a single vendor was willing to provide. The ESPOL compiler was not retargetable. Whatever community developed around it could neither develop any autonomy nor survive the death of its hardware platform; the contributors had no place to retreat to in the event of predictable single-point failures.
I’ll call this sort of community “sharecroppers”. That term is a reference to SHARE, the oldest such user group. It also roughly expresses the relationship between these user groups and contributors, on the one hand, and the vendor on the other. The implied power relationship was pretty totally asymmetrical.
Contrast this with early Unix development. The key difference is that Unix-hosted code could survive the death of not just original hardware platforms but entire product lines and vendors, and contributors could develop a portable skillset and toolkits. The enabling technology – retargetable C compilers – made them not sharecroppers but nomads, able to evade vendor control by leaving for platforms that were less locked down and taking their tools with them.
I understand that it’s sentimentally appealing to retrospectively sweep all the early sharecropper communities into “open source”. But I think it’s a mistake, because it blurs the importance of retargetability, the ability to resist or evade vendor lock-in, and portable tools that you can take away with you.
Without those things you cannot have anything like the individual mental habits or collective scale of contributions that I think is required before saying “an open-source culture” is really meaningful.
This is not just a dusty historical point. We need to remember it in a world where mobile-device vendors (yes, I’m looking at you, Apple!) would love nothing more than to lock us into walled gardens of elaborate proprietary APIs, tools, and languages.
Yes, you may be able to share source code with others in environments like that, but you can’t move what you build to anywhere else. Without that ability to exit, developers and users have only an illusion of control; all power naturally flows to the vendor.
No open-source culture can flourish or even survive under those conditions. Keeping that in mind is the best reason to be careful about our terminology.
I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.
In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think by failing to remember that past we are risking that we will misunderstand the present and mispredict the future, so I’m going to do what I can to set the record straight.
Some blurriness about how things were back then is understandable; it can sometimes take a bit of effort even for those of us who were there in elder days to remember what it was like before PCs, before the Internet, before pixel-addressable color displays, before ubiquitous version-control systems. And there were so few of us back then – when I first found the Jargon File around 1978 you could fit every hacker in the U.S. in a medium-sized auditorium, and if you were willing to pack the aisles probably every hacker in the world.
A larger and subtler change, the one easiest to forget, is how dependent we were on proprietary technology and closed-source software in those days. Today’s hacker culture is very strongly identified with open-source development by both insiders and outsiders (and, of course, I bear some of the responsibility for that). But it wasn’t always like that. Before the rise of Linux and the *BSD systems around 1990 we were tied to a lot of software we usually didn’t have the source code for.
Part of the reason many of us tend to forget this is mythmaking by the Free Software Foundation. They would have it that there was a lost Eden of free software sharing that was crushed by commercialization in the late 1970s and early 1980s. This narrative projects Richard Stallman’s history at the MIT AI Lab on the rest of the world. But, almost everywhere else, it wasn’t like that either.
One of the few other places it was almost like that was early Unix development from 1976-1984. They really did have something recognizably like today’s open-source culture, though much smaller in scale and with communications links that were very slow and expensive by today’s standards. I was there during the end of that beginning, the last few years before AT&T’s failed attempt to lock down and commercialize Unix in 1984.
But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!
You couldn’t do what you can do today, which is write a program in C or Perl or Ruby or Python with the confident expectation that it will run on multiple architectures. My second full-time job writing code, in 1980, was representative for the time: writing communications software on a TRS-80 in Z-80 assembler. Assembler, people! We wrote a lot of it. Until the early 1980s, programming in high-level languages was the exception rather than the rule. In general, you couldn’t port that stuff!
Not only was portability across architectures a near-impossible dream, you often couldn’t port between instances of the same machine without serious effort. Especially on larger machines, code tended to be intertwined with details of individual site configuration to an extent that would shock people today (IBM JCL was notoriously the worst offender, but by no means the only one).
In that kind of environment, arguing about whether code should be redistributable in general was next to pointless, because unless the new machine was specifically designed to be binary-compatible with the old, ports amounted to being re-implementations anyway.
This is why the earliest social experiments in what we would now call “open source” – at SHARE and DECUS – were restricted to individual vendors’ product lines and (often) to individual machine types. And it’s why the cancellation of the PDP-10 follow-on in 1983 was such a disaster for the MIT AI Lab and SAIL and other early hacker groups. There they were, stuck, having folded huge amounts of time and genius into a huge pile of PDP-10 assembler code and no real possibility that it would ever be useful again. And this was normal.
The Unix guys showed us the way out, by (a) inventing the first non-assembler language really suitable for systems programming, and (b) proving it by writing an operating system in it. But they did something even more fundamental — they created the modern idea of software systems that are cleanly layered and built from replaceable parts, and of re-targetable development tools.
Tellingly, Richard Stallman had to co-opt Unix technology in order to realize his vision for the Free Software Foundation. The MIT AI Lab itself never found its way to that new world. There’s a reason the Emacs text editor is the only software artifact of that culture that survives to us, and it had to be rewritten from the ground up on the way. (Correction: A symbolic-math package called MACSYMA also survives, though in relative obscurity.)
Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.
But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.
We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
I long believed that I'd destroyed all copies of this, but a friend had it in hard copy and scanned it in. I think 13 years is long enough. Think of this as the unpublished conclusion from my Ph.D thesis.
Note, Paul Mockapetris wrote none of this, I had his name on it as part of the joke. Paul M convinced me not to submit this as an April 1 RFC that year, in fact.
For the curious, it took me about half an hour, and the result was ~80 lines bigger, mostly due to the new browsing and paging support.
The planets align this week and Luminosity is on for Tonight! This week's topics will include:
- systemd, what is it good for: in a few Luminosity episodes I've talked about the tension between standardization and diversity. The question of what init systems on Linux will look like has recently taken a turn towards standardization, and the venerable sysv init is about to walk into the sunset. In its stead is systemd .. but what does that *mean* exactly? I'll attempt to cut through the noise in this segment as we look at systemd and the things it can do, which is surprisingly a lot.
- GCompris: Software aimed at young people, be it educational or entertaining (or a mix), has been a focus for many Free software participants for a long while. One entrant from this category is GCompris, which is a suite of activities for children aged 2-10. We'll talk about why this audience is important for Free software and look at GCompris' recent change in development roadmap towards mobile devices.
- Q&A: As always, questions from the audience will be taken and answered during and after the show.
You can join live on Friday the 28th on G+ or Youtube at 18:00 UTC, with live chat on irc.freenode.net in #luminosity, or catch the show later on my Youtube channel. See you then!
This is great, but it creates some additional complexities for those of us writing libraries to be used by such applications. Since I've run into problems in a couple of places now in Frameworks code, I thought I'd share ...
One of the main sources of problems is classes which create private data members (which is nearly all of them) that require an event loop for signals and slots, networking, D-Bus, etc. If these objects have a QObject parent and that parent is moved to another thread with QObject::moveToThread, then all stays happy and fine: the private object is moved to the new thread automatically and continues event processing there.
However, if that object does not have a parent then this does not happen. Should the object the application instantiated be moved to another thread, the private object gets left behind in the original thread. This becomes a problem if that original thread is then stopped. Or, worse, doesn't have an event loop to begin with.
(In Sprinter, some objects are created in QRunnables which are run in a QThreadPool without an event loop; these objects are then moved to a thread with an event loop .. you can see the problem.)
Even worse are singletons in libraries. These don't have a parent that they can follow between threads, and they often exist for the lifespan of the library's usage. This means that if a class from the library which uses such an internal singleton is instantiated in thread A, and thread A is stopped once the work is complete, then the next time a class using that singleton is instantiated, the singleton will be stuck in a thread that has no event loop.
This is usually entirely transparent to the application developer using the library, so the application writer is unlikely to be aware of these objects floating around which makes it look like things just magically break. If these private objects were public, then at least the application developer could work around these limitations in the context of the threading model they are implementing.
If you are writing a Qt library with private objects that require event loops and which do not have QObject parents, there are a few things that you can do:
- Warn application developers in the apidox; this doesn't fix anything directly, but at least the application developer can inform themselves and either not use the library with threads or work around it (e.g. by instantiating affected classes in the main app thread only)
- Ensure these private objects are in a thread with an event loop; that might mean instantiating them in the application thread or, in the case of computationally intensive objects, giving them their very own thread to wallow in.
- Do nothing and enjoy watching developers using your library in threads stare at their screen wondering why the object they just new'd is apparently alive and kicking, but not doing what it should.
Luminosity of Free Software episode 17
The Dark Enlightenment is, as I have previously noted, a large and messy phenomenon. It appears to me in part to be a granfalloon invented by Nick Land and certain others to make their own piece of it (the neoreactionaries) look larger and more influential than it actually is. The most detailed critiques of the DE so far (notably Scott Alexander’s Reactionary Philosophy in an Enormous, Planet-Sized Nutshell and Anti-Reactionary FAQ) nod in the direction of other cliques on the map I reproduced but focus pretty strongly on the neoreactionaries.
Nevertheless, after we peel away clear outliers like the Techno-Commercial Futurists and the Christian Traditionalists, there remains a “core” Dark Enlightenment which shares a discernibly common set of complaints and concerns. In this post I’m going to enumerate these rather than dive deep into any of them. Development of and commentary on individual premises will be deferred to later blog posts.
(I will note the possibility that I may in summarizing the DE premises be inadvertently doing what Scott Alexander marvelously labels “steelmanning” – that is, reverse-strawmanning by representing them as more logical and coherent than they actually are. Readers should be cautious and check primary sources if in doubt.)
Complaint the first: We are all being lied to – massively, constantly, systematically – by an establishment that many DE writers call “the Cathedral”. Its power is maintained by inculcation in the masses of what a Marxist (but nobody in the DE, ever, except ironically) would call “false consciousness”. The Cathedral’s lies go far deeper than what most people think of as normal tactical political falsehoods or even conspiracy theories, down to the level of some of the core premises of post-Enlightenment civilization and widely cherished beliefs about the sustainability of racial equality, sexual equality, and democracy.
An interesting feature of the DE is how remarkably little conspiracy theorizing there is in it. Instead, DE thinkers tend to describe the Cathedral as what I have elsewhere called a “prospiracy”. The Cathedral is bound together not by a hierarchy of internal control and explicit membership; rather, it runs on a shared set of ideological premises not all of which are held or even completely understood by the people who act as part of it.
To a first approximation, the ideology of the Cathedral can be described as “leftist” (many DE writers use the term “Progressive”, not meaning it as a compliment). However, the DE analysis of Cathedral ideology is actually much more complex and less reductive than these terms might imply (a point on which I expect to expand in later posts).
I will note, by the way, the known backgrounds of several key DE thinkers creates grounds to suspect that my own critical use of “Cathedral” in connection with software engineering had some influence on the DE terminology. I do not particularly claim this as an accomplishment, but there it is.
Complaint the second: “All men are created equal” is a pernicious lie. Human beings are created unequal, both as individuals and as breeding populations. Innate individual and group differences matter a lot. Denying this is one of the Cathedral’s largest and most damaging lies. The bad policies that proceed from it are corrosive of civilization and the cause of vast and needless misery.
Another way the DE puts this complaint is that nobody on the conventional political spectrum takes Darwinism seriously enough. Left-liberals self-identify as the friends of evolution out of a desire to be “on the side of science”, but if they really understood the implications of evolutionary biology and psychology they would be more horrified by them than Christian fundamentalists are.
The emphasis on this complaint is probably the single feature which most distinguishes the DE from other kinds of conservatism and anti-left-wing reaction. I’ll be writing about it at more length because I think it is the most interesting and challenging part of the DE critique.
While I don’t intend to do that here and now, I cannot exit this summary without acknowledging that many people will read this complaint as a brief for racism. In fact the DE itself contains two relatively distinguishable cliques that have processed this complaint in different ways: the Ethno-Nationalists and the Human Bio-Diversity people – in DE jargon, eth-nats and HBD for short.
If you come to the DE looking for straight-up old-fashioned racism, the Ethno-Nationalists will supply your requirement as hot and hateful as you like. The HBD people, on the other hand, are interested in value-neutral Damned Facts. They trade not in invective but in the nuts and bolts of psychometry and behavioral genetics. A signature consequence of the difference is that European-descended white people don’t necessarily come off “best” in the comparisons they make.
Complaint the Third: Democracy is a failure. It has produced a race to the bottom in which politicians grow ever more venal, narrow interest groups ever more grasping, the function of government increasingly degenerates into subsidizing parasites at the expense of producers, and in general politics exhibits all the symptoms of what I have elsewhere called an accelerating Olsonian collapse (after Mancur Olson’s analysis in The Logic Of Collective Action).
If this sounds like a libertarian critique, it in many ways is. One of my commenters noted, astutely, that the DE bears the imprint of Hans-Hermann Hoppe’s libertarian polemic Democracy: The God That Failed. Some of the leading DE thinkers describe themselves as ex-libertarians, but their thinking has often taken some very dark and strange anti-libertarian turns since. (I’ll have more to say about this in discussing Mencius Moldbug, who is worth a post all to himself).
Note to commenters: Please do not dive into attacking or defending these premises; that will be appropriate when I discuss them individually. Appropriate discussion for this post is whether I have missed major premises or gotten these wrong in any significant way.
I expect future posts in this series to include both a closer focus on individual premises and a closer look at individual cliques within the Dark Enlightenment.
This is a shout-out to all martial artists and would-be martial artists in the western Philadelphia exurbs, especially: Phoenixville, Spring City, Collegeville, Mont Clare, Upper Providence, Lower Schuylkill, Valley Forge, Charlestown/Malvern, Kimberton, Audubon, and Lower Perkiomen.
I train under Sifu Dale Yeager at the Kuntao Martial Arts Club in Phoenixville, and my school has a weird problem. It’s having trouble keeping students, and as near as I can figure the trouble is that the school is too good!
Seriously. We have lots of people wander in, expecting the kind of near-useless pablum that’s peddled at endless numbers of interchangeable strip-mall karate emporia. Too many spend a couple weeks finding out how rigorously we train and bail. It’s not even that our style is physically that difficult; it’s way less strenuous than, say, kickboxing or hard-style karate. But it does demand concentration, mental flexibility, a willingness to learn challenging movement sequences, and the intelligence to integrate individual moves into an entire tactical system.
We teach a blend of traditional wing chun kung fu and Philippine weapons arts, with early emphasis on short blades (higher levels go to swords). It’s a close infighting style, and Sifu thinks a major reason we don’t pull in more newbies is that we look as scary as hell when we do it. I can’t disagree. There’s a quiet, ferocious intensity to the training that drew me in immediately but might turn off anybody who was just looking for exercise.
I’m posting because I’m worried about the school. We only have about twelve to fifteen people showing up regularly; Sifu just told his instructors we need to get up to around thirty, because the expenses for rent and equipment aren’t going anywhere but up. He doesn’t want to jack up fees because he doesn’t really run the school for money – he’s got a pretty well-paying day job; he’s interested in passing on what he knows to the best students he can find.
And we are good students. In more than twenty years of martial arts training at more than half a dozen schools I’ve found a style this interesting and a student group this impressive maybe twice. The level of commitment and mutual help is high. (It’s a mainly mixed adult group with a wide age range and one or two older children.)
If this sounds attractive to you, come train with me. You don’t have to be a twenty-year student like my wife and me, nor a natural athlete, but you do have to be ready to train with intensity and focus. Your mind will be exercised harder than your body.
I can especially recommend the training to the sorts of people most likely to be reading this: programmers, engineers, techies, and geeks of all descriptions who already get it about mental discipline and flow states. Kuntao will engage you better than the strip-mall crap ever could.
Chase the link above or call 610-237-3902 ext 803
For at least fifteen years my name and its tri-letterization have been something with which you could conjure up a lot of attention among hackers and other sorts of geek. This fact presented the more clueful of my personal friends with a delicate problem: under what circumstances would it be proper for them to invoke this instrument?
I have actually been asked for guidance about this more than once. I developed some guidelines more than a decade ago, and to the best of my knowledge my friends have been pretty good about applying them. I present them here for your amusement.
1. Please do not drop my name to score cheap social-status points. That’s crass and I don’t like it.
2. Do drop my name if by doing so you can achieve some mission objective of which I would approve. Examples that have come up: encouraging people to design in accordance with the Unix philosophy, or settling a dispute about hacker slang, or explaining why it’s important for everyone’s freedom for the hacker community to hang together and not get bogged down in internal doctrinal disputes.
3. Do drop my name if by doing it you can rock someone’s world in a positive way. A case of this that comes up fairly often is encouraging a young proto-hacker.
4. Do drop my name if doing so would be funny. Funny is even an acceptable excuse for scoring social-status points with it – if you think I’ll laugh when I hear the story, go right ahead.
And yes, I apply these rules (or obvious analogs thereof) to myself. I think it’s vulgar to wave my fame around in contexts where it’s irrelevant. It can be very amusing, if you’re clued in, to watch what happens when somebody who hasn’t met me before – in a group of programmers (or gamers or SF fans or any other population that oversamples programmers) – twigs to The Presence.
If this attitude seems odd to you, understand that fame is exhausting and psychologically dangerous (I have a lot more sympathy for rock stars who fuck themselves up with drugs than I did before I felt the pressure myself). Ironic detachment from one’s own celebrity is, I have found, an effective coping strategy.
My distant friend Kent Lundgren, one of the most capable and thoughtful firearms instructors out there, has written a blog post addressing the tricky question of how we might filter potential carriers of concealed weapons for competence without involving the government.
I’ve struggled with this one myself. Kent is right on: we absolutely do not want the government to have an easy pretext to forbid people from bearing arms; that is too dangerous a power to let government have. Any legal bar should have preconditions at least as difficult for the state to meet as a finding of clinical insanity.
Yes, private-sector competency tests might be a good thing. I’m all in favor of voluntary certification. It’s the produce-on-demand part Kent suggests that’s a little worrying. We’ve got more than enough of “Your papers, please” in America already – it’s not a demand that is compatible with a free society in the long term.
Thinking about it now, though, I’m not sure how much good a firearms competency certification would actually do for basic safety. Such proposals would have the same adverse-selection problem that “gun control” laws do; the people you don’t want armed are exactly the people most likely to flout them. The effect of all such filters is perverse, to disarm only the conscientious and law-abiding.
The most important thing to remember when thinking about this sort of policy issue is a criminological fact I learned from Don B. Kates: that gun crimes and accidents are highly concentrated in an approximately 3% cohort of the population that is also strongly deviant by other measures, including: rates of domestic violence, drug and alcohol addiction, auto accidents, rates of criminal conviction, and accident proneness. Low intelligence and low impulse control are nearly defining traits of this group. Elsewhere I have borrowed some cop slang and called these people “mooks”.
Your chances of being shot deliberately or accidentally by a non-mook are on a par with your chances of being struck by lightning – such instances are so rare that each one gets individual newspaper coverage (incidentally misleading us to way overestimate the frequency).
The trouble with an (essentially) voluntary certification requirement is that non-mooks don’t need it and mooks won’t bother with it. The criminal mooks would laugh at the requirement the same way they laugh at “gun control” laws, and the mere losers generally wouldn’t have their act together enough to go through the procedural hoops. They’d carry anyway, though, because they’re stupid and thus exceptionally prone to the Dunning-Kruger effect, overestimating their own competence.
Where does this leave private-sector certification proposals? Basically, in the same bad place as “gun control” laws, without the go-directly-to-jail threat. The training requirement might do some good at slightly increasing competence levels among non-mooks, but non-mooks are already so unlikely to shoot each other that I’m doubtful any improvement in safety would rise above the statistical noise.
Storm Pax hit my area today as we were just recovering, still a bit dazed and reeling, from Storm Nika. This brought me 14 inches of snow, and it brings you a tale of progress in small things and how odd the brain’s information-retrieval process can be.
My wife, perversely, actually likes shoveling snow. Which is a good thing because there was a shit-ton of the stuff on our driveway this morning. She carved a channel from the car to the sidewalk, which had been cleared all along our street by some helpful soul with a snowblower well before we ventured outside. But that left a ridge of snow, easily 4.5′ high and 6′ thick, between the driveway/sidewalk and the middle of the street. It was heavy, half-compacted spoil thrown by a plow truck; that happens a lot here after winter storms.
Contemplating that mini-mountain, I nearly despaired of getting our car out until the spring thaw. I knew that as bad as it looked, it was going to get worse – three inches or more of snow are due tonight.
Then I noticed that the new neighbor had carved his way through that ridge with what, by the way the snow was packed vertically around the cut, had to be a snowblower. Went over, knocked, introduced myself, and asked for the loan of the thing. New neighbor turned out to be an affable sort, a gray-haired blue-collar regular joe who introduced himself as “Gordo” and was quite cheerfully willing to let me use it.
That’s how I found myself pushing your typical American gas-powered snowblower out to the sidewalk. Two-stroke gasoline engine with a rope start: yup, seen those before, and I was mildly dreading getting it to fire up. I’d never worked a snowblower before, but I have childhood memories of my dad fighting for fifteen minutes at a stretch to get similar beasts started on push lawnmowers. Never had to do that myself; my generation got pretty spoiled by electric-starting riding mowers.
Hmm. Directions: Turn key to “Run” position, choke lever to full, press priming button three times, pull rope slowly until there’s resistance then quickly. My eyebrows rose. You mean they’re actually telling me I don’t have to yank as hard as I can as fast as I can? This is not yer father’s lawnmower. Progress in small things…
Damn me if it didn’t start on the second pull (the first time I hadn’t got the hang of where the resistance kicked in). This is the first burden of my tale: progress in small things matters. When was the first year that some engineer figured out how to make a two-stroke engine you don’t have to swear at and futz with endlessly to get it to fire up? When was the first year they put actually helpful instructions in large print, located near the controls?
OK, so I went after the ridge with a roaring snowblower. Found out it was work; sucker carves and throws snow nicely but doesn’t push itself. Then I found out that this snowblower ain’t so happy taking on a ridge that overtops the blade aperture by a couple of feet. It’s a light-duty machine really meant for snow accumulations of less than a foot or so, not one of the monster-mawed things ski resorts use.
Time to invoke my wife and the shovel. If she knocks down the higher parts of the ridge the blower will be able to chew up and throw the results. Thinking to be economical with my neighbor’s gasoline, I shut the machine off and went inside to explain the situation.
A few minutes of shovel teamwork later Cathy and I had the ridge lowered and broken up enough for the snowblower to cope. Then…I found I couldn’t get it started again. Let the swearing begin…
Now comes the second burden of my tale, which is how odd memory retrieval can be sometimes. I’m racking my brain trying to figure out what’s different this time and how to get the engine restarted. And, all unbidden, an audio track starts playing in my head. It’s Pink Floyd’s Learning to Fly, from the 1987 A Momentary Lapse of Reason album.
I have very, very good auditory memory. It includes details like pick-scrape noise in guitar solos that a lot of people don’t even seem to actually hear. For this track, it includes stretches of near-unintelligible radio chatter between pilots and ATCs that are used as a sound-wash background for instrumental parts of the arrangement. This is running in my head, and out jump two words: “mixture’s rich”.
Aha! I go over to the snowblower and back the choke off about 15% from the high setting, pull the start cord, and it fires up instantly.
Did you get that? My unconscious mind found a way to tell me what my conscious mind hadn’t figured out. The fuel-air mixture in the snowblower was too rich; I needed to back it off and let the spark have more oxygen.
Now we can get to the street and I have acquired a minor life skill; next time I have to baby a two-stroke engine I’ll know exactly what to do. Thank you, clever unconscious mind!
Does this happen to other people?
Library prepped for public use
This is rather more flexible and less purpose-specific than the single-runner mode feature in RunnerManager, while having a perfectly symmetric API and none of the special code paths that made RunnerManager rather more complex than necessary.
Runner plugins can now easily tell if the default result set is being requested with `bool QueryContext::isDefaultMatchesRequest()`, and can advertise that they generate such a set of matches by calling `setGeneratesDefaultMatches(true)`.
Documentation

Documentation is also finally coming along now that the API is stabilizing. A couple of public classes still need API documentation, but that will get filled in before a first release. There is high-level documentation in the docs/ folder, however, which attempts to cover usage of Sprinter from both the application and plugin perspectives. It also contains the TODO file, which is being rigorously kept up to date.
Everybody knows, or should know, the basic rules of firearms safety. (a) Always treat the weapon as if loaded; (b) never point a firearm at anything you are not willing to destroy; (c) keep your finger off the trigger until you are ready to shoot; (d) be sure of your target and what is beyond it. (These are sometimes called “Cooper’s Rules” after legendary instructor Col. Jeff Cooper. There are several minor variants of the wording.)
If you follow these rules, you will never unintentionally injure anyone with a firearm. They are easy to learn and, followed consistently, very safe. They are appropriate for civilians.
Some elite military units have different rules, with a different tradeoff between safety and combat effectiveness. I learned them from an instructor who was ex-SOCOM. The way I learned them is sufficiently amusing that the story deserves retelling.
The instruction began in the following way. Imagine several students sitting in a circle in camp chairs, the instructor almost directly across from me. Note that this was after we had learned and practiced the basic Cooper rules I described above.
The instructor began by clearing a pistol (opening the chamber port so we could see there was no bullet there or ready in the magazine) and letting the slide drop until the port was closed.
He handed me the pistol, looked at me with a slight smile, and said “Eric. Please shoot yourself through the head.”
I thought for a second, grinned, pointed the pistol at my temple, and pulled the trigger. There was a click and shocked gasps from some other students. (The gasps meant they had learned civilian rules correctly. I believe testing this was part of the instructor’s intention.)
The instructor then asked for the pistol back. I handed it to him. He fiddled with it for a moment, passed it behind his back, brought it into view, offered it to me with the chamber port closed, and said again “Eric. Please shoot yourself through the head.”
I said “No, sir, I will not.”
His smile got a little wider. “Oh? And why not?”
I said “Because the weapon was out of my sight for a moment and I do not know that it is not ready to fire.” (My exact words may have been slightly different. That was the sense.)
“That was the correct answer,” he said, and proceeded to explain to all of us that elite military units must frequently carry weapons in a combat-ready state, and therefore train safety under different rules that require fighters to reason about when a firearm is in a dangerous condition.
In that exchange I violated Cooper’s Rules (a) and (b). I was thinking like a warrior who must frequently carry weapons in a ready-to-fire condition (because he can’t count on having the time to ready the weapon in a clutch situation) and knows that the warriors around him are trained to do likewise.
I’ll never forget those few minutes, because they taught all of us a valuable lesson. Also because we did not prearrange this! The instructor paid me a notable compliment by assuming that I would respond correctly both in obeying his first order and disobeying his second – and, if you think about it, there was a normative lesson there about intelligent initiative, cooperation and responsibility that goes far beyond the specific context of firearms safety.
UPDATE: Post title changed from “Military rules” because this is a story about how special-ops fighters (“operators” in military jargon) think and react.
When the X server crashes it prints a backtrace to the log file; one can be forced from the current F20 X server package with killall -11 Xorg. There is not a lot of human-readable information in it, but you can see the call stack and even recognise some internal functions. Now, in Fedora we compile with libunwind, which gives us relatively good backtraces. Without libunwind, your backtrace may look like this:
(EE) 0: /usr/bin/Xorg (OsLookupColor+0x129) [0x473759]
(EE) 1: /lib64/libpthread.so.0 (__restore_rt+0x0) [0x3cd140f74f]
(EE) 2: /lib64/libc.so.6 (__select_nocancel+0xa) [0x3cd08ec78a]
(EE) 3: /usr/bin/Xorg (WaitForSomething+0x1ac) [0x46a8fc]
(EE) 4: /usr/bin/Xorg (SendErrorToClient+0x111) [0x43a091]
(EE) 5: /usr/bin/Xorg (_init+0x3b0a) [0x42c00a]
(EE) 6: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 7: /usr/bin/Xorg (_start+0x29) [0x428c35]
(EE) 8: ? (?+0x29) [0x29]
So, even less information and it certainly makes it hard to figure out where to even get started. Luckily there is a tool to get some useful info out of that: eu-addr2line. All you need is to install the debuginfo package for the crashing program. Then it's just a matter of copying addresses.
(EE) 0: /opt/xorg/bin/Xorg (xorg_backtrace+0xb5) [0x484989]
(EE) 1: /opt/xorg/bin/Xorg (0x400000+0x8d1a4) [0x48d1a4]
(EE) 2: /lib64/libpthread.so.0 (0x3cd1400000+0xf750) [0x3cd140f750]
(EE) 3: /lib64/libc.so.6 (__select+0x33) [0x3cd08ec7b3]
(EE) 4: /opt/xorg/bin/Xorg (WaitForSomething+0x3dd) [0x491a45]
(EE) 5: /opt/xorg/bin/Xorg (0x400000+0x3561b) [0x43561b]
(EE) 6: /opt/xorg/bin/Xorg (0x400000+0x43761) [0x443761]
(EE) 7: /opt/xorg/bin/Xorg (0x400000+0x9baa8) [0x49baa8]
(EE) 8: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 9: /opt/xorg/bin/Xorg (0x400000+0x25df9) [0x425df9]
$ eu-addr2line -e /opt/xorg/bin/Xorg 0x48d1a4

Alright, this is useful now: I can download the source package and check where it actually goes wrong. But wait – it gets even better. Let's say you have a driver module in the call stack:

(EE) 0: /opt/xorg/bin/Xorg (xorg_backtrace+0xb5) [0x484989]
(EE) 1: /opt/xorg/bin/Xorg (0x400000+0x8d1a4) [0x48d1a4]
(EE) 2: /lib64/libpthread.so.0 (0x3cd1400000+0xf750) [0x3cd140f750]
(EE) 3: /opt/xorg/lib/libinput.so.0 (libinput_dispatch+0x19) [0x7ffff1e51593]
(EE) 4: /opt/xorg/lib/xorg/modules/input/libinput_drv.so (0x7ffff205b000+0x2a12) [0x7ffff205da12]
(EE) 5: /opt/xorg/bin/Xorg (xf86Wakeup+0x1b1) [0x4af069]
(EE) 6: /opt/xorg/bin/Xorg (WakeupHandler+0x83) [0x444483]
(EE) 7: /opt/xorg/bin/Xorg (WaitForSomething+0x3fe) [0x491a66]
(EE) 8: /opt/xorg/bin/Xorg (0x400000+0x3561b) [0x43561b]
(EE) 9: /opt/xorg/bin/Xorg (0x400000+0x43761) [0x443761]
(EE) 10: /opt/xorg/bin/Xorg (0x400000+0x9baa8) [0x49baa8]
(EE) 11: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 12: /opt/xorg/bin/Xorg (0x400000+0x25df9) [0x425df9]

You can see the xf86libinput driver (libinput_drv.so), which in turn loads libinput.so. You can debug the crash the same way; just change the eu-addr2line arguments:

$ eu-addr2line -e /opt/xorg/lib/libinput.so.0 libinput_dispatch+0x19

Having this information of course doesn't mean you can fix any bug, but when you're reporting one it can be invaluable. If I have access to the same rpms you're running, it is possible to look up the context of the crash in the source. Or, even better, since you already have access to those, you can make debugging a lot easier by attaching the required bits and pieces to the bug report. A report where the reporter has already narrowed down where it crashes is a lot easier to deal with than guessing from hex numbers what went wrong.
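The copying of addresses can be scripted. Below is a hypothetical helper sketch (not from the post) that turns “(EE)” frame lines into ready-to-run eu-addr2line invocations. It only prints the commands so you can check them first; the sample lines are taken from the log above, and for real use you would feed it your Xorg log instead.

```shell
# Hypothetical helper: build eu-addr2line commands from "(EE)" backtrace lines.
# Assumes the frame format shown above: "(EE) N: /path/to/obj (token) [0xADDR]".
# Replace the sample below with your actual log, e.g.: ... < /var/log/Xorg.0.log
log='(EE) 4: /opt/xorg/bin/Xorg (WaitForSomething+0x3dd) [0x491a45]
(EE) 1: /opt/xorg/bin/Xorg (0x400000+0x8d1a4) [0x48d1a4]
(EE) 8: ? (?+0x29) [0x29]'

printf '%s\n' "$log" |
sed -n 's/^(EE) *[0-9]*: *\([^ ]*\) *(\([^)]*\)) *\[\(0x[0-9a-f]*\)\]$/\1 \2 \3/p' |
while read -r obj token addr; do
    case "$token" in
        \?*)     ;;                                      # unresolvable frame: skip
        0x*+0x*) echo "eu-addr2line -e $obj $addr" ;;    # anonymous frame: absolute address
        *)       echo "eu-addr2line -e $obj $token" ;;   # named frame: symbol+offset
    esac
done
```

One caveat: the absolute-address form matches what works for the non-PIE Xorg binary in the example; for a shared object you may need to pass the offset after the "+" instead, since its load address differs from its link address.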