
In Part I, we demonstrated Poon-Dryja channels: a generalized channel structure which uses revocable transactions to ensure that old transactions won’t be reused.

A channel from me<->you would allow me to efficiently send you 1c, but that doesn’t scale, since it takes at least one on-blockchain transaction to set up each channel. The solution is to route funds via intermediaries; in this example we’ll use the fictitious “MtBox”.

If I already have a channel with MtBox’s Payment Node, and so do you, that lets me reliably send 1c to MtBox without (usually) needing the blockchain, and it lets MtBox send you 1c with similar efficiency.

But it doesn’t give me a way to force them to send it to you; I have to trust them.  We can do better.

Bonding Unrelated Transactions using Riddles

For simplicity, let’s ignore channels for the moment.  Here’s the “trust MtBox” solution:

I send you 1c via MtBox; simplest possible version, using two independent transactions. I trust MtBox to generate its transaction after I send it mine.

What if we could bond these transactions together somehow, so that when you spend the output from the MtBox transaction, that automatically allows MtBox to spend the output from my transaction?

Here’s one way. You send me a riddle question to which nobody else knows the answer: eg. “What’s brown and sticky?”.  I then promise MtBox the 1c if they answer that riddle correctly, and tell MtBox that you know.

MtBox doesn’t know the answer, so it turns around and promises to pay you 1c if you answer “What’s brown and sticky?”. When you answer “A stick”, MtBox can pay you 1c knowing that it can collect the 1c off me.

The bitcoin blockchain is really good at riddles; in particular “what value hashes to this one?” is easy to express in the scripting language. So you pick a random secret value R, then hash it to get H, then send me H.  My transaction’s 1c output requires MtBox’s signature, and a value which hashes to H (ie. R).  MtBox adds the same requirement to its transaction output, so if you spend it, it can get its money back from me:

Two Independent Transactions, Connected by A Hash Riddle.
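The riddle setup above can be sketched in shell. This is only an illustration using `sha256sum` on a hex string; the real construct hashes the raw 20-byte value inside a bitcoin script, so treat the exact hashing details as an assumption:

```shell
# You pick a random 20-byte secret R and send me only its hash H.
R=$(head -c 20 /dev/urandom | od -An -tx1 | tr -d ' \n')
H=$(printf '%s' "$R" | sha256sum | cut -d' ' -f1)

# Later, revealing R lets anyone check it against H, which is what the
# transaction output's condition enforces.
if [ "$(printf '%s' "$R" | sha256sum | cut -d' ' -f1)" = "$H" ]; then
    echo "riddle answered: MtBox can now collect from me"
fi
```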

Handling Failure Using Timeouts

This example is too simplistic; when MtBox’s PHP script stops processing transactions, I won’t be able to get my 1c back if I’ve already published my transaction.  So we use a familiar trick from Part I: a timeout transaction which, after (say) 2 days, returns the funds to me.  This output needs both my and MtBox’s signatures, and MtBox supplies me with the refund transaction containing the timeout:

Hash Riddle Transaction, With Timeout

MtBox similarly needs a timeout in case you disappear.  And it needs to make sure it gets the answer to the riddle from you within that 2 days, otherwise I might use my timeout transaction and it can’t get its money back.  To give plenty of margin, it uses a 1 day timeout:

MtBox Needs Your Riddle Answer Before It Can Answer Mine

Chaining Together

It’s fairly easy to see that longer paths are possible, using the same “timelocked” transactions.  The paper uses 1 day per hop, so if you were 5 hops away (say, me <-> MtBox <-> Carol <-> David <-> Evie <-> you) I would use a 5 day timeout to MtBox, MtBox a 4 day timeout to Carol, etc.  A routing protocol is required, but if some route doesn’t work, two nodes can always cancel by mutual agreement (by creating a timeout transaction with no locktime).
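The decrementing timeouts along the 5-hop route above can be sketched with a trivial loop:

```shell
# Each hop's offer expires one day earlier than the offer it received,
# leaving it a one-day margin to collect the riddle answer and pass it on.
t=5
for hop in MtBox Carol David Evie you; do
    echo "offer to $hop: 1c for R, valid for $t day(s)"
    t=$((t - 1))
done
```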

The paper refers to each set of transactions as contracts, with the following terms:

  • If you can produce to MtBox an unknown 20-byte random input data R from a known H, within two days, then MtBox will settle the contract by paying you 1c.
  • If two days have elapsed, then the above clause is null and void and the clearing process is invalidated.
  • Either party may (and should) pay out according to the terms of this contract in any method of the participants choosing and close out this contract early so long as both participants in this contract agree.

The hashing and timelock properties of the transactions are what allow them to be chained across a network, hence the term Hashed Timelock Contracts.

Next: Using Channels With Hashed Timelock Contracts.

The hashed riddle construct is cute, but as detailed above every transaction would need to be published on the blockchain, which makes it pretty pointless.  So the next step is to embed them into a Poon-Dryja channel, so that (in the normal, cooperative case) they don’t need to reach the blockchain at all.

Posted Wed Apr 1 11:46:29 2015 Tags:
Your push may fail due to “non fast-forward”. You start from a history that is identical to that of your upstream, commit your work on top of it, and then by the time you attempt to push it back, the upstream may have advanced because somebody else was also working on his own changes.


For example, between the upstream and your repositories, histories may diverge this way (the asterisk denotes the tip of the branch; the time flows from left to right as usual):


Upstream                                You


---A---B---C*      --- fetch -->        ---A---B---C*


                                                    D*
                                                   /
---A---B---C---E*                       ---A---B---C


            D?                                      D*
           /                                       /
---A---B---C---E?   <-- push ---        ---A---B---C


If the push were to move the branch at the upstream to point at your commit, you would be discarding other people’s work. To avoid that, git push fails with “non fast-forward”.


The standard recommendation when this happens is to “fetch, merge and then push back”. The histories will diverge and then converge like this:


Upstream                                You


                                                    D*
                                                   /
---A---B---C---E*  --- fetch -->        ---A---B---C---E


                                                       1
                                                    D---F*
                                                   /   /2
---A---B---C---E*                       ---A---B---C---E


               1                                       1
            D---F*                                  D---F*
           /   /2                                  /   /2
---A---B---C---E    <-- push ---        ---A---B---C---E


Now, the updated tip of the branch has the previous tip of the upstream (E) as its parent, so the overall history does not lose other people’s work.


The resulting history, however, is not what the majority of the project participants would appreciate. The merge result records D as its first parent (denoted with 1 on the edge to the parent), as if what happened on the upstream (E) were done as a side branch while F was being prepared and pushed back. In reality, E in the illustration may not be a single commit but can be many commits and many merges done by many people, and these many commits may have been observed as the tips of the upstream’s history by many people before F got pushed.

Even though Git treats all parents of a merge equally at the level of the underlying data model, the users have come to expect that the history they see by following the first-parent chain tells the overall picture of the shared project history, while second and later parents of merges represent work done on side branches. From this point of view, “fetch, merge and then push” is not quite the right suggestion for proceeding from a failed push due to "non fast-forward".
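A throwaway-repo sketch (assuming git is installed; the single-letter commit subjects follow the illustrations) makes the first-parent complaint concrete:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
trunk=$(git symbolic-ref --short HEAD)   # initial branch name varies by Git version

git commit -q --allow-empty -m C         # shared history
git branch upstream                      # pretend this is the upstream
git commit -q --allow-empty -m D         # your work on top of C
git checkout -q upstream
git commit -q --allow-empty -m E         # meanwhile the upstream gained E
git checkout -q "$trunk"
git merge -q --no-ff -m F upstream       # "fetch, merge and then push back"

# The first-parent chain reads F, D, C: E looks like a side branch,
# which is exactly the misleading picture being criticized.
git log --first-parent --format=%s
```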


It is tempting to recommend “fetch, merge backwards and then push back” as an alternative, and it almost works for a simple history:


Upstream                                You


                                                    D*
                                                   /
---A---B---C---E*  --- fetch -->        ---A---B---C---E


                                                       2
                                                    D---F*
                                                   /   /1
---A---B---C---E*                       ---A---B---C---E


               2                                       2
            D---F*                                  D---F*
           /   /1                                  /   /1
---A---B---C---E    <-- push ---        ---A---B---C---E


Then, if you follow the first-parent chain of the history, you will see how the tip of the overall project progressed. This is an improvement over “fetch, merge and then push back”, but it has a few problems.


One reason why “merge backwards” is wrong becomes apparent when you consider what should happen when the push fails for the second time after the backward merge is made:


Upstream                                You


                                                    D*
                                                   /
---A---B---C---E*  --- fetch -->        ---A---B---C---E


                                                       2
                                                    D---F*
                                                   /   /1
---A---B---C---E*                       ---A---B---C---E


               2                                       2
            D---F?                                  D---F*
           /   /1                                  /   /1
---A---B---C---E---G    <-- push ---    ---A---B---C---E


               2                                       2   2
            D---F?                                  D---F---H*
           /   /1                                  /   /1  /1
---A---B---C---E---G    --- fetch -->   ---A---B---C---E---G


If the upstream side gained another commit G while F was being prepared, “fetch, merge backwards and then push” will end up creating a history like this, hiding D, the only real change you did in the repository, as the tip of the side branch of a side branch!

It also does not solve the problem if the work you did in D is not a single strand of pearls, but has merges from side branches. If D in the above series of illustrations were a few merges X, Y and Z from side branches of independent topics, the picture on your side, after fetching E from the updated upstream, may look like this:


    y---y---y   .
   /         \   .
  .   x---x   \   \
 .   /     \   \   \
.   /       X---Y---Z*
   /       /
---A---B---C---E


That is, hoping that the other people would stay quiet, you merged three independent topic branches on top of C with merges X, Y and Z, expecting the overall project history to fast-forward to Z. From your perspective, you wanted A-B-C-X-Y-Z to be the main history of the project, while x, y, ... were implementation details of X, Y and Z, hidden behind merges on side branches. And if there were no E, that would indeed have been the overall project history people saw after your push.

Merging backwards and pushing back would, however, make F the tip of the history with E as its first parent, and Z would become a side branch. The fact that X, Y and Z (more precisely, X^2, Y^2 and Z^2) were independent topics is lost by doing so:


    y---y---y   .
   /         \   .
  .   x---x   \   \
 .   /     \   \   \
.   /       X---Y---Z
   /       /         \2
---A---B---C---E-------F*
                     1



So "merge backwards" is not the right solution in general. It is only valid if you are building a topic directly on top of the shared integration branch, which is something you should not be doing in the first place. In the earlier illustration of creating a single D on top of C and pushing it, if there had been no work from other people (i.e. E), the push would have fast-forwarded, making D a normal commit directly on the first-parent chain. If there was work from other people like E, “merge in reverse” would instead have recorded D on a side branch. If D is a topic separate and independent from other work being done in parallel, you would consistently want to see such a change appear as a merge of a side branch.

A better recommendation might be to “fetch, rebuild the first-parent chain, and then push back”. That is, you would rebuild X, Y and Z (i.e. “git log --first-parent C..”) on top of the updated upstream E:


    y---y-------y   .
   /             \   .
  .   x-------x   \   \
 .   /         \   \   \
.   /           X’--Y’--Z’*
   /           /
---A---B---C---E


Note that this will work well naturally even when your first-parent chain has non-merge commits. For example, X and Y in the above illustration may be merges while Z is a regular commit that updates the release notes with descriptions of what was recently merged (i.e. X and Y). Rebuilding such a first-parent chain on top of E will make the resulting history very easy to understand when the reader follows the first-parent chain.

The reason why “rebuild the first-parent chain on the updated upstream” works the best is tautological. People do care about the first-parenthood when viewing the history, and you must have cared about the first-parent chain, too, when building your history leading to Z. That first-parenthood you and others care about is what is being preserved here. By definition, we cannot go wrong ;-)

And of course, this will work against a moving upstream that gained new commits while we were fixing things up on our end, because we won't be piling new merges on top, but will be rebuilding X', Y' and Z' into X'', Y'', and Z'' instead.

To make this work on the pusher’s end, after seeing the initial “non fast-forward” refusal from “git push”, the pusher may need to do something like this:


$ git push ;# fails
$ git fetch
$ git rebase --first-parent @{upstream}


Note that “git rebase --first-parent” does not exist yet; it is one of the topics I would like to see resurrected from old discussions.

But before "rebase --first-parent" materialises, in the scenario illustrated above, the pusher can do these instead of that command:


$ git reset --hard @{upstream}
$ git merge X^2
$ git merge Y^2
$ git merge Z^2


And then, inspect the result thoroughly. As carefully as you checked your work before you attempted your first push that was rejected. After that, hopefully your history will fast-forward the upstream and everybody will be happy.


Posted Mon Mar 30 22:09:00 2015 Tags:

I finally took a second swing at understanding the Lightning Network paper.  The promise of this work is exceptional: instant, reliable transactions across the bitcoin network. The implementation is complex and the draft paper reads like a grab bag of ideas, but it truly rewards close reading!  It doesn’t involve novel crypto, nor fancy bitcoin scripting tricks.

There are several techniques which are used in the paper, so I plan to concentrate on one per post and wrap up at the end.

Revision: Payment Channels

I open a payment channel to you for up to $10

A Payment Channel is a method for sending microtransactions to a single recipient, such as me paying you 1c a minute for internet access.  I create an opening transaction which has a $10 output, which can only be redeemed by a transaction input signed by you and me (or me alone, after a timeout, just in case you vanish).  That opening transaction goes into the blockchain, and we’re sure it’s bedded down.

I pay you 1c in the payment channel. Claim it any time!

Then I send you a signed transaction which spends that opening transaction output, and has two outputs: one for $9.99 to me, and one for 1c to you.  If you want, you could sign that transaction too, and publish it immediately to get your 1c.

Update: now I pay you 2c via the payment channel.

Then a minute later, I send you a signed transaction which spends that same opening transaction output, and has a $9.98 output for me, and a 2c output for you. Each minute, I send you another transaction, increasing the amount you get every time.

This works because:

  1.  Each transaction I send spends the same output; so only one of them can ever be included in the blockchain.
  2. I can’t publish them, since they need your signature and I don’t have it.
  3. At the end, you will presumably publish the last one, which is best for you.  You could publish an earlier one, and cheat yourself of money, but that’s not my problem.
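The stream of updates can be sketched as nothing more than reissued splits of the same $10 output (amounts in cents; no real transactions involved):

```shell
total=1000   # the $10.00 opening output, in cents
for minute in 1 2 3; do
    you=$minute
    me=$((total - you))
    echo "update $minute: spends the opening output -> me: ${me}c, you: ${you}c"
done
# All updates spend the same output, so only one can ever reach the
# blockchain; you would naturally publish the latest (largest) one.
```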

Undoing A Promise: Revoking Transactions?

In the simple channel case above, we don’t have to revoke or cancel old transactions, as the only person who can spend them is the person who would be cheated.  This makes the payment channel one way: if the amount I was paying you ever went down, you could simply broadcast one of the older, more profitable transactions.

So if we wanted to revoke an old transaction, how would we do it?

There’s no native way in bitcoin to have a transaction which expires.  You can have a transaction which is valid after 5 days (using locktime), but you can’t have one which is valid until 5 days has passed.

So the only way to invalidate a transaction is to spend one of its inputs, and get that input-stealing transaction into the blockchain before the transaction you’re trying to invalidate.  That’s no good if we’re trying to update a transaction continuously (a-la payment channels) without most of them reaching the blockchain.

The Transaction Revocation Trick

But there’s a trick, as described in the paper.  We build our transaction as before (I sign, and you hold), which spends our opening transaction output, and has two outputs.  The first is a $9.99 output for me.  The second is a bit weird: it’s 1c, but needs two signatures to spend, mine and a temporary one of yours.  Indeed, I create and sign such a transaction which spends this output, and send it to you, but that transaction has a locktime of 1 day:

The first payment in a lightning-style channel.

Now, if you sign and publish that transaction, I can spend my $9.99 straight away, and you can publish that timelocked transaction tomorrow and get your 1c.

But what if we want to update the transaction?  We create a new transaction, with a $9.98 output to me and a 2c output which needs signatures from both me and another temporary address of yours.  I create and sign a transaction which spends that 2c output, has a locktime of 1 day and an output going to you, and send it to you.

We can revoke the old transaction: you simply give me the temporary private key you used for that transaction.  Weird, I know (and that’s why you had to generate a temporary address for it).  Now, if you were ever to sign and publish that old transaction, I can spend my $9.99 straight away, and create a transaction using your key and my key to spend your 1c.  Your transaction (1a below) which could spend that 1c output is timelocked, so I’ll definitely get my 1c transaction into the blockchain first (and the paper uses a timelock of 40 days, not 1).

Updating the payment in a lightning-style channel: you sent me your private key for sig2, so I could spend both outputs of Transaction 1 if you were to publish it.

So the effect is that the old transaction is revoked: if you were to ever sign and release it, I could steal all the money.  Neat trick, right?

A Minor Variation To Avoid Timeout Fallback

In the original payment channel, the opening transaction had a fallback clause: after some time, it is all spendable by me.  If you stop responding, I have to wait for this to kick in to get my money back.  Instead, the paper uses a pair of these “revocable” transaction structures.  The second is a mirror image of the first, in effect.

A full symmetric, bi-directional payment channel.

So the first output is $9.99, which needs your signature and a temporary signature of mine.  The second is 1c for me.  You sign the transaction, and I hold it.  You create and sign a transaction which has that $9.99 as input, a 1 day locktime and an output to me, and send it to me.

Since both your and my “revocable” transactions spend the same output, only one can reach the blockchain.  They’re basically equivalent: if you send yours you must wait 1 day for your money.  If I send mine, I have to wait 1 day for my money.  But it means either of us can finalize the payment at any time, so the opening transaction doesn’t need a timeout clause.

Next…

Now we have a generalized transaction channel, which can spend the opening transaction in any way we both agree on, without trust or requiring on-blockchain updates (unless things break down).

The next post will discuss Hashed Timelock Contracts (HTLCs) which can be used to create chains of payments…

Notes For Pedants:

In the payment channel open I assume OP_CHECKLOCKTIMEVERIFY, which isn’t yet in bitcoin.  It’s simpler.

I ignore transaction fees as an unnecessary distraction.

We need malleability fixes, so you can’t mutate a transaction and break the ones which follow.  But I also need the ability to sign Transaction 1a without a complete Transaction 1 (since you can’t expose the signed version to me).  The paper proposes new SIGHASH types to allow this.

[EDIT 2015-03-30 22:11:59+10:30: We also need to sign the other symmetric transactions before signing the opening transaction.  If we released a completed opening transaction before having the other transactions, we might be stuck with no way to get our funds back (as we don’t have a “return all to me” timeout on the opening transaction)]

Posted Mon Mar 30 10:47:32 2015 Tags:
Following up to the previous post, I computed a few numbers for each development cycle in the recent past.

In all the graphs in this article, the horizontal axis counts the number of days into the development cycle, and the vertical axis shows the number of non-merge commits made.

  • The bottom line in each graph shows the number of non-merge commits that went to the contemporary maintenance track.
  • The middle line shows the number of non-merge commits that went to the release but not to the maintenance track (i.e. shiny new toys, oops-fixes to them, and clean-ups that were too minor to be worth merging to the maintenance track), and
  • The top line shows the total number of non-merge commits in the release.

Even though I somehow have a fond memory of v1.5.3, the beginning of the modern Git was unarguably the v1.6.0 release. Its development cycle started in June 2008 and ended in August 2008. We can see that we were constantly adding a lot more new shiny toys (this cycle had the big "no more git-foo in user's $PATH" change) than we were applying fixes to the maintenance track during this period. This cycle lasted for 60 days, 731 commits in total among which 120 went to the maintenance track of its time.


During the development cycle that led to v1.8.0 (August 2012 to October 2012), the pattern is very different. We cook our topics longer in the 'next' branch and we can clearly see that the topics graduate to 'master' in batches, which appear as jumps in the graph.  This cycle lasted for 63 days, 497 commits in total among which 182 went to the maintenance track of its time.


The cycle that led to v2.0.0 (February 2014 to June 2014) has a similar pattern, but as another "we now break backward compatibility for ancient UI wart" release, we can see that a large batch of changes was merged in the early part of the cycle, hoping to give them better and longer exposure to the testing public; on the other hand, we did not do too many fixes to the maintenance track.  This cycle lasted for 103 days, 475 commits in total among which 90 went to the maintenance track of its time.



The numbers for the current cycle leading to v2.4 (February 2015 to April 2015) are not finalized yet, but we can clearly see from this graph that this cycle is more about fixing old bugs than introducing shiny new toys.  This cycle as of this writing is at its 50th day, 344 commits in total so far among which 115 went to the maintenance track.



Note that we should not be alarmed by the sharp rise at the end of the graph. We just entered the pre-release freeze period and the jump shows the final batch of topics graduating to the 'master' branch. We will have a few more weeks until the final, and during that period the graph will hopefully stay reasonably flat (any rise from this point on would mean we would be doing a last-minute "oops" fixes).

Posted Fri Mar 27 21:14:00 2015 Tags:
Earlier in the day, an early preview release for the next release of Git, 2.4-rc0, was tagged. Unlike many major releases in the past, this development cycle turned out to be relatively calm, fixing many usability warts and bugs, while introducing only a few new shiny toys.

In fact, the ratio of changes that are fixes and clean-ups in this release is unusually high compared to recent releases. We keep a series of patches around each topic, whether it is a bugfix, a clean-up, or a new shiny toy, on its own topic branch; each branch is merged to the 'master' branch after review and testing, and fixes and trivial clean-ups are also merged to the 'maint' branch. Because of this project structure, it is relatively easy to sift fixes and enhancements apart. Among the new commits in release X since release (X-1), the ones that also appear in the last maintenance track for release (X-1) are fixes and clean-ups, while the remainder are enhancements.
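The sifting can be demonstrated in a toy repository; the branch and tag names here just mirror the project's 'master'/'maint' convention and a hypothetical release tag:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
master=$(git symbolic-ref --short HEAD)   # initial branch name varies by Git version

git commit -q --allow-empty -m 'release X-1'
git tag prev-release
git branch maint                          # maintenance track forks at the release
git commit -q --allow-empty -m 'shiny new toy'
git checkout -q maint
git commit -q --allow-empty -m 'bugfix'
git checkout -q "$master"
git merge -q maint -m 'merge bugfix'      # fixes flow up to master as well

# Commits since the release that are on maint are fixes; the rest are
# enhancements.
echo "fixes:        $(git rev-list --no-merges --count prev-release..maint)"
echo "enhancements: $(git rev-list --no-merges --count prev-release..$master ^maint)"
```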

Among the changes that went into v1.9.0 since v1.8.5, 23% of them were fixes that got merged to v1.8.5.6, for example, and this number has been more or less stable throughout the last year. Among the changes in v2.3.0 since v2.2.0, 18% of them were also in v2.2.2. Today's preview v2.4.0-rc0, however, has 333 changes since v2.3.0, among which 110 are in v2.3.4, which means that 33% of the changes are fixes and clean-ups.

These fixes came from 33 contributors in total, but changes from only a few usual suspects dominate and most other contributors have only one or two changes on the maintenance track. It is illuminating to compare the output between

$ git shortlog --no-merges -n -s ^maint v2.3.0..master
$ git shortlog --no-merges -n -s v2.3.0..maint

to see who prefers to work on new shiny toys and who works on product quality by fixing other people's bugs. The first command sorts the contributors by the number of commits since v2.3.0 that are only in the 'master', i.e. new shiny toys, and the second command sorts the contributors by the number of commits since v2.3.0 that are in the 'maint', i.e. fixes and clean-ups.

The output matches my perception (as the project maintainer, I at least look at, if not read carefully, all the changes) of each contributor's strengths and weaknesses fairly well. Some are always looking for new and exciting things while being bad at tying up loose ends, while others are more careful perfectionists.

Posted Fri Mar 27 05:34:00 2015 Tags:
Christian Couder (who is known for his work enhancing the "git bisect" command several years ago) and Thomas Ferris Nicolaisen (who hosts a popular podcast GitMinutes) started producing a newsletter for Git development community and named it Git Rev News.

Here is what the newsletter is about in their words:

Our goal is to aggregate and communicate some of the activities on the Git mailing list in a format that the wider tech community can follow and understand. In addition, we'll link to some of the interesting Git-related articles, tools and projects we come across.

This edition covers what happened during the month of March 2015.

As one of the people who still remembers "Git Traffic", which was meant to be an ongoing summary of the Git mailing list traffic but disappeared after publishing its first and only issue, I find this a very welcome development. Because our mailing list is a fairly high-volume one, it is almost impossible to keep up with everything that happens there, unless you are actively involved in the development process.

I hope their effort will continue and benefit the wider Git ecosystem. You can help them out in various ways if you are interested.

  • They are not entirely happy with how the newsletter is formatted. If you are handy with HTML, CSS or some blog publishing platforms, they would appreciate help in this area.
  • They are not paid full-time editors but doing this as volunteers. They would appreciate editorial help as well.
  • You can contribute by writing your own articles that summarize the discussions you found interesting on the mailing list.

Posted Wed Mar 25 20:58:00 2015 Tags:

heard this roar, it is struggling to support Shen Yun is a happy heart,Coach Factory Outlet, Yipiao corner of my eye, and she is seen Tian Xiao Hong figure,Coach Outlet Store, the moment anxious shouted: ‘Hung old guy, you flood the home of the people, all dead In

Posted Wed Mar 25 16:40:42 2015 Tags:
After some 1700+ blog entries here, I've moved my blog to a self-hosted instance of the free software Ghost blogging system. You can read more about my move to Ghost on the new blog. I won't be posting here anymore, so if you wish to follow the blog, please update your links. The new blog is here, and the RSS feed is here. If you are reading this on one of the Blog Planet aggregators that carry my blog, they should be updating momentarily.

See you all on the other side.
Posted Mon Mar 23 19:23:00 2015 Tags:

Earlier today I was in an email exchange with a Tier 1 tech support guy at a hardware vendor who makes multiport serial boards. I had had a question in as to whether a particular board supported the Linux TIOCMIWAIT ioctl. Tier 1 guy referred the question to an engineer in their Linux development group, and Tier 1’s reply to me happened to include his email chain with the engineer.

The engineer wrote to Tier 1 “Is that Eric Raymond ‘ESR’? He’s a big deal in open-source circles.” This made me smile, because when I get made that way it usually means the engineer’s going to work rather harder to make me happy than he would for some random. This is helpful to get my work done!

But there is a duty which is the flip side of that privilege, and that’s what I’m here to write about today. Because if you are reading this at all, your odds of becoming a geek-cred certification authority someday are higher than average, and if that happens, it’s better if you consciously understand what you ought to be doing.

A few hours later my friend and A&D regular Ken Burnside called me to tell me that he was thinking about coming east to Balticon on Memorial Day, and of a clever plan. He has a friend who is local to Baltimore, a painfully shy introvert who he nevertheless thinks he might be able to lure to the convention to do some things with us.

The friend in question has been a major illustrator of SF games for more than thirty years. Because he’s so shy I’m not going to blow his cover, but I could name any one of several iconic illustrations that every science-fiction gamer has seen and you’d say, if you are one yourself, “Wow! That guy?” and want to shake his hand.

In addition, he runs an incredibly fact-dense website about some topics with huge appeal to SF fans and gamers, really well and professionally done with cites to real science and the actual mathematics. As hard-core geekiness goes it really doesn’t get better than this. He has one of the most interesting feeds on G+, too.

The guy is pretty reclusive. No, not his mother’s basement, but he doesn’t get out much. I’ve been thinking I wanted to meet him for a while, but Ken’s proposal crystallized this into a mission. I want to go to Balticon and befriend this guy and hang out with him, only partly because it’d be fun for me.

As much or more, I want to do it because it’d be fun for him. I mean, how much validation does a guy like that ever get? Super-bright, shy as all hell, few peers anywhere – I suspect it would be a major event of his year to have “ESR” be personally nice to him, and I want to give him that. He’s more than earned it.

I think people like this guy are more important than they seem. It’s easy to dismiss SF games as a frivolity, but by helping the rest of us dream bigger, brighter, more wonderful futures – and doing it with rigor – they help bring those futures into being.

Really, what good is it to be a geek cred certification authority if you don’t use it to befriend and encourage and support people like this? Maybe, I can hope, I’ll help him feel a bit more confident. Reassure him that the recondite stuff he does is really valuable and that someone he respects cares about it and he should keep doing it. I’d like it if he walked away feeling a little taller because “ESR” treated him as a peer.

If you ever become a geek cred certification authority yourself (or even just a famous alpha hacker), I hope you will understand that this is part of your job. It’s the duty that goes with the privilege of being recognized by Tier 3 engineers. There will be people out there doing wonderful things, in software and outside it, for whom you will be one of the few sources of validation that matter. Actually providing that validation is a service to your civilization and the future; it helps keep their creativity flowing.

(Usually I post links to my blog from G+. I’m not going to link this one until after Balticon…)

Posted Mon Mar 23 19:09:38 2015 Tags:

I was invited to address the Business and Leadership Forum of the California Commonwealth Club on May 10, 2006. I spoke of serfs and lords and explored the changing nature of property and property rights in the digital era.


Posted Thu Mar 19 09:02:34 2015 Tags:
We just announced that registration and presentation proposal submission is now open for the Kolab Summit 2015, which is being held in The Hague on May 2-3.

Just as Kolab itself is made up of many technologies, many technologies will be present at the summit. In addition to topics on Kolab, there will be presentations covering Roundcube, KDE Kontact and Akonadi, Cyrus IMAP, and OpenChange among others. We have some pretty nifty announcements and reveals already lined up for the event, which will be keynoted by Georg Greve (CEO of Kolab Systems AG) and Jeroen van Meeuwen (lead Kolab architect). Along with the usual BoFs and hacking rooms, this should be quite an enjoyable event.

As an additional and fun twist, the Kolab Summit will be co-located with the openSUSE conference which is going on at the same time. So we'll have lots of opportunity for "hallway talks" with Geekos as well. In fact, I'll be giving a keynote presentation at the openSUSE conference about freedom as innovation. A sort of "get the engines started" presentation that I hope provokes some thought and gets some energy flowing.
Posted Mon Mar 16 10:52:00 2015 Tags:

I think Terry Pratchett’s death finally hit home for me today. I’ve been kind of numb about it before now, but today I learned about this proposal for GNU Terry Pratchett. And as I was commenting about it on G+ I found myself crying.

Here’s a very slightly improved version of what I said on G+. I don’t think I knew Terry well enough to give him the tribute he really deserved, so this will have to do.

What – you think Terry would want us to increase the clacks overhead on his behalf? No, no; I don’t claim to have been a really close friend of his, but I knew him for enough years and through enough conversation to object that that wasn’t like him at all.

IMO, the proper tribute is to keep his name not in excess and useless reply headers but in our hearts, thoughts, and actions.

Remember how human he was. How his comedy mellowed and broadened into deep wisdom. How he laughed at humanity’s foibles without descending into bitterness. Everything he wrote celebrated intelligence and kindness. So should we.

Fuck, I’m tearing up. Dammit, I miss him. It sucks that I’ll never get to teach him pistol 102. I will remember to the end of my life the way that his reserve cracked a little when I gave him his “hacker” ribbon at Penguicon 2003 – how the child who’d been told he couldn’t be a programmer because he was “no good at maths” felt on finally knowing, all the way down, that we accepted him as one of our own.

Because Terry loved us. He loved everybody, most of the time, but he loved the people of the clacks especially. We were one of his roads not taken, and he (rightly!) saw himself in our earnestness and intelligence and introversion and determined naivete and skewed sense of humor and urge to tinker. It mattered to him that we loved him, and in the unlikely event there’s an afterlife it will matter to him still.

So don’t forget Terry, not ever. Because if we showed him what he could have been, he showed us what we can be. Wiser. Funnier. Unafraid of hard truths, but gentle in our use of them. To remember him as he deserves, become better than you are.

Posted Sun Mar 15 15:39:23 2015 Tags:
Google Code is closing down. I had a few projects running there and participated in semanticchemistry which hosts the CHEMINF ontology (doi:10.1371/journal.pone.0025513). But also ChemoJava, an extension of the CDK with GPL-licensed bits.

Fortunately, they have an exporter which automates the migration of the project to GitHub. And this is now in progress. Because the exporter is seeing a heavy load, they warn about export times of up to twelve hours! The "0% complete" is promising, however :)

For the semanticchemistry project, I have asked the other people involved where we want to host it, as GitHub is just one of the options. A copy does not hurt anyway, but the one I am currently making may very well not be the new project location.

PubMed Commons
When you migrate your own projects and you published your work referring to this Google Code project page, please also leave a comment on PubMed Commons pointing readers to the new project page!

Posted Fri Mar 13 09:43:00 2015 Tags:

All my readers should be aware of the Rowhammer attack by now.

It gives me great pleasure to report that thanks to our foresight in specifying ECC memory for the design, the Great Beast of Malvern has armor of proof against this attack. The proof being over a thousand runs of the Rowhammer test.

Thank you, everyone who threw money into the Beast’s build budget. If y’all hadn’t been so generous, the build team might have had to make compromises. One of the most likely items to be cut would have been ECC…because registered ECC DRAM at the Beast’s speeds is so freaking expensive that the memory was about a third of the entire build budget. And now we’d have a vulnerable machine.

As it is, the Beast roars in triumph over the Rowhammer.

Oh, and what I’m currently doing with the Beast? Why, I’m repairing the very fabric of time…itself! Explanation to follow, probably early next week.

Posted Wed Mar 11 14:12:17 2015 Tags:

I’ve published a background paper on precise clocks, time service, and NTP. It is Introduction to Time Service and is meant to be read as a companion to (or before) the GPSD Time Service HOWTO.

Comments, critiques, and suggestions for additions will be welcome.

Posted Tue Mar 10 09:53:08 2015 Tags:

An incredibly shrinking Firefox faces endangered species status, says Computerworld, and reports their user market share at 10% and dropping. It doesn’t look good for the Mozilla Foundation – especially not with so much of their funding coming from Google which of course has its own browser to push.

I wish I could feel sadder about this. I was there at the beginning, of course – the day Netscape open-sourced the code that would become Mozilla and later Firefox was the shot heard ’round the world of the open source revolution, and the event that threw The Cathedral and the Bazaar into the limelight. It should be a tragedy – personally, for me – that the project is circling the drain.

Instead, all I can think is “They brought the fate they deserved on themselves.” Because principles matter – and in 2014 the Mozilla Foundation abandoned and betrayed one of the core covenants of open source.

I refer, of course, to the Foundation’s disgraceful failure to defend its newly promoted Mozilla CEO Brendan Eich against a political mob.

One of the central values of the hacker culture from which Mozilla sprang is that you are to be judged by the quality of your work alone. We aspire to be a pure meritocracy, casting aside irrelevancies of race, sex, nationality, and of political and sexual preferences.

Brendan Eich lived those values. Though he was excoriated for donating to California Proposition 8, it was never even claimed – let alone established – that he judged gay hackers on the Firefox project by anything but their code.

Another central value of the hacker culture, intertwined with judgment only by the work, is free expression – the defense of people holding and expressing unpopular opinions. It must be this way, because suppression of dissent prevents us from discovering and acknowledging that our beliefs do not align with reality. That hinders the work.

When Brendan Eich was attacked, the correct response of the Mozilla Foundation from within hacker and open-source values would have been, at minimum “His off-the-job politics are none of our business.” Ideally, it would have continued with an active defense of Eich’s right to hold and express unpopular opinions, including by donating to the causes of his choice.

That’s not what happened. Instead, the Foundation truckled to that political mob, putting Eich under enough pressure that he felt he had no alternative but to resign. By failing to defend and support Eich, the Mozilla Foundation wronged a man who had every right to expect that he, too, would be judged by his work alone.

There are, of course, also technological factors in the decline of Mozilla – an aging codebase and failure to rapidly deploy to mobile devices are two of the more obvious. But in any market-share battle, hearts and minds matter too. It’s a significant advantage to be universally thought of as the good guys.

The Mozilla Foundation threw that away. They abandoned the hacker way and trashed their own legitimacy. It was a completely unforced error.

That is why I can only think, today, that they brought their end on themselves. And hope that it serves as a hard lesson: to thrive you must, indeed, judge by the work alone.

Posted Mon Mar 9 00:14:48 2015 Tags:

libinput supports edge scrolling since version 0.7.0. Whoops, how does the post title go with this statement? Well, libinput supports edge scrolling, but only on some devices and chances are your touchpad won't be one of them. Bug 89381 is the reference bug here.

First, what is edge scrolling? As the libinput documentation illustrates, it is scrolling triggered by finger movement within specific regions of the touchpad - the right and bottom edges for vertical and horizontal scrolling, respectively. This is in contrast to two-finger scrolling, triggered by a two-finger movement anywhere on the touchpad. synaptics has had edge scrolling since at least 2002, the earliest commit in the repo. Back then we didn't have multitouch-capable touchpads; these days they're the default and you'd struggle to find one that doesn't support at least two fingers. But back then edge scrolling was the default, and touchpads even had the markings for those scroll edges painted on.

libinput adds a whole bunch of features to the touchpad driver, but those features make it hard to support edge scrolling. First, libinput has quite smart software button support. Those buttons are usually on the lowest ~10mm of the touchpad. Depending on finger movement and position, libinput will send a right button click, ignore the movement, etc. You can leave one finger in the button area while using another finger on the touchpad to move the pointer. You can press both left and right areas for a middle click. And so on. On many touchpads the vertical travel/physical resistance is enough to trigger a movement every time you click the button, just by your finger's logical center moving.

libinput also has multi-direction scroll support. Traditionally we only sent one scroll event for vertical/horizontal at a time, even going as far as locking the scroll direction. libinput changes this and only requires an initial threshold to start scrolling; after that the caller will get both horizontal and vertical scroll information. The reason is simple: it's context-dependent when horizontal scrolling should be used, so a global toggle to disable it doesn't make sense. And libinput's scroll coordinates are much more fine-grained too, which is particularly useful for natural scrolling where you'd expect the content to move with your fingers.
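To make the "initial threshold, then both axes" behaviour concrete, here is a toy model. Everything in it - the class name, the threshold value, the tuple-based API - is invented for illustration; it is not libinput's actual code or API:

```python
import math

SCROLL_START_THRESHOLD = 3.0  # illustrative threshold, in scroll "pixels"

class TwoFingerScroll:
    """Toy model: scrolling starts only after an initial movement
    threshold; after that both axes are reported, with no direction lock."""

    def __init__(self):
        self.active = False
        self.pending = (0.0, 0.0)

    def motion(self, dx, dy):
        """Feed a finger motion; return the scroll delta to report, or None."""
        if self.active:
            return (dx, dy)          # both axes pass straight through
        px = self.pending[0] + dx
        py = self.pending[1] + dy
        self.pending = (px, py)
        if math.hypot(px, py) < SCROLL_START_THRESHOLD:
            return None              # still below threshold: not scrolling yet
        self.active = True
        return (px, py)              # release the accumulated motion at once

s = TwoFingerScroll()
print(s.motion(1.0, 1.0))   # below threshold: no scroll yet
print(s.motion(2.0, 2.0))   # threshold crossed: accumulated delta released
print(s.motion(0.5, -1.0))  # both axes now flow through
```

Once the accumulated motion crosses the threshold, the pent-up delta is released in one go and every subsequent motion passes through on both axes, so a diagonal two-finger movement scrolls diagonally.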

Finally, libinput has smart palm detection. The large majority of palm touches are along the left and right edges of the touchpad and they're usually indistinguishable from finger presses (same pressure values for example). Without palm detection some laptops are unusable (e.g. the T440 series).

These features interfere heavily with edge scrolling. Software button areas are in the same region as the horizontal scroll area, palm presses are in the same region as the vertical edge scroll area. The lower vertical edge scroll zone overlaps with software buttons - and that's where you would put your finger if you wanted to quickly scroll up in a document (or down, for natural scrolling). To support edge scrolling on those touchpads, we'd need heuristics and timeouts to guess when something is a palm, a software button click, a scroll movement, the start of a scroll movement, etc. The heuristics are unreliable, and the timeouts reduce responsiveness in the UI. So our decision was to only provide edge scrolling on touchpads where it is required, i.e. those that cannot support two-finger scrolling, those with physical buttons. All other touchpads provide only two-finger scrolling. And we are focusing on making two-finger scrolling good enough that you don't need or want edge scrolling (please file bugs for anything broken).
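To see how the regions collide, here is a toy classifier. The geometry - touchpad size, edge widths, button height - is made up for illustration and does not match any real device or libinput's actual zone logic:

```python
# Hypothetical touchpad geometry in millimetres (illustrative values only).
WIDTH, HEIGHT = 100.0, 60.0
EDGE_SCROLL_WIDTH = 8.0      # right edge strip: vertical edge scrolling
BOTTOM_SCROLL_HEIGHT = 8.0   # bottom edge strip: horizontal edge scrolling
BUTTON_HEIGHT = 10.0         # software button area along the bottom
PALM_EDGE_WIDTH = 10.0       # left/right strips where palms usually land

def zones(x, y):
    """Return every zone a touch at (x, y) falls into."""
    hits = []
    if x >= WIDTH - EDGE_SCROLL_WIDTH:
        hits.append("vert-edge-scroll")
    if y >= HEIGHT - BOTTOM_SCROLL_HEIGHT:
        hits.append("horiz-edge-scroll")
    if y >= HEIGHT - BUTTON_HEIGHT:
        hits.append("software-button")
    if x <= PALM_EDGE_WIDTH or x >= WIDTH - PALM_EDGE_WIDTH:
        hits.append("possible-palm")
    return hits

# A finger in the bottom-right corner is simultaneously an edge-scroll
# candidate, a software-button press and a possible palm.
print(zones(95.0, 55.0))
# A finger in the middle of the pad is unambiguous.
print(zones(50.0, 30.0))
```

A touch near the bottom-right corner lands in four zones at once - exactly the ambiguity that the heuristics and timeouts above would have to untangle.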

Now, before you get too agitated: if edge scrolling is that important to you, invest the time you would otherwise spend sharpening pitchforks, lighting torches and painting picket signs into developing a model that allows us to do reliable edge scrolling in light of all the above, without breaking software buttons and while maintaining palm detection. We'd be happy to consider it.

Posted Fri Mar 6 01:23:00 2015 Tags:

This feature got merged for libinput 0.8 but I noticed I hadn't blogged about it. So belatedly, here is a short description of scroll sources in libinput.

Scrolling is a fairly simple concept. You move the mouse wheel and the content moves down. Beyond that the details get quite nitty, possibly even gritty. On touchpads, scrolling is emulated through a custom finger movement (e.g. two-finger scrolling). A mouse wheel moves in discrete steps of (usually) 15 degrees; a touchpad's finger movement is continuous (within the device's physical resolution). Another scroll method is implemented for the pointing stick: holding the middle button down while moving the stick will generate scroll events. Like touchpad scroll events, these events are continuous. I'll ignore natural scrolling in this post because it just inverts the scroll direction. Kinetic scrolling ("fling scrolling") is a comparatively recent feature: when you lift the finger, the final finger speed determines how long the software will keep emulating scroll events. In synaptics, this is done in the driver and causes all sorts of issues - the driver may keep sending scroll events even while you start typing.

In libinput, there is no kinetic scrolling at all; what we have instead are scroll sources. Currently three sources are defined: wheel, finger and continuous. Wheel is obvious, it provides the physical value in degrees (see this post) and in discrete steps. The "finger" source is more interesting: it is the hint provided by libinput that the scroll event is caused by a finger movement on the device. This means that a) there are no discrete steps and b) libinput guarantees a terminating scroll event when the finger is lifted off the device. This enables the caller to implement kinetic scrolling: simply wait for the terminating event and then calculate the most recent speed. More importantly, because the kinetic scrolling implementation is pushed to the caller (who will push it to the client when the Wayland protocol for this is ready), kinetic scrolling can be implemented on a per-widget basis.
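As a sketch of what such a caller could do, here is a toy velocity estimate. The event tuples, the three-event window and the function itself are invented for this example; they are not libinput's API:

```python
def kinetic_velocity(events):
    """Estimate the release speed from finger scroll events.

    `events` is a list of (timestamp_seconds, delta_pixels) tuples as a
    caller might accumulate them; the terminating event that marks
    finger lift-off is modelled here as a delta of 0.0.
    Returns the speed in pixels/second over the most recent motion.
    """
    motion = [e for e in events if e[1] != 0.0]
    if len(motion) < 2:
        return 0.0                 # not enough data for a speed estimate
    # Use only the last few events so an early slow start doesn't
    # dilute the estimate.
    recent = motion[-3:]
    distance = sum(d for _, d in recent)
    elapsed = recent[-1][0] - recent[0][0]
    return distance / elapsed if elapsed > 0 else 0.0

# Finger speeds up, then lifts off; the terminating (0.0) event is the
# signal to compute the fling speed and start decaying it in the widget.
events = [(0.00, 5.0), (0.02, 8.0), (0.04, 12.0), (0.06, 0.0)]
print(kinetic_velocity(events))  # pixels/second at release
```

The terminating event is only used as a trigger; the speed comes from the motion events just before it, and each widget can then decay that speed however it likes.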

Finally, the third source is "continuous". The only big difference from "finger" is that we can't guarantee that the terminating event is sent, simply because we don't know if it will happen. It depends on the implementation. For the caller this means: if you see a terminating scroll event you can use it as kinetic scroll information, otherwise just treat it normally.

For both the finger and the continuous sources the scroll distance provided by libinput is equivalent to "pixels", i.e. the value that the relative motion of the device would otherwise send. This means the caller can interpret this depending on current context too. Long-term, this should make scrolling a much more precise and pleasant experience than the old X approach of "You've scrolled down by one click".

The API documentation for all this is here: http://wayland.freedesktop.org/libinput/doc/latest/group__event__pointer.html, search for anything with "pointer_axis" in it.

Posted Fri Mar 6 00:56:00 2015 Tags: