2040 (Graham Tottle; Cameron Publicity & Marketing Ltd) is a very odd book. Ostensibly an SF novel about skulduggery on two timelines, it is actually a ramble through a huge gallimaufry of topics including most prominently the vagaries of yachting in the Irish Sea, an apologia for British colonial administration in 19th-century Africa, and the minutiae of instruction sets of archaic mainframe computers.
It’s full of vivid ideas and imagery, held together by a merely serviceable plot and garnished with festoons of footnotes delving into odd quarters of the factual background. Some will dislike the book’s politics, a sort of nostalgic contrarian Toryism; many Americans may find this incomprehensible, or misread it as a variant of the harsher American version of traditionalist conservatism. There is much worthwhile exploding of fashionable cant in it, even if the author does sound a bit crotchety on occasion.
I enjoyed it, but I can’t exactly recommend it. Enter at your own risk.
For many years a major focus of Mono has been to be compatible-enough with .NET and to support the popular features that developers use.
We have always believed that it is better to be slow and correct than to be fast and wrong.
That said, over the years we have embarked on some multi-year projects to address major performance bottlenecks: from implementing a precise GC and fine-tuning it for a number of different workloads, to shipping four successive versions of the code generator, plus the LLVM backend for additional speed and features like Mono.SIMD.
But these optimizations have been mostly reactive: we wait for someone to identify or spot a problem, and then we start working on a solution.
We are now taking a proactive approach.
A few months ago, Mark Probst started the new Mono performance team. The goal of the team is to improve the performance of the Mono runtime and treat performance improvements as a feature that is continuously being developed, fine-tuned and monitored.
The team is working on ways to track the performance of Mono over time, has implemented support for getting better insights into what happens inside the runtime, and has landed several optimizations in Mono over the last few months.
We are actively hiring for developers to join the Mono performance team (ideally in San Francisco, where Mark is based).
Most recently, the team added a sophisticated new stack of performance counters, which allows us to monitor what is happening inside the runtime and export the data to our profiler (a joint effort between the performance team and the feature team, implemented by Ludovic). We have also unified the runtime and user-defined performance counters and will soon be sharing a new profiler UI.
The idea of a "type A personality", with health problems due to stress, was constructed by the tobacco industry.
India wants to interview the Tamil boat people who are imprisoned on an Australian ship.
The US military is responsible for around 5% of world greenhouse gas emissions.
The UK government demonstrated its contempt for the environment yet again by appointing a fracking flunky to head the Environment Agency.
Australia's government is cutting funds for public transit to build more roads instead.
It's a natural choice, for a government that intends to boost fossil fuel use.
The Koch brothers spend hundreds of millions to influence politics, through a maze of twisty channels all different.
The US is stranding Yemeni-Americans in Yemen by confiscating their passports while they are visiting there.
Then they have to wait as much as a year before they are allowed to return to the US.
TWO FRICATIVE OVERALLS
SAGACIOUSLY SEEDY FORMULATION
PHANTASMAGORICAL SYMBOL FOR THREE DISAPPEARANCES
ACCOMMODATION NEXT TO FOLLICULAR BATHHOUSE
OKINAWA UNDER GRID OSTIARY
UNCERTAINLY POLYVALENT SPELL
THREE CALVINS OUTSIDE OF REPRESENTATION PESTILENCE
VOUCHER ABOVE TWO RESPONDENTS
LARGE HEAVY ABLATION
ASTONISHING PROBABILITY SYMBOL
I've gotten to the point where early-stage feedback from people who are interested in such a framework would be very valuable in polishing the design a bit more before laying down code. As such, I'm looking for 4-6 people to collaborate with at this early stage.
If you are interested in using green processes in your Qt application, or are simply interested in new application development patterns for Qt, and you have 20-30 minutes per week to look at API drafts and provide feedback and/or add to the ideas that are coming together, please email me (aseigo at kde.org).
@harrisj What if Mos Eisley wasn't really that wretched and it was just Obi Wan being racist again? Mos Eisley may not look like much but it's a bedroom community with decent schools and affordable housing. @tcarmody You can just imagine Obi-Wan after years of being a Jedi on Coruscant being stuck in this place and just getting madder and madder. @harrisj yeah nobody cares that the blue milk is so much more artisanal on Coruscant I also imagine Tosche Station as some sort of affluent suburban mall where Luke just goes to loiter when bored. all I'm saying is that for a place he allegedly hates, Obi Wan sure knows exactly where the best cantina is maybe what Obi Wan really hates is himself for having a good time and enjoying the cantina scene @davin Old Sgt. Major Kenobi was this close to muttering "bloody wogs" under his breath. Kenobi prefers the obliging company of droids which, long story, accounts in part for the Cantina's policy against them. @fhwang You can't be mad at Obi Wan. That's just how all the Jedi talked back then. @skottk Face it - Obi-Wan killed Uncle Owen and Aunt Beru in order to let Luke sell his speeder for funds to leave the planet. @harrisj and the Greater Mos Eisley Business Improvement District doesn't care about the rantings of a separatist hermit @anildash You're all talking small potatoes. Big story is Palpatine's equity in Sienar Systems.
This truly spectacular specimen is possibly the longest example of coprolite - fossilized dinosaur feces - ever to be offered at auction. It boasts a wonderfully even, pale brown-yellow coloring and terrifically detailed texture to the heavily botryoidal surface across the whole of its immense length. The passer of this remarkable object is unknown, but it is nonetheless a highly evocative specimen of unprecedented size, presented in four sections, each with a heavy black marble custom base, an eye-watering 40 inches in length overall.
Miocene-Oligocene Wilkes Formation, Toledo, Lewis Co., Washington.
I could give three of them a good home.
Last time we talked about how wifi signals cross about 12 orders of magnitude in terms of signal power, from +30dBm (1 watt) to -90dBm (1 picowatt). I mentioned my old concern back in school about splitters causing a drop to 1/n of the signal on a wired network, where n is the number of nodes, and said that doesn't matter much after all.
Why doesn't it matter? If you do digital circuits for a living, you are familiar with the way digital logic works: if the voltage is over a threshold, say, 1.5V, then you read a 1. If it's under the threshold, then you read a 0. So if you cut all the voltages in half, that's going to be a mess because the threshold needs to get cut in half too. And if you have an unknown number of nodes on your network, then you don't know where the threshold is at all, which is a problem. Right?
Not necessarily. It turns out analog signal processing is - surprise! - not like digital signal processing.
ASK, FSK, PSK, QAM
Essentially, in receiving an analog signal and converting it back to digital, you want to do one of three things:
- see if the signal power is over/under a threshold ("amplitude shift keying" or ASK)
- or: see if the signal is frequency #0 or frequency #1 ("frequency shift keying" or FSK)
- or: fancier FSK-like schemes such as PSK or QAM (look it up yourself :)).
But first, what's wrong with ASK? Why toggle between two frequencies (FSK) when you can just toggle one frequency on and off (ASK)? The answer comes down mainly to circuit design. To design an ASK receiver, you have to define a threshold, and when the amplitude is higher than the threshold, call it a 1, otherwise call it a 0. But what is the threshold? It depends on the signal strength. What is the signal strength? The height of a "1" signal. How do we know whether we're looking at a "1" signal? It's above the threshold ... It ends up getting tautological.
The way you implement it is to design an "automatic gain control" (AGC) circuit that amplifies more when too few things are over the threshold, and less when too many things are over the threshold. As long as you have about the same number of 1's and 0's, you can tune your AGC to do the right thing by averaging the received signal power over some amount of time.
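The averaging idea can be sketched in a few lines. This is a hypothetical feedforward variant (a real AGC is an analog feedback loop, but the averaging principle is the same), and the target level and smoothing constant are made-up illustration values:

```python
# Toy feedforward AGC: track the running average signal magnitude,
# then scale so the output sits at a fixed target amplitude.
# (A real AGC is an analog feedback loop; the averaging idea is the same.)

def agc(samples, target=1.0, alpha=0.01):
    out = []
    avg = None
    for s in samples:
        # running average of the received signal magnitude
        avg = abs(s) if avg is None else (1 - alpha) * avg + alpha * abs(s)
        # amplify more when the average is low, less when it is high
        out.append(s * target / max(avg, 1e-12))
    return out

# a weak signal at 1/10 of nominal amplitude comes back out at full amplitude
boosted = agc([0.1, -0.1] * 100)
```

The averaging only does the right thing if the signal spends about as much time high as low, which is exactly the condition mentioned above.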
In case you *don't* have an equal number of 1's and 0's, you can fake it with various kinds of digital encodings. (One easy encoding is to split each bit into two halves and always flip the signal upside down for the second half, producing a "balanced" signal.)
So, you can do this of course, and people have done it. But it just ends up being complicated and fiddly. FSK turns out to be much easier. With FSK, you just build two circuits: one for detecting the amplitude of the signal at frequency f1, and one for detecting the amplitude of the signal at frequency f2. It turns out to be easy to design analog circuits that do this. Then you design a "comparator" circuit that will tell you which of two values is greater; it turns out to be easy to design that too. And you're done! No trying to define a "threshold" value, no fine-tuned AGC circuit, no circular reasoning. So FSK and FSK-like schemes caught on.
With that, you can see why my original worry about a 1/n signal reduction from cable splitters didn't matter. As long as you're using FSK, the 1/n reduction doesn't mean anything; your amplitude detector and comparator circuits just don't care about the exact level, essentially. With wifi, we take that to the extreme with tiny little FSK-like signals down to a picowatt or so.
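Both points can be sketched numerically: measure the signal's energy at each of the two tone frequencies (a single-bin DFT, which is roughly what those analog detector circuits approximate), let a comparator pick the larger, and note that scaling the whole signal down leaves the answer unchanged. The sample rate and tone frequencies here are arbitrary illustration values:

```python
import math

RATE = 8000          # samples per second (illustration value)
F0, F1 = 1000, 2000  # the two FSK tones, Hz (illustration values)

def tone_energy(samples, freq):
    # correlate against cos/sin at the target frequency: one DFT bin
    re = sum(s * math.cos(2 * math.pi * freq * n / RATE)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / RATE)
             for n, s in enumerate(samples))
    return re * re + im * im

def detect_bit(samples):
    # the "comparator": whichever tone is stronger wins; no absolute
    # threshold appears anywhere, so overall amplitude is irrelevant
    return 1 if tone_energy(samples, F1) > tone_energy(samples, F0) else 0

def tone(freq, n=160, amplitude=1.0):
    return [amplitude * math.sin(2 * math.pi * freq * i / RATE)
            for i in range(n)]

assert detect_bit(tone(F1)) == 1
assert detect_bit(tone(F0)) == 0
# scaling the signal down 1000x (like a big splitter loss) changes nothing:
assert detect_bit(tone(F1, amplitude=0.001)) == 1
```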
But where do we stop? Why only a picowatt? Why not even smaller?
The answer is, of course, background noise. No signal exists in perfect isolation, except in a simulation (and even in a simulation, the limitations of floating point accuracy might cause problems). There might be leftover bits of other people's signals transmitted from far away; thermal noise (ie. molecules vibrating around which happen to be at your frequency); and amplifier noise (ie. inaccuracies generated just from trying to boost the signal to a point where your frequency detector circuits can see it at all). You can also have problems from other high-frequency components on the same circuit board emitting conflicting signals.
The combination of limits from amplifier error and conflicting electrical components is called the receiver sensitivity. Noise arriving from outside your receiver (both thermal noise and noise from interfering signals) is called the noise floor. Modern circuits - once properly debugged, calibrated, and shielded - seem to be good enough that receiver sensitivity is not really your problem nowadays. The noise floor is what matters.
It turns out, with modern "low-noise amplifier" (LNA) circuits, we can amplify a weak signal essentially as much as we want. But the problem is... we amplify the noise along with it. The ratio between signal strength and noise turns out to be what really matters, and it doesn't change when you amplify. (Other than getting slightly worse due to amplifier noise.) We call that the signal to noise ratio (SNR), and if you ask an expert in radio signals, they'll tell you it's one of the most important measurements in analog communications.
A note on SNR: it's expressed as a "ratio" which means you divide the signal strength in mW by the noise level in mW. But like the signal strength and noise levels, we normally want to express the SNR in decibels to make it more manageable. Decibels are based on logarithms, and because of the way logarithms work, you subtract decibels to get the same effect as dividing the original values. That turns out to be very convenient! If your noise level is -90dBm and your signal is, say, -60dBm, then your SNR is 30dB, which means 1000x. That's awfully easy to say considering how complicated the underlying math is. (By the way, after subtracting two dBm values we just get plain dB, for the same reason that if you divide 10mW by 2mW you just get 5, not 5mW.)
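The same arithmetic in code, using the numbers from above (the helper names are mine):

```python
import math

# dBm <-> mW conversions, and SNR by subtraction: a -60dBm signal over a
# -90dBm noise floor gives 30dB of SNR, i.e. a 1000x power ratio.

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

signal_dbm, noise_dbm = -60, -90
snr_db = signal_dbm - noise_dbm          # subtracting dBm gives plain dB
snr_ratio = dbm_to_mw(signal_dbm) / dbm_to_mw(noise_dbm)

assert snr_db == 30
assert abs(snr_ratio - 1000) < 1e-6
```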
The Shannon Limit
So, finally... how big does the SNR need to be in order to be "good"? Can you just receive any signal where SNR > 1.0x (which means signal is greater than noise)? And when SNR < 1.0x (signal is less than noise), all is lost?
Nope. It's not that simple at all. The math is actually pretty complicated, but you can read about the Shannon Limit on wikipedia if you really want to know all the details. In short, the bigger your SNR, the faster you can go. That makes a kind of intuitive sense I guess.
(But it's not really all that intuitive. When someone is yelling, can they talk *faster* than when they're whispering? Perhaps it's only intuitive because we've been trained to notice that wifi goes faster when the nodes are closer together.)
The Shannon limit even calculates that you can transfer some data even when the signal power is lower than the noise, which seems counterintuitive or even impossible. But it's true, and the global positioning system (GPS) apparently actually does this, and it's pretty cool.
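The formula itself is simple even if its derivation isn't: C = B·log2(1 + SNR), where B is the bandwidth in Hz and SNR is the plain ratio, not dB. A quick sketch with a 20MHz channel (a common wifi channel width):

```python
import math

# Shannon capacity: the maximum error-free bit rate for a given bandwidth
# and signal-to-noise ratio. Note it stays above zero even when SNR < 1.

def shannon_capacity(bandwidth_hz, snr_ratio):
    return bandwidth_hz * math.log2(1 + snr_ratio)

B = 20e6  # 20 MHz channel
for snr_db in (30, 20, 0, -10):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:+3} dB -> {shannon_capacity(B, snr) / 1e6:6.1f} Mbit/s")

# even with the signal 10dB *below* the noise (SNR = 0.1), capacity > 0,
# which is the regime GPS operates in:
assert shannon_capacity(B, 0.1) > 0
```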
The Maximum Range of Wifi is Unchangeable
So that was all a *very* long story, but it has a point. Wifi signal strength is fundamentally limited by two things: the regulatory transmitter power limit (30dBm or less, depending on the frequency and geography), and the distance between transmitter and receiver. You also can't do much about background noise; it's roughly -90dBm or maybe a bit worse. Thus, the maximum speed of a wifi link is fixed by the laws of physics. Transmitters have been transmitting at around the regulatory maximum since the beginning.
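To put rough numbers on those two limits, here's a back-of-envelope link budget. It assumes ideal free-space path loss, which indoor propagation is considerably worse than, so treat these as optimistic upper bounds:

```python
import math

# Back-of-envelope link budget under free-space path loss (FSPL):
# received power = transmit power - path loss, SNR = that - noise floor.

C = 3e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    # free-space path loss: 20*log10(4*pi*d*f/c); about 40dB at 1m for 2.4GHz
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

TX_DBM = 30        # regulatory-maximum transmitter
NOISE_DBM = -90    # typical noise floor, per the discussion above
FREQ = 2.4e9       # 2.4GHz wifi

for d in (1, 10, 100, 1000):
    rx = TX_DBM - fspl_db(d, FREQ)
    print(f"{d:>5} m: rx {rx:6.1f} dBm, SNR {rx - NOISE_DBM:5.1f} dB")
```

Note that each doubling of distance costs about 6dB of SNR, and once you're already transmitting at the regulatory maximum, nothing can claw that back.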
So how, then, do we explain the claims that newer 802.11n devices have "double the range" of the previous-generation 802.11g devices?
Simple: they're marketing lies. 802.11g and 802.11n have exactly the same maximum range. In fact, 802.11n just degrades into 802.11g as the SNR gets worse and worse, so this has to be true.
802.11n is certainly faster at close and medium range. That's because 802.11g tops out at an SNR of about 20dB. That is, the Shannon Limit says you can go faster when you have >20dB, but 802.11g doesn't try; technology wasn't ready for it at the time. 802.11n can take advantage of that higher SNR to get better speeds at closer ranges, which is great.
But the claim about longer range, by any normal person's definition of range, is simply not true.
Luckily, marketing people are not normal people. In the article I linked above they explain how. Basically, they define "twice the range" as a combination of "twice the speed at the same distance" and "the same speed at twice the distance." That is, a device fulfilling both criteria has double the range as an original device which fulfills neither.
It sounds logical, but in real life, that definition is not at all useful. You can do it by comparing, say, 802.11g and 802.11n at 5ft and 10ft distances. Sure enough, 802.11n is more than twice as fast as 802.11g at 5ft! And at 10ft, it's still faster than 802.11g at 5ft! Therefore, twice the range. Magic, right? But at 1000ft, the same equations don't work out. Oddly, their definition of "range" does not include what happens at maximum range.
I've been a bit surprised at how many people believe this "802.11n has twice the range" claim. It's obviously great for marketing; customers hate the limits of wifi's maximum range, so of course they want to double it, or at least increase it by any nontrivial amount, and they will pay money for a new router if it can do this. As of this writing, even wikipedia's table of maximum ranges says 802.11n has twice the maximum range of 802.11g, despite the fact that anyone doing a real-life test could easily discover that this is simply not the case. I did the test. It's not the case. You just can't cheat Shannon and the Signal to Noise Ratio.
Coming up next, some ways to cheat Shannon and the Signal to Noise Ratio.
Planet Debian upstream is hosted by Branchable.