Or: why it’s about the dopiest thing we’re spending our tax dollars on.
In *Little Brother*, the hero, Marcus, explains the problem of casting too wide a net when searching for evil:
“If you ever decide to do something as stupid as build an automatic terrorism detector, here’s a math lesson you need to learn first. It’s called “the paradox of the false positive,” and it’s a doozy.
Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that’s 99 percent accurate. I mean, 99 percent of the time, it gives the correct result — true if the subject is infected, and false if the subject is healthy. You give the test to a million people.
One in a million people have Super-AIDS. One in a hundred people that you test will generate a “false positive” — the test will say he has Super-AIDS even though he doesn’t. That’s what “99 percent accurate” means: one percent wrong.
What’s one percent of one million?
1,000,000/100 = 10,000
One in a million people has Super-AIDS. If you test a million random people, you’ll probably only find one case of real Super-AIDS. But your test won’t identify *one* person as having Super-AIDS. It will identify *10,000* people as having it.
Your 99 percent accurate test will perform with 99.99 percent *inaccuracy*.
That’s the paradox of the false positive. When you try to find something really rare, your test’s accuracy has to match the rarity of the thing you’re looking for. If you’re trying to point at a single pixel on your screen, a sharp pencil is a good pointer: the pencil-tip is a lot smaller (more accurate) than the pixels. But a pencil-tip is no good at pointing at a single *atom* in your screen. For that, you need a pointer — a test — that’s one atom wide or less at the tip.
This is the paradox of the false positive, and here’s how it applies to terrorism:
Terrorists are really rare. In a city of twenty million like New York, there might be one or two terrorists. Maybe ten of them at the outside. 10/20,000,000 = 0.00005 percent. One twenty-thousandth of a percent.
That’s pretty rare all right. Now, say you’ve got some software that can sift through all the bank-records, or toll-pass records, or public transit records, or phone-call records in the city and catch terrorists 99 percent of the time.
In a pool of twenty million people, a 99 percent accurate test will identify two hundred thousand people as being terrorists. But only ten of them are terrorists. To catch ten bad guys, you have to haul in and investigate two hundred thousand innocent people.”
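Doctorow’s arithmetic is just Bayes’ rule, and it’s easy to check. Here’s a minimal Python sketch (assuming, as the quote does, that “99 percent accurate” means both a 99 percent true-positive rate and a 99 percent true-negative rate):

```python
def positive_predictive_value(prevalence, accuracy):
    """Chance that a person flagged by the test is a real case (Bayes' rule)."""
    true_pos = prevalence * accuracy               # real cases correctly flagged
    false_pos = (1 - prevalence) * (1 - accuracy)  # non-cases wrongly flagged
    return true_pos / (true_pos + false_pos)

# Super-AIDS: 1 case in a million, 99% accurate test
print(positive_predictive_value(1 / 1_000_000, 0.99))  # roughly 0.0001

# Terrorism: 10 terrorists in a city of 20 million
pop = 20_000_000
prevalence = 10 / pop
flagged = pop * (prevalence * 0.99 + (1 - prevalence) * 0.01)
print(round(flagged))  # about 200,000 flagged to catch roughly 10
```

Roughly one flagged person in ten thousand is a real case: the test’s error rate swamps the thing being searched for, exactly as the quote says.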
Similarly, another blogger described the trap of correlation fallacies as explained in Stephen Jay Gould’s *The Mismeasure of Man*; even commentators who strongly disagree with most of Gould’s analyses generally say that his explanations of correlation and factor analysis are an excellent introduction to the subject.
“I’ll briefly touch on one correlational fallacy, known as the ecological fallacy, because I saw an example of it in another thread. There are two kinds of correlations: individual correlations and ecological correlations. The former are based on observations of individuals; the latter are based on observations of aggregates. In the example here, the correlation is between income and birth rate. An individual correlation uses families as the unit of analysis; you look at how much money a family makes and how many kids they have. An ecological correlation uses regions (countries, states, counties, etc.) as the unit of analysis; you look at the average family income and average family size in each region.
As it turns out, the individual correlation between income and family size is positive whereas the ecological correlation is negative. Poorer regions generally have higher birth rates than wealthier regions, but within each region, poorer people have lower birth rates than wealthier people. Thus it’s incorrect to say that poor people have higher birth rates than middle-class people.
A similar example: in the last three US Presidential elections, higher-income people were considerably more likely to vote for the Republican. But states with higher average incomes were considerably more likely to give their electoral votes to the Democrat.
The upshot of all of this is that group averages simply don’t behave like individual measurements, no matter how much we “intuitively” expect them to. Believe it or not, the root cause of ecological fallacies is forgetting a simple bit of elementary-school arithmetic, namely that the sum of two or more fractions is not the same as the sum of their numerators divided by the sum of their denominators.”
Bottom line: if we try to protect ourselves out to the hundredth percentile, we’ll end up with a thousand-fold increase in restrictions on everyone’s liberty without a proportionate improvement in safety, because evil will simply adapt to the environment. Evolution dictates it.