Saturday, February 8, 2014

Baltimore's Speed Camera Paradox

Baltimore's mothballed speed camera program recently made national news when a secret audit leaked to the Baltimore Sun showed error rates averaging ten percent -- forty times the error rate previously claimed by the city -- with a few cameras having error rates of 30-50%.  Mayor Rawlings-Blake appears to be in denial over the severity of the problem.  Some members of the city council have been rightfully critical of the Mayor, and particularly of the city's attempts to cover up the audit.

Certainly city officials deserve all the criticism they are getting for having tried to keep this audit secret, and for allowing such a high rate of errors to go unnoticed for so long before the press got involved.  But how did all the errors happen in the first place?  And how could the rate get so high?  Wasn't the public told that these cameras had been tested and calibrated and blah blah blah... just like every other speed camera program in the state has claimed?  How can 'properly calibrated' equipment have such a high error rate?

Well, it turns out there is a perfectly valid reason why the results of the audit are not only plausible, but are COMPLETELY PREDICTABLE if you have a device which produces even a relatively small rate of 'false positives' (say one time in 400) when run in the real world (as opposed to in a testing lab).  It has nothing to do with the TYPE of equipment being used or whether it is based on Radar or LIDAR or something else.  Rather it involves a little thing called Bayes Theorem.
This is something they teach everyone in a first-year "Probabilities and Statistics" class, and what it says is:
"The probability of 'A given B' equals the probability of 'B given A', times the probability of 'A', divided by the probability of 'B'."
In symbols: P(A|B) = P(B|A) * P(A) / P(B).

What on earth does that have to do with speed cameras?

Well, let's say we have a traffic pattern where vehicle speeds form a typical 'bell curve'.

And let's say in this example that 98% of vehicles are traveling below the ticket threshold, and just 2% above it.  We don't have actual data for Baltimore's camera sites; however, this is the percentage the SHA recently told WTOP applies at their own camera sites, and 2% or lower is plausible based on what other jurisdictions have asserted.  Remember, we are talking about vehicle speeds directly in front of a known speed camera site, in often congested urban/suburban traffic, not speeds people would travel on an open freeway at 7am on a Sunday.  So in this example, most drivers are clustered near or below the speed limit.

Xerox has blamed these particular errors on what they call "radar effects", which are not errors of the device itself but rather the result of interference, absorption, reflection, and refraction of radar waves.  Basically, it has nothing to do with calibration.  The cameras in Baltimore PASSED ALL THEIR CALIBRATION TESTS with flying colors, even on the same day they produced obvious errors.

Let's say the rate of serious radar effects is 0.25%.  That means one time in 400, the device produces a 'false positive': a speed reading bad enough to be over the ticket threshold, falsely 'flashing' a non-speeding car.  That rate is low enough that you will probably never spot it through sporadic testing.  If they ran a test every day and saw the error come up once in a year, they'd probably just re-run the test, not see it the next time or the 100 times after that, and declare everything OK.  But the point is that the rate of false triggers is not zero, and no amount of calibration testing under controlled conditions will change that, because it is caused by things external to the device.
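To see why sporadic testing would probably miss this, here is a minimal back-of-the-envelope sketch in Python (using the 1-in-400 rate assumed above; the test counts are arbitrary):

    # Chance that N independent controlled test readings show ZERO false
    # positives, if each reading has a 1-in-400 (0.25%) chance of a false trigger.
    fp_rate = 0.0025
    for n in (1, 30, 100, 400):
        p_clean = (1 - fp_rate) ** n
        print(f"{n:4d} tests: {p_clean:.1%} chance of seeing no errors at all")

Even 400 controlled test readings leave better than a one-in-three chance of seeing no error at all, so a device with this flaw would sail through routine testing.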

Note that we are not talking about the sort of error where it says you were going 47mph but you were really going 46.  That is just the limit of the precision of the device.  If you were driving 1mph below the ticket threshold, you should probably have guessed that you might get a ticket anyway.  We're talking about the odds of a sporadic but BIG error, a complete nonsense speed reading, like the ones in Baltimore which gave tickets to stationary cars and which clocked big-rig trucks at twice their actual speed.

So going back to Bayes Theorem, we have the following probabilities:
  • P(A) = the probability that a randomly selected car is NOT speeding = 98% (0.98) 
  • P(B) = the probability that a randomly selected car will get a ticket = 2% (0.02)  *
  • P(B|A) = the probability that a randomly selected car will get a ticket GIVEN THAT it was not speeding = 0.25% (0.0025).  This is the rate of 'false positives'.
We want to calculate the rate of errors among tickets, so:
P(A|B) = the probability that a random TICKET RECIPIENT was not speeding

P(A|B) = 0.0025 * 0.98 / 0.02 = 0.1225, which is **12.25%**

So in this example your rate of erroneous tickets is not 1 out of every 400; it is actually about 1 out of every 8.
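As a sanity check, here is the same arithmetic as a minimal Python sketch (the function name is mine, and the three rates are the illustrative values assumed above, not measured Baltimore figures):

    def ticket_error_rate(p_false_positive, p_not_speeding, p_ticket):
        """Bayes Theorem: P(not speeding | ticketed) =
        P(ticketed | not speeding) * P(not speeding) / P(ticketed)."""
        return p_false_positive * p_not_speeding / p_ticket

    # 0.25% false positives, 98% non-speeders, 2% of cars ticketed
    rate = ticket_error_rate(0.0025, 0.98, 0.02)
    print(f"{rate:.4f} = {rate:.2%}")   # 0.1225 = 12.25%, about 1 ticket in 8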

What this means is that the rate of errors for the system as a whole will be many times greater than the false positive rate for an individual speed measurement.  If the number of 'negatives' (non-speeders) is 50-100 times greater than the number of 'positives' (ticket recipients), then for whatever rate of errors you deem "acceptable", the device needs to be roughly two orders of magnitude more accurate than that, or you will end up with an 'unacceptable' rate of erroneous tickets.
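To put a number on that, a quick sketch (the 1% "acceptable" error target is a hypothetical value, chosen only to illustrate the scaling):

    # With 98% non-speeders and 2% ticketed, how accurate must the device be
    # to keep erroneous tickets below a (hypothetical) 1% target?
    # Rearranging Bayes Theorem: P(B|A) <= target * P(B) / P(A)
    target, p_not_speeding, p_ticket = 0.01, 0.98, 0.02
    max_fp = target * p_ticket / p_not_speeding
    print(f"max false positive rate: {max_fp:.6f} (about 1 in {1/max_fp:,.0f})")

A device with a 1-in-400 false trigger rate misses that 1-in-4,900 requirement by more than a factor of ten.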

This well-known phenomenon is called the 'base rate fallacy' or the 'false positive paradox'.  If you apply a very small percentage of false positives to a very large sample which is mostly 'negative', you will still get many more errors than you would think.

If city officials or their contractor thought 1 error out of 400 readings was not something to be concerned about, then given the vast number of non-speeders passing dozens of cameras hundreds of days per year, it could easily have added up to the roughly 70,000 errors in one year which their audit estimated they had experienced.
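A back-of-the-envelope check shows the scale involved.  Note that the audit did not publish traffic volumes, so the camera counts and daily volumes below are invented purely for illustration:

    # How many non-speeding readings per year would yield ~70,000 bad tickets
    # at a 1-in-400 false positive rate?
    fp_rate = 0.0025
    errors_per_year = 70_000
    print(f"{errors_per_year / fp_rate:,.0f} readings")   # 28,000,000

    # Entirely hypothetical volumes that would get there, e.g.:
    # 80 cameras x 1,000 non-speeding vehicles/day x 350 days
    print(f"{80 * 1_000 * 350:,}")                        # 28,000,000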

And the fact that the error rate for the system will be far higher than the error rate of the device is something which would apply to ANY speed camera program NO MATTER WHAT TECHNOLOGY THEY USE.  Other speed measurement technologies have their own sources of sporadic errors, but "math" does not care what the cause of false positives is.  If I had said we have a LIDAR device which had only a one-in-a-thousand error rate due to 'sweep errors', 'reflection', or other phenomena (rather than radar effects), but that only 1% of people were driving over the ticket threshold, we'd still have a roughly ten percent rate of erroneous tickets.  And all from devices which passed every single calibration test and which would be unlikely to show any errors even if you ran several hundred tests on them.
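Plugging those hypothetical LIDAR numbers into the same calculation:

    # Hypothetical LIDAR example: 1-in-1000 false positives, 99% of cars
    # under the ticket threshold, 1% over it.
    rate = 0.001 * 0.99 / 0.01
    print(f"{rate:.3f} = {rate:.1%}")   # 0.099 = 9.9%, roughly 1 ticket in 10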

It is also worth noting that the number of people who DISPUTE tickets is far smaller than the number who RECEIVE tickets (maybe 1-2%).  As such, a defendant who is disputing a ticket might have an even higher probability of being innocent than the average ticket recipient, if you assume that most people who know they were speeding just pay.  District court judges are probably basing their estimate of a person's chance of being innocent on their belief in how accurate an individual speed measurement is, and are either ignoring or unaware of the fact that they are dealing with a very small subset of a very small subset of total speed measurements.  This failure to consider prior probability when judging the chance of guilt is called the 'prosecutor's fallacy', and it is part of the reason that most of the time ticket recipients have no chance of convincing the judge they were not speeding unless they have some very compelling proof.
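To illustrate that last point with one more application of Bayes Theorem -- and to be clear, the dispute rates below are pure guesses, invented only to show the direction of the effect:

    # Purely illustrative sketch of why disputants may be disproportionately
    # innocent.  BOTH dispute rates below are invented assumptions.
    p_innocent_given_ticket = 0.1225   # from the worked example above
    p_guilty_given_ticket = 1 - p_innocent_given_ticket
    p_dispute_if_innocent = 0.10       # assumed: the wrongly ticketed dispute more
    p_dispute_if_guilty = 0.01         # assumed: most actual speeders just pay

    # Bayes Theorem again: P(innocent | disputes the ticket)
    num = p_dispute_if_innocent * p_innocent_given_ticket
    den = num + p_dispute_if_guilty * p_guilty_given_ticket
    print(f"{num / den:.1%}")   # ~58% -- closer to a coin flip than a sure thing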

Of course Bayes Theorem doesn't explain all the cover-ups and secret meetings, or the attempts to clamp down on whistleblowers.  Nor does it explain the indifference of city officials to the initial reports of errors, or why this went on so long, or why the public had been kept in the dark when errors were first discovered.  But it does go to show that if you are running a speed camera program, it pays to study math.

----------------------------------------------------------
* 2/10/2014: We used the rate of tickets, rather than the rate of 'violators', and assumed a value of 2% for simplicity.  In reality the rate of tickets will depend on the rate of false negatives (i.e. people driving more than 12mph over the limit who are not cited).  This involves a slightly different calculation, still based on Bayes Theorem:
P(A) = Probability of not being over threshold
P(B) = Probability of being detected as speeding by system
and 
P(A|B) = P(B|A)*P(A)/[P(B|A)*P(A) + P(B|~A)*P(~A)]
In this case let
P(B|A)  = 0.0025 (False Positive Rate)
P(A)    = 0.98 (98%)
P(~A)   = 0.02 (2%)
P(B|~A) = 0.9975 (True Positive Rate; here we assume the false negative rate equals the false positive rate, so 1 - 0.0025 = 0.9975)
P(A|B) =  .0025 * .98 / ((.0025 * .98)+(.02*.9975)) = .109375 or 10.9% 
or using a lower true positive rate (if it is not possible to capture every violation with a certain type of device) of say 0.877 (87.7%), then
P(A|B) =  .0025 * .98 / ((.0025 * .98)+(.02*.877)) = .122561 or about 12.26%
So yes, the false negative rate also affects the overall error rate.
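For completeness, here is a sketch of this fuller calculation, using the same illustrative numbers (the function name is mine):

    # Full form of Bayes Theorem, accounting for false negatives:
    # P(A|B) = P(B|A)*P(A) / [P(B|A)*P(A) + P(B|~A)*P(~A)]
    def ticket_error_rate_full(p_fp, p_not_speeding, p_tp):
        num = p_fp * p_not_speeding
        return num / (num + p_tp * (1 - p_not_speeding))

    print(ticket_error_rate_full(0.0025, 0.98, 0.9975))   # 0.109375 (10.9%)
    print(ticket_error_rate_full(0.0025, 0.98, 0.877))    # 0.122561 (12.26%)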