More important, they found that the matches occurred in the correct topographic (locational) order. The first dictabelt gunshot impulse pattern matched a test shot recorded on a microphone on Houston Street, close to the intersection with Elm Street. The next dictabelt gunshot impulse pattern matched a test shot recorded at the next microphone farther north on Houston Street. The third dictabelt gunshot impulse pattern matched a test shot recorded on a microphone in the intersection of Houston and Elm. The fourth dictabelt gunshot impulse pattern matched a test shot recorded on a microphone farther down on Elm Street. And the fifth dictabelt gunshot impulse pattern matched a test shot recorded on the next microphone on Elm Street.
The odds that these stunning topographic correlations could be coincidence are 125 to 1 against. Why? Because there are 125 ways that any five events can be sequenced, e.g., 5-2-4-1-2, 2-1-4-5-3, 4-5-3-1-2, 3-5-4-2-1, 3-2-4-1-5, etc., etc. Only one of those 125 ways is 1-2-3-4-5. In other words, the odds that these locational correlations are not the result of chance are 124 out of 125, or 99.20%.
Mr. Griffith corrected my English, on when to use “it’s” and “its”, something I should have learned in high school. Let me return the favor and correct him on a little math he should have learned in high school.
What are the odds of five things ending up in the correct order? One out of 125? How did he get that? Five to the third power?
No, the number of permutations of a set with “n” members is n-factorial, that is: n * (n-1) * (n-2) * . . . * 2 * 1. So, the number of ways to order a set with 5 members is 5 * 4 * 3 * 2 * 1 = 120, not 125.
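A quick check in Python, using the standard library's factorial, confirms the count. It also shows where the 125 figure likely came from: five to the third power, as suggested above.

```python
from math import factorial

def orderings(n):
    """Number of ways to order a set of n distinct items: n!"""
    return factorial(n)

# Five gunshot impulse patterns can be sequenced in 5! = 120 ways.
print(orderings(5))   # 120

# 5 to the third power, the apparent source of the erroneous 125 figure.
print(5 ** 3)         # 125
```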
Now, what about the remarkable order of the apparent positions of the motorcycle over time, how it appears to be moving up Houston and then down Elm Street at a steady pace? Well, it turns out that this depends heavily on cherry-picking the data, as BBN did. For instance, BBN found correlations for the first “shot” at microphones 2(5), 2(5), 2(6) and 2(6). Quite good. For the second “shot”, at microphones 2(6), 2(6), 2(10) and 3(5). Wildly bad: these correlations are spread along a stretch of about 84 feet of Houston and Elm. The rest were not as bad.
If you make careful selections of which correlations are considered “good”, then for the 5 shots (including Dr. Thomas’s fifth shot) you get:
2(5), 2(6), 2(11), 3(4) and 3(5). If the first microphone, at 1(1), is considered to be at distance 0, and we assume a distance of 12 feet between each microphone, we get distances along this track of:
168, 180, 204, 252, 264 feet.
Note: These distances are very rough. And the second section of microphones, 2(1) through 2(12), is much shorter than the other two, because these microphones were not arranged in a straight line but bunched up over a short distance around the bend from Houston to Elm, where the street was widest.
Ok. This is quite good. We get a nice steady pace of around 11 mph. Sounds quite plausible.
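As a rough sanity check on that pace: the elapsed time between the first and fifth impulses is not given here, so the six-second figure in the sketch below is purely a hypothetical placeholder chosen for illustration, not a number from the BBN report.

```python
# Cherry-picked distances along the microphone track (feet), from the text.
distances = [168, 180, 204, 252, 264]

# HYPOTHETICAL elapsed time between the first and fifth impulses;
# illustrative only, not a figure taken from the BBN analysis.
elapsed_seconds = 6.0

feet_traveled = distances[-1] - distances[0]   # 96 feet
speed_fps = feet_traveled / elapsed_seconds    # 16 ft/s
speed_mph = speed_fps * 3600 / 5280            # convert ft/s to mph
print(round(speed_mph, 1))                     # ~10.9 mph
```

Under that assumed timing, the cherry-picked track indeed works out to roughly 11 mph.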
But what if we make a different selection of which correlations are considered good? In that case we can get:
2(5), 3(5), 2(11), 3(8) and 3(5). This gives us the distances along this track for the 5 shots of:
168, 264, 204, 300, 264 feet.
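The step-by-step displacements, computed from the distances just listed, make the problem obvious:

```python
# Distances (feet) for the alternative selection of "good" correlations.
distances = [168, 264, 204, 300, 264]

# Displacement between consecutive shots; a negative step means the
# motorcycle would have had to reverse back toward Houston Street.
steps = [b - a for a, b in zip(distances, distances[1:])]
print(steps)   # [96, -60, 96, -36]

# Count direction changes (sign flips between consecutive steps).
direction_changes = sum(1 for a, b in zip(steps, steps[1:]) if a * b < 0)
print(direction_changes)   # 3 reversals of direction
```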
This gives us a much more erratic pattern. It appears the motorcycle initially sped forward at a tremendous rate between shots 1 and 2, then reversed direction to head back toward Houston to record the third shot, then reversed direction again and rode down Elm Street to record the fourth shot, before reversing direction a third time and heading back toward Houston to record the fifth shot.
You see, a lot depends on which correlations you decide you like. And which ones you decide to reject as “false alarms”.
Ok, let’s do it one more time, this time not trying to make the data look good or bad. We accept the position of every correlation we get, and take the average of all the correlations for each shot. In that case, we get the distances of:
174, 207, 204, 280, 280 feet.
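For the first shot, this averaging can be checked directly from the correlations listed earlier: microphones 2(5), 2(5), 2(6) and 2(6), at roughly 168 and 180 feet along the track.

```python
from statistics import mean

# Correlations BBN found for the first "shot": microphones 2(5), 2(5),
# 2(6) and 2(6), at roughly 168 and 180 feet along the track.
shot1_distances = [168, 168, 180, 180]

avg = mean(shot1_distances)
print(avg)   # 174 feet, the first value in the averaged list above
```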
Well, not too bad. A couple of hiccups here and there: an apparent reversal between the second and third shots, and the motorcycle appears stopped between the fourth and fifth shots. But still, not too bad.
Well, why didn’t they do the obvious thing and base their position estimate of the motorcycle on the average result they got from their data? Because their data seemed in many ways to be random.
Four different, widely spaced targets were used. Of the 15 found correlations, only 4 corresponded with the location of the limousine at that time, a success rate of about 1 in 4, exactly as would be expected with random results.
For the source of the gunfire, the correlations were not consistent. Of the four judged shots with multiple correlations, three gave two different locations for the source of the fire: the TSBD and the Grassy Knoll. Only for the last shot, with three correlations, did all agree on the same source, the TSBD. Of the 12 test shots, 8, or 67 percent, were fired from the TSBD. Of the 15 found correlations, 12, or 80 percent, pointed to the TSBD. Again, the results are close to what one would expect of random data.
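The closeness to chance expectation is simple arithmetic, using only the counts given above:

```python
# Limousine-position test: 4 of 15 correlations matched the limousine's
# location, with 4 widely spaced targets in use.
observed = 4 / 15
expected_if_random = 1 / 4
print(round(observed, 2), expected_if_random)   # 0.27 vs 0.25

# Source of fire: the TSBD's share of test shots vs. its share of
# the found correlations.
tsbd_share_of_shots = 8 / 12
tsbd_share_of_correlations = 12 / 15
print(round(tsbd_share_of_shots, 2), tsbd_share_of_correlations)   # 0.67 vs 0.8
```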
Only for the location of the limousine could a case be made for good, non-random data, provided the data was cherry-picked. If this was done, then it could be argued, as BBN did, that the data supports a plausible, fairly constant 11 mph speed for the motorcycle with the stuck microphone.
But how did they make their case? Did they propose using cherry-picked data, while also presenting arguments in favor of using all their data and the average locations to show the position of the motorcycle, and then argue for why the cherry-picked method is better? No. They never made any argument about why using cherry-picked data is superior to using the averages of the data. They simply presented their cherry-picked data, in the form of a map of Dealey Plaza with four circles, nicely spaced, which supports a near constant speed of 11 mph for the motorcycle.
One can still argue: well, maybe there is not a 1 in 120 chance that we would get such a correlation of the motorcycle’s movement with an 11 mph steady progress, but the data is consistent with a motorcycle progressing up Houston and down Elm in generally the right direction, which suggests the data is not random. But there is one more thing to consider, beyond cherry-picking which correlations to use, that could skew the data. And that is deciding which part of the raw data to search for comparisons. It would be natural to assume they searched all possible 2,592 combinations of the:
• 432 recordings from the 1978 tests, 36 microphones, each recording 12 test shots.
with:
• The six suspect impulses from the 1963 Dictabelt recording.
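The combination count above follows directly from these two figures:

```python
microphones = 36
test_shots = 12
recordings = microphones * test_shots          # 432 test-shot recordings
suspect_impulses = 6                           # impulses on the Dictabelt
combinations = recordings * suspect_impulses   # possible comparisons
print(combinations)                            # 2592
```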
But they had only 10 days to do all these comparisons, and the associated calculations, to find the strongest correlations. So it is plausible they searched only where a correlation could be valid. After finding one shot, likely the second, they could skip the hundreds of possible comparisons of the early shots against the microphones on Elm Street, where those shots couldn’t be, and skip the hundreds of possible comparisons of the later shots against the microphones along Houston Street and around the bend, where those couldn’t be. This approach may have been dictated by time constraints: 10 days between the shooting tests and the phone call to the HSCA reporting the finding of 15 correlations.
And the search may have been a good deal more focused than that. After finding one shot, let’s say the second, they might initially search only the section consistent with an 11 mph motorcycle for the first shot. Upon finding random correlations that convinced them the motorcycle was indeed moving at about the same speed as the limousine, they could do the same with the third, fourth and fifth shots, resulting in data that matches fairly well with an 11 mph motorcycle, even though the data is random noise.
BBN did all they could to make the case that their data was good. They cherry-picked their data and drew a map with 4 circles showing how consistent this cherry-picked data was with an 11 mph motorcycle. If there was more that could have been done, they would have done it.
They could have said: “It takes about ‘x’ minutes to make one comparison and the associated calculations. That rate can be maintained for 6 hours during a hard-working day. We had ‘y’ men on this job. After a couple of days, we could process ‘z’ combinations per day. By the eighth day, we had completed all 2,592 combinations.” This would have helped nail down the somewhat good correlation between the data and the location of the motorcycle. But there is no statement to this effect, no statement that unequivocally says that all 2,592 comparisons were actually done and completed.
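To see what such a statement would have to cover, here is the back-of-the-envelope arithmetic. The 2,592 total and the 6-hour working day come from the text above; the eight-day window is taken from the hypothetical statement, and everything else is derived from those figures for illustration only.

```python
# All figures besides the 2,592 total and the 6-hour day are illustrative,
# standing in for the 'x', 'y' and 'z' BBN never supplied.
total_comparisons = 2592
working_days = 8          # "by the eighth day" in the hypothetical statement
hours_per_day = 6         # "maintained for 6 hours during a hard-working day"

per_day = total_comparisons / working_days   # 324 comparisons per day
per_hour = per_day / hours_per_day           # 54 per hour
minutes_each = 60 / per_hour                 # ~1.1 minutes per comparison
print(per_day, per_hour, round(minutes_each, 1))
```

Whether a team could actually sustain roughly one comparison a minute, every working hour for eight days, is exactly the kind of claim such a statement would have had to defend.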
Combined with the randomness of the data, except as regards the apparent speed of the motorcycle, I believe that only a subset of the 2,592 comparisons was actually made.