
Author Topic: The HSCA Acoustical Evidence: Proof of a Second Gunman in the JFK Assassination  (Read 11569 times)

Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1727

No, it is not true. Dr. Thomas has explained in great detail why it is false. I quoted Dr. Thomas's entire rebuttal to the NRC panel's 78% figure, but you snipped and ignored the rebuttal, and then treated us to more of your diversionary observations and errant speculation.

You pull this stunt all the time: You ignore arguments that you can't explain; you type several paragraphs of irrelevant observations and flawed speculation; and then you pretend that you have dealt with the arguments.

I am not going to deal with Dr. Thomas, who has no support from acoustics experts, not even from his closest allies, for his principal argument, the 1-in-100,000 claim.



One, your explanation is patently absurd. Two, I have not ignored your explanation but have responded to it several times, noting that, among other things, it simply ignores the details of the sonar analysis and the fact that the sonar analysis was able to simulate closer microphones and 180 positions.

"But, but, but . . . they didn't check this microphone or that microphone," etc., etc. Why don't you deal with what they did do in the sonar analysis? Why don't you deal with Dr. Thomas's point-by-point refutation of the NRC panel's bogus value assumptions for their 78% probability of chance?

Not checking for other possibilities is a big deal.

What if, in addition to checking for correlations of the 145.15 impulse with a test shot from the grassy knoll at Target 3, they had also checked the test shot from the TSBD at Target 3?

What if Weiss and Aschkenasy had found a strong correlation with test shots from both the TSBD and the grassy knoll? Just as BBN did in their study.

What if Weiss and Aschkenasy had found strong correlations with test shots fired at different targets? Like strong correlations for both Targets 1 and 3, with the same 1963 impulse. Just as BBN did in their study.

What if Weiss and Aschkenasy had found strong correlations at two different locations, like within 5 feet of microphone 3 ( 4 ) and within 7 feet of microphone 3 ( 8 )? A strong correlation of the 145.15 impulse at two different microphone locations 40 feet apart. Just as BBN did in their study.


We would conclude that these estimates of the odds of getting ‘false alarms’ or ‘false positives’, whether 5% or 1-in-100,000, are clearly false.

But Weiss and Aschkenasy did not check for these correlations. Their explanation is that they did not have enough time, even though they were using computers to check for correlations. I am skeptical of this. I think they were afraid of having their own data discredit the conclusions they wanted to reach. By testing just one 1963 impulse against just one 1978 test shot, they ensured that this could not happen.



What??? You still have not read the BBN report, have you?

In point of fact, the BBN scientists were tremendously impressed with the locational correlations between the dictabelt gunshots and the test-firing gunshots. They determined that the probability that chance caused those correlations was “less than 1%.” Figure 22 in the BBN report shows the microphone positions along the motorcycle route where high correlations were obtained. The BBN scientists referred to this figure in explaining why there was less than a 1% probability that chance caused the time-distance correlations. I quote from the BBN report, which you really should read some day:

Yes. I have read this. But there is no statement about checking all 3,024 combinations of the seven impulses with the 432 recordings from 1978. Not only do they not say they checked all combinations, they don’t even seem to know how many possible combinations there are. They refer to only 2,592 combinations, not 3,024. There is no way they manually checked 3,024 combinations while believing they had checked only 2,592.
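
For what it is worth, the arithmetic behind those two figures is simple. A minimal sketch follows; the only inputs are the counts already cited in this thread (7 dictabelt impulses, 12 test shots recorded at 36 microphone positions).

Code
# Counts cited in this thread: 7 impulse patterns on the 1963 dictabelt,
# and 432 test-shot recordings from 1978 (12 test shots recorded at 36
# microphone positions).
impulses_1963 = 7
test_shots = 12
microphones = 36
recordings_1978 = test_shots * microphones          # 432

all_combinations = impulses_1963 * recordings_1978  # 3,024
bbn_cited_figure = 6 * recordings_1978              # 2,592, i.e. only six impulses

print(all_combinations, bbn_cited_figure)           # 3024 2592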


Without knowing which combinations were checked, and which were not, there is no way you or I or anyone else can calculate the odds of “finding” that the motorcycle seemed to be at the right place. That result might only reflect their checking a limited number of microphones for each impulse.

Plus, how much faith can we put into what Dr. Barger said? Below is BBN Exhibit F-367


http://mcadams.posc.mu.edu/russ/infojfk/jfk2/f367.htm


Now, below, I quote from the same report you quoted from, just a few lines down:

Quote
There remain nine correlations that exceeded the detection threshold, and they occur at four different times:

Group 1. 137.70 sec -- four correlations with test shots from the TSBD at Targets 1 and 3.*

Group 2. 139.27 sec -- three correlations with test shots from the TSBD at Target 3.

Group 3. 145.15 sec -- one correlation with a test shot from the knoll at Target 3.

Group 4. 145.61 sec -- two correlations with test shots from the TSBD at Targets 3 and 4.

*Possibly because of the presence of an overhead sign that interfered with test shots at Target 2, no correlations were found with that target.

Note the inaccuracies of the above paragraph, when compared with F-367.

With impulse 137.70, four correlations were not found from the TSBD at Targets 1 and 3.
Instead, three correlations were found from the TSBD at Targets 1 and 3, plus a fourth correlation from the Grassy Knoll at Target 4.

With impulse 139.27, not just three correlations were found from the TSBD with Target 3.
In addition, a fourth correlation was also found from the Grassy Knoll with Target 3. But BBN simply let this inconvenient correlation go down the memory hole.

With impulse 145.15, not just one correlation was found from the Grassy Knoll with Target 3.
In addition, two more correlations were also found from the TSBD with Target 3. But again, BBN simply let these inconvenient correlations go down the memory hole as well.

With impulse 145.61, two correlations with test shots from the TSBD at Targets 3 and 4 were not found.
Instead, three correlations with test shots from the TSBD at Targets 2, 3, and 4 were found.


BBN simply pruned the unwanted correlations they wished they hadn’t found from their final report. This is unacceptable.


Also note the excuse they give for not finding a correlation with the test shot at Target 2: “the presence of an overhead sign that interfered with test shots at Target 2.”
The 1963 microphone can supposedly be right behind the motorcycle windshield, totally blocking a direct path from the rifle to the microphone, and correlations can still be found with the rifle location, the target location, and the microphone location.

But an overhead sign that is dozens of feet from the 1978 rifle and dozens of feet from the 1978 microphone can block out the sound waves and prevent a correlation from being found? And only when firing at Target 2, not when firing at Targets 1 or 3?



And what did the NRC panel have to say about this powerful evidence? They argued that the BBN scientists had erred, that the BBN value of P<0.01 (i.e., less than 1%) should actually be P=0.07 (i.e., 7%), and that therefore the “significance of the layout” indicated by Figure 22 is “considerably reduced” (NRC report, p. 37). Alright, so instead of the probability of chance being less than 1%, the NRC panel said it is 7%.

Now, many readers will say, “Well, okay, but that means the NRC guys admitted that the probability that chance caused those correlations is only 7%, which means the probability that the police tape was recorded by a motorcycle in Dealey Plaza is 93%.” Indeed.

The NRC panel made no effort to explain the significance of the fact that their own calculation found a 93% probability that the locational correlations occurred because the impulse patterns on the police tape were recorded by a motorcycle in Dealey Plaza. In fact, they did not even specifically mention this. They simply noted that they determined the probability of chance was 7% and acted as though they had dealt a strong blow to the BBN report. Granted, a 7% probability of chance is much more than a <1% probability, but it is still an extremely low probability.

On a side note, the BBN report explains that not every recorded shot would have an N-wave (shock wave) in its impulse patterns because the microphone that recorded the shot was not in position to record it; but, if the microphone were in position to record the N-wave, the N-wave would be “a significant part” of the echo pattern:

As noted in previous posts, this is exactly what we see in the acoustical evidence: An N-wave appears in those dictabelt shots that were recorded when the motorcycle was in position to record the N-wave, and no N-wave appears in those shots where the motorcycle was not in position to record it (BBN report, 8 HSCA 49-50).

It seems that the motorcycle shield, or the torso of Officer McLain, should have interfered with all the “N-waves”.



Finally, I want to return to this question of the extremely limited tests that Weiss and Aschkenasy made. I don’t know, but let’s say they were using computer punch cards. They made:

A.   A computer program to compare any 1963 waveform with the theoretical waveform from many positions near a 1978 microphone. This has got to be 99% of the work.
B.   One or more computer cards to represent the 1978 waveform from the test shot from the grassy knoll, at Target 3, recorded at microphone 3 ( 4 ).
C.   One or more computer cards to represent the 1963 145.15 waveform.

Then they ran the program to find the location near microphone 3 ( 4 ) that would, in theory, have given the best correlation. A minimal sketch of this kind of comparison appears below.
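
Purely as an illustration of the idea, not of Weiss and Aschkenasy’s actual program, here is a minimal sketch of a sliding-correlation comparison. The waveform data are random placeholders; only the general technique matters: slide one echo pattern across the other and keep the offset with the highest correlation coefficient.

Code
import numpy as np

def correlation_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation between two equal-length impulse patterns."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_match(dictabelt_segment: np.ndarray, test_shot_pattern: np.ndarray):
    """Slide the 1978 test-shot echo pattern across the 1963 segment and
    return (offset in samples, best correlation coefficient)."""
    n = len(test_shot_pattern)
    best = (0, -1.0)
    for offset in range(len(dictabelt_segment) - n + 1):
        r = correlation_coefficient(dictabelt_segment[offset:offset + n],
                                    test_shot_pattern)
        if r > best[1]:
            best = (offset, r)
    return best

# Placeholder data standing in for digitized impulse patterns.
rng = np.random.default_rng(0)
segment_1963 = rng.normal(size=2000)
knoll_pattern_1978 = segment_1963[500:800] + 0.3 * rng.normal(size=300)
tsbd_pattern_1978 = rng.normal(size=300)   # a hypothetical second test shot

print(best_match(segment_1963, knoll_pattern_1978))
print(best_match(segment_1963, tsbd_pattern_1978))

The relevant point is the one being argued here: once a routine like best_match exists, comparing the same 1963 segment against a second test-shot pattern is one more call, not a second program.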

That much is fine. Now, after going through all that work, why not go the extra 1% and make:
B.   One or more computer cards to represent the 1978 waveform from the test shot from the TSBD, at Target 3, recorded at microphone 3 ( 4 ).
And run the same program again, this time with the 1978 TSBD waveform.

If this had been done, what effect would it have had on their final report if they had found a strong correlation for the Grassy Knoll shot, 4 feet up the street from microphone ( 4 ), but also found just as strong a correlation for the TSBD shot, 7 feet down the street from the same microphone?


Mr. Griffith generally dodges simple questions, but I will try it again. Never trust anyone who routinely dodges simple questions:

Question 1: What effect would it have had on Weiss and Aschkenasy’s final report if they had done this and found just as strong a correlation near microphone 3 ( 4 ) for the test shot from the TSBD as they did for the test shot from the Grassy Knoll?

I would say it would have been really bad. How could the 145.15 waveform match up with both the Grassy Knoll shot and the TSBD shot?


Question 2: Why didn’t Weiss and Aschkenasy take the extra step, just slightly more work to punch out a few more cards, to run this correlation?

I say it was likely because they feared, consciously or subconsciously, the two-correlation scenario I have described. Do you have a better theory?


By the way, I used to be a computer programmer from that era, so I know that preparing the data to be used was a fraction of the work of writing the computer program itself.



Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 929
I am not going to deal with Dr. Thomas, who has no support from acoustics experts, not even from his closest allies, for his principal argument, the 1-in-100,000 claim.

LOL! Uh-huh, in other words, you can't refute the cold, hard math of Dr. Thomas's calculation, so you're going to keep using this lame excuse to ignore it.   

You accept the NRC panel's claims, even though they have "no support from acoustics experts." In fact, all six of the HSCA acoustical experts rejected the NRC panel's no-gunshots finding and reaffirmed their four-shots finding. But, of course, you don't care, because you want to believe what the NRC panel said.

You accept the claims of amateurs like O'Dell and Myers and Bowles, even though they have "no support from acoustics experts," and even though the only acoustics experts who have studied the dictabelt recording have said it contains four gunshots.

You assume that even though Dr. Barger proofread Dr. Thomas's 2001 article, Dr. Barger disagrees with the 1 in 100,000 calculation, even though it greatly strengthens the case for Dr. Barger's conclusions! Yeah, makes perfect sense.

You assume that even though Dr. Thomas consulted extensively with Dr. Barger when he wrote the four chapters on the acoustics evidence for his book, Dr. Thomas must have once again failed to mention that, per your theory, Dr. Barger does not agree with his 1 in 100,000 calculation, even though anyone with sufficient math skills can confirm the accuracy of the calculation for themselves.

Not checking for other possibilities is a big deal.

It is a bigger deal that you keep ignoring the fact that your "other possibilities" were ruled out by BBN as false alarms and that the WA sonar analysis proved they were false matches and proved that the match BBN found with the highest correlation coefficient for the dictabelt grassy knoll shot--i.e., the grassy knoll test shot--was indeed the true match for the dictabelt grassy knoll shot. It is amazing that you just keep ignoring this fact.

What if, in addition to checking for correlations of the 145.15 impulse with a test shot from the grassy knoll at Target 3, they had also checked the test shot from the TSBD at Target 3?

What if Weiss and Aschkenasy had found a strong correlation with test shots from both the TSBD and the grassy knoll? Just as BBN did in their study.

What if Weiss and Aschkenasy had found strong correlations with test shots fired at different targets? Like strong correlations for both Targets 1 and 3, with the same 1963 impulse. Just as BBN did in their study.

What if Weiss and Aschkenasy had found strong correlations at two different locations, like within 5 feet of microphone 3 ( 4 ) and within 7 feet of microphone 3 ( 8 )? A strong correlation of the 145.15 impulse at two different microphone locations 40 feet apart. Just as BBN did in their study.

This stuff, AGAIN? I say AGAIN: The BBN scientists explained in their report why they determined that the two TSBD matches for the 145.15 impulse pattern were false matches. AGAIN, for one thing, the two TSBD matches had a lower correlation coefficient than did the grassy knoll match. The grassy knoll test shot had the highest correlation coefficient, so, logically and naturally enough, WA conducted their sonar analysis on that test shot instead of the two TSBD test shots, and the sonar analysis proved beyond rational doubt that the grassy knoll test shot was the true match for 145.15.

We would conclude that these estimates of the odds of getting ‘false alarms’ or ‘false positives’, whether 5% or 1-in-100,000, are clearly false.

No, we would conclude that you have not read the BBN report, that you are determined not to believe the acoustical evidence because it destroys your theory of the shooting, and that you lack either the capacity or the integrity to deal credibly with the HSCA acoustical research.

And, AGAIN, WA pointed out that the probability of chance is actually "considerably less" than 5%, and they explained why this is. They explained that they limited the comparison timespan to two periods that totaled 180 milliseconds because the test-shot impulses only occurred in those two periods, but that if one assumes that random noise caused the correlations, then the comparison timespan should be 370 milliseconds, not 180, which, of course, greatly reduces the probability of chance. I've pointed this out to you four times now, but you just keep ignoring it.
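
For readers who want to see why widening the assumed time span lowers the chance figure, here is a toy model of my own, not WA's actual computation. The impulse count, match count, and window width below are hypothetical placeholders; only the direction of the effect is the point.

Code
from math import comb

def chance_of_k_or_more_hits(n_impulses: int, k: int,
                             window_ms: float, span_ms: float) -> float:
    """Toy binomial model: each of n_impulses randomly timed impulses lands
    inside the coincidence windows (total width window_ms) within a span of
    span_ms with probability window_ms / span_ms; return P(at least k hits)."""
    p = window_ms / span_ms
    return sum(comb(n_impulses, i) * p**i * (1 - p)**(n_impulses - i)
               for i in range(k, n_impulses + 1))

# Hypothetical numbers purely to show the direction of the effect: widening
# the assumed span from 180 ms to 370 ms lowers the per-impulse hit
# probability and therefore the overall chance-coincidence figure.
print(chance_of_k_or_more_hits(10, 8, window_ms=20, span_ms=180))
print(chance_of_k_or_more_hits(10, 8, window_ms=20, span_ms=370))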

But Weiss and Aschkenasy did not check for these correlations. Their explanation is that they did not have enough time, even though they were using computers to check for correlations. I am skeptical of this. I think they were afraid of having their own data discredit the conclusions they wanted to reach. By testing just one 1963 impulse against just one 1978 test shot, they ensured that this could not happen.
 
Yes. I have read this. But there is no statement about checking all 3,024 combinations of the seven impulses with the 432 recordings from 1978. Not only do they not say they checked all combinations, they don’t even seem to know how many possible combinations there are. They refer to only 2,592 combinations, not 3,024. There is no way they manually checked 3,024 combinations while believing they had checked only 2,592.

Without knowing which combinations were checked, and which were not, there is no way you or I or anyone else can calculate the odds of “finding” that the motorcycle seemed to be at the right place. That result might only reflect their checking a limited number of microphones for each impulse.

Plus, how much faith can we put into what Dr. Barger said? Below is BBN Exhibit F-367


http://mcadams.posc.mu.edu/russ/infojfk/jfk2/f367.htm


Now, below, I quote from the same report you quoted from, just a few lines down:

Note the inaccuracies of the above paragraph, when compared with F-367.

With impulse 137.70, four correlations were not found from the TSBD at Targets 1 and 3.
Instead, three correlations were found from the TSBD at Targets 1 and 3, plus a fourth correlation from the Grassy Knoll at Target 4.

With impulse 139.27, not just three correlations were found from the TSBD with Target 3.
In addition, a fourth correlation was also found from the Grassy Knoll with Target 3. But BBN simply let this inconvenient correlation go down the memory hole.

With impulse 145.15, not just one correlation was found from the Grassy Knoll with Target 3.
In addition, two more correlations were also found from the TSBD with Target 3. But again, BBN simply let these inconvenient correlations go down the memory hole as well.

With impulse 145.61, two correlations with test shots from the TSBD at Targets 3 and 4 were not found. Instead, three correlations with test shots from the TSBD at Targets 2, 3, and 4 were found.

BBN simply pruned the unwanted correlations they wished they hadn’t found from their final report. This is unacceptable.

Also note the excuse they give for not finding a correlation with the test shot at Target 2: “the presence of an overhead sign that interfered with test shots at Target 2.”
The 1963 microphone can supposedly be right behind the motorcycle windshield, totally blocking a direct path from the rifle to the microphone, and correlations can still be found with the rifle location, the target location, and the microphone location.

But an overhead sign that is dozens of feet from the 1978 rifle and dozens of feet from the 1978 microphone can block out the sound waves and prevent a correlation from being found? And only when firing at Target 2, not when firing at Targets 1 or 3?

You have once again butchered F-367. I have told you several times that you are blunderingly misreading/misrepresenting F-367, but you just keep on repeating your blundering.

F-367 is Table II in the BBN report. If you would ever break down and read that report, you would discover that the BBN scientists used Figure 22 to help explain Table II/F-367. In referring to Table II/F-367, they said,

Quote
It becomes clear upon examination of the weapon, target, and microphone locations for the several echo patterns that passed the correlation detection test at each of the four different times, that some are inconsistent with each other. Thus, some or perhaps all represent false alarms. Deciding which are false alarms was greatly facilitated by plotting the microphone locations for each of the 15 echo patterns against the time on the DPD tape when it correlated highly. This plot appears in Fig. 22, where zero on the time scale is taken to be the time on the DPD tape where high correlations were first detected. (BBN report, 8 HSCA 102)

The BBN scientists then explained why the following matches were false matches/false alarms based on acoustical/time-distance-movement reasons:

137.70, array 2, target 4 (GK test shot)
139.27, array 3, target 2 (GK test shot)
145.15, array 3, target 2 (TSBD test shot)
145.15, array 3, target 3 (TSBD test shot)
145.61, array 3, target 2 (TSBD test shot)
(BBN report, 8 HSCA 105)

Notice that the two TSBD matches for 145.15 that you've been going on and on about were two of the false matches that BBN identified. Did you catch that? You would have known this weeks ago if you had read the BBN report before choosing to attack it. Personally, I don't attack a report until I have read it, but that's just me.

It seems that the motorcycle shield, or the torso of Officer McLain, should have interfered with all the “N-waves”.

Any honest person with decent eyesight can look at the bike's positions and see that you must be blind or dishonest to say this. Why do you suppose that the NRC panel, as desperate as they were to nitpick, misrepresent, and reject the acoustical evidence, did not make this argument?

Umm, and I notice that you said nothing about the NRC panel's finding that the probability that chance caused the locational correlations is only 7%. As I pointed out to you, and as you quoted back in your reply, the NRC panel said that instead of the BBN probability of chance of <1%, the probability of chance is actually 7%, which, of course, means that there is a 93% probability that the correlations are not due to chance but are due to the impulse patterns having been recorded by a motorcycle in Dealey Plaza.

Finally, I want to return to this question of the extremely limited tests that Weiss and Aschkenasy made. I don’t know, but let’s say they were using computer punch cards. They made:

A.   A computer program to compare any 1963 waveform with the theoretical waveform from many positions near a 1978 microphone. This has got to be 99% of the work.
B.   One or more computer cards to represent the 1978 waveform from the test shot from the grassy knoll, at Target 3, recorded at microphone 3 ( 4 ).
C.   One or more computer cards to represent the 1963 145.15 waveform.

Then they ran the program to find the location near microphone 3 ( 4 ) that would, in theory, have given the best correlation.

That much is fine. Now, after going through all that work, why not go the extra 1% and make:
B.   One or more computer cards to represent the 1978 waveform from the test shot from the TSBD, at Target 3, recorded at microphone 3 ( 4 ).
And run the same program again, this time with the 1978 TSBD waveform.

If this had been done, what effect would it have had on their final report if they had found a strong correlation for the Grassy Knoll shot, 4 feet up the street from microphone ( 4 ), but also found just as strong a correlation for the TSBD shot, 7 feet down the street from the same microphone?

Mr. Griffith generally dodges simple questions, but I will try it again. Never trust anyone who routinely dodges simple questions:

Question 1: What effect would it have had on Weiss and Aschkenasy’s final report if they had done this and found just as strong a correlation near microphone 3 ( 4 ) for the test shot from the TSBD as they did for the test shot from the Grassy Knoll?

I would say it would have been really bad. How could the 145.15 waveform match up with both the Grassy Knoll shot and the TSBD shot?

Question 2: Why didn’t Weiss and Aschkenasy take the extra step, just slightly more work to punch out a few more cards, to run this correlation?

I say it was likely because they feared, consciously or subconsciously, the two-correlation scenario I have described. Do you have a better theory?

By the way, I used to be a computer programmer from that era, so I know that preparing the data to be used was a fraction of the work of writing the computer program itself.

Blah, blah, blah, blah, blah. I have explained to you several times now why WA did not do a sonar analysis on the two TSBD matches for the dictabelt grassy knoll shot: (1) because the grassy knoll test shot had the highest correlation coefficient, (2) because the two TSBD test shots had a lower correlation coefficient than the grassy knoll test shot, and (3) because if the sonar analysis verified the grassy knoll test shot as the true match for 145.15, this would automatically confirm that the two (lesser-quality) TSBD matches were false alarms.

Not only did the two TSBD matches for 145.15 have a lower correlation coefficient than the grassy knoll match, but the two 145.15 TSBD matches and the 145.61 TSBD match were clearly false alarms based on time-distance-movement analysis, as the BBN scientists explained in their explanation of Table II/F-367:

Quote
The second and third entries at 145.15 sec and the third entry at 145.61 sec are false alarms, because the motorcycle would have had to travel at 16 mph to gain the indicated position of only 70 ft behind the limousine at the time of the last shot. The motorcycle noise level (see Fig. 4) decreased by about 10 dB just 3 sec before the time of the first correlations, indicating a slowing to negotiate the 120° turn onto Elm St. The motorcycle noise level did not increase for the next 13 sec, so it could not have increased speed to 16 mph and maintained it. (BBN report, 8 HSCA 105)

So there is no way that the two 145.15 TSBD matches could be true matches, and the sonar analysis confirmed this already-obvious fact. Are you going to keep claiming that WA should have done a sonar analysis on these two false matches?
« Last Edit: October 16, 2020, 05:16:22 PM by Michael T. Griffith »

Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1727

LOL! Uh-huh, in other words, you can't refute the cold, hard math of Dr. Thomas's calculation, so you're going to keep using this lame excuse to ignore it.   

I took a course on Calculus and Analytic Geometry in my first year of college. The class had 40 good math students in it. At the end of the quarter, 26 had dropped out and only 14 passed. I had the second highest grade. I was made a tutor to help other college students with their math homework, often helping students with basic algebra courses but sometimes calculus students taking the same course I had passed or was currently taking. But here I am being lectured on how I can’t face the hard math of Dr. Thomas by someone who can’t figure out how to solve one of the basic, classic math problems from high school Algebra 1.



You assume that even though Dr. Barger proofread Dr. Thomas's 2001 article, Dr. Barger disagrees with the 1 in 100,000 calculation, even though it greatly strengthens the case for Dr. Barger's conclusions! Yeah, makes perfect sense.

A perfectly good assumption since Dr. Barger would not remain silent if he agreed with this 1 in 100,000 calculation.


You assume that even though Dr. Thomas consulted extensively with Dr. Barger when he wrote the four chapters on the acoustics evidence for his book, Dr. Thomas must have once again failed to mention that, per your theory, Dr. Barger does not agree with his 1 in 100,000 calculation, even though anyone with sufficient math skills can confirm the accuracy of the calculation for themselves.

Sufficient math skills are not enough. One must know what constants to plug into the equation. Something an acoustic expert would know but an insect expert would not.



And, AGAIN, WA pointed out that the probability of chance is actually "considerably less" than 5%, and they explained why this is. They explained that they limited the comparison timespan to two periods that totaled 180 milliseconds because the test-shot impulses only occurred in those two periods, but that if one assumes that random noise caused the correlations, then the comparison timespan should be 370 milliseconds, not 180, which, of course, greatly reduces the probability of chance. I've pointed this out to you four times now, but you just keep ignoring it.

Yes. They claimed again and again and again that this is so. But they had a chance to give powerful support for this claim, and declined to do so. They could have checked for correlations with other test shots. They had the computer program already made for doing so. How much stronger this claim would have been if they had run this program, which was already developed, and discovered:

For impulse 137.70, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 1, for a location within a few feet of microphone 2 ( 5 ). No other strong correlations were found.
For impulse 139.27, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 2, for a location within a few feet of microphone 2 ( 6 ). No other strong correlations were found.
For impulse 145.15, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the Grassy Knoll, at Target 3, for a location within a few feet of microphone 3 ( 4 ). No other strong correlations were found.
For impulse 145.61, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 3, for a location within a few feet of microphone 3 ( 5 ). No other strong correlations were found.

Had this been done, this would not simply be an unsupported claim. This would be a claim with great support from the data.

Instead, all they did was:

For impulse 145.15, they compared one test shot with one recording, and found one strong correlation: a shot from the Grassy Knoll, at Target 3, for a location within a few feet of microphone 3 ( 4 ).


So we don’t know what the odds are of finding a “false alarm”, a false positive. For all we know the odds might be 1-in-100,000, or 1%, or 5%, or maybe a good deal higher if a lot of locations are tested within a few feet of a microphone.

Let’s say it was 1%. Running this program on all 3,024 combinations might have given us about 30 strong correlations, of which at least 26 would have to be false positives, possibly all 30.

Let’s say it was 5%. Running this program on all 3,024 combinations might have given us about 150 strong correlations, of which at least 146 would have to be false positives, possibly all 150.

Let’s say it was Dr. Thomas’s 1-in-100,000. Running this program on all 3,024 combinations would most likely have given us just 4 strong correlations.
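
A back-of-the-envelope version of that expectation, taking the false-alarm rates above as hypotheticals and the 3,024 combinations and at-most-four real shots as the only fixed inputs (small rounding differences from the figures quoted above are expected):

Code
# Rough expectation, following the argument above: if a fraction `rate` of
# all comparisons produces a "strong" correlation, then checking every
# combination yields about combinations * rate strong correlations, of which
# at most four could correspond to real shots.
combinations = 3_024      # 7 dictabelt impulses x 432 test-shot recordings
max_true_matches = 4

for rate in (0.01, 0.05, 1 / 100_000):
    expected_strong = combinations * rate
    min_false_positives = max(round(expected_strong) - max_true_matches, 0)
    print(f"rate {rate:g}: ~{expected_strong:.2f} strong correlations, "
          f"at least {min_false_positives} of them false positives")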


If they had run their program for many possible combinations, we would know. Their excuse for not doing so? Lack of time. This is an unbelievable excuse. On par with “The dog ate my homework”. Once the program is written, 99% of the work is done. For about the same amount of time, one could check out 1, 10, 36 or many more combinations. And risk finding clear false positives.

If they were doing a manual check, this would be more believable. Perhaps, by coincidence, they only had time to laboriously check one combination. But these comparisons were done by computer. So, they could have done many more, if they wanted to.

The clear explanation is that they did not want to demonstrate that their procedure would find false positives. That is why they did the minimum: compare just one 1963 waveform with one 1978 waveform, for one shot from one location, at one target, recorded at one microphone, so that finding contradictory correlations was impossible. That way, their study would not end up with the same obvious flaw the BBN study ended up with.



You have once again butchered F-367. I have told you several times that you are blunderingly misreading/misrepresenting F-367, but you just keep on repeating your blundering.

F-367 is Table II in the BBN report. If you would ever break down and read that report, you would discover that the BBN scientists used Figure 22 to help explain Table II/F-367. In referring to Table II/F-367, they said,

The BBN scientists then explained why the following matches were false matches/false alarms based on acoustical/time-distance-movement reasons:

137.70, array 2, target 4 (GK test shot)
139.27, array 3, target 2 (GK test shot)
145.15, array 3, target 2 (TSBD test shot)
145.15, array 3, target 3 (TSBD test shot)
145.61, array 3, target 2 (TSBD test shot)
(BBN report, 8 HSCA 105)

Notice that the two TSBD matches for 145.15 that you've been going on and on about were two of the false matches that BBN identified. Did you catch that? You would have known this weeks ago if you had read the BBN report before choosing to attack it. Personally, I don't attack a report until I have read it, but that's just me.

Finding false positives is bad. There are no special exceptions, like “False positives should always be considered bad, unless the authors of the study tell us to just ignore them.”



Any honest person with decent eyesight can look at the bike's positions and see that you must be blind or dishonest to say this. Why do you suppose that the NRC panel, as desperate as they were to nitpick, misrepresent, and reject the acoustical evidence, did not make this argument?

But we don’t know if BBN was doing the same thing as Weiss and Aschkenasy, just to a lesser extent. For all we know, they may have mostly limited their checking to combinations that were consistent with a motorcycle moving at 11 mph. So any correlation they found, even a false positive, would be limited to correlations that match the scenario of a motorcycle moving at 11 mph.

Again, I repeat: after more than 40 years, Dr. Barger has not given any information on which of the 3,024 combinations they checked manually over the course of 10 days, and which they did not. And so we cannot tell whether the correlation of the data with an 11-mph motorcycle was remarkable or exactly what was to be expected, even if they had been looking at a recording made at the Trade Mart.
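
To make the concern concrete, here is a minimal sketch of the kind of speed-consistency check at issue. The impulse times are the ones discussed in this thread; the positions along the route are hypothetical placeholders, not BBN’s actual microphone coordinates.

Code
# Check whether a set of (impulse time, matched microphone position) pairs
# is consistent with a motorcycle moving at roughly 11 mph. Positions are
# hypothetical distances in feet along the motorcade route.
FT_PER_SEC_PER_MPH = 5280 / 3600   # 1 mph = about 1.47 ft/s

matches = [
    (137.70, 0.0),     # (dictabelt time in seconds, position in feet)
    (139.27, 25.0),
    (140.32, 42.0),
    (145.15, 120.0),
    (145.61, 127.0),
]

for (t0, x0), (t1, x1) in zip(matches, matches[1:]):
    speed_mph = (x1 - x0) / (t1 - t0) / FT_PER_SEC_PER_MPH
    print(f"{t0:.2f}s -> {t1:.2f}s: implied speed ~{speed_mph:.1f} mph")

# The point being argued: if only microphones already consistent with
# ~11 mph were checked, speeds like these come out near 11 mph by
# construction, not as independent confirmation.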


And as I said before, to any reader: never trust someone who dodges a few simple questions:


Question 1 for Mr. Griffith:

Suppose BBN mostly limited their checking of combinations as follows:

Only checked recordings from microphones 2 ( 5 ) and 2 ( 6 ) with impulse 137.70,
only checked recordings from microphones 2 ( 6 ) and 2 ( 7 ) with impulse 139.27,
only checked recordings from microphones 2 ( 10 ) and 2 (11 ) with impulse 140.32,
only checked recordings from microphones 3 ( 3 ) and 3 ( 4 ) with impulse 145.15,
only checked recordings from microphones 3 ( 5 ) and 3 ( 6 ) with impulse 145.61,

would it be a remarkable coincidence that the strong correlations they found would be consistent with a motorcycle moving at 11 mph?

Yes or no?




Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 929
I took a course on Calculus and Analytic Geometry in my first year of college. The class had 40 good math students in it. At the end of the quarter, 26 had dropped out and only 14 passed. I had the second highest grade. I was made a tutor to help other college students with their math homework, often helping students with basic algebra courses but sometimes calculus students taking the same course I had passed or was currently taking. But here I am being lectured on how I can’t face the hard math of Dr. Thomas by someone who can’t figure out how to solve one of the basic, classic math problems from high school Algebra 1.

A perfectly good assumption since Dr. Barger would not remain silent if he agreed with this 1 in 100,000 calculation.

Sufficient math skills are not enough. One must know what constants to plug into the equation. Something an acoustic expert would know but an insect expert would not.

Yes. They claimed again and again and again that this is so. But they had a chance to give powerful support for this claim, and declined to do so. They could have checked for correlations with other test shots. They had the computer program already made for doing so. How much stronger this claim would have been if they had run this program, which was already developed, and discovered:

For impulse 137.70, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 1, for a location within a few feet of microphone 2 ( 5 ). No other strong correlations were found.
For impulse 139.27, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 2, for a location within a few feet of microphone 2 ( 6 ). No other strong correlations were found.
For impulse 145.15, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the Grassy Knoll, at Target 3, for a location within a few feet of microphone 3 ( 4 ). No other strong correlations were found.
For impulse 145.61, they compared it to all 12 test shots, with all 36 recordings of each, and found only one strong correlation: a shot from the TSBD, at Target 3, for a location within a few feet of microphone 3 ( 5 ). No other strong correlations were found.

Had this been done, this would not simply be an unsupported claim. This would be a claim with great support from the data.

Instead, all they did was:

For impulse 145.15, they compared one test shot with one recording, and found one strong correlation: a shot from the Grassy Knoll, at Target 3, for a location within a few feet of microphone 3 ( 4 ).

So we don’t know what the odds are of finding a “false alarm”, a false positive. For all we know the odds might be 1-in-100,000, or 1%, or 5%, or maybe a good deal higher if a lot of locations are tested within a few feet of a microphone.

Let’s say it was 1%. Running this program on all 3,024 combinations might have given us about 30 strong correlations, of which at least 26 would have to be false positives, possibly all 30.

Let’s say it was 5%. Running this program on all 3,024 combinations might have given us about 150 strong correlations, of which at least 146 would have to be false positives, possibly all 150.

Let’s say it was Dr. Thomas’s 1-in-100,000. Running this program on all 3,024 combinations would most likely have given us just 4 strong correlations.

If they had run their program for many possible combinations, we would know. Their excuse for not doing so? Lack of time. This is an unbelievable excuse. On par with “The dog ate my homework”. Once the program is written, 99% of the work is done. For about the same amount of time, one could check out 1, 10, 36 or many more combinations. And risk finding clear false positives.

If they were doing a manual check, this would be more believable. Perhaps, by coincidence, they only had time to laboriously check one combination. But these comparisons were done by computer. So, they could have done many more, if they wanted to.

The clear explanation is that they did not want to demonstrate that their procedure would find false positives. That is why they did the minimum: compare just one 1963 waveform with one 1978 waveform, for one shot from one location, at one target, recorded at one microphone, so that finding contradictory correlations was impossible. That way, their study would not end up with the same obvious flaw the BBN study ended up with.

Finding false positives is bad. There are no special exceptions, like “False positives should always be considered bad, unless the authors of the study tell us to just ignore them.”

But we don’t know if BBN was doing the same thing as Weiss and Aschkenasy, just to a lesser extent. For all we know, they may have mostly limited their checking to combinations that were consistent with a motorcycle moving at 11 mph. So any correlation they found, even a false positive, would be limited to correlations that match the scenario of a motorcycle moving at 11 mph.

Again, I repeat: after more than 40 years, Dr. Barger has not given any information on which of the 3,024 combinations they checked manually over the course of 10 days, and which they did not. And so we cannot tell whether the correlation of the data with an 11-mph motorcycle was remarkable or exactly what was to be expected, even if they had been looking at a recording made at the Trade Mart.

And as I said before, to any reader: never trust someone who dodges a few simple questions:

Question 1 for Mr. Griffith:

Suppose BBN mostly limited their checking of combinations as follows:

Only checked recordings from microphones 2 ( 5 ) and 2 ( 6 ) with impulse 137.70,
only checked recordings from microphones 2 ( 6 ) and 2 ( 7 ) with impulse 139.27,
only checked recordings from microphones 2 ( 10 ) and 2 (11 ) with impulse 140.32,
only checked recordings from microphones 3 ( 3 ) and 3 ( 4 ) with impulse 145.15,
only checked recordings from microphones 3 ( 5 ) and 3 ( 6 ) with impulse 145.61,

would it be a remarkable coincidence that the strong correlations they found would be consistent with a motorcycle moving at 11 mph?

Yes or no?


I think we have reached the point where it is obvious that dialogue with you on this issue is a waste of time.

I notice you once again snipped and ignored the fact that even the NRC panel admitted that there is only a 7% probability that the locational correlations are due to chance (random noise).

You finally decided to try to answer the point that WA pointed out that the probability of chance for the grassy knoll shot is "considerably less" than 5%, but your answer is comical. You erroneously claim that they passed up a chance to prove their point and then go off on another "well, they should have considered a, b, c, d, e, etc." diversion. But you ignore the fact that they did prove their point: They pointed out the obvious fact that if you expand the timespan for impulse-echo correlations from 180 milliseconds to 370 milliseconds, obviously this greatly reduces the odds that chance caused the 145.15-test shot correlations.

You are still making the silly argument that Dr. Barger secretly disagrees with Dr. Thomas's 1 in 100,000 calculation, and that Dr. Thomas has been hiding this disagreement all these years. What are you going to do when Dr. Thompson's acoustical section in his upcoming book endorses the 1 in 100,000 calculation, given that Dr. Thompson has been working with Dr. Barger on the acoustical evidence for years?

You are still making a number of claims that not even the NRC panel stooped so low as to make, such as your silly claim that McLain's torso would have intervened between the sound waves and the microphone, and your equally silly claims that "we don't know this or that" about what BBN and WA did. Maybe YOU don't know what they did, but people who have seriously, honestly studied the BBN and WA materials know what they did.

I notice that on several occasions you have basically accused BBN and WA of purposely making false claims about how they analyzed the dictabelt recording. Even the NRC panel didn't stoop so low.

Folks, I would hold off on further discussion on this topic until Dr. Thompson's book Last Second in Dallas comes out on December 3. As mentioned, Dr. Thompson has been working with Dr. Barger for the last several years on the acoustical evidence. Dr. Thomas, who has seen the manuscript, tells me that he believes that Dr. Thompson's section on the acoustical evidence will firmly establish beyond any reasonable dispute that the acoustical evidence is valid.




Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1727

I think we have reached the point where it is obvious that dialogue with you on this issue is a waste of time.

I notice you once again snipped and ignored the fact that even the NRC panel admitted that there is only a 7% probability that the locational correlations are due to chance (random noise).

But how could the NRC, or anyone else, calculate these odds without knowing which microphones BBN manually checked in those 10 days, and which they did not?


Suppose BBN made 90% of their search for combinations as follows:

Only checked recordings from microphones 2 ( 5 ) and 2 ( 6 ) with impulse 137.70,
only checked recordings from microphones 2 ( 6 ) and 2 ( 7 ) with impulse 139.27,
only checked recordings from microphones 2 ( 10 ) and 2 (11 ) with impulse 140.32,
only checked recordings from microphones 3 ( 3 ) and 3 ( 4 ) with impulse 145.15,
only checked recordings from microphones 3 ( 5 ) and 3 ( 6 ) with impulse 145.61,

The odds of finding good locational correlations would be considerably more than 7%.

It is possible that the NRC panel’s 7% probability estimate was based on a faulty assumption: that all 3,024 combinations of the seven 1963 waveforms with the 432 1978 waveforms were manually compared. If this was somehow done, and BBN never claimed to have done it, then the 7% probability estimate sounds reasonable. But I am not going to accept it until I learn which combinations BBN checked and which they did not.



You finally decided to try to answer the point that WA pointed out that the probability of chance for the grassy knoll shot is "considerably less" than 5%, but your answer is comical. You erroneously claim that they passed up a chance to prove their point and then go off on another "well, they should have considered a, b, c, d, e, etc." diversion. But you ignore the fact that they did prove their point: They pointed out the obvious fact that if you expand the timespan for impulse-echo correlations from 180 milliseconds to 370 milliseconds, obviously this greatly reduces the odds that chance caused the 145.15-test shot correlations.

Well, that is the essence of good science: checking for a, b, c, d, e. As far as I can tell, Weiss and Aschkenasy were focused on only one possibility:
Possibility A: This was a recording made at Dealey Plaza, and it recorded the sound of gunshots.
And they did not consider:
Possibility B: This was a recording not made at Dealey Plaza, and it did not record the sound of gunshots.
They should have used the computer to check both possibilities.

If they had checked many combinations of 1963 waveforms with 1978 waveforms and found contradictions, the same sort of contradictions Dr. Barger and BBN found all too easily (the same 1963 waveform matching test shots from both the TSBD and the Grassy Knoll), this would indicate that the correlations they found were weak, were the result of random chance, and that Possibility B has the most support.

But if they had done this same check and gotten results that contradict what BBN found, with no such duplicate correlations, with every 1963 waveform giving a strong correlation for only one shot from one location, at one target, recorded near one microphone, and with the target and microphone locations being reasonable, then Possibility A would have the most support.

I don’t see Weiss and Aschkenasy doing anything that looks like good science, namely checking more than one hypothesis.



You are still making the silly argument that Dr. Barger secretly disagrees with Dr. Thomas's 1 in 100,000 calculation, and that Dr. Thomas has been hiding this disagreement all these years. What are you going to do when Dr. Thompson's acoustical section in his upcoming book endorses the 1 in 100,000 calculation, given that Dr. Thompson has been working with Dr. Barger on the acoustical evidence for years?

This would not change my mind in the slightest, since Dr. Josiah Thompson is no more an acoustics expert than Dr. Donald Thomas. What doctors of philosophy or entomology have to say on this subject is not relevant. You need to get a quote from someone like Dr. Barger.



You are still making a number of claims that not even the NRC panel stooped so low as to make, such as your silly claim that McLain's torso would have intervened between the sound waves and the microphone, and your equally silly claims that "we don't know this or that" about what BBN and WA did. Maybe YOU don't know what they did, but people who have seriously, honestly studied the BBN and WA materials know what they did.

I think it is likely that McLain’s torso would have been in the way for a later shot from the TSBD. Just look at a good ‘acoustic’ map of Dealey Plaza. And remember that Weiss and Aschkenasy said they believed the microphone was mounted on the left side of the motorcycle. Any reader can judge for themselves if it looks likely that Officer McLain’s torso would block a direct line of sight to the TSBD, for the “fifth” shot.

•   Note: For this argument, I am accepting BBN’s claim as to where Officer McLain was, within green circle 5, even though he was most certainly much further back, where his torso would not block this line of sight.

https://www.maryferrell.org/wiki/images/6/60/Pict_essay_acousticshistory_AcousticMap_lrg.jpg


And as for your second point: we don’t know which of the 3,024 combinations of 1963 waveforms and 1978 waveforms BBN manually checked in those 10 days, and which they didn’t. They never said.

But if you do know, don’t withhold it from the rest of us. Provide a link to a clear statement from Dr. Barger or BBN about which combinations were checked.




Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 929
A bit more background information on Dr. Thomas might be helpful. In addition to being a USDA entomological research scientist, he is, of necessity, an expert on statistics. Dr. Thomas has authored or co-authored 116 scientific papers in the field of entomology. A number of his published scientific papers have included extensive and complex statistics. Here is a list of his papers:

https://www.researchgate.net/profile/Donald_Thomas6

Of course, Dr. Thomas's 2001 article on the acoustical evidence was published in the peer-reviewed criminal science journal Science & Justice. The article drew international attention, including a favorable review by the Washington Post, and reopened the debate on the acoustical evidence. Here is the article:

http://www.jfklancer.com/pdf/Thomas.pdf

Significantly, when four of the NRC panel members responded to Dr. Thomas's article, four years later, they did *not* challenge Dr. Thomas's 1 in 100,000 calculation! Four years after Dr. Thomas's article appeared, Drs. Ramsey, Horowitz, Chernoff, and Garwin, along with a Dr. Linsker, wrote a response to the article for Science & Justice, and their response was published in the volume 45, number 4, 2005 edition of the journal. Their entire argument was that since they had allegedly proved that the suspect impulse patterns on the police tape did not occur during the assassination, the BBN and WA correlations were all merely coincidences and that therefore they did not need to respond to Dr. Thomas's finding that there was only a 1 in 100,000 chance that the dictabelt grassy knoll shot was not caused by gunfire.



« Last Edit: October 21, 2020, 08:59:43 PM by Michael T. Griffith »

Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 929
Lone-gunman theorists frequently cite Michael O'Dell's amateurish and flawed research on the HSCA acoustical evidence. John McAdams carries O'Dell's research on his website. Come to find out that O'Dell has also dabbled in the RFK case and has produced equally flawed "acoustical analysis" in that case.

Mel "Conspiracies Never Happen" Ayton asked O'Dell to analyze the one and only recording of the RFK assassination, a recording made by a journalist named Pruszynski. O'Dell wrote that he was only able to identify six shots on the tape. If the tape contains more than eight shots, then there must have been more than one gunman, because Sirhan's gun could only hold eight bullets (and Sirhan had no chance to reload).

When six acoustical experts examined the Pruszynski tape, five of them determined that it contains at least 10 shots and at least one group of two shots that were fired within 148 milliseconds of each other, far too quickly to have been fired by the same gun.

The five experts were Philip Van Praag, a world-renowned expert on audio recording technology and the man who literally wrote the book on the development of audio recorders; Wes Dooley and Paul Pegas of Audio Engineering Associates in Pasadena, California; Edward Brixen in Copenhagen, Denmark, who is also a ballistics expert; and Phil Spencer Whitehead of the Georgia Institute of Technology in Atlanta, Georgia.

The one acoustical expert who did not find more than eight shots on the tape was Dr. Philip Harrison, who was asked by Mel Ayton to analyze the tape. Further investigation revealed that Harrison used a less-than-ideal copy of the tape, didn't use any of the specialized equipment that Van Praag used, didn't use any of the test or enhanced recordings that Van Praag made of the tape, somehow did not notice either of the 120-150-millisecond double-shot groups, and admitted there were several impulses on the tape whose sources he could not identify. Also, it turned out that Harrison was not even aware of Pruszynski's movements and did not know where the microphone was. It seems that Mel Ayton did not give Harrison all the facts when he asked him to analyze the recording.

Anyway, we are only a little over five weeks from the release of Dr. Josiah Thompson's highly anticipated book Last Second in Dallas, which will include a detailed defense and confirmation of the HSCA acoustical evidence. Dr. Thompson has been working with Dr. James Barger on the acoustical evidence for the last several years.

 
« Last Edit: October 24, 2020, 07:29:43 PM by Michael T. Griffith »

Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 929
A bit more background information on Dr. Thomas might be helpful.

Trust me, nobody cares a xxxx.

You mean you don't.

Lone-gunman theorists frequently cite Michael O'Dell's amateurish and flawed research on the HSCA acoustical evidence.

Mr. Science calling somebody's research amateurish while at the same time repeatedly failing to support his own fantasy dictabelt needle jump speaks volumes.

HUH??? I asked you several times to explain why you chose Decker's anomalous crosstalk, which has the largest time offset of any crosstalk event, as your time indicator as opposed to the five time indicators that put the gunfire segment during the assassination, i.e., the Channel 1 12:28 time notation, the Channel 2 12:30 time notation, the Fisher crosstalk, and Curry's two Dealey Plaza transmissions. You ducked this straightforward question every time.

Of course, we both know why you cling to Decker's crosstalk as your time indicator: otherwise, you would have to explain the powerful, intricate correlations between the police tape impulse patterns and the Dealey Plaza test shots. You would need to start by explaining why even the NRC panel admitted that there is only a 7% probability that chance caused the amazing locational correlations between the dictabelt gunshots and the test-firing gunshots.
