On the hazards of significance testing. Part 2: the false discovery rate, or how not to make a fool of yourself with P values

Jump to follow-up

What follows is a simplified version of part of a paper that has now appeared as a preprint on arXiv. If you find anything wrong, or obscure, please email me. Be vicious: it will improve the eventual paper.

The paper has now appeared in the new Royal Society Open Science journal. There is a comments section at the end of the paper, for discussion. The first comment is from me, a correction of a typo that was spotted within hours. Luckily it's pretty obvious.

It's a follow-up to my very first paper, which was written in 1959–60, while I was a fourth-year undergraduate (the history is at a recent blog). I hope this one is better.

". . . before anything was known of Lydgate's skill, the judgements on it had naturally been divided, depending on a sense of likelihood, situated perhaps in the pit of the stomach, or in the pineal gland, and differing in its verdicts, but not the less valuable as a guide in the total deficit of evidence."
George Eliot (Middlemarch, Chap. 45)

"The standard approach in teaching, of stressing the formal definition of a p-value while warning against its misinterpretation, has simply been an abysmal failure."
Sellke et al. (2001) The American Statistician (55), 62–71

The last post was about screening. It showed that most screening tests are useless, in the sense that a large proportion of people who test positive do not have the condition. This propo...
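To make the screening point concrete, here is a minimal sketch of the arithmetic behind it, computing the false discovery rate of a test from prevalence, sensitivity and specificity via Bayes' rule. The figures used (1% prevalence, 80% sensitivity, 95% specificity) are illustrative assumptions of mine, not numbers taken from the post.

```python
# Sketch: false discovery rate of a screening test via Bayes' rule.
# All numerical inputs below are illustrative assumptions, not figures
# from the post itself.

def false_discovery_rate(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are false positives."""
    true_pos = prevalence * sensitivity                # P(test +, has condition)
    false_pos = (1 - prevalence) * (1 - specificity)   # P(test +, no condition)
    return false_pos / (true_pos + false_pos)

# Example: a rare condition screened with a seemingly good test.
fdr = false_discovery_rate(prevalence=0.01, sensitivity=0.80, specificity=0.95)
print(f"False discovery rate: {fdr:.1%}")  # about 86% of positives are false
```

Even with a test that correctly flags 80% of true cases and wrongly flags only 5% of healthy people, roughly 86 out of every 100 positive results are false positives when the condition affects only 1% of those screened, which is exactly the sense in which such screening can mislead.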