It hasn’t been a great year for social science, with several high-profile scandals involving faked or bad data. The New York Times has reported on a new article in Science examining the reproducibility of 100 psychology papers. The paper concludes that 60 of these papers seemed to have issues. Fortunately, the authors did not uncover any fraudulent or outright false results, but it does seem that the original papers consistently overstated the evidence for their conclusions.
I haven’t had time to read the paper yet, but there are a few reasons why I think this could occur, so I might write a follow-up post or two about it. The Times article does mention that the replication studies were required to have the input of the original authors, to make sure the replications were as similar to the originals as possible.
I think it is important to note that the physical sciences are not immune to pathological results; sometimes bad results happen even when everything is done correctly. Statistical significance thresholds allow for this: given enough perfectly designed and executed studies, some are bound to be wrong by chance alone. Sometimes the authors simply forget to account for a systematic uncertainty, or there is a problem with the theoretical model being tested. That is why we often see “3-sigma” effects in high energy physics disappear into noise. While “3-sigma” nominally implies only about a 0.1% chance of a fluctuation that large arising from noise alone, that figure comes with a lot of caveats, especially when many analyses are being run. In something like the social sciences, the kind of precision found in much of physics is simply impossible to achieve, so the chance of getting a false result is likely to be much higher.
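To make the “some are bound to be wrong” point concrete, here is a minimal sketch (my own illustration, not from the replication paper): it models each study of a nonexistent effect as drawing a p-value uniformly at random, and counts how many clear the significance bar purely by chance. The function name, study count, and thresholds are all hypothetical choices for the demo; 0.05 is the conventional social-science cutoff, and 0.00135 is roughly the one-tailed tail probability at 3 sigma.

```python
import random

def count_false_positives(n_studies, alpha):
    """Simulate n_studies independent tests of a true null effect.

    Under the null hypothesis, a study's p-value is uniform on [0, 1],
    so it falls below alpha (a "significant" result) at rate alpha.
    """
    return sum(1 for _ in range(n_studies) if random.random() < alpha)

random.seed(0)  # fixed seed so the demo is repeatable
n = 10_000

# At p < 0.05, roughly 5% of null studies look significant (~500 here).
print(count_false_positives(n, 0.05))

# At a 3-sigma threshold, far fewer slip through (~13 or so here),
# but with enough analyses even these accumulate.
print(count_false_positives(n, 0.00135))
```

The gap between the two counts is the whole argument: a stricter threshold buys you fewer flukes per study, but no threshold drives the count to zero once the number of studies is large.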
Hopefully someone will vet this paper to see if it, too, has bad methodology.