Over at his blog Bylogos some time back, Dr. John Byl (PhD Astronomy) wrote an interesting post on whether we can trust published scientific data. He writes:
...
This naturally raises the question: How wide-spread is bias and fraud in science?
David Shatz, in his 2004 book Peer Review: A Critical Inquiry concludes that reviewers are “biased toward papers that affirm their prior convictions…are biased against innovation and/or are poor judges of quality.” Reviewers also seem biased in favor of authors from prestigious institutions. Shatz describes a study in which “papers that had been published in journals by authors from prestigious institutions were retyped and resubmitted with a non-prestigious affiliation indicated for the author. Not only did referees mostly fail to recognize these previously published papers in their field, they recommended rejection.” In 2003 the British Royal Society studied the effects of peer review. The chairman of the investigating committee reported that peer review has been criticized for being used by the scientific establishment “to prevent unorthodox ideas, methods, and views, regardless of their merit, from being made public.”
Epidemiologist John Ioannidis, in a paper (2005) entitled Why most published research findings are false finds that a randomly chosen scientific paper has less than a 50% chance of being true. Small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right. Many papers may be accurate measures only of the prevailing bias among scientists.
Bias is difficult to avoid. It may be quite unintentional. Consider the case of astronomer Walter Adams. In 1925 he tested Einstein's theory of relativity by measuring the red shift of the binary companion of Sirius, brightest star in the sky. Einstein's theory predicted a red shift of six parts in a hundred thousand; Adams found just such an effect. A triumph for relativity. However, in 1971, with updated estimates of the mass and radius of Sirius, it was found that the predicted red shift should have been much larger--28 parts in a hundred thousand. Later observations of the red shift did indeed measure this amount, showing that Adams' observations were flawed. He "saw" what he had expected to see.
In short, our theoretical expectations can influence what we see. We tend to give undue weight to those observations that agree with our expectations and ignore or discard those that don't. Observational confirmation may sometimes be little more than wishful thinking.
...
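Byl's summary of Ioannidis's argument can be made concrete. Ioannidis (2005) models the probability that a claimed finding is actually true (the positive predictive value, PPV) in terms of the pre-study odds R that a probed relationship is real, the significance threshold α, and the statistical power 1 − β. A minimal sketch of the bias-free version of that model follows; the scenario numbers are my own illustration, not figures taken from the paper:

```python
# Positive predictive value of a claimed research finding,
# following the bias-free model in Ioannidis (2005).
#   R     : pre-study odds that a probed relationship is true
#   alpha : type I error rate (significance threshold)
#   power : 1 - beta, the probability of detecting a true effect

def ppv(R, alpha=0.05, power=0.8):
    true_positives = power * R      # true relationships correctly flagged
    false_positives = alpha * 1.0   # false relationships flagged by chance
    return true_positives / (true_positives + false_positives)

# A well-powered study in a field where about 1 in 11 hypotheses is true:
print(ppv(R=0.1))               # ~0.62: only somewhat better than a coin flip
# An underpowered study (power 0.2) in a speculative field (~1 in 51 true):
print(ppv(R=0.02, power=0.2))   # ~0.07: most such "findings" are false
```

The point of the sketch is that once the pre-study odds are low or the power is poor, a statistically "significant" result is more likely to be a false positive than a real effect, which is exactly the situation Ioannidis argues is typical.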
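The Sirius B episode can likewise be checked from first principles: in the weak-field approximation, the predicted gravitational redshift is z ≈ GM/(Rc²), so revising the white dwarf's mass and radius revises the prediction. Here is a rough back-of-envelope calculation using round values of about one solar mass and 0.008 solar radii; these figures are my own illustration, not numbers from Byl's post:

```python
# Gravitational redshift z ~ GM/(R c^2) for a white dwarf like Sirius B.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m

M = 1.0 * M_sun     # rough mass of Sirius B (illustrative)
R = 0.008 * R_sun   # rough radius of Sirius B (illustrative)

z = G * M / (R * c**2)
print(z)  # ~2.7e-4, i.e. roughly 27 parts in a hundred thousand
```

With these inputs the formula lands near the 28 parts in a hundred thousand that Byl cites, which shows how sensitive the "prediction" was to the assumed mass and radius, and hence how much room there was for Adams to find what he expected.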
Having worked in the research industry for three years, albeit not at the doctoral level, I have witnessed the competition and the pressure for results first-hand. The maxim "publish or perish" holds true, and under that pressure some resort to forging results or spinning the data. It wasn't that long ago, after all, that a Korean scientist claimed to have derived stem cells from cloned human embryos, a claim that turned out to be fabricated.
During my honors year in university, one of our modules required each student to choose a journal paper, present it, and discuss it, and we were graded on that presentation and discussion. I distinctly remember that papers from one particular journal (which I shall not name) drew the most criticism: unclear results, omitted negative and/or positive controls, possible signs of photoshopping (to "enhance" results, etc.), or conclusions not clearly supported by the research data. The journal was obviously not a top-tier, high-impact-factor journal like Nature or Science, but neither was it a junk, low-impact-factor one.
Scientists are thus not necessarily unbiased seekers of truth. Most are certainly not out to deceive the public willingly, but the pressure of the industry, the lure of fame, and the fear of losing one's job do affect the quality of research findings. While in theory science can validate and verify itself because research findings are supposed to be reproducible, most findings are simply never verified, for a very simple reason: no money and no time. Most scientists are busy with their own projects, and the last thing they want to do is spend limited time and even more limited resources duplicating another scientist's research (and research costs a LOT of money). When a paper is published, the research is taken as truth. Only when someone uses the findings as a basis for further research and finds that their experiments do not work might an investigation take place (or the researcher may simply choose another project that promises better results).
All of this simply means that we should be skeptical of scientific truth claims, especially when they are pronounced dogmatically. As Byl says:
In sum, there is cause for some skepticism regarding the reliability of published scientific data. Data might well be distorted, fabricated or suppressed. Papers critical of the dominant paradigm might well be prevented from being published in mainline scientific journals. This is hardly surprizing [sic]. After all, scientists are only human — fallen and fallible. They, too, are driven by various extra-scientific motivations, whether ideology, wealth or fame. It is thus important to double-check whether what was reported to have been observed is in fact accurate and complete.
Of course, we haven't gotten to the issue of interpretation of research data, which is another minefield altogether for those who put their trust in Science (capital 'S').
[HT: Phil Johnson]
P.S.: I am cautioning skepticism regarding dogmatic scientific truth claims, NOT agnosticism about all of science.