The general public, and many scientists as well, often underestimate how difficult it is to control all the relevant variables in a study, or even to know what they are, and how processes like publication, peer review, and replication are more fallible as epistemic safeguards than we tend to assume. This article in the current issue of the New Yorker thoughtfully raises some of these issues. (The link only gives you the opening paragraphs; to read the whole article you would need to subscribe, or even [gasp!] go to a library. It's all just SO twentieth century!)
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer
Wednesday, December 8, 2010
3 comments:
It may be fallible, but that doesn't mean that we can't rely on the scientific method to justify our reasons, provide both normative and probative force, and all that jazz...right?
Quite frankly, it's the closest thing we have to a truly objective epistemology!
I would frame the matter differently. Certainly the institutions of science have accomplished a lot by rigorous application of the best available epistemological principles. Those institutions (as is the wont of institutions) have also grown dogmatic and complacent, mistaking surrogates like successful peer-reviewed publication for probative epistemic force. They, and the rest of us, have neglected to think critically about the potential and actual distortions of such processes.
I am not, that is, a radical skeptic about scientific methods (I do insist on the plural here, as methodology varies a lot in subject-appropriate ways). Rather, I am a small-s skeptic about virtually all specific findings ratified by institutionalized processes.
While I am not a Reagan fan (this may seem like a non sequitur, but stay with me for just a bit), the adage "trust, but verify" holds true. Under standard significance criteria, there is a 1 in 20 chance (and sometimes 1 in 100, usually if the FDA is involved) of a random association in most studies that test only one hypothesis. It is not uncommon for multiple hypotheses to be tested in the same study (thus increasing the chances of inferring a spurious association, like "is efficacious" or "is safe"). It is usually good to be skeptical of the findings from a single study; multiple studies reduce the likelihood of an association being attributable to chance.

Now, all of this assumes no "gaming" and that randomization has been well executed. And all of the above assumes an RCT; once we move into the realm of noncontrolled studies, it just gets more complicated (i.e., more reason to hold the results to higher standards of rigor/evidence). But I am an applied epistemologist by training, and, when these are done properly, I do have greater confidence in what I learn from well-designed RCTs than from nearly any other empirical (or epistemological) alternative (just disclosing my bias). But you are quite right, Matt: the use of the plural is spot-on re: methodologies.
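To make the multiple-testing arithmetic in the comment above concrete, here is a minimal sketch (mine, not the commenter's; the function name `simulate_fwer` and all parameters are assumptions chosen for illustration). With m independent true-null hypotheses each tested at the conventional 1-in-20 threshold, the chance of at least one spurious "finding" is 1 - (1 - 0.05)^m; conversely, the chance that k independent studies all reproduce the same spurious positive falls as 0.05^k, which is the commenter's point about multiple studies.

```python
# Illustrative sketch (assumed names/parameters): false-positive rates
# under multiple testing, and under independent replication.
import random

ALPHA = 0.05  # conventional significance threshold ("1 in 20")

def simulate_fwer(m, alpha=ALPHA, trials=100_000, seed=42):
    """Estimate P(at least one false positive) when m independent
    true-null hypotheses are each tested at level alpha."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(m))
        for _ in range(trials)
    )
    return hits / trials

# Testing more hypotheses in one study inflates the chance of a fluke:
for m in (1, 5, 10, 20):
    analytic = 1 - (1 - ALPHA) ** m
    print(f"m={m:2d} hypotheses: simulated={simulate_fwer(m):.3f}, analytic={analytic:.3f}")

# Independent replication cuts the other way: the chance that k separate
# studies all produce the same spurious positive shrinks geometrically.
for k in (1, 2, 3):
    print(f"k={k} studies: P(all spuriously positive) = {ALPHA ** k:.6f}")
```

The numbers just restate the comment's logic: at twenty hypotheses the chance of at least one fluke is about 64%, while requiring even two independent confirmations drives the chance of a shared fluke down to 0.25%.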