The Decline Effect
Dec. 8th, 2010 08:26 pm

Very interesting article on the "decline effect" in the New Yorker this week. Full text here.
In short: scientific research, as we do it now, is subject to a "decline effect" whereby initially impressive results give way to ever-less-impressive replications and, eventually, to the retirement of many once-compelling theories. I've written about this before; the article doesn't reflect how well this is already understood in the research community.
Is this real? Yes, absolutely. Every scientific result has one foot in reality, and another foot in politics. The decline effect is indeed evidence of inefficiency and bias in the research process, but it is also a reflection of how, in the end, reality always wins. After all, it's not as if evidence for fallacious theories gets stronger over time - the decline reflects the slow, inevitable triumph of ugly reality over beautiful ideas.
With attacks on science and academia being a stated goal of the Republican party, it is perhaps not the best time for observations like this to be popularized. And it bears mentioning that the current fashionability of the decline effect is, itself, a manifestation of the same cycle of excitement and decline it tries to describe.
The article's description of the theory of "verbal overshadowing" in cognitive science, and questions about its actual importance, caught my eye as being especially amenable to deeper review. Is it possible that an entire paradigm has been based on a fallacious idea? Ehhh... it seems likely that the discovery of this effect was associated with an improbable coincidence that made the effect seem stronger than it really was, but it's pretty damn unlikely that it doesn't exist at all. It's easy to believe that research results have been biased by the assumption this effect is real, but that's not the same as saying it doesn't exist.
What was more interesting was its mention of spurious correlations in genetics research. Having spent most of my career on this subject, I can tell you that even ten or twelve years ago, any single article purporting to find a link between gene expression or genomic markers and some phenotypic characteristic was always assumed to be a statistical fluke. The standards of publication have always been too low, and it's long been the case that poor statistics and optimistic interpretations of data have crowded journals with junk.
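To make the fluke arithmetic concrete, here's a minimal simulation (my own sketch, not from the article; the sample size and marker count are invented for illustration) of why a single reported gene-phenotype association deserves so little trust on its own: test enough truly null markers and hundreds of them will clear p < 0.05 by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 100     # subjects in a hypothetical study
n_markers = 10_000   # candidate markers tested, all truly null here

# A phenotype with no genetic signal whatsoever.
phenotype = rng.normal(size=n_subjects)

false_positives = 0
for _ in range(n_markers):
    marker = rng.normal(size=n_subjects)  # no real association by construction
    r, p = stats.pearsonr(marker, phenotype)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_markers} null markers reach p < 0.05")
# Expect roughly 500 "significant" hits - any one of which, written up
# in isolation, looks exactly like a discovery.
```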
That said, not everything in science is equally junky. A variety of smartasses have made a big deal about how "most articles published in scientific journals are wrong". And that's true, but it's not as harsh a criticism as it might seem. It's generally understood that any single conclusion should be taken with a large grain of salt. There exists a hierarchy of ideas in science, ranging from speculative, unlikely, or merely weak ideas represented by single articles, up through the conclusions of research reviews and expert opinion, to the highest standard of reliability, which is reflected in the consensus opinions of major scientific institutions. There's a huge difference in the credibility of conclusions at each level, and this distinction tends to get lost (or deliberately buried) when the goal is to undermine the credibility of the whole process.
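The "most articles are wrong" line traces back to Ioannidis's famous argument, which at bottom is just Bayes' rule. A back-of-the-envelope sketch (the prior, power, and threshold below are assumptions for illustration, not figures from the article or from Ioannidis):

```python
# If only a small fraction of tested hypotheses are true, then even with
# decent power, most "significant" findings are false positives.
prior = 0.05   # assumed fraction of tested hypotheses that are actually true
power = 0.80   # probability of detecting a true effect
alpha = 0.05   # false-positive rate per test

true_positives = power * prior           # 0.04
false_positives = alpha * (1 - prior)    # 0.0475
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a 'significant' result is real: {ppv:.0%}")  # ~46%
```

Which is exactly why single articles sit at the bottom of the hierarchy, and consensus, which aggregates many such noisy results, sits at the top.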
The thing that really moves science along is not evidence, per se. Evidence guides the process, but all the big leaps have been just that - leaps - which happen all at once, are based more on intuition than on data, and are either immediately and correctly embraced as "true", or slowly and very painfully rejected as "false". Maybe it's not the most efficient process - surely, there are easier ways of turning back from false leads. Or maybe the "scientific method", as it is commonly taught, isn't actually up to the task of apprehending reality, and we wouldn't get anywhere at all without assuming the risk of being wildly wrong.
There are definitely ways of improving the research process, but they tend to be radical and politically untenable. There might be great ways of organizing research in principle, but the problem of "you can't get there from here" blocks their implementation in practice.