Monday, January 9, 2012

Effect and cause as a clue to the meaning of science

The December 16, 2011, issue of WIRED has a piece by Jonah Lehrer called "Trials and Errors: Why Science is Failing Us" (click here to read it). Mr. Lehrer's argument seems to be that some phenomena are too complex for scientific method to discover what causes them. In his conclusion he writes:
And yet, we must never forget that our causal beliefs are defined by their limitations. For too long, we’ve pretended that the old problem of causality can be cured by our shiny new knowledge. If only we devote more resources to research or dissect the system at a more fundamental level or search for ever more subtle correlations, we can discover how it all works. But a cause is not a fact, and it never will be; the things we can see will always be bracketed by what we cannot. And this is why, even when we know everything about everything, we’ll still be telling stories about why it happened. It’s mystery all the way down.
The comments following the piece do a good job of pointing out the flaws in the reasoning by which Mr. Lehrer reaches this conclusion. However, they omit one issue: science is not about causes.

Science is about effects. At its simplest, an effect is a non-random relationship between two variables. Scientific experimentation investigates effects by varying one of the variables (the independent variable) and seeing what happens to the other variable (the dependent variable). The goal is to explain the effect - that is, to become more effective at predicting the dependent variable. This model can be expanded to handle large numbers of variables. For example, one of the things I do in evaluating satisfaction with a program is to investigate simultaneously the relative importance of several variables in accounting for satisfaction. What you typically find when you do this correctly is that only a few of the variables have any relationship to satisfaction. What you often find, too, is that the variables that account for participants' satisfaction are different from the reasons participants report when asked why they like the program.
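
Here's a toy version of such an analysis in Python. The data are simulated and the variable names invented; ordinary least-squares regression stands in for whatever method a real evaluation would use.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical relative-importance analysis: regress satisfaction on
# several candidate predictors and see which ones actually matter.
# All data are simulated; the variable names are invented.
rng = np.random.default_rng(0)
n = 500

staff_helpfulness = rng.normal(size=n)
wait_time = rng.normal(size=n)
facility_quality = rng.normal(size=n)
cost = rng.normal(size=n)
brochure_quality = rng.normal(size=n)

# Only the first two predictors genuinely relate to satisfaction.
satisfaction = (0.6 * staff_helpfulness
                - 0.4 * wait_time
                + rng.normal(size=n))

X = sm.add_constant(np.column_stack([staff_helpfulness, wait_time,
                                     facility_quality, cost, brochure_quality]))
result = sm.OLS(satisfaction, X).fit()
print(result.summary())  # typically only x1 and x2 show significant coefficients
```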

The methods I use are correlational, so they cannot attribute causation. What they tell you is that as one thing varies, so does another. Furthermore, the analyses of satisfaction I do are non-experimental, so I can't even be sure that the estimates of the correlations are all that exact. What I can do, though, is recommend that changes be made to see if dealing with the variables identified by the data analysis will improve satisfaction.
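
Here's a toy illustration of why correlation by itself can't settle causation: two variables can track each other closely even though neither causes the other, because an unmeasured third variable drives both (simulated data):

```python
import numpy as np

# Two variables correlate strongly, yet neither causes the other:
# a hidden third variable drives both.
rng = np.random.default_rng(1)
confounder = rng.normal(size=1000)   # the unmeasured common cause

a = confounder + rng.normal(scale=0.5, size=1000)
b = confounder + rng.normal(scale=0.5, size=1000)

print(np.corrcoef(a, b)[0, 1])       # about 0.8, with no causal link between a and b
```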

The same considerations apply to a lot of health research, and they go a long way toward accounting for the examples Mr. Lehrer adduces. What health researchers do is develop their own recommendations for further research that will test whether their conclusions are correct. The supposed failure Mr. Lehrer describes is in fact a demonstration of the success of science - a hypothesis was developed from prior research to test whether a drug was effective, and the test failed to find evidence that it was. That failure by itself is informative - it tells us not to prescribe the drug.
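
Schematically, such a trial comes down to a comparison like the one below (simulated data; the two-sample t-test is a stand-in for whatever analysis the real trial used). Because the simulated drug has no true effect, the test correctly fails to find evidence that it works:

```python
import numpy as np
from scipy import stats

# A schematic drug trial: compare outcomes for treated and control groups.
rng = np.random.default_rng(2)
control = rng.normal(loc=0.0, scale=1.0, size=200)
treated = rng.normal(loc=0.0, scale=1.0, size=200)  # drug has no true effect

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # large p: no evidence of benefit
```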

One of the commenters at the link above (urgelt) goes into the issue of the adequacy of research in more detail. My post of January 5 (click here) provides another example of this type of difficulty. What is clear is that error is inherent in scientific experimentation, and that scientific method is founded on a recognition of that fact. Reports of statistical analyses of research results typically include many estimates of the error in the relationships the statistical techniques identify.
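
For example, even the simplest report quantifies its own error - a standard error and a confidence interval around an estimate, along these lines (simulated data):

```python
import numpy as np
from scipy import stats

# A 95% confidence interval: a routine, explicit statement of the error
# attached to an estimate.
rng = np.random.default_rng(3)
sample = rng.normal(loc=5.0, scale=2.0, size=100)

mean = sample.mean()
sem = stats.sem(sample)   # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"estimate {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```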

As for Mr. Lehrer's remarks about the mythical nature of causes, scientific method has long allowed explanatory variables that have no real existence (intelligence, for example, cannot be directly measured but only inferred from behaviour). Variables like this are called explanatory fictions. The reason they are allowed is that the point of science is to explain an effect, not to find out what its actual cause is. If a fictional variable can explain the effect where something tangible and real can't, so much the better. Furthermore, even a small improvement in accuracy of prediction will often produce large benefits. Obviously, something which improves accuracy only a small amount is unlikely to be a cause in any meaningful sense, but it can still play an important role in practice.
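
Here is a sketch of how an explanatory fiction earns its keep. Factor analysis is one standard way of inferring a latent variable like "ability" from observable scores; the data below are simulated, and the method is an illustration rather than the only option:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# "Ability" is never observed directly; we infer it from correlated test
# scores. One hidden factor drives three simulated tests.
rng = np.random.default_rng(4)
ability = rng.normal(size=300)   # the unobserved variable

scores = np.column_stack([
    0.9 * ability + rng.normal(scale=0.4, size=300),
    0.8 * ability + rng.normal(scale=0.5, size=300),
    0.7 * ability + rng.normal(scale=0.6, size=300),
])

fa = FactorAnalysis(n_components=1).fit(scores)
inferred = fa.transform(scores).ravel()
print(abs(np.corrcoef(ability, inferred)[0, 1]))  # near 1 (sign is arbitrary)
```

The inferred factor predicts behaviour well even though nothing in the data corresponds to it directly - which is exactly the sense in which a fictional variable can explain an effect.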

Complex systems often frustrate scientific research simply because there are so many potential effects to examine, not because scientists are naive about the nature of causes - which, in any case, they aren't looking for. Mr. Lehrer freely acknowledges that science has been spectacularly successful with some complex systems (the health of large populations, for example), so concluding from its failures with others that science has failed to solve the problem of causation is not only questionable and hasty but irrelevant as well.

I am confident that the scientific research of 100 years from now will be superior to today's research. I am also confident that the reason for its superiority will not be that it has solved the problem of causation.

Research, cause, and effect © 2012, John FitzGerald
