Real-World Economics Review Blog
from Lars Syll
A popular idea in the quantitative social sciences – nowadays also in economics – is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:
P(O|C) > P(O|¬C)
However, as is also well known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be effects of a common cause, C.
In statistics and econometrics this “confounder” problem is usually solved by “controlling for” C, i.e. by holding C fixed. This means that one actually looks at different “populations” – those in which C occurs in every case, and those in which C doesn’t occur at all. Within each such population, knowing the value of A does not influence the probability of C [P(C|A) = P(C)]. So if there still exists a correlation between A and B in either of these populations, there has to be some other cause operating. But if all other possible causes have been “controlled for” too, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by “controlling for” all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F …) is broken.
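To make the confounder problem concrete, here is a minimal simulation – my own illustrative sketch, with an assumed data-generating process – in which A and B are both effects of a common cause C and have no effect on each other. Unconditionally, A appears to raise the probability of B, but within each population where C is held fixed the correlation vanishes:

```python
# Minimal sketch (assumed data-generating process, not from the post):
# C is a common cause of A and B; A has no effect on B at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

C = rng.random(n) < 0.5                     # common cause, present half the time
A = rng.random(n) < np.where(C, 0.8, 0.2)   # C raises the probability of A
B = rng.random(n) < np.where(C, 0.7, 0.1)   # C raises the probability of B

def p_B_given(mask):
    """Empirical P(B | mask)."""
    return B[mask].mean()

# Unconditionally, A seems to 'raise the probability' of B ...
print(p_B_given(A), p_B_given(~A))          # roughly 0.58 vs 0.22

# ... but holding C fixed breaks the correlation: P(B|A, C) = P(B|C).
for c in (True, False):
    print(c, p_B_given(A & (C == c)), p_B_given(~A & (C == c)))
```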
This is of course a very demanding prerequisite, since we can never actually be sure that we have identified all putative causes. Even in scientific experiments, the number of uncontrolled causes may be innumerable. Since nothing less will do, we can all appreciate how hard it actually is to get from correlation to causality. This also means that relying on statistics or econometrics alone is not enough to deduce causes from correlations.
Some people think that randomization may solve the empirical problem. By randomizing we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposedly able to do without actually knowing what all these other factors are.
If you succeed in performing an ideal randomization with treatment and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one (A) have the same probability distribution in the treatment and control groups, and that assignment to the treatment or control group is independent of all other possible causal variables.
Unfortunately, real experiments and real randomizations seldom if ever achieve this. So, yes, we may do without knowing all causes – but it takes ideal experiments and ideal randomizations to do that, not real ones.
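The gap between ideal and real randomization can be seen in a small sketch of my own (one hypothetical, unobserved covariate stands in for “all other causes”). Assignment below is perfectly random, so the groups are balanced on average over many hypothetical randomizations – but any single draw, which is all a real experiment ever gets, can be noticeably imbalanced:

```python
# Sketch: randomization balances background causes only in expectation.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                  # a realistically small trial
covariate = rng.normal(size=n)          # an unobserved background cause

diffs = []
for _ in range(10_000):                 # many hypothetical randomizations
    treated = rng.permutation(n) < n // 2       # random half gets treatment
    diffs.append(covariate[treated].mean() - covariate[~treated].mean())

diffs = np.array(diffs)
print(diffs.mean())                     # ~0: balance holds on average (the ideal)
print(np.abs(diffs).max())              # but a single experiment can be far off
```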
This means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.
John Maynard Keynes was – as is yours truly – very critical of the way statistical tools are used in the social sciences. Criticizing the application of inferential statistics and regression analysis in the early development of econometrics, Keynes writes in a critical review of the early work of Tinbergen:
Prof. Tinbergen agrees that the main purpose of his method is to discover, in cases where the economist has correctly analysed beforehand the qualitative character of the causal relations, with what strength each of them operates. If we already know what the causes are, then (provided all the other conditions given below are satisfied) Prof. Tinbergen, given the statistical facts, claims to be able to attribute to the causes their proper quantitative importance. If (anticipating the conditions which follow) we know beforehand that business cycles depend partly on the present rate of interest and partly on the birth-rate twenty years ago, and that these are independent factors in linear correlation with the result, he can discover their relative importance. As regards disproving such a theory, he cannot show that they are not verae causae, and the most he may be able to show is that, if they are verae causae, either the factors are not independent, or the correlations involved are not linear, or there are other relevant respects in which the economic environment is not homogeneous over a period of time (perhaps because non-statistical factors are relevant).
Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list? For example, suppose three factors are taken into account; it is not enough that these should be in fact verae causae; there must be no other significant factor. If there is a further factor, not taken account of, then the method is not able to discover the relative quantitative importance of the first three. If so, this means that the method is only applicable where the economist is able to provide beforehand a correct and indubitably complete analysis of the significant factors. The method is one neither of discovery nor of criticism. It is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis.
This, of course, is absolutely right. Once you include all actual causes in the original (over)simple model, it may well be that the causes are no longer independent, or the correlations no longer linear, and that a fortiori the coefficients in the econometric equations are no longer identifiable. And so, since all causal factors are not included in the original econometric model, it is not an adequate representation of the real causal structure of the economy that it is purportedly meant to represent.
My line of criticism (and Keynes’s) is also shared by, e.g., the eminent mathematical statistician David Freedman. In his Statistical Models and Causal Inference (2010) Freedman writes:
If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …
And in Statistical Models: Theory and Practice (2009) Freedman writes:
In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …
Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …
Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …
Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.
Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.
The usual point of running regressions is to make causal inferences without doing real experiments. On the other hand, without experiments, the assumptions behind the models are going to be iffy. Inferences get made by ignoring the iffiness of the assumptions. That is the paradox of causal inference …
And as Stephen Morgan and Christopher Winship write in their seminal Counterfactuals and Causal Inference (2007):
Path models do not infer causation from association. Instead, path models assume causation through response schedules, and – using additional statistical assumptions – estimate causal effects from observational data … The problems are built into the assumptions behind the statistical models … If the assumptions don’t hold, the conclusions don’t follow from the statistics.
Regression models have some serious weaknesses. Their ease of estimation tends to suppress attention to features of the data that matching techniques force researchers to consider, such as the potential heterogeneity of the causal effect and the alternative distributions of covariates across those exposed to different levels of the cause. Moreover, the traditional exogeneity assumption of regression … often befuddles applied researchers … As a result, regression practitioners can too easily accept their hope that the specification of plausible control variables generates an as-if randomized experiment.
Econometrics (and regression analysis) is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures like regression analysis may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
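What is at stake in the exogeneity assumption can also be put in code (a sketch only; the linear data-generating process and the names X, Y, Z are illustrative assumptions). When a confounder Z drives both X and Y, regressing Y on X alone yields a precise-looking but entirely spurious “effect”, and no amount of data fixes it – only including Z, i.e. already knowing the causal structure, does:

```python
# Sketch of omitted-variable bias. Z confounds X and Y; the true causal
# effect of X on Y is exactly zero by construction.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
Z = rng.normal(size=n)                  # confounder
X = Z + rng.normal(size=n)              # Z drives X
Y = 2.0 * Z + rng.normal(size=n)        # Z drives Y; X plays no role

def ols(design, y):
    """Least-squares coefficients for y regressed on the design matrix."""
    return np.linalg.lstsq(design, y, rcond=None)[0]

ones = np.ones(n)
print(ols(np.column_stack([ones, X]), Y)[1])     # ~1.0: spurious 'effect' of X
print(ols(np.column_stack([ones, X, Z]), Y)[1])  # ~0.0 once Z is included
```

Which is just “no causes in, no causes out” again: the regression only recovers the truth when the confounder is already known and measured.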
Most advocates of econometrics (and regression analysis) want to have deductively automated answers to fundamental causal questions. Econometricians think – as David Hendry expressed it in “Econometrics – alchemy or science?” (1980) – that they “have found their Philosophers’ Stone; it is called regression analysis and is used for transforming data into ‘significant results!’” But as David Freedman poignantly notes in Statistical Models: “Taking assumptions for granted is what makes statistical techniques into philosophers’ stones.” To apply “thin” methods we have to have “thick” background knowledge of what is going on in the real world, and not just in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics and regression analysis.