Part I - What is Cliometrics ?
The aim of this note is to arrive at precise notions concerning the subject-matter of cliometrics.
Cliometric analysis is foremost a theoretical approach. Great emphasis is placed on developing a coherent and consistent theoretical model that will provide the basis for interpreting historical economic and social phenomena. Cliometric models enable a better understanding of the real world. Because economic and social processes are complex, a thorough understanding of the underlying forces and interrelationships is generally impossible. Models break up phenomena into more manageable portions by abstracting those variables that are believed to be a significant influence on choice and subjecting them to deductive reasoning based on a set of accepted axioms. Logical conclusions are then derived which must be translated into propositions about the real world. These propositions or predictions must then be compared to actual behavior and experience, either by observation or statistical methods.
In cliometrics there are two modes of discourse: positive (what is) and normative (what ought to be). To these I would add a third: descriptive analysis. The success of positive, normative and descriptive theory is to be judged by different criteria, and hence they are not susceptible to the same criticisms. Critics of the cliometric approach often confuse these three modes of analysis, resulting in much ill-conceived criticism. Here a very simplified account of cliometric methodology is given.
Positive cliometrics
Positive cliometrics is the empirical branch of the discipline. It seeks to generate a set of testable, that is potentially refutable, predictions that can be verified against the empirical evidence. A positive cliometric model is a meaningful model if it is both correct and useful. It is correct if it is internally consistent; it is useful if it focuses on a significant influence on choice. A meaningful model is thus one that generates predictions to which behavior conforms more frequently than to those generated by some alternative competing theory. If the model is successful in predicting, then the negative judgment can be made that the model has not been falsified.
Positive cliometric analysis is used to make qualitative predictions and to organize data for the testing of these predictions. It is predictive and empirical. The predictions of positive cliometric models must be interpreted with some care. First, such models only establish partial relationships. For example, one of the most common predictions in cliometrics is the inverse relationship between the price of a good and the quantity demanded. However, this statement must be read with an important caveat: the caveat of ceteris paribus. The prediction states that in practice the quantity demanded will fall as the price increases only if all other factors affecting demand, such as income, tastes and the relative prices of other goods, remain constant. Thus the predictions of positive cliometric models are in the nature of conditional statements (if A then B, given C), but B may never be observed to occur because the other influences (C) have also changed. Secondly, since positive cliometric models only deal with partial relationships, they do not imply that other factors, economic and non-economic, are of no, or lesser, importance in explaining behavior. A cliometrician may argue, for example, that people will respond to cost pressures (such as liability for damages) in the care they take in an activity which places the safety of others at risk, and he may empirically establish this proposition. But this finding does not naturally lead to the conclusion that pecuniary incentives are the only, or even the best, means of achieving an increase in the level of safety.
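As a minimal sketch of this ceteris paribus point (all variables, coefficients and data below are invented for illustration, not taken from any cliometric study), the following simulation shows how the raw price-quantity association can even appear positive when another demand shifter moves together with price, while the partial, other-things-equal effect of price remains negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Income is the "given C": a demand shifter that happens to move with price.
income = rng.normal(100, 20, n)
price = 0.05 * income + rng.normal(0, 1, n)
# True structural demand: the partial effect of price is negative (-2).
quantity = 50 - 2.0 * price + 0.8 * income + rng.normal(0, 2, n)

# The raw association ignores the ceteris paribus clause and is positive here.
print("raw price-quantity correlation:", round(np.corrcoef(price, quantity)[0, 1], 2))

# Holding income constant (multiple regression) recovers the negative price effect.
X = np.column_stack([np.ones(n), price, income])
coef, *_ = np.linalg.lstsq(X, quantity, rcond=None)
print("partial effect of price, income held constant:", round(coef[1], 2))
```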
Normative cliometrics
Normative cliometrics is the ethical branch of cliometrics, concerned with efficiency, distributive and social justice, and with prescribing corrective measures to improve social welfare. Normative propositions cannot, as a matter of fact, be verified, nor are they in principle verifiable. Value judgments cannot be tested to see whether they are true or false; they are acceptable or unacceptable.
Descriptive cliometrics
Much cliometric analysis does not fall neatly into either the positive or the normative category. Indeed, the bulk of cliometrics is designed neither to generate testable predictions nor to determine the social desirability of a set of policies. It is abstract theory of economic and social problems whose function is to generate logical deductions and to describe economic and social phenomena in a historical perspective. In this perspective, it is useful to distinguish a third (though not exhaustive) category of cliometric analysis: descriptive cliometrics. Descriptive cliometrics attempts to model and analyse historical processes and to describe the economic and social influences that affect them. It is thus based on assumptions that are more or less realistic and therefore subject to empirical verification. Much of the literature by cliometricians falls into this category. It seeks to provide a comprehensive model of history based on economics and econometrics.
Although there are a number of historical economic schools, the dominant approach used in cliometrics is neoclassical economics. Here the main ingredients of this approach are outlined.
Methodological Individualism
Neoclassical economics builds on the postulate of methodological individualism: the view that social theories must be based on the attitudes and behavior of individuals. The basic unit of analysis is thus the rational individual, and the behavior of groups is assumed to be the outcome of the decisions taken by the individuals who compose them. Neoclassical economics is, in other words, individualistic economics based on the behavioral model of rational choice.
Maximization Principle
Economic man or woman is assumed to be a self-interested egoist who maximizes his utility. The assumptions of utility (and profit) maximization, or economic rationality as it is sometimes referred to, have given rise to much criticism and confusion. When an economist says that an individual is acting rationally or maximizing his utility, he is saying no more than that the individual is making consistent choices, and that he chooses the preferred alternative(s) from those available to him. That is, it is assumed that the individual is the best judge of his own welfare: the notion of consumer sovereignty. The economic approach does not contend that all individuals are rational, nor that these assumptions are necessarily realistic. Rather, economic man or woman is some weighted average of the individuals under study in which the extremes in behavior are evened out. The theory allows for irrationality but argues that groups of individuals behave as if their members are rational. Also, the utility-maximizing postulate does not assert that individuals consciously calculate the costs and benefits of all actions, only that their behavior can be explained as if they did so.
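As a minimal sketch of what this postulate formally amounts to (the utility function, prices and income below are hypothetical, not taken from the text), "maximizing utility" can be read simply as picking the most preferred alternative in the feasible set:

```python
from itertools import product

def utility(x, y):
    """A hypothetical ranking of bundles (x, y); any consistent ranking would do."""
    return (x ** 0.5) * (y ** 0.5)

price_x, price_y, income = 2.0, 3.0, 24.0

# The feasible set: all affordable (integer) bundles.
feasible = [(x, y) for x, y in product(range(13), range(9))
            if price_x * x + price_y * y <= income]

# "Maximizing utility" = choosing the preferred alternative among those available.
best = max(feasible, key=lambda bundle: utility(*bundle))
print("chosen bundle:", best)  # (6, 4) under these hypothetical prices and income
```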
Market and Prices
The final ingredient of the cliometric approach concerns the concepts of a market and price. Even in areas where there is not an explicit market, the cliometric approach will often study the subject by analogy with the market concepts of supply, demand and price.
Claude DIEBOLT, for AFC.
In Autumn 2002.
Part II - What is retrospective growth accounting ?
Checking theoretical hypotheses on economic growth always requires new statistical sets. The method of quantitative history launched in the 1960s by S. Kuznets, J. Marczewski and F. Perroux is essential for reaching this aim. It consists of assembling historical facts in homogeneous, comparable time units in order to measure changes over intervals of time (generally annual). The advantage of this method is that the moment at which the observer's choice operates is shifted. Instead of acting while observing the reality to be described, the observer operates during the construction of the reference system used to record facts that are thus rendered conceptually homogeneous. This methodology should allow for the empirical verification or rejection of the initial hypotheses on which the pattern of theoretical interpretation hinges.
However, a map is not the area it plots, and care must be taken not to confuse reality and its description. The approach is not applicable to isolated historical events. It is used to describe the history of the masses, but is not sensitive to the history of heroes. It is only an image of reality and does not draw all of reality's contours. Quantitative history does not aim at replacing traditional descriptive history. On the contrary, the two forms of historical investigation are strictly complementary and hence indispensable for a better knowledge of the past. The application of quantitative methods can nevertheless profoundly renew the terms of questions that have progressively become established. The great merit of statistical formalism is that it allows for the examination of the logical and quantitative consequences of historical propositions, which could not be obtained by a discussion based on literary documents and the like.
The proposed method has two advantages. Firstly, it is of immediate practical interest as it provides an original reconstruction of the process of growth. Secondly, it is of theoretical interest as it provides better knowledge of the mechanisms governing the long-term development of the economic system. We are nevertheless aware that the statistical work provides only the quantitative aspect of changes in structure. Although it is important, it is not sufficient to provide the complete picture of the sequence of facts.
The decisive role of deductive theories in empirical research must therefore be stressed. Starting with a general idea, they attempt to identify symptoms in reality by means of chronological series of statistics. Like the Frankfurter Gesellschaft für Konjunkturforschung (founded in 1926 by E. Altschul), we recommend an economic semiology that would bring together the results of statistical research and the deductive reasoning of theory. There is no conflict or compartmentalisation between empirical research and deductive theory; on the contrary, these disciplines are only truly valuable individually if they draw on each other's results. The validity of theoretical hypotheses thus depends both on their external coherence, i.e. their conformity with real facts, and on their internal coherence, i.e. their ability to provide one or more solutions to the questions raised. The general equilibrium theory is extremely significant in this respect. It has given rise to much work concerning the validity of its bases and of the teachings drawn from it. In contrast, research on the existence of a solution has been carried out by only a handful of specialists. Here, it is still possible to discuss the coherence of a theoretical hypothesis, but it is more important to submit it to empirical verification.
1. The scope of quantitative history
Quantitative history aims at drawing up a general macroscopic synthesis by constructing models integrated at the national level with the prospect of possible links between several national models. This requires a succession of research operations, the most important of which is the creation of chronological data series covering the whole of the period under consideration.
The initial work consists of sorting dozens or even hundreds of volumes of old statistics. The figures are transcribed and reclassified using a pre-established model (e.g. using national budget items). This meticulous research is a question of doing rather than learning and is carried out in the silence of archives and libraries. The classifications used in statistical documents change according to the year and the recording administration. It is therefore necessary at all times to exercise judgement in matching the original item with the item under which the figure is to be transcribed.
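A minimal sketch of this reclassification step, with hypothetical headings and invented figures (none of them drawn from actual sources), might look as follows:

```python
# Crosswalk from original headings (which vary by year and by administration)
# to the items of the pre-established nomenclature.
CROSSWALK = {
    "Instruction publique": "Education",
    "Enseignement primaire": "Education",
    "Ponts et chaussees": "Infrastructure",
    "Travaux publics": "Infrastructure",
}

raw_records = [  # figures as transcribed from a (hypothetical) statistical yearbook
    {"year": 1880, "heading": "Instruction publique", "amount": 120.0},
    {"year": 1880, "heading": "Ponts et chaussees", "amount": 95.0},
    {"year": 1900, "heading": "Enseignement primaire", "amount": 210.0},
    {"year": 1900, "heading": "Travaux publics", "amount": 160.0},
]

# Reclassify and aggregate under homogeneous, comparable items.
series = {}
for rec in raw_records:
    item = CROSSWALK.get(rec["heading"], "Unclassified")  # the judgement call, made explicit
    series[(rec["year"], item)] = series.get((rec["year"], item), 0.0) + rec["amount"]

for (year, item), total in sorted(series.items()):
    print(year, item, total)
```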
2. The quantitative history method
The quantitative history method applies when a large number of causes and their complexity and intermingling make the use of experimental methods impossible. The moment at which the observer's choice intervenes is shifted. Instead of being set during observation of the reality to be described, it applies during the elaboration of the system of reference used to count facts thus rendered conceptually homogeneous.
The definition just given essentially consists of three main phases:
- collection of documents;
- analysis of the data collected;
- interpretation of the results.
The first phase is mainly descriptive; it prepares the real work of the researcher by co-ordinating the collected information. It is a preliminary task required for a serious analysis. All the documents related to the field of study are assembled. The data from archived documents must not be merely recorded, but have to be subjected to intelligent, perspicacious criticism. Some are discarded when considered unreliable; others must be corrected when their interpretation reveals sources of error. Finally, estimates may be required to fill gaps. The final results are assembled in a statistical table.
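As an illustration of this critical phase, the following sketch, using invented figures, discards an observation judged unreliable and fills gaps by simple linear interpolation before assembling the results in a table; the estimation methods actually used would of course depend on the sources:

```python
# Annual observations; None marks a missing year, and 1873 is flagged as unreliable.
raw = {1870: 10.2, 1871: 10.8, 1872: None, 1873: 55.0, 1874: 11.9, 1875: 12.4}
unreliable_years = {1873}

# Discard values judged unreliable after source criticism.
cleaned = {y: v for y, v in raw.items() if y not in unreliable_years}
known = {y: v for y, v in cleaned.items() if v is not None}

# Estimate missing or discarded years by interpolating between known neighbours.
years = sorted(raw)
table = {}
for y in years:
    if y in known:
        table[y] = known[y]
    else:
        before = max(k for k in known if k < y)
        after = min(k for k in known if k > y)
        weight = (y - before) / (after - before)
        table[y] = known[before] + weight * (known[after] - known[before])

for y in years:
    print(y, round(table[y], 2))
```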
Having arrived at this mass of numerical information, the researcher must then put the figures in a logical order and classify them according to a previously established nomenclature. The data are reduced, substituted by a small number of representative series drawn by calculation from the ordered mass of numerical data.
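A minimal sketch of this reduction step, again with invented figures, replaces the classified items by a single representative aggregate series, here an index with base 100 in the first year:

```python
classified = {  # year -> {nomenclature item -> value}, hypothetical data
    1870: {"Education": 120.0, "Infrastructure": 95.0},
    1871: {"Education": 130.0, "Infrastructure": 101.0},
    1872: {"Education": 142.0, "Infrastructure": 108.0},
}

totals = {year: sum(items.values()) for year, items in classified.items()}
base = totals[min(totals)]  # total of the first year serves as the base
index = {year: 100.0 * total / base for year, total in sorted(totals.items())}

for year, value in index.items():
    print(year, round(value, 1))
```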
The third and final interpretation phase consists of drawing conclusions from the analysis. It is a decisive stage as here an attempt is made to explain the observations. There is a vast scope of interpretative possibilities, ranging from merely checking hypotheses to forecasting future developments.
One has to be aware that the conclusions drawn from the analysis of the statistical sets contain a degree of uncertainty. Also the degree of the validity of the general observations has to be considered. Quantitative history is a long, slow process that is never free from difficulties. There are hesitations to overcome and problems to solve at every stage, as each new research operation is a new beginning and there are decisions that can only be taken alone.
Claude DIEBOLT, for AFC.
In Autumn 2002.
Part III - What is Convergence ?
The notions of the rate of convergence (β) and the evolution of dispersion (σ) have been widely used in recent studies of economic growth. The notion of β-convergence refers to the rate at which the income or production per capita of a poor region tends to catch up with that of a rich region. In other words, β-convergence is observed if the initially poor economic units in a set of cross-sectional data tend to grow more rapidly than the rich units. If the β coefficient is calculated without taking into account the characteristics that determine the long-term equilibrium levels of economies (such as the savings ratio, technologies and institutions), this is absolute convergence. The conditional convergence hypothesis applies when differences in long-term equilibrium values are taken into account. Two economies may thus have different savings ratios, reflecting differences in their time preference rates. In this case, the traditional neoclassical framework predicts that the two economies should display the same equilibrium growth rate but that the economy with the higher savings ratio will display a higher income per capita in the steady state. In this respect, the conditional convergence notion refers to the hypothesis according to which the economy that is initially farthest from its own equilibrium trajectory will experience more rapid growth.
Estimation of the absolute convergence coefficient is performed using a non-linear regression based on cross-sectional data in the following form:
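In the standard Barro and Sala-i-Martin specification, which is consistent with the terms described in the next paragraph, the regression reads:

\[
\frac{\ln(Y_{i,T}) - \ln(Y_{i,t})}{T-t} \;=\; B \;-\; \frac{1 - e^{-\beta (T-t)}}{T-t}\,\ln(Y_{i,t}) \;+\; u_{i,t}
\]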
in which t and T are respectively the first and last years of the observation period and i is an economic unit. Y is the economic indicator per capita and u a residual term. The left-hand side of the equation is therefore an approximation of the average annual growth of economic unit i between years t and T. In the context of an analysis of absolute convergence, B is a constant common to all the economic units. Its value is determined by the growth of the economic indicators at equilibrium. Furthermore, the coefficient (1 - e^{-β(T-t)})/(T-t) makes it possible to allow for the share of growth that can be accounted for by the initial level of the economic indicator. The gap between the various economic units decreases exponentially at rate β from t = 0 to T. A situation of convergence (a positive β coefficient) implies a negative relation between the average growth rate during the observation period and the logarithm of the initial level of the economic indicator per capita. The greater the value of β, the more rapidly the economic indicator per capita in the poor region approaches the level observed in the rich region.
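As a minimal sketch of this estimation (simulated cross-sectional data, scipy assumed available; the "true" β of 0.02 and all other numbers are invented), the coefficient can be recovered by non-linear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t, T = 1950, 1990
span = T - t
n = 60  # number of economic units (e.g. regions)

# Simulate initial log income per capita and average annual growth with beta = 0.02.
log_y0 = rng.normal(8.0, 0.6, n)
true_beta, true_B = 0.02, 0.13
growth = true_B - (1 - np.exp(-true_beta * span)) / span * log_y0 + rng.normal(0, 0.002, n)

def model(log_y_initial, B, beta):
    """Average annual growth as a function of the initial log level."""
    return B - (1 - np.exp(-beta * span)) / span * log_y_initial

(B_hat, beta_hat), _ = curve_fit(model, log_y0, growth, p0=[0.1, 0.01])
print("estimated beta:", round(beta_hat, 4))  # should be close to 0.02
```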
The second convergence notion, referred to as σ-convergence, is based on the analysis of time series. It hinges on the movement of a dispersion index of an economic indicator per capita. The dispersion index can be measured in several ways; the standard deviation of the log of income per capita is often used. If the dispersion of income per capita within a set of economies tends to decrease over time, σ-convergence is observed. In practice, σ-convergence is observed if the dispersion index time series is integrated of order one and displays a negative drift (a stochastic trend), or evolves with a decreasing trend (a deterministic trend).
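A minimal sketch of such a σ-convergence check on a simulated panel (all numbers invented) computes the cross-sectional standard deviation of log income per capita year by year and looks at its trend:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2001)
n_regions = 30

# Simulate log incomes whose cross-sectional spread shrinks over time.
initial = rng.normal(8.0, 0.5, n_regions)
log_income = np.array([8.0 + (initial - 8.0) * np.exp(-0.02 * (y - 1950))
                       + rng.normal(0, 0.01, n_regions) for y in years])

dispersion = log_income.std(axis=1)  # sigma_t: one dispersion value per year

# A simple diagnostic: the slope of a linear trend fitted to sigma_t.
slope = np.polyfit(years, dispersion, 1)[0]
print("trend in dispersion per year:", round(slope, 5))  # negative => sigma-convergence
```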
In conclusion, it can be said that the notions of β- and σ-convergence are interdependent. The catching-up process (β-convergence) contributes to reducing the cross-sectional dispersion of income (σ-convergence), but exogenous disturbances to relative growth rates tend to accentuate dispersion.
Claude DIEBOLT, for AFC.
In Autumn 2002.
Part IV - A New View of Evolutionary Economics, Game Theory and Cliometrics
Since the publication in 1982 of Nelson and Winter's work An Evolutionary Theory of Economic Change, the evolutionary approach to economic and social phenomena has been claimed by an increasing number of works and authors. The early 1990s thus marked the establishment of a scientific community centred on evolutionary principles and of a journal (the Journal of Evolutionary Economics was founded in 1991). This reveals deep dissatisfaction with neoclassical thinking which, with the conjunctural change in the early 1970s, displayed its inability to make a pertinent appraisal of the structural crisis affecting the economic and social system.
In fact, economic analyses couched in evolutionary terms result from a fundamental criticism of the general equilibrium theory. The criticism was itself inspired by a set of prior questions mainly concerning the coherence of the dominant thinking and its methodological and epistemological foundations. Thus, for the partisans of the approach, the general equilibrium theory is unable to describe economic movement. Both structural change phenomena and crisis situations in the system are denied or eliminated. In consequence, the present slump is handled only in an indirect manner because, in a pure economic model, all markets clear and disturbances can only be a violation of the basic hypotheses. The view proposed is that of a stable, timeless economic system, that is to say one in which any real possibility of dynamics and historical change is rejected. Furthermore, the economic system is perceived as an enclosed domain isolated from the social field. The general equilibrium theory thus gives a poor description of the complex social issues that build up into the conflict processes that occur during economic development. In fact, in the dominant reasoning, the relations established by the market between supposedly autonomous units are sufficient to ensure the overall coherence of individual actions. The latter is made possible by competitive reasoning reduced to commercial relations that are in turn regulated by a price system.
The ambition of the evolutionary project is therefore to develop a new interpretation of the dynamics of the socio-economic system, that is to say to bring about a real change in the way in which the economy is viewed. The evolutionary theory is essentially multiform. It is a theory of change. It is in the Schumpeterian tradition and aims at interpreting the long movements that affect economic and social activity.
The foundation of the evolutionary approach lies in the analogy with the principles of the biological theories of evolution. In economics, the point of departure can be summarised as follows: a varied set of individuals compete for a vital, scarce resource. The nature of the environment determines the qualities required of individuals in order to succeed in this competition. The individuals that have most developed these qualities increase their viability and their reproduction is enhanced, whereas, in contrast, the individuals who have least developed them are threatened. The aggregate characteristics of the population evolve endogenously through this selection mechanism. The process is maintained continuously by endogenous and exogenous changes in the selection criterion and by the mutations of certain individuals, which recreate microscopic heterogeneity and may form the vectors of new characteristics that carry a selection advantage.
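This selection mechanism can be sketched as replicator dynamics (the payoff matrix, shares and rates below are hypothetical, chosen only for illustration): traits whose payoff exceeds the population average expand, and a small mutation flow re-creates heterogeneity:

```python
import numpy as np

# Payoff matrix of a simple 2-trait population game (rows: own trait,
# columns: the trait encountered in the population). Trait 2 is fitter here.
payoff = np.array([[2.0, 0.5],
                   [3.0, 1.0]])

x = np.array([0.9, 0.1])   # initial shares of the two traits in the population
mutation_rate = 0.001      # small flow that recreates heterogeneity

for _ in range(200):
    fitness = payoff @ x               # expected payoff ("viability") of each trait
    average = x @ fitness              # population-average payoff
    x = x * fitness / average          # selection: above-average traits expand
    x = (1 - mutation_rate) * x + mutation_rate / 2  # mutation toward both traits
    x = x / x.sum()                    # keep shares summing to one

print("long-run trait shares:", np.round(x, 3))
```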
These principles also represent the fundamentals of evolutionary game theory. Following on from the survey carried out by A. GREIF in 1996, their application to cliometrics seems most stimulating, on the one hand for renewing the analysis of strategic interactions and on the other for the comparative study of growth paths in the nineteenth and twentieth centuries. This being said, before going on to this doubtless important but above all joint stage, we first seek to review the literature, that is to say to present the genesis and development of evolutionary game research.
Claude DIEBOLT, for AFC.
In Autumn 2003.