
Tuesday, 30 September 2014

Kinetic exchange models of markets


From Wikipedia, the free encyclopedia


The multi-agent dynamic models used in econometrics are very limited and very questionable. With Transfinancial Economics it could become possible to gain a far more accurate understanding of the economy in real time. http://www.p2pfoundation.net/Transfinancial_Economics

Kinetic exchange models are multi-agent dynamic models inspired by the statistical physics of energy distribution, which try to explain the robust and universal features of income/wealth distributions.
Understanding the distributions of income and wealth in an economy has been a classic problem in economics for more than a hundred years. Today it is one of the main branches of Econophysics.


Data and Basic tools

In 1897, Vilfredo Pareto first found a universal feature in the distribution of wealth. After that, with some notable exceptions, the field lay dormant for many decades, although accurate data accumulated over this period. Considerable investigation of real data during the last fifteen years (1995–2010) revealed[1] that the tail (typically the top 5 to 10 percent of agents in any country) of the income/wealth distribution indeed follows a power law. The rest (bulk) of the population (i.e., the low-income population) follows a different distribution, debated to be either Gibbs or log-normal.
Basic tools used in this type of modelling are probabilistic and statistical methods, mostly taken from the kinetic theory of statistical physics. Monte Carlo simulations often come in handy when solving these models.

Overview of the models

Since the distributions of income/wealth are the results of the interaction among many heterogeneous agents, there is an analogy with statistical mechanics, where many particles interact. This similarity was noted by Meghnad Saha and B. N. Srivastava in 1931[2] and thirty years later by Benoit Mandelbrot.[3] In 1986, an elementary version of the stochastic exchange model was first proposed by J. Angle.[4]
In the context of the kinetic theory of gases, such an exchange model was first investigated by A. Dragulescu and V. Yakovenko.[5][6] The main modelling effort has gone into introducing the concepts of savings[7][8] and taxation[9] into the setting of an ideal gas-like system. Basically, it is assumed that in the short run an economy remains conserved in terms of income/wealth, so that a law of conservation for income/wealth can be applied. Millions of such conservative transactions lead to a steady-state distribution of money (gamma-function-like in the Chakraborti-Chakrabarti model with uniform savings,[7] and a gamma-like bulk distribution ending in a Pareto tail[10] in the Chatterjee-Chakrabarti-Manna model with distributed savings[8]), to which the system converges. The distributions derived in this way closely resemble empirically observed income/wealth distributions.
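To make the exchange rule concrete, the following Python fragment is a minimal Monte Carlo sketch of a conservative trading step with a uniform savings propensity, along the lines of the Chakraborti-Chakrabarti rule described above. The function name, number of agents and steps, and the savings value are illustrative choices, not taken from the cited papers.

import numpy as np

def kinetic_exchange(n_agents=1000, n_steps=500_000, lam=0.5, seed=0):
    """Monte Carlo sketch of a conservative kinetic exchange with uniform savings.

    Every agent starts with one unit of money (total money is conserved).
    At each step a random pair trades: each keeps a fraction `lam` of its money
    (the savings propensity) and the pooled remainder is split at random.
    """
    rng = np.random.default_rng(seed)
    money = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        pool = (1.0 - lam) * (money[i] + money[j])
        eps = rng.random()
        money[i] = lam * money[i] + eps * pool
        money[j] = lam * money[j] + (1.0 - eps) * pool
    return money

m = kinetic_exchange()
print(f"total money: {m.sum():.1f}, richest agent: {m.max():.2f}")
# lam = 0 reproduces the exponential (Boltzmann-Gibbs) case; a uniform lam > 0
# gives the gamma-like shape; agent-specific savings give a Pareto tail.

Histogramming the returned array after enough steps to reach the steady state shows the gamma-like bulk mentioned above.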
Though this theory was originally derived from the entropy maximization principle of statistical mechanics, it has recently been shown[11] that the same distributions can be derived from the utility maximization principle as well, following a standard exchange model with a Cobb-Douglas utility function. The exact distributions produced by this class of kinetic models are known only in certain limits, and extensive investigations have been made of the mathematical structure of this class of models.[12][13] The general forms have not been derived so far.

Criticisms

This class of models has attracted criticism along many dimensions.[14] It has long been debated whether the distributions derived from these models represent income distributions or wealth distributions. The law of conservation for income/wealth has also been a subject of criticism.

References

  1. Chatterjee, A.; Yarlagadda, S.; Chakrabarti, B.K. (2005). Econophysics of Wealth Distributions. Springer-Verlag (Milan).
  2. Saha, M.; Srivastava, B.N. (1931). A Treatise on Heat. Indian Press (Allahabad). p. 105. (The page is reproduced in Fig. 6 of Sitabhra Sinha and Bikas K. Chakrabarti, "Towards a physics of economics", Physics News 39(2): 33-46, April 2009.)
  3. Mandelbrot, B.B. (1960). "The Pareto-Levy law and the distribution of income". International Economic Review 1: 69. doi:10.2307/2525289.
  4. Angle, J. (1986). "The surplus theory of social stratification and the size distribution of personal wealth". Social Forces 65 (2): 293–326. doi:10.2307/2578675. JSTOR 2578675.
  5. Dragulescu, A.; Yakovenko, V. (2000). "The statistical mechanics of money". European Physical Journal B 17: 723–729. doi:10.1007/s100510070114.
  6. Garibaldi, U.; Scalas, E.; Viarenga, P. (2007). "Statistical equilibrium in exchange games". European Physical Journal B 60: 241–246. doi:10.1140/epjb/e2007-00338-5.
  7. Chakraborti, A.; Chakrabarti, B.K. (2000). "Statistical mechanics of money: how savings propensity affects its distribution". European Physical Journal B 17: 167–170. doi:10.1007/s100510070173.
  8. Chatterjee, A.; Chakrabarti, B.K.; Manna, S.S. (2004). "Pareto law in a kinetic model of market with random saving propensity". Physica A 335: 155. doi:10.1016/j.physa.2003.11.014.
  9. Guala, S. (2009). "Taxes in a simple wealth distribution model by inelastically scattering particles". Interdisciplinary Description of Complex Systems 7 (1): 1–7.
  10. Chakraborti, A.; Patriarca, M. (2009). "Variational Principle for the Pareto Power Law". Physical Review Letters 103: 228701. doi:10.1103/PhysRevLett.103.228701.
  11. Chakrabarti, A.S.; Chakrabarti, B.K. (2009). "Microeconomics of the ideal gas like market models". Physica A 388: 4151–4158. doi:10.1016/j.physa.2009.06.038.
  12. During, B.; Matthes, D.; Toscani, G. (2008). "Kinetic equations modelling wealth distributions: a comparison of approaches". Physical Review E 78: 056103. doi:10.1103/physreve.78.056103.
  13. Cordier, S.; Pareschi, L.; Toscani, G. (2005). "On a kinetic model for a simple market economy". Journal of Statistical Physics 120: 253–277. doi:10.1007/s10955-005-5456-0.
  14. Gallegati, Mauro; Keen, Steve; Lux, Thomas; Ormerod, Paul (2006). "Worrying Trends in Econophysics". Physica A 371: 1–6. doi:10.1016/j.physa.2006.04.029.


Wednesday, 19 June 2013

What is Cliometrics ?


Blogger Ref Link  http://www.p2pfoundation.net/Transfinancial_Economics

 

 

Part I - What is Cliometrics ?
The aim of this note is to arrive at precise notions concerning the subject-matter of cliometrics.
Cliometric analysis is foremost a theoretical approach. Great emphasis is placed on developing a coherent and consistent theoretical model that will provide the basis for interpreting historical economic and social phenomena. Cliometric models enable a better understanding of the real world. Because economic and social processes are complex, a thorough understanding of the underlying forces and interrelationships is generally impossible. Models break up phenomena into more manageable portions by abstracting those variables that are believed to be a significant influence on choice and subjecting them to deductive reasoning based on a set of accepted axioms. Logical conclusions are then derived which must be translated into propositions about the real world. These propositions or predictions must then be compared to actual behavior and experience, either by observation or statistical methods.
In cliometrics there are two modes of discourse: positive (what is) and normative (what ought to be). To this I would add a third: descriptive analysis. The success of positive, normative and descriptive theory is to be judged by different criteria, and hence they are not susceptible to the same criticisms. Critics of the cliometric approach often confuse these three modes of analysis, resulting in much ill-conceived criticism. Here a very simplified account of cliometric methodology is given.

Positive cliometrics

Positive cliometrics is the empirical branch of the discipline. It seeks to generate a set of testable, that is potentially refutable, predictions that can be verified against the empirical evidence. A positive cliometric model is a meaningful model if it is both correct and useful. It is correct if it is internally consistent; it is useful if it focuses on a significant influence on choice. A meaningful model is thus one that generates predictions to which behavior conforms more frequently than those generated by some alternative competing theory. If the model is successful in predicting, then the negative judgment can be made that the model has not been falsified.
Positive cliometric analysis is used to make qualitative predictions and to organize data for the testing of these predictions. It is predictive and empirical. The predictions of positive cliometric models must be interpreted with some care. First, such models only establish partial relationships. For example, one of the most common predictions in cliometrics is the inverse relationship between the price of a good and the quantity demanded. However, this statement must be read with an important caveat: the caveat of ceteris paribus. The prediction states that in practice the quantity demanded will fall as the price increases only if all other factors affecting demand, such as income, tastes, and the relative prices of other goods, remain constant. Thus the predictions of positive cliometric models are in the nature of conditional statements (if A then B, given C), but B may never be observed to occur because the other influences (C) have also changed. Secondly, since positive cliometric models only deal with partial relationships, they do not imply that other factors, economic and non-economic, are of no, or lesser, importance in explaining behavior. A cliometrician may argue, for example, that people will respond to cost pressures (such as liability for damages) in the care they take in an activity which places the safety of others at risk, and he may empirically establish this proposition. But this finding does not naturally lead to the conclusion that pecuniary incentives are the only, or even the best, means of achieving an increase in the level of safety.

Normative cliometrics

Normative cliometrics is the ethical branch of cliometrics, concerned with efficiency, distributive and social justice, and with prescribing corrective measures to improve social welfare. Normative propositions cannot, as a matter of fact, be verified, nor are they in principle verifiable. Value judgments cannot be tested to see whether they are true or false; they are simply acceptable or unacceptable.

Descriptive cliometrics

Much cliometric analysis does not fall neatly into either the positive or the normative category. Indeed, the bulk of cliometrics is designed neither to generate testable predictions nor to determine the social desirability of a set of policies. It is abstract theory of economic and social problems whose function is to generate logical deductions and to describe economic and social phenomena in a historical perspective. In this perspective, it is useful to distinguish a third (though not exhaustive) category of cliometric analysis: descriptive cliometrics. Descriptive cliometrics attempts to model and analyse historical processes and to describe the economic and social influences that affect them. It is thus based on assumptions that are more or less realistic and therefore subject to empirical verification. Much of the literature by cliometricians falls into this category. It seeks to provide a comprehensive model of history based on economics and econometrics.
Although there are a number of historical economic schools, the dominant approach used in cliometrics is neoclassical economics. The main ingredients of this approach are outlined here.

Methodological Individualism

Neoclassical economics builds on the postulate of methodological individualism: the view that social theories must be based on the attitudes and behavior of individuals. The basic unit of analysis is thus the rational individual, and the behavior of groups is assumed to be the outcome of the decisions taken by the individuals who compose them. Neoclassical economics is, in other words, individualistic economics based on the behavioral model of rational choice.

Maximization Principle

Economic man or woman is assumed to be a self-interested egotist who maximizes his or her utility. The assumptions of utility (and profit) maximization, or economic rationality as they are sometimes referred to, have given rise to much criticism and confusion. When an economist says that an individual is acting rationally or maximizing his utility, he is saying no more than that the individual is making consistent choices, and that he chooses the preferred alternative(s) from those available to him. That is, it is assumed the individual is the best judge of his own welfare (the notion of consumer sovereignty). The economic approach does not contend that all individuals are rational, nor that these assumptions are necessarily realistic. Rather, economic man or woman is some weighted average of the individuals under study in which the extremes in behavior are evened out. The theory allows for irrationality but argues that groups of individuals behave as if their members are rational. Also, the utility-maximizing postulate does not assert that individuals consciously calculate the costs and benefits of all actions, only that their behavior can be explained as if they did so.

Market and Prices

The final ingredient of the cliometric approach concerns the concepts of a market and price. Even in areas where there is not an explicit market, the cliometric approach will often study the subject by analogy with the market concepts of supply, demand and price.
Claude DIEBOLT, for AFC.
In Autumn 2002.


 

Part II - What is retrospective growth accounting ?

Checking theoretical hypotheses on economic growth always requires new statistical sets. The method of quantitative history launched in the 1960s by S. Kuznets, J. Marczewski and F. Perroux is essential for reaching this aim. It consists of assembling historical facts in homogeneous, comparable time units in order to measure changes over intervals of time (generally annually). The advantage of this method is that the moment at which the observer's choice operates is shifted. Instead of acting while observing the reality to be described, the observer operates during the construction of the reference system used to record facts rendered conceptually homogeneous. This methodology should allow for the empirical verification or rejection of the initial hypotheses on which the pattern of theoretical interpretation hinges.
However, a map is not the area it plots, and care must be taken not to confuse reality and its description. The approach is not applicable to isolated historical events. It is used to describe the history of the masses, but is not sensitive to the history of the heroes. It is only an image of reality and does not draw all of reality's contours. Quantitative history does not aim at replacing traditional descriptive history. On the contrary, the two forms of historical investigation are strictly complementary and hence indispensable for a better knowledge of the past. The application of quantitative methods can nevertheless profoundly renew the terms that have progressively become legitimate. The great merit of statistical formalism is that it allows for the examination of the logical and quantitative consequences of historical proposals, which could not be obtained by a discussion based on literary documents and the like.
The proposed method has two advantages. Firstly, it is of immediate practical interest as it is an original reconstruction of the process of growth. It is also of theoretical interest as it provides better knowledge of the mechanisms governing the long development of the economic system. We are nevertheless aware that the statistical work provides only the quantitative aspect of changes in structure. Although it is important, it is not sufficient to provide the complete picture of the sequence of facts.
The decisive role of deductive theories in empirical research must therefore be stressed. Starting with a general idea, they attempt to identify symptoms in reality by means of chronological series of statistics. Like the Frankfurter Gesellschaft für Konjunkturforschung (founded in 1926 by E. Altschul), we recommend an economic semiology that would bring together the results of statistical research and the deductive reasoning of theory. There is no conflict or compartmentalisation between empirical research and deductive theory; on the contrary, these disciplines are only truly valuable individually if they draw on each other's results. The validity of theoretical hypotheses thus depends both on their external coherence, i.e. on their conformity with real facts, and on their internal coherence, i.e. their ability to provide one or more solutions to the questions raised. The general equilibrium theory is extremely significant in this respect. It has given rise to much work concerning the validity of its bases and of the teachings drawn from it. In contrast, research on the existence of a solution has been carried out by only a handful of specialists. Here, it is still possible to discuss the coherence of a theoretical hypothesis, but it is more important to submit it to empirical verification.

1. The scope of quantitative history

Quantitative history aims at drawing up a general macroscopic synthesis by constructing models integrated at the national level with the prospect of possible links between several national models. This requires a succession of research operations, the most important of which is the creation of chronological data series covering the whole of the period under consideration.
The initial work consists of sorting dozens or even hundreds of volumes of old statistics. The figures are transcribed and reclassified using a pre-established model (e.g. using national budget items). This meticulous research is a question of doing rather than learning and is carried out in the silence of archives and libraries. The classifications used in statistical documents change according to the year and the recording administration. It is necessary at all times to exert some judgement to decide on the matching of the original item and the item under which the figure is to be transcribed.

2. The quantitative history method

The quantitative history method applies when a large number of causes and their complexity and intermingling make the use of experimental methods impossible. The moment at which the observer's choice intervenes is shifted. Instead of being set during observation of the reality to be described, it applies during the elaboration of the system of reference used to count facts thus rendered conceptually homogeneous.
The definition just given essentially consists of three main phases:
  • collection of documents;
  • analysis of the data collected;
  • interpretation of the results.
The first phase is mainly descriptive in order to prepare the real work of the researcher by co-ordinating the collected information. It is a preliminary task required for a serious analysis. All the documents related to the field of study are assembled. The data from archived documents must not be merely recorded, but have to be subjected to intelligent, perspicacious criticism. Some are discarded when considered unreliable; others must be corrected when their interpretation reveals sources of error. Finally, estimates may be required to fill gaps. The final results are assembled in a statistical table.
Having arrived at this mass of numerical information, the researcher must then put the figures in a logical order and classify them according to a previously established nomenclature. The data are reduced, substituted by a small number of representative series drawn by calculation from the ordered mass of numerical data.
The third and final interpretation phase consists of drawing conclusions from the analysis. It is a decisive stage as here an attempt is made to explain the observations. There is a vast scope of interpretative possibilities, ranging from merely checking hypotheses to forecasting future developments.
One has to be aware that the conclusions drawn from the analysis of the statistical sets contain a degree of uncertainty. Also the degree of the validity of the general observations has to be considered. Quantitative history is a long, slow process that is never free from difficulties. There are hesitations to overcome and problems to solve at every stage, as each new research operation is a new beginning and there are decisions that can only be taken alone.
Claude DIEBOLT, for AFC.
In Autumn 2002.
 

 

Part III - What is Convergence ?

The notions of rate of convergence (β) and evolution of dispersion (σ) have been widely used in recent studies of economic growth. The notion of β-convergence refers to the rate at which the income or production per capita of a poor region tends to catch up with that of a rich region. In other words, β-convergence is observed if the initially poor economic units in a set of cross-sectional data tend to grow more rapidly than the rich units. If the β coefficient is calculated without taking into account the characteristics that determine the levels of long-term equilibrium of economies (such as the savings ratio, technologies and institutions), this is absolute convergence. The conditional convergence hypothesis is applied when differences in long-term equilibrium values are taken into account. Two economies may thus have different savings ratios, reflecting differences in their time preference rates. In this case, the traditional neoclassical framework forecasts that the two economies should display the same rate of equilibrium growth but that the economy with the higher savings ratio would display a higher income per capita in the stationary state. In this respect, the conditional convergence notion refers to the hypothesis according to which the economy that is initially the farthest from its own equilibrium trajectory will experience more rapid growth.
Estimation of the absolute convergence coefficient is performed using a non-linear regression based on cross-sectional data in the following form:

\frac{1}{T-t}\,\ln\!\left(\frac{Y_{i,T}}{Y_{i,t}}\right) = B - \frac{1-e^{-\beta(T-t)}}{T-t}\,\ln(Y_{i,t}) + u_{i,t}

in which t and T are respectively the first and last years of the observation period and i is an economic unit. Y is the per capita economic indicator of unit i and u a residual term. The left-hand side of the equation is therefore an approximation of the annual growth of the economic unit i between years t and T. In the context of an analysis of absolute convergence, B is a constant for all the economic units. Its value is determined by the growth of the economic indicators at equilibrium. Furthermore, the coefficient (1-e^{-β(T-t)})/(T-t) makes it possible to allow for the share of growth that can be accounted for by the initial level of the economic indicator. The gap between the various economic units decreases exponentially at rate β from t=0 to T. A situation of convergence (a positive β coefficient) implies a negative relation between the average growth rate during the observation period and the logarithm of the initial level of the economic indicator per capita. The greater the value of β, the more rapidly the economic indicator per capita in the poor region approaches the level observed in the rich region.
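As a concrete illustration, the following Python sketch fits the non-linear regression above to a purely hypothetical cross-section of five economic units using scipy's curve_fit; the data, the 30-year span and the starting values are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

SPAN = 30.0  # years between the first (t) and last (T) observation; illustrative

def growth_model(log_y0, B, beta):
    """Average annual growth implied by the absolute beta-convergence equation."""
    return B - (1.0 - np.exp(-beta * SPAN)) / SPAN * log_y0

# Hypothetical cross-section: initial per-capita levels and observed average growth
log_y0 = np.log(np.array([1500.0, 3000.0, 6000.0, 12000.0, 24000.0]))
growth = np.array([0.031, 0.027, 0.022, 0.018, 0.014])

(B_hat, beta_hat), _ = curve_fit(growth_model, log_y0, growth, p0=(0.2, 0.02))
print(f"B = {B_hat:.4f}, beta = {beta_hat:.4f}")

A positive estimated β, as in this invented sample, is read as evidence of absolute convergence.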
The second convergence notion, referred to as σ-convergence, is based on the analysis of time series. It hinges on the movement of a dispersion index of an economic indicator per capita. The dispersion index can be measured in several ways; the standard deviation of the log of income per capita is often used. If the dispersion of income per capita between a set of economies tends to decrease over time, σ-convergence is observed. In practice, σ-convergence is observed if the dispersion index time series is integrated of order one and displays a negative drift (a stochastic trend) or evolves with a decreasing trend (a deterministic trend).
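A minimal sketch of the σ-convergence check on invented data: compute the standard deviation of log income per capita year by year and look at the sign of its trend. The panel dimensions and the shrinking dispersion are assumptions made purely for illustration; a fuller treatment would test the order of integration and drift of the series, as described above.

import numpy as np

# Hypothetical panel: per-capita income for 25 regions over 40 years,
# generated so that log-income dispersion shrinks from about 0.8 to 0.5
rng = np.random.default_rng(1)
scale = np.linspace(0.8, 0.5, 40)[:, None]
incomes = np.exp(rng.normal(loc=9.0, scale=scale, size=(40, 25)))

sigma = np.log(incomes).std(axis=1)                  # dispersion of log income per year
slope = np.polyfit(np.arange(sigma.size), sigma, 1)[0]
print("sigma-convergence observed" if slope < 0 else "no sigma-convergence",
      f"(trend per year: {slope:+.4f})")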
In conclusion, it can be said that the notions of β- and σ-convergence are interdependent. The catching-up process (β-convergence) contributes to reducing the cross-sectional dispersion of income (σ-convergence), but exogenous disturbances to relative growth rates tend to accentuate dispersion.
Claude DIEBOLT, for AFC.
In Autumn 2002.
 

 

Part IV - A New View of Evolutionary Economics, Game Theory and Cliometrics

Since the publication in 1982 of Nelson and Winter's work An Evolutionary Theory of Economic Change, the evolutionary approach to economic and social phenomena has been invoked in an increasing number of works and by an increasing number of authors. The early 1990s thus marked the establishment of a scientific community centred on evolutionary principles and of a journal (the Journal of Evolutionary Economics was founded in 1991). This reveals deep dissatisfaction with neoclassical thinking which, with the conjunctural change in the early 1970s, displayed its inability to make a pertinent appraisal of the structural crisis affecting the economic and social system.
In fact, economic analyses couched in evolutionary terms result from a fundamental criticism of the general equilibrium theory. The criticism was itself informed by a set of prior questions mainly concerning the coherence of the dominant thinking and its methodological and epistemological foundations. Thus, for the partisans of the approach, the general equilibrium theory is unable to describe economic movement. Both structural change phenomena and crisis situations in the system are denied or eliminated. In consequence, the present slump is handled only in an indirect manner because, in a pure economic model, all markets balance out and disturbances can only be a violation of the basic hypotheses. The view proposed is that of a stable, timeless economic system, that is to say one in which essentially any possibility of dynamics and historical change is rejected. Furthermore, the economic system is perceived as an enclosed domain isolated from the social field. The general equilibrium theory thus gives a poor description of the complex social issues that build up the conflict processes occurring during economic development. In fact, in the dominant reasoning, the relations established by the market between supposedly autonomous units are sufficient to ensure the overall coherence of individual actions. The latter is made possible by competitive reasoning reduced to commercial relations that are in turn regulated by a price system.
The ambition of the evolutionary project is therefore to develop a new interpretation of the dynamics of the socio-economic system, that is to say to implement a real change in the way in which the economy is viewed. The evolutionary theory is essentially multiform. It is a theory of change. It is in the Schumpeterian tradition and aimed at interpreting the long movements that affect economic and social activity.
The foundation of the evolutionary approach lies in the analogy with the principles of the biological theories of evolution. In economics, the point of departure can be summarised as follows: a varied set of individuals compete for a vital, rare resource. The nature of the environment determines the qualities required of individuals in order to succeed in this competition. The individuals that have most developed these qualities increase their viability and their reproduction is enhanced, whereas, in contrast, the individuals who have least developed them are threatened. The aggregate characteristics of the population evolve endogenously through this selection mechanism. The process is maintained continuously by endogenous and exogenous changes in the selection criterion and by the mutations of certain individuals, recreating microscopic heterogeneity and possibly forming the vectors of new characteristics with an advantage for selection.
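The selection-mutation mechanism just described can be written down in a few lines. The following Python fragment is a highly stylized illustration (a discrete replicator step plus random mutation), not a reproduction of any particular evolutionary model from the literature; the fitness values, mutation rate and number of generations are arbitrary.

import numpy as np

def evolve(fitness, shares, generations=200, mutation_rate=0.01, seed=0):
    """Selection-mutation sketch: the share of each strategy grows in proportion
    to its fitness advantage, while mutation keeps re-injecting heterogeneity."""
    rng = np.random.default_rng(seed)
    shares = np.asarray(shares, dtype=float)
    shares /= shares.sum()
    for _ in range(generations):
        average_fitness = shares @ fitness
        shares = shares * fitness / average_fitness    # replicator step: selection
        mutants = rng.dirichlet(np.ones(len(shares)))  # random new variants
        shares = (1 - mutation_rate) * shares + mutation_rate * mutants
    return shares

print(evolve(fitness=np.array([1.0, 1.2, 0.9]), shares=[1, 1, 1]))

The strategy with the highest fitness comes to dominate the population, while the mutation term keeps the other variants from disappearing entirely.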
These principles also represent the fundamentals of evolutionary game theory. Following on from the survey performed by A. GREIF in 1996, their application to cliometrics seems most stimulating, on the one hand for renewing the analysis of strategic interactions and on the other for the comparative study of growth paths in the nineteenth and twentieth centuries. This being said, before going on to this doubtless important but above all joint stage, we first seek to review the literature, that is to say to present the genesis and development of evolutionary game research.

Claude DIEBOLT, for AFC. 
In Autumn 2003.






Friday, 30 November 2012

Complexity Economics

Complexity economics is the application of complexity science to the problems of economics. It uses computer simulations to gain insight into economic dynamics, and avoids the assumption that the economy is a system in equilibrium.[1]


Models

The "nearly archetypal example" is an artificial stock market model created by the Santa Fe Institute in 1989.[2] The model shows two different outcomes, one where "agents do not search much for predictors and there is convergence on a homogeneous rational expectations outcome" and another where "all kinds of technical trading strategies appearing and remaining and periods of bubbles and crashes occurring".[2]
Another line of work has studied the prisoner's dilemma, for example on a network where agents play only their nearest neighbors, or one where agents can make mistakes from time to time and "evolve strategies".[2] In these models, the results show a system which displays "a pattern of constantly changing distributions of the strategies".[2]
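As an illustration of this kind of model, the following Python fragment is a minimal Nowak-May-style lattice prisoner's dilemma in which agents play their four nearest neighbours, imitate the best-scoring neighbour and occasionally make mistakes. The payoff parameter, grid size and error rate are illustrative, and the sketch is not one of the specific models surveyed in the cited work.

import numpy as np

def spatial_pd(size=50, rounds=100, b=1.8, error=0.01, seed=0):
    """Lattice prisoner's dilemma: each cell cooperates (1) or defects (0),
    collects payoffs against its four neighbours, then copies the strategy of
    its best-scoring neighbour, occasionally making a random mistake."""
    rng = np.random.default_rng(seed)
    strat = rng.integers(0, 2, size=(size, size))
    shifts = [(axis, step) for axis in (0, 1) for step in (1, -1)]  # 4 neighbours
    for _ in range(rounds):
        payoff = np.zeros((size, size))
        for axis, step in shifts:
            nb = np.roll(strat, step, axis=axis)
            # a cooperator earns 1 against a cooperating neighbour, a defector earns b
            payoff += np.where(strat == 1, nb * 1.0, nb * b)
        best_strat, best_pay = strat.copy(), payoff.copy()
        for axis, step in shifts:
            nb_pay = np.roll(payoff, step, axis=axis)
            nb_strat = np.roll(strat, step, axis=axis)
            better = nb_pay > best_pay
            best_strat = np.where(better, nb_strat, best_strat)
            best_pay = np.where(better, nb_pay, best_pay)
        mistakes = rng.random((size, size)) < error
        strat = np.where(mistakes, 1 - best_strat, best_strat)
    return strat.mean()  # share of cooperators left on the grid

print(f"cooperator share after 100 rounds: {spatial_pd():.2f}")

Tracking the strategy grid over the rounds shows constantly shifting patches of cooperators and defectors, rather than convergence to a single strategy.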
More generally, complexity economics models are often used to study how non-intuitive results at the macro level of a system can emerge from simple interactions at the micro level. This avoids the assumptions of the representative-agent method, which treats outcomes in collective systems as the simple sum of the rational actions of the individuals.

Measures

Harvard economist Ricardo Hausmann and MIT physicist Cesar A. Hidalgo introduced a spectral method to measure the complexity of a country's economy by inferring it from the structure of the network connecting countries to the products that they export. The measure combines information on a country's diversity, which is positively correlated with a country's productive knowledge, with measures of a product's ubiquity (the number of countries that produce or export the product).[3][4] This concept, known as the "Product Space", has been further developed by the Harvard-MIT Observatory of Economic Complexity, which released the Atlas of Economic Complexity[4] in 2011.
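The core of the calculation can be sketched as follows: build the binary country-product matrix, read off diversity and ubiquity, and iterate Hidalgo and Hausmann's "method of reflections". The toy matrix below and the final standardization into a country ranking are illustrative assumptions, not the exact procedure behind the published index.

import numpy as np

# Toy binary country-by-product matrix M[c, p]: 1 if country c exports product p
# (invented data; the real calculation starts from trade data filtered by
# revealed comparative advantage)
rng = np.random.default_rng(0)
M = (rng.random((8, 20)) < 0.4).astype(float)
M = M[M.sum(axis=1) > 0, :]            # drop countries that export nothing
M = M[:, M.sum(axis=0) > 0]            # drop products exported by nobody

diversity = M.sum(axis=1)              # k_{c,0}: number of products a country exports
ubiquity = M.sum(axis=0)               # k_{p,0}: number of countries exporting a product

# Method of reflections: each step averages the other side's previous values
k_c, k_p = diversity.copy(), ubiquity.copy()
for _ in range(20):
    k_c, k_p = (M @ k_p) / diversity, (M.T @ k_c) / ubiquity

complexity_score = (k_c - k_c.mean()) / k_c.std()  # standardized country scores (sketch)
print(np.round(complexity_score, 2))

Countries that export many products which few other countries export end up with the higher scores, which is the intuition behind the diversity-ubiquity combination described above.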

Relevance

The Economic Complexity Index (ECI) introduced by Hausmann and Hidalgo[3][4] is highly predictive of future GDP per capita growth. In the Atlas,[4] Hausmann, Hidalgo et al. show that the ability of the ECI to predict future GDP per capita growth is between 5 and 20 times greater than that of the World Bank's measure of governance, the World Economic Forum's (WEF) Global Competitiveness Index (GCI), and standard measures of human capital such as years of schooling and cognitive ability.[5][6]

Features

Brian Arthur, Steven N. Durlauf, and David A. Lane describe several features of complex systems that deserve greater attention in economics.[7]
  1. Dispersed interaction—The economy has interaction between many dispersed, heterogeneous, agents. The action of any given agent depends upon the anticipated actions of other agents and on the aggregate state of the economy.
  2. No global controller—Controls are provided by mechanisms of competition and coordination between agents. Economic actions are mediated by legal institutions, assigned roles, and shifting associations. No global entity controls interactions. Traditionally, a fictitious auctioneer has appeared in some mathematical analyses of general equilibrium models, although nobody claimed any descriptive accuracy for such models. Traditionally, many mainstream models have imposed constraints, such as requiring that budgets be balanced, and such constraints are avoided in complexity economics.
  3. Cross-cutting hierarchical organization—The economy has many levels of organization and interaction. Units at any given level (behaviors, actions, strategies, products) typically serve as "building blocks" for constructing units at the next higher level. The overall organization is more than hierarchical, with many sorts of tangling interactions (associations, channels of communication) across levels.
  4. Ongoing adaptation—Behaviors, actions, strategies, and products are revised frequently as the individual agents accumulate experience.[8]
  5. Novelty niches—Such niches are associated with new markets, new technologies, new behaviors, and new institutions. The very act of filling a niche may provide new niches. The result is ongoing novelty.
  6. Out-of-equilibrium dynamics—Because new niches, new potentials, new possibilities, are continually created, the economy functions without attaining any optimum or global equilibrium. Improvements occur regularly.

Contemporary trends in economics

Complexity economics has a complex relation to previous work in economics and other sciences, and to contemporary economics. Complexity-theoretic thinking about economic problems has been present since the inception of economics as an academic discipline. Research has shown that no two separate micro-events are completely isolated,[9] and that relationships between them give rise to macroeconomic structure. However, the relationship does not run in only one direction; there is a reciprocal influence when feedback is in operation.[10]
Complexity economics has been applied to many fields.

Intellectual predecessors

Complexity economics draws inspiration from behavioral economics, Marxian economics, institutional economics/evolutionary economics, Austrian economics and the work of Adam Smith.[11] It also draws inspiration from other fields, such as statistical mechanics in physics, and evolutionary biology. Some of the 20th century intellectual background of complexity theory in economics is examined in Alan Marshall (2002) The Unity of Nature, Imperial College Press: London.

Applications

The theory of complex dynamic systems has been applied in diverse fields in economics and other decision sciences. These applications include capital theory,[12][13] game theory,[14] the dynamics of opinions among agents composed of multiple selves,[15] and macroeconomics.[16] In voting theory, the methods of symbolic dynamics have been applied by Donald G. Saari.[17] Complexity economics has attracted the attention of historians of economics.[18]

Complexity economics as mainstream, but non-orthodox

According to Colander (2000), Colander, Holt & Rosser (2004), and Davis (2008) contemporary mainstream economics is evolving to be more "eclectic",[19][20] diverse,[21][22][23] and pluralistic.[24] Colander, Holt & Rosser (2004) state that contemporary mainstream economics is "moving away from a strict adherence to the holy trinity---rationality, selfishness, and equilibrium", citing complexity economics along with recursive economics and dynamical systems as contributions to these trends.[25] They classify complexity economics as now mainstream but non-orthodox.[26][27]

Criticism

In 1995-1997 publications, Scientific American journalist John Horgan "ridiculed" the movement as being the fourth C among the "failed fads" of "complexity, chaos, catastrophe, and cybernetics".[2] In 1997, Horgan wrote that the approach had "created some potent metaphors: the butterfly effect, fractals, artificial life, the edge of chaos, self organized criticality. But they have not told us anything about the world that is both concrete and truly surprising, either in a negative or in a positive sense".[2][28][29]
Rosser "granted" Horgan "that it is hard to identify a concrete and surprising discovery (rather than "mere metaphor") that has arisen due to the emergence of complexity analysis" in the discussion journal of the American Economic Association, the Journal of Economic Perspectives.[2] Surveying economic studies based on complexity science, Rosser wrote that the findings, rather than being surprising, confirmed "already-observed facts".[2] Rosser wrote that there has been "little work on empirical techniques for testing dispersed agent complexity models".[2] Nonetheless, Rosser wrote that "there is a strain of common perspective that has been accumulating as the four C's of cybernetics, catastrophe, chaos and complexity emerged, which may now be reaching a critical mass in terms of influencing the thinking of economists more broadly".[2]


Notes

  1. Beinhocker, Eric D. The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. Boston, Massachusetts: Harvard Business School Press, 2006.
  2. Rosser, J. Barkley, Jr. "On the Complexities of Complex Economic Dynamics". Journal of Economic Perspectives, V. 13, N. 4 (Fall 1999): 169-192.
  3. Hidalgo, Cesar A.; Hausmann, Ricardo (2009). "The Building Blocks of Economic Complexity". PNAS 106 (26): 10570–10575. doi:10.1073/pnas.0900943106. PMC 2705545. PMID 19549871. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705545/.
  4. Hausmann, R.; Hidalgo, C.A.; et al. (2011). The Atlas of Economic Complexity. Cambridge, MA: Puritan Press. ISBN 0615546625. http://atlas.media.mit.edu/book/.
  5. "Complexity matters". The Economist. 27 October 2011. http://www.economist.com/blogs/freeexchange/2011/10/buidling-blocks-economic-growth.
  6. "Diversity Training". The Economist. 4 February 2010. http://www.economist.com/node/15452697?story_id=15452697.
  7. Arthur, Brian; Durlauf, Steven; Lane, David A. (1997). "Introduction: Process and Emergence in the Economy". The Economy as an Evolving Complex System II. Reading, Mass.: Addison-Wesley. http://www.santafe.edu/~wbarthur/Papers/ADLIntro.html. Retrieved 2008-08-26.
  8. Shiozawa, Y. (2004). "Evolutionary Economics in the 21st Century: A Manifest". Evolutionary and Institutional Economics Review 1 (1): 5–47.
  9. Albert-Laszlo Barabasi, explaining (at 27:07) that no two events are completely isolated, in a BBC documentary on six degrees of separation ("Unfolding the science behind the idea of six degrees of separation"). http://topdocumentaryfilms.com/six-degrees-of-separation/. Retrieved 11 June 2012.
  10. Mitleton-Kelly, Eve. "Ten Principles of Complexity & Enabling Infrastructures", p. 20. Complexity Research Programme, London School of Economics. http://psych.lse.ac.uk/complexity/Papers/Ch2final.pdf. Retrieved 1 June 2012.
  11. Colander, David (March 2008). "Complexity and the History of Economic Thought". http://sandcat.middlebury.edu/econ/repec/mdl/ancoec/0804.pdf. Retrieved 29 July 2012.
  12. Rosser, J. Barkley, Jr. "Reswitching as a Cusp Catastrophe". Journal of Economic Theory, V. 31 (1983): 182-193.
  13. Ahmad, Syed. Capital in Economic Theory: Neo-classical, Cambridge, and Chaos. Brookfield: Edward Elgar (1991).
  14. Sato, Yuzuru; Akiyama, Eizo; Farmer, J. Doyne. "Chaos in learning a simple two-person game". Proceedings of the National Academy of Sciences of the United States of America, V. 99, N. 7 (2 Apr. 2002): 4748-4751.
  15. Krause, Ulrich. "Collective Dynamics of Faustian Agents", in Economic Theory and Economic Thought: Essays in Honour of Ian Steedman (ed. by John Vint et al.). Routledge: 2010.
  16. Flaschel, Peter; Proano, Christian R. (2009). "The J2 Status of 'Chaos' in Period Macroeconomics Models". Studies in Nonlinear Dynamics & Econometrics, V. 13, N. 2. http://www.bepress.com/snde/vol13/iss2/art2/.
  17. Saari, Donald G. Chaotic Elections: A Mathematician Looks at Voting. American Mathematical Society (2001).
  18. Bausor, Randall. "Qualitative dynamics in economics and fluid mechanics: a comparison of recent applications", in Natural Images in Economic Thought: Markets Read in Tooth and Claw (ed. by Philip Mirowski). Cambridge: Cambridge University Press (1994).
  19. "Economists today are not neoclassical according to any reasonable definition of the term. They are far more eclectic, and concerned with different issues than were the economists of the early 1900s, whom the term was originally designed to describe." Colander (2000, p. 130)
  20. "Modern economics involves a broader world view and is far more eclectic than the neoclassical terminology allows." Colander (2000, p. 133)
  21. "In our view, the interesting story in economics over the past decades is the increasing variance of acceptable views..." Colander, Holt & Rosser (2004, p. 487)
  22. "In work at the edge, ideas that previously had been considered central to economics are being modified and broadened, and the process is changing the very nature of economics." Colander, Holt & Rosser (2004, p. 487)
  23. "When certain members of the existing elite become open to new ideas, that openness allows new ideas to expand, develop, and integrate into the profession... These alternative channels allow the mainstream to expand, and to evolve to include a wider range of approaches and understandings... This, we believe, is already occurring in economics." Colander, Holt & Rosser (2004, pp. 488–489)
  24. "despite an increasing pluralism on the mainstream economics research frontier..." Davis (2008, p. 353)
  25. Colander, Holt & Rosser (2004, p. 485)
  26. "The second (Santa Fe) conference saw a very different outcome and atmosphere than the first. No longer were mainstream economists defensively adhering to general equilibrium orthodoxy... By 1997, the mainstream accepted many of the methods and approaches that were associated with the complexity approach." Colander, Holt & Rosser (2004, p. 497). Colander, Holt & Rosser (2004, pp. 490–492) distinguish between orthodox and mainstream economics.
  27. Davis (2008, p. 354)
  28. Horgan, John. "From Complexity to Perplexity". Scientific American, June 1995, 272:6, 104-109.
  29. Horgan, John. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. Paperback ed., New York: Broadway Books, 1997.







The Blogger Ref Link http://www.p2pfoundation.net/Transfinancial_Economics