Thursday, 31 January 2013

Pop-Up Economics / Tim Harford


Tim Harford's new Radio 4 series Pop-Up Economics tells stories about fascinating people and ideas in economics. Among the stories he tells are those of Al Roth, who created a clearing-house for kidneys; the Cold War game theorist Thomas Schelling; and Bill Phillips, who he argues was the 'Indiana Jones of economics'.
Phillips worked as a busker, a gold miner and a crocodile hunter before studying at the London School of Economics, where he used a system of water pumps and valves to create the first working model of the British economy, or indeed of any economy.
Pop-Up Economics is broadcast on BBC Radio 4 on Wednesdays at 8.45pm from 16th Jan to 13th Feb. You can listen again online by downloading the podcast.



The following is some background info on Harford. He is essentially a populariser of economics and, of course, does not offer anything really new or important as far as economics per se goes. The Wikipedia entry may be of interest all the same. RS




From Wikipedia, the free encyclopedia
Tim Harford
Born: 1973
Alma mater: Brasenose College, Oxford[1]
Awards: Royal Statistical Society Excellence in Journalism, Bastiat Prize
Tim Harford (born 1973) is an English economist and journalist, residing in London.[2] He is the author of four economics books and writes the long-running Financial Times column "The Undercover Economist", syndicated in Slate magazine, which reveals the economic ideas behind everyday experiences. His newer column, "Since You Asked", offers a sceptical look at the news of the week.
Harford studied at Aylesbury Grammar School and then at Brasenose College, Oxford, gaining a BA in Philosophy, Politics and Economics[1] and then an MPhil in Economics in 1998. He joined the Financial Times in 2003 on a fellowship in commemoration of the business columnist Peter Martin. He continued to write his column after joining the International Finance Corporation in 2004, and re-joined the Financial Times as economics leader writer in April 2006. He is also a member of the newspaper's editorial board.
In October 2007, Harford replaced Andrew Dilnot on the BBC Radio 4 series More or Less. He is a visiting fellow at Nuffield College, Oxford.


Freakonomics


The following may be of some relevance and interest! RS


Steven D. Levitt is an economist. Stephen J. Dubner is a writer. They co-authored Freakonomics, a book about cheating teachers, bizarre baby names, self-dealing Realtors, and crack-selling mama’s boys. They figured it would sell about 80 copies. Instead, it has sold 4 million, in 35 languages. Then they wrote SuperFreakonomics, with stories about drunk walking, the economics of prostitution, and how to stop global warming. It hasn’t quite sold 4 million copies yet but it’s getting there. A lot of other stuff has happened, too. A blog. A radio show. A movie. Lectures. Even Jon Stewart — and Beauty and the Geek. This is the place where all that stuff continues to happen. Welcome to Freakonomics.com. (Data from the official site)


....
Which is more dangerous, a gun or a swimming pool?
What do schoolteachers and sumo wrestlers have in common?
How much do parents really matter?
These may not sound like typical questions for an economist to ask. But Steven D. Levitt is not a typical economist. He studies the riddles of everyday life—from cheating and crime to parenting and sports—and reaches conclusions that turn conventional wisdom on its head.

Freakonomics is a groundbreaking collaboration between Levitt and Stephen J. Dubner, an award-winning author and journalist. They set out to explore the inner workings of a crack gang, the truth about real estate agents, the secrets of the Ku Klux Klan, and much more. (ref: Amazon)





More on Freakonomics, courtesy of Wikipedia.

From Wikipedia, the free encyclopedia
Freakonomics: A Rogue Economist Explores the Hidden Side of Everything
Authors: Steven D. Levitt, Stephen J. Dubner
Country: United States
Language: English
Subjects: Economics, Sociology
Genre: Non-fiction
Publisher: William Morrow
Publication date: April 12, 2005
Media type: Hardback & Paperback
Pages: 336 pp (hardback edition)
ISBN: 0-06-123400-1 (Hardback), 0-06-089637-X (large print paperback)
OCLC Number: 73307236
Followed by: SuperFreakonomics
Freakonomics: A Rogue Economist Explores the Hidden Side of Everything is a 2005 non-fiction book by University of Chicago economist Steven Levitt and New York Times journalist Stephen J. Dubner. The book has been described as melding pop culture with economics.[1] By late 2009, it had sold over 4 million copies worldwide.[2]


Overview

The book is a collection of 'economic' articles written by Levitt, an expert who has already gained a reputation for applying economic theory to diverse subjects not usually covered by "traditional" economists; he does, however, accept the standard neoclassical microeconomic model of rational utility-maximization. In Freakonomics, Levitt and Dubner argue that economics is, at root, the study of incentives. Two examples of the book's approach follow.
One example of the authors' use of economic theory involves demonstrating the existence of cheating among sumo wrestlers. In a sumo tournament, all wrestlers in the top division compete in 15 matches and face demotion if they do not win at least eight of them. The sumo community is very close-knit, and the wrestlers at the top levels tend to know each other well. The authors looked at the final match, and considered the case of a wrestler with seven wins, seven losses, and one fight to go, fighting against an 8-6 wrestler. Statistically, the 7-7 wrestler should have a slightly below-even chance of winning, since the 8-6 wrestler is slightly better. However, the 7-7 wrestler actually wins around 80% of the time. Levitt uses this statistic and other data gleaned from sumo wrestling matches, along with the effect that allegations of corruption have on match results, to conclude that those who already have eight wins collude with those who are 7-7 and let them win, since they have already secured their own position for the following tournament. Despite the Japan Sumo Association's round condemnation of the claims following the book's publication in 2005, the 2011 Grand Tournament in Tokyo was cancelled, for the first time since 1946, because of allegations of match fixing.[3]
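As a rough illustration of the kind of comparison behind this claim, here is a toy Python sketch. The fair-play win probability, sample size, and seed are invented for the demonstration; this is not Levitt's data or code.

    import random

    # Toy model: under fair play, a 7-7 wrestler facing a slightly better
    # 8-6 opponent should win a bit less than half of the time.
    FAIR_WIN_PROB = 0.48    # assumed "slightly below even" chance
    OBSERVED_RATE = 0.80    # approximate rate reported in the book
    N_BOUTS = 1000          # invented sample size

    random.seed(0)
    fair_wins = sum(random.random() < FAIR_WIN_PROB for _ in range(N_BOUTS))
    print(f"simulated fair-play win rate:  {fair_wins / N_BOUTS:.2f}")
    print(f"win rate reported in the book: {OBSERVED_RATE:.2f}")
    # The gap between these two rates is the anomaly Levitt attributes to collusion.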
The authors also attempt to demonstrate the power of data mining. Many of their results emerge from Levitt's analysis of various databases and from asking the right questions. The authors posit that various incentives encourage teachers to cheat by assisting their students with multiple-choice high-stakes tests. Such cheating in the Chicago school system is inferred from a detailed analysis of students' answers to multiple-choice questions. Levitt first asks, "What would the pattern of answers look like if the teacher cheated?" The simple answer: difficult questions at the end of a section will be answered correctly more often than easy ones at the beginning.
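A minimal sketch of that detection idea follows, with hypothetical answer data, an invented "hard tail" length, and an invented suspicion threshold; Levitt's actual method also drew on year-to-year swings in test scores.

    from collections import Counter

    # Toy cheating detector: flag a classroom where many students share an
    # identical answer string on the hardest questions (hypothetical data).
    classroom = [        # each string is one student's answers to Q1..Q8
        "ACBDADCB",
        "ACBDADCB",
        "BCADADCB",      # differs early on, yet shares the hard tail "ADCB"
        "ACBDADCB",
    ]
    HARD_TAIL = 4        # assume the last four questions are the hard ones

    tails = Counter(answers[-HARD_TAIL:] for answers in classroom)
    tail, count = tails.most_common(1)[0]
    if count / len(classroom) > 0.5:    # invented suspicion threshold
        print(f"suspicious: {count}/{len(classroom)} students share the tail {tail}")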

Reappraisals

In Chapter 2 of Freakonomics, the authors wrote of their visit to folklorist Stetson Kennedy's Florida home, where the topic of Kennedy's investigations of the Ku Klux Klan was discussed. However, in their January 8, 2006 column in the New York Times Magazine, Dubner and Levitt wrote of questions about Stetson Kennedy's research ("Hoodwinked", pp. 26–28), leading to the conclusion that Kennedy's research was at times embellished for effectiveness.
In the "Revised and Expanded Edition" this embellishment was noted and corrected: "Several months after Freakonomics was first published, it was brought to our attention that this man's portrayal of his crusade, and various other Klan matters, was considerably overstated....we felt it was important to set straight the historical record."[4]

Refutations

Effects of abortion ban

There have been many responses to the theory that legal abortion reduces crime – see Legalized abortion and crime effect: Responses and The Impact of Legalized Abortion on Crime for details.
Freakonomics commented on the effects of an abortion ban in Romania (Decree 770), stating that "Compared to Romanian children born just a year earlier, the cohort of children born after the abortion ban would do worse in every measurable way: they would test lower in school, they would have less success in the labor market, and they would also prove much more likely to become criminals" (p. 118). John DiNardo, a professor at the University of Michigan, retorts that the paper cited by Freakonomics states "virtually the opposite of what is actually claimed":
On average, children born in 1967 just after abortions became illegal display better educational and labor market achievements than children born prior to the change. This outcome can be explained by a change in the composition of women having children: urban, educated women were more likely to have abortions prior to the policy change, so a higher proportion of children were born into urban, educated households. (Pop-Eleches, 2002, p.34).
—John DiNardo, Freakonomics: Scholarship in the Service of Storytelling[5]
Levitt responded on the Freakonomics Blog that Freakonomics and Pop-Eleches "are saying the same thing":
Here is the abstract of the version of the Pop-Eleches paper that we cited:
…Children born after the abortion ban attained more years of schooling and greater labor market success. This is because urban, educated women were more likely to have abortions prior to the policy change, and the relative number of children born to this type of woman increased after the ban. However, controlling for composition using observable background variables, children born after the ban on abortions had worse educational and labor market achievements as adults. Additionally, I provide evidence of crowding in the school system and some suggestive evidence that cohorts born after the introduction of the abortion ban had higher infant mortality and increased criminal behavior later in life.
The introduction of the Pop-Eleches paper says:
This finding is consistent with the view that children who were unwanted during pregnancy had worse socio-economic outcomes once they became adults.

Effects of extra police on crime

Freakonomics claimed that it was possible to "tease" out the effect of extra police on crime by analysing electoral cycles. The evidence behind these claims was later shown to rest partly on a programming error. McCrary stated, "While municipal police force size does appear to vary over state and local electoral cycles ... elections do not induce enough variation in police hiring to generate informative estimates of the effect of police on crime."[5]

Criticism

Freakonomics has been criticised as being, in fact, a work of sociology or criminology rather than economics. Israeli economist Ariel Rubinstein criticised the book for making use of dubious statistics and complained that "economists like Levitt ... have swaggered off into other fields", saying that the "connection to economics ... [is] none" and that the book is an example of "academic imperialism".[6] Arnold Kling has suggested the book is an example of "amateur sociology".[7]
Thomas Ferguson, author of Golden Rule: The Investment Theory of Party Competition, was asked in 2009 to respond to the following claim in Freakonomics:
"A winning candidate can cut his spending in half and lose only 1 percent of the vote. Meanwhile, a losing candidate who doubles his spending can expect to shift the vote in his favor by only that same 1 percent."
His response was:
"Where on earth do such figures come from? You would need a fully specified regression equation to do this, that incorporated a lot of variables. Unless you hold constant everything else, including issues -- not easy even to imagine -- such claims are nonsense. Think of a couple of cases. Obviously, an incumbent Congressman or woman with a big margin could spend a bit less and probably do almost as well. By contrast, candidates in close elections surely cannot do this. The real issue is the dependence of money on taking conservative issue positions. Claims about existing candidates typically reflect censored data. That is, there's no one able to run that can run very far to the left."
Economist Robert P. Murphy takes exception to the way the book describes economists and their field, saying the authors end up actually describing econometrics. He also contends the book's ambiguous style makes it very difficult to determine exactly what the authors are claiming in various chapters.[8]

Publishing history

Freakonomics peaked at number two among nonfiction on The New York Times Best Seller list and was named the 2006 Book Sense Book of the Year in the Adult Nonfiction category. The book received positive reviews from critics. The review aggregator Metacritic reported the book had an average score of 67 out of 100, based on 16 reviews.[9]
The success of the book has been partly attributed to the blogosphere. In the campaign prior to the release of the book in April 2005, the publisher (William Morrow and Company) chose to target bloggers in an unusually strategic way, sending galley copies to over a hundred of them, as well as contracting two specialized word-of-mouth (buzz marketing) agencies.[1]
In 2006, the Revised and Expanded Edition of the book was published, with the most significant corrections in the second chapter (see above).[10]

Freakonomics blog

In 2005, the authors started their own Freakonomics blog, which is "meant to keep the conversation going".
In May 2007, writer and blogger Melissa Lafsky was hired as the full time editor of the site.[11] In August 2007, the blog was incorporated into The New York Times' web site – the authors had been writing joint columns for The New York Times Magazine since 2004 – and the domain Freakonomics.com became a redirect there.[12] In March 2008, Annika Mengisen replaced Lafsky as the blog editor.[13] The Freakonomics blog ended its association with the New York Times on March 1, 2011.[14]
Among the recurrent guest bloggers on the Freakonomics blog are Ian Ayres,[15] Daniel Hamermesh,[16] Eric A. Morris,[17] Sudhir Venkatesh,[18] Justin Wolfers[19] and others.
In 2008, Stephen Dubner asked for questions from the site's readers and then featured them in an extended Q&A on "Best Places to Live" with demographics expert Bert Sperling.[20]

SuperFreakonomics

In April 2007, co-author Stephen Dubner announced that there would be a sequel to Freakonomics, and that it would contain further writings about street gang culture from Sudhir Venkatesh, as well as a study of the use of money by capuchin monkeys.[21] Dubner said the title would be SuperFreakonomics,[22] and that one topic would be what makes people good at what they do.[23] The book was released in Europe in early October 2009 and in the United States on October 20, 2009.

Film adaptation

In 2010, Chad Troutwine, Chris Romano, and Dan O'Meara produced a documentary film adaptation with a budget of nearly US$3 million in an omnibus format by directors Seth Gordon, Morgan Spurlock, Alex Gibney, Eugene Jarecki, Rachel Grady, and Heidi Ewing.[24] It was the Closing Night Gala premiere film at the Tribeca Film Festival on April 30, 2010.[25] It was also the Opening Night film at the AFI/Discovery SilverDocs film festival on June 21, 2010. Magnolia Pictures acquired distribution rights for a Fall 2010 release.[26]
Freakonomics: The Movie was released in major cities with a pay-what-you-want pricing offer for selected preview showings.[27] No report of the results has yet been published.

Freakonomics Consulting Group

In 2009, Steven Levitt co-founded Freakonomics Consulting Group, a business and philanthropy consulting company now known as The Greatest Good. Founding partners include Nobel laureates Daniel Kahneman and Gary Becker, as well as several other prominent economists.[28]

References

  1. ^ a b Deahl, Rachel (6 May 2005). "Getting a Buzz On: How Publishers Are Turning Online to Market Books". The Book Standard. http://www.allbusiness.com/retail-trade/miscellaneous-retail-miscellaneous/4399655-1.html.
  2. ^ Fox, Justin (26 October 2009). "Is the World Ready for Freakonomics Again?". Time.com. http://freakonomics.com/2006/09/20/freakonomics-20/. Retrieved 7 June 2011.
  3. ^ "Sumo tournament cancelled amid match-fixing scandal". BBC. 2011-02-06. http://www.bbc.co.uk/news/world-asia-pacific-12375649.
  4. ^ Levitt, Steven D.; Dubner, Stephen J. (5 October 2006). Freakonomics: A Rogue Economist Explores the Hidden Side of Everything (Revised and Expanded Edition). William Morrow. p. xiv. ISBN 0-06-123400-1.
  5. ^ a b DiNardo, John. "Freakonomics: Scholarship in the Service of Storytelling". American Law and Economics Review (Oxford Journals) 8 (3): 615–626. http://www-personal.umich.edu/~jdinardo/Pubs/aler.pdf.
  6. ^ Rubinstein, Ariel. "Freak-Freakonomics". The Economists' Voice 3 (9). doi:10.2202/1553-3832.1226. http://arielrubinstein.tau.ac.il/papers/freak.pdf.
  7. ^ Kling, Arnold (5 July 2005). "Freakonomics or Amateur Sociology?". Ideas in Action with Jim Glassman. http://www.ideasinactiontv.com/tcs_daily/2005/07/freakonomics-or-amateur-sociology.html. Retrieved 7 June 2011.
  8. ^ Murphy, Robert P. (25 May 2005). "More Fun than Truth". Mises.org. http://mises.org/daily/1817. Retrieved 2012-03-20.
  9. ^ "Freakonomics by Steven D. Levitt and Stephen J. Dubner: Reviews". Metacritic. Archived from the original on 18 February 2008. http://web.archive.org/web/20080218120040/http://www.metacritic.com/books/authors/levittstevendandstephenjdubner/freakonomics. Retrieved 11 March 2008.
  10. ^ Dubner, Stephen J. (20 September 2006). "Freakonomics 2.0". Freakonomics (blog). http://freakonomics.com/2006/09/20/freakonomics-20/. Retrieved 7 June 2011.
  11. ^ Dubner, Stephen J. (4 May 2007). "Please Welcome the First Editor of Freakonomics.com". Freakonomics (blog). http://freakonomics.com/2007/05/04/please-welcome-the-first-editor-of-freakonomicscom/. Retrieved 7 June 2011.
  12. ^ Dubner, Stephen J. (7 August 2007). "Moving Day". Freakonomics (blog). http://freakonomics.com/2007/08/07/moving-day/. Retrieved 7 June 2011.
  13. ^ Dubner, Stephen J. (17 March 2008). "Please welcome...". Freakonomics (blog). http://freakonomics.blogs.nytimes.com/2008/03/17/please-welcome/. Retrieved 7 June 2011.
  14. ^ Dubner, Stephen J. (18 January 2011). "Yes, This Blog Is Leaving NYTimes.com". Freakonomics (blog). http://freakonomics.com/2011/01/18/yes-this-blog-is-leaving-nytimes-com/. Retrieved 7 June 2011.
  15. ^ "Posts published by Ian Ayres". The New York Times. http://freakonomics.blogs.nytimes.com/author/ian-ayres/. Retrieved 2 May 2010.
  16. ^ "Posts published by Daniel Hamermesh". The New York Times. http://freakonomics.blogs.nytimes.com/author/daniel-hamermesh/. Retrieved 2 May 2010.
  17. ^ "Posts published by Eric A. Morris". The New York Times. http://freakonomics.blogs.nytimes.com/author/eric-a-morris/. Retrieved 2 May 2010.
  18. ^ "Posts published by Sudhir Venkatesh". The New York Times. http://freakonomics.com/author/sudhir-venkatesh/. Retrieved 2 May 2010.
  19. ^ "Posts published by Justin Wolfers". The New York Times. http://freakonomics.com/author/justin-wolfers/. Retrieved 2 May 2010.
  20. ^ Dubner, Stephen. "Bert Sperling Answers Your "Best Places to Live" Questions". http://www.freakonomics.com/2008/10/14/bert-sperling-answers-your-best-places-to-live-questions/. Retrieved 3 August 2012.
  21. ^ Lombardi, Candace (19 April 2007). "Freakonomics writer talks monkey business". CNET News. http://news.com.com/Freakonomics+writer+talks+monkey+business/2100-1026-6177655.html?part=dht&tag=nl.e433. Retrieved 7 June 2011.
  22. ^ Conley, Lucas (1 November 2005). "Freakonomics, economic hit men, undercover economists. This ain't Adam Smith.". Fast Company. http://www.fastcompany.com/magazine/100/next-economist.html. Retrieved 7 June 2011.
  23. ^ "Here Is What SuperFreakonomics Will Look Like". The New York Times. 7 August 2009. http://freakonomics.blogs.nytimes.com/2009/08/07/here-is-what-superfreakonomics-will-look-like/. Retrieved 2 May 2010.
  24. ^ "Freakonomics". Internet Movie Database. http://www.imdb.com/title/tt1152822/. Retrieved 20 July 2009.
  25. ^ Kohn, Eric (1 May 2010). "TRIBECA REVIEW — Movies Within a Movie: The Anthology Documentary "Freakonomics"". indieWIRE. http://www.indiewire.com/article/tribeca_review_movies_within_a_movie_the_anthology_documentary_freakonomics/. Retrieved 17 November 2010.
  26. ^ "Magnolia Picks Up 'Freakonomics' Documentary". News in Film. http://www.newsinfilm.com/2010/04/05/magnolia-picks-up-freakonomics-movie/. Retrieved 5 April 2010.
  27. ^ "Pay what you want to see Freakonomics: The Movie". http://www.avclub.com/articles/pay-what-you-want-to-see-freakonomics-the-movie,45265/.
  28. ^ "The Greatest Good - Consulting". http://www.greatestgood.com. Retrieved July 14, 2012.
  29. ^ Gladwell, Malcolm (March 2006). "Levitt and Dubner respond". http://gladwell.typepad.com/gladwellcom/2006/03/levitt_and_dubn.html. Retrieved 23 December 2012.


A Supercomputer-Based Economic Model

It was less than thirty years ago that the American people elected a movie star to become President. His theory of "supply side" economics, also called "trickle down" and "Voodoo" economics, resulted in a short-term spike in prosperity followed by a realization that we had run up a huge mountain of debt for the country. Meanwhile, in an attempt to keep up with the flood of U.S. government spending on defense, the Soviet Union pretty much went bankrupt and eventually went out of business, thus generating the new capitalist nation we now call Russia.
Today, as our ship of state founders, having rammed an economic iceberg of frozen mortgages and credit, we are a divided nation, at least as far as economic theory goes. On one hand we have Republicans who, more or less, advocate more Voodoo economics by cutting taxes on businesses and the rich folk. The theory is that if you give the rich folk a lot of money they will hire more people, instead of just banking the profits. On the other hand, the Democrats are looking to FDR-ish, New Deal-ish concepts, including lots more government regulation of the financial sector, direct injections of money into the economy by construction projects, and other means. Both Democrats and Republicans seem to like the idea of bailing out the banks some more, in one way or another. The question I have is this: isn’t this all a little Voodooish too? I mean: where is the science?
In the United States we have quite a few supercomputers, massive conglomerations of processing units that can perform umpteen zillion computations per second.  They are great for predicting the behavior of nuclear reactions when there are lots of variables and lots of things happening at the same time.  They’re great for predicting the outcome of complex processes, like what will the weather be tomorrow, given that we have inputs, every second,  from hundreds of weather stations scattered all over the place.  So, if I might ask, why doesn’t our government create a vast working model of our economy in a supercomputer? Then the Democrats and Republicans could poke a variable here and there and see what comes out the end.  We could test John McCain’s economic theory and simulate the results with high reliability.  We could do the same with President Obama’s.  We could even test Rush Limbaugh’s – if he has one.  The problem is we don’t have a national or world economic model running in a supercomputer – at least I haven’t been able to find one, and I haven’t heard any politician saying to the other, “Hey, let’s put your idea into the national supercomputer and see what happens. Let’s see whose proposals will work the best, and then we’ll use that!”
It seems that Ben Bernanke does have a sort of economic computer model that works sort of OK. But it isn’t a supercomputer model, capable of incorporating lots and lots of variables. There is a report that the Argonne National Laboratory is now beginning to take steps toward the creation of an economic model that runs on their supercomputer, which is one of the fastest supercomputers in the world. This could be a major step forward in getting us away, once and for all, from politicians making economic decisions based upon unproven ideology, constituent self-interest, or just Voodoo reasoning. While we wait for the Argonne supercomputer programs to be written, perhaps we could ask the Democrats and the Republicans to give their suggestions to Ben Bernanke and have him put them in his program and see what it predicts – just for fun, you know?
It seems like such an obvious idea: using the national supercomputing capabilities that we have had for many, many years to model our economy. It makes one wonder why we didn’t do this twenty years ago. I suppose it’s a good thing we weren’t doing Voodoo nuclear engineering twenty years ago, but it’s long past time we got past Voodoo economics too, and began studying economics as a multi-variable science, just like the weather. With enough computing power, and the application of real science, we should be able to model, create, and maintain a stable, growing economy for a long time to come. Too bad for the politicians, though: I wonder what they would have left to debate about then?


Rich McSheehy's Weblog

New Directions



PS. I added my link on Transfinancial Economics as a comment to the original article above, and it was accepted. Supercomputers play a vital role in advanced-stage TFE. http://www.p2pfoundation.net/Transfinancial_Economics

Economic Model

From Wikipedia, the free encyclopedia
[Image: a diagram of the IS/LM model]
In economics, a model is a theoretical construct that represents economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models posit structural parameters: underlying parameters in a model or class of models.[1] A model may have various parameters, and those parameters may change to create various properties. Methodological uses of models include investigation, theorizing, and fitting theories to the world.[2]
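To make "a set of variables and a set of quantitative relationships" concrete, here is a deliberately tiny Python sketch that solves a textbook linear supply-and-demand model for its equilibrium; all coefficients are invented.

    # A minimal economic model: linear supply and demand (invented numbers).
    # Demand: Qd = a - b*P    Supply: Qs = c + d*P    Equilibrium: Qd = Qs
    a, b = 100.0, 2.0    # demand intercept and slope (assumed)
    c, d = 10.0, 1.0     # supply intercept and slope (assumed)

    P_star = (a - c) / (b + d)    # price at which supply meets demand
    Q_star = a - b * P_star       # quantity traded at that price
    print(f"equilibrium price {P_star:.2f}, quantity {Q_star:.2f}")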


Overview

In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study.
Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
Selection is important because the nature of an economic model will often determine what facts will be looked at, and how they will be compiled. For example, inflation is a general economic concept, but measuring inflation requires a model of behavior, so that an economist can differentiate between real changes in price and changes in price which are to be attributed to inflation.
In addition to their professional academic interest, uses of models include:
  • Forecasting economic activity in a way in which conclusions are logically related to assumptions;
  • Proposing economic policy to modify future economic activity;
  • Presenting reasoned arguments to politically justify economic policy at the national level, to explain and influence company strategy at the level of the firm, or to provide intelligent advice for economic decisions at the level of households;
  • Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses.
  • In finance, predictive models have been used since the 1980s for trading (investment and speculation); for example, emerging market bonds were often traded based on economic models predicting the growth of the developing nation issuing them. Since the 1990s many long-term risk management models have incorporated economic relationships between simulated variables in an attempt to detect high-exposure future scenarios (often through a Monte Carlo method; a minimal sketch follows this list).
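As flagged above, a minimal Monte Carlo sketch in Python; the drift, volatility, loss threshold, and path count are all invented, and this is not any production risk model.

    import random

    # Toy Monte Carlo risk model: simulate many one-year portfolio returns
    # and estimate the probability of a loss of 20% or more.
    MU, SIGMA = 0.05, 0.18    # assumed annual drift and volatility
    N_PATHS = 100_000         # number of simulated scenarios

    random.seed(0)
    big_losses = sum(random.gauss(MU, SIGMA) <= -0.20 for _ in range(N_PATHS))
    print(f"estimated P(loss of 20% or more): {big_losses / N_PATHS:.3f}")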
A model establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.
Economic models in current use do not pretend to be theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the paucity of theories for most types of economic behavior. Therefore conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.
The details of model construction vary with type of model and its application, but a generic process can be identified. Generally any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set.
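A toy version of that generate-then-diagnose loop, using synthetic data and an invented one-parameter model:

    import random

    random.seed(1)

    def make_data(n):
        # Synthetic "economy": y = 3x + noise, so the true slope is 3.
        return [(x, 3.0 * x + random.gauss(0, 1)) for x in range(n)]

    train, test = make_data(50), make_data(50)

    # Generate the model: least-squares slope through the origin.
    slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

    # Diagnose: mean squared error on data the model has never seen.
    mse = sum((y - slope * x) ** 2 for x, y in test) / len(test)
    print(f"fitted slope {slope:.3f}, out-of-sample MSE {mse:.3f}")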

Types of models

Economic models can be classified in several ways:
  • as stochastic or non-stochastic, according to whether all the model variables are deterministic;
  • as discrete or continuous choice models, according to whether all the variables are quantitative;
  • as quantitative or qualitative, according to the model's intended purpose or function;
  • as general equilibrium, partial equilibrium, or even non-equilibrium models, according to the model's ambit;
  • as rational agent models, representative agent models, etc., according to the economic agents' characteristics.
  • Non-stochastic mathematical models may be purely qualitative (for example, models involved in some aspect of social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
  • Qualitative models – Although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples which illustrate some particularly relevant points of model construction.
  • An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form
algebraic sum of inflows = sinks − sources
This principle is certainly true for money, and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error, or an extraneous injection (or destruction) of cash, which we would interpret as showing that the experiment was conducted improperly.
  • Optimality and constrained optimization models – Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by
\pi(x,t) = x\,p(x) - C(x) - tx
where p(x) is the price that a product commands in the market if it is supplied at the rate x, xp(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first order maximization condition for x is
\frac{\partial \pi(x,t)}{\partial x} = \frac{\partial (x\,p(x) - C(x))}{\partial x} - t = 0
Regarding x as an implicitly defined function of t via this equation (see the implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as
\frac{\partial^2 (x\,p(x) - C(x))}{\partial x^2} = \frac{\partial^2 \pi(x,t)}{\partial x^2},
which is negative if the second order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation, and the predicted dependency of output on the tax rate, illustrates an operationally meaningful theorem: that is, one which rests on an economically meaningful assumption that is falsifiable under certain conditions. (A numeric check of this prediction is sketched after this list.)
  • Aggregate models. Macroeconomics needs to deal with aggregate quantities such as output, the price level, the interest rate and so on. Now real output is actually a vector of goods and services, such as cars, passenger airplanes, computers, food items, secretarial services, home repair services etc. Similarly price is the vector of individual prices of goods and services. Models in which the vector nature of the quantities is maintained are used in practice, for example Leontief input-output models are of this kind. However, for the most part, these models are computationally much harder to deal with and harder to use as tools for qualitative analysis. For this reason, macroeconomic models usually lump together different variables into a single quantity such as output or price. Moreover, quantitative relationships between these aggregate variables are often parts of important macroeconomic theories. This process of aggregation and functional dependency between various aggregates usually is interpreted statistically and validated by econometrics. For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.
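As promised above, a numeric check of the taxation result, in Python and under invented functional forms (an inverse demand curve p(x) = 10 - x and cost C(x) = x^2); it is a sketch of the comparative statics, not a general proof.

    # Numeric check: the profit-maximizing output x*(t) falls as the tax t rises.
    def profit(x, t):
        return x * (10 - x) - x**2 - t * x

    def best_output(t):
        xs = [i / 1000 for i in range(10001)]    # search x on a grid over [0, 10]
        return max(xs, key=lambda x: profit(x, t))

    for t in (0.0, 1.0, 2.0):
        print(f"t = {t:.1f}  ->  x* = {best_output(t):.2f}")
    # Prints x* = 2.50, 2.25, 2.00: output decreases with taxation, as predicted.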

Quantitative vs. Qualitative models

A quantitative model is designed to produce accurate predictions, without elucidating the underlying dynamics. On the other hand, a qualitative model aims to explain these dynamics without necessarily fitting empirical data or informing accurate predictions. Interest rate parity can be deemed a qualitative model in this sense: though it generally fails to fit exchange rate data as well as higher-powered statistical forecasting models, it offers an intuitive interpretation of the exchange rate and its relation to foreign and domestic interest and inflation rates. Views on the relative merits of qualitative and quantitative models vary across the profession: Milton Friedman can be viewed as having advocated a qualitative approach, while Ronald Coase worried that "if you torture the data long enough, it will confess." Prospect theory, as proposed by Nobel prize winner Daniel Kahneman, is more quantitative, while rational agent models are more qualitative.
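For concreteness, one common quantitative statement of the parity idea mentioned above is covered interest rate parity, which pins the forward exchange rate to the spot rate and the two interest rates. The numbers below are illustrative, not market data.

    # Covered interest rate parity: F = S * (1 + i_dom) / (1 + i_for).
    S = 1.10        # spot rate, domestic currency per unit of foreign (assumed)
    i_dom = 0.03    # one-year domestic interest rate (assumed)
    i_for = 0.01    # one-year foreign interest rate (assumed)

    F = S * (1 + i_dom) / (1 + i_for)    # parity-implied one-year forward rate
    print(f"implied forward rate: {F:.4f}")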

Pitfalls

Restrictive, unrealistic assumptions

Provably unrealistic assumptions are pervasive in neoclassical economic theory (also called the "standard theory" or "neoclassical paradigm"), and those assumptions are inherited by simplified models for that theory. (Any model based on a flawed theory cannot transcend the limitations of that theory.) Joseph Stiglitz's 2001 Nobel Prize lecture[3] reviews his work on information asymmetries, which contrasts with the assumption, in standard models, of "perfect information". Stiglitz surveys many aspects of these faulty standard models, and the faulty policy implications and recommendations that arise from their unrealistic assumptions. Stiglitz writes (p. 519–520):
"I only varied one assumption – the assumption concerning perfect information – and in ways which seemed highly plausible. ... We succeeded in showing not only that the standard theory was not robust – changing only one assumption in ways which were totally plausible had drastic consequences, but also that an alternative robust paradigm with great explanatory power could be constructed. There were other deficiencies in the theory, some of which were closely connected. The standard theory assumed that technology and preferences were fixed. But changes in technology, R & D, are at the heart of capitalism. ... I similarly became increasingly convinced of the inappropriateness of the assumption of fixed preferences. (Footnote: In addition, much of recent economic theory has assumed that beliefs are, in some sense, rational. As noted earlier, there are many aspects of economic behavior that seem hard to reconcile with this hypothesis.)"
Economic models can be such powerful tools in understanding some economic relationships that it is easy to ignore their limitations. One tangible example where the limits of economic models collided with reality, but were nevertheless accepted as "evidence" in public policy debates, involved models to simulate the effects of NAFTA, the North American Free Trade Agreement. James Stanford published his examination of 10 of these models.[4][5]
The fundamental issue is circularity: embedding one's assumptions as foundational "input" axioms in a model, then proceeding to "prove" that, indeed, the model's "output" supports the validity of those assumptions. Such a model is consistent with similar models that have adopted those same assumptions. But is it consistent with reality? As with any scientific theory, empirical validation is needed, if we are to have any confidence in its predictive ability.
If those assumptions are, in fact, fundamental aspects of empirical reality, then the model's output will correctly describe reality (if it is properly "tuned", and if it is not missing any crucial assumptions). But if those assumptions are not valid for the particular aspect of reality one attempts to simulate, then it becomes a case of "GIGO" – Garbage In, Garbage Out".
James Stanford outlines this issue for the specific Computable General Equilibrium ("CGE") models that were introduced as evidence into the public policy debate, by advocates for NAFTA: [6]
"..CGE models are circular: if trade theory holds that free trade is mutually beneficial, then a quantitative simulation model based on that theoretical structure will automatically show that free trade is mutually beneficial...if the economy actually behaves in the manner supposed by the modeler, and the model itself sheds no light on this question, then a properly calibrated model may provide a rough empirical estimate of the effects of a policy change. But the validity of the model hangs entirely on the prior, nontested specification of its structural relationships ... [Hence, the apparent consensus of pro-NAFTA modelers] reflects more a consensus of prior theoretical views than a consensus of quantitative evidence."
Commenting on Stanford's analysis, one computer scientist wrote,
"When simulating the impact of a trade agreement on labor, it seems absurd to assume a priori that capital is immobile, that full employment will prevail, that unit labor costs are identical in the U.S. and Mexico, that American consumers will prefer products made in America (even if they are more expensive), and that trade flows between the U.S. and Mexico will exactly balance. Yet a recent examination of ten prominent CGE models showed that nine of them include at least one of those unrealistic assumptions, and two of the CGE models included all the above assumptions.
This situation bears a disturbing resemblance to computer-assisted intellectual dishonesty. Human beings have always been masters of self-deception, and hiding the essential basis of one's deception by embedding it in a computer program surely helps reduce what might otherwise become an intolerable burden of cognitive dissonance." [7]
In commenting on the general phenomenon of embedding unrealistic "GIGO" assumptions in neoclassical economic models, Nobel prizewinner Joseph Stiglitz is only slightly more diplomatic: (p. 507-8)
"But the ... model, by construction, ruled out the information asymmetries which are at the heart of macro-economic problems. Only if an individual has a severe case of schizophrenia is it possible for such problems to arise. If one begins with a model that assumes that markets clear, it is hard to see how one can get much insight into unemployment (the failure of the labor market to clear)." [3]
Despite the prominence of Stiglitz' 2001 Nobel prize lecture, the use of misleading (perhaps intentionally) neoclassical models persisted in 2007, according to these authors: [8]
" ... projected welfare gains from trade liberalization are derived from global computable general equilibrium (CGE) models, which are based on highly unrealistic assumptions. CGE models have become the main tool for economic analysis of the benefits of multilateral trade liberalization; therefore, it is essential that these models be scrutinized for their realism and relevance. ... we analyze the foundation of CGE models and argue that their predictions are often misleading. ... We appeal for more honest simulation strategies that produce a variety of plausible outcomes."
The working paper "Debunking the Myths of Computable General Equilibrium Models"[9] provides both a history and a readable theoretical analysis of what CGE models are, and are not. In particular, despite their name, CGE models use neither the Walrasian general equilibrium nor the Arrow–Debreu general equilibrium frameworks. Thus, CGE models are highly distorted simplifications of theoretical frameworks (collectively called "the neoclassical economic paradigm") which were themselves largely discredited by Joseph Stiglitz.
In the "Concluding Remarks" (p. 524) of his 2001 Nobel Prize lecture, Stiglitz examined why the neoclassical paradigm—and models based on it—persists, despite his publication, over a decade earlier, of some of his seminal results showing that Information Asymmetries invalidated core Assumptions of that paradigm and its models:
"One might ask, how can we explain the persistence of the paradigm for so long? Partly, it must be because, in spite of its deficiencies, it did provide insights into many economic phenomena. ... But one cannot ignore the possibility that the survival of the [neoclassical] paradigm was partly because the belief in that paradigm, and the policy prescriptions, has served certain interests." [3]
In the aftermath of the 2007–2009 global economic meltdown, the profession's attachment to unrealistic models is increasingly being questioned and criticized. After a weeklong workshop, one group of economists released a paper highly critical of their own profession's unethical use of unrealistic models. Their abstract offers an indictment of fundamental practices:
"The economics profession appears to have been unaware of the long build-up to the current worldwide financial crisis and to have significantly underestimated its dimensions once it started to unfold. In our view, this lack of understanding is due to a misallocation of research efforts in economics. We trace the deeper roots of this failure to the profession’s focus on models that, by design, disregard key elements driving outcomes in real-world markets. The economics profession has failed in communicating the limitations, weaknesses, and even dangers of its preferred models to the public. This state of affairs makes clear the need for a major reorientation of focus in the research economists undertake, as well as for the establishment of an ethical code that would ask economists to understand and communicate the limitations and potential misuses of their models." [10]

Omitted details

A great danger inherent in the simplification required to fit the entire economy into a model is omitting critical elements. Some economists believe that making the model as simple as possible is an art form, but the details left out are often contentious. For instance:
  • Market models often exclude externalities such as unpunished pollution. Such models are the basis for many environmentalist attacks on mainstream economists. It is said that if the social costs of externalities were included in the models, their conclusions would be very different, and models are often accused of leaving out these terms because of economists' pro-free-market bias.
  • In turn, environmental economics has been accused of omitting key financial considerations from its models. For example, the returns to solar power investments are sometimes modelled without a discount factor, so that the present utility of solar energy delivered in a century's time is precisely equal to that of gas-power-station energy today.
  • Financial models can be oversimplified by relying on historically unprecedented arbitrage-free markets, probably underestimating the chance of crises, and under-pricing or under-planning for risk.
  • Models of consumption either assume that humans are immortal or that teenagers plan their life around an optimal retirement supported by the next generation. (These conclusions are probably harmless, except possibly to the credibility of the modelling profession.)
  • All models share the problem of the butterfly effect. Because they represent large, complex, nonlinear systems, it is possible that any missing variable, as well as errors in the values of included variables, can lead to erroneous results.

Are economic models falsifiable?

The sharp distinction between falsifiable economic models and those that are not is by no means universally accepted. Indeed, one can argue that the ceteris paribus (all else being equal) qualification that accompanies any claim in economics is nothing more than an all-purpose escape clause (see N. de Marchi and M. Blaug). The all-else-being-equal claim allows holding all variables constant except the few that the model is attempting to reason about. This allows the separation and clarification of the specific relationship. However, in reality all else is never equal, so economic models are guaranteed not to be perfect. The goal of the model is that the isolated and simplified relationship has some predictive power that can be tested, mainly that it is a theory capable of being applied to reality. To qualify as a theory, a model should arguably answer three questions: Theory of what? Why should we care? What merit is in your explanation? If the model fails to do so, it is probably too detached from reality and meaningful societal issues to qualify as theory. Research conducted according to this three-question test finds that in the 2004 edition of the Journal of Economic Theory, only 12% of the articles satisfied the three requirements.[11] Ignoring the fact that the ceteris paribus assumption is being made is another big failure often made when a model is applied. At the minimum, an attempt must be made to look at the various factors that may not be equal and take those into account.

History

One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay should be noted, particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontief model; see the Phillips reference below.
All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations) simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of the Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Clearly, by the time David Ricardo came along he had a lot of well-established math to draw from.

Tests of macroeconomic predictions

In the late 1980s, a research institute compared twelve leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from stable, known common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output, some models estimated a 3% change in GDP after one year, one gave almost no change, and the rest were spread between.[12]
Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls listed above, some problems specific to aggregate modelling are:
  • Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy. (Hence the profusion of separate models.)
  • The law of unintended consequences, acting on elements of the real economy not yet included in the model.
  • The time lag in both receiving data and the reaction of economic variables to policy makers' attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
  • The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
  • The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.
  • Modern economic models incorporate the reaction of the public and the market to the policy maker's actions (through game theory), following the rational expectations revolution and Robert Lucas, Jr.'s critique of the optimal control concept of precise macroeconomic management. If the response to the decision maker's actions (and their credibility) must be included in the model, then it becomes much harder to influence some of the simulated variables.

Comparison with models in other sciences

Complex systems specialist and mathematician David Orrell wrote on this issue and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems, the atmosphere, the human body and the economy, also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so they rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.[2]

The effects of deterministic chaos on economic models

Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s, the danger of chaos had been identified and defined in Econometrica as early as 1958:
"Good theorising consists to a large extent in avoiding assumptions....(with the property that)....a small change in what is posited will seriously affect the conclusions."
(William Baumol, Econometrica, 26 see: Economics on the Edge of Chaos).
It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.[13][14]
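One standard demonstration of how easily such sensitivity arises is the textbook logistic map (not a model from the cited papers): iterate a simple nonlinear growth rule from two nearly identical starting points and watch the trajectories diverge.

    # Butterfly effect in a one-line "economy": the logistic map x' = r*x*(1-x).
    # Two starting points differing by one part in a billion soon disagree completely.
    r = 3.9                          # growth parameter in the chaotic regime
    x, y = 0.500000000, 0.500000001

    for step in range(1, 41):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.2e}")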
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:
  • In 2004 Philip Mirowski challenged this view and those who hold it, saying that chaos in economics is suffering from a biased "crusade" against it by neo-classical economics in order to preserve their mathematical models.
  • The variables in finance may well be subject to chaos. Also in 2004, the University of Canterbury study Economics on the Edge of Chaos concludes that after noise is removed from S&P 500 returns, evidence of deterministic chaos is found.
More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).

The critique of hubris in planning

A key strand of free-market economic thinking is that the market's "invisible hand" guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model, because it says that there are critical systemic elements that will always be omitted from any top-down analysis of the economy.[15]

Notes

  1. ^ Moffatt, Mike (2008). "Structural Parameters," About.com Economics Glossary: Terms Beginning with S. Accessed June 19, 2008.
  2. ^ Mary S. Morgan, 2008. "models," The New Palgrave Dictionary of Economics, 2nd Edition, Abstract.
       • Vivian Walsh 1987. "models and theory," The New Palgrave: A Dictionary of Economics, v. 3, pp. 482–83.
  3. ^ a b c Joseph E. Stiglitz. 2001 Nobel Prize lecture: "Information and the Change in the Paradigm in Economics". http://nobelprize.org/nobel_prizes/economics/laureates/2001/stiglitz-lecture.pdf.
  4. ^ James Stanford. "Continental Economic Integration: Modeling the Impact on Labor," Annals of the American Academy of Political and Social Science, March 1993, v. 526, pp. 92–110.
  5. ^ James Stanford. 1993. "Free Trade and the Imaginary Worlds of Economic Modelers". http://www.pcdf.org/1993/45stanfo.htm.
  6. ^ Robert Aponte. "NAFTA and Mexican Migration to Michigan and the U.S.". http://www.jsri.msu.edu/RandS/research/wps/wp25.pdf.
  7. ^ Rick Crawford. 1996. "Computer-assisted Crises", in Invisible Crises: What Conglomerate Control of Media Means for America and the World. Ed. George Gerbner, Hamid Mowlana, Herbert I. Schiller. Westview, 1996. ISBN 978-0-8133-2072-4. Free, authorized version viewable at: http://infowarethics.org/computer-assisted.crises.html
  8. ^ Lance Taylor & Rudiger von Arnim. March 2007. "Projected Benefits of the Doha Round Hinge on Misleading Trade Models". http://www.newschool.edu/cepa/publications/policynotes/Doha%20Policy%20Note%20Final%2003_12_07.pdf.
  9. ^ Mitra-Kahn, Benjamin H., 2008. "Debunking the Myths of Computable General Equilibrium Models". http://www.newschool.edu/cepa/publications/workingpapers/SCEPA%20Working%20Paper%202008-1%20Kahn.pdf. SCEPA Working Paper 01-2008.
  10. ^ Colander, David; Follmer, Hans; Haas, Armin; Goldberg, Michael D.; Juselius, Katarina; Kirman, Alan; Lux, Thomas; and Sloth, Birgitte: The Financial Crisis and the Systemic Failure of Academic Economics. SSRN 1355882. (March 9, 2009). Univ. of Copenhagen Dept. of Economics Discussion Paper No. 09-03
  11. ^ Klein, Daniel B. and Pedro P. Romero. "Model Building Versus Theorizing: The Paucity of Theory in the Journal of Economic Theory" (May 2007).
  12. ^ Frankel, Jeffrey A. (May 1986). "The Sources of Disagreement Among International Macro Models and Implications for Policy Coordination". NBER Working Paper. http://www.nber.org/papers/w1925.pdf. Retrieved 23 January 2012.
  13. ^ Paul Wilmott on his early research in finance: "I quickly dropped... chaos theory (as) it was too easy to construct ‘toy models’ that looked plausible but were useless in practice." Wilmott, Paul (2009), Frequently Asked Questions in Quantitative Finance, John Wiley and Sons, p. 227, http://books.google.com/books?id=n4swgjSoMyIC&lpg=PT227&pg=PT227#v=onepage
  14. ^ Kuchta, Steve (2004), Nonlinearity and Chaos in Macroeconomics and Financial Markets, University of Connecticut, http://www.sp.uconn.edu/~ages/files/NL_Chaos_and_%20Macro%20-%20429%20Essay.pdf
  15. ^ Hayek, Friedrich (September 1945), "The Use of Knowledge in Society", American Economic Review 35 (4): 519–530, JSTOR 1809376.

References

  • Baumol, William & Blinder, Alan (1982), Economics: Principles and Policy (2nd ed.), New York: Harcourt Brace Jovanovich, ISBN 0-15-518839-9.
  • Caldwell, Bruce (1994), Beyond Positivism: Economic Methodology in the Twentieth Century (Revised ed.), New York: Routledge, ISBN 0-415-10911-6.
  • Holcombe, R. (1989), Economic Models and Methodology, New York: Greenwood Press, ISBN 0-313-26679-4. Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian school, in particular in relation to falsifiability.
  • Lange, Oskar (1945), "The Scope and Method of Economics", Review of Economic Studies (The Review of Economic Studies Ltd.) 13 (1): 19–32, doi:10.2307/2296113, JSTOR 2296113. One of the earliest studies on methodology of economics, analysing the postulate of rationality.
  • de Marchi, N. B. & Blaug, M. (1991), Appraising Economic Theories: Studies in the Methodology of Research Programs, Brookfield, VT: Edward Elgar, ISBN 1-85278-515-2. A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
  • Morishima, Michio (1976), The Economic Theory of Modern Society, New York: Cambridge University Press, ISBN 0-521-21088-7. A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
  • Orrell, David (2007), Apollo's Arrow: The Science of Prediction and the Future of Everything, Toronto: Harper Collins Canada, ISBN 0-00-200740-1.
  • Phillips, Almarin (1955), "The Tableau Économique as a Simple Leontief Model", Quarterly Journal of Economics (The MIT Press) 69 (1): 137–144, doi:10.2307/1884854, JSTOR 1884854.
  • Samuelson, Paul A. (1948), "The Simple Mathematics of Income Determination", in Metzler, Lloyd A., Income, Employment and Public Policy; essays in honor of Alvin Hansen, New York: W. W. Norton.
  • Samuelson, Paul A. (1983), Foundations of Economic Analysis (Enlarged ed.), Cambridge: Harvard University Press, ISBN 0-674-31301-1. This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.
  • Tinbergen, Jan (1939), Statistical Testing of Business Cycle Theories, Geneva: League of Nations.
  • Walsh, Vivian (1987), "Models and theory", The New Palgrave: A Dictionary of Economics, 3, New York: Stockton Press, pp. 482–483, ISBN 0-935859-10-1.
  • Wold, H. (1938), A Study in the Analysis of Stationary Time Series, Stockholm: Almqvist and Wicksell.
  • Wold, H. & Jureen, L. (1953), Demand Analysis: A Study in Econometrics, New York: Wiley.

External links