Showing posts with label econometrics. Show all posts

Monday, 8 November 2021

Nowcasting

From Wikipedia, the free encyclopedia

Nowcasting in economics is the prediction of the present, the very near future, and the very recent past state of an economic indicator. The term is a contraction of "now" and "forecasting" and originates in meteorology. It has recently become popular in economics because typical measures used to assess the state of an economy (e.g., gross domestic product (GDP)) are determined only after a long delay and are subject to revision.[1] Nowcasting models have been applied most notably in central banks, which use the estimates to monitor the state of the economy in real time as a proxy for official measures.[2][3]

Principle

While weather forecasters know weather conditions today and only have to predict future weather, economists have to forecast the present and even the recent past. Many official measures are not timely due to the difficulty in collecting information. Historically, nowcasting techniques have been based on simplified heuristic approaches but now rely on complex econometric techniques. Using these statistical models to produce predictions eliminates the need for informal judgement.[4]

Nowcast models can exploit information from a large quantity of data series at different frequencies and with different publication lags.[5] Signals about the direction of change in GDP can be extracted from this large and heterogeneous set of information sources (such as jobless figures, industrial orders, trade balances) before the official estimate of GDP is published. In nowcasting, these data are used to compute a sequence of current-quarter GDP estimates that is updated with the real-time flow of data releases.

Development

Selected academic research papers show how this technique has developed.[6][7][8][9][10][11][12][13]

Banbura, Giannone and Reichlin (2011)[14] and Banbura, Giannone, Modugno and Reichlin (2013)[15] provide surveys of the basic methods and more recent refinements.

Nowcasting methods based on social media content (such as Twitter) have been developed to estimate hidden sentiment such as the 'mood' of a population[16] or the presence of a flu epidemic.[17]

A simple-to-implement, regression-based approach to nowcasting involves mixed-data sampling or MIDAS regressions.[18] The MIDAS regressions can also be combined with machine learning approaches.[19]
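As a concrete illustration, here is a minimal sketch of the unrestricted (U-MIDAS) variant of this idea on synthetic data: each of the three monthly observations of an indicator within a quarter enters the quarterly GDP regression as a separate regressor. The series, coefficients, and variable names are illustrative assumptions, not taken from the cited papers.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 80 quarters of GDP growth and one monthly indicator
# (three monthly observations per quarter). All numbers are illustrative.
n_q = 80
monthly = rng.normal(size=3 * n_q)
gdp = (0.5 + 0.4 * monthly[0::3] + 0.2 * monthly[1::3]
       + 0.1 * monthly[2::3] + 0.3 * rng.normal(size=n_q))

# U-MIDAS: regress quarterly GDP growth on the individual monthly
# observations of the indicator (plus a constant), estimated by OLS.
X = np.column_stack([np.ones(n_q),
                     monthly[0::3], monthly[1::3], monthly[2::3]])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)
print("intercept and monthly-lag coefficients:", beta.round(2))

In a real nowcast, the months of the current quarter arrive one at a time, and the regression is re-estimated or re-evaluated as each release comes in.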

Econometric models can improve accuracy.[20] Such models can be built using Bayesian vector autoregressions, dynamic factor models, bridge equations using time-series methods, or some combination of these with other methods.[21]

Implementation

Economic nowcasting is largely developed by and used in central banks to support monetary policy.

Many of the Reserve Banks of the US Federal Reserve System publish macroeconomic nowcasts. The Federal Reserve Bank of Atlanta publishes GDPNow to track GDP.[3][21] Similarly, the Federal Reserve Bank of New York publishes a dynamic factor model nowcast.[2] Neither is an official forecast of the respective regional bank, the Federal Reserve System, or the FOMC, and neither incorporates human judgment.

Nowcasting can also be used to estimate inflation[22] or the business cycle.[23]

References

  1. ^ Hueng, C. James (2020). "Alternative Economic Indicators". W.E. Upjohn Institute, pp. 1–4. doi:10.17848/9780880996778.ch1. ISBN 978-0-88099-677-8.
  2. ^ "Nowcasting Report". Federal Reserve Bank of New York, www.newyorkfed.org. Retrieved 2020-09-24.
  3. ^ "GDPNow". Federal Reserve Bank of Atlanta, www.frbatlanta.org. Retrieved 2020-09-24.
  4. ^ Giannone, Domenico; Reichlin, Lucrezia; Small, David (May 2008). "Nowcasting: The real-time informational content of macroeconomic data". Journal of Monetary Economics 55 (4): 665–676. doi:10.1016/j.jmoneco.2008.05.010.
  5. ^ Bańbura, Marta; Modugno, Michele (2012). "Maximum Likelihood Estimation of Factor Models on Datasets with Arbitrary Pattern of Missing Data". Journal of Applied Econometrics 29 (1): 133–160. doi:10.1002/jae.2306.
  6. ^ Camacho, Maximo; Perez-Quiros, Gabriel (2010). "Introducing the euro-sting: Short-term indicator of euro area growth". Journal of Applied Econometrics 25 (4): 663–694. doi:10.1002/jae.1174.
  7. ^ Matheson, Troy D. (January 2010). "An analysis of the informational content of New Zealand data releases: The importance of business opinion surveys". Economic Modelling 27 (1): 304–314. doi:10.1016/j.econmod.2009.09.010.
  8. ^ Evans, Martin D. D. (September 2005). "Where Are We Now? Real-Time Estimates of the Macroeconomy". International Journal of Central Banking 1 (2).
  9. ^ Rünstler, G.; Barhoumi, K.; Benk, S.; Cristadoro, R.; Den Reijer, A.; Jakaitiene, A.; Jelonek, P.; Rua, A.; Ruth, K.; Van Nieuwenhuyze, C. (2009). "Short-term forecasting of GDP using large datasets: a pseudo real-time forecast evaluation exercise". Journal of Forecasting 28 (7): 595–611. doi:10.1002/for.1105.
  10. ^ Angelini, Elena; Banbura, Marta; Rünstler, Gerhard (2010). "Estimating and forecasting the euro area monthly national accounts from a dynamic factor model". OECD Journal: Journal of Business Cycle Measurement and Analysis 1: 7.
  11. ^ Giannone, Domenico; Reichlin, Lucrezia; Simonelli, Saverio (23 November 2009). "Is the UK still in recession? We don't think so". Vox.
  12. ^ Lahiri, Kajal; Monokroussos, George (2013). "Nowcasting US GDP: The role of ISM business surveys". International Journal of Forecasting 29 (4): 644–658. doi:10.1016/j.ijforecast.2012.02.010.
  13. ^ Antolin-Diaz, Juan; Drechsel, Thomas; Petrella, Ivan (2014). "Following the Trend: Tracking GDP when Long-Run Growth is Uncertain". CEPR Discussion Paper 10272.
  14. ^ Banbura, Marta; Giannone, Domenico; Reichlin, Lucrezia (2011). "Nowcasting". In Clements, Michael P.; Hendry, David F. (eds.), Oxford Handbook on Economic Forecasting. Oxford University Press.
  15. ^ Banbura, Marta; Giannone, Domenico; Modugno, Michele; Reichlin, Lucrezia (2013). "Nowcasting and the Real-Time Dataflow". In Elliott, G.; Timmermann, A. (eds.), Handbook of Economic Forecasting, vol. 2. Elsevier, pp. 195–237. doi:10.1016/B978-0-444-53683-9.00004-9. ISBN 9780444536839.
  16. ^ Lansdall-Welfare, Thomas; Lampos, Vasileios; Cristianini, Nello (August 2012). "Nowcasting the mood of the nation". Significance 9 (4): 26–28. doi:10.1111/j.1740-9713.2012.00588.x.
  17. ^ Lampos, Vasileios; Cristianini, Nello (2012). "Nowcasting Events from the Social Web with Statistical Learning". ACM Transactions on Intelligent Systems and Technology 3 (4): 1–22. doi:10.1145/2337542.2337557.
  18. ^ Andreou, Elena; Ghysels, Eric; Kourtellos, Andros (2011). "Forecasting with Mixed-Frequency Data". Oxford Handbooks Online. doi:10.1093/oxfordhb/9780195398649.013.0009.
  19. ^ Babii, Andrii; Ghysels, Eric; Striaukas, Jonas (2020). "Machine learning time series regressions with an application to nowcasting".
  20. ^ Tessier, Thomas H.; Armstrong, J. Scott (2015). "Decomposition of time-series by level and change". Journal of Business Research 68 (8): 1755–1758. doi:10.1016/j.jbusres.2015.03.035.
  21. ^ Higgins, Patrick (July 2014). "GDPNow: A Model for GDP 'Nowcasting'". Federal Reserve Bank of Atlanta Working Paper Series.
  22. ^ Ahn, Hie Joo; Fulton, Chad (2020). "Index of Common Inflation Expectations". FEDS Notes 2020 (2551). doi:10.17016/2380-7172.2551.
  23. ^ Aruoba, S. Borağan; Diebold, Francis; Scotti, Chiara (2008). "Real-Time Measurement of Business Conditions". NBER Working Paper 14349. doi:10.3386/w14349.


Saturday, 14 November 2015

Econometrics — fictions masquerading as science | 13 November 2015 at 17:14 | Posted in Statistics & Econometrics | Lars Syll Blog | Blogger ref: http://www.p2pfoundation.net/Transfinancial_Economics
In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.
As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we equate randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts.
Accepting a domain of probability theory and a sample space of “infinite populations” — which is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
In his book Statistical Models and Causal Inference: A Dialogue with the Social Sciences David Freedman touches on this fundamental problem, arising when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:
Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.
Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.
Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …
In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science.

Tuesday, 18 December 2012

Econometrics

Econometrics is the application of mathematics and statistical methods to economic data, and is described as the branch of economics that aims to give empirical content to economic relations.[1] More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference."[2] An influential introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships."[3] The first known use of the term "econometrics" (in cognate form) was by Paweł Ciompa in 1910. Ragnar Frisch is credited with coining the term in the sense that it is used today.[4]
Econometrics is the unification of economics, mathematics, and statistics. This unification produces more than the sum of its parts.[5] Econometrics adds empirical content to economic theory allowing theories to be tested and used for forecasting and policy evaluation.[6]


Basic econometric models: linear regression

The basic tool for econometrics is the linear regression model. In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis.[7] Estimating a linear regression on two variables can be visualized as fitting a line through data points representing paired values of the independent and dependent variables.

Figure: Okun's law, relating GDP growth and the unemployment rate; the fitted line is found using regression analysis.
For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate ($\Delta\,\text{Unemployment}$) is a function of an intercept ($\beta_0$), a given value of GDP growth multiplied by a slope coefficient $\beta_1$, and an error term $\varepsilon$:

$$\Delta\,\text{Unemployment} = \beta_0 + \beta_1\,\text{Growth} + \varepsilon.$$

The unknown parameters $\beta_0$ and $\beta_1$ can be estimated; here $\beta_1$ is estimated to be −1.77 and $\beta_0$ to be 0.83. At a GDP growth rate of one percentage point, the model thus predicts a 0.94-point drop in the unemployment rate ($0.83 - 1.77 \times 1 = -0.94$). The model could then be tested for statistical significance as to whether an increase in growth is associated with a decrease in unemployment, as hypothesized. If the estimate of $\beta_1$ were not significantly different from 0, we would fail to find evidence that changes in the growth rate and unemployment rate were related.
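To make the arithmetic above concrete, here is a minimal sketch in Python (NumPy only) that simulates data from the quoted coefficients and recovers them by ordinary least squares; the data are synthetic, not the historical series behind the published estimates.

import numpy as np

rng = np.random.default_rng(1)

# Simulate the Okun's law relationship with the quoted coefficients:
# change in unemployment = 0.83 - 1.77 * growth + noise.
growth = rng.normal(3.0, 2.0, size=200)          # GDP growth, percent
d_unemp = 0.83 - 1.77 * growth + rng.normal(0.0, 0.5, size=200)

# OLS via least squares on a design matrix with an intercept column.
X = np.column_stack([np.ones_like(growth), growth])
(b0, b1), *_ = np.linalg.lstsq(X, d_unemp, rcond=None)
print(f"beta0 = {b0:.2f}, beta1 = {b1:.2f}")             # close to 0.83, -1.77
print(f"predicted change at 1% growth: {b0 + b1:.2f}")   # about -0.94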

Theory

Econometric theory uses statistical theory to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties, including unbiasedness, efficiency, and consistency. An estimator is unbiased if its expected value is the true value of the parameter; it is consistent if it converges to the true value as the sample size gets larger; and it is efficient if it has a lower standard error than other unbiased estimators for a given sample size. Ordinary least squares (OLS) is often used for estimation since it provides the BLUE, or "best linear unbiased estimator" (where "best" means most efficient unbiased estimator), given the Gauss-Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. Estimators that incorporate prior beliefs are advocated by those who favor Bayesian statistics over traditional, classical or "frequentist" approaches.
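These properties can be illustrated by simulation. The sketch below is a minimal Monte Carlo check with illustrative parameter values: averaging many small-sample OLS slope estimates shows (approximate) unbiasedness, and the shrinking spread of estimates across sample sizes shows consistency.

import numpy as np

rng = np.random.default_rng(2)
TRUE_BETA = 2.0

def ols_slope(n):
    """Draw one sample of size n and return the OLS slope estimate."""
    x = rng.normal(size=n)
    y = 1.0 + TRUE_BETA * x + rng.normal(size=n)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Unbiasedness: the average of many estimates is close to the truth.
print(np.mean([ols_slope(30) for _ in range(5000)]))    # about 2.0

# Consistency: estimates concentrate around the truth as n grows.
print(np.std([ols_slope(30) for _ in range(1000)]))     # wider spread
print(np.std([ols_slope(3000) for _ in range(1000)]))   # narrower spread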

Gauss-Markov theorem

The Gauss-Markov theorem shows that the OLS estimator is the best (minimum-variance) linear unbiased estimator, assuming the model is linear, the expected value of the error term is zero, errors are homoskedastic and not autocorrelated, and there is no perfect multicollinearity.

Linearity

The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation $y = \alpha + \beta x^2$ qualifies as linear, while $y = \alpha + \beta^2 x$ does not.
Data transformations can be used to convert an equation into a linear form. For example, the Cobb-Douglas equation, often used in economics, is nonlinear:

$$Y = A L^{\alpha} K^{\beta} \varepsilon$$

But it can be expressed in linear form by taking the natural logarithm of both sides:[8]

$$\ln Y = \ln A + \alpha \ln L + \beta \ln K + \ln \varepsilon$$
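A minimal sketch of this transformation on simulated data, with illustrative parameter values A = 2, α = 0.6, β = 0.4: generate output from the nonlinear form, then recover the parameters by OLS on the logged equation.

import numpy as np

rng = np.random.default_rng(3)

# Simulate Cobb-Douglas data: Y = A * L^alpha * K^beta * eps.
n = 500
L = rng.uniform(1.0, 100.0, size=n)
K = rng.uniform(1.0, 100.0, size=n)
eps = np.exp(rng.normal(0.0, 0.1, size=n))   # multiplicative error
Y = 2.0 * L**0.6 * K**0.4 * eps

# After taking logs the model is linear in the parameters,
# so OLS on (ln L, ln K) recovers ln A, alpha, and beta.
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(f"ln A = {coef[0]:.2f}, alpha = {coef[1]:.2f}, beta = {coef[2]:.2f}")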
This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables.

Expected error is zero

$$\operatorname{E}[\varepsilon] = 0.$$
The expected value of the error term is assumed to be zero. This assumption can be violated if the measurement error in the dependent variable is consistently positive or negative. The mismeasurement will bias the estimation of the intercept parameter, but the slope parameters will remain unbiased.[9]
The intercept may also be biased if there is a logarithmic transformation. See the Cobb-Douglas equation above. The multiplicative error term will not have a mean of 0, so this assumption will be violated.[10]
This assumption can also be violated in limited dependent variable models. In such cases, both the intercept and slope parameters may be biased.[11]

Spherical errors

$$\operatorname{Var}[\varepsilon \mid X] = \sigma^2 I_n,$$
Error terms are assumed to be spherical; otherwise, the OLS estimator is inefficient (it remains unbiased, however). Spherical errors occur when errors have both uniform variance (homoscedasticity) and are uncorrelated with each other.[12] The term "spherical errors" comes from the multivariate normal distribution: if $\operatorname{Var}[\varepsilon \mid X] = \sigma^2 I_n$ in the multivariate normal density, then the equation $f(x) = c$ describes a "ball" centered at $\mu$ with radius $\sigma$ in n-dimensional space.[13]
Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression of food expenditure on income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people do. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.

This assumption is violated when there is autocorrelation. Autocorrelation can be visualized on a data plot when a given observation is more likely to lie above the fitted line if adjacent observations also lie above it. Autocorrelation is common in time series data, where a data series may experience "inertia":[14] if a dependent variable takes a while to fully absorb a shock, errors in successive periods are correlated. Spatial autocorrelation can also occur, when nearby geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form. In these cases, correcting the specification is the preferred way to deal with it.
In the presence of non-spherical errors, the generalized least squares estimator can be shown to be BLUE.[15]
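The sketch below illustrates the simplest special case, weighted least squares, on simulated heteroskedastic data: the error standard deviation grows with the regressor, OLS stays unbiased but noisy, and reweighting each observation by the (here assumed known) error standard deviation restores efficiency.

import numpy as np

rng = np.random.default_rng(4)

# Heteroskedastic data: the error standard deviation grows with x,
# as in the food-expenditure example (variance rises with income).
n = 400
x = rng.uniform(1.0, 10.0, size=n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.2 * x)   # sd proportional to x

X = np.column_stack([np.ones(n), x])

# OLS: still unbiased, but inefficient under heteroskedasticity.
ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS (a GLS special case): divide each row by its error standard
# deviation, here assumed known to be 0.2 * x.
w = 1.0 / (0.2 * x)
wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print("OLS:", ols.round(3), " WLS:", wls.round(3))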

Exogeneity of independent variables

$$\operatorname{E}[\varepsilon \mid X] = 0.$$
This assumption is violated if the variables are endogenous. Endogeneity can be the result of simultaneity, where causality flows back and forth between both the dependent and independent variable. Instrumental variable techniques are commonly used to address this problem.

Full rank

The sample data matrix must have full rank or OLS cannot be estimated. There must be at least one observation for every parameter being estimated, and the data cannot have perfect multicollinearity.[16] Perfect multicollinearity occurs in the "dummy variable trap," when a base dummy variable is not omitted, resulting in perfect correlation between the dummy variables and the constant term (a numeric check is sketched below).

Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient but still unbiased estimate.
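A minimal numeric check of the dummy variable trap, using an illustrative three-category example: with a constant and a dummy for every category, the dummies sum to the constant column and the design matrix loses a rank; dropping one base category restores full column rank.

import numpy as np

# Three-category dummy coding for six observations.
cat = np.array([0, 1, 2, 0, 1, 2])
dummies = [(cat == k).astype(float) for k in range(3)]

# Constant plus all three dummies: columns are linearly dependent,
# because d0 + d1 + d2 equals the constant column.
full = np.column_stack([np.ones(6)] + dummies)
print(np.linalg.matrix_rank(full))      # 3, not 4: perfect multicollinearity

# Omitting one base category removes the dependency.
reduced = np.column_stack([np.ones(6)] + dummies[:2])
print(np.linalg.matrix_rank(reduced))   # 3 = number of columns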

Methods

Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analyzing economic history, and forecasting.[17]
Econometrics may use standard statistical models to study economic questions, but most often these models are applied to observational data rather than to data from controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses.[18] Economics often analyzes systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous-equation models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences without directly manipulating the system.
One of the fundamental statistical methods used by econometricians is regression analysis. For an overview of a linear implementation of this framework, see linear regression. Regression methods are important in econometrics because economists typically cannot use controlled experiments. Econometricians often seek illuminating natural experiments in the absence of evidence from controlled experiments. Observational data may be subject to omitted-variable bias and a list of other problems that must be addressed using causal analysis of simultaneous-equation models.[19]

Experimental economics

In recent decades, econometricians have increasingly turned to use of experiments to evaluate the often-contradictory conclusions of observational studies. Here, controlled and randomized experiments provide statistical inferences that may yield better empirical performance than do purely observational studies.[20]

Data

Data sets to which econometric analyses are applied can be classified as time-series data, cross-sectional data, panel data, and multidimensional panel data. Time-series data sets contain observations over time; for example, inflation over the course of several years. Cross-sectional data sets contain observations at a single point in time; for example, many individuals' incomes in a given year. Panel data sets contain both time-series and cross-sectional observations. Multi-dimensional panel data sets contain observations across time, cross-sectionally, and across some third dimension. For example, the Survey of Professional Forecasters contains forecasts for many forecasters (cross-sectional observations), at many points in time (time series observations), and at multiple forecast horizons (a third dimension).

Instrumental variables

In many econometric contexts, the commonly-used ordinary least squares method may not recover the theoretical relation desired or may produce estimates with poor statistical properties, because the assumptions for valid use of the method are violated. One widely-used remedy is the method of instrumental variables (IV). For an economic model described by more than one equation, simultaneous-equation methods may be used to remedy similar problems, including two IV variants, Two-Stage Least Squares (2SLS), and Three-Stage Least Squares (3SLS).[21]
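A minimal sketch of two-stage least squares on simulated data, with illustrative coefficients: the regressor x is built to be correlated with the error term, so OLS is biased, while an instrument z that moves x but not the error recovers the true slope.

import numpy as np

rng = np.random.default_rng(5)

# x is endogenous: it depends on the structural error u.
# z is a valid instrument: it moves x but is independent of u.
n = 2000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u                    # true slope is 2

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

# OLS is biased upward because E[u | x] != 0.
ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2SLS: first stage regresses x on z; second stage replaces x
# with its first-stage fitted values.
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), x_hat])
tsls, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
print("OLS slope:", round(ols[1], 2), " 2SLS slope:", round(tsls[1], 2))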

Computational methods

Computational concerns are important for evaluating econometric methods and for use in decision making.[22] Such concerns include mathematical well-posedness: the existence, uniqueness, and stability of any solutions to econometric equations. Another concern is the numerical efficiency and accuracy of software.[23] A third concern is the usability of econometric software.[24]

Example

A simple example of a relationship in econometrics from the field of labor economics is:
$$\ln(\text{wage}) = \beta_0 + \beta_1 (\text{years of education}) + \varepsilon.$$

This example assumes that the natural logarithm of a person's wage is a linear function of (among other things) the number of years of education that person has acquired. The parameter $\beta_1$ measures the increase in the natural log of the wage attributable to one more year of education. The term $\varepsilon$ is a random variable representing all other factors that may have direct influence on wage. The econometric goal is to estimate the parameters $\beta_0$ and $\beta_1$ under specific assumptions about the random variable $\varepsilon$. For example, if $\varepsilon$ is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.
If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of, and the wages paid to, people who differ along many dimensions. Given this kind of data, the estimated coefficient on years of education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, to the extent that those other variables are correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages.

The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Exclusion of birthplace, together with the assumption that $\varepsilon$ is uncorrelated with education, produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render $\beta_1$ identifiable.[25] An overview of econometric methods used to study this problem can be found in Card (1999).[26]
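A minimal simulation of the omitted-variable problem just described, with hypothetical variable names and coefficients: birthplace raises both education and wages, so the short regression overstates the return to education, while controlling for birthplace recovers the true coefficient.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: birthplace affects both education and log wages.
n = 5000
birthplace = rng.normal(size=n)
educ = 12.0 + 2.0 * birthplace + rng.normal(size=n)
log_wage = (1.0 + 0.08 * educ + 0.10 * birthplace
            + rng.normal(0.0, 0.2, size=n))

# Short regression (education only): biased by the omitted confounder.
X_short = np.column_stack([np.ones(n), educ])
b_short, *_ = np.linalg.lstsq(X_short, log_wage, rcond=None)

# Long regression (education and birthplace): recovers the true 0.08.
X_long = np.column_stack([np.ones(n), educ, birthplace])
b_long, *_ = np.linalg.lstsq(X_long, log_wage, rcond=None)
print("without control:", round(b_short[1], 3),
      " with control:", round(b_long[1], 3))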

Journals

The main journals which publish work in econometrics are Econometrica, the Journal of Econometrics, the Review of Economics and Statistics, Econometric Theory, the Journal of Applied Econometrics, Econometric Reviews, the Econometrics Journal,[27] Applied Econometrics and International Development, the Journal of Business & Economic Statistics, and the Journal of Economic and Social Measurement.

Limitations and criticisms

See also Criticisms of econometrics
Like other forms of statistical analysis, badly specified econometric models may show a spurious correlation where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, McCloskey concluded that economists report p values (following the Fisherian tradition of tests of significance of point null-hypotheses), neglecting concerns of type II errors; economists fail to report estimates of the size of effects (apart from statistical significance) and to discuss their economic importance. Economists also fail to use economic reasoning for model selection, especially for deciding which variables to include in a regression.[28][29]
In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects.[30] In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data-sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions".[31]
Economists from the Austrian School argue that aggregate economic models are not well suited to describe economic reality because they waste a large part of specific knowledge. Friedrich Hayek in his The Use of Knowledge in Society argued that "knowledge of the particular circumstances of time and place" is not easily aggregated and is often ignored by professional economists.[32][33]


Notes

  1. ^ M. Hashem Pesaran (1987). "Econometrics," The New Palgrave: A Dictionary of Economics, v. 2, p. 8 [pp. 8-22]. Reprinted in J. Eatwell et al., eds. (1990). Econometrics: The New Palgrave, p. 1 [pp. 1-34]. Abstract (2008 revision by J. Geweke, J. Horowitz, and H. P. Pesaran).
  2. ^ P. A. Samuelson, T. C. Koopmans, and J. R. N. Stone (1954). "Report of the Evaluative Committee for Econometrica," Econometrica 22(2), p. 142 [pp. 141–146], as described and cited in Pesaran (1987) above.
  3. ^ Paul A. Samuelson and William D. Nordhaus, 2004. Economics. 18th ed., McGraw-Hill, p. 5.
  4. ^ • H. P. Pesaran (1990), "Econometrics," Econometrics: The New Palgrave, p. 2, citing Ragnar Frisch (1936), "A Note on the Term 'Econometrics'," Econometrica, 4(1), p. 95.
       • Aris Spanos (2008), "statistics and economics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  5. ^ Greene, 1.
  6. ^ Geweke, Horowitz & Pesaran 2008.
  7. ^ Greene (2012), 12.
  8. ^ Kennedy 2003, p. 110.
  9. ^ Kennedy 2003, p. 129.
  10. ^ Kennedy 2003, p. 131.
  11. ^ Kennedy 2003, p. 130.
  12. ^ Kennedy 2003, p. 133.
  13. ^ Greene 2012, p. 23-note.
  14. ^ Greene 2010, p. 22.
  15. ^ Kennedy 2003, p. 135.
  16. ^ Kennedy 2003, p. 205.
  17. ^ Clive Granger (2008). "forecasting," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  18. ^ Herman O. Wold (1969). "Econometrics as Pioneering in Nonexperimental Model Building," Econometrica, 37(3), pp. 369-381.
  19. ^ Edward E. Leamer (2008). "specification problems in econometrics," The New Palgrave Dictionary of Economics. Abstract.
  20. ^ • H. Wold (1954). "Causality and Econometrics," Econometrica, 22(2), pp. 162–177.
       • Kevin D. Hoover (2008). "causality in economics and econometrics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract and galley proof.
  21. ^ Peter Kennedy (economist) (2003). A Guide to Econometrics, 5th ed. Description, preview, and TOC, ch. 9, 10, 13, and 18.
  22. ^ • Keisuke Hirano (2008). "decision theory in econometrics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
       • James O. Berger (2008). "statistical decision theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  23. ^ B. D. McCullough and H. D. Vinod (1999). "The Numerical Reliability of Econometric Software," Journal of Economic Literature, 37(2), pp. 633-665.
  24. ^ • Vassilis A. Hajivassiliou (2008). "computational methods in econometrics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
       • Richard E. Quandt (1983). "Computational Problems and Methods," ch. 12, in Handbook of Econometrics, v. 1, pp. 699-764.
       • Ray C. Fair (1996). "Computational Methods for Macroeconometric Models," Handbook of Computational Economics, v. 1.
  25. ^ Judea Pearl (2000). Causality: Models, Reasoning, and Inference, Cambridge University Press.
  26. ^ David Card (1999). "The Causal Effect of Education on Earnings," in Ashenfelter, O. and Card, D. (eds.), Handbook of Labor Economics, pp. 1801–1863.
  27. ^ http://www.wiley.com/bw/journal.asp?ref=1368-4221
  28. ^ McCloskey (May 1985). "The Loss Function has been mislaid: the Rhetoric of Significance Tests". American Economic Review 75 (2).
  29. ^ Stephen T. Ziliak and Deirdre N. McCloskey (2004). "Size Matters: The Standard Error of Regressions in the American Economic Review," Journal of Socio-economics, 33(5), pp. 527–546.
  30. ^ Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review 73 (1): 34. http://www.jstor.org/pss/1803924.
  31. ^ Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review 73 (1): 43. http://www.jstor.org/pss/1803924.
  32. ^ Robert F. Garnett. What Do Economists Know? New Economics of Knowledge. Routledge, 1999. ISBN 978-0-415-15260-0. p. 170
  33. ^ G. M. P. Swann. Putting Econometrics in Its Place: A New Direction in Applied Economics. Edward Elgar Publishing, 2008. ISBN 978-1-84720-776-0. pp. 62–64.

References

  • Palgrave Handbook of Econometrics: v. 1, Econometric Theory (2007); v. 2, Applied Econometrics (2009). Palgrave Macmillan. ISBN 978-1-4039-1799-7.
  • Pearl, Judea (2009). Causality: Models, Reasoning and Inference, 2nd ed. Cambridge University Press.
  • Pindyck, Robert S., and Daniel L. Rubinfeld (1998). Econometric Methods and Economic Forecasts, 4th ed. McGraw-Hill.
  • Studenmund, A. H. (2011). Using Econometrics: A Practical Guide, 6th ed.
  • Wooldridge, Jeffrey (2003). Introductory Econometrics: A Modern Approach. Mason: Thomson South-Western. ISBN 0-324-11364-1.
