Tuesday, 11 December 2012

"...Economics has more data the human mind can comprehend......"


The following article is very important. It comes from the Monetary Sovereignty blog, which offers perspectives on Modern Monetary Theory (MMT). What is important about it is the realization that a supercomputer, or supercomputers, could build up a real picture of the economy, possibly in real time. This major and vital concept appears in the advanced stage of Transfinancial Economics, the new, evolving global paradigm.

http://www.p2pfoundation.net/Transfinancial_Economics

RS.


How IBM could save the world



Economics has more data than the human mind can comprehend. So economists (including this one) tend to focus on small clumps of data we can visualize, and from them, we draw conclusions. For instance, consider these data:
1817-1821: U. S. Federal Debt reduced 29%. Depression began 1819.
1823-1836: U. S. Federal Debt reduced 99%. Depression began 1837.
1852-1857: U. S. Federal Debt reduced 59%. Depression began 1857.
1867-1873: U. S. Federal Debt reduced 27%. Depression began 1873.
1880-1893: U. S. Federal Debt reduced 57%. Depression began 1893.
1920-1930: U. S. Federal Debt reduced 36%. Depression began 1929.
I conclude, from these data, that federal surpluses cause depressions. I even offer a rationale: surpluses remove dollars from the economy, and a growing economy requires a growing supply of money, while a shrinking supply of money causes economic shrinkage (a depression). I see ample proof of this in many places, and currently the euro nations are on track to provide even more proof.
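To make the pattern easier to inspect, here is a minimal sketch in Python (the figures are the ones quoted above; the list structure and variable names are mine) that tabulates each episode and the lag between the end of the surplus period and the start of the depression:

```python
# Minimal sketch: the six debt-reduction episodes quoted above, and the lag
# between the end of each episode and the start of the depression.
episodes = [
    # (surplus start, surplus end, % debt reduction, year depression began)
    (1817, 1821, 29, 1819),
    (1823, 1836, 99, 1837),
    (1852, 1857, 59, 1857),
    (1867, 1873, 27, 1873),
    (1880, 1893, 57, 1893),
    (1920, 1930, 36, 1929),
]

for start, end, cut, crash in episodes:
    lag = crash - end  # years from the end of the surplus period to the crash
    print(f"{start}-{end}: debt cut {cut}%, depression began {crash} (lag {lag:+d} yr)")
```

Every episode is followed by (or, in 1819, overlaps) a depression, but the timing varies from episode to episode, which is exactly the puzzle raised next.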
But wait a minute. The 1819 depression began after just two years of surplus, while the 1929 depression waited for nine years of surplus. Why?
And then there was the Clinton surplus of 1998-2000, which only caused a recession in 2001. So was there something else that triggered, or outright caused, these depressions?
I discuss this at: What triggers recessions and depressions? But that discussion barely brushed the surface of the question.
Why didn’t I go deeper? Too many variables of indeterminate weights.
Read any paper, book or blog post on economics, and you will see conclusions, possibly supported by data, but you'll not see all the related data along with historically proven weights. You might even see formulas on the order of X = 1a + 2b + 3c + ... + N, but how predictive are those formulas? Commodity and stock chartists provide seemingly infinite graphs, but how predictive are they?
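As an illustration of the problem (a sketch only: the data are synthetic, and the weights 1, 2, 3 are hypothetical), here is a small Python example that fits a formula of that linear form and shows how weakly it predicts once the omitted variables show up as noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "economy": the outcome X really does equal 1a + 2b + 3c,
# plus noise standing in for the countless variables the formula omits.
def economy(factors):
    return factors @ np.array([1.0, 2.0, 3.0]) + rng.normal(0.0, 5.0, len(factors))

train = rng.normal(size=(30, 3))   # 30 past observations of a, b, c
test = rng.normal(size=(30, 3))    # 30 future observations

y_train = economy(train)
weights, *_ = np.linalg.lstsq(train, y_train, rcond=None)

in_sample = np.corrcoef(train @ weights, y_train)[0, 1]
out_of_sample = np.corrcoef(test @ weights, economy(test))[0, 1]
print(f"fitted weights: {weights.round(2)}")   # noisy estimates of 1, 2, 3
print(f"correlation in-sample: {in_sample:.2f}, out-of-sample: {out_of_sample:.2f}")
```

Even here, where the formula's form is exactly right, the fitted weights are only rough estimates, and the predictions are only as good as the share of the outcome those three variables actually explain.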
While I feel confident that federal surpluses, and even reductions in deficit growth, hurt the economy, and I offer data to support this conclusion, I do not offer proof. No economist ever has proved much of anything, though we all argue mightily for our positions. If this reminds you of religion, where nothing is proved and everyone is absolutely certain, you’re right. Economics is closer to religion than to science, and the reason is complexity.
It doesn’t have to be this way. The human brain is limited in the number of related factors it consciously can organize. Show me a formula based on a dozen variables, and I will not be able to visualize it.
But ask me to catch a fly ball, in which my brain subconsciously must analyze such variables as the speed and trajectory of the ball, wind speed, wind direction, ground (running) conditions, ball weight and size, plus all the past experiences I’ve had in running and catching a ball, and my brain can predict exactly where my glove has to be, and when — usually.
I'll run at exactly the right pace, neither too slowly nor too fast, so that the trajectory of my glove intersects the trajectory of the ball, right on time: an amazing feat made even more amazing by the thousands of decisions and predictions my brain must make when signalling each of my muscles to contract the right amount at the right moment, just so I can take one step, let alone intercept a fly ball.
Why can I make all those predictions, involving thousands of weighted variables, but am unable to visualize a handful of variables simultaneously? I believe the answer is: feedback.

Last year, the IBM computer named "Watson" defeated the two greatest human players in Jeopardy history. Those who know the game understand that this achievement was orders of magnitude beyond winning at chess. Jeopardy questions are filled with linguistic misdirections: puns, rhymes, puzzles and verbal tricks.
English by itself is a complex language. Consider the real headline, “English Left Waffles on Falklands.” What does it mean? Did the English cook up a stack of waffles and leave them on some islands? Or did it mean the English left (i.e. liberals) were undecided about what to do with the Falklands?
Add that misdirection to the need to understand facts, slogans and ideas we all take for granted, and you can visualize the kind of complexity Watson conquered. How did it do it?
Well, I can tell you what didn’t happen. There weren’t an infinite number of programmers inputting an infinite number of possible questions, in the hopes that one would match the latest Jeopardy question.
No, instead they used machine learning. Here is an example. One of the questions named two people and asked what they had in common. The answer was supposed to be what state (Iowa, Ohio, etc.) they came from. Watson missed the first question, because it found something else the two had even more in common. The human contestants answered correctly.
Then Watson was told the correct answer, but not the reasoning behind it. The same thing happened with the second question. Watson gave the wrong answer. Humans gave the right answer. Watson was told the right answer.
But, on the third question, Watson answered correctly. It had “learned,” from the first two answers, that a state name was wanted. Thereafter, Watson answered all similar questions correctly. Given all possible answers, Watson offered the answer having the highest probability.
There would have been no way for programmers to anticipate that question, then program Watson with the answer. Machine learning accomplished in seconds, what ordinary programming never could.
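Here is a toy sketch of that idea in Python. To be clear, this is not IBM's algorithm; the categories, starting weights and update rule are all invented for illustration. The learner keeps a weight per answer category and, after each revealed answer, shifts weight toward the category that turned out to be correct:

```python
# Toy feedback learner, invented for illustration (not Watson's real algorithm).
# Starting weights are arbitrary priors that happen to favor the wrong categories.
weights = {"shared profession": 5.0, "birth year": 4.0, "home state": 1.0}

def answer(candidates):
    # Pick the candidate answer whose category currently carries the most weight.
    return max(candidates, key=lambda c: weights[c[0]])

def feedback(correct_category):
    # The revealed answer tells us which category was wanted; reward it.
    weights[correct_category] += 2.5

# Each question offers one plausible candidate per category.
questions = [
    [("shared profession", "both were engineers"), ("home state", "Iowa")],
    [("birth year", "both born in 1901"), ("home state", "Ohio")],
    [("shared profession", "both were senators"), ("home state", "Kansas")],
]

for candidates in questions:
    category, guess = answer(candidates)
    print(f"guess: {guess} ({category})")
    feedback("home state")  # the correct answer is always the state name

# The first two guesses miss; the third (and every later one) hits, because
# feedback has pushed "home state" to the highest weight.
```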
Similarly, though I have caught thousands of balls in every kind of weather, on every kind of field, balls of different sizes and shapes (beachballs, footballs, marbles), balls going at different speeds, over different distances, the next time I catch a ball, the situation will be unique. But I will receive feedback, continuous feedback. And the odds are, I either will catch the ball or quickly will realize I can't.
With every step I take, my brain will recognize thousands of things familiar enough to analyze, and based on that familiarity, will make appropriate adjustments, perhaps millions of adjustments per second. And this feedback will allow me to predict exactly where my glove needs to be and how my muscles need to move.
Bottom line: economics never will be a complete science so long as economists rely solely on conclusions drawn from limited data. The solution is to use a supercomputer, of Watson capacity or greater, that is given every conceivable piece of data prior to every important result, a supercomputer that is told to correlate all that data with each result (i.e., each "correct answer") and to learn from each result (feedback) the most likely next result.
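What might that look like? A minimal sketch in Python follows; the indicators, data and learning rule are hypothetical stand-ins, since the real machine would ingest vastly more. The point is the loop: predict, receive the actual outcome as feedback, adjust, repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly indicators (deficit growth, money supply, trade, ...)
# and a 0/1 flag for "a recession began this year". These stand in for the
# "every conceivable piece of data" the real machine would be fed.
n_years, n_indicators = 200, 12
X = rng.normal(size=(n_years, n_indicators))
true_w = rng.normal(size=n_indicators)            # unknown to the machine
y = (X @ true_w + rng.normal(0.0, 1.0, n_years) > 0).astype(float)

w = np.zeros(n_indicators)  # the machine starts knowing nothing
correct = []
for t in range(n_years):
    p = 1.0 / (1.0 + np.exp(-X[t] @ w))  # predicted chance of recession
    correct.append((p > 0.5) == bool(y[t]))
    w += 0.1 * (y[t] - p) * X[t]         # feedback: online logistic update

half = n_years // 2
print(f"accuracy in first {half} years: {np.mean(correct[:half]):.2f}")
print(f"accuracy in last {half} years:  {np.mean(correct[half:]):.2f}")
```

Each year's outcome is fed back in, so the later predictions come out measurably better than the early ones; a Watson-scale machine fed real data would do the same thing on a vastly larger scale.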
IBM spent millions on Watson. It achieved some measure of publicity, but it now can achieve so much more. If IBM would create an "economics Watson," pumped full of data and engaged in machine learning, IBM could predict, and thereby change, the world.
Rodger Malcolm Mitchell
http://www.rodgermitchell.com


Reference link: http://rodgermmitchell.wordpress.com/2012/05/04/how-ibm-can-change-the-world/
