Monday, 20 October 2014

Some Economics Websites


From Wikipedia, the free encyclopedia/Blogger Ref Site http://www.p2pfoundation.net/Multi-Dimensional_Science



Shadow Stats

From Wikipedia, the free encyclopedia/Blog Ref http://www.p2pfoundation.net/Transfinancial_Economics


ShadowStats.com
Web address: shadowstats.com
Type of site: Analysis of government economic and unemployment statistics
Owner: John Williams
Current status: Online
Shadowstats.com is a website that offers alternative recalculations of government-published economic and unemployment statistics, using methodologies that have since been updated or replaced and measures that are no longer emphasized by traditional media outlets.
The site is authored by John Williams, an economic consultant who holds a BA in economics and an MBA from Dartmouth College, New Hampshire.[1]


Claims

Regarding inflation statistics, Williams says that some of the biggest changes to the Consumer Price Index were made between 1997 and 1999 in an effort to reduce Social Security outlays, citing controversial changes made under Alan Greenspan that include "hedonic regression", which adjusts prices for the increased quality of goods.[2] Some other investors have echoed Williams' views, most prominently Bill Gross, who reportedly called the US CPI a "haute con job".[2] John S. Greenlees and Robert B. McClelland, staff economists at the US Bureau of Labor Statistics, wrote a paper to address CPI "misconceptions" such as those of Williams.[3]
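The effect of such a quality adjustment can be illustrated with a toy calculation (all figures hypothetical; actual BLS hedonic adjustments are estimated with regression models, not assumed dollar values):

```python
# Illustrative sketch (hypothetical numbers): how a hedonic quality
# adjustment lowers measured inflation. Suppose a TV's price rises from
# $500 to $550 (+10%), but the quality improvement is valued at $30.
old_price = 500.0
new_price = 550.0
quality_value = 30.0

raw_inflation = (new_price - old_price) / old_price
# Hedonic adjustment: subtract the estimated value of the quality change
adjusted_inflation = (new_price - quality_value - old_price) / old_price

print(f"raw: {raw_inflation:.1%}, quality-adjusted: {adjusted_inflation:.1%}")
# raw: 10.0%, quality-adjusted: 4.0%
```

This is the crux of the dispute: the adjusted figure records less inflation than the sticker prices alone would suggest.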
Regarding unemployment statistics, Williams points out that the U-3 unemployment rate series, created under President Lyndon B. Johnson, excludes people who stopped looking for work more than a year ago as well as part-time workers who are seeking full-time employment. Although a broader unemployment series, which includes part-time workers looking for full-time work and unemployed people who stopped looking more than a year ago, is still published monthly by the BLS, the U-3 series is generally considered more meaningful and is the headline rate picked up by most media outlets.[4]
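A toy calculation (hypothetical labour-force figures, loosely following the BLS U-6 construction) shows how much the definition matters:

```python
# Illustrative sketch (made-up numbers, in millions) of how the headline
# U-3 rate differs from a broader measure that also counts marginally
# attached workers and involuntary part-timers.
unemployed = 9.0           # actively searched in the past 4 weeks
marginally_attached = 2.0  # stopped looking, but would still take a job
part_time_economic = 7.0   # part-time workers who want full-time work
labor_force = 155.0

u3 = unemployed / labor_force
broad = (unemployed + marginally_attached + part_time_economic) / (
    labor_force + marginally_attached
)

print(f"U-3: {u3:.1%}, broader measure: {broad:.1%}")
```

With the same underlying population, the broader measure is roughly double the headline rate, which is the gap Williams emphasizes.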
Regarding growth statistics, Williams characterizes the official numbers for U.S. Gross Domestic Product (GDP) and jobs growth as ranging from "deceptive"[5] to "rigged" and "manipulated".

Critical reception

Shadowstats has been strongly criticized, particularly over its estimates of inflation. Market participants, economists and bloggers frequently point out that basic arithmetic shows the Shadowstats CPI to be an extreme overestimate.[6][7][8][3]
University of Maryland Professor Katharine Abraham, who previously headed the agency responsible for publishing official unemployment and inflation data, says of Williams' claims, "The culture of the Bureau of Labor Statistics is so strong that it's not going to happen." Steve Landefeld, former director of the Bureau of Economic Analysis, the Commerce Department agency that prepares quarterly GDP reports, said in response to an article about Williams, "the bureau rigorously follows guidelines designed to ensure its work remains totally transparent and absolutely unbiased." In the same article, UC San Diego economist Valerie Ramey, a member of the Federal Economic Statistics Advisory Committee, defended the methodological changes claiming they were only made "after academic economists did decades of research and said they should be done."[9]

Notes

  1. ^ Shadowstats main page: "Walter J. "John" Williams was born in 1949. He received an A.B. in Economics, cum laude, from Dartmouth College in 1971, and was awarded a M.B.A. from Dartmouth's Amos Tuck School of Business Administration in 1972[...]. During his career as a consulting economist...
  2. ^ "Indeed, over the past few years, some of Mr Williams's views on economic indicators - the consumer price index, in particular - have been echoed by better-known and leading investment community figures, such as the bond investor Bill Gross, the strategist Stephen Roach and James Grant." Sikols, Richard (2008-09-20). "Forget short-sellers, Pollyanna creep could be the culprit". The Times Online (The Times). Retrieved 23 November 2009.
  3. ^ John S. Greenlees and Robert B. McClelland: "Addressing misconceptions about the Consumer Price Index", Monthly Labor Review, August 2008.
  4. ^ "Alternative measures of labor underutilization", BLS.
  5. ^ Forsyth, Randall W. (30 October 2009). ""Risk Trade" Returns, at Least for a Day". Barron's. Dow Jones & Company, Inc. Retrieved 23 November 2009.
  6. ^ "Why Shadow Government Statistics is very, very, very wrong", Michael Sankowski, The Traders Crucible, February 1, 2011.
  7. ^ "Niall Ferguson Has Not Admitted He Was Wrong About Inflation", Adam Ozimek, Forbes, October 10, 2013.
  8. ^ "The Absurdity of ShadowStats Inflation Estimates", David Clayton, May 15, 2011.
  9. ^ Zuckerman, Sam (25 May 2008). "Economist challenges government data". The San Francisco Chronicle.

External links

Monitoring Inflation in Real-Time

The following reflects the importance of gathering prices in real time. In Transfinancial Economics this plays a vital role not just in monitoring inflation but also in controlling it directly, with dynamic, super-flexible controls at the point of sale (i.e. EPOS). Of course, in TFE the coverage of prices in real time is "infinitely" more comprehensive than anything seen before. See http://www.p2pfoundation.net/Transfinancial_Economics




A Google search also turns up other apparent real-time inflation-monitoring schemes.




https://www.google.co.uk/?gfe_rd=cr&ei=BNVEVNurKPPH8geVhoGQDw&gws_rd=ssl#q=monitoring+inflation+in+real+time








The Billion Prices Project @ MIT  (Source of following "Article")

The Billion Prices Project is an academic initiative that uses prices collected from hundreds of online retailers around the world on a daily basis to conduct economic research.
This page shows our most recent research leveraging high-frequency price data, as well as the US daily inflation index.
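The core mechanics of such an index can be sketched in a few lines. This is a minimal, hypothetical illustration (made-up prices, unweighted Jevons-style chaining of matched products); the actual BPP/PriceStats methodology is far more sophisticated:

```python
# Sketch: build a daily price index by averaging the day-over-day log
# price changes of matched products, then chaining them into an index.
import math

# prices[product] = list of daily prices scraped from online retailers
prices = {
    "milk":  [1.00, 1.00, 1.02, 1.02],
    "bread": [2.00, 2.02, 2.02, 2.04],
    "eggs":  [3.00, 3.00, 3.03, 3.03],
}

days = len(next(iter(prices.values())))
index = [100.0]  # base day = 100
for t in range(1, days):
    # unweighted geometric mean of price relatives (a Jevons-type index)
    avg_log_change = sum(
        math.log(p[t] / p[t - 1]) for p in prices.values()
    ) / len(prices)
    index.append(index[-1] * math.exp(avg_log_change))

print([round(i, 2) for i in index])
```

Because the prices arrive daily, the chained index updates daily as well, which is what allows inflation to be observed in near real time.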
The Team
The BPP is led by Professors Roberto Rigobon and Alberto Cavallo at the MIT Sloan School of Management.
Roberto Rigobon
Professor of Economics
Applied Economics Group
MIT – Sloan School of Management
Alberto Cavallo
Assistant Professor of Economics
Applied Economics Group
MIT – Sloan School of Management

We thank all the current and past MIT students that participated in the project through MIT’s UROP program for their help and enthusiasm.
For our technology and computational needs, we receive outstanding help from Mark Riedesel, Chay Casso, Armand Doucette, Ray Faith, Ken Valentine, and others at Sloan Technology Services. We also thank the members of SIPB at MIT for hosting this page.


The Sponsors
We are extremely thankful to the MIT Sloan School of Management for its financial support.

Daily Inflation Data
If you are looking for more high-frequency inflation data across countries and sectors, please contact PriceStats, the company that collects the online data we use in our research initiatives and experimental indexes.




Friday, 17 October 2014

Watched by the Web: Surveillance Is Reborn


Books of The Times


Google does it. Amazon does it. Walmart does it. And, as news reports last week made clear, the United States government does it.


Sonny Figueroa/The New York Times

BIG DATA

A Revolution That Will Transform How We Live, Work, and Think
By Viktor Mayer-Schönberger and Kenneth Cukier
242 pages. Eamon Dolan/Houghton Mifflin Harcourt. $27.

Viktor Mayer-Schönberger and Kenneth Cukier. Photographs: Rob Judges
Does what? Uses “big data” analysis of the swelling flood of data that is being generated and stored about virtually every aspect of our lives to identify patterns of behavior and make correlations and predictive assessments.
Amazon uses customer data to give us recommendations based on our previous purchases. Google uses our search data and other information it collects to sell ads and to fuel a host of other services and products.
The National Security Agency, a news article in The Guardian revealed last week, is collecting the phone records of millions of American customers of Verizon — “indiscriminately and in bulk” and “regardless of whether they are suspected of any wrongdoing” — under a secret court order. Under another surveillance program called Prism, The Guardian and The Washington Post reported, the agency has been collecting data from e-mails, audio and video chats, photos, documents and logins, from leading Internet companies like Microsoft, Yahoo, Google, Facebook and Apple, to track foreign targets.
Why spread such a huge net in search of a handful of terrorist suspects? Why vacuum up data so indiscriminately? “If you’re looking for a needle in the haystack, you need a haystack,” Jeremy Bash, chief of staff to Leon E. Panetta, the former director of the Central Intelligence Agency and defense secretary, said on Friday.
In “Big Data,” their illuminating and very timely book, Viktor Mayer-Schönberger, a professor of Internet governance and regulation at the Oxford Internet Institute at Oxford University, and Kenneth Cukier, the data editor for The Economist, argue that the nature of surveillance has changed.
“In the spirit of Google or Facebook,” they write, “the new thinking is that people are the sum of their social relationships, online interactions and connections with content. In order to fully investigate an individual, analysts need to look at the widest possible penumbra of data that surrounds the person — not just whom they know, but whom those people know too, and so on.”
Mr. Cukier and Mr. Mayer-Schönberger argue that big data analytics are revolutionizing the way we see and process the world — they even compare its consequences to those of the Gutenberg printing press. And in this volume they give readers a fascinating — and sometimes alarming — survey of big data’s growing effect on just about everything: business, government, science and medicine, privacy and even on the way we think. Notions of causality, they say, will increasingly give way to correlation as we try to make sense of patterns.
Data is growing incredibly fast — by one account, it is more than doubling every two years — and the authors of this book argue that as storage costs plummet and algorithms improve, data-crunching techniques, once available only to spy agencies, research labs and gigantic companies, are becoming increasingly democratized.
Big data has given birth to an array of new companies and has helped existing companies boost customer service and find new synergies. Before a hurricane, Walmart learned, sales of Pop-Tarts increased, along with sales of flashlights, and so stores began stocking boxes of Pop-Tarts next to the hurricane supplies “to make life easier for customers” while boosting sales. UPS, the authors report, has fitted its trucks with sensors and GPS so that it can monitor employees, optimize route itineraries and know when to perform preventive vehicle maintenance.
Baseball teams like Billy Beane’s Oakland A’s (immortalized in Michael Lewis’s best-seller “Moneyball”) have embraced new number-crunching approaches to scouting players with remarkable success. The 2012 Obama campaign used sophisticated data analysis to build a formidable political machine for identifying supporters and getting out the vote. And New York City has used data analytics to find new efficiencies in everything from disaster response, to identifying stores selling bootleg cigarettes, to steering overburdened housing inspectors directly to buildings most in need of their attention. In the years to come, Mr. Mayer-Schönberger and Mr. Cukier contend, big data will increasingly become “part of the solution to pressing global problems like addressing climate change, eradicating disease and fostering good governance and economic development.”
There is, of course, a dark side to big data, and the authors provide an astute analysis of the dangers they foresee. Privacy has become much more difficult to protect, especially with old strategies — “individual notice and consent, opting out and anonymization” — losing effectiveness or becoming completely beside the point.
“The ability to capture personal data is often built deep into the tools we use every day, from Web sites to smartphone apps,” the authors write. And given the myriad ways data can be reused, repurposed and sold to other companies, it’s often impossible for users to give informed consent to “innovative secondary uses” that haven’t even been imagined when the data was first collected.
The second danger Mr. Cukier and Mr. Mayer-Schönberger worry about sounds like a scenario from the sci-fi movie “Minority Report,” in which predictions seem so accurate that people can be arrested for crimes before they are committed. In the real near future, the authors suggest, big data analysis (instead of the clairvoyant Pre-Cogs in that movie) may bring about a situation “in which judgments of culpability are based on individualized predictions of future behavior.”
Already, insurance companies and parole boards use predictive analytics to help tabulate risk, and a growing number of places in the United States, the authors of “Big Data” say, employ “predictive policing,” crunching data “to select what streets, groups and individuals to subject to extra scrutiny, simply because an algorithm pointed to them as more likely to commit crime.”
Last week an NBC report noted that in so-called signature drone strikes “the C.I.A. doesn’t necessarily know who it is killing”: in signature strikes “intelligence officers and drone operators kill suspects based on their patterns of behavior — but without positive identification.”
One problem with relying on predictions based on probabilities of behavior, Mr. Mayer-Schönberger and Mr. Cukier argue, is that it can negate “the very idea of the presumption of innocence.”
“If we hold people responsible for predicted future acts, ones they may never commit,” they write, “we also deny that humans have a capacity for moral choice.”
At the same time, they observe, big data exacerbates “a very old problem: relying on the numbers when they are far more fallible than we think.” They point to escalation of the Vietnam War under Robert S. McNamara (who served as secretary of defense to Presidents John F. Kennedy and Lyndon B. Johnson) as a case study in “data analysis gone awry”: a fierce advocate of statistical analysis, McNamara relied on metrics like the body count to measure the progress of the war, even though it became clear that Vietnam was more a war of wills than of territory or numbers.
More recent failures of data analysis include the Wall Street crash of 2008, which was accelerated by hugely complicated trading schemes based upon mathematical algorithms. In his best-selling 2012 book, “The Signal and the Noise,” the statistician Nate Silver, who writes the FiveThirtyEight blog for The New York Times, pointed to failures in areas like earthquake science, finance and biomedical research, arguing that “prediction in the era of Big Data” has not been “going very well” (despite his own successful forecasts in the fields of politics and baseball).
Also, as the computer scientist and musician Jaron Lanier points out in his brilliant new book, “Who Owns the Future?,” there is a huge difference between “scientific big data, like data about galaxy formation, weather or flu outbreaks,” which with lots of hard work can be gathered and mined, and “big data about people,” which, like all things human, remains protean, contradictory and often unreliable.
To their credit, Mr. Cukier and Mr. Mayer-Schönberger recognize the limitations of numbers. Though their book leaves the reader with a keen appreciation of the tools that big data can provide in helping us “quantify and understand the world,” it also warns us about falling prey to the “dictatorship of data.”
“We must guard against overreliance on data,” they write, “rather than repeat the error of Icarus, who adored his technical power of flight but used it improperly and tumbled into the sea.”




Another Book Review


Book Review: Big Data: A Revolution That Will Transform How We Live, Work and Think

In Big Data: A Revolution That Will Transform How We Live, Work and Think, two of the world's most-respected data experts reveal the reality of a big data world and outline clear and actionable steps that will equip the reader with the tools needed for this next phase of human evolution. Niccolo Tempini finds that rather than showing how the impact of data-driven innovations will advance the march of humankind, the authors merely present a thin collection of happy-ending business stories.
This was originally posted on LSE Review of Books.
Big Data: A Revolution That Will Transform How We Live, Work and Think. Kenneth Cukier and Viktor Mayer-Schonberger. Hodder. March 2013.
My issue with Big Data is that it does not take big data seriously enough. Although the authors have pedigree (Editor at the Economist; Professor at Oxford) this is not an academic text: it belongs to that category of popular essays that attempt to stimulate debate. Anyone who works with data (e.g. technologists, scientists, politicians, consultants) or questions what will be borne from our age of data affluence may have expectations for this book - unfortunately it falls short on providing any real answer.

The book paints an impending revolution in mighty strokes. The authors claim the impact of data-driven innovations will advance the march of humankind. What they end up presenting is a thin collection of happy-ending business stories — flight fare prediction, book recommendation, spell-checkers and improved vehicle maintenance. It's too bad the book's scientific champion, Google Flu Trends, a tool which predicts flu rates through search queries, has proven so fallible. Last February it forecast almost twice the number of cases reported by the official count of the Centers for Disease Control.
Big data will certainly affect many processes in a range of industries and environments, however, this book gestures at an inevitable social revolution in knowledge making (‘god is dead’), for which I do not find coherent evidence.
The book correctly points out that data is rapidly becoming the “raw material of business”. Many organisations will tap into the new data affluence, the outcome of a long historical process that includes ‘datafication’ (I’ll define later) and the diffusion of technologies that have tremendously reduced the costs involved in data production, storage and processing.
So, where’s the revolution? The book argues for three rather simplistic shifts.
The first shift – the new world is characterised by "far more data". The authors say that just as a movie emerges from a series of photographs, increasing amounts of data matter because quantitative changes bring about qualitative changes. The technical equivalent in big data is the ability to survey a whole population instead of just sampling random portions of it.
The second shift is that “looking at vastly more data also permits us to loosen up our desire for exactitude”. Apparently, in big data, “with less error from sampling we can accept more measurement error”. According to the authors, science is obsessed with sampling and measurement error as a consequence of coping in a ‘small data’ world.
It would be amazing if the problems of sampling and measurement error really disappeared when you’re “stuffed silly with data”. But context matters, as Microsoft researcher Kate Crawford cogently argues in her blog. It is easy to treat samples as n=all as data get closer to full coverage, yet researchers still need to account for the representativeness of their sample. Consider how the digital divide – some people are on the Internet, others are not — affects the data available to researchers.
While a missed prediction does not cause much damage if it is about book recommendations on Amazon, a similar error when doing policy making through big data is potentially more serious. Crawford reminds us that Google Flu Trends failed because of measurement error. In big data, data are proxies of events, not the events themselves. Google Flu Trends cannot distinguish with certainty people who have the flu from people who are just searching about it. Google may tune “its predictions on hundreds of millions of mathematical modelling exercises using billion of data points”, but volume is not enough. What matters is the nature of the data points and Google has apples mixed with oranges.
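Crawford's point can be shown with a small simulation (all numbers hypothetical): collecting more data shrinks sampling error, but does nothing about a systematic measurement error, which is the kind of failure that sank Google Flu Trends.

```python
# Sketch: estimates from ever-larger samples converge -- but to the
# biased value, not the truth. True value is 10; every observation
# carries a systematic bias of +2 plus random noise.
import random

random.seed(42)
true_value = 10.0
bias = 2.0

def sample_mean(n):
    # each observation = truth + systematic bias + random noise
    return sum(true_value + bias + random.gauss(0, 5) for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, round(sample_mean(n), 3))
# As n grows the estimates converge -- but to 12, not to the true 10.
```

Volume reduces the noise; it cannot correct data points that measure the wrong thing.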
The third and most radical shift implies "we won't have to be fixated on causality [...] the idea of understanding the reasons behind all that happens." This is a straw man argument. The traditional image of science the authors discuss (fixated with causality, paranoid about exactitude) conflates principles with practices. Correlational thinking has been driving a lot of processes and institutional behaviours in the real world. Nevertheless, "Felix, qui potuit rerum cognoscere causas" (Fortunate is he who was able to know the causes of things) – which happens to be the motto of the LSE – is still bedrock in Western political life and philosophy. The authors cannot dismiss causation so cavalierly.
However, it appears that they do. Big data, they say, means that the social sciences “have lost their monopoly on making sense of empirical data, as big-data analysis replaces the highly skilled survey specialists of the past. The new algorithmists will be experts in the areas of computer science, mathematics, and statistics; and they would act as reviewers of big data analyses and predictions.” This is an odd claim given that the social sciences are thriving precisely because expert narratives are a necessary component of how data becomes operational. This book is a shining example that big data speaks the narrative experts give it. What close observers know is that even at the most granular level of practice, analytic understanding is necessary when managers attempt to implement these systems in the world.
The book is blinded by its strongest assumption: that quantitative analysis is devoid of qualitative assessment. For the authors, to datafy is merely to put a phenomenon “in a quantified format so it can be tabulated and analysed.” Their argument, that “mathematics gave new meaning to data – it could now be analysed, not just recorded and retrieved”, implies that analysis begins only after phenomena get reduced to quantifiable formats. Human judgement is just an inconvenience of a ‘small data’ world that has no role in the process of making data. This is why they warn that in the impending world of big data, “there will be a special need to carve out a place for the human”.
It is hard to see how imagination and practical context will suddenly cease to play a fundamental role in innovation. But innovation could definitely be jeopardised if big data systems are not recognised for what they are – tools for optimising resource management. Big data may not be an instrument of discovery, but it certainly is a way of managing entities that are already known. Big data promises to be financially valuable because it is primarily a managerial resource (e.g. pricing fares, finding books, moving spare parts, etc.).
In the world according to Cukier and Mayer-Schönberger, all the challenges of knowledge-making are about to evaporate. With big data affluence – sampling, exactitude, and the pursuit of causality will no longer be issues. The most pressing question is the problem of data valuation. Now there is a problem the authors are willing to discuss seriously: how can data be transformed into a stable financial asset when most of its utility as a predictive resource is not predictable?
So eager are the authors to mark the potential value of big data for organisations (data can only be an asset to a corporation) that they overlook the impact of these systems on other social actors. So what if big data environments reconfigure social inequalities? While the citizen will earn new responsibilities (like privacy management), only corporate entities will be able to systematically generate, own and exploit big data sets.
Big data is serious. There will be winners and there will be losers. What the public need is a book that explains the stakes so that they can be active participants in this revolution, rather than be passive recipients of corporate competition.
———————————————————————————————–
Niccolò Tempini is a PhD Candidate in Information Systems at the London School of Economics and Political Science. You can follow Niccolò on Twitter @tmpncl. Read more reviews by Niccolò.



Big Data: A Revolution That Will Transform How We Live, Work and Think, By Viktor Mayer-Schonberger and Kenneth Cukier

Synopsis

Fascinating reference here is made to ".....seeing inflation in real time...." This is in keeping with the evolving concept of Transfinancial Economics. I hope to read this book soon. http://www.p2pfoundation.net/Transfinancial_Economics




A New York Times bestseller. Longlisted for the Financial Times/Goldman Sachs Business Book of the Year Award. Since Aristotle, we have fought to understand the causes behind everything. But this ideology is fading. In the age of big data, we can crunch an incomprehensible amount of information, providing us with invaluable insights about the what rather than the why. We're just starting to reap the benefits: tracking vital signs to foresee deadly infections, predicting building fires, anticipating the best moment to buy a plane ticket, seeing inflation in real time and monitoring social media in order to identify trends. But there is a dark side to big data. Will it be machines, rather than people, that make the decisions? How do you regulate an algorithm? What will happen to privacy? Will individuals be punished for acts they have yet to commit? In this groundbreaking and fascinating book, two of the world's most-respected data experts reveal the reality of a big data world and outline clear and actionable steps that will equip the reader with the tools needed for this next phase of human evolution.




Imprint: John Murray Publishers Ltd (14 March 2013)



Another summary of the work, from The Guardian (Pindar)


Thanks to the internet, social networking, smartphones and credit cards, more data is being collected and stored about us than ever before – a level of surveillance the Stasi could only dream about, say Mayer-Schönberger and Cukier in this informative introduction to the "datafication" of our lives. Big data analysis gives big business a competitive edge (all those Amazon recommendations), but governments have invested heavily in it, too. The risks to privacy and freedom are obvious, but the authors accentuate the positive. Big data has useful applications in medicine, science and "culturomics". Mayer-Schönberger and Cukier make interesting observations about data-crunching techniques and they also report that analysts have found substantial amounts of "lexical dark matter" (words in books but not in dictionaries). In this brave new world of big data, Google and Amazon are frontrunners – although behind the NSA and GCHQ. The next challenge may be avoiding the "dictatorship of data".

To Save Everything, Click Here by Evgeny Morozov – review

Morozov takes a hard look at the claims of cybertheorists and concludes that our techno future might be dark and dangerous

Google self-driving car
Utopia postponed … Google's self-driving car might have unintended consequences. Photograph: Justin Sullivan/Getty Images
Newsflash: the internet doesn't exist. If you think there is just one thing called "the Internet" with a single logic and set of values – rather than a variety of different networked technologies, each with its own character and challenges – and that the rest of the world must be reshaped around it, then you are an "Internet-centrist". If you think the messiness and inefficiency of political and cultural life are problems that should be fixed using technology, then you are a "solutionist". And if you think that the age of Twitter and online videos of sneezing cats is so unlike anything that has gone before that we must tear up the rule-book of civilisation, then you are an "epochalist". Such coinages are one of the drive-by amusements of reading Evgeny Morozov, who, since his first book, The Net Delusion, has become one of our most penetrating and brilliantly sardonic critics of techno-utopianism.
To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don't Exist, by Evgeny Morozov
He certainly has some colourful adversaries. One is Jeff Jarvis, a new-media cyberhustler and consultant who is serially wrong about the near future, and seemingly cannot bear to hear any criticism of his adored Silicon Valley corporations. Appearing on the BBC earlier this year after Facebook had been hacked, he accused his interviewer of spreading "technopanic", insisted the whole story was "crap", and said: "This interview shouldn't exist." Afterwards, he tweeted: "The BBC can kiss my ass," and "Fuck you, BBC."
Among Morozov's other targets are Amazon chief Jeff Bezos, with his "populist rage against institutions" (except his own); LinkedIn supremo Reid Hoffman, who has perpetrated a book-shaped product entitled The Start-Up of You; Google's Eric Schmidt, who believes that an algorithm could one day tell you what is the "Best music from Lady Gaga"; Microsoft engineer Gordon Bell, lifelogger extraordinaire and exemplary lunatic of the mindset that holds that Truth, in the form of perfect data recall, is the absolute social value; and the games-will-save-the-world theorist Jane McGonigal, whose work Morozov likens to "a bad parody of Mitt Romney".
But Morozov's attacks go deeper than a righteous ridicule – he also interrogates the intellectual foundations of the cybertheorists, and finds that, often, they have cherry-picked ideas from the scholarly literature that are at best highly controversial in their own fields. His readings in this vein of Clay Shirky, Steven Johnson, David Weinberger and numerous other cyberintellectuals are suavely devastating.
We must, Morozov argues forcefully, place today's arguments in a broader context. "To talk about gamification" – the management-theory fad that seeks to apply videogame-style motivations and rewards to real-world practices – "without also discussing BF Skinner's behaviourism," he writes, is "misguided". Here the Belarus-born author also justifiably plays an autobiographical trump card: "As someone who grew up in the final years of the Soviet Union, even I remember the penchant that Soviet managers had for gamification: students were shipped to the fields to harvest wheat or potatoes, and since the motivation was lacking, they too were assigned points and badges."
The cyberhustlers are constantly declaring Year Zero and demanding that society be reformed according to the demands of "the Internet". But their understanding of the institutions they dream of seeing torn down – politics, the media, and now even university education – is superficial, as is their understanding of "the Internet" itself, whose secretive, privately-owned corporations are nothing like as "open" as their cheerleaders insist everything else must henceforth be. (Another of today's serious digital critics, Jaron Lanier, emphasises this point in his latest book, Who Owns the Future?) The important and admirably fulfilled purpose of Morozov's book, then, is to argue, as he finally sums up: "that there are good reasons not to run our politics as a start-up … that there are good reasons to value subjective but high-quality criticism, even if it doesn't stem from the 'wisdom of crowds' [… and] that numbers often tell us less than we think and quantification as such might actually thwart reforms."
Quantification, in the form of "Big Data", is the subject of Viktor Mayer-Schönberger and Kenneth Cukier's initially more celebratory book, Big Data: A Revolution That Will Transform How We Live, Work and Think. Now that we can collect and analyse vast quantities of data – often, all or nearly all the relevant data rather than just statistical samples – wonderful things can happen. Google predicts the spread of flu in near-real-time by analysing searches; engineers foresee the failure of engine parts that wirelessly phone home; and Walmart notices that, just before a hurricane hits, sales of Pop-Tarts increase. That there is a certain bathos to the progression of these examples is to be expected in an era that does not differentiate too pedantically between what is good for business and what is good for people.
Mayer-Schönberger and Cukier laudably demolish some of the more ludicrous big-data fantasies – for example, the claim by former Wired editor Chris Anderson that big data in science means "the end of theory" – but they also choose not to draw some arguably important distinctions. Is there, perhaps, a difference between "data" and "information" and "knowledge"? Might it be useful to distinguish between which articles a computer program has determined are "popular" on the internet, and which are actually worth reading?
The dark side of big data, according to the authors, lies in surveillance – in communist East Germany, they point out, the Stasi were aspiring big-data fanatics – and in the alarming prospect of Minority Report-style pre-emptive policing. (According to a study too recent to make it into either book, Facebook "likes" can already be used to accurately predict "sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender".) We could sleepwalk into a "dictatorship of data", where the algorithms mining the data for actionable recommendations are inscrutable and unaccountable "black boxes". So, these authors conclude, we need a new cadre of "algorithmists", people who scrutinise code for its obscured political choices.
Morozov, too, calls for "algorithmic auditors". (Already, he points out, a single Californian company determines automatically what will count as hate speech and obscenity in the comment systems of thousands of websites.) More imaginatively, he also points out many possible consequences of the social engineers' techno-fixes. "Would self-driving cars," he wonders pointedly, "result in inferior public transportation as more people took up driving?" If you can measure and upload your health, diet and fitness data to be "shared" with insurance companies, then you'll get cheaper insurance, say Mayer-Schönberger and Cukier cheerfully. But wait, says Morozov, such individual decisions don't take place in a vacuum: "If I choose to track and publicise my health, and you choose not to, then sooner or later your decision to do nothing might be seen as tacit acknowledgment that you have something to hide." These are, then, social and political problems, and ones that the mantra of individual choice cyborgised through shiny new technologies will often answer in ways that harm the already vulnerable.
In one amazing possible future, this newspaper's website would know, thanks to your aggregated personal data, whether you are already a fan of Jeff Jarvis and Clay Shirky: if so, it would then serve to you an alternative version of this review that scolded Evgeny Morozov for his curmudgeonly hatred of inevitable progress. Happily, we are not there yet, and we can still argue along with him about what sort of future we want. Data-dissidents of the world, unite: you have nothing to lose but your targeted adverts.
• Steven Poole's You Aren't What You Eat is published by Union Books. To order To Save Everything, Click Here and Big Data with free UK p&p call Guardian book service on 0330 333 6846 or go to guardian.co.uk/bookshop

Thursday, 16 October 2014

The Central Bank with an expanded role in a purely electronic monetary system

Blogger Ref http://www.p2pfoundation.net/Transfinancial_Economics 

The following is a link to a paper by Trond Andresen:

http://www.paecon.net/PAEReview/issue68/Andresen68.pdf

Abstract


Physical currency (bills and coins) is being phased out as an important means of exchange both in developed and developing countries. Transactions are increasingly done by debit card, computer, and mobile phone. This technologically driven process opens up some very useful possibilities, among these new and – for society – beneficial roles for the Central Bank. The paper assumes a scenario where the country in question issues its own currency, and all money is "electronic" – no bills and coins. This gives an extra impetus to the sovereign money solution; all deposits are at the Central Bank.

The paper also argues that in such a system – where banks are not allowed to create "credit money" when issuing loans (in this resembling the "100% reserve" solution supported by many reformers) – the economy need not, in spite of this, be "starved" of credit for investment – a warning that is not only sounded by the defenders of today’s financial system, but also by many of its critics. This goal might be achieved by the unconventional trick of letting commercial banks create the needed sovereign money at the Central Bank for their lending.
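The mechanism sketched in this paragraph – all deposits held at the Central Bank, with commercial banks originating loans whose proceeds are created as new sovereign money – can be illustrated with a toy ledger. This is a minimal sketch of the idea as summarised in the abstract, not code from the paper; all names (`CentralBank`, `issue_loan`, and so on) are illustrative assumptions.

```python
# Toy sovereign-money ledger: the Central Bank holds every deposit, and a
# commercial bank "creates" new money at the Central Bank when it lends.
class CentralBank:
    def __init__(self):
        self.deposits = {}          # holder -> balance; the whole money stock
        self.loan_liabilities = {}  # bank -> money it has created via lending

    def open_account(self, holder, balance=0):
        self.deposits[holder] = balance

    def issue_loan(self, bank, borrower, amount):
        # The commercial bank originates the loan, but the loaned sum is
        # created directly as a Central Bank deposit of the borrower.
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount
        self.loan_liabilities[bank] = self.loan_liabilities.get(bank, 0) + amount

    def repay_loan(self, bank, borrower, amount):
        # Repayment destroys the money again, shrinking the money stock.
        self.deposits[borrower] -= amount
        self.loan_liabilities[bank] -= amount

    def money_stock(self):
        return sum(self.deposits.values())

cb = CentralBank()
cb.open_account("alice", 100)
cb.issue_loan("BankA", "alice", 50)   # lending expands the money stock by 50
cb.repay_loan("BankA", "alice", 20)   # repayment destroys 20 of it
```

In this toy model lending is never "starved": a bank can always create the sovereign money its borrower needs, yet every unit it creates is tracked as a liability at the Central Bank, which is the point of contrast with today's commercial-bank credit money.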


A third point of the paper is to argue that simplification of the financial system should be a goal in itself.


JEL codes: B50, E5, E40, E42, E44, E58, G20, G28, H12, H62