Tuesday, 24 November 2015


Hedge funds, economics and simulations
Justin Lyon

Founder and CEO, Simudyne


Computer simulations using advanced mathematical techniques such as agent-based modelling and system dynamics are used by hedge funds to make vast amounts of money. Hedge funds operate in a complex and continually evolving competitive global system.
Advanced simulations can not only be used to make vast amounts of money for hedge funds; they must be used on a daily basis to design economic policies in the 21st century so we avert tomorrow's crisis.
Survival and success factors include having strong controls and an outstanding reputation to attract and retain both investors and the brightest people they can find:
  • people who are continually fed better information by their contacts and by their data models;
  • people who can execute their decisions rapidly and without error; and
  • people who are able to continually test their ideas and to learn from their own and others' experiences.
They don't have to get it right all the time, but they do have to survive and outperform their competitors (which include less risky investment alternatives) in bad times as well as in good. Hedge funds are filled with very bright people ('Masters of the Universe', to quote Tom Wolfe), such as the ex-mathematician James Simons of Renaissance Technologies, whose fund has been a top performer for twenty years. These people use their superior modelling and risk-management techniques in a highly leveraged way to make regular exceptional profits for their partners and themselves. Even so, only a few practitioners are believed to have begun using models that take irrational and collective behaviour into account; hence the recent and regular complaints about 'once in a hundred year' events and '25 sigma deviations' (for example, from a Goldman Sachs hedge fund). See, though, recent reports in Nature and New Scientist, and work by the econophysics community.
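The complaint about '25 sigma deviations' is easy to make concrete. Under the Gaussian assumption built into standard risk models, the probability of a single 25-standard-deviation move can be computed directly. A minimal sketch (the one-observation-per-trading-day framing is an illustrative assumption, not from the source):

```python
import math

def gaussian_tail(k: float) -> float:
    """P(Z > k) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

p = gaussian_tail(25)           # probability of one 25-sigma upward move
days_per_year = 252             # trading days per year (assumed)
expected_wait_years = 1 / (p * days_per_year)

print(f"P(25-sigma move) = {p:.3e}")
print(f"Expected wait between such moves: {expected_wait_years:.3e} years")
```

The wait is unimaginably longer than the age of the universe, so observing several such moves in one week is evidence against the Gaussian assumption itself, not a run of bad luck.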
While many of these people have a strong mathematics or physics background, the rest of the market (including its regulators and risk managers) is filled with classical traders - typically chartists, and people who sense what's happening by talking to contacts and watching patterns on screens - and with people who have, since the eighties, been through business schools that taught Standard Finance Theory (SFT) and its companion, the Efficient Market Hypothesis. SFT comes from a world in which computation was expensive, so shortcut assumptions (such as using only the first two moments of a distribution - mean and variance - and assuming the distribution was Gaussian) justified the random-walk approach to investing, the CAPM, the use of alpha and beta, the adoption of the brilliant Black-Scholes derivative-pricing formula based on stochastic calculus, and so on. These arguments have rattled on for years, and it is the conventional SFT approach that regulators and risk managers seem to have adopted, although there is plenty of evidence that chartists make their money by taking the opposite viewpoint - one seemingly related to the market impact of the traders themselves. It seems reasonable to suggest that, in future, leading hedge funds can gain additional benefit from a smarter approach to modelling that incorporates more of the micro level (types of trader and institution, their states, their probable behaviours under various market conditions, and how these evolve) alongside the existing macro level of modelling.
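The Black-Scholes formula mentioned above is compact enough to show in full. The closed form for a European call on a non-dividend-paying stock is standard; the numbers in the example below are illustrative, not from the source:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call: spot S, strike K, maturity T (years),
    risk-free rate r, volatility sigma (both annualised)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call: spot 100, strike 100, 5% rate, 20% volatility
price = black_scholes_call(100, 100, 1.0, 0.05, 0.20)
print(f"Call price: {price:.4f}")
```

The formula's elegance is exactly the point the paragraph makes: it rests on the Gaussian, constant-volatility world of SFT, which is what the chartists and the econophysicists dispute.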
Following some spectacular disasters in the eighties (and complaints that Japanese banks were gaining unfair advantage through excessive balance-sheet growth for the capital employed), regulators came up with Basel I and then Basel II, which imposed controls on banks' risk-adjusted capital ratios without properly investigating how those controls would be implemented - hence the drive by most banks to use off-balance-sheet vehicles and other tactics to get round the regulations and grow their earnings. Until recently, nobody paid much attention to what happens at a system level when everyone is corralled into adopting the same mechanism; another unintended consequence of standardised regulation surfaced in the raging arguments over the implementation of the 'mark to market' regime. One of the biggest issues in the 2008 crisis was a largely non-systemic approach to modelling and measuring risk: looking at an individual institution's portfolio of instruments, their past correlations, risk parameters and so on, as if the institution could act in isolation during a crisis, when regulation has of course created systemic correlations that come into play as soon as systemic risk begins to arise. A smart hedge fund might have anticipated these probable behaviours and taken advantage of them at an early stage.
This lack of insight into complex-system behaviour, and into its opportunities and threats, may be put down to the prevalence of a quantitative mindset derived from the relatively predictable world of mathematics and physics. That may be fine while the complex financial system sits in one of its relatively stable states; when it no longer does (Black Swan events?), rather more powerful mental models from biology (e.g. evolutionary behaviour) and from non-linear complex adaptive systems (e.g. climate, or enzyme kinetics in biochemistry) may be needed. Indeed, one biological concept - punctuated equilibrium - fits the 2008 credit crunch very well; the post-crunch world is quite likely to have rather different dynamics.
This is not new. Way back in 2006, scientists had already identified opportunities to use advanced modelling techniques to understand economies. From 'The Cambrian age of economics', in The Economist print edition (July 20th 2006):
 Eric Beinhocker, of the McKinsey Global Institute, in his book “The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics” argues that economists should abandon blackboard deduction in favour of computer simulation. The economists he likes do not “solve” models of the economy—deducing the prices and quantities that will prevail in equilibrium—rather they grow them “in silico”, as he puts it.
. . .
An early example is the sugarscape simulation done in 1995 by Joshua Epstein and Robert Axtell, of the Brookings Institution. On a computer-generated landscape, studded with “sugar” mountains, they scattered a variety of simple, sugar-eating creatures, which compete for this precious commodity. Some creatures move faster than others, some see farther, and some burn sugar at a higher metabolic rate than their rivals.
. . .
Such simulations may be unpredictable, but they are nonetheless understandable, Mr Beinhocker insists. By toying with different parameters, such as metabolic rates or the height of the sugar mountains, analysts can learn how to “tune” their model to generate different results. This understanding may be more valuable than a forecast, he argues. But whatever such enlightenment is worth, it is not easy to communicate to others. The revelations contained in a deductive proof or theorem are easy to pass on: they leave a set of footprints for other people to follow, making it easy for a theory to persuade and convert. For simulations, by contrast, “the only way to see what happens is to run the model and evolve it—there is no shortcut.”
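The sugarscape setup described in the quote above is simple enough to sketch. A minimal Python version follows; the grid size, sugar-regrowth rule and agent parameters are invented for illustration, and Epstein and Axtell's actual model has many more refinements:

```python
import random

random.seed(42)  # reproducible run

GRID, VISION_MAX, STEPS = 20, 4, 50

# Sugar landscape: two "sugar mountains", each cell regrowing one unit per step
def capacity(x, y):
    peaks = [(5, 5), (15, 15)]
    return max(0, 4 - min(abs(x - px) + abs(y - py) for px, py in peaks) // 3)

sugar = {(x, y): capacity(x, y) for x in range(GRID) for y in range(GRID)}

# Each agent: position, vision range, metabolic rate, and sugar holdings
agents = [{"pos": (random.randrange(GRID), random.randrange(GRID)),
           "vision": random.randint(1, VISION_MAX),
           "metabolism": random.randint(1, 3),
           "wealth": random.randint(5, 10)}
          for _ in range(30)]

for _ in range(STEPS):
    random.shuffle(agents)
    occupied = {a["pos"] for a in agents}
    for a in agents:
        x, y = a["pos"]
        # Look along the four lattice directions, up to `vision` cells away,
        # and move to the unoccupied visible cell holding the most sugar
        options = [(x, y)]
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for d in range(1, a["vision"] + 1):
                c = ((x + dx * d) % GRID, (y + dy * d) % GRID)
                if c not in occupied:
                    options.append(c)
        best = max(options, key=lambda c: sugar[c])
        occupied.discard(a["pos"])
        occupied.add(best)
        a["pos"] = best
        a["wealth"] += sugar[best] - a["metabolism"]  # harvest, then burn sugar
        sugar[best] = 0
    agents = [a for a in agents if a["wealth"] > 0]   # starvation removes agents
    for c in sugar:                                    # landscape regrows
        sugar[c] = min(capacity(*c), sugar[c] + 1)

wealths = sorted(a["wealth"] for a in agents)
print(f"{len(agents)} survivors; wealth range {wealths[0]}-{wealths[-1]}"
      if agents else "extinct")
```

Even this toy version exhibits the behaviour Beinhocker describes: tuning metabolic rates or mountain heights changes who survives and how unequal the wealth distribution becomes, and the only way to see the outcome is to run the model.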
Models built using insights from complex adaptive systems - that is, using agent-based modelling, system dynamics and so on - are transforming fields from biochemistry to epidemiology to ecology to weather forecasting. Yet whilst penetration into hedge funds is accelerating, uptake by policymakers remains slow.
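Where agent-based models work from the bottom up, system dynamics works top-down with stocks, flows and feedback loops. A minimal sketch of the approach, using a single stock with a reinforcing growth loop and a balancing capacity constraint (the logistic credit-growth framing and all parameter values are illustrative assumptions, not from the source):

```python
# Top-down system-dynamics sketch: one stock (say, outstanding credit) driven
# by a reinforcing growth loop and a balancing constraint, integrated by
# simple Euler steps.
def simulate(stock=10.0, growth=0.08, capacity=100.0, dt=0.25, years=60):
    history = [stock]
    for _ in range(int(years / dt)):
        # Feedback: growth is proportional to the stock, but slows to zero
        # as the stock approaches the system's carrying capacity
        inflow = growth * stock * (1 - stock / capacity)
        stock += inflow * dt
        history.append(stock)
    return history

path = simulate()
print(f"start {path[0]:.1f}, end {path[-1]:.1f}")
```

Real system-dynamics models chain many such stocks and flows together (with delays, nonlinearities and cross-links), which is where the interesting boom-and-bust behaviour emerges.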
Steve Keen, who has spent "40 years fighting delusions in economics", outlines eloquently why "economics must undergo a long-overdue intellectual revolution" in his book Debunking Economics. The Queen of England famously asked academic economists at the London School of Economics about the 2008 crisis: "Why did nobody notice it?" Their response was that they had "lost sight of the bigger picture" and that no one could have seen the crisis coming. As Professor Keen puts it:
Balderdash. Though the precise timing of the crisis was impossible to pick, a systemic crisis was both inevitable and, to astute observers [NB: hedge funds] in the mid 2000s, likely to occur in the very near future.
In a much more recent article, 'Greece crisis: Better models can show how to stabilise eurozone', in New Scientist (7 July 2015):
Doyne Farmer of the Santa Fe Institute in New Mexico and the University of Oxford says leaders and their economic advisers have no way to work out quantitatively which mix of solutions works best, so fall back on ideological preferences and whatever historical examples support them. Deadlock ensues.
Mainstream models, called Dynamic Stochastic General Equilibrium models, assume that left to their own devices, economic systems will reach “equilibrium” as a result of buyers and sellers independently acting to try to maximise their own benefit. They assume that periodic disturbances are due to outside influences, not produced spontaneously within the system.
“Empirical studies now show this cannot be true,” says Farmer. Such models also failed spectacularly to predict or explain the financial crisis of 2008. “The atomistic, optimising agents underlying existing models do not capture behaviour during a crisis period,” European Central Bank president Jean-Claude Trichet said in 2010.
What might work, some researchers said then, are agent-based models (ABMs), which use modern computers’ number-crunching power to simulate people and institutions who do not necessarily behave optimally, and who interact.
Paul Mason, economics editor at Channel 4 News and author of PostCapitalism, recently wrote in the article 'My wish for 2015: a machine to judge political claims against reality':
If I could rub an empty lager can, and get a genie to appear and grant me one wish for 2015, it would be for something apparently banal but revolutionary: an accurate simulation of the economy.
Some points for today, therefore: survival and success in a complex evolving environment such as the global financial system require the continual development, deployment and effective use of simulation models that reflect more features of reality than those used by competitors. Even though this is no guarantee of survival and success, it is at least an indicator to investors and others of a greater chance of long-term survival. Some aspects of such models require both top-down (macro, using system dynamics) and bottom-up (micro, using agent-based and discrete-event) modelling. This is one of Providence's key strengths and, when combined with powerful visualisation for better insights and powerful number-crunching for ever closer approaches to real-time sensing and response, may be the ultimate differentiator of the hedge fund of the future. Simudyne is making that world possible.

Who Will Own the Robots?


    We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?
    Editor’s note: This is the third in a series of articles about the effects of software and automation on the economy. You can read the other stories here and here.
    The way Hod Lipson describes his Creative Machines Lab captures his ambitions: “We are interested in robots that create and are creative.” Lipson, an engineering professor at Cornell University (this July he’s moving his lab to Columbia University), is one of the world’s leading experts on artificial intelligence and robotics. His research projects provide a peek into the intriguing possibilities of machines and automation, from robots that “evolve” to ones that assemble themselves out of basic building blocks. (His Cornell colleagues are building robots that can serve as baristas and kitchen help.) A few years ago, Lipson demonstrated an algorithm that explained experimental data by formulating new scientific laws, which were consistent with ones known to be true. He had automated scientific discovery.
    Lipson’s vision of the future is one in which machines and software possess abilities that were unthinkable until recently. But he has begun worrying about something else that would have been unimaginable to him a few years ago. Could the rapid advances in automation and digital technology provoke social upheaval by eliminating the livelihoods of many people, even as they produce great wealth for others?
    “More and more computer-guided automation is creeping into everything from manufacturing to decision making,” says Lipson. In the last two years alone, he says, the development of so-called deep learning has triggered a revolution in artificial intelligence, and 3-D printing has begun to change industrial production processes. “For a long time the common understanding was that technology was destroying jobs but also creating new and better ones,” says Lipson. “Now the evidence is that technology is destroying jobs and indeed creating new and better ones but also fewer ones. It is something we as technologists need to start thinking about.”
    Worries that rapidly advancing technologies will destroy jobs date back at least to the early 19th century, during the Industrial Revolution in England. In 1821, a few years after the Luddite protests, the British economist David Ricardo fretted about the “substitution of machinery for human labour.” And in 1930, during the height of the worldwide depression, John Maynard Keynes famously warned about “technological unemployment” caused by “our discovery of means of economising the use of labour.” (Keynes, however, quickly added that “this is only a temporary phase of maladjustment.”)
    Now, technology is once again under suspicion as rising income inequality confronts the United States, Europe, and much of the rest of the developed world. A recent report from the Organization for Economic Cooperation and Development concluded that the gap between the rich and poor is at a historically high level in many of its 34 member countries, driven largely by a drop in earning power for the bottom 40 percent of the population. Many of the lowest earners have seen wages decrease over the last few decades, and the OECD warns that income inequality is now undermining economic growth. Meanwhile, the erosion of the American middle class and the pressure on the lowest-paid U.S. workers has been painfully evident for years.
    Only 68 percent of men between 30 and 45 who have a high school diploma were working full time in 2013, according to a recent report by the Hamilton Project at the Brookings Institution, a Washington-based public-policy group. Earnings for the typical worker haven’t kept up with the growth of the economy for decades. Median earnings for a man without a high school diploma fell 20 percent from 1990 to 2013, while wages for those with only a high school diploma dropped 13 percent. Women have fared somewhat better, though they still generally earn less than men. Over the same period, earnings for women without a high school diploma dropped 12 percent, while earnings for those with a high school diploma actually rose by 3 percent.
    Do today’s rapid advances in artificial intelligence and automation portend a future in which robots and software greatly reduce the need for human workers?
    It is notoriously hard to determine the factors that go into job creation and earnings, and it is particularly difficult to isolate the specific impact of technology from that of, say, globalization, economic growth, access to education, and tax policies. But advances in technology offer one plausible, albeit partial, explanation for the decline of the middle class. A prevailing view among economists is that many people simply don’t have the training and education required for the increasing number of well-paying jobs requiring sophisticated technology skills. At the same time, software and digital technologies have displaced many types of jobs involving routine tasks such as those in accounting, payroll, and clerical work, forcing many of those workers to take more poorly paid positions or simply abandon the workforce. Add to that the increasing automation of manufacturing, which has eliminated many middle-class jobs over the past decades, and you begin to see why much of the workforce is feeling squeezed.
    These are long-term trends that began decades ago, says David Autor, an MIT economist who has studied “job polarization”—the disappearance of middle-skill jobs even as demand increases for low-paying manual work on the one hand and highly skilled work on the other. This “hollowing out” of the middle of the workforce, he says, “has been going on for a while.”
    Nevertheless, the recession of 2007–2009 may have sped up the destruction of many relatively well-paid jobs requiring repetitive tasks that can be automated. These so-called routine jobs “fell off a cliff in the recession,” says Henry Siu, an economist at the University of British Columbia, “and there’s been no large rebound.” This type of work, which includes white-collar jobs in sales and administration as well as blue-collar jobs in assembly work and machine operation, makes up about 50 percent of employment in the United States. Siu’s research also shows that the disappearance of these jobs has most harshly affected people in their 20s, many of whom seem to have simply stopped looking for work.
    That’s bad enough. But there’s an even more fundamental fear. Is this a harbinger of what’s to come for other sectors of the workforce, as technology takes over more and more of the jobs that have long been considered secure paths to a middle-class life? Are we at the beginning of an economic transformation that is unique in history, wonderful for what it could do in bringing us better medicine, services, and products, but devastating for those not in a position to reap the financial benefits? Will robots and software replace most human workers?
    Scaring children
    No one knows the answer. Many economists see little convincing evidence that advances in technology will be responsible for a net decrease in the number of jobs, or that what we’re undergoing is any different from earlier transitions when technology destroyed some jobs but improved employment opportunities over time. Still, over the last several years, a number of books and articles have argued that the recent advances in artificial intelligence and automation are inherently different from past technological breakthroughs in what they portend for the future of employment. Martin Ford is one of those who think this time is different. In his new book, Rise of the Robots: Technology and the Threat of a Jobless Future, Ford points to numerous examples of new technologies, such as driverless cars and 3-D printing, that he thinks will indeed eventually replace most workers. How then will we adapt to this “jobless future”?
    Ford recommends a guaranteed basic income as part of the answer. Simply put, his prescription is to give people a modest amount of money. It’s not a new idea. One version of it, called a negative income tax, was popularized by the conservative economist Milton Friedman during the early 1960s as a way to replace some of the growing government bureaucracy. And Ford quotes the economist Friedrich Hayek, who in 1979 described assuring a minimum income as a way to provide “a sort of floor below which nobody need fall even when he is unable to provide for himself.” Both Richard Nixon and his 1972 presidential rival George McGovern, a liberal Democrat, championed some form of the policy.
    The idea went out of fashion in the 1980s, but it has returned in recent years as a way to help those people shut out of the labor markets. In the libertarian version, it’s a way to provide a safety net with minimum government involvement; in the progressive version, it supplements other programs to help the poor.

    Whether it is good politics or good social policy has been endlessly debated. Recently, others have suggested a related policy: expanding the Earned Income Tax Credit, which would give some extra money to low-paid workers. These ideas probably do make sense as a way to strengthen the social safety net. But if you believe that the rapid advance of technology could eliminate the need for most workers, such policies do little to directly address that scenario. Allowing a large number of workers to become irrelevant in the technology-centric economy would be a huge waste of human talent and ambition—and would probably put an enormous financial burden on society. What’s more, a guaranteed basic income does not offer much to those in the middle class whose jobs are at risk, or to those who have recently fallen from financial security in the absence of well-paying jobs.
    It might also be premature to plan for a dystopian future of hardly any jobs. Ford’s Rise of the Robots offers many examples of impressive achievements in automation, software, and AI that could make some jobs obsolete—even those requiring highly trained professionals in fields like radiology and law. But how do you assess just how specific technologies like these will affect the total number of jobs in the economy?
    In fact, there is not much evidence on how even today’s automation is affecting employment. Guy Michaels and his colleague Georg Graetz at the London School of Economics recently looked at the impact of industrial robots on manufacturing in 17 developed countries. The findings tell a mixed story: the robots did seem to replace some low-skill jobs, but their most important impact was to significantly increase the productivity of the factories, creating new jobs for other workers. Overall, there was no evidence that the robots reduced total employment, says Michaels.

    If it’s difficult to quantify the effect of today’s technology on job creation, it’s impossible to accurately predict the effects of future advances. That opens the door to wild speculation. Take an extreme example raised by Ford: molecular manufacturing. As proposed by some nanotechnology boosters, most notably the author K. Eric Drexler, the idea is that one day it will be possible to build almost anything with nanoscale robots that move atoms around like tiny building blocks. Though Ford acknowledges that it might not happen, he warns that jobs will be devastated if it does.
    The credence Ford gives to Drexler’s vision of nanobots slaving away in molecular factories seems less than warranted, though, given that the idea was debunked by the Nobel-winning chemist Richard Smalley more than a decade ago (see “Will the Real Nanotech Please Stand Up?”). Smalley saw great potential for nanotech in areas such as clean energy, but his objection to molecular manufacturing as Drexler described it was simple: it ignores the rules of chemistry and physics governing the way atoms bind and react with each other. Smalley admonished Drexler: “You and people around you have scared our children. I don’t expect you to stop, but … while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.”
    Though Ford does note Smalley’s criticism, one begins to wonder whether his conjuring the “rise of the robots” might not indeed be needlessly scaring our children. Speculating about such far-fetched possibilities is a distraction in thinking about how to address future concerns, much less existing job woes.
    A more realistic, but in its way more interesting, version of the future is being written in the downtown Chicago offices of Narrative Science. Its software, called Quill, is able to take data—say, the box score of a baseball game or a company’s annual report—and not only summarize the content but extract a “narrative” from it. Already, Forbes is using it to create some stories about corporate earnings, and the Associated Press is using a rival’s product to write some sports stories. The quality is readable and is likely to improve greatly in coming years.
    Yet despite the potential of such technology, it is not clear how it would affect employment. “As AI stands today, we’ve not seen a massive impact on white-collar jobs,” says Kristian Hammond, a Northwestern University computer scientist who helped create the software behind Quill and is a cofounder of the company. “Short-term and medium-term, [AI] will displace work but not necessarily jobs,” he says. If AI tools do some of the scut work involved in analyzing data, he says, people can be “free to work at the top of their game.”
    And as impressive as Quill and other recent advances are, Hammond is not yet convinced that the capabilities of general-purpose AI are poised for great expansion. The current resurgence in the field, he says, is being driven by access to massive amounts of data that can be quickly analyzed and by the immense increase in computing power over what was available a few years ago. The results are striking, but the techniques, including some aspects of the natural-language generation methods that Quill employs, make use of existing technologies empowered by big data, not breakthroughs in AI. Hammond says some recent descriptions of certain AI programs as black boxes that teach themselves capabilities sound more like “magical rhetoric” than realistic explanations of the technology. And it remains uncertain, he adds, whether deep learning and other recent advances will truly “work as well as touted.”
    In other words, it would be smart to temper our expectations about the future possibilities of machine intelligence.
    The gods of technology
    “Too often technology is discussed as if it has come from another planet and has just arrived on Earth,” says Anthony Atkinson, a fellow of Nuffield College at the University of Oxford and a professor at the London School of Economics. But the trajectory of technological progress is not inevitable, he says: rather, it depends on choices by governments, consumers, and businesses as they decide which technologies get researched and commercialized and how they are used.
    Atkinson has been studying income inequality since the late 1960s, a period when it was generally a subject on the back burner of mainstream economics. Over those years, income inequality has grown dramatically in a number of countries. Its levels rose in the U.K. in the 1980s and have not fallen since, and in the United States they are still rising, reaching historically unprecedented heights. The publication last year of his frequent collaborator Thomas Piketty’s remarkably successful Capital in the 21st Century made inequality the hottest topic in economics. Now Atkinson’s new book, called Inequality: What Can Be Done?, proposes some solutions. First on his list: “encouraging innovation in a form that increases the employability of workers.”

    When governments choose what research to fund and when businesses decide what technologies to use, they are inevitably influencing jobs and income distribution, says Atkinson. It’s not easy to see a practical mechanism for picking technologies that favor a future in which more people have better jobs. But “at least we need to ask” how these decisions will affect employment, he says. “It’s a first step. It might not change the decision, but we will be aware of what is happening and don’t have to wait until we say, ‘Oh dear, people have lost their jobs.’”
    Part of the strategy could emerge from how we think about productivity and what we actually want from machines. Economists traditionally define productivity in terms of output given a certain amount of labor and capital. As machines and software—capital—become ever cheaper and more capable, it makes sense to use less and less human labor. That’s why the prominent Columbia University economist Jeffrey Sachs recently predicted that robots and automation would soon take over at Starbucks. But there are good reasons to believe that Sachs could be wrong. The success of Starbucks has never been about getting coffee more cheaply or efficiently. Consumers often prefer people and the services humans provide.
    Take the hugely popular Apple stores, says Tim O’Reilly, the founder of O’Reilly Media. Staffed by countless swarming employees armed with iPads and iPhones, the stores provide a compelling alternative to a future of robo-retail; they suggest that automating services is not necessarily the endgame of today’s technology. “It’s really true that technology will take away a class of jobs,” says O’Reilly. “But there is a choice in how we use technology.”
    In that sense, Apple stores have found a winning strategy by not following the conventional logic of using automation to lower labor costs. Instead, the company has cleverly deployed an army of tech-savvy sales employees toting digital gadgets to offer a novel shopping experience and to profitably expand its business.
    O’Reilly also points to the enormous success of the car service Uber. By using technology to create a convenient and efficient reservation and payment service, it has created a robust market. And in doing so, it has expanded the demand for drivers—who, with the aid of a smartphone and app, now have greater opportunities than they might working for a conventional taxi service.
    The lesson is that if advances in technology are playing a role in increasing inequality, the effects are not inevitable, and they can be altered by government, business, and consumer decisions. As the economist Paul Krugman recently told an audience at a forum called “Globalization, Technological Change, and Inequality” in New York City, “A lot of what’s happening [in income inequality] is not just the gods of technology telling us what must happen but is in fact [due to] social constructs that could be different.”
    Who owns the robots?
    The effects of automation and digital technology on today’s employment picture are sometimes downplayed by those who point to earlier technology transitions. But that ignores the suffering and upheaval during those periods. Wages in England were stagnant or fell for around 40 years after the beginning of the Industrial Revolution, and the misery of factory workers is well documented in the literature and political writings of the day.
    In his new book, The Great Divide, the Columbia University economist Joseph Stiglitz suggests that the Great Depression, too, can be traced to technological change: he says its underlying cause was not, as is typically argued, disastrous government financial policies and a broken banking system but the shift from an agricultural economy to a manufacturing one. Stiglitz describes how the advent of mechanization and improved farming practices quickly transformed the United States from a country that needed many farmers to one that needed relatively few. It took the manufacturing boom fueled by World War II to finally help workers through the transition. Today, writes Stiglitz, we’re caught in another painful transition, from a manufacturing economy to a service-based one.
    Those who are inventing the technologies can play an important role in easing the effects. “Our way of thinking as engineers has always been about automation,” says Hod Lipson, the AI researcher. “We wanted to get machines to do as much work as possible. We always wanted to increase productivity; to solve engineering problems in the factory and other job-related challenges is to make things more productive. It never occurred to us that isn’t a good thing.” Now, suggests Lipson, engineers need to rethink their objectives. “The solution is not to hold back on innovation, but we have a new problem to innovate around: how do you keep people engaged when AI can do most things better than most people? I don’t know what the solution is, but it’s a new kind of grand challenge for engineers.”

    Ample opportunities to create jobs could come from much-needed investments in education, aging infrastructure, and research in areas such as biotechnology and energy. As Martin Ford rightly warns, we could be in for a “perfect storm” if climate change grows more severe at a time when technological unemployment imposes increased economic pressure. Whether this happens will depend in large part on which technologies we invent and choose to embrace. Some version of an automated vehicle seems inevitable, for example; do we use this to make our public transportation systems more safe, convenient, and energy efficient, or do we simply fill the highways with driverless cars and trucks?
    There is little doubt that at least in the short term, the best bulwark against sluggish job creation is economic growth, whether that’s accomplished through innovative service-intensive businesses like the Apple stores and Uber or through investments in rebuilding our infrastructure and education systems. It is just possible that such growth will overcome the worries over robots taking our jobs.
    Andrew McAfee, the coauthor with his MIT colleague Erik Brynjolfsson of The Second Machine Age, has been one of the most prominent figures describing the possibility of a “sci-fi economy” in which the proliferation of smart machines eliminates the need for many jobs. (See “Open Letter on the Digital Economy,” in which McAfee, Brynjolfsson, and others propose a new approach to adapting to technological changes.) Such a transformation would bring immense social and economic benefits, he says, but it could also mean a “labor-light” economy. “It would be a really big deal, and it’s not too soon to start the conversation about it,” says McAfee. But it’s also, he acknowledges, a prospect that is many decades away. Meanwhile, he advocates pro-growth policies “to prove me wrong.” He says, “The genius of capitalism is that people find things to do. Let’s give it the best chance to work.”
    Here’s the rub. As McAfee and Brynjolfsson explain in The Second Machine Age, one of the troubling aspects of today’s technological advances is that in financial terms, a few people have benefited from them disproportionately (see “Technology and Inequality”). As Silicon Valley has taught us, technology can be both a dynamic engine of economic growth and a perverse intensifier of income inequality.
    In 1968, J.C.R. Licklider, one of the creators of today’s technology age, co-wrote a remarkably prescient article called “The Computer as a Communication Device.” He predicted “on line interactive communities” and explained their exciting possibilities. Licklider also issued a warning at the end of the paper:
    “For the society, the impact will be good or bad, depending mainly on the question: Will ‘to be on line’ be a privilege or right? If only a favored segment of the population gets a chance to enjoy the advantage of ‘intelligence amplification,’ the network may exaggerate the discontinuity in the spectrum of intellectual opportunity.”
    Various policies can help redistribute wealth or, like the guaranteed basic income, provide a safety net for those at or near the bottom. But surely the best response to the economic threats posed by digital technologies is to give more people access to what Licklider called “intelligence amplification” so that they can benefit from the wealth new technology creates. That will mean providing fairer access to quality education and training programs for people throughout their careers.
    It also means, says Richard Freeman, a leading labor economist at Harvard University, that far more people need to “own the robots.” He’s talking not only about machines in factories but about automation and digital technologies in general. Some mechanisms already exist in profit-sharing programs and employee stock-ownership plans. Other practical investment programs can be envisioned, he says.
    Whoever owns the capital will benefit as robots and AI inevitably replace many jobs. If the rewards of new technologies go largely to the very richest, as has been the trend in recent decades, then dystopian visions could become reality. But the machines are tools, and if their ownership is more widely shared, the majority of people could use them to boost their productivity and increase both their earnings and their leisure. If that happens, an increasingly wealthy society could restore the middle-class dream that has long driven technological ambition and economic growth.
    David Rotman

    Social Credit: A Simple Explanation


    January 28, 2015 / Blogger Ref  http://www.p2pfoundation.net/Transfinancial_Economics   

(Left: The founder of the Social Credit movement, Major Clifford Hugh Douglas, 1879-1952)

Social Credit addresses a fundamental flaw in our economic system: the gap between a plethora of products and the lack of money in purchasers' hands.

    by Oliver Heydorn Ph.D. 

       Social Credit refers to the ideas of the brilliant Anglo-Scottish engineer, Major Clifford Hugh Douglas (1879-1952).
       Douglas identified what is wrong with the industrial economy and also explained how to fix it.
        The core problem is that there is never enough money to buy what we produce. In essence, people don't earn enough to afford the plethora of available consumer goods and services.
   This gap is caused by many factors. Profits, including profits derived from interest on loans, are only one of them. Savings and the re-investment of savings are two others. The most important cause, however, has to do with how real capital (i.e., machines and equipment) builds up costs at a faster rate than it distributes incomes to workers.
        The economy must compensate for this recurring gap between prices and incomes. Since most of the money supply is created out of nothing by the banks, the present financial system fills the gap by relying on governments, firms, and consumers to borrow additional money into existence so that the level of consumer buying power can be increased.
        As a society we are always mortgaging our future earnings in order to get enough purchasing power so that we can pay present prices in full. Whenever we fail to borrow enough money, the economy stalls and the government may even start a war to reboot it. To the extent that we succeed in bridging the gap, we contribute to the building-up of a mountain of debt that can never be paid off.
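The arithmetic of this claimed gap can be sketched with made-up numbers (a toy illustration of Douglas's argument, not real data):

```python
# Illustrative sketch of the price-income gap described above, with
# invented figures: prices must recover wages plus capital charges,
# but only the wages reach consumers as purchasing power.

wages = 80.0            # incomes distributed to households this period
capital_charges = 20.0  # machine costs built into prices, not paid as incomes
prices = wages + capital_charges  # 100.0: what consumers must pay in full

gap = prices - wages    # 20.0 shortfall in purchasing power
new_debt = gap          # bridged today by borrowing new bank money
assert wages + new_debt == prices  # prices can be met, but debt accumulates
```

On this account, each period's prices can only be met in full by injecting new debt equal to the gap, which is why the debt mountain grows.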
         Filling the gap with debt-money is also inflationary, wasteful, and puts the whole society on a production-consumption treadmill. It is the prime cause behind social tensions, environmental damage, and international conflict.
         All of this dysfunction is tolerated because the banks profit from it. Compensating for the gap transfers wealth and power from the common consumers to the owners of the financial system.

FILLING THE GAP DEBT-FREE
        Douglas proposed that instead of filling the gap with debt-money, the gap could and should be filled with debt-free money.
         This money would be created by an organ of the state, a National Credit Office, and distributed to consumers. Some of it would be issued indirectly in the form of a National Discount on all retail prices, while another portion would be issued directly in the form of a National Dividend.[1]    
     Since the productive capacity of the modern, industrial economy is enormous, an honest representation of our productive power would allow us to enjoy an abundance of beneficial goods and services alongside increasing leisure. Our economies could become socially equitable, environmentally sustainable, and internationally concordant.
         Unlike some other monetary reform proposals, Social Credit does not advocate the nationalization of the banks. It is completely opposed to any scheme that would see us jump from the frying pan of a self-serving private system into the fire of a complete state monopoly over money and its issuance. The latter would be a fine basis for the introduction of a totalitarian society.
   Social Crediters, by contrast, stand for the decentralization of economic and political power in favour of the individual. Social Credit's proposal for an honest monetary system is not socialist but rather anti-socialist. It is completely compatible with a free enterprise economy (incorporating free markets, private property, individual initiative, and the profit motive). Cf. http://www.socred.org/blogs/view/why-social-credit-is-not-socialism.
        Getting an understanding of Social Credit is well worth the effort, as it may just manage to save civilization.
        Oliver Heydorn (olheydorn@yahoo.ca) is the founder and director of The Clifford Hugh Douglas Institute for the Study and Promotion of Social Credit (www.socred.org). He is also the author of two recent books on the subject (available via Amazon), including Social Credit Economics.
Source: http://henrymakow.com/2015/01/social-credit-a-simple-explanation.html

    Concerns grow over China's 'nightmarish' social credit score system


    National database will be set up to rate each citizen's trustworthiness – but does it go too far?

    China is preparing to introduce a controversial credit score system which ranks each citizen's trustworthiness based on a variety of financial and social factors.
    The Social Credit System (SCS) is still in its trial stages, with the government planning on creating a national database by the end of 2020.
    Citizens and organisations will be ranked not just on their financial reliability but also on their social interactions and consumer spending, the BBC's Celia Hatton reports.  This information will then be shared between public institutions.
    The exact details remain unclear, but the system reportedly takes a variety of things into account, including points on a person's driving licence, products they buy and how they are evaluated at work.
    Rogier Creemers, who studies Chinese media policy and political change at the University of Oxford, agrees that the planned measures go well beyond establishing financial creditworthiness.
    "All that behaviour will be integrated into one comprehensive assessment of you as a person, which will then be used to make you eligible or ineligible for certain jobs, or social services," he told the New Scientist.
One of the main pilot projects is currently being run by Sesame Credit, a subsidiary of the Chinese e-commerce giant Alibaba. Perhaps most controversially, the company openly admits that it judges the types of products shoppers buy online, the BBC says.
    "Someone who plays video games for ten hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility," said Li Yingyun, Sesame's technology director.
The company then rewards people with high credit scores with perks ranging from a prominent dating profile on the Baihe matchmaking site to VIP reservations with hotels and car rental companies.
    The system has prompted criticism from many outside of the country, including American Civil Liberties Union policy analyst Jay Stanley, who labelled the programme "nightmarish".
    But others believe an innovative and comprehensive credit rating system is sorely needed in China. "Many people don't own houses, cars or credit cards in China, so that kind of information isn't available to measure," explains technology blogger Wen Quan.
    Creemers, who was responsible for translating publicly released documents about the SCS, said the Big Brother fears raised about the system are typical of Western media's coverage of China.
    "Pretty much anything China does makes people panicked," he said. "And many times we don't recognise that we are doing similar things."

    Does the size of the state hold back the economy?


     Blogger Ref http://www.p2pfoundation.net/Transfinancial_Economics


    LATER this week, the British government will announce its latest spending review. Its aim will be to balance the budget by 2020. In part, this is because the Conservative party genuinely believes a smaller state is good for economic growth in the long run. But is there a hard-and-fast relationship?
This is the kind of issue economists spend years researching. But your blogger pursued a simple approach: comparing the GDP per capita of OECD countries with government spending as a proportion of GDP. (Statistical purists should note: the latest figures on the former are from 2014, but the latter only go up to 2013.)

The OECD has data covering both measures for 26 countries. So step 1 was to rank the countries by government spending and then divide them into quartiles (six countries in the 1st and 4th quartiles, seven countries in the 2nd and 3rd). The results were as follows:

Quartile              GDP per capita ($)
Highest spenders      40,285
2nd highest           36,399
3rd highest           50,070
Lowest                41,550
    It is hard to spot any relationship from that grouping. As you can see, the highest spending group has roughly the same GDP per capita as the lowest spenders. The third quartile contains Luxembourg which, thanks to its huge financial services sector, has GDP per capita 50% higher than any other nation.
    We can slice the data the other way and rank the countries by GDP per capita and then see what their government spending is.
Quartile              Govt spending as % of GDP
Richest               40.9%
2nd richest           51.6%
3rd richest           44.8%
Poorest               46.9%
    At first glance, things look more promising for the small-staters. The poorest quartile of countries has higher spending than the wealthiest. But look at the second richest quartile; it spends a lot. If you divided the whole group into two halves, then the richest countries spend 46.6% of GDP on government and the poorest 45.7%.
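The quartile exercise itself is easy to reproduce. Here is a minimal Python sketch of the method, using hypothetical figures rather than the OECD data cited above:

```python
# Sketch of the blogger's method: rank 26 countries by one measure,
# split into quartiles of 6/7/7/6, and average the other measure per
# quartile. The sample data below is invented for illustration only.

def quartile_averages(countries, rank_key, avg_key):
    """Rank by rank_key (descending), split into 6/7/7/6 quartiles,
    and return the mean of avg_key within each quartile."""
    ranked = sorted(countries, key=lambda c: c[rank_key], reverse=True)
    sizes = [6, 7, 7, 6]
    quartiles, start = [], 0
    for n in sizes:
        group = ranked[start:start + n]
        quartiles.append(sum(c[avg_key] for c in group) / n)
        start += n
    return quartiles

# Hypothetical rows: (govt spending as % of GDP, GDP per capita in $)
sample = [{"spend": s, "gdppc": g} for s, g in
          [(58, 40000), (57, 42000), (55, 39000), (54, 41000), (53, 38000),
           (52, 43000), (51, 36000), (50, 35000), (49, 37000), (48, 38000),
           (47, 36000), (46, 34000), (45, 35000), (44, 50000), (43, 52000),
           (42, 48000), (41, 51000), (40, 49000), (39, 50000), (38, 50000),
           (37, 42000), (36, 40000), (35, 41000), (34, 43000), (33, 40000),
           (32, 42000)]]

print(quartile_averages(sample, "spend", "gdppc"))
```

Swapping `rank_key` and `avg_key` gives the second table (rank by GDP per capita, average the spending).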
Now, of course, a lot more sophisticated analysis can be done, covering growth rates and so on; the rate of government spending in any given year also depends on the health of the economy (thanks to unemployment benefits, etc.).
But this rough-and-ready analysis shows it is unwise to be too dogmatic. Most countries have a mixed economy with public and private sector involvement; more than two-thirds of governments in this sample spend between 40% and 55% of GDP. Clearly the history of communism shows that total government control is bad for growth. Sweden's experience since the 1990s suggests that a mixed economy can benefit from a reduction in the state if it becomes too large. However, the developed world is ageing and there may be a natural tendency for government spending (on pensions, health, etc.) to grow larger over the next few decades. Instead of shrinking the state, governments might find they are running hard to stay in the same place.

    Friday, 20 November 2015



    At the P2P Foundation, we see the institutional emergence of a commons economy through the following three social forms:
• at the heart of value creation are productive communities of contributors, paid or unpaid, that create shared resources, i.e. commons
• around these shared resources, new forms of entrepreneurial coalitions form that create added value for the market, generating livelihoods for the commoners; these can take the form of Open Cooperatives or Platform Cooperatives
    This wiki section focuses mostly on the new entrepreneurial coalitions and the productive communities (sometimes called neo-tribes) with whom they are connected.
Key concepts related to this are Phyles and Neo-Venetianist Networks, concepts developed by Las Indias. A fictional treatment can be found in The Diamond Age, a science-fiction novel by Neal Stephenson. The 'poor man's' equivalent may be found in the description of transmigrant networks in Étrangers de passage: Poor to poor, peer to peer by Alain Tarrius (Éditions de l'Aube).
    Among our favourites for the moment are:

    Associated organisations in this space are:

    Documentation on post-corporate practices

What the world and humanity, and all those beings affected by our activities, require is a mode of production, and relations of production, that are “free, fair and sustainable” at the same time. Post-corporate entities and the productive communities they are based on are pioneering new 'generative' practices that co-create value with the commons, rather than 'extractive' practices that enclose the commons or capture value from them.

    1. Open Business Models based on shared knowledge

Closed business models are based on artificial scarcity. Though knowledge is a non- or anti-rival good that gains in use value the more it is shared, and though it can be shared easily and at very low marginal cost when it is in digital form, many extractive firms still use artificial scarcity to extract rents from the creation or use of digitized knowledge. Through legal repression or technological sabotage, naturally shareable goods are made artificially scarce, so that extra profits can be generated. This is particularly galling in the context of life-saving or planet-regenerating technological knowledge. The first commandment is therefore the ethical commandment of sharing what can be shared, and only creating market value from resources that are scarce and create added value on top of or alongside these commons. Open business models are market strategies that are based on the recognition of natural abundance and the refusal to generate income and profits by making them artificially scarce.
    Wiki section at http://p2pfoundation.net/Category:Business_Models

    2. Open Cooperativism

Many new, more ethical and generative forms are being created that are in greater harmony with the contributory commons. The key here is to choose post-corporate forms that are able to generate livelihoods for the contributing commoners.
    Open cooperatives in particular would be cooperatives that share the following characteristics:
    1) they are mission-oriented and have a social goal that is related to the creation of shared resources
    2) they are multi-stakeholder governed, and include all those that are affected by or contributing to the particular activity
    3) they constitutionally, in their own rules, commit to co-create commons with the productive communities
    I often add the fourth condition that they should be global in organisational scope in order to create counter-power to extractive multinational corporations.
Cooperatives are one of the potential forms that commons-friendly market entities could take. We see the emergence of more open forms such as neo-tribes (think of the workings of the Ouishare community), or more tightly organized neo-guilds, such as Enspiral.org, Las Indias or the Ethos Foundation. Yet more open is the network form chosen by the Sensorica open scientific hardware community, which wants to more tightly couple contributions with generated income by allowing all micro-tasked contributions into the reward system, through open value or contributory accounting (more below).
    Wiki section at http://p2pfoundation.net/Category:Open_Company_Formats

    3. Open Value Accounting or Contributory Accounting

Peer production is based on distributed tasks, freely contributed through an open, community-driven collaborative infrastructure. The tradition of salaries based on fixed job descriptions may not be the most appropriate way to reward those who contribute to such processes. Hence the emergence of open value accounting or contributory accounting. As practiced already by Sensorica, this means that any contributor may add contributions, log them according to project number, and, after peer evaluation, be assigned 'karma points'. When income is generated, it flows to these weighted contributions, so that every contributor is fairly rewarded. Contributory accounting, or other similar solutions, is important to prevent the value co-created by a much larger community from being captured by the few contributors most closely connected to the market. Open book accounting ensures that the (re)distribution of value is transparent to all contributors.

    Wiki section at http://p2pfoundation.net/Category:P2P_Accounting
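A minimal sketch of how such a karma-weighted payout might work, with hypothetical names and point values (not Sensorica's actual system):

```python
# Sketch of contributory ("open value") accounting: project income is
# split in proportion to each contributor's peer-evaluated karma points.
# All names and numbers below are illustrative assumptions.

def distribute_income(income, karma):
    """Return each contributor's share of income, weighted by karma."""
    total = sum(karma.values())
    return {person: income * points / total
            for person, points in karma.items()}

# Hypothetical peer-evaluated contribution log for one project
karma = {"alice": 50, "bob": 30, "carol": 20}
shares = distribute_income(1000.0, karma)
# alice receives 500.0, bob 300.0, carol 200.0
```

The open-book aspect is simply that both the karma log and the resulting shares are visible to every contributor.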

    4. Benefit-Sharing through CopyFair Licenses

    The copyleft licenses allow anyone to re-use the necessary knowledge commons on the condition that changes and improvements are added to that same commons. This is a great advance, but should not be abstracted from the need for fairness. When moving to physical production which involves finding resources for buildings, raw materials and payments to contributors, the unfettered commercial exploitation of such commons favours extractive models. Thus the need to maintain the knowledge sharing, but to ask reciprocity for the commercial exploitation of the commons, so that there is a level playing field for the ethical economic entities that do internalize social and environmental costs. This is achieved through copyfair licenses which, while allowing full sharing of the knowledge, ask for reciprocity in exchange for the right of commercialization.

    Wiki section at http://p2pfoundation.net/Category:Licensing

    5. Commonfare solidarity practices

One of the strongest effects of financial and neo-liberal globalization has been the gradual weakening of the nation-state, and there is now a strong and integrated effort to unwind the solidarity mechanisms that were embedded in the welfare state models. As long as we do not have the power to reverse this slide, it is imperative that we reconstruct solidarity mechanisms of distributed scope, a practice we could call 'commonfare'. Examples such as the Broodfonds (NL), Friendsurance (Germany) and the health sharing ministries (U.S.), or cooperative entities such as Coopaname in France, show us the new forms of distributed solidarity that can be developed to deal with the risks of life and work.
    Wiki section at http://p2pfoundation.net/Category:P2P_Solidarity

    6. Sustainable Manufacturing through an Open Source Circular Economy

Open productive communities ensure maximum participation through modularity and granularity. Because they operate in a context of shared and abundant resources, the practice of planned obsolescence, which is not a bug but a feature for profit-maximizing corporations, is alien to them. Ethical entrepreneurial entities will therefore use these open and sustainable designs and produce sustainable goods and services.
    Wiki section at http://p2pfoundation.net/Category:Design

    7. Mutual coordination of production through Open Supply Chains and Open Book Accounting

    What decision-making is for planning, and pricing is for the market, mutual coordination is for the commons!
We will never achieve a sustainable 'circular economy', in which the output of one production process is used as an input for another, with closed value chains in which every act of cooperation has to be painfully negotiated under conditions of opacity. But entrepreneurial coalitions that are already co-dependent on a collaborative commons can create ecosystems of collaboration through open supply chains, in which production processes become transparent and through which every participant can adapt their behaviour based on the knowledge available in the network. There is no need for over-production when the production realities of the network become common knowledge.
    Wiki section at http://p2pfoundation.net/Category:Mutual_Coordination

    8. Cosmo-Localization: what is light is global, what is heavy is local

“What is light is global, and what is heavy is local”: this is the new principle animating commons-based peer production, in which knowledge is globally shared but production can take place on demand, based on real needs, through a network of distributed coworking spaces and microfactories. Certain studies have shown that up to two-thirds of matter and energy goes not to production but to transport, which is clearly unsustainable. A return to relocalized production is a sine qua non for the transition towards sustainable production.
    Wiki section via http://p2pfoundation.net/Category:Sustainable_Manufacturing

    9. Mutualization of physical infrastructures

    Platform cooperatives, data cooperatives and fairshares forms of distributed ownership can be used to co-own our infrastructures of production.
    The misnamed 'sharing economy' from AirBnB and Uber nevertheless shows the potential of matching idle resources. Co-working, skillsharing, ridesharing are examples of the many ways in which we can re-use and share resources to dramatically augment the thermo-dynamic efficiencies of our consumption.
In the right context of co-ownership and co-governance, a real sharing economy can achieve dramatic advances in reduced resource use. Our means of production, including machines, can be mutualized and self-owned by all those who create value.

    Wiki section at http://p2pfoundation.net/Category:Sharing

    10. Mutualization of generative capital

Generative forms of capital cannot rely on an extractive money supply based on compound interest owed to extractive banks. We have to abolish the 38% financial tax that is owed on all goods and services, transform our monetary system, and substantively augment the use of mutual credit systems.
    Wiki section at http://p2pfoundation.net/Category:Peerfunding
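A mutual credit system, in outline: members start at zero, money is created at the moment of trade, and all balances always sum to zero, with no interest accruing. A minimal sketch, with illustrative names and an assumed credit limit:

```python
# Toy mutual-credit ledger: credit is issued by members to each other at
# the point of exchange, not lent into existence by a bank. The member
# names and the credit limit are illustrative assumptions.

class MutualCredit:
    def __init__(self, credit_limit=-100):
        self.balances = {}
        self.credit_limit = credit_limit  # how far negative a member may go

    def join(self, member):
        self.balances[member] = 0  # everyone starts at zero

    def pay(self, buyer, seller, amount):
        if self.balances[buyer] - amount < self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.balances[buyer] -= amount   # buyer goes into interest-free debit
        self.balances[seller] += amount  # seller is credited

ledger = MutualCredit()
for m in ("ann", "ben"):
    ledger.join(m)
ledger.pay("ann", "ben", 40)
# ann is now at -40 and ben at +40; the system total is always zero
```

Because debits carry no compound interest, there is no built-in pressure for the money supply to grow, which is the property the paragraph above contrasts with bank-issued money.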

    Key Resources

    Key Players

    In France

    CEDRIC : Collaborative Ecosystem Development and Roadmap Innovating for the Commons
    1. Association Ekopratik
    2. Association Open Atlas
    3. Chez Nous [1]
    4. Dialoguea [2]
    5. Living Coop
    6. Multi Bao [3]
    7. Nacelle 0.2 [4]
    8. Organisation Pixel humain [5] video
    9. P2P Foundation France with Julien Cantoni
    10. Projet Communecter [6] video
    11. Unissons