Showing posts with label innovation. Show all posts

Friday, 8 May 2015

What disruptive innovation means

The Economist explains

  


EVERY so often a management idea escapes from the pages of the Harvard Business Review and becomes part of the zeitgeist. In the 1990s it was “re-engineering”. Today it is “disruptive innovation”. TechCrunch, a technology-news website, holds an annual “festival of disruption”. CNBC, a cable-news channel, produces an annual “disruptor list” of the most disruptive companies. Mentioning “disruptive innovation” adds a veneer of sophistication to bread-and-butter speeches about education or health care. But just what is disruptive innovation?
The theory of disruptive innovation was invented by Clayton Christensen, of Harvard Business School, in his book “The Innovator’s Dilemma”. Mr Christensen used the term to describe innovations that create new markets by discovering new categories of customers. They do this partly by harnessing new technologies but also by developing new business models and exploiting old technologies in new ways. He contrasted disruptive innovation with sustaining innovation, which simply improves existing products. Personal computers, for example, were disruptive innovations because they created a new mass market for computers; previously, expensive mainframe computers had been sold only to big companies and research universities.

The “innovator’s dilemma” is the difficult choice an established company faces when it must choose between holding onto an existing market by doing the same thing a bit better, or capturing new markets by embracing new technologies and adopting new business models. IBM dealt with this dilemma by launching a new business unit to make PCs, while continuing to make mainframe computers. Netflix made a more radical move, switching away from its old business model (sending out rental DVDs by post) to a new one (streaming on-demand video to its customers). Disruptive innovations usually find their first customers at the bottom of the market: as unproved, often unpolished, products, they cannot command a high price. Incumbents are often complacent, slow to recognise the threat posed by their seemingly inferior competitors. But as successive refinements improve these products to the point that they start to steal customers, they may end up reshaping entire industries: classified ads (Craigslist), long-distance calls (Skype), record stores (iTunes), research libraries (Google), local stores (eBay), taxis (Uber) and newspapers (Twitter).
Partly because of disruptive innovation, the average job tenure for the CEO of a Fortune 500 company has halved from ten years in 2000 to less than five years today. There is good reason to think that the pace of change will increase, as computer power increases and more things are attached to the internet, expanding its disruptive influence into new realms. Google promises to reinvent cars as autonomous vehicles; Amazon promises to reinvent shopping (again) using drones; 3D printing could disrupt manufacturing. But perhaps the most surprising disruptive innovations will come from bottom-of-the-pyramid entrepreneurs who are inventing new ways of delivering education and health-care for a fraction of the cost of current market leaders.

Dig deeper:
Freelance workers will reshape the future of companies (Dec 2014)
Uber risks a consumer backlash over its tough tactics (Nov 2014)
Amazon has upended industries and changed the way the world shops (June 2014)

Friday, 28 November 2014

The future of jobs


The onrushing wave

Previous technological innovation has always delivered more long-run employment, not less. But things can change



IN 1930, when the world was “suffering…from a bad attack of economic pessimism”, John Maynard Keynes wrote a broadly optimistic essay, “Economic Possibilities for our Grandchildren”. It imagined a middle way between revolution and stagnation that would leave the said grandchildren a great deal richer than their grandparents. But the path was not without dangers.
One of the worries Keynes admitted was a “new disease”: “technological unemployment…due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” His readers might not have heard of the problem, he suggested—but they were certain to hear a lot more about it in the years to come.
For much of the 20th century, those arguing that technology brought ever more jobs and prosperity looked to have the better of the debate. Real incomes in Britain scarcely doubled between the beginning of the common era and 1570. They then tripled from 1570 to 1875. And they more than tripled from 1875 to 1975. Industrialisation did not end up eliminating the need for human workers. On the contrary, it created employment opportunities sufficient to soak up the 20th century’s exploding population. Keynes’s vision of everyone in the 2030s being a lot richer is largely achieved. His belief they would work just 15 hours or so a week has not come to pass.
When the sleeper wakes
Yet some now fear that a new era of automation enabled by ever more powerful and capable computers could work out differently. They start from the observation that, across the rich world, all is far from well in the world of work. The essence of what they see as a work crisis is that in rich countries the wages of the typical worker, adjusted for cost of living, are stagnant. In America the real wage has hardly budged over the past four decades. Even in places like Britain and Germany, where employment is touching new highs, wages have been flat for a decade. Recent research suggests that this is because substituting capital for labour through automation is increasingly attractive; as a result owners of capital have captured ever more of the world’s income since the 1980s, while the share going to labour has fallen.
At the same time, even in relatively egalitarian places like Sweden, inequality among the employed has risen sharply, with the share going to the highest earners soaring. For those not in the elite, argues David Graeber, an anthropologist at the London School of Economics, much of modern labour consists of stultifying “bullshit jobs”—low- and mid-level screen-sitting that serves simply to occupy workers for whom the economy no longer has much use. Keeping them employed, Mr Graeber argues, is not an economic choice; it is something the ruling class does to keep control over the lives of others.
Be that as it may, drudgery may soon enough give way to frank unemployment. There is already a long-term trend towards lower levels of employment in some rich countries. The proportion of American adults participating in the labour force recently hit its lowest level since 1978, and although some of that is due to the effects of ageing, some is not. In a recent speech that was modelled in part on Keynes’s “Possibilities”, Larry Summers, a former American treasury secretary, looked at employment trends among American men between 25 and 54. In the 1960s only one in 20 of those men was not working. According to Mr Summers’s extrapolations, in ten years the number could be one in seven.
This is one indication, Mr Summers says, that technical change is increasingly taking the form of “capital that effectively substitutes for labour”. There may be a lot more for such capital to do in the near future. A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.
Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit.
The lathe of heaven
Economists take the relationship between innovation and higher living standards for granted in part because they believe history justifies such a view. Industrialisation clearly led to enormous rises in incomes and living standards over the long run. Yet the road to riches was rockier than is often appreciated.
In 1500 an estimated 75% of the British labour force toiled in agriculture. By 1800 that figure had fallen to 35%. When the shift to manufacturing got under way during the 18th century it was overwhelmingly done at small scale, either within the home or in a small workshop; employment in a large factory was a rarity. By the end of the 19th century huge plants in massive industrial cities were the norm. The great shift was made possible by automation and steam engines.
Industrial firms combined human labour with big, expensive capital equipment. To maximise the output of that costly machinery, factory owners reorganised the processes of production. Workers were given one or a few repetitive tasks, often making components of finished products rather than whole pieces. Bosses imposed a tight schedule and strict worker discipline to keep up the productive pace. The Industrial Revolution was not simply a matter of replacing muscle with steam; it was a matter of reshaping jobs themselves into the sort of precisely defined components that steam-driven machinery needed—cogs in a factory system.
The way old jobs were done changed; new jobs were created. Joel Mokyr, an economic historian at Northwestern University in Illinois, argues that the more intricate machines, techniques and supply chains of the period all required careful tending. The workers who provided that care were well rewarded. As research by Lawrence Katz, of Harvard University, and Robert Margo, of Boston University, shows, employment in manufacturing “hollowed out”. As employment grew for highly skilled workers and unskilled workers, craft workers lost out. This was the loss to which the Luddites, understandably if not effectively, took exception.

With the low-skilled workers far more numerous, at least to begin with, the lot of the average worker during the early part of this great industrial and social upheaval was not a happy one. As Mr Mokyr notes, “life did not improve all that much between 1750 and 1850.” For 60 years, from 1770 to 1830, growth in British wages, adjusted for inflation, was imperceptible because productivity growth was restricted to a few industries. Not until the late 19th century, when the gains had spread across the whole economy, did wages at last perform in line with productivity (see chart 1).
Along with social reforms and new political movements that gave voice to the workers, this faster wage growth helped spread the benefits of industrialisation across wider segments of the population. New investments in education provided a supply of workers for the more skilled jobs that were by then being created in ever greater numbers. This shift continued into the 20th century as post-secondary education became increasingly common.
Claudia Goldin, an economist at Harvard University, and Mr Katz have written that workers were in a “race between education and technology” during this period, and for the most part they won. Even so, it was not until the “golden age” after the second world war that workers in the rich world secured real prosperity, and a large, property-owning middle class came to dominate politics. At the same time communism, a legacy of industrialisation’s harsh early era, kept hundreds of millions of people around the world in poverty, and the effects of the imperialism driven by European industrialisation continued to be felt by billions.
The impacts of technological change take their time appearing. They also vary hugely from industry to industry. Although in many simple economic models technology pairs neatly with capital and labour to produce output, in practice technological changes do not affect all workers the same way. Some find that their skills are complementary to new technologies. Others find themselves out of work.
Take computers. In the early 20th century a “computer” was a worker, or a room of workers, doing mathematical calculations by hand, often with the end point of one person’s work the starting point for the next. The development of mechanical and electronic computing rendered these arrangements obsolete. But in time it greatly increased the productivity of those who used the new computers in their work.
Many other technical innovations had similar effects. New machinery displaced handicraft producers across numerous industries, from textiles to metalworking. At the same time it enabled vastly more output per person than craft producers could ever manage.
Player piano
For a task to be replaced by a machine, it helps a great deal if, like the work of human computers, it is already highly routine. Hence the demise of production-line jobs and some sorts of book-keeping, lost to the robot and the spreadsheet. Meanwhile work less easily broken down into a series of stereotyped tasks—whether rewarding, as the management of other workers and the teaching of toddlers can be, or more of a grind, like tidying and cleaning messy work places—has grown as a share of total employment.
But the “race” aspect of technological change means that such workers cannot rest on their pay packets. Firms are constantly experimenting with new technologies and production processes. Experimentation with different techniques and business models requires flexibility, which is one critical advantage of a human worker. Yet over time, as best practices are worked out and then codified, it becomes easier to break production down into routine components, then automate those components as technology allows.
If, that is, automation makes sense. As David Autor, an economist at the Massachusetts Institute of Technology (MIT), points out in a 2013 paper, the mere fact that a job can be automated does not mean that it will be; relative costs also matter. When Nissan produces cars in Japan, he notes, it relies heavily on robots. At plants in India, by contrast, the firm relies more heavily on cheap local labour.
Even when machine capabilities are rapidly improving, it can make sense instead to seek out ever cheaper supplies of increasingly skilled labour. Thus since the 1980s (a time when, in America, the trend towards post-secondary education levelled off) workers there and elsewhere have found themselves facing increased competition from both machines and cheap emerging-market workers.

Such processes have steadily and relentlessly squeezed labour out of the manufacturing sector in most rich economies. The share of American employment in manufacturing has declined sharply since the 1950s, from almost 30% to less than 10%. At the same time, jobs in services soared, from less than 50% of employment to almost 70% (see chart 2). It was inevitable, therefore, that firms would start to apply the same experimentation and reorganisation to service industries.
A new wave of technological progress may dramatically accelerate this automation of brain-work. Evidence is mounting that rapid technological progress, which accounted for the long era of rapid productivity growth from the 19th century to the 1970s, is back. The sort of advances that allow people to put in their pocket a computer that is not only more powerful than any in the world 20 years ago, but also has far better software and far greater access to useful data, as well as to other people and machines, have implications for all sorts of work.
The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. Their argument rests on an underappreciated aspect of the exponential growth in chip processing speed, memory capacity and other computer metrics: that the amount of progress computers will make in the next few years is always equal to the progress they have made since the very beginning. Mr Brynjolfsson and Mr McAfee reckon that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.
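The arithmetic behind that claim about exponential growth is worth spelling out: if capability doubles every period, the gain made in the next period equals (to within one unit) everything accumulated since the very beginning. A minimal sketch, assuming a simple doubling model for illustration (not the authors' own data):

```python
# If computing capability doubles each period, the increment in the next
# period equals (almost exactly) all progress made since the start.
def capability(n: int) -> int:
    """Capability after n doublings, starting from 1 unit."""
    return 2 ** n

periods = 10
total_progress_so_far = capability(periods) - capability(0)       # 2**10 - 1 = 1023
next_period_gain = capability(periods + 1) - capability(periods)  # 2**10 = 1024

print(total_progress_so_far, next_period_gain)  # 1023 1024
```

However large the accumulated stock of progress, one more doubling matches it; this is why each new period of exponential improvement feels more dramatic than all that came before.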
A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now that Google cars are rolling round California driver-free, no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess.
Brave new world
Even after computers beat grandmasters at chess (once thought highly unlikely), nobody thought they could take on people at free-form games played in natural language. Then Watson, a pattern-recognising supercomputer developed by IBM, bested the best human competitors in America’s popular and syntactically tricksy general-knowledge quiz show “Jeopardy!” Versions of Watson are being marketed to firms across a range of industries to help with all sorts of pattern-recognition problems. Its acumen will grow, and its costs fall, as firms learn to harness its abilities.
The machines are not just cleverer, they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.
Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive” jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. If such jobs persist, they may engage Mr Graeber’s “bullshit” detector.
Being newly able to do brain work will not stop computers from doing ever more formerly manual labour; it will make them better at it. The designers of the latest generation of industrial robots talk about their creations as helping workers rather than replacing them; but there is little doubt that the technology will be able to do a bit of both—probably more than a bit. A taxi driver will be a rarity in many places by the 2030s or 2040s. That sounds like bad news for journalists who rely on that most reliable source of local knowledge and prejudice—but will there be many journalists left to care? Will there be airline pilots? Or traffic cops? Or soldiers?

There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive (see table). Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over”, that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.
And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers (see article). Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.
In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers” grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics.
The potential for dramatic change is clear. A future of widespread technological unemployment is harder for many to accept. Every great period of innovation has produced its share of labour-market doomsayers, but technological progress has never previously failed to generate new employment opportunities.

The productivity gains from future automation will be real, even if they mostly accrue to the owners of the machines. Some will be spent on goods and services—golf instructors, household help and so on—and most of the rest invested in firms that are seeking to expand and presumably hire more labour. Though inequality could soar in such a world, unemployment would not necessarily spike. The current doldrums in wages may, like those of the early industrial era, be a temporary matter, with the good times about to roll (see chart 3).
These jobs may look distinctly different from those they replace. Just as past mechanisation freed, or forced, workers into jobs requiring more cognitive dexterity, leaps in machine intelligence could create space for people to specialise in more emotive occupations, as yet unsuited to machines: a world of artists and therapists, love counsellors and yoga instructors.
Such emotional and relational work could be as critical to the future as metal-bashing was in the past, even if it gets little respect at first. Cultural norms change slowly. Manufacturing jobs are still often treated as “better”—in some vague, non-pecuniary way—than paper-pushing is. To some 18th-century observers, working in the fields was inherently more noble than making gewgaws.
But though growth in areas of the economy that are not easily automated provides jobs, it does not necessarily help real wages. Mr Summers points out that prices of things-made-of-widgets have fallen remarkably in past decades; America’s Bureau of Labor Statistics reckons that today you could get the equivalent of an early 1980s television for a twentieth of its then price, were it not that no televisions that poor are still made. However, prices of things not made of widgets, most notably college education and health care, have shot up. If people lived on widgets alone—goods whose costs have fallen because of both globalisation and technology—there would have been no pause in the increase of real wages. It is the increase in the prices of stuff that isn’t mechanised (whose supply is often under the control of the state and perhaps subject to fundamental scarcity) that means a pay packet goes no further than it used to.
So technological progress squeezes some incomes in the short term before making everyone richer in the long term, and can drive up the costs of some things even more than it eventually increases earnings. As innovation continues, automation may bring down costs in some of those stubborn areas as well, though those dominated by scarcity—such as houses in desirable places—are likely to resist the trend, as may those where the state keeps market forces at bay. But if innovation does make health care or higher education cheaper, it will probably be at the cost of more jobs, and give rise to yet more concentration of income.

The machine stops
Even if the long-term outlook is rosy, with the potential for greater wealth and lots of new jobs, it does not mean that policymakers should simply sit on their hands in the meantime. Adaptation to past waves of progress rested on political and policy responses. The most obvious are the massive improvements in educational attainment brought on first by the institution of universal secondary education and then by the rise of university attendance. Policies aimed at similar gains would now seem to be in order. But as Mr Cowen has pointed out, the gains of the 19th and 20th centuries will be hard to duplicate.
Boosting the skills and earning power of the children of 19th-century farmers and labourers took little more than offering schools where they could learn to read, write and do algebra. Pushing a large proportion of college graduates to complete graduate work successfully will be harder and more expensive. Perhaps cheap and innovative online education will indeed make new attainment possible. But as Mr Cowen notes, such programmes may tend to deliver big gains only for the most conscientious students.
Another way in which previous adaptation is not necessarily a good guide to future employment is the existence of welfare. The alternative to joining the 19th-century industrial proletariat was malnourished deprivation. Today, because of measures introduced in response to, and to some extent on the proceeds of, industrialisation, people in the developed world are provided with unemployment benefits, disability allowances and other forms of welfare. They are also much more likely than a bygone peasant to have savings. This means that the “reservation wage”—the wage below which a worker will not accept a job—is now high in historical terms. If governments refuse to allow jobless workers to fall too far below the average standard of living, then this reservation wage will rise steadily, and ever more workers may find work unattractive. And the higher it rises, the greater the incentive to invest in capital that replaces labour.
Everyone should be able to benefit from productivity gains—in that, Keynes was united with his successors. His worry about technological unemployment was mainly a worry about a “temporary phase of maladjustment” as society and the economy adjusted to ever greater levels of productivity. So it could well prove. However, society may find itself sorely tested if, as seems possible, growth and innovation deliver handsome gains to the skilled, while the rest cling to dwindling employment opportunities at stagnant wages.

Monday, 2 December 2013

Game Theory: Too Much and Too Little?

In introducing game theory (in chapters 7-9), MWG build upon the theory of rational choice by individual agents, developed earlier in the book, to analyze (describe, explain, and even predict?) the interactions of such agents and the outcomes to which they give rise. In previous chapters, MWG discuss interactions only in the arms-length form of numerous firms and consumers meeting in specific markets (e.g. under ‘perfect competition’, in chapters 3 and 5).
Non-cooperative game theory is presented as the presumed default theory, if not the authoritative one, for understanding interpersonal interactions (including how cooperation may emerge even in the presence of self-seeking behavior and the absence of any ability to make binding agreements[1]). A game is formally defined as a situation in which a number of individuals interact in a setting of strategic interdependence[2]. The chapters are devoted to what might be called the internal discourse of game theory, in particular the various ‘refinements’ of the equilibrium concept that are deemed to add explanatory force. Thus, the chapters proceed from the iterated elimination of dominated strategies to Nash equilibrium, subgame-perfect equilibrium, trembling-hand perfect equilibrium, perfect Bayesian equilibrium and so forth (how hard it is to tell pianissimo from crescendo!).
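The first of these refinements, iterated elimination of strictly dominated strategies, is mechanical enough to sketch in a few lines of code. The payoff matrix below is our own illustrative example (a standard prisoner's dilemma), not one taken from MWG, and the sketch handles domination by pure strategies only:

```python
# Iterated elimination of strictly dominated strategies (pure strategies only).
# payoffs_row[(r, c)] is the row player's payoff; payoffs_col[(r, c)] the
# column player's. A strategy is deleted when another pure strategy of the
# same player does strictly better against every surviving opposing strategy.

def eliminate(payoffs_row, payoffs_col, rows, cols):
    """Return the strategies for each player that survive iterated elimination."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(payoffs_row[(r2, c)] > payoffs_row[(r, c)] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(payoffs_col[(r, c2)] > payoffs_col[(r, c)] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's dilemma: Defect ('D') strictly dominates Cooperate ('C') for both.
R = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
C = {('C', 'C'): 3, ('C', 'D'): 5, ('D', 'C'): 0, ('D', 'D'): 1}
print(eliminate(R, C, ['C', 'D'], ['C', 'D']))  # (['D'], ['D'])
```

Even this toy version makes the informational burden visible: each deletion is justified only if both players know the full payoff structure and know that the other will reason the same way, which is precisely the common-knowledge assumption discussed below.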
After having encountered this battery of concepts -- supported by an entire symbolic armamentarium and buttressed by suitably general existence proofs -- and having learnt to apply them by doing a sufficient number of exercises, the student is left suitably impressed by the technical sophistication of game theory and eager to apply its insights in the world.  However, what has she really learnt?  An ‘external’ critic, willing to raise issues outside of the established frame, might introduce the following issues:
The Ultra-Calculating Conception of the Strategic Actor: Uses and Abuses
As we know, the model of homo economicus foregrounded in the textbook portrays individuals as actors seeking to maximize utility (often interpreted in a narrowly self-interested manner) or profits. The text explicitly identifies “rationality” with the possession of complete and acyclical preferences, which it takes to be the foundation of such maximization. Shaikh (2012) uses the term “hyperrationality” to distinguish this concept of rationality in economics from the more general notion that actions and opinions should be based on reason (a broader view of rationality, as having good reasons for doing what one does, which we have also associated with Amartya Sen). However, the agent of game theory, perhaps more than any other agent encountered so far in the textbook, is assumed to be not merely “rational” (indeed hyperrational, in Shaikh’s sense) but rational in an even more restrictive sense, which we refer to as being “ultra-calculating”. The ultra-calculating agent is assumed to be heroically forward-looking (anticipating all possible combinations of actions and resulting future states), to engage in elephantine record-keeping (recalling all previously known actions and states of the game), and to have unrestricted and costless computational capabilities. Further, an ultra-calculating rational agent assumes that every other player is engaged in the same calculations, benefiting from the same computational capabilities. Possession of such an expansive ‘algorithmic’ capability entails that agents can, inter alia:
  • Build complex scenarios consisting of multiple layers (for instance, by identifying possible responses to possible responses to possible responses to their actions in order to form a ‘complete contingent plan’ or strategy; a bedrock of game-theoretic reasoning), as well as
  • Form complex conjectures about the beliefs that agents hold about one another (exemplified by the assumption called common knowledge -- regarding indefinitely iterated alternating layers of belief about one another’s propensity to act “rationally” -- in order to identify certain strategies as unlikely to be played or to altogether exclude them).
There are many examples of conclusions that appear to depend on such assumptions concerning shared information and behavioral propensities (including the capacity to engage in heroic feats of calculation). These include the successive deletion of strictly dominated strategies and processes of backward induction, as illustrated, for example, in the centipede game (treated in MWG on pp 281-282, Example 9.B.5). An example of this kind lends itself to ambivalent interpretation. On the one hand, it demonstrates the virtuoso technical prowess of game-theoretic analysis through its ability to reach determinate conclusions from specific assumptions. On the other hand, it can also be viewed as a reductio ad absurdum -- showing the implausibility of a certain view of strategic interaction rather than the ability of game theory to suggest a determinate and plausible outcome. In the centipede game specifically, the result is striking: there is a unique subgame-perfect Nash equilibrium in which both players cut the game short, each earning only $1 as a result, when they might instead get $100 by continuing it. Is this unique equilibrium a reasonable prediction of what real players will do? Empirical studies[3] show that it is rarely observed, and that some level of cooperation emerges when the game is actually played. Similarly, there is evidence of cooperative behavior in finitely repeated prisoners’ dilemma games, where standard reasoning says it could not arise. If backward induction suggests that rational players will ‘defect’ at the first chance they have in the centipede game, or that no ‘rational’ self-interested prisoner would do anything other than betray the other in the last round (and therefore in every round) of a finitely repeated prisoner’s dilemma, how do we explain that cooperation seems to emerge even in these very scenarios?
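The backward-induction logic in the centipede game can itself be run mechanically. In the sketch below the pot sizes are our own invented illustration (the pot doubles at each node and the mover takes the larger share), not MWG's exact figures; the point is only that the recursion, applied relentlessly, unravels cooperation all the way back to the first move:

```python
# Backward induction in a simple centipede game.
# At node t (0-indexed), the mover can 'take' the larger share of the
# current pot, or 'pass', letting the pot grow. Player 0 moves at even
# nodes, player 1 at odd nodes.

def solve_centipede(n_nodes):
    """Return (take_node, payoffs): the first node at which the mover takes
    under subgame-perfect play, plus the resulting payoffs (p0, p1)."""
    def take_payoffs(t):
        pot = 4 * 2 ** t                       # pot doubles each round (illustrative)
        mover, other = (pot * 3) // 4, pot // 4  # mover takes 3/4, other gets 1/4
        return (mover, other) if t % 2 == 0 else (other, mover)

    # Work backwards: at the last node the mover takes; earlier movers take
    # whenever taking beats what they would get from the continuation.
    cont = take_payoffs(n_nodes - 1)
    first_take = n_nodes - 1
    for t in range(n_nodes - 2, -1, -1):
        mover = t % 2
        if take_payoffs(t)[mover] >= cont[mover]:
            cont = take_payoffs(t)
            first_take = t
        # else: the mover passes, and the continuation value is unchanged
    return first_take, cont

print(solve_centipede(6))  # (0, (3, 1)): play stops at the very first node
```

Each mover, anticipating that the next mover will take, prefers to take first, so the prediction collapses to immediate defection however large the final pot; this is exactly the conclusion that experimental subjects routinely fail to deliver.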
Escaping Unrealism: Between Scylla and Charybdis
There are two ways of addressing the embarrassment of unrealism.  The first approach is to adopt still greater unrealism in the description of the setting, but in such a way as to engineer the required result -- while maintaining the premise of the ultra-calculating approach to rationality.  For instance, one can assume that the game is infinitely repeated, thus creating the possibility that strategies involving retaliation for non-cooperative behavior can be used to sustain cooperation (or more precisely, its behavioral equivalent, since it is typically supposed that individuals are motivated only by self-seeking considerations).  Although this approach generates more realistic empirical consequences, it does so by making the description of the setting even less realistic.
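The standard version of this retaliation argument is the 'grim trigger' strategy for the infinitely repeated prisoner's dilemma, sketched below with the conventional payoff labels T > R > P (temptation, reward, punishment) and discount factor delta; the numerical values are only examples:

```python
def grim_trigger_sustains(T, R, P, delta):
    """Check whether grim trigger sustains cooperation.

    Conforming yields the cooperative payoff R in every period; a
    one-shot deviation yields the temptation payoff T once, followed
    by the punishment payoff P forever (mutual defection).
    """
    conform = R / (1 - delta)              # R + delta*R + delta^2*R + ...
    deviate = T + delta * P / (1 - delta)  # T now, then P forever
    return conform >= deviate

# With T=5, R=3, P=1, cooperation is sustainable exactly when
# delta >= (T - R) / (T - P) = 0.5:
print(grim_trigger_sustains(5, 3, 1, 0.6))   # True
print(grim_trigger_sustains(5, 3, 1, 0.4))   # False
```

The comparison makes the trade-off in the text explicit: cooperation is 'explained', but only by assuming the interaction literally never ends and that players discount the future sufficiently little.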
The second approach is to adopt greater realism in the description of human agency (i.e. relaxing the premise of an ultra-calculating agent) while maintaining the formal description of the setting.  For instance, one may allow for the possibility that individuals are motivated by considerations of fairness, or by adherence to social norms, or allow for more extended conceptions of rationality (such as those involving Smithian enlightened self-interest or Kantian regard for the moral law -- discussed further below) as a way of directly introducing the foundations of cooperative behavior.  Dropping the assumption of common knowledge, and introducing uncertainty in the mind of at least one player about the ‘rationality’ of other players, may lead to accounts of a process of learning in which agents develop knowledge about each other’s actions and beliefs over time (see e.g. Bicchieri 1993), leading to strategies different from those which would otherwise be chosen.
While both approaches can appear to rescue the initial framework from its fatal incapacity to generate plausible conclusions, the latter is decidedly more attractive, as it does away with an assumption (the ultra-calculating approach) that is patently inconsistent with knowledge of ourselves and of others.
Distinct from the problem of the ultra-calculating approach generating determinate but unconvincing results is that of its generating insufficiently determinate results.   We can consider, for example, the introduction (in relation to MWG’s discussion of the centipede game on p282) of the concept of trembling-hand perfect Nash equilibrium in order to exclude strategies which would not be played if one’s opponent were to make small mistakes or act in a fashion which was ‘slightly’ irrational.  Embracing the possibility of departures from ultra-calculating rationality becomes here a tool for narrowing the range of predicted outcomes.
The extraordinary range of equilibria that is a generic feature of many games (captured, for instance, in a specific context by the ‘folk theorem’, which accommodates a wide range of possible behavior; see MWG p404[4]), as well as the lack of robustness of the predicted outcomes of games to the way in which the strategic interaction is characterized (an example is provided by the radically different outcomes encountered when inter-firm competition is characterized in terms of quantity-based Cournot competition or price-based Bertrand competition), are among the features that have dispatched the idea that game theory could have very much predictive content.  If one can predict (almost) anything then one has predicted nothing.
We can also think of instances in which computational complexity might render the prescribed strategy (even if theoretically computable) completely irrelevant as a practical matter. For instance, if the game of chess is treated as finite (ending if a specified number of moves of a given type occur[5]) it possesses a ‘solution’ in the sense that there exists a (weakly) dominant strategy which necessarily leads to a win or a draw (i.e. either white can force a win or a draw, black can force a win or a draw, or both sides can force a draw – see Aumann (1989), Eichberger (1993), or Hart (1992)). This is a consequence of the proposition described in MWG as Zermelo’s theorem[6] (Prop. 9.B.1 on p272): every finite game of perfect information has a pure strategy Nash equilibrium that can be derived through backward induction.  The fact that in chess such a strategy has neither been found nor is likely to be found, even with the aid of the most powerful of computers (which, of course, is exactly what makes the game exciting and worth playing), seems to provide sufficient evidence that the ultra-calculating approach is not a good assumption.  It is perhaps unsurprising that the world’s best chess players spend much of their effort on psychic preparations or on efforts to undermine the psychic state of their opponent, and that chess is often thought of as involving a heavy element of intuition which cannot be reduced to brute-force computation (see e.g., here).  The surprising conclusion that at least one player in chess has a (weakly) dominant strategy is not especially useful in description, explanation, prediction or prescription.   Analogously, in less deterministic contexts than chess, the choice of heuristic decision-making rules (as opposed to the use of computational methods) may be crucial to describing and explaining the choices of actors (and perhaps even to prescribing for them)[7].
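The contrast is easy to exhibit in a game small enough to actually solve: tic-tac-toe can be exhausted by precisely the backward-induction (minimax) reasoning that Zermelo's theorem guarantees will work for chess, confirming that best play yields a draw. A sketch follows; the string encoding of the board is our own illustrative choice, not anything in MWG.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    # b is a 9-character string over {'X', 'O', ' '}
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def value(b, mover):
    """Game value with 'mover' to play: +1 if X can force a win,
    -1 if O can force a win, 0 if best play yields a draw."""
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in b:
        return 0
    nxt = 'O' if mover == 'X' else 'X'
    vals = [value(b[:i] + mover + b[i + 1:], nxt)
            for i, c in enumerate(b) if c == ' ']
    return max(vals) if mover == 'X' else min(vals)

print(value(' ' * 9, 'X'))   # 0: under best play, tic-tac-toe is a draw
```

Chess has the same logical structure but a state space so vastly larger that no such exhaustive solution is feasible, which is exactly the point about the practical irrelevance of the theoretically guaranteed strategy.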
Aside from the implausibility of the conception of the agent as a relentless calculating machine, the narrowness of the conception of the agent’s motivations proves another severe limitation.  In game theory, the agent is generally assumed to be motivated only by the payoffs realized in the game.  If the agent is instead assumed to give any importance at all to moral or expressive concerns relating to the nature of an action (e.g. whether it is compatible with a certain sense of integrity or personal identity) rather than focusing exclusively upon the outcome to which it gives rise, this can lead to a very different analysis (see e.g. Rabin (1994)).   It is often possible to interpret whatever a person appears to be maximizing as that person’s goal (though not always: as Sen’s work on menu dependence (Sen 1997, 2002b) has shown, some very reasonable behaviors, in particular acting in accordance with social norms, may not even be compatible with the maximization of a utility function, since they may violate the weak axiom of revealed preference (WARP), a necessary condition for such maximization).  However, there appears to be little explanatory value in such an approach. Sen (1987, 2002a) undertakes a tripartite classification of the aspects of behavior attributed to the utility-maximizing homo economicus (self-centered welfare, self-welfare goals, and self-goal choice). Relaxing any of these behavioral assumptions, or characterizing them in specific ways, may lead to very different conclusions as to what is to be expected in situations of strategic interdependence.  Ultimately, a substantive diagnosis of what specific elements of reasoning, psychology and environment guide an individual’s actions is likely to be needed to generate a contentful explanation (on which see also Sen (1977)).
Importantly, individuals may adhere to social norms (which may lead to some recognition being given to other people’s goals, or simply to acting in accordance with relevant contextual requirements).  Norms of honesty, promise keeping, and reciprocity, to name just a few, play a critical role in social dynamics.  Although there has been a literature attempting to show how the emergence of norms as a result of repetitive social interactions could lend itself to a game-theoretical account[8], there are strong reasons to cast doubt on the ultimate explanatory value of an approach in which norms are viewed as the extended consequence of goal-seeking instrumental behavior (see e.g. Elster (1989, 2009)).  Whatever the origins of such norms, if individuals (at least some) act in accordance with them, social interactions and outcomes will be influenced accordingly, as recognized in the game-theoretic literature on coordination games, as well as on the impact of some players choosing "irrational" out-of-equilibrium play.  Moreover, social norms may be activated in sophisticated ways that are contingent on the behavior and orientations of others, as well as on the nature of available strategies and payoffs, and failing to take account of their empirical salience can lead to errors in explanation and prediction (i.e. in the very 'heartland' of game theory's claims to relevance: see e.g. Bicchieri (2006), Bowles and Gintis (2013) and Ostrom (1990)).
In the social context, the presence of such procedural considerations can lead to outcomes which differ greatly from those conventionally highlighted in game theory, and as noted above, shed light on the dynamics of cooperative behavior.  To take an obvious and insufficiently recognized example, we may consider a version of the prisoner’s dilemma that involves actions with a symmetric private cost and public benefit (such as not littering).  Whereas the prisoner’s dilemma in its conventional rendition involves the seemingly inescapable conclusion that all agents will ‘defect’ if they are ‘rational’, one may predict that they will act in a fashion which is exactly the contrary if the agents are assumed instead to be Kantians following the categorical imperative (doing as they would will others to do if they were all to act in accordance with a universal law). Do economists really wish to argue that Kant’s conception of the agent is one that is insufficiently rational?   One can have good reasons to do what one does without acting in a fashion that is narrowly instrumental.  In particular, acting in accordance with procedural moral criteria may be viewed as perfectly reasonable (Sen (1987); on the lengthy history of such arguments see also Tuck (2008)).
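The contrast between the two behavioral rules can be stated very compactly. In an n-player 'littering' game where an action costs its taker c but confers a benefit b on each of the n players, the payoff-maximizing best reply and the Kantian (universalized) choice come apart whenever b < c < n*b; a sketch with made-up parameter values:

```python
def nash_action(b, c, n):
    # Best reply holding others' behavior fixed: contributing changes
    # one's own payoff by b - c, so a payoff-maximizer contributes
    # only if b > c -- regardless of what everyone else does.
    return "contribute" if b > c else "defect"

def kantian_action(b, c, n):
    # Categorical-imperative test: compare the universalized outcomes.
    # If all contribute, each receives n*b - c; if none do, each gets 0.
    return "contribute" if n * b - c > 0 else "defect"

# With b=1, c=3, n=10 the two rules prescribe opposite actions:
print(nash_action(1, 3, 10))     # defect
print(kantian_action(1, 3, 10))  # contribute
```

Nothing in the Kantian rule is 'irrational' in any ordinary sense; it simply evaluates a different question (what if everyone did this?) rather than the best-reply question.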
Finally, the agent of game theory also has a ‘hard self’. She is a sovereign who has an unambiguous idea of her payoffs, as well as of her possible choices, and pursues these relentlessly (as already noted).  In fact, various traditions in social ‘science’ recognize that agents are often susceptible to formation by social processes, and by small and large manipulations by others (which may or may not be intentional) that shape their identities, self-understandings and perceived interests.  The work of scholars concerned with power [see e.g., from diverse perspectives, Bourdieu (1984), Butler (1997), Fanon (1952), Foucault (1977, 2009), Godelier (1986), Hegel (1807, 1977), Lukes (2005), Said (1993) and Sen (1990)] gives an idea of some of the difficulties that may be encountered if a game-theoretic framework is applied to situations in which agents can be led to their beliefs and perceived interests through the effects of a dominating force or power structure, which may be diffuse and may employ subtle methods such as influencing self-understanding.  The actions available to dominant actors may include influencing the perceived payoffs or permissible strategies of others.   More generally, even without appeal to the role of power, the self-understandings of agents, including their preferences, may be endogenous to the play of the game (see e.g. Piore (1995)).  In such situations, identifying an agent’s interests in terms of payoffs may not be straightforward, but more pertinently still, neglecting the presence of such modes of interaction may lead to a portrayal of the game which is incorrect in its description and therefore risks inaccuracy in explanation and prediction.
Game Theory 2.0? Prolegomenon
The historical program of game theory might be thought to have been underpinned by at least three components: the predictive, the prescriptive, and the explanatory.  The predictive motivation involved the idea that game theory could provide an account of the actions and outcomes likely to emerge in situations of strategic interdependence.  The prescriptive motivation involved the idea that game theory could provide a guide to how to act if one sought to bring about certain outcomes (see Mirowski 2002 for discussion of the links between game theory and national security concerns, and in particular the role played by the RAND Corporation and the Cowles Commission in the elaboration of game theory during the Cold War).  The explanatory motivation (perhaps the least explicitly articulated) involved the idea that game theory could provide a repertoire of concepts through which one could understand strategic interactions.
How well has game theory done in each of these respects? We have already recognized that game theory generated an ‘embarrassment of riches’ in the form of too many equilibria.  Perhaps this helps to explain why the field has entered its own disciplinary desuetude (if jobs and monies are any indication).  Game theory was a booming area within economics (and in particular within economic theory) in the 1970s and 1980s, replacing general equilibrium theory (after the latter’s own encounter with an embarrassment of riches, in the form of the Sonnenschein-Mantel-Debreu results) as the place where promising young economic theorists went to prove themselves.
If game theory has done better as a prescriptive body, it is not because success in prediction has enabled prescription (since there has been little such success) but because of the link between prescription and explanation, and the much greater success of game theory in the latter.  Where game theory has succeeded most is in providing a repertoire of concepts that can be used to describe the dynamics of strategic interactions, when applied in conjunction with empirical judgment concerning the relevance of the concepts to specific cases.  Such description has in turn provided the basis of more incisive approaches to the explanation of observed actions and outcomes, as well as to prescription.  Whereas prediction demands a high degree of ex ante determinacy, the use of game-theoretic concepts in these other respects does not, relying only on the ex ante and ex post usefulness of the concepts developed in order to make sense of a messy world.  To the extent there are empirical regularities which deployments of game-theoretic concepts can illuminate, these are, even if useful to recognize, of a rather non-specific kind (such as the idea that collective action might be harder to achieve when there are larger numbers of agents, on which see Olson (1965)).  This is the relevance that game theory finds in industrial economics, in business strategy, in geopolitical analyses and in other fields.[9]  In applied problems (from understanding macroeconomic coordination among countries to entry deterrence in oligopolies) the language of game theory has proved a valuable aid to interpretation.
Where the use of game theory has also, recently, been applied in institutional design, for instance in the design of auctions or in matching algorithms (see here), this is not so much because of its predictive usefulness as because of the theoretical surety it provides that outcomes with specific desired properties (such as generating allocations which cannot be improved upon through further trade, or being difficult to manipulate through means such as the strategic reporting of preferences) can be achieved.  However, human beings are ingenious at stepping outside of a specified frame and finding entirely unanticipated methods of making things work to their advantage -- reshaping ostensibly well-defined strategies and payoffs and not merely recognizing or respecting them.[10]  For this reason, the confidence that game theory provides general principles for institutional design may not be wholly merited.
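One such theoretical surety is illustrated by the deferred-acceptance (Gale-Shapley) algorithm that underlies many matching designs: it is guaranteed to produce a stable matching, one that no proposer-responder pair would jointly abandon. A minimal sketch (the toy preference lists are invented for illustration):

```python
def gale_shapley(prop_prefs, resp_prefs):
    """Proposer-optimal stable matching via deferred acceptance.

    prop_prefs[p] -- responders in p's order of preference;
    resp_prefs[r] -- proposers in r's order of preference.
    Returns a dict mapping each responder to her matched proposer.
    """
    # rank[r][p]: position of proposer p in responder r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in resp_prefs.items()}
    free = list(prop_prefs)            # proposers not yet tentatively matched
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                         # responder -> proposer
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]   # p's best not-yet-tried responder
        next_choice[p] += 1
        if r not in match:
            match[r] = p                     # tentatively accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])            # r trades up; old partner re-enters
            match[r] = p
        else:
            free.append(p)                   # r rejects p; p tries again
    return match

prop = {'a': ['x', 'y'], 'b': ['x', 'y']}
resp = {'x': ['b', 'a'], 'y': ['a', 'b']}
print(gale_shapley(prop, resp))   # {'x': 'b', 'y': 'a'}
```

The stability guarantee is exactly the kind of property referred to in the text; the caveat there still applies, since participants may act outside the reported-preference frame the design assumes.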
In the end, game theory has provided us a vocabulary, with a richer range of concepts than had previously been available (we have not discussed all of these concepts above; for instance those appearing in evolutionary game theory, which have a distinct conceptual foundation insofar as they may not rely on intentionality at all, and which incidentally find no mention in MWG).  If this is a triumph -- and it is, insofar as game theory involves genuine intellectual achievements, including the development of certain non-obvious concepts -- it is a smaller one than had been hoped for.  It also carries its own dangers.  In particular, our means of understanding interpersonal relations may privilege, or even become limited to, the game-theoretic repertoire.  Can we escape the tyranny of such formalism (“throw away the ladder”[11]) while deriving whatever insights it may help us to arrive at?
We are left with the following questions. What is the role of empirical investigation concerning specific contexts of strategic interaction and the attitudes as well as outcomes that prevail within them? Can a game (e.g. inter-firm competition) ever be discussed without reference to the meta-games in which it is embedded (e.g. larger institutional context of market, law, government and society) and the possibility of reshaping the meta-game as a way of influencing the outcome of the game? What role do understandings of the structures and mechanisms through which social identities, preferences and perceived interests are shaped, and the way in which these are in turn shaped by intentional and unintentional action, play in the analysis of the dynamics of interdependence?  What is the relation between the dynamics of the multiple self[12] (the complex agent containing within herself distinct ideas of permissible strategies and relative payoffs) and the dynamics of multiple selves (interaction between persons)?  What about non-equilibrium reasoning, which describes processes by which people react to one another instead of insisting on identifying states defined by mutually compatible reactions?  [Is the cascade of actions undertaken by lemmings an ‘equilibrium’? Is this an abstruse example or does it apply to such phenomena as the formation and collapsing of asset bubbles?]  Can insights that are not easy to formulate within the existing framework of game theory still find a place in the analysis of situations of strategic interdependence? Is a broader integrated perspective possible in which game theory is treated as one tool of social enquiry rather than the key? What would this imply about the appropriate relation between economics and adjoining disciplines such as sociology and psychology?
Although MWG provides a faithful introduction to game theory, it does so along decidedly conventional, and even complacent lines.  The authors cannot be wholly faulted for this, as there is not very much by way of a developed alternative body of theory expressly concerned with strategic interactions.[13]  Who will create it?
References
Aumann, R. J. (1989). Lectures on Game Theory. Boulder, CO: Westview Press.
Bicchieri, C. (1993). Rationality and Coordination. Cambridge: Cambridge University Press. Second Edition, 1996.
Bicchieri, C. (2006). The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge: Cambridge University Press.
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgment of Taste, Cambridge, MA: Harvard University Press.
Bowles, S. and H. Gintis (2013). A Cooperative Species: Human Reciprocity and its Evolution. Princeton: Princeton University Press.
Butler, J. (1997). The Psychic Life of Power: Theories in Subjection. Stanford University Press.
Dixit, A. and Nalebuff, B. (1991). Thinking Strategically: The Competitive Edge in Business, Politics and Everyday Life. New York: W W Norton.
Dixit, A. and Nalebuff, B. (2008). The Art of Strategy: A Game Theorist's Guide to Success in Business and Life. New York: W W Norton.
Eichberger, J. (1993). Game Theory for Economists, San Diego: Academic Press.
Elster, J. (1987). ed. The Multiple Self.  Cambridge: Cambridge University Press.
Elster, J. (1989). “Social Norms and Economic Theory”, Journal of Economic Perspectives, Vol. 3, No. 4. Available on: http://www.jstor.org/stable/1942912
Elster, J. (2009). “Norms”, in Oxford Handbook of Analytical Sociology, New York: Oxford University Press. Available in draft form on urrutiaelejalde.org/files/2012/01/elster.pdf.
Fanon, F.  (1952). Black Skin, White Masks (1967 translation by Charles Lam Markmann). New York: Grove Press.
Foucault, M. (1977). Discipline and Punish: The Birth of the Prison (translation by Alan Sheridan), New York: Random House (second edition 1995).
Foucault, M. (2009). Security, Territory, Population: Lectures at the Collège de France 1977—1978, New York: Picador.
Gigerenzer, G. (2010).  Rationality for Mortals: How People Cope with Uncertainty.  New York: Oxford University Press.
Godelier, M. (1986). The Making of Great Men. Male Domination and Power among the New Guinea Baruya. Cambridge University Press.
Hargreaves-Heap, S. and Varoufakis, Y. (2004). Game Theory: A Critical Introduction. London: Routledge.
Hart, S. (1992). Games in Extensive and Strategic Forms, in Aumann, R. J., and Hart, S. (eds.), Handbook of Game Theory, Volume 1, Amsterdam, North-Holland.
Hegel, G.W.F. (1807, 1977). Phenomenology of Spirit.  Translated by A.V. Miller with analysis of the text and foreword by J.N. Findlay. Oxford: Clarendon Press.
Lukes, S. (2005). Power: A Radical View, New York: Macmillan (second Edition).
McKelvey, R. and Palfrey, T. (1992). “An experimental study of the centipede game”.  Econometrica 60 (4): 803–836.
Mirowski, P. (2002). Machine Dreams: Economics Becomes a Cyborg Science. New York: Cambridge University Press.
Nagel, R. and Tang, F. F. (1998). "An Experimental Study on the Centipede Game in Normal Form: An Investigation on Learning". Journal of Mathematical Psychology 42 (2–3): 356–384.
Nandy, A. (1983). The Intimate Enemy: Loss and Recovery of Self Under Colonialism. Delhi: Oxford University Press.
Olson, M. (1965). The Logic of Collective Action: Public Goods and the Theory of Groups, Cambridge: Harvard University Press, (second edition 1971).
Ostrom, E. (1990).  Governing the Commons:  The Evolution of Institutions for Collective Action, Cambridge: Cambridge University Press.
Piore, M. (1995).  Beyond Individualism. Cambridge, MA: Harvard University Press.
Rabin, M. (1993). “Incorporating Fairness into Game Theory and Economics”, American Economic Review 83: 1281-1302.
Rabin, M. (1994). 'Incorporating Behavioral Assumptions into Game Theory', in James Friedman (ed.), Problems of Coordination in Economic Activity, Norwell, MA: Kluwer Academic Publishers.
Said, E. (1993). Culture and Imperialism. New York: Knopf.
Schwalbe, U. and Walker, P. (2001), 'Zermelo and the Early History of Game Theory', Games and Economic Behavior, Vol. 34:123-137.
Sen, A. (1977). 'Rational Fools: A Critique of the Behavioral Foundations of Economic Theory'. Philosophy and Public Affairs, Volume  6, No. 4.
Sen, A. (1987). On Ethics and Economics. Oxford: Basil Blackwell. See “Welfare, Goals and Choices”, pp. 80-88.
Sen, A. (1990). “Gender and Cooperative Conflicts”, in Tinker, I. Persistent Inequalities. New York: Oxford University Press.
Sen, A. (1997). “Maximization and the Act of Choice”.  Econometrica,  65(4): 745-780.
Sen, A. (2002a). “Goals, Commitment and Identity” in Rationality and Freedom, Cambridge: Harvard University Press.
Sen, A. (2002b). “Consistency of Choice”, in Rationality and Freedom. Cambridge: Harvard University Press.
Shaikh, A. (2012). “Rethinking Microeconomics: A Proposed Reconstruction”, Working Paper, The New School for Social Research.
Tuck, R. (2008). Free Riding. Cambridge, MA: Harvard University Press.
Zermelo, E. (1913). “Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels”, Proc. Fifth Congress Mathematicians (Cambridge 1912), Cambridge University Press 1913, 501-504.
Footnotes
[1] MWG, Part Two, p.218.
[2] MWG, Ch.7. p.219.
[3] See for example McKelvey and Palfrey (1992) and Nagel and Tang (1998).
[4] The folk theorem is introduced by MWG in the course of applying game theory to market power in Ch. 12.
[5] See e.g. http://en.wikipedia.org/wiki/Rules_of_chess . Note that a draw can be claimed, but need not be claimed, in such a circumstance.
[6] The theorem is named after the mathematician Ernst Zermelo, who tried to analyze systematically (in an early article published in 1913) the question of whether there existed “winning positions” in chess, from which the other party could be unavoidably checkmated (Zermelo 1913). What is described as Zermelo’s theorem in MWG was not in fact established in Zermelo’s original article. For a modern translation of the original, and a thorough discussion of the subsequent misunderstandings of it, see Schwalbe and Walker (2001), available on www.math.harvard.edu/~elkies/FS23j.03/zermelo.pdf .
[7] For exploration of this insight in the context of individual decision-making, see Gigerenzer (2010). 
[8] For references to views of social norms as equilibria of coordination games or products of evolutionary selection see e.g. http://plato.stanford.edu/entries/social-norms/.
[9] See for example Dixit and Nalebuff (1991, 2008), and others. 
[10] “Anything you can do, I can do meta”: A quip attributed to the late G.A. Cohen.
[11] The ladder metaphor is Wittgenstein’s (Tractatus Logico-Philosophicus, proposition 6.54).
[12] See e.g., from very different points of view, Elster (1987) and Nandy (1983).
[13] For a critical perspective, however, see Hargreaves-Heap and Varoufakis (2004).


Tuesday, 18 December 2012

Innovation

From Wikipedia, the free encyclopedia

Innovation is the development of new customer value through solutions that meet new needs, unarticulated needs, or old customer and market needs in new ways. This is accomplished through different or more effective products, processes, services, technologies, or ideas that are readily available to markets, governments, and society. Innovation differs from invention in that innovation refers to the use of a better and, as a result, novel idea or method, whereas invention refers more directly to the creation of the idea or method itself. Innovation differs from improvement in that innovation refers to the notion of doing something different (Lat. innovare: "to change") rather than doing the same thing better.

Etymology

The word innovation derives from the Latin innovatus, the past participle of innovare, "to renew or change," from in- "into" + novus "new". Research on the diffusion of innovations began in 1903 with the seminal work of Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde (1903) defined the innovation-decision process as a series of steps that includes:[1]
  1. First knowledge
  2. Forming an attitude
  3. A decision to adopt or reject
  4. Implementation and use
  5. Confirmation of the decision
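Tarde's S-shaped diffusion curve is commonly formalized as a logistic function of time. The sketch below adopts that common assumption; the rate k and midpoint t0 are illustrative parameters, not estimates from any dataset:

```python
import math

def adoption_share(t, k=1.0, t0=0.0):
    # Logistic S-curve: fraction of eventual adopters who have adopted
    # by time t -- slow start, rapid middle, eventual saturation.
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

print(round(adoption_share(-5), 3))  # early adopters: ~0.007
print(adoption_share(0))             # inflection point: 0.5
print(round(adoption_share(5), 3))   # saturation: ~0.993
```

The three printed points trace the characteristic shape: adoption creeps along among early adopters, accelerates through the majority, and flattens as the market saturates.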

Inter-disciplinary views

Individual

Creativity has been studied using many different approaches.

Society

Due to its widespread effect, innovation is an important topic in the study of economics, business, entrepreneurship, design, technology, sociology, and engineering. In society, innovation aids in comfort, convenience, and efficiency in everyday life. For instance, successive innovations in railroad equipment and infrastructure brought greater safety, easier maintenance, higher speed, and greater weight capacity for passenger services. These innovations included the shift from wood to steel cars, from iron to steel rails, from stove-heated to steam-heated cars, from gas lighting to electric lighting, and from steam-powered to diesel-electric locomotives. By the mid-20th century, trains were making longer, faster, and more comfortable trips at lower costs for passengers.[2] Other innovations that add to everyday quality of life include: the evolution of the light bulb from incandescent to compact fluorescent and then LED technologies, which offer greater efficiency, durability and brightness; the progression from modems to cellular phones and then smartphones, which give the public internet access at any time or place; and the move from cathode-ray tube to flat-screen LCD televisions.

Business and economics

In business and economics, innovation is the catalyst to growth. With rapid advancements in transportation and communications over the past few decades, the old-world concepts of factor endowments and comparative advantage, which focused on an area’s unique inputs, are outmoded for today’s global economy. Economist Joseph Schumpeter, who contributed greatly to the study of innovation, argued that industries must incessantly revolutionize the economic structure from within, that is, innovate with better or more effective processes and products, such as the shift from the craft shop to the factory. He famously asserted that “creative destruction is the essential fact about capitalism.”[3] In addition, entrepreneurs continuously look for better ways to satisfy their consumer base with improved quality, durability, service, and price, which come to fruition in innovation with advanced technologies and organizational strategies.[4]
One prime example is the explosive boom of Silicon Valley startups out of the Stanford Industrial Park. In 1957, dissatisfied employees of Shockley Semiconductor, the company of Nobel laureate and co-inventor of the transistor William Shockley, left to form an independent firm, Fairchild Semiconductor. After several years, Fairchild developed into a formidable presence in the sector. Its founders in turn left to start companies based on their own latest ideas, and leading employees then did the same. Over the next 20 years, this snowball process launched the momentous startup explosion of information technology firms. Essentially, Silicon Valley began as 65 new enterprises born out of Shockley’s eight former employees.[5]

Organizations

In the organizational context, innovation may be linked to positive changes in efficiency, productivity, quality, competitiveness, market share, and other factors. All organizations can innovate, including, for example, hospitals,[6] universities, and local governments. For instance, former Mayor Martin O’Malley pushed the City of Baltimore to use CitiStat, a performance-measurement and management system that allows city officials to maintain statistics on everything from crime trends to the condition of potholes. The system aids in better evaluation of policies and procedures, with accountability and efficiency in terms of time and money. In its first year, CitiStat saved the city $13.2 million.[7] Even mass transit systems have innovated, from hybrid bus fleets to real-time tracking at bus stands. In addition, mobile data terminals in vehicles, which serve as communication hubs between vehicles and a control center, automatically send data on location, passenger counts, engine performance, mileage and other information. This tool helps to deliver and manage transportation systems.[8]
Still other innovative strategies include hospitals digitizing medical information in electronic medical records; HUD’s HOPE VI initiatives to transform severely distressed public housing into revitalized, mixed-income environments; the Harlem Children’s Zone, which uses a community-based approach to educate local area children; and the EPA’s brownfield grants, which aid in converting brownfields into sites for environmental protection, green spaces, and community and commercial development.

Sources of Innovation

There are several sources of innovation. According to Peter F. Drucker, the general sources of innovation include changes in industry structure, in market structure, in local and global demographics, in human perception, mood and meaning, and in the amount of already available scientific knowledge. In the simplest linear model of innovation the traditionally recognized source is manufacturer innovation. This is where an agent (person or business) innovates in order to sell the innovation. Another source of innovation, only now becoming widely recognized, is end-user innovation. This is where an agent (person or company) develops an innovation for their own (personal or in-house) use because existing products do not meet their needs. MIT economist Eric von Hippel identified end-user innovation as by far the most important in his classic book on the subject, The Sources of Innovation.[9] In addition, the robotics engineer Joseph F. Engelberger asserts that innovations require only three things: 1. a recognized need, 2. competent people with relevant technology, and 3. financial support.[10] The Kline chain-linked model of innovation[11] places emphasis on potential market needs as drivers of the innovation process, and describes the complex and often iterative feedback loops between marketing, design, manufacturing, and R&D.
Innovation by businesses is achieved in many ways, with much attention now given to formal research and development (R&D) for "breakthrough innovations". R&D helps spur patents and other scientific innovations that lead to productivity growth in areas such as industry, medicine, engineering, and government.[12] Yet innovations can also be developed by less formal on-the-job modification of practice, through the exchange and combination of professional experience, and by many other routes. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice, though there are many exceptions to each of these trends.
Customers buying products or using services are an important source of innovation. Firms may therefore incorporate users in focus groups (the user-centred approach), work closely with so-called lead users (the lead user approach), or let users adapt products themselves. The lead user method focuses on idea generation, relying on leading users to develop breakthrough innovations. U-STIR, a project to innovate Europe's surface transportation system, employs such workshops.[13] A great deal of innovation is done by those actually implementing and using technologies and products as part of their normal activities; in most cases these user-innovators have some personal motivation. Sometimes user-innovators become entrepreneurs selling their product, trade their innovation in exchange for other innovations, or see their innovations adopted by their suppliers. Nowadays they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation, users or communities of users can further develop technologies and reinvent their social meaning.[14][15]

Goals/failures

Programs of organizational innovation are typically tightly linked to organizational goals and objectives, to the business plan, and to market competitive positioning. One driver for innovation programs in corporations is to achieve growth objectives. As Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone... Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results."[16]
One survey across a large number of manufacturing and services organizations found that systematic programs of organizational innovation are most frequently driven by (in decreasing order of popularity): improved quality, creation of new markets, extension of the product range, reduced labor costs, improved production processes, reduced materials, reduced environmental damage, replacement of products/services, reduced energy consumption, and conformance to regulations.[16]
These goals vary between improvements to products, processes and services, and dispel the popular myth that innovation deals mainly with new product development. Most of the goals could apply to any organisation, be it a manufacturing facility, marketing firm, hospital or local government. Whether innovation goals are successfully achieved depends greatly on the environment prevailing in the firm.[17]
Conversely, innovation programs can fail. The causes of failure have been widely researched and vary considerably. Some causes are external to the organization and outside its control; others are internal and ultimately within its control. Internal causes of failure can be divided into those associated with the cultural infrastructure and those associated with the innovation process itself. Common causes of failure within the innovation process in most organisations can be distilled into five types: poor goal definition, poor alignment of actions to goals, poor participation in teams, poor monitoring of results, and poor communication and access to information.[18]

Diffusion

[Figure: successive s-curves of the innovation life cycle (InnovationLifeCycle.jpg)]
Once an innovation occurs, it may spread from the innovator to other individuals and groups. It has been proposed that the life cycle of innovations can be described using the "s-curve", or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point customers begin to demand the product and its growth accelerates. New incremental innovations or changes to the product allow growth to continue. Towards the end of its life cycle, growth slows and may even begin to decline; in the later stages, no amount of new investment in the product will yield a normal rate of return.
The s-curve derives from the assumption that new products have a "product life": a start-up phase, a period of rapid revenue growth, and eventual decline. In fact, the great majority of innovations never get off the bottom of the curve and never produce normal returns.
Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves come along to replace older ones and continue to drive growth upwards. In the figure above, the first curve shows a current technology; the second shows an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of each life cycle depends on many factors.[19]
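The shape of the s-curve can be sketched with a simple logistic function. This is an illustrative model only; real diffusion curves are fitted to data, and the parameter names (`ceiling`, `midpoint`, `steepness`) are assumptions for the sketch:

```python
import math

def s_curve(t, ceiling=100.0, midpoint=5.0, steepness=1.0):
    """Logistic s-curve: revenue grows slowly at first, accelerates
    around the midpoint, then saturates near the ceiling."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

revenue = [s_curve(t) for t in range(11)]

# Growth is fastest in the middle of the life cycle:
early = revenue[1] - revenue[0]    # slow start-up phase
mid   = revenue[6] - revenue[5]    # rapid mid-cycle growth
late  = revenue[10] - revenue[9]   # flattening towards maturity
```

Successive s-curves can be modelled the same way: a second curve started later, with a higher ceiling, eventually overtakes the first.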

Measures

Innovation can be measured at two fundamentally different levels: the organizational level and the political level.

Organizational level

Measurement of innovation at the organizational level relates to individuals, teams, and private companies from the smallest to the largest. It can be conducted through surveys, workshops, consultants or internal benchmarking; there is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards covering several aspects of innovation, such as business measures related to finances, innovation-process efficiency, employees' contribution and motivation, as well as benefits for customers. The values measured vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations.[20]
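A balanced-scorecard style measurement can be reduced, as a minimal sketch, to weighted ratios of actuals against targets. The KPI names, target values, and weights below are hypothetical, not taken from any published scorecard:

```python
def scorecard_score(actuals, targets, weights):
    """Weighted sum of each KPI's attainment ratio, capped at 100%."""
    return sum(w * min(actuals[k] / targets[k], 1.0)
               for k, w in weights.items())

# Hypothetical innovation KPIs (all framed so higher is better):
actuals = {"new_product_revenue_share": 0.25, "patents_filed": 30, "rd_spend_share": 0.06}
targets = {"new_product_revenue_share": 0.30, "patents_filed": 25, "rd_spend_share": 0.08}
weights = {"new_product_revenue_share": 0.5, "patents_filed": 0.2, "rd_spend_share": 0.3}

score = scorecard_score(actuals, targets, weights)  # 0.0 (nothing met) .. 1.0 (all targets met)
```

In practice the KPI set, targets and weights would be tailored to the firm, which is one reason no established general measure exists.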

Political level

At the political level, measures of innovation focus more on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1995) suggests standard guidelines for measuring technological product and process innovation. Some consider the Oslo Manual complementary to the Frascati Manual of 1963. The 2005 revision of the Oslo Manual takes a wider perspective on innovation, including marketing and organizational innovation. These standards are used, for example, in the European Community Innovation Surveys.[21]
Innovation has traditionally also been measured through expenditure, for example investment in R&D (research and development) as a percentage of GNP (gross national product). Whether this is a good measure of innovation has been widely debated, and the Oslo Manual has incorporated some of the criticism of earlier measurement methods. Nevertheless, the traditional methods still inform many policy decisions: the EU's Lisbon Strategy, for instance, set a goal of raising average expenditure on R&D to 3% of GDP.[22]

Indicators

Many scholars claim that measurement is heavily biased towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is widely ignored. For example, an organization may have the best technology or software, yet its innovation still depends on crucial learning tasks that such measurements rarely capture; research on the DUI mode is rarely done.
A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research (CER) is a form of price control which, by reducing returns to industry, limits R&D expenditure, stifles future innovation and compromises new products' access to markets.[23] Some academics counter that CER is a valuable value-based measure of innovation which accords truly significant advances in therapy (those that provide "health gain") higher prices than free-market mechanisms would.[24] Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse.[25] The Australian academic Thomas Alured Faunce has argued that national comparative cost-effectiveness assessment systems should be viewed as measuring "health innovation" as an evidence-based concept distinct from valuing innovation through the operation of competitive markets (a method which requires strong anti-trust laws to be effective), on the basis that both methods of assessing innovation in pharmaceuticals are mentioned in annex 2C.1 of the AUSFTA.[26][27][28]

Measurement indices

Several indices attempt to measure innovation, including:
  • The Innovation Index, developed by the Indiana Business Research Center to measure innovation capacity at the county or regional level in the U.S.[29]
  • The State Technology and Science Index, developed by the Milken Institute, a U.S.-wide benchmark measuring the science and technology capabilities that furnish high-paying jobs.
  • The Oslo Manual, focused on North America, Europe, and other rich economies.
  • The Bogota Manual, similar to the above but focused on Latin America and the Caribbean.
  • The Creative Class, developed by Richard Florida.
  • The Innovation Capacity Index (ICI), published collaboratively by a large group of international professors. The top scorers in the 2009–2010 ICI were: 1. Sweden (82.2); 2. Finland (77.8); 3. United States (77.5).
  • The Global Innovation Index, a global index measuring a country's level of innovation, produced jointly by The Boston Consulting Group (BCG), the National Association of Manufacturers (NAM), and The Manufacturing Institute (MI), the NAM's nonpartisan research affiliate. NAM describes it as the "largest and most comprehensive global index of its kind".
  • The INSEAD Global Innovation Index
  • The INSEAD Innovation Efficacy Index

Global innovation index

This international innovation index is one of many research studies that attempt to rank countries by innovation. Others include the Innovations Indikator, Innovation Union Scoreboard, EIU Innovation Ranking, BCG International Innovation Index, Global Competitiveness Report, World Competitiveness Scoreboard, and ITIF Index. The top three countries across these different indexes are Switzerland, Sweden and Singapore.[30]
The Global Innovation Index looks at both the business outcomes of innovation and governments' ability to encourage and support innovation through public policy. The study comprised a survey of more than 1,000 senior executives from NAM member companies across all industries; in-depth interviews with 30 of the executives; and a comparison of the "innovation friendliness" of 110 countries and all 50 U.S. states. The findings are published in the report "The Innovation Imperative in Manufacturing: How the United States Can Restore Its Edge".[31]
The report discusses not only country performance but also what companies are doing and should be doing to spur innovation. It looks at new policy indicators for innovation, including tax incentives and policies for immigration, education and intellectual property.
The latest index was published in March 2009.[32] To rank the countries, the study measured both innovation inputs and outputs. Innovation inputs included government and fiscal policy, education policy and the innovation environment. Outputs included patents, technology transfer, and other R&D results; business performance, such as labor productivity and total shareholder returns; and the impact of innovation on business migration and economic growth. The following is a list of the twenty largest countries (as measured by GDP) by the International Innovation Index:
Rank  Country          Overall  Innovation Inputs  Innovation Performance
1     South Korea      2.26     1.75               2.55
2     United States    1.80     1.28               2.16
3     Japan            1.79     1.16               2.25
4     Sweden           1.64     1.25               1.88
5     Netherlands      1.55     1.40               1.55
6     Canada           1.42     1.39               1.32
7     United Kingdom   1.42     1.33               1.37
8     Germany          1.12     1.05               1.09
9     France           1.12     1.17               0.96
10    Australia        1.02     0.89               1.05
11    Spain            0.93     0.83               0.95
12    Belgium          0.86     0.85               0.79
13    China            0.73     0.07               1.32
14    Italy            0.21     0.16               0.24
15    India            0.06     0.14               −0.02
16    Russia           −0.09    −0.02              −0.16
17    Mexico           −0.16    0.11               −0.42
18    Turkey           −0.21    0.15               −0.55
19    Indonesia        −0.57    −0.63              −0.46
20    Brazil           −0.59    −0.62              −0.51
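The report does not publish the weighting that combines the two sub-indices into the overall score. A minimal sketch, assuming a plain weighted mean of the standardized sub-scores, would look like this (the weights are an assumption, and the result shows the real method must differ):

```python
def composite(inputs_score, performance_score, w_inputs=0.5, w_perf=0.5):
    """Illustrative composite of two standardized sub-indices.
    The actual index's weighting scheme is an assumption here."""
    return w_inputs * inputs_score + w_perf * performance_score

# With equal weights, the U.S. row (inputs 1.28, performance 2.16)
# yields 1.72 rather than the published overall of 1.80, so the real
# index evidently uses a different weighting or aggregation method.
us = composite(1.28, 2.16)
```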

Government policies

Given its noticeable effects on efficiency, quality of life, and productivity growth, innovation is a key factor in society and the economy. Consequently, policymakers are working to develop environments that will foster innovation and its resulting benefits. For instance, experts have advocated that the U.S. federal government launch a National Infrastructure Foundation: a nimble, collaborative strategic-intervention organization that would gather innovation programs from fragmented silos under one entity, inform federal officials on innovation performance metrics, strengthen industry-university partnerships, and support innovation economic-development initiatives, especially to strengthen regional clusters. Because clusters are the geographic incubators of innovative products and processes, a cluster-development grant program has also been proposed. By focusing innovation on areas such as precision manufacturing, information technology, and clean energy, other areas of national concern could be tackled, including government debt, carbon footprint, and oil dependence.[12] The U.S. Economic Development Administration recognizes this in its continuing Regional Innovation Clusters initiative.[33] The same experts argue that federal R&D grants, a crucial driver of innovation and productivity growth, should be expanded to levels similar to those of Japan, Finland, South Korea, and Switzerland in order to stay globally competitive, and should be better targeted to metropolitan areas, the essential engines of the American economy.[12]
Many countries' governments recognize the importance of research and development and of innovation, including Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT);[34] Germany's Federal Ministry of Education and Research;[35] and the Ministry of Science and Technology of the People's Republic of China. Russia's innovation programme, the Medvedev modernisation programme, aims to create a diversified economy based on high technology and innovation. The Government of Western Australia has established a number of innovation incentives for government departments; Landgate was the first Western Australian government agency to establish an Innovation Program.[36] The Cairns Region established the Tropical Innovation Awards in 2010, open to all businesses in Australia.[37] The 2011 awards were extended to include participants from all tropical-zone countries.

See also

References

  1. ^ Tarde, G. (1903). The laws of imitation (E. Clews Parsons, Trans.). New York: H. Holt & Co.
  2. ^ EuDaly, K, Schafer, M, Boyd, Jim, Jessup, S, McBridge, A, Glischinksi, S. (2009). The Complete Book of North American Railroading. Voyageur Press. 1-352 pgs.
  3. ^ Schumpeter, J. A. (1943). Capitalism, Socialism, and Democracy (6 ed.). Routledge. pp. 81–84. ISBN 0-415-10762-8.
  4. ^ Heyne, P., Boettke, P. J., and Prychitko, D. L. (2010). The Economic Way of Thinking. Prentice Hall, 12th ed. Pp. 163, 317–318.
  5. ^ Gregory Gromov (2011). Silicon Valley History. http://www.netvalley.com/svhistory.html
  6. ^ Salge, T.O. & Vera, A. 2009, Hospital innovativeness and organizational performance, Health Care Management Review, Vol. 34, Issue 1, pp. 54–67.
  7. ^ Perez, T. and Rushing R. (2007). The CitiStat Model: How Data-Driven Government Can Increase Efficiency and Effectiveness. Center for American Progress Report. Pp. 1–18.
  8. ^ Transportation Research Board. (2007). Transit Cooperative Research Program (TCRP) Synthesis 70: Mobile Data Terminals. Pp. 1–5. http://onlinepubs.trb.org/onlinepubs/tcrp/tcrp_syn_70.pdf
  9. ^ Von Hippel, E. (1988). Sources of Innovation. Oxford University Press. The Sources of Innovation
  10. ^ Engelberger, J. F. (1982). Robotics in practice: Future capabilities. Electronic Servicing & Technology magazine.
  11. ^ Kline (1985). Research, Invention, Innovation and Production: Models and Reality, Report INN-1, March 1985, Mechanical Engineering Department, Stanford University.
  12. ^ a b c Mark, M., Katz, B., Rahman, S., and Warren, D. (2008) MetroPolicy: Shaping A New Federal Partnership for a Metropolitan Nation. Brookings Institution: Metropolitan Policy Program Report. Pp. 4–103.
  13. ^ "U-STIR". U-stir.eu. http://www.u-stir.eu/index.phtml?id=2537&ID1=2537&sprache=en. Retrieved 2011-09-07.
  14. ^ Tuomi, I. (2002). Networks of Innovation. Oxford University Press. Networks of Innovation
  15. ^ Siltala, R. (2010). Innovativity and cooperative learning in business life and teaching. University of Turku.
  16. ^ a b Davila, T., Epstein, M. J., and Shelton, R. (2006). "Making Innovation Work: How to Manage It, Measure It, and Profit from It. " Upper Saddle River: Wharton School Publishing.
  17. ^ Khan, A. M (1989). Innovative and Noninnovative Small Firms: Types and Characteristics. Management Science, Vol. 35, no. 5. Pp. 597–606.
  18. ^ O'Sullivan, David (2002). "Framework for Managing Development in the Networked Organisations". Journal of Computers in Industry 47 (1): 77–88.
  19. ^ Rogers, E. M. (1962). Diffusion of Innovation. New York, NY: Free Press.
  20. ^ Davila, Tony; Marc J. Epstein and Robert Shelton (2006). Making Innovation Work: How to Manage It, Measure It, and Profit from It. Upper Saddle River: Wharton School Publishing
  21. ^ OECD The Measurement of Scientific and Technological Activities. Proposed Guidelines for Collecting and Interpreting Technological Innovation Data. Oslo Manual. 2nd edition, DSTI, OECD / European Commission Eurostat, Paris 31 Dec 1995.
  22. ^ "Industrial innovation – Enterprise and Industry". Ec.europa.eu. http://ec.europa.eu/enterprise/policies/innovation/. Retrieved 2011-09-07.
  23. ^ Chalkidou K, Tunis S, Lopert R, Rochaix L, Sawicki PT, Nasser M, Xerri B. Comparative Effectiveness research and Evidence-Based Health Policy: Experience from Four Countries. The Milbank Quarterly 2009; 87(2): 339–367 at 362–363.
  24. ^ Roughead E, Lopert R and Sansom L. Prices for innovative pharmaceutical products that provide health gain: a comparison between Australia and the United States Value in Health 2007;10:514–20
  25. ^ Hughes B. Payers Growing Influence on R&D Decision Making. Nature Reviews Drugs Discovery 2008; 7: 876–78.
  26. ^ Faunce T, Bai J and Nguyen D. Impact of the Australia-US Free Trade Agreement on Australian medicines regulation and prices. Journal of Generic Medicines 2010; 7(1): 18-29
  27. ^ Faunce TA. Global intellectual property protection of “innovative” pharmaceuticals:Challenges for bioethics and health law in B Bennett and G Tomossy (eds) Globalization and Health Springer 2006 http://law.anu.edu.au/StaffUploads/236-Ch%20Globalisation%20and%20Health%20Fau.pdf . Retrieved 18 June 2009.
  28. ^ Faunce TA. Reference pricing for pharmaceuticals: is the Australia-United States Free Trade Agreement affecting Australia's Pharmaceutical Benefits Scheme? Medical Journal of Australia. 2007 Aug 20;187(4):240–2.
  29. ^ "Tools". Statsamerica.org. http://www.statsamerica.org/innovation/data.html. Retrieved 2011-09-07.
  30. ^ "Innovation Indicator 2011". 2011. http://www.innovationsindikator.de/der-innovationsindikator/english-summary/. Retrieved 2012-05-27.
  31. ^ "U.S. Ranks #8 In Global Innovation Index". Industryweek.com. 2009-03-10. http://www.industryweek.com/articles/u-s-_ranks_8_in_global_innovation_index_18638.aspx. Retrieved 2009-08-28.
  32. ^ "The Innovation Imperative in Manufacturing: How the United States Can Restore Its Edge" (PDF). http://www.nam.org/innovationreport.pdf. Retrieved 2009-08-28.
  33. ^ http://www.eda.gov/PDF/EDA_FY_2010_Annual_Report.pdf
  34. ^ "Science and Technology". MEXT. http://www.mext.go.jp/english/a06.htm. Retrieved 2011-09-07.
  35. ^ "BMBF " Ministry". Bmbf.de. http://www.bmbf.de/en/Ministry.php. Retrieved 2011-09-07.
  36. ^ http://www.landgate.wa.gov.au/innovation
  37. ^ http://www.tropicalinnovationawards.com

External links