Wednesday, 4 September 2013

Why you should read the essay, Red Plenty Platforms

 

Michel Bauwens/ P2P Foundation
20th August 2013



Nick Dyer-Witheford, in an extended review of Red Plenty, discusses the renewed possibilities for cybernetic planning. In this first installment, we present the history and current forms of this school of economic thought. It is a very informative and thoughtful essay.
* Article: RED PLENTY PLATFORMS. By Nick Dyer-Witheford. CULTURE MACHINE VOL 14 • 2013
From the introduction, by Nick Dyer-Witheford:
“This paper takes Spufford’s novel as a starting point from which to embark on an examination of the computing platforms that would be necessary for a contemporary ‘red plenty’. It is not a discussion of the merits and demerits of hacktivism, digital disobedience, electronic fabrics of struggle, tweets in the street and Facebook revolutions, but of digital communism. This is a topic that has already been touched on by the wave of rethinking life after capitalism triggered by the 1989 implosion of the USSR, in proposals for ‘participatory economics’ (Albert & Hahnel, 1991), a ‘new socialism’ (Cockshott & Cottrell, 1993), ‘twenty first century socialism’ (Dieterich, 2006), or forms of ‘commonwealth’ (Hardt & Negri, 2009). Unlike some of these sources, however, this essay does not aim to provide detailed, often competitive, ‘blue-prints’ for a new society, but rather what Greig de Peuter, in a personal conversation, once called ‘red-prints’ – approximating orientations to revolutionary possibilities.
In discussing computing and communism it is almost impossible to escape accusations of abandoning struggles and subjects to a machinic determinism. Certainly all automatic, teleological, and evolutionary models, including schematic choreographies of forces and relations of production, should be rejected. Just as important, however, is the avoidance of a contrary humanist determinism, which overstates the autonomy and ontological privilege of ‘man versus machine’. Here, modes of production, and the struggles that convulse them, are understood as combinations of human and machine agents, entangled, hybridized and co-determined Deleuzo-DeLandian ‘assemblages’ (Thorburn, 2013).
That is why the estimate sent to me by Benjamin Peters, historian of Soviet cybernetics, that, compared with the machines available to the planners of Red Plenty in, say, 1969, the processing power of the fastest computer in 2019 will represent ‘roughly a 100,000,000,000 fold increase in operations per second’, is exciting, a factoid that is, as Peters remarks, ‘not itself meaningful but still suggestive’. The argument that follows explores this suggestivity. This article thus looks first at the most direct through-line from Soviet cybernetics: the continuing attempts to theorize forms of economic planning based on labour time algorithms and super-computing. It then discusses how concerns about authoritarian central planning might be affected by social media and software agents, before going on to consider whether planning is redundant in a world of automata, copying and replication. In partial answer to that last question, ‘Red Plenty Platforms’ scans the role of cybernetics in the planetary bio-crisis, concluding with some general observations about cybernetics on today’s ‘communist horizon’ (Dean, 2012).”
* The history of Cybernetic Planning
“If central planning suffered from a calculation problem, why not just solve it with real calculation machines? This was precisely the point made by Hayek’s opponent, the economist Oskar Lange, who, retrospectively reviewing the ‘socialist calculation’ debate, remarked: ‘today my task would be much simpler. My answer to Hayek … would be: so what’s the trouble? Let us put the simultaneous equations on an electronic computer and we shall obtain the solution in less than a second’ (1967: 159). Such was the project of the cyberneticians featured in Red Plenty, a project driven by the realization that the apparently successful Soviet industrial economy, despite its triumphs in the 1940s and ‘50s, was slowly stagnating amidst organizational incoherence and informational bottlenecks.
Their effort depended on a conceptual tool, the input-output table, whose development is associated with two Russian mathematicians: the émigré Wassily Leontief, who worked in the US, and the Soviet Union’s Leonid Kantorovich, the central protagonist of Red Plenty. Input-output tables – which, it was recently discovered, are amongst the intellectual foundations of Google’s PageRank algorithm (Franceschet, 2010) – chart the complex interdependence of a modern economy by showing how outputs from one industry (e.g. steel or cotton) provide inputs for another (say, cars or clothing), so that one can estimate the change in demand resulting from a change in production of final goods. By the 1960s such tables were an accepted instrument of large scale industrial organizations: Leontief’s work played a role in the logistics of the US Air Force’s massive bomber offensive against Germany. However, the complexity of an entire national economy was believed to preclude their application at such a level.
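To make the input-output mechanics concrete, here is a minimal sketch in Python of the calculation such a table supports. The three-sector matrix, demand figures and sector names are invented for illustration and are not drawn from the essay.

```python
# Minimal illustration of a Leontief input-output calculation.
# A[i, j] = units of good i consumed to produce one unit of good j.
import numpy as np

A = np.array([
    [0.10, 0.30, 0.00],   # steel used by steel, cars, clothing
    [0.00, 0.05, 0.00],   # cars used by steel, cars, clothing
    [0.01, 0.02, 0.10],   # cloth used by steel, cars, clothing
])

d = np.array([100.0, 50.0, 200.0])   # final demand for steel, cars, clothing

# Gross output x must satisfy x = A x + d, i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)
print(x)

# How a change in final demand for cars propagates back through the table:
d_new = d + np.array([0.0, 10.0, 0.0])
print(np.linalg.solve(np.eye(3) - A, d_new) - x)
```

The second calculation is the point of the exercise: raising final demand for one good changes the required gross output of every industry that feeds into it.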
Soviet computer scientists set out to surmount this problem. As early as the 1930s, Kantorovich had improved input-output tables with the mathematical method of linear programming that estimated the best, or ‘optimizing’, combination of production techniques to meet a given target. The cyberneticians of the 1960s aimed to implement this breakthrough on a massive scale by establishing a modern computing infrastructure to rapidly carry out the millions of calculations required by Gosplan, the State Board for Planning that oversaw economic five year plans. After a decade of experimentation, their attempt collapsed, frustrated by the pitiful state of the Soviet computer industry – which, being some two decades behind that of the US, missed the personal computer revolution and did not develop an equivalent to the Internet. It was thus utterly inadequate to the task set for it. This, alongside political opposition from a nomenklatura that saw in the new scientific planning method a threat to its bureaucratic power, compelled abandonment of the project (Castells, 2000; Gerovitch, 2008; Peters, 2012).
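As a rough illustration of what Kantorovich-style linear programming does, here is a toy optimisation: choose how long to run two production techniques so that a plan target is met with the least labour. All coefficients, bounds and the target are hypothetical; this is a sketch of the technique, not a reconstruction of any Soviet model.

```python
# Toy linear programme: meet an output target with minimum labour,
# choosing between two production techniques. All numbers are invented.
from scipy.optimize import linprog

output_per_hour = [4.0, 6.0]   # units produced per machine-hour (technique 1, 2)
labour_per_hour = [1.0, 1.8]   # labour hours consumed per machine-hour

target = 120.0                 # plan target in units

# linprog minimises c @ x subject to A_ub @ x <= b_ub;
# "output >= target" is expressed as "-output <= -target".
result = linprog(
    c=labour_per_hour,
    A_ub=[[-output_per_hour[0], -output_per_hour[1]]],
    b_ub=[-target],
    bounds=[(0, 40), (0, 40)],  # each technique limited to 40 machine-hours
)
print(result.x)    # optimal machine-hours on each technique
print(result.fun)  # total labour required
```

Here the cheaper-in-labour technique is used exclusively (30 machine-hours, 30 labour hours); with more goods, techniques and resource constraints, the same formulation scales into exactly the kind of optimisation the planners had in mind.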
This was not the only twentieth century project of ‘cybernetic revolutionaries’; as remarkable was the attempt by Salvador Allende’s Chilean regime to introduce a more decentralized version of electronic planning, ‘Project Cybersyn’ (Medina, 2005). Led by the British cybernetician Stafford Beer, this was conceived as a system of communication and control that would enable the socialist regime to collect economic data, and relay it to government decision makers, even while embedding within its technology safeguards against state micro-management and encouragement for many-sided discussions of planning decisions. This was an attempt at socio-technical engineering of democratic socialism that today perhaps seems more attractive than the post-Stalinist manoeuvres of the Soviet computer planners. But it met an even more brutal fate; Project Cybersyn was extinguished in the Pinochet coup of 1973. In the end the failure of the USSR to adapt to a world of software and networks contributed to its economic/military defeat by the United States. Its disintegration, in which, as Alec Nove (1983) demonstrated, information bottlenecks and reporting falsifications played a major role, seemed to vindicate the Austrian economists. Hayek’s praise of market catallaxy thus became central to the ‘neoliberal thought collective’ (Mirowski, 2009) that led the subsequent victory march of global capitalism.
The combined pressure of the practical disaster of the USSR and the theoretical argument of the Austrian school exerted immense force inside what remained of the left, pressuring it to reduce and reset the limit of radical aspiration to, at most, an economy of collectively owned enterprises coordinated by price signals. The many variants on such ‘market socialist’ proposals have evoked rebuttals from Marxists who refuse to concede to commodity exchange. Perhaps because these rebuttals grant to the market the automatic information-processing functions ascribed to it by the Austrian economists and market socialists, they may address issues of technological innovation or public data availability, yet do not seem to engage deeply with the potentialities of contemporary computing.
Today, post-crash, claims that markets are infallible information machines may seem less credible than they did a quarter of a century ago. The parasitic energy-theft that underlies price-signal transmissions (exploitation at the point of production); the inability of individual commodity exchanges to register collective consequences (the so-called ‘externalities’); and the recursivity of a chrematistic system that loops back on itself in financial speculation, have all become more salient in the midst of global capital’s economic and ecological implosion.” (www.culturemachine.net/index.php/cm/article/view/511/526)
* Contemporary schools of thought and the problem of Labour Algorithms
“Despite the fall of actually-existing socialism, the idea of computerized economic planning continued to be developed by small groups of theorists, who have advanced its conceptual scope further than anything attempted by Soviet cyberneticians. Two schools have been of particular importance: the ‘New Socialism’ of Scottish computer scientists Paul Cockshott and Allin Cottrell (1993); and the German ‘Bremen School’, which includes Arno Peters (2002) and Heinz Dieterich (2006), the latter an advocate of Venezuelan-style ‘Twenty First Century Socialism’. These tendencies have recently converged (Cockshott, Cottrell & Dieterich, 2010). However, because little of the Bremen group’s work is translated, the focus here will be on the New Socialism of Cockshott and Cottrell.
The distinguishing mark of the New Socialist project is its classic Marxist rigor. Accordingly, its twenty-first century super-computer planning follows to the letter the logic of the late nineteenth century Critique of the Gotha Program (Marx, 1970), which famously suggests that at the first, ‘lower’ stage of communism, before conditions of abundance allow ‘to each according to his needs’, remuneration will be determined by the hours of socially necessary labour required to produce goods and services. In the capitalist workplace, workers are paid for the reproduction of the capacity to labour, rather than for the labour actually extracted from them; it is this that enables the capitalist to secure surplus value. The elimination of this state of affairs, Cockshott and Cottrell contend, requires nothing less than the abolition of money—that is, the elimination of the fungible general medium of exchange that, through a series of metamorphoses of money in and out of the commodity form, creates the self-expanding value that is capital. In their New Socialism, work would be remunerated in labour certificates; an hour’s work could be exchanged for goods taking, on a socially average basis, an equivalent time to produce. The certificates would be extinguished in this exchange; they would not circulate, and could not be used for speculation. Because workers would be paid the full social value of their labour, there would be no owner profits, and no capitalists to direct resource allocation. Workers would, however, be taxed to establish a pool of labour-time resources available for social investments made by planning boards whose mandate would be set by democratic decisions on overall social goals.
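The certificate mechanism can be sketched very simply. The tax rate and labour contents below are assumptions chosen for illustration, not figures from Cockshott and Cottrell; the point is only that certificates are credited for hours worked and destroyed when spent, rather than circulating.

```python
# Toy illustration of non-circulating labour certificates: hours worked are
# credited (net of a tax funding social investment), purchases debit the
# labour content of the good, and the certificates are extinguished.
class LabourAccount:
    def __init__(self):
        self.hours = 0.0

    def credit_work(self, hours_worked, tax_rate=0.2):
        """Credit the worker in labour hours, minus an assumed investment tax."""
        self.hours += hours_worked * (1 - tax_rate)

    def purchase(self, labour_content):
        """Exchange certificates for a good; they are destroyed, not passed on."""
        if labour_content > self.hours:
            raise ValueError("insufficient labour credit")
        self.hours -= labour_content   # extinguished: no seller receives them

acct = LabourAccount()
acct.credit_work(8)     # an eight-hour day, taxed at the assumed 20%
acct.purchase(2.5)      # a good embodying 2.5 hours of social labour
print(acct.hours)       # 3.9 hours of credit remaining
```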
Labour time thus provides the ‘objective unit of value’ for the New Socialism (Cockshott & Cottrell 2003: 3). It is at this point that its proponents invoke the capacities of information technology. Such a system would require an enumeration of the labour time expended, both directly and indirectly, in the creation of goods and services, to assess the number of certificates for which these goods and services can be exchanged, and to enable the planning of their production. The basic tool of the input-output table reappears, with special attention to labour time, both as an input necessary for the production of goods, and as an output that itself requires the inputs of training and education. However, here the New Socialists have to confront a basic objection. Since the fall of the USSR it has been conventionally accepted that the scale of information processing attempted by its cyberneticians was simply too large to be feasible. Writing in the 1980s, Nove (1983) suggested that such an effort, involving the production of some twelve million discrete items, would demand an input-output calculation of a complexity impossible to handle even with computers. This claim was repeated in recent discussions of Red Plenty, with critics of central planning suggesting that, even using a contemporary ‘desktop machine’, solving the equations would take ‘roughly a thousand years’ (Shalizi, 2012).
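A worked miniature of the labour-time accounting this requires might look as follows: total (direct plus indirect) labour values are recovered from the same kind of input-output matrix used above. The matrix and labour coefficients are invented; the formulation, not the numbers, is the point.

```python
# Total labour content v per unit of each good satisfies v = v A + l,
# where l is direct labour per unit; equivalently, v (I - A) = l.
import numpy as np

A = np.array([            # A[i, j]: units of good i per unit of good j
    [0.10, 0.30, 0.00],
    [0.00, 0.05, 0.00],
    [0.01, 0.02, 0.10],
])
l = np.array([0.5, 2.0, 0.8])   # direct labour hours per unit of each good

v = np.linalg.solve((np.eye(3) - A).T, l)
print(v)   # hours of socially necessary labour embodied in a unit of each good
```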
Cockshott and Cottrell’s answer involves new tools, both conceptual and technical. The theoretical advances are drawn from branches of computing science that deal with abbreviating the number of discrete steps needed to complete a calculation. Such analysis, they suggest, shows their opponents’ objections are based on ‘pathologically inefficient’ methods (Cockshott, in Shalizi, 2012).
The input-output structure of the economy is, they point out, ‘sparse’—that is to say, only a small fraction of the goods are directly used to produce any other good. Not everything is an input for everything else: yogurt is not used to produce steel. The majority of the equations invoked to suggest insuperable complexity are therefore gratuitous. An algorithm can be designed to short-cut through input-output tables, ignoring blank entries, iteratively repeating the process until it arrives at a result of an acceptable order of accuracy.
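A minimal sketch of that sparsity-exploiting short-cut is given below, assuming a synthetic random sparse matrix scaled so the iteration converges; it is illustrative only and is not Cockshott and Cottrell's published code.

```python
# Iterative solution of x = A x + d on a sparse input-output matrix:
# each pass touches only the non-zero entries and repeats until the
# estimate stabilises to an acceptable accuracy.
import numpy as np
from scipy.sparse import random as sparse_random

n = 10_000                                   # number of distinct products
A = sparse_random(n, n, density=1e-4, format="csr", random_state=0)

# Scale so every row sum is below 1 (a productive economy); this also
# guarantees the fixed-point iteration below converges.
row_sums = np.asarray(abs(A).sum(axis=1)).ravel()
A = A * (0.9 / row_sums.max())

d = np.ones(n)                               # final demand per product

x = d.copy()
for _ in range(1000):
    x_new = A @ x + d                        # blank entries are simply skipped
    if np.max(np.abs(x_new - x)) < 1e-8:     # acceptable order of accuracy
        x = x_new
        break
    x = x_new
print(x[:5])                                 # gross output for the first products
```

Each sweep costs time proportional to the number of non-zero entries, not to the square of the number of products, which is the force of the "sparseness" argument.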
The time would be further reduced by the massive increases in computer processing speed yielded by Moore’s Law. Suggesting that high-level economic planning would be done on a ‘desktop machine’ is disingenuous: the issue is supercomputing capacity. According to an email communication from Benjamin Peters, in 1969, the time of Red Plenty, the ‘undisputed workhorse’ of the Soviet information economy was the BESM-6 (‘bol’shaya electronicheskaya schetnaya mashina’ – literally the ‘large/major electronic calculating machine’), which could perform some 800,000 floating-point operations per second (flops) – that is, just under a megaflop, on the order of 10^6 flops. By 2013, however, supercomputers used in climate modelling, material testing and astronomical calculations commonly exceed ten quadrillion flops, or ten ‘petaflops’ (a petaflop being 10^15 flops). The holder of the crown at the time of writing is Cray’s Titan at the Oak Ridge National Laboratory, achieving some 17.6 petaflops (Wikipedia, 2013). Supercomputers with an ‘exaflop’ capacity (10^18 flops) are predicted from China by 2019 (Dorrier, 2012). Thus, as Peters (2013) says, ‘giving the Soviets a bit generously 10^7 flops in 1969, we can find (10^18 / 10^7 = 10^11) . . . a 100,000,000,000 fold increase’ by today.
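The arithmetic behind that figure is just a ratio of orders of magnitude, using the two estimates quoted above:

```python
# Fold increase in operations per second, using the figures quoted from Peters.
besm6_flops = 1e7        # generous estimate for Soviet computing circa 1969
exascale_flops = 1e18    # projected exaflop machine circa 2019
print(exascale_flops / besm6_flops)   # 1e11: a 100,000,000,000-fold increase
```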
With these capacities, Cockshott and Cottrell’s suggestion that the computer requirements for large scale economic planning could be handled by facilities comparable to those now used for meteorological purposes, seems at least plausible. The ‘calculation problem’, however, involves not just data processing but the actual availability of data; Hayek’s claim was not merely that central planners cannot crunch economic numbers fast enough, but that the numbers in a sense do not exist prior to price setting, which provides an otherwise absent measure of production performance and consumption activity. Again, Cockshott and Cottrell suggest the answer lies in computers being used as a means of harvesting economic information. Writing in the early 1990s, and invoking levels of network infrastructure available in Britain at the time, they suggest a coordinating system consisting of a few personal computers in each production unit, using standard programming packages, would process local production data and send it by ‘telex’ to a central planning facility, which every twenty minutes or so would send out a radio broadcast of adjusted statistical data to be input at local levels.
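The reporting-and-broadcast cycle they describe can be sketched schematically. The message format and the adjustment rule below are invented stand-ins, not Cockshott and Cottrell's actual protocol; the sketch only shows the shape of the feedback loop.

```python
# Schematic feedback loop: production units report, the centre aggregates
# and broadcasts adjusted figures on a short cycle. Entirely illustrative.
import random

def local_report(unit_id, target):
    """Each production unit reports actual output against its current target."""
    actual = target * random.uniform(0.8, 1.1)   # simulated performance
    return {"unit": unit_id, "target": target, "actual": actual}

def central_broadcast(reports):
    """Centre rescales targets so the overall plan total is preserved,
    standing in for the 'adjusted statistical data' sent back out."""
    total_target = sum(r["target"] for r in reports)
    total_actual = sum(r["actual"] for r in reports)
    scale = total_target / total_actual
    return {r["unit"]: r["actual"] * scale for r in reports}

targets = {unit: 100.0 for unit in range(5)}     # five hypothetical units
for cycle in range(3):                           # three ~20-minute cycles
    reports = [local_report(u, t) for u, t in targets.items()]
    targets = central_broadcast(reports)
    print(cycle, [round(t, 1) for t in targets.values()])
```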
This is a scenario too reminiscent of the ramshackle techno-futurism of Terry Gilliam’s Brazil. To bring the New Socialists up to date we should instead refer to Fredric Jameson’s iconoclastic vision of Wal-Mart as ‘the shape of a Utopian future looming through the mist’ (2009: 423). His point is that, if one for a moment ignores the gross exploitation of workers and suppliers, Wal-Mart is an entity whose colossal organizational powers model the planned processes necessary to raise global standards of living. And as Jameson recognizes, and other authors document in detail (Lichtenstein, 2006), this power rests on computers, networks and information. By the mid-2000s Wal-Mart’s data centers were actively tracking over 680 million distinct products per week and over 20 million customer transactions per day, facilitated by a computer system second in capacity only to that of the Pentagon. Barcode scanners and point of sale computer systems identify each item sold, and store this information. Satellite telecommunications link directly from stores to the central computer system, and from that system to the computers of suppliers, to allow automatic reordering. The company’s early adoption of Universal Product Codes had led to a ‘higher stage’ requirement for Radio Frequency Identification (RFID) tags in all products to enable tracking of commodities, workers and consumers within and beyond its global supply chain.”
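The barcode-to-reorder chain described there reduces to a few lines of logic. The product codes, threshold and order quantity below are invented, and this is of course a sketch of the general pattern, not Wal-Mart's actual system.

```python
# Bare-bones sketch of point-of-sale data driving automatic reordering.
inventory = {"0012345678905": 40, "0098765432109": 24}   # UPC -> units in stock
REORDER_POINT = 20
REORDER_QTY = 100

def send_order_to_supplier(upc, qty):
    # In the system described, this message would travel via the central
    # computers to the supplier; here it is simply logged.
    print(f"reorder {qty} units of {upc}")

def record_sale(upc, qty=1):
    """A barcode scan at the till decrements stock and may trigger a reorder."""
    inventory[upc] -= qty
    if inventory[upc] <= REORDER_POINT:
        send_order_to_supplier(upc, REORDER_QTY)

record_sale("0098765432109", 5)   # stock falls to 19 -> automatic reorder fires
```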




Blogger Ref Link http://www.p2pfoundation.net/Transfinancial_Economics



 
