Exploring mainly heterodox economics, monetary reform, environmental sustainability, and climate change. It is a resource of internet articles, and it also promotes awareness of a futuristic, universal paradigm known as TFE, or Transfinancial Economics, which it presents as probably the most advanced and most "scientific" form of economics in the world.
Algorithms of Capitalism is the new book curated by Matteo Pasquinelli. This link directs the reader to the Italian version of this very exciting volume, which brings together the Accelerationist Manifesto, some reactions to it, and some important reflections, relevant to what Toni Negri calls 'Accelerationist politics', that can be drawn from the manifesto. Most of the articles are already online in English as well. Here I collect some of those:
#Accelerate: Manifesto for an Accelerationist Politics by Alex Williams and Nick Srnicek
Reflections on the Manifesto for an Accelerationist Politics by Toni Negri
Matteo Pasquinelli: “To Anticipate and Accelerate: Italian Operaismo and Reading Marx’s Notion of Organic Composition of Capital”, Rethinking Marxism journal, vol. 26, n. 2, 2014.
Another intriguing piece from Pasquinelli: "The Power of Abstraction and Its Antagonism. On Some Problems Common to Contemporary Neuroscience and the Theory of Cognitive Capitalism", Psychopathologies of Cognitive Capitalism, Part 2. Berlin: Archive Books, 2014. Red stack attack! Algorithms, capital and the automation of the common by Tiziana Terranova:
On this blog [http://syntheticedifice.wordpress.com/] it is also possible to follow other reactions and relevant discussion around Accelerationism. Below, for instance, is a friendly but undermining critique by McKenzie Wark, taken from there:
#Celerity: A Critique of the Manifesto for an Accelerationist Politics
To be honest, the #Accelerationist manifesto also sounds to me, after the first several readings, like a call for Kautskyan and/or Plekhanovian politics 2.0. It suggests that to break down global networked cognitive savage capitalism we must lead it into a future trap by accelerating it until it breaks!
Although this sounds like an excitingly good-crazy idea, I have to step on the brake. It has always been hard for me to accept the idea that, in order to negate and transcend something bad, you first need to let it get worse and worse, faster and faster, ever more violent... and then, bam! Even though the political imagery says 'push it harder towards the cliff', a scary question remains: what if it still survives? It is great and energizing to hear anger and hope formulated in such an intelligent way, crying that time is up and we need to do away with this maniacal system as soon as possible. However, I find myself sympathizing with Wark's strong and friendly criticism, which suggests: 'OK, let's get rid of it, but not by accelerating it; by hacking it, now!' I believe that Wark is right. There exist other ways to hack the capitalist mode of production instead of making it run faster.
Agreeing with many others who think the global working class is currently making itself through ongoing and intensifying struggles, I would formulate a good hack as a bottom-up class project, surely one part of a wider free, libre and open-source codebase whose algorithms are currently being written:
“The seed form of the self-organisation of the global working classes needs to be simultaneously well grounded, transnational, and global. It also needs to be open, free/gratis and accessible to all working people, so that they can freely enter and leave it. As modularly integrated organized networks, it should aim at and be capable of linking industrial, digital and informational workers (hacker-, academic-, art-, sex-, domestic-, immigrant- ... all fragments of the working classes), as well as social, environmental, cultural, informational and sexual justice activists. Adoptable principles and protocols, in the form of a 'code' that can be pre-determined, have to be well documented, open and accessible, as does the coding process itself, to local, workplace, neighbourhood, issue-based, activist and other forms of political collectives. It should operate similarly to Anonymous, 15M, Occupy, Gezi and other decentralized forms, yet be based on more advanced and structured working protocols, closer to FLOSS projects, grassroots organisations and worker-owned cooperatives. It should not include membership, service or representation logics, which in the end lead to the reproduction of disempowerment for the nodes involved, creating clientelism. Such a form should not be organized by professional intellectuals and activists from the outside in, and from the top down towards working people. From the opposite perspective, it should be an open design process led by volunteer participation, based on self-governing and representation principles.
It should be able to put forward creative, assertive and effective direct non-violent mass action, which makes fun of and ridicules its target by allowing the formation of collective intelligence. Active peer-to-peer self-learning protocols and praxis should be at the core of cultural production and re-creation, beyond the straitjacket put on the working 'class'. Instead of having teachers who must show the right and enlightened road to candidate working-class members who need to acquire self-consciousness, a global and networked labour union should provide working people with access to the tools, resources and key networks that make self-empowerment easily possible, by linking spaces where continuous open exchanges take place and carrying the energy from one space to another. Utilizing how-to(s), Do-It-Yourself and Do-It-With-Others guides, in online and real-world contexts, and FLOSS communication tools as well as mass-action tactics, it would replace the top-down (issue-anger-action) organizing model, allowing self-articulation and a respectful, collaborative working praxis, harmonized through peer-to-peer digital communication where possible and desirable, as well as face-to-face and secure meetings and cultural and recreational events. It should collaborate with other organizations and with creative and productive projects that undermine the capitalist mode of production and develop the algorithms and codes of alternative modes, as operating systems that could replace capitalism. Such a global network needs to grow by linking existing radical networks and groups of activists, hackers, organizers, makers, DIY groups, squatters, eco-villages, diggers, immigrants, asylum seekers, solidarity networks, and so on. In this way all nodes could associate through globally networked ties while keeping their autonomy. Instead of #Accelerating capitalism, a better motto to spread might be:
“All empower one, one empower all!”
The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives. So should we be more wary of their power?
The financial sector has long used algorithms to predict market fluctuations, but they can also help police identify crime hot spots or online shops target their customers.
On 4 August 2005, the police department of Memphis, Tennessee, made so many arrests over a three-hour period that it ran out of vehicles to transport the detainees to jail. Three days later, 1,200 people had been arrested across the city – a new police department record. Operation Blue Crush was hailed a huge success.
Larry Godwin, the city's new police director, quickly rolled out the scheme and by 2011 crime across the city had fallen by 24%. When it was revealed Blue Crush faced budget cuts earlier this year, there was public outcry. "Crush" policing is now perceived to be so successful that it has reportedly been mimicked across the globe, including in countries such as Poland and Israel. In 2010, it was reported that two police forces in the UK were using it, but their identities were not revealed.
Crush stands for "Criminal Reduction Utilising Statistical History". Translated, it means predictive policing. Or, more accurately, police officers guided by algorithms. A team of criminologists and data scientists at the University of Memphis first developed the technique using IBM predictive analytics software. Put simply, they compiled crime statistics from across the city over time and overlaid them with other datasets – social housing maps, outside temperatures etc – then instructed algorithms to search for correlations in the data to identify crime "hot spots". The police then flooded those areas with highly targeted patrols.
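In outline, the hot-spot step lends itself to a few lines of code. The sketch below is emphatically not the Crush system itself: the grid cells, incident data and ranking rule are invented for illustration, and the real software overlays many more datasets.

```python
# A minimal sketch (not the actual Crush system) of hot-spot ranking:
# aggregate historical incidents by grid cell and hour of day, then
# return the busiest cells for a given hour. All data are invented.
from collections import Counter

# (grid_cell, hour_of_day) for each historical incident -- toy data
incidents = [("A1", 22), ("A1", 22), ("B2", 9), ("A1", 23), ("B2", 10), ("A1", 22)]

by_cell_hour = Counter(incidents)

def hot_spots(hour, top=1):
    """Return the cells with the most historical incidents at this hour."""
    ranked = sorted(((n, cell) for (cell, h), n in by_cell_hour.items() if h == hour),
                    reverse=True)
    return [cell for n, cell in ranked[:top]]

print(hot_spots(22))  # -> ['A1']: concentrate patrols here at 10pm
```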
"It's putting the right people in the right places on the right day at the right time," said Dr Richard Janikowski, an associate professor in the department of criminology and criminal justice at the University of Memphis, when the scheme launched. But not everyone is comfortable with the idea. Some critics have dubbed it "Minority Report" policing, in reference to the sci-fi film in which psychics are used to guide a "PreCrime" police unit.
The use of algorithms in policing is one example of their increasing influence on our lives. And, as their ubiquity spreads, so too does the debate around whether we should allow ourselves to become so reliant on them – and who, if anyone, is policing their use. Such concerns were sharpened further by the continuing revelations about how the US National Security Agency (NSA) has been using algorithms to help it interpret the colossal amounts of data it has collected from its covert dragnet of international telecommunications.
"For datasets the size of those the NSA collect, using algorithms is the only way to operate for certain tasks," says James Ball, the Guardian's data editor and part of the paper's NSA Files reporting team. "The problem is how the rules are set: it's impossible to do this perfectly. If you're, say, looking for terrorists, you're looking for something very rare. Set your rules too tight and you'll miss lots of, probably most, potential terror suspects. But set them more broadly and you'll drag lots of entirely blameless people into your dragnet, who will then face further intrusion or even formal investigation. We don't know exactly how the NSA or GCHQ use algorithms – or how extensively they're applied. But we do know they use them, including on the huge data trawls revealed in the Guardian."
From dating websites and City trading floors, through to online retailing and internet searches (Google's search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures. "Bank approvals, store cards, job matches and more all run on similar principles," says Ball. "The algorithm is the god from the machine powering them all, for good or ill."

Most observers blame the "flash crash" of May 2010 on the use of algorithms to perform high-frequency trading.

But what is an algorithm? Dr Panos Parpas, a lecturer in the quantitative analysis and decision science ("quads") section of the department of computing at Imperial College London, says that wherever we use computers, we rely on algorithms: "There are lots of types, but algorithms, explained simply, follow a series of instructions to solve a problem. It's a bit like how a recipe helps you to bake a cake. Instead of having generic flour or a generic oven temperature, the algorithm will try a range of variations to produce the best cake possible from the options and permutations available."
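Parpas's cake analogy can be made concrete with a toy search. The sketch below is purely illustrative: the "cake score" function and its optimum are invented, but the structure (try every variation, keep the best) is the point.

```python
# Toy illustration of Parpas's recipe analogy: try a range of variations
# and keep the best. The scoring function is invented for illustration.
from itertools import product

def cake_score(flour_g, oven_c):
    # Pretend quality peaks at 500 g of flour and 180 degrees C.
    return -abs(flour_g - 500) - 2 * abs(oven_c - 180)

options = product(range(400, 601, 50), range(160, 221, 20))
best = max(options, key=lambda opt: cake_score(*opt))
print(best)  # -> (500, 180): the best cake among the tried permutations
```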
Parpas stresses that algorithms are not a new phenomenon: "They've been used for decades – back to Alan Turing and the codebreakers, and beyond – but the current interest in them is due to the vast amounts of data now being generated and the need to process and understand it. They are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It's also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn't blame our tools."
The "mistakes" Parpas refers to are events such as the "flash crash" of 6 May 2010, when the Dow Jones industrial average fell 1,000 points in just a few minutes, only to see the market regain itself 20 minutes later. The reasons for the sudden plummet has never been fully explained, but most financial observers blame a "race to the bottom" by the competing quantitative trading (quants) algorithms widely used to perform high-frequency trading. Scott Patterson, a Wall Street Journal reporter and author of The Quants, likens the use of algorithms on trading floors to flying a plane on autopilot. The vast majority of trades these days are performed by algorithms, but when things go wrong, as happened during the flash crash, humans can intervene.
"By far the most complicated algorithms are to be found in science, where they are used to design new drugs or model the climate," says Parpas. "But they are done within a controlled environment with clean data. It is easy to see if there is a bug in the algorithm. The difficulties come when they are used in the social sciences and financial trading, where there is less understanding of what the model and output should be, and where they are operating in a more dynamic environment. Scientists will take years to validate their algorithm, whereas a trader has just days to do so in a volatile environment."
Most investment banks now have a team of computer science PhDs coding algorithms, says Parpas, who used to work on such a team. "With City trading, everyone is running very similar algorithms," he says. "They all follow each other, meaning you get results such as the flash crash. They use them to speed up the process and to break up big trades to disguise them from competitors when a big investment is being made. It's an ongoing, live process. They will run new algorithms for a few days to test them before letting them loose with real money. In currency trading, an algorithm lasts for about two weeks before it is stopped because it is surpassed by a new one. In equities, which is a less complicated market, they will run for a few months before a new one replaces them. It takes a day or two to write a currency algorithm. It's hard to find out information about them because, for understandable reasons, they don't like to advertise when they are successful. Goldman Sachs, though, has a strong reputation across the investment banks for having a brilliant team of algorithm scientists. PhD students in this field are usually employed within a few months by an investment bank."
The idea that the world's financial markets – and, hence, the wellbeing of our pensions, shareholdings, savings etc – are now largely determined by algorithmic vagaries is unsettling enough for some. But, as the NSA revelations exposed, the bigger questions surrounding algorithms centre on governance and privacy. How are they being used to access and interpret "our" data? And by whom?
Dr Ian Brown, the associate director of Oxford University's Cyber Security Centre, says we all urgently need to consider the implications of allowing commercial interests and governments to use algorithms to analyse our habits: "Most of us assume that 'big data' is munificent. The laws in the US and UK say that much of this [the NSA revelations] is allowed, it's just that most people don't realise yet. But there is a big question about oversight. We now spend so much of our time online that we are creating huge data-mining opportunities."

Algorithms can run the risk of linking some racial groups to particular crimes.

Brown says that algorithms are now programmed to look for "indirect, non-obvious" correlations in data. "For example, in the US, healthcare companies can now make assessments about a good or bad insurance risk based, in part, on the distance you commute to work," he says. "They will identify the low-risk people and market their policies at them. Over time, this creates or exacerbates societal divides. Professor Oscar Gandy, at the University of Pennsylvania, has done research into 'secondary racial discrimination', whereby credit and health insurance, which relies greatly on postcodes, can discriminate against racial groups because they happen to live very close to other racial groups that score badly."
Brown harbours similar concerns over the use of algorithms to aid policing, as seen in Memphis where Crush's algorithms have reportedly linked some racial groups to particular crimes: "If you have a group that is disproportionately stopped by the police, such tactics could just magnify the perception they have of being targeted."
Viktor Mayer-Schönberger, professor of internet governance and regulation at the Oxford Internet Institute, also warns against humans seeing causation when an algorithm identifies a correlation in vast swaths of data. "This transformation presents an entirely new menace: penalties based on propensities," he writes in his new book, Big Data: A Revolution That Will Transform How We Live, Work and Think, which is co-authored by Kenneth Cukier, the Economist's data editor. "That is the possibility of using big-data predictions about people to judge and punish them even before they've acted. Doing this negates ideas of fairness, justice and free will. In addition to privacy and propensity, there is a third danger. We risk falling victim to a dictatorship of data, whereby we fetishise the information, the output of our analyses, and end up misusing it. Handled responsibly, big data is a useful tool of rational decision-making. Wielded unwisely, it can become an instrument of the powerful, who may turn it into a source of repression, either by simply frustrating customers and employees or, worse, by harming citizens."
Mayer-Schönberger presents two very different real-life scenarios to illustrate how algorithms are being used. First, he explains how the analytics team working for US retailer Target can now calculate whether a woman is pregnant and, if so, when she is due to give birth: "They noticed that these women bought lots of unscented lotion at around the third month of pregnancy, and that a few weeks later they tended to purchase supplements such as magnesium, calcium and zinc. The team ultimately uncovered around two dozen products that, used as proxies, enabled the company to calculate a 'pregnancy prediction' score for every customer who paid with a credit card or used a loyalty card or mailed coupons. The correlations even let the retailer estimate the due date within a narrow range, so it could send relevant coupons for each stage of the pregnancy."
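In code, such a proxy score can be as simple as a weighted sum over a basket of tell-tale products. The sketch below is a guess at the general shape only: Target's actual product list, weights and model are not public, so every value here is invented.

```python
# A hedged sketch of proxy-product scoring as described above: sum weighted
# evidence from a customer's purchases. Products and weights are invented;
# the retailer's real model is proprietary and far more sophisticated.
PROXY_WEIGHTS = {
    "unscented_lotion": 0.4,
    "magnesium_supplement": 0.3,
    "calcium_supplement": 0.3,
    "zinc_supplement": 0.2,
}

def pregnancy_score(purchases):
    """Crude propensity score from a customer's purchase history."""
    return sum(PROXY_WEIGHTS.get(item, 0.0) for item in set(purchases))

history = ["unscented_lotion", "magnesium_supplement", "bread"]
print(round(pregnancy_score(history), 2))  # -> 0.7
```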
Harmless targeting, some might argue. But what happens when, as has already reportedly occurred, a father discovers discount vouchers for baby goods addressed to his teenage daughter, whom a retailer has identified as pregnant before her own father knows?
Mayer-Schönberger's second example of the reliance upon algorithms throws up even more potential dilemmas and pitfalls: "Parole boards in more than half of all US states use predictions founded on data analysis as a factor in deciding whether to release somebody from prison or to keep him incarcerated."

Norah Jones: a specially developed algorithm predicted that her debut album contained a disproportionately high number of hit records.

Christopher Steiner, author of Automate This: How Algorithms Came to Rule Our World, has identified a wide range of instances where algorithms are being used to provide predictive insights – often within the creative industries. In his book, he tells the story of a website developer called Mike McCready, who has developed an algorithm to analyse and rate hit records. Using a technique called advanced spectral deconvolution, the algorithm breaks up each hit song into its component parts – melody, tempo, chord progression and so on – and then uses that to determine common characteristics across a range of No 1 records. McCready's algorithm correctly predicted – before they were even released – that the debut albums by both Norah Jones and Maroon 5 contained a disproportionately high number of hit records.
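A drastically simplified version of this idea is to represent each song as a handful of extracted features and score it by closeness to the average profile of past No 1 hits. The sketch below is a toy stand-in for McCready's system: the feature names, values and hit profile are invented, and the real algorithm's spectral deconvolution step is not modelled.

```python
# Toy sketch of hit rating: score a song by its distance to an (invented)
# average feature profile of past No 1 records. Higher score = closer fit.
import math

HIT_PROFILE = {"tempo_bpm": 120.0, "chord_changes_per_min": 8.0, "melodic_range": 12.0}

def hit_score(song):
    """Map distance from the hit profile into (0, 1]; 1.0 is a perfect match."""
    dist = math.sqrt(sum((song[k] - v) ** 2 for k, v in HIT_PROFILE.items()))
    return 1.0 / (1.0 + dist)

print(hit_score({"tempo_bpm": 118.0, "chord_changes_per_min": 7.0, "melodic_range": 12.0}))
```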
The next logical step – for profit-seeking record companies, perhaps – is to use algorithms to replace the human songwriter. But is that really an attractive proposition? "Algorithms are not yet writing pop music," says Steiner. He pauses, then laughs. "Not that we know of, anyway. If I were a record company executive or pop artist, I wouldn't tell anyone if I'd had a number one written by an algorithm."
Steiner argues that we should not automatically see algorithms as a malign influence on our lives, but we should debate their ubiquity and their wide range of uses. "We're already halfway towards a world where algorithms run nearly everything. As their power intensifies, wealth will concentrate towards them. They will ensure the 1%-99% divide gets larger. If you're not part of the class attached to algorithms, then you will struggle. The reason why there is no popular outrage about Wall Street being run by algorithms is because most people don't yet know or understand it."
But Steiner says we should welcome their use when they are used appropriately to aid and speed our lives. "Retail algorithms don't scare me," he says. "I find it useful when Amazon tells me what I might like. In the US, we know we will not have enough GP doctors in 15 years, as not enough are being trained. But algorithms can replace many of their tasks. Pharmacists are already seeing some of their prescribing tasks replaced by algorithms. Algorithms might actually start to create new, mundane jobs for humans. For example, algorithms will still need a human to collect blood and urine samples for them to analyse."
There can be a fine line, though, between "good" and "bad" algorithms, he adds: "I don't find the NSA revelations particularly scary. At the moment, they just hold the data. Even the best data scientists would struggle to know what to do with all that data. But it's the next step that we need to keep an eye on. They could really screw up someone's life with a false prediction about what they might be up to."
Book: Algorithms of Capital. Ed./curated by Matteo Pasquinelli.
URL = http://matteopasquinelli.com/algorithms-of-capital/
The book is based on Accelerationist politics. The above link is to the Italian version, but most of the articles are already online in English.
Discussion
Orsan Senalp:
“Here is how I would formulate a part of the code that has yet to be put into algorithms:
I agree with many others who think the global working class is currently making itself through ongoing and intensifying struggles. In my opinion, any form of self-organisation of the global working classes needs to be simultaneously very well grounded, transnational, and global. It also needs to be open to all working people, free [gratis and accessible] to enter and leave, and designed as modularly integrated organized networks linking workers [including hacker-, academic-, art-, sex- and so on workers] and social-environmental-cultural-informational-sexual justice activists.
Adoptable principles, in the form of a 'code' that can be pre-determined, as well as the coding process itself, need to be very well documented, totally open and accessible to local, workplace, neighbourhood, issue-based, activist or other forms of political collectives; in a way similar to Anonymous, 15M, Occupy, Gezi or other decentralized forms, but with more structured and open working protocols, as in FLOSS projects or grassroots and worker cooperatives. It should not include membership, service or representation logics, which lead to the reproduction of disempowerment for the nodes involved. It should not be organized by intellectual activists from the outside in, and from the top down towards the working class. From the opposite perspective, it should be designed by volunteer participation based on self-governing and representation principles. It should be able to put forward creative, assertive and effective direct non-violent mass action, which makes fun of and ridicules its target by allowing the formation of collective intelligence. Therefore active peer-to-peer self-learning should be the core cultural production and learning principle. Instead of having teachers who must show the right and enlightened road to candidate working-class members who need to acquire self-consciousness, a global and networked labour union should provide working people with access to the tools, resources and key networks that make self-empowerment easily possible, by linking spaces where continuous open exchanges take place and carrying the energy from one space to another. Utilizing how-to(s), Do-It-Yourself and Do-It-With-Others guides, in online and real-world contexts, and FLOSS communication tools as well as mass-action tactics, it would replace the top-down (issue-anger-action) organizing model, allowing self-articulation and a respectful, collaborative working praxis, harmonized through peer-to-peer digital communication where possible and desirable, as well as face-to-face and secure meetings and cultural and recreational events. It should collaborate with other organizations and with creative and productive projects that undermine the capitalist mode of production and develop the algorithms and codes of alternative modes, as operating systems that could replace capitalism. Such a global network needs to grow by linking existing radical networks and groups of activists, hackers, organizers, makers, DIY groups, squatters, eco-villages, diggers, immigrants, asylum seekers, solidarity networks, and so on. In this way all nodes could associate through globally networked ties while keeping their autonomy.
So instead of the #Accelerate motto, I would suggest something like: “All empower one, one empower all!”
Contents
Links to English versions:
Accelerate: Manifesto for an Accelerationist Politics by Alex Williams and Nick Srnicek
“To Anticipate and Accelerate: Italian Operaismo and Reading Marx’s Notion of Organic Composition of Capital”, Rethinking Marxism journal, vol. 26, n. 2, 2014.
“The Power of Abstraction and Its Antagonism. On Some Problems Common to Contemporary Neuroscience and the Theory of Cognitive Capitalism”, Psychopathologies of Cognitive Capitalism, Part 2. Berlin: Archive Books, 2014: http://matteopasquinelli.com/power-of-abstraction/
Red stack attack! Algorithms, capital and the automation of the common by Tiziana Terranova
In early 1965 the Publishers of Political Literature (Prague) published a book by Pavel Pelikan and Oldrich Kyn, Kybernetika v ekonomii (Cybernetics in economics). We have asked the authors to give a brief summary of the book's contents. – The editors
During the last two years some Czechoslovak economists have contemplated the possibility of applying methods of Cybernetics in economics. This was an outcome of ongoing attempts to improve the system of planning and economic management. The need to develop a scientifically founded theory of economic control has naturally brought attention to problems of information, decision-making, automatic regulation, control and organization in economic systems.
As yet there is no agreement about the nature of Cybernetics itself, and even less about the application of Cybernetics in the economy. Some people put the main emphasis on mathematical models and the techniques of transmitting and processing economic information. We believe that Cybernetics in economics means primarily a specific approach to economic problems, in which the objects of investigation are not physical processes, but the exchange of information and decision-making. That is why the exposition in the book places stress on problems of organization and control in economic systems. The book is presented in a popular way and does not require any great knowledge of mathematics, even though the authors are convinced that Cybernetics has a great ally in mathematics.
The first chapter gives a summary account of basic concepts that may not be specific to the cybernetic view of reality, but are useful for the later exposition. These are such concepts as object, system, input, output, structure, behavior (deterministic, stochastic), isomorphism, homomorphism, models, etc. This chapter also introduces the ideas of 'analysis' and 'synthesis' of systems and gives examples of the behavior of very simple 'feedback' systems. The concepts of 'stability' and 'equilibrium' are also introduced.
We considered it useful to differentiate consistently throughout the book between the concepts 'object' and 'system'. By object we mean the actually existing thing, or a collection of existing elements connected by mutual interactions. A system, on the other hand, represents a certain abstraction of the actual object. It originates when we define which elements we include in the system, their properties and their mutual relationships. When we use a system in place of an object, we always reduce the number of elements, properties, and relations, so that the system is never a full picture of the object. Of course, this also means that systems never include full information about the objects. Depending on the way the system is reduced, the conclusions derived by the analysis of the system can be more or less adequate to reality. This is important for conclusions that are derived from theoretical models of reality.
Economics is not just a descriptive science; it attempts to find the nature and logic of economic processes. One of its most important tasks is to predict the future states of economic objects. But it is never possible to draw conclusions from direct observation only. They must be made by deduction from the behavior of economic models. The model, however, is always a simplification of reality and, therefore, deductions about future developments derived from it can only approximate the actual development.
The conclusions drawn by Marx, Ricardo and Marshall differ considerably, although they were concerned with the same real object, for they were deduced from different theoretical models. In all three cases the conclusions were deduced quite logically and convincingly, but the correctness of each depends on how the theoretical model corresponded to reality. The actual development of capitalist economy seemed to confirm Marx's prediction better than those of Ricardo and Marshall. This shows that Marx's model may have been more adequate for solving this question. Of course this does not mean that Marx's model was equally adequate for solving all questions. It is quite possible that Marshall's model might be more adequate than Marx's for the analysis of demand.
A model has always a specific purpose. It is set to solve certain problems. The fact that it is adequate to the solution of some problem does not prove that it is adequate for all.
The second chapter gives an historical account of the origin of Cybernetics, of the viewpoint of cybernetic investigation, and of the relation of Cybernetics to other sciences, especially economics. The third chapter is called 'Information and the Decision-Making Process'. Besides introducing the concept of information and explaining problems of measuring the amount of information, attention is devoted here primarily to the algorithm and the program. An algorithm is an exact procedure that, in a finite number of steps consisting of given elementary logical or mathematical operations, leads from the information received to the decision. The notion of algorithm, however, can be somewhat generalized if we abandon the requirement of finiteness and include random choice, according to a given distribution of probabilities, among the basic operations. Then we can consider the method of trial and error also to be an algorithm.
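This generalized notion is easy to make concrete: admit random choice among the elementary operations, and trial and error itself becomes an algorithm. A minimal sketch follows; the target function, bounds and tolerance are arbitrary choices for illustration, not taken from the book.

```python
# Trial and error as a (generalized) algorithm: random choice among the
# basic operations, keeping the best candidate found so far.
import random

def trial_and_error(f, tol=1e-3, trials=100_000, seed=0):
    """Randomly search for x with |f(x)| < tol; return the best x found."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(trials):
        x = rng.uniform(-10, 10)   # the random choice that the generalization admits
        err = abs(f(x))
        if err < best_err:
            best_x, best_err = x, err
        if best_err < tol:
            break
    return best_x

print(trial_and_error(lambda x: x * x - 2))  # approximately +/- sqrt(2)
```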
The algorithms that govern decision-making in the human brain are very complicated. Some of them people know and can describe. But it is important to realize that people cannot describe all the algorithms they use in practice. Similarly, one can describe some algorithms of economic decision-making (see, for example, the optimization tools of operational research), but one can never describe all of them fully. Economic decision-making is all the more difficult because frequently complete information about the situation cannot be obtained. In such situations it is necessary to guess the probability of success as well as the risk of losses. It is sometimes difficult even to determine which of the possible outcomes is the best. For example, the optimal state of the economy can be defined as the state that maximizes the utility of all the people in society, but how to measure this maximum is problematic, as theories of the social welfare function indicate. Many people may have a clear idea of what is good and useful, but find it difficult to formulate it precisely. Individual preferences may be incompatible, and they may change over time as a result of human experience. For this reason, the human factor cannot be eliminated from economic decisions. Economic decision-making is not only a matter of knowledge that can be learned; it is also an art based on intuition.
Real economies are made up of people and for people with all their human qualities and weaknesses. An economic system cannot be designed without regard to man. Therefore the attempts to determine economic value and utility in the same impersonal way as quantities of matter and energy in physics, cannot succeed. The measurement of value and utility is inseparable from man with all his irrational aspects.
A system that makes decisions must contain elements capable of performing a certain group of basic operations, and it must also contain within itself information about the algorithms that ensure their proper sequence. We call this information about the algorithm a program. The program can be contained directly in the structure (hardware) of the system, or it can reside in the memory.
Depending on the shares of external and internal information we roughly distinguish the following three types of programs:
Closed programs: Systems with a program of this type have no information inputs. The resulting decisions are determined in advance by the internal program.
Unconditioned reflexes: Unlike the preceding program, this system has inputs that bring information from the environment but the system reacts always in the same predetermined way.
Conditioned reflexes: A system with a program of this type has even looser connection between the original program and the actual decision. In the preceding case the information from the environment determined directly the decision of the system according to a fixed algorithm. Here the algorithm itself is created through the influence of the environment.
Systems with conditioned reflexes are frequently called learning systems because their programs permit learning. Since systems with programs of a reflex type are capable of continuously receiving information from the environment, there need not be as much information contained in their programs. Of course, the reduction of the volume of information is only the lesser benefit. More important is that not all the needed information is available initially, when the system is constructed. Systems with closed programs are incapable of performing tasks that depend on unpredictable events.
We can now draw some conclusions for economics and particularly for the theory of economic planning. It is evident that a function of the plan is to provide a program for economic activity in the future. If it is a closed program, it must determine the output of an extensive range of products in advance. Although this was for some time considered to be the only and most effective form of planning, its shortcomings are obvious. In such a case the plan must contain an enormous amount of information, and at the same time the stability of the economic system would be threatened by every unforeseen event, whether a change in demand, in the technology of production or in foreign markets. In contrast to this, a plan of the second category would continuously receive information about changes in conditions and derive operative decisions that could not be made in detail at the time when the plan was set up. The particular objective would be to plan the reaction to unpredictable circumstances. A plan of the type of conditioned reflexes would mean a further improvement in these reactions: the system would learn from its successes and mistakes.
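The three program types can be caricatured in a few lines of code. The sketch below is schematic and the numbers are invented: a closed plan ignores the environment, a reflex plan reacts to it by a fixed rule, and a learning ("conditioned reflex") plan adjusts its own rule in the light of experience.

```python
# Schematic sketch of the three program types applied to planning.
def closed_plan(_demand):
    return 100                      # output fixed in advance; no information input

def reflex_plan(demand):
    return demand                   # fixed rule: always match observed demand

def make_learning_plan(gain=0.5):
    rule = {"target": 100}
    def plan(demand):
        rule["target"] += gain * (demand - rule["target"])  # the rule itself adapts
        return round(rule["target"])
    return plan

learning_plan = make_learning_plan()
for demand in (100, 140, 140, 140):  # an unforeseen jump in demand
    print(closed_plan(demand), reflex_plan(demand), learning_plan(demand))
```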
The fourth chapter deals with 'goal-seeking systems'. This is the term given to systems that react to changes in the environment in such a way that they achieve a certain goal or objective. The objective can be a state or modification of the environment or of the system itself.
It is evident that not all real systems can be considered 'goal-seeking systems'. We can classify real systems into the following three groups:
Systems whose behavior is determined exclusively by physical inputs. Such systems can be either deterministic or stochastic but there is no need to use the concepts of information and decision-making in this case.
Systems whose behavior depends not only on physical inputs, but also on information received, but there is no need for the concept of goal or objective.
Systems whose behavior depends on physical inputs, information received and the goal of the system. These are 'goal-seeking systems'.
Classifying systems according to these criteria is to a certain extent arbitrary. Information is always carried by some physical process, but we may or may not ascribe informational significance to it. The same object can be studied either with or without concepts of information and decision-making. If we do not use the concept of information, but observe only the changes in the physical states of the system, we can describe the motions as cause and effect using laws of physics. The concepts of goal-seeking and causality do not necessarily contradict each other, but are merely two different views of the same reality.
The goal-seeking system can originate in two ways:
They may be created by the activity of another goal-seeking system.
They may be spawned by the process of natural selection.
The second method needs an environment with abundant energy that contains elements from which such a system can be created. The surplus of energy makes these elements enter into random bonds that are either reinforced or dissolved by further developments. As a result, different arrangements of existing elements are sequentially tested for survival. The bonds that arise between elements introduce into the environment qualities that had formerly not existed. As soon as some system is created by chance, it can in turn change the probability of creating other systems. The probability of a certain system arising is not directly related to the probability of its disappearing. Thus it could happen that, after the passage of time, some systems would predominate even though, when they originated, there might have seemed to be only a small probability of that.
The role of chance is very important in the process of spontaneous evolution. Systems that already exist are constantly subject to random influences that affect their organization. In biology these fortuitous changes are called mutations. They occur without any intentions to improve or worsen the existing systems. Nevertheless they are the reason that the systems continuously improve.
The principle of natural selection is also important for the progress of human society. Social progress means the introduction of new technologies of production, new economic relationships, new forms of management and organization, better than the preceding ones. Their novelty lies in the fact that they were unknown before they were introduced. There was no information about them, and therefore they could not have been consciously introduced, with guaranteed success, simultaneously throughout the whole society. Mutations in society are also of a fortuitous nature. No experts can guarantee correct filtration of unfavorable mutations so that only the favorable remain. Just recall the gallery of scientists and inventors of genius who were not recognized by their contemporaries. Only when the emergence of random mutations is not prevented can the advantages of some new forms be demonstrated in practice and thus be extended to all of society. Every society that wishes to speed up its evolution must make random mutations possible. The new forms must have the possibility of showing their advantages or their shortcomings in practice, to be compared with previous forms. This is, of course, not easy, because society should also prevent the survival of unfavorable mutations that could act as a cancer on the organism.
We can divide systems with goal-seeking behavior into regulating, controlling or organizing systems. If we define a system in such a way that the regulating, controlling or organizing system is a part of it, then we speak of automatic regulation, control or organization.
For a long time Marxists believed that automatic or spontaneous processes could lead only to imbalances and anarchy in the economy. They concluded that it would be desirable to suppress automatism and replace it--where possible--with conscious, centralized direction. The error of this idea is apparent: reducing automatism leads to growth in the size of the central administrative apparatus, making it slow, cumbrous and bureaucratic.
In economic systems goal seeking is often manifested as purposefulness, and the goals as economic interests. People form their goals according to their individual tastes and abilities, but also under the influence of the social and economic environment in which they live. They are constrained in their behavior in two ways: by material conditions, and by socio-economic factors. The socio-economic determination enters into the decision-making process usually through the objective function. Therefore we must distinguish between activities that man cannot carry on at all and those which he can, but does not, perform because he has no interest in doing so.
There are both similarities and differences between the goals and interests of different people. There exist general interests but also individual deviations from them. If we regard society as a whole we can think of these individual differences as random deviations. The greater the number of people with common or only slightly different interests in a certain group, the closer are their ties and the closer is their cooperation. We can, therefore, observe the formation of group interests in society.
In designing the control of economic systems, it is necessary to reckon with the fact that these systems are made up of people, that is to say, elements that are themselves systems acting not only purposefully, but also consciously. Therefore each control directive is only one of the information inputs used by the controlled to make a conscious decision. Controlling people means influencing their decision-making in the desired way. The harmony or conflict of interests between the controlling body and the controlled plays an important role. If both parties want to reach the same goal, it is not necessary to make the controlling directive compulsory. If such a common interest is lacking, the controlling directive must be supplemented by measures that ensure its effectiveness. Among the means employed, the most frequent is the use of force.
Sometimes a deliberate distortion of information is used for this purpose. If, for example, the controlling body succeeds in persuading the controlled that carrying out the orders is in their own interest, although it may be only the end desired by the controlling person or body, people can actually be made to act against their own interests. Nationalism, chauvinism, racism, etc., can be misused by some groups of people to induce others to actions that in reality do not serve their interests.
The fifth chapter deals with some economic models. Here it is not a question of a purely cybernetic view of the economy, but of a certain interpretation of models known from the classical economic approach. First, the various possibilities of introducing systems in the national economy are shown, and the resultant possibility of constructing different economic models.
Examples that are discussed next include dynamic models of the market (cobweb) and structural Input-Output models (of the Leontief type). At first glance the original Leontief structural model includes exclusively physical interactions; there are no flows of information or decision-making processes in the model. It is interesting to note that a Leontief Input-Output model can be interpreted in several different ways (a small numerical instance follows the list below):
as an instrument of a description and analysis of the existing structure of the economy that is, as a process of obtaining scientific information on the state of the national economy;
as a theoretical mathematical model of self-regulatory processes that lead to the creation of economic equilibrium (similarly to models of Leon Walras). In this case scientific information on the behavior of economic systems is obtained;
as a part of the algorithm of decision-making by the planning body, which determines the program of activity of economic units. Here there is a decision-making function that contains within itself knowledge of the conditions of economic equilibrium.
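A minimal numerical instance of the Leontief model may help here. With a technical-coefficient matrix A and final-demand vector d, gross outputs x satisfy x = Ax + d, hence x = (I - A)^(-1) d. The coefficients below are invented for illustration.

```python
# A toy Leontief solve: x = (I - A)^{-1} d, with invented coefficients.
import numpy as np

A = np.array([[0.2, 0.3],   # units of good 1 used per unit of output of goods 1, 2
              [0.4, 0.1]])  # units of good 2 used per unit of output of goods 1, 2
d = np.array([10.0, 20.0])  # final demand for goods 1 and 2

x = np.linalg.solve(np.eye(2) - A, d)
print(x.round(2))  # -> [25.   33.33]: gross outputs needed to meet final demand
```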
The chapter continues with comparisons of some aggregate models of growth that are based on the Marxist concept of the process of accumulation and the Keynesian concept of the multiplier process. The Feldman-Mahalanobis type of model is also shown in simplified form.
The sixth chapter makes an attempt at a cybernetic view of the national economy, that is, a view of the economy as an informational and decision-making structure. The system created for this purpose consists of people who are interconnected by informational flows. The environment consists of other social systems and nature. Nature here is taken to include all the physical things relevant to the economic process, including 'artificial' nature - machinery, buildings, equipment, etc. - that are the product of human productive activity.
By acting on nature, society introduces a certain organization into it. Since this action has some goal, we may speak of society as a system with goal-seeking behavior. Unlike technical systems, which can have a goal that is quite general, without any relation to the existence of the system and its elements, we have here interdependence between the goal, the system itself and its elements.
The goal of a social system is not just the survival of society as a whole; it is also the survival and well-being of the individual people who are the elements of which the social system is composed. Compare this to the relation of the human body to the cells of which it is composed. The purpose of the existence of cells can be seen as the preservation of the existence of the human organism; the human organism protects its cells only insofar as they are needed for its life. As opposed to this, the existence of man is an end in itself. Man does not exist for society, but society for man. Society cannot sacrifice a person on the grounds that he is not needed. In the concept of the needs of society we must include the needs of its members.
If we wish, therefore, to consider society as a system with goal-seeking behavior, we must realize that there is an unusual bond between the goal of society and the elements of which society is composed. Whereas in the case of a technical apparatus any structure is suitable if its behavior makes possible the achievement of the desired properties of the system, the structure of society as a whole must not only achieve certain economic results but at the same time meet people's wishes in some way. Therefore not every structure that assures a high social product is suitable, if it forces people to lead an unpleasant life.
People are the individual units in the structure of a social system. Some people stand right on the dividing line between society and nature and, with their work, act directly on nature (productive labor); we can call these people the output elements of society. The others are the internal elements of society, and their social role is primarily to receive information, make decisions, and provide information for other people.
People may have very diverse abilities to make decisions. This depends on:
complexity of the algorithm by which a person is able to make decisions,
ability to learn, that is, flexibility in adapting the algorithm to new situations,
appropriateness of personal goals to the task.
It is obvious that it is not desirable to put people with anti-social goals into important social positions, even if their ability to make decisions is otherwise excellent. But sometimes it is difficult to choose between a capable worker with unsuitable personal objectives and an incompetent worker whose interests may seem to coincide perfectly with the social goals.
Depending on its place in the structure of the economic system, each decision-making position is characterized by its weight and risk.
The weight depends on the size of the subsystem that is being controlled and on the relative reduction of degrees of freedom.
The risk depends on the probability distribution of gains or losses.
The risk of a decision depends on the amount and kind of information that the decision-making agent obtains. There is not only unintended distortion of information as it is transmitted in the economy, but also deliberate distortion. Systems that provide information sometimes try to use distorted information to influence the decisions taken by those who receive it. Therefore a theory of economic information needs more than the statistical theory of information that was elaborated primarily for the needs of communication technology; it would also be necessary to include some considerations from the theory of games.
Human capacity to receive and process information is limited. No individual can handle all the types of information needed for a smooth operation of society. Consequently, the whole social control process must be subdivided among various controlling subsystems. For example, we can visualize society as a hierarchy, with the lower base formed by the output elements, and those who make the decisions are placed in upper layers over those who receive and implement the decisions. The base elements send information about the state of nature upward to upper layers of hierarchy. Since each place in the hierarchy has a limited capacity, the amount of information must be gradually reduced. Not all the information collected below can arrive at the highest places. The problem, of course, is how to reduce information without losing what is essential for making decisions. The reduction of information on the way up means that the highest places cannot issue decisions that would contain enough information to eliminate all the uncertainty in the output elements. That is to say, each place in the hierarchy has a certain degree of freedom for independent decisions. The allocation of degrees of freedom within the hierarchy determines what is usually called the degree of centralization or decentralization. High degree of centralization requires large information processing capacity at the upper layers to reduce the risk of centralized decision-making. If there is too much centralization it can easily happen that the costs of transmitting and processing information would be many times higher than the most pessimistic estimates of loss that could occur with an effective reduction of information and a decentralization of a large part of the decision-making.
In the subsequent discussion the book continues with some reflections on the direct transmission of information, the transmission of algorithms, synthesis of parts of the hierarchy, assignments of personnel to decision posts, and finally with some thoughts on planning.
With Transfinancial Economics it would be possible to deal with rapid changes in the free market price. In the case of many airline charges, algorithms and computers are already used to determine the dynamic price changes necessary for that industry. http://www.p2pfoundation.net/Transfinancial_Economics
Dynamic pricing, also called real-time pricing, is an approach to setting the cost for a product or service that is highly flexible. The goal of dynamic pricing is to allow a company that sells goods or services over the Internet to adjust prices on the fly in response to market demands.
Changes are controlled by pricing bots, which are software agents that gather data and use algorithms to adjust pricing according to business rules. Typically, the business rules take into account such things as the customer's location, the time of day, the day of the week, the level of demand and competitors' pricing. With the advent of big data and big data analytics, however, business rules for price adjustments can be made more granular. By collecting and analyzing data about a particular customer, a vendor can more accurately predict what price the customer is willing to pay and adjust prices accordingly.
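The passage above describes rule-based repricing, which can be sketched in a few lines. The rules and numbers below are invented for illustration; real pricing bots encode far richer business rules and data feeds.

```python
# A hedged sketch of a rule-based pricing bot: adjust a base price using
# time of day, day of week, demand, and competitor price. All rules and
# numbers are invented for illustration.
def reprice(base_price, hour, weekday, demand_index, competitor_price):
    price = base_price
    if demand_index > 1.0:
        price *= min(demand_index, 1.5)          # raise with demand, capped
    if 18 <= hour <= 23:
        price *= 1.10                            # evening peak premium
    if weekday in ("Sat", "Sun"):
        price *= 1.05                            # weekend premium
    price = min(price, competitor_price * 0.99)  # stay just under the competition
    return round(price, 2)

print(reprice(100.0, hour=20, weekday="Sat", demand_index=1.2, competitor_price=135.0))
# -> 133.65: demand and peak rules raised the price; the competitor rule capped it
```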
Dynamic pricing is legal, and the general public has learned to accept dynamic pricing when purchasing airline tickets or reserving hotel rooms online. The approach, which is sometimes marketed as a personalization service, has been less successful with online retail vendors. Dynamic pricing can be contrasted with fixed pricing, an approach to setting the selling price for a product or service that does not fluctuate. (Source: TechTarget)
Dynamic pricing is a pricing strategy in which businesses set highly flexible prices for products or services based on current market demands.[1] Businesses are able to stay competitive by changing prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors. Dynamic pricing is a common practice in several industries such as hospitality, travel, entertainment, and retail. Each industry takes a slightly different approach to repricing based on its needs and the demand for the product. One commonality, however, is the use of dynamic pricing to increase revenue and profits, whether to fill a stadium, flight, or sales quota.

Hospitality
Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment.[2] The goal of dynamic pricing in this industry is to find the best price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing; it is a form of price discrimination, which is used to try to maximize revenue based on the willingness to pay of different market segments. Prices increase when demand is high and decrease to stimulate demand when it is low. Having a variety of prices based on the demand at that point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay.

Travel
Airlines change prices often depending on the day of the week, time of day, and number of days before the flight.[3] For airlines, dynamic pricing factors in different components such as how many seats a flight has, departure time, and average cancellations on similar flights.[4]

Entertainment
Sports ticketing is a segment of the entertainment industry that effectively uses real-time pricing to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues.[5]
Sports that are outdoors have to factor weather into pricing strategy, in addition to date of the game, date of purchase, and opponent.[6]
Ticket retailers have much more flexibility with dynamic pricing because tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more.

Retail
Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals.[7] The aim of dynamic pricing is to increase revenue and profit. There are three basic ways to do this.
First, retailers can use price intelligence to reprice based on the prices of their competitors.
Second, retailers can drop prices when demand is low.
Third, retailers can increase prices while demand is high.
Pricing Based on Competitors
Businesses that want to price competitively will monitor their competitors' prices and adjust accordingly. Amazon is a market leader in retail that reprices often,[8] which encourages other retailers to alter their prices to stay competitive. Competitor-based dynamic pricing can increase sales, especially if retailers take advantage when other retailers run out of stock.

Time Based Pricing
Many industries change prices depending on the time of day, especially online retailers, whose customers usually shop the most in the evening. Dropping prices during the morning and afternoon can be an effective way to increase sales during typically slow times of the day. Raising prices during the evening is a way to generate more revenue and profit because demand is highest then.
Transportation is another area where prices vary based on the time of day. The San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be travelling.[9] This is an effective way to boost revenue when demand is high, while also managing demand, since drivers unwilling to pay the premium will avoid those times. Dynamic pricing in transportation is also called peak-load pricing.

Conversion Rate Pricing
Pricing based on conversion rates is a way to turn window shoppers into buyers. When the conversion rate of viewers to buyers is low, dropping the price can help turn it around.

Future of Dynamic Pricing
Dynamic pricing is becoming an important factor for retailers,[10] as many have already adopted some form of it in order to counteract showrooming. The concept of dynamic pricing has been around for many years, particularly in the airline and hotel industries, but retail is one of the newer industries to adopt this pricing strategy. Nonetheless, adoption has accelerated recently as retailers have seen the impact on revenue and profits.