1. Prevent Waste
The ability of chemists to redesign chemical transformations to minimize the generation of hazardous waste is an important first step in pollution prevention. By preventing waste generation, we minimize the hazards associated with waste storage, transportation, and treatment.
2. Maximize Atom Economy
Atom economy is a concept, developed by Barry Trost of Stanford University, that evaluates the efficiency of a chemical transformation. Similar to a yield calculation, atom economy is the ratio of the total mass of atoms in the desired product to the total mass of atoms in the reactants. One way to minimize waste is to design chemical transformations that maximize the incorporation of all materials used in the process into the final product, resulting in few if any wasted atoms. Choosing transformations that incorporate most of the starting materials into the product is more efficient and minimizes waste.
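As a rough illustration, the sketch below computes percent atom economy from molar masses. The function name and the numerical inputs are placeholders chosen for this example, not values taken from the text; a full calculation would also weight each reactant by its stoichiometric coefficient.

```python
# Minimal sketch: percent atom economy for a hypothetical reaction.
# Atom economy = (molar mass of desired product) /
#                (total molar mass of all reactants) * 100

def atom_economy(product_mass: float, reactant_masses: list[float]) -> float:
    """Return percent atom economy, given molar masses in g/mol."""
    return 100.0 * product_mass / sum(reactant_masses)

# Hypothetical example: desired product of 88 g/mol formed from two
# reactants of 74 g/mol and 46 g/mol.
print(f"{atom_economy(88.0, [74.0, 46.0]):.1f}% of reactant mass ends up in the product")
```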
3. Design Less Hazardous Chemical Synthesis
Wherever practicable, synthetic methodologies should be designed to use and generate substances that possess little or no toxicity to human health and the environment. The goal is to use less hazardous reagents whenever possible and design processes that do not produce hazardous by-products. Often a range of reagent choices exist for a particular transformation. This principle focuses on choosing reagents that pose the least risk and generate only benign by-products.
4. Design Safer Chemicals and Products
Chemical products should be designed to effect their desired function while minimizing their toxicity. Toxicity and ecotoxicity are properties of the product. New products can be designed that are inherently safer while remaining highly effective for the target application. In academic labs, this principle should influence the design of synthetic targets and new products.
5. Use Safer Solvents/Reaction Conditions
The use of auxiliary substances (e.g., solvents, separation agents) should be made unnecessary wherever possible and innocuous when used. Solvent use leads to considerable waste. Reducing the solvent volume, or eliminating the solvent entirely, is often possible. In cases where a solvent is needed, less hazardous replacements should be employed. Purification steps also generate large amounts of solvent and other waste (e.g., chromatography supports). Avoid purifications when possible and minimize the use of auxiliary substances when they are needed.
6. Increase Energy Efficiency
Energy requirements of chemical processes should be recognized for their environmental and economic impacts and should be minimized. If possible, synthetic and purification methods should be designed for ambient temperature and pressure, so that energy costs associated with extremes in temperature and pressure are minimized.
7. Use Renewable Feedstocks
Whenever possible, chemical transformations should be designed to utilize raw materials and feedstocks that are renewable. Examples of renewable feedstocks include agricultural products or the wastes of other processes. Examples of depleting feedstocks include raw materials that are mined or generated from fossil fuels (petroleum, natural gas or coal).
8. Avoid Chemical Derivatives
Unnecessary derivatization (use of blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be minimized or avoided if possible, because such steps require additional reagents and can generate waste. Synthetic transformations that are more selective will eliminate or reduce the need for protecting groups. In addition, alternative synthetic sequences may eliminate the need to transform functional groups in the presence of other sensitive functionality.
9. Use Catalysts
Catalytic reagents (as selective as possible) are superior to stoichiometric reagents. Catalysts can serve several roles during a transformation. They can enhance the selectivity of a reaction, reduce the temperature of a transformation, enhance the extent of conversion to products and reduce reagent-based waste (since they are not consumed during the reaction). By reducing the temperature, one can save energy and potentially avoid unwanted side reactions.
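The temperature argument can be made concrete with the Arrhenius relation k = A·exp(−Ea/RT), which is not cited in the text but is the standard link between activation energy and temperature. The activation energies, pre-exponential factor, and temperatures below are illustrative placeholders.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A: float, Ea: float, T: float) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Placeholder numbers: an uncatalysed barrier of 100 kJ/mol at 350 K
# versus a catalysed barrier of 80 kJ/mol.
A = 1e13                                   # pre-exponential factor, 1/s (assumed)
k_target = rate_constant(A, 100e3, 350.0)  # rate of the uncatalysed reaction

# Temperature at which the catalysed reaction matches that rate:
# k_target = A * exp(-Ea_cat / (R * T))  =>  T = Ea_cat / (R * ln(A / k_target))
T_cat = 80e3 / (R * math.log(A / k_target))
print(f"Same rate reached at roughly {T_cat:.0f} K instead of 350 K")
```

With these assumed numbers the catalysed route reaches the same rate at roughly 280 K, which is the sense in which lowering the barrier saves heating energy.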
10. Design for Degradation
Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment. Efforts related to this principle focus on using molecular-level design to develop products that will degrade into hazardless substances when they are released into the environment.
11. Analyze in Real-Time to Prevent Pollution
It is always important to monitor the progress of a reaction to know when the reaction is complete or to detect the emergence of any unwanted by-products. Whenever possible, analytical methodologies should be developed and used to allow for real-time, in-process monitoring and control to minimize the formation of hazardous substances.
12. Minimize the Potential for Accidents
One way to minimize the potential for chemical accidents is to choose reagents and solvents that minimize the potential for explosions, fires and accidental release. Risks associated with these types of accidents can sometimes be reduced by altering the form (solid, liquid or gas) or composition of the reagents.
* Modified from Anastas, P. T.; Warner, J. C. Green Chemistry: Theory and Practice; Oxford University Press: New York, 1998; p 30.
Experimental economics is the application of experimental methods[1] to study economic questions. Data collected in experiments are used to estimate effect size, test the validity of economic theories, and illuminate market mechanisms. Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do.
A fundamental aspect of the subject is design of experiments. Experiments may be conducted in the field or in laboratory settings, whether of individual or group behavior.[2]
Variants of the subject outside such formal confines include natural and quasi-natural experiments.[3]
Coordination games are games with multiple pure-strategy Nash equilibria. There are two general sets of questions that experimental economists typically ask when examining such games: (1) Can laboratory subjects coordinate, or learn to coordinate, on one of multiple equilibria, and if so, are there general principles that can help predict which equilibrium is likely to be chosen? (2) Can laboratory subjects coordinate, or learn to coordinate, on the Pareto-best equilibrium, and if not, are there conditions or mechanisms that would help subjects coordinate on the Pareto-best equilibrium? Deductive selection principles are those that allow predictions based on the properties of the game alone. Inductive selection principles are those that allow predictions based on characterizations of dynamics. Under some conditions, at least, groups of experimental subjects can coordinate on even complex, non-obvious, asymmetric Pareto-best equilibria, even though all subjects decide simultaneously and independently without communication. How this happens is not yet fully understood.[6]
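As a concrete illustration (not drawn from the cited experiments), the sketch below enumerates the pure-strategy Nash equilibria of a stag-hunt-style 2x2 coordination game. The action names and payoffs are hypothetical; both (Stag, Stag) and (Hare, Hare) are equilibria, with (Stag, Stag) the Pareto-best one.

```python
from itertools import product

# Hypothetical stag-hunt payoffs as (row payoff, column payoff).
ACTIONS = ["Stag", "Hare"]
PAYOFFS = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}

def is_nash(row, col):
    """A profile is a pure-strategy Nash equilibrium if neither player
    can gain by deviating alone."""
    row_ok = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(r, col)][0] for r in ACTIONS)
    col_ok = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, c)][1] for c in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in product(ACTIONS, ACTIONS) if is_nash(*p)]
print("Pure-strategy Nash equilibria:", equilibria)
# [('Stag', 'Stag'), ('Hare', 'Hare')] -- multiple equilibria, one Pareto-best.
```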
In games of two players or more, the subjects often form beliefs about what actions the other subjects are taking and these beliefs are updated over time. This is known as belief learning. Subjects also tend to make the same decisions that have rewarded them with high payoffs in the past. This is known as reinforcement learning.
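A minimal sketch of the reinforcement-learning idea, in the spirit of Roth and Erev's models: each action carries a propensity that grows with the payoffs it has earned, and choice probabilities are proportional to propensities. The payoffs, initial propensities, and action labels are hypothetical, and real models add forgetting and experimentation parameters omitted here.

```python
import random

# Propensities grow with realized payoffs; actions are chosen with
# probability proportional to their propensities.
propensities = {"A": 1.0, "B": 1.0}   # initial attractions (assumed)
payoff = {"A": 1.0, "B": 2.0}         # hypothetical stage-game payoffs

def choose(props):
    """Draw an action with probability proportional to its propensity."""
    total = sum(props.values())
    r = random.uniform(0.0, total)
    for action, p in props.items():
        r -= p
        if r <= 0:
            return action
    return action  # floating-point edge case

for t in range(200):
    a = choose(propensities)
    propensities[a] += payoff[a]      # reinforce the chosen action by its payoff

total = sum(propensities.values())
print({a: round(p / total, 2) for a, p in propensities.items()})
# Over time, most probability mass shifts toward the higher-payoff action B.
```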
Until the 1990s, simple adaptive models, such as Cournot competition or fictitious play, were generally used. In the mid-1990s, Alvin E. Roth and Ido Erev demonstrated that reinforcement learning can make useful predictions in experimental games. In 1999, Colin Camerer and Teck Ho introduced Experience Weighted Attraction (EWA), a general model that incorporates both reinforcement and belief learning and shows that fictitious play is mathematically equivalent to generalized reinforcement, provided weights are placed on past history.
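The core of EWA is a single attraction-update rule. The sketch below follows the usual statement of that rule (experience weight N, decay phi, experience growth rho, and imagination weight delta on forgone payoffs); the function name, parameter values, strategy labels, and payoffs are illustrative rather than taken from the cited papers.

```python
# Sketch of an Experience Weighted Attraction (EWA) update for one player.
# A[j] is the attraction of strategy j; N is the experience weight.
# phi decays old attractions, rho decays experience, and delta weights
# forgone payoffs relative to the payoff of the chosen strategy.

def ewa_update(A, N, chosen, payoffs, phi=0.9, rho=0.9, delta=0.5):
    """One round of EWA. payoffs[j] is the payoff strategy j would have
    earned against the opponents' realized play this round."""
    N_new = rho * N + 1.0
    A_new = {}
    for j, pay in payoffs.items():
        weight = 1.0 if j == chosen else delta   # full weight on the chosen strategy
        A_new[j] = (phi * N * A[j] + weight * pay) / N_new
    return A_new, N_new

# Hypothetical round: the player chose "Left", which earned 2, while
# "Right" would have earned 5 against the opponent's actual choice.
A, N = {"Left": 0.0, "Right": 0.0}, 1.0
A, N = ewa_update(A, N, chosen="Left", payoffs={"Left": 2.0, "Right": 5.0})
print(A, N)
```

Setting delta to 0 reduces the rule to cumulative reinforcement of chosen strategies only, while delta of 1 with appropriate decay recovers belief-learning dynamics, which is the sense in which EWA nests both families.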
Criticisms of EWA include overfitting due to its many parameters, a lack of generality across games, and the difficulty of interpreting its parameters. Overfitting is addressed by estimating parameters on a subset of the experimental periods or subjects and forecasting behavior in the remaining sample; if a model overfits, these out-of-sample forecasts will be much less accurate than the in-sample fits, which for EWA they generally are not. Generality across games is addressed by replacing fixed parameters with "self-tuning" functions of experience, allowing pseudo-parameters to change over the course of a game and to vary systematically across games.
Experimental economists have done much notable work on learning in recent years. Roberto Weber has raised issues of learning without feedback. David Cooper and John Kagel have investigated types of learning over similar strategies. Ido Erev and Greg Barron have looked at learning in cognitive strategies. Dale Stahl has characterized learning over decision-making rules. Charles A. Holt has studied logit learning in different kinds of games, including games with multiple equilibria. Wilfred Amaldoss has looked at applications of EWA in marketing. Amnon Rapoport, Jim Parco, and Ryan Murphy have investigated reinforcement-based adaptive learning models in one of the most celebrated paradoxes in game theory, the centipede game.
Edward Chamberlin is thought to have conducted "not only the first market experiment, but also the first economic experiment of any kind."[7] Vernon Smith, drawing on Chamberlin's work but also modifying it in key respects, conducted pioneering economics experiments on the convergence of prices and quantities to their theoretical competitive equilibrium values in experimental markets.[7] Smith studied the behavior of "buyers" and "sellers", who are told how much they "value" a fictitious commodity and are then asked to competitively "bid" or "ask" on these commodities following the rules of various real-world market institutions (e.g., the double auction as well as the English and Dutch auctions). Smith found that in some forms of centralized trading, prices and quantities traded in such markets converge on the values predicted by the economic theory of perfect competition, despite the conditions not meeting many of its assumptions (large numbers of traders, perfect information).
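The flavor of such induced-value market experiments can be conveyed by a crude simulation. The sketch below is a zero-intelligence-style exercise in the spirit of later work by Gode and Sunder, not a reconstruction of Smith's design: buyers and sellers with assumed valuations and costs post random, budget-constrained quotes, and average transaction prices end up near the competitive prediction implied by those values and costs.

```python
import random

# Hypothetical induced values and costs; the competitive equilibrium price
# for these schedules is about 6, with roughly five units traded.
VALUES = [10, 9, 8, 7, 6, 5, 4, 3]   # buyers' induced valuations (assumed)
COSTS  = [2, 3, 4, 5, 6, 7, 8, 9]    # sellers' induced costs (assumed)
PRICE_CAP = 12
random.seed(0)

def run_period():
    """One trading period: random matching with budget-constrained random quotes."""
    buyers, sellers = list(VALUES), list(COSTS)
    random.shuffle(buyers)
    random.shuffle(sellers)
    prices = []
    for value, cost in zip(buyers, sellers):
        bid = random.uniform(0, value)          # never bid above own value
        ask = random.uniform(cost, PRICE_CAP)   # never ask below own cost
        if bid >= ask:
            prices.append((bid + ask) / 2)      # trade at the midpoint
    return prices

all_prices = [p for _ in range(2000) for p in run_period()]
print(f"mean traded price: {sum(all_prices) / len(all_prices):.2f} "
      f"(competitive prediction is about 6 for these schedules)")
```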
Over the years, Smith pioneered, along with collaborators, the use of controlled laboratory experiments in economics and established it as a legitimate tool in economics and related fields. Charles Plott of the California Institute of Technology collaborated with Smith in the 1970s and pioneered experiments in political science, as well as the use of experiments to inform economic design or engineering for policy. In 2002, Smith was awarded (jointly with Daniel Kahneman) the Bank of Sweden Prize in Economic Sciences "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms".
Experimental finance studies financial markets by establishing different market settings and environments in which to observe experimentally and analyze agents' behavior and the resulting characteristics of trading flows, information diffusion and aggregation, price-setting mechanisms, and returns processes. Presently, researchers use simulation software to conduct their research.
For instance, experiments have manipulated the information asymmetry about the holding value of a bond or a share, examining the effect on pricing for those who do not have enough information, in order to study stock market bubbles.
The term "social preferences" refers to the concern (or lack thereof) that people have for each other's well-being, and it encompasses altruism, spitefulness, tastes for equality, and tastes for reciprocity. Experiments on social preferences generally study economic games including the dictator game, the ultimatum game, the trust game, the public goods game, and modifications to these canonical settings. As one example of results, ultimatum game experiments have shown that people are generally willing to sacrifice monetary rewards when offered low allocations, thus behaving inconsistently with simple models of self-interest. Economic experiments have measured how this deviation varies across cultures.
Agent-based computational modeling is a relatively recent method in economics with experimental dimensions.[8] Here the focus is on economic processes, including whole economies, as dynamic systems of interacting agents, an application of the complex adaptive systems paradigm.[9] The "agent" refers to "computational objects modeled as interacting according to rules," not real people.[8] Agents can represent social and/or physical entities. Starting from initial conditions determined by the modeler, an ACE model develops forward through time driven solely by agent interactions.[10] Issues include those common to experimental economics in general[11] and by comparison,[12] as well as the development of a common framework for empirical validation and resolving open questions in agent-based modeling.[13]
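A minimal sketch of the ACE idea: computational agents with simple interaction rules are given initial conditions by the modeler and then evolved forward purely through their interactions. The payoff matrix and the imitate-a-better-performer rule below are illustrative inventions, not taken from the cited literature.

```python
import random

# Minimal agent-based sketch: agents repeatedly play a hypothetical
# two-action game against random partners and imitate agents who earned more.
random.seed(1)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

agents = [random.choice(["C", "D"]) for _ in range(100)]   # initial conditions

for step in range(50):
    earnings = [0.0] * len(agents)
    order = list(range(len(agents)))
    random.shuffle(order)
    # Pairwise interactions driven only by the agents' current rules.
    for i, j in zip(order[::2], order[1::2]):
        earnings[i] += PAYOFF[(agents[i], agents[j])]
        earnings[j] += PAYOFF[(agents[j], agents[i])]
    # Each agent samples another agent and imitates it if it earned more.
    sampled = [random.randrange(len(agents)) for _ in agents]
    agents = [agents[k] if earnings[k] > earnings[i] else agents[i]
              for i, k in enumerate(sampled)]

print("final share playing D:", agents.count("D") / len(agents))
# The aggregate outcome emerges from the interaction rule alone, with no
# further intervention by the modeler after initialization.
```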
The above guidelines have developed in large part to address two central critiques. Specifically, economics experiments are often challenged on grounds of "internal validity" and "external validity": for example, that they are not applicable models for many types of economic behavior, so the experiments simply are not good enough to produce useful answers. None of these critiques is specific to experimental methodology, however; each applies equally to theoretical or empirical approaches, or both.[14]
J. DiNardo, 2008. "natural experiments and quasi-natural experiments," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
• Vernon L. Smith, 1992. "Game Theory and Experimental Economics: Beginnings and Early Influences," in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241–282. • _____, 2001. "Experimental Economics," International Encyclopedia of the Social & Behavioral Sciences, pp. 5100–5108. Abstract per sect. 1.1 & 2.1. • Charles R. Plott and Vernon L. Smith, ed., 2008. Handbook of Experimental Economics Results, v. 1, Elsevier, Part 4, Games, ch. 45–66. Preview links. • Vincent P. Crawford, 1997. "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206–242, Cambridge. Reprinted in Colin F. Camerer et al., ed., 2003. Advances in Behavioral Economics, Princeton, ch. 12. Description, contents, and preview.
Martin Shubik, 2002. "Game Theory and Experimental Gaming," in Robert Aumann and Sergiu Hart, ed., Handbook of Game Theory with Economic Applications, Elsevier, v. 3, pp. 2327–2351. Abstract.
Gunnthorsdottir, Anna, Roumen Vragov, Stefan Seifert, and Kevin McCabe, 2010. "Near-efficient equilibria in contribution-based competitive grouping," Journal of Public Economics, 94, pp. 987–994.
Leigh Tesfatsion, 2003. "Agent-based Computational Economics: Modeling Economies as Complex Adaptive Systems," Information Sciences, 149(4), pp. 262–268. Abstract.
• Leigh Tesfatsion, 2006. "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, pp. 831–880. Abstract/outline. 2005 prepublication version. • Kenneth Judd, 2006. "Computationally Intensive Analyses in Economics," Handbook of Computational Economics, v. 2, ch. 17, pp. 881–893. • Leigh Tesfatsion and Kenneth Judd, ed., 2006. Handbook of Computational Economics, v. 2. Description and chapter-preview links.
Vernon L. Smith, 2008b. "experimental economics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
John Duffy, 2006. "Agent-Based Models and Human Subject Experiments," ch. 19, Handbook of Computational Economics, v. 2, pp. 949–101. Abstract.
• Leigh Tesfatsion, 2006. "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, sect. 5. Abstract and pre-pub PDF. • Akira Namatame and Takao Terano, 2002. "The Hare and the Tortoise: Cumulative Progress in Agent-based Simulation," in Agent-based Approaches in Economic and Social Complex Systems, pp. 3–14, IOS Press. Description. • Giorgio Fagiolo, Alessio Moneta, and Paul Windrum, 2007. "A Critical Guide to Empirical Validation of Agent-Based Models in Economics: Methodologies, Procedures, and Open Problems," Computational Economics, 30(3), pp. 195–226.
Camerer, Colin F., 2011. The Promise and Success of Lab-Field Generalizability in Experimental Economics: A Critical Reply to Levitt and List. Working Paper Series.
Battalio, Raymond C., et al., 1973. "A Test of Consumer Demand Theory Using Observations of Individual Consumer Purchases," Economic Inquiry, 11(4), pp. 411–428.
Chamberlin, Edward H., 1948. "An Experimental Imperfect Market," Journal of Political Economy, 56(2), pp. 95–108.
Davis, Douglas D., and Charles A. Holt, 1993. Experimental Economics, Princeton. Description, preview and ch. 1 (complete).
Friedman, Daniel, and Shyam Sunder, 1994. Experimental Methods: A Primer for Economists, Cambridge University Press. Description/contents links and scrollable preview.
Grether, David M., and Charles R. Plott, 1979. "Economic Theory of Choice and the Preference Reversal Phenomenon," American Economic Review, 69(4), pp. 623–638.
Gunnthorsdottir, Anna, Roumen Vragov, Stefan Seifert, and Kevin McCabe, 2010. "Near-efficient equilibria in contribution-based competitive grouping," Journal of Public Economics, 94, pp. 987–994.
Guala, Francesco, 2005. The Methodology of Experimental Economics, Cambridge. Description/contents links and ch. 1 excerpt.
Hertwig, Ralph, and Andreas Ortmann, 2001. "Experimental Practices in Economics: A Methodological Challenge for Psychologists?" Behavioral and Brain Sciences, 24(3), pp. 383–403.
Holt, Charles A., and Susan K. Laury, 2002. "Risk Aversion and Incentive Effects," American Economic Review, 92(5), pp. 1644–1655.
Kagel, John H. et al., 1975. "Experimental Studies of Consumer Demand Behavior Using Laboratory Animals," Economic Inquiry, 13(1), pp. 22–38. Abstract.
Kagel, John H., and Alvin E. Roth, ed., 1995. The Handbook of Experimental Economics, Princeton University Press. Description/TOC and detailed contents.
Kahneman, Daniel, Jack L. Knetsch, and Richard Thaler, 1986. "Fairness as a Constraint on Profit Seeking: Entitlements in the Market," American Economic Review, 76(4), pp. 728–741.
Plott, Charles R., 1982. "Industrial Organization Theory and Experimental Economics," Journal of Economic Literature, 20(4), pp. 1485–1527. Reprinted in Plott, 2001, Market Institutions and Price Discovery, pp. 18–59. Elgar. Description.
Plott, Charles R., and Vernon L. Smith, 2008. Handbook of Experimental Economics Results, v. 1, Elsevier. Description and chapter-link previews.
Roth, Alvin E., and Michael W Malouf, 1979. "Game-theoretic Models and the Role of Information in Bargaining," Psychological Review, 86(6), pp. 574–594.
Smith, Vernon L., 1962. "An Experimental Study of Competitive Market Behavior," Journal of Political Economy, 70(2), pp. 111–137.
____, 1982. "Microeconomic Systems as an Experimental Science," American Economic Review, 72(5), pp. 923–955.
_____, 1991. Papers in Experimental Economics [1962–88], Cambridge. Description and chapter-preview links.
_____, [1987] 2008. "experimental methods in economics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
JessX: free, open-source software for experimental economics. It simulates a wide range of market microstructures, information asymmetries, etc., and is provided with a server, client, session analyzer, and automatic trader.