Rebirth of Reason


Post 20

Friday, April 1, 2005 - 10:14am
Thanks for the detailed explanation, Ed.

Altruism is (empirically) looking less and less life-supporting, thanks to game theory and similar research.

I never knew altruism was seen as life-supporting. Not here anyway :-)


Post 21

Friday, April 1, 2005 - 11:20am
Marcus, of course you and I know that altruism is death-entailing, but we know this from a healthy admixture of reason and experience.

Most folks, however, have not yet reached this healthy admixture (the "right amounts" of reason and experience), and for them it may take a landslide of experiences of the consequences of altruism before they question its virtue. That is the unique avenue that game theory opens: landslides of experience -- without the "mountains of corpses and rivers of blood."

It's true that inferences from game theory ought to be questioned, but this is merely an intellectual hurdle, not a brick wall that automatically prevents advancement.

The vulgar empiricists will only be persuaded by controlled, in-your-face evidence. It is my hope that game theory will, someday, supply this so that we can all live better lives together (with greater benefit from the rationality of others).

Ed
(Edited by Ed Thompson
on 4/01, 11:23am)


Post 22

Thursday, August 3, 2006 - 3:49am

Ed: “As it turns out – upon adequate experience and reflection – ethical egoism DOES SERVE OTHERS, albeit indirectly…”

But if others are the intended beneficiaries of ethical egoism, then the behaviour is no longer egoistic, but altruistic. An ethical egoist cannot intend to serve others without falling into altruism. In that case, if the ethical egoist is to remain true to his principles, where only he is the proper beneficiary of his actions, the benefits others may derive from his actions are accidental, and not ethical.

Ethics is prescriptive. The egoism/altruism issue is about who should be the beneficiary of one’s actions, and not all beneficiaries fit into that category.

In which case, option D in the schema:

A)    irrational self-serving behavior (unethical egoism)
B)    irrational other-serving behavior (altruism)
C)    rational self-serving behavior (ethical egoism)
D)    rational other-serving behavior (?)

…cannot be filled by ethical egoism. A more logical and consistent way of depicting the schema would be:

A)    irrational self-serving behavior (irrational egoism)
B)    irrational other-serving behavior (irrational altruism)
C)    rational self-serving behavior (ethical egoism)
D)    rational other-serving behavior (rational altruism)

Brendan


Post 23

Thursday, August 3, 2006 - 6:24am



the benefits others may derive from the actions of the ethical egoist are accidental, and not ethical.


No, not accidental, but consequential... and thus still ethical...


Post 24

Thursday, August 3, 2006 - 10:34pm
Brendan,

================
But if others are the intended beneficiaries of ethical egoism, then the behaviour is no longer egoistic, but altruistic.
================

More to the point, it serves the best interests of others (unrelated to intention). By the way, it's nice to see you again, my friendly intellectual adversary.



================
the benefits others may derive from the actions of the ethical egoist are accidental, and not ethical.
================

Of course, what you're saying (whether you mean to or not) is that "the benefits others may derive from the actions of the ethical egoist are not good (i.e., of value to the acting egoist)". I'd check that premise before proceeding on this line of reasoning with me, Brendan. Morality is about an individual's value (i.e., the "good FOR me" of my acts).



================
…cannot be filled by ethical egoism. A more logical and consistent way of depicting the schema would be:

A) irrational self-serving behavior (irrational egoism}
B) irrational other-serving behavior (irrational altruism)
C) rational self-serving behavior (ethical egoism)
D) rational other-serving behavior (rational altruism)
================

Brendan, methinks you're getting rusty (else I've become more of a genius in the months since we've interacted). Notice how option C is the ONLY explicitly-ethical position? Did you "mean" to convey that?

Ed

Sanction: 8, No Sanction: 0
Post 25

Friday, August 4, 2006 - 4:40pm
Also, a very important FACT is that an individual can almost never be certain that what he does WILL benefit another!  Heck -- even knowing what benefits himself is sometimes hard to determine.  That is why altruism fails: even the best intentions cannot understand reality to the degree necessary to ensure maximum benefit.  If collectivists truly believed in a collective, they would realize that the collective is made up of individuals, each serving its own needs, and therefore the entire collective benefits most when each part acts in its own self-interest.  It is when small groups usurp the ethical authority of individuals and decide for others what is best for them that the hell of altruism is released.  Economics proves it -- that is why the market works.  History proves it -- that is why communism fails, and why it caused massive death.


Post 26

Friday, August 4, 2006 - 3:13pm

Ed: “Of course, what you're saying…is that "the benefits others may derive from the actions of the ethical egoist are not good (i.e., of value to the acting egoist)".

Thanks for the kind words, Ed. My comments related to the justification for moral actions, that is, what makes something morally good. Rand claims that a value is something that one aims to gain and keep, and that the egoist should pursue only those things that further his own interests.

Moral language is prescriptive, and moral behaviour is volitional. The egoist should act to secure his own moral good, not the good of others. Therefore, from his point of view, the benefits that accrue to others from his actions have no moral significance.

It may be argued that the benefits enjoyed by others are of value to the egoist because they enable him to more easily pursue his own interests. But in order that they can be good for him, they must be good for others, and egoism doesn’t provide the justification for why those benefits are good for others.

What would make those benefits morally good for others would be their contribution to the general welfare. But that is a utilitarian justification. Which is why egoism cannot occupy option D in your schema. And if 'rational other-serving behavior' means nothing more than 'rational self-serving behavior' -- as you seem to be arguing -- then option D is redundant anyway.

As for the schema’s wording, let’s not be misled by semantics – both egoism and altruism are aspects of ethical theories.

Brendan


Post 27

Friday, August 4, 2006 - 10:28pm
Brendan wrote that
if others are the intended beneficiaries of ethical egoism, then the behaviour is no longer egoistic, but altruistic. An ethical egoist cannot intend to serve others without falling into altruism. In that case, if the ethical egoist is to remain true to his principles, where only he is the proper beneficiairy of his actions, the benefits others may derive from the actions of the ethical egoist are accidental, and not ethical.
No, this is not right. Ethical egoism does not say that others cannot be the intended beneficiaries of one's action. It says only that the actor must always be the beneficiary of his own action -- that he must never sacrifice it for the sake of another end or goal. For example, if I support my wife and children because I love them, then they are the intended beneficiaries of my action. But I am also the intended beneficiary, because I am acting for the sake of my values, and in so doing, am acting egoistically. However, if I support someone for whom I have no love or respect, simply out of a sense of self-sacrificial duty, then I am not the intended beneficiary of my action; only they are, in which case, my action is not egoistic but altruistic.

- Bill

Post 28

Saturday, August 5, 2006 - 3:59pm

Bill: “Ethical egoism does not say that others cannot be the intended beneficiaries of one's action.”

Sure, but the context of my comments was Ed’s schema, which makes a four-fold distinction between self-serving and other-serving behaviours. Effectively, these distinctions are: irrational egoism, irrational altruism, rational egoism and rational altruism. In reality, within the context of egoism/altruism, all moral behaviours can be placed under one of these four categories.

But Rand claims that there are only two opposed positions: egoism vs altruism, and by implication that all moral behaviours fit into this dichotomy. Ed attempts to replicate this understanding with his schema, by claiming that rational other-serving behaviours (category D) are a species of rational self-serving behaviours (category C). In doing so, he reduces his schema to three positions: irrational egoism, irrational altruism, egoism.

Which leaves irrational egoism the odd man out. It doesn’t obviously fit into the egoism/altruism dichotomy, unless one were to argue that irrational egoism is a form of altruism. I doubt that a convincing case could be made for that, in which case, the original four-fold schema should stand.

That is, in Ed’s original schema, D should remain as a valid position, rather than being absorbed into C.

Brendan


Sanction: 5, No Sanction: 0
Post 29

Sunday, November 9, 2008 - 11:25am
Research update

==============

Volunteering as Red Queen mechanism for cooperation in public goods games.

Institute for Mathematics, University of Vienna, Strudlhofgasse 4, A-1090 Vienna, Austria.

 

The evolution of cooperation among nonrelated individuals is one of the fundamental problems in biology and social sciences. Reciprocal altruism fails to provide a solution if interactions are not repeated often enough or groups are too large. Punishment and reward can be very effective but require that defectors can be traced and identified. Here we present a simple but effective mechanism operating under full anonymity. Optional participation can foil exploiters and overcome the social dilemma. In voluntary public goods interactions, cooperators and defectors will coexist. We show that this result holds under very diverse assumptions on population structure and adaptation mechanisms, leading usually not to an equilibrium but to an unending cycle of adjustments (a Red Queen type of evolution). Thus, voluntary participation offers an escape hatch out of some social traps. Cooperation can subsist in sizable groups even if interactions are not repeated, defectors remain anonymous, players have no memory, and assortment is purely random.

==============

Recap:

Sustainable human cooperation has to be voluntary. Folks need to be able to opt out of paying for social engineering and redistributive programs. When folks are left free to withdraw their resources from altruistic schemes, cooperation can exist. If this exit option is disallowed -- if folks are made to pay for things in which they don't believe -- then human cooperation goes extinct.
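The cycle behind this result can be sketched in a few lines. A quick illustration (my own toy numbers, not the paper's model): pot multiplier r = 3, loner payoff sigma = 1, contribution cost 1. Comparing payoffs at three population states shows the rock-paper-scissors cycle -- loners beat exploited cooperators, cooperators beat loners in small groups, defectors beat cooperators in big ones:

```python
# Minimal sketch of the voluntary public goods game described above.
# Assumed parameters: pot multiplier R = 3, loner payoff SIGMA = 1,
# contribution cost = 1. Not the paper's full dynamics -- just the
# pairwise comparisons that produce the cyclic "Red Queen" dominance.

R, SIGMA = 3.0, 1.0

def participant_payoff(cooperators, participants, contributes):
    """Payoff in one public goods round: the pot (R * contributions)
    is split evenly among all participants; contributors also pay 1."""
    share = R * cooperators / participants
    return share - 1.0 if contributes else share

# 1) Among many defectors, a lone cooperator earns less than a loner:
coop_vs_defectors = participant_payoff(1, 5, True)       # -0.4 < sigma
# 2) When most others opt out, the few who cooperate beat the loner payoff:
coop_small_group = participant_payoff(2, 2, True)        # 2.0 > sigma
# 3) Among cooperators, a defector out-earns them, restarting the cycle:
defector_vs_coops = participant_payoff(4, 5, False)      # 2.4
coop_vs_coops = participant_payoff(5, 5, True)           # 2.0

print(coop_vs_defectors < SIGMA)          # loners invade defector-heavy groups
print(coop_small_group > SIGMA)           # cooperators invade loner-heavy groups
print(defector_vs_coops > coop_vs_coops)  # defectors invade cooperators
```

No state is stable, which is the "unending cycle of adjustments" the abstract describes.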

 

 

==============

The evolution of cooperation and altruism--a general framework and a classification of models.

Department of Ecology and Evolution, University of Lausanne, Biophore, 1015 Lausanne, Switzerland. ll316@cam.ac.uk

 

One of the enduring puzzles in biology and the social sciences is the origin and persistence of intraspecific cooperation and altruism in humans and other species. Hundreds of theoretical models have been proposed and there is much confusion about the relationship between these models. To clarify the situation, we developed a synthetic conceptual framework that delineates the conditions necessary for the evolution of altruism and cooperation. We show that at least one of the four following conditions needs to be fulfilled: direct benefits to the focal individual performing a cooperative act; direct or indirect information allowing a better than random guess about whether a given individual will behave cooperatively in repeated reciprocal interactions; preferential interactions between related individuals; and genetic correlation between genes coding for altruism and phenotypic traits that can be identified. When one or more of these conditions are met, altruism or cooperation can evolve if the cost-to-benefit ratio of altruistic and cooperative acts is greater than a threshold value. The cost-to-benefit ratio can be altered by coercion, punishment and policing which therefore act as mechanisms facilitating the evolution of altruism and cooperation. All the models proposed so far are explicitly or implicitly built on these general principles, allowing us to classify them into four general categories.

==============

Recap:

Human cooperation requires one of four things, in order to sustainably exist:

 

1. utility/profit (direct benefit to cooperators)

2. trade (investment information about the cooperative reciprocity, or the value, that others produce to trade with us)

3. kinship (interactions between related individuals)

4. discrimination based on someone's genetics

 

All cooperation not based on one of these four things will fail.
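Condition 3 (kinship) is usually formalized as Hamilton's rule, which can be stated in one line. A sketch using the textbook r*b > c form -- the function name and numbers are mine, not the paper's:

```python
# Hamilton's rule, the classic formalization of the kinship condition
# listed above: an altruistic act can spread when
# relatedness * benefit_to_recipient > cost_to_actor.

def altruism_can_evolve(relatedness, benefit_to_recipient, cost_to_actor):
    return relatedness * benefit_to_recipient > cost_to_actor

# Helping a full sibling (relatedness 0.5) pays off only if the
# recipient gains more than twice what the actor gives up:
print(altruism_can_evolve(0.5, 3.0, 1.0))  # True
print(altruism_can_evolve(0.5, 1.5, 1.0))  # False
```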

 

 

==============

Probabilistic participation in public goods games.

Graduate School of Engineering, Soka University, Tokyo, Japan. tsasaki@soka.ac.jp

 

Voluntary participation in public goods games (PGGs) has turned out to be a simple but effective mechanism for promoting cooperation under full anonymity. Voluntary participation allows individuals to adopt a risk-aversion strategy, termed loner. A loner refuses to participate in unpromising public enterprises and instead relies on a small but fixed pay-off. This system leads to a cyclic dominance of three pure strategies, cooperators, defectors and loners, but at the same time, there remain two considerable restrictions: the addition of loners cannot stabilize the dynamics and the time average pay-off for each strategy remains equal to the pay-off of loners. Here, we introduce probabilistic participation in PGGs from the standpoint of diversification of risk, namely simple mixed strategies with loners, and prove the existence of a dynamical regime in which the restrictions no longer hold. Considering two kinds of mixed strategies associated with participants (cooperators or defectors) and non-participants (loners), we can recover all basic evolutionary dynamics of the two strategies: dominance; coexistence; bistability; and neutrality, as special cases depending on pairs of probabilities. Of special interest is that the expected pay-off of each mixed strategy exceeds the pay-off of loners at some interior equilibrium in the coexistence region.

==============

Recap:

The individual right of freedom to be a non-participating loner is required for sustainable human cooperation, even though cooperators can achieve higher expected pay-offs than loners can -- proving that while the right not to trade with others is a metaphysical necessity for sustainable human cooperation, trade is still good for man on Earth.

 

 

===============

A new consequence of Simpson's paradox: stable cooperation in one-shot prisoner's dilemma from populations of individualistic learners.

Department of Psychology, University College London, London, United Kingdom. n.chater@ucl.ac.uk

 

Theories of choice in economics typically assume that interacting agents act individualistically and maximize their own utility. Specifically, game theory proposes that rational players should defect in one-shot prisoners' dilemmas (PD). Defection also appears to be the inevitable outcome for agents who learn by reinforcement of past choices, because whatever the other player does, defection leads to greater reinforcement on each trial. In a computer simulation and 4 experiments, the authors show that, apparently paradoxically, when players' choices are correlated by an exogenous factor (here, the cooperativeness of the specific PD chosen), people obtain greater average reinforcement for cooperating, which can sustain cooperation. This effect arises from a well-known statistical paradox, Simpson's paradox. The authors speculate that this effect may be relevant to aspects of real-world human cooperative behavior.

=================

Recap:

Folks in the real world cooperate rationally because it maximizes value. Early conclusions from Prisoner's Dilemma games (where you supposedly get less jail time if you "rat-out" your partner in the game) are spurious conclusions stemming from scope-violating statistical artifacts (artifacts of the experimental process).
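The Simpson's-paradox effect the abstract describes is easy to reproduce with toy numbers (mine, not the paper's). Within each of two PD variants defection pays more per trial, yet because cooperation happens mostly in the generous game (choices correlated with game type), cooperation earns the higher reinforcement on average:

```python
# A toy Simpson's-paradox reinforcement table: within each game,
# defecting pays more; averaged over both games, cooperating pays more.

# per-trial reinforcement:       cooperate  defect
payoff = {"G": {"C": 4.0, "D": 5.0},   # "generous" PD: defect still beats cooperate
          "H": {"C": 0.0, "D": 1.0}}   # "harsh" PD: ditto

# how often each action occurs in each game (the exogenous correlation):
trials = {"G": {"C": 90, "D": 10},
          "H": {"C": 10, "D": 90}}

def average_reinforcement(action):
    total = sum(payoff[g][action] * trials[g][action] for g in ("G", "H"))
    count = sum(trials[g][action] for g in ("G", "H"))
    return total / count

avg_c = average_reinforcement("C")   # (4*90 + 0*10) / 100 = 3.6
avg_d = average_reinforcement("D")   # (5*10 + 1*90) / 100 = 1.4
print(avg_c > avg_d)  # True: cooperation is reinforced more overall
```

A reinforcement learner tracking only average payoff per action would therefore keep cooperating, exactly as the experiments found.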

 

 

=================

The evolution of prompt reaction to adverse ties.

COMO, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium. svsegbro@vub.ac.be

 

BACKGROUND: In recent years it has been found that the combination of evolutionary game theory with population structures modelled in terms of dynamical graphs, in which individuals are allowed to sever unwanted social ties while keeping the good ones, provides a viable solution to the conundrum of cooperation. It is well known that in reality individuals respond differently to disadvantageous interactions. Yet, the evolutionary mechanism determining the individuals' willingness to sever unfavourable ties remains unclear.

 

RESULTS: We introduce a novel way of thinking about the joint evolution of cooperation and social contacts. The struggle for survival between cooperators and defectors leads to an arms race for swiftness in adjusting social ties, based purely on a self-regarding, individual judgement. Since defectors are never able to establish social ties under mutual agreement, they break adverse ties more rapidly than cooperators, who tend to evolve stable and long-term relations. Ironically, defectors' constant search for partners to exploit leads to heterogeneous networks that improve the survivability of cooperators, compared to the traditional homogenous population assumption.

 

CONCLUSION: When communities face the prisoner's dilemma, swift reaction to adverse ties evolves when competition is fierce between cooperators and defectors, providing an evolutionary basis for the necessity of individuals to adjust their social ties. Our results show how our innate resilience to change relates to mutual agreement between cooperators and how "loyalty" or persistent social ties bring along an evolutionary disadvantage, both from an individual and group perspective.

=================

Recap:

Allowing folks to sever their ties -- to voluntarily "non-participate" or to be a strategic loner -- is required for human cooperation. Looters, moochers, cheats, and defectors are never able to establish non-coercive social ties -- but trading partners, using self-regarding individual judgement, can make such ties. Thus, in a free market, looters, moochers, cheats, and defectors run out of victims -- and productive human trade dominates, leading to a constantly-increasing standard of living.
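The tie-severing mechanism can be cartooned in a few lines: on a small fully connected network, every agent drops links that touch a defector (since defectors can never hold a tie by mutual agreement). This is a one-step sketch of my own, not the paper's dynamical model:

```python
# One-step sketch of tie-severing: defectors end up isolated,
# cooperators keep a stable clique of mutually agreed ties.

strategies = ["C", "C", "C", "C", "D", "D"]   # agents 0-3 cooperate, 4-5 defect
n = len(strategies)

# start fully connected (links stored as unordered pairs)
links = {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)}

# sever every tie that touches a defector -- no one consents to keep it
links = {link for link in links if all(strategies[i] == "C" for i in link)}

degree = [sum(1 for link in links if i in link) for i in range(n)]
print(degree)  # cooperators keep 3 ties each; defectors have none
```

The defectors have "run out of victims": their degree is zero while the cooperators' trade network survives intact.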

 

Party loyalty -- or loyalty to people like Rev. Wright (or to their specific brand of stirred-up, tyranny-of-the-victim thinking) -- is inherently destructive (to both individuals and groups); though it may be transiently sustained by use of the institutionalized force and fraud of a totalitarian dictatorship.

 

 

=================

Human altruism: economic, neural, and evolutionary perspectives.

Institute for Empirical Research in Economics, University of Zurich, Bluemlisalpstrasse 10, 8006 Zuerich, Switzerland. efehr@iew.unizh.ch

 

Human cooperation represents a spectacular outlier in the animal world. Unlike other creatures, humans frequently cooperate with genetically unrelated strangers, often in large groups, with people they will never meet again, and when reputation gains are small or absent. Experimental evidence and evolutionary models suggest that strong reciprocity, the behavioral propensity for altruistic punishment and altruistic rewarding, is of key importance for human cooperation. Here, we review both evidence documenting altruistic punishment and altruistic cooperation and recent brain imaging studies that combine the powerful tools of behavioral game theory with neuroimaging techniques. These studies show that mutual cooperation and the punishment of defectors activate reward related neural circuits, suggesting that evolution has endowed humans with proximate mechanisms that render altruistic behavior psychologically rewarding.

=================

Recap:

Because of evolution, it feels good to cooperate and trade value for value with others (just like investing does). However, just because it feels good doesn't mean you should give until it hurts. You have got to first create wealth for yourself before you should ever think about investing anything in others. Otherwise, you're just a Schmoo.

 

 

=================

Altruism may arise from individual selection.

Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matemáticas, Universidad Carlos III de Madrid, 28911 Leganés, Madrid, Spain. anxo@math.uc3m.es

 

The fact that humans cooperate with non-kin in large groups, or with people they will never meet again, is a long-standing evolutionary puzzle. Altruism, the capacity to perform costly acts that confer benefits on others, is at the core of cooperative behavior. Behavioral experiments show that humans have a predisposition to cooperate with others and to punish non-cooperators at personal cost (so-called strong reciprocity) which, according to standard evolutionary game theory arguments, cannot arise from selection acting on individuals. This has led to the suggestion of group and cultural selection as the only mechanisms that can explain the evolutionary origin of human altruism. We introduce an agent-based model inspired on the Ultimatum Game, that allows us to go beyond the limitations of standard evolutionary game theory and show that individual selection can indeed give rise to strong reciprocity. Our results are consistent with the existence of neural correlates of fairness and in good agreement with observations on humans and monkeys.

=================

Recap:

The Spaniards above are mistaken. The core of cooperation is the expectation of value of investing in others. Altruism, which is about sacrifice -- not about investment -- has nothing to do with cooperation. Altruism is a "unilateral" decision or process -- not a reciprocal one. 

 

Reality is best depicted by the Ultimatum Game, where one person slices the pie and the other gets to accept the split -- or call off the whole deal (leaving both with nothing) if treated "unfairly." "I slice, you pick" is nothing other than the market-based trade of mutual consent. In a free market, if folks are shysters -- they go broke (because other traders vote against them with their dollars).
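The "shysters go broke" claim falls straight out of the game's rules. A toy run (offer sizes and the 30% rejection threshold are my own illustrative assumptions): a proposer splits a pie of 1.0, and a responder who rejects low-ball offers leaves both sides with nothing:

```python
# Toy ultimatum game: the fair proposer out-earns the greedy one
# once responders are willing to call off unfair deals.

def ultimatum(offer, responder_threshold):
    """Return (proposer_payoff, responder_payoff) for a pie of 1.0."""
    if offer >= responder_threshold:
        return 1.0 - offer, offer        # deal accepted
    return 0.0, 0.0                      # deal called off -- nobody gets paid

fair = ultimatum(0.5, 0.3)    # (0.5, 0.5)
greedy = ultimatum(0.1, 0.3)  # (0.0, 0.0): the shyster goes broke

print(fair[0] > greedy[0])  # True
```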

 

 

===================

Evolutionary games and population dynamics: maintenance of cooperation in public goods games.

Program for Evolutionary Dynamics, Harvard University, One Brattle Square, Cambridge, MA 02138, USA. christoph_hauert@harvard.edu

 

The emergence and abundance of cooperation in nature poses a tenacious and challenging puzzle to evolutionary biology. Cooperative behaviour seems to contradict Darwinian evolution because altruistic individuals increase the fitness of other members of the population at a cost to themselves. Thus, in the absence of supporting mechanisms, cooperation should decrease and vanish, as predicted by classical models for cooperation in evolutionary game theory, such as the Prisoner's Dilemma and public goods games. Traditional approaches to studying the problem of cooperation assume constant population sizes and thus neglect the ecology of the interacting individuals. Here, we incorporate ecological dynamics into evolutionary games and reveal a new mechanism for maintaining cooperation. In public goods games, cooperation can gain a foothold if the population density depends on the average population payoff. Decreasing population densities, due to defection leading to small payoffs, results in smaller interaction group sizes in which cooperation can be favoured. This feedback between ecological dynamics and game dynamics can generate stable coexistence of cooperators and defectors in public goods games. However, this mechanism fails for pairwise Prisoner's Dilemma interactions and the population is driven to extinction. Our model represents natural extension of replicator dynamics to populations of varying densities.

===================

Recap:

Altruism is a morality of death. If fully practiced, it would lead to the extinction of mankind.

 

 

===================

Partner choice creates competitive altruism in humans.

Department of Neurobiology and Behaviour, Cornell University, Ithaca, NY 14853, USA. pjb46@cornell.edu

 

Reciprocal altruism has been the backbone of research on the evolution of altruistic behaviour towards non-kin, but recent research has begun to apply costly signalling theory to this problem. In addition to signalling resources or abilities, public generosity could function as a costly signal of cooperative intent, benefiting altruists in terms of (i) better access to cooperative relationships and (ii) greater cooperation within those relationships. When future interaction partners can choose with whom they wish to interact, this could lead to competition to be more generous than others. Little empirical work has tested for the possible existence of this 'competitive altruism'. Using a cooperative monetary game with and without opportunities for partner choice and signalling cooperative intent, we show here that people actively compete to be more generous than others when they can benefit from being chosen for cooperative partnerships, and the most generous people are correspondingly chosen more often as cooperative partners. We also found evidence for increased scepticism of altruistic signals when the potential reputational benefits for dishonest signalling were high. Thus, this work supports the hypothesis that public generosity can be a signal of cooperative intent, which people sometimes 'fake' when conditions permit it.

===================

Recap:

A free market is all about reputation. Shysters cannot ever dominate in a free market.

 

 

===================

Social evaluation-induced amylase elevation and economic decision-making in the dictator game in humans.

Department of Cognitive and Behavioral Science, Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan. taikitakahashi@gmail.com

 

OBJECTIVE: Little is known regarding the relationship between social evaluation-induced neuroendocrine responses and generosity in game-theoretic situations. Previous studies demonstrated that reputation formation plays a pivotal role in prosocial behavior. This study aimed to examine the relationships between a social evaluation-induced salivary alpha-amylase (sAA) response and generosity in the dictator game. The relationship is potentially important in neuroeconomics of altruism and game theory.

 

METHODS: We assessed sAA and allocated money in the dictator game in male students with and without social evaluation.

 

RESULTS: Social evaluation-responders allocated significantly more money than controls, while there was no significant correlation between social evaluation-induced sAA elevation and the allocated money.

 

CONCLUSIONS: Social evaluation significantly increases generosity in the dictator game, and individual differences in trait characteristics such as altruism and reward sensitivity may be important determinants of generosity in the dictator game task.

===================

Recap:

When folks are free to trade and free to choose -- they try to do right by others (trade value for value).

 

 

===================

Evolution of cooperation with shared costs and benefits.

Department of Biological Sciences, University of Illinois at Chicago, Chicago, IL 60607, USA. squirrel@uic.edu

 

The quest to determine how cooperation evolves can be based on evolutionary game theory, in spite of the fact that evolutionarily stable strategies (ESS) for most non-zero-sum games are not cooperative. We analyse the evolution of cooperation for a family of evolutionary games involving shared costs and benefits with a continuum of strategies from non-cooperation to total cooperation. This cost-benefit game allows the cooperator to share in the benefit of a cooperative act, and the recipient to be burdened with a share of the cooperator's cost. The cost-benefit game encompasses the Prisoner's Dilemma, Snowdrift game and Partial Altruism. The models produce ESS solutions of total cooperation, partial cooperation, non-cooperation and coexistence between cooperation and non-cooperation. Cooperation emerges from an interplay between the nonlinearities in the cost and benefit functions. If benefits increase at a decelerating rate and costs increase at an accelerating rate with the degree of cooperation, then the ESS has an intermediate level of cooperation. The game also exhibits non-ESS points such as unstable minima, convergent-stable minima and unstable maxima. The emergence of cooperative behaviour in this game represents enlightened self-interest, whereas non-cooperative solutions illustrate the Tragedy of the Commons. Games having either a stable maximum or a stable minimum have the property that small changes in the incentive structure (model parameter values) or culture (starting frequencies of strategies) result in correspondingly small changes in the degree of cooperation. Conversely, with unstable maxima or unstable minima, small changes in the incentive structure or culture can result in a switch from non-cooperation to total cooperation (and vice versa). These solutions identify when human or animal societies have the potential for cooperation and whether cooperation is robust or fragile.

=======================

Recap:

When it's a non-zero-sum game, like reality is, folks follow enlightened self-interest -- and choose based on costs and benefits, cooperating only to the extent that their created wealth allows. They can't cooperate without wealth, or with "other people's wealth" (like disproportionately taxing the top 5% of wage earners) -- because that leads to extinction.
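The abstract's central claim -- decelerating benefits plus accelerating costs give an *intermediate* level of cooperation -- can be checked numerically. The functional forms b(x) = 2*sqrt(x) and c(x) = x^2 are my own illustrative choices, not the paper's:

```python
# Grid-search sketch: with a decelerating benefit and an accelerating
# cost, the payoff-maximizing degree of cooperation is interior
# (partial cooperation), not 0 or 1.

import math

def net_payoff(x):
    """Benefit decelerates, cost accelerates, in degree of cooperation x."""
    return 2.0 * math.sqrt(x) - x ** 2

# search the degree of cooperation on [0, 1]
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=net_payoff)

print(0.0 < best < 1.0)   # True: an intermediate level of cooperation
print(round(best, 2))     # about 0.63 for these particular functions
```

Solving 1/sqrt(x) = 2x analytically gives x = 2^(-2/3) ≈ 0.63, matching the grid search; with a linear cost instead, the maximum would sit at a boundary.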

 

Ed

(Edited by Ed Thompson on 11/09, 4:44pm)


Post 30

Sunday, March 20, 2011 - 3:02pmSanction this postReply
Bookmark
Link
Edit
Research update

Key:
cooperation: working with others on either production or trade (non-zero sum games) or equitable distribution (zero-sum games)
defection: using fraud to cheat others out of value
spite: harming others (as in the name of justice), even if it harms you, too -- like suicide bombers, but not so drastic


A theoretical analysis of temporal difference learning in the iterated prisoner's dilemma game.
Our daily experience tells, however, that real social agents including humans learn to cooperate based on experience. In this paper, we analyze a reinforcement learning model called temporal difference learning and study its performance in the iterated Prisoner's Dilemma game. Temporal difference learning is unique among a variety of learning models in that it inherently aims at increasing future payoffs, not immediate ones. It also has a neural basis.

We analytically and numerically show that learners with only two internal states properly learn to cooperate with retaliatory players and to defect against unconditional cooperators and defectors.
Recap:
If the assumption is made that humans can learn from experiences, then folks will learn to cooperate with one kind of person: persons who pass judgment on others and treat them appropriately. As for the people who don't care about judging others -- and don't modify how they treat others -- (i.e., altruists and predators), they run out of people who will continue to cooperate with them (i.e., they go extinct).


Evolution of cooperation under N-person snowdrift games.
Here we study the evolutionary dynamics of cooperators and defectors in a population in which groups of individuals engage in N-person, non-excludable public goods games. ... However, when the group size and population size become comparable, we find that spite sets in, rendering cooperation unfeasible.
Recap:
In a zero-sum game where work needs to be done for anyone to benefit, cooperation is unfeasible between humans -- because freeloaders will be answered with "altruistic spite" from others who judge them.


Mutual trust and cooperation in the evolutionary hawks-doves game.
Using a new dynamical network model of society in which pairwise interactions are weighted according to mutual satisfaction, we show that cooperation is the norm in the hawks-doves game when individuals are allowed to break ties with undesirable neighbors and to make new acquaintances in their extended neighborhood. Moreover, cooperation is robust with respect to rather strong strategy perturbations. ...

Given the metaphorical importance of this game for social interaction, this is an encouraging positive result as standard theory for large mixing populations prescribes that a certain fraction of defectors must always exist at equilibrium.
Recap:
When individuals are allowed free association (including not getting taxed for the benefit of strangers), cooperation is robust and defectors (cheaters) don't always exist, but may go extinct, at equilibrium.


Stochastic evolutionary dynamics of direct reciprocity.
We perform 'knock-out experiments' to study how various strategies affect the evolution of cooperation. We find that 'tit-for-tat' is a weak catalyst for the emergence of cooperation, while 'always cooperate' is a strong catalyst for the emergence of defection. Our analysis leads to a new understanding of the optimal level of forgiveness that is needed for the evolution of cooperation under direct reciprocity.
Recap:
Judging others and treating them appropriately increases cooperation and success among humans. The contrary action -- always being kind to everyone and giving of yourself to those in need (wherever you may find them) -- has the opposite effect: it creates more evil in the world (more cheaters than before).
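The knock-out result above can be reproduced in miniature. Here is a minimal iterated Prisoner's Dilemma sketch; the payoff values (T=5, R=3, P=1, S=0) are the conventional ones, assumed here since the abstract doesn't state them:

```python
# Iterated Prisoner's Dilemma with conventional (assumed) payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):   # cooperate first, then copy opponent
    return opponent_history[-1] if opponent_history else 'C'

def always_cooperate(opponent_history):
    return 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit-for-tat locks in mutual cooperation with itself ...
print(play(tit_for_tat, tit_for_tat))         # (300, 300)
# ... loses only the opening round to a defector ...
print(play(tit_for_tat, always_defect))       # (99, 104)
# ... while unconditional kindness is exploited every single round.
print(play(always_cooperate, always_defect))  # (0, 500)
```

The last line is the "strong catalyst for defection" effect in miniature: the unconditional cooperator feeds the defector 500 points while earning nothing.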


Evolutionary game theory meets social science: is there a unifying rule for human cooperation?
An analysis of the reputation and action rules that govern some representative cooperative strategies both in models and in economic experiments confirms that the different frameworks share a conditional action rule and several reputation rules. The common conditional rule contains an option between costly punishment and withholding benefits that provides alternative enforcement methods against defectors. Depending on the framework, individuals can switch to the appropriate strategy and method of enforcement.
Recap:
The option to punish others with justice (at a cost to yourself), and to withhold all welfare taken from you and given out to others, is what allows for the very possibility of human cooperation in the first place.


Critical dynamics in the evolution of stochastic strategies for the iterated prisoner's dilemma.
These results imply that in populations of players that can use previous decisions to plan future ones, cooperation depends critically on whether the players can rely on facing the same strategies that they have adapted to. Defection, on the other hand, is the optimal adaptive response in environments that change so quickly that the information gathered from previous plays cannot usefully be integrated for a response.
Recap:
When there is objective law (where you can rely on facing the same dynamics in future interactions), then cooperation between humans is sustainable. When there isn't objective law -- such as when a government announces that it will be sporadically intervening into the economy (with non-objective "stimulus plans" and preferential, ad hoc "bailouts") -- then cooperation becomes unsustainable, and the only way to even make temporary gains is to cheat, wrong, and defraud others.


Social experiments in the mesoscale: humans playing a spatial prisoner's dilemma.
In our large experimental setup, cooperation was not promoted by the existence of a lattice beyond a residual level (around 20%) typical of public goods experiments. Our findings also indicate that both heterogeneity and a "moody" conditional cooperation strategy, in which the probability of cooperating also depends on the player's previous action, are required to understand the outcome of the experiment.
Recap:
If you experimentally make people pay more than an aggregate tax rate of 20%, cooperation gradually becomes unsustainable -- which empirically validates the Laffer Curve.


Adaptive Dynamics of Altruistic Cooperation in a Metapopulation: Evolutionary Emergence of Cooperators and Defectors or Evolutionary Suicide?
... the required conditions on the cost and benefit functions are rather restrictive, e.g., altruistic cooperation cannot evolve in a defector population. We also observe selection for too low cooperation, such that the whole metapopulation goes extinct and evolutionary suicide occurs.

We observed intuitive effects of various parameters on the numerical value of the monomorphic singular strategy. Their effect on the final coexisting cooperator-defector pair is more complex: changes expected to increase cooperation decrease the strategy value of the cooperator.
Recap:
If you don't give folks the option to refuse to "give back to society" or the option to refuse to "pay their fair share" (if you make "altruism" a law) -- then evolutionary suicide occurs and we all go extinct.

Ed


Post 31

Thursday, December 29, 2011 - 2:37pm
Here is a study showing that you have to be careful not to explain the cooperation observed in game-theory experiments by reference to pro-social (i.e., "altruistic") motives:


In this study, the researchers made cooperation highly lucrative. If you donated some of your "kitty" (or your "bankroll", if you will) to a hypothetical 'public goods project', then you got a return-on-investment! Another way to say this is that "altruism", in this game, was made into a profitable venture! The "perfect" strategy in this type of game is to donate all of your money to the public goods project, because the more you give, the more you get back. The results?

Even though folks would make the most money by being fully "altruistic" -- and they were painstakingly made aware of that fact -- they were not fully "altruistic". It remains to be explained why folks -- folks who understood what they were doing -- would not give fully to a public goods project (when that is precisely what would have made them the most rich).

Ed

(Edited by Ed Thompson on 12/29, 2:39pm)


Post 32

Thursday, December 29, 2011 - 2:49pm
It remains to be explained why folks -- folks who understood what they were doing -- would not give fully to a public goods project (when that is precisely what would have made them the most rich).
Common sense? Not enough oxytocin? (see other thread)
-------------

More bad research. Actually, it is more like propaganda and an attempt to train people to be good little altruists... disguised as research.

Post 33

Saturday, July 14, 2012 - 8:23am
Evidence that capitalism is hard-wired into humans*

In an article entitled, Risk and the evolution of human exchange, the authors conclude:

The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships.
What they found was that when folks are given sometimes constant, sometimes non-constant access to values, they trade a lot with each other when access is non-constant. This mimics real life. In real life, money doesn't grow on trees and food doesn't passively fall onto your plate. Instead, you have to work to gain value. You have to produce the things you need in order to survive. You either produce or die. Other people, in the same precarious position as you, are "naturally" willing to divide labor and trade with you -- in the interest of self-preservation. It's because values don't fall from trees that people naturally gravitate toward capitalism.

Now, this is true, but with a qualifier: The people you study have got to have been made aware of this fundamental fact that values don't fall from trees. If people forget that, if they think values fall from trees, then they become "free" to intellectually adopt other political strategies besides capitalism (e.g., communism). To do that, to adopt communism in place of capitalism, requires a certain level of mental evasion about the fact that values don't fall from trees.

Ed

*Of course, I picked that title for its emotional appeal -- even though it does not represent the facts of the matter, as viewed through a lens of noncontradictory integration (the refined "truth" of the matter)

(Edited by Ed Thompson on 7/14, 8:24am)


Post 34

Sunday, September 2, 2012 - 8:46am
This just in: 17% of Communists refused to give up half their money for the welfare of others (and 89% of them refused to give up 70-90% of their money).

Earlier this year, there was a Game Theory study done at Southeast University in Nanjing, China. Scientists were checking brain activity in participants playing the Ultimatum Game (a game where you and an "opponent" can both get rich by agreeing to divide up monetary gifts, rather than continually disagreeing with each other). One proposes an offer, the other accepts or rejects. If the offer is rejected, all money from that round is lost. Note how similar this is to the story of Ellis Wyatt in Atlas Shrugged, who -- rather than willingly share his private property with others proposing to take some of it -- destroyed all of his personal wealth instead.

Here is a link to the article:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3383671/

And here are quotes from the article, interspersed with my colorful comments:
The minimal acceptance amount was significantly higher when the property was initially endowed to the participant (4.86±0.33) than when the bill was initially endowed to the allocator (2.86±0.33), p<0.001.
[Translation: I, a 21.6 year-old, communist*, Chinese college student, would be made content with 28.6% of your money. That is the lowest aggregate tax rate I would be okay with. If you are keeping more than 71.4% of your money, then I am going to have to go and join one of those "Occupy Wall Street" militias in protest of your adoption of "excessive capitalism." Conversely, if it's my money, I would be content with 48.6% of it -- allowing you to take 51.4% of my money -- before I would take up arms against you, in defense of my personal property.]
Moreover, the participants indicated that the fairest offer for themselves was 6.48±0.25 yuan (out of 10 yuan) when the property was initially endowed to the participant, which was significantly higher than the amount when the bill was initially endowed to the allocator (4.67±0.26), p<0.001.
[Translation: I, a 21.6 year-old, communist, Chinese college student, think it's most fair if I keep 65% of my money (leaving 35% for you). Conversely, if it's your money, I think it's most fair if I get 47% of it.]
Simple-effect tests showed that the acceptance rate to disadvantageous unequal offers was significantly higher in the "other" condition (0.36±0.07) than in the "self" condition (0.11±0.04), t(20)=3.81, p<0.01.
[Translation: If you only offer me 10, 20, or 30%, then I'll accept it 36% of the time if it was "your" money to begin with, but only 11% of the time if it was "my" money to begin with. In other words, if it's my money to begin with, then 89% of the time I will claw you to the death -- or burn down my own factories (leaving no one with anything) -- in order to keep it in my possession.]
A similar pattern was observed for equal offers (0.99±0.003 vs. 0.83±0.06), t(20)=2.59, p<0.05. However, this effect was absent for advantageous unequal offers, t(20)=–0.70, p>0.1.
[Translation: Even if you offer me half of the bounty, if it was "my" money to start with, then 17% of the time I'm going to reject it (or 17% of my friends would reject it). 17% of us, rather than have to deal with a Chinese Uncle Sam knocking on the door "asking" for half of our wealth, would readily throw the money into the flames before willingly giving up half of what we own -- for the so-called welfare of unfamiliar others.]
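For readers who want to check the "translations" above, here is a minimal sketch of the decision rule at work, using the study's reported mean acceptance thresholds. Everything else (e.g., restricting to whole-yuan offers) is a simplifying assumption of mine:

```python
# Ultimatum Game sketch: the responder accepts an offer iff it meets a
# minimal acceptance threshold. The thresholds are the study's reported
# means (out of a 10-yuan pie): 4.86 when the money started as the
# responder's own property, 2.86 when it started as the proposer's.
MIN_ACCEPT_OWN_PROPERTY = 4.86
MIN_ACCEPT_OTHERS_MONEY = 2.86

def responder_accepts(offer, threshold):
    """Both players get paid only if the responder accepts."""
    return offer >= threshold

def best_whole_yuan_offer(threshold, pie=10):
    """Lowest whole-yuan offer that still gets accepted
    (the proposer keeps the rest)."""
    for offer in range(pie + 1):
        if responder_accepts(offer, threshold):
            return offer
    return None  # no acceptable offer exists

# Endowment matters: the proposer must concede 5 yuan when the pie started
# as the responder's property, but only 3 yuan when it was the proposer's.
print(best_whole_yuan_offer(MIN_ACCEPT_OWN_PROPERTY))  # 5
print(best_whole_yuan_offer(MIN_ACCEPT_OTHERS_MONEY))  # 3
```

The 2-yuan gap between the two cases is the "whose money was it to begin with" effect that the bracketed translations are describing.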


Ed

*I admit that not all Chinese college students are necessarily communists (which calls the very title of this post into question), and that some of them may be capitalists trapped within a largely-communist country.

(Edited by Ed Thompson on 9/02, 9:07am)


Post 35

Sunday, September 2, 2012 - 9:26am
[Note: Below is an 'adaptation-and-repost' from another thread, in order to keep related material together for future reference]

Study proving that altruism -- if it exceeds 13.3% -- is lethal.

Here is the link to the abstract of the article:

Did Warfare Among Ancestral Hunter-Gatherers Affect the Evolution of Human Social Behaviors?

[A free full-text copy of the study is available to those who sign-up with Science (AAAS) at the link in the upper-right.]

What the researcher (Samuel Bowles) discovered is that altruism is linked to war. If you don't have much war, then you cannot have much altruism, either. This is because altruism is unproductive. Another way to say this is that because altruism does not create wealth (and because wealth creation is a requirement of human life), it requires plundering wealth from others -- either by moral appeasement or by outright war. In a telling display of this, Bowles compares three hunter-gatherer groups in Australia:

1) the Murngin
2) the Tiwi
3) the Anbara

The Murngin are perpetual warriors and are able to support a maximum of 13.3% altruism. The Tiwi are more peaceful (can support up to about 7% altruism), and the Anbara are the most peaceful (can support up to almost 3% altruism). One upshot is that if you want what Obama wants -- i.e., lots of altruism -- then we are going to have to go to war with the rest of the world. Another upshot concerns the aggregate tax rate.

An aggregate tax rate is an indication of altruism -- because so very often the money is taken from you to pay for things you don't believe in. If we were a nation in constant war, then an aggregate tax rate of 7% might be justified somehow. If we were a peaceful nation, even an aggregate tax rate of 3% would be potentially unjustifiable. Note: This is based on empirics rather than on an a priori moral argument. I prefer the moral argument, but the scientific argument against altruism is mounting.

About the researcher and exactly what he discovered

Samuel Bowles is a Game Theory researcher attempting to replicate real-life dynamics in the form of statistical outcome models -- and running these dynamics to discover what kind of outcomes you get. Here is a string of telling quotes from the article which work to illustrate what he did, how he did it, and what he found:
I use a variant of these models along with a new set of empirical estimates of the extent of war among both prehistoric and historic hunter-gatherers to derive an explicit measure of the importance of warfare in the evolution of human social behavior. This measure is the maximum degree of altruistic behavior—namely c*, the greatest cost borne by individuals in order to benefit fellow group members—that could have proliferated given the empirically likely extent of warfare during the Late Pleistocene and early Holocene. ...

For simplicity, I represent the altruistic behavior in question as the expression of a single allele and let individuals reproduce asexually; the model is readily extended to any form of vertical transmission, including cultural. ...

What is the maximum cost of altruism (c*) such that the group benefits would offset the within-group selection pressures against the altruists? ...

... note that c* = 0.03, for example, is a quite substantial cost, one that in the absence of intergroup competition would lead the fraction of altruists in a group to fall from 0.9 to 0.1 in just 150 generations. An illustration more directly related to the question of warfare is the following. Suppose that in every generation, a group is engaged in a war with probability κ = 2δ and that an altruistic "warrior" will die with certainty in a lost war and with probability 0.20 in a war in which the group prevails, while nonaltruistic members also die with certainty in lost wars but do not die in won wars. (These mortality assumptions are extremely unfavorable for the altruists.) Assuming the altruists have no reproductive advantages during peacetime, then c = 0.2δ ...

... if groups were as differentiated as these populations and as warlike as the Murngin, between-group competition could overcome very strong within-group selection against altruistic behavior. Even for groups similar to the more peaceful Anbara, quite costly forms of altruism could proliferate by this mechanism (c* = 0.029).

Largest cost (c*) for an altruistic trait to proliferate given estimates of genetic differentiation and mortality in intergroup hostilities (δ) among three Arnhem Land, Australian hunter-gatherer populations. ...
Even being at war all the time, however, would not sustain population-scale altruism (c) of more than about 7% (c* = 0.07). During peacetime, you cannot sustain even 3% altruism on a population scale (c* = 0.03).
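The quoted claim -- that a cost of c = 0.03 would drive altruists from 90% down to about 10% of a group in roughly 150 generations, absent intergroup competition -- can be checked with a simple haploid selection model. This is my own minimal reconstruction of the within-group dynamic, not Bowles's code:

```python
# Within-group selection against altruists, no group competition.
# Altruists pay fitness cost c each generation; non-altruists pay nothing.
# This is a minimal reconstruction of the within-group dynamic Bowles
# describes, not his actual model.
def altruist_fraction(p0, c, generations):
    p = p0
    for _ in range(generations):
        # Standard replicator update: altruist fitness 1-c, others 1.
        p = p * (1 - c) / (p * (1 - c) + (1 - p))
    return p

p150 = altruist_fraction(p0=0.9, c=0.03, generations=150)
print(round(p150, 3))  # ~0.085: down from 0.90, matching the quoted decline
```

A 3% per-generation cost looks tiny, but compounded over 150 generations it wipes out a 90% altruist majority, which is exactly why Bowles needs between-group competition (war) to rescue altruism at all.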

How altruism is limited by war and by group size

The highest amount of altruism possible, under the greatest frequency of warfare (because altruism begets warfare), was 13.3% in a warrior band of 26 people. If you increase the number of people in your tribe or your band then the highest amount of sustainable altruism drops to 7% -- with the peaceful Anbara bringing up the rear at a sustainable altruism rate of 2.9%.

How that relates to a scientific prescription for an upper limit on aggregate tax rates

... and, converting the numbers into aggregate tax rates, this discovery would mean that -- speaking in utilitarian terms -- the highest aggregate tax rates for humans on earth should be:

13.3% for isolated, warrior gangs
7% for warrior societies
3% for peaceful societies

That, and not more than that, is what is sustainable. A society which exceeds these limits will eventually collapse -- though it may take 100 generations. This is the limit of forced taxation. Alternative methods of funding government (e.g., user fees) would not necessarily follow the same dynamics and may, therefore, continually exceed these limits. For example, it's still possible to have a sustainable society where user fees, amounting to 15% of all generated wealth, end up paying for the government -- even if it is impossible to have a sustainable society where forced taxation, amounting to 15% of all generated wealth, does the same thing.

That's because user fees are not altruistic in the sense that forced taxation is.

More on aggregate tax rates

If aggregate tax rates in the US were, say, 6% of all generated wealth, then it might take 3000 years before we, as a society, collapse.

But what if aggregate tax rates are actually much higher than 6%? How fast would our collapse come then?

It may be that with an aggregate tax rate of around 50% that you can collapse a society in, say, 80 or 90 years. If that is the case, then we'd be close to collapsing (because our aggregate tax rate has been close to 50% for a couple of decades now). The only way out is to lower the taxes. We can try going to war in Middle East countries, or whatever -- but that will never support altruism higher than about 7% or so. If you have tax rates of 50% or higher, then no amount of war can save you -- all it can do is postpone the inevitable.

Ed

(Edited by Ed Thompson on 9/02, 9:45am)


Post 36

Saturday, November 24, 2012 - 11:30am
Bowles is associated with another study showing that if 22% of interacting individuals are willing to punish defectors -- if 22% of us have the sense and inclination to practice justice -- then payoffs of cooperation are already two-thirds as large as if we were all walking around being perfect "Don't tread on me or anyone" vigilantes.

What that means is that it's possible to have a capitalist society exist indefinitely (without any war, hunger, etc.), but you need to have about 1 in 5 folks who would be willing to punish rights violators. If 1 in 5 folks would step up to the plate and be willing to do this, then increasing peace and prosperity will ensue. Here is a short story highlighting the dynamics:

Ten folks are stranded on an island and they are wondering whether or not they will experience ever-increasing peace and prosperity. They interact with one another and decide to experiment by dividing labor and trading value for value (some folks fish, others forage, others build huts made of straw, etc.). Every now and again, someone decides to be a free-rider and defect on a contract, only pretending to be cooperative in order to obtain value by fraud. When this happens, if 8 of the 10 people look the other way and don't punish the free-rider but, most importantly, just 2 of the 10 people gossip about and gang up on the free-rider -- then the free-rider behavior is eventually extinguished (or at least minimized to a level so low that it is unimportant).
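The island story boils down to a threshold calculation: cheating stops paying once the expected sanctions exceed the gain from a cheat. Here is a toy sketch; the gain and sanction sizes are purely hypothetical numbers chosen to show the logic:

```python
# Toy model of the island story: a would-be free-rider gains `gain` from
# each cheat, but every punisher who catches him imposes a sanction.
# All parameter values here are hypothetical, chosen only to illustrate
# the threshold logic, not taken from the study.
def cheating_pays(punisher_fraction, group_size=10, gain=1.0, sanction=0.6):
    # Expected number of punishers among the cheater's 9 neighbors
    expected_punishers = punisher_fraction * (group_size - 1)
    return gain > expected_punishers * sanction

# With nobody willing to punish, free-riding is profitable:
print(cheating_pays(0.0))  # True
# With 2 of 10 islanders willing to punish (fraction 0.2, near the 22%
# figure from the study above), expected sanctions exceed the gain:
print(cheating_pays(0.2))  # False
```

The point of the sketch is that the deterrent is probabilistic: no single punisher has to catch every cheat, as long as enough of them exist that cheating is a losing bet on average.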

If 3 of 10 people are willing to punish defectors, results are more impressive.

If 4 of 10 people are willing to punish defectors, results are even more impressive.

Important Note: Notice how you do not even need a majority (>50%) of people onboard with the program, in order to obtain a fully-just and perpetually-improving society. What this means, for example, is that a minority of Objectivists could generate a fully-free society. Not everyone needs to be an Objectivist in order to achieve this. You don't even need half of everyone to be an Objectivist. You could get, say, 95% of all of the benefits of a fully-free society with less than half of the population actually practicing what Peikoff refers to as "a proud morality of eudaemonia."

Now, not everyone on the planet is willing to punish defectors. There are at least 3 reasons for this:

1) there is a risk/cost to being a punisher, and some folks would rather avoid all of the risk/cost associated with benefiting from life inside of a sphere of justice
2) the morality of altruism may have duped you into "looking the other way" ("turn the other cheek")
3) some people -- perhaps as much as 5% of any given population -- are just too young, old, weak, or frail to be in a position to effectively punish others

Regarding (1) above, the scientific article shows how being a punisher, as long as 22% of others do so as well, has net-positive payoffs -- i.e., that it is a game rigged in favor of the punisher, not against him (as previous research, involving unrealistic assumptions, had erroneously concluded). A good question is this:

What proportion of people who have read Ayn Rand would be willing to punish rights violators?

:-)

It definitely exceeds the 22% threshold necessary for a perpetually-improving capitalist society. It's likely that more than half of everyone who has read Rand, and almost 100% of self-proclaimed Objectivists, would be willing to gossip about and gang up on rights violators.

Ed

(Edited by Ed Thompson on 11/24, 11:49am)


Post 37

Saturday, November 24, 2012 - 12:59pm
Now just do the math ...

:-)

According to ARI, 7 million copies of Atlas Shrugged have been sold. Assuming that half of the readers have both adopted Objectivism and are still alive (and that only such readers adopt Objectivism), you get 3.5 million Objectivists in the world. But, because the book, once purchased, can be shared with others, the extent of readership is not reflected by the number of copies sold. In fact, for every copy of Atlas Shrugged sold, there are probably 2-8 readers (while some may have kept the book to themselves and never shared it with others, others shared it with multiple persons). That means that there are 7-28 million Objectivists in the world. Now, if there are 300 million Americans and all of the Objectivists had moved to America -- then the citizenship of America would be 2-9% Objectivist (depending on the "conversion rate").

If we assume -- for the sake of optimism -- that, say, 5% of us in America are already Objectivists, then we only have to capture about 17% more of the total population! And 17% of 300 million is 51 million. That's all we need. We need to "convert" 51 million more people and we are golden (assuming, again for optimism's sake, a static population in America).

That shouldn't be that hard to do. All it would take would be a couple good books, a couple good songs, and a couple good movies ... mixed in with, say, an economic crisis on a scale so grand that it leads most folks to go back and check their premises and ... Voila! ... you are there.

Get crackin'!

:-)

Producers of the world, unite!

Ed

(Edited by Ed Thompson on 11/24, 1:32pm)


Post 38

Saturday, November 24, 2012 - 7:01pm
Can I be " The Punisher" ?
Lol

Post 39

Saturday, November 24, 2012 - 8:39pm
:-)

Yes, Jules, you can be 'the Punisher' and I will merely serve as your back-up. I figured that you would step up to the task. Now, if 1 in 5 people would be willing to work together to hold criminals accountable, then -- according to computer models of interacting individuals watched through time for thousands of generations of reproduction -- paradise awaits us all.

:-)

Ed

p.s. Things are actually more "rosy" than I have outlined them. There is cause for more optimism than I have been portraying. You see, I was following the assumption that you have to get to a society that is 22% Objectivist, but actually you could have a mixture of philosophies -- as long as 22% of them firmly believe in justice. For instance, if 11% of the society were Objectivists, and 11% were members of the Tea Party (dismissing overlap) -- then we'd be where we need to be. And besides getting where we need to be in order to make capitalism work, in the current system we find ourselves in we would need to repudiate naysaying, socialist power-lusters. And we can do that with science.

That's how you get there from here.



