SEMESTER A 2020–21
ID NUMBER ______________________________
1. Please answer the following 5 questions according to the two texts “Is Greed Good?” and “Adventures in Good and Evil”.
2. You have 1½ hours to complete the task. Students with a time extension receive an additional 20 minutes. This task is worth 15% of your final grade.
3. You may write your answers on this page or in a word file.
4. Please upload your answers to Moodle when you have finished.
NOTE: IF ANY EXPRESSIONS ARE COPIED FROM THE TEXT WHERE THE INSTRUCTIONS SAY ‘IN YOUR OWN WORDS’, YOU WILL NOT RECEIVE POINTS. CHANGING THE ORDER OF THE WORDS IN A SENTENCE IS NOT ACCEPTED.
1a. What was the behavior that Ockenfels and his colleagues were studying?
1b. What behavior was Milgram studying?
2. Explain IN YOUR OWN WORDS the similarities between the experiment with the rhesus monkeys (“Adventures in Good and Evil”) and the behavior of the child with the jelly beans (“Is Greed Good?”).
3. Answer the following questions IN YOUR OWN WORDS:
i. According to Ockenfels, what were the results of the Ultimatum Game studies?
ii. According to Christian Smith, what are the effects of consumer capitalism?
4. Revenge and revenge feedback are mentioned in the texts.
i. What is the explanation for revenge in the context of the text Adventures in Good and Evil?
ii. What is the explanation for revenge feedback in the context of the text Is Greed Good?
5. IN YOUR OWN WORDS, compare and contrast the following quotes.
“the concept of Homo economicus (economic man) as a rational, selfish person who single-mindedly strives for maximum profit.” From “Is Greed Good?”
“people’s ethical decision making is strongly driven by gut emotions rather than by rational, analytic thought.” From “Adventures in Good and Evil” (20 points)
Is Greed Good?
Economists are finding that social concerns often outdo selfishness in financial decision making, a view that helps to explain why tens of millions of people send money to strangers they find on the Internet
By Christoph Uhlhaas
From Scientific American July 31, 2007
Could you buy a used car online, sight unseen and without a test-drive? How about a plane? A vehicle changes hands on eBay Motors every 60 seconds, including one private business jet that sold for $4.9 million. Every second buyers collectively swap more than $1,839 for products through eBay, sending money to complete strangers with no guarantee that the goods they buy will in fact arrive, let alone in the condition they expect.
As a rule, they are not disappointed. To some economists, this is a borderline miracle, because it contradicts the concept of Homo economicus (economic man) as a rational, selfish person who single-mindedly strives for maximum profit. According to this notion, sellers should pocket buyers’ payments and send nothing in return. For their part, buyers should not trust sellers—and the market should collapse.
Economist Axel Ockenfels of the University of Cologne in Germany and his colleagues have spent the past several years figuring out why this does not happen. It turns out that humans do not always behave as if their sole concern is their personal financial advantage—and even when they do, they consider social motives in the profit-making equation. As Ockenfels has discovered, a sense of fairness often plays a big role in people’s decisions about what to do with their money and possessions, and it is also an essential part of what drives trust in markets full of strangers such as eBay.
Ockenfels’s Equity, Reciprocity and Competition (ERC) theory, which he developed with economist Gary Bolton of Pennsylvania State University, states that people not only try to maximize their gains but also watch to see that they get roughly the same share as others: they are happy to get one piece of cake as long as the next person does not get two pieces. This fairness gauge apparently even has a defined place in the brain. On eBay, however, fairness takes the system only halfway, researchers have now learned; eBay’s reputation system is critical for augmenting the level of trust enough for the market to work.
Circumstance also sculpts behavior, studies have revealed, regardless of natural character traits or values. That is, whether a person is competing in a market of strangers or negotiating with a partner can make a big difference in whether fairness, reciprocity or selfishness will predominate. In fact, the ERC theory hints at ways to alter economic institutions to nudge people to compete—or cooperate—more or less than they currently do.
Economists have long been studying volunteers in the laboratory to determine how and why they make financial decisions. In competitive markets, from the U.S. Stock Exchange to auctions at Sotheby’s, people generally act like Homo economicus, behaving in ways that maximize their own profits.
But inherent selfishness cannot explain behavior in other settings. Take a child who has been given a bag of jelly beans, which her left-out sibling is eyeing jealously. Many children would voluntarily share the candy just to be fair, even though that would mean fewer jelly beans for them. Mathematicians who practice game theory see something similar when they ask people to bargain in a test of social motives called the Ultimatum Game. In this two-player game, player A is endowed with a certain sum, say, $20, if he agrees to share some of it with player B. If B accepts A’s offer, the money is divided accordingly. But if B rejects the offer, both players end up with nothing.
In Ultimatum Game studies, researchers have found that the average offer is about 40 percent of the sum and that the most frequent split is 50–50, analogous to a child giving her sibling half or nearly half of the jelly beans she received. The recipient, B, usually accepts such roughly equal offers. When A offers less than one third of the total, however, B usually reacts with scorn and scraps the deal. This response seems nonsensical to someone who is only out to maximize profit. But it is more logical if people have a competing social concern: fairness. If individuals want a fair split, then accepting significantly less than that would mean forfeiting that objective.
A motivation for fairness also seems to be an important factor on eBay, in which the “Buy It Now” format—or an auction with just one buyer—resembles an Ultimatum Game; a seller offers an item at a price that a buyer can accept or reject. To test this hypothesis, Ockenfels and Bolton recruited 100 German university students with selling experience on eBay, divided them into 50 buyer-seller pairs, and asked the sellers to hawk $20 certificates (funded by the researchers) to their assigned partners on eBay.
Consistent with previous Ultimatum results, the most popular selling price was $10, which would result in an equal split of the experimental pot. All but one buyer accepted this offer. Prices above $17 were uniformly rebuffed as too greedy, and some buyers also rejected prices between $10 and $17, refuting the idea that monetary incentive alone governs the deal. On the contrary, in this bargaining situation an equal split maximizes profits, Ockenfels says, because buyers generally will not accept unfair offers and sellers seem to realize that. “Fair dealing pays off,” he concludes.
In many cases, however, people will forgive a biased outcome if it comes about by chance rather than through a deliberate act. Ockenfels and Bolton recently asked volunteers to play an Ultimatum Game variant in which player A chooses to split the money either 50–50 or 80–20. If the choice was 80–20, 41 percent of recipients refused the offer. But only 7 percent rejected the 80–20 split when it came from a robot acting at random. This result, Ockenfels says, suggests many people will accept unequal deals as long as all participants have been given a fair chance.
Not everyone is the same, of course. The demand for such procedural fairness, in which people get equal treatment even if the outcome is unfair, may have a cultural component. Anecdotal evidence suggests, for instance, that Americans may be more concerned with procedural fairness than Germans are. Germans seem more likely to insist on equivalent outcomes, Ockenfels says. Individual differences matter, too. Some people are very sensitive to being cheated, whereas others are far less bothered, even nonchalant, when they receive unequal treatment.
That said, discerning values from behavior is often hopelessly confounded by circumstance, Ockenfels says. When he and Bolton asked people to compete for their $20 certificates in experimental eBay auctions with one seller and nine buyers each, they found that the selling price zoomed above $19, a far cry from the equal split that pervaded the previous one-on-one game. Homo economicus trumped fairness in the auction, because a fair player has no way to strive for equity in a situation in which each person must overbid the others to get anything at all. “In markets, all people behave selfishly, but that doesn’t mean they really are,” Ockenfels comments. “The institutions make you behave in certain ways.”
In the researchers’ experimental auction, trust was not a factor, because the (presumably trustworthy) experimenters vouched for the $20 certificates. Yet trust is a critical issue on eBay, in which sellers are anonymous and have little pecuniary incentive to actually ship the items they have sold.
To figure out why they ship anyway, Ockenfels, Bolton and Penn State business professor Elena Katok asked 144 university students to play a trust game that mimics the situation on eBay. In the game, a seller and a buyer each start off with the same sum, say, $35; that is the payoff when no trade takes place. The seller also has an item to be sold for $35, but its value to the buyer is $50, so a trade nets the buyer an extra $15. The seller pays the shipping costs here, $20, so a trade also nets the seller an additional $15. But if the seller fails to ship an item, the seller receives a $35 bonus and the buyer loses the entire endowment. If the buyer chooses not to take this risk, no trade occurs.
In this game, the outcome is fair after either a successful trade or no trade—but most advantageous to the seller if the seller fails to ship. Homo economicus would thus never ship, and no rational buyer would buy. But 37 percent of the sellers were willing to ship, the researchers found, suggesting that some sellers were motivated by an intrinsic sense of fairness and some buyers had bet on that. And in a modified trust game that endows the buyer with an extra $70 regardless of the outcome, the researchers predicted that fair-minded sellers would not ship, because that choice would equate buyer and seller sums at $70. As expected, many fewer sellers (only 7 percent) decided to send the fictitious goods, signifying that the main reason for trustworthiness is fairness.
Rumor Has It
Nevertheless, sellers must ship as much as 70 percent of the time for buying in such a game—or on eBay—to be profitable, according to Ockenfels. How does eBay boost trust to that level? The answer: feedback. On eBay, sellers and buyers can evaluate one another after a transaction has been completed, and these evaluations are made public for future buyers and sellers. “This reputation system functions like an organized rumor mill and replaces the gossip systems of the off-line world,” Ockenfels explains. Because a bad reputation scares off future buyers, even strategic and rational sellers have an incentive to be trustworthy.
To quantify the power of this rumor mill, Ockenfels and his colleagues compared market activity among strangers matched for 30 rounds of transactions without a feedback mechanism against a similar market that included feedback. They found that the feedback system elicited significantly more buying—56 percent—as compared with buying without it—37 percent. More shipping also occurred, rising to 73 percent—above the threshold for trust to be profitable—as compared with shipping for transactions without the reputation system: these hovered around 39 percent. The results indicate that feedback can fill the trust gap in a market such as eBay’s, multiplying the impact of intrinsic trustworthiness.
But the feedback system is imperfect. About 98 percent of ratings on eBay are positive, according to Ockenfels, suggesting that some disappointed eBay buyers do not post negative ratings. Buyers may fear “revenge feedback,” when a seller retaliates for a bad rating with a negative rating of the buyer, claiming that the buyer paid late or with a bad check, for instance. Indeed, in Ockenfels’s experiments, many of those who are not happy with a trade do not give feedback at all.
This lack of feedback is obviously not good for the reputation system. So Ockenfels and Bolton, along with economist Ben Greiner, now at Harvard University, have been working with eBay to design choices that induce people to post truthful and detailed negative feedback. eBay’s revised format, Feedback 2.0, debuted April 30. It lets buyers rate transaction specifics such as accuracy of an item’s description, seller communication and shipping speed, in addition to the overall rating of positive, neutral or negative.
The extra detail increases the feedback’s value to future buyers. And to help allay worries of retaliatory feedback, buyers give their ratings anonymously. Furthermore, sellers can see the detailed ratings only after providing feedback of their own, preventing retaliatory feedback even if the seller later intuits which buyer posted a poor evaluation. What the new system cannot prevent, however, is one-time cheaters. Buying a car or plane online is still pretty risky.
Ockenfels is not about to do that. He visits eBay only occasionally, to buy things for his two children. And if you notice an auction with “aockenfels” as the seller, you have probably stumbled on an economics experiment.
Adventures in Good and Evil
What makes some of us saints and some of us sinners? The evolutionary roots of morality.
By Sharon Begley
From: Newsweek
It isn’t surprising that the best-known experiments in psychology (apart from Pavlov’s salivating dogs) are those Stanley Milgram ran beginning in the 1960s. Over and over, with men and women, with the old and the young, he found that when ordinary people are told to administer increasingly stronger electric shocks to an unseen person as part of a “learning experiment,” the vast majority – sometimes 93 percent – complied, even when the learner (actually one of the scientists) screamed in anguish and pleaded, “Get me out of here!” Nor is it surprising that Milgram’s results have been invoked to explain atrocities from the Holocaust to Abu Ghraib and others in which ordinary people followed orders to commit heinous acts. What is surprising is how little attention science has paid to the dissenters in Milgram’s experiments. Some participants did balk at following the command to torture their partner. As one of them, World War II veteran Joseph Dimow, recalled decades later, “I refused to go any further.”
On second thought, ignoring the few people who did not fit the pattern – in this case, of throwing morality to the wind in order to obey authority – is not that surprising: in probing the neurological basis and the evolutionary roots of good and evil, scientists have mostly focused on the majority and made sweeping generalizations. In general, most people’s moral sense capitulates in the face of authority, as Milgram showed. In general, the roots of our moral sense – of honesty, altruism, compassion, generosity and sense of justice and fairness – are sunk deep in evolutionary history, as can be seen in our primate cousins, who are capable of remarkable acts of altruism. In one classic experiment, a chain in the cage of a rhesus monkey did double duty: it brought food to the monkey who pulled it, but delivered an electric shock to a second monkey. After observing the effect of pulling the chain on their companions, one monkey stopped pulling the chain for five days and one stopped for 12 days, primatologist Frans de Waal recounts in his 2006 book, “Primates and Philosophers: How Morality Evolved.” The monkeys “were literally starving themselves to avoid inflicting pain on another,” he writes. The closer a monkey was related to the victim, the longer it would go hungry, which supports the idea that morality evolved because it aided the survival of those with whom we share the most genes. Darwin himself viewed morality as the product of evolution. But monkeys and apes, like people, have taken a trait that evolved to help kin and extended it to completely unrelated creatures. De Waal once saw a chimpanzee pick up an injured starling, climb the highest tree in her enclosure, carefully unfold the bird’s wings and loft it toward the fence to get it airborne.
And the final “in general” is that people’s ethical decision making is strongly driven by gut emotions rather than by rational, analytic thought. If people are asked whether they would be willing to throw a switch to redirect deadly fumes from a room with five children to a room with one, most say yes, and neuroimaging shows that their brain’s rational, analytical regions had swung into action to make the requisite calculation. But few people say they would kill a healthy man in order to distribute his organs to five patients who will otherwise die, even though the logic – kill one, save five – is identical: a region in our emotional brain rebels at the act of directly and actively taking a man’s life, something that feels immeasurably worse than the impersonal act of throwing a switch in an air duct. We have gut feelings of what is right and what is wrong.
These generalizations are all well and good, but they get you only so far. They do not explain, for instance, why Joseph Dimow balked at Milgram’s experiments. They do not explain why a Tibetan monk who had been incarcerated for years by the Chinese said (in a story the Dalai Lama is fond of telling) that his greatest fear during captivity was that he would lose his compassion for the prison guards who tortured him. They do not explain why – given the human capacity for forgiveness and revenge, for compassion as well as cruelty, for both altruism and selfishness – some people fall at one end of the moral spectrum and some at the other. Nor do they explain a related mystery – namely, whether it is possible to cultivate virtue through the way we construct a society, raise children or even train our own brains.
Saying that the brain is designed for both virtues and vices “tells us nothing more than what everyone already knew,” says Alan Wallace, a Buddhist scholar and president of the Santa Barbara Institute for Consciousness Studies. “The important questions are what accounts for human variation in moral behavior? And are there ways to cultivate virtues?” Unfortunately, says Ernst Fehr of the University of Zurich, who has done pioneering work on the evolution of altruism and cooperation, there is precious little research on individual differences. “We know that women tend to be more altruistic than men on average, older people tend to be more altruistic than younger ones, students are less altruistic than nonstudents,” he says. “People with higher IQs tend to be more altruistic/cooperative.” However, there is little or no correlation between altruism and standard personality traits such as shyness, agreeableness and openness to new experiences.
That may be because altruism and its cousin, generosity, seem to reflect less who you are than what you see. The greatest barrier to greater generosity, at least in the wealthy West, is that “people think they’re in a world of scarcity and living on the edge,” says Christian Smith of Notre Dame University, who has studied what motivates people to give. “Consumer capitalism makes people feel they don’t have enough, so they feel they don’t have enough to give away.” But obviously some people do give very generously. That may reflect something very basic. “Being taught that it’s important to give and, even more, having that behavior modeled for you makes a big difference,” says Smith. So does empathy, which may explain why panhandlers on my subway so often seem to do better with people who are scruffily dressed and struggling than with the pearls-and-pumps set.
Observing compassion and forgiveness can spur those virtues, too. But in these cases, whether you are likely to be forgiving or vengeful, compassionate or cold, may depend less on having a role model and more on emotion. A specific cluster of emotional traits seem to go along with compassion. People who are emotionally secure, who view life’s problems as manageable and who feel safe and protected tend to show the greatest empathy for strangers and to act altruistically and compassionately. In contrast, people who are anxious about their own worth and competence, who avoid close relationships or are clingy in those they have tend to be less altruistic and less generous, psychologists Philip Shaver of the University of California, Davis, and Mario Mikulincer of Bar-Ilan University in Israel have found in a series of experiments. Such people are less likely to care for the elderly, for instance, or to donate blood.
Intrigued by the growing evidence that the brain can be altered by experience in fundamental ways – a property called neuroplasticity – Shaver wondered if it would be possible to induce feelings of security and self-worth, thereby strengthening the neural circuitry that underlies compassion and altruism. “If only people could feel safer and less threatened, they would have more psychological resources to devote to noticing other people’s suffering and doing something to alleviate it,” says Shaver, who as a young man considered entering the priesthood.
To find out, he and Mikulincer had volunteers watch a young woman perform a series of unpleasant tasks. Liat looked at gory photographs of people who had been severely injured. She pet a rat. She immersed a hand in ice water. And then she faced the prospect of petting a tarantula. After making a valiant attempt, she whimpered that she couldn’t, begging that “maybe the other person can do it.” Explaining that the experiment had to continue, the scientists asked a volunteer if he would trade places with Liat (who was actually part of the study). Volunteers who were trusting and secure in their own skin were four times more likely to swap places than those who were anxious and insecure. Even inducing this sense of trust and security made people more likely to help Liat. “Making a person feel more secure had this beneficial effect,” says Shaver. “It worked on everyone.” It was an intriguing hint that virtue could be boosted by altering people’s emotions.
The Tibetan monk who worried that he would grow to hate his Chinese captors has not had his brain scanned for clues to what accounted for his compassion, but others have. Encouraged by the Dalai Lama to lend their brains to science, Buddhist monks have made regular treks to the University of Wisconsin. There, psychologist Richard Davidson uses fMRI to compare activity in the brains of monks who practice Buddhist compassion meditation (a deep, sustained focus on the wish that all sentient beings be free from suffering) to that in the brains of volunteers who do not. One difference leaped out: heightened activity in regions involved in perspective-taking and empathy, not just during meditation, when you’d expect it, but when the monks viewed pictures of suffering, such as an injured child. “It seems that mental training that cultivates compassion produces lasting changes in these circuits, changes specific to the response to suffering,” says Davidson. “The message I’ve taken from this is that there are virtues that can be thought of as the product of trainable mental skills.”
Redesigning the brain to strengthen the circuits that underlie virtues obviously can’t explain all the differences between people in their levels of altruism, compassion and willingness to forgive. Meditation, after all, remains a niche activity. But ordinary, everyday experiences leave footprints on the brain no less than effortful mental training does. Dimow attributed his refusal to torture his unseen partner in the Milgram experiment to being brought up in a family that was “steeped in a class-struggle view of society, which taught me that authorities would often have a different view of right and wrong than mine,” and to his Army training, when “we were told that soldiers had a right to refuse illegal orders.”
Psychologist Michael McCullough of the University of Miami calls such experiences “learning histories,” and he suspects they explain much of the difference between people in their willingness to forgive and their desire for revenge. In his 2008 book “Beyond Revenge: The Evolution of the Forgiveness Instinct,” he argues that both forgiveness and revenge “solved critical evolutionary problems for our ancestors.” Forgiveness helps to preserve valuable relationships. Exacting revenge acts as a deterrent against attacks, cheating or freeloading. It also establishes the revenge taker as someone not to be crossed, preempting future attacks. “We have blueprints in the brain for both revenge and forgiveness, and depending on our circumstances as well as our life histories, we are more likely to use one or the other,” says McCullough. “By the time I’m an adult, my history of being betrayed, violated, having my trust broken – or their opposites – pushes me toward a strategy tuned to the circumstances of my development.”
When people can count on the rule of law to punish infractions, they are less prone to seek personal revenge. Conversely, when society lacks a mechanism to defend people’s rights, “parents teach their children to cultivate a tough reputation and not let anyone get away with messing with them,” McCullough says. But there is also what he calls an “effortful path to forgiveness.” More recently evolved parts of the brain might exert top-down control over the emotional regions that otherwise compel vengeance – probably something similar to the mental training that the Buddhists undergo. After a gunman took Amish schoolchildren hostage in 2006, killing five girls, the community said they not only forgave the killer, they also donated money to his widow. “Evolution favors organisms that can be vengeful when it’s necessary, that can forgive when it’s necessary and that have the wisdom to know the difference,” says McCullough. If scientists can fathom the roots of the differences between sinner and saint, maybe more of us can move into the latter group.