Monday, January 24, 2005

Whether Selfish or Co-operative, We Get What We Want

Last Friday I attended the very optimistically named "spring" edition of the Boston KM Cluster. Temperatures hovered close to zero (Fahrenheit) that morning. Any groundhogs in the neighborhood were surely laughing at the foolishness and frozen noses of those of us who heralded "spring" by attending this event.

Quite a few notables from the intersecting worlds of business and social network analysis attended this KM Cluster. I'm not going to say who, exactly, because we held the meeting under the "Chatham House Rule":
"When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed."
I had never formally encountered the Chatham House Rule before, but I will certainly remember it from now on. It both promotes open discussion within the meeting and enables relatively specific follow-up after the meeting (especially when the meeting relates to broader community development).

One point raised during the conference especially resonated with me because it related directly to my recent posts on structural holes. Recall that a structural hole is a gap between social groups, and that a broker can profit by importing an old idea from one group into a novel setting in the other group, where the idea assumes significant new value. This theory is an extremely compelling explanation for how collaborative innovation works.

However, idealistic networkers like myself can have a hard time swallowing all the implications of structural holes. In particular, the theory of structural holes suggests that information brokers have much to gain by strategically hoarding information and sharing only what suits them. And that puts me in a bit of a spot: as objective as I try to be, I have to admit that much of my work in SNA is fueled by my personal desire to promote collaboration as a pragmatic tool for success in a culture that seems more focused on the benefits of competitiveness. So what do I do when my favorite theory in SNA (structural holes) seems to argue strongly in favor of selfish competition?

One of the speakers at KM Cluster addressed the topic of structural holes very specifically, and added a crucial insight that helped me resolve this formerly thorny dilemma. He pointed out that information brokers can pursue power or leadership. Brokers can accumulate power by hoarding information and strategically sharing only what benefits them personally. Brokers who want to grow as leaders shouldn't be so selfish, however. Hoarding information may improve opportunities for personal power, but it prevents colleagues from sensing the larger community to which they belong. Brokers who share this kind of information promote themselves as leaders even as they let opportunities for personal power slip from their grasp.

In the end, there is plenty of room for competition and collaboration, and for power and leadership. In fact, research suggests that society enjoys just the right amount of each -- that there may be just enough selfish people and co-operative people so that the expected benefit of either strategy is the same. For a very readable glimpse of this theory (which draws heavily on game theory in general and evolutionarily stable strategies in particular), check out this article in the latest issue of The Economist: "Games People Play."

2 comments:

Bruce Hoppe said...

Human evolution

Games people play
Jan 20th 2005
From The Economist print edition


The co-operative and the selfish are equally successful at getting what they want

MANY people, it is said, regard life as a game. Increasingly, both biologists and economists are tending to agree with them. Game theory, a branch of mathematics developed in the 1940s and 1950s by John von Neumann and John Nash, has proved a useful theoretical tool in the study of the behaviour of animals, both human and non-human.

An important part of game theory is to look for competitive strategies that are unbeatable in the context of the fact that everyone else is also looking for them. Sometimes these strategies involve co-operation, sometimes not. Sometimes the “game” will result in everybody playing the same way. Sometimes they will need to behave differently from one another.


But there has been a crucial difference in the approach taken by the two schools of researchers. When discussing the outcomes of these games, animal behaviourists speak of “evolutionarily stable strategies”, with the implication that the way they are played has been hard-wired into the participants by the processes of natural selection. Economists prefer to talk of Nash equilibria and, since economics is founded on the idea of rational human choice, the implication is that people will adjust their behaviour (whether consciously or unconsciously is slightly ambiguous) in order to maximise their gains. But a study just published in the Proceedings of the National Academy of Sciences, by Robert Kurzban of the University of Pennsylvania and Daniel Houser of George Mason University in Fairfax, Virginia, calls the economists' underlying assumption into question. This study suggests that it may be fruitful to work with the idea that human behaviour, too, can sometimes be governed by evolutionarily stable strategies.

Double or quits?

Dr Kurzban and Dr Houser were interested in the outcomes of what are known as public-goods games. In their particular case they chose a game that involved four people who had never met (and who interacted via a computer) making decisions about their own self-interest that involved assessing the behaviour of others. Each player was given a number of virtual tokens, redeemable for money at the end of the game. A player could keep some or all of these tokens. Any not kept were put into a pool, to be shared among group members. After the initial contributions had been made, the game continued for a random number of turns, with each player, in turn, being able to add to or subtract from his contribution to the pool. When the game ended, the value of the pool was doubled, and the new, doubled value was divided into four equal parts and given to the players, along with the value of any tokens they had held on to. If everybody trusts each other, therefore, they will all be able to double their money. But a sucker who puts all his money into the pool when no one else has contributed at all will end up with only half what he started with.
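The payoff rule described above is simple enough to sketch in a few lines. The following is a minimal illustration, assuming an endowment of 10 tokens per player (the article does not state the actual stakes) and leaving out the repeated turns and random stopping point:

```python
def public_goods_payoff(contributions, endowment=10):
    """Final payoff for each of the four players: the tokens they kept,
    plus an equal share of the doubled pool. The endowment of 10 is a
    hypothetical value for illustration."""
    pool = sum(contributions)
    share = 2 * pool / len(contributions)
    return [endowment - c + share for c in contributions]

# If all four players trust each other and contribute everything,
# everyone doubles their money:
print(public_goods_payoff([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# A lone "sucker" who contributes everything while the others
# contribute nothing ends up with only half his endowment:
print(public_goods_payoff([10, 0, 0, 0]))  # [5.0, 15.0, 15.0, 15.0]
```

The two printed cases correspond to the article's two extremes: universal trust doubles everyone's money, while a lone contributor among free-riders halves his.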

This is a typical example of the sort of game that economists investigating game theory revel in, and both theory and practice suggest that a player can take one of three approaches in such a game: co-operate with his opponents to maximise group benefits (but at the risk of being suckered), free-ride (ie, try to sucker co-operators) or reciprocate (ie, co-operate with those who show signs of being co-operative, but not with free-riders). Previous investigations of such strategies, though, have focused mainly on two-player games, in which strategy need be developed only in a quite simple context. The situation Dr Kurzban and Dr Houser created was a little more like real life. They wanted to see whether the behavioural types were clear-cut in the face of multiple opponents who might be playing different strategies, whether those types were stable, and whether they had the same average pay-off.

The last point is crucial to the theory of evolutionarily stable strategies. Individual strategies are not expected to be equally represented in a population. Instead, they should appear in proportions that equalise their pay-offs to those who play them. A strategy can be advantageous when rare and disadvantageous when common. The proportions in the population when all strategies are equally advantageous represent the equilibrium.
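To see how frequency-dependent payoffs produce this kind of equilibrium, consider a toy two-strategy example (the payoff functions and numbers are invented for illustration and are not taken from the study):

```python
# Toy frequency-dependent payoffs for two hypothetical strategies.
# p is the fraction of the population playing "free-rider".
def freerider_payoff(p):
    return 10 - 8 * p   # very profitable when rare, crowded out when common

def cooperator_payoff(p):
    return 6 - 2 * p    # less sensitive to the population mix

# The equilibrium is the frequency at which the two payoffs are equal:
# 10 - 8p = 6 - 2p  =>  p* = (10 - 6) / (8 - 2) = 2/3
p_star = (10 - 6) / (8 - 2)
print(p_star)                    # ~0.667
print(freerider_payoff(p_star))  # equal payoffs at equilibrium
print(cooperator_payoff(p_star))
```

Below p*, free-riding pays better and spreads; above p*, co-operating pays better; selection therefore pushes the mix toward p*, where both strategies earn the same expected payoff, which is the pattern the Kurzban-Houser data exhibit.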

And that was what happened. The researchers were able to divide their subjects very cleanly into co-operators, free-riders and reciprocators, based on how many tokens they contributed to the pool, and how they reacted to the collective contributions of others. Of 84 participants, 81 fell unambiguously into one of the three categories. Having established who was who, they then created “bespoke” games, to test whether people changed strategy. They did not. Dr Kurzban and Dr Houser were thus able to predict the outcomes of these games quite reliably. And the three strategies did, indeed, have the same average pay-offs to the individuals who played them—though only 13% were co-operators, 20% free-riders and 63% reciprocators.

This is only a preliminary result, but it is intriguing. It suggests that people's approaches to co-operation with their fellows are, indeed, evolutionarily stable. Of course, it is a long stretch from showing equal success in a laboratory game to showing it in the mating game that determines evolutionary outcomes. But it is good to know that in this context at least, nice guys do not come last. They do just as well as the nasty guys and, indeed, as the wary majority.
