Wednesday, February 25, 2009

N-Effect Review: Part One

Does the number of people in a competition influence competitive motivation? If so, would additional competitors likely increase or decrease motivation? Research by Stephen M. Garcia and Avishalom Tor attempts to answer these questions. Their resulting paper in Psychological Science, entitled “The N-Effect: More Competitors, Less Competition,” appears to be generating a lot of interest in the research community. In fact, as of two weeks ago it was the number two ranked paper in recent downloads on SSRN. In a three-part series, I will provide a comprehensive analysis of this newly influential paper. This first installment includes a paper summary and an initial criticism of methods.


Previous research suggests that social comparison leads to increased competitive motivation in subjects. While most prior work examined subjective factors in social comparison, this new study focuses on an objective factor inherent to all competitive situations, namely the number of competitors. Though the probability of winning is held constant, the authors of this study predict that, past a certain threshold, subjects facing many competitors in performing an individual task will have less competitive motivation than subjects facing few competitors. They claim this “N-Effect” occurs because, as the number of competitors (N) moves from few to many, it becomes increasingly difficult and less informative for subjects to partake in social comparisons, and it is these social comparisons that fuel competitive motivation.


When assessed on the whole, the evidence presented by Garcia and Tor makes a good case for the existence of an N-Effect; however, many of the component parts of their supporting studies and arguments deserve discussion.

The authors are primarily interested in competitive motivation, yet the studies contained in their paper measure a number of other dependent variables, some requiring an extended chain of logical connections to support the overall N-Effect theory.

Study number followed by dependent variable:

1a & 1b--Actual performance (test scores)
2--Actual performance (speed)
3--Self-reported motivation
4--Self-reported competitive feelings / social comparison
5--Self-reported motivation / comparison / ease of winning

For instance, SAT scores in Study 1a are a measure of actual performance on a test-taking task, which has only an indirect relationship to competitive motivation. Instead of the linear relationship the authors seem to imply, one could have predicted an inverted U-shaped relationship between competitive motivation and actual performance. At a high enough level of motivation, performance should start to suffer as subjects become overstimulated. So in Study 2 it is possible that subjects trying to rapidly take a test are already on the far, downward-sloping side of the motivation/performance curve. If that were true, the 100-N condition could actually produce more competitive motivation than the 10-N condition yet still fit the result of a slower completion time. This explanation may be less likely than the more intuitive one assumed by the authors, but its existence calls for further evidentiary support if the authors’ theory is to stand.
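The inverted-U possibility can be made concrete with a small sketch. All numbers here are hypothetical and chosen purely for illustration; they are not drawn from the paper.

```python
# Hypothetical sketch (not from Garcia and Tor): an inverted-U curve
# linking competitive motivation to measured performance.

def performance(motivation, peak=5.0):
    """Quadratic inverted-U: performance falls off on either side of the peak."""
    return 100 - (motivation - peak) ** 2

# Suppose the 10-N condition induces above-peak motivation and the 100-N
# condition induces even higher (over-stimulating) motivation:
motivation_10n = 6.0
motivation_100n = 9.0

perf_10n = performance(motivation_10n)    # 99.0
perf_100n = performance(motivation_100n)  # 84.0

# More competitive motivation, yet worse measured performance
# (e.g. slower completion time), consistent with the observed data:
assert motivation_100n > motivation_10n
assert perf_100n < perf_10n
```

The point is simply that a result of "worse performance with larger N" is compatible with either less motivation or more motivation, depending on where subjects sit on the curve.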

Study 1b’s conclusion that actual performance on the CRT is a measure of competitive motivation is even more difficult to justify. Whereas the SAT is a form of competition with material rewards for high performance relative to fellow test takers, the CRT is not competitive in nature. CRT scores remain the private knowledge of the subject (if they are even shared with the subject at all), and these scores do not affect the acquisition of material rewards such as admittance to good universities. Additionally, a very high degree of competitive motivation might interfere with the suppression of “intuitive,” “System 1” responses that “spring quickly to mind” (Frederick, 2005), lowering the scores of highly motivated subjects. This is contrary to the prediction of Garcia and Tor.

Given the ability to make diametrically opposed predictions regarding high competitive motivation and actual performance, it is difficult to place much confidence in the results of Studies 1 and 2 on their own. Fortunately, subsequent studies do measure motivation more directly through subject self-reporting. However, self-reporting has its own set of pitfalls. Subjects may not truly know their own level of motivation and/or may choose levels they believe are socially appropriate or what the researcher wants to see. Studies 3, 4, and 5 would be much stronger if the authors had a more direct way of measuring competitive motivation, such as effort invested in actually performing a task or perhaps willingness to make actual payments to eliminate other competitors from consideration.

In assessing the findings one must also remember that imagined N and actual N are not equivalent. Subjects physically running a road race in a crowd of 500 people that they can see, smell, and bump into might experience different levels of motivation than they would facing the imagined nameless and faceless horde of 500 from Study 3. This fact may currently limit the N-Effect’s ability to extend its claims beyond imagined competition to real competition. While Studies 1a and 1b do examine subjects who are physically in a room facing actual other “competitors,” as previously noted these studies measure actual performance and not motivation.

The Facebook task used in Study 5 may be inappropriate for a controlled examination of competitive motivation derived from social comparison. Inherent to Facebook are the concept of “friending” and the social display of popularity. The Facebook friending task could work as a social comparison prime, especially for undergraduate subjects. While this may affect all subject groups equally and thus be a non-factor when comparing groups to each other, it is also possible that such priming could impact the results in unexpected ways, especially if it were not recognized by the researchers at the design stage. A different task should have been chosen for this study.

Next Installment: Alternative Explanations for N-Effect Results

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.

Wednesday, February 18, 2009

ID the Elasticity

Imagine that you were asked to choose between two movies recommended by Netflix. For the purposes of this exercise, also imagine that you are a cheapskate like me and have only subscribed to the “one at a time” movie rental option so you can indeed only choose one movie to watch next. In this scenario, you will not learn the title of the movie before choosing but you will learn bit by bit about three key features for each movie, one feature at a time. First you learn the names of the star actors and actresses for each film. Then, after a few seconds, you learn who directed each film. Finally you are told each movie’s genre. You do not have to make a choice until you have heard all of the information.

Now, does the order in which the feature information is presented make any difference? For instance, if instead you were told first about the genre, then the actors, and then the director, would you make a different choice of film than in the original scenario? Could you find yourself spending the weekend with The Wedding Crashers instead of Apocalypse Now simply because the order changed? Normative decision theory holds that order doesn’t matter. However, research into Information Distortion by J. Edward Russo, Victoria Medvec, and others suggests that, in practice, order matters a great deal to decision makers. It has long been known that people irrationally seek out and interpret information in ways that favor a decision that has already been made. This happens for a variety of reasons, including cognitive dissonance reduction. However, Information Distortion (ID) takes place much earlier, before the decision has actually been made, which makes it all the more fascinating.

ID theory suggests that someone just has to form an initial preference for one option over another, and thereafter any new information he or she receives gets distorted to favor that initial preference. As when standing in front of a funhouse mirror, with ID the new facts themselves look materially different. So in our movie example, if the first thing learned was the directors, David Dobkin and Francis Ford Coppola respectively, you would be more likely to interpret further information with a bias toward your initial preference based only on director (hopefully Coppola). If instead you first considered “drama or comedy” and your initial preference tilted toward comedy, then when you later learned about the directors you might suddenly have a newfound respect for David Dobkin and have a greater chance of watching Vince Vaughn get crushed in backyard football and tied to the bed posts.
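The order dependence ID predicts can be sketched with a toy model. This is my own illustration, not Russo’s formalism: each new attribute rating gets nudged toward whichever option currently leads, and the same ratings in a different order can then produce a different final choice. The attribute scores and bias size are hypothetical.

```python
# Toy model of Information Distortion (my own sketch, not the authors'):
# each incoming attribute rating is distorted toward the current leader,
# so presentation order can flip the final choice.

def choose(ratings, bias=0.6):
    """ratings: list of (score_A, score_B) pairs, one per attribute,
    in presentation order. Returns the chosen option."""
    total_a = total_b = 0.0
    for score_a, score_b in ratings:
        if total_a > total_b:      # A leads: distort this attribute toward A
            score_a += bias
        elif total_b > total_a:    # B leads: distort this attribute toward B
            score_b += bias
        total_a += score_a
        total_b += score_b
    return "A" if total_a >= total_b else "B"

# Same three attribute ratings (say director, actors, genre), two orders.
# A wins the first attribute; B wins the other two.
ratings = [(7, 4), (3, 6), (4, 5)]

print(choose(ratings))                   # early lead for A gets amplified: A
print(choose(list(reversed(ratings))))   # early lead for B instead: B
```

With no distortion (bias=0) the order would be irrelevant and B would win either way; the bias term is what lets an early leader hang on.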

Why does ID happen? Russo along with collaborators Kurt Carlson, Margaret Meloy, and Kevyn Yong put forth an answer in their 2008 paper published in the Journal of Experimental Psychology. Their research examines three possible goals as causes of ID: conserving effort, creating separation (making the choices more distinct), and maintaining consistency. Over the course of three experiments they conclude that consistency is the most likely explanation. However, it should be noted that these experiments tested a limited universe of three theories. There could be other significant factors in ID.

Before considering additional factors, it is worth making a few observations about the theories chosen for testing by Russo and colleagues. Of the theories, only consistency has a strong social element. I would guess that if subjects were asked to explicitly rank their goals by importance, consistency would be ranked highest. Additionally, the goal of conserving effort would be associated with making a quick decision. Quick decisions are made only once, early in the process, and then it is over. So in this case subjects have limited opportunity to express their goal to experimenters: a “conserving effort” subject would presumably make a decision and then simply try to ignore further information. A “consistency driven” subject, on the other hand, must express his or her goal each time new information arrives. It may be difficult to compare these two goals using the same experimental design.

Now, a few alternative explanations do come to mind. Perhaps subjects experience a form of “trial choice” and start “rooting” for their choice to be right. Under this theory, although subjects are not yet locked into a decision, they are trying it on for size and simulating the experience of having made a final choice. If this alternative theory is true, then there may be very little new going on with ID at all. Instead the phenomenon would be a mere extension of the classic distortion theories to trial as well as final choices. Another, less powerful, explanation lies in subject interpretation of “authoritative intent.” Chefs creating menus, professors designing word problems, and even Netflix recommenders usually present information in an intentional order, and that order often follows the rule of most important information first. Subjects may be relying too much on this typical pattern.

A final alternative challenges not the goal but the mechanism of ID, elements of which are suggested in Christopher Hsee’s theory of Elastic Justification. Information Distortion, in its very name, implies a change in the decision maker’s interpretation of the facts themselves. Movie directors are somehow judged more talented, when considering a prospective date five foot two is somehow a little taller, etc. However, it is possible that something else is going on. Sure, a subject’s interpretation of a new fact may change a little, yet the weight given to that fact in the final decision could be altered more dramatically. You might still think Owen Wilson is brilliant, but the importance of actor quality in movie selection could be reduced when you learn the names of the directors first. ID cannot be entirely ruled out, because this alternative does not explain all of the results in the paper; however, Russo’s experiments do not test the weighting possibility. (Note that ID may provide an alternative explanation for Elastic Justification instead of the other way around.)
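The distinction between the two mechanisms can be shown in a simple weighted-sum choice model. This is my own hypothetical illustration: the same drop in an option’s standing can come either from distorting an attribute’s value or from shrinking its weight, and the outcome alone does not say which happened.

```python
# Hypothetical weighted-sum sketch (my own, not from the paper):
# the same preference shift via value distortion vs. weight change.

def utility(values, weights):
    """Weighted sum over attribute values."""
    return sum(v * w for v, w in zip(values, weights))

# Attribute values for a film: (actor_quality, director_quality)
values = (8.0, 5.0)
baseline = utility(values, (0.5, 0.5))             # 6.5

# Mechanism 1 -- value distortion: actor quality itself is judged lower
# after learning the directors' names.
via_values = utility((6.0, 5.0), (0.5, 0.5))       # 5.5

# Mechanism 2 -- weight change: Owen Wilson still seems brilliant
# (value unchanged), but actor quality simply matters less.
via_weights = utility(values, (0.2, 0.8))          # 5.6

# Both mechanisms lower the option's standing by a similar amount, so the
# observed choice pattern alone may not separate them.
assert via_values < baseline and via_weights < baseline
```

An experiment that elicited value ratings and attribute importance weights separately could, in principle, tell the two mechanisms apart; Russo’s designs do not.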