Monday, March 9, 2009

N-Effect Review: Part Three – Conclusion

Over the last two postings I have been reviewing Stephen M. Garcia and Avishalom Tor’s “N-Effect.” In this third and concluding posting I give my verdict on the N-Effect and provide some additional research ideas for further work in the social comparison space. A note to readers: This series is best read starting with Part One.

Verdict on N-Effect

Despite the criticisms and alternative explanations raised in past postings, Garcia and Tor’s studies collectively begin to build a strong case for an N-Effect. The authors cite numerous past studies linking social comparison to competitive motivation, so any failure to unequivocally prove that relationship within this paper alone is easily forgivable. Furthermore, their early analysis seems quite sound: N is a ubiquitous objective factor in all competitive situations, and subjects are more likely to make social comparisons when there are few rather than many competitors because, if nothing else, “it becomes less viable and informative to compare oneself, or anticipate comparisons, with a great multitude of Targets.” The latter point is borne out experimentally in several of the studies.

As for the appropriateness of the dependent-variable measures, while one can poke holes in the relationships, the simplest explanation is that actual performance and self-reported motivation are both measures of true competitive motivation. Study 4 goes a long way toward invalidating a ratio-bias hypothesis (discussed previously), and Studies 3 and 5 are quite persuasive in arguing for an N-Effect. Study 3 incorporates a social comparison orientation (SCO) assessment of subjects, a scale designed and demonstrated to “reveal interpersonal differences” in subjects’ tendencies toward social comparison. The Study 3 analysis shows that high-SCO subjects are more likely to exhibit the N-Effect, a fact that is very difficult to explain with alternative theories. Study 5 measures social comparison, competitive motivation, ease of task, and N all in one place, negating the need to rely on assumptions about the relationships among these variables formed in earlier, more limited studies. Study 5 shows that social comparison is indeed acting as a mediator of competitive motivation.

Advancing the Broader Research Agenda

The N-Effect research prompts several questions related to the broader research agenda in social comparison and competition motivation.

Mapping Motivation and Actual Performance: What is the relationship between motivation and actual performance? If actual performance is to be a reliable measure of competitive motivation, a more granular mapping should be done to determine whether the relationship is linear or an inverted U. (It is likely that others have already established this relationship, but it is not mentioned in the paper.)

Revisit Past Studies: Are there previous studies on social comparison that could be reexamined in light of the N-Effect? Most likely, earlier social-comparison studies neglected to specify, or at least to recognize, group size. New insights into these past studies could be gained by revisiting the role N may have played in their results.

Interventions: What interventions would enable subjects to view competitions of different sizes in the same way when the probability of winning is held constant?

Other Objective Factors: What other objective factors besides N might play a role in social comparison?

Framing Effects: If subjects were assessing their chances of incurring a penalty instead of receiving an award, would the nature of the N-Effect change, perhaps reversing the polarity of social comparison?

Sample v. Population: Do subjects recognize potential sampling effects when considering competitive difficulty? While a randomly selected group of 100 people is likely to be representative of the population in its distribution of ability, a group of 10 people has the potential to be quite skewed. Perhaps the subject in a small-N group is unlucky and finds themselves up against a small but exceptionally talented group of people who excel at the assigned task. Could this factor partially explain the increased need for social comparison in small-N groups?
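This sampling intuition is easy to check with a quick Monte Carlo sketch. Assuming, purely for illustration, that ability is normally distributed with mean 100 and standard deviation 15 (the distribution and parameters are my own choice, not anything from the paper), the average ability of a group of 10 drifts much further from the population mean than that of a group of 100:

```python
import random
import statistics

random.seed(42)

def group_mean_spread(n, trials=10_000):
    """Standard deviation of the mean ability of a randomly drawn
    group of size n, with individual ability ~ Normal(100, 15)."""
    means = [statistics.fmean(random.gauss(100, 15) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

spread_10 = group_mean_spread(10)
spread_100 = group_mean_spread(100)

# The small group's average is far more variable (theoretically
# 15/sqrt(10) ~ 4.7 versus 15/sqrt(100) = 1.5), so a group of 10
# can easily be unusually strong or unusually weak.
assert spread_10 > 2 * spread_100
```

In other words, a subject facing 10 competitors has a real chance of drawing an atypically talented field, which could plausibly heighten the felt need to size up the individuals in it.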

Monday, March 2, 2009

N-Effect Review: Part Two

In this second installment of a three part series, I explore alternatives to Stephen M. Garcia and Avishalom Tor’s “N-Effect” theory and suggest several additional studies that would help solidify the research findings.

Alternative Explanations

Regression: One starting point for an alternative explanation of the authors’ findings on N and motivation can be found in Study 5. As stated in the Results and Discussion section, “Participants felt it would be easier to win the cash prize in the 10 competitors condition than in the 10,000 competitors condition.” Putting aside the implications this result has for a ratio-bias explanation, let us simply grant that, for whatever reason, subjects believe it is easier to win competitions with fewer competitors. If this is true, objective social comparison around N could theoretically be removed entirely from the explanation of competitive motivation and replaced by subject confidence in performing easy or difficult tasks. Past research (e.g. Moore & Cain, 2007) suggests that on easy tasks subjects believe they will perform above average, and on difficult tasks below average. Since subjects have better access to their own abilities, “their beliefs about others’ performances tend to be regressive and less extreme than their beliefs about their own performances.” This overconfidence/underconfidence effect has been demonstrated at both low and high values of N.

However, similar to the authors’ own observations regarding the ease or difficulty of achieving a payoff, high confidence could either decrease or increase motivation (“I’m going to win anyway, so I do not need to work hard” versus “I am confident I can win, so I’m going to work very hard because I know the effort will pay off”). A clear conclusion cannot be drawn using only the results of the N-Effect paper. We can assume that subjects in Study 2 would judge that task to be easy. Collectively, the task in Study 5 has a mean difficulty assessment that is not close to either extreme. Finally, we have no way of knowing how subjects feel about the imagined foot-race and job-interview tasks of Studies 3 and 4. If we had a set of both inherently difficult and inherently easy tasks, we could compare the low-N and high-N motivation rankings between tasks. If high confidence yields high motivation, we should find the motivation to compete in low-N versions of easy tasks to be higher than in low-N versions of difficult tasks. Once factored into the analysis, task difficulty, rather than social comparison, may turn out to play the mediating role in competitive motivation.

Numeracy Failings: The Study 4 control for subject ratio biases, combined with the Study 5 finding that subjects judged small-N tasks as easier than large-N tasks, constitutes persuasive evidence against a classic ratio-bias explanation for the N-Effect. However, this finding does not preclude other failures in subject numeracy from playing a role. For instance, subjects might be influenced by formatting effects related to frequency (e.g. Gigerenzer, 1994; Gigerenzer & Hoffrage, 1995). This theory suggests that subjects interpret information expressed as a frequency (2 out of 10, or 20 out of 100) better than the same information expressed as a probability (a 20% chance of winning). N-Effect Studies 2, 3, 4, and 5 all present percentage probabilities rather than frequencies to subjects. This formatting effect could cause subjects to misinterpret their probability of winning in each scenario, perhaps assuming they somehow have a better chance of winning when there are fewer competitors. Again, social comparison would not be necessary to explain competitive motivation. Note that Study 5 describes some “manipulation checks about N and the percentage of competitors that would win.” From the context it can be assumed that these checks were also formatted as percentages; however, they are not described in detail in the paper.

Winner-Take-All Heuristic: Many competitions reward only the very top performer, or at least reserve special rewards for the top competitor. This situation could lead subjects to a heuristic that competitions with more competitors are harder to win, even when faced with information that the competitions have an equal probability of a positive outcome. Instead of translating a 20% chance of winning with 50 or 500 competitors into odds of 10/50 or 100/500 respectively, subjects may hold an internal representation that is tugged toward the extremes of 1/50 and 1/500. This incorrect assumption about the ease of winning could influence competitive motivation without social comparison playing a role, as discussed previously.
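The arithmetic behind this hypothesized heuristic can be made concrete with a short sketch (the helper functions are my own illustration, not anything from the paper):

```python
from fractions import Fraction

def win_odds(n_competitors, win_probability):
    """Correct frequency reading: how many of the n competitors win."""
    winners = round(n_competitors * win_probability)
    return winners, n_competitors

def winner_take_all_odds(n_competitors):
    """The hypothesized heuristic: assume only one competitor can win."""
    return 1, n_competitors

# Both conditions offer the same objective 20% chance of winning.
assert win_odds(50, 0.20) == (10, 50)
assert win_odds(500, 0.20) == (100, 500)

# Under the winner-take-all heuristic, however, the large-N condition
# looks ten times harder: 1/500 versus 1/50.
small = Fraction(*winner_take_all_odds(50))    # 1/50
large = Fraction(*winner_take_all_odds(500))   # 1/500
assert small == 10 * large
```

The distortion thus grows in direct proportion to N, which is exactly the direction the N-Effect predicts, making it a confound worth controlling for.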

Additional N-Effect Studies

Multiple additional studies could be run to more clearly demonstrate the N-Effect and rule out alternative explanations.

N-Effect Continuum: The authors admit that the N-Effect arises only within a certain range of N values, but it is unclear where this range lies. Starting from no competitors and adding one at a time, at what point do social-facilitation gains in motivation give way to N-Effect reductions in motivation due to reduced social comparison? Is the continuum different across competitive arenas such as poker, running, and debating?

Direct Measures of Motivation: As mentioned in the previous posting, Studies 3, 4, and 5 would be much stronger if the authors had a more direct way of measuring competitive motivation such as effort invested in actually performing a task or perhaps willingness to make actual payments to eliminate other competitors from consideration.

Actual N: Further studies involving actual task performance in the presence of small and large groups of competitors should be run to prove the N-Effect is not limited to imagined N sets.

Ratio Bias on Stimulus: Study 4 looks at subjects’ general susceptibility to the ratio bias, but it neglects to question subjects directly about whether they exhibit a ratio bias on the scenario stimulus itself. As a more direct test for the bias, subjects could be asked which competition they would rather compete in [10, 30, 50, 100, or indifferent for Study 4].

SCO Expansion: Rerun Studies 2, 4, and 5 including the SCO measure to make sure the same confirming results are found as in Study 3.

Easy v. Hard Tasks: As mentioned in the “Regression” alternative explanation above, a study designed to compare N level competitive motivation assessments for easy and hard tasks could be quite informative. If subjects rate low (high) N condition competitive motivation in easy tasks very similarly to low (high) N condition competitive motivation in difficult tasks, the N-Effect theory would be strengthened.

Format Effect Control: Rerun the studies using frequency numbers instead of percentages to describe the winning outcomes and thereby control for a formatting explanation.

Winner-Take-All Control: Remind subjects of several instances where winner-take-all is not the rule of competition and all who succeed are treated equally (passing the legal board exam, for example), as an attempt to reduce any winner-take-all bias prior to rerunning the N-Effect studies.

Next Installment: Conclusions and Future Directions

Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright & P. Ayton (Eds.), Subjective probability (pp. 129-161). New York: Wiley. --- NOTE reference found in Reyna and Brainerd’s “Numeracy, ratio bias, and denominator neglect in judgments of risk and probability,” Learning and Individual Differences, March 2007

Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704. --- NOTE reference found in Reyna and Brainerd’s “Numeracy, ratio bias, and denominator neglect in judgments of risk and probability,” Learning and Individual Differences, March 2007

Moore, D. A., & Cain, D. M. (2007). Overconfidence and underconfidence: When and why people underestimate (and overestimate) the competition. Organizational Behavior and Human Decision Processes, 103, 197-213.