Monday, March 9, 2009

N-Effect Review: Part Three – Conclusion


Over the last two postings I have been reviewing Stephen M. Garcia and Avishalom Tor’s “N-Effect.” In this third and concluding posting I give my verdict on the N-Effect and provide some additional research ideas for further work in the social comparison space. A note to readers: This series is best read starting with Part One.


Verdict on N-Effect

Despite the criticisms and alternative explanations raised in past postings, collectively Garcia and Tor’s studies begin to build a strong case for an N-Effect. The authors cite numerous past studies linking social comparison to competitive motivation, so any failure to unequivocally prove that relationship internal to this research paper is easily forgivable. Furthermore, their early analysis seems quite sound: N is a ubiquitous objective factor in all competitive situations, and subjects are more likely to make social comparisons when there are few rather than many competitors because, if nothing else, “it becomes less viable and informative to compare oneself, or anticipate comparisons, with a great multitude of Targets.” The latter point is borne out experimentally in several of the studies.

As for the appropriateness of the dependent variable measures, while one can poke holes in the relationships, the simplest explanation is that actual performance and self-reported motivation are measures of true competitive motivation. Study 4 goes a long way toward invalidating a ratio bias hypothesis (discussed previously), and Studies 3 and 5 are quite persuasive in arguing for an N-Effect. Study 3 incorporates a social comparison orientation (SCO) assessment of subjects, a scale that was designed and demonstrated to “reveal interpersonal differences” in subjects’ tendencies toward social comparison. The Study 3 analysis demonstrates that high-SCO subjects are more likely to exhibit the N-Effect, a fact that is very difficult to explain with alternative theories. Study 5 measures social comparison, competitive motivation, ease of task, and N all in one place, negating the need to rely on assumptions about the relationships between these variables formed in earlier, more limited studies. Study 5 shows that social comparison is indeed acting as a mediator of competitive motivation.


Advancing the Broader Research Agenda

The N-Effect research prompts several questions related to the broader research agenda in social comparison and competition motivation.

Mapping Motivation and Actual Performance: What is the relationship between motivation and actual performance? If actual performance is to be a reliable measure of competitive motivation, a more granular mapping should be done to determine whether the relationship is linear or an inverted U. (It is likely that others have already established this relationship, but it is not mentioned in the paper.)

Revisit Past Studies: Are there previous studies on social comparison that could be reexamined in light of the N-Effect? Most likely earlier social comparison studies have neglected to specify or at least recognize group size. New insights into these past studies could be gained by revisiting the role N may have played in their results.

Interventions: What interventions would enable subjects to view competitions in the same way for a given probability of winning?

Other Objective Factors: What other objective factors besides N might play a role in social comparison?

Framing Effects: If subjects were assessing their possibility of incurring a penalty instead of receiving an award would it change the nature of the N-Effect, perhaps reversing the polarity of social comparison?

Sample v. Population: Do subjects recognize potential sample effects when considering competitive difficulty? While a randomly selected group of 100 people is likely to be normally distributed in ability and representative of the population, a group of 10 people has the potential to be quite skewed in its abilities. Perhaps a subject in a small-N group is unlucky and finds himself or herself up against a small but exceptionally talented group of people who excel at the assigned task. Could this factor partially explain the increased need for social comparison in small-N groups?
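The small-sample worry above can be made concrete with a short Monte Carlo sketch. This is a toy model of my own, not anything from the paper: abilities are assumed to be standard-normal draws, and the trial count is arbitrary.

```python
import random
import statistics

random.seed(42)

def group_mean_spread(group_size, trials=2000):
    """Standard deviation of the group's mean ability across many
    randomly drawn groups. A wider spread means any single group is
    more likely to be unusually strong (or weak)."""
    means = [
        statistics.mean(random.gauss(0, 1) for _ in range(group_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

spread_10 = group_mean_spread(10)
spread_100 = group_mean_spread(100)

# The mean ability of a 10-person group varies far more from group to
# group than that of a 100-person group (theory: sigma / sqrt(N)).
print(f"spread of mean ability, N=10:  {spread_10:.3f}")
print(f"spread of mean ability, N=100: {spread_100:.3f}")
```

If subjects intuit this, part of the extra attention paid to small fields may simply be rational uncertainty about who showed up.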

Monday, March 2, 2009

N-Effect Review: Part Two


In this second installment of a three part series, I explore alternatives to Stephen M. Garcia and Avishalom Tor’s “N-Effect” theory and suggest several additional studies that would help solidify the research findings.

Alternative Explanations

Regression: One starting point for an alternative explanation to the authors’ findings on N and motivation can be found in Study 5. As stated in the Results and Discussion section, “Participants felt it would be easier to win the cash prize in the 10 competitors condition than in the 10,000 competitors condition.” Putting aside the implications this result has for a ratio bias explanation, let us just grant that for whatever reason subjects believe it is easier to win competitions with fewer competitors. If this is true, objective social comparison around N could theoretically be completely removed from the explanation of competitive motivation and replaced by subject confidence in performing easy or difficult tasks. Past research (e.g. Moore & Cain, 2007) suggests that on easy tasks subjects believe they will perform above average and on difficult tasks below average. Since subjects have better access to their own abilities, “their beliefs about others’ performances tend to be regressive and less extreme than their beliefs about their own performances.” This overconfidence and underconfidence effect has been demonstrated at low and high values of N.

However, similar to the authors’ own observations regarding the ease or difficulty of achieving a payoff, high confidence could either decrease or increase motivation (“I’m going to win anyway so I do not need to work hard” or “I am confident I can win so I’m going to work very hard as I know the effort will pay off”). A clear conclusion cannot be drawn using only the results of the N-Effect paper. We can assume that subjects in Study 2 would judge that task to be easy. Collectively, the task in Study 5 has a mean difficulty assessment that is not close to either extreme. Finally, we have no way of knowing how subjects feel about the imagined foot race and job interview tasks of Studies 3 and 4. If we had a set of both inherently difficult and inherently easy tasks, we could compare the low-N and high-N motivation rankings between tasks. If high confidence yields high motivation, we should find the motivation to compete in low-N versions of easy tasks to be higher than in low-N versions of difficult tasks. Factored into the analysis, it may be that task difficulty, rather than social comparison, plays the mediating role in competitive motivation.
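The regressive-beliefs mechanism described above can be sketched as a toy model. The shrinkage weight, population mean, and scores here are illustrative assumptions of mine, not values from Moore & Cain.

```python
def believed_gap(own_score, population_mean=0.5, shrink=0.6):
    """Toy model of regressive beliefs about competitors.

    Subjects know their own score, but their estimate of OTHERS'
    scores is shrunk toward the population mean. On easy tasks
    (high scores) the gap is positive ("I'm above average"); on
    hard tasks (low scores) it is negative ("I'm below average")."""
    belief_about_others = shrink * own_score + (1 - shrink) * population_mean
    return own_score - belief_about_others

print(believed_gap(0.9))   # easy task: positive gap
print(believed_gap(0.1))   # hard task: negative gap
```

Under this model, confidence tracks task difficulty alone; no comparison with the specific field of N competitors is required.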

Numeracy Failings: The Study 4 control for subject ratio biases, combined with the Study 5 finding that subjects judged small-N tasks as easier than large-N tasks, constitutes persuasive evidence against a classic ratio bias explanation for the N-Effect. However, this finding does not preclude other failures in subject numeracy from playing a role. For instance, subjects might be influenced by formatting effects related to frequency (e.g. Gigerenzer, 1994; Gigerenzer & Hoffrage, 1995). This theory suggests that subjects interpret information expressed as a frequency (2 out of 10, or 20 out of 100) better than the same information expressed as a probability (a 20% chance of winning). N-Effect Studies 2, 3, 4, and 5 all use percentage probabilities instead of frequencies when presenting information to subjects. This formatting effect could cause subjects to misinterpret their probability of winning in each scenario, perhaps assuming they somehow have a better chance of winning when there are fewer competitors. Again, social comparison would not be necessary to explain competitive motivation. Note that Study 5 describes some “manipulation checks about N and the percentage of competitors that would win.” From the context it can be assumed that these checks were also formatted as percentages; however, they are not described in detail in the paper.

Winner-Take-All Heuristic: Many competitions reward only the very top performing competitor, or at least have special rewards for the top competitor. This situation could lead to a heuristic whereby subjects assume competitions with more competitors are harder to win -- even when faced with information that the competitions have an equal probability of a positive outcome. Instead of translating a 20% chance of winning with 50 or 500 competitors into 10/50 or 100/500 odds respectively, subjects may have an internal representation that is tugged toward an extreme of 1/50 and 1/500. This incorrect assumption about ease of winning could influence competitive motivation without social comparison playing a role, as discussed previously.
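The arithmetic behind this heuristic is easy to make concrete. A minimal sketch, using the 20% example above (the dictionary labels are mine):

```python
# Two competitions with the same stated 20% chance of winning.
competitions = {"small": (10, 50), "large": (100, 500)}  # (winners, N)

for label, (winners, n) in competitions.items():
    normative = winners / n   # identical across conditions: 0.20
    winner_take_all = 1 / n   # the heuristic's pull: 10x apart
    print(f"{label}: normative p = {normative:.2f}, "
          f"winner-take-all pull = {winner_take_all:.3f}")
```

The normative probabilities are identical (0.20 in both conditions), but the winner-take-all representation differs by a factor of ten (0.020 versus 0.002), which is exactly the gap that could masquerade as an N-Effect.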

Additional N-Effect Studies

Multiple additional studies could be run to more clearly demonstrate the N-Effect and rule out alternative explanations.

N-Effect Continuum: The authors of the N-Effect admit that the effect arises only within a certain territory of N values but it is unclear where this territory lies. Starting from no competitors and adding one at a time, at what point do social facilitation improvements in motivation give way to N-Effect reductions in motivation due to reduced social comparison? Is the continuum different for various competitive arenas like poker, running, debating, etc.?

Direct Measures of Motivation: As mentioned in the previous posting, Studies 3, 4, and 5 would be much stronger if the authors had a more direct way of measuring competitive motivation such as effort invested in actually performing a task or perhaps willingness to make actual payments to eliminate other competitors from consideration.

Actual N: Further studies involving actual task performance in the presence of small and large groups of competitors should be run to prove the N-Effect is not limited to imagined N sets.

Ratio Bias on Stimulus: Study 4 looks at subjects’ general susceptibility to the ratio bias but it neglects to question subjects directly to see if they exhibit a ratio bias on the scenario stimulus itself. Subjects could be asked to choose which competition they would rather compete in [10, 30, 50, 100, or indifferent for Study 4] as a more direct test for the bias.

SCO Expansion: Rerun Studies 2, 4, and 5 including the SCO measure to make sure the same confirming results are found as in Study 3.

Easy v. Hard Tasks: As mentioned in the “Regression” alternative explanation above, a study designed to compare N level competitive motivation assessments for easy and hard tasks could be quite informative. If subjects rate low (high) N condition competitive motivation in easy tasks very similarly to low (high) N condition competitive motivation in difficult tasks, the N-Effect theory would be strengthened.

Format Effect Control: Rerun the studies using frequency numbers instead of percentages to describe the winning outcomes and thereby control for a formatting explanation.

Winner-Take-All Control: Remind subjects of several instances where winner-take-all is not the rule of competition and where all subjects who win are treated equally (passing the bar exam, for example) in an attempt to reduce any winner-take-all bias prior to rerunning the N-Effect studies.

Next Installment: Conclusions and Future Directions

Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright & P. Ayton (Eds.), Subjective probability (pp. 129-161). New York: Wiley. --- NOTE: reference found in Reyna and Brainerd’s “Numeracy, ratio bias, and denominator neglect in judgments of risk and probability,” Learning and Individual Differences, March 2007

Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704. --- NOTE reference found in Reyna and Brainerd’s “Numeracy, ratio bias, and denominator neglect in judgments of risk and probability,” Learning and Individual Differences, March 2007

Moore, D. A., & Cain, D. M. (2007). Overconfidence and underconfidence: When and why people underestimate (and overestimate) the competition. Organizational Behavior and Human Decision Processes, 103, 197-213.

Wednesday, February 25, 2009

N-Effect Review: Part One


Does the number of people in a competition influence competitive motivation? If so, would additional competitors likely increase or decrease motivation? Research by Stephen M. Garcia and Avishalom Tor attempts to answer these questions. Their resulting paper in Psychological Science, entitled “The N-Effect: More Competitors, Less Competition,” appears to be generating a lot of interest in the research community. In fact, as of two weeks ago it was the number two ranked paper in recent downloads on SSRN. In a three part series, I will provide a comprehensive analysis of this newly influential paper. This first installment includes a paper summary and an initial criticism of methods.

Summary

Previous research suggests that social comparison leads to increased competitive motivation in subjects. While most prior work examined subjective factors in social comparison, this new study focuses on an objective factor inherent to all competitive situations, namely the number of competitors. Though the probability of winning is held constant, the authors of this study predict that, past a certain threshold, subjects facing many competitors in performing an individual task will have less competitive motivation than subjects facing few competitors. They claim this “N-Effect” occurs because, as the number of competitors (N) moves from few to many, it becomes increasingly difficult and less informative for subjects to partake in social comparisons and it is these social comparisons that fuel competitive motivation.

Criticism

When assessed on the whole, the evidence presented by Garcia and Tor makes a good case for the existence of an N-Effect; however, many of the component parts of their supporting studies and arguments deserve discussion.

The authors are primarily interested in competitive motivation yet the studies contained in their paper measure a number of other dependent variables, some requiring an extended set of logical connections to support the overall N-Effect theory.

Study number followed by dependent variable:

1 a & b--Actual Performance (test scores)
2--Actual Performance (speed)
3--Self Reported Motivation
4--Self Reported Competitive Feelings / Social Comparison
5--Self Reported Motivation / Comparison / Ease of winning

For instance, SAT scores in Study 1a are a measure of actual performance on a test-taking task, which has only an indirect relationship to competitive motivation. Instead of the linear relationship the authors seem to imply, one could have predicted an inverted-U-shaped relationship between competitive motivation and actual performance. At a high enough level of motivation, performance should start to suffer as subjects become overstimulated. So in Study 2 it is possible that subjects trying to rapidly take a test are already on the far, downward-sloping side of the motivation/performance curve. If that were true, the 100-competitor condition could actually produce more competitive motivation than the 10-competitor condition yet still fit the result of a slower completion time. This explanation may be less likely than the more intuitive one assumed by the authors, but its existence demands further evidentiary support before the authors’ theory can stand.
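A toy inverted-U curve makes the point concrete. The quadratic form, the peak location, and the two motivation values are assumptions chosen for illustration, not estimates from the study data.

```python
def performance(motivation, peak=0.6):
    """Toy inverted-U (Yerkes-Dodson-style) curve: performance falls
    off quadratically on either side of an assumed optimum."""
    return 1.0 - (motivation - peak) ** 2

# Hypothetical values: suppose BOTH Study 2 conditions sit past the peak.
motivation_low_n = 0.75    # 10-competitor condition
motivation_high_n = 0.95   # 100-competitor condition: MORE motivated...

# ...yet it performs worse, matching the observed slower completion times.
print(round(performance(motivation_low_n), 4))   # → 0.9775
print(round(performance(motivation_high_n), 4))  # → 0.8775
```

Under these assumptions the more motivated group finishes slower, so slower performance alone cannot uniquely identify lower motivation.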

Study 1b’s conclusion that actual performance on the CRT test is a measure of competitive motivation is even more difficult to justify. Whereas the SAT is a form of competition with material rewards for high performance relative to fellow test takers, the CRT is not competitive in nature. CRT scores remain the private knowledge of the subject (if they are even shared with the subject at all) and these scores do not impact the acquisition of material rewards such as gaining admittance to good universities. Additionally, a very high degree of competitive motivation might interfere with the suppression of “intuitive,” “System 1” responses that “spring quickly to mind,” (Frederick, 2005) lowering the scores of highly motivated subjects. This is contrary to the prediction of Garcia and Tor.

Given the ability to make diametrically opposed predictions regarding high competitive motivation and actual performance, it is difficult to place much validity in the results of Studies 1 and 2 on their own. Fortunately, subsequent studies do measure motivation more directly through subject self-reporting. However, self-reporting has its own set of pitfalls. Subjects may not truly know their own level of motivation and/or may choose levels they believe are socially appropriate or what the researcher wants to see. Studies 3, 4, and 5 would be much stronger if the authors had a more direct way of measuring competitive motivation, such as effort invested in actually performing a task or perhaps willingness to make actual payments to eliminate other competitors from consideration.

In assessing the findings one must also remember that imagined N and actual N are not equivalent. Subjects physically running a road race in a crowd of 500 people that they can see, smell, and bump into might exhibit different levels of motivation than they would facing the imagined, nameless and faceless horde of 500 from Study 3. This fact may currently limit the N-Effect’s ability to extend its claims beyond imagined competition to real competition. While Studies 1a and 1b do examine subjects who are physically in a room facing actual other “competitors,” as previously noted these studies measure actual performance and not motivation.

The Facebook task used in Study 5 may be inappropriate for a controlled examination of competitive motivation derived from social comparison. Inherent to Facebook is the concept of “friending” and social display of popularity. The Facebook friending task could work as a social comparison prime, especially for undergraduate subjects. While this may affect all subject groups equally and thus be a non-factor when comparing groups to each other, it is also possible that such priming could impact the results in unexpected ways, especially if it was not recognized by the researchers at the design stage. A different task should have been chosen for this study.

Next Installment: Alternative Explanations for N-Effect Results

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.

Wednesday, February 18, 2009

ID the Elasticity


Imagine that you were asked to choose between two movies recommended by Netflix. For the purposes of this exercise, also imagine that you are a cheapskate like me and have only subscribed to the “one at a time” movie rental option so you can indeed only choose one movie to watch next. In this scenario, you will not learn the title of the movie before choosing but you will learn bit by bit about three key features for each movie, one feature at a time. First you learn the names of the star actors and actresses for each film. Then, after a few seconds, you learn who directed each film. Finally you are told each movie’s genre. You do not have to make a choice until you have heard all of the information.


Now, does the order in which the feature information is presented make any difference? For instance, if instead you were told first about the genre, then the actors, and then the director, would you make a different choice of film than in the original scenario? Could you find yourself spending the weekend with The Wedding Crashers instead of Apocalypse Now simply because the order changed? Normative decision theory holds that order doesn’t matter. However, research into Information Distortion by J. Edward Russo, Victoria Medvec, and others suggests that, in practice, order matters a great deal to decision makers. It has long been known that people irrationally seek out and interpret information in ways that favor a decision that has already been made. This happens for a variety of reasons, including cognitive dissonance reduction. However, Information Distortion (ID) takes place much earlier, before the decision has actually been made, which makes it all the more fascinating. ID theory suggests that someone just has to form an initial preference for one option over another, and thereafter any new information he or she receives gets distorted to favor that initial preference. As in a funhouse mirror, with ID the new facts themselves look materially different. So in our movie example, if the first thing learned was the directors, David Dobkin and Francis Ford Coppola respectively, you would be more likely to interpret further information with a bias toward your initial preference based only on director (hopefully Coppola). If instead you first considered “drama or comedy” and your initial preference tilted toward comedy, then when you later learned about the directors you might suddenly have a newfound respect for David Dobkin and a greater chance of watching Vince Vaughn get crushed in backyard football and tied to the bedposts.


Why does ID happen? Russo along with collaborators Kurt Carlson, Margaret Meloy, and Kevyn Yong put forth an answer in their 2008 paper published in the Journal of Experimental Psychology. Their research examines three possible goals as causes of ID: conserving effort, creating separation (making the choices more distinct), and maintaining consistency. Over the course of three experiments they conclude that consistency is the most likely explanation. However, it should be noted that these experiments tested a limited universe of three theories. There could be other significant factors in ID.


Before considering additional factors it is worth making a few observations about the theories chosen for testing by Russo and colleagues. Of the theories, only consistency has a strong social element. I would guess that if subjects were asked to explicitly rank their goals by importance, consistency would be ranked highest. Additionally, the goal of conserving effort would be associated with making a quick decision. Quick decisions are made only once, early in the process, and then the matter is over. A "conserving effort" subject would presumably make a decision and then simply ignore further information, leaving limited opportunity to express that goal to experimenters. A "consistency driven" subject, on the other hand, must express his or her goal each time new information appears. It may be difficult to compare these two goals using the same experimental design.


Now, a few alternative explanations do come to mind. Perhaps subjects experience a form of “trial choice” and start “rooting” for their choice to be right. With this theory, although subjects are not yet locked into a decision, they are trying it on for size and simulating the experience of having made a final choice. If this alternative theory is true, then there may be very little new at all going on with ID. Instead the phenomenon would be a mere extension of the classic distortion theories to trial as well as final choices. Another, less powerful, explanation lies in subject interpretation of “authoritative intent.” Chefs creating menus, professors designing word problems, and even Netflix recommenders usually present information in an intentional order, and that order often follows the rule of most important information first. Subjects may be relying too much on this typical pattern.


A final alternative challenges not the goal but the mechanism of ID, elements of which are suggested in Christopher Hsee’s theory of Elastic Justification. Information Distortion in its very name implies a change in the decision maker’s interpretation of the facts themselves. Movie directors are somehow judged more talented; when considering a prospective date, five foot two is somehow a little taller; etc. However, it is possible that something else is going on. Sure, subject interpretation of a new fact may change a little, yet the weight given in the final decision to the importance of that fact could be altered more dramatically. You might still think Owen Wilson is brilliant, but the importance of actor quality in movie selection could be reduced when you learn the names of the directors first. ID cannot be entirely ruled out because this alternative does not explain all of the results in the paper; however, Russo’s experiments do not test the weighting possibility. (Note that ID may provide an alternative explanation for Elastic Justification instead of the other way around.)

Thursday, January 29, 2009

Choice Blindness – What are we forgetting?


Photographer: Mark Hanlon

MIT is currently in its Independent Activities Period (IAP). Each January students take a break from regular class work and experience a variety of mini courses ranging from weighty subjects like “Energy Storage Solutions” to just for fun topics like “Build Your Own Electric Guitar.” IAP is a great way to brush up on skills or try something completely new. Fortunately for me the non-credit courses are open to alumni so I’ve spent my week immersed in “Statistics and Visualization for Data Analysis and Inference” (very useful but not much fun) and “Philosophy of Cognitive Science – Choose your own adventure!” (esoteric but mind-bendingly interesting).


Yesterday in the cognitive philosophy course we covered research by Petter Johansson and his colleagues from Lund University. In a 2005 Science paper entitled “Failure to Detect Mismatches between Intention and Outcome in a Simple Decision Task,” the authors present a series of experiments which lead to a construct Johansson calls Choice Blindness. As stated in the paper’s introduction:


“A fundamental assumption of theories of decision-making is that we detect mismatches between intention and outcome, adjust our behavior in the face of error, and adapt to changing circumstances. Is this always the case? We investigated the relation between intention, choice, and introspection. Participants made choices between presented face pairs on the basis of attractiveness, while we covertly manipulated the relationship between choice and outcome that they experienced. Participants failed to notice conspicuous mismatches between their intended choice and the outcome they were presented with, while nevertheless offering introspectively derived reasons for why they chose the way they did. We call this effect choice blindness.”

It is worth taking a more detailed look at the experiment to grasp the full magnitude of this surprising result. Subjects (male and female) were presented with pictures of two different female faces at a time on two separate cards. They were then asked to point to the picture they found the most attractive. In the manipulation condition, the experimenter performed a sleight-of-hand card trick, presented back to the subject the card they did not pick, and asked them to justify their choice. In most cases these subjects proceeded to justify why they chose this other woman as if it were the card they had intended to choose all along, completely unaware of the fact that they DID NOT choose this picture. Amazing!

Why does this happen? The authors suggest that subjects fail in introspection. Johansson believes that, at the time of outcome, subjects no longer have access to their original intentions. A “fundamental assumption of theories of decision making [that] intentions and outcomes form a tight loop” is somehow broken.


There may be another, somewhat complementary, explanation in “motivated forgetting.” Motivated forgetting suggests that, given a strong motive and a suitable vehicle for belief, people are capable of forgetting information that does not suit their motive and falsely remembering alternative “facts” that do. Note that motivated forgetting involves the subject authentically forgetting the true facts, not just conveniently pretending to forget them for a social/external benefit. In the case of the card experiment, subject desire for consistency may be a strong enough motive to cause subjects to forget their original intent and remember it as its opposite.

If motivated forgetting is playing a role, one might predict that subjects faced with a different motive would not exhibit choice blindness. The following experimental idea could use some work, but what if subjects were evaluating and choosing between two racehorses at the betting window before a race? The experimenter would then place a bet on behalf of a subject based on the subject’s choice. In the manipulation case, experimenters would ask for a bet to be placed on the opposite horse to the one the subject chose, making sure the subject overheard them. If motivated forgetting is playing a role, I would predict that when the false-choice horse won, subjects would not remember that this horse was not their original choice and would be able to explain the reasons they “chose” this false horse, much as in the Johansson experiment. An important difference would arise when the false-choice horse lost and the true-choice horse won. In this case subjects may remember having actually chosen the winning horse, as their motive is now different. This second result would not be predicted by choice blindness.

Wednesday, January 7, 2009

Scapegoat the Visual

I have a meeting with Harvard Professor Max Bazerman today. As most readers will know, Professor Bazerman is world renowned for his research on negotiations and judgment in managerial decision making. He also studies bounded awareness and ethicality, societal decision making, and want/should conflicts. Along with collaborators Don Moore, Francesca Gino, and Lisa Shu, Professor Bazerman’s recent research dives into several aspects of “Moral Luck” discussed previously. In preparation for the meeting I created the following visual to better explain some of my thinking on the topic. I am happy to share it here on http://www.prospecttheory.net/.




Saturday, January 3, 2009

BITH: Buying Behavior Part Two


This year’s Society for Judgment and Decision Making conference featured a symposium organized by CMU’s Leslie John and Jessica Wisdom on “Behavioral Economics and Health.” One of the papers presented in this symposium (by Ms. John) just became Behavior in the Headlines (BITH). The study, “Financial Incentive–Based Approaches for Weight Loss,” was recently published in JAMA and it is starting to be picked up by the popular press including the Pittsburgh Post-Gazette. The CMU website sums up the results quite nicely:

“[The study] …placed adult dieters into three groups. One group entered a daily lottery and received winnings only if they reached their targeted weight levels. A second group invested their own money, but lost it if they didn't meet their goals. The third group was given no monetary incentive at all.

The goal: lose a pound a week over 16 weeks.

The results were striking. The mean weight loss for both incentive groups was more than 13 pounds — with about half the participants reaching the 16-pound goal. But the mean weight loss for the control group was only 4 pounds.”


Followers of this blog will note that this is not the first time that schemes using financial rewards to elicit good behavior, and their potential pitfalls, have been discussed. I am curious how these results square with Uri Gneezy and Aldo Rustichini’s study, mentioned in previous posts, on an undesirable practice of day care center parents. In their study, when financial incentives were used to encourage good behavior, the scheme backfired in an irreversible way. Dan Ariely’s explanation claims that when incentives are taken from the domain of social exchange to the domain of economic exchange, the process cannot easily be reversed. This would suggest that when the financial incentive is removed in the weight loss study, subjects should whiplash back to bad behavior, as they have no further economic motivation to continue. What happened?

During the next seven months the subjects in all groups did gain weight back, but apparently not as much weight as they had lost. To me this result is inconclusive. Logically, if a lot of weight was lost, regaining a lot of weight simply takes time; perhaps seven months is not long enough. How rapid was the rate of weight regain by subject group? Are the high weight loss subjects regaining at a faster rate? Will subjects eventually overshoot their old weight and end up heavier and worse off than they were before? The next layer of questions involves whether it was the financial incentive itself or the focusing/scoring/gaming process (beyond mere weigh-ins) that caused subjects to lose weight. In summary this is a great study attacking an important problem while opening up a number of good new research questions. I’m "hungry" for more papers from this world class group of collaborators.

Co-Authors: Kevin G. Volpp, MD, PhD; Leslie K. John, MS; Andrea B. Troxel, ScD; Laurie Norton, MA; Jennifer Fassbender, MS; George Loewenstein, PhD