Tuesday, December 30, 2008

Mechanical Research


If the research community at large is not already well aware of it, I recommend taking a moment to check out Amazon’s Mechanical Turk for its data collection potential. Like its namesake, the historical chess-playing “machine” known as the Turk, the site works by harnessing people behind the scenes to do bite-sized “Human Intelligence Tasks,” or HITs. HITs are small tasks still best performed by human rather than machine intelligence; they include tagging photos with the proper labels, eliminating duplicate entries from catalogs, writing reviews, and, most importantly for academic researchers, filling out surveys.

As an example, I just executed a trial run of Mechanical Turk for a study I’m working on regarding curiosity about inherently positive or negative information and the circumstances in which subjects are better able to control the satisfaction of their curiosity. I created a HIT out of the control version of the two different curiosity questionnaires and asked that the survey be completed by up to 100 unique subjects for a reward of 25 cents each. Less than four hours after submitting the HIT, I had 100 responses at a cost of $27.50 (the site charges a small fee for use). Additionally, Mechanical Turk allows you to reject, and not compensate, any respondents who did not complete their HIT satisfactorily. So, as a quality check, I mixed in a question that helped ensure respondents were paying attention. The vast majority of participants correctly answered a question very similar to the following: “If one hundred thousand and nine is greater than nine thousand, enter ‘Q’; otherwise enter ‘T’.”

Though a quick, powerful, and cheap way to collect human-subject data, Mechanical Turk does appear to have some major limitations. Most importantly, I have yet to figure out a way to bar past respondents from answering subsequent altered versions of surveys used in between-subjects study designs, though since each respondent has a unique worker ID it is possible to eliminate repeat participants after the fact. Additionally, the baseline demographics of the typical Mechanical Turk worker and the fact that participants are self-selected may require special statistical treatment. Finally, the interface for creating surveys is rather limited, so HTML skills are required.
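For the after-the-fact cleanup just described, a few lines of scripting go a long way. Here is a minimal sketch, assuming the batch results have been downloaded as a CSV with columns named WorkerId and attention_check (both column names are my own illustrative assumptions, not Mechanical Turk’s exact export format), that drops repeat workers and anyone who failed the attention question.

```python
# Minimal post-hoc cleanup sketch for a downloaded batch of HIT results.
# Column names ("WorkerId", "attention_check") are illustrative assumptions.
import csv

def clean_responses(path, correct_answer="Q"):
    seen_workers = set()
    kept = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            worker = row["WorkerId"]
            if worker in seen_workers:
                continue                      # drop repeat participants
            seen_workers.add(worker)
            if row["attention_check"].strip().upper() != correct_answer:
                continue                      # drop failed attention checks
            kept.append(row)
    return kept

responses = clean_responses("curiosity_hit_results.csv")
print(f"{len(responses)} usable responses")
```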

Even with its limitations, at the very least Mechanical Turk seems like a great vehicle for pilot studies. I plan to use it next on an outcome bias study to see if there is any merit in pursuing research on an alternative to the Moral Luck theory written about previously on this blog.

Thursday, November 20, 2008

Towelie Logic




The Society for Judgment and Decision Making held its annual conference this past weekend in beautiful (but cold) downtown Chicago. Roughly 500 researchers from around the world gathered to discuss the latest findings in the JDM realm. This was my second conference and it did not disappoint.

One talk that caught my attention was entitled “Will a Rose Smell as Sweet by Another Name? Specification-Seeking in Decision-Making.” The talk was presented by Christopher Hsee (University of Chicago Graduate School of Business) and Yang Yang (Shanghai Jiao Tong University), based on their forthcoming paper in the Journal of Consumer Research. As described in their abstract:

“We offer a framework about when and how specifications (e.g., megapixels of a camera, number of airbags in a massage chair) influence consumer preferences and report five studies that test the framework. Studies 1-3 show that even when consumers can directly experience the relevant products and the specifications carry little or no new information, their preference is still influenced by specifications, including specifications that are self-generated and by definition spurious, and specifications that the respondents themselves deem uninformative. Studies 4 and 5 show that relative to choice, hedonic preference (liking) is more stable and less influenced by specifications.”

I provide an overview of the towel study from the paper as well as my take on the findings in last night’s Hightechfever.tv broadcast.



Thursday, November 6, 2008

Je Regrette


ProspectTheory.net is not a political or self-help forum; however, I need to confess the regret I feel over a recent vote. Fortunately my regret is true to theme and related to judgment and decision-making.

On November 4th I voted on a relatively minor ballot issue here in Massachusetts to ban the practice of greyhound dog racing. I knew the issue was on the ballot along with two others of greater importance, but, unusually for me, I entirely forgot about the measure and entered the voting booth without having made a decision on the greyhound issue. I was in a hurry, so I made a snap decision to vote in favor of banning the practice.

This was a simple yes-or-no question, but as a JDM geek I want to understand why I voted this way. I made a quick decision in the heat of the moment. Was I using my hot, emotional system when my rational side would have arrived at a different answer, hence the feeling of regret? Quite possibly. When I voted yes I was thinking of the practice of dog racing itself. I visualized caged dogs, elderly men gambling, smoking, and ticket stub litter, and it all seemed primitive and sad. However, I believe the greatest contributor to the decision was my internal framing of the question and my related starting-point anchor. In the voting booth the question I asked myself was “do I disapprove of this practice?” I could have formed a different question, one actually much closer to the true one: “should the freedom of others be restricted in this domain?” My baseline/default answers to these two questions are quite different, even in the abstract. Do I disapprove of a questionable practice? Yes. Do I want to restrict freedoms? No. Even if I had managed to ask myself both questions, the answer to the question I started with could serve as a powerful anchor and determine the ultimate outcome of the decision.

I am sure that politicos and partisans are well aware of this effect and use it to their advantage. Here we have another argument for Nudge-style libertarian paternalism when it comes to designing our ballots. Voting only works if we are asking the right questions.

Tuesday, November 4, 2008

A Curious Education

I am currently reading a broad survey paper by George Loewenstein as background for a possible curiosity experiment (“The Psychology of Curiosity: A Review and Reinterpretation,” Psychological Bulletin, 1994, Vol. 116, No. 1, 75-98). In the paper there is a comment that relates to my recent “Buying Behavior” posting and lends credence to Harvard Economics Professor Roland Fryer’s idea to motivate scholastic achievement by paying students for good behavior. My previous post urged caution.

In discussing the societal implications of his information-gap theory of curiosity, which posits that curiosity arises from the gap between “what one knows and what one wants to know,” Loewenstein states:

“The information-gap perspective has significant implications for education. Educators know much more about educating motivated students than they do about motivating them in the first place. As Engelhard and Monsaas (1988, p. 22) stated, ‘historically, education research has focused primarily on the cognitive outcomes of schooling’ rather than on motivational factors. The theoretical framework proposed here has several implications for curiosity stimulation in educational settings. First, it implies that curiosity requires a preexisting knowledge base. Simply encouraging students to ask questions—a technique often prescribed in the pedagogical literature—will not, in this view, go very far toward stimulating curiosity. To induce curiosity about a particular topic, it may be necessary to ‘prime the pump’ to stimulate information acquisition in the initial absence of curiosity. The new research showing that extrinsic rewards do not quell intrinsic motivation suggests that such rewards may be able to serve this function without drastically negative side effects.”

So there is hope that the extrinsically incented Capital Gains approach advocated by Fryer will “prime the pump” on learning and that curiosity will take care of the rest, without triggering the adverse effects of entering into economic rather than social exchange.

Saturday, October 25, 2008

BITH: Buying Behavior

(Carol Leone Childcare)
Behavior in the Headlines: Harvard Economics Professor Roland Fryer has an idea to boost performance in our failing schools – pay the students. A new program called Capital Gains is being piloted in the Washington, DC area, paying students up to $1,500 for good performance in a variety of areas including testing and attendance. The thinking behind the program is that better incentives will encourage students to exhibit better behavior, leading them eventually to academic success. There are no hard data yet on the effectiveness of the approach, so the program is currently being run as an experiment. A short movie on the DC pilot can be viewed here.

Fryer’s motive is laudable and his approach has merit; however, the DC experiment is not without risk. By providing financial incentives, Capital Gains moves the expectation of good scholastic performance from the domain of social exchange to the domain of economic exchange. Such a transition may produce results in the opposite of the direction intended, and the change caused by the program may be difficult to reverse.

Take, for example, Uri Gneezy and Aldo Rustichini’s study of an undesirable practice among day care center parents. In the resulting paper, “A Fine is a Price,” Gneezy and Rustichini describe the overtime hours and other difficulties caused for day care center staff by late child pick-ups. Day care center management imposed a financial penalty on tardy parents to discourage the practice; however, management was in for a surprise. Late pick-ups actually increased substantially after the fine policy was imposed. Further, when the day care centers later tried to remove the fine, the occurrence of late pick-ups remained at its new, higher level. One explanation for these results, also championed by Dan Ariely in his new book Predictably Irrational, is that parents no longer felt obligated by social contract to pick up their children on time. The guilt of imposing a social difficulty was replaced by a specific economic value. If the fine price was lower than the value parents placed on the extra effort needed to get to the day care on time, parents simply showed up late and paid the fine. Once in this economic realm it was difficult to reverse the new framing. Dropping the fine merely gave parents a new fine “price” of zero dollars.

The study was focused on a penalty rather than an incentive payment, so the results may not be directly applicable. However, in the case of Capital Gains, it is possible that students will not value the incentive money as much as they had previously valued the hopeful expectations of their parents or even their own sense of self-respect. Once students are in an economic exchange mindset they will understand the benefit of good behavior in explicit financial terms. If a student decides that the extra effort is not worth $1,500, she may decide to put in even less effort than she did prior to the program. And if students remain in an economic exchange mindset, a later need to remove the financial incentive could leave them even less motivated than they were before the incentive system.

Though there are risks, I am happy to see programs like Capital Gains attempting to improve our nation’s educational system. The current system is failing our children, especially those from disadvantaged backgrounds. It is time to try something new. Thank you Professor Fryer.


Saturday, October 18, 2008

Competitions as Commitment Devices


Today I had the privilege of judging the first phase of MIT’s famous $100K Business Plan competition. My fellow judges and I watched approximately 30 participants give their best 60-second elevator pitch to sell their business idea. Our contestants were all part of the “development track,” which consists of businesses that address global issues such as poverty or the environment. Participants covered an impressive variety of quality ideas and technologies including low-cost, non-electrical lighting, unique bio-fuels, medical diagnostics that cost 10 cents and can be mailed to the lab on a postcard, solar cooking, and mesh wireless networks that overcome last-mile issues in rural areas.

I have a strong personal interest in developmental entrepreneurship. In my day job I was a founding member of the IBM World Development Initiative and the leader behind its successful Global ThinkPlace Challenge to brainstorm sustainable solutions to African poverty. I was also a development track $100K participant myself (although back then it was only the $50K). My “Wider Reach” team, consisting of fellow Sloan MBAs Brian Roughan, Armina Karapetyan, Kamal Quadir, and Joe Zeff along with MIT Media Lab PhD student Jose Espinosa, won the $2,000 IDEAS Award and the $1K development track and was a semi-finalist in the overall $50K competition. We designed a mobile-phone-accessible marketplace for Bangladesh. Kamal Quadir stuck with the plan and made it a reality. The company is now called CellBazaar and its success has been widely covered by the media, including The Wall Street Journal and The Economist.

Kamal’s story illustrates the power of business plan competitions. Kamal went to MIT with the intent of landing a job in finance. Instead he is an entrepreneur helping thousands of impoverished farmers sell their crops more efficiently. How did this happen? While the organizers of business plan competitions may think they are creating entrepreneurs by providing a little financing and exposure, I believe these competitions are actually serving as Cialdini-style commitment devices.

Business plan contestants, many of whom had no original intention of becoming entrepreneurs, make small but increasingly significant commitments to the competition. It is easy to put together a 60-second elevator pitch, but next they are writing executive summaries and then complete business plans. At each stage contestants declare publicly, both orally and in writing, their intent to follow through with the business idea. Eventually, through the absolutely amazing power of cognitive dissonance, contestants feel pressure to make sense of their efforts and declarations and rationalize that they must truly want to start these businesses after all. In fact, the large prize money offered in these contests actually detracts from the cognitive dissonance effect, as it provides contestants an alternative justification. So, counterintuitively, if the $100K organizers really want to create more entrepreneurs they might consider dropping the prize money back down to the original $1K!

Monday, October 6, 2008

BITH: The Financial Crisis and Action Ambiguity

Behavior in the Headlines: The stock market is down, credit flow is paralyzed, and Americans are expressing fervent outrage at “Wall Street fat cats and greed.” Constituents are demanding that any “bailout” package include stiff punishment for the financial insiders who “got rich on the path to getting us into this mess.” In the framework of the Moral Luck/Scapegoat discussion, we have a bad outcome along with a strong associated judgment that the actions of financial insiders were unethical and a high willingness to punish. Which of the two theories, Moral Luck or Scapegoat, does a better job of explaining the current situation? Our financial crisis has a unique attribute that may provide insight into these two effects – action ambiguity. In this case at least, it seems that Scapegoat prevails.

While the market was going up (a good outcome) Americans had very little interest in the activities of the masters of the universe in high finance. We are all rapidly learning more about our dire economic straits, but I believe most still have very little knowledge of the specific actions undertaken by the financial insiders at whom Americans are now so angered. This is a case of ethical acts in a black box. With a bad outcome, people are willing to judge activity as unethical even before they understand who took which actions. Moral Luck suggests an action will look less ethical in light of a bad outcome. Here we have bad outcomes generating a desire to judge Wall Street actors as unethical before we have even identified what specific actions we are judging.

The financial crisis is a messy real-world example and not truly an action black box. Obviously some people, including key thought leaders, know more about the specific questionable actions of Wall Street insiders. However, perhaps a similar black-box experimental design could be constructed. Introduce a bad outcome. Next ask subjects if someone should be responsible for the outcome. Then introduce an actor who can be logically associated with the outcome. Minimize the description of the action so that it has very little detail, and phrase it in a statement that control subjects would find ethically neutral in a vacuum: “The chef mixed the cake.” I predict that given the right kind of bad outcome (one without culturally predetermined judgments of blame or innocence), subjects will assume that someone must be responsible and will further be willing to assign some ethical responsibility to whatever logically connectable actor is introduced.

Friday, October 3, 2008

The Blame Game: Desperately Seeking Scapegoat


Before moving on to other phenomena, let’s take the topic of moral luck for another spin. To begin with, please observe that there is an inherent assumption in the standard framing of the effect: that subjects are judging the moral act itself. As demonstrated in experiments with this framing in mind, subject judges non-normatively rate an act as less moral when there is a negative outcome and more moral when there is a positive outcome -- as compared to control groups that judged the act “without an outcome.” Like an optical illusion, juxtaposing a moral act with different outcomes can make the act itself look different.

As a mental exercise, let us see if we can turn the standard framing on its head. What if the effect manifests not from subjects judging moral acts in the light of outcomes but instead from subjects feeling compelled to assign blame for bad outcomes? Colloquially, this concept is found in the term “scapegoat” and in the phrase “someone is going to have to take the fall.” In this alternative “outcome as driver” view, the energy and motivation to blame, punish, or label an act immoral is generated by the bad outcome, not by an aversion to the revisited morality of the precursor act.

Consider bad outcomes in a vacuum. A sweet little old lady with no family or friends loses her poorly diversified retirement savings in the stock market and can no longer support herself. Or alternatively, a flood victim is left trapped on his roof for days and finally, succumbing to the elements and starvation, dies. With these outcomes, one of the very first questions we are compelled to ask ourselves is “who is to blame?” This seems a natural and productive response. We want to know how this unjust situation could possibly have been allowed to happen, or even whether someone purposefully caused it to happen. The reason we want to know who is to blame is so that we may be better able to address a pressing need for support in the case of the old lady (who is responsible for her now?) or respond to similar situations in the future in the case of the flood. This motivation to assign blame exists even though, unlike in the moral luck experiments, there is no preceding moral antagonist identified. If we were to identify a possible antagonist (a stockbroker, a FEMA official), the motivation to find someone or something responsible would compel subjects to rate the available antagonist negatively to restore a sense of justice.

This scapegoat framing could explain why subjects rate actors as more immoral when there are bad outcomes compared to the control, which has “no outcome,” but, on the surface, the theory does not explain why situations with positive outcomes are rated as less immoral than the control. Since this is a mental exercise and we are questioning assumptions, let us take things a step further and challenge the idea that the control scenarios truly represent no outcome. Perhaps there is an outcome, and a negative one at that: uncertainty.

Most people very much dislike uncertainty. Moral luck experimental control stimuli leave subjects with unresolved scenarios in which they can easily envision bad outcomes resulting. This uncertainty and threat of a bad outcome is in itself a negative outcome. An uncertainty outcome is likely less saliently negative than the certain loss of retirement funding or death from starvation in the earlier examples; however, it is still a negative outcome that may generate a desire to assign blame.

Finally, if we assume that positive outcomes may generate motivation to assign positive credit, or at least (and perhaps more likely) do not generate motivation to assign blame, then the scapegoat theory would indeed explain the results seen in the various moral luck experiments. The certain-negative-outcome scenarios would have their antagonists rated the worst, the uncertainty outcomes would be rated negatively and next to worst, and the certain positive outcomes would be rated best, either as a positive act or at least a neutral one.

For this scapegoat theory to have any value, it should generate additional hypotheses that predict results differing from what might be predicted using the standard moral luck theory. Here are a few possibilities that come to mind (some more merited than others):

Negative Outcomes:

* When faced with a bad outcome in the absence of a moral antagonist, subjects will be willing and able to self-generate a generic antagonist and assign blame. The worse the outcome, the worse the morality rating.

* When an antagonist is introduced, even one with weak ties of responsibility, subjects will assign near full blame to this antagonist (similar rating to the subject’s invented generic antagonist).

* Subsequently, when a more clearly responsible second antagonist is introduced in the presence of the first, subjects will reassign most of the blame to the second antagonist, improving the morality rating of the first.

* If morality is associated with an actor, each actor should generate their own rating independent of other actors. If blame is a fixed quantity based on the negativity of the outcome, introducing more antagonist actors will diffuse assigned blame across antagonists.


Positive Outcomes:

* When faced with a good outcome in the absence of a moral (pro/an)tagonist, subjects will have more difficulty self-generating a moral actor and assigning credit in the form of a favorable morality rating.

* When a moral actor is introduced, even one with weak ties of responsibility, subjects will assign relatively neutral to positive morality ratings to this actor (similar to the rating of the subject’s invented actor, if they were able to generate one).


I believe the desire to assign blame in the case of bad outcomes is powerful, so powerful that people will sometimes even personify the natural world in order to have “a someone” to blame. However, I am under no real illusions. Remember that this is merely a thought exercise and that the simplest explanation is usually the best. The moral luck framing of this phenomenon has been the stuff of philosophy for a long time and the basis for experiments by some of the most admired researchers in the field. I had to go through a lot of logic gymnastics to challenge the moral luck assumptions, and in the process I generated many new assumptions of my own to lay out the scapegoat theory. There are possibly some big holes in the theory and the predictions are still pretty loose. Additionally, there are very likely results from actual moral luck experiments that the scapegoat theory does not explain, as I’ve only looked at the most basic findings here. Crafting an alternative explanation is enjoyable, and it may be interesting to run a few experiments to test some of the new predictions. However, I predict that we will want to stick with the findings of the papers noted in the previous posting.

Wednesday, October 1, 2008

Moral Luck

In philosophy, the term moral luck is used to describe the morality of an action judged in the light of its uncontrollable and unforeseeable consequences rather than in isolation. Such judgments are not normative: the morality of an action should not differ because of a lucky good outcome or an unfortunate bad end. Questions of moral luck are not new to philosophy, but they seemed like fertile ground for experimental behavioral research. In fact, I have been working on a rough experimental design to demonstrate the moral luck effect in collaboration with Columbia’s Leonard Lee.

Here is the draft set up:

--------------------------------------------------------

A doctor is visited by a patient complaining of a stomach ache and other vague symptoms. The doctor has a “gut feeling” that the patient may be suffering from Disease X. Disease X is a serious condition and, left untreated, it can reduce the expected lifespan of a sufferer by up to 5 years. The majority of medical experts estimate there is only a 1 in 10,000 chance that a randomly selected person in the population will have Disease X. None of the symptoms of which the patient is complaining are associated with Disease X and two other doctors have already examined the patient and ruled out the disease. These other two doctors believe the patient has a mild form of a flu virus that should resolve itself in a few days.

There is a test for Disease X that is 100% accurate in its diagnosis. Diagnosed early the disease can be cheaply treated with outstanding success; however, the test costs $5,000 to administer and in 2% of cases the test itself results in a serious infection, which also has negative effects on expected lifespan.

The doctor decided to run the test based on his own judgment. [GOOD OUTCOME: The lab results from the test show that the patient does have Disease X which can now be cheaply and effectively treated. The patient may or may not have an infection resulting from the test (2% chance of infection). BAD OUTCOME: The lab results from the test show that the patient does not have Disease X. The patient may or may not have an infection resulting from the test (2% chance of infection).] On a scale of 1 to 5, how moral was the doctor’s decision to run the test?

Very Immoral (1 to 5 scale) Highly Moral


Should an experienced doctor be allowed to make such a decision even if the statistical odds are not in favor of his or her decision?

Yes / No


If you believe the patient’s family is justified in suing the doctor for medical malpractice, what is a reasonable dollar amount that the doctor’s insurance should be expected to pay in compensation? The average malpractice payment at the doctor’s hospital is $50,000.

$__________________


--------------------------------------------------------
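To make concrete why “the statistical odds are not in favor” of running the test, here is a quick expected-value sketch using the numbers in the vignette. The one-year lifespan penalty assigned to a test-related infection is my own illustrative assumption; the scenario only says the infection has negative effects on expected lifespan.

```python
# A minimal expected-value sketch for the vignette above. The 1-year lifespan
# penalty for a test-related infection is an illustrative assumption; the
# scenario only says the infection "has negative effects on expected lifespan".

P_DISEASE = 1 / 10_000        # population base rate of Disease X
LOSS_UNTREATED = 5.0          # years of expected lifespan lost if untreated
P_INFECTION = 0.02            # chance the test itself causes an infection
LOSS_INFECTION = 1.0          # ASSUMED years lost to a test-related infection
TEST_COST = 5_000             # dollars

# Expected lifespan loss if the doctor skips the test: the disease goes
# untreated in the rare case the patient actually has it.
loss_no_test = P_DISEASE * LOSS_UNTREATED

# Expected lifespan loss if the doctor runs the test: the disease is caught
# and cheaply treated, but the test carries its own infection risk.
loss_with_test = P_INFECTION * LOSS_INFECTION

print(f"Expected years lost without test: {loss_no_test:.4f}")   # 0.0005
print(f"Expected years lost with test:    {loss_with_test:.4f}") # 0.0200
print(f"Plus a certain ${TEST_COST} cost for the test")
```

Under these assumptions, and even before conditioning on the two prior negative examinations, testing costs roughly forty times more expected lifespan than it saves, plus the $5,000 fee.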

There are a number of improvements that should be made to this setup before running it with real subjects, but that may not be necessary. In researching the project I ran across two new papers on the subject that already provide quite solid evidence of a moral-luck-like effect.

Gino, Francesca, Don A. Moore, and Max H. Bazerman. "No Harm, No Foul: The Outcome Bias in Ethical Judgments." Harvard Business School Working Paper, No. 08-080, February 2008.

Gino, Francesca, Lisa Lixin Shu, and Max H. Bazerman. "Nameless + Harmless = Blameless: When Seemingly Irrelevant Factors Influence Judgment of (Un)ethical Behavior." Harvard Business School Working Paper, No. 09-020, August 2008.

So it looks like we do not get to be the experimental moral luck vanguard. On the plus side I was at least lucky enough to have a wonderful coffee conversation with one of the authors yesterday, Lisa Shu. I look forward to reading more of her work.

Tuesday, September 30, 2008

High Tech Fever Interview


Hightechfever.tv is a live, weekly Cambridge local-access TV show that focuses on entrepreneurship and technology. Despite its humble profile, the show pulls in an amazing range of guests thanks to the extensive network of its host, MIT lecturer Joost Bonsen. On September 10th I sat down with Joost to discuss several judgment and decision-making topics including behavioral economics, prospect theory, anchoring, and cognitive reflection testing. Prominent scholars mentioned include Kahneman, Tversky, Ariely, Prelec, Loewenstein, Frederick, and Thaler. The YouTube link above shows the most relevant segment of the interview.

Sunday, September 21, 2008

Iconoclast


National Public Radio does a surprisingly good job covering behavioral topics. Such topics are typical for science-based shows like Science Friday and The Infinite Mind, but occasionally one of the mainstream shows like On Point will cover an interesting behavioral topic as well.

(Above: Mooby the Golden Calf Idol)
On Point’s host Tom Ashbrook drives me crazy with his hyperbolic comments and questions; however, this Wednesday’s show, “Getting Outside the Box,” wasn’t half bad thanks to guest Professor Gregory Berns. Berns packs an MD/PhD and is a professor of Psychiatry and Behavioral Sciences at Emory University as well as of Biomedical Engineering at the Georgia Institute of Technology. He uses fMRI and computer modeling techniques to attack questions in neuroeconomics. Berns was on the show plugging his new book, “Iconoclast: A Neuroscientist Reveals How to Think Differently.” I have not yet read the book, but given its focus on the intersection of psychology and innovation (the whole purpose of this blog) you can bet I will be ordering it from Amazon directly.

Who is an iconoclast in this context? As Berns puts it, iconoclasts are those rare individuals who have “truly novel ideas that tear down existing ways of thought and put something new in its place.” Obviously, most people are not iconoclasts. Professor Berns gives one explanation for this rarity by pointing to brain physics and psychology’s efficiency principle. Our brains have limited hardware. For example, the human brain apparently gets by on only 40 watts of energy. Similarly limited, the bandwidth of the human eye is slower than a cable modem at only about 10 megabits per second (from researchers at my dear old Penn; the show claimed 1 MB/s). To function on such limited capacity we need to use thinking shortcuts; however, to be creative the brain has to get out of this efficiency mode. Iconoclasts have this capability and exercise it.

One of the show’s callers, a fan of Harvard’s E. O. Wilson, provided the additional insight that iconoclastic behavior in animals tends to get them killed. For example, the maverick iconoclast in a school of fish that turns right when everyone else turns left is the one most likely to be eaten. Evolution itself is against iconoclasts.

Nonetheless, at a cultural level (Ashbrook’s one good comment) we value iconoclasts (or at least we think we do) and many of us want to be one. Berns spends most of the show explaining three primary factors related to iconoclasm and innovation along with a few hints at how to enhance your own iconoclastic behavior.

1) Perception: The mental images in our imagination are usually based on past experiences. It takes novel stimuli and juxtaposition to trigger the perception shift necessary for new thought.

2) Fear: As Berns puts it, the “fear of losing the status quo is one of the greatest inhibitors of change and innovation.” To be an iconoclast you must suppress fear. Fear resides in the amygdala, and Berns believes its impact can be overridden by conscious thought. “Cognitive Reappraisal” can control fear by reframing the view of the problem.

3) Social Intelligence: To be a proper iconoclast you have to be able to sell your ideas. People don’t like change so iconoclasts have to sell them on changing their minds.

One could question this list. Does an iconoclast really need all three of these abilities? An organization could exhibit iconoclastic behavior without any one individual possessing all of them. For instance, a cowardly creative genius could be paired with a fearless leader-spokesperson who has the social intelligence to sell the ideas. In practice this too seems a rare combination, yet it is not clear how Berns’s observations apply at the organizational behavior level. The iconoclastic behavior of teams seems like a great topic for further research.

Friday, September 19, 2008

Breadcrumbs of Self-Deception


Can there be a benefit to forgetting and, if so, do people have a mechanism that enables them to forget? It is a bit counterintuitive to think that we might benefit from forgetting, but with a little effort it is not difficult to think of things we might prefer not to remember. Perhaps we would rather forget some particularly embarrassing moments from high school, the time of an appointment we did not really wish to keep, or even the displeasure of watching the new Indiana Jones movie. Might there also be benefits to forgetting the true justification for a decision?

As discussed previously, research by Christopher Hsee describes how people incorporate unjustifiable information into their decision-making. The vast majority of people do not want to see themselves as making unethical decisions. As such, Hsee’s hypothesis is that people will only incorporate unjustifiable information when the decision-weighting of the justifiable information is vague and elastic. Unjustifiable information is thus incorporated indirectly and subconsciously through readjusting the weighting of the justifiable information. People thereby make unjustified decisions yet preserve their self image as ethical and just.

As an example of the phenomenon, one of Hsee’s studies uses a scenario involving a condo appraiser who, for different subject groups, is motivated either to deflate or inflate the valuation of a property. The motivation derives from an unjustifiable conflict of interest: the condo seller/buyer is actually the fiancée of the appraiser. Hsee shows that appraisers only inflate or deflate values when there is substantial elastic wiggle room in how the justifiable attributes of the apartment (age of the appliances, carpeting, etc.) are evaluated, and that they do not make biased assessments when the attribute weightings are not subjective.

Hsee’s experimental results and those of others seem to strongly support this theory and it is very likely correct. However, let’s consider an alternative explanation centered on motivated forgetting. First, grant the assumption that people do not want to consider themselves as unjust. Next, assume decision subjects are at least conscious of the temptation to incorporate unjustifiable information. I believe Hsee would agree with this awareness. However, motivated forgetting goes further to suggest that at the time of the decision subjects are consciously aware of their incorporation of unjustified information. These subjects are conscious of their transgression yet are careful to lay out a logical story of how the same conclusion could be reached using only justifiable information. The story may partially be constructed to explain the decision to others but the story serves a larger purpose for the decision maker himself. After the story is created the subject very promptly and conveniently forgets the true basis of their decision. Then, if necessary at some future point, they can follow the story like “breadcrumbs of self-deception” to retain their world view as a just individual. Like Hsee, the motivated forgetting approach assumes subjects will only incorporate unjustifiable information when a decision biased by motive can be justified based on a manipulation of the justifiable information. This is because without sufficient nuggets of justifiable information to weave a breadcrumb story around, the motivated forgetting mechanism would not function. Subjects know when they would not be able to fool themselves.

As a theory, motivated forgetting seems to add little value here, as it predicts the same outcomes as Hsee’s elasticity hypothesis. Also, in practice it would be extremely difficult to verifiably demonstrate that motivated forgetting is occurring. Perhaps someone brighter than I am will have an idea for a good experiment.

Wednesday, September 17, 2008

Justifying the Unjustifiable


Several weeks ago I met with Columbia University Marketing Professor Leonard Lee. Over a fantastic pistachio milkshake at Tom’s Café (of Seinfeld fame), we got on the topic of moral decision-making research. Leonard suggested that I read an older paper by Chicago Professor Christopher Hsee. The paper, “Elastic justification: How unjustifiable factors influence judgments,” describes how unjustifiable information can subconsciously creep into the decision-making process. To summarize, Hsee breaks information into two primary categories:

  • Justifiable: Information that is directly relevant to the judgment being made by the accepted criteria of the decision-maker (e.g. a candidate’s experience)
  • Unjustifiable: Information the decision-maker may wish to take into consideration but that he knows, by the criteria of what is acceptable, should not come to bear on the decision (e.g. a candidate’s race or sex)

Hsee’s studies show that decision-makers do not include unjustifiable information in their decisions except under certain conditions relating to the JUSTIFIABLE information. The degree of "elasticity" in how to weigh the justifiable factors determines how much influence the unjustifiable information has on the judgment. If the importance weighting assigned to various justifiable attributes is clearly defined and fixed (inelastic), unjustifiable information does not influence the decision. However, if the weighting is vague and open to interpretation (elastic), unjustifiable information will influence the decision by reweighting the importance of the justifiable factors. Since past literature shows that people try to make justified decisions and line up consistent facts/evidence for those decisions, Hsee concludes that people do not knowingly incorporate unjustifiable information. The process is subconscious.

The paper’s findings stand out because they are contrary to normative decision theory, which would not recognize a split between justifiable and unjustifiable information. Instead, the normative approach takes in all information, assigns each factor a weight according to its probable utility impact, and concludes with the utility-maximizing decision (see, for example, Jon Baron’s work on Multi-Attribute Utility Theory). In contrast, Hsee’s work shows that decision-makers exclude unjustifiable information in certain cases. Reweighting the utility impact of a given factor is also non-normative.
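To make the elasticity mechanism concrete, here is a toy weighted-sum (MAUT-style) sketch; the attributes, scores, and weight ranges are my own illustrative assumptions rather than values from Hsee’s paper.

```python
# A minimal weighted-sum (MAUT-style) sketch of Hsee's elasticity idea.
# The attributes, scores, and weight ranges are illustrative assumptions,
# not values from the paper.

condo = {"appliances": 7, "carpeting": 4, "location": 6}   # justifiable attribute scores (1-10)

def appraisal(weights, scores):
    """Weighted-sum evaluation over the justifiable attributes only."""
    return sum(weights[a] * scores[a] for a in scores)

# Inelastic condition: the weights are fixed, so there is no room for the
# unjustifiable motive (e.g., helping a fiance(e)) to enter the judgment.
fixed_weights = {"appliances": 0.4, "carpeting": 0.3, "location": 0.3}
print("Inelastic appraisal:", appraisal(fixed_weights, condo))

# Elastic condition: each weight may sit anywhere in a plausible range.
# A motivated appraiser can pick weights within the ranges that favor the
# desired direction while still citing only justifiable attributes.
low_weights  = {"appliances": 0.2, "carpeting": 0.5, "location": 0.3}  # deflating choice
high_weights = {"appliances": 0.6, "carpeting": 0.1, "location": 0.3}  # inflating choice
print("Elastic, deflating:", appraisal(low_weights, condo))
print("Elastic, inflating:", appraisal(high_weights, condo))
```

The point of the sketch is that every set of weights sums to one and cites only justifiable attributes, yet the elastic ranges let the final number drift in whichever direction the unjustifiable motive points.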

In the paper Hsee cites three historical studies as evidence as well as two of his own. It is the second of Hsee’s studies with which I take some issue. In this second study, subjects were given a “language intuition test” in which they were asked to select from a multiple-choice list the meaning of unfamiliar Chinese characters. Twenty questions were asked of each subject, only 10 of which counted towards a total score. Subjects later self-reported their scores with a financial incentive for higher scores. Hsee suggests that two factors would influence the self-reported score: 1) actual performance and 2) the desire to report a high score for the financial incentive. He asserts that the former is a justifiable factor and the latter is unjustifiable, and predicts that the degree of elasticity in the grading process will determine how much the unjustifiable factor pushes scores upward. In the inelastic condition, subjects were asked to score only odd-numbered questions. In the elastic condition, scores were based on the subject’s evaluation of the “yin/yang nature of the symbols” – a highly vague and subjective approach. Subjects were to count the 10 questions whose symbols they found to be the most “yang” in nature. As anticipated, subjects in the elastic condition reported higher overall scores.

Unfortunately the study design does not seem well suited as a test of Hsee’s theory. I do not believe it is possible to determine whether subjects are indirectly incorporating unjustifiable information by reweighting the justifiable, or merely directly incorporating their “unjustifiable” desire for a high score. In fact, in the elastic condition there is no justifiable method of selection provided to subjects at all. How does one justifiably assess the “yangness” of a character? Faced with a process such as this one, it is no surprise that subjects did not feel unjustified in selecting the questions they happened to get right as yang characters. The yangness approach may break a social contract with the subjects, allowing them free rein to “cheat” on their scoring. One might ask what the selection process of students who did not wish to cheat would have looked like. The study would be better formed if it provided a subjective yet seemingly justifiable procedure for selection.

I do not wish to discard the entire paper based on Study #2. Hsee has a very interesting theory with significant merit conferred by the other studies. There are also some very interesting additional questions that follow from the study. Further research seems warranted to answer the following:

  • Can we evaluate the specific weighting assigned by subjects to justifiable factors? For example in Hsee’s study #1 involving real estate appraisal, what relative weights do subjects place on living room size, appliances, carpets, etc?

  • Can we demonstrate that the weight assigned to a specific factor changed? For instance, can we demonstrate that subjects specifically reweighted the value of good appliances relative to carpets as a means to indirectly incorporate unjustifiable factors? Perhaps this can be elicited by asking subjects to assign numeric weightings or at least to rank-order the importance of the factors.

  • How might we prove/disprove that this process takes place subconsciously? Perhaps subjects instead make conscious decisions and then use a purposeful forgetting/self-deception mechanism to remain justified in their own eyes (more forthcoming on a "breadcrumbs of self-deception" idea). There would seem to be significant moral implications if this is a conscious process.

Sunday, July 13, 2008

Irony


Sometimes life just begs to be personified. Small coincidences, especially those that appear to carry out some kind of karmic rebalancing mission, are hard to contemplate without adding ears, nose, and grinning smirk to the blank potato-head vastness of the universe. My last posting explored an implied criticism of US intellectual property law and the monopoly protection granted to what may be acts of “discovery” versus “invention.” So of course it came as a shock that just this week I was granted my first patent for an invention my co-inventors and I submitted seven to eight years ago. Patent number 7,398,227, “Methods, systems, and computer for managing purchasing data.” Not exactly the telephone but I’ll take it.

Monday, June 30, 2008

In the Air

I have mixed feelings about Malcolm Gladwell. His writing is certainly clear and engaging, and he manages to create impassioned interest in social science topics that would normally draw yawns. However, I cannot help but feel he receives credit better due to the true academics whose volume of hard-won research he so neatly makes digestible. It is possible that his latest article in The New Yorker is no exception, but it does give pause to reconsider. He may have some “new” ideas of his own.

The article is entitled “In the Air: Who Says Big Ideas Are Rare?” and in it, regardless of intent, Gladwell provides a subtle yet powerful indictment of the current intellectual property system. Most of the article is spent highlighting the many historical incidences of simultaneous invention/discovery (e.g., the telephone by both Alexander Graham Bell and Elisha Gray) and profiling the modern-day invention shop Intellectual Ventures, co-founded by Nathan Myhrvold. What is really of interest is the digression into the nature of invention/discovery. The best summary of Gladwell’s line of thinking is made in the article itself with the observation: “Ideas weren’t precious. They were everywhere, which suggested that maybe the extraordinary process that we thought was necessary for invention – genius, obsession, serendipity, epiphany – wasn’t necessary at all.” The grand implication is that technology is a process of discovery, not invention.

Why does it matter if technology is “discovered” versus “invented”? I would argue the difference is much more than semantics. In an invention paradigm, technology is brought forth by the blood, sweat, and tears of the inventor(s), along with their unique creativity and insight. Technology is the direct progeny of the inventor, and it exists in the universe as a direct result of the inventor’s actions. In a discovery paradigm, the technology always existed. The laws of physics and chemistry -- even the social scientific laws of behavior -- were already there to dictate function and need. If Gladwell and others are drawing the right conclusion from years of simultaneous invention “multiples,” the discovery of the technology is also inevitable. In this paradigm an inventor, or better put, discoverer, may apply similar effort and creative insight, but does his or her discovery of technology justify the monopoly protection offered by patents -- especially if discovery by somebody is already a foregone conclusion?

I do not know which, if any, of these paradigms is right, but Gladwell’s questioning of invention provides some real food for thought. If technology is discovery-based, what is the appropriate incentive structure to drive it? Would reorienting toward a discovery mindset attract different personalities to work on technology? What should our expectations be about the speed of technological progress if all possible technology already exists, waiting to be discovered?

Monday, May 12, 2008

Behavioral Economics and Digital Institutions: A View from the Field


The Gruter Institute for Law and Behavioral Research is holding a conference at Lake Tahoe next week. The theme is “Law, Behavior & the Brain.” I have been asked to give a talk on digital institutions. You may be asking, “What is a digital institution and what does it have to do with my brain?” For those answers and more, here is the abstract for my talk.

Abstract

Digital Institutions allow us to “reframe strategic interactions,”[1] incorporate externalities, and leverage behavioral economic tendencies toward more positive outcomes. Compared to traditional institutions they are relatively quick to establish, cheap to operate, and easy to adapt or discard. They can even be self-organizing and, in the extreme, lead to a vision of “governance by algorithm.”[2]

Today’s most innovative corporations, though founded as traditional institutions, are both witness to and participant in the emergence of their digital counterparts. These same companies will also be deeply changed by the phenomenon. This talk provides a first-person view from within IBM’s famed T.J. Watson Research Center. With 60 years of history, over 3,000 researchers, and multiple Nobel Prizes, IBM Research provides an exceptional environment for identifying and making sense of change. The discussion covers three topics from the perspective of a Digital Institutions “practitioner.” First, a survey is provided covering several technologies developed at IBM Research that enable Digital Institutions, including 3D metaverses, real-time speech-to-speech translation, and semantic web reasoners. Next, we examine a specific instance of Digital Institution formation involving a volunteer team leading IBM’s World Development Initiative (WDI) to address the needs of people at the “bottom of the pyramid” living on less than $5 per day. Finally, the talk covers a problem space for future investigation in intellectual property licensing, where Digital Institutions might produce better outcomes by overcoming psychological bias in decision making.



[1] Oliver Goodenough and Monika Gruter Cheney, “Is Free Enterprise Values in Action?” Preface to “Moral Markets: The Critical Role of Values in the Economy,” edited by Paul J. Zak, 2008
[2] John Henry Clippinger, “A Crowd of One: The Future of Individual Identity,” 2007

Tuesday, April 29, 2008

Penny for your Thoughts: Revisited

Here is a paper that might put a new lens on the previously discussed Alter and Oppenheimer money familiarity study (or perhaps vice versa). In "The Dishonesty of Honest People: A Theory of Self-Concept Maintenance," Nina Mazar, On Amir, and Dan Ariely discuss the results of a series of cheating experiments. In the experiments, students are paid 50 cents for each question they get right on a test. Some of the experimental conditions provide an opportunity to cheat by allowing students to self-report their scores. The surprising finding is that cheating dramatically increases when students are given tokens instead of cash, even though the tokens can be exchanged for cash at a station only a few feet away. The authors of the Dishonesty study believe there is something special about cash. It is why people might take home a few office supplies but are much less likely to take the equivalent value out of petty cash: moral ambiguity is erased.

Perhaps there is a related effect at play in Adam Alter and Daniel Oppenheimer’s research. The $2 bills, Susan B. Anthony dollars, and altered bills play the role of tokens. Like the tokens in the Dishonesty study, people rationally know they are equivalent in value to cash (ironically, itself a token of value). However, in practice tokens and cash are not the same, since people behave differently with them, say by increasing their cheating or by being willing to part with a token at a discount. Of course this raises the question of why $2 bills are tokens and $1 bills are not. We may be back to familiarity.

Another take is that familiarity is playing a role in the Dishonesty study. There could be a discounting of the unfamiliar tokens taking place. People are cheating the same “amount” but they need more of the discounted tokens to steal the equivalent perceptual value.

Sunday, April 20, 2008

Penny for your Thoughts

Into the growing body of research on “predictably irrational” behavior comes a new study from Adam Alter and Daniel Oppenheimer of Princeton, to be published in the Psychonomic Bulletin & Review. Like others before it, this study examines the great plasticity of perceived value. In their experiments, Alter and Oppenheimer’s subjects value a $1 Susan B. Anthony coin less than they do a traditional $1 greenback. Similarly, subjects also seem to value $2 bills less than they rationally should as compared to the more frequently encountered “ones.” The authors argue that this is a result of familiarity, which subjects value more than they rationally should.

Initially, the authors’ explanation of the results did not strike me as correct. Generally, rare things are the most valuable. Even with money, the units we see the least frequently (like $100 bills) are more valuable than those we see all the time ($1 bills). However, I have tried to think of alternative theories and have not hit on anything that holds together as well as the familiarity explanation.

One alternative explanation is that subjects are using touchstone categories as heuristics to determine value. Imagine that we place amounts we encounter into one of three buckets, valued lowest to highest: pocket change, wallet, and bankable. These categories are formed around how people use money, where they physically store it, and perhaps some kind of “purchase ambition.” Subjects would place a Susan B. Anthony coin into the pocket change bucket. The $1 bill would be placed in the higher “wallet” category and thus be valued more. Of course, the $2 bill falls into the same “wallet” category as well, yet subjects value the $2 bill significantly less than two $1 bills. Something else must be going on.

Perhaps there is a glass half full or half empty framing effect with value. Here are some possible subconscious monologues. I have a single coin and I feel poor because you can’t get much for a coin (half empty). I have a dollar bill and I feel relatively wealthy because one dollar is a psychological threshold to having something of real value (half full). The same effect may happen with two $1 bills. I separate the money into two units and frame each on the threshold of value and feel even wealthier (half full). On the other hand, the $2 bill may inspire visions of higher value bills. I now frame on the wish that I had a $20 bill and I then feel poor (half empty).

Neither of the two theories I put forward can account for a third experiment run by the authors, but it does prompt a new alternative. In the third experiment, subjects effectively compare the value of one real $1 bill and one that has been slightly altered by reversing the image of George Washington’s head (the fake). The fake should appear less “familiar” than the real bill and therefore be valued less, which indeed was the case.

Is it possible that subjects are not valuing the familiar itself but instead discounting the unfamiliar? Unfamiliar money could somehow make subjects less comfortable, perhaps because they fear it could be fake. They may worry it will be more difficult to trade it to others in the future even if they understand that it is equivalent to the more frequently encountered money. Maybe, but, to risk a bad pun, this is probably just the other side of the same coin. I think I will stick with the authors on this one after all. They present the simplest explanation of all the observations.

Friday, April 18, 2008

Neuroeconomics: Testosterone and Trading

The Wall Street Journal recently published an article on the possible impact of testosterone on stock traders. The study is by John M. Coates, a senior research fellow at Cambridge. His general finding is that higher levels of testosterone are correlated with higher levels of risk taking and (during the course of the study, anyhow) better trading results. Increased risk-taking behavior seems intuitive; however, the one-percentage-point gain in financial performance is not as clear-cut. One could dig into the original study to see how Coates tried to account for this, but I do not see a clear way to determine whether the trading decisions were good or bad ones based only on the outcome. Taking really risky bets can sometimes yield high returns. However, could those same returns be generated (on average) with more certainty and less risk? Would they have a higher risk-adjusted rate of return?

The more interesting study mentioned is that of MIT Sloan’s Andrew Lo. He wired up traders to monitor their psycho-physiological state in real time while they executed real trades. It seems he would have a much tighter pairing of the independent variable to the actual decision-making moment. However, this study may suffer from the same issues with the outcome measure. Some time ago Professor Lo presented his experimental design in a class I was taking at the MIT Media Lab (Sandy Pentland’s Digital Anthropology class). I was an MBA student at the time and playing a lot of Texas Hold’em with buddies from the Muddy Charles -- so perhaps that was the inspiration, but I suggested he run a similar experiment on blackjack players. Unlike with the stock market, the expected value of a given blackjack hand can be precisely quantified. On a hand-by-hand basis you could determine whether the subject made the decision that maximized his or her expected value from playing the hand. I would guess that a higher testosterone level and/or a more excited psycho-physiological state would cause subjects to make suboptimal decisions, even if on occasion they got lucky and won.
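For a sense of how the blackjack version could score each decision, here is a minimal Monte Carlo sketch under deliberately simplified rules (infinite deck, dealer hits to 17, a single hit-or-stand choice, no splits or doubles); the example hand is illustrative, not a full blackjack engine.

```python
# A minimal Monte Carlo sketch of the "blackjack as ground truth" idea:
# estimate the expected value of standing versus hitting once for a given
# hand, so a subject's choice can be scored against the EV-maximizing one.
# Rules are simplified (infinite deck, dealer hits to 17, no splits/doubles).
import random

CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # ace counted as 11 first

def hand_value(cards):
    """Best blackjack total, demoting aces from 11 to 1 as needed."""
    total, aces = sum(cards), cards.count(11)
    while total > 21 and aces:
        total -= 10
        aces -= 1
    return total

def dealer_total(upcard):
    cards = [upcard, random.choice(CARDS)]
    while hand_value(cards) < 17:
        cards.append(random.choice(CARDS))
    return hand_value(cards)

def payoff(player_total, upcard):
    if player_total > 21:
        return -1
    dealer = dealer_total(upcard)
    if dealer > 21 or player_total > dealer:
        return 1
    return 0 if player_total == dealer else -1

def expected_value(player_cards, upcard, action, trials=100_000):
    total = 0
    for _ in range(trials):
        cards = list(player_cards)
        if action == "hit":                      # hit exactly once, then stand
            cards.append(random.choice(CARDS))
        total += payoff(hand_value(cards), upcard)
    return total / trials

# Example: hard 16 against a dealer 10 -- a classically marginal decision.
for action in ("stand", "hit"):
    ev = expected_value([10, 6], upcard=10, action=action)
    print(f"EV of {action:>5} on 16 vs dealer 10: {ev:+.3f}")
```

With estimates like these in hand, each recorded decision could be labeled optimal or suboptimal and then regressed against the hormonal or physiological measures.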

Monday, March 31, 2008

Framing Effects and IP Licensing


Marketing Professor John Gourville has produced some very interesting work on how framing effects contribute to new product failure. In my opinion, Gourville’s primary contribution is to extend the analysis of framing effects to include the influence they have on the product teams creating the innovation, as well as on the more traditionally studied consumer. He speculates that product innovation teams eventually get so engrossed in the new features of their innovative product that they reframe the innovation as the status quo, while their potential customers of course do not share this new frame. Since individuals value gains less than they do comparable losses, the new framing leads to substantial misjudgment of customer willingness to adopt the new product. Product innovation teams see “living without” their fancy new features as a loss too great to bear, whereas customers view these same features as small gains. The work is published in a few places, including as a working paper from the Marketing Science Institute called “The Curse of Innovation: Why Innovative New Products Fail.”

I believe a different but related analysis can be carried out in the realm of IP licensing. Generally speaking, in the corporate world research investments are made with the intent of creating valuable IP for a company’s own use in products it brings to market. However, at the other end of the research process companies often find they have some subset of IP that is valuable but does not necessarily fit the company’s business plan. In this case the company is faced with a decision: a) license or sell the IP to another company that wants to use it in the market, in exchange for a fee and/or royalty stream, or b) hold onto the IP “for now” as an option, just in case the company later decides it wants to make use of it.

Many argue that far too often the choice is “b,” for various reasons including inertia, fear of creating a competitor, and inability to find a suitable licensee or to properly value the asset. I believe a substantial contributor to under-licensing is the frame in which the eventual performance of a licensed asset is viewed. Many in the licensor company will view licensee performance from a perspective of “it could have been ours.” They adopt a frame in which, had their company made use of this IP, it would have captured 100% of whatever benefit the licensee derives. The entire gain of the licensee is thus framed as a “loss” for the licensor. Even if the licensor received a substantial royalty in the arrangement and thus shared materially in the success, that royalty would be weighed only weakly as a gain, or netted against the licensee’s larger success and coded as a loss, if it is considered at all. This creates a paradox: the greater the success of the licensing arrangement, the more the licensor will perceive a costly loss. It is no wonder that, in the face of this paradox, licensor decision makers often wrongly choose to let the asset “rot on the shelf.” They are able to get away with this because most companies do not seem to assign the cost of spoiling IP inventory to the licensor decision makers. That true loss needs to make its way into the mental calculus if we are to make better licensing decisions.
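
To put rough numbers on this, here is an illustrative sketch -- the dollar figures are my own invention, and the value function is the standard form shown above -- of how an objectively profitable license can end up coded as a loss:

```python
# Illustrative numbers only: how the "it could have been ours" frame can
# turn an objectively profitable license into a perceived loss. The value
# function is the standard Kahneman-Tversky form from the equation above.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Prospect-theory value of a gain/loss x (in $M) relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

licensee_profit = 10.0   # $M the licensee earns using the IP (assumed)
royalty = 2.0            # $M paid to the licensor (assumed)

rational_frame = value(royalty)                # reference point: shelve the IP, earn nothing
ours_frame = value(royalty - licensee_profit)  # reference point: "we would have had all of it"

print(f"rational frame: v(+{royalty}) = {rational_frame:.2f}")
print(f"'could have been ours' frame: v({royalty - licensee_profit}) = {ours_frame:.2f}")
```

Same cash flows, different reference point: in the rational frame the deal is a clear gain, while in the "could have been ours" frame it registers as a sizable, loss-amplified negative -- and the gap only widens as the licensee succeeds.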

Monday, February 25, 2008

Moral Reasoning: Why do people perceive stealing intellectual property as different from the theft of physical property?

An interesting new collaboration may come out of my visit to the Berkman Center. Oliver Goodenough, who was mentioned in the last post, is embarking on a project to better understand why some people who would never steal something as mundane as office supplies are somehow able to self-justify their theft of intellectual property. The IP itself can take many forms, from song or movie downloads to knowing infringement of a business-process patent, and can be quite valuable. Yet for some reason (perhaps even good reasons) people view some IP theft as more minor than shoplifting a t-shirt. Professor Goodenough and I are hoping to work together to address some of the interesting questions this paradox creates.

Below are a few initial concepts for exploration, several of which came out of a conversation I had with a good friend Ryan Ismert.

Perception of Amoral Act

  • Expectation -- We have been trained since childhood that content can be "free" because of advertising-supported TV and radio. Not many would say that someone is "stealing" by stepping away during the commercial break, even though those ads pay for our content. This "used to getting it for free" sentiment may compel people to avoid paying for IP when they can obtain the content by other means.
  • Metaphor -- When we are taught the nature of theft, we are most typically given examples involving material property. If the crime does not fit neatly into these core examples we may more easily rationalize our theft.
  • Victim Impact -- Crimes have victims. Stealing physical property is a zero-sum game: in most cases only one owner can benefit from it at a time. As such, stealing physical property deprives one party for the benefit of another and creates a clear victim. With content and ideas, copying someone else's bits or ideas does not directly take anything from the owner, who is still free to use the content or idea. Instead the theft circumvents the owner's monopoly right to that IP.
  • Prevent v. Produce -- Patents provide the right to prevent someone else from producing; they do not give the owner the right or the obligation to produce. Overcoming someone else's right to prevent me from doing something (practicing an idea, or downloading free content if I am able to) does not feel like stealing in the same way a home invasion does.
  • Problem with Disregarding -- Once you learn something it is very difficult to disregard that information. If someone is exposed to beneficial IP, such as a superior business process, it will be very difficult for them to ignore it. That person may feel compelled to follow the more efficient or superior process once it is known.

Willingness to Commit Act

  • Tangible Risk -- Physical property must be physically carried off. The act of stealing in the material world is observable, as is the evidence embodied in the object itself. Intangible property theft and retention are perceived (probably falsely) to be less observable.
  • Victim Empathy -- In many cases, the victim of IP theft is less directly observable. Victims of physical property theft can usually be specifically identified and are therefore more likely to elicit empathy. IP theft victims are more often unsympathetic corporations, sometimes even cast in the villain role. There may even be a Robin Hood effect if the IP thief is seen as stealing from the rich for the benefit of the "poor."

Monday, February 18, 2008

Berkman Center Visit

A few colleagues and I paid a visit to the Berkman Center for Internet and Society at Harvard Law School this past week. They are doing some fascinating work in a variety of areas. The topic of the day was how to bring sustainable development to people living at the “Bottom of the Pyramid” -- those living on less than $2 per day. Berkman projects on identity and reputation in large-scale systems are especially relevant. At the village level, a person at the bottom of the pyramid can only effectively do business with people they know and trust. They have little access to the modern infrastructure wealthier people rely on every day, such as ID cards, credit ratings, etc. Finding ways to expand trust infrastructure is critical to development projects of all kinds.

After the meeting I spent some time with Oliver Goodenough, a Professor of Law at Vermont Law School. I did not realize it until after the meeting, but it turns out Goodenough worked with Richard Dawkins, including co-writing the Nature paper “The 'St Jude' Mind Virus.” Now he is doing research that applies neuroscience to problems in business and law. This includes using fMRI to study moral reasoning about legal subjects.

It is amazing to me to see how rapidly the cross-pollination of social science research with neuroscience is taking place. Our increased understanding of the brain is going to remake social science as we know it. We are living in exciting times.

Wednesday, January 30, 2008

HBR Innovation Killers

There is a lot more to say on intangibles, but I need to step away from that to comment on a new Clay Christensen, Stephen Kaufman, and Willy Shih article in the January 2008 Harvard Business Review. Besides, the topic is somewhat related. It is about “how financial tools destroy your capacity to do new things.” The basic idea is that financially oriented tools that may work well for guiding decisions in an existing line of business often fail when applied to new and innovative projects. The tool examples given are net present value calculations, the treatment of fixed and sunk costs, and earnings-per-share obsession. They conclude with a discussion of how Discovery Driven Planning (DDP) -- created by Ian MacMillan and Rita McGrath as a discipline for learning your way into a new venture -- is a better approach for corporate venturing than the standard stage-gate process. (For full disclosure, I worked for Mac and Rita as an undergraduate, and I created and taught a case module on DDP for Wharton’s Exec Ed. And yes, I was very excited to see it included in the article!)

From a big-picture standpoint I think the article’s premise is entirely correct. These financial tools encourage incremental innovation over disruptive innovation, they cause people to play numbers games to hit hurdle rates, they do not allow for an evolving strategy, and so on. There is a particularly insightful section discussing how the owner/agent problem between investors and company management is now really an agent/agent problem, given the role of professional investment managers. I do, however, disagree with some of the specifics in the article.

Take the analysis of how NPV is inappropriately applied in corporations considering investments in innovation. It very likely is inappropriately applied, but not for the reasons provided. To get the full context you really need to read the article (hbr.org), but here is a quick summary of the argument. The authors state that NPV analysis 1) compares innovation-investment cash flow forecasts against an assumed steady cash flow from the existing business “doing nothing,” when in reality that existing cash flow is more likely to decline, and 2) compounds the unfair comparison through the way terminal value is estimated for investments in innovation.

1) IMPLIED STEADY CASH FLOW ASSUMPTION
I do not believe the authors are entirely correct in their depiction of how NPV is used to make decisions within a company. First of all, the idea that the company is making a choice between investing in innovation and “doing nothing” is false, or certainly oversimplified. In fact the company is definitely going to do something; the real application of NPV is to decide among a whole mess of competing projects of all kinds. Additionally, even though finance theory dictates that a company should accept all NPV-positive investments, in practice companies work with limited resources (including attention, management talent, etc.) and must choose among many competing projects -- rejecting many that are NPV positive on paper. Next, most large corporations divide their resources into budgets. They have budgets set aside for ongoing operations and typically have entirely separate budgets for “special projects” such as R&D, corporate ventures, etc. The rate of return they have historically seen from their ongoing operations does help set the hurdle rate for all investments, but otherwise investments in specific special projects are considered separately from business-as-usual resource allocations. A given innovation project is much more likely to be compared on an NPV basis with another innovation project. Reducing the cash flow forecast of the existing business, or even adjusting down the discount rate to acknowledge that returns from the existing business will erode, would not likely change the decision outcome. So I think in practice the corporate decision maker is not really falling into the “trap” the authors suggest. The person allocating the special-projects budget never (directly, anyhow) considers the existing business’s cash stream, be it steady or declining.

As a thought exercise: if a company suddenly reduced its year 3-5 cash flow estimate for the existing business, would that change how it allocates its special-projects budget? It might cause it to increase the size of the special-projects budget next year, effectively making more investments in innovation; however, NPV as a financial tool would have played no role in that change.


2) TERMINAL VALUE
Here the argument is that the right estimates for the pro forma are hard to choose, especially farther out in time. Project leaders do their best to create a five-year projection and then “punt” to a terminal value for the out years. Terminal value takes the final projected year’s cash flow, assumes a fixed future growth rate, and turns it into a growing perpetuity -- an annuity that runs forever. I believe the authors’ argument is that this terminal-value portion may underestimate the value of the innovative investment since, with innovative projects, growth rapidly accelerates after year five. Further, the company’s existing business will be in decline by this time, but, as discussed previously, the company makes an implicit assumption that it will not be declining -- an assumption that becomes more egregious the farther out we go.
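
For concreteness, the standard construction being described -- five explicit forecast years, a discount rate r, and an assumed perpetual growth rate g (with r > g) -- is:

$$
\mathrm{NPV} = \sum_{t=1}^{5} \frac{CF_t}{(1+r)^t} + \frac{TV_5}{(1+r)^5},
\qquad
TV_5 = \frac{CF_5\,(1+g)}{r-g}
$$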

I believe this is the very first time I have heard anyone suggest that terminal values underestimate value. As the article itself mentions, terminal values often account for over half of the NPV. This is a cash stream that runs forever; how much more could we possibly value it? Would not the authors’ own argument that businesses eventually decline apply to the innovative business as well? Over-reliance on terminal values to make an investment case work is a much more prevalent abuse of NPV than any the authors cite.
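
A quick sketch with made-up but plausible numbers shows how easily the terminal value comes to dominate the calculation (the cash flows, growth rate, and discount rate below are my own illustrative choices):

```python
# Made-up but plausible numbers showing how easily the terminal value
# comes to dominate an NPV calculation (here well over half of the total).
def npv_components(cash_flows, r, g):
    explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    n = len(cash_flows)
    terminal = cash_flows[-1] * (1 + g) / (r - g)   # growing perpetuity
    terminal_pv = terminal / (1 + r) ** n
    return explicit, terminal_pv

cash_flows = [1.0, 1.2, 1.4, 1.6, 1.8]   # $M, years 1-5 (assumed)
explicit_pv, terminal_pv = npv_components(cash_flows, r=0.12, g=0.03)
total = explicit_pv + terminal_pv
print(f"explicit years: {explicit_pv:.2f}  terminal: {terminal_pv:.2f}  "
      f"terminal share: {terminal_pv / total:.0%}")
```

With these inputs the terminal value contributes roughly 70% of the total NPV, which is why leaning on it even harder strikes me as the more dangerous direction.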

As always there is more to say, but I had better end this posting here. Again, I loved the big-picture concept presented: over-application and inappropriate application of quantitative financial tools will cause a company to miss the boat on innovation and lead it down the path to disruption. I only take exception to some of the specific arguments used to defend the thesis. I am not a finance person, so take my analysis with a grain of salt, but I think there are at least a few red flags here worth examining more closely.

Please read the article. You will be well rewarded.
http://harvardbusinessonline.hbsp.harvard.edu/hbsp/hbr/articles/article.jsp?ml_action=get-article&articleID=R0801F&ml_page=1&ml_subscriber=true

Thursday, January 10, 2008

Investing in Intangible Assets

Companies (or perhaps more properly, individuals within companies) are regularly faced with decisions on how best to invest their limited resources. One asset class in which companies are often suspected of underinvesting is "intangibles." Intangible assets are things like staff skills, brand, R&D, and even supplier relationships. I will divide the decisions to be made about intangibles into two buckets:

1. Investing in intangible assets
–Whether to invest in intangibles
–Deciding amongst intangible investment options
–How much to invest over what duration


2. Capitalizing on previous intangible investment
–Leverage asset internally
–Leverage asset externally
–Hold as an option



The psychology of decision making regarding intangibles probably warrants several postings, but I'd like to start with some internal monologues that could go through the minds of executives making type-1 decisions and that would lead to underinvestment:


Investment and return are significantly separated in time

* I have a large implied discount rate (which may be significantly greater than the one I have for tangible investments; see the sketch after this list)

* My micro incentives are misaligned (I won't still be in this job when the return comes)

Complexity of system interactions that eventually lead to return

* The number of steps between investment and return makes this investment too confusing for me to be comfortable.

* Complexity means it will be hard for others to give me proper credit for the returns when they do come (will they see that I caused this positive return, or will it be attributed to someone else?). Then again, it may also be more difficult to assign losses to me.

Accounting

* Because it is so hard to attribute gains, my spending on intangibles may be accounted for and seen by others simply as a cost rather than an investment. No asset goes on the books to reflect my investment.

Control

* I don’t have the same level of control over intangible assets, which makes me nervous. I like to feel in control. It is also harder to become the gatekeeper to an intangible asset, and gatekeeping is a key source of power.

Status

* My investment doesn’t create a specific asset I can point to as “owning,” which makes it more difficult to gain the respect of others (unlike a factory, supercomputer, etc.).


Trust

* It’s harder for me to tell which intangible-asset business proposals are real and which are budget-grabbing “scams.” Will the return actually happen, or am I being tricked?
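
As promised above, here is a minimal sketch of the implied-discount-rate monologue. The rates and payoff are my own illustrative choices; the point is only how quickly a distant intangible return shrinks under an executive's high personal discount rate relative to the firm's view:

```python
# Minimal sketch (illustrative numbers of my own choosing): the same
# five-years-out payoff shrinks much faster under an executive's high
# implied personal discount rate than under the corporate hurdle rate.
def present_value(payoff, rate, years):
    return payoff / (1 + rate) ** years

PAYOFF, YEARS = 10.0, 5          # $10M benefit arriving five years out (assumed)
corporate_rate = 0.10            # assumed corporate hurdle rate
personal_rate = 0.40             # assumed implied personal discount rate

print(f"firm's view:      ${present_value(PAYOFF, corporate_rate, YEARS):.1f}M")
print(f"executive's view: ${present_value(PAYOFF, personal_rate, YEARS):.1f}M")
```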

Of course it is very difficult to really know what is going on in the minds of decision makers, but perhaps this gives us a few concepts to test.

Sunday, January 6, 2008

Purpose

I am interested in how people make decisions, especially those decisions that surround the creation and commercialization of innovation. For me, I hope blogging will be a way to express my interests and ideas (no matter how naive) and that the process will stimulate discussion. If we are lucky we may discover something new about how people think.

"Prospect Theory" is an ode to Daniel Kahneman and Amos Tversky who gave us many of the most important advances in modern decision theory. Under this title I will be sharing thoughts on articles, ideas for new research, and examples of innovation decisions from my experience. In "real life" I am a business development executive with a major research lab so I have lived some of the theory we will be discussing. I hope you find it valuable.

More to come at www.prospecttheory.net