BY THE SAME AUTHOR

Quasi-Rational Economics

The Winner’s Curse: Paradoxes and Anomalies of Economic Life

Nudge: Improving Decisions About Health, Wealth, and Happiness

(with Cass R. Sunstein)

Richard H. Thaler


MISBEHAVING

How Economics Became Behavioural

ALLEN LANE

UK | USA | Canada | Ireland | Australia
India | New Zealand | South Africa

Penguin Books is part of the Penguin Random House group of companies whose addresses can be found at global.penguinrandomhouse.com.

Penguin Random House UK

First published in the United States of America by W. W. Norton & Company, Inc. 2015
First published in Great Britain by Allen Lane 2015

Text copyright © Richard H. Thaler, 2015

Jacket design by Pete Garceau
Jacket illustration of birds © Thinkstock/Getty Images

The moral right of the author has been asserted

ISBN: 978-0-141-96615-1

To:

Victor Fuchs, who gave me a year to think, and Eric Wanner and the Russell Sage Foundation, who backed a crazy idea

And to:

Colin Camerer and George Loewenstein, early students of misbehaving

Contents

Preface

I. BEGINNINGS: 1970–78

1. Supposedly Irrelevant Factors

2. The Endowment Effect

3. The List

4. Value Theory

5. California Dreamin’

6. The Gauntlet

II. MENTAL ACCOUNTING: 1979–85

7. Bargains and Rip-Offs

8. Sunk Costs

9. Buckets and Budgets

10. At the Poker Table

III. SELF-CONTROL: 1975–88

11. Willpower? No Problem

12. The Planner and the Doer

INTERLUDE

13. Misbehaving in the Real World

IV. WORKING WITH DANNY: 1984–85

14. What Seems Fair?

15. Fairness Games

16. Mugs

V. ENGAGING WITH THE ECONOMICS PROFESSION: 1986–94

17. The Debate Begins

18. Anomalies

19. Forming a Team

20. Narrow Framing on the Upper East Side

VI. FINANCE: 1983–2003

21. The Beauty Contest

22. Does the Stock Market Overreact?

23. The Reaction to Overreaction

24. The Price Is Not Right

25. The Battle of Closed-End Funds

26. Fruit Flies, Icebergs, and Negative Stock Prices

VII. WELCOME TO CHICAGO: 1995–PRESENT

27. Law Schooling

28. The Offices

29. Football

30. Game Shows

VIII. HELPING OUT: 2004–PRESENT

31. Save More Tomorrow

32. Going Public

33. Nudging in the U.K.

Conclusion: What Is Next?

Notes

Bibliography

List of Figures

Acknowledgments


The foundation of political economy and, in general, of every social science, is evidently psychology. A day may come when we shall be able to deduce the laws of social science from the principles of psychology.

VILFREDO PARETO, 1906

Preface

Before we get started, here are two stories about my friends and mentors, Amos Tversky and Daniel Kahneman. The stories provide some hints about what to expect in this book.

Striving to please Amos

Even for those of us who can’t remember where we last put our keys, life offers indelible moments. Some are public events. If you are as old as I am, one may be the day John F. Kennedy was assassinated (freshman in college, playing pickup basketball in the college gym). For anyone old enough to be reading this book, September 11, 2001, is another (just getting up, listening to NPR, trying to make sense of it).

Other events are personal: from weddings to a hole in one. For me one such event was a phone call from Danny Kahneman. Although we speak often, and there are hundreds of calls that have left no trace, for this one I know precisely where I was standing. It was early 1996 and Danny had called to share the news that his friend and collaborator Amos Tversky was ill with terminal cancer and had about six months to live. I was so discombobulated that I had to hand the phone to my wife while I recovered my composure. The news that any good friend is dying is shocking, but Amos Tversky was just not the sort of person who dies at age fifty-nine. Amos, whose papers and talks were precise and perfect, and on whose desk sat only a pad and pencil, lined up in parallel, did not just die.

Amos kept the news quiet until he was no longer able to go into the office. Prior to that, only a small group knew, including two of my close friends. We were not allowed to share our knowledge with anyone except our spouses, so we took turns consoling one another for the five months that we kept this awful news to ourselves.

Amos did not want his health status to be public because he did not want to devote his last months to playing the part of a dying man. There was work to do. He and Danny decided to edit a book: a collection of papers by themselves and others in the field of psychology that they had pioneered, the study of judgment and decision-making. They called it Choices, Values, and Frames. Mostly Amos wanted to do the things he loved: working, spending time with his family, and watching basketball. During this period Amos did not encourage visitors wishing to express their condolences, but “working” visits were allowed, so I went to see him about six weeks before he died, under the thin disguise of finishing a paper we had been working on. We spent some time on that paper and then watched a National Basketball Association (NBA) playoff game.

Amos was wise in nearly every aspect of his life, and that included dealing with illness.* After consulting with specialists at Stanford about his prognosis, he decided that ruining his final months with pointless treatments that would make him very sick and at best extend his life by a few weeks was not a tempting option. His sharp wit remained. He explained to his oncologist that cancer is not a zero-sum game. “What is bad for the tumor is not necessarily good for me.” One day on a phone call I asked him how he was feeling. He said, “You know, it’s funny. When you have the flu you feel like you are going to die, but when you are dying, most of the time you feel just fine.”

Amos died in June and the funeral was in Palo Alto, California, where he and his family lived. Amos’s son Oren gave a short speech at the service and quoted from a note that Amos had written to him days before he died:

I feel that in the last few days we have been exchanging anecdotes and stories with the intention that they will be remembered, at least for a while. I think there is a long Jewish tradition that history and wisdom are being transmitted from one generation to another not through lectures and history books, but through anecdotes, funny stories, and appropriate jokes.

After the funeral, the Tverskys hosted a traditional shiv’a gathering at their home. It was a Sunday afternoon. At some point a few of us drifted into the TV room to catch the end of an NBA playoff game. We felt a bit sheepish, but then Amos’s son Tal volunteered: “If Amos were here, he would have voted for taping the funeral and watching the game.”

From the time I first met Amos in 1977, I applied an unofficial test to every paper I wrote. “Would Amos approve?” My friend Eric Johnson, whom you will meet later on, can attest that one paper we wrote together took three years to get published after it had been accepted by a journal. The editor, the referees, and Eric were all happy with the paper, but Amos was hung up on one point and I wanted to meet his objection. I kept plugging away at that paper, while poor Eric was coming up for promotion without that paper on his vita. Fortunately Eric had written plenty of other strong papers, so my stalling did not cost him tenure. In time, Amos was satisfied.

In writing this book I took Amos’s note to Oren seriously. The book is not the sort you might expect an economics professor to write. It is neither a treatise nor a polemic. Of course there will be discussions of research, but there will also be anecdotes, (possibly) funny stories, and even the odd joke.

Danny on my best qualities

One day in early 2001, I was visiting Danny Kahneman at his home in Berkeley. We were in his living room schmoozing, as we often do. Then Danny suddenly remembered he had an appointment for a telephone call with Roger Lowenstein, a journalist who was writing an article about my work for the New York Times Magazine. Roger, the author of the well-known book When Genius Failed, among others, naturally wanted to talk to my old friend Danny. Here was a quandary. Should I leave the room, or listen in? “Stay,” Danny said, “this could be fun.”

The interview started. Hearing a friend tell an old story about you is not an exciting activity, and hearing someone praise you is always awkward. I picked up something to read and my attention drifted—until I heard Danny say: “Oh, the best thing about Thaler, what really makes him special, is that he is lazy.”

What? Really? I would never deny being lazy, but did Danny think that my laziness was my single best quality? I started waving my hands and shaking my head madly but Danny continued, extolling the virtues of my sloth. To this day, Danny insists it was a high compliment. My laziness, he claims, means I only work on questions that are intriguing enough to overcome this default tendency of avoiding work. Only Danny could turn my laziness into an asset.

But there you have it. Before reading further you should bear in mind that this book has been written by a certifiably lazy man. The upside is that, according to Danny, I will only include things that are interesting, at least to me.

I.


BEGINNINGS
1970–78

1

Supposedly Irrelevant Factors

Early in my teaching career I managed to inadvertently get most of the students in my microeconomics class mad at me, and for once, it had nothing to do with anything I said in class. The problem was caused by a midterm exam.

I had composed an exam that was designed to distinguish among three broad groups of students: the stars who really mastered the material, the middle group who grasped the basic concepts, and the bottom group who just didn’t get it. To successfully accomplish this task, the exam had to have some questions that only the top students would get right, which meant that the exam was hard. The exam accomplished my goal—there was a wide dispersion of scores—but when the students got their results they were in an uproar. Their principal complaint was that the average score was only 72 points out of a possible 100.

What was odd about this reaction was that the average numerical score on the exam had absolutely no effect on the distribution of grades. The norm at the school was to use a grading curve in which the average grade was a B or B+, and only a tiny number of students received grades below a C. I had anticipated the possibility that a low average numerical score might cause some confusion on this front, so I had reported how the numerical scores would be translated into actual grades in the class. Anything over 80 would get an A or A–, scores above 65 would get some kind of B, and only scores below 50 were in danger of getting a grade below C. The resulting distribution of grades was not different from normal, but this announcement had no apparent effect on the students’ mood. They still hated my exam, and they were none too happy with me either. As a young professor worried about keeping my job, I was determined to do something about this, but I did not want to make my exams any easier. What to do?

Finally, an idea occurred to me. On the next exam, I made the total number of points available 137 instead of 100. This exam turned out to be slightly harder than the first, with students getting only 70% of the answers right, but the average numerical score was a cheery 96 points. The students were delighted! No one’s actual grade was affected by this change, but everyone was happy. From that point on, whenever I was teaching this course, I always gave exams a point total of 137, a number I chose for two reasons. First, it produced an average score well into the 90s, with some students even getting scores above 100, generating a reaction approaching ecstasy. Second, because dividing one’s score by 137 was not easy to do in one’s head, most students did not seem to bother to convert their scores into percentages. Lest you think I was somehow deceiving the students, in subsequent years I included this statement, printed in bold type, in my course syllabus: “Exams will have a total of 137 points rather than the usual 100. This scoring system has no effect on the grade you get in the course, but it seems to make you happier.” And indeed, after I made that change, I never got a complaint that my exams were too hard.

In the eyes of an economist, my students were “misbehaving.” By that I mean that their behavior was inconsistent with the idealized model of behavior that is at the heart of what we call economic theory. To an economist, no one should be happier about a score of 96 out of 137 (70%) than 72 out of 100, but my students were. And by realizing this, I was able to set the kind of exam I wanted but still keep the students from grumbling.
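A minimal sketch in Python, using only the scores quoted above, confirms the point: the cheery 96 out of 137 is actually a slightly lower percentage than the reviled 72 out of 100.

```python
# Comparing the two exam averages described above (figures from the story).
first_exam = 72 / 100     # average on the original 100-point exam
second_exam = 96 / 137    # average on the 137-point exam

print(f"First exam:  {first_exam:.1%}")   # 72.0%
print(f"Second exam: {second_exam:.1%}")  # 70.1% -- lower, yet happier students
```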

For four decades, since my time as a graduate student, I have been preoccupied by these kinds of stories about the myriad ways in which people depart from the fictional creatures that populate economic models. It has never been my point to say that there is something wrong with people; we are all just human beings—Homo sapiens. Rather, the problem is with the model being used by economists, a model that replaces Homo sapiens with a fictional creature called Homo economicus, which I like to call an Econ for short. Compared to this fictional world of Econs, Humans do a lot of misbehaving, and that means that economic models make a lot of bad predictions, predictions that can have much more serious consequences than upsetting a group of students. Virtually no economists saw the financial crisis of 2007–08 coming,* and worse, many thought that both the crash and its aftermath were things that simply could not happen.

Ironically, the existence of formal models based on this misconception of human behavior is what gives economics its reputation as the most powerful of the social sciences—powerful in two distinct ways. The first way is indisputable: of all the social scientists, economists carry the most sway when it comes to influencing public policy. In fact, they hold a virtual monopoly on giving policy advice. Until very recently, other social scientists were rarely invited to the table, and when they were invited, they were relegated to the equivalent of the kids’ table at a family gathering.

The other way is that economics is also considered the most powerful of the social sciences in an intellectual sense. That power derives from the fact that economics has a unified, core theory from which nearly everything else follows. If you say the phrase “economic theory,” people know what you mean. No other social science has a similar foundation. Rather, theories in other social sciences tend to be for special purposes—to explain what happens in a particular set of circumstances. In fact, economists often compare their field to physics; like physics, economics builds from a few core premises.

The core premise of economic theory is that people choose by optimizing. Of all the goods and services a family could buy, the family chooses the best one that it can afford. Furthermore, the beliefs upon which Econs make choices are assumed to be unbiased. That is, we choose on the basis of what economists call “rational expectations.” If people starting new businesses on average believe that their chance of succeeding is 75%, then that should be a good estimate of the actual number that do succeed. Econs are not overconfident.

This premise of constrained optimization, that is, choosing the best from a limited budget, is combined with the other major workhorse of economic theory, that of equilibrium. In competitive markets where prices are free to move up and down, those prices fluctuate in such a way that supply equals demand. To simplify somewhat, we can say that Optimization + Equilibrium = Economics. This is a powerful combination, nothing that other social sciences can match.

There is, however, a problem: the premises on which economic theory rests are flawed. First, the optimization problems that ordinary people confront are often too hard for them to solve, or even come close to solving. Even a trip to a decent-sized grocery store offers a shopper millions of combinations of items that are within the family’s budget. Does the family really choose the best one? And, of course, we face many much harder problems than a trip to the store, such as choosing a career, mortgage, or spouse. Given the failure rates we observe in all of these domains, it would be hard to defend the view that all such choices are optimal.

Second, the beliefs upon which people make their choices are not unbiased. Overconfidence may not be in the economists’ dictionary, but it is a well-established feature of human nature, and there are countless other biases that have been documented by psychologists.

Third, there are many factors that the optimization model leaves out, as my story about the 137-point exam illustrates. In a world of Econs, there is a long list of things that are supposedly irrelevant. No Econ would buy a particularly large portion of whatever will be served for dinner on Tuesday because he happens to be hungry when shopping on Sunday. Your hunger on Sunday should be irrelevant in choosing the size of your meal for Tuesday. An Econ would not finish that huge meal on Tuesday, even though he is no longer hungry, just because he had paid for it and hates waste. To an Econ, the price paid for some food item in the past is not relevant in making the decision about how much of it to eat now. An Econ would also not expect a gift on the day of the year in which she happened to get married, or be born. What possible difference can a date make? In fact, Econs would be perplexed by the entire idea of gifts. An Econ would know that cash is the best possible gift; it allows the recipient to buy whatever is optimal. But unless you are married to an economist, I don’t advise giving cash on your next anniversary. Come to think of it, even if your spouse is an economist, this is probably not a great idea.

You know, and I know, that we do not live in a world of Econs. We live in a world of Humans. And since most economists are also human, they also know that they do not live in a world of Econs. Adam Smith, the father of modern economic thinking, explicitly acknowledged this fact. Before writing his magnum opus, The Wealth of Nations, he wrote another book devoted to the topic of human “passions,” a word that does not appear in any economics textbook. Econs do not have passions; they are cold-blooded optimizers. Think of Mr. Spock in Star Trek.

Nevertheless, this model of economic behavior based on a population consisting only of Econs has flourished, raising economics to that pinnacle of influence on which it now rests. Critiques over the years have been brushed aside with a gauntlet of poor excuses and implausible alternative explanations of embarrassing empirical evidence. But one by one those excuses have been rebutted by a series of studies that have progressively raised the stakes. It is easy to dismiss a story about the grading of an exam. It is harder to dismiss studies that document poor choices in large-stakes domains such as saving for retirement, choosing a mortgage, or investing in the stock market. And it is impossible to dismiss the series of booms, bubbles, and crashes we have observed in financial markets beginning on October 19, 1987, a day when stock prices fell more than 20% all around the world in the absence of any substantive bad news. This was followed by a bubble and crash in technology stocks that quickly turned into a bubble in housing prices, which in turn, when popped, caused a global financial crisis.

It is time to stop making excuses. We need an enriched approach to doing economic research, one that acknowledges the existence and relevance of Humans. The good news is that we do not need to throw away everything we know about how economies and markets work. Theories based on the assumption that everyone is an Econ should not be discarded. They remain useful as starting points for more realistic models. And in some special circumstances, such as when the problems people have to solve are easy or when the actors in the economy have the relevant highly specialized skills, then models of Econs may provide a good approximation of what happens in the real world. But as we will see, those situations are the exception rather than the rule.

Moreover, much of what economists do is to collect and analyze data about how markets work, work that is largely done with great care and statistical expertise, and importantly, most of this research does not depend on the assumption that people optimize. Two research tools that have emerged over the past twenty-five years have greatly expanded economists’ repertoire for learning about the world. The first is the use of randomized controlled trials, long used in other scientific fields such as medicine. The typical study investigates what happens when some people receive some “treatment” of interest. The second approach is to use either naturally occurring experiments (such as when some people are enrolled in a program and others are not) or clever econometric techniques that manage to detect the impact of treatments even though no one deliberately designed the situation for that purpose. These new tools have spawned studies on a wide variety of important questions for society. The treatments studied have included getting more education, being taught in a smaller class or by a better teacher, being given management consulting services, being given help to find a job, being sentenced to jail, moving to a lower-poverty neighborhood, receiving health insurance from Medicaid, and so forth. These studies show that one can learn a lot about the world without imposing optimizing models, and in some cases provide credible evidence against which to test such models and see if they match actual human responses.

For much of economic theory, the assumption that all the agents are optimizing is not a critical one, even if the people under study are not experts. For example, the prediction that farmers use more fertilizer if the price falls is safe enough, even if many farmers are slow to change their practices in response to market conditions. The prediction is safe because it is imprecise: all that is predicted is the direction of the effect. This is equivalent to a prediction that when apples fall off the tree, they fall down rather than up. The prediction is right as far as it goes, but it is not exactly the law of gravity.

Economists get in trouble when they make a highly specific prediction that depends explicitly on everyone being economically sophisticated. Let’s go back to the farming example. Say scientists learn that farmers would be better off using more or less fertilizer than has been the tradition. If everyone can be assumed to get things right as long as they have the proper information, then there is no appropriate policy prescription other than making this information freely available. Publish the findings, make them readily available to farmers, and let the magic of markets take care of the rest.

Unless all farmers are Econs, this is bad advice. Perhaps multinational food companies will be quick to adopt the latest research findings, but what about the behavior of peasant farmers in India or Africa?

Similarly, if you believe that everyone will save just the right amount for retirement, as any Econ would do, and you conclude from this analysis that there is no reason to try to help people save (say, by creating pension plans), then you are passing up the chance to make a lot of people better off. And, if you believe that financial bubbles are theoretically impossible, and you are a central banker, then you can make serious mistakes—as Alan Greenspan, to his credit, has admitted happened to him.

We don’t have to stop inventing abstract models that describe the behavior of imaginary Econs. We do, however, have to stop assuming that those models are accurate descriptions of behavior, and stop basing policy decisions on such flawed analyses. And we have to start paying attention to those supposedly irrelevant factors, what I will call SIFs for short.

It is difficult to change people’s minds about what they eat for breakfast, let alone problems that they have worked on all their lives. For years, many economists strongly resisted the call to base their models on more accurate characterizations of human behavior. But thanks to an influx of creative young economists who have been willing to take some risks and break with the traditional ways of doing economics, the dream of an enriched version of economic theory is being realized. The field has become known as “behavioral economics.” It is not a different discipline: it is still economics, but it is economics done with strong injections of good psychology and other social sciences.

The primary reason for adding Humans to economic theories is to improve the accuracy of the predictions made with those theories. But there is another benefit that comes with including real people in the mix. Behavioral economics is more interesting and more fun than regular economics. It is the un-dismal science.

Behavioral economics is now a growing branch of economics, and its practitioners can be found in most of the best universities around the world. And recently, behavioral economists and behavioral scientists more generally are becoming a small part of the policy-making establishment. In 2010 the government of the United Kingdom formed a Behavioural Insights Team, and now other countries around the world are joining the movement to create special teams with the mandate to incorporate the findings of other social sciences into the formulation of public policy. Businesses are catching on as well, realizing that a deeper understanding of human behavior is every bit as important to running a successful business as is an understanding of financial statements and operations management. After all, Humans run companies, and their employees and customers are also Humans.

This book is the story of how this happened, at least as I have seen it. Although I did not do all the research—as you know, I am too lazy for that—I was around at the beginning and have been part of the movement that created this field. Following Amos’s dictum, there will be many stories to come, but my main goals are to tell the tale of how it all happened, and to explain some of the things we learned along the way. Not surprisingly, there have been numerous squabbles with traditionalists who defended the usual way of doing economics. Those squabbles were not always fun at the time, but like a bad travel experience, they make for good stories after the fact, and the necessity of fighting those battles has made the field stronger.

Like any story, this one does not follow a straight-line progression with one idea leading naturally to another. Many ideas were percolating at different times and at different speeds. As a result, the organizational structure of the book is both chronological and topical. Here is a brief preview. We start at the beginning, back when I was a graduate student and was collecting a list of examples of odd behaviors that did not seem to fit the models I was learning in class. The first section of the book is devoted to those early years in the wilderness, and describes some of the challenges that were thrown down by the many who questioned the value of this enterprise. We then turn to a series of topics that occupied most of my attention for the first fifteen years of my research career: mental accounting, self-control, fairness, and finance. My objective is to explain what my colleagues and I learned along the way, so that you can use those insights yourself to improve your understanding of your fellow Humans. But there may also be useful lessons about how to try to change the way people think about things, especially when they have a lot invested in maintaining the status quo. Later, we turn to more recent research endeavors, from the behavior of New York City taxi drivers, to the drafting of players into the National Football League, to the behavior of participants on high-stakes game shows. At the end we arrive in London, at Number 10 Downing Street, where a new set of exciting challenges and opportunities is emerging.

My only advice for reading the book is to stop reading when it is no longer fun. To do otherwise, well, that would be just misbehaving.

2

The Endowment Effect

I began to have deviant thoughts about economic theory while I was a graduate student in the economics department at the University of Rochester, located in upstate New York. Although I had misgivings about some of the material presented in my classes, I was never quite sure whether the problem was in the theory or in my flawed understanding of the subject matter. I was hardly a star student. In that New York Times Magazine article by Roger Lowenstein that I mentioned in the preface, my thesis advisor, Sherwin Rosen, gave the following as an assessment of my career as a graduate student: “We did not expect much of him.”

My thesis was on a provocative-sounding topic, “The Value of a Life,” but the approach was completely standard. Conceptually, the proper way to think about this question was captured by economist Thomas Schelling in his wonderful essay “The Life You Save May Be Your Own.” Many times over the years my interests would intersect with those of Schelling, an early supporter of and contributor to what we now call behavioral economics. Here is a famous passage from his essay:

Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.

Schelling writes the way he speaks: with a wry smile and an impish twinkle in his eye. He wants to make you a bit uncomfortable.* Here, the story of the sick girl is a vivid way of capturing the major contribution of the article. The hospitals stand in for the concept Schelling calls a “statistical life,” as opposed to the girl, who represents an “identified life.” We occasionally run into examples of identified lives at risk in the real world, such as the thrilling rescue of trapped miners. As Schelling notes, we rarely allow any identified life to be extinguished solely for the lack of money. But of course thousands of “unidentified” people die every day for lack of simple things like mosquito nets, vaccines, or clean water.

Unlike the sick girl, the typical domestic public policy decision is abstract. It lacks emotional impact. Suppose we are building a new highway, and safety engineers tell us that making the median divider a meter wider will cost $42 million and prevent 1.4 fatal accidents per year for thirty years. Should we do it? Of course, we do not know the identity of those victims. They are “merely” statistical lives. But to decide how wide to make that median strip we need a value to assign to those lives prolonged, or, more vividly, “saved” by the expenditure. And in a world of Econs, society would not pay more to save one identified life than twenty statistical lives.
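To see what such a valuation looks like in practice, here is a back-of-the-envelope sketch in Python, using only the hypothetical figures from the median-divider example above (and ignoring discounting, which a real cost-benefit analysis would include):

```python
# Implied value per statistical life in the median-divider example above.
cost = 42_000_000      # cost of the wider median divider, in dollars
lives_per_year = 1.4   # fatal accidents prevented per year
years = 30             # assumed lifetime of the improvement

lives_saved = lives_per_year * years   # 42 statistical lives
value_per_life = cost / lives_saved    # $1,000,000 per life, ignoring discounting
print(f"Implied value per statistical life: ${value_per_life:,.0f}")
```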

As Schelling noted, the right question asks how much the users of that highway (and perhaps their friends and family members) would be willing to pay to make each trip they take a tiny bit safer. Schelling had specified the correct question, but no one had yet come up with a way to answer it. To crack the problem you needed some situation in which people make choices that involve a trade-off between money and risk of death. From there you can infer their willingness to pay for safety. But where to observe such choices?

Economist Richard Zeckhauser, a student of Schelling’s, noted that Russian roulette offers a way to think about the problem. Here is an adaptation of his example. Suppose Aidan is required to play one game of machine-gun Russian roulette using a gun with many chambers, say 1,000, of which four have been picked at random to have bullets. Aidan has to pull the trigger once. (Mercifully, the gun is set on single shot.) How much would Aidan be willing to pay to remove one bullet?* Although Zeckhauser’s Russian roulette formulation poses the problem in an elegant way, it does not help us come up with any numbers. Running experiments in which subjects point loaded guns at their heads is not a practical method for obtaining data.

While pondering these issues I had an idea. Suppose I could get data on the death rates of various occupations, including dangerous ones like mining, logging, and skyscraper window-washing, and safer ones like farming, shopkeeping, and low-rise window-washing. In a world of Econs, the riskier jobs would have to pay more, otherwise no one would do them. In fact, the extra wages paid for a risky job would have to compensate the workers for taking on the risks involved (as well as any other attributes of the job). So if I could also get data on the wages for each occupation, I could estimate the number implied by Schelling’s analysis, without asking anyone to play Russian roulette. I searched but could not find any source of occupational mortality rates.

My father, Alan, came to the rescue. Alan was an actuary, one of those mathematical types who figure out how to manage risks for insurance companies. I asked him if he might be able to lay his hands on data on occupational mortality. I soon received a thin, red, hardbound copy of a book published by the Society of Actuaries that listed the very data I needed. By matching occupational mortality rates to readily available data on wages by occupation, I could estimate how much people had to be paid to be willing to accept a higher risk of dying on the job.
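The statistical idea here is a compensating wage differential: regress wages on occupational death rates and read the implied value of a life off the slope. Below is a minimal sketch with invented numbers (a real study would use the actuarial data and many control variables):

```python
import numpy as np

# Invented illustrative data: annual on-the-job fatality risk and annual
# wage for five hypothetical occupations, ordered from safe to risky.
risk = np.array([0.0001, 0.0003, 0.0005, 0.0010, 0.0020])  # deaths per worker per year
wage = np.array([40_000, 41_500, 42_800, 46_100, 53_200])  # dollars per year

# Fit wage = a + b * risk. The slope b is the extra annual pay demanded
# per unit of death risk, i.e., the implied value of a statistical life.
b, a = np.polyfit(risk, wage, 1)
print(f"Implied value of a statistical life: ${b:,.0f}")  # roughly $7 million here
```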

Getting the idea and the data was a good start, but doing the statistical exercise correctly was key. I needed to find an advisor in the economics department whom I could interest in supervising my thesis. The obvious choice was the up-and-coming labor economist mentioned earlier, Sherwin Rosen. We had not worked together before, but my thesis topic was related to some theoretical work he was doing, so he agreed to become my advisor.

We went on to coauthor a paper based on my thesis entitled, naturally, “The Value of Saving a Life.” Updated versions of the number we estimated back then are still used in government cost-benefit analyses. The current estimate is roughly $7 million per life saved.

While at work on my thesis, I thought it might be interesting to ask people some hypothetical questions as another way to elicit their preferences regarding trade-offs between money and the risk of dying. To write these questions, I first had to decide which of two ways to ask the question: either in terms of “willingness to pay” or “willingness to accept.” The first asks how much you would pay to reduce your probability of dying next year by some amount, say by one chance in a thousand. The second asks how much cash you would demand to increase the risk of dying by the same amount. To put these numbers in some context, a fifty-year-old resident of the United States faces a roughly 4-in-1,000 risk of dying each year.

Here is a typical question I posed in a classroom setting. Students answered both versions of the question.

A. Suppose by attending this lecture you have exposed yourself to a rare fatal disease. If you contract the disease you will die a quick and painless death sometime next week. The chance you will get the disease is 1 in 1,000. We have a single dose of an antidote for this disease that we will sell to the highest bidder. If you take this antidote the risk of dying from the disease goes to zero. What is the most you would be willing to pay for this antidote? (If you are short on cash we will lend you the money to pay for the antidote at a zero rate of interest with thirty years to pay it back.)

B. Researchers at the university hospital are doing some research on that same rare disease. They need volunteers who would be willing to simply walk into a room for five minutes and expose themselves to the same 1 in 1,000 risk of getting the disease and dying a quick and painless death in the next week. No antidote will be available. What is the least amount of money you would demand to participate in this research study?

Economic theory has a strong prediction about how people should answer the two different versions of these questions. The answers should be nearly equal. For a fifty-year-old answering the questions, the trade-off between money and risk of death should not be very different for a move from a risk of 5 in 1,000 (.005) down to .004 (as in the first version of the question) than for a move from a risk of .004 up to .005 (as in the second version). Answers varied widely among respondents, but one clear pattern emerged: the answers to the two questions were not even close to being the same. Typical answers ran along these lines: I would not pay more than $2,000 in version A but would not accept less than $500,000 in version B. In fact, in version B many respondents claimed that they would not participate in the study at any price.

Economic theory is not alone in saying the answers should be identical. Logical consistency demands it. Again consider a fifty-year-old who, before he ran into me, was facing a .004 chance of dying in the next year. Suppose he gives the answers from the previous paragraph: $2,000 for scenario A and $500,000 for scenario B. The first answer implies that the increase from .004 to .005 only makes him worse off by at most $2,000, since he would be unwilling to pay more to avoid the extra risk. But, his second answer said that he would not accept the same increase in risk for less than $500,000. Clearly, the difference between a risk of .004 and .005 cannot be at most $2,000 and at least $500,000!
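Scaled up, the two answers also imply wildly different values for a whole statistical life, which makes the inconsistency even easier to see. A minimal sketch of the arithmetic, using the typical answers quoted above:

```python
# Each answer prices the same .001 change in the chance of dying
# (figures from the typical answers quoted above).
delta_risk = 0.001   # 1 in 1,000

wtp = 2_000      # most he would pay to remove the risk (version A)
wta = 500_000    # least he would accept to bear the risk (version B)

# Dividing by the risk change gives the implied value of a statistical life.
print(f"Version A implies a life is worth at most  ${wtp / delta_risk:,.0f}")   # $2,000,000
print(f"Version B implies a life is worth at least ${wta / delta_risk:,.0f}")   # $500,000,000
```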

This truth is not apparent to everyone. In fact, even when explained, many people resist, as you may be doing right now. But the logic is inescapable.* To an economist, these findings were somewhere between puzzling and preposterous. I showed them to Sherwin and he told me to stop wasting my time and get back to work on my thesis. But I was hooked. What was going on here? Sure, the putting-your-life-at-risk scenario is unusual, but once I began to look for examples, I found them everywhere.

One case came from Richard Rosett, the chairman of the economics department and a longtime wine collector. He told me that he had bottles in his cellar that he had purchased long ago for $10 that were now worth over $100. In fact, a local wine merchant named Woody was willing to buy some of Rosett’s older bottles at current prices. Rosett said he occasionally drank one of those bottles on a special occasion, but would never dream of paying $100 to acquire one. He also did not sell any of his bottles to Woody. This is illogical. If he is willing to drink a bottle that he could sell for $100, then drinking it has to be worth more than $100. But then, why wouldn’t he also be willing to buy such a bottle? In fact, why did he refuse to buy any bottle that cost anything close to $100? As an economist, Rosett knew such behavior was not rational, but he couldn’t help himself.*

These examples all involve what economists call “opportunity costs.” The opportunity cost of some activity is what you give up by doing it. If I go for a hike today instead of staying home to watch football, then the opportunity cost of going on the hike is the forgone pleasure of watching the game. For the $100 bottle of wine, the opportunity cost of drinking the bottle is what Woody was willing to pay Rosett for it. Whether Rosett drank his own bottle or bought one, the opportunity cost of drinking it remains the same. But as Rosett’s behavior illustrated, even economists have trouble equating opportunity costs with out-of-pocket costs. Giving up the opportunity to sell something does not hurt as much as taking the money out of your wallet to pay for it. Opportunity costs are vague and abstract when compared to handing over actual cash.

My friend Tom Russell suggested another interesting case. At the time, credit cards were beginning to come into widespread use, and credit card issuers were in a legal battle with retailers over whether merchants could charge different prices to cash and credit card customers. Since credit card companies charge the retailer a fee for collecting the money, some merchants, particularly gas stations, wanted to charge credit card users a higher price. Of course, the credit card industry hated this practice; they wanted consumers to view the use of the card as free. As the case wound its way through the regulatory process, the credit card lobby hedged its bets and shifted focus to form over substance. They insisted that if a store did charge different prices to cash and credit card customers, the “regular price” would be the higher credit card price, with cash customers offered a “discount.” The alternative would have set the cash price as the regular price with credit card customers required to pay a “surcharge.”

To an Econ these two policies are identical. If the credit card price is $1.03 and the cash price is $1, it should not matter whether you call the three-cent difference a discount or a surcharge. Nevertheless, the credit card industry rightly had a strong preference for the discount. Many years later Kahneman and Tversky would call this distinction “framing,” but marketers already had a gut instinct that framing mattered. Paying a surcharge is out-of-pocket, whereas not receiving a discount is a “mere” opportunity cost.

I called this phenomenon the “endowment effect” because, in economists’ lingo, the stuff you own is part of your endowment, and I had stumbled upon a finding that suggested people valued things that were already part of their endowment more highly than things that could be part of their endowment, that were available but not yet owned.

The endowment effect has a pronounced influence on the behavior of people considering attending special concerts and sporting events. Often the retail price for a given ticket is well below the market price. Someone lucky enough to have grabbed a ticket, either by waiting in line or by being quickest to click on a website, now has a decision to make: go to the event or sell the ticket? In many parts of the world there is now a simple, legal market for tickets on websites such as Stubhub.com, so ticket-holders no longer have to stand outside a venue and hawk the tickets in order to realize the windfall gain they received when they bought a highly valued item.

Few people other than economists think about this decision correctly. A nice illustration of this involves economist Dean Karlan, now of Yale University. Dean’s time in Chicago—he was an MBA student then—coincided with Michael Jordan’s reign as the king of professional basketball. Jordan’s Chicago Bulls won six championships while he was on the team. The year in question, the Bulls were playing the Washington Bullets in the first round of the playoffs. Although the Bulls were heavily favored to win, tickets were in high demand in part because fans knew seats would be even more expensive later in the playoffs.

Dean had a college buddy who worked for the Bullets and gave Dean two tickets. Dean also had a friend, a graduate student in divinity school, who shared the same Bullets connection and had also received a pair of free tickets. Both of them faced the usual financial struggles associated with being a graduate student, although Dean had better long-term financial prospects: MBAs tend to make more money than graduates of divinity school.*

Both Dean and his friend found the decision of whether to sell or attend the game to be an easy one. The divinity school student invited someone to go to the game with him and enjoyed himself. Dean, meanwhile, got busy scoping out which basketball-loving professors also had lucrative consulting practices. He sold his tickets for several hundred dollars each. Both Dean and his friend thought the other’s behavior was nuts. Dean did not understand how his friend could possibly think he could afford to go to the game. His friend could not understand why Dean didn’t realize the tickets were free.

That is the endowment effect. I knew it was real, but I had no idea what to do with it.