This was the first time I had heard of the Sure-Thing Principle, and I thought his example was particularly telling. If someone were offered $1 million for sure or $1 billion with a 90% chance, most people would choose the latter. If you calculate the expected payoff, it is obviously the smarter choice (0.9 × $1 billion is an expected $900 million, versus a guaranteed $1 million). However, even though I agree with his statement in the given situation, I do not think it extrapolates to all other scenarios.

For example, imagine a situation where one is in a hospital room and a doctor says, “if I don’t do this surgery, you will have a guaranteed, reasonably functional life for the next 3 years, but if I do it, there is a 90% chance you will be completely back to normal and a 10% chance that you will lose your life in the process.” I think people's decisions here would be much more divided. It would further depend on factors like whether the patient or a loved one was making the decision, how old the patient was, whether the patient had any upcoming events they were looking forward to, etc. What if it were a 5-year guarantee? What if it were a 2-year guarantee? What if the odds were 95%? Or 80%? How trustworthy are the doctor’s analyses? What are their biases?

These types of questions are really interesting to me because of how easily they can unravel assumptions made in economics (it’s okay, I can say that - I am an Economics concentrator). Humans are fundamentally irrational actors, and even though many of these principles hold at the aggregate level, predicting an individual actor’s decision can sometimes be little better than a coin toss.
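To make the expected-payoff comparison concrete, here is a quick, purely illustrative sketch in Python (not from the original comment). The dollar figures are the ones from the example, the losing outcome is assumed to be $0, and the loop echoes the “what if it were 95%? or 80%?” questions above.

```python
def expected_value(win_prob: float, win_payoff: float, lose_payoff: float = 0.0) -> float:
    """Expected payoff of a simple two-outcome gamble."""
    return win_prob * win_payoff + (1 - win_prob) * lose_payoff

guaranteed = 1_000_000        # the sure $1 million
jackpot = 1_000_000_000       # the $1 billion prize (losing assumed to pay $0)

for p in (0.95, 0.90, 0.80):
    ev = expected_value(p, jackpot)
    better = "gamble" if ev > guaranteed else "sure thing"
    print(f"p = {p:.2f}: expected payoff = ${ev:,.0f} -> {better} has the higher expectation")
```

Of course, this only captures the money version of the question; once the stakes are years of life rather than dollars, the tidy arithmetic stops settling the matter, which is exactly the point.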
Yes, these are vexing questions for humans! Check out “Thinking, Fast and Slow” by Kahneman (https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow), or of course Dan Gilbert’s book “Stumbling on Happiness” (https://en.wikipedia.org/wiki/Daniel_Gilbert_(psychologist)) to learn more!