Why we make bad decisions – the psychology of decision making.

The Psychology of Decision Making.

Imagine that a friend takes a coin from her pocket and offers you the following gamble. If the coin is tossed and lands heads, she will give you £110, but if it lands tails you must give her £100. Would you accept the gamble, or walk away? Curiously, captured in this situation are the key components of all our behavioural decisions, from what to wear in the morning to whether or not to go to war with another country.

The Four Features of Uncertainty.

First, to choose whether or not to accept the gamble, we have to consider what the outcomes might be – the coin will land heads and you win, or tails and you lose. Second, because we never know for sure what is going to happen in the future, deciding always involves uncertainty. And third, to address this uncertainty effectively we need to consider four fundamental things: the available options, the possible outcomes of these options (heads or tails), the payoffs of the options, and the probability of these payoffs (in this case 50/50). People who fail to take proper account of these four features of uncertainty are doomed to a life of sub-optimal decision-making. People who process information about them accurately, quickly, and comprehensively make better decisions.
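To make the gamble concrete, here is a minimal sketch in Python (my own illustration, not from the article) that weighs each payoff by its probability. Treating the raw monetary amounts as the payoffs is an assumption made for simplicity:

```
# Expected value of the coin-toss gamble: win £110 on heads, lose £100 on tails.
payoffs = {"heads": 110, "tails": -100}       # payoff in pounds for each outcome
probabilities = {"heads": 0.5, "tails": 0.5}  # a fair coin

expected_value = sum(probabilities[o] * payoffs[o] for o in payoffs)
print(f"Expected value: £{expected_value:.2f}")  # £5.00: on average the gamble pays
```

On this arithmetic the gamble is worth +£5 on average; whether accepting it is therefore "rational" is exactly the question the next section takes up.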

Expected Utility Theory.
The question of how we should make rational decisions has a long history. Two of the first thinkers to address it were eminent mathematicians Blaise Pascal (1660) and Daniel Bernoulli (1738), and they began work on a set of ideas which eventually became known as Expected Utility Theory (EUT).

Rational Decision Making.

EUT provides an account of how we should make decisions rationally, is one of the key theories in economics, and is taught to countless management students each year. According to this rational model, when deciding what to do we should take each option available to us, attach a utility (or value) to all of the outcomes of the option, weight these values by their associated probabilities, and add up the result. The option with the highest summed value is the one we should choose. The appealing thing about EUT is that it uses simple mathematical principles to ensure that information about options, payoffs, and probabilities is weighed up fully and in a logically defensible manner. The trouble is that over the last 35 years both psychologists and rebel economists have argued convincingly that whilst EUT provides an entirely satisfactory way of describing what we should do, it is an unsatisfactory account of what we actually do.
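That decision rule is easy to state in code. The sketch below (a toy of my own; equating utility with money is an assumption made here for brevity) applies it to the coin-toss gamble from the opening:

```
# Expected Utility Theory as a decision rule: for each option, sum
# utility * probability over its outcomes, then choose the option with the maximum.
options = {
    "accept gamble": [(0.5, 110), (0.5, -100)],  # (probability, utility) pairs
    "walk away":     [(1.0, 0)],                 # the status quo is certain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # "accept gamble": an expected utility of +5 beats the certain 0
```

Bernoulli's own insight, incidentally, was that utility need not be linear in money: with a concave (risk-averse) utility function in place of the raw pounds, the same rule can favour walking away.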

Non-Rational Decision Making.

The first major shot across the bows of the rational decision-making model of Expected Utility Theory came from Herbert Simon. Simon, who won a Nobel Prize for his work in 1978, coined the phrase “bounded rationality” to describe the limited extent to which we make logical decisions in organizations. Instead of maximising and finding the best solution to a problem, we typically satisfice and settle for the solution with which we can make do. However, arguably the most damaging attack on the rational model as a description of how we make decisions has come from two psychologists: Daniel Kahneman and Amos Tversky. With a series of simple but cunning experiments they demonstrated not only that our judgement and decision-making are often fundamentally flawed, but also that this irrationality can be explained. Consider the following problem. A bat and a ball together cost £1.10. The bat costs £1 more than the ball. How much does the ball cost? Most people, at least for a few moments, decide incorrectly that the ball costs 10p. Kahneman and Tversky argue that the reason for this is that we use two systems for judgment and decision making. System 1 is intuitive and fast. It often gives us the right answer, but, as in the case of the bat and ball problem, it can lead to mistakes. System 2 refers to a slower and more deliberate set of thought processes. These are more likely to yield the correct response, but they are also considerably more demanding on our cognitive resources.
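To see why the intuitive answer is wrong, work the sum through: if the ball cost 10p, the bat would have to cost £1.10, and together they would come to £1.20. Writing the ball's price as x gives x + (x + £1) = £1.10, so 2x = 10p and the ball in fact costs 5p.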

Framing Effects.

In their work on System 1 thinking, Kahneman and Tversky identified the framing effect, and the most famous demonstration of this occurred when they gave people the so-called Asian Disease Problem in two versions. Some people were given the first version, which is as follows: Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific consequences of the programs are as follows:

  • If program A is adopted, 200 people will be saved.
  • If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.

Others were given an alternative Asian Disease problem in which the scientific consequences of the programmes were stated as:

  • If program C is adopted, 400 people will die.
  • If program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

Which programme would you choose in the first form of the dilemma, A or B? And which in the second, C or D? Most people confronted with the first version chose programme A, whereas most confronted with the second chose programme D. That is, people tended to be risk averse in the first version, but risk seeking (i.e. prepared to adopt the programme for which the outcome is uncertain) in the second. In fact, the two versions of the problem are formally identical. Given that the disease is due to kill 600 people, saving 200 (program A) is the same as allowing 400 to die (program C). A 1/3 probability that 600 will be saved is the same as a 1/3 probability that no one will die, and a 2/3 probability that no people will be saved is the same as a 2/3 probability that 600 will die.
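This equivalence can be checked mechanically. In the sketch below (my own illustration), each programme is written as a distribution over the number of people saved out of the 600 at risk, with the "will die" frames converted into "will be saved" terms:

```
# Each programme as (probability, lives_saved) pairs out of 600 at risk.
A = [(1.0, 200)]                        # "200 people will be saved"
C = [(1.0, 600 - 400)]                  # "400 people will die" = 200 saved
B = [(1/3, 600), (2/3, 0)]              # 1/3 all saved, 2/3 none saved
D = [(1/3, 600 - 0), (2/3, 600 - 600)]  # 1/3 nobody dies, 2/3 all 600 die

def expected_saved(programme):
    return sum(p * saved for p, saved in programme)

print(expected_saved(A), expected_saved(C))  # 200.0 200.0: A and C are identical
print(expected_saved(B), expected_saved(D))  # 200.0 200.0: so are B and D
```

Not only do all four programmes share the same expected outcome; A and C (and likewise B and D) describe exactly the same distribution over lives saved, differing only in wording.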

Because both versions of the Asian Disease Problem are identical from a formal standpoint, the rational model of EUT predicts that people should respond in the same way to both of them. But people don’t respond in the same way, and this suggests that as a theory of how people make decisions, the rational model is wrong.

Cognitive Processes.

To explain our tendency to systematically make bad judgements in some circumstances, Kahneman and Tversky proposed that we often rely on cognitive short-cuts or heuristics. For example, to evaluate the probability or frequency of an event (e.g. how likely am I to be attacked by a shark whilst swimming off the coast of South Africa?), we estimate the ease with which we can recall similar events. This availability heuristic has the advantage of placing only limited demands on our cognitive resources – assessing how easy it would be to recall something from memory is quick and easy. However, it also means that we are prone to overestimate the probability and frequency of events which are particularly memorable – such as shark attacks. Other psychologists have drawn attention to different ways in which irrationality is commonly manifested. How do you rate your driving ability? Much worse than average, a little worse than average, about average, a little better than average, or a lot better than average? In fact, people who consider themselves better-than-average drivers far outnumber those who consider themselves worse than average. Clearly at least some of these people are being unrealistically optimistic about their ability. This unrealistic optimism effect is partnered by two similar ones: the self-serving bias (e.g. most people think they are better looking than the average person) and the illusion of control (we tend to think that we have more control over events than is actually the case). Indeed, some research suggests that those with the most accurate level of optimism, the most accurate views of their own attractiveness, and the most accurate sense of the degree to which they are in control are the acutely depressed.

Group Processes in Decision Making.
Group meetings are a ubiquitous feature of most organizations. Countless hours are spent in meetings communicating information, discussing issues, and making decisions. Many believe that this time is well spent, because group decision-making will generally be better than decision-making by individuals. However, both academic research and human history show that group decision-making is no guarantee of decision quality. A variety of well-established processes, such as informational and normative influence, majority and minority influence, group polarization, groupthink, and organizational politics, can seriously undermine the quality of group decision-making. To take just one example of this, in the consensus meetings (or “wash-up” sessions) typically used in assessment centres, research shows that discussions amongst assessors do not improve, and may even interfere with, decisions about which candidates to select.
