When scientists study decision making, they make an important assumption: the decision maker wants to maximize their (lifetime) utility.

Making a decision means choosing and performing an action from a set of possible alternatives, for example selecting a specific laptop to buy from the set of all laptops available at your location.

## Decisions under certainty

In some cases the utility of our decision depends only on our choice. There are no uncertain external factors that would have an effect on how we enjoy (or regret) the consequences of our choice. This means we make a decision **under certainty**.

Given that we can somehow express the utility of each possible action, the decision process is quite easy: we simply select the option giving us the maximum utility. We could express this rule mathematically as:

a^* = \arg\max_{a_i} u(a_i)

where u(a_i) is the utility of action a_i. One way to calculate the utility of an action is to use one of the multiple-criteria decision-making methods.

Let’s say we want to select the best laptop from a set of 5 alternatives. We calculated the following utility values:

| Laptop | ZenBook | MacBook | Surface | XPS | ThinkPad |
|---|---|---|---|---|---|
| Utility | 0.34 | 0.21 | 0.15 | 0.18 | 0.12 |

Using the rule above, we simply select the alternative with the highest utility, in this case the ZenBook.
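The selection rule is a one-liner in code. The sketch below reuses the utility values from the laptop table (the laptop names and numbers are just the illustrative figures from the example above):

```python
# Decision under certainty: pick the action with maximum utility.
# Utility values taken from the laptop example above.
utilities = {
    "ZenBook": 0.34,
    "MacBook": 0.21,
    "Surface": 0.15,
    "XPS": 0.18,
    "ThinkPad": 0.12,
}

# a* = argmax u(a): the key whose utility value is largest.
best = max(utilities, key=utilities.get)
print(best)  # ZenBook
```

Note that `max` with a `key` function implements the argmax directly, so no sorting is needed.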

## Decisions under risk

Let’s say we have an opportunity to invest in a project. As is typical for investment projects, we have no idea whether this particular project is going to be successful or not. We know that 3 situations can occur: the project will be *highly successful*, *mildly successful* or a *failure*. If the project is highly successful, we will get a 30% return on our investment. In case of a mild success the rate of return will be 10%. If the project fails, we will be able to resell some assets the company bought to implement the project, but we will still incur a 15% loss. The probability of high success is 30%, the probability of mild success is 45% and the probability of failure is 25%. Should we invest in such a project?

As in the example above, in many situations the utility of our decision depends not only on the alternative we choose, but also on some future **state of nature** that is currently unknown and that we can’t predict with certainty. Given that we know the probabilities with which these states will occur, we make decisions **under risk**.

If we are dealing with a finite set of alternatives and states of nature, we can capture the situation using a **decision matrix**. Using the information from our example we get the following matrix:

| | High success | Mild success | Failure |
|---|---|---|---|
| Probability | 30% | 45% | 25% |
| Invest | 30 | 10 | -15 |
| Not invest | 0 | 0 | 0 |

The rows of the decision matrix represent possible courses of action. The columns are the possible future states of nature. Each element of the matrix then expresses the utility we would gain if we chose the given action and the given state of nature occurs. In addition, I augmented this matrix with a row containing the probabilities that a given state of nature will occur.

How do we decide whether to invest or not? The most basic rule is based on a concept called **expected utility**. The expected utility of each alternative is calculated as the weighted average of its utilities over all states of nature, using their probabilities as weights:

E(a) = \sum_{i=1}^{N_S} p_i u(a,s_i)

For our example we get the following values of expected utility:

| Alternative – a | Expected utility – E(a) |
|---|---|
| Invest | 0.3*0.3 + 0.45*0.1 + 0.25*(-0.15) = 0.0975 = 9.75% |
| Not invest | 0 |

The expected utility tells us that “on average” we would make a profit of 9.75% if we invested, and a profit of 0% if we chose not to.
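The expected utility formula translates directly into code. This is a minimal sketch using the probabilities and returns from the decision matrix above (expressed as fractions rather than percentages):

```python
# Decision under risk: expected utility as a probability-weighted average.
probabilities = [0.30, 0.45, 0.25]  # high success, mild success, failure

# Rows of the decision matrix: return for each state of nature.
decision_matrix = {
    "Invest": [0.30, 0.10, -0.15],
    "Not invest": [0.0, 0.0, 0.0],
}

def expected_utility(utilities, probs):
    """E(a) = sum over states of p_i * u(a, s_i)."""
    return sum(p * u for p, u in zip(probs, utilities))

# Pick the alternative with the highest expected utility.
best = max(decision_matrix, key=lambda a: expected_utility(decision_matrix[a], probabilities))
print(best)  # Invest (E = 0.0975, i.e. 9.75%)
```

Combining the expected-utility computation with the argmax rule from the certainty case gives the full decision procedure: compute E(a) for each row, then choose the maximum.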