Sustainable decision making in climate change adaptation

1. Decision making under climate change

Climate change confronts modern society with the challenge of prioritising climate actions to mitigate its impacts. This is not a new challenge: humanity has long faced the problem of adapting to climate variability and change.

For instance, the management of water supply and drainage systems has always been at the core of urban development and often requires taking strategic decisions in the presence of uncertainty. To provide an example, the Ancient Egyptians knew very well how crucial the spring floods of the Nile River were to the summer harvest. They used the spring season water level of the Nile, measured with the Nilometer (Figure 1), to determine the amount of tax to charge the farmers that year. The Nilometer is an ancient example of a decision support system.

Figure 1. Measuring shaft of the Nilometer on Rhoda Island, Cairo. By Baldiri - Own work, CC BY-SA 3.0,

Decision making has always been at the core of human development. Everyday life involves decisions that may be affected by different levels and forms of uncertainty, and taking decisions in the presence of uncertainty is a familiar human challenge. In the past, heuristic approaches based on routine thinking were mainly used. Heuristic methods are generally preferred for being quick and flexible, but they are more likely to involve fallacies or inaccuracies. Therefore, attempts to set the basis for a rigorous and objective approach were made already in the distant past.

Decision theory has its roots in probability theory. Already in the 17th century, scientists made use of the idea of expected value: when faced with a number of actions, each of which could give rise to more than one possible outcome, the rational procedure is to identify all possible outcomes of each action, determine their benefit in assigned units (for instance, economic gain) and the related probability, multiply the two to obtain the expected value of each action, and then choose the action leading to the highest expected value. Decision theory evolved significantly during the 20th century, when theoretical bases were laid out to support complex strategies.
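The expected-value rule described above can be sketched in a few lines of code. The two actions, their outcomes and probabilities below are purely illustrative assumptions, not taken from the text:

```python
# A minimal numeric sketch of the expected-value rule: for each action,
# multiply every possible gain by its probability, sum, and pick the
# action with the highest expected value. Figures are illustrative.
actions = {
    "build reservoir": [(120_000, 0.6), (-40_000, 0.4)],  # (gain in EUR, probability)
    "do nothing":      [(10_000, 0.9), (-90_000, 0.1)],
}

def expected_value(outcomes):
    """Sum of gain times probability over all possible outcomes."""
    return sum(gain * p for gain, p in outcomes)

# The rational choice is the action with the highest expected value.
best = max(actions, key=lambda a: expected_value(actions[a]))
```

With these numbers, "build reservoir" has the higher expected value, even though it also carries the possibility of a loss.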

In the context of sustainable water resources management, taking a proper decision is a challenging task: ensuring sustainability requires investing resources to pursue a long-term vision, which usually implies that short-term goals are given lower priority. Moreover, the possible presence of deep uncertainty makes decision making critical with respect to a wide range of ecological and human challenges related to the management of water resources. This is why decision theory has been widely used in water resources management since the second half of the 20th century, when the availability of automatic computation allowed the rigorous evaluation of the outcomes of multiple choices.

The first decision support systems in water resources management made use of linear programming, which was introduced in 1939 by the Soviet economist Leonid Kantorovich. During the 1980s and 1990s, scientists working in water resources made significant contributions to mathematical optimization. Robust optimization was developed during the last three decades of the 20th century and is a very active field of research today. Indeed, during the last few decades decision support and analysis tools have emerged as useful techniques for reducing uncertainty, increasing transparency and improving the effectiveness of complex water management decisions.

Decision theory is the study of the choices that can be made by individuals, groups or institutions. In water resources management we are interested in the so-called normative decision theory, which determines the best decisions given constraints and assumptions, in the presence of uncertainty. It brings together psychology, statistics, philosophy and mathematics to analyze the available information, constraints and benefits within a transparent and cooperative approach. There are several methods that can be used in normative decision theory under uncertainty. Recent years have seen increasing interest in tools and methods for the identification of robust, rather than optimal, decisions. These methods seek a robust strategy, namely, a strategy that performs well, compared to the alternatives, over a wide range of plausible futures (Rosenhead et al., 1972; Lempert et al., 2003). A decision has high robustness if it meets the performance requirements despite the occurrence of a large set of unanticipated contingencies. Conversely, a decision has low robustness if even small gaps in our knowledge can undermine the achievement of the desired minimal goals.

Robust strategies can be identified with a range of methods, from analytic approaches - such as those discussed below - to qualitative scenario analysis and other heuristic methods. The interest in robust strategies is justified by the increasing awareness of the limited reliability or unavailability of predictions, the possible occurrence of unanticipated events, and the need to engage stakeholders with significantly different expectations about the future in the decision making process.

When possible, it is suggested that the criteria and constraints be agreed upon by the stakeholders before the alternative decisions are known. In fact, preliminary knowledge of the possible final outcomes may introduce a subjective bias into the definition of the criteria. However, in water resources management the alternative decisions are often known beforehand, and therefore the above strategy cannot always be adopted.

Given the availability of several different methods, it is interesting to compare them. Such comparisons are not straightforward, as different methods generally use different measures to quantify robustness, different descriptions of uncertainty (probabilistic or not), and provide different information to decision makers at different stages of the decision process (Hall et al., 2012). A brief review of selected robust decision methods, inspired by the work of Roach (2016), is offered below.

2. Review of selected robust strategies
2.1. Robust Optimisation

Robust Optimisation involves the identification of an optimal solution to a problem such that the underlying assumptions and constraints are always satisfied in the presence of uncertainty. Namely, the identified solution guarantees the optimal functioning of the system even in the worst of the several scenarios that can materialise under uncertainty. It is mostly employed to identify an optimal solution to a single-objective problem. However, it can be adapted to solve multi-objective problems, namely, problems where two or more performance measures (objective functions) need to be optimized without the possibility of combining them.

The most classical example of robust optimization is a problem where a single objective function is to be optimized under uncertain constraints. For instance, future water resources availability may not be known for certain, but may vary within a given range with assigned probabilities. The optimum is therefore to be searched for under different scenarios of water availability.
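As a hedged illustration, the following sketch solves a toy max-min version of such a problem: a water board commits a contract volume while the available volume is only known to lie in a discrete set of scenarios, and the committed volume with the best worst-scenario profit is selected. All names and figures are assumptions for illustration, not part of the text:

```python
# Toy single-objective robust optimisation in the max-min sense:
# choose the committed volume d whose WORST-scenario profit is best.
PRICE, PENALTY = 2.0, 5.0          # EUR per unit sold / per unit shortfall
scenarios = [60.0, 80.0, 100.0]    # plausible available volumes (no probabilities)

def profit(d, available):
    """Revenue on delivered water minus penalty on any shortfall."""
    delivered = min(d, available)
    shortfall = max(0.0, d - available)
    return PRICE * delivered - PENALTY * shortfall

def worst_case(d):
    """Objective under uncertainty: the minimum profit over all scenarios."""
    return min(profit(d, a) for a in scenarios)

candidates = [i / 10 for i in range(0, 1501)]   # commitments 0.0 .. 150.0
d_robust = max(candidates, key=worst_case)       # robust optimum
```

With these figures the robust commitment equals the driest scenario (60 units): committing more raises revenue in wet futures, but is penalised in the worst one.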

2.2. Robust Decision Making

Robust Decision Making (RDM) is a robust strategy that characterizes uncertainty with multiple views of the future, given the available knowledge of uncertainty. These multiple views are created by considering their possibility, without necessarily taking into account their probability. Robustness with respect to those multiple views, rather than optimality, is then adopted as the criterion to assess alternative policies. Several different approaches can be followed to seek robustness. These may include, for instance, trading a small amount of optimal performance for less sensitivity to broken assumptions, or performance comparison over a wide range of plausible scenarios. Finally, a vulnerability-and-response-option analysis framework is used to identify robust strategies that minimize the regret that may occur over the different future scenarios. This structuring of the decision problem is a key feature of RDM, which has been used in several climate adaptation studies (see, for instance, Bhave et al., 2016 and Daron, 2015).

The main steps of RDM are briefly summarized here below.

  • Step 1: identification of future scenarios, system models and metrics to evaluate success. The first step of RDM is carried out jointly by stakeholders, planners, and decision makers. They sit together to identify possible future scenarios, without regard to their probability at this stage. Uniform sampling may therefore be used, rather than sampling from a prior distribution, in order to make sure that all possibilities are explored. Metrics to describe how well future goals would be met are also agreed upon. Metrics can be, for instance, water demand, water supplied, or unmet demand. Metrics can also include indices such as reliability (e.g. the percentage of years in which the system does not fail). Environmental and/or financial metrics can also be considered, such as minimum in-stream flows and costs of service provision. Furthermore, in this step candidate strategies for reaching the goals are identified, such as investments or programs. The participants also agree on the models that will be used to determine the future performance of the system.
  • Step 2: evaluation of system performance. In this step, termed the "experimental design", the performance of the alternative strategies is evaluated with respect to the possible future scenarios, by estimating the related metrics of success. This step is typically analytical.
  • Step 3: vulnerability assessment. Stakeholders and decision makers work together to analyse the results from Step 2 and identify the vulnerabilities associated with each strategy. The simulation results from Step 2 are first evaluated to determine in which futures the management strategy or strategies do not meet the management targets. Next, a "scenario discovery" process leads stakeholders and decision makers to jointly define a small number of scenarios to which the strategy is vulnerable. The information about vulnerability can help define new management options that make strategies more robust to those vulnerabilities. Furthermore, trade-offs among different strategies are identified. The vulnerability analysis helps decision makers recognize those combinations of uncertainties that require their attention and those that can instead be ignored. Visual inspection or more sophisticated statistical analyses can be used, depending on the problem and the audience.
  • Step 4: adaptation options to address vulnerabilities. The information on the system's vulnerability can then be used to identify the most robust adaptation option. Here, an identification model is introduced, which may consider different alternative options.
    An example is given by the so-called "min-max" or worst-case analysis (Marchau et al., 2019). This method does not need knowledge of the probabilities associated with uncertainty (while risk analysis does). Min-max identifies several plausible models (or model parameters) without assigning any probability to them. Then, the solution leading to the worst possible outcome is identified and an appropriate decision is taken to minimise the maximally bad outcome. In doing so, min-max may lead to a decision that is unnecessarily costly. Moreover, it may be irresponsible to make a decision by focusing only on one possible outcome, which may be very unlikely. In fact, the worst case is very rare and therefore poorly known.
  • Moreover, suggestions for improving the considered options can also be gained from Step 3. For instance, adaptive strategies can be considered that evolve over time depending on the observed conditions. Interactive visualizations may be used to help decision makers and stakeholders understand the trade-offs in terms of how alternative strategies perform in reducing vulnerabilities. This information is often paired with additional information about the costs and other implications of the strategies.
  • Step 5: risk management. At this stage decision makers and stakeholders can bring in their assumptions regarding the likelihood of the future scenarios and the related vulnerable conditions. For example, if the vulnerable conditions are deemed very unlikely, then the reduction of the corresponding vulnerabilities may not be worth the cost or effort. Conversely, if the vulnerable conditions are viewed as plausible or very likely, this provides support for a strategy designed to reduce these vulnerabilities. Based on this trade-off analysis, decision makers may finally decide on a robust strategy.
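The worst-case ("min-max") comparison mentioned in Step 4 can be sketched over a Step 2-style performance table. The strategies, scenarios and cost figures below are illustrative assumptions, not taken from the text:

```python
# Min-max comparison: for each strategy, find its worst outcome across
# all plausible futures, then prefer the strategy whose worst case is
# the least bad. No probabilities are assigned to the scenarios.
# costs[strategy][scenario]: e.g. unmet demand in Mm3 per year.
costs = {
    "expand reservoir":  {"wet": 5,  "average": 8,  "dry": 30},
    "demand management": {"wet": 12, "average": 14, "dry": 18},
    "do nothing":        {"wet": 2,  "average": 20, "dry": 60},
}

def max_cost(strategy):
    """Worst outcome of a strategy across all plausible futures."""
    return max(costs[strategy].values())

robust_choice = min(costs, key=max_cost)
```

Here "demand management" is never the best performer in any single future, yet it is the min-max choice because its worst case is the mildest; this is exactly the trade-off, and the potential over-conservatism, discussed above.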

RDM characterizes uncertainty in the context of a particular decision. That is, the method identifies those combinations of uncertainties most important to the choice among alternative options and describes the set of beliefs about the uncertain state of the world that are consistent with choosing one option over another. This ordering provides cognitive benefits in decision support applications, allowing stakeholders to understand the key assumptions underlying alternative options before committing themselves to believing those assumptions.

RDM reverses the order of traditional decision analysis by conducting an iterative process based on a vulnerability-and-response-option framework rather than a predict-then-act decision framework, i.e., adaptation based on a single projected future. This is known as a bottom-up analysis and differs from the top-down method that is also widely utilised in decision making (Blöschl et al., 2013).

2.3. Information-Gap decision theory

Information-Gap (Info-Gap) decision theory was proposed by Ben-Haim (2001) to assist decision making when there are severe knowledge gaps and when models of uncertainty are unreliable, inappropriate, or not quantifiable in statistical terms. An info-gap is a mismatch between what is known and what needs to be known to make a good decision. The Info-Gap model helps users who need to take a decision assess how wrong the available information can be without compromising the quality of the outcome. It evaluates the robustness of an intervention strategy as the maximum range of uncertainty in the information (input data, model parameters) that can be accepted while maintaining specified performance requirements. In several applications, a policy that is not sensitive to information errors or gaps may be preferred over a vulnerable policy. A possible way to reach this target is to optimise robustness to failure under uncertainty (Ben-Haim, 2001). In water resources management it is typically applied by identifying the strategy that will satisfy the minimum performance requirements (performing adequately rather than optimally) over a wide range of potential scenarios, even under future conditions that deviate from the best estimate. However, uncertainty may also be an opportunity, in that it may also lead to unexpected positive outcomes. The idea of Info-Gap is to take any opportunity to profit from the positive outcomes of uncertainty, while seeking robustness against unacceptable errors. The unexpected benefit is called a "windfall". Thus, Info-Gap allows users to evaluate under which uncertain scenario an unexpected windfall may occur.

Info-Gap has many similarities with RDM and in particular with "min-max", which however does not consider opportuneness. In fact, Info-Gap recognises that uncertainty implies not only possibly worse outcomes but also opportunities. In the context of climate change, the uncertainty associated with a prediction of a possibly bad future also implies that the future may not be so bad, so that we may eventually obtain an unexpectedly good benefit, that is, an unexpected windfall. For instance, a prediction may depict a drier future and therefore less water available for irrigation, but the presence of uncertainty may make a wetter future with better food production possible. Info-Gap thus leads to identifying all possible futures, evaluating the minimum requirement - and therefore the maximum tolerable uncertainty - for a sustainable future while, at the same time, quantifying the minimum uncertainty needed for obtaining a windfall.

Info-Gap quantifies uncertainty with a sequence of expanding nested ranges defined on the space of an assigned decision-relevant vector. The latter can be, for instance, the parameter vector of a prediction model (which reduces to a single scalar value if the model has one parameter only). A larger range of possible parameter values indicates increased uncertainty. Robustness is defined as the maximum uncertainty, represented by a given value of the parameter range, at which a strategy achieves a certain level of performance. The method evaluates alternative strategies with a reward function that measures the desirability of each strategy to the decision maker. Info-Gap identifies the worst tolerable outcome, but also explores several other outcomes, including the unexpectedly good ones.

There are three essential components of info-gap:

  • The model, which reflects our understanding of the system. In engineering the model is typically a mathematical relationship that allows us to estimate a performance;
  • The performance requirements, namely, the minimum benefit (which can also be expressed as a maximum cost) associated with a decision, and the windfall that a user would instead desire;
  • The uncertainty model, which allows us to quantify uncertainty in the information. Uncertainty can be expressed quantitatively or qualitatively.

At a given range value, there will be a set of possible rewards given by the minimum and maximum levels of performance. These levels are used to define two criteria:

  • Robustness function, the maximum range that can be tolerated while ensuring the performance requirements for each decision strategy. It can be evaluated with quantitative or qualitative methods.
  • Opportuneness function, the minimum range of uncertainty required to enable an assigned maximum reward ("windfall") for each decision strategy.

Quoting from Hall et al. (2012):

The robustness function expresses immunity against failure so “bigger is better.” Conversely, when considering the opportunity function, “big is bad.” The different behaviors of these functions illustrate the potential pernicious and propitious consequences of uncertainty.

In fact, to seek robustness we would like to estimate the highest level of uncertainty we can tolerate. To seek the windfall, we would instead be happy to reach the target with a low level of uncertainty.

Info-Gap presents to users (decision makers) robustness and opportuneness curves for each strategy using the same uncertainty model. The independent variables on the graphs are the minimum and maximum reward. The robustness curve describes the maximum level of uncertainty that can be tolerated for a given "critical" (minimum) outcome. The opportuneness curve describes the minimum level of uncertainty that is necessary to make a given "windfall" (maximum) outcome possible. Users may then decide to minimise the worst-case outcome or maximize the best-case windfall, or they may seek a strategy that provides some desirable trade-off between robustness and opportuneness. Info-Gap does not identify a unique best strategy but gives users the opportunity to assess extreme outcomes and their interaction.

To better illustrate Info-Gap, we refer to a simple example.

2.3.1. A quantitative example of application of Info-Gap
Let us denote with \(x\) the future - uncertain - water volume, in millions of cubic metres, that is annually managed by an irrigation board. We know an estimated value \(\tilde x\) of the water volume, as well as an estimated error \(s\), but the most we can confidently say is that the true future water volume \(x\) may deviate from the estimate by \(\pm s\) or more, although \(x\) must be positive. We do not know a worst-case or maximum error, and we have no probabilistic information about \(x\).
The maximum error, being unknown, may be larger than the estimated error \(s\), which is only a first guess. Indeed, in most cases the actual error will be greater than \(s\), since \(s\) is simply a rough estimate. In the case of climate models, \(s\) may be given, for instance, by the ensemble spread, which in most cases underestimates uncertainty.
We assume that the benefit \(B\) that can be attained by distributing the water volume \(x\) is given by the following relationship (see Figure 2):
\(B(x)=B_0+(B_1-B_0)(1-e^{-Kx})\) (1)

where \(B_0\) and \(B_1\) are the minimum and maximum attainable benefits in millions of euros, respectively, and \(K\) is a positive constant with the dimensions of the inverse of \(x\). Let us denote with the symbol \(\tilde B(\tilde x)\) the benefit corresponding to the best estimate of the water volume.

Figure 2. Benefit function for the example of the irrigation water board

A possible Info-Gap model for uncertainty may be given by the relationship

\(U(h)=\left\{x : \left\lvert\frac{x-\tilde x}{s}\right\rvert\leq h\right\}\). (2)

with \(x\ge 0\) and \(h\ge 0\). The above Info-Gap uncertainty model is an unbounded family of possible values of the uncertain water volume \(x\). For any non-negative value of \(h\), the range \(U(h)\) is an interval of \(x\) values. Like all Info-Gap uncertainty models, this one has two properties:

  • “Nesting” means that the range \(U(h)\) becomes more inclusive (containing more and more \(x\) values) as \(h\) increases.
  • “Contraction” means that \(U(h)\) contains only the best estimate of \(x\) when \(h=0\). These properties identify \(h\) with its meaning as a “horizon of uncertainty.”
The robustness function associated with the above uncertainty model can be written as
\( \hat h(B_c)=\max \left\{ h : \left( \min\limits_{x \in U(h)} B(x) \right) \ge B_c \right\} .\) (3)

In words, the robustness \(\hat h\) is the maximum level of uncertainty \(h\) up to which all realizations of the uncertain water volume \(x\) in the uncertainty range \(U(h)\) result in a benefit \(B\) no less than the critical value \(B_c\).
When taking a decision in the face of uncertainty, more robustness is better than less. Given two options that are approximately equivalent in other respects, but one more robust than the other, the robust-satisficing decision maker will prefer the more robust option. In short, "bigger is better" when prioritizing decision options in terms of robustness.

The relationship between robustness and the critical performance \(B_c\) is shown in Figure 3. It can be computed by estimating the minimum value of \(x\) associated with a given value of \(h\) from eq. (2) and then computing the associated benefit with eq. (1). Note that we must impose \(x\ge 0\).
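That computation can be sketched numerically. The parameter values below are illustrative assumptions of mine, not taken from the text; since \(B(x)\) in eq. (1) is increasing, the minimum over \(U(h)\) is attained at the lower end of the interval:

```python
import math

# Illustrative parameters (assumptions): benefits in M EUR, volumes in Mm3.
B0, B1, K = 2.0, 10.0, 0.05      # eq. (1) parameters
x_tilde, s = 40.0, 5.0           # best estimate and estimated error

def benefit(x):
    """Eq. (1): B(x) = B0 + (B1 - B0) * (1 - exp(-K x))."""
    return B0 + (B1 - B0) * (1.0 - math.exp(-K * x))

def robustness(B_c, h_max=20.0, steps=20000):
    """Eq. (3): largest h such that the minimum of B over U(h) is >= B_c.
    U(h) = {x >= 0 : |x - x_tilde| <= h s}; B is increasing, so the
    minimum is attained at x = max(0, x_tilde - h s)."""
    h_hat = 0.0
    for i in range(steps + 1):
        h = h_max * i / steps
        if benefit(max(0.0, x_tilde - h * s)) >= B_c:
            h_hat = h       # requirement still satisfied at this h
        else:
            break           # the minimum benefit only decreases with h
    return h_hat
```

For these numbers, robustness(benefit(x_tilde)) is zero (the zeroing property) and robustness decreases as \(B_c\) grows (the trade-off property).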

Note that the robustness function for \(h=0\), i.e., zero uncertainty, gives the benefit \(\tilde B(\tilde x)\) corresponding to the best estimate \(\tilde x\) of the water volume. This property of the robustness function is called "zeroing": when no uncertainty is allowed, the benefit is determined precisely by the certain estimate of the water volume. In other words, robustness can be assured when \(B_c< \tilde B\), while no robustness is available if \(B_c= \tilde B\).

Another property of the robustness function is trade-off. The performance requirement is that the benefit \(B(x)\) be no less than the critical value \(B_c\). This requirement becomes more demanding as \(B_c\) increases; thus the robustness decreases as the performance requirement increases. That is, robustness can be increased only by relaxing the performance requirement. This confirms the intuition that high performance requires minimising uncertainty in order to avoid unpleasant surprises.

Figure 3. Robustness function for the example of the irrigation water board

To determine the opportuneness function, we need to set a performance aspiration, which expresses the desire for an outcome better than \(\tilde B(\tilde x)\). In fact, Info-Gap recognises that uncertainty does not necessarily imply a lower-than-expected outcome. It may also bring an unexpectedly positive outcome, and it is therefore worth inspecting under which conditions a windfall may materialise. For the case of the irrigation board, let us assume that the outcome would be unexpectedly good if a benefit \(B_w\), with \(B_w>\tilde B(\tilde x)>B_c\), is obtained. The question asked by the windfaller is then: what is the lowest horizon of uncertainty at which the windfall is possible (though not necessarily guaranteed)? The answer is given by the opportuneness function:
\( \hat \beta(B_w)=\min \left\{ h : \left( \max\limits_{x \in U(h)} B(x) \right) \ge B_w \right\} .\) (4)

From eq. (4) one sees that the opportuneness \(\hat\beta\) is the minimum horizon of uncertainty \(h\) at which some realization of the uncertain water volume in the set \(U(h)\) gives a benefit at least as large as the windfall \(B_w\).

Opportuneness is useful for identifying options that may be better able to exploit propitious uncertainty. An option whose \(\hat\beta\) value is small is opportune, because a windfall can occur even at low uncertainty. The opportune windfaller prioritizes options according to the smallness of their opportuneness function values. That is, "smaller is better" for opportuneness. For the case of the irrigation board, the opportuneness function is shown in Figure 4.
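Under the same illustrative assumptions used for the robustness sketch (parameter values are mine, not from the text), the opportuneness function of eq. (4) can be sketched as follows; since \(B(x)\) is increasing, the maximum over \(U(h)\) is attained at the upper end of the interval:

```python
import math

# Same illustrative parameters as for the robustness sketch (assumptions).
B0, B1, K = 2.0, 10.0, 0.05      # eq. (1) parameters
x_tilde, s = 40.0, 5.0           # best estimate and estimated error

def benefit(x):
    """Eq. (1): B(x) = B0 + (B1 - B0) * (1 - exp(-K x))."""
    return B0 + (B1 - B0) * (1.0 - math.exp(-K * x))

def opportuneness(B_w, h_max=20.0, steps=20000):
    """Eq. (4): smallest h such that the maximum of B over U(h) is >= B_w.
    The maximum over U(h) is attained at x = x_tilde + h s."""
    for i in range(steps + 1):
        h = h_max * i / steps
        if benefit(x_tilde + h * s) >= B_w:
            return h        # first (smallest) h enabling the windfall
    return float("inf")     # windfall unreachable within h_max
```

Zeroing holds here too: opportuneness(benefit(x_tilde)) is zero, and the required uncertainty grows with the aspired windfall \(B_w\).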

Figure 4. Robustness and opportuneness functions for the example of the irrigation water board

The opportuneness function displays zeroing and trade-off properties like the robustness function, but of course with a different meaning. The zeros of the two functions coincide. For the opportuneness function, however, this means that no favorable windfall surprise is needed to enable the predicted outcome. The slope of the opportuneness function indicates that a greater windfall is possible only at a larger horizon of uncertainty.

2.3.2. Considerations on robustness and opportuneness

The opportuneness function (4) and the robustness function (3), both shown in Figure 4, confirm that robustness is the greatest uncertainty that guarantees the required outcome, while opportuneness is the lowest uncertainty that enables the aspired outcome. Together they provide an interesting picture of the impact of uncertainty and of the trade-off between robustness and opportuneness: a large windfall may be obtained only under large uncertainty, which however may also imply a more critical outcome.

Furthermore, an interesting consideration is that the zeroing property suggests that model predictions alone are not a good basis for design, because those predictions have no robustness against errors in the models. This implication has long been known in engineering, where safety factors were introduced to provide robustness against uncertainty. The lack of robustness is a particular reason for concern when uncertainty cannot be quantified probabilistically: in that case, robustness is especially needed.

Thus, the zeroing property asserts that the predicted outcome is not enough to assess the impact of climate change. An assessment - possibly quantitative - of uncertainty is needed to optimally mitigate the effects of uncertainty and to profit from unexpected opportunities. The slope of the robustness curve reflects the cost of robustness. The robustness function is useful for protecting against uncertainty; the opportuneness function, on the other hand, is useful for exploiting the potential for propitious surprise.

By looking at Figure 4, one sees that a large uncertainty may be needed for getting a windfall, while significant robustness can be gained with a moderate decrease of performance requirements.

2.4. Comparison between Robust decision making and Info-Gap Decision Theory

RDM and Info-Gap Decision Theory (IGDT) are decision making frameworks that seek robustness. Both use simulation models to consider a wide spectrum of plausible futures each with different input parameters to represent uncertainty. Both approaches have been applied to water management. For instance, Groves and Lempert (2007) use RDM to identify vulnerabilities of the California Department of Water Resources’ California Water Plan (CWP). Hipel and Ben-Haim (1999) use IGDT to represent different sources of hydrological uncertainty. IGDT was also used by McCarthy and Lindenmayer (2007) within a water resources – timber production management problem in Australia. Also, the sensitivity of UK flood management decisions to uncertainties in flood inundation models was investigated with IGDT (Hine and Hall, 2010).

In a recent comparison of the two approaches, Hall et al. (2012) found that both tools come to similar conclusions on a climate change problem but provide different information about the performance and vulnerabilities of the analysed decisions. IGDT is described as a tool for comparing the performance of different decisions under a wide range of plausible futures (robustness) and their potential for rewards (windfall) under favourable future conditions. RDM, on the other hand, identifies under which combination of future conditions a particular strategy becomes vulnerable to failure, through "scenario discovery". Identifying different failure conditions provides scenarios with which to test plans and devise new strategies. IGDT, by contrast, provides the facility to simultaneously compare the robustness and opportuneness of multiple strategies, but does not quantify their vulnerabilities.

Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information.

2.5. Decision-Scaling

Decision-scaling (DS) is another bottom-up analysis approach to decision making. It has been introduced in the context of climate change adaptation (Brown et al., 2012). The term "decision scaling" refers to the use of a decision analytic framework to investigate the appropriate downscaling of climate information that is needed to best inform the decision at hand. Here downscaling refers to the identification of the relevant climatic information from the large ensemble of simulations provided by Global Circulation Models (GCMs). DS differs from current methodologies by utilizing the climate information in the latter stages of the process within a decision space to guide preferences among choices.

The analytical heart of DS is a kind of “stress test” used to identify the factors, or combinations of factors, that cause the considered system to fail. Thus, in the first step of the analysis, vulnerabilities are identified. These vulnerabilities can be defined in terms of those external factors and the thresholds at which they become problematic. The purpose is to identify the scenarios that are relevant to the considered decision, which serve as the basis for any necessary scientific investigation.

In the second step of the decision making process, future projections of climate are used to characterise the relative likelihood or plausibility of those conditions occurring. By using climate projections only in the second step of the analysis, the initial findings are not diluted by the uncertainties inherent in the projections. In the third step of the analysis, strategies can be planned to minimise the risks to the system.
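The first-step "stress test" can be sketched as a sweep of climate factors over a grid, flagging the combinations where the system fails; only afterwards are projections brought in to weight them. The response model, the grid and the threshold below are illustrative assumptions:

```python
# DS-style stress test: sweep two climate factors, flag the factor
# combinations where performance falls below the agreed threshold.
def reliability(d_temp, d_precip):
    """Toy response surface: % of years without supply failure, as a
    function of warming (degrees C) and precipitation change (%)."""
    return 95 - 5 * d_temp + d_precip

THRESHOLD = 80   # performance target agreed with stakeholders

vulnerable = [
    (d_temp, d_precip)
    for d_temp in [0, 1, 2, 3, 4]              # warming scenarios, degrees C
    for d_precip in [-20, -10, 0, 10, 20]      # precipitation change, %
    if reliability(d_temp, d_precip) < THRESHOLD
]
# 'vulnerable' is the vulnerability domain: the factor combinations
# to examine once climate projections are brought in (second step).
```

The failing combinations cluster in the hot-and-dry corner of the grid; climate projections are then used only to judge how plausible that corner is.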

The result is a detected ‘vulnerability domain’ of key concerns that the planner or decision maker can utilise to isolate the key climate change projections to strengthen the respective system against, which differs from the bottom-up analysis featured in RDM (see Figure 2). This setup marks DS primarily as a risk assessment tool with limited features developed for overall risk management.

The workflow of DS is compared with the one of RDM in Figure 2, where the workflow of the traditional top-down approach is also depicted.

Figure 2. Top-down decision approach versus DS and RDM bottom-up approaches – adapted from Roach (2016), Brown et al. (2012), Hall et al. (2012) and Lempert and Groves (2010). Images are taken from the following sources: NOAA Geophysical Fluid Dynamics Laboratory (GFDL) [Public domain]; Mike Toews - Own work, CC BY-SA 3.0; James Mason [Public domain]; Dan Perry [CC BY 3.0]; Tommaso.sansone91 - Own work, CC0; Svjo - Own work, CC BY-SA 3.0.

2.6. Multi-Criteria Decision Analysis

Multi-Criteria Decision Analysis (MCDA) is a mathematical optimization procedure involving more than one objective function to be optimized simultaneously. It is useful when decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. MCDA solutions are evaluated against the given criteria and assigned scores according to their performance under each criterion. The target may be to produce an overall aggregated score, by weighting the criteria into one criterion or utility function. An alternative is to identify non-dominated solutions (or Pareto efficient solutions).
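The selection of non-dominated solutions can be sketched as follows, assuming hypothetical alternatives scored on three criteria where higher is better for every criterion:

```python
# Hypothetical alternatives, each scored on three criteria in [0, 1].
alternatives = {
    "A": (0.9, 0.2, 0.5),
    "B": (0.6, 0.6, 0.6),
    "C": (0.5, 0.5, 0.5),   # dominated by B: worse or equal on every criterion
    "D": (0.1, 0.9, 0.4),
}

def dominates(x, y):
    """x dominates y if x is at least as good everywhere and better somewhere."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# An alternative is Pareto efficient if no other alternative dominates it.
pareto = [name for name, score in alternatives.items()
          if not any(dominates(other, score)
                     for o_name, other in alternatives.items() if o_name != name)]
print(pareto)  # → ['A', 'B', 'D']
```

Note that the non-dominated set typically contains several alternatives; choosing among them still requires the trade-off judgments (e.g. criterion weights) discussed below.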

As this usually implies a deterministic approach, accounting for multi-objectives is the way to seek robustness rather than accounting for uncertainty (Ranger et al., 2010). As such it is often performed as a preliminary step to isolate candidate individual resource options or to pre-select superior strategies to be further tested on decision making models more suited for “deep” uncertainty. If uncertainty is accounted for, it is usually done so by performing a sensitivity analysis on each criterion to uncertainty (Hyde and Maier, 2006; Hyde et al., 2005) or by placing joint probability distributions over all decision criteria (Dorini et al., 2011).

While it is herein presented as an approach on its own, MCDA can also be used within the previously reviewed methods as a means to assess the outcome of a policy with respect to alternatives and assigned scenarios. When combining the several criteria into one final score, weights need to be assigned to each criterion.

3. Resolving complex decisions - Analytic hierarchy process

Analytic hierarchy process (AHP) is a structured technique for handling complex decisions. It was developed by Thomas L. Saaty in the 1970s and has been extensively studied and refined since then.

The AHP supports decision makers by first decomposing their decision problem into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. Once the hierarchy is structured, the decision makers evaluate its various elements by comparing them to each other two at a time (pairwise comparison). The AHP converts preferences to numerical values that can be processed and compared over the entire range of the problem. A numerical weight or priority is derived for each alternative and element of the hierarchy, allowing diverse and often incommensurable elements to be compared to one another in a rational and consistent way. This capability distinguishes the AHP from other decision making techniques. In the final step of the process, numerical priorities are calculated for each of the decision alternatives. These numbers represent the alternatives' relative ability to achieve the decision goal, so they allow a straightforward consideration of the various courses of action.

AHP can account for uncertainty, for instance by evaluating alternatives with respect to several future scenarios. Therefore, if appropriately applied, it may be considered a robust approach.

The first step in the analytic hierarchy process is to model the problem as a hierarchy. A hierarchy is a stratified system of ranking and organizing people, things, ideas, and so forth, where each element of the system, except for the top one, is subordinate to one or more other elements. Diagrams of hierarchies are often shaped roughly like pyramids, but other than having a single element at the top, there is nothing necessarily pyramid-shaped about a hierarchy.

An AHP hierarchy is a structured means of modeling the decision at hand. It consists of an overall goal, a group of options or alternatives for reaching the goal, and a group of factors or criteria that relate the alternatives to the goal. The criteria can be further broken down into subcriteria, and so on, in as many levels as the problem requires. The design of any AHP hierarchy will depend not only on the nature of the problem at hand, but also on the knowledge, judgments, values, opinions, needs, wants, and so forth, of the participants in the decision-making process. Constructing a hierarchy typically involves significant discussion, research, and discovery by those involved.

Once the hierarchy has been constructed, the participants analyze it through a series of pairwise comparisons that derive numerical scales of measurement for the nodes. The criteria are pairwise compared against the goal for importance. The alternatives are pairwise compared against each of the criteria for preference. The comparisons are processed mathematically, and priorities are derived for each node.

Figure 3 reports an example of application of AHP to a water resources management problem. In this case the decision is taken according to 4 criteria:

  • Net benefit N;
  • Environmental impact E;
  • Impact on river flow regime R;
  • CO2 emissions C.

Net benefit is evaluated along the lifetime of the alternative, by assessing the cost of the intervention and the benefit gained through, for instance, increased crop productivity, hydropower production and so forth. Environmental impact needs to be evaluated through a proper index, as does the impact on the flow regime. CO2 emissions can be quantitatively evaluated as those due to construction works, use of electricity and so forth. Measures of each criterion need to be rescaled to the same range of variability, to allow values to be combined.

It is interesting to observe that the above criteria are not rigorously independent as the environmental impact is related to the impact on the river flow regime. Introducing dependent indicators implies that a larger weight will be implicitly assigned in the decision process to the common driver of those indicators (degradation of the environment in this case). Such a situation may lead to reducing the transparency of the decision.

Figure 3. Example of a decision articulated according to the analytic hierarchy process – Adapted from Lou Sander - Own work, Public Domain. Images are taken from the following sources: Public Domain; Nigel Cox / Grand Union Canal (Wendover Arm) / CC BY-SA 2.0.

Assessment and rescaling of criteria can be carried out through utility functions assigning a real number in the range [0, 1] to each alternative, in such a way that alternative a is assigned a utility greater than alternative b if, and only if, the individual prefers alternative a to alternative b. While assigning utilities the following rules need to be followed:

  • Utility 0 is assigned to the minimum of each criterion. For instance, for the case of the net benefit one may assign utility 0 to the alternative that leads to the minimum benefit, or utility 0 can be assigned to the null benefit, depending on the outcome of the stakeholder discussion;
  • Utility 1 is assigned to the maximum of each criterion;
  • Increasing utility is assigned to criterion values corresponding to increasing convenience, as quantified by the related indicator.

Figure 4 reports an example of a linear utility function. In general, the utility function is non-linear.

Figure 4. Example of a utility function. By Jüri Eintalu, CC BY-SA.
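The rescaling rules above can be sketched with a linear utility function; the benefit values below are hypothetical, and linearity is itself an assumption (as noted, utility functions are non-linear in general):

```python
# A minimal sketch of a linear utility function rescaling a criterion
# to [0, 1]: utility 0 at the worst level, utility 1 at the best.
def linear_utility(value, worst, best):
    return (value - worst) / (best - worst)

# Hypothetical net benefits (arbitrary units) of three alternatives:
benefits = [20.0, 50.0, 80.0]
worst, best = min(benefits), max(benefits)
utilities = [linear_utility(b, worst, best) for b in benefits]
print(utilities)  # → [0.0, 0.5, 1.0]
```

For a criterion where smaller values are preferable (e.g. CO2 emissions), `worst` and `best` are simply swapped, so that utility still increases with convenience.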

Finally, the overall score of each alternative is computed by averaging the scores corresponding to each criterion by using the weights W(1), W(2), W(3) and W(4). Those can be computed through pairwise comparison.

When there are N criteria (four in the above case) the decision makers need to make N(N − 1)/2 pairwise comparisons among them (six in the above case). In the above case we need to compare: (1) net benefit versus environmental impact, net benefit versus impact on the river flow regime, and net benefit versus reduction of CO2 emissions. Then, we need to compare (2) environmental impact versus impact on the river flow regime and environmental impact versus reduction of CO2 emissions. Finally, we have to compare (3) impact on the river flow regime versus reduction of CO2 emissions. For each comparison, one needs to judge the preference of one criterion over the other. The scale given in Figure 5 can be used to quantify preference.

Figure 5. Preference scale used in pairwise comparison. By Lou Sander - Own work, Public Domain.

The next step is to transfer the measures of preference to a matrix. For each pairwise comparison, the number representing the preference is positioned into the matrix in the corresponding position; the reciprocal of that number is put into the matrix in its symmetric position. For instance, for the above example the matrix resulting from pairwise comparison of the four criteria may be:


Table 1. Pairwise comparison matrix for the criteria net benefit (N), environmental impact (E), impact on the river flow regime (R) and CO2 emissions (C).
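The construction of such a reciprocal matrix can be sketched as follows. The preference values are hypothetical judgments on the 1-9 scale of Figure 5, not the entries of Table 1:

```python
import numpy as np

criteria = ["N", "E", "R", "C"]
judgments = {            # (row, column): how strongly the row criterion is
    ("N", "E"): 3,       # preferred over the column criterion (hypothetical)
    ("N", "R"): 5,
    ("N", "C"): 7,
    ("E", "R"): 2,
    ("E", "C"): 5,
    ("R", "C"): 3,
}

n = len(criteria)
A = np.ones((n, n))                      # diagonal stays 1 (self-comparison)
idx = {c: i for i, c in enumerate(criteria)}
for (row, col), value in judgments.items():
    A[idx[row], idx[col]] = value        # stated preference
    A[idx[col], idx[row]] = 1.0 / value  # reciprocal in the symmetric position

print(A)
```

Note that only the N(N − 1)/2 judgments above the diagonal need to be elicited; the rest of the matrix follows from the reciprocal rule.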

By processing the above matrix mathematically, weights for the compared criteria can be derived. Mathematically speaking, the weights are the values in the matrix's principal right eigenvector, rescaled to give a sum of 1. They can be easily computed by using R, for instance.
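The same computation can be sketched in Python with NumPy; the 3 × 3 comparison matrix below is a hypothetical example, not the matrix of Table 1:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (reciprocal, near-consistent).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)          # index of the largest eigenvalue
w = np.abs(eigenvectors[:, k].real)      # principal right eigenvector
weights = w / w.sum()                    # rescale so the weights sum to 1

print(weights.round(3))
```

Since `np.linalg.eig` returns eigenvectors with an arbitrary sign and, in general, complex entries, the real part is taken and the sign removed before normalising.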

It is important to check that the decision is consistent, which implies that preferences expressed in each pairwise comparison are not contradicted by subsequent comparisons. For instance, a consistent matrix implies that:

  • if the decision maker says alternative 1 is equally important to alternative 2 (so the comparison matrix contains the unit value in the related pairwise comparisons), and
  • alternative 2 is absolutely more important than alternative 3, then
  • alternative 1 should also be absolutely more important than alternative 3.

Unfortunately, the decision maker is often not able to express consistent preferences when several alternatives are involved. A formal test of consistency is therefore required.

In the ideal case of a fully consistent matrix, its maximum eigenvalue λmax is equal to the dimension N of the matrix itself (4 in the above case). If the matrix is not fully consistent, a consistency index CI can be computed as:

CI = (λmax - N)/(N − 1)

Then, a consistency ratio CR can be computed as the ratio of the CI for the considered matrix to a random consistency index RI, which corresponds to the consistency of a randomly generated pairwise comparison matrix:

CR = CI/RI
Suggested values for RI are given in Table 2.

N	1	2	3	4	5	6	7	8	9	10
RI	0.00	0.00	0.58	0.90	1.12	1.24	1.32	1.41	1.45	1.49

Table 2. RI values for different sizes N of the matrix (Saaty's random indices).

If CR ≤ 0.1, the pairwise comparison matrix is considered consistent enough. If CR > 0.1, the comparison matrix should be improved.
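The full consistency check can be sketched as follows, using Saaty's standard RI values; the comparison matrix is again a hypothetical 3 × 3 example:

```python
import numpy as np

# Saaty's standard random indices RI, keyed by matrix dimension N.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI/RI, with CI = (lambda_max - N)/(N - 1). Assumes N >= 3."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)   # maximum eigenvalue
    CI = (lam_max - n) / (n - 1)
    return CI / RI[n]

# Hypothetical, near-consistent pairwise comparison matrix.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
CR = consistency_ratio(A)
print(f"CR = {CR:.3f}, consistent enough: {CR <= 0.1}")
```

For a positive reciprocal matrix λmax ≥ N always holds, so CR is non-negative; the closer it is to zero, the more consistent the stated preferences.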

The same procedure needs to be repeated for the other criteria. Finally, a pairwise comparison needs to be carried out in order to assign the weights W to the criteria. In this case the matrix will have dimension N=4.

An example of application, to a different but conceptually similar problem, is given here. Another example of application is given by this paper (in Italian).

4. References

Ben-Haim, Y. (2001). Info-gap value of information in model updating. Mechanical Systems and Signal Processing, 15(3), 457-474.
Blöschl, G., Viglione, A., & Montanari, A. (2013). Emerging approaches to hydrological risk management in a changing world. In: Climate Vulnerability, 3-10,
Bhave, A. G., Conway, D., Dessai, S., & Stainforth, D. A. (2016). Barriers and opportunities for robust decision making approaches to support climate change adaptation in the developing world. Climate Risk Management, 14, 1-10.
Brown, C., Ghile, Y., Laverty, M., & Li, K. (2012). Decision scaling: Linking bottom‐up vulnerability analysis with climate projections in the water sector. Water Resources Research, 48(9).
Daron, J. (2015). Challenges in using a Robust Decision Making approach to guide climate change adaptation in South Africa. Climatic Change, 132(3), 459-473.
Dorini, G., Kapelan, Z., & Azapagic, A. (2011). Managing uncertainty in multiple-criteria decision making related to sustainability assessment. Clean Techn. Environ. Policy, 13(1), 133–139.
Groves, D. G., & Lempert, R. J. (2007). A new analytic method for finding policy-relevant scenarios. Global Environmental Change, 17(1), 73-85.
Hall, J. W., Lempert, R. J., Keller, K., Hackbarth, A., Mijere, C., & McInerney, D. J. (2012). Robust climate policies under uncertainty: A comparison of robust decision making and info-gap methods. Risk Anal., 32(10), 1657–1672.
Hine, D., & Hall, J. W. (2010). Information gap analysis of flood model uncertainties and regional frequency analysis. Water Resources Research, 46(1).
Hipel, K. W., & Ben-Haim, Y. (1999). Decision making in an uncertain world: Information-gap modeling in water resources management. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 29(4), 506-517.
Hyde, K. M., & Maier, H. R. (2006). Distance-based and stochastic uncertainty analysis for multi-criteria decision analysis in Excel using Visual Basic for Applications. Environmental Modelling & Software, 21(12), 1695-1710.
Hyde, K. M., Maier, H. R., & Colby, C. B. (2005). A distance-based uncertainty analysis approach to multi-criteria decision analysis for water resource decision making. Journal of Environmental Management, 77(4), 278–290.
Lempert, R. J. (2003). Shaping the next one hundred years: new methods for quantitative, long-term policy analysis. Rand Corporation.
Lempert, R. J., & Groves, D. G. (2010). Identifying and evaluating robust adaptive policy responses to climate change for water management agencies in the American west. Technol. Forecast. Soc., 77(6), 960–974.
Marchau, V. A., Walker, W. E., Bloemen, P. J., & Popper, S. W. (2019). Decision making under deep uncertainty: from theory to practice (p. 405). Springer Nature.
McCarthy, M. A., & Lindenmayer, D. B. (2007). Info-gap decision theory for assessing the management of catchments for timber production and urban water supply. Environmental management, 39(4), 553-562.
Roach, T. P. (2016). Decision Making Methods for Water Resources Management Under Deep Uncertainty. Available on-line at (last visited on May 21, 2019).
Ranger, N., Millner, A., Dietz, S., Fankhauser, S., Lopez, A., & Ruta, G. (2010). Adaptation in the UK: A decision-making process. Grantham Research Institute/CCCEP Policy Brief, London School of Economics and Political Science, London, UK.
Rosenhead, J., Elton, M., & Gupta, S. K. (1972). Robustness and optimality as criteria for strategic decisions. Journal of the Operational Research Society, 23(4), 413-431.


Last modified on March 23, 2023