Elementary effects method

Published in 1991 by Max Morris,[1] the elementary effects (EE) method[2] is one of the most widely used[3][4][5][6] screening methods in sensitivity analysis.

EE is applied to identify non-influential inputs of a computationally costly mathematical model, or of a model with a large number of inputs, when the cost of estimating other sensitivity measures such as the variance-based measures is not affordable. Like all screening methods, the EE method provides qualitative sensitivity measures, i.e. measures that allow non-influential inputs to be identified, or that allow the input factors to be ranked in order of importance, but that do not quantify exactly the relative importance of the inputs.

Methodology

To illustrate the EE method, consider a mathematical model with [math]\displaystyle{ k }[/math] input factors. Let [math]\displaystyle{ Y }[/math] be the output of interest (a scalar for simplicity):

[math]\displaystyle{ Y = f(X_1, X_2, ... X_k). }[/math]

The original EE method of Morris [2] provides two sensitivity measures for each input factor:

  • the measure [math]\displaystyle{ \mu }[/math], assessing the overall importance of an input factor on the model output;
  • the measure [math]\displaystyle{ \sigma }[/math], describing non-linear effects and interactions.

These two measures are obtained through a design based on the construction of a series of trajectories in the space of the inputs, where inputs are randomly moved One-At-a-Time (OAT). In this design, each model input is assumed to vary across [math]\displaystyle{ p }[/math] selected levels in the space of the input factors. The region of experimentation [math]\displaystyle{ \Omega }[/math] is thus a [math]\displaystyle{ k }[/math]-dimensional [math]\displaystyle{ p }[/math]-level grid.

Each trajectory is composed of [math]\displaystyle{ (k+1) }[/math] points, since the input factors move one at a time by a step [math]\displaystyle{ \Delta }[/math] in [math]\displaystyle{ \{1/(p-1), 2/(p-1),\ldots, 1-1/(p-1)\} }[/math] while all the others remain fixed.
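As an illustration, a single OAT trajectory on the [math]\displaystyle{ p }[/math]-level grid can be sketched in Python as follows. This is a simplified construction (the function name `morris_trajectory` and the use of NumPy are assumptions here): it only moves each factor upward by [math]\displaystyle{ \Delta }[/math], whereas the full design of Morris also allows downward steps.

```python
import numpy as np

def morris_trajectory(k, p=4, seed=None):
    """Sketch of one OAT trajectory of k+1 points on a k-dimensional,
    p-level grid in [0, 1]^k. Simplified: every factor moves upward
    by delta, whereas Morris's full design also allows -delta steps."""
    rng = np.random.default_rng(seed)
    delta = p / (2 * (p - 1))             # recommended step for even p
    # grid levels from which x_i + delta still lies in [0, 1]
    base_levels = np.arange(0, 1 - delta + 1e-12, 1 / (p - 1))
    x = rng.choice(base_levels, size=k)   # random base point on the grid
    order = rng.permutation(k)            # random order of factor moves
    points = [x.copy()]
    for i in order:
        x = x.copy()
        x[i] += delta                     # move factor i by one step
        points.append(x)
    return np.array(points), order, delta
```

Each of the [math]\displaystyle{ k }[/math] moves changes exactly one coordinate, so evaluating the model at the [math]\displaystyle{ (k+1) }[/math] points of a trajectory yields one elementary effect per input.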

Along each trajectory the so-called elementary effect for each input factor is defined as:

[math]\displaystyle{ d_i(X) = \frac{Y(X_1, \ldots ,X_{i-1}, X_i + \Delta, X_{i+1}, \ldots, X_k ) - Y( \mathbf X)}{\Delta} }[/math],

where [math]\displaystyle{ \mathbf{X} = (X_1, X_2, ... X_k) }[/math] is any selected value in [math]\displaystyle{ \Omega }[/math] such that the transformed point is still in [math]\displaystyle{ \Omega }[/math] for each index [math]\displaystyle{ i=1,\ldots, k. }[/math]

For each input, [math]\displaystyle{ r }[/math] elementary effects [math]\displaystyle{ d_i\left(X^{(1)} \right), d_i\left( X^{(2)} \right), \ldots, d_i\left( X^{(r)} \right) }[/math] are estimated by randomly sampling [math]\displaystyle{ r }[/math] points [math]\displaystyle{ X^{(1)}, X^{(2)}, \ldots , X^{(r)} }[/math].
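For concreteness, the elementary effects along one trajectory can be computed as in the following sketch. The helper `elementary_effects`, the toy model and the trajectory values are hypothetical; NumPy is assumed.

```python
import numpy as np

def elementary_effects(f, traj, order):
    """Finite differences d_i = (Y(x + delta e_i) - Y(x)) / step along a
    trajectory; order[j] is the factor moving between points j and j+1."""
    y = np.array([f(x) for x in traj])
    d = np.empty(len(order))
    for j, i in enumerate(order):
        step = traj[j + 1][i] - traj[j][i]   # +delta or -delta
        d[i] = (y[j + 1] - y[j]) / step
    return d

# toy model with k = 2 inputs and a hand-built trajectory (p = 4, delta = 2/3)
f = lambda x: 3.0 * x[0] + x[1] ** 2
traj = np.array([[0.0, 1/3], [2/3, 1/3], [2/3, 1.0]])
order = [0, 1]                         # factor 0 moves first, then factor 1
d = elementary_effects(f, traj, order) # d[0] = 3.0, the linear effect of X_1
```

Repeating this over [math]\displaystyle{ r }[/math] trajectories produces the [math]\displaystyle{ r }[/math] elementary effects per input from which the sensitivity measures are computed.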

Usually [math]\displaystyle{ r }[/math] is in the range 4–10, depending on the number of input factors, on the computational cost of the model and on the choice of the number of levels [math]\displaystyle{ p }[/math], since a high number of levels to be explored needs to be balanced by a high number of trajectories in order to obtain an exploratory sample. It has been shown that a convenient choice for the parameters [math]\displaystyle{ p }[/math] and [math]\displaystyle{ \Delta }[/math] is [math]\displaystyle{ p }[/math] even and [math]\displaystyle{ \Delta }[/math] equal to [math]\displaystyle{ p/[2(p-1)] }[/math], as this ensures equal probability of sampling in the input space.
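The effect of this choice can be checked numerically: for [math]\displaystyle{ p = 4 }[/math] the recommended step is [math]\displaystyle{ \Delta = 2/3 }[/math], and the base levels from which a [math]\displaystyle{ +\Delta }[/math] move stays in [math]\displaystyle{ [0, 1] }[/math], together with the levels they reach, cover the four grid levels exactly once (a small sketch in plain Python):

```python
p = 4
delta = p / (2 * (p - 1))                  # recommended step: 2/3 for p = 4
levels = [i / (p - 1) for i in range(p)]   # grid levels {0, 1/3, 2/3, 1}
# base levels from which x + delta still lies in [0, 1]
bases = [x for x in levels if x + delta <= 1 + 1e-12]
reached = [x + delta for x in bases]
# bases + reached covers each of the p grid levels exactly once,
# so every level is sampled with equal probability
```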

In case the input factors are not uniformly distributed, the best practice is to sample in the space of the quantiles and to obtain the input values using inverse cumulative distribution functions. Note that in this case [math]\displaystyle{ \Delta }[/math] equals the step taken by the inputs in the space of the quantiles.
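As a sketch of this practice, consider an exponentially distributed input with rate [math]\displaystyle{ \lambda = 1 }[/math] (hypothetical values): the trajectory moves in quantile space with step [math]\displaystyle{ \Delta }[/math], and the physical input values are obtained through the inverse CDF.

```python
import math

# quantile-space points of one factor along a trajectory: a step of delta = 0.5
quantile_points = [0.25, 0.75]

# inverse CDF of the exponential distribution: F^{-1}(q) = -ln(1 - q) / lam
lam = 1.0  # rate parameter (assumed)
values = [-math.log(1 - q) / lam for q in quantile_points]

# the elementary effect is still divided by delta = 0.5,
# the step taken in quantile space
```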

The two measures [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math] are defined as the mean and the standard deviation of the distribution of the elementary effects of each input:

[math]\displaystyle{ \mu_i = \frac{1}{r} \sum_{j=1}^r d_i \left( X^{(j)} \right) }[/math],
[math]\displaystyle{ \sigma_i = \sqrt{ \frac{1}{(r-1)} \sum_{j=1}^r \left( d_i \left( X^{(j)} \right) - \mu_i \right)^2} }[/math].
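Given the [math]\displaystyle{ r }[/math] elementary effects of an input, these two statistics are straightforward to compute (hypothetical values; NumPy assumed — note `ddof=1` for the sample standard deviation, matching the [math]\displaystyle{ 1/(r-1) }[/math] factor above):

```python
import numpy as np

# hypothetical elementary effects of one input over r = 5 trajectories
d_i = np.array([1.8, 2.1, -1.9, 2.0, -2.2])

mu_i = d_i.mean()            # mu: mean of the elementary effects
sigma_i = d_i.std(ddof=1)    # sigma: sample standard deviation, 1/(r-1) factor
```

A large [math]\displaystyle{ \sigma }[/math] relative to [math]\displaystyle{ \mu }[/math], as in this example, indicates that the elementary effects vary strongly across the input space, i.e. non-linearity or interactions.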

These two measures need to be read together (e.g. on a two-dimensional graph) in order to rank the input factors in order of importance and to identify those inputs which do not influence the output variability. Low values of both [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math] correspond to a non-influential input.

An improvement of this method was developed by Campolongo et al.[7] who proposed a revised measure [math]\displaystyle{ \mu^* }[/math], which on its own is sufficient to provide a reliable ranking of the input factors. The revised measure is the mean of the distribution of the absolute values of the elementary effects of the input factors:

[math]\displaystyle{ \mu_i^* = \frac{1}{r} \sum_{j=1}^r \left| d_i \left( X^{(j)} \right) \right| }[/math].

The use of [math]\displaystyle{ \mu^* }[/math] solves the problem of the effects of opposite signs which occurs when the model is non-monotonic and which can cancel each other out, thus resulting in a low value for [math]\displaystyle{ \mu }[/math].
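The cancellation effect is easy to reproduce with hypothetical numbers (NumPy assumed): elementary effects of similar magnitude but opposite sign give a [math]\displaystyle{ \mu }[/math] near zero, while [math]\displaystyle{ \mu^* }[/math] still flags the input as influential.

```python
import numpy as np

# elementary effects of opposite signs, as produced by a non-monotonic model
d_i = np.array([2.0, -2.0, 2.1, -1.9])

mu_i = d_i.mean()                # near zero: positive and negative effects cancel
mu_star_i = np.abs(d_i).mean()   # large: the input is clearly influential
```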

An efficient technical scheme to construct the trajectories used in the EE method is presented in the original paper by Morris, while an improved strategy aimed at better exploring the input space was proposed by Campolongo et al.

References

  1. Home page of Max D. Morris at Iowa State University. https://www.stat.iastate.edu/people/max-morris
  2. Morris, M. D. (1991). "Factorial sampling plans for preliminary computational experiments." Technometrics, 33, 161–174.
  3. Borgonovo, Emanuele, and Elmar Plischke. 2016. “Sensitivity Analysis: A Review of Recent Advances.” European Journal of Operational Research 248 (3): 869–87. https://doi.org/10.1016/J.EJOR.2015.06.032.
  4. Iooss, Bertrand, and Paul Lemaître. 2015. “A Review on Global Sensitivity Analysis Methods.” In Uncertainty Management in Simulation-Optimization of Complex Systems, edited by G. Dellino and C. Meloni, 101–22. Boston, MA: Springer. https://doi.org/10.1007/978-1-4899-7547-8_5.
  5. Norton, J.P. 2015. “An Introduction to Sensitivity Assessment of Simulation Models.” Environmental Modelling & Software 69 (C): 166–74. https://doi.org/10.1016/j.envsoft.2015.03.020.
  6. Wei, Pengfei, Zhenzhou Lu, and Jingwen Song. 2015. “Variable Importance Analysis: A Comprehensive Review.” Reliability Engineering & System Safety 142: 399–432. https://doi.org/10.1016/j.ress.2015.05.018.
  7. Campolongo, F., J. Cariboni, and A. Saltelli (2007). An effective screening design for sensitivity analysis of large models. Environmental Modelling and Software, 22, 1509–1518.