Mean absolute percentage error

The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula:

[math]\displaystyle{ \mbox{MAPE} = \frac{1}{n}\sum_{t=1}^n \left|\frac{A_t-F_t}{A_t}\right| }[/math]

where [math]\displaystyle{ A_t }[/math] is the actual value and [math]\displaystyle{ F_t }[/math] is the forecast value. Their difference is divided by the actual value [math]\displaystyle{ A_t }[/math], the absolute value of this ratio is summed over every forecasted point in time, and the sum is divided by the number of fitted points n. The result is often multiplied by 100 to express the error as a percentage.
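
For illustration, a minimal Python sketch of this formula (the numbers are hypothetical example data):

    import numpy as np

    def mape(actual, forecast):
        # MAPE as a ratio; multiply by 100 to report a percentage
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.mean(np.abs((actual - forecast) / actual))

    # Four periods of hypothetical actuals and forecasts
    print(mape([100, 80, 120, 90], [110, 70, 115, 95]))  # about 0.081, i.e. roughly 8.1%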

MAPE in regression problems

Mean absolute percentage error is commonly used as a loss function for regression problems and in model evaluation, because of its very intuitive interpretation in terms of relative error.

Definition

Consider a standard regression setting in which the data are fully described by a random pair [math]\displaystyle{ Z=(X,Y) }[/math] with values in [math]\displaystyle{ \mathbb{R}^d\times\mathbb{R} }[/math], and n i.i.d. copies [math]\displaystyle{ (X_1, Y_1), ..., (X_n, Y_n) }[/math] of [math]\displaystyle{ (X,Y) }[/math]. Regression models aim at finding a good model for the pair, that is, a measurable function g from [math]\displaystyle{ \mathbb{R}^d }[/math] to [math]\displaystyle{ \mathbb{R} }[/math] such that [math]\displaystyle{ g(X) }[/math] is close to Y.

In the classical regression setting, the closeness of [math]\displaystyle{ g(X) }[/math] to Y is measured via the L2 risk, also called the mean squared error (MSE). In the MAPE regression context,[1] the closeness of [math]\displaystyle{ g(X) }[/math] to Y is measured via the MAPE, and the aim of MAPE regressions is to find a model [math]\displaystyle{ g_\text{MAPE} }[/math] such that:

[math]\displaystyle{ g_\text{MAPE}(x) = \arg\min_{g \in \mathcal{G}} \mathbb{E} \left[ \left|\frac{g(X) - Y}{Y}\right| \,\middle|\, X = x\right] }[/math]

where [math]\displaystyle{ \mathcal{G} }[/math] is the class of models considered (e.g. linear models).

In practice

In practice [math]\displaystyle{ g_\text{MAPE}(x) }[/math] can be estimated by the empirical risk minimization strategy, leading to

[math]\displaystyle{ \widehat{g}_\text{MAPE}(x) = \arg\min_{g \in \mathcal{G}} \sum_{i=1}^n \left|\frac{g(X_i) - Y_i}{Y_i}\right| }[/math]
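
A minimal numerical sketch of this empirical risk minimization, assuming a linear model class and synthetic data (the use of scipy's Nelder-Mead optimizer here is an illustrative choice, not prescribed by the method):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.uniform(1.0, 10.0, size=200)                  # single covariate
    y = 3.0 * X + 5.0 + rng.normal(0.0, 1.0, size=200)    # strictly positive targets

    def empirical_mape(params):
        a, b = params
        return np.mean(np.abs((a * X + b - y) / y))       # empirical MAPE risk

    # Minimize the empirical MAPE over the parameters of the linear model
    result = minimize(empirical_mape, x0=[1.0, 0.0], method="Nelder-Mead")
    print(result.x)   # slope and intercept of the fitted MAPE regression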

From a practical point of view, using the MAPE as a quality function for a regression model is equivalent to performing weighted mean absolute error (MAE) regression, i.e. weighted median regression (a special case of quantile regression). This equivalence is immediate, since

[math]\displaystyle{ \widehat{g}_\text{MAPE}(x) = \arg\min_{g \in \mathcal{G}} \sum_{i=1}^n \omega(Y_i) \left|g(X_i) - Y_i\right| \mbox{ with } \omega(Y_i) = \left|\frac{1}{Y_i}\right| }[/math]

As a consequence, using the MAPE is straightforward in practice, for example with existing quantile-regression libraries that accept observation weights, as sketched below.
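
A minimal sketch of this reduction, assuming scikit-learn (version 1.0 or later), whose QuantileRegressor accepts per-observation weights; with quantile 0.5 and weights 1/|y_i| this performs MAPE regression for a linear model:

    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(1.0, 10.0, size=(200, 1))
    y = 3.0 * X[:, 0] + 5.0 + rng.normal(0.0, 1.0, size=200)

    # Weighted median regression with weights 1/|y_i| minimizes the empirical MAPE
    model = QuantileRegressor(quantile=0.5, alpha=0.0, solver="highs")
    model.fit(X, y, sample_weight=1.0 / np.abs(y))
    print(model.coef_, model.intercept_)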

Consistency

The use of the MAPE as a loss function for regression analysis is feasible both from a practical and from a theoretical point of view, since the existence of an optimal model and the consistency of empirical risk minimization can be proved.[1]

WMAPE

WMAPE (sometimes spelled wMAPE) stands for weighted mean absolute percentage error.[2] It is a measure used to evaluate the performance of regression or forecasting models. It is a variant of MAPE in which the absolute percentage errors are combined as a weighted arithmetic mean. Most commonly the absolute percentage errors are weighted by the actuals (e.g. in sales forecasting, errors are weighted by sales volume).[3] Effectively, this overcomes the 'infinite error' issue.[4] Its formula is:[4] [math]\displaystyle{ \mbox{wMAPE} = \frac{\displaystyle \sum_{i=1}^n \left(w_i \cdot \tfrac{\left|A_i-F_i\right|}{|A_i|} \right)}{\displaystyle \sum_{i=1}^n w_i} = \frac{\displaystyle \sum_{i=1}^n \left(|A_i| \cdot \tfrac{\left|A_i-F_i\right|}{|A_i|} \right)}{\displaystyle \sum_{i=1}^n \left|A_i\right|} }[/math]

where [math]\displaystyle{ w_i }[/math] is the weight, [math]\displaystyle{ A }[/math] is the vector of actual values and [math]\displaystyle{ F }[/math] is the forecast or prediction. When the actuals are used as weights, the [math]\displaystyle{ |A_i| }[/math] factors cancel and the expression simplifies to: [math]\displaystyle{ \mbox{wMAPE} = \frac{\displaystyle \sum_{i=1}^n \left|A_i-F_i\right|}{\displaystyle \sum_{i=1}^n \left|A_i\right|} }[/math]
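
A short sketch of the simplified form (hypothetical example numbers; note that a zero actual no longer produces an infinite error term):

    import numpy as np

    def wmape(actual, forecast):
        # wMAPE with the actuals as weights: sum |A - F| / sum |A|
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual))

    print(wmape([100, 0, 120, 90], [110, 5, 115, 95]))  # about 0.081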

Confusingly, wMAPE is sometimes used to refer to a different measure in which the numerator and denominator of the formula above are weighted again by a separate set of custom weights [math]\displaystyle{ w_i }[/math]. It might be more accurate to call this the double-weighted MAPE (wwMAPE). Its formula is: [math]\displaystyle{ \mbox{wwMAPE} = \frac{\displaystyle \sum_{i=1}^n w_i \left|A_i-F_i\right|}{\displaystyle \sum_{i=1}^n w_i \left|A_i\right|} }[/math]
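
And a corresponding sketch with a separate set of custom weights (the weights here are purely illustrative, e.g. item prices or priorities):

    import numpy as np

    def wwmape(actual, forecast, weights):
        # Double-weighted MAPE: sum(w |A - F|) / sum(w |A|)
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        weights = np.asarray(weights, dtype=float)
        return np.sum(weights * np.abs(actual - forecast)) / np.sum(weights * np.abs(actual))

    print(wwmape([100, 80, 120], [110, 70, 115], weights=[1.0, 2.0, 0.5]))  # about 0.102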

Issues

Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application,[5] and there are many studies on shortcomings and misleading results from MAPE.[6][7]

  • It cannot be used if there are zero or close-to-zero values (which sometimes happens, for example in demand data) because there would be a division by zero or values of MAPE tending to infinity.[8]
  • For forecasts which are too low the percentage error cannot exceed 100%, but for forecasts which are too high there is no upper limit to the percentage error.
  • MAPE puts a heavier penalty on negative errors (when [math]\displaystyle{ A_t \lt F_t }[/math]) than on positive errors.[9] As a consequence, when MAPE is used to compare the accuracy of prediction methods, it is biased: it will systematically favor a method whose forecasts are too low. This little-known but serious issue can be overcome by using an accuracy measure based on the logarithm of the accuracy ratio (the ratio of the predicted to actual value), given by [math]\displaystyle{ \log\left(\frac{\text{predicted}}{\text{actual}}\right) }[/math]. This approach has superior statistical properties and leads to predictions that can be interpreted in terms of the geometric mean.[5]
  • It is often assumed that minimizing the MAPE yields the (conditional) median. However, for a log-normal distribution the median is [math]\displaystyle{ e^\mu }[/math], whereas the MAPE is minimized at [math]\displaystyle{ e^{\mu - \sigma^{2}} }[/math], as illustrated by the sketch below.
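
A small Monte Carlo sketch of this last point, using a log-normal sample with μ = 0 and σ = 1, so the median is e^0 = 1 while the MAPE-optimal constant prediction is e^(-1) ≈ 0.37 (the grid search is just an illustrative device):

    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

    # Empirical MAPE of a constant prediction g, evaluated on a grid
    grid = np.linspace(0.05, 3.0, 600)
    mape_values = [np.mean(np.abs(g - y) / y) for g in grid]
    best = grid[np.argmin(mape_values)]

    print(np.median(y))  # close to e^0 = 1, the median
    print(best)          # close to e^(0 - 1) = 0.37, not the median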

To overcome these issues with MAPE, there are some other measures proposed in literature:

  • Mean Absolute Scaled Error (MASE)
  • Symmetric Mean Absolute Percentage Error (sMAPE)
  • Mean Directional Accuracy (MDA)
  • Mean Arctangent Absolute Percentage Error (MAAPE): MAAPE can be considered a slope as an angle, while MAPE is a slope as a ratio.[7]

References

  1. de Myttenaere, A.; Golden, B.; Le Grand, B.; Rossi, F. (2016). "Mean absolute percentage error for regression models". Neurocomputing. arXiv:1605.02541.
  2. "Understanding Forecast Accuracy: MAPE, WAPE, WMAPE". Baeldung. https://www.baeldung.com/cs/mape-vs-wape-vs-wmape.
  3. "WMAPE: Weighted Mean Absolute Percentage Error". https://ibf.org/knowledge/glossary/weighted-mean-absolute-percentage-error-wmape-299.
  4. "Statistical Forecast Errors". https://blog.olivehorse.com/statistical-forecast-errors.
  5. Tofallis, C. (2015). "A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation". Journal of the Operational Research Society, 66(8):1352–1362 (archived preprint available).
  6. Hyndman, Rob J.; Koehler, Anne B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting, 22(4):679–688. doi:10.1016/j.ijforecast.2006.03.001.
  7. Kim, Sungil; Kim, Heeyoung (2016). "A new metric of absolute percentage error for intermittent demand forecasts". International Journal of Forecasting, 32(3):669–679. doi:10.1016/j.ijforecast.2015.12.003.
  8. Kim, Sungil; Kim, Heeyoung (2016). "A new metric of absolute percentage error for intermittent demand forecasts". International Journal of Forecasting, 32(3):669–679. doi:10.1016/j.ijforecast.2015.12.003.
  9. Makridakis, Spyros (1993). "Accuracy measures: theoretical and practical concerns". International Journal of Forecasting, 9(4):527–529. doi:10.1016/0169-2070(93)90079-3.