Forecast skill

In the fields of forecasting and prediction, forecasting skill or prediction skill is any measure of the accuracy and/or degree of association of a prediction with an observation or estimate of the actual value of what is being predicted (formally, the predictand); it may be quantified as a skill score.[1] In meteorology, and more specifically in weather forecasting, skill measures the superiority of a forecast over a simple historical baseline of past observations. The same forecast methodology can yield different skill scores at different places, or even in the same place for different seasons (e.g., spring weather might be driven by erratic local conditions, whereas winter cold snaps might correlate with observable polar winds). Weather forecast skill is often presented in the form of seasonal geographical maps.

Forecasting skill for single-value forecasts (i.e., time series of a scalar quantity) is commonly represented in terms of metrics such as correlation, root mean squared error, mean absolute error, relative mean absolute error, bias, and the Brier score, among others. A number of scores associated with the concept of entropy in information theory are also used.[2][3]
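
These single-value metrics follow directly from paired forecast-observation samples. A minimal sketch in Python (the data and the function name `deterministic_metrics` are hypothetical, chosen for illustration):

```python
import numpy as np

def deterministic_metrics(forecast, observed):
    """Common single-value forecast metrics from paired samples."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    error = forecast - observed
    return {
        "bias": error.mean(),                   # mean error
        "mae": np.abs(error).mean(),            # mean absolute error
        "rmse": np.sqrt((error ** 2).mean()),   # root mean squared error
        "correlation": np.corrcoef(forecast, observed)[0, 1],
    }

# Hypothetical example: five paired forecasts and observations
print(deterministic_metrics([21.0, 19.5, 23.1, 18.0, 20.2],
                            [20.3, 19.0, 24.0, 17.5, 21.1]))
```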

The term 'forecast skill' may also be used qualitatively, in which case it could either refer to forecast performance according to a single metric or to the overall forecast performance based on multiple metrics.

Metrics

Probabilistic forecast skill scores may use metrics such as the Ranked Probability Skill Score (RPSS) or the Continuous RPSS (CRPSS), among others. Categorical skill metrics such as the False Alarm Ratio (FAR), the Probability of Detection (POD), the Critical Success Index (CSI), and the Equitable Threat Score (ETS) are also relevant for some forecasting applications. Skill is often, though not exclusively, expressed in relative terms, comparing the performance of a particular forecast to that of a reference (benchmark) forecast; this formulation is called a skill score.
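
The categorical metrics above are computed from a 2×2 contingency table of hits, misses, false alarms, and correct negatives. A minimal sketch, with hypothetical event counts for a rain/no-rain forecast:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical skill metrics from a 2x2 contingency table of event counts."""
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    # Hits expected from a random forecast, used by the equitable threat score
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "CSI": csi, "ETS": ets}

# Hypothetical verification counts
print(categorical_scores(hits=82, misses=23, false_alarms=38, correct_negatives=222))
```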

Forecasting skill metric and score calculations should be made over a large enough sample of forecast-observation pairs to be statistically robust. A sample of predictions for a single predictand (e.g., temperature at one location, or a single stock value) typically includes forecasts made on a number of different dates. A sample could also pool forecast-observation pairs across space, for a prediction made on a single date, as in the forecast of a weather event that is verified at many locations.

Example skill calculation

An example of a skill calculation using the error metric mean squared error (MSE) and the associated skill score is given below. In this case, a perfect forecast results in a forecast skill metric of zero and a skill score of 1.0. A forecast with skill equal to that of the reference forecast has a skill score of 0.0, and a forecast less skillful than the reference has a negative skill score, which is unbounded below.[4][5]

Skill metric, mean squared error (MSE): $\mathit{MSE} = \frac{1}{N} \sum_{t=1}^{N} E_t^2$, where $E_t$ is the error (forecast minus observation) of the $t$-th forecast and $N$ is the number of forecast-observation pairs.
The associated skill score (SS): $\mathit{SS} = 1 - \frac{\mathit{MSE}_\text{forecast}}{\mathit{MSE}_\text{ref}}$, where $\mathit{MSE}_\text{ref}$ is the MSE of the reference forecast.
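
A minimal sketch of this calculation, assuming (as one common but not mandatory choice) a climatological reference forecast equal to the sample mean of the observations; the data are hypothetical:

```python
import numpy as np

def mse_skill_score(forecast, observed):
    """MSE-based skill score against a climatological reference forecast."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mse_forecast = np.mean((forecast - observed) ** 2)
    # Reference forecast: always predict the climatological (sample mean) value
    reference = np.full_like(observed, observed.mean())
    mse_ref = np.mean((reference - observed) ** 2)
    return 1.0 - mse_forecast / mse_ref  # 1.0 = perfect, 0.0 = no better than reference

# Hypothetical daily temperature forecasts vs. observations
print(mse_skill_score([21.0, 19.5, 23.1, 18.0, 20.2],
                      [20.3, 19.0, 24.0, 17.5, 21.1]))
```

With a climatological reference, this skill score is closely related to the correlation between forecasts and observations, as analyzed by Murphy.[5]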

Further reading

A broad range of forecast metrics can be found in published and online resources. A good starting point is the Australian Bureau of Meteorology's long-standing verification web pages, maintained for the WWRP/WGNE Joint Working Group on Forecast Verification Research.[6]

A popular textbook and reference that discusses forecast skill is Statistical Methods in the Atmospheric Sciences.[7]

References

  1. "American Meteorological Society". https://glossary.ametsoc.org. 
  2. Gneiting, Tilmann; Raftery, Adrian E (2007-03-01). "Strictly Proper Scoring Rules, Prediction, and Estimation". Journal of the American Statistical Association 102 (477): 359–378. doi:10.1198/016214506000001437. ISSN 0162-1459. 
  3. Riccardo Benedetti (2010-01-01). "Scoring Rules for Forecast Verification". Monthly Weather Review 138 (1): 203–211. doi:10.1175/2009MWR2945.1. Bibcode2010MWRv..138..203B. 
  4. Roebber, Paul J. (1998), "The Regime Dependence of Degree Day Forecast Technique, Skill, and Value", Weather and Forecasting 13 (3): 783–794, doi:10.1175/1520-0434(1998)013<0783:TRDODD>2.0.CO;2, Bibcode1998WtFor..13..783R 
  5. Murphy, Allen H. (1988), "Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient", Monthly Weather Review 116 (12): 2417–2424, doi:10.1175/1520-0493(1988)116<2417:SSBOTM>2.0.CO;2, Bibcode1988MWRv..116.2417M 
  6. WWRP/WGNE Joint Working Group on Forecast Verification Research.
  7. Wilks, Daniel (2011-06-03). Statistical Methods in the Atmospheric Sciences (3rd ed.). store.elsevier.com. ISBN 9780123850225. http://store.elsevier.com/Statistical-Methods-in-the-Atmospheric-Sciences/Daniel-Wilks/isbn-9780123850225/. Retrieved 2016-02-01.