Empirical statistical laws

An empirical statistical law or (in popular terminology) a law of statistics represents a type of behaviour that has been found across a number of datasets and, indeed, across a range of types of datasets.[1] Many of these observations have been formulated and proved as statistical or probabilistic theorems, and the term "law" has been carried over to these theorems. There are other statistical and probabilistic theorems that also have "law" as part of their names but that have not obviously been derived from empirical observations. Both types of "law", however, may be considered instances of a scientific law in the field of statistics. What distinguishes an empirical statistical law from a formal statistical theorem is that its patterns simply appear in natural distributions, without prior theoretical reasoning about the data.

Examples

There are several such popular "laws of statistics".

The Pareto principle is a popular example of such a "law". It states that roughly 80% of the effects come from 20% of the causes, and is thus also known as the 80/20 rule.[2] In business, the 80/20 rule says that 80% of a company's business comes from just 20% of its customers.[3] In software engineering, it is often said that 80% of the errors and crashes are caused by just 20% of the bugs.[4] Roughly 20% of the world's population produces about 80% of worldwide GDP.[5] And 80% of healthcare expenses in the US are incurred by 20% of the population.[6] The concentration the principle describes can be measured directly, as in the sketch below.
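As a rough illustration only, the following Python sketch sorts items by size and asks what share of the total the largest 20% account for. It uses synthetic data drawn from a Pareto distribution; the shape parameter 1.16 and the sample size are illustrative choices, not part of the principle itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "effect sizes" (e.g., revenue per customer) drawn from a
    # heavy-tailed Pareto distribution; shape a = 1.16 gives roughly an
    # 80/20 split in expectation (an illustrative choice, not a law).
    sizes = rng.pareto(a=1.16, size=100_000)

    # Sort descending and take the largest 20% of items.
    sizes_sorted = np.sort(sizes)[::-1]
    top_fifth = sizes_sorted[: len(sizes_sorted) // 5]

    share = top_fifth.sum() / sizes_sorted.sum()
    print(f"Top 20% of items account for {share:.0%} of the total")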

Zipf's law, described as an "empirical statistical law" of linguistics,[7] is another example. According to the "law", given some dataset of text, the frequency of a word is inversely proportional to its frequency rank. In other words, the second most common word should appear about half as often as the most common word, and the fifth most common word about one-fifth as often. However, what makes Zipf's law an "empirical statistical law" rather than just a theorem of linguistics is that it applies to phenomena outside its original field. For example, a ranked list of US metropolitan populations also follows Zipf's law,[8] and even forgetting follows Zipf's law.[9] Summarizing several natural data patterns with simple rules in this way is a defining characteristic of these "empirical statistical laws".
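As a sketch of how the pattern can be checked, the following Python snippet counts word frequencies in a text and compares the observed counts against the 1/rank prediction. The short sample string is only a stand-in; the law emerges clearly only over large corpora.

    from collections import Counter

    # Stand-in corpus: any sufficiently long text works; a short sample
    # like this one only hints at the pattern.
    text = (
        "the quick brown fox jumps over the lazy dog and the dog barks at "
        "the fox while the cat watches the dog and the fox from the fence"
    )

    counts = Counter(text.lower().split()).most_common()

    # Zipf's law predicts frequency(rank r) is about frequency(rank 1) / r.
    top_freq = counts[0][1]
    for rank, (word, freq) in enumerate(counts[:5], start=1):
        print(f"{rank}: {word!r} observed={freq} predicted={top_freq / rank:.1f}")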

Examples of empirically inspired statistical laws that have a firm theoretical basis include:

  • Statistical regularity
  • Law of large numbers
  • Law of truly large numbers
  • Central limit theorem
  • Regression towards the mean

Examples of "laws" with a weaker foundation include:

Examples of "laws" which are more general observations than having a theoretical background:

Examples of supposed "laws" which are incorrect include:

  • Law of averages

Notes

References

  • Kitcher, P., Salmon, W.C. (editors) (2009). Scientific Explanation. University of Minnesota Press. ISBN 978-0-8166-5765-0.
  • Gelbukh, A., Sidorov, G. (2001). "Zipf and Heaps Laws' Coefficients Depend on Language". In: Computational Linguistics and Intelligent Text Processing (pp. 332–335). Springer. ISBN 978-3-540-41687-6.