Probability measure


In mathematics, a probability measure is a real-valued function defined on a set of events in a σ-algebra that satisfies measure properties such as countable additivity.[1] The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign the value 1 to the entire probability space.

Intuitively, the additivity property says that the probability the measure assigns to the union of two disjoint (mutually exclusive) events should be the sum of the probabilities of those events; for example, the value assigned to the outcome "1 or 2" in a throw of a die should be the sum of the values assigned to the outcomes "1" and "2".

Probability measures have applications in diverse fields, from physics to finance and biology.

Definition

A probability measure mapping the σ-algebra for [math]\displaystyle{ 2^3 }[/math] events to the unit interval.

The requirements for a set function [math]\displaystyle{ \mu }[/math] to be a probability measure on a σ-algebra are that:

  • [math]\displaystyle{ \mu }[/math] must return results in the unit interval [math]\displaystyle{ [0, 1], }[/math] returning [math]\displaystyle{ 0 }[/math] for the empty set and [math]\displaystyle{ 1 }[/math] for the entire space.
  • [math]\displaystyle{ \mu }[/math] must satisfy the countable additivity property that for all countable collections [math]\displaystyle{ E_1, E_2, \ldots }[/math] of pairwise disjoint sets: [math]\displaystyle{ \mu\left(\bigcup_{i \in \N} E_i\right) = \sum_{i \in \N} \mu(E_i). }[/math]

For example, given three elements 1, 2 and 3 with probabilities [math]\displaystyle{ 1/4, 1/4 }[/math] and [math]\displaystyle{ 1/2, }[/math] the value assigned to [math]\displaystyle{ \{1, 3\} }[/math] is [math]\displaystyle{ 1/4 + 1/2 = 3/4, }[/math] as in the diagram on the right.
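
This finite example can be checked directly. The following Python sketch (the dictionary p and the helper measure are names invented for this illustration) assigns the stated probabilities to the outcomes 1, 2 and 3 and verifies that the resulting set function is normalized and additive on disjoint events:

  from fractions import Fraction

  # Probabilities of the elementary outcomes 1, 2 and 3 from the example above.
  p = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}

  def measure(event):
      """Value assigned to a subset of {1, 2, 3}: the sum over its outcomes."""
      return sum(p[outcome] for outcome in event)

  assert measure(set()) == 0                                 # the empty set gets 0
  assert measure({1, 2, 3}) == 1                             # the whole space gets 1
  assert measure({1, 3}) == Fraction(3, 4)                   # the value computed above
  assert measure({1} | {3}) == measure({1}) + measure({3})   # additivity on disjoint events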

The conditional probability, defined via the intersection of events as [math]\displaystyle{ \mu (B \mid A) = \frac{\mu(A \cap B)}{\mu(A)}, }[/math][2] satisfies the probability measure requirements so long as [math]\displaystyle{ \mu(A) }[/math] is not zero.[3]
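
As a concrete check, the following sketch (again with invented names, using a fair six-sided die as in the introduction) implements [math]\displaystyle{ \mu(B \mid A) }[/math] for the uniform measure and conditions the event "the throw is at least 4" on the event "the throw is even":

  from fractions import Fraction

  omega = {1, 2, 3, 4, 5, 6}                # sample space of a fair die

  def mu(event):
      """Uniform probability measure: each face has measure 1/6."""
      return Fraction(len(event), len(omega))

  def conditional(b, a):
      """mu(B | A) = mu(A and B) / mu(A), defined only when mu(A) is nonzero."""
      if mu(a) == 0:
          raise ValueError("conditioning event has measure zero")
      return mu(a & b) / mu(a)

  A = {2, 4, 6}                             # "the throw is even"
  B = {4, 5, 6}                             # "the throw is at least 4"
  assert conditional(B, A) == Fraction(2, 3)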

Probability measures are distinct from the more general notion of fuzzy measures, in which there is no requirement that the fuzzy values sum up to [math]\displaystyle{ 1, }[/math] and the additive property is replaced by an order relation based on set inclusion.

Example applications

In many cases, statistical physics uses probability measures, but not all measures it uses are probability measures.[4][5]

Market measures, which assign probabilities to financial market spaces based on actual market movements, are examples of probability measures which are of interest in mathematical finance; for example, in the pricing of financial derivatives.[6] For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk-neutral measure (i.e. calculated using the corresponding risk-neutral density function), and discounted at the risk-free rate. If there is a unique probability measure that must be used to price assets in a market, then the market is called a complete market.[7]
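
As a simplified illustration of risk-neutral pricing (a standard one-step binomial model with made-up parameters, not drawn from the cited sources), the sketch below chooses the unique up-probability q that makes the discounted underlying price a martingale, and then prices a derivative as the expected payoff under that measure discounted at the risk-free rate:

  # One-step binomial model: the asset moves from s0 to s0*u or s0*d over one period.
  s0, u, d, r = 100.0, 1.2, 0.8, 0.05       # hypothetical parameters
  q = ((1 + r) - d) / (u - d)               # risk-neutral probability of the up move
  assert 0 < q < 1                          # needed for {q, 1 - q} to be a probability measure

  def price(payoff):
      """Expected payoff under the risk-neutral measure, discounted at the risk-free rate."""
      expected = q * payoff(s0 * u) + (1 - q) * payoff(s0 * d)
      return expected / (1 + r)

  call_price = price(lambda s: max(s - 100.0, 0.0))   # European call struck at 100
  print(round(call_price, 4))                         # about 11.9048 with these parameters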

Not all measures that intuitively represent chance or likelihood are probability measures. For instance, although the fundamental concept of a system in statistical mechanics is a measure space, such measures are not always probability measures.[4] In general, in statistical physics, if we consider sentences of the form "the probability of a system S assuming state A is p", the geometry of the system does not always lead to the definition of a probability measure under congruence, although it may do so in the case of systems with just one degree of freedom.[5]

Probability measures are also used in mathematical biology.[8] For instance, in comparative sequence analysis a probability measure may be defined for the likelihood that a variant may be permissible for an amino acid in a sequence.[9]
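
A toy version of this idea (the alignment column and the helper column_measure are invented for this sketch, not taken from the cited reference) turns the observed amino-acid frequencies at one position of a sequence alignment into a probability measure on the set of residues, from which the probability of a particular variant can be read off:

  from collections import Counter

  # Hypothetical residues observed at one alignment position across a protein family.
  column = ["A", "A", "S", "A", "T", "S", "A"]

  def column_measure(residues):
      """Probability measure on amino acids given by their relative frequencies."""
      counts = Counter(residues)
      total = sum(counts.values())
      return {aa: count / total for aa, count in counts.items()}

  mu = column_measure(column)
  assert abs(sum(mu.values()) - 1.0) < 1e-12   # the total measure is 1
  print(mu.get("S", 0.0))                      # estimated probability of variant S, here 2/7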

Ultrafilters can be understood as [math]\displaystyle{ \{0, 1\} }[/math]-valued probability measures, allowing for many intuitive proofs based upon measures. For instance, Hindman's Theorem can be proven by further investigation of these measures, and of their convolution in particular.
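
Concretely, an ultrafilter [math]\displaystyle{ U }[/math] on a set [math]\displaystyle{ X }[/math] induces the set function [math]\displaystyle{ \mu_U(A) = \begin{cases} 1 & \text{if } A \in U \\ 0 & \text{otherwise,} \end{cases} }[/math] and the ultrafilter axioms make [math]\displaystyle{ \mu_U }[/math] finitely additive with [math]\displaystyle{ \mu_U(X) = 1 }[/math]; for a non-principal ultrafilter on a countable set such as [math]\displaystyle{ \N }[/math] it is finitely, but not countably, additive.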

References

  1. An introduction to measure-theoretic probability by George G. Roussas 2004 ISBN 0-12-599022-7 page 47
  2. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. doi:10.1007/1-84628-168-7. ISSN 1431-875X. https://link.springer.com/book/10.1007/1-84628-168-7
  3. Probability, Random Processes, and Ergodic Properties by Robert M. Gray 2009 ISBN 1-4419-1089-1 page 163
  4. A course in mathematics for students of physics, Volume 2 by Paul Bamberg, Shlomo Sternberg 1991 ISBN 0-521-40650-1 page 802
  5. The concept of probability in statistical physics by Yair M. Guttmann 1999 ISBN 0-521-62128-3 page 149
  6. Quantitative methods in derivatives pricing by Domingo Tavella 2002 ISBN 0-471-39447-5 page 11
  7. Irreversible decisions under uncertainty by Svetlana I. Boyarchenko, Serge Levendorskiĭ 2007 ISBN 3-540-73745-6 page 11
  8. Mathematical Methods in Biology by J. David Logan, William R. Wolesensky 2009 ISBN 0-470-52587-8 page 195
  9. Discovering biomolecular mechanisms with computational biology by Frank Eisenhaber 2006 ISBN 0-387-34527-2 page 127
