Sample space


In probability theory, the sample space (also called sample description space,[1] possibility space,[2] or outcome space[3]) of an experiment or random trial is the set of all possible outcomes or results of that experiment.[4] A sample space is usually denoted using set notation, and the possible ordered outcomes, or sample points,[5] are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set"). The elements of a sample space may be numbers, words, letters, or symbols. A sample space itself can be finite, countably infinite, or uncountably infinite.[6]

A subset of the sample space is an event, denoted by [math]\displaystyle{ E }[/math]. If the outcome of an experiment is included in [math]\displaystyle{ E }[/math], then event [math]\displaystyle{ E }[/math] has occurred.[7]

For example, if the experiment is tossing a single coin, the sample space is the set [math]\displaystyle{ \{H,T\} }[/math], where the outcome [math]\displaystyle{ H }[/math] means that the coin is heads and the outcome [math]\displaystyle{ T }[/math] means that the coin is tails.[8] The possible events are [math]\displaystyle{ E=\{\} }[/math], [math]\displaystyle{ E=\{H\} }[/math], [math]\displaystyle{ E = \{T\} }[/math], and [math]\displaystyle{ E = \{H,T\} }[/math]. For tossing two coins, the sample space is [math]\displaystyle{ \{HH, HT, TH, TT\} }[/math], where the outcome is [math]\displaystyle{ HH }[/math] if both coins are heads, [math]\displaystyle{ HT }[/math] if the first coin is heads and the second is tails, [math]\displaystyle{ TH }[/math] if the first coin is tails and the second is heads, and [math]\displaystyle{ TT }[/math] if both coins are tails.[9] The event that at least one of the coins is heads is given by [math]\displaystyle{ E = \{HH,HT,TH\} }[/math].
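To make this concrete, here is a minimal Python sketch (the variable names are illustrative, not part of the article) that enumerates the two-coin sample space and tests whether the event "at least one of the coins is heads" has occurred for an observed outcome.

    from itertools import product

    # Sample space for tossing two coins: ordered pairs of heads (H) and tails (T).
    sample_space = set(product("HT", repeat=2))   # {('H','H'), ('H','T'), ('T','H'), ('T','T')}

    # An event is a subset of the sample space; here, "at least one coin is heads".
    at_least_one_head = {outcome for outcome in sample_space if "H" in outcome}

    # The event has occurred if the observed outcome is one of its elements.
    observed = ("T", "H")
    print(observed in at_least_one_head)   # True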

For tossing a single six-sided die one time, where the result of interest is the number of pips facing up, the sample space is [math]\displaystyle{ \{1,2,3,4,5,6\} }[/math].[10]

A well-defined, non-empty sample space [math]\displaystyle{ S }[/math] is one of three components in a probabilistic model (a probability space). The other two basic elements are a well-defined set of possible events (an event space), which is typically the power set of [math]\displaystyle{ S }[/math] if [math]\displaystyle{ S }[/math] is discrete or a σ-algebra on [math]\displaystyle{ S }[/math] if it is continuous, and a probability assigned to each event (a probability measure function).[11]
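As an illustration of these three components, the following is a minimal Python sketch of a discrete probability space for the single-coin example; the uniform probability measure is an assumption chosen for simplicity, not part of the definition.

    from itertools import combinations

    # Component 1: a well-defined, non-empty sample space (a single coin toss).
    S = {"H", "T"}

    # Component 2: an event space; for a discrete S, typically the power set of S.
    def power_set(s):
        items = list(s)
        return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

    events = power_set(S)   # [set(), {'H'}, {'T'}, {'H', 'T'}]

    # Component 3: a probability measure; here each outcome is assumed equally likely,
    # so an event's probability is its size divided by the size of S.
    def P(event):
        return len(event) / len(S)

    for E in events:
        print(sorted(E), P(E))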

A visual representation of a finite sample space and events. The red oval is the event that a number is odd, and the blue oval is the event that a number is prime.

A sample space can be represented visually by a rectangle, with the outcomes of the sample space denoted by points within the rectangle. The events may be represented by ovals, where the points enclosed within the oval make up the event.[12]
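The two events in such a diagram can also be written out as plain subsets. The short Python sketch below assumes, purely for illustration, the die sample space {1, ..., 6}; overlapping ovals then correspond to the intersection of the events.

    # Illustrative sample space (the six faces of a die) and two overlapping events.
    sample_space = {1, 2, 3, 4, 5, 6}

    odd = {n for n in sample_space if n % 2 == 1}   # "red oval": {1, 3, 5}
    prime = {2, 3, 5}                               # "blue oval"

    # Points inside both ovals correspond to the intersection of the events.
    print(odd & prime)   # {3, 5}
    print(odd | prime)   # outcomes inside at least one oval: {1, 2, 3, 5}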

Conditions of a sample space

A set [math]\displaystyle{ \Omega }[/math] with outcomes [math]\displaystyle{ s_1, s_2, \ldots, s_n }[/math] (i.e. [math]\displaystyle{ \Omega = \{s_1, s_2, \ldots, s_n\} }[/math]) must meet some conditions in order to be a sample space:[13]

  • The outcomes must be mutually exclusive, i.e. if [math]\displaystyle{ s_j }[/math] occurs, then no other [math]\displaystyle{ s_i }[/math] with [math]\displaystyle{ i \neq j }[/math] takes place.[6]
  • The outcomes must be collectively exhaustive, i.e. on every experiment (or random trial) there will always take place some outcome [math]\displaystyle{ s_i \in \Omega }[/math] for [math]\displaystyle{ i \in \{1, 2, \ldots, n\} }[/math].[6]
  • The sample space ([math]\displaystyle{ \Omega }[/math]) must have the right granularity depending on what the experimenter is interested in. Irrelevant information must be removed from the sample space and the right abstraction must be chosen.

For instance, in the trial of tossing a coin, one possible sample space is [math]\displaystyle{ \Omega_1 = \{H,T\} }[/math], where [math]\displaystyle{ H }[/math] is the outcome where the coin lands heads and [math]\displaystyle{ T }[/math] is for tails. Another possible sample space could be [math]\displaystyle{ \Omega_2 = \{(H,R), (H,NR), (T,R), (T,NR)\} }[/math]. Here, [math]\displaystyle{ R }[/math] denotes a rainy day and [math]\displaystyle{ NR }[/math] is a day where it is not raining. For most experiments, [math]\displaystyle{ \Omega_1 }[/math] would be a better choice than [math]\displaystyle{ \Omega_2 }[/math], as an experimenter likely does not care about how the weather affects the coin toss.
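One way to see the relationship between the two descriptions is that the coarser space [math]\displaystyle{ \Omega_1 }[/math] arises from [math]\displaystyle{ \Omega_2 }[/math] by discarding the weather component. A minimal Python sketch of this abstraction step (the projection function is illustrative, not from the article):

    # Fine-grained sample space: (coin result, weather) pairs, as in the article.
    omega_2 = {("H", "R"), ("H", "NR"), ("T", "R"), ("T", "NR")}

    # Projection that forgets the irrelevant weather component.
    def coin_only(outcome):
        return outcome[0]

    omega_1 = {coin_only(outcome) for outcome in omega_2}
    print(omega_1)   # {'H', 'T'}: the coarser space with the right granularity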

Multiple sample spaces

For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks (Ace through King), while another could be the suits (clubs, diamonds, hearts, or spades).[4][14] A more complete description of outcomes, however, could specify both the denomination and the suit, and a sample space describing each individual card can be constructed as the Cartesian product of the two sample spaces noted above (this space would contain fifty-two equally likely outcomes). Still other sample spaces are possible, such as right-side up or upside down, if some cards have been flipped when shuffling.
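The Cartesian-product construction can be sketched in a few lines of Python (the rank and suit labels below are one illustrative choice of notation):

    from itertools import product

    ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    suits = ["clubs", "diamonds", "hearts", "spades"]

    # The sample space of individual cards is the Cartesian product of the two coarser spaces.
    deck = set(product(ranks, suits))
    print(len(deck))   # 52 equally likely outcomes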

Equally likely outcomes

Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.
Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.

Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely.[15] For any sample space with [math]\displaystyle{ N }[/math] equally likely outcomes, each outcome is assigned the probability [math]\displaystyle{ \frac{1}{N} }[/math].[16] However, there are experiments that are not easily described by a sample space of equally likely outcomes—for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no physical symmetry to suggest that the two outcomes should be equally likely.[17]

Though most random phenomena do not have equally likely outcomes, it can be helpful to define a sample space in such a way that outcomes are at least approximately equally likely, since this condition significantly simplifies the computation of probabilities for events within the sample space. If each individual outcome occurs with the same probability, then the probability of any event becomes simply:[18]:346–347

[math]\displaystyle{ \mathrm{P}(\text{event}) = \frac{\text{number of outcomes in event}}{\text{number of outcomes in sample space}} }[/math]

For example, if two fair six-sided dice are thrown to generate two uniformly distributed integers, [math]\displaystyle{ D_1 }[/math] and [math]\displaystyle{ D_2 }[/math], each in the range from 1 to 6 inclusive, the 36 possible ordered pairs of outcomes [math]\displaystyle{ (D_1,D_2) }[/math] constitute a sample space of equally likely outcomes. In this case, the above formula applies directly: the probability that the sum [math]\displaystyle{ D_1 + D_2 }[/math] is five is [math]\displaystyle{ \frac{4}{36} }[/math], since four of the thirty-six equally likely pairs of outcomes sum to five.

If the result of interest were instead the sum of the two dice, the possible sums (two through twelve) are not equally likely, so the formula above cannot be applied to them directly. It can still be used, however, by counting in the underlying sample space of thirty-six equally likely ordered pairs the outcomes that produce each sum. A sum of two can occur only with the outcome [math]\displaystyle{ \{(1,1)\} }[/math], so the probability is [math]\displaystyle{ \frac{1}{36} }[/math]. For a sum of seven, the outcomes in the event are [math]\displaystyle{ \{(1,6), (6,1), (2,5), (5,2), (3,4),(4,3)\} }[/math], so the probability is [math]\displaystyle{ \frac{6}{36} }[/math].[19]
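The counting in these two dice examples can be reproduced directly; the following Python sketch simply tallies the equally likely ordered pairs that produce each sum:

    from itertools import product
    from fractions import Fraction

    # The 36 equally likely ordered pairs (D1, D2).
    pairs = list(product(range(1, 7), repeat=2))

    def prob_of_sum(s):
        favourable = [p for p in pairs if sum(p) == s]
        return Fraction(len(favourable), len(pairs))

    print(prob_of_sum(5))   # 1/9  (i.e. 4/36)
    print(prob_of_sum(2))   # 1/36
    print(prob_of_sum(7))   # 1/6  (i.e. 6/36)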

Simple random sample

Main page: Simple random sample

In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample—that is, a sample in which every individual in the population is equally likely to be included.[18]:274–275 The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes).[20]
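As a brief illustration, a simple random sample can be drawn with Python's standard library; the population below is a made-up list of identifiers, not data from any source:

    import random

    # A hypothetical population of 1000 individuals, labelled 0 to 999.
    population = list(range(1000))

    # random.sample draws without replacement, so every individual, and indeed
    # every possible 10-member subset, is equally likely to be selected.
    sample = random.sample(population, k=10)
    print(sample)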

Infinitely large sample spaces

In an elementary approach to probability, any subset of the sample space is usually called an event.[9] However, this gives rise to problems when the sample space is continuous, so that a more precise definition of an event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events.

An example of an infinitely large sample space is measuring the lifetime of a light bulb. The corresponding sample space would be [0, ∞).[9]
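To illustrate computing with such a continuous sample space, the sketch below assumes, purely for illustration (the article specifies no distribution), exponentially distributed lifetimes and estimates by simulation the probability of the event "the bulb lasts more than 1000 hours", i.e. the measurable subset [1000, ∞):

    import random

    # Assumption for illustration only: exponential lifetimes with a mean of 800 hours.
    MEAN_LIFETIME = 800.0

    def simulated_lifetime():
        return random.expovariate(1.0 / MEAN_LIFETIME)   # an outcome in [0, ∞)

    # Event: the bulb lasts more than 1000 hours, i.e. the subset [1000, ∞).
    trials = 100_000
    hits = sum(simulated_lifetime() > 1000 for _ in range(trials))
    print(hits / trials)   # Monte Carlo estimate of the event's probability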

References

  1. Stark, Henry; Woods, John W. (2002). Probability and Random Processes with Applications to Signal Processing (3rd ed.). Pearson. p. 7. ISBN 9788177583564. 
  2. Forbes, Catherine; Evans, Merran; Hastings, Nicholas; Peacock, Brian (2011). Statistical Distributions (4th ed.). Wiley. p. 3. ISBN 9780470390634. https://archive.org/details/statisticaldistr00cfor. 
  3. Hogg, Robert; Tannis, Elliot; Zimmerman, Dale (December 24, 2013). Probability and Statistical Inference. Pearson Education, Inc. p. 10. ISBN 978-0321923271. "The collection of all possible outcomes... is called the outcome space." 
  4. Albert, Jim (1998-01-21). "Listing All Possible Outcomes (The Sample Space)". Bowling Green State University. http://www-math.bgsu.edu/~albert/m115/probability/sample_space.html. 
  5. Soong, T. T. (2004). Fundamentals of Probability and Statistics for Engineers. Chichester: Wiley. ISBN 0-470-86815-5. OCLC 55135988. https://www.worldcat.org/oclc/55135988. 
  6. "UOR_2.1". https://web.mit.edu/urban_or_book/www/book/chapter2/2.1.html. 
  7. Ross, Sheldon (2010). A First Course in Probability (8th ed.). Pearson Prentice Hall. pp. 23. ISBN 978-0136033134. http://julio.staff.ipb.ac.id/files/2015/02/Ross_8th_ed_English.pdf. 
  8. Dekking, F. M. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer. ISBN 1-85233-896-2. OCLC 783259968. 
  9. "Sample Space, Events and Probability". https://faculty.math.illinois.edu/~kkirkpat/SampleSpace.pdf. 
  10. Larsen, R. J.; Marx, M. L. (2001). An Introduction to Mathematical Statistics and Its Applications (3rd ed.). Upper Saddle River, NJ: Prentice Hall. p. 22. ISBN 9780139223037. 
  11. LaValle, Steven M. (2006). Planning Algorithms. Cambridge University Press. pp. 442. http://lavalle.pl/planning/ch9.pdf. 
  12. "Sample Spaces, Events, and Their Probabilities". https://saylordotorg.github.io/text_introductory-statistics/s07-01-sample-spaces-events-and-their.html. 
  13. Tsitsiklis, John (Spring 2018). "Sample Spaces". Massachusetts Institute of Technology. https://ocw.mit.edu/resources/res-6-012-introduction-to-probability-spring-2018/part-i-the-fundamentals. 
  14. Jones, James (1996). "Stats: Introduction to Probability - Sample Spaces". Richland Community College. https://people.richland.edu/james/lecture/m170/ch05-int.html. 
  15. Foerster, Paul A. (2006). Algebra and Trigonometry: Functions and Applications, Teacher's Edition (Classics ed.). Prentice Hall. p. 633. ISBN 0-13-165711-9. https://archive.org/details/algebratrigonome00paul_0/page/633. 
  16. "Equally Likely outcomes". https://www3.nd.edu/~dgalvin1/10120/10120_S16/Topic09_7p2_Galvin.pdf. 
  17. "Chapter 3: Probability". https://www.coconino.edu/resources/files/pdfs/academics/arts-and-sciences/MAT142/Chapter_3_Probability.pdf. 
  18. Yates, Daniel S.; Moore, David S.; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. http://bcs.whfreeman.com/yates2e/. 
  19. "Probability: Rolling Two Dice". http://www.math.hawaii.edu/~ramsey/Probability/TwoDice.html. 
  20. "Simple Random Samples". https://web.ma.utexas.edu/users/mks/statmistakes/SRS.html. 
