Statistical conclusion validity

Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or "reasonable". The concept originally concerned only whether the statistical conclusion about the relationship between the variables was correct, but there is now a movement toward "reasonable" conclusions that draw on quantitative, statistical, and qualitative data.[1] Fundamentally, two types of errors can occur: type I (finding a difference or correlation when none exists) and type II (finding no difference or correlation when one exists). Statistical conclusion validity concerns the qualities of a study that make these errors more likely, and it therefore involves ensuring adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.[2][3][4]
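
As a rough illustration (not from the original article), both error types can be estimated by Monte Carlo simulation. The sketch below, in Python with NumPy and SciPy, counts false positives when the null hypothesis is true and misses when a real effect exists; the sample size and the effect size of d = 0.5 are arbitrary choices for the example:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, trials = 0.05, 30, 2000

    # Type I: the null is true (identical distributions), so every
    # "significant" result here is a false positive.
    type1 = np.mean([
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
        for _ in range(trials)
    ])

    # Type II: a real difference exists (d = 0.5), so every
    # non-significant result here is a miss.
    type2 = np.mean([
        stats.ttest_ind(rng.normal(0.5, 1, n), rng.normal(0, 1, n)).pvalue >= alpha
        for _ in range(trials)
    ])

    print(f"type I rate  ~ {type1:.3f} (nominal alpha = {alpha})")
    print(f"type II rate ~ {type2:.3f}, so power ~ {1 - type2:.3f}")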

Common threats

The most common threats to statistical conclusion validity are:

Low statistical power

Power is the probability of correctly rejecting the null hypothesis when it is false (the complement of the type II error rate β, i.e. power = 1 − β). Experiments with low power have a higher probability of incorrectly failing to reject the null hypothesis, that is, of committing a type II error and concluding that there is no detectable effect when one exists (e.g., when there is real covariation between cause and effect). Low power occurs when the sample size of the study is too small given other factors (small effect sizes, large group variability, unreliable measures, etc.).
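
To make the dependence on sample size concrete, the following simulation sketch estimates the power of a two-sample t-test at several sample sizes; the small standardized effect (d = 0.3) is an assumption chosen for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, effect, trials = 0.05, 0.3, 2000  # a small standardized effect

    for n in (20, 50, 100, 200):  # participants per group
        hits = sum(
            stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue < alpha
            for _ in range(trials)
        )
        print(f"n = {n:3d} per group -> estimated power ~ {hits / trials:.2f}")

With these values, estimated power climbs from roughly 0.15 at 20 participants per group to roughly 0.85 at 200 per group.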

Violated assumptions of the test statistics

Most statistical tests (particularly inferential statistics) involve assumptions about the data that make the analysis suitable for testing a hypothesis. Violating these assumptions can lead to incorrect inferences about the cause–effect relationship. A test's robustness describes how insensitive its conclusions are to violations of its assumptions; depending on the violation, a test may become more or less likely to commit type I or type II errors.
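
For example, Student's t-test assumes equal group variances. When that assumption fails and the group sizes are also unequal, the actual type I error rate can drift far from the nominal α, whereas Welch's t-test, which drops the assumption, stays close to it. A minimal simulation sketch (the particular group sizes and standard deviations are arbitrary choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    alpha, trials = 0.05, 5000

    def false_positive_rate(equal_var):
        # The null is true, but the smaller group (n=10) has the larger SD (3 vs. 1).
        return np.mean([
            stats.ttest_ind(rng.normal(0, 3, 10),
                            rng.normal(0, 1, 100),
                            equal_var=equal_var).pvalue < alpha
            for _ in range(trials)
        ])

    print("Student's t (assumes equal variances):", false_positive_rate(True))
    print("Welch's t (no such assumption):", false_positive_rate(False))

Here the classic test is far too liberal, because the pooled variance is dominated by the large, low-variance group.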

Dredging and the error rate problem

Each hypothesis test carries a set risk of a type I error (the alpha level). If a researcher searches, or "dredges", through their data, testing many different hypotheses in pursuit of a significant effect, the effective type I error rate is inflated: the more tests run on the same data, the greater the chance of observing at least one type I error and making an incorrect inference about the existence of a relationship.
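
Under independence, this inflation is easy to quantify: the probability of at least one type I error across m tests at level α is 1 − (1 − α)^m, as the short calculation below shows. A common, if conservative, remedy is the Bonferroni correction, which tests each of the m hypotheses at level α/m.

    alpha = 0.05
    for m in (1, 5, 20, 100):
        fwer = 1 - (1 - alpha) ** m  # familywise error rate for m independent tests
        print(f"{m:3d} tests -> P(at least one type I error) = {fwer:.2f}")

With α = 0.05, twenty independent tests already give about a 64% chance of at least one false positive.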

Unreliability of measures

If the dependent and/or independent variables are not measured reliably (i.e., with substantial measurement error), incorrect conclusions can be drawn; in particular, measurement error attenuates observed correlations.
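
Classical test theory makes the effect precise: with measure reliabilities r_xx and r_yy, the observed correlation is attenuated to approximately r_true × sqrt(r_xx × r_yy) (Spearman's correction for attenuation, read in reverse). A simulation sketch in which the true correlation (0.6) and the reliabilities (0.7) are arbitrary values for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    n, r_true, rel = 100_000, 0.6, 0.7  # true correlation, per-measure reliability

    # Latent true scores with correlation r_true.
    x, y = rng.multivariate_normal([0, 0], [[1, r_true], [r_true, 1]], n).T

    # Add noise so each observed score has the chosen reliability
    # (reliability = true-score variance / observed-score variance).
    err_sd = np.sqrt(1 / rel - 1)
    x_obs = x + rng.normal(0, err_sd, n)
    y_obs = y + rng.normal(0, err_sd, n)

    print("true r:", r_true)
    print("observed r:", np.corrcoef(x_obs, y_obs)[0, 1])
    print("attenuation formula predicts:", r_true * np.sqrt(rel * rel))  # = r_true * rel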

Restriction of range

Restriction of range, such as floor and ceiling effects or selection effects, reduces the power of the experiment and increases the chance of a type II error.[5] This is because correlations are attenuated (weakened) by reduced variability (see, for example, the equation for the Pearson product-moment correlation coefficient, which uses score variance in its estimation).
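
A quick simulation makes the attenuation visible; the selection rule here (keeping only cases above the median of x, as a stand-in for a selection effect) and the true correlation are arbitrary choices for the example:

    import numpy as np

    rng = np.random.default_rng(4)
    n, r = 100_000, 0.5
    x, y = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], n).T

    print("full-range r:", np.corrcoef(x, y)[0, 1])

    # A selection cut-off: keep only cases above the median of x.
    keep = x > np.median(x)
    print("restricted-range r:", np.corrcoef(x[keep], y[keep])[0, 1])  # noticeably smaller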

Heterogeneity of the units under study

Greater heterogeneity among the units (e.g., individuals) participating in the study can also affect the interpretation of results by inflating the variance of scores or obscuring true relationships (see also sampling error). In particular, it can obscure possible interactions between the characteristics of the units and the cause–effect relationship.
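
One way to see the problem is through a hypothetical interaction: suppose a treatment helps one subgroup but not another. Pooling the heterogeneous units yields a diluted average effect and hides the moderation entirely; the subgroup effect sizes below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 500  # per cell

    def cohen_d(a, b):
        # Standardized mean difference with a pooled SD.
        sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return (a.mean() - b.mean()) / sd

    # Treatment helps subgroup A (d = 0.8) but does nothing for subgroup B.
    treat_a, ctrl_a = rng.normal(0.8, 1, n), rng.normal(0, 1, n)
    treat_b, ctrl_b = rng.normal(0.0, 1, n), rng.normal(0, 1, n)

    print("subgroup A effect:", cohen_d(treat_a, ctrl_a))  # ~0.8
    print("subgroup B effect:", cohen_d(treat_b, ctrl_b))  # ~0.0
    # Pooling dilutes the estimate and masks the unit-by-treatment interaction.
    print("pooled effect:", cohen_d(np.concatenate([treat_a, treat_b]),
                                    np.concatenate([ctrl_a, ctrl_b])))  # ~0.4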

Threats to internal validity

Any factor that threatens the internal validity of a research study may bias the results and thus compromise the validity of the statistical conclusions reached. Such threats include unreliable treatment implementation (lack of standardization) and failure to control for extraneous variables.

References

  1. Cozby, Paul C. (2009). Methods in behavioral research (10th ed.). Boston: McGraw-Hill Higher Education. 
  2. Cohen, R. J.; Swerdlik, M. E. (2004). Psychological testing and assessment (6th ed.). Sydney: McGraw-Hill.
  3. Cook, T. D.; Campbell, D. T.; Day, A. (1979). Quasi-experimentation: Design & analysis issues for field settings. Houghton Mifflin. https://archive.org/details/quasiexperimenta00cook.
  4. Shadish, W.; Cook, T. D.; Campbell, D. T. (2006). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin. 
  5. Sackett, P.R.; Lievens, F.; Berry, C.M.; Landers, R.N. (2007). "A Cautionary Note on the Effects of Range Restriction on Predictor Intercorrelations". Journal of Applied Psychology 92 (2): 538–544. doi:10.1037/0021-9010.92.2.538. PMID 17371098. https://www.researchgate.net/publication/6436643_A_cautionary_note_on_the_effects_of_range_restriction_on_predictor_intercorrelations/file/d912f50dd667aa5857.pdf.