Minimal important difference

From HandWiki

The minimal important difference (MID) or minimal clinically important difference (MCID) is the smallest change in a treatment outcome that an individual patient would identify as important and which would indicate a change in the patient's management.[1][2]

Purpose

Over the years, great strides have been made in reporting what really matters in clinical research. Historically, a clinical researcher might simply report: "in my own experience treatment X does not do well for condition Y".[3][4] The P value cut-off point of 0.05, introduced by R.A. Fisher, led to study results being described as either statistically significant or non-significant.[5] Although the p-value made research outcomes more objective, using it as a rigid cut-off point can have potentially serious consequences: (i) clinically important differences observed in studies may be statistically non-significant (a type II error, or false-negative result) and therefore be unfairly ignored; this is often the result of studying a small number of subjects; (ii) even the smallest difference in measurements can be shown to be statistically significant by increasing the number of subjects in a study, yet such a small difference may be irrelevant (i.e., of no clinical importance) to patients or clinicians. Thus, statistical significance does not necessarily imply clinical importance.

Over the years, clinicians and researchers have moved away from physical and radiological endpoints towards patient-reported outcomes. However, using patient-reported outcomes does not solve the problem of small differences that are statistically significant but possibly clinically irrelevant.[6]

To study clinical importance, the concept of the minimal clinically important difference (MCID) was proposed by Jaeschke et al. in 1989.[7] The MCID is the smallest change in an outcome that a patient would identify as important. It therefore offers a threshold above which the outcome is experienced as relevant by the patient, avoiding the problem of mere statistical significance. Schünemann and Guyatt recommended the term minimally important difference (MID) to remove the "focus on 'clinical' interpretations" (2005, p. 594).

Methods of determining the MID

There are several techniques to calculate the MID. They fall into three categories: distribution-based methods, anchor-based methods and the Delphi method.

Distribution-based methods

These techniques are derived from statistical measures of spread of data: the standard deviation, the standard error of measurement and the effect size, usually expressed as a standardized mean difference (SMD; also known as Cohen's d in psychology).

  1. Using the one-half standard deviation benchmark of an outcome measure entails that patients improving by more than one-half of the outcome score's standard deviation have achieved a minimal clinically important difference.[8]
  2. The standard error of measurement is the variation in scores due to unreliability of the scale or measure used. Thus a change smaller than the standard error of measurement is likely to be the result of measurement error rather than a true observed change. Patients achieving a difference in outcome score of at least one standard error of measurement would have achieved a minimal clinically important difference.[9]
  3. The effect size is a measure obtained by dividing the difference between the means of the baseline and post-treatment scores by the SD of the baseline scores. An effect-size cut-off point can be used to define the MID in the same way as the one-half standard deviation and the standard error of measurement.[9]
  4. Item response theory (IRT) also can create an estimate of MID using judges who respond to clinical vignettes illustrating different scenarios.[10]
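The first three distribution-based benchmarks above can be sketched in a few lines of Python (a minimal illustration; the function names and example values are assumptions, not part of any standard library):

```python
import math

def distribution_based_mids(baseline_scores, reliability):
    """Estimate distribution-based MID benchmarks from baseline scores.

    reliability: the instrument's reliability coefficient
    (e.g. test-retest reliability or Cronbach's alpha).
    """
    n = len(baseline_scores)
    mean = sum(baseline_scores) / n
    # Sample standard deviation of the baseline scores
    sd = math.sqrt(sum((x - mean) ** 2 for x in baseline_scores) / (n - 1))
    return {
        "half_sd": 0.5 * sd,                     # one-half SD benchmark
        "sem": sd * math.sqrt(1 - reliability),  # standard error of measurement
    }

def effect_size(baseline_scores, post_scores):
    """Effect size: mean change divided by the SD of the baseline scores."""
    n = len(baseline_scores)
    mean_base = sum(baseline_scores) / n
    mean_post = sum(post_scores) / len(post_scores)
    sd_base = math.sqrt(
        sum((x - mean_base) ** 2 for x in baseline_scores) / (n - 1)
    )
    return (mean_post - mean_base) / sd_base
```

Note that with a reliability of 0.75 the SEM works out to exactly one-half SD, the coincidence discussed under Shortcomings below.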

Anchor-based methods

The anchor-based method compares changes in scores against an "anchor" as a reference. An anchor establishes whether the patient is better after treatment compared to baseline according to the patient's own experience.

A popular anchoring method is to ask the patient at a specific point during treatment: "Do you feel that the treatment improved things for you?".[11] Answers to anchor questions can vary from a simple "yes" or "no" to ranked options, e.g., "much better", "slightly better", "about the same", "somewhat worse" and "much worse". The difference between the average change scores of patients who answered "better" and of those who answered "about the same" provides the benchmark for the anchor method.
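This "better" versus "about the same" comparison can be sketched as follows (a hypothetical helper, assuming a change score and an anchor answer have been collected for each patient):

```python
def anchor_based_mid(change_scores, anchor_answers):
    """MID estimated as the difference between the mean change score of
    patients answering "better" and of those answering "about the same"."""
    better = [c for c, a in zip(change_scores, anchor_answers)
              if a == "better"]
    same = [c for c, a in zip(change_scores, anchor_answers)
            if a == "about the same"]
    return sum(better) / len(better) - sum(same) / len(same)
```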

An interesting approach to the anchor-based method is the establishment of an anchor before treatment. The patient is asked what minimal outcome would be necessary to undergo the proposed treatment. This method allows for more personal variation, as one patient might require more pain relief, while another strives towards more functional improvement.[12]

Different anchor questions and different numbers of possible answers have been proposed.[12][13] Currently there is no consensus on the single right question, nor on the best set of answers.

Delphi method

The Delphi method relies on a panel of experts who reach consensus regarding the MID. The expert panel receives information about the results of a trial; each member reviews it separately and provides a best estimate of the MID. The responses are averaged, and this summary is sent back with an invitation to revise the estimates. The process continues until consensus is achieved.[14][15][16]
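The iterative averaging can be caricatured in code. This is a toy model only: the assumption that each expert moves half-way toward the panel mean every round, and the tolerance used to declare consensus, are purely illustrative.

```python
def delphi_consensus(estimates, pull=0.5, tolerance=0.1, max_rounds=50):
    """Toy Delphi process: each round, every expert sees the panel mean
    and moves part-way (`pull`) toward it; stop once the spread of the
    estimates falls within `tolerance` (consensus)."""
    rounds = 0
    while max(estimates) - min(estimates) > tolerance and rounds < max_rounds:
        panel_mean = sum(estimates) / len(estimates)
        estimates = [e + pull * (panel_mean - e) for e in estimates]
        rounds += 1
    return sum(estimates) / len(estimates), rounds
```

In practice, of course, experts revise judgments based on reasoning and the anonymized feedback, not by mechanical averaging.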

Shortcomings

The anchor-based method is not suitable for conditions in which most patients improve and few remain unchanged. High post-treatment satisfaction results in insufficient discriminative ability for the calculation of a MID.[4][17] A possible solution to this problem is a variation based on the calculation of a 'substantial clinical benefit' score. This calculation is not based on comparing patients who improve with those who do not, but on comparing patients who improve with those who improve a lot.[13]

MID calculation is of limited additional value for treatments that show effects only in the long run. For example, tightly regulated blood glucose in diabetes might cause discomfort because of the accompanying hypoglycemia (low blood sugar), and perceived quality of life might actually decrease; however, regulation reduces severe long-term complications and is therefore still warranted. The calculated MID also varies widely depending on the method used;[18][19] currently there is no preferred method of establishing the MID.

There is no consensus regarding the optimal technique, but distribution-based methods have been criticized. For example, use of the standard error of measurement (SEM) is based on anecdotal observations that it is approximately equal to 1/2 SD when the reliability is 0.75. But Revicki et al. question why 1 SEM should "have anything to do with the MID? The SEM is estimated by the product of the SD and the square root of 1-reliability of a measure. The SEM is used to set the confidence interval (CI) around an individual score, that is, the observed score plus or minus 1.96 SEMS constitutes the 95% CI. In fact, the reliable change index proposed early by Jacobson and Truax [12] is based on defining change using the statistical convention of exceeding 2 standard errors" (p. 106).[20]
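The quantities in this critique can be made concrete in a short sketch. The function names are assumptions; `reliable_change_index` follows the Jacobson-Truax convention of dividing the change by the standard error of the difference between two scores, which is sqrt(2) times the SEM.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def score_ci_95(observed, sd, reliability):
    """95% confidence interval around an individual observed score,
    i.e. the observed score plus or minus 1.96 SEMs."""
    margin = 1.96 * sem(sd, reliability)
    return observed - margin, observed + margin

def reliable_change_index(change, sd, reliability):
    """Jacobson-Truax reliable change index: change divided by the
    standard error of the difference between two scores."""
    return change / (math.sqrt(2) * sem(sd, reliability))
```

With a reliability of 0.75 and an SD of 10, the SEM is exactly 5, i.e. one-half SD: precisely the coincidence Revicki et al. question.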

Caveats

The MID varies across diseases and outcome instruments, but it does not depend on the treatment method. Therefore, two different treatments for a similar disease can be compared using the same MID if the outcome measurement instrument is the same. The MID may also differ depending on the baseline level,[21] and it appears to differ over time after treatment for the same disease.[4]

References

  1. "Commentary--goodbye M(C)ID! Hello MID, where do you come from?". Health Services Research 40 (2): 593–7. April 2005. doi:10.1111/j.1475-6773.2005.0k375.x. PMID 15762909. 
  2. "Clinimetrics corner: a closer look at the minimal clinically important difference (MCID)". The Journal of Manual & Manipulative Therapy 20 (3): 160–6. August 2012. doi:10.1179/2042618612Y.0000000001. PMID 23904756. 
  3. "Ruptures of the rotator cuff". Clinical Orthopaedics 3: 92–8. 1954. PMID 13161170. 
  4. 4.0 4.1 4.2 "Editor's spotlight/take 5: Comparative responsiveness and minimal clinically important differences for idiopathic ulnar impaction syndrome (DOI 10.1007/s11999-013-2843-8)". Clinical Orthopaedics and Related Research 471 (5): 1403–5. May 2013. doi:10.1007/s11999-013-2886-x. PMID 23460486. 
  5. "Sifting the evidence-what's wrong with significance tests?". BMJ 322 (7280): 226–31. January 2001. doi:10.1136/bmj.322.7280.226. PMID 11159626. 
  6. "Minimal clinically important differences in randomised clinical trials on pain management after total hip and knee arthroplasty: a systematic review". British Journal of Anaesthesia 126 (5): 1029–1037. May 2021. doi:10.1016/j.bja.2021.01.021. PMID 33678402. 
  7. "Measurement of health status. Ascertaining the minimal clinically important difference". Controlled Clinical Trials 10 (4): 407–15. December 1989. doi:10.1016/0197-2456(89)90005-6. PMID 2691207. 
  8. "Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation". Medical Care 41 (5): 582–92. May 2003. doi:10.1097/01.MLR.0000062554.74615.4C. PMID 12719681. 
  9. 9.0 9.1 "Understanding the minimum clinically important difference: a review of concepts and methods". The Spine Journal 7 (5): 541–6. 2007. doi:10.1016/j.spinee.2007.01.008. PMID 17448732. 
  10. "Estimating minimally important difference (MID) in PROMIS pediatric measures using the scale-judgment method". Quality of Life Research 25 (1): 13–23. January 2016. doi:10.1007/s11136-015-1058-8. PMID 26118768. 
  11. "Comparative responsiveness and minimal clinically important differences for idiopathic ulnar impaction syndrome". Clinical Orthopaedics and Related Research 471 (5): 1406–11. May 2013. doi:10.1007/s11999-013-2843-8. PMID 23404422. 
  12. 12.0 12.1 "Minimum acceptable outcomes after lumbar spinal fusion". The Spine Journal 10 (4): 313–20. April 2010. doi:10.1016/j.spinee.2010.02.001. PMID 20362247. 
  13. 13.0 13.1 "Defining substantial clinical benefit following lumbar spine arthrodesis". The Journal of Bone and Joint Surgery. American Volume 90 (9): 1839–47. September 2008. doi:10.2106/JBJS.G.01095. PMID 18762642. 
  14. "Osteoarthritis antirheumatic drug trials. III. Setting the delta for clinical trials--results of a consensus development (Delphi) exercise". The Journal of Rheumatology 19 (3): 451–7. March 1992. PMID 1578462. 
  15. "Rheumatoid arthritis antirheumatic drug trials. III. Setting the delta for clinical trials of antirheumatic drugs--results of a consensus development (Delphi) exercise". The Journal of Rheumatology 18 (12): 1908–15. December 1991. PMID 1795330. 
  16. "Ankylosing spondylitis antirheumatic drug trials. III. Setting the delta for clinical trials of antirheumatic drugs--results of a consensus development (Delphi) exercise". The Journal of Rheumatology 18 (11): 1716–22. November 1991. PMID 1787494. 
  17. "The minimal clinically important difference of the Michigan hand outcomes questionnaire". The Journal of Hand Surgery 34 (3): 509–14. March 2009. doi:10.1016/j.jhsa.2008.11.001. PMID 19258150. 
  18. "Determination of minimum clinically important difference (MCID) in pain, disability, and quality of life after revision fusion for symptomatic pseudoarthrosis". The Spine Journal 12 (12): 1122–8. December 2012. doi:10.1016/j.spinee.2012.10.006. PMID 23158968. 
  19. "Determining minimally important changes in generic and disease-specific health-related quality of life questionnaires in clinical trials of rheumatoid arthritis". Arthritis and Rheumatism 43 (7): 1478–87. July 2000. doi:10.1002/1529-0131(200007)43:7<1478::AID-ANR10>3.0.CO;2-M. PMID 10902749. 
  20. "Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes". Journal of Clinical Epidemiology 61 (2): 102–9. February 2008. doi:10.1016/j.jclinepi.2007.03.012. PMID 18177782. 
  21. Hays, R. D.; Woolley, J. M. (2000). "The concept of clinically meaningful difference in health-related quality-of-life research: How meaningful is it?". PharmacoEconomics 18: 419–423.
