Routine health outcomes measurement

From HandWiki

Definition of health outcomes

Routine health outcomes measurement is the process of examining whether or not interventions are associated with change (for better or worse) in the patient's health status. This change can be measured directly (e.g. by rating scales completed by the clinician or patient) or inferred from a proxy measurement (e.g. a blood test result). Interventions can be direct (e.g. medication) or indirect (e.g. a change in the process of health care, such as integrated care by different specialists). Some definitions of health outcomes measurement stipulate that the population or group has to be defined, since different outcomes are expected for different people and conditions. A clear example is that of Australia's New South Wales Health Department, which defines a health outcome as

"change in the health of an individual, group of people or population which is attributable to an intervention or series of interventions"[1]

In its purest form, measurement of health outcomes implies identifying the context (diagnosis, demographics etc.), measuring health status before an intervention is carried out, measuring the intervention, measuring health status again and then plausibly relating the change to the intervention.
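The elements described above can be sketched as a simple per-episode record: context, a pre-intervention measure, the intervention, and a post-intervention measure, from which change is computed. This is only an illustrative sketch; the field names and scoring convention are hypothetical and not taken from any real outcomes system.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    diagnosis: str        # context
    age: int              # context (demographics)
    intervention: str     # what was done
    score_before: float   # health status before the intervention
    score_after: float    # health status after the intervention

    def change(self) -> float:
        # Positive = improvement, assuming higher scores mean better health.
        return self.score_after - self.score_before

episode = Episode("depression", 42, "CBT", score_before=10.0, score_after=18.0)
print(episode.change())  # 8.0
```

Relating that change plausibly to the intervention, rather than to anything else recorded in the context, is the hard part that the rest of this article discusses.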

Health outcomes measurement and evidence-based practice

Evidence-based practice describes a healthcare system in which evidence from published studies, often mediated by systematic reviews or processed into medical guidelines, is incorporated into clinical practice. The flow of information is one way: from research to practice. However, many interventions by health systems and treatments by their staff have never been, or cannot easily be, subject to research study. Of those that have, much of the evidence is graded as low quality.[2] All health staff intervene with their patients on the basis of both information from research evidence and their own experience. The latter is personal, subjective and strongly influenced by stark instances which may not be representative.[3] However, when information on these interventions and their outcomes is collected systematically it becomes "practice-based evidence"[4] and can complement that from academic research. To date, such initiatives have been largely confined to primary care[5] and rheumatology.[6] An example of practice-based evidence is found in the evaluation of a simple intervention like a medication. Efficacy is the degree to which it can improve patients in randomised controlled trials, the epitome of evidence-based practice. Effectiveness is the degree to which the same drug improves patients in the uncontrolled hurly-burly of everyday practice; data which are much more difficult to come by. Routine health outcomes measurement has the potential to provide such evidence.

The information required for practice-based evidence is of three sorts: context (e.g. case mix), intervention (treatment) and outcomes (change).[7] Some mental health services are developing a practice-based evidence culture with the routine measurement of clinical outcomes[8][9] and creating behavioral health outcomes management programs.

History of routine health outcomes measurement

Florence Nightingale

An early example of a routine clinical outcomes system was set up by Florence Nightingale in the Crimean War. The outcome under study was death. The context was the season and the cause of death: wounds, infection or any other cause. The interventions were nursing and administrative. She arrived just before the barracks at Scutari received the first soldiers wounded at the Battle of Inkerman in November 1854, and mortality was already high. She was appalled at the disorganisation and standards of hygiene and set about cleaning and reorganisation. However, mortality continued to rise. It was only after the sewers were cleared and ventilation improved in March 1856 that mortality fell. On return to the UK she reflected on these data and produced new sorts of chart (she had trained in mathematics rather than "worsted work and practising quadrilles") to show that it was most likely that these excess deaths were caused by living conditions rather than, as she had initially believed, poor nutrition. She also showed that soldiers in peacetime had an excess mortality over other young men, presumably from the same causes. Her reputation was damaged, however, when she and William Farr, the Registrar General, collaborated in producing a table which appeared to show a mortality in London hospitals of over 90%, compared with less than 13% in Margate. They had made an elementary error in the denominator; the true rate for London hospitals was actually 9% for admitted patients.[10] She was never too keen on hospital mortality figures as outcome measures anyway:

"If the function of a hospital were to kill the sick, statistical comparisons of this nature would be admissible. As, however, its proper function is to restore the sick to health as speedily as possible, the elements which really give information as to whether this is done or not, are those which show the proportion of sick restored to health, and the average time which has been required for this object…"[11]

Here she presaged the next key figure in the development of routine outcomes measurement.
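The Nightingale–Farr denominator error can be made concrete with arithmetic. The figures below are hypothetical, chosen only to mimic the effect described above: a busy hospital with short stays and high patient turnover looks catastrophically dangerous when deaths are divided by a snapshot of occupied beds rather than by the number of patients actually treated.

```python
# Hypothetical figures illustrating the denominator error:
deaths = 900
patients_treated = 10_000    # correct denominator: all admitted patients
avg_occupied_beds = 1_000    # wrong denominator: a snapshot of bed occupancy

wrong_rate = deaths / avg_occupied_beds   # "90% mortality"
true_rate = deaths / patients_treated     # 9% mortality

print(f"{wrong_rate:.0%} vs {true_rate:.0%}")  # 90% vs 9%
```

The tenfold gap between the two rates mirrors the gap between the published figure of over 90% and the true rate of about 9% for London hospitals.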

Ernest Amory Codman

Codman was a Boston orthopaedic surgeon who developed the "end result idea". At its core was

"The common sense notion that every hospital should follow every patient it treats, long enough to determine whether or not the treatment has been successful, and then to inquire 'if not, why not?' with a view of preventing similar failures in the future."[12]

He is said to have first articulated this idea to his gynaecologist colleague Franklin H. Martin of Chicago, who later founded the American College of Surgeons, on a hansom cab journey from Frimley Park, Surrey, UK, in the summer of 1910. He put the idea into practice at Massachusetts General Hospital.

"Each patient who entered the operating room was provided with a 5-inch by 8-inch card on which the operating surgeon filled out the details of the case before and after surgery. This card was brought up 1 year later, the patient was examined, and the previous years' treatment was then evaluated based on the patient's condition. This system enabled the hospital and the public to evaluate the results of treatments and to provide comparisons among individual surgeons and different hospitals"[13]

He was able to demonstrate his own patients' outcomes and those of some of his colleagues but, unaccountably, this system was not embraced by his colleagues. Frustrated by their resistance, he provoked an uproar at a public meeting and thus fell dramatically from favour in the hospital and at Harvard, where he held a teaching post. He was only able to realise the idea fully in his own small, struggling private hospital,[14] although some colleagues continued with it at the larger hospitals. He died in 1940 disappointed that his dream of publicly available outcomes data was not even on the horizon, but hoping that posterity would vindicate him.

Avedis Donabedian

In a classic 1966 paper, Avedis Donabedian, the renowned public health pioneer, described three distinct aspects of quality in health care: outcome, process and structure (in that order in the original paper).[15] He had misgivings about solely using outcomes as a measure of quality, but concluded that:

"Outcomes, by and large, remain the ultimate validation of the effectiveness and quality of medical care."[15]

He may have muddied the waters a little when discussing patient satisfaction with treatment (usually regarded as a measure of process) as an outcome. More importantly, his three-aspect model has since been subverted into what is called the "structure-process-outcomes" model, a directional, putatively causal chain that he never originally described. This subversion has been the justification for repeated attempts to improve process, and thus outcomes, by reorganising the structure of health care, wittily described by Oxman et al.[16] Donabedian himself cautioned that outcomes measurement cannot distinguish efficacy from effectiveness (outcomes may be poor because the right treatment is badly applied or because the wrong treatment is carried out well); that outcomes measurement must always take context into account (factors other than the intervention may be very important in determining outcomes); and that the most important outcomes may be the least easy to measure, so that easily measured but irrelevant outcomes are chosen instead (e.g. mortality instead of disability).

Mortality as an outcome measure

Perhaps because of instances of scandalously poor care (for example at the Bristol Royal Infirmary, 1984–1995[17]), mortality data have become more and more openly available as a proxy for other health outcomes in hospitals,[18] and even for individual surgeons.[19] For many people, however, quality of life is a greater consideration, so factors such as physical symptoms, psychological, emotional and spiritual wellbeing, and information and support needs may take precedence. As an indicator of the quality and safety of health care institutions, therefore, mortality remains important, but for individuals it may not be the key goal.[20]

Principles of routine health outcomes measurement

  1. All three dimensions (context, intervention and outcomes) must be measured. Outcomes data cannot be understood without all three.
  2. Different perspectives on outcomes need to be acknowledged. For instance, patients, carers and clinical staff may have different views of which outcomes are important, how to measure them, and even which are desirable.[21]
  3. Prospective and repeated measurement of health status is superior to retrospective measurement of change, such as Clinical Global Impressions.[22] The latter relies on memory and may not be possible if the rater changes.
  4. The reliability and validity (in the statistical sense) of any measure of health status must be known, so that their impact on the assessment of health outcomes can be taken into account. In mental health services these values may be quite low, especially when measures are completed routinely by staff rather than by trained researchers, and when short measures that are feasible in everyday practice are used.
  5. Data collected must be fed back to those who collect them, to maximise data quality, reliability and validity.[23] Feedback should cover both content (e.g. the relationship of outcomes to context and interventions) and process (the data quality of all three dimensions).
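One common way to take measure reliability into account, as the fourth principle requires, is the Jacobson–Truax Reliable Change Index (RCI): observed change divided by the standard error of the difference between two measurements. The source does not prescribe this particular statistic, and the scores below are hypothetical; the sketch simply shows how the lower reliability typical of brief routine measures raises the bar for calling a change real.

```python
import math

def reliable_change_index(score_before: float, score_after: float,
                          sd: float, reliability: float) -> float:
    """Jacobson-Truax RCI: change divided by the standard error of the
    difference between two administrations of the same measure."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2) * sem             # SE of a difference score
    return (score_after - score_before) / s_diff

# With a highly reliable measure, a 5-point change exceeds the usual
# |RCI| > 1.96 threshold for reliable change...
print(reliable_change_index(10, 15, sd=4.0, reliability=0.9) > 1.96)  # True
# ...but with the lower reliability typical of brief routine measures it does not.
print(reliable_change_index(10, 15, sd=4.0, reliability=0.6) > 1.96)  # False
```

The same observed improvement is interpretable as real change on one measure and indistinguishable from measurement noise on another, which is why the reliability of routine measures cannot be ignored.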

Current status of routine health outcomes measurement

Why is routine health outcomes measurement so rare? Reports of routine health outcomes measurement can be found in many medical specialties and in many countries. However, the vast majority of these reports are by or about enthusiasts who have set up essentially local systems, with little connection to similar systems elsewhere, even down the street. To realise the full benefits of an outcomes measurement system, large-scale implementation is needed, using standardised methods and capturing data from a high proportion of suitable healthcare episodes. To analyse change in health status (health outcomes) we also need data on context, as recommended by Donabedian[15] and others, and data on the interventions used, all collected in a standardised manner. Such large-scale systems are at present evident only in the field of mental health services, and are well developed in only two locations: Australia[8] and Ohio,[9] although in both of these the data on context and interventions are much less prominent than the data on outcomes. The major challenge for health outcomes measurement is now the development of usable and discriminatory categories of interventions and treatments, especially in the field of mental health.

Benefits of routine health outcomes measurement

Aspirations include the following benefits:

  • Aggregated data
    • Can form the basis of effectiveness data that complement efficacy data. This could show the actual benefits in everyday clinical practice of interventions previously tested in randomised controlled trials, or the benefits of interventions that have not been, or cannot be, tested in randomised controlled trials and systematic reviews
    • Can identify hazardous interventions that are only apparent in large datasets
    • Can be used to show differences between clinical services with similar case mix and thus stimulate search for testable hypotheses that might explain these differences and lead to improvements in treatment or management
    • Can be used to compare the outcomes of treatment and care from different perspectives, e.g. clinical staff and patient
  • Data about individual patients
    • Can be used to track changes during treatment over periods of time too long to be amenable to memory by an individual patient or clinician, and especially when more than one clinician or team is involved[24]
    • Can, especially when different perspectives are available, be used in discussions between patients, clinicians and carers about progress[25]
    • Can be used to speed up and sharpen clinical meetings[26]

Risks of routine health outcomes measurement

  1. If attempts are made to purchase or commission health services using outcomes data, bias may be introduced that will negate the benefits, especially if the service provider produces the outcomes measurements. See Goodhart's law.
  2. Inadequate attention may be paid to the analysis of context data, such as case mix, leading to dubious conclusions.[27]
  3. If data are not fed back to clinicians participating then data quality (and quantity) is likely to fall below the thresholds necessary for reasonable interpretation.[28]
  4. If only a small proportion of episodes of health care have completed outcomes data, then these data may not be representative of all episodes, although the threshold for this effect will vary from service to service, measure to measure.
  5. Some risks of bias, widely foretold,[29] are proving to be insubstantial, but still need guarding against.
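The second risk above, ignoring case mix, can produce outright reversals of the truth. The hypothetical figures below show Simpson's paradox: hospital B has a lower death rate than hospital A within every severity group, yet a higher death rate overall, purely because B treats a larger share of severe cases. None of these numbers come from the source; they are constructed only to illustrate the effect.

```python
# (hospital, severity) -> (deaths, patients); hypothetical figures
cases = {
    ("A", "mild"):   (10, 800),
    ("A", "severe"): (60, 200),
    ("B", "mild"):   (2, 200),
    ("B", "severe"): (190, 800),
}

def rate(deaths, patients):
    return deaths / patients

# Within each severity group, B is safer than A...
assert rate(*cases[("B", "mild")]) < rate(*cases[("A", "mild")])
assert rate(*cases[("B", "severe")]) < rate(*cases[("A", "severe")])

# ...yet B's crude overall rate is far worse, because of its case mix.
for hosp in ("A", "B"):
    deaths = sum(d for (h, _), (d, n) in cases.items() if h == hosp)
    patients = sum(n for (h, _), (d, n) in cases.items() if h == hosp)
    print(hosp, f"overall mortality {rate(deaths, patients):.1%}")
```

A league table built on the crude rates would rank B as the dangerous hospital, which is exactly backwards; this is why context data and case-mix adjustment are not optional extras.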

Practical issues in routine health outcomes measurement

Experience suggests that the following factors are necessary for routine health outcomes measurement:

  1. an electronic patient record system with easy extraction from a data warehouse. Entry of outcomes data can then become part of the everyday entry of clinical data. Without this, aggregate data analysis and feedback are very difficult indeed.
  2. resources and staff time set aside for training and receiving feedback
  3. resources and personnel to extract, analyse and proactively present outcomes, casemix and, where available, intervention data to clinical teams
  4. regular reports on data quality as part of a performance management process by senior managers can supplement, but not replace, feedback

Shared decision making[30]

Outcome measurement is therefore an important but neglected tool in improving the quality of healthcare provision. It has been argued that it is vital for the patient to be meaningfully involved in decisions about whether or not to embark on an intervention (e.g. a test, an operation, a medicine), especially if the decision is fateful (i.e. cannot be reversed).[31] Although the degree to which patients have been involved in shared decision making is a process measure rather than an outcome measure, it is clearly important.[32]

References

  1. Frommer, Michael; Rubin, George; Lyle, David (1992). "The NSW Health Outcomes program". New South Wales Public Health Bulletin 3 (12): 135. doi:10.1071/NB92067. 
  2. "What is "quality of evidence" and why is it important to clinicians?". BMJ 336 (7651): 995–8. May 2008. doi:10.1136/bmj.39490.551019.BE. PMID 18456631. 
  3. Malterud K (August 2001). "The art and science of clinical knowledge: evidence beyond measures and numbers". Lancet 358 (9279): 397–400. doi:10.1016/S0140-6736(01)05548-9. PMID 11502338. 
  4. "Practice-based evidence study design for comparative effectiveness research". Medical Care 45 (10 Supl 2): S50–7. October 2007. doi:10.1097/MLR.0b013e318070c07b. PMID 17909384. 
  5. Ryan JG (1 March 2004). "Practice-Based Research Networking for Growing the Evidence to Substantiate Primary Care Medicine". Annals of Family Medicine 2 (2): 180–1. PMID 15083861. PMC 1466650. http://www.annfammed.org/cgi/pmidlookup?view=long&pmid=15083861. 
  6. "Evidence-based practice and practice-based evidence". Nature Clinical Practice Rheumatology 2 (3): 114–5. March 2006. doi:10.1038/ncprheum0131. PMID 16932666. 
  7. Pawson R, Tilley N. Realistic Evaluation. London: Sage Publications Ltd; 1997
  8. "Introducing the routine use of outcomes measurement to mental health services". Australian Health Review 24 (1): 43–50. 2001. doi:10.1071/AH010043. PMID 11357741. 
  9. Ohio Mental Health Datamart
  10. Iezzoni LI (15 June 1996). "100 apples divided by 15 red herrings: a cautionary tale from the mid-19th century on comparing hospital mortality rates". Annals of Internal Medicine 124 (12): 1079–85. doi:10.7326/0003-4819-124-12-199606150-00009. PMID 8633823. 
  11. Nightingale F. Notes on Hospitals. 3rd ed. London: Longman, Green, Longman, Roberts, and Green; 1863
  12. Codman EA. The Shoulder: Rupture of the supraspinatus tendon and other lesions in or about the subacromial bursa. Privately published, 1934. Reprinted 1965, Malabar, Florida: Krieger.
  13. "Historical perspective. Ernest Amory Codman, 1869-1940. A pioneer of evidence-based medicine: the end result idea". Spine 23 (5): 629–33. March 1998. doi:10.1097/00007632-199803010-00019. PMID 9530796. 
  14. Codman EA. A study in hospital efficiency: as demonstrated by the case report of the first five years of a private hospital. Published privately, 1917. Reprinted 1996, Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.
  15. Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 1966;44:166–206
  16. "A surrealistic mega-analysis of redisorganization theories". Journal of the Royal Society of Medicine 98 (12): 563–8. December 2005. doi:10.1177/014107680509801223. PMID 16319441. 
  17. The Bristol Royal Infirmary Inquiry. http://www.bristol-inquiry.org.uk/. 
  18. "St George's Healthcare". http://www.stgeorges.nhs.uk/mortalityintro.asp. 
  19. Bridgewater B; Adult Cardiac Surgeons of North West England (March 2005). "Mortality data in adult cardiac surgery for named surgeons: retrospective examination of prospectively collected data on coronary artery surgery and aortic valve replacement". BMJ 330 (7490): 506–10. doi:10.1136/bmj.330.7490.506. PMID 15746131. 
  20. Murtagh, Fliss E. M.; McCrone, Paul; Higginson, Irene J.; Dzingina, Mendwas (2017-06-01). "Development of a Patient-Reported Palliative Care-Specific Health Classification System: The POS-E". The Patient: Patient-Centered Outcomes Research 10 (3): 353–365. doi:10.1007/s40271-017-0224-1. ISSN 1178-1661. PMID 28271387. 
  21. Long, A; Jefferson, J (1999). "The significance of outcomes within European health sector reforms: towards the development of an outcomes culture". International Journal of Public Administration 22 (3): 385–424. doi:10.1080/01900699908525389. 
  22. NIMH Early Clinical Drug Evaluation PRB. Clinical global impressions. In: Guy W, editor. ECDEU Assessment manual for psychopharmacology, revised. US Department of Health and Human Services Public Health Service, Alcohol Drug Abuse and Mental Health Administration, NIMH Psychopharmacology Research Branch; 1976. p. 217-22
  23. "Does Feedback Improve the Quality of Computerized Medical Records in Primary Care?". Journal of the American Medical Informatics Association 9 (4): 395–401. 2002. doi:10.1197/jamia.M1023. PMID 12087120. 
  24. Keogh, Bruce; Jones, Mark; Hooper, Tim; Au, John; Fabri, Brian M.; Grotte, Geir; Brooks, Nicholas; Grayson, Antony D. et al. (2007-06-01). "Has the publication of cardiac surgery outcome data been associated with changes in practice in northwest England: an analysis of 25 730 patients undergoing CABG surgery under 30 surgeons over eight years". Heart 93 (6): 744–748. doi:10.1136/hrt.2006.106393. ISSN 1468-201X. PMID 17237128. 
  25. Stewart M (April 2009). "Service user and significant other versions of the Health of the Nation Outcome Scales". Australasian Psychiatry 17 (2): 156–63. doi:10.1080/10398560802596116. PMID 19296275. 
  26. Stewart M. Making the HoNOS(CA) clinically useful: A strategy for making the HoNOS, HoNOSCA, and HoNOS65+ useful to the clinical team. 2nd Australasian Mental Health Outcomes Conference; 2008
  27. Nicholl, Jon; Brown, Celia A.; Lilford, Richard J. (2007-09-27). "Use of process measures to monitor the quality of clinical practice". BMJ 335 (7621): 648–650. doi:10.1136/bmj.39317.641296.AD. ISSN 1468-5833. PMID 17901516. 
  28. Turner-Stokes, Lynne; Williams, Heather; Sephton, Keith; Rose, Hilary; Harris, Sarah; Thu, Aung (November 2012). "Engaging the hearts and minds of clinicians in outcome measurement – the UK rehabilitation outcomes collaborative approach". Disability and Rehabilitation 34 (22): 1871–1879. doi:10.3109/09638288.2012.670033. ISSN 0963-8288. PMID 22506959. 
  29. "Routine outcome measurement by mental health-care providers: is it worth doing?". Lancet 360 (9346): 1689–90. November 2002. doi:10.1016/S0140-6736(02)11610-2. PMID 12457807. 
  30. "Shared decision-making in medicine", Wikipedia, 2018-11-19, https://en.wikipedia.org/w/index.php?title=Shared_decision-making_in_medicine&oldid=869532527, retrieved 2019-01-14 
  31. "A new paradigm for better value health care". https://www.kingsfund.org.uk/sites/default/files/Muir-Gray.pdf. 
  32. Elwyn, Glyn; Frosch, Dominick; Thomson, Richard; Joseph-Williams, Natalie; Lloyd, Amy; Kinnersley, Paul; Cording, Emma; Tomson, Dave et al. (October 2012). "Shared Decision Making: A Model for Clinical Practice". Journal of General Internal Medicine 27 (10): 1361–1367. doi:10.1007/s11606-012-2077-6. ISSN 0884-8734. PMID 22618581. 
