Government plans for public reporting of performance data in health care: the case against
Author(s) -
Braithwaite Jeffrey,
Mannion Russell
Publication year - 2011
Publication title -
Medical Journal of Australia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.904
H-Index - 131
eISSN - 1326-5377
pISSN - 0025-729X
DOI - 10.5694/j.1326-5377.2011.tb03187.x
The idea that publicly reported performance measurement, like apple pie, parenthood and the national flag, deserves a warm, uncritical glow of universal support should be roundly rejected. Introducing any costly new initiative must always be analysed with a cool head for its risks and potential downside. Where is the Australian business case, or the international cost–benefit analysis or cost-effectiveness analysis, applied to Australia, that compels us to not only accept, but insist on its introduction? These have not been provided by proponents to date. It is a considerable logistical exercise to collect, process, analyse and distribute national data, and it is therefore very costly. Do the benefits outweigh the costs? Almost everyone will have their doubts. There is probably no clinician or manager who has not reported information to some system or other, and never heard about it again. Health care is renowned for “hoovering up” data, which get stuck in the system. Will a national performance measurement initiative fare any better? We prefer to maintain a healthy scepticism.

Most experts in favour of performance measurement argue that the twin aims are to improve accountability and enhance the system’s performance. But health care’s enormous complexity must be acknowledged. There are many stakeholders, a complicated multiplicity of services and products delivered through many public and private providers contributing millions of encounters across acute, aged, primary and tertiary sectors, with increasing emphasis on prevention, promotion and community care. There is heavy political involvement in health care, and much media attention, both of which distort priorities. Against this heady mix of systems changeability, key challenges are to determine what should be measured, and how, and from whose perspective, while ensuring fairness and objectivity. These questions have not been answered satisfactorily.

Even if they were, there are several other problems. One is how to solve technical issues about data quality and the effectiveness of measurement. For example, are the input data collected in the same way, are they gathered systematically by all providers or according to different coding and institutional rules, and what is the extent of gaming (ie, portraying or modifying data to one’s strategic advantage) by participants? How accurately do the data reflect actual performance, are apples being compared with apples and how good are the information systems or the data collection measures that produce the data? It is well known that it is extremely hard to measure performance because of difficulties in risk adjusting. Different risk adjustment models give rise to different outcomes.

Another issue is whether this initiative will have the desired outcomes. There are many examples of performance measurement systems which, even when they have solved some of the main technical problems, have failed to have meaningful effects. The reporting data are ignored, argued over or politicised, improvement efforts founder, or targets and indicators have perverse effects. Indeed, public performance measures are not neutral assessments of performance, but can alter behaviour in unintended and dysfunctional ways. All this gives rise to the potential for Type 1 errors (higher performing organisations or services are assessed as underperforming) or Type 2 errors (lower performing organisations or services are assessed as adequately performing).
Traditionally, the attention has been on avoiding Type 1 errors, but since high-profile inquiries at Bristol Royal Infirmary in the United Kingdom, King Edward Memorial Hospital in Western Australia, Campbelltown and Camden hospitals in New South Wales, and others, attention has shifted to avoiding Type 2 situations. It is very tricky to get the balance right. No performance measurement system internationally claims to have done so.

There are also timing and attribution issues that need to be resolved. Performance measurement systems are necessarily backward looking, as it takes time to assemble and disseminate data. By the time a problem is spotted, it may be too late to do anything about it. There is also the attribution problem: when good or bad performance is observed, is it causally related and assigned correctly? In systems where many things are changing simultaneously — and health care is the exemplar par excellence of this — it is an ongoing issue.

So what can contribute to success? Apart from a good data collection system and agreed definitions, targets and indicators (none of which we have at this point), we need excellent partnerships between sectors, agencies and health departments; leadership, not politics; incentives to participate; really good communication of outcomes; fair media reporting of results; and well designed mechanisms to improve performance. It remains doubtful whether these can be readily achieved in Australia.

All in all, performance measurement systems often have little impact on changing behaviour or improving performance. As that is the very point of them, and until the fundamental problems we describe are sorted out, we respond with a resounding no to the proposition.
