When drafts of Sustainability and Transformation Plans (STPs) for the 44 “footprints” of the NHS in England began to surface last year, a phrase caught my eye: “Championing the NHS Right Care approach to others within commissioner and provider organizations and building a consensus within the teams of those organizations”. This was the first bullet point for clinical leadership in the 30 June 2016 version of the Cheshire and Merseyside STP.
The leaked draft became a hot topic. The Liverpool Echo headlined a “£1bn black hole” in the Merseyside NHS, and exposed a £300k fee paid to the management consultants PwC for their work on the STP. When the STP was eventually published, it mentioned RightCare repeatedly but gave no details.
According to Liverpool Clinical Commissioning Group (CCG), “The NHS Five Year Forward View modelled the need for the health system to generate £22 bn of efficiencies by 2020/21. The NHS RightCare programme is a critical part of NHS England’s approach to driving allocative efficiency in order to meet this need.”
In other words, RightCare is one component in a massive efficiency savings plan. NHS England instructed CCGs to identify targets for improvement, using RightCare, and most CCGs appear to have done so.
In the opening line of a New Statesman article, NHS RightCare national director Prof. Matthew Cripps describes this NHS England programme as “a proven approach that delivers better patient outcomes and frees up funds for further innovation,” a phrase appearing on RightCare webpages and echoed in CCG documents.
What does “a proven approach” mean? It might mean a convincing majority view from peer-reviewed articles in mainstream journals, in favour of the methodology which CCGs are now instructed to follow. Despite the fanfare, endorsement by NHS England, and the involvement of Public Health England, no such articles turn up in PubMed.
For each CCG, RightCare assigns a fixed comparator group of ten “similar” CCGs, and then, for any particular outcome, finds the “Best 5” of those ten. The CCG is invited to measure itself against the Best 5 average, and potential savings and/or quality improvements are calculated by comparison with that average.
When it comes to lung cancer mortality for those below 75 years (Directly age-standardised against the European standard population), Liverpool’s Best 5 are Brighton, Bristol, Sheffield, Newcastle, and Stoke. The Cancer and Tumours Focus Pack suggests that 80 lives per year could be saved if Liverpool CCG matched their average mortality.
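The “lives saved” arithmetic behind such claims is simple: multiply the gap between the CCG’s rate and the Best 5 average by the population at risk. The sketch below illustrates it with invented numbers, since the actual Focus Pack rates and populations are not reproduced here; every figure is hypothetical.

```python
# Hypothetical illustration of RightCare's "Best 5" opportunity arithmetic.
# Rates are directly age-standardised deaths per 100,000 under-75s.
# ALL numbers below are invented for illustration, not the real Focus Pack values.

liverpool_rate = 60.0                          # hypothetical target-CCG rate
best5_rates = [38.0, 40.0, 42.0, 43.0, 45.0]   # hypothetical "Best 5" comparator rates
population_under_75 = 430_000                  # hypothetical under-75 population

best5_mean = sum(best5_rates) / len(best5_rates)

# "Opportunity": deaths avoided per year if the CCG matched the Best 5 average
lives = (liverpool_rate - best5_mean) * population_under_75 / 100_000
print(f"Claimed opportunity: about {lives:.0f} lives per year")
```

Note that the calculation assumes the entire gap is attributable to the CCG’s performance, which is precisely the assumption the next paragraph challenges.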
But Liverpool isn’t Brighton, Bristol, or even Sheffield. If the areas exchanged populations and Liverpool eradicated environmental and occupational hazards, lung cancer mortality might improve dramatically. In reality, the NHS in each area has to deal with incidence in the existing population, using proven diagnostic and treatment regimes, and implement preventative measures for the future. In fact, Liverpool has better one-year survival and smoking cessation rates than the Best 5 averages, but its high mortality is driven by incidence, which is largely outside the CCG’s control.
If there are good reasons to think a CCG is comparable to some others, then investigating unexpected observed differences may identify real issues which planners may be able to address. No one would object if quality improvements and potential savings are found, if they can be achieved without other adverse impacts. But what does “comparable” mean?
I argue that CCGs are only comparable for a particular outcome if a model which succeeds in fitting the national data for that outcome makes similar predictions for the CCG and its comparators. A model which doesn’t fit the data can’t be used. Finding a suitable model is a first step before making comparisons. If a good model predicts dissimilar values, and such differences actually occur, that can help confirm the model without indicating a problem. Compared with Liverpool, Brighton has much lower values of lung cancer incidence and health deprivation, key factors which influence lung cancer mortality; their different performance is predictable, and doesn’t signal an opportunity for Liverpool CCG.
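The model-first approach argued for here can be sketched as: fit a model of the outcome on plausible drivers across all CCGs, then treat as comparable only those areas whose *predicted* values are similar. The following is a minimal sketch on simulated data, not the author’s actual method; the linear form, the drivers (incidence and deprivation), and all numbers are assumptions for illustration.

```python
# Sketch: choose comparators by similarity of model-predicted mortality,
# rather than by a fixed demographic "Similar 10". All data simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # hypothetical number of CCGs
incidence = rng.uniform(40, 110, n)       # lung cancer incidence per 100,000 (simulated)
deprivation = rng.uniform(0, 1, n)        # health-deprivation score (simulated)
mortality = 5 + 0.8 * incidence + 20 * deprivation + rng.normal(0, 3, n)

# Fit mortality ~ incidence + deprivation by least squares on the national data
X = np.column_stack([np.ones(n), incidence, deprivation])
coef, *_ = np.linalg.lstsq(X, mortality, rcond=None)
predicted = X @ coef

# Comparators for CCG 0: the ten areas whose predicted mortality is closest
target = 0
dist = np.abs(predicted - predicted[target])
dist[target] = np.inf                     # exclude the target itself
comparators = np.argsort(dist)[:10]
```

On this view, only the residual (observed minus predicted) can signal a genuine opportunity; a gap the model already predicts, as between Liverpool and Brighton, signals nothing.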
RightCare doesn’t specify any models, and its method of identifying the “Similar 10” just once for all outcomes relies on a general measure of differences in standardised demographic variables and deprivation. For any particular outcome, these factors may be insufficient, some may be irrelevant, and the relevant factors may have different impacts. Using an appropriate model for lung cancer mortality leads to different comparator groups, and most of the purported opportunities vanish.
In a simulation comparing hypothetical CCGs with the “Best 5” average from a “Similar 10” whose populations all carry identical risks, the RightCare method flags significant opportunities more than 12% of the time, even though there are none.
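The details of that simulation are not reproduced here, but the mechanism is easy to see in a minimal sketch: because the “Best 5” are selected *after* observing the data, their average is biased below the common true rate, so a target CCG with the same underlying risk will often appear “significantly” worse. The population size, true rate, and significance rule below are assumptions, not the published values.

```python
# Sketch: false "opportunities" when all eleven CCGs share the same true risk.
# TRUE_RATE, POP and the 2-standard-error rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

TRUE_RATE = 60 / 100_000      # same underlying mortality rate for every CCG
POP = 250_000                 # hypothetical CCG population
TRIALS = 10_000

flagged = 0
for _ in range(TRIALS):
    # 1 target CCG + its "Similar 10", all with identical true risk
    deaths = rng.poisson(TRUE_RATE * POP, size=11)
    rates = deaths / POP
    target, peers = rates[0], rates[1:]
    best5 = np.sort(peers)[:5].mean()          # the "Best 5" average
    # naive significance rule: target exceeds Best 5 by ~2 standard errors
    se = np.sqrt(target / POP)
    if target - best5 > 2 * se:
        flagged += 1

print(f"False 'opportunity' rate: {flagged / TRIALS:.1%}")
```

Under these assumptions the false-flag rate comes out far above the nominal ~2.5% a one-sided two-standard-error rule would suggest, consistent with the figure quoted above.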
Can anything be rescued from this morass? As long as generating £22 bn of efficiencies by 2020/21 remains the goal, service improvements are virtually impossible. Freed from that constraint, comparisons between localities may help if they draw on models able to predict specific data, rather than magically assuming that a fixed set of demographic peers is useful for all outcomes. However, even with appropriate comparators to point the way, CCGs may be unable to control deprivation, stress, occupational and environmental hazards, or undiagnosed disease, either immediately or in a few years’ time.
Perhaps NHS England never expected RightCare to be a “proven approach”, but welcomed it to justify budget cuts. But the NHS is supposed to deliver evidence-based medicine, and clinicians are trained on the basis that science underpins therapy. If RightCare is to be a “proven approach”, its proponents should address questions of methodology openly in the public health literature. Isn’t that how science works?
[Image: Keep Our NHS Public, by Tony O’Sullivan. Reproduced with permission via Keep Our NHS Public.]