Without knowing health outcomes, we cannot improve them
A new report suggests that poor surgical performers could be easily identified, and identifying those outliers would save patients from harm.
The authors also claim that the risk of complications sustained by patients in Australian health care varies dramatically depending on which hospital they attend.
The authors offer their help to identify poorly performing hospitals so that patients can avoid them.
I do agree with some of the sentiments raised:
· Doctors and hospitals have no idea what their actual outcomes are.
· Complications are common, carry enormous personal and financial costs and have not decreased over time; we have simply become used to them.
· Capturing health outcomes is a prerequisite to improving them.
However, the report has major shortcomings, which I would like to describe here.
1. The report stresses the existence of “good performers” and “poor performers”. The authors suggest that complications are the result of poor care or of individual or system shortcomings, and that the chance of a complication depends on which hospital you attend.
While this reasoning might sit well with the general public (some of whom might even have experienced a hospital stay, with or without a complication), it is misleading. Our academic research repeatedly shows that patient factors are the most consistent predictors of complications. Such factors include the ASA score, parameters of general health (co-morbidity scores), age and body mass index, rather than doctors or hospitals.
In our academic work, we were not only able to create truly risk-adjusted outcomes after hysterectomy (comparing expected with actual complication rates); we were even able to develop an algorithm to predict the risk of surgical complications. This algorithm assists gynaecological surgeons in deciding whether a patient is suitable for a hysterectomy in a rural/remote setting or whether she would benefit from being transferred to a tertiary hospital for her surgery. Potentially, such algorithms can be applied to many other surgical procedures once the actual contributing factors have been analysed.
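The logic of comparing expected with actual complication rates can be illustrated with a toy sketch. Everything below — the patient factors chosen, the coefficients, the data — is hypothetical and purely illustrative; it is not the published algorithm. It only shows the general shape of risk adjustment: an observed-to-expected (O/E) ratio, where a value above 1 means more complications than the case mix predicts.

```python
import math

# Hypothetical coefficients -- illustrative only, not a validated model.
def expected_risk(age, asa, bmi, comorbidity_score):
    """Toy logistic model: probability of a surgical complication."""
    z = -6.0 + 0.03 * age + 0.8 * asa + 0.04 * bmi + 0.5 * comorbidity_score
    return 1 / (1 + math.exp(-z))

def observed_to_expected(patients):
    """O/E ratio: >1 means more complications than the case mix predicts."""
    observed = sum(p["complication"] for p in patients)
    expected = sum(expected_risk(p["age"], p["asa"], p["bmi"], p["comorbidity"])
                   for p in patients)
    return observed / expected

# Two hypothetical hospitals with very different case mixes
hospital_a = [  # older, sicker patients
    {"age": 78, "asa": 3, "bmi": 34, "comorbidity": 4, "complication": 1},
    {"age": 81, "asa": 4, "bmi": 31, "comorbidity": 5, "complication": 0},
    {"age": 69, "asa": 3, "bmi": 29, "comorbidity": 3, "complication": 1},
]
hospital_b = [  # younger, healthier patients
    {"age": 42, "asa": 1, "bmi": 24, "comorbidity": 0, "complication": 0},
    {"age": 38, "asa": 2, "bmi": 26, "comorbidity": 1, "complication": 1},
    {"age": 51, "asa": 1, "bmi": 23, "comorbidity": 0, "complication": 0},
]

print(f"Hospital A O/E: {observed_to_expected(hospital_a):.2f}")
print(f"Hospital B O/E: {observed_to_expected(hospital_b):.2f}")
```

In this toy example, hospital A has the higher raw complication rate (2 of 3 versus 1 of 3) yet performs better than its case mix predicts, while hospital B performs worse — exactly the distinction that unadjusted league tables obscure.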
2. The authors also propose to use routine administrative hospital data sets to inform the public about good and poor performers.
The problem with this approach is that the data were never designed to assess surgical performance. These datasets exist for hospital reaccreditation and Medicare billing, and the inaccuracies that result from such misaligned data are massive.
For example, complications are only recorded if they developed during the hospital stay. By contrast, the majority of complications develop after discharge from hospital. Certain complications have no provision for being coded. Co-morbidities that contribute heavily to surgical complication rates are also not captured.
While routine data is big and impressive, it yields no meaningful insight into how the health service could be improved. Similarities to the Health Roundtable come to mind.
For example, if one hospital or surgeon records a significantly higher “readmission to hospital” rate than the average, the administrative data cannot tell us whether that surgeon’s outcomes are actually problematic. There are many possible reasons for such an observation. The surgeon could (a) be a poor performer; (b) receive referrals of elderly and socially isolated patients; or (c) perform risky procedures that his/her colleagues are not willing to perform. If we declare a surgeon an outlier based on the proposed criteria, we may well condemn him/her unfairly, leading to the “normal” consequences, including death by media.
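There is a further, purely statistical problem with flagging outliers from raw rates: at typical annual case volumes, chance alone produces apparent “outliers”. The simulation below is a hypothetical sketch (the risk, volumes and flagging rule are all invented for illustration): every surgeon has exactly the same true readmission risk, yet a naive above-average threshold still flags some of them.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical numbers, for illustration only: 50 surgeons, ALL with the
# same true readmission risk of 8%, each doing 60 cases a year.
TRUE_RISK, CASES, SURGEONS = 0.08, 60, 50

rates = []
for _ in range(SURGEONS):
    # Each case independently becomes a readmission with probability TRUE_RISK
    readmissions = sum(random.random() < TRUE_RISK for _ in range(CASES))
    rates.append(readmissions / CASES)

average = sum(rates) / len(rates)
# A naive rule: flag anyone more than 50% above the average as an "outlier".
flagged = [r for r in rates if r > 1.5 * average]

print(f"average readmission rate: {average:.3f}")
print(f"surgeons flagged as 'outliers': {len(flagged)} of {SURGEONS}")
```

Every simulated surgeon is, by construction, identical in skill; the flagged “outliers” are pure sampling noise. Any real flagging scheme would at least need control limits that account for case volume, before even considering case mix.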
3. The authors request transparency about complications. They argue that the general public has a right to know who the great and the poor performers are, so that ordinary citizens can decide where to have surgery.
While transparency of data is laudable, in healthcare a lot of damage can be done if data are interpreted the wrong way. As clinicians, we follow a rule: never request an investigation whose results you may not be able to interpret. It means, for example, that I will not order an obstetric ultrasound, because I have not worked in obstetrics for many years.
Do we really believe that the general public will understand that a surgeon with a lower actual complication rate could nevertheless be a worse performer than one with a higher rate, because he or she fails to achieve the ultimate long-term aim of treatment: improving patients’ survival prospects?
The harm that could be done from misinterpretation of results would be enormous and could result in avoidable patient deaths. Surgeons will make every effort to avoid being labelled a poor performer.
The suggested solution to use routine administrative hospital data to identify good and poor performers and to make these data public will have catastrophic consequences for us all.
We already have registries in place in Australia and in the UK that monitor surgical complication rates. As a consequence, some surgeons will decline to take high-risk patients to the operating theatre for treatment. High-risk patients are those whose risk of a surgical complication is 5 or 10 times higher than that of a healthy patient. If such a patient dies on the waiting list (before going to theatre), the general interpretation is that the disease was too aggressive and the patient died from the disease. If the same patient dies 2 days after her procedure, it will be marked as a serious complication on the account of the surgeon. If this information is then made public, any surgeon who agrees to accept high-risk patients runs the very realistic risk of being labelled a poor performer, with patients advised not to have surgery with him or her.
All surgeons I know will do everything they can to avoid being called a poor performer. The safest and quickest way to avoid this unfortunate label is to focus on treating younger and healthier patients and progressively decline to look after elderly and unhealthy ones. Do you call that a successful quality improvement program? I would call such an outcome catastrophic.
The time for change in health outcomes has never been better. However, as a senior surgeon in active clinical practice, I believe some adjustments to the proposed solution need to be made.
The vast majority of doctors are well trained and bend over backwards every day to provide great health outcomes to their patients.
There are outliers, but the majority of outlier surgeons are unaware of it. Once they have access to reliable and comparable information, the majority will move heaven and earth to improve.
We need mechanisms that allow doctors to measure and track surgically meaningful outcomes in objective ways, without the fear of being named and shamed.
Before SurgicalPerformance.com became available, surgeons had no way to reflect on their surgical outcomes. This tool provides measurable and clinically relevant feedback for personal reflection and improvement. Actual numbers suggest that surgical outcomes improve for surgeons who have used the tool for a period of 6 months.
We don’t need a power struggle between governments, health bureaucrats and doctors. Unspeakable harm will be done by a culture that encourages pointing the finger at presumed “outliers”, only to find out, after a career has been destroyed, that the “outlier” was actually better than the national average on the long-term outcomes that matter most to patients.
We need engagement; we need to get doctors behind the clinical improvement agenda. We need relevant and meaningful information about our health outcomes, we need to measure them, and we need data that will actually form the basis from which to improve.