
Hey, Press Ganey – You Can’t Improve What You Can’t Control

October 6, 2017

Spoon Feed
Patient satisfaction is important. But the most commonly used metric in the US, the Press Ganey survey, may not be a good measure of individual emergency clinician performance. Giving emergency clinicians feedback on their Press Ganey scores, ostensibly so they could take steps to improve, did not lead to appreciable score improvements.

Why does this matter?
Those of us in practice already know the answer – reimbursement and performance-based pay incentives are increasingly tied to patient satisfaction metrics. The problem is that these metrics are deeply flawed, especially when applied to an individual provider and especially in the ED. Press Ganey surveys are prone to "nonresponse bias, small sample sizes for individual providers, incentivizing inappropriate care to improve patient experience ratings (e.g., leading to overprescribing of opiates and antibiotics), inability to identify true outlying providers, and reduction in provider job satisfaction." And in the ED, many of the variables that shape a patient's overall impression of the visit are completely beyond the control of the individual clinician, such as "wait time, door-to-room time, treatment time, the location of care [e.g. hallway], ED boarding, and high ED occupancy."

Feedback on problems you can’t fix doesn’t help… thanks Press Ganey!
One would think that if Press Ganey surveys were an accurate, reproducible, scientific way to measure individual emergency physician performance, then providing feedback on one's scores would prompt that person to take steps to improve, leading to better scores among those given feedback than among those without. That was the premise of this single-center RCT. The 25 eligible faculty with at least a year of baseline Press Ganey data, including physicians and advanced practice clinicians, were randomized either to receive monthly feedback (funnel plots) comparing their patient satisfaction performance with that of their peers, plus a 6-month review with a faculty mentor, or to a control group that received no such feedback. There was no statistically significant difference between groups after the intervention. The study was not powered to detect a difference, as only a limited number of faculty were available, so a larger sample might have detected a true, though small, improvement. There also could have been cross-contamination between the groups: participants were not supposed to discuss the intervention, but ED groups are pretty tight, and there was probably some off-the-record discussion. This study was not large enough to provide a definitive answer, but it raises good questions and shows that a larger trial is feasible. Until it has been shown that we can actually do something to improve these scores, perhaps reimbursement should be tied to other, more reliable metrics of our performance.

Source
Using Press Ganey Provider Feedback to Improve Patient Satisfaction: A Pilot Randomized Controlled Trial. Acad Emerg Med. 2017 Sep;24(9):1051-1059. doi: 10.1111/acem.13248. Epub 2017 Aug 16.

What are your thoughts?