Research has repeatedly shown that U.S. patients receive recommended care only half of the time. It is also known that patients receive non-recommended or “low-value” care as much as 20% of the time. Despite the proliferation of evidence-based guidelines designed to improve clinicians’ practice patterns, clinicians often fail to follow them. So healthcare leaders have long wondered: what’s the best way to change clinicians’ behavior and improve the quality and efficiency of their care?
In recent years, there has been a lot of enthusiasm about approaches like financial incentives and behavioral “nudges” to help clinicians offer more evidence-based care. But clinical decision-making is far too complex to be consistently improved by applying these frameworks. When it comes to changing clinician behavior, leaders have to think more broadly about the local organizational culture clinicians work in.
What the Research Says
Let’s first look at financial incentives. Pay-for-performance (P4P), which pays healthcare providers bonuses when they adhere to clinical guidelines or achieve better outcomes (and may impose financial penalties when they do not deliver high-quality care), has been viewed as a promising way to change clinicians’ behavior (the catch phrase: “pay doctors for value instead of volume”). However, the evidence so far highlights limitations in what these centralized incentive plans can achieve.
A large number of studies have found that P4P is effective in improving processes of care (e.g., ensuring every patient with a heart attack receives aspirin therapy), but ineffective in improving patient outcomes such as mortality rates. (Evidence has shown that better processes of care do not always translate into improved patient outcomes, in the same way that “teaching to the test” does not always lead to more successful students.) Some have argued that most financial incentives are too small to make a difference, but even in the UK’s national health system, where up to 30% of clinicians’ income is paid through a P4P system, patients’ health has not markedly improved since 2004, when the system was introduced.
Relying too heavily on financial incentives to boost performance can often lead to gaming of metrics (e.g., up-coding patient complexity to reduce readmission penalties) or harmful unintended consequences. For example, paying health systems and doctors to prescribe beta-blocker medications to patients with heart disease frequently leads to overprescribing, even when the drugs are unsafe for a particular patient, and such overprescribing can lead to disastrous consequences.
Financial incentives may also unintentionally undermine clinicians’ intrinsic motivation or serve as a distraction and hurt performance. Clinicians already have so many increasing and competing demands on their time that adding another initiative (even one that offers financial rewards) may not be well received.
Researchers and healthcare leaders have recently turned to behavioral economics as another way to improve clinicians’ decision making. The idea is that small changes to an individual’s environment can reduce cognitive biases and encourage rational choices. For example, one well-designed randomized-controlled study found that requiring clinicians to justify to their peers why they prescribed antibiotics for viral infections (known as “accountable justification”) led to a substantial reduction in inappropriate prescribing of antibiotics.
While results like this are encouraging, behavioral economic approaches have not always been successful. In an experiment conducted in Switzerland, physicians were randomized into groups that either did or did not receive personalized feedback on their antibiotic prescribing, and the researchers found no difference in prescription patterns between the two groups. While some argue that these studies simply applied the behavioral frameworks imperfectly, it seems likely that there are also limits to the frameworks themselves.
To be sure, it’s important to note that financial and behavioral frameworks, while imperfect, can still help move the needle toward better decision making and evidence-based practice. However, leaders and clinicians should also understand that these frameworks are no panacea, and be open to other approaches for driving change.
A Stronger Focus on Culture
Organizational culture – the system of shared assumptions, values, beliefs, and norms within an environment – has a substantial influence on how clinicians work and on how patients fare. When clinicians are encouraged to creatively solve problems and improve processes, when they feel comfortable speaking up about problems and mistakes, and when they have support from senior leadership, they are often more motivated to make positive changes that can improve the quality of care and patient outcomes. On the other hand, if the culture is weak – if clinicians report a lack of support from managers or a generally poor work environment – patients tend to have worse outcomes. Crucially, we argue that rolling out financial incentives or behavioral interventions will fail if leaders do not engage clinicians and mind their organization’s culture.
Research has shown that interventions to strengthen culture have resulted in better care. For example, one study of 10 hospitals over two years found that positive culture change – achieved, in this case, through an intervention to empower staff at all levels to save more lives – was associated with 1% lower hospital mortality rates (a major improvement from a clinical point of view). Another study found that the quality of care offered at hospitals was positively correlated with hospital leaders who engage frontline clinicians, support clinicians in improvement efforts, and build a blame-free environment.
And a systematic review of 62 observational studies found a consistent association between positive organizational culture and improved clinical outcomes, including reduced patient mortality rates. While we await randomized studies, a closer look at the hospitals that succeeded in changing their culture and improving patient outcomes reveals that the best outcomes were linked with senior management endorsing decisions made by clinical teams, creating a learning environment, and making people feel psychologically safe to speak up about things going wrong.
Health care leaders who are committed to improving clinical care must assess their local organizational cultures and identify opportunities for improvement. While this is not easy, there are multiple resources available, such as the Institute for Healthcare Improvement or the Taking Action on Overuse Framework. Of course, any intervention will have trade-offs, opportunity costs, and unintended consequences. But pursuing bottom-up approaches, which empower clinicians to lead and monitor local improvement, can help uncover and mitigate these issues, and such strategies are far superior to top-down dictates, generic slogans, and online training modules. For example, in one of the most successful examples of the patient safety movement, a strategy to change organizational culture and engage local physicians to lead and monitor quality improvement activities led to major drops in potentially preventable (and often fatal) central line-associated bloodstream infections.
While monetary incentives and behavioral “nudges” both have their strengths, neither is sufficient to reliably change clinician behavior and improve the quality of care. Organizational culture, while diverse and complex, provides another important lens for understanding why clinicians practice the way they do and for putting forth more comprehensive, long-term solutions.
from HBR.org https://ift.tt/2MCySof