As an academic researcher, my primary goal is to improve population health. I was trained in innovative study designs and rigorous analytic approaches, and taught that fidelity to the methods is of the utmost importance. However, it is just as important that patients actually use the programs we design to improve their health. Unfortunately, the few health programs that actually make it into the community can take years, even decades, to get there. To improve health at the population level, we academic researchers need to rethink what defines the success of our research studies. We need to focus on designing practical studies so our programs can succeed in the "real world." Implementation science is a field focused on increasing the uptake of evidence-based programs in clinical and community settings. In the last decade, implementation science has seen an explosion of progress in both quantity and quality.
My colleagues and I recently completed a series of projects in which research and clinical staff spoke with patients over the phone to help them improve their chronic disease control. Trained study staff asked patients what made it challenging for them to take their medications and provided tailored support to troubleshoot patients' unique barriers. We called our original study CHANGE. It was designed to help African Americans who receive care at an academic medical center take their medications more accurately and improve their chronic illness control. CHANGE improved medication taking, but it did not improve patients' blood pressure, cholesterol, or diabetes control.
We tested CHANGE in a medical center that routinely conducts research, but this is not the type of setting where most patients receive their care. Because the trial did not improve patients’ blood pressure, cholesterol, or diabetes, we had to consider whether we were really making a difference, and we thought we would have to use traditional academic theories to heavily revise the program before we could re-evaluate its success.
We were wrong. When we finished the CHANGE study, North Carolina had a primary care case management program serving low-income patients across several counties, and it needed an easily accessible solution to improve patients' heart health. Even though CHANGE didn't work perfectly in the research setting, the case management program was eager to work with us so it could use the program for its patients. So, we started working with community clinics to adapt the program to fit their needs. These adaptations were based not on traditional scientific methods for program development; instead, they were chosen primarily to meet each clinic's needs while keeping the things that worked in CHANGE. For example, CHANGE was originally delivered by research staff members; in a later iteration, it was delivered by community health workers. Thus, we modified the content to address only issues that community health workers were prepared to handle (e.g., we removed information about managing medication side effects). We also adapted the program for Medicare patients.
These adaptations and projects were not what we would have done in an academic context, but they worked. Not only did patients take their medicines more correctly in the original CHANGE study, but in the adaptations of CHANGE, patients' blood pressure, cholesterol, and diabetes improved. If we had focused only on making CHANGE "perfect," we might have missed an opportunity to meet local needs and ultimately improve our community's well-being. As researchers, we need to balance ensuring that we have enough evidence to change behaviors with being flexible enough to deploy a potential solution in real time when a health need must be addressed.
We also learned several important research lessons. The research and clinical teams had different perspectives on key issues, including their expectations of "success." The research team collected blood tests for research purposes so we could control the timing of the test and the availability of result data. We (the research team) considered it a success when we saw improvements in blood pressure, cholesterol, and diabetes. In contrast, the community clinics collected blood tests only when patients needed them as part of their medical visits. Since CHANGE was provided over the telephone, many patients had relatively infrequent in-clinic encounters and did not get blood tests at consistent intervals. While this was frustrating for the research team, it was completely acceptable to the community clinics, because they viewed success as having access to an immediate solution to help their patients improve their blood pressure, cholesterol, and diabetes. The simple act of offering the program and seeing that patients were using it was valuable from the community clinics' perspective. As researchers, this challenged us to recalibrate our measure of success and focus exclusively on what will have a meaningful impact on health.
The goal of health research should be to impact health in real-world settings. Implementing our work in the communities where we live and work is an essential step in making that impact. This process can be uncomfortable because, unlike a clinical trial, the real world is difficult to control. Implementation science is a valuable field, but it may require researchers to "get their hands dirty" by engaging with stakeholders and community partners in a new paradigm where everyone works together to improve the health of the community.