Digital Health Innovations, Part 4: Testing and Evaluation


This 5-part blog series is about designing evidence-based patient-facing digital health interventions for vulnerable populations that are efficacious, scalable, and cost-effective. Think it’s a tall order? IT IS! But it’s not impossible. We have some insights we’d love to share with you. These insights come from our combined 20 years of experience designing, testing, and disseminating effective digital health interventions in medically vulnerable populations.

 

If you want to start from the beginning, check out Part 1, Part 2 and Part 3.

After reading Part 3 of this series and following all of our recommendations to the letter, by now you have probably:

  • Identified a problem
  • Measured that problem
  • Done research about that problem
  • Formed a hypothesis that connects a potential solution to that problem
  • Selected your research approach and designed your experiment

 

So now you’re ready to move to the next pressing question:

How do you evaluate your impact?

Before diving into the particulars of metrics and evaluation, understand the purpose of engagement. Engagement allows providers to enter into a dialogue with patients in order to motivate and educate them to adopt and sustain healthy behaviors. In other words, engagement does not guarantee behavior change, but it provides a strong signal that behavior change will occur. Because of this, your evaluation should assess both engagement and efficacy so you can have the most comprehensive understanding of which factors contribute to your success or failure.

Typically you have too much data rather than too little, so you have to decide what’s meaningful. Your choice of evaluation depends on your experiment design. Quasi-experimental designs have grown in popularity for experimentation with technology because they offer flexibility of implementation while maintaining enough rigor to draw inferences about causality. Randomized controlled trials are the gold standard, but they do not offer the flexibility or sensitivity to resource constraints that quasi-experimental designs do. There are two sets of metrics you can evaluate:

 

  • Process metrics: These metrics measure the quality of the intervention delivery, the efficiency of the delivery, and the ability of the intervention to adapt to changing conditions.
  • Outcome metrics: These metrics assess the efficacy of the intervention.
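
It can help to write the two sets down explicitly in your evaluation plan before the pilot starts. Here is a minimal sketch in Python; the specific metrics listed are hypothetical examples for a text messaging pilot, not a prescribed set.

    # Hypothetical evaluation plan for a text messaging pilot.
    # Metric names below are illustrative examples only.
    evaluation_plan = {
        "process_metrics": [
            "percentage of messages delivered on schedule",   # quality of delivery
            "staff hours per enrolled patient",               # efficiency of delivery
            "number of protocol adaptations made mid-pilot",  # ability to adapt
        ],
        "outcome_metrics": [
            "change in missed-appointment rate",              # efficacy
            "change in self-reported medication adherence",
        ],
    }

    for metric_type, metrics in evaluation_plan.items():
        print(metric_type)
        for metric in metrics:
            print(" -", metric)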

 

Additionally, the flexibility of quasi-experimental design means that you can make small changes to the intervention to improve engagement while preserving the rigor of the experiment. This optimizes your budget by producing a more effective intervention faster than if you waited until the end of the pilot to make all changes. These small changes are different from fixing bugs. Fixing bugs means making small changes that introduce functionality that was intended to be in the prototype before starting the pilot; iterative changes enhance the prototype by aiming to further optimize engagement or satisfaction.

For example, within a text messaging intervention, small iterative changes could include altering the frequency or timing of the text messages. Other iterations may address questions such as those listed below:

 

  • How can we use our limited resources more effectively?
  • Are enough of the right people using this product?
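
One practical way to keep these iterations cheap is to treat frequency and timing as configuration rather than hard-coded behavior. A minimal sketch, assuming a hypothetical text messaging pilot (all parameter names and values are illustrative):

    # Hypothetical message schedule for a text messaging pilot.
    # Keeping frequency and timing as plain configuration makes small
    # iterative changes cheap and easy to document between pilot phases.
    schedule_v1 = {"messages_per_week": 3, "send_hour_local": 9, "days": ["Mon", "Wed", "Fri"]}

    # Iteration after early feedback: fewer, later messages.
    schedule_v2 = {**schedule_v1, "messages_per_week": 2, "send_hour_local": 18, "days": ["Tue", "Thu"]}

    print("Pilot launch schedule:", schedule_v1)
    print("Iterated schedule:   ", schedule_v2)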

 

Geisinger Health System is a well-recognized, innovative leader in testing and scaling a range of technologies and innovations that improve the health of its patients. Chanin Wendling, director of eHealth in the division of applied research and clinical informatics at Geisinger Health System, has said, “Mobile technology isn’t for everyone…Find out first which patients want mobile and which don’t.” Geisinger started small with text message reminders for upcoming appointments and scaled up to weight management programs delivered via texting. The short-term benefit of fewer missed appointments provided enough evidence of both acceptability and satisfaction that the team was reasonably confident a more intensive engagement would also be successful. The iterative process that Geisinger employs is the ideal approach to innovation in resource-constrained settings.

 

An evaluation of a pilot should answer three key questions:

  1. Acceptability: Will my patients use this?
  2. Satisfaction: Will my patients like this?
  3. Clinical benefit: Will my patients be better off doing this compared to what they are doing now?

 

To determine acceptability, test “ugly.” Testing ugly allows you to quickly and cheaply verify that your patients want what you are about to offer. Your product prototype should be something you can develop at low cost and easily modify to the standards of your research protocol once tested. In other words, don’t immediately jump to building an app for your population only to find out that they aren’t interested in another app on their home screen, especially one that is not mobile-responsive (i.e., it looks and behaves just like a website, but is harder to read because it’s on a small phone screen).

One of the virtues of technology is that you can continuously and passively collect data. Generally, greater engagement indicates satisfaction, and high levels of satisfaction can be a powerful source of motivation for sustained behavior change. For example, if you are piloting a text messaging intervention, the following measures can help you paint a picture of the level of satisfaction with the intervention:

 

  • Delivered rate: The percentage of sent messages that were successfully delivered
  • Click-through rate: The percentage of opened messages in which a link was clicked
  • Response rate: The percentage of opened messages that received a text reply
  • Response time: The average amount of time it took a recipient to reply to a delivered message
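
To make these measures concrete, here is a minimal Python sketch of how they might be computed from a simple message log. The log format and field names are hypothetical, and the sketch uses delivered messages as the denominator for click-through and response rates, since many SMS gateways report delivery but not opens; adjust the denominators to whatever your platform actually records.

    from datetime import datetime

    # Hypothetical message log: one record per outbound text message.
    messages = [
        {"sent_at": datetime(2017, 5, 1, 9, 0), "delivered": True,
         "link_clicked": True, "replied_at": datetime(2017, 5, 1, 9, 12)},
        {"sent_at": datetime(2017, 5, 2, 9, 0), "delivered": True,
         "link_clicked": False, "replied_at": None},
        {"sent_at": datetime(2017, 5, 3, 9, 0), "delivered": False,
         "link_clicked": False, "replied_at": None},
    ]

    delivered = [m for m in messages if m["delivered"]]
    replies = [m for m in delivered if m["replied_at"] is not None]

    # Delivered rate: share of sent messages that reached the recipient.
    delivered_rate = len(delivered) / len(messages)

    # Click-through rate: share of delivered messages whose link was clicked.
    click_through_rate = sum(m["link_clicked"] for m in delivered) / len(delivered)

    # Response rate: share of delivered messages that received a text reply.
    response_rate = len(replies) / len(delivered)

    # Response time: average minutes from message send to reply, among replies.
    response_time_min = sum(
        (m["replied_at"] - m["sent_at"]).total_seconds() / 60 for m in replies
    ) / len(replies)

    print(f"Delivered rate: {delivered_rate:.0%}")
    print(f"Click-through rate: {click_through_rate:.0%}")
    print(f"Response rate: {response_rate:.0%}")
    print(f"Average response time: {response_time_min:.0f} minutes")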

 

There are several models for assessing clinical benefit. This post will briefly discuss two frameworks: the RE-AIM framework and the PRECEDE-PROCEED model.

PRECEDE-PROCEED Model

 

PRECEDE and PROCEED are acronyms. PRECEDE stands for Predisposing, Reinforcing, and Enabling Constructs in Educational/Environmental Diagnosis and Evaluation. PROCEED stands for Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development. As the name of the model indicates, PRECEDE represents the process that leads up to an intervention, and PROCEED represents the steps needed to proceed with, or implement, the intervention.

PRECEDE consists of four phases:

  • Phase 1: Identify the desired outcome.
  • Phase 2: Identify barriers to achieving that outcome, including behavioral and environmental determinants such as the social determinants of health.
  • Phase 3: Identify the factors that facilitate the behaviors, attitudes, and environmental conditions identified in Phase 2 and that enable the outcome identified in Phase 1.
  • Phase 4: Identify the administrative and policy factors that influence what intervention can be implemented.

PROCEED consists of phases that encompass the implementation and evaluation of the intervention.

  • Phase 5: Implementation: the design and execution of the intervention.
  • Phase 6: Process evaluation: the assessment of how closely actual activities compare to planned activities.
  • Phase 7: Impact evaluation: the assessment of how effective the intervention is in having the desired impact on the target population.
  • Phase 8: Outcome evaluation: the assessment of how much the intervention is leading to the outcome identified in Phase 1.

The PRECEDE-PROCEED model is helpful for evaluation because it is a participatory model that incorporates and rewards the inclusion of the target population, leading to a better understanding of barriers and facilitating factors and a better chance of success. This model also accounts for organizational and policy processes and practices that can limit the success of an intervention. Additionally, the flexibility of the process allows you to adapt your intervention’s design and methods to the needs of the population and the problem that you are aiming to solve.

 


RE-AIM Framework

The RE-AIM framework, developed by Russ Glasgow, Shawn Boles, and Tom Vogt, was designed to enhance the quality, speed, and public health impact of efforts to translate research into practice across five dimensions:

  • Reach your intended target population
  • Efficacy or effectiveness
  • Adoption by target staff, settings, or institutions
  • Implementation consistency, costs and adaptations made during delivery
  • Maintenance of intervention effects in individuals and settings over time

This framework is advantageous for assessing the clinical benefit of both tech-driven and practice-driven innovations because it considers both individual- and institution-level impact and efficacy. These factors are critical to maximizing the translatability and public health impact of health promotion interventions. If you’re curious about how RE-AIM can be applied, check out this paper by Duke Digital Health researchers. They used RE-AIM to assess the impact of a weight loss study in a community health center setting.

 

No evaluation would be complete without some cost-benefit analysis. Admittedly, this is not an easy feat for even the largest technology companies. Early on, you may not be able to quantify the ROI, but it’s important to have an idea of how you will decide whether the intervention is worth continuing, based on its financial and economic demands in addition to its outcomes and impact. A few factors to consider for even the most high-level, back-of-the-envelope calculations are outlined below.

  • Costs to include are labor costs (both staff and volunteers), technology costs, and out-of-pocket costs for patients (e.g., devices, data plans)
  • Savings can be estimated (e.g., lower readmission rates, lower technology costs, fewer missed appointments)
  • An increased adoption rate can be an important proxy for precise measurement of clinical benefit, depending on your intervention

Even a quick-and-dirty calculation should answer the following questions affirmatively for an intervention to be truly impactful.

  • Is this sustainable? Can this intervention continue as is with current budgetary constraints given potential revenue and/or cost savings?
  • Is this scalable? Can we expand this intervention without incurring high labor and/or technology costs that outpace cost savings or revenue?
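
Here is a minimal back-of-the-envelope sketch in Python that ties these factors together. All of the dollar figures and category names are made-up placeholders rather than real program data; the point is the structure of the calculation, not the numbers.

    # Hypothetical annual figures for a text messaging pilot (placeholder values).
    costs = {
        "staff_and_volunteer_time": 40_000,
        "technology": 12_000,            # platform fees, development, hosting
        "patient_out_of_pocket": 3_000,  # e.g., devices or data plans you subsidize
    }

    estimated_savings = {
        "fewer_missed_appointments": 30_000,
        "lower_readmission_rates": 28_000,
    }

    total_cost = sum(costs.values())
    total_savings = sum(estimated_savings.values())
    net_benefit = total_savings - total_cost

    print(f"Total annual cost:  ${total_cost:,}")
    print(f"Estimated savings:  ${total_savings:,}")
    print(f"Net annual benefit: ${net_benefit:,}")
    print("Sustainable as-is?", net_benefit >= 0)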

 

Can’t wait to move this conversation forward! Share your experiences, thoughts and questions on Twitter with the hashtag #safetynettech.

 

Erica Levine is the Programs Director at the Duke Global Digital Health Science Center. She has over 8 years of experience translating evidence-based behavior change interventions for delivery on various technology platforms (SMS, IVR, web). She was leveraging technology for health in medically vulnerable populations way before it was cool. She has worked on projects in rural North Carolina, Boston, and Beijing.

 

Vanessa Mason is the co-founder of P2Health, an initiative that supports innovation that fosters the protection of population health and promotion of disease prevention and founder and CEO of Riveted Partners, a consultancy that sparks behavior change through accelerating data-driven innovation. She has over a decade of experience in healthcare innovation and consumer engagement in both the United States and developing countries. Her experience in global health has shaped the way that she sees the role of technology and design in health for vulnerable populations: innovate and integrate rather than break and disrupt.