Digital Health Innovations, Pt 3: Experiment Design


This 5-part blog series is about designing evidence-based, patient-facing digital health interventions that are efficacious, scalable, and cost-effective for vulnerable populations. Think it’s a tall order? IT IS! But it’s not impossible. We have some insights we’d love to share with you, drawn from our combined 20 years of experience designing, testing, and disseminating effective digital health interventions in medically vulnerable populations.

If you want to start from the beginning, check out Part 1 and Part 2.

If you read Part 2 of this series and followed all of our recommendations to the letter, by now you have probably:

  • Identified a problem
  • Measured that problem
  • Done research about that problem
    • You know how other people have attempted to solve it, why it might happen, and how big that problem is (either for your organization or nationally)
  • Formed a hypothesis that connects an evidence-based potential solution to that problem.


So now you’re ready to move to the next pressing question:

How do you test this hypothesis?

First, let’s make sure we’re on the same page about different types of studies. If you hang out with a lot of public health academics, you’ll probably notice them talking about how randomized controlled trials (aka RCTs) are the gold standard and other study designs are susceptible to bias (aka unworthy). While the RCT is the gold standard for testing things like whether a certain intervention or treatment is effective against a particular condition, there are a lot of reasons why you’d choose a different method when designing a digital health intervention. These alternative methods can save you money and time, and they can get you feedback in real time, taking some of the guesswork out of development.


Here are several possible options:

Randomized controlled trials (RCTs) test whether a specific treatment is effective. They are typically used when it’s important to establish a causal link between a treatment and an outcome, e.g., for a completely novel treatment. The key elements of an RCT include a fully developed intervention, random assignment of participants to each condition, and staff who are blinded to that assignment. RCTs are conducted in a rigorously controlled environment in order to minimize bias and confounding. However, they can be quite costly, and they can take a long time. In digital health, they require having a fully developed solution in place (a system or app) with no further changes slated before you can truly test effectiveness. Researchers are actively working on enhanced RCT designs for digital health, but you might want to consider alternative study designs, especially if you’re not testing a new intervention or treatment, and definitely if budget and/or staff constraints are an issue.
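
If you do go the RCT route, the random-assignment step itself is easy to get right in code. Below is a minimal sketch of simple 1:1 randomization in Python, with a fixed seed so the allocation sequence can be audited; the participant IDs are invented for illustration. (Real trials often use block or stratified randomization, and the allocation list is usually generated by someone who isn’t enrolling participants.)

```python
import random

def randomize(participant_ids, seed=2024):
    """Assign participants 1:1 to 'intervention' or 'control'.

    A fixed seed keeps the allocation sequence reproducible for auditing.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("intervention" if i < half else "control")
            for i, pid in enumerate(ids)}

# Hypothetical example with six enrolled participants:
print(randomize(["P01", "P02", "P03", "P04", "P05", "P06"]))
```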


Quasi-experimental studies are very similar to RCTs in that you’re testing a treatment vs. usual care. One critical difference is that quasi-experimental studies lack random assignment of participants. Thus, they can be very useful for testing solutions in an environment where you will need to “contend with variation instead of control for it.” In designing a quasi-experimental study, you might make slight changes to your intervention based on what you think will work in each context, and test the variants against each other. This is very handy for quick or agile product iteration as you learn how people engage with your technology. It’s also typically a more budget-friendly way of designing technology.


A mixed-methods approach allows you to evaluate a solution using both quantitative methods (e.g., surveys, pre-post tests) and qualitative methods (focus groups, key informant interviews, etc.). This can be especially helpful when you want to know both whether a solution worked and why. Here’s an example of a mixed-methods evaluation of an mHealth solution in Uganda, and another describing the design of an intervention strategy to promote adolescent HPV vaccination in safety-net clinics.


Qualitative studies delve deeper into a question by speaking directly to participants. They’re super useful when you’re conducting user research to make an existing solution better, or when you’re not sure whether a specific approach will work and want to find out directly from the target population what they think. They can also help you gain insight after you’ve started testing your solution or during a formative design phase, especially if you start seeing non-adherence or low engagement.

How do you choose from this menu of options?


Let your hypothesis drive the research question and resulting experiment design

General, open-ended questions are typically a better fit for qualitative or mixed-methods studies, whereas targeted, specific questions are better suited to RCTs and quasi-experimental studies.

  • Open-ended research question: Would patients want and use a clinic app?
    • We recommend a qualitative or mixed-methods approach. For example, you might run a series of focus groups and/or key informant interviews to get a sense of whether patients use similar apps, or combine focus groups with a survey to find out whether they have smartphones.
  • Targeted, specific research question: Does this app help our patients improve their hypertension?
    • For questions like this we recommend an experimental design, such as a randomized controlled trial or a quasi-experimental study, because you can test your proposed treatment against usual care and see whether it causes an improvement. (A small analysis sketch follows this list.)
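
To make “treatment vs. usual care” concrete, here’s a minimal sketch of one common analysis: comparing change scores between arms with a two-sample t-test (using SciPy). The blood pressure numbers are invented for illustration; a real trial would pre-specify its analysis plan with a statistician.

```python
from scipy import stats

# Invented change scores in systolic BP (baseline minus follow-up, mmHg);
# positive values mean improvement.
app_group  = [12, 8, 15, 5, 10, 9, 14, 7]
usual_care = [4, 6, 2, 7, 3, 5, 1, 6]

t_stat, p_value = stats.ttest_ind(app_group, usual_care)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the difference between arms is unlikely to be
# chance alone, but check your assumptions and the effect size too.
```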

No matter what question you want to answer or which design you choose, be sure to consider the following:

    1. Choose your target population (wisely). Who do you mean when you say “patients” or “providers”? Who is affected by the problem you’ve found, and how? Are all of the people you want to help with your potential solution affected in the same way? Are there differences in the way people might use or view the solution you want to test that are predicted or defined by age, sex, income, or other demographic attributes? Asking these and related questions will help you clearly define your sample population.
    2. Start at the beginning (i.e., collect baseline data). If you don’t, how will you be able to compare the before and after? This is a good time to think about who is missing from your team, especially as it relates to data. The biostatisticians and data scientists will like you more if you come to them early on.
    3. Anticipate and mitigate ethical issues related to testing your hypothesis. The most extreme example involves smoking: to really test whether smoking causes lung cancer, the perfect RCT would randomize a bunch of non-smokers to either smoke or not smoke. But we’re sure enough at this point that smoking does cause lung cancer that it would be unethical to ask people to take it up. Your ethical issues will likely be less extreme. It may be worthwhile to assess whether it would be ethical to require participants to buy a smartphone or a data plan in order to participate. Another ethical consideration is deciding in advance what signs will tell you whether the study is working; it may be unethical to keep a study going if you find dramatic effects before the end of the study period. This is particularly important given the legacy of research abuses and unethical practices with vulnerable populations. It’s imperative that you write an informed consent form that is easy for participants to read and understand. Spell it out! Use a 3rd–5th grade reading level. You can use the SMOG readability test or the Hemingway App to make sure your writing matches the reading level of your participants (a rough sketch of the SMOG calculation follows these bullets).
      • Use clear, simple language.
      • Explain the following: what the goal of the study is, who is being invited to participate, what is expected of participants, risks (especially data risks) and benefits, and how participants can contact staff to quit the study or ask questions.
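
If you’d rather check reading level in code than paste text into a web tool, the published SMOG formula is grade ≈ 1.043 × √(polysyllable count × 30 / sentence count) + 3.1291. Here’s a rough sketch in Python; the vowel-group syllable counter is a crude approximation (dedicated tools like the Hemingway App are more careful), so treat the result as a ballpark.

```python
import math
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def smog_grade(text):
    """Approximate SMOG reading grade for a passage of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.043 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

# Invented consent-form snippet for illustration:
consent_draft = ("We are testing a phone app for blood pressure. "
                 "You can stop at any time. Your answers stay private.")
print(f"Estimated grade level: {smog_grade(consent_draft):.1f}")
```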
    4. Manage data and security. Don’t let HIPAA be the elephant in the room; there are resources from both the private and public sectors to help you understand it. Answer the following questions in the design phase, since you’ll rely heavily on these data throughout the life of the study:
    • How will you collect data?
    • Who will collect and enter data?
    • Where will data be entered?
    • Where will it be stored?
    • What is your data analysis plan?

With digital health, there are always security risks. Make decisions with data security in mind, and have a plan to address security, including knowing where you can’t eliminate risk (i.e., where you need patients to understand the risks and choose whether to participate). Think about it in terms of reasonable risk to patients. You can usually design informed consent forms that outline some of these risks. If you choose to test a commercial program, do your homework so that you’re aware of the vendor’s security standards and protocols. It might also be worth investing in an IT or app security expert to review the vendor’s security plans and privacy policies.
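
One common way to act on these data questions, sketched below, is to keep real identifiers out of your analysis dataset entirely: replace them with random study IDs and store the linkage table separately under tighter access controls. This is a minimal illustration of pseudonymization, not a substitute for your organization’s HIPAA guidance, and the MRN values are invented.

```python
import secrets

def assign_study_ids(patient_identifiers):
    """Map real identifiers to random study IDs.

    The returned linkage table should live in restricted storage,
    separate from the de-identified analysis dataset.
    """
    linkage = {}
    for identifier in patient_identifiers:
        study_id = "S-" + secrets.token_hex(4)   # e.g., 'S-9f2a1c0b'
        linkage[identifier] = study_id
    return linkage

# Hypothetical example: the MRNs are invented for illustration.
linkage = assign_study_ids(["MRN-1001", "MRN-1002"])
analysis_rows = [{"study_id": sid, "sbp_change": None} for sid in linkage.values()]
print(linkage)
```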

    5. Leverage existing solutions that meet your needs where possible. Look for solutions that are already being tested or on the market; adopting or adapting one can save development time and may already fit your constraints, hypotheses, and workflows.
    6. Include patients and caregivers in the design stage of your solution. If patients are your end-users, include them in the design of your study. Patient safety and trust are foremost, and some preliminary work with patients can help you maximize the acceptability and usefulness of your digital health solution. Patient advisory boards can help by giving feedback or by helping you find people who will.
    7. Minimize bias in design. It can help to reach out to a local university or school of public health, or any researchers you know, who can help you look for bias in your design. One common bias is recruiting only the most motivated patients, which can happen if you enroll only people who respond to an ad or come to their visits on time. If you enroll only the most motivated patients, you may see an improvement but not know whether it came from your program or from the group you enrolled. Ask yourself whether you want to optimize for efficacy (i.e., whether your solution works at all) or for scale (i.e., whether a large, diverse group of patients or users can access, use, and engage with it).
    8. Maximize retention. Look for dropouts early and often. Who is dropping out, and why? Are the people dropping out different from the people who stay? It’s in your interest to figure out why people don’t engage, so you can understand how to change your intervention or possibly end it. Even a quick tabulation, like the sketch below this list, can get you started.
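
On the retention point, even a simple tabulation can show whether dropouts differ from completers. Here’s a minimal sketch using pandas on an invented enrollment log; the column names and values are assumptions for illustration.

```python
import pandas as pd

# Invented enrollment log; columns and values are hypothetical.
df = pd.DataFrame({
    "participant": ["P01", "P02", "P03", "P04", "P05", "P06"],
    "age_group":   ["18-34", "65+", "65+", "18-34", "35-64", "65+"],
    "dropped_out": [False, True, True, False, False, True],
})

# Dropout rate by subgroup: a lopsided table is your cue to ask "why?"
print(df.groupby("age_group")["dropped_out"].mean())
```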


Can’t wait to move this conversation forward! Share your experiences, thoughts and questions on Twitter with the hashtag #safetynettech.


Erica Levine is the Programs Director at the Duke Global Digital Health Science Center. She has over 8 years of experience translating evidence-based behavior change interventions for delivery on various technology platforms (SMS, IVR, web). She was leveraging technology for health in medically vulnerable populations way before it was cool. She has worked on projects in rural North Carolina, Boston, and Beijing.


Vanessa Mason is the co-founder of P2Health, an initiative that supports innovation protecting population health and promoting disease prevention, and the founder and CEO of Riveted Partners, a consultancy that sparks behavior change by accelerating data-driven innovation. She has over a decade of experience in healthcare innovation and consumer engagement in both the United States and developing countries. Her experience in global health has shaped the way she sees the role of technology and design in health for vulnerable populations: innovate and integrate rather than break and disrupt.