What Does Evidence for mHealth Mean?


Rachel Wambui Kung’u from the Peace Caravan in Kenya.
Photo courtesy of Erik (HASH) Hersman, Flickr

Over the past decade, mobile health (mHealth) interventions have become widespread in public health. Mobile devices are seen as inexpensive tools for reaching large numbers of people in a matter of seconds, which makes mHealth particularly attractive for social and behavior change communication (SBCC). However, because the complex field of mHealth is still young, more evidence is needed on whether it actually creates behavior change. While mHealth appears to have untapped potential, its public health impact remains in question.

The mHealth Working Group’s meeting on June 16, 2014, on evidence generation for mHealth gave speakers the opportunity to discuss evaluation techniques for their mHealth interventions. Kelly L’Engle, a behavioral scientist at FHI 360, kicked off the meeting by noting the strong response to the call for more evidence in mHealth, but she cautioned that it is difficult to evaluate mHealth using traditional gold-standard approaches, such as randomized controlled trials (RCTs). At the same time, she said, mobile devices can transform traditional public health implementation and evaluation because of the speed of data collection and ease of communication they offer.

Kate E. Gilroy of IntraHealth International described an mLearning intervention in Senegal that disseminated SMS messages and used interactive voice response to reach participants. Her presentation focused on usability, or, in other words, how comfortable participants were using mobile phones for mLearning. The program managers drew on a variety of sources to evaluate usability, including administrative data, the cost of the intervention and participant surveys.

Pam Riley of Abt Associates spoke in depth about the importance of a control group. The intervention, m4RH, pushed family planning information out via mobile phones to women who opted in. During the study design, the implementers struggled to incorporate a control group into the intervention. Even if they monitored a group of people who were not receiving the information, those people might have heard of the intervention and therefore been biased. In addition, those who opted out of the intervention might be inherently different from those who opted in, which would weaken the rigor of the results. The solution ultimately chosen was to randomize the phone numbers of everyone who opted in, so that one group received the intervention while the other had restricted access to the information. Although this approach seemed to work, it raised several challenges, including ethical issues, a lack of support and poor follow-up with those in the control group.
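The randomization step described above is simple to express in code. The sketch below is a minimal, hypothetical illustration (the function name, phone numbers and 50/50 split are assumptions, not details of the actual m4RH study): opted-in phone numbers are shuffled with a fixed seed and split into an intervention arm and a restricted-access control arm.

```python
import random

def assign_groups(phone_numbers, seed=42):
    """Randomly split opted-in phone numbers into intervention and control arms."""
    numbers = list(phone_numbers)
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    rng.shuffle(numbers)
    midpoint = len(numbers) // 2
    return {
        "intervention": numbers[:midpoint],  # arm that receives the messages
        "control": numbers[midpoint:],       # arm with restricted access
    }

# Illustrative (fake) phone numbers
groups = assign_groups(["+254700000001", "+254700000002",
                        "+254700000003", "+254700000004"])
```

A fixed seed makes the assignment auditable after the fact, which matters for exactly the kind of ethical and follow-up questions the speakers raised.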

During the discussion, Kelly noted that mHealth contributes unique strengths to evaluation, including the ability to visualize data in innovative and engaging ways. With geolocated data and other information that can be collected easily, mHealth tools can often produce maps and graphics more quickly than other approaches. Still, evaluating mHealth poses several challenges. Peer review of articles can take years, and in a field as fast-moving as mHealth that delay can stifle new research. And while mHealth can make data available and deliver information to target populations – does that mean health impact is achieved? Audience members at the meeting offered another perspective: perhaps mHealth is only part of the solution. An SMS campaign combined with other approaches might be the key to behavior change and, eventually, improved health outcomes. Or perhaps mHealth is most effective at overcoming systemic barriers, such as acquiring supplies or increasing the productivity of health workers.

Much of mHealth evaluation focuses on usability and whether people can adopt the technology. This framing makes clear that mHealth is a process, not a pill that will solve health problems, but it left some audience members asking how best to make the case for mHealth to stakeholders unfamiliar with it. Although the session on evidence for mHealth is over, it is clear that more conversations are needed to create standardized guidelines for evaluating mHealth. Join the conversation on Springboard on monitoring and evaluation!
