Introduction
For the most part, discussions of the ethical, legal, and social implications (ELSI) of mHealth research have been preoccupied with the welfare and interests of individual participants — i.e., those who carry the devices running the programs that provide their health data to researchers. Since the ultimate benefits of mHealth research are so often framed in terms of “precision” health care capable of “personalizing” health care to individuals, this preoccupation with individual interests is understandable. But other forms of precision medicine research, like population genomic variation studies, have already learned that the individuals donating their data (whether DNA or downloads) are not the only parties whose interests are implicated in the responsible design of data-intensive research. Wherever such research is designed to allow generalizations to be drawn about groups of people beyond the individual data donors, the interests of those groups also become important to consider. In this paper, our thesis is that consideration of potential harms or benefits to groups is just as important in mHealth research as it is in genomics: i.e., that groups do have moral standing in this context, despite the individualistic ethos that flavors this field. To defend this (counter-intuitive) thesis, we first examine the growth of concern for group interests in discussions of biomedical research. Next, we analyze several different accounts of groups’ interests in research that have been proposed in the research ethics literature, to provide a framework for our mHealth research analysis. We then use this framework to demonstrate how mHealth research raises four sets of group-related issues. Finally, we address future directions for empirical research and policy development needed to address group harms, benefits and rights that arise from mHealth research.
I. The Growth of Concern for Group Interests in Biomedical Research
Concern for the protection of “vulnerable populations” in biomedical research and the equitable distribution of research risks and benefits across different population groups has existed since the 1978 Belmont Report on the ethics of research with human subjects.Reference Levine1 These concerns were galvanized in the early 1990s by the first efforts to map human genomic diversity across the world’s indigenous populations, which underscored how tacit colonialism can exacerbate the risks of group harm, and provoked the assertion of a range of claims to group consent, control, and “benefit sharing” in response.Reference Reardon, Juengst, Fong, Braun and Chang2 In recent years, these concerns have again been driven to new levels of scrutiny by growing biomedical interest in the role of population genomic variation in “precision medicine” research and the recognition of the racial and ethnic biases that infect current genomic databases.Reference Beskow, Hammack and Brelsford3 In a national interview study on the benefits and harms of precision medicine research with 60 thought leaders, ranging from genome researchers to leaders of historically disadvantaged populations, important cross-cutting themes included the return of research results, harm to socially identifiable groups, and the value-dependent nature of many benefits and harms.4
The recognition that the risk of group harms contributes to research distrust and is a potential barrier to participation in precision medicine research marks an important watershed in research ethics. Where the Belmont Report was concerned with protecting vulnerable groups against exploitation by researchers, we now also worry about excluding groups from research that might benefit them. These benefits are in part, of course, the health benefits that might flow to individual members from generalizations about their groups. For example, return of results to research participants from underrepresented populations could provide medically actionable direct benefit and perhaps, in aggregate, a measurable benefit to segments of the population. But just as important are the social benefits of being represented equally with other groups in the research enterprise, whatever the consequences might be for members’ individual welfare. The clearest instance comes from early claims that consideration of race, and later genetic ancestry, in research would enable genetics to address health disparities and promote equity in genomic research.Reference Burchard5 Despite numerous critiques,Reference Meagher, Sabatello and Appelbaum6 such group benefits continue to be expected from, and some might say even to drive, initiatives such as the Precision Medicine Initiative’s All of Us research program.
II. Unpacking the Concept of “Group Interests”
When the interests of groups are discussed in the research ethics literature, they are often framed as concerns about “group harms” and their correlative benefits. At its simplest, the argument is that in some research contexts, like studies of human genetic variation or community-based epidemiology, those put at risk of harm are not just the individual study participants, but also all other members of the human groups to which the research results are generalized. Obviously, the harms at stake here are not the kinds of physical harms that phase one drug study volunteers might incur, since most group members will not even be directly involved in the particular studies being discussed. What sorts of harms might they be, and how might they be relevant to mHealth research?
The literature on group harms in other research contexts provides several good starting points for thinking about this question. For example, Daniel HausmanReference Hausman and Hausman7 distinguishes between harms to “structured groups” like tribal nations or municipal communities and harms to “identifying groups” like racial or ethnic minorities. Structured groups usually feature some form of corporate or political organization, and their membership comprises those counted as “citizens” under their jurisdiction. These groups typically have mechanisms for articulating and defending their collective interests. At the same time, their sovereignty, for the most part, extends only to the interests of their citizenry, no matter what biological or cultural ties others may have with them. Identifying groups, by contrast, are created by shared ascribed identifiers, whether or not their members identify themselves in those terms.Reference Juengst8 Except at the micro-level, such as in individual families, these groups rarely feature the social organization of structured groups, and cannot easily be voluntarily joined or left, because they are created by labels over which those labeled have little control. But with their shared identities, those affiliated with identifying groups gain shared interests, since scientific claims that impugn the social value of the shared label put related interests at risk for them all.
Hausman’s distinction is helpful in thinking about group interests in mHealth research because it suggests that, to the extent that mHealth studies are individualized rather than community-based, their findings will implicate identifying groups more readily than structured groups, mostly raising risks of stigmatization and stereotyping based on specific shared traits rather than challenges to organized communities. For instance, group harms are more likely to be experienced by ethnic minority groups because these groups are already stigmatized or discriminated against, especially when the research topic has normative implications.Reference de Vries9
Hausman goes on to distinguish harms that result from research processes from those that flow from research outcomes.10 In both cases, however, his analysis of group interests assumes a consequentialist conception of “harm” as a loss that can be measured in degrees and potentially outweighed by counter-balancing benefits. But in practical debates over group harms, it is common to see some “harms” held out as trump cards, as violations of group members’ rights or affronts to the group’s dignity that supersede any calculations of relative losses or gains. Joan McGregor, for example, distinguishes between the “tangible” harms that Hausman explores and “dignitary” group harms in the research context.Reference McGregor and McGregor11 She defines dignitary harms as those that “undermine the perceived value and worth of the group in the eyes of others and the group itself,” whether or not they lead to tangible losses in welfare or opportunity.12 Obviously, dignitary harms can lead to tangible harms, but McGregor’s point is to emphasize that sometimes the aphorism “no harm, no foul” is inadequate to capture the offensiveness of some research generalizations. Others have made similar efforts to draw attention to these dignity-based concerns by contrasting “material” group harms with “cultural” or “spiritual” group harms, capturing similar distinctions between these different kinds of overlapping research risks.Reference Tsosie13
One way to sharpen McGregor’s distinction even further is to consider the distinction made by legal philosophers between being harmed and being wronged.Reference Simester and Von Hirsch14 On this formulation, “harms” are injuries that leave a person worse off than they were before in measurable ways (mild/severe, permanent/remediable, etc.), and “wrongs” are insults that are offensive in themselves whether or not they actually have any bad consequences. Both concern people’s interests, but harms tend to be losses to people’s welfare, while wrongs are attacks on their dignity, autonomy, and identity. Thus, one can be wronged even when it has no impact on one’s welfare (like a “harmless” invasion of privacy), and some harms are not blameworthy as moral wrongs (like uncontrollable accidents or inadvertent contagion). Often, of course, the two concepts travel together: the victims of theft are wronged by the violation of their property rights and harmed by the loss of their goods. But while harms can be objectively outweighed by benefits, as when we tolerate the noxious effects of chemotherapy to gain a cancer remission, wrongs can only be absolved through forgiveness. The “dignitary harms” that are used as trump cards in debates over group interests in research are often better understood not as a form of harm at all, but as a way in which a particular group has been wronged.
Recasting dignitary harms as wrongs is important, because it helps bring into focus which of Hausman’s kinds of group harm will be most important to consider in mHealth research. When mHealth research aggregates data from individual device owners, the groups it creates can only be identifying groups: that is, sets of participants who share some identifying feature in common, like race, gender, age or social networks. But in the absence of tangible harms, identifying groups rarely have the moral standing to “press charges” of being wronged. Because they are unstructured, identifying groups have no voice — or rather, they have too many voices, with no way to adjudicate between them. Thus, overgeneralizing to all members of an identifying group is an epistemic mistake, and, if it is also stigmatizing, it may yield tangible harms for group members, but it violates no rights: Without the corporate moral agency that structured groups are given by their members, identifying groups literally cannot be “disrespected.”
The upshot of this conceptual unpacking is that in anticipating the impact of individualized mHealth research on group interests, we might expect to look first and foremost to its risks for creating the tangible harms of stigmatization for members of identifying groups.
III. Group Interests in Unregulated mHealth Research
Group-level harms from mHealth research have been articulated as the potential for discrimination and stigmatization of identifying groups based on research inferences made using heterogeneous sources of data, resulting in the profiling of individuals on the basis of group affiliation.Reference Mittelstadt and Floridi15 At least four inter-related practices converge to create this potential for group harms via mHealth: the social construction of identifying groups, the involvement of third parties and the collection of bystander data, profiling based on data aggregation, and digital divides in mHealth as barriers to equitable benefit.
The Social Construction of Groups
Group-related concerns about AI, machine learning, and similar technologies have focused on bias against established social-political groups, primarily along racial or ethnic lines, a problem thought tractable through technological fixes.Reference Howard, Borenstein and Courtland16 These concerns assume a stability for such groups that is undercut by the experiences of research participants, especially in research involving genomics. For example, ancestry estimation and associated genealogical testing have shown the potential to disrupt racial and ethnic identities, sometimes even serving as a basis for assuming a different racial, ethnic, or cultural identity.Reference Johnston, Shriver, Kittles, Skinner, Winston, Kittles and Turner17 As a further example, individuals have assumed and organized new identities based on genetic markers. Thus, while shifting identities and “identifying group” affiliations are by no means novel effects of technological influences, perhaps more important is that patient/participant engagement through mHealth apps, especially those that inform and establish phenotypic patterns or evidence of diagnosis and include social functionality, is likely to contribute to the formation of socially constructed groups or new classes of shared identity. These identity- or affiliation-based groups may form around patterns of data, use of new platforms, or participation in research facilitated by mHealth apps, and may emerge as subsets within existing groups that already experience stigma and discrimination. While many such groups form intentionally, some individuals will remain unaware of their affiliation, yet may experience the effects of their inclusion, including surveillance, monitoring, and profiling. On the one hand, the imposition of new identifying groups on research participants might stigmatize them, producing tangible harms; on the other hand, the disruption of existing group affiliations might be construed as a form of dignitary disrespect towards those who embrace those identities.
Impact on Third Parties and Bystanders
Camera-based data collection via smartphones has been deployed in health research.Reference Gurrin18 Social media data such as that available through Facebook offers the possibility of capturing communities of peopleReference Khanna19 and mining their data.Reference Breen20 Furthermore, data can be generated either overtly or covertly across a wide array of platforms, wearables, sensors, and the like.21 All of these modalities offer the prospect of collecting data about those around the individual mHealth research volunteers without their knowledge or consent, risking invasions of privacy and the tangible harms that might flow to third parties as a result. Together these developments present the potential for profiling in research that has implications for groups,Reference Garattini22 because third parties and bystanders are likely to belong to the same social groups as the volunteers themselves, whether they are in physical or digital proximity to them.
Profiling as the Product of Data Aggregation
Aggregation of so-called anonymized datasets will yield patterns or correlations that may contribute to over-generalizations about groups and, ultimately, to the profiling of individuals based on these patterns.23 While aggregation is a core function of population science and the resulting profiling is not a new concern, the role of automation, algorithms, and AI in discerning patterns introduces a unique potential for contributing to discrimination and stigma based on existing data biases.Reference Zou, Schiebinger, Char, Shah and Magnus24 Assurances of de-identification and anonymization facilitate research participation and data sharing, but aggregation of such data does not preclude, and may instead contribute to, group-based inferences that produce tangible group harms such as discrimination and stigma.Reference Docherty and Choudhury25 This is where arguments for group privacy rights may be most relevant.Reference Taylor, Floridi and van der Sloot26
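To make this mechanism concrete, the following minimal sketch (in Python, using entirely hypothetical field names and values, not any actual mHealth dataset) illustrates how records stripped of direct identifiers can still support group-level inferences once they are aggregated over a shared attribute; any resulting generalization attaches to everyone who shares that attribute, whether or not they contributed data.

```python
# Minimal illustrative sketch: "de-identified" records with no names or device IDs
# still carry a group-identifying attribute (here, a hypothetical neighborhood code).
# Aggregating over that attribute produces a generalization about an identifying
# group rather than about any single participant.
from collections import defaultdict

records = [
    {"neighborhood": "A", "daily_steps": 2100, "screen_time_hrs": 9.5},
    {"neighborhood": "A", "daily_steps": 2600, "screen_time_hrs": 8.0},
    {"neighborhood": "B", "daily_steps": 9800, "screen_time_hrs": 3.0},
    {"neighborhood": "B", "daily_steps": 11200, "screen_time_hrs": 2.5},
]

# Group-level aggregation: individual records remain nominally anonymous,
# but the output profiles the group as a whole.
totals = defaultdict(lambda: {"steps": 0, "screen": 0.0, "n": 0})
for record in records:
    group = totals[record["neighborhood"]]
    group["steps"] += record["daily_steps"]
    group["screen"] += record["screen_time_hrs"]
    group["n"] += 1

for name, g in totals.items():
    print(f"Neighborhood {name}: mean steps = {g['steps'] / g['n']:.0f}, "
          f"mean screen time = {g['screen'] / g['n']:.1f} h")

# A downstream claim such as "residents of Neighborhood A are sedentary heavy
# phone users" now applies to every resident, including non-participants,
# which is the group-based inference at issue in the text above.
```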
Digital Divides as Barriers to Equitable mHealth Research
While the digital divide is slowly closing, deeper digital divides have developed that might undermine the potential benefits of mHealth research. The socioeconomic divide between users of iOS and Android platform devices is a potential barrier to digital research participation. Most mHealth apps are designed for the general population from the perspective of the health care system,Reference Anderson27 yet safety-net patients and caregivers have been found to have lower capacity to use mHealth tools and to participate in research. Studies also suggest differences in the use of health-related apps across racial and ethnic populations, including in which apps are used.Reference Bender28 We think that digital divides in mHealth research will be a barrier to realizing benefits equitably across groups, and that this constitutes another potential group interest in research.
IV. Developing an Evidentiary Base for Anticipatory Policy Development Regarding Group Interests in mHealth Research
Our review of group interests and the issues they raise for mHealth research provides a number of immediate targets for future empirical research and policy development.
First, more information is needed on the nature of the groups implicated in mHealth research. To what extent will mHealth research actually need to sort participants into “identifying groups” along psychosocially potent demographic lines, or might its disseminated design allow data to be interpreted without those pre-existing political and social lenses? If new identifying or even structured groups are generated from the identification of the kinds of shared traits mHealth research will isolate, what dynamics are likely to drive their emergence?
The past two decades of scholarship on group interests in research suggest that groups and their rights are co-produced or co-constituted along with our understanding of the risks of both group injuries and insults, and of the potential for group benefits, through the development of representative governance systems, the adoption of broad consent, and the proliferation of codes of conduct. For example, attention to group harms dominates discussions of genomic research for good reason, as these harms are grounded in past social injustices to specific racial, ethnic, and particularly indigenous communities. As such, one area of much-needed work is to make a stronger case for group benefits, but again, for whom? For example, analogous to group harms, it is conceivable that groups could experience capacity building that helps them engage in research, address health disparities, or strengthen group self-conceptions. Research into positive group attributes, and even deeper understandings of health-related differences between groups, may be of value. Claims of benefit from research involving mHealth apps to both individuals and groups need to be critically examined, particularly when potential benefits to one group may result in harms to another.
Second, once more data become available about the affiliations that mHealth research helps reinforce, participant-engaged studies can be designed to track and assess the relative threats of the different categories of mHealth group interests we have enumerated above. For example, in discussing the research uses of West African mobile phone data, Taylor recognizes the tension between the ability to respond better to group-based conflict and forced migration and the risk of surveilling and controlling population movement.Reference Taylor29 In doing so, Taylor identifies a critical need to balance the right to be forgotten, or the right to invisibility, against the right to be seen, a balance that will require fine-grained empirical assessments of the local circumstances of prospective research participants.
Another key area for further empirical research will be to assess different approaches to mitigating groups’ risks in mHealth research. For example, in the context of genetic research, a number of practices to protect group interests have been proposed, including community consultation, inclusion of both individual and group risks in consent discussions, prioritizing research that benefits participating communities, and continuing discussion between researchers and community members beyond sample and data collection.Reference Sharp and Foster30 Consistent with these suggestions, the potential for group harms in research has motivated the use of Community-Based Participatory Research (CBPR) approaches,Reference Boyer31 in particular engaging communities in research designReference Weijer32 and broader community partnership.Reference Winickoff33 The relevance of any of these potential approaches to mHealth research remains to be investigated.
Strategies of inclusion, engagement, consultation, and community-based research are all critical to avoid widening this potential disparity, yet more fundamental is determining how to strike a balance between group and individual benefits, including group privacy. Such determinations will inform practical issues, such as what constitutes collecting the minimum data via a mobile device or app in the context of group as well as individual interests. For example, in discussing the implications of West African mobile phone data, Floridi and Taylor argue for consideration of group privacy as a right, specifically because of the potential for group harms from Big Data.36 Ultimately, they argue that there are at least four group interests at stake in group privacy.37 These include two “retrospective” interests applicable to existing groups: a negative interest in not being discriminated against and a positive interest in securing and protecting minority rights. Two additional “prospective” or forward-looking interests include an interest in not having group identities defined and imposed on research participants by others and the correlative interest in the self-definition of any groups that research participants may wish to embrace. In our opinion, this articulation of interests in group ontology places front-and-center the potential harms and benefits at stake in the big data and precision medicine research to which mHealth may become instrumental.
As the volume of research data generated by mHealth apps becomes nearly boundless, inquiry becomes more inductive, relying on pattern-recognizing algorithms that will render specific “informed” consent for research unachievable. As such, some are turning to deliberative governance involving representative stakeholders to overcome the limits of individual consent.Reference Koenig, Vayena and Blasimme38 Yet an important limit to such approaches is the potential for missing perspectives or representatives of a particular group or class of persons. A critical topic for ELSI inquiry is the quality of representation in these governance mechanisms, as well as the consequences of representation.Reference Majumder39 At a minimum, disclosure of potential group harms should not only be included in the process of informed consent but also be integrated into deliberative approaches to research governance. mHealth research, in contrast to most other biomedical research, may be better suited to building in the continuous feedback loops required by adaptive or shared approaches to research governance.Reference O’Doherty, Pratt and Hyder40 In particular, the communication possibilities afforded by mHealth research lend themselves to the model of “dynamic governance,” in which new collaborators and participants are continuously involved in developing and changing the scope, priorities, and methods of a given research project.Reference Juengst and Meslin41
Finally, perhaps the most pressing policy question in mHealth research may be one of equity. Who is likely to benefit from, versus be harmed by, unregulated mHealth research? Those most likely to benefit are those integral to the production of a knowledge-based health economy driven in part by, and dependent on, mHealth and related eHealth trends. More tangibly, that translates into communities of patients engaged in patient-led research, the quantified health movement, and citizen science. Yet looking through the lens of groups suggests that communities who may be marginalized or face barriers to such active engagement with their health are at greatest risk of missing out on the benefits while experiencing group harms.
Conclusion: Future Directions
An equity-based framework, including strategies and practices to address ELSI issues affecting vulnerable populations, is central to the conduct of mHealth research because these group-level questions and issues revolve around differences in power across society.Reference Ali42 This highlights the need to consider what is required by the principle of justice when understood as not only the fair distribution of goods (e.g., group benefits and harms) but also the recognition of groups (e.g., group wrongs or insults).Reference Fraser and Honneth43 In this context, we suggest that ELSI scholars of mHealth and related research should learn from the software/technology lifecycle how to “iterate” on equity. By this we mean maintaining vigilance through an iterative process of ELSI surveillance: a cycle of identifying group formation and its implications for existing and emerging groups, engaging and consulting continuously as new technological developments arise, and reapportioning resources to new issues and strategies to head off inequities. One crucial iterative practice may be to monitor the development of new digital divides and to invest in and study strategies for developing capacity to engage with mHealth research, as one dimension of a needed larger investment in community-led research (like patient-led research and “citizen science”). Promotion of participant recruitment and data collection using mHealth apps in vulnerable populations should anticipate the range of not only individual benefits and harms but also those that develop between individuals and society writ large. Central to data inclusion and community-led research initiatives is the recognition that such efforts attempt to bridge a cultural digital divide between communities with varying degrees of technological familiarity, facility, and trust, a divide that requires significant bi-directional capacity building.
Given the central role of mHealth applications as tools for research and health care, expanding ELSI inquiry to understand the role of group harms, benefits, and rights will be key to understanding the implications of mHealth in the context of other emerging health technologies.
Acknowledgments
The authors would like to thank members of the ELSI of Unregulated mHealth Workgroup and members of the Center for Genomics and Healthcare Equality Group Harms and Benefits Workgroup (P50-HG003374; PI: Burke). Preparation of this article was supported by the National Human Genome Research Institute (NHGRI), including: K99R00 HG007076 (JHY); P50-HG004488 (ETJ).
Research on this article was funded by the following grant: Addressing ELSI Issues in Unregulated Health Research Using Mobile Devices, No. 1R01CA20738-01A1, National Cancer Institute, National Human Genome Research Institute, and Office of Science Policy and Office of Behavioral and Social Sciences Research in the Office of the Director, National Institutes of Health, Mark A. Rothstein and John T. Wilbanks, Principal Investigators.