
Do Groups Have Moral Standing in Unregulated mHealth Research?

Published online by Cambridge University Press:  01 January 2021


Abstract

Biomedical research using data from participants’ mobile devices borrows heavily from the ethos of the “citizen science” movement, by delegating data collection and transmission to its volunteer subjects. This engagement gives volunteers the opportunity to feel like partners in the research and retain a reassuring sense of control over their participation. These virtues, in turn, give both grass-roots citizen science initiatives and institutionally sponsored mHealth studies appealing features to flag in recruiting participants from the public. But while grass-roots citizen science projects are often community-based, mHealth research ultimately depends on the individuals who own and use mobile devices. This inflects the ethos of mHealth research towards a celebration of individual autonomy and empowerment, at the expense of its implications for the communities or groups to which its individual participants belong. But the prospects of group harms — and benefits — from mHealth research are as vivid as they are in other forms of data-intensive “precision health” research, and will be important to consider in the design of any studies using this approach.

Type
Symposium Articles
Copyright
Copyright © American Society of Law, Medicine and Ethics 2020

Introduction

For the most part, discussions of the ethical, legal, and social implications (ELSI) of mHealth research have been preoccupied with the welfare and interests of individual participants — i.e., those who carry the devices running the programs that provide their health data to researchers. Since the ultimate benefits of mHealth research are so often framed in terms of “precision” health care capable of “personalizing” health care to individuals, this preoccupation with individual interests is understandable. But other forms of precision medicine research, like population genomic variation studies, have already learned that the individuals donating their data (whether DNA or downloads) are not the only parties whose interests are implicated in the responsible design of data-intensive research. Wherever such research is designed to allow generalizations to be drawn about groups of people beyond the individual data donors, the interests of those groups also become important to consider. In this paper, our thesis is that consideration of potential harms or benefits to groups is just as important in mHealth research as it is in genomics: i.e., that groups do have moral standing in this context, despite the individualistic ethos that flavors this field. To defend this (counter-intuitive) thesis, we first examine the growth of concern for group interests in discussions of biomedical research. Next, we analyze several different accounts of groups’ interests in research that have been proposed in the research ethics literature, to provide a framework for our mHealth research analysis. We then use this framework to demonstrate how mHealth research raises four sets of group-related issues. Finally, we address future directions for empirical research and policy development needed to address group harms, benefits and rights that arise from mHealth research.

I. The Growth of Concern for Group Interests in Biomedical Research

Concern for the protection of “vulnerable populations” in biomedical research and the equitable distribution of research risks and benefits across different population groups has existed since the 1978 Belmont Report on the ethics of research with human subjects.Reference Levine1 These concerns were galvanized in the early 1990s by the first efforts to map human genomic diversity across the world’s indigenous populations, which underscored how tacit colonialism can exacerbate the risks of group harm, and provoked the assertion of a range of claims to group consent, control, and “benefit sharing” in response.Reference Reardon, Juengst, Fong, Braun and Chang2 In recent years, these concerns have again been driven to new levels of scrutiny by growing biomedical interest in the role of population genomic variation in “precision medicine” research and the recognition of the racial and ethnic biases that infect current genomic databases.Reference Beskow, Hammack and Brelsford3 In a national interview study on the benefits and harms of precision medicine research, involving 60 thought leaders from fields ranging from genome research to the concerns of historically disadvantaged populations, important cross-cutting themes included the return of research results, harm to socially identifiable groups, and the value-dependent nature of many benefits and harms.4

The recognition that the risk of group harms contributes to research distrust and is a potential barrier to participation in precision medicine research marks an important watershed in research ethics. Unlike the Belmont Report’s concern to protect vulnerable groups against exploitation by researchers, we now worry about excluding groups from research that might benefit them. These benefits are in part, of course, the health benefits that might flow to their individual members from generalizations about their groups. For example, return of results to research participants from underrepresented populations could provide medically actionable direct benefit and perhaps, in aggregate, provide a measurable benefit to segments of the population. But just as important seems to be the social benefits of being represented equally with other groups in the research enterprise, whatever the consequences of that might be for their members’ individual welfare. The clearest instance comes from early claims that consideration of race and then genetic ancestry in research would enable genetics to address health disparities and promote equity in genomic research.Reference Burchard5 Despite numerous critiques,Reference Meagher, Sabatello and Appelbaum6 such group benefits continue to be expected from, some might say even drive, initiatives such as the Precision Medicine Initiative’s All of Us research program.


II. Unpacking the Concept of “Group Interests”

When the interests of groups are discussed in the research ethics literature, they are often framed as concerns about “group harms” and their correlative benefits. At its simplest, the argument is that in some research contexts, like studies of human genetic variation or community-based epidemiology, those put at risk of harm are not just the individual study participants, but also all other members of the human groups to which the research results are generalized. Obviously, the harms at stake here are not the kinds of physical harms that phase one drug study volunteers might incur, since most group members will not even be directly involved in the particular studies being discussed. What sorts of harms might they be, and how might they be relevant to mHealth research?

The literature on group harms in other research contexts provides several good starting points for thinking about this question. For example, Daniel HausmanReference Hausman and Hausman7 distinguishes between harms to “structured groups” like tribal nations or municipal communities and to “identifying groups” like racial or ethnic minorities. Structured groups are usually groups that feature some form of corporate or political organization, and their membership comprises those counted as “citizens” under their jurisdiction. These groups typically have mechanisms for articulating and defending their collective interests. At the same time, their sovereignty, for the most part, only extends to the interests of their citizenry, no matter what biological or cultural ties others may have with them. Identifying groups, by contrast, are groups created by shared ascribed identifiers, whether or not their members identify themselves in those terms.Reference Juengst8 Except at the micro-level, such as in individual families, these groups rarely feature the social organization of structured groups, and cannot easily be voluntarily joined or left, because they are created by labels over which those labeled have little control. But with their shared identities, those affiliated with identifying groups gain shared interests, since scientific claims that impugn the social value of that label put related interests at risk for them all.

Hausman’s distinction is helpful in thinking about group interests in mHealth research because it suggests that to the extent that mHealth studies are individualized rather than community-based, their findings will implicate identifying groups more readily than structured groups, mostly raising the risks of stigmatization and stereotyping based on specific shared traits rather than challenges to organized communities. For instance, group harms are more likely to be experienced by ethnic minority groups because these groups are already stigmatized or discriminated against, especially when the research topic has normative implications.Reference de Vries9

Hausman goes further to distinguish harms that result from research processes from those that flow from research outcomes.10 But in both cases, his analysis of group interests assumes a consequentialist conception of “harm” as a loss that can be measured in degrees and potentially outweighed by counter-balancing benefits. In practical debates over group harms, however, it is common to see some “harms” held out as trump cards, as violations of group members’ rights or affronts to the group’s dignity that supersede any calculations of relative losses or gains. Joan McGregor, for example, distinguishes between the “tangible” harms that Hausman explores and “dignitary” group harms in the research context.Reference McGregor and McGregor11 She defines dignitary harms as those that “undermine the perceived value and worth of the group in the eyes of others and the group itself,” whether or not they lead to tangible losses in welfare or opportunity.12 Obviously, dignitary harms can lead to tangible harms, but McGregor’s point is to emphasize that sometimes the aphorism “no harm, no foul” is inadequate to capture the offensiveness of some research generalizations. Others have made similar efforts to draw attention to these dignity-based concerns by contrasting “material” group harms with “cultural” or “spiritual” group harms to capture similar distinctions between these different kinds of overlapping research risks.Reference Tsosie13

One way to sharpen McGregor’s distinction even further is to consider the distinction made by legal philosophers between being harmed and being wronged.Reference Simester and Von Hirsch14 On this formulation, “harms” are injuries that leave a person worse off than they were before in measurable ways (mild/severe, permanent/remediable, etc.), and “wrongs” are insults that are offensive in themselves whether or not they actually have any bad consequences. Both concern people’s interests, but harms tend to be losses to people’s welfare, while wrongs are attacks on their dignity, autonomy, and identity. Thus, one can be wronged even when it has no impact on one’s welfare (like a “harmless” invasion of privacy), and some harms are not blameworthy as moral wrongs (like uncontrollable accidents or inadvertent contagion). Often, of course, the two concepts travel together: the victims of theft are wronged by the violation of their property rights and harmed by the loss of their goods. But while harms can be objectively outweighed by benefits, as when we tolerate the noxious effects of chemotherapy to gain a cancer remission, wrongs can only be absolved by forgiveness. The “dignitary harms” that are used as trump cards in debates over group interests in research are often better understood not as a form of harm at all, but as a way in which a particular group has been wronged.

Recasting dignitary harms as wrongs is important, because it helps bring into focus which of Hausman’s kinds of group harm will be most important to consider in mHealth research. When mHealth research aggregates data from individual device owners, the groups it creates can only be identifying groups: that is, sets of participants who share some identifying feature in common, like race, gender, age or social networks. But in the absence of tangible harms, identifying groups rarely have the moral standing to “press charges” of being wronged. Because they are unstructured, identifying groups have no voice — or rather, they have too many voices, with no way to adjudicate between them. Thus, overgeneralizing to all members of an identifying group is an epistemic mistake, and, if it is also stigmatizing, it may yield tangible harms for group members, but it violates no rights: Without the corporate moral agency that structured groups are given by their members, identifying groups literally cannot be “disrespected.”

The upshot of this conceptual unpacking is that in anticipating the impact of individualized mHealth research on group interests, we might expect to look first and foremost to its risks for creating the tangible harms of stigmatization for members of identifying groups.

III. Group Interests in Unregulated mHealth Research

Group-level harms from mHealth research have been articulated as the potential for discrimination and stigmatization of identifying groups based on research inferences made using heterogeneous sources of data that result in the profiling of individuals on the basis of group affiliation.Reference Mittelstadt and Floridi15 At least four inter-related practices converge to create this potential for group harms via mHealth: the social construction of identifying groups, the collection of data about third parties and bystanders, profiling based on data aggregation, and digital divides in mHealth as barriers to equitable benefit.

The Social Construction of Groups

Group-related concerns about AI and machine learning have focused on bias against established social-political groups, primarily along racial or ethnic lines, a problem thought tractable through technological fixes.Reference Howard, Borenstein and Courtland16 These concerns assume a stability for such groups that is undercut by the experiences of research participants, especially in research involving genomics. For example, ancestry estimation and associated genealogical testing have shown the potential to disrupt racial and ethnic identities, sometimes even serving as a basis for assuming a different racial, ethnic, or cultural identity.Reference Johnston, Shriver, Kittles, Skinner, Winston, Kittles and Turner17 As a further example, individuals have assumed and organized new identities based on genetic markers. While shifting identities and “identifying group” affiliations are by no means novel to technological influences, perhaps more important is that patient/participant engagement through mHealth apps, especially apps that establish phenotypic patterns or evidence of diagnosis and include social functionality, is likely to contribute to the formation of socially constructed groups or new classes of shared identity. These identity- or affiliation-based groups may form around patterns of data, use of new platforms, or participation in research facilitated by mHealth apps, and may form as subsets within existing groups that already experience stigma and discrimination. While many groups form intentionally, some individuals will remain unaware of their affiliation, yet may experience the effects of their inclusion, including surveillance, monitoring, and profiling.
On the one hand, the imposition of new identifying groups on research participants might stigmatize them with tangible harms; while on the other hand, the disruption of existing group affiliations might be construed as a form of dignitary disrespect towards those who embrace those identities.

Impact on Third Parties and Bystanders

Camera-based data collection via smartphones has been deployed in health research.Reference Gurrin18 Social media data such as that available through Facebook offers the possibility of capturing communities of peopleReference Khanna19 and mining their data.Reference Breen20 Furthermore, data can be generated either overtly or covertly across a wide array of platforms, wearables, sensors, and the like.21 All of these modalities offer the prospect of collecting data about those around the individual mHealth research volunteers without their knowledge or consent, risking invasions of privacy and the tangible harms that might flow to third parties as a result. Together these developments present the potential for profiling in research that has implications for groupsReference Garattini22 because third parties/bystanders are more likely to belong to the same social groups, whether they are in physical or digital proximity.

Profiling as the Product of Data Aggregation

Aggregation of so-called anonymized datasets will yield patterns or correlations in data that may contribute to over-generalizations about groups and ultimately profiling of individuals based on these patterns.23 While aggregation is a core function of population science and resultant profiling is not a new concern, the role of automation, algorithms, and AI in discerning patterns introduces a unique potential for contributing to discrimination and stigma based on existing data biases.Reference Zou, Schiebinger, Char, Shah and Magnus24 Assurances of de-identification and anonymization facilitate research participation and data sharing, but aggregation of such data does not preclude, and instead may contribute to, group-based inferences that contribute to tangible group harms such as discrimination and stigma.Reference Docherty and Choudhury25 This is where arguments for group privacy rights may be most relevant.Reference Taylor, Floridi and van der Sloot26

Digital Divides as Barriers to Equitable mHealth Research

While the digital divide is slowly closing, deeper digital divides have developed that might undermine the potential benefits of mHealth research. The socioeconomic divide between use of iOS and Android platform devices is a potential barrier to digital research participation. Most mHealth apps are designed for the general population from the perspective of the health care system,Reference Anderson27 yet safety net patients and caregivers have been found to have lower capacity to use mHealth tools and participate in research. Studies also suggest differences in the use of health-related apps between racial and ethnic populations, including which apps are used.Reference Bender28 We think that digital divides in mHealth research will be a barrier to realizing benefits equitably across groups, and that this constitutes another potential group interest in research.

IV. Developing an Evidentiary Base for Anticipatory Policy Development Regarding Group Interests in mHealth Research

Our review of group interests and the issues they raise for mHealth research provides a number of immediate targets for future empirical research and policy development.

First, more information is needed on the nature of the groups implicated in mHealth research. To what extent will mHealth research actually need to sort participants into “identifying groups” along psychosocially potent demographic lines, or might its disseminated design actually allow data to be interpreted without those pre-existing political and social lenses? If new identifying or even structured groups are generated from the identification of the kinds of shared traits mHealth research will isolate, what dynamics are likely to drive their emergence?

The past two decades of scholarship on group interests in research suggest that groups and their rights are co-produced or co-constituted along with our understanding of the risks of both group injuries and insults, and of the potential for group benefits, through the development of representative governance systems, the adoption of broad consent, and the proliferation of codes of conduct. For example, attention to group harms dominates discussions of genomic research for good reason, as these harms are grounded in past social injustices to specific racial, ethnic, and particularly indigenous communities. As such, one area of much needed work is to make a stronger case for group benefits, but again, for whom? For example, analogous to group harms, it is conceivable that groups could experience capacity building to engage in research, address health disparities, or strengthen group self-conceptions. Research into positive group attributes and even deeper understandings of health-related differences between groups may be of value. Claims of benefit from research involving mHealth apps to both individuals and groups need to be critically examined, particularly when potential benefits to one group may result in harms to another.

Second, once more data become available about the affiliations that mHealth research helps reinforce, participant-engaged studies can be designed to track and assess the relative threats of the different categories of mHealth group interests we have enumerated above. For example, in discussing the research uses of West African mobile phone data, Taylor recognizes the tension between the ability to better respond to group-based conflict and forced migration and the risk of surveilling and controlling population movement.Reference Taylor29 In doing so, Taylor identifies a critical need to balance the right to be forgotten, or the right to invisibility, against the right to be seen, which will require fine-grained empirical assessments of the local circumstances of prospective research participants.

Strategies of inclusion, engagement, consultation, and community-based research are all critical to avoid widening this potential disparity, yet more fundamental is determining how to strike a balance between group and individual benefits, including group privacy. Such determinations will inform practical issues, such as what constitutes collecting the minimum data via mobile device/app in the context of group as well as individual interests.

Another key area for further empirical research will be to assess different approaches to mitigating groups’ risks in mHealth research. For example, in the context of genetic research, a number of practices to protect group interests have been proposed, including community consultation, inclusion of both individual and group risks in consent discussions, prioritizing research that benefits participating communities, and continuing discussion between researchers and community members beyond sample and data collection.Reference Sharp and Foster30 Consistent with these suggestions, the potential for groups harms in research has motivated the use of Community-based, Participatory Research (CBPR) approaches to research,Reference Boyer31 in particular engaging communities in research designReference Weijer32 and broader community partnership.Reference Winickoff33 The relevance of any of these potential approaches to mHealth research remains to be investigated.

For example, in discussing the implications of West African mobile phone data, Floridi and Taylor argue for consideration of group privacy as a right, specifically because of the potential for group harms from Big Data.36 Ultimately, they argue that there are at least four group interests in group privacy.37 These include two “retrospective” interests applicable to existing groups: a negative interest in not being discriminated against and a positive interest in securing and protecting minority rights. Two additional “prospective,” forward-looking interests are an interest in not having group identities defined and imposed on research participants by others and a correlative interest in the self-definition of any groups that research participants may wish to embrace. In our opinion, this articulation of interests in group ontology places front and center the potential harms and benefits at stake in the big data and precision medicine research to which mHealth may become instrumental.

As the volume of data for research generated by mHealth apps becomes nearly boundless, inquiry becomes more inductive with the use of pattern-recognizing algorithms that will render specific “informed” consent for research unachievable. As such, some are turning to deliberative governance involving representative stakeholders to overcome the limits of individual consent.Reference Koenig, Vayena and Blasimme38 Yet an important limit to such approaches is the potential for missing perspectives or representatives of a particular group/class of persons. A critical topic for ELSI inquiry is the quality of representation in these governance mechanisms as well as the consequences of representation.Reference Majumder39 At a minimum, disclosure of potential group harms should not only be included in the process of informed consent, but also integrated into deliberative approaches to research governance. mHealth research, in contrast to most other biomedical research, may be better suited to build in the continuous feedback loops required by adaptive or shared approaches to research governance.Reference O’Doherty, Pratt and Hyder40 In particular, the communication possibilities afforded by mHealth research lend themselves to the model of “dynamic governance,” in which new collaborators and participants are all continuously involved in developing and changing the scope, priorities, and methods of a given research project.Reference Juengst and Meslin41

Finally, perhaps the most pressing policy question in mHealth research is one of equity. Who is likely to benefit from, versus be harmed by, unregulated mHealth research? Those most likely to benefit are those integral to the production of a knowledge-based health economy driven in part by mHealth and related eHealth trends. More tangibly, that translates into communities of patients engaged in patient-led research, the quantified health movement, and citizen science. Yet looking through the lens of groups suggests that communities that are marginalized or face barriers to such active engagement with their health are at greatest risk of missing out on the benefits while experiencing group harms.

Conclusion: Future Directions

An equity-based framework, including strategies and practices to address ELSI issues affecting vulnerable populations, is central to the conduct of mHealth research because these group-level questions and issues revolve around differences in power across society.Reference Ali42 This highlights the need to consider what is required by the principle of justice when understood not only as the fair distribution of goods (e.g., group benefits and harms) but also as the recognition of groups (e.g., group wrongs or insults).Reference Fraser and Honneth43 In this context, we suggest that ELSI scholars of mHealth and related research should learn from the software/technology lifecycle how to “iterate” on equity. By this we mean maintaining vigilance through an iterative cycle of ELSI surveillance: identifying group formation and its implications for existing and emerging groups, continuing engagement and consultation commensurate with new technological developments, and reapportioning resources to new issues and strategies to head off inequities. One crucial iterative practice may be to monitor the development of new digital divides and to invest in and study strategies to develop capacity to engage with mHealth research, as one dimension of a needed larger investment in community-led research (like patient-led research and “citizen science”). Promotion of participant recruitment and data collection using mHealth apps in vulnerable populations should anticipate the range of benefits and harms not only to individuals but also those that develop between individuals and society writ large. Central to data inclusion and community-led research initiatives is the recognition that such efforts attempt to bridge a cultural digital divide between communities with varying degrees of technological familiarity, facility, and trust, which requires significant bi-directional capacity building.

Given the central role of mHealth applications as tools for research and health care, expanding ELSI inquiry to understand group harms, benefits, and rights will be key to understanding mHealth’s implications in the context of other emerging health technologies.

Acknowledgments

The authors would like to thank members of the ELSI of Unregulated mHealth Workgroup and members of the Center for Genomics and Healthcare Equality Group Harms and Benefits Workgroup (P50-HG003374; PI: Burke). Preparation of this article was supported by the National Human Genome Research Institute (NHGRI), including: K99/R00 HG007076 (JHY); P50-HG004488 (ETJ).

Research on this article was funded by the following grant: Addressing ELS Issues in Unregulated Health Research Using Mobile Devices, No. 1R01CA20738-01A1, National Cancer Institute, National Human Genome Research Institute, and Office of Science Policy and Office of Behavioral and Social Sciences Research in the Office of the Director, National Institutes of Health, Mark A. Rothstein and John T. Wilbanks, Principal Investigators.

Footnotes

JHY is a member of Sage Bionetwork’s Scientific Advisory Board. The authors have no conflicts of interest to disclose.

References

Levine, C. et al., “The Limitations of ‘Vulnerability’ as a Protection for Human Research Participants,” American Journal of Bioethics 4, no. 3 (2004): 44-49.
Reardon, J., Race to the Finish: Identity and Governance in an Age of Genomics (Princeton: Princeton University Press, 2005); Juengst, E.T., “Groups as Gatekeepers to Genomic Research: Conceptually Confusing, Morally Hazardous, and Practically Useless,” Kennedy Institute of Ethics Journal 8, no. 2 (1998): 183-200; Fong, M., Braun, K.L., and Chang, R.M., “Native Hawaiian Preferences for Informed Consent and Disclosure of Results from Genetic Research,” Journal of Cancer Education 21, Supp. 1 (2006): S47-52.
Beskow, L.M., Hammack, C.M., and Brelsford, K.M., “Thought Leader Perspectives on Benefits and Harms in Precision Medicine Research,” PLoS ONE 13 (2018): e0207842.
Burchard, E.G., “The Importance of Race and Ethnic Background in Biomedical Research and Clinical Practice,” New England Journal of Medicine 348, no. 12 (2003): 1170-1175.
Meagher, K.M. et al., “Precisely Where Are We Going? Charting the New Terrain of Precision Prevention,” Annual Review of Genomics and Human Genetics 18 (2017): 369-387; Sabatello, M. and Appelbaum, P.S., “The Precision Medicine Nation,” Hastings Center Report 47, no. 4 (2017): 19-29.
Hausman, D., “Protecting Groups from Genetic Research,” Bioethics 22, no. 3 (2008): 157-165; Hausman, D.M., “Group Risks, Risks to Groups, and Group Engagement in Genetics Research,” Kennedy Institute of Ethics Journal 17, no. 4 (2007): 351-369.
Juengst, E.T., “FACE Facts: Why Human Genetics Will Always Provoke Bioethics,” Journal of Law, Medicine & Ethics 32, no. 2 (2004): 267-275.
de Vries, J. et al., “Investigating the Potential for Ethnic Group Harm in Collaborative Genomics Research in Africa: Is Ethnic Stigmatisation Likely?” Social Science & Medicine 75, no. 8 (2012): 1400-1407.
See Hausman, supra note 7.
McGregor, J., “Racial, Ethnic, and Tribal Classifications in Biomedical Research with Biological and Group Harm,” American Journal of Bioethics 10 (2010): 23-24; McGregor, J.L., “Population Genomics and Research Ethics with Socially Identifiable Groups,” Journal of Law, Medicine & Ethics 35 (2007): 356-370.
Id., at 24.
Tsosie, R., “Cultural Challenges to Biotechnology: Native American Genetic Resources and the Concept of Cultural Harm,” Journal of Law, Medicine & Ethics 35 (2007): 396-411.
Simester, A.P. and Von Hirsch, A., Crimes, Harms, and Wrongs: On the Principles of Criminalisation (Oxford and Portland: Hart Publishing, 2011).
Mittelstadt, B.D. and Floridi, L., “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts,” Science and Engineering Ethics 22, no. 2 (2016): 303-341.
Howard, A. and Borenstein, J., “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics 24, no. 5 (2018): 1521-1536; Courtland, R., “Bias Detectives: The Researchers Striving to Make Algorithms Fair,” Nature 558, no. 7710 (2018): 357-360.
Johnston, J., “Resisting a Genetic Identity: The Black Seminoles and Genetic Tests of Ancestry,” Journal of Law, Medicine & Ethics 31, no. 2 (2003): 262-271; Shriver, M.D. and Kittles, R.A., “Genetic Ancestry and the Search for Personalized Genetic Histories,” Nature Reviews Genetics 5, no. 8 (2004): 611-618; Skinner, D., “Racialized Futures: Biologism and the Changing Politics of Identity,” Social Studies of Science 36, no. 3 (2006): 459-488; Winston, C.E. and Kittles, R., “Psychological and Ethical Issues Related to Identity and Inferring Ancestry of African Americans,” in Turner, T., ed., Biological Anthropology and Ethics: From Repatriation to Genetic Identity (Albany: State University of New York Press, 2005): at 209-229.
Gurrin, C. et al., “The Smartphone as a Platform for Wearable Cameras in Health Research,” American Journal of Preventive Medicine 44, no. 3 (2013): 308-313.
Khanna, A.S. et al., “Using Partially-Observed Facebook Networks to Develop a Peer-Based HIV Prevention Intervention: Case Study,” Journal of Medical Internet Research 20, no. 9 (2018): e11652.
Breen, N. et al., “Translational Health Disparities Research in a Data-Rich World,” American Journal of Public Health 109, Supp. 1 (2019): S41-S42.
See Mittelstadt and Floridi, supra note 15.
Garattini, C. et al., “Big Data Analytics, Infectious Diseases and Associated Ethical Impacts,” Philosophy & Technology 32, no. 1 (2019): 69-85.
Zou, J. and Schiebinger, L., “AI Can Be Sexist and Racist — It’s Time to Make It Fair,” Nature 559, no. 7714 (2018): 324-326; Char, D.S., Shah, N.H., and Magnus, D., “Implementing Machine Learning in Health Care: Addressing Ethical Challenges,” New England Journal of Medicine 378, no. 11 (2018): 981-983.
See Mittelstadt and Floridi, supra note 15; Docherty, A., “Big Data — Ethical Perspectives,” Anaesthesia 69, no. 4 (2014): 390-391; Choudhury, S. et al., “Big Data, Open Science and the Brain: Lessons Learned from Genomics,” Frontiers in Human Neuroscience 8 (2014): 239.
Taylor, L., Floridi, L., and van der Sloot, B., Group Privacy: New Challenges of Data Technologies (New York: Springer International Publishing, 2017).
Anderson-Lewis, C. et al., “mHealth Technology Use and Implications in Historically Underserved and Minority Populations in the United States: Systematic Literature Review,” JMIR mHealth and uHealth 6, no. 6 (2018): e128.
Bender, M.S. et al., “Digital Technology Ownership, Usage, and Factors Predicting Downloading Health Apps among Caucasian, Filipino, Korean, and Latino Americans: The Digital Link to Health Survey,” JMIR mHealth and uHealth 2, no. 4 (2014): e43.
Taylor, L., “No Place to Hide? The Ethics and Analytics of Tracking Mobility Using Mobile Phone Data,” Environment and Planning D: Society and Space 34, no. 2 (2016): 319-336.
See Sharp, R.R. and Foster, M.W., “Grappling With Groups: Protecting Collective Interests in Biomedical Research,” Journal of Medicine and Philosophy 32, no. 4 (2007): 321-337.
Boyer, B.B. et al., “Ethical Issues in Developing Pharmacogenetic Research Partnerships with American Indigenous Communities,” Clinical Pharmacology & Therapeutics 89, no. 3 (2011): 343-345.
Weijer, C., “Benefit-Sharing and Other Protections for Communities in Genetic Research,” Clinical Genetics 58, no. 5 (2000): 367-368.
Winickoff, D.E., “Partnership in U.K. Biobank: A Third Way for Genomic Property?” Journal of Law, Medicine & Ethics 35, no. 3 (2007): 440-456.
See Sharp and Foster, supra note 30.
Santos, L., “Genetic Research in Native Communities,” Progress in Community Health Partnerships 2, no. 4 (2008): 321-327.
See Taylor, Floridi, and van der Sloot, supra note 26.
Id., at 286.
Koenig, B.A., “Have We Asked Too Much of Consent?” Hastings Center Report 44, no. 4 (2014): 33-34; Vayena, E. and Blasimme, A., “Biomedical Big Data: New Models of Control Over Access, Use and Governance,” Journal of Bioethical Inquiry 14, no. 4 (2017): 501-513.
Majumder, M.A. et al., “The Role of Participants in a Medical Information Commons,” Journal of Law, Medicine & Ethics 47, no. 1 (2019): 51-61.
O’Doherty, K.C. et al., “From Consent to Institutions: Designing Adaptive Governance for Genomic Biobanks,” Social Science & Medicine 73, no. 3 (2011): 367-374; Pratt, B. and Hyder, A.A., “Governance of Transnational Global Health Research Consortia and Health Equity,” American Journal of Bioethics 16, no. 10 (2016): 29-45.
Juengst, E.T. and Meslin, E.M., “Sharing with Strangers: Governance Models for Borderless Genomic Research in a Territorial World,” Kennedy Institute of Ethics Journal 29, no. 1 (2019): 67-95.
Ali, J. et al., “Ethics Considerations in Global Mobile Phone-Based Surveys of Noncommunicable Diseases: A Conceptual Exploration,” Journal of Medical Internet Research 19, no. 5 (2017): e110.
Fraser, N. and Honneth, A., Redistribution or Recognition?: A Political-Philosophical Exchange (London and New York: Verso, 2003).