Research with human subjects is central to contemporary life. Read the newspaper, buy a new product, swallow a new pill, and you are consuming knowledge gathered through experiments, interviews, surveys, focus groups, or observations involving people who live next door or on the other side of the planet. Some elements of this research are highly formalized, like the four phases of clinical trials required for new drugs or medical treatment. Others are indistinguishable from daily life, as when an anthropologist interested in smoking talks to his mechanic, who is taking a cigarette break.Footnote 1
Some research—medical experiments in particular—can kill. Other research can hurt, inflicting harms ranging from embarrassment to jail sentences to chronic illness. And some research, even though not materially harmful, may be ethically shocking. What hospital patient would like to wake up and be told that he had, without his consent, been subjected to an experimental procedure that had no chance of benefiting him, even if it had done no lasting harm?
Accordingly, professional associations and governments have sought for some time to control research. When and why this effort began is part of the story told by the authors in this issue, but by the start of the twenty-first century, the regulation of research with human subjects had become a massive enterprise. Government regulators and other national bodies remain rather small, but they oversee large networks of local ethics committees—known variously as institutional review boards, research ethics boards, or research ethics committees—charged with reviewing individual research projects.
Though created with the best of intentions, these committees have proven controversial. As sociologist Sydney Halpern has noted, “Human subjects oversight in the US provides a striking counter-example to the often assumed flexibility of hybrid regulation.” In the mid-1990s, U.S. regulators began cutting research funds to universities for relatively minor violations of rules, leading to complaints even from some of the creators of the regulatory system.Footnote 2 Other countries have adopted U.S. models, leading to similar structures—and similar complaints—there.
Understandings of history both shape and sustain this regulatory structure. Though many policymakers acted in response to recent events, at times they shaped rules to prevent the recurrence of abuses from the more distant past. In the United States, for example, the current regulatory climate owes a great deal to the work, in the 1990s, of the Advisory Committee on Human Radiation Experiments. Though U.S. President Bill Clinton established the committee in response to reports of abuses that had taken place decades earlier, it and other official bodies used those abuses to justify a stricter regime of ethics review.Footnote 3 Proponents of regulation continue to look back to the 1960s for evidence of the need for strict ethics review.Footnote 4
Accounts of past scandals are also a key element in the training given to researchers who wish to work with human subjects and committee members who oversee them. As Maureen Fitzgerald has noted, “Most texts on research ethics and many public documents related to the ethics review of research” include an “obligatory history [involving] the presentation of a series of cases that highlight periods of ethical (or moral) crisis in society. The same cases are cited repeatedly and this body of cases is relatively small in relation to the amount of research conducted.”Footnote 5 Yet ethics committees around the world rely on this small body of historical knowledge.
Given the importance of history to the regulation of research with human subjects, it would be nice to have a sturdy, scholarly foundation for the stories that are told. Fortunately, many scholars—both historians and others with a good sense of history—have made a start at that, especially when it comes to the most famous cases of the mid-twentieth century, such as the Nazi medical experiments, the Tuskegee Syphilis Study, the Willowbrook hepatitis experiments, and the research underlying Tearoom Trade.Footnote 6 Yet if we are to have a full history of human subjects research and the policies governing it, we must move beyond these cases to examine less prominent examples of troublesome research. And we must consider what happened during the intervals between scandals, as policymakers—both lawmakers and professional leaders—debated the meaning of various events and sought to control behavior in the future.
The articles in this issue take on these tasks. Robert Dingwall and Vienna Rozelle challenge the notion that the governance of human subjects research began with the Nazi Doctors trial of 1946–47, in which twenty-three doctors and administrators faced charges of war crimes and crimes against humanity. Sixteen were found guilty, and seven executed.Footnote 7 The 1947 verdict included ten “basic principles” for medical experiments that have become known as the Nuremberg Code, and it is this code that is often the starting place for the obligatory history of ethics training. Dingwall and Rozelle show that comparable ideas dated back to the nineteenth century, but that German lawmakers and ethicists had failed to persuade physicians and scientists of their legitimacy. The result was a failure of written rules to restrain doctors’ behavior in the Nazi period.
Even as the Nazi doctors stood trial in Nuremberg, American researchers were performing experiments they themselves knew to be ethically dubious. In her article, Susan Reverby draws on previously obscure archival documents to tell the story of a U.S. Public Health Service study in Guatemala that deliberately infected prisoners, soldiers, and inmates of a mental asylum with syphilis. While the researchers took care to provide penicillin to anyone who became infected, they used “double talk” to prevent their subjects from understanding, much less consenting to, the procedures. This case reminds us of the high stakes involved in the regulation of medical experiments and the reasons many people are unwilling to leave that regulation to doctors alone. It also exemplifies the tendency of research to cross international borders, a trend that has accelerated in recent decades.
Though the Nuremberg Code stressed the need to obtain the “voluntary consent of the human subject,” our third article, by Tal Bolton, shows that such consent was not always a simple thing. In the 1960s, the British military used members of the armed forces as subjects for tests of chemical weapons and hallucinogens. Though termed “volunteer observers,” these servicemen remained subject to both military law and, as important, the less formal pressures of military life. Bolton concludes that consent is no simple matter, but a spectrum of responses that depends on everything from individual personality to the state of international relations.
Similar nuance emerges from the story of research regulation in the Netherlands, explored here by Patricia Jaspers. Like their counterparts in other countries, Dutch doctors recognized the dangers of unconstrained research while hoping that their own profession could manage those dangers without outside interference. From the 1960s through the present, they have tried various models of control, ranging from public shaming in medical journals to national legislation. While the government agency that monitors medical research applauds its own 1999 creation as a great advance, it is not clear that researchers today agree on ethics regulation any more than they did half a century ago.
Even greater ambivalence is heard in the responses of the ethnographers surveyed by our final contributor, L. L. Wynn. For decades, the social sciences have been the great afterthought of human subjects regulation. Faced with rules and structures designed to control medical experiments, ethnographers and other social scientists have tried both resisting those controls and adapting them to their own purposes.Footnote 8 Due to the exclusion of social scientists from policymaking bodies, these efforts have often taken place quietly at the level of university committees or even individual projects. So rather than relying on libraries and archives, Wynn has used social research itself—in the form of an international survey reaching hundreds of scholars—to uncover shifting attitudes toward ethics review. She finds that while ethnographers are committed to “careful consideration of research ethics,” they are skeptical of the system of ethics regulation now in place.
The articles cover events and policies in at least six countries, spanning more than a century. Yet taken together, they offer three key findings. First, rulemaking is often based on understandings of horror stories, whether the horrors were committed by researchers or ethics committees. Yet these understandings of past events are often less rigorous and nuanced than scholars would wish. Official histories, like those produced by the Dutch Central Coordinating Body, overlook dissent. At the other extreme, informal accounts of the Tuskegee Syphilis Study misstate basic facts. Rumors about the actions of ethics committees may be unrepresentative yet still inform researchers’ attitudes. Historians can learn from all of these types of stories, and they can complicate them with more methodical research.
Second, there is no sharp break between the “bad old days” and a current, enlightened era. Rather, research ethics, and research regulations, are constantly evolving, and not always in positive directions. Anyone tempted to believe that past abuses could not take place today should explore studies of current research, especially that sponsored by the pharmaceutical industry.Footnote 9 The stories in this issue show that today’s ethics can be tomorrow’s scandal.
Lest this sound like a call for a flood of new regulation, the articles offer a third lesson: nations and institutions can pass as many rules as they want, but if researchers do not perceive those rules as legitimate, they will evade them. In all the stories told here, researchers ignored rules they felt were incompatible with science while respecting constraints that made sense to them, especially if they had a chance to participate in making the rules. Progress in research control must be measured not simply as a series of laws, but by exploring the actual behavior and beliefs of everyone involved.
The regulation of human subjects research, then, is no simple matter. For decades, scientists, policymakers, lawyers, ethicists, and research participants have struggled to balance their desire for knowledge and progress against their wish to protect people from inept or unscrupulous researchers. The most effective regulators may be those who understand themselves as part of this long history.