No one is under the illusion that very young children should have medical decisionmaking authority. Immaturity of emotion, intelligence, and experience is a hallmark of childhood, and these characteristics bode ill for sound determinations about medical procedures. Another hallmark of childhood, however, at least for most children, is the continued development of these characteristics, expanding children's abilities and capacities. Explicit recognition of these expanding capacities for medical decisionmaking came in the 1970s, when the National Commission supported the idea that, as children grow, they should be allowed to participate in their care to the degree that they are capable.Footnote 1 Specifically, the Commission offered the concept of “assent” (recognizing that “consent” remained with parents or other adult guardians) as a way of capturing the expressed interests of children concerning their participation in medical research. Over the intervening decades, many others, including the American Academy of Pediatrics, have applied the concept of “assent” to clinical decisionmaking as well.Footnote 2
But this recognition raises a question: When should full decisional authority be granted to individuals as they age? For political and policy reasons, states set an “age of majority,” after which an individual is granted certain rights and privileges, including medical decisionmaking authority. The particular age has varied over the history of our nation: younger (even down to 15 or 16) in the 19th century, older (around 21) in the mid-20th century, and currently 18 in nearly every state in the United States. This history alone demonstrates the fluidity, if not arbitrariness, of setting any particular age of majority. Surely each of us can think of persons older than 18 whom we believe too immature for many kinds of important decisionmaking, just as we can probably recall others who, even in their early teens, could handle difficult, sophisticated reflection. It is this period of so-called adolescenceFootnote 3, then, that asks us to think seriously about what principled reasons we should rely on to determine whether any particular teenager should be able to make medical decisions for and by him- or herself. Indeed, although states set an age of majority, many also recognize “mature minors” when it comes to medical decisionmaking. And although most state courts have been reluctant to apply mature minor provisions when an adolescent refuses life-sustaining treatment, judicial and legislative trends have begun to shift.
The following debate among three prominent scholars focuses specifically on the ethical arguments for and against granting adolescents decisional authority in their own medical care. In particular, Lainie Friedman Ross argues that the shift by courts and state legislatures to support what she calls “minor refusals” by adolescents is a mistake. Specifically, Ross is concerned with decisions to forgo life-sustaining treatments of proven efficacy. She argues that neither minor refusal nor parental refusal of such treatments is ethically justifiable, given that parents are required to protect their children's “basic needs.”
In response, both Jeffrey Blustein and Ellen Wright Clayton contend that Ross's position fails to recognize adequately that there are demonstrably mature adolescents who should be allowed to determine their own life courses. Blustein, among other concerns, takes issue with Ross's division between “efficacious” and “inefficacious” treatment, questioning how the distinction is to be drawn and whether it can truly play the role Ross intends for it in her argument. Clayton, while in substantive agreement with Blustein, also asks whether ethicists wrongly disparage religious objections, and she emphasizes the need to separate the responsibilities of physicians qua physicians from those of state agencies and their authority. The exchange between Ross and Blustein does not end here; it will continue for another iteration in the next issue of Cambridge Quarterly.
Of course, this introduction does not do justice to the positions expressed, and so I invite you to read the exchange itself in depth. Like all of the “Great Debates,” this one was developed through substantive give-and-take before the positions and responses were finalized for publication. It is our hope, as always, that the debate proves not only interesting but also stimulating and fruitful.