
Epistemic Injustice as a Basis for Failure? Health Research Regulation, Technological Risk and the Foundations of Harm and Its Prevention

Published online by Cambridge University Press:  14 January 2020

Mark L FLEAR*
Affiliation:
School of Law, Queen’s University Belfast; email: m.flear@qub.ac.uk.

Abstract

I use the examples of medical devices, clinical trials and health data, to look at the framing of harm through the language of technological risk and failure. Across the examples, there is little or no suggestion of failure by those formally responsible. Failure is seen as arising when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into technological framings of regulation are thwarted. Technological framings may marginalise the contribution patients, research participants and others can make to regulation, which may in turn underlie harm and lead to the construction of failure. This marginalisation may amount to epistemic injustice. Epistemic injustice and its link to failure, which has normative weight over and above harm, can present a risk to organisational standing and reputation. This risk can be used to improve the knowledge base to include stakeholder knowledges of harm, and to widen responsibilities and accountabilities. This promises to allow regulation to better anticipate and prevent harm and failure, and improve the efficacy and legitimacy of the health research enterprise.

Type: Symposium on European Union Governance of Health Crisis and Disaster Management
Copyright: © Cambridge University Press 2020

I. Introduction

The language of failure – scandal, disaster and crisisFootnote 1 – seems to have acquired new force in public discourse and regulation.Footnote 2 Since the financial and economic crisis of 2008, as Kurunmäki and Miller observe, the “category of failure now saturates public life”.Footnote 3 Our contemporary predicament comes in the wake of what Power described:

“A stream of apparent failures, scandals, and disasters from the 1990s onwards have challenged and threatened institutional capacities to organise in the face of uncertainty, suggesting a world which is out of control, where failure may be endemic, and in which organisational interdependencies are so intricate that no single locus of control has a grasp on them”.Footnote 4

Health research regulation provides many instances of failure that are often unaddressed until illuminated, usually through the involvement of those affected.Footnote 5 Failure in health research regulation is of course nothing new. The regulatory regimes for clinical trials were developed in response to the Thalidomide scandal, which occurred some 50 years ago. In the United Kingdom (UK) the scandal resulted in the Medicines Act 1968 and its related licensing authority.Footnote 6

One current example is worthy of immediate note. For around a decade medical devices have provided several high-profile examples of failure globally. Poly Implant Prothèse (PIP) silicone breast implants have been prominent for much of this time.Footnote 7 So too have metal-on-metal hip replacements.Footnote 8 More recently, mesh implants for urinary incontinence and pelvic organ prolapse in women (often referred to as “vaginal mesh”) have been the subject of intense controversy – so much so that some have called it “the new Thalidomide”.Footnote 9 While PIP was denounced in the report on the scandal by Health Minister Lord Howe as a case of “deliberate fraud” by a manufacturer that “actively covered up its deceit and showed a complete disregard for the welfare of its customers”,Footnote 10 metal-on-metal hip replacements and vaginal mesh relate to health research practices. In these cases, previously licensed medical devices were used to demonstrate the safety of supposedly analogous new medical devices. The publication of “The Implant Files” in The Guardian suggests that failures in respect of medical devices have reached a critical juncture. The editorial in The Guardian could not be more damning: “Our investigation has revealed alarming failures. Regulation of devices ... must be reformed”.Footnote 11 The time is ripe for reflection on the deeper causes of harm and failure.

In this article, I draw upon health research regulation for medical devices, clinical trials for new medicines, and health data, to look at the regulatory framing of harm through the language of technological risk, ie relating to safety. I understand failure itself in terms of this framing of harm.Footnote 12 I describe how such framing marginalises the contribution of stakeholder knowledge of harm. I explain how the latter may amount to epistemic injustice, and why that should change and how. Stakeholders include research participants, patients and other interested parties. I see failure as arising when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into technological framings of regulation are seen as thwarted.Footnote 13 This usually occurs from stakeholder perspectives.

My overall argument is that reliance on a narrow discourse of technological risk in the regulatory framing of harm may contribute to epistemic injustice, and that this may underlie harm and in turn lead to the construction of failure. As conceptualised by Fricker, epistemic injustice is a “wrong done to someone specifically in their capacity as a knower”.Footnote 14 Epistemic injustice includes, but is not limited to, marginalisation and silencing within social interactions and systems, such as those in health research regulation that manage technological risk so as to protect safety.

Epistemic injustice is a harm in itself. As such it deserves attention. More significantly for this article, however, epistemic injustice may limit stakeholder ability to contribute towards and shape regulation, leading to other kinds of harm, and failure. This is especially true in the case of health research regulation, where stakeholders could be directly or indirectly harmed by practices and decisions that are grounded on a limited knowledge base, and which may be rooted in epistemic injustice. The argument in this article thus appreciates how failure can amount to a “failure of foresight”,Footnote 15 which may mean it is possible to “organise” failure and harm out of existence.Footnote 16

More particularly, I appreciate failure principally, though not exclusively, in Kurunmäki and Miller’s words, “as arising from risk rather than sin”.Footnote 17 Put differently, I understand failure in principally consequentialist, rather than deontological, terms.Footnote 18 As such, this understanding does not exclude legal conceptualisations of failure in tort law and criminal law, in which the conventional idea of liability is one premised on “sin” or causal contribution. Legal procedures and modes of judgment remain important to the conceptualisation of failure.Footnote 19 But within contemporary society and regulation such deontological understandings are now often overlaid with a consequentialist view of failure.Footnote 20 The latter is appropriate for the focus of this article, which is about uncovering and dealing with the epistemic roots of harm, rather than causal contribution, so as to better anticipate and prevent future failure. The normative weight of failure, as compared to harm, is used to achieve this. Epistemic injustice may underlie failure, the significance of which can present an important institutional risk to standing and reputation. This risk can prompt the take-up of stakeholder knowledge of harm in regulation, so as to address harm and failure. Coming full circle, this can mean regulation more fully meets expectations of safety, and that its efficacy and legitimacy are improved.

Overall, this article amounts to a call to scholars of law and regulation to grapple with failure further than they have up until now. There is a pressing need to make visible and address the deeper causes of failure in the process of framing and the organisation of knowledge, in which regulators and scholars may be complicit. Grappling with these foundations promises to enable scholars and regulators to better anticipate and prevent future harm and failure. The article advances work on social construction within law and regulatory studies, especially recent work on liminality and health research regulation, to bring fresh insight to discussion on failure.Footnote 21 The argument advanced also draws upon and contributes towards developing work within legal studies on biopolitics.Footnote 22 The discussion in this article highlights the importance of framing, the expectations within it, and the underpinning knowledge, in undergirding the technicalities of law and regulation, and explaining failure. The analysis thus widens a growing dimension in understanding of the “legal” and its relations with the “social” in socio-legal studies.Footnote 23

In the next (second) section, I enlarge on these introductory comments and use several examples to reflect on expectations in framing health research regulation: how does the thwarting of expectations through harm relate to failure? This sets the scene for the third section. Here, I highlight the stakeholder knowledges of harm that are marginalised within the examples of health research regulation, and that became central to the construction of failure through the thwarting of expectations. I explain how these knowledges amounted to lacunae and blindspots that stemmed from the hierarchy of knowledge that underpins the technological framing of regulation. Building on these foundations, in the fourth section, I turn to discuss how the marginalisation of stakeholder knowledges may amount to epistemic injustice. The latter provides both the grounds for stakeholder participation in regulation – and the means, which are provided by the institutional risk presented by epistemic injustice and its link to failure.

II. Expectations and failure in health research

According to van Lente and Rip, expectations amount to “prospective structures” and are “put forward and taken up in statements, brief stories and scenarios”.Footnote 24 In their study on material objects and failure, subtitled When Things Go Wrong, Carroll and others describe failure as “a situation or thing as [sic] not being in accord with expectation”.Footnote 25 This suggests that the thwarting of the expectations built into framing can be understood as a key condition for the construction of failure. It is expectation, rather than anticipation or hope, then, that is central to failure. Bryant and Knight explain that anticipation contains a “sense of the future pulling [us] forward”;Footnote 26 hope “is a way of virtually pushing potentiality into actuality”.Footnote 27 Unlike expectation, anticipation and hope do not provide a sense of how things ought to be, so much as how they could be, or how an individual or group would like them to be. Indeed: “We expect because of what the past has taught us to expect… [Expectation] awakens a sense of how things ought to be, given particular conditions”.Footnote 28 Expectations provide “the ground on which practices, orders, and hence the normative emerge”. And the normative is, of course, “a standard for evaluation”, for whether an outcome is “good or bad, desirable or undesirable”,Footnote 29 and, relatedly, a failure.

Expectations rely on the past to inform a normative view of what will be, which – when thwarted – provides the grounds for the judgment of failure.Footnote 30 This judgment relates to the system or regime itself, and has normative weight over and above harm. For Appadurai:

“The most important thing about failure is that it is not a fact but a judgment. And given that it is a human judgment, we are obliged to ask how the judgment is made, who is authorized to make it, who is forced to accept the judgment, and what the relationship is between the imperfections of human life and the decision to declare some of them as constituting failures”.Footnote 31

Expectations, a key ground for establishing failure, are built into regulatory framings. Work on “design-based regulation”, including that by Brownsword, and Yeung and Dixon-Woods, examines how norms, values, virtues and behavioural options are “designed-in” and “designed-out” of technologies, potentially limiting the exercise of agency and accountability.Footnote 32 Both socio-legal scholars and regulators themselves understand the ways in which the framing of governance and regulation, and their foundational knowledges, embed a range of societal and organisational aims, aspirations and expectations. As a corollary, the targets of regulation, including health research and its applications, can also be understood in these terms, as science and technology studies (STS) has long shown. Society and its expectations are “built into” artefacts,Footnote 33 such as through ideas concerning how they might and should be used; how they are eventually implemented;Footnote 34 and health research on whether artefacts meet expectations and can be used.

The expectations built into regulatory framings, and artefacts, also engender imaginaries of the future or “imagined futures”.Footnote 35 Jasanoff and Kim have developed a more specific iteration of this concept: sociotechnical imaginaries.Footnote 36 In subsequent work Jasanoff defines these as:

“collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology”.Footnote 37

This definition connects expectations and framing to an imagined future. Bringing these various insights together and applying them to the focus of this discussion, failure can be understood as arising when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into framing are seen as thwarted. Failure threatens to shatter related imaginaries of safe futures.

These key insights hold real potential to shed new light on failure, its underpinnings, and ways to anticipate and prevent it. In the following, I apply and develop these insights to three examples drawn from health research. Although these examples centre on the UK, they also relate to practices elsewhere. These examples encompass living human participants (for medical devices and new medicines) and their adjuncts (personal data) in non-emergency contexts. Health research in emergency contexts is not considered as it raises distinct issues, including the practices in such situations, and the power imbalances in the organisation of global governance that may underlie them.Footnote 38 I work in broad brush strokes to consider two things in relation to each example: first, the expectations built into and flowing from its framing; second, how the expectations were thwarted, imaginaries threatened, and failure emerged. This discussion, summarised in Table 1 below, provides the starting point for the subsequent analysis of the epistemic foundations of failure, and reflection on how they can be modified to prevent future failure.

Table 1. Framing and the organisation of knowledge for regulation

1. Health research for safe implants

The first example is in respect of medical devices. The applicable legislation on medical devicesFootnote 39 framed their regulation in terms of technological risk and an expectation of safety.Footnote 40 However, in respect of metal-on-metal hips and vaginal mesh, for instance, harm occurred, the expectation of safety was thwarted and failure arose. Failures in health research became apparent downstream around the world once these medical devices were in use.

At the root of the failures was the classification of metal-on-metal hips and vaginal mesh as Class IIb devices.Footnote 41 This meant that it was possible for manufacturers to rely on substantial equivalence to existing products to demonstrate conformity with general safety and performance requirements. These requirements set expectations that manufacturers and regulators would demonstrate safety, both for the device and the person within which it is implanted. Substantial equivalence obviates the need for health research involving humans via a clinical investigation.Footnote 42 It is noted in one BMJ editorial that this route “failed to protect patients from substantial harm”.Footnote 43 Heneghan and others point out that in respect of approvals by the Food and Drug Administration in the United States, which are largely mirrored in the EU: “Transvaginal mesh products for pelvic organ prolapse have been approved on the basis of weak evidence over the last 20 years”.Footnote 44 The reliance on substantial equivalence meant that safety and performance data came from implants that were already on the market, sometimes for decades, and which were no longer an accurate predicate for new medical devices.Footnote 45 In other words, on the basis of past experience – specifically, of “substantially equivalent” medical devices – there was an unrealistic expectation that safety would be ensured through this route, and that further research involving human participants was unnecessary.

In respect of vaginal mesh, the adverse events reported include: “Pain, impaired mobility, recurrent infections, incontinence/urinary frequency, prolapse, fistula formation, sexual and relationship difficulties, depression, social withdrawal or exclusion/loneliness and lethargy”.Footnote 46 Within the European Union (EU), new legislation was introduced, largely in response to the failures.Footnote 47 The legislation is deemed to provide a “fundamental revision” of the regulatory framework in order to “establish a robust, transparent, predictable and sustainable regulatory framework for medical devices which ensures a high level of safety and health whilst supporting innovation”.Footnote 48 One interpretation of the new legislation is that it is a direct response to failure and intended to provide “a better guarantee for the safety of medical devices, and to restore the loss of confidence that followed high profile scandals around widely used hip, breast, and vaginal mesh devices”.Footnote 49

The specific piece of legislation applicable to these devices will not apply until 26 May 2020.Footnote 50 In respect of metal-on-metal hips and vaginal mesh, the legislation reclassifies them as Class III. Future manufacturers of these devices will have to carry out clinical investigations so as to demonstrate conformity with regulatory requirements.Footnote 51 According to one self-described “innovation company”,Footnote 52 Cambridge Design Partnership, whereas the Medical Devices Directive “briefly outlined (in only nine paragraphs) the expectations of clinical investigations”, the “new [Medical Devices Regulation] expands on this and takes up a whole chapter on the subject”.Footnote 53 Put differently, the new legislation provides more detailed and expansive expectations in relation to safety. Nevertheless, doubts remain as to whether these will prevent future harms, and thus failures, similar to those mentioned above.Footnote 54

2. Health research that is safe for participants and leads to safe medicines

The second example of failure in health research is clinical trials. It is noteworthy that the standards used for clinical investigations of medical devices are usually those used for the conduct of clinical trials. The reason: “these [give] a greater indication of the expectations for clinical investigations”.Footnote 55 By contrast to medical devices, the marketing of new medicines always requires clinical trials.

The focus in clinical trials is on protecting the human participants involved. Here, ethics-based and rights-related requirements, in particular relating to consent and other standards of good clinical practice (GCP),Footnote 56 support the key framing. These requirements are also about credentials: the medicines can be said to be authorised on the basis of data produced in an ethics and rights-friendly way. This is essential given the problematic history of the development of medicines (see Thalidomide, mentioned in the introduction). The trials are used to establish quality, safety and efficacy, ie the conditions for market authorisation.Footnote 57 These conditions contrast with the focus on safety and performance for the marketing of medical devices.

In 2006, clinical trials were the subject of renewed controversy when the novel drug TGN1412, produced by German pharmaceutical company TeGenero, caused multiple organ failures in a Phase I or first-in-human clinical trial.Footnote 58 Six men were injected with the drug at Northwick Park Hospital, London, while another two received a placebo, thus making this a randomised controlled trial, the “gold standard”.Footnote 59 The drug was being developed as a potential treatment for certain autoimmune diseases and leukaemia.Footnote 60 In this case, there was an expectation of participant safety – or at least that sufficient attempts would be made to reasonably ensure safety – and this was based on past experience and understanding of regulation. However, harm occurred, the expectation of safety was thwarted and failure arose. The Medicines and Healthcare Products Regulatory Agency’s (MHRA’s) report into compliance with standards on GCPFootnote 61 noted the absence of complete patient medical records and 24-hour medical cover, including plans for dealing with adverse events.

These aspects of the trial, referred to in the New Scientist as a “catalogue of errors”,Footnote 62 were addressed in Appendix 1 of the MHRA’s report as “discrepancies”.Footnote 63 In other words, echoing the MHRA’s interim report,Footnote 64 these issues were not described as problems with substantive aspects of the research process itself – hence the non-binding regulatory changes that followed what others have described as the “failure”Footnote 65 of the trial.Footnote 66 Further, the MHRA stopped short of categorising the adverse event in terms that would connote failure at all, including its own. The MHRA’s assessment was subsequently buttressed by an expert report.Footnote 67 This report repeated the MHRA’s recommendations and also drew attention to the use of the maximum starting dose and its administration in sequence. As regards the latter, the last two volunteers were injected even though the first volunteer had begun to show the early signs of an adverse reaction. The expert report found that a minimum dose should have been administered in a longer sequence. Significantly for the present analysis, each of these reports, and the surrounding commentary, actually underlines how it was the thwarting of the expectation of participant safety that began the process of constructing failure.

3. Health data and social licence

Health data is a key adjunct to the human body, and it provides the third and final example of failure. The Health and Social Care Act 2012 (2012 Act) provided the basis for the establishment of care.data in England. The 2012 Act established the Health and Social Care Information Centre to receive individual-level patient data from the surgeries of general practitioners (GPs). Data would be anonymised and consent from patients would not be sought.Footnote 68 Where anonymisation would not be possible, the Confidentiality Advisory Group would be charged with authorising non-consented uses in the public interest. The National Health Service Constitution was amended and a huge public information exercise was undertaken to explain the initiative, although care.data was not mentioned directly. The scheme was first abandoned in February 2014 after a huge outcry from patients and professionals, before being briefly resurrected and then, seemingly for good, abandoned in July 2016. Key concerns related to lack of transparency and robust oversight, and inadequate information on the possible uses of data and its disclosure, in particular for use in commercial endeavours.Footnote 69

Although there was a legal mandate through the 2012 Act, the implementation of this mandate through care.data did not accord with, inter alia, “traditional understandings of the private and confidential nature of the relationship between GP and patient”.Footnote 70 These understandings are underpinned by law relating to data protectionFootnote 71 and human rights.Footnote 72 Simply put, care.data thwarted or “ruptured”Footnote 73 these expectations, and a social licence.Footnote 74 This is because the accordance of care.data with the traditional understandings could not be assumed, particularly given the concerns just noted. Importantly for present purposes, the idea of a social licence underscores “the assumption [that] a mandate for action must accord with the general expectations of society” – and, moreover, “it is dangerous to attempt to borrow this wholesale from one context to another”.Footnote 75

In this case, then, the attempt to borrow a mandate for action from individual consent (one context) for care.data (another context) did not accord with a key societal-level expectation, which was informed by public understandings of past practice. Put differently: there needed to be the grant of a social licence for the significant changes to the use of individual data envisaged under care.data. The absence of this social licence meant harm occurred, or at least was thought to be likely to occur, and the expectation of safety was thwarted. The thwarting of this societal expectation, through the implementation of the statutory basis for regulation, began the process of constructing failure, which ultimately led to the failure of care.data itself. This case demonstrates the central importance of expectations of safety through the protection of individual data, and its use, in a way that ensures good governance and public benefit.

4. Constructing failure

In each example, technological risk understood as being about safety frames regulation (of medical devices, research participants and data gathering and use). An expectation of safety is built into and flows from this framing. The expectation engenders an imaginary or imagined future where technoscience leads to safe innovations.

However, across the examples, there is little or no suggestion of failure by those formally responsible, and who might be accountable in the event of failure. This is doubtless to avoid any hint of regulatory failure. Regulatory failures are described by Hutter and Lloyd-Bostock as “failures to manage risk”.Footnote 76 Moreover, “[i]ncreasingly, regulators find themselves in the spotlight as questions are raised about why regulation ‘failed’, effectively framing them as part of the cause of disasters and crises”.Footnote 77 A perception of regulatory failure thus has key implications for the accountability and legitimacy of regulation and regulators – and such perception is therefore to be avoided by them.

Instead, these examples demonstrate how the construction of failure does not necessarily hinge on official interpretations of harm as amounting to “failure”. This is apparent in the various quotations from non-regulators noted above. As Hutter and Lloyd-Bostock put it, these are “terms in which events are construed or described in the media or in political discourse or by those involved in the event”.Footnote 78 As they continue, what matters is an “event’s construction, interpretation and categorisation”.Footnote 79 Failure amounts to an interpretation and judgment of harm. Put differently, “failure” arises through an assessment of harm undertaken through calculative techniques and judgments. Harm becomes refracted through these. At a certain point in each of the examples, the expectations of safety built into framing are seen by stakeholders as thwarted, and the harm becomes understood as a failure.

Kurunmäki and Miller make a similar observation:

“[The] moment [of failure] surfaces within and through an assemblage of calculative practices, financial norms, legal procedures, expert claims and modes of judgment. These allow complex processes of mediation between a variety of actors, domains and desired outcomes… [Failure is] undeniably constructed through the multiple ideas and instruments that set the parameters within which open-ended yet not limitless negotiation and judgment takes place, as the moment of failure is either predicted or pronounced”.Footnote 80

Although harm is real, it becomes known and understood as failure through various techniques and practices, including legal and regulatory procedures and modes of judgment.Footnote 81 These make it possible to know and name the reality of failure.Footnote 82 Hacking summarises this point: “our classifications and our cases emerge hand in hand, each egging the other one”.Footnote 83 Official discourses are significant, not least because they help to set expectations of safety. But these discourses do not control stakeholder interpretations and knowledge of harm, or their thwarting of expectations of safety leading to failure. Indeed, failure is what Kurunmäki and Miller term “a variable ontology object”.Footnote 84 The potential for diverse and alternative interpretations and perceptions of harm by individuals and groups, and their roles in making their views known, underscore the role of the mutability of power relations in the construction of failure.Footnote 85 The assessment of failure in turn threatens to shatter the imaginary of a safe future.

In what follows, I shift attention to the lacunae and blindspots in the knowledge base in relation to each example. I outline these, discuss what they imply about the place of stakeholder knowledges of harm in regulation, and interrogate their foundations in the organisation of knowledge. Subsequently, I build on this discussion to explain how the marginalisation of stakeholder knowledges relating to harm may amount to epistemic injustice, and how that provides both the grounds and means for bringing stakeholder knowledges and expertise within health research regulation.

III. Producing knowledge for safety regulation

1. Lacunae and blindspots

In each of the examples, the knowledge base for regulation is derived from an archive of past experience and scientific-technical knowledge. There are epistemic or knowledge-related lacunae and blindspots in this knowledge base, and these relate to stakeholder knowledge of harm. In respect of vaginal mesh and metal-on-metal hips, the focus on performance (ie the device performs as designed and intended) marginalised attention to effectiveness (ie producing a therapeutic benefit), and patient knowledge on this issue. Moreover, in relation to implants, many of the most prominent examples have related to breast and mesh implants placed in female bodies. Female knowledges and lived experiences of the devices implanted within them have tended to be sidelined or even overlooked. Gender matters within the “biomedical gaze”.Footnote 86 The operation of presumptions based on, inter alia, genderFootnote 87 through this lens is well known, and includes gender-based presumptions about pain.Footnote 88 Within biomedicine, understandings of pain have been, and still usually are, based upon male bodies, which are then cast as the “normal” body. The position of the male body within biomedical research might explain the all-male participants in the clinical trial for TGN1412.Footnote 89 The centrality of the male body within research provides part of the explanation for the time taken to recognise there was a safety problem in respect of vaginal mesh.

Another part of the explanation for the latter problem is that there was a lengthy delay in embodied knowledge and experiences of pain being reported within the very processes that are supposed to reveal them – effectively sidelining and ignoring those experiences.Footnote 90 Bringing these experiences into the knowledge base for health research regulation is, however, still proving difficult. For instance, within the UK, the new guidance on vaginal mesh, issued by the National Institute for Health and Care Excellence (or NICE),Footnote 91 has been met with criticism along gender-based lines. NICE cites safety concerns and recommends that vaginal mesh should not be used to treat vaginal prolapse. As the UK Parliament’s All Party Parliamentary Group on Surgical Mesh Implants said, the guidelines “disregard mesh-injured women’s experiences by stating that there is no long-term evidence of adverse effects”.Footnote 92

Despite the privileged position of male bodies within biomedicine, male research participants can of course still experience marginalisation of their knowledges, as the TGN1412 trial demonstrates. The problems with the trial – inter alia, the absence of complete patient medical records;Footnote 93 the lack of 24-hour medical cover and plans to deal with adverse events; the close sequencing of injecting the trial medicine – meant that the embodied risk and experiential knowledge that would be generated through the trial itself were implicitly and effectively minimised and sidelined. This underscores the degree to which participants’ bodies were instrumentalised for very specific purposes: testing a new medicine in order to generate safety and efficacy data. As for the final example, care.data, it demonstrates the importance of widely-held social understandings of the doctor-patient relationship, good governance and public benefit, in the use of patient data. In a sense this amounts to social knowledge and understanding of data, ie that which is disembodied. However, these social understandings were effectively ignored.

These epistemic lacunae and blindspots make apparent the marginalisation of stakeholder knowledges of harm within the examples of health research regulation. As I have described, this marginalisation effectively silences and closes off a rich source of knowledge and expertise for health research regulation. Stakeholders are, as Pohlhaus notes, “relegated to the role of epistemic other”.Footnote 94 The limited knowledge base for regulation, and epistemic othering in the examples, seems to account, at least in part, for the harms and failures that arose, since stakeholder knowledges became central to the construction of failure in each example. The limited knowledge base also helped to support the focus on safety and delimit regulatory responsibilities and accountabilities in each case.

2. Knowledge for regulation

Epistemic othering stems directly from the organisation of knowledge for regulation. Risks, risk-based framings and concepts (including, as we have seen, failure) do not exist in themselves, but are co-constituted through a field of knowledge and regulatory processes.Footnote 95 Putting knowledge and framing (including expectations) together, Power describes how “social and economic institutions… shape and frame knowledge of, and management strategies for, risk, including the definition of specific ‘risk objects’”.Footnote 96 The focus on technological risk in health research regulation is based on a series of decisions, which determine the knowledge required for regulation, and help to explain epistemic othering. These decisions include how harm relates to risk; centralising safety as “the” technological risk; the degree of risk (low or high); and, finally, how technological risk relates to precaution – whether precaution operates in risk assessment or in risk management.Footnote 97 The decisions underscore how, to quote Sparks, drawing on Garland, risk has “moral, emotive and political as well as calculative” dimensions and is a “mixed discourse”.Footnote 98

Knowledge is formulated as encompassing “the vast assemblage of persons, theories, projects, experiments and techniques that has become such a central component of government” – it is “the ‘know how’ that makes government possible”.Footnote 99 In the context of health research regulation, scientific-technical knowledge defines a space in which specific objects (medical devices, medicines and data) are rendered governable.Footnote 100 Stakeholder knowledges of harm are effectively deemed unnecessary for regulation focused on technological risk. The organisation of knowledge for regulation is also, as Jasanoff explains, “incorporated into practices of state-making, or of governance more broadly”.Footnote 101 As such, knowledge for regulation “is not a transcendent mirror on reality”,Footnote 102 but, similar to the governing arrangements it underpins, remains contingent on the wider social milieu.Footnote 103

More generally, and reflecting its social setting, risk management “embodies ideas about purpose” and is embedded in “larger systems of value and belief”.Footnote 104 Hence, although health research regulation is focused on safety, this is linked to wider goals. Indeed, Laurie describes the way in which:

“law’s regimes do not capture the experience of becoming someone involved in research…from the human perspective, the process of becoming a research participant is a change of status, potentially in very profound ways. Individuals, their bodies, body parts, and other intimate adjuncts such as personal data, are instrumentalised to varying degrees”.Footnote 105

Health research is increasingly part of attempts to optimise the market and economy.Footnote 106 At least at the EU level, which is relevant to each of the examples in this article, the focus on technological risk attempts to deliver market-oriented goals – health through safe products is instrumentalised for the latter.Footnote 107

To achieve these goals, discourses on technological risk,Footnote 108 and their related scientific-technical knowledge base, do more than frame and shape expectations of safety: through this discursive device,Footnote 109 they literally talk and legitimate new health research and technoscience into existence.Footnote 110 In this regard, imaginaries play a central role. Ezrahi explains that “[w]hat renders… [imaginaries] consequential is their capacity to generate performative scripts that orient political behaviour and the making and unmaking of political institutions”.Footnote 111 The instrumental role of imaginaries can be readily seen in the expectations and framing of health research discussed in this article. An imaginary “composes and selects images and metaphors aimed at fulfilling functions in a variety of fields, including scientific research…”.Footnote 112 In terms of the examples considered above, safe medical devices, safe clinical trials for safe medicines, and future innovation to be delivered through the safe use of health data, are part of an imaginary that encompasses these key legitimating images and metaphors (the latter since they might not be entirely literal or wholly based on reality – not least because safety is found wanting).

Technological risk and expectations of safety, including as part of imaginaries, are becoming even more important tools of legitimation for states or supranational entities such as the EU. These sites are tightening their relations with science and technologyFootnote 113 – and through them the public (including stakeholders) who engage with technoscience as a “knowledge society”.Footnote 114 The framing of health research by technological risk and expectations of safety, and the imaginaries they help to generate, thus perpetuate current modes of governance and technoscientific trajectories.Footnote 115 Yet as a key consequence, the organisation of knowledge for regulation marginalises and others stakeholders and their knowledges.

3. Hierarchy of knowledge

Looking more deeply still at the basis for epistemic marginalisation and othering, the organisation of knowledge for regulation, wherein credentialised knowledge and expertise forms the basis for regulation, is underpinned by a hierarchy of knowledge.Footnote 116 As a key part of the framing of technological risk, bioethics plays a central role in constructing the hierarchy of knowledge and masking its construction – and has long been the subject of criticism for it. Bioethics tends to focus on technological development within biomedicine, and principles of individual ethical conductFootnote 117 or so-called “quandary ethics”,Footnote 118 rather than systemic issues related to social (or indeed epistemic) justice. Consequently, bioethics usually privileges and bolsters scientific-technical knowledge, erases social context,Footnote 119 and renders social elements as little more than “epiphenomena”.Footnote 120

Importantly for the present discussion, bioethics operates with risk-based framings to install an expert rationality – the “quasiguardianship” of scientific expertsFootnote 121 – within regulation. The expert rationality reflects and draws upon an imaginary of science. Hurlbut describes how:

“Framed as epistemic matters – that is, as problems of properly assessing the risks of novel technological constructions – problems of governance become questions for experts. The ‘scientific community’ thus acquires a gatekeeping role, based not on some principle of scientific autonomy or purity but on an imagination that science is the institution most capable of governing technological emergence”.Footnote 122

This expert rationality, and related imaginary, configure and legitimate the hierarchy between scientific-technical knowledge and expertise, and in turn the relationship between regulation and its stakeholders.Footnote 123 Stakeholder knowledges and forms of expertise relating to harm are, as Foucault explained, “disqualified … [as] naïve knowledges, hierarchically inferior knowledges, knowledges that are below the required level of erudition or scientificity”.Footnote 124

Table 1 (above) brings together the epistemic lacunae and blindspots with the framing and the hierarchy of knowledge, and summarises the discussion to this point. To build on the platform provided by the analysis thus far, I turn now to explain how epistemic othering through the marginalisation of stakeholder knowledges of harm may amount to epistemic injustice, and provide both the grounds and means for the contribution of stakeholder knowledges of harm to regulation. Again, I work in broad brush strokes rather than close detail.

IV. Epistemic injustice: grounds and the means for stakeholder participation

1. Grounds for participation

As we have seen, the health research process is a site for harms and failures. The marginalisation of stakeholder knowledge and expertise relating to harm, evident in the lacunae and blindspots, appears to be part of the explanation for failure, and may amount to epistemic injustice. Testimonial injustice is a more specific instance of epistemic injustice, as defined by Fricker,Footnote 125 and it arises when individuals or groups are accorded a deficit of credibility as knowers. The examples from health research, discussed above, attest to the way in which biomedical researchers or those involved in the design of health research have, as Medina put it, “been trained not to hear or to hear only deficiently and through a lens that filters out the speaker’s perspective”.Footnote 126 The perspectives (and bodily reactions) of the research participants in the TGN1412 trial, and of the recipients of metal-on-metal hips and vaginal mesh, provide key examples of testimonial injustice. The three-year wait for the new EU legislation applicable to medical devices to move from entry into force to applicability further attests to testimonial injustice. For Heneghan and others, “the long delay in implementation does not represent a timely response to patients’ needs”.Footnote 127

Hermeneutic injustice arises when individuals or groups do not articulate and present themselves through the conceptual resources that resonate with biomedical researchers and regulators, and through which the latter view reality, think and act. This is apparent across the examples discussed above. Although the symptoms of an adverse reaction to TGN1412 were present in early recipients, the trial medicine continued to be given to others. The accounts and complaints of women in receipt of vaginal mesh, for instance, were long dismissed. It took a huge outcry by the medical profession and general public before the need for a social licence for health data collection and use was acknowledged and acted upon.

For the purposes of this article, resolving epistemic injustice is not simply an end in itself. Tackling epistemic injustice, by plugging lacunae and widening what is known, becomes instrumental to constructing a more complete knowledge base, and through it anticipating and preventing future harms, and in turn judgments of them as failures. To this end, stakeholder participation holds much promise. Contrary to what is implied by their marginalisation, stakeholders know about diverse things such as harms in different ways, have a range of expertise, and are often reflexively aware of limitations to their comprehension of particular technoscientific developments that they may actively seek to address.Footnote 128 Stakeholders’ insights can inform discussion on the purpose of risk governance and regulation, whom it makes vulnerable by failing and hurting, whom it benefits, and how we might know.Footnote 129 Stakeholder knowledge of harms provides the basis for their participation in health research regulation.Footnote 130

Moreover, scientific knowledge and technologies are “political machines”Footnote 131 which can become embroiled with and further engender biopolitical debates and campaigns.Footnote 132 As the example of medical devices in particular attests, stakeholders may demonstrate “biosociality”Footnote 133 by organising to demand and contest decisions relating to their biology, conditions and experiences.Footnote 134 And since there is injustice (albeit epistemic), participation makes sense.Footnote 135 Indeed, as Medina states, “if one exhibits a complete lack of ‘alertness or sensitivity’ to certain meanings or voices, one’s communicative interactions are likely to contain failures…for which one has to take responsibility”.Footnote 136 This responsibility extends beyond individuals to communities and “[i]nstitutions and people in a position of power”.Footnote 137

There is a growing range of approaches, most recently and notably vulnerability,Footnote 138 within which embodied risk and experiential knowledge are central.Footnote 139 These approaches are buttressed by a developing scientific understanding of the significance of environmental factors to genetic predisposition to vulnerability and embodied risk.Footnote 140 This understanding provides a “new biosocial moment”Footnote 141 which the approaches can leverage in order to expand appreciation of the potential for a lack of alertness and communicative failures for which institutions and powerful people must take responsibility. Further, within such approaches, as Thomson explains, the centrality of the human body and experience is foregrounded precisely to recast the objects of bioethical concern.Footnote 142 The goal: to prompt a response from the state to fulfil its responsibilities in respect of rights.Footnote 143 In the context of health research, this growing focus on embodied risk and experiential knowledge, also seen in the turn to personalised medicine,Footnote 144 can be used to reshape the biomedical gaze and practice.

But the years of failed attempts to reshape the bioethical establishment, and the organisation of knowledge for regulation, demonstrate that it can take more than failure, new approaches and growing knowledge, to widen the knowledges that count within risk-based regulation. Bioethics tends to prefer deliberation over resolving problems as ethical matters.Footnote 145 Moreover, recent scholarship, such as Thomson’s,Footnote 146 as well as much work in cognate disciplines such as STS,Footnote 147 considers public bioethics. These discussions notwithstanding, bioethics remains, as Moore observed almost ten years ago, “largely separate from the analysis of public participation in scientific governance”.Footnote 148

On top of this, risk-based regulation, such as that applicable to health research, has a well-known tendency to direct attention towards managing consequences rather than dealing with root causes.Footnote 149 As part of this, organisations focus on their documentary records to ensure they have in place an audit trail that backs up their actions as rational and avoids institutional sanction. Power details how this leads organisations to prioritise “more elaborate formal representations of management practice, including risk management”, over “generating a climate of intelligent challenge and a capacity to abandon existing organisational hierarchies in a crisis”.Footnote 150

Against this background, I find it hard to be as optimistic as Downer is about “epistemic failures”, that is, the failures arising from problems in the knowledge base. He describes how epistemic failures:

“force experts to confront the limits of their knowledge and adjust their theories. Because they are likely to reoccur, they allow engineers to leverage hindsight… turning it into foresight and preventing future accidents”.Footnote 151

As seen across the examples discussed above, within official accounts, at least, the problem in each case has been presented as one of management and governance (albeit not amounting to regulatory failure), rather than the knowledge base. Stakeholder accounts of the examples have tended to foreground knowledge gaps and blindspots relating to harm as part of the explanation for failure (over and above harm). For this reason, and the others just outlined, it becomes necessary to prompt both credentialised experts and regulators to take note of stakeholder knowledges of harm, and make this “disruptive intelligence”Footnote 152 part of the knowledge base for regulation. I turn now to discuss how to achieve this.

2. “Epistemic injustice as risk” as the means for participation

Epistemic injustice, a harm in itself, arises from the framing and organisation of knowledge, and helps to explain lacunae and blindspots in the knowledge base for regulation. Epistemic injustice is thus part of what needs to be resolved in order to manage other kinds of harm, and in turn failure, within technological framings of regulation. Yet, for the reasons just outlined, it is unlikely that epistemic injustice would be tackled within current risk management practices. Nevertheless, the framing of regulation and organisation of knowledge through risk, the threat posed by the construction of failure from harm, and the related imaginary, discussed in the previous section, are also resources for stakeholder participation.Footnote 153 These resources can enable the contribution of stakeholder knowledge and expertise to resolve epistemic injustice, improve the knowledge base, and help anticipate and prevent future failures in health research regulation.Footnote 154

Epistemic injustice can also be understood as an institutional risk to standing and reputation.Footnote 155 As Power points out, risk management also involves governing “unruly perceptions”, against a background of societal anxiety about risk,Footnote 156 and maintaining the “production of legitimacy in the face of these perceptions”.Footnote 157 These risks are secondary to the primary risk or risk event. The risk presented by epistemic injustice is to standing and reputation (a secondary risk). This risk gains added potency from its association with the failures (primary risks) that epistemic injustice may engender. As a reminder, the judgment of failure bears significant normative weight over and above harm. Failure reinterprets harm in risk-based terms and as such it disrupts the societal expectation that risk can be regulated out of existence. An associated (and secondary) risk is legal risk: the actual or potential legal liability that arises where there is a causal contribution to harm.Footnote 158

The management of epistemic injustice as an institutional risk to standing and reputation can derive added urgency from the potential for public blaming and shaming amidst what Power termed the “web of expectations about management and actor responsibility”.Footnote 159 Blaming and shaming threaten to assign responsibility to regulators, not simply for harm but also for failure – that is, to make the “harm” a “regulatory failure”.Footnote 160 As such, blaming can even amplifyFootnote 161 or extend the duration of an institutional risk to standing and reputation.Footnote 162 This may produce a crisis for regulation, quite apart from any interpretation and judgment of failure or regulatory failure.Footnote 163 As discussed above, the examples of failure in health research regulation are understood as such not because of official pronouncement.

In order to develop epistemic injustice as an institutional risk to standing and reputation, the growing research on embodied risk, and the cluster of references in key legal and bioethical instruments, can be leveraged to shape adverse public perceptions of regulation. These references include those to vulnerabilityFootnote 164 and participation of marginalised groups.Footnote 165 In particular, a key principle in Article 4 of the Universal Declaration on Bioethics and Human Rights is focused on harm and failure. It states that in:

“applying and advancing scientific knowledge, medical practice and associated technologies, direct and indirect benefits to patients, research participants and other affected individuals should be maximized and any possible harm to such individuals should be minimized”.Footnote 166

Regulatory sensitivity to risk to standing and reputation, supported by these kinds of references, draws upon the combined discursive power of human rights and bioethics, and can prompt a response.Footnote 167 The utility of human rights, in particular, in inflicting reputational damage so as to reshape regulation can be seen in the successes of disability rights campaigners, feminists and anti-poverty groups, in their respective challenges to the bioethics establishment.Footnote 168 In these cases, epistemic and wider injustice grounded on risk formed the basis for mobilisation and participation,Footnote 169 and it was often “uninvited” and even disruptive of regulatory structures.Footnote 170 Mobilisation is also apparent across the examples noted above, most recently in respect of medical devices.

In summary, epistemic injustice, stemming from marginalisation, can amount to a significant institutional risk to standing and reputation, and as such it can be used as a tool to deal with harm and failure. This is especially the case when that institutional risk is linked to failure: when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into technological framings of regulation are seen as thwarted. Indeed, the assessment of failure, over and above harm, threatens to shatter the imaginary of a safe future, and in turn to undermine organisational identity and legitimacy. In essence, epistemic injustice can amount to an existential threat. Epistemic injustice as an institutional risk provides a way to impel the gathering of stakeholder knowledge of harm, and its addition to the knowledge base for regulation.

In this way, epistemic injustice can be used by stakeholders to inscribe their knowledges, expertise, and thus themselves, within the field of knowledge for regulation, and to reshape regulation. As a risk, epistemic injustice helps to widen the knowledges that matter within technological framings of regulation. This strategy thus chimes with ongoing attempts to engender a widening of the bioethical and biomedical viewpoint beyond their dominant objects of concern.Footnote 171 Attention to epistemic injustice as an institutional risk promises to enable regulation to better anticipate and prevent future harm and failure. The contribution of stakeholder knowledges of harm would supplement and bolster the current hierarchy of knowledge for regulation, and organisational identity, and the imaginary that helps to legitimate regulation.

The mechanisms to facilitate epistemic integration, and widen the knowledge base for regulation, include existing techniques of risk assessment and management. For instance, in respect of medical devices, further attention to effectiveness could yield important additional data (ie on producing a therapeutic benefit) on top of performance (ie the device performs as designed and intended). Similar to the example of clinical trials for medicines, this would require far more involvement and data from device recipients. Recipient involvement and data could come pre- or post-marketing – or both.Footnote 172 Involvement pre-marketing seems both desirable and possible:

“The manufacturers’ argument that [randomised controlled trials] are often infeasible and do not represent the gold standard for [medical device] research is clearly refuted. As high-quality evidence is increasingly common for pre-market studies, it is obviously worthwhile to secure these standards through the [Medical Devices Regulation] in Europe and similar regulations in other countries”.Footnote 173

One proposed model for long-term implantable devices, such as those discussed in this article, involves providing limited access to them through temporary licences that restrict use to clinical evaluations with long-term follow-up of at least five years.Footnote 174 Wider access could be provided once safety, performance and efficacy have been adequately demonstrated. In addition, wider public access to medical device patient registries, including the EU’s Eudamed database, could be provided so as to ensure transparency, open up public discourse around safety, and tackle epistemic injustice.Footnote 175 As for clinical trials, epistemic injustice as a risk could supplement the changes already introduced to ensure future safety, and develop discussion on other changes that could better safeguard trial participants.

Finally, ways of gathering and sharing health data are moving beyond the model of care.data to others. A recent proposal for an ethics framework for big data in health and research aims to ensure governance structures reflect social values for data gathering and use. Harm minimisation is one of the substantive values to be realised through the outcome of a decision on data governance. Harm minimisation “involves reducing the possibility of real or perceived harms (physical, economic, psychological, emotional, or reputational) to persons”.Footnote 176 Reflexivity is a key procedural value to guide the decision-making process, and “refers to the process of reflecting on and responding to the limitations and uncertainties embedded in knowledge, information, evidence, and data”.Footnote 177 By referencing these values, epistemic injustice as a risk could gain further leverage in health data governance. This could ensure the process of decision-making and the outcome take into account stakeholder knowledge of harms. Overall, epistemic injustice provides the means to develop a knowledge base for regulation that more fully meets expectations of safety.

V. Conclusion

In this article, I described how failures are constructed and become known and recognised through processes that determine whether harm has thwarted the expectations of safety built into technological framings of regulation. Laurie is one of the few legal scholars to illuminate the implications of not viewing health research regulation as a process that constructs objects and transforms its participants. As Laurie explains:

“if we fail to see involvement in health research as an essentially transformative experience, then we blind ourselves to many of the human dimensions of health research. More worryingly, we run the risk of overlooking deeper explanations about why some projects fail and why the entire enterprise continues to operate sub-optimally”.Footnote 178

By looking at the process of framing and the organisation of knowledge, I revealed how epistemic injustice may be part of the deeper explanation for failure and sub-optimal health research practices. Carroll and co-authors describe how: “[f]ailure is a moment of breakage between the reality of the present and the anticipated future” – it “exists as a rich space for the growth and development of new social relations”.Footnote 179 Presenting epistemic injustice as an institutional risk to standing and reputation reconfigures power relations, and creates possibilities for including stakeholder knowledges and expertise relating to harm in the knowledge base for regulation.Footnote 180

I argued that the institutional risk provides a key means for stakeholders to prompt a regulatory response. This response is an attempt to relegitimate the regime, including by reiterating the imaginary of safe technoscience and innovation. In this way, stakeholders can help to widen the knowledge base for regulation, and through it responsibilities and accountabilities. This promises to allow regulation to better address past failures, anticipate and prevent future failures, and maintain and even enhance legitimation.

Why, then, has more not been done to ensure epistemic integration as a way to enhance regulatory capacities to anticipate and prevent failure? Epistemic integration would involve bringing stakeholders within regulatory processes via their knowledges of harm. As such, epistemic integration would seem to disrupt and undermine the dominant position of those deemed expert within extant processes. Taking epistemic integration seriously re-problematises knowledge: what knowledges from across society are required by regulation in order to ensure innovation that is safe and legitimate? The integration of (dis)embodied and experiential knowledges within regulation might threaten the epistemic underpinnings of our current regulatory regimes, and the direction of technoscience.

More deeply, epistemic integration would challenge modernist values on the import of empirically-derived knowledge and the efficacy of society’s technological “fixes” in addressing its problems. Indeed, the limits of their capacity to deal with risk and uncertainty would become clearer to society at large. For this reason, at least, the privileging of credentialised knowledges within regulation may amount to “strategic ignorance”.Footnote 181 However, scientific-technical knowledge and expertise would still be necessary in order to discipline “lay” knowledges and ensure their integration within the epistemic foundations of regulation. To resist epistemic integration is, therefore, essentially to bolster extant power relations. As the analysis in this article suggests, these relations are actually antithetical to addressing harm and failure, and ensuring the success of health research regulation.

Footnotes

Many thanks to all those with whom I have discussed the ideas set out in this article, especially: Richard Ashcroft, Ivanka Antova, Tammy Hervey, Katharina Paul, Barbara Prainsack, Anniek de Ruijter, Daithi Mac Sithigh and Ilke Turkmendag. The ideas in this article were also strongly informed by my work on expectations in regulation, which I presented at invited seminars and lectures at a number of institutions: Amsterdam, Durham (Centre for Ethics and Law in the Life Sciences), Edinburgh (Mason Institute), Manchester, Maynooth, Oxford (Centre for Socio-Legal Studies) and Vienna (Department of Politics and Centre for the Study of Contemporary Solidarity). In addition, I had the pleasure of presenting early ideas on expectations and failure at the Society of Legal Scholars conference held at Queen Mary, University of London. I am grateful to participants in these events for their feedback and to the organisers for their hospitality.

References

1 As the other contributions to this special issue underline.

2 Black’s definition of regulation encompasses the focus of this article, in that it is "the intentional use of authority to affect behaviour of a different party according to set standards, involving instruments of information-gathering and behaviour modification": see J Black, "Critical Reflections on Regulation" (2002) 27 Australian Journal of Legal Philosophy 1. This understanding of regulation includes "hard law", "soft law", social norms, standards and the market. For other understandings, see R Baldwin et al, "Regulation, the Field and the Developing Agenda" in R Baldwin et al (eds), The Oxford Handbook on Regulation (Oxford University Press 2011); R Baldwin et al, Understanding Regulation: Theory, Strategy, and Practice (2nd edn, Oxford University Press 2012).

3 L Kurunmäki and P Miller, “Calculating Failure: The Making of a Calculative Infrastructure for Forgiving and Forecasting Failure” (2013) 55(7) Business History 1100, at p 1100, emphasis added.

4 M Power, Organised Uncertainty (Oxford University Press 2007) p 5, emphasis added.

5 Relatedly, see S Macleod and S Chakraborty, Pharmaceutical and Medical Device Safety (Hart Publishing 2019).

6 See E Jackson, Law and the Regulation of Medicines (Hart Publishing 2012) pp 4–5.

7 This was not the first scandal concerning silicone breast implants: see generally C Greco, "The Poly Implant Prothèse Breast Prostheses Scandal: Embodied Risk and Social Suffering" (2015) 147 Social Science and Medicine 150; M Latham, "‘If It Ain’t Broke Don’t Fix It’: Scandals, Risk and Cosmetic Surgery" (2014) 22(3) Medical Law Review 384.

8 C Heneghan et al, "Ongoing Problems with Metal on Metal Hip Implants" (2012) 344 BMJ 1349.

9 “Vaginal Mesh to Treat Organ Prolapse Should Be Suspended, says UK Health Watchdog”, The Independent, 15 December 2017.

10 Quoted in "Review into PiP Implant Scandal Published", available at <www.gov.uk/government/news/review-into-pip-implant-scandal-published> last accessed 10 December 2019. This references the report: Department of Health, Poly Implant Prothèse (PIP) Silicone Breast Implants: Review of the Actions of the Medicines and Healthcare products Regulatory Agency (MHRA) and Department of Health (2012). Also see AK Deva, "The ‘Game of Implants’: A Perspective on the Crisis-Prone History of Breast Implants" (2019) 39(S1) Aesthetic Surgery Journal S55; D Spiegelhalter et al, "Breast Implants: The Scandal, the Outcry and Assessing the Risks" (2012) 9(6) Significance 17.

11 The Guardian, 26 November 2018, emphasis added.

12 This may extend beyond physical harm to social harm, environmental harm “and so on”: see R Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press 2008) p 119. Also see pp 102–105.

13 Firestein describes "a continuum of failure" and explains how "[f]ailures can be minimal and easily dismissed; they can be catastrophic and harmful. There are failures that should be encouraged and others that should be discouraged" – before stating "[t]he list could go on": see S Firestein, Failure: Why Science Is So Successful (Oxford University Press 2016) pp 8–9.

14 M Fricker, Epistemic Injustice: Power and the Ethics of Knowing (Oxford University Press 2007), emphasis added. Also see IJ Kidd and H Carel, “Epistemic Injustice and Illness” (2017) 34(2) Journal of Applied Philosophy 172.

15 BA Turner, Man-Made Disasters (Wykeham 1978); BA Turner and NF Pidgeon, Man-Made Disasters, 2nd edn (Butterworth-Heinemann 1997).

16 BM Hutter and M Power (eds), Organizational Encounters With Risk (Cambridge University Press 2005) p 1. Some failures are “normal accidents” and cannot be organised out of existence: see C Perrow, Normal Accidents: Living with High-Risk Technologies (Basic Books 1984). “Normal accidents” are inevitable rather than common (at ibid, p 174). In Perrow’s view, whereas the partial meltdown at the Three Mile Island nuclear power station, Pennsylvania, in 1979 was a “normal accident”, disasters involving the Challenger spacecraft in 1986, the Union Carbide gas tragedy in Bhopal, India in 1984, and the Chernobyl nuclear power station in 1986, were not “normal accidents”. The examples considered in this article are related to limitations in the knowledge base, which is capable of adjustment, and as such they are not inevitable.

17 Kurunmäki and Miller, supra, note 3, at p 1101, emphasis added.

18 Brownsword and Goodwin describe how “while there is agreement across normative frameworks on the importance of minimising harm, differences arise in determining what it means to say that someone has been harmed. Utilitarians are likely to interpret harm in a minimalist manner; deontologists in an expansive fashion, for example, so as to include the harm done to human dignity… Harm is also likely to include, for some, violations of individual human rights”: see R Brownsword and M Goodwin, Law and the Technologies of the Twenty-First Century: Text and Materials (Cambridge University Press 2012) p 208. Also see pp 205–210.

19 Indeed, PIP silicone breast implants and vaginal mesh have been the subject of litigation – for discussion of each, see Macleod and Chakraborty, supra, note 5, at pp 232–234 and 259–263 respectively.

20 Appadurai explains that “failure is not seen in the same way at all times and in all places”: see A Appadurai, “Introduction” to Special Issue on “Failure” (2016) 83(3) Social Research xx. A key question for related work is: does the understanding of failure in terms of harm co-exist with the understanding of failure in terms of neglect or omission? This is a central question of risk regulation, where it is queried whether a regulatory focus displaces causal claims premised on negligence or nuisance or recklessness. See, for instance, S Shavell, “Liability for Harm versus Regulation of Safety” (1984) 13(2) The Journal of Legal Studies 357.

21 G Laurie, “Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?” (2016) 25(1) Medical Law Review 47.

22 M Foucault, The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979 (Palgrave Macmillan 2008). Also see T Lemke, Biopolitics: An Advanced Introduction (New York University Press 2013). For application of this thinking, see ML Flear, Governing Public Health: EU Law, Regulation and Biopolitics (Hart Publishing 2015 hb; 2018 (revised) pb), especially ch 2, ch 7, and ch 8.

23 For instance: A Riles, “A New Agenda for the Cultural Study of Law: Taking on the Technicalities” (2005) 53 Buffalo Law Review 973, at p 975.

24 H van Lente and A Rip, “Expectations in Technological Developments: An Example of Prospective Structures to Be Filled in by Agency” in C Disco and B van der Meulen (eds), Getting New Technologies Together: Studies in Making Sociotechnical Order (De Gruyter 1998) p 205. Also see H van Lente, Promising Technology: The Dynamics of Expectations in Technological Development (University of Twente 1993).

25 T Carroll et al, “Introduction: Towards a General Theory of Failure” in T Carroll et al (eds), The Material Culture of Failure: When Things Go Wrong (Bloomsbury 2018) p 15, emphasis added.

26 R Bryant and DM Knight, The Anthropology of the Future (Cambridge University Press 2019) p 28.

27 ibid, at p 134.

28 ibid, at p 58, emphasis added.

29 ibid, at p 63.

30 Beckert lists past experience amongst the social influences on expectations: see J Beckert, Imagined Futures: Fictional Expectations and Capitalist Dynamics (Harvard University Press 2016) p 91.

31 A Appadurai, “Introduction” to Special Issue on “Failure” (2016) 83(3) Social Research xxi, emphasis added. Also see A Appadurai, Banking on Words: The Failure of Language in the Age of Derivative Finance (University of Chicago Press 2016).

32 Brownsword, supra, note 12; K Yeung, “Towards an Understanding of Regulation by Design” in R Brownsword and K Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Hart Publishing 2008); K Yeung and M Dixon-Woods, “Design-based Regulation and Patient Safety: A Regulatory Studies Perspective” (2010) 71(3) Social Science and Medicine 613.

33 T Dant, Materiality and Society (Open University Press 2005).

34 D MacKenzie and J Wajcman (eds), The Social Shaping of Technology, 2nd edn (Open University Press 1999); L Neven, “‘But Obviously It’s Not for Me’: Robots, Laboratories and the Defiant Identity of Elder Test Users” (2010) 32 Sociology of Health and Illness 335; G Walker et al, “Renewable Energy and Sociotechnical Change: Imagined Subjectivities of ‘The Public’ and Their Implications” (2010) 42 Environment and Planning A 931; L Winner, “Do Artefacts Have Politics?” (1980) 109 Daedalus 121.

35 Beckert, supra, note 30.

36 S Jasanoff and S-H Kim, “Containing the Atom: Sociotechnical Imaginaries and Nuclear Regulation in the US and South Korea” (2009) 47(2) Minerva 119, at p 120. Also see K Konrad et al, “Performing and Governing the Future in Science and Technology” in U Felt et al (eds), The Handbook of Science and Technology Studies, 4th edn (MIT Press 2017) p 467. This chapter provides a summary of current key understandings of “expectations”, “visions” and “imaginaries”.

37 S Jasanoff, "Future Imperfect: Science, Technology, and the Imaginations of Modernity" in S Jasanoff and S-H Kim, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (University of Chicago Press 2015) p 4. For wider discussion of imaginaries in STS see M McNeil et al, "Conceptualising Imaginaries of Science, Technology, and Society" in Felt et al, supra, note 36.

38 For related discussion, albeit not discussing “failure” as such, see ML Flear, “‘Technologies of Reflexivity’: Generating Biopolitics and Institutional Risk to Supplement Global Public Health Security” (2017) 8 EJRR 658.

39 Within the European Union, the applicable law is subject to transition from a trio of directives to a duo of regulations: Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance) OJ L117/1; Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (Text with EEA relevance) OJ L117/176. Implementation of this legislation is left to national competent authorities, including, at the time of writing, the UK’s Medicines and Healthcare Products Regulatory Agency. The competent authorities designate notified bodies to assess medical device conformity with “essential requirements”. The focus in conformity assessments is on the intended purpose and risk of a device. Where a conformity assessment finds a medical device to be compliant with the regulations, the manufacturer of the device can brand it with the CE (Conformité Européenne) mark and trade it within the EU internal market.

40 Medical devices are defined by their intended function, as defined by the manufacturer, for medical purposes. Art 2(1) Medical Devices Regulation defines “medical device” as “any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:

  • diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease,

  • diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability,

  • investigation, replacement or modification of the anatomy or of a physiological or pathological process or state,

  • providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations, and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means” (emphasis added).

41 The classification of medical devices ranges from I for low-risk devices which are non-invasive, such as spectacles; IIa for low- to medium-risk devices, which are usually installed within the body for only between 60 minutes and 30 days, such as hearing aids, blood transfusion tubes, and catheters; IIb for medium- to high-risk devices, which are usually devices installed within the body for 30 days or longer, such as ventilators and intensive care monitoring equipment; to III for high-risk devices, which are invasive long-term devices. Under new EU legislation, Class III devices include the examples already noted and pacemakers that are used on a long-term basis, ie normally intended for continuous use for more than 30 days (Point 1.3, Annex VIII, Medical Devices Regulation, ibid). The only medical devices that are required to evidence therapeutic benefit or efficacy in controlled conditions before marketing are those that incorporate medicinal products. It is not necessary to perform clinical investigations where a device is custom-made or can be deemed similar or substantially equivalent.

42 Specifically, the evidence required to demonstrate conformity with essential safety requirements involves a clinical evaluation. Clinical evaluation: “means a systematic and planned process to continuously generate, collect, analyse and assess the clinical data pertaining to a device in order to verify the safety and performance, including clinical benefits, of the device when used as intended by the manufacturer” (Art 2(44) Medical Devices Regulation, emphasis added). The clinical evaluation verifies safety, performance, and an acceptable benefit/risk ratio. The clinical investigation is a subset of the clinical evaluation and involves “any systematic investigation involving one or more human subjects, undertaken to assess the safety or performance of a device” (Art 2(45) Medical Devices Regulation, emphasis added). The definitions of clinical evaluation and investigation align with that for medical devices in that the assessment of performance is in accordance with intended function.

43 C Allan et al, “Europe’s New Device Regulations Fail to Protect the Public” (2018) 363 BMJ 4205, at p 4205.

44 CJ Heneghan et al, “Trials of Transvaginal Mesh Devices for Pelvic Organ Prolapse: A Systematic Database Review of the US FDA Approval Process” (2017) 7 BMJ Open e017125, p 1, emphasis added.

45 This is a point of comparison for new medical devices seeking approval. One study traced the origins of 61 surgical mesh implants to just two original devices approved in the United States in 1985 and 1996 – see ibid.

46 Macleod and Chakraborty, supra, note 5, at p 238.

47 See references in supra, note 40.

48 Recital 1 Medical Devices Regulation, supra, note 40.

49 Allan et al, supra, note 43, at p 4205, emphasis added.

50 Art 123(2) Medical Devices Regulation, supra, note 40.

51 See, for instance, Recital 63 Medical Devices Regulation, supra, note 40.

52 Cambridge Design Partnership, “About Us” available at <www.cambridge-design.com/about-us> last accessed 15 November 2019.

53 ibid, emphasis added.

54 See, for instance: C Heneghan et al, “Transvaginal Mesh Failure: Lessons for Regulation of Implantable Devices” (2017) 359 BMJ 5515.

55 Cambridge Design Partnership, supra, note 52, emphasis added. Moreover: “Under the [Medical Devices Directive], many people chose to look to the EU Clinical Trials Directive (and subsequent regulation) and the associated Good Clinical Practice guidelines from the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use”. That said, under the Medical Devices Regulation, supra, note 40: “The rules on clinical investigations should be in line with well-established international guidance in this field, such as the international standard ISO 14155:2011 on good clinical practice for clinical investigations of medical devices for human subjects”. The relevant global standards for trials of new medical devices and new medicines is the same: “World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects” (Recital 64, Medical Devices Regulation, supra, note 40).

56 References to consent are embedded throughout the Regulation (EU) 536/2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC [2014] OJ L158/1. The Regulation is planned to apply from 2019. Key references include: Recitals 15, 17, 27, 44, 76 and 80, Art 3 and Ch V (on protection of subjects and informed consent). Preceding the latter was Directive 2001/20/EC on the approximation of the laws, regulations and administrative provisions of the Member States relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use [2001] OJ L121/34. References to GCP are underpinned by ICH, Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2), Current Step 4 Version Dated 9 November 2016 (this version amends that finalised and adopted in 1996) and Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects (1964, as revised, the last time in 2013). The Helsinki Declaration is an instrument of the World Medical Association. Consent is a primary value in each of these. Consent is also a principle within the Universal Declaration on Bioethics and Human Rights (2005) – see Art 6 (consent) and Art 7 (persons without the capacity to consent).

57 Directive 2001/83/EC on the Community Code relating to medicinal products for human use [2001] OJ L 311/67.

58 There are four phases of clinical trials. Phase I trials use between 20 and 80 healthy volunteers in order to determine the tolerable dose range of a new drug. Phase II trials use between 100 and 300 subjects who have the disease or condition to be treated in order to evaluate efficacy and safety of the drug. Phase III trials tend to be multi-centred and might involve up to 10,000 people located in 10–20 countries. This phase generates more safety and efficacy data and the research participants included in the protocol for this type of trial tend to be those suffering from the condition the new drug is intended to treat. Phase IV trials are used to generate post-marketing data on safety and efficacy. This phase can involve millions of people.

59 Double-blind randomised controlled trials are considered the best for Phase II and III clinical trials. These trials involve random allocation to the control or the active arm of the study. Participants allocated to the control group are provided with either the best standard treatment for their condition or a placebo (an inert substance).

60 H Attarwala, “TGN1412: From Discovery to Disaster” (2010) 2(3) Journal of Young Pharmacists 332.

61 Medicines for Human Use (Clinical Trials) Regulations 2004 which implement Directive 2001/20/EC, supra, note 56.

62 G Vince, “UK Drug Trial Disaster – The Official Report”, New Scientist, 25 May 2006, available at <www.newscientist.com/article/dn9226-uk-drug-trial-disaster-the-official-report>, last accessed 10 December 2019.

63 MHRA, Investigations into Adverse Incidents during Clinical Trials of TGN1412, App 1 GCP Inspection of Parexel – Findings, 25 May 2006, available at <webarchive.nationalarchives.gov.uk/20141206175945/http://www.mhra.gov.uk/home/groups/comms-po/documents/websiteresources/con2023821.pdf> last accessed 10 December 2019.

64 MHRA, Investigations into Adverse Incidents during Clinical Trials of TGN1412, 5 April 2006, available at <webarchive.nationalarchives.gov.uk/20141206222245/http://www.mhra.gov.uk/home/groups/comms-po/documents/websiteresources/con2023519.pdf> last accessed 10 December 2019.

65 M Day, “Agency Criticises Drug Trial” (2006) 332(7553) BMJ 1290.

66 MHRA, Phase I Accreditation Scheme Requirements, Version 3, 28 October 2015, available at <assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/473579/Phase_I_Accreditation_Scheme.pdf>, last accessed 10 December 2019. For discussion, see KE Brown, “Revisiting CD28 Superagonist TGN1412 as Potential Therapeutic for Pediatric B Cell Leukemia: A Review” (2018) 6(2) Diseases 41.

67 An interim report was published on 20 July 2006 and followed up by the Expert Scientific Group on Phase One Clinical Trials, Final Report, 30 November 2006, available at: <webarchive.nationalarchives.gov.uk/20130105090249/http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_063117>, last accessed 10 December 2019. For more on reviews and international action on vaginal mesh implant complications, see S Barber, "Surgical Mesh Implants", Briefing Paper, Number CBP 8108, 4 September 2019.

68 At the time of this scandal the legislation applicable to personal data was Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31. This Directive is now replaced by Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1. Consent can be understood more as an ongoing process and not just a one-time only event, see P Allmark and S Mason, “Improving the Quality of Consent to Randomised Controlled Trials by Using Continuous Consent and Clinician Training in the Consent Process” (2006) 32 Journal of Medical Ethics 439; DC English, “Valid Informed Consent: A Process, Not a Signature” (2002) American Surgeon 45.

69 Laurie, supra, note 21, at p 53.

70 P Carter et al, “The Social Licence for Research: Why care.data Ran into Trouble” (2015) 41 Journal of Medical Ethics 404.

71 See supra, note 68.

72 Human Rights Act 1998 and in particular the right to privacy under Art 8 European Convention on Human Rights as incorporated via Sch 1, Part 1.

73 Carter et al refer to a “rupture in traditional expectations”: supra, note 70, at p 407.

74 Carter et al, supra, note 70.

75 Laurie, supra, note 21, at p 53, emphasis added. Citing Carter et al, supra, note 70.

76 BM Hutter and S Lloyd-Bostock, Regulatory Crisis: Negotiating the Consequences of Risk, Disasters and Crises (Cambridge University Press 2017) p 209, emphasis added. For discussion, see M Lodge, “The Wrong Type of Regulation? Regulatory Failure and the Railways in Britain and Germany” (2002) 22(3) Journal of Public Policy 271; M Lodge, “Risk, Regulation and Crisis: Comparing National Responses to Food Safety Regulation” (2011) 31(1) Journal of Public Policy 25; R Schwartz and A McConnell, “Do Crises Help Remedy Regulatory Failure? A Comparative Study of the Walkerton Water and Jerusalem Banquet Hall Disasters” (2009) 52 Canadian Public Administration 91; G Wilson, “Social Regulation and Explanations of Regulatory Failure” (1984) 31 Political Studies 203.

77 Hutter and Lloyd-Bostock, ibid, at p 8, emphasis added. For further discussion of the relationship between regulatory failure and reform, see ibid, at pp 5–6.

78 Hutter and Lloyd-Bostock, supra, note 76, at p 3.

79 ibid, at p 3.

80 Kurunmäki and Miller, supra, note 3, at p 1101, emphasis added.

81 In the examples discussed in this article, legal procedures and modes of judgement seem to have played a limited role in the interpretation of harm and construction of failure – although there has been some litigation, see supra, note 19.

82 See J Law and A Mol, “Notes on Materiality and Sociality” (1995) 43 Sociological Review 279; T Pinch and WE Bijker, “The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” (1984) 14 Social Studies of Science 399. Also see A Faulkner et al (eds), “Material Worlds: Intersections of Law, Science, Technology, and Society”, Special Issue (2012) 39(1) Journal of Law and Society; V Toom, “Bodies of Science and Law: Forensic DNA Profiling, Biological Bodies, and Biopower” (2012) 39(1) Journal of Law and Society 150.

83 I Hacking, Historical Ontology (Harvard University Press 2002) p 106. The key concept here is dynamic nominalism – “a fancy way of saying name-ism” – see I Hacking, The Social Construction of What? (Harvard University Press 1999) p 82. Applied within a variety of literatures, including feminist scholarship (D Haraway, The Haraway Reader (Routledge 2004)), critical race scholarship (D Roberts, “The Social Immorality of Health in the Gene Age: Race, Disability and Inequality” in J Metzl and A Kirkland (eds), Against Health (NYU Press 2010)) and disability studies (B Allen, “Foucault’s Nominalism” in S Tremain (ed), Foucault and the Government of Disability (University of Michigan Press 2018); M Oliver, Social Work and Disabled People (Macmillan 1983); M Oliver, “The Social Model of Disability: Thirty Years On” (2013) 28 Disability and Society 1024).

84 Kurunmäki and Miller, supra, note 3, at p 1101.

85 Hutter and Lloyd-Bostock, supra, note 76, at pp 19–21 for framing and routines, and also at pp 9–18.

86 Adjusting discussion of “medical gaze” in M Foucault, The Birth of the Clinic (Tavistock 1973).

87 For some starting points on the salience of gender to bodies and embodiment, see M Fox and T Murphy, “The Body, Bodies, Embodiment: Feminist Legal Engagement with Health” in M Davies and VE Munro (eds), The Ashgate Research Companion to Feminist Legal Theory (Ashgate 2013).

88 See, for example, DE Hoffmann and AJ Tarzian, “The Girl Who Cried Pain: A Bias Against Women in the Treatment of Pain” (2001) 29 Journal of Law, Medicine & Ethics 13; RW Hurley and MCB Adams, “Sex, Gender and Pain: An Overview of a Complex Field” (2008) 107(1) Anesthesia & Analgesia 309.

89 As does the decision, in 2014 by the US National Institutes of Health, that the preclinical research it funds will in future ensure that investigators account for sex as a biological variable as part of a rigour and transparency initiative. For discussion, see JA Clayton, “Studying Both Sexes: A Guiding Principle for Biomedicine” (2016) 30(2) The FASEB Journal 519.

90 MR Nolan and T-L Nguyen, “Analysis and Reporting of Sex Differences in Phase III Medical Device Clinical Trials – How Are We Doing?” (2013) 22(5) Journal of Women’s Health 399.

91 NICE, Urinary Incontinence and Pelvic Organ Prolapse in Women: Management, NICE Guideline [NG123], 2 April 2019, available at <www.nice.org.uk/guidance/ng123/chapter/Recommendations>, last accessed 10 December 2019. This guidance was issued in response to the NHS England Mesh Working Group – see Mesh Oversight Group Report, 25 July 2017, available at <www.england.nhs.uk/publication/mesh-oversight-group-report/>, last accessed 10 December 2019. Also see “Mesh Working Group”, available at <www.england.nhs.uk/mesh/>, last accessed 10 December 2019.

92 H Pike, “NICE Guidance Overlooks Serious Risks of Mesh Surgery” (2019) 365 BMJ 1537, emphasis added.

93 The contract research organisation, Parexel, failed to complete the full medical background of a trial subject in writing. One principal investigator of the trial failed to update the medical history file in writing after conducting a verbal consultation with one of the trial volunteers.

94 G Pohlhaus, “Discerning the Primary Epistemic Harm in Cases of Testimonial Injustice” (2014) 28(2) Social Epistemology 99, at p 107.

95 See, for example, R Flynn, “Health and Risk” in G Mythen and S Walklate (eds), Beyond the Risk Society (Open University Press 2006). In general see F Ewald, “Insurance and Risk” in G Burchell et al (eds), The Foucault Effect: Studies in Governmentality (University of Chicago Press 1991); F Ewald and S Utz, “The Return of Descartes’ Malicious Demon: An Outline of a Philosophy of Precaution” in T Baker and J Simon (eds), Embracing Risk: The Changing Culture of Insurance and Responsibility (University of Chicago Press 2002). More generally see, for example: RV Ericson, Crime in an Insecure World (Polity 2007); L Zedner, “Fixing the Future? The Pre-emptive Turn in Criminal Justice” in B McSherry et al (eds), Regulating Deviance: The Redirection of Criminalisation and the Futures of Criminal Law (Hart Publishing 2009).

96 Power, supra, note 4, at pp 3–4. Risk objects are a type of bounded object. A bounded object approach is “where law creates artificial constructs that become the object of regulatory attention of dedicated regulators who operate within legally defined spheres of influence or ‘silos’”: see Laurie, supra, note 21, at p 11. There are, of course, related and to some extent overlapping objects, including “marketised objects”, “innovation objects”, etc. For discussion, which draws on Laurie, supra, note 21, see M Quigley and S Ayihongbe, “Everyday Cyborgs: On Integrated Persons and Integrated Goods” (2018) 26(2) Medical Law Review 276. For discussion of another contemporary risk object, “children”, see A-M McAlinden, Children as “Risk” (Cambridge University Press 2018). For further discussion, see Power, supra, note 4, at pp 7–12 and 24–28.

97 Cf R Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press 2008) pp 118–119.

98 R Sparks, “Degrees of Estrangement: The Cultural Theory of Risk and Comparative Penology” (2001) 5(2) Theoretical Criminology 159, at p 169, drawing on D Garland, Punishment and Modern Society (Oxford University Press 1990) emphasis added. On risk and cultural context, see M Douglas and A Wildavsky, Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (University of California Press 1982). Also see N Luhmann, Risk: A Sociological Theory (De Gruyter 1993).

99 N Rose and P Miller, “Political Power Beyond the State: Problematics of Government” (1992) 43(2) British Journal of Sociology 172, at p 178.

100 K Knorr Cetina, “Laboratory Studies: The Cultural Approach to the Study of Science” in S Jasanoff et al (eds), Handbook of Science and Technology Studies (London 1995); B Latour, Science in Action: How to Follow Scientists and Engineers Through Society (Harvard University Press 1987); M Lynch and S Woolgar (eds), Representation in Scientific Practice (MIT Press 1990); A Pickering (ed) Science as Practice and Culture (University of Chicago Press 1992).

101 S Jasanoff, “The Idiom of Co-Production” in S Jasanoff (ed), States of Knowledge: The Co-production of Science and the Social Order (Routledge 2004) p 3.

102 ibid, at pp 2–3.

103 Jasanoff, supra, note 101. Also see K Knorr Cetina, “Laboratory Studies: The Cultural Approach to the Study of Science” in S Jasanoff et al (eds), Handbook of Science and Technology Studies (London 1995); T Kuhn, The Structure of Scientific Revolutions, 2nd edn (University of Chicago Press 1970); M Lynch and S Woolgar (eds), Representation in Scientific Practice (MIT Press 1990); A Pickering (ed), Science as Practice and Culture (University of Chicago Press 1992).

104 Power, supra, note 4, at p 25, emphasis added. Also see Sparks, supra, note 98.

105 Laurie, supra, note 21, at p 52, emphasis added.

106 This includes a focus on enterprise, which as Power explains entails "control [that] is indirect and exercised by autonomous value-creating selves. It must be self-governing…constitutive of freedom and the capacity to innovate" – Power, supra, note 4, at p 197, emphasis added. The logic of enterprise and elements that constitute governance and regulation – framing, knowledges, discourses and practices that regulate everyday life, including law and expectations – are underpinned and directed by neoliberal rationality. Rose and colleagues describe rationality as "a way of doing things that… [is] oriented to specific objectives and that… [reflects] on itself in characteristic ways": see N Rose et al, "Governmentality" (2006) 2 Annual Review of Law and Social Science 83, at p 84, emphasis added. Neoliberal rationality prioritises technical reason and means-end, or instrumental, market rationality, and disseminates it through the organisation of governance "at a distance" – see M Dean, Governmentality: Power and Rule in Modern Society, 2nd edn (Sage Publications 2009); P O’Malley, Risk, Uncertainty and Government (Routledge 2004).

107 ML Flear, “Regulating New Technologies: EU Internal Market Law, Risk, and Socio-Technical Order” in M Cremona (ed), New Technologies and EU Law (Oxford University Press 2016) p 7. See further ML Flear et al, European Law and New Health Technologies (Oxford University Press 2013).

108 Black notes that the rhetoric of risk is a “useful legitimating device”: J Black, “The Emergence of Risk-Based Regulation and the New Public Risk Management in the United Kingdom” (2005) Public Law 512, at p 519; Power, supra, note 4.

109 E Goffman, Frame Analysis: An Essay on the Organisation of Experience (Harvard University Press 1974); M Hajer and D Laws, "Ordering Through Discourse" in M Moran et al (eds), The Oxford Handbook of Public Policy (Oxford University Press 2006); VA Schmidt, "Discursive Institutionalism: The Explanatory Power of Ideas and Discourse" (2008) 11 Annual Review of Political Science 303; M Rein and D Schön, "Problem Setting in Policy Research" in C Weiss (ed), Using Social Research in Public Policy Making (Lexington Books 1977).

110 M Akrich, “The De-Scription of Technical Objects” in WE Bijker and J Law (eds), Shaping Technology/Building Society: Studies in Sociotechnical Change (MIT Press 1992); M Borup et al, “The Sociology of Expectations in Science and Technology” (2006) 18(3–4) Technology Analysis and Strategic Management 285; N Brown and M Michael, “A Sociology of Expectations: Retrospecting Prospects and Prospecting Retrospects” (2003) 15(1) Technology Analysis and Strategic Management 4.

111 Y Ezrahi, Imagined Democracies (Cambridge University Press 2012) p 38, emphasis added.

112 ibid, at p 42, emphasis added. Also see B Anderson, Imagined Communities (Verso 2006); C Taylor, Modern Social Imaginaries (Duke University Press 2004).

113 W Brown, Regulating Aversion (Princeton University Press 2006) p 15; S Jasanoff, Designs on Nature (Princeton University Press 2005) pp 5–6.

114 D Bell, The Coming of Post-Industrial Society: A Venture in Social Forecasting (Basic Books 1976); M Castells, The Rise of the Network Society (The Information Age, Vol I) (Blackwell 1996); K Knorr Cetina, Epistemic Cultures: How the Sciences Make Knowledge (Harvard University Press 1999); N Stehr, Knowledge Societies (Sage Publications 1994).

115 For discussion of neoliberalism, see references supra, note 106.

116 S Jasanoff and B Wynne, “Science and Decision-Making” in S Rayner and EL Malone (eds), Human Choice and Climate Change, Volume 1: The Societal Framework (Battelle Press 1998).

117 D Callahan, "The Social Sciences and the Task of Bioethics" (1999) 128(4) Daedalus 275, at p 276.

118 P Farmer, Pathologies of Power: Health, Human Rights, and the New War on the Poor (University of California 2003) pp 204–205.

119 JR Garrett, “Two Agendas for Bioethics: Critique and Integration” (2015) 29 Bioethics 440, at p 442.

120 AM Hedgecoe, “Critical Bioethics: Beyond the Social Science Critique of Applied Ethics” (2004) 18 Bioethics 120, at p 125. Also see B Hoffmaster (ed), Bioethics in Social Context (Temple University Press 2001).

121 RA Dahl, Democracy and its Critics (Yale University Press 1989) p 335.

122 J Benjamin Hurlbut, “Remembering the Future: Science, Law, and the Legacy of Asilomar” in S Jasanoff and S-H Kim, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (University of Chicago Press 2015) p 129, original emphasis.

123 For discussion, see Hutter and Lloyd-Bostock, supra, note 76, at pp 13–15. In general see I Hacking, The Taming of Chance (Cambridge University Press 1990); TM Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton University Press 1995); A Desrosières, The Politics of Large Numbers: A History of Statistical Reasoning (Harvard University Press 1998); WN Espeland and ML Stevens, “A Sociology of Quantification” (2008) 49(3) European Journal of Sociology 401.

124 M Foucault, Society Must Be Defended (Penguin Books 2004) p 7.

125 Fricker, supra, note 14, referring to text cited in the introduction.

126 J Medina, “Hermeneutical Injustice and Polyphonic Contextualism: Social Silences and Shared Hermeneutical Responsibilities” (2012) 26(2) Social Epistemology 201, at p 217, emphasis added.

127 Heneghan et al, supra, note 54, at p 5.

128 A Irwin and M Michael, Science, Social Theory and Public Knowledge (Open University Press 2003); A Kerr et al, “The New Genetics and Health: Mobilising Lay Expertise” (1998) 7(1) Public Understanding of Science 41; BE Wynne, “Knowledges in Context” (1991) 16 Science, Technology and Human Values 111; BE Wynne, “Misunderstood Misunderstandings: Social Identities and Public Uptake of Science” (1992) 1 Public Understanding of Science 281.

129 S Jasanoff, “Technologies of Humility: Citizen Participation in Governing Science” (2003) 41 Minerva 223; Jasanoff, supra, note 113, ch 10 “Civic Epistemology”. Also see B Wynne, “Uncertainty and Environmental Learning: Reconceiving Science and Policy in the Preventive Paradigm” (1992) 2(2) Global Environmental Change 111.

130 See A Wylie, “Why Standpoint Matters” in S Harding (ed), The Feminist Standpoint Reader: Intellectual and Political Controversies (Routledge 2004); V Rabeharisoa et al, “Evidence-based Activism: Patients’, Users’ and Activists’ Groups in Knowledge Society” (2014) 9(2) BioSocieties 111.

131 Cf A Barry, Political Machines: Governing a Technological Society (Athlone Press 2001).

132 N Rose, The Politics of Life Itself: Biomedicine, Power and Subjectivity in the 21st Century (Princeton University Press 2007).

133 P Rabinow, Essays on the Anthropology of Reason (Princeton University Press 1996); S Gibbon and C Novas (eds), Biosocialities, Genetics and the Social Sciences (Routledge 2007).

134 Such as: genetic citizens (D Heath et al, "Genetic Citizenship" in D Nugent and J Vincent (eds), A Companion to the Anthropology of Politics (Blackwell 2004)); stories of resistance (E Loja et al, "Disability, Embodiment and Ableism: Stories of Resistance" (2013) 28(2) Disability & Society 190); moral pioneers (R Rapp, Testing Women, Testing the Fetus: The Social Impact of Amniocentesis in America (Routledge 2000)); biological citizens (N Rose and C Novas, "Biological Citizenship" in A Ong and S Collier (eds), Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems (Blackwell 2005), cf J Biehl, Will to Live: AIDS Therapies and the Politics of Survival (Princeton University Press 2007)); therapeutic citizens (V-K Nguyen, "Antiretroviral Globalism, Biopolitics, and Therapeutic Citizenship" in A Ong and SJ Collier (eds), Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems (Blackwell Publishing 2005)).

135 A Moore, “Beyond Participation: Opening-Up Political Theory in STS” (2010) 40(5) Social Studies of Science 793. This is a review of MB Brown, Science in Democracy: Expertise, Institutions and Representation (MIT Press 2009).

136 Medina, supra, note 126, at p 218. This responsibility is grounded in virtue theory. For discussion see Fricker, supra, note 14.

137 Medina, supra, note 126, at p 215.

138 M Fineman, “The Vulnerable Subject and the Responsive State” (2010) 60 Emory Law Journal 251.

139 Including precarity (J Butler, Precarious Life: The Power of Mourning and Violence (Verso 2005)); the capabilities approach (M Nussbaum, Creating Capabilities (Harvard University Press 2011); A Sen, “Equality of What?” in S McMurrin (ed), Tanner Lectures on Human Values, Volume 1 (Cambridge University Press 1980)); depletion (B Goldblatt and SM Rai, “Recognizing the Full Costs of Care? Compensation for Families in South Africa Silicosis Class Action” (2017) 27 Social & Legal Studies 671); a feminist approach to flesh (C Beasley and C Bacchi, “Envisaging a New Politics for an Ethical Future: Beyond Trust, Care and Generosity – Towards an Ethic of Social Flesh” (2007) 8 Feminist Theory 279); and the social body (S Lewis and M Thomson, “Social Bodies and Social Justice” (2019) International Journal of Law in Context 1).

140 This includes understanding in epigenetics and neuroscience: see M Pickersgill, “Neuroscience, Epigenetics and the Intergenerational Transmission of Social Life: Exploring Expectations and Engagements” (2014) 3(3) Families, Relationships and Societies 481; N Rose and J Abi-Rached, Neuro: The New Brain Sciences and the Management of the Mind (Princeton University Press 2013); N Rose and J Abi-Rached, “Governing Through the Brain: Neuropolitics, Neuroscience and Subjectivity” (2014) 32(1) Cambridge Anthropology 3; D Wastell and S White, Blinded by Science: The Social Implications of Epigenetics and Neuroscience (Policy Press 2017).

141 M Meloni, “How Biology Became Social, and What It Means for Social Theory” (2014) 62 The Sociological Review 593, at p 595.

142 To borrow from the title of the following: M Thomson, "Bioethics & Vulnerability: Recasting the Objects of Ethical Concern" (2018) 67(6) Emory Law Journal 1207.

143 Most notably, see Fineman, supra, note 138.

144 Sex differences are central to the development of personalised medicines – as mentioned by Nolan and Nguyen, supra, note 90.

145 R Ashcroft, “Fair Process and the Redundancy of Bioethics: A Polemic” (2008) 1 Public Health Ethics 3.

146 Thomson, supra, note 142.

147 See, especially, Jasanoff, supra, note 113.

148 A Moore, “Public Bioethics and Public Engagement: The Politics of ‘Proper Talk’” (2010) 19(2) Public Understanding of Science 197, at p 197. Also A Moore, “Public Bioethics and Deliberative Democracy” (2010) 58 Political Studies 715.

149 For discussion, see Flear, supra, note 22, ch 7, especially pp 205–206.

150 Power, supra, note 4, at p 11, emphasis added.

151 J Downer, “Anatomy of a Disaster: Why Some Accidents Are Unavoidable” (2010) CARR Discussion Paper No 61, p 20, emphasis added.

152 Turner, and Turner and Pidgeon, supra, note 15.

153 B Wynne, “Risk as a Globalising ‘Democratic’ Discourse? Framing Subjects and Citizens” in M Leach et al (eds), Science and Citizens: Globalisation and the Challenge of Engagement (Zed Books 2005). Also see A Boin et al (eds), The Politics of Crisis Management: Public Leadership Under Pressure (Cambridge University Press 2005); SG Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (Harvard University Press 1993); D Demortain, “From Drug Crises to Regulatory Change: The Mediation of Expertise” (2008) 10(1) Health Risk & Society 37.

154 For discussion, see Flear, supra, note 22, especially ch 1 and ch 6.

155 F Fischer, Reframing Public Policy: Discursive Politics and Deliberative Practices (Oxford University Press 2003); DA Schon and M Rein, Frame/Reflection: Toward the Resolution of Intractable Policy Controversies (Basic Books 1994).

156 Part of what Jasanoff describes as the “civic epistemology” informing societal choices about technoscience –Jasanoff, supra, note 113, ch 10. On risk society, see U Beck, Risk Society: Towards a New Modernity (Sage Publications 1986); U Beck, World Risk Society (Polity 2009); A Giddens, Modernity and Self-Identity: Self and Society in the Late Modern Age (Stanford University Press 1991); N Luhmann, Observations on Modernity (Stanford University Press 1998). Also see H Kemshall, “Social Policy and Risk” in G Mythen and S Walklate (eds), Beyond the Risk Society (Open University Press 2006).

157 Power, supra, note 4, at p 21. Emphasis added.

158 It is worthwhile noting that, as regards vaginal mesh, “[l]itigation did not inform the regulatory decisions” made in the wake of failure – Macleod and Chakraborty, supra, note 5, at p 264. The latter authors do not seem to attribute the regulatory approach to PIP silicone breast implants to litigation. However, this statement was made before the decision of the Federal Court of Australia in Gill v Ethicon Sarl (No 5) [2019] FCA 1905. This case involved a class action against members of the Johnson & Johnson group in which the Court found in favour of the claimants.

159 Power, supra, note 4, at p 6. Emphasis added.

160 C Hood, The Blame Game: Spin, Bureaucracy, and Self-Preservation in Government (Princeton University Press 2011). Also see Hutter and Lloyd-Bostock, supra, note 76, at pp 209–213.

161 N Pidgeon et al, The Social Amplification of Risk (Cambridge University Press 2003).

162 Boin et al, supra, note 153.

163 On "regulatory failure", see Section II.2, and on "regulatory crisis" see Hutter and Lloyd-Bostock, supra, note 76.

164 See Fineman, supra, note 138; M Fineman, “Equality, Autonomy, and the Vulnerable Subject in Law and Politics” in M Fineman and A Grear (eds), Vulnerability: Reflections on a New Ethical Foundation for Law and Politics (Ashgate 2013). For one recent deployment, see NM Ries and M Thomson, “Bioethics & Universal Vulnerability: Exploring the Ethics and Practices of Research Participation” (2019) Medical Law Review (forthcoming).

165 General Principle 13 Helsinki Declaration: “Groups that are underrepresented in medical research should be provided appropriate access to participation in research”.

166 Art 4 Universal Declaration on Bioethics and Human Rights (2005), emphasis added. Also see Additional Protocol to the Convention on Human Rights and Biomedicine, concerning Biomedical Research (25 January 2005, entered into force 1 September 2007) CETS 195. In addition see Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (4 April 1997, entered into force 1 December 1999) ETS 164 (often referred to simply as the Oviedo Convention). For discussion, see Brownsword, supra, note 12, at pp 102–105.

167 On which, including a discussion of the potential of human rights and bioethics to both narrow and widen participation, see Flear, supra, note 22. Also see T Murphy, “Technology, Tools and Toxic Expectations: Post-Publication Notes on New Technologies and Human Rights” (2009) 2 Law, Innovation and Technology 181.

168 For discussion, see R Ashcroft, "Could Human Rights Supersede Bioethics?" (2010) 10(4) Human Rights Law Review 639, at pp 645–646.

169 In relation to biomedicine see S Epstein, Impure Science (University of California Press 1996). On risk and social mobilisation, see references to Beck, supra, note 156.

170 R Doubleday and B Wynne, “Despotism and Democracy in the United Kingdom: Experiments in Reframing Citizenship” in S Jasanoff (ed), Reframing Rights: Bioconstitutionalism in the Genetic Age (MIT Press 2011).

171 One of the most promising lines of enquiry is vulnerability theory. See references supra, note 164.

172 For a review of approaches to the collection of data, see DB Kramer et al, “Ensuring Medical Device Effectiveness and Safety: A Cross-National Comparison of Approaches to Regulation” (2014) 69(1) Food Drug Law Journal 1. The EU’s new legislation on medical devices has sought to improve, inter alia, post-marketing data collection, such as through take-up of the Unique Device Identification. This is used to mark and identify medical devices within the supply chain. For discussion of this and other aspects of the EU’s new legislation, see AG Fraser et al, “The Need for Transparency of Clinical Evidence for Medical Devices in Europe” (2018) 392 The Lancet 521.

173 S Sauerland et al, “Premarket Evaluation of Medical Devices: A Cross-Sectional Analysis of Clinical Studies Submitted to a German Ethics Committee” (2019) 9 BMJ Open e027041.

174 Heneghan et al, supra, note 54. Also see B Campbell et al, “How Can We Get High Quality Routine Data to Monitor the Safety of Devices and Procedures?” (2013) 346 BMJ 2782.

175 M Eikermann et al, “Signatories of Our Open Letter to the European Union. Europe Needs a Central, Transparent, and Evidence Based Regulation Process for Devices” (2013) 346 BMJ 2771; AG Fraser et al, “The Need for Transparency of Clinical Evidence for Medical Devices in Europe” (2018) 392 The Lancet 521.

176 V Xafis et al, “Ethics Framework for Big Data in Health and Research” (2019) 11(3) Asian Bioethics Review 227, at p 245.

177 ibid, at p 246.

178 Laurie, supra, note 21, at p 71, emphasis added.

179 Carroll et al, supra, note 25, at p 2, emphasis added. There is a slippage here between anticipation and expectation – but recall that the latter is central to the conditions of possibility for failure, at least as they are understood in this article.

180 Modes of description create possibilities for action – that is: "if new modes of description come into being, new possibilities for action come into being as a consequence": see I Hacking, "Making-Up People" in T Heller et al (eds), Reconstructing Individualism: Autonomy, Individuality and the Self in Western Thought (Stanford University Press 1986) p 231, emphasis added.

181 L McGoey, Unknowers: How Strategic Ignorance Rules the World (Zed Books 2019). See further KT Paul and C Haddad, “Beyond Evidence Versus Truthiness: Toward a Symmetrical Approach to Knowledge and Ignorance in Policy Studies” (2019) 52(2) Policy Sciences 299.


Table 1. Framing and the organisation of knowledge for regulation