I. Introduction
The language of failure – scandal, disaster and crisisFootnote 1 – seems to have acquired new force in public discourse and regulation.Footnote 2 Since the financial and economic crisis of 2008, as Kurunmäki and Miller observe, the “category of failure now saturates public life”.Footnote 3 Our contemporary predicament comes in the wake of what Power described:
“A stream of apparent failures, scandals, and disasters from the 1990s onwards have challenged and threatened institutional capacities to organise in the face of uncertainty, suggesting a world which is out of control, where failure may be endemic, and in which organisational interdependencies are so intricate that no single locus of control has a grasp on them”.Footnote 4
Health research regulation provides many instances of failure that are often unaddressed until illuminated, usually through the involvement of those affected.Footnote 5 Failure in health research regulation is of course nothing new. The regulatory regimes for clinical trials were developed in response to the Thalidomide scandal, which occurred some 50 years ago. In the United Kingdom (UK) the scandal resulted in the Medicines Act 1968 and its related licensing authority.Footnote 6
One current example is worthy of immediate note. For around a decade medical devices have provided several high-profile examples of failure globally. Poly Implant Prothése (PIP) silicone breast implants have been prominent for much of this time.Footnote 7 So too have metal-on-metal hip replacements.Footnote 8 More recently, mesh implants for urinary incontinence and pelvic organ prolapse in women (often referred to as “vaginal mesh”) have been the subject of intense controversy – so much so that some have called it “the new Thalidomide”.Footnote 9 While PIP was denounced in the report on the scandal by Health Minister Lord Howe as a case of “deliberate fraud” by a manufacturer that “actively covered up its deceit and showed a complete disregard for the welfare of its customers”,Footnote 10 metal-on-metal hip replacements and vaginal mesh relate to health research practices. In these cases, previously licensed medical devices were used to demonstrate the safety of supposedly analogous new medical devices. The publication of “The Implant Files” in The Guardian suggests that failures in respect of medical devices have reached a critical juncture. The editorial in The Guardian could not be more damning: “Our investigation has revealed alarming failures. Regulation of devices ... must be reformed”.Footnote 11 The time is ripe for reflection on the deeper causes of harm and failure.
In this article, I draw upon health research regulation for medical devices, clinical trials for new medicines, and health data, to look at the regulatory framing of harm through the language of technological risk, ie relating to safety. I understand failure itself in terms of this framing of harm.Footnote 12 I describe how such framing marginalises the contribution of stakeholder knowledge of harm. I explain how the latter may amount to epistemic injustice, and why and how that should change. Stakeholders include research participants, patients and other interested parties. I see failure as arising when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into technological framings of regulation are seen as thwarted.Footnote 13 It is usually from stakeholder perspectives that expectations are seen as thwarted in this way.
My overall argument is that reliance on a narrow discourse of technological risk in the regulatory framing of harm may contribute to epistemic injustice, and that this may underlie harm and in turn lead to the construction of failure. As conceptualised by Fricker, epistemic injustice is a “wrong done to someone specifically in their capacity as a knower”.Footnote 14 Epistemic injustice includes, but is not limited to, marginalisation and silencing within social interactions and systems, such as those in health research regulation that manage technological risk so as to protect safety.
Epistemic injustice is a harm in itself. As such it deserves attention. More significantly for this article, however, epistemic injustice may limit stakeholder ability to contribute towards and shape regulation, leading to other kinds of harm, and failure. This is especially true in the case of health research regulation, where stakeholders could be directly or indirectly harmed by practices and decisions that are grounded on a limited knowledge base, and which may be rooted in epistemic injustice. The argument in this article thus appreciates how failure can amount to a “failure of foresight”,Footnote 15 which may mean it is possible to “organise” failure and harm out of existence.Footnote 16
More particularly, I appreciate failure principally, though not exclusively, in Kurunmäki and Miller’s words, “as arising from risk rather than sin”.Footnote 17 Put differently, I understand failure in principally consequentialist, rather than deontological, terms.Footnote 18 As such, this understanding does not exclude legal conceptualisations of failure in tort law and criminal law, in which the conventional idea of liability is one premised on “sin” or causal contribution. Legal procedures and modes of judgment remain important to the conceptualisation of failure.Footnote 19 But within contemporary society and regulation such deontological understandings are now often overlaid with a consequentialist view of failure.Footnote 20 The latter is appropriate for the focus of this article, which is about uncovering and dealing with the epistemic roots of harm, rather than causal contribution, so as to better anticipate and prevent future failure. The normative weight of failure, as compared to harm, is used to achieve this. Epistemic injustice may underlie failure, the significance of which can present an important institutional risk to standing and reputation. This risk can prompt the take-up of stakeholder knowledge of harm in regulation, so as to address harm and failure. Coming full circle, this can mean regulation more fully meets expectations of safety, and that its efficacy and legitimacy are improved.
Overall, this article amounts to a call to scholars of law and regulation to grapple with failure further than they have up until now. There is a pressing need to make visible and address the deeper causes of failure in the process of framing and the organisation of knowledge, in which regulators and scholars may be complicit. Grappling with these foundations promises to enable scholars and regulators to better anticipate and prevent future harm and failure. The article advances work on social construction within law and regulatory studies, especially recent work on liminality and health research regulation, to bring fresh insight to discussion on failure.Footnote 21 The argument advanced also draws upon and contributes towards developing work within legal studies on biopolitics.Footnote 22 The discussion in this article highlights the importance of framing, the expectations within it, and the underpinning knowledge, in undergirding the technicalities of law and regulation, and explaining failure. The analysis thus widens a growing dimension in understanding of the “legal” and its relations with the “social” in socio-legal studies.Footnote 23
In the next (second) section, I enlarge on these introductory comments and use several examples to reflect on expectations in framing health research regulation: how does the thwarting of expectations through harm relate to failure? This sets the scene for the third section. Here, I highlight the stakeholder knowledges of harm that are marginalised within the examples of health research regulation, and that became central to the construction of failure through the thwarting of expectations. I explain how the absence of these knowledges amounted to lacunae and blindspots that stemmed from the hierarchy of knowledge that underpins the technological framing of regulation. Building on these foundations, in the fourth section, I turn to discuss how the marginalisation of stakeholder knowledges may amount to epistemic injustice. The latter provides both the grounds for stakeholder participation in regulation and the means, namely the institutional risk presented by epistemic injustice and its link to failure.
II. Expectations and failure in health research
According to van Lente and Rip, expectations amount to “prospective structures” and are “put forward and taken up in statements, brief stories and scenarios”.Footnote 24 In their study on material objects and failure, subtitled When Things Go Wrong, Carroll and others describe failure as “a situation or thing as [sic] not being in accord with expectation”.Footnote 25 This suggests that the thwarting of the expectations built into framing can be understood as a key condition for the construction of failure. It is expectation, rather than anticipation or hope, then, that is central to failure. Bryant and Knight explain that anticipation contains a “sense of the future pulling [us] forward”;Footnote 26 hope “is a way of virtually pushing potentiality into actuality”.Footnote 27 Unlike expectation, anticipation and hope do not provide a sense of how things ought to be, so much as how they could be, or how an individual or group would like them to be. Indeed: “We expect because of what the past has taught us to expect… [Expectation] awakens a sense of how things ought to be, given particular conditions”.Footnote 28 Expectations provide “the ground on which practices, orders, and hence the normative emerge”. And the normative is, of course, “a standard for evaluation”, for whether an outcome is “good or bad, desirable or undesirable”,Footnote 29 and, relatedly, a failure.
Expectations rely on the past to inform a normative view of what will be, which – when thwarted – provides the grounds for the judgment of failure.Footnote 30 This judgment relates to the system or regime itself, and has normative weight over and above harm. For Appadurai:
“The most important thing about failure is that it is not a fact but a judgment. And given that it is a human judgment, we are obliged to ask how the judgment is made, who is authorized to make it, who is forced to accept the judgment, and what the relationship is between the imperfections of human life and the decision to declare some of them as constituting failures”.Footnote 31
Expectations, a key ground for establishing failure, are built into regulatory framings. Work on “design-based regulation”, including that by Brownsword, and Yeung and Dixon-Woods, examines how norms, values, virtues and behavioural options are “designed-in” and “designed-out” of technologies, potentially limiting the exercise of agency and accountability.Footnote 32 Both socio-legal scholars and regulators themselves understand the ways in which the framing of governance and regulation, and their foundational knowledges, embed a range of societal and organisational aims, aspirations and expectations. As a corollary, the targets of regulation, including health research and its applications, can also be understood in these terms, as science and technology studies (STS) has long shown. Society and its expectations are “built into” artefacts,Footnote 33 such as through ideas concerning how they might and should be used; how they are eventually implemented;Footnote 34 and through health research on whether artefacts meet expectations and can be used.
The expectations built into regulatory framings, and artefacts, also engender imaginaries of the future or “imagined futures”.Footnote 35 Jasanoff and Kim have developed a more specific iteration of this concept: sociotechnical imaginaries.Footnote 36 In subsequent work Jasanoff defines these as:
“collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology”.Footnote 37
This definition connects expectations and framing to an imagined future. Bringing these various insights together and applying them to the focus of this discussion, failure can be understood as arising when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into framing are seen as thwarted. Failure threatens to shatter related imaginaries of safe futures.
These key insights hold real potential to shed new light on failure, its underpinnings, and ways to anticipate and prevent it. In the following, I apply and develop these insights to three examples drawn from health research. Although these examples centre on the UK, they also relate to practices elsewhere. These examples encompass living human participants (for medical devices and new medicines) and their adjuncts (personal data) in non-emergency contexts. Health research in emergency contexts is not considered as it raises distinct issues, including the practices in such situations, and the power imbalances in the organisation of global governance that may underlie them.Footnote 38 I work in broad brush strokes to consider two things in relation to each example: first, the expectations built into and flowing from its framing; second, how the expectations were thwarted, imaginaries threatened, and failure emerged. This discussion, summarised in Table 1 below, provides the starting point for the subsequent analysis of the epistemic foundations of failure, and reflection on how they can be modified to prevent future failure.
Table 1. Framing and the organisation of knowledge for regulation
![Table 1. Framing and the organisation of knowledge for regulation](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200228093226980-0220:S1867299X19000679:S1867299X19000679_tab1.png?pub-status=live)
1. Health research for safe implants
The first example is in respect of medical devices. The applicable legislation on medical devicesFootnote 39 framed their regulation through technological risk and an expectation of safety.Footnote 40 However, in respect of metal-on-metal hips and vaginal mesh, for instance, harm occurred, the expectation of safety was thwarted and failure arose. Failures in health research became apparent downstream around the world once these medical devices were in use.
At the root of the failures was the classification of metal-on-metal hips and vaginal mesh as Class IIb devices.Footnote 41 This meant that it was possible for manufacturers to rely on substantial equivalence to existing products to demonstrate conformity with general safety and performance requirements. These requirements set expectations that manufacturers and regulators would demonstrate safety, both for the device and the person within which it is implanted. Substantial equivalence obviates the need for health research involving humans via a clinical investigation.Footnote 42 It is noted in one BMJ editorial that this route “failed to protect patients from substantial harm”.Footnote 43 Heneghan and others point out that in respect of approvals by the Food and Drug Administration in the United States, which are largely mirrored in the EU: “Transvaginal mesh products for pelvic organ prolapse have been approved on the basis of weak evidence over the last 20 years”.Footnote 44 The reliance on substantial equivalence meant that safety and performance data came from implants that were already on the market, sometimes for decades, and which were no longer an accurate predicate for new medical devices.Footnote 45 In other words, on the basis of past experience – specifically, of “substantially equivalent” medical devices – there was an unrealistic expectation that safety would be ensured through this route, and that further research involving human participants was unnecessary.
In respect of vaginal mesh, the adverse events reported include: “Pain, impaired mobility, recurrent infections, incontinence/urinary frequency, prolapse, fistula formation, sexual and relationship difficulties, depression, social withdrawal or exclusion/loneliness and lethargy”.Footnote 46 Within the European Union (EU), new legislation was introduced, largely in response to the failures.Footnote 47 The legislation is deemed to provide a “fundamental revision” of the regulatory framework in order to “establish a robust, transparent, predictable and sustainable regulatory framework for medical devices which ensures a high level of safety and health whilst supporting innovation”.Footnote 48 One interpretation of the new legislation is that it is a direct response to failure and intended to provide “a better guarantee for the safety of medical devices, and to restore the loss of confidence that followed high profile scandals around widely used hip, breast, and vaginal mesh devices”.Footnote 49
The specific piece of legislation applicable to these devices will not apply until 26 May 2020.Footnote 50 In respect of metal-on-metal hips and vaginal mesh, the legislation reclassifies them as Class III. Future manufacturers of these devices will have to carry out clinical investigations so as to demonstrate conformity with regulatory requirements.Footnote 51 According to one self-described “innovation company”,Footnote 52 Cambridge Design Partnership, whereas the Medical Devices Directive “briefly outlined (in only nine paragraphs) the expectations of clinical investigations”, the “new [Medical Devices Regulation] expands on this and takes up a whole chapter on the subject”.Footnote 53 Put differently, the new legislation provides more detailed and expansive expectations in relation to safety. Nevertheless, doubts remain whether these will prevent future harms and thus failures similar to those mentioned above.Footnote 54
2. Health research that is safe for participants and leads to safe medicines
The second example of failure in health research is clinical trials. It is noteworthy that the standards used for clinical investigations of medical devices are usually those used for the conduct of clinical trials. The reason: “these [give] a greater indication of the expectations for clinical investigations”.Footnote 55 By contrast to medical devices, the marketing of new medicines always requires clinical trials.
The focus in clinical trials is on protecting the human participants involved. Here, ethics-based and rights-related requirements, in particular relating to consent and other standards of good clinical practice (GCP),Footnote 56 support the key framing. These requirements are also about credentials: the medicines can be said to be authorised on the basis of data produced in an ethics and rights-friendly way. This is essential given the problematic history of the development of medicines (see Thalidomide, mentioned in the introduction). The trials are used to establish quality, safety and efficacy, ie the conditions for market authorisation.Footnote 57 These conditions contrast with the focus on safety and performance for the marketing of medical devices.
In 2006, clinical trials were the subject of renewed controversy when the novel drug TGN1412, produced by German pharmaceutical company TeGenero, caused multiple organ failures in a Phase I, or first-in-human, clinical trial.Footnote 58 Six men were injected with the drug at Northwick Park Hospital, London, while another two received a placebo, thus making this a randomised controlled trial, the “gold standard”.Footnote 59 The drug was being developed as a potential treatment for certain autoimmune diseases and leukaemia.Footnote 60 In this case, there was an expectation of participant safety – or at least that sufficient steps would be taken to reasonably ensure safety – and this was based on past experience and understanding of regulation. However, harm occurred, the expectation of safety was thwarted and failure arose. The Medicines and Healthcare Products Regulatory Agency’s (MHRA’s) report into compliance with standards on GCPFootnote 61 noted the absence of complete patient medical records and 24-hour medical cover, including plans for dealing with adverse events.
These aspects of the trial, referred to in the New Scientist as a “catalogue of errors”,Footnote 62 were addressed in Appendix 1 of the MHRA’s report as “discrepancies”.Footnote 63 In other words, echoing the MHRA’s interim report,Footnote 64 these issues were not described as problems with substantive aspects of the research process itself – hence the non-binding regulatory changes that followed what others have described as the “failure”Footnote 65 of the trial.Footnote 66 Further, the MHRA stopped short of categorising the adverse event in terms that would connote failure at all, including its own. The MHRA’s assessment was subsequently buttressed by an expert report.Footnote 67 This report repeated the MHRA’s recommendations and also drew attention to the use of the maximum starting dose and its administration in sequence. As regards the latter, the last two volunteers were injected even though the first volunteer had begun to show the early signs of an adverse reaction. The expert report found that a minimum dose should have been administered in a longer sequence. Significantly for the present analysis, each of these reports, and the surrounding commentary, actually underlines how it was the thwarting of the expectation of participant safety that began the process of constructing failure.
3. Health data and social licence
Health data is a key adjunct to the human body, and it provides the third and final example of failure. The Health and Social Care Act 2012 (2012 Act) provided the basis for the establishment of care.data in England. The 2012 Act established the Health and Social Care Information Centre to receive individual-level patient data from the surgeries of general practitioners (GPs). Data would be anonymised and consent from patients would not be sought.Footnote 68 Where anonymisation would not be possible, the Confidentiality Advisory Group would be charged with authorising non-consented uses in the public interest. The National Health Service Constitution was amended and a huge public information exercise was undertaken to explain the initiative, although care.data was not mentioned directly. The scheme was first suspended in February 2014 after a huge outcry from patients and professionals; it was briefly resurrected before being abandoned, seemingly for good, in July 2016. Key concerns related to lack of transparency and robust oversight, and inadequate information on the possible uses of data and its disclosure, in particular for use in commercial endeavours.Footnote 69
Although there was a legal mandate through the 2012 Act, the implementation of this mandate through care.data did not accord with, inter alia, “traditional understandings of the private and confidential nature of the relationship between GP and patient”.Footnote 70 These understandings are underpinned by law relating to data protectionFootnote 71 and human rights.Footnote 72 Simply put, care.data thwarted or “ruptured”Footnote 73 these expectations, and with them a social licence.Footnote 74 This is because the accordance of care.data with the traditional understandings could not be assumed, particularly given the concerns just noted. Importantly for present purposes, the idea of a social licence underscores “the assumption [that] a mandate for action must accord with the general expectations of society” – and, moreover, “it is dangerous to attempt to borrow this wholesale from one context to another”.Footnote 75
In this case, then, the assumption that a mandate for action could be borrowed from individual consent (one context) and applied to care.data (another context) did not accord with a key societal-level expectation, which was informed by public understandings of past practice. Put differently: a social licence needed to be granted for the significant changes to the use of individual data envisaged under care.data. The absence of this social licence meant harm occurred, or at least was thought to be likely to occur, and the expectation of safety was thwarted. The thwarting of this societal expectation, through the implementation of the statutory basis for regulation, began the process of constructing failure, which ultimately led to the failure of care.data itself. This case demonstrates the central importance of expectations of safety through the protection of individual data, and its use, in a way that ensures good governance and public benefit.
4. Constructing failure
In each example, technological risk understood as being about safety frames regulation (of medical devices, research participants and data gathering and use). An expectation of safety is built into and flows from this framing. The expectation engenders an imaginary or imagined future where technoscience leads to safe innovations.
However, across the examples, there is little or no suggestion of failure by those formally responsible, and who might be accountable in the event of failure. This is doubtless to avoid any hint of regulatory failure. Regulatory failures are described by Hutter and Lloyd-Bostock as “failures to manage risk”.Footnote 76 Moreover, “[i]ncreasingly, regulators find themselves in the spotlight as questions are raised about why regulation ‘failed’, effectively framing them as part of the cause of disasters and crises”.Footnote 77 A perception of regulatory failure thus has key implications for the accountability and legitimacy of regulation and regulators – and such perception is therefore to be avoided by them.
Instead, these examples demonstrate how the construction of failure does not necessarily hinge on official interpretations of harm as amounting to “failure”. This is apparent in the various quotations from non-regulators noted above. As Hutter and Lloyd-Bostock put it, these are “terms in which events are construed or described in the media or in political discourse or by those involved in the event”.Footnote 78 As they continue, what matters is an “event’s construction, interpretation and categorisation”.Footnote 79 Failure amounts to an interpretation and judgment of harm. Put differently, “failure” arises through an assessment of harm undertaken through calculative techniques and judgments. Harm becomes refracted through these. At a certain point in each of the examples, the expectations of safety built into framing are seen by stakeholders as thwarted, and the harm becomes understood as a failure.
Kurunmäki and Miller make a similar observation:
“[The] moment [of failure] surfaces within and through an assemblage of calculative practices, financial norms, legal procedures, expert claims and modes of judgment. These allow complex processes of mediation between a variety of actors, domains and desired outcomes…[Failure is] undeniably constructed through the multiple ideas and instruments that set the parameters within which open-ended yet not limitless negotiation and judgment takes place, as the moment of failure is either predicted or pronounced”.Footnote 80
Although harm is real, it becomes known and understood as failure through various techniques and practices, including legal and regulatory procedures and modes of judgment.Footnote 81 These make it possible to know and name the reality of failure.Footnote 82 Hacking summarises this point: “our classifications and our cases emerge hand in hand, each egging the other on”.Footnote 83 Official discourses are significant, not least because they help to set expectations of safety. But these discourses do not control stakeholder interpretations and knowledge of harm, or stakeholders’ sense that expectations of safety have been thwarted, leading to failure. Indeed, failure is what Kurunmäki and Miller term “a variable ontology object”.Footnote 84 The potential for diverse and alternative interpretations and perceptions of harm by individuals and groups, and their roles in making their views known, underscore the role of the mutability of power relations in the construction of failure.Footnote 85 The assessment of failure in turn threatens to shatter the imaginary of a safe future.
In what follows, I shift attention to the lacunae and blindspots in the knowledge base in relation to each example. I outline these, discuss what they imply about the place of stakeholder knowledges of harm in regulation, and interrogate their foundations in the organisation of knowledge. Subsequently, I build on this discussion to explain how the marginalisation of stakeholder knowledges relating to harm may amount to epistemic injustice, and how that provides both the grounds and means for bringing stakeholder knowledges and expertise within health research regulation.
III. Producing knowledge for safety regulation
1. Lacunae and blindspots
In each of the examples, the knowledge base for regulation is derived from an archive of past experience and scientific-technical knowledge. There are epistemic or knowledge-related lacunae and blindspots in this knowledge base, and these relate to stakeholder knowledge of harm. In respect of vaginal mesh and metal-on-metal hips, the focus on performance (ie the device performs as designed and intended) marginalised attention to effectiveness (ie producing a therapeutic benefit), and patient knowledge on this issue. Moreover, in relation to implants, many of the most prominent examples have related to breast and mesh implants placed in female bodies. Female knowledges and lived experiences of the devices implanted within them have tended to be sidelined or even overlooked. Gender matters within the “biomedical gaze”.Footnote 86 The operation of presumptions based on, inter alia, genderFootnote 87 through this lens is well-known, and includes gender-based presumptions about pain.Footnote 88 Within biomedicine, understandings of pain have usually been, and still are, based upon male bodies, which are then cast as the “normal” body. The position of the male body within biomedical research might explain the all-male participants in the clinical trial for TGN1412.Footnote 89 The centrality of the male body within research provides part of the explanation for the time taken to recognise there was a safety problem in respect of vaginal mesh.
Another part of the explanation for the latter problem is that there was a lengthy delay in embodied knowledge and experiences of pain being reported within the very processes that are supposed to reveal them – effectively sidelining and ignoring those experiences.Footnote 90 Bringing these experiences into the knowledge base for health research regulation is, however, still proving difficult. For instance, within the UK, the new guidance on vaginal mesh, issued by the National Institute for Health and Care Excellence (or NICE),Footnote 91 has been met with criticism on gender-based lines. NICE cites safety concerns and recommends that vaginal mesh should not be used to treat vaginal prolapse. As the UK Parliament’s All Party Parliamentary Group on Surgical Mesh Implants said, the guidelines “disregard mesh-injured women’s experiences by stating that there is no long-term evidence of adverse effects”.Footnote 92
Despite the privileged position of male bodies within biomedicine, male research participants can of course still experience marginalisation of their knowledges, as the TGN1412 trial demonstrates. The problems with the trial – inter alia, the absence of complete patient medical records;Footnote 93 the lack of 24-hour medical cover and plans to deal with adverse events; the close sequencing of injecting the trial medicine – meant that the embodied risk and experiential knowledge that would be generated through the trial itself were implicitly and effectively minimised and sidelined. This underscores the degree to which participants’ bodies were instrumentalised for very specific purposes: testing a new medicine in order to generate safety and efficacy data. As for the final example, care.data, it demonstrates the importance of widely-held social understandings of the doctor-patient relationship, good governance and public benefit, in the use of patient data. In a sense this amounts to social knowledge and understanding of data, ie that which is disembodied. However, these social understandings were effectively ignored.
These epistemic lacunae and blindspots make apparent the marginalisation of stakeholder knowledges of harm within the examples of health research regulation. As I have described, this marginalisation effectively silences and closes off a rich source of knowledge and expertise for health research regulation. Stakeholders are, as Pohlhaus notes, “relegated to the role of epistemic other”.Footnote 94 The limited knowledge base for regulation, and epistemic othering in the examples, seem to account, at least in part, for the harms and failures that arose, since stakeholder knowledges became central to the construction of failure in each example. The limited knowledge base also helped to support the focus on safety and delimit regulatory responsibilities and accountabilities in each case.
2. Knowledge for regulation
Epistemic othering stems directly from the organisation of knowledge for regulation. Risks, risk-based framings and concepts (including, as we have seen, failure) do not exist in themselves, but are co-constituted through a field of knowledge and regulatory processes.Footnote 95 Putting knowledge and framing (including expectations) together, Power describes how “social and economic institutions… shape and frame knowledge of, and management strategies for, risk, including the definition of specific ‘risk objects’”.Footnote 96 The focus on technological risk in health research regulation is based on a series of decisions, which determine the knowledge required for regulation, and help to explain epistemic othering. These decisions include how harm relates to risk; centralising safety as “the” technological risk; the degree of risk (low or high); and, finally, how technological risk relates to precaution – whether precaution operates in risk assessment or in risk management.Footnote 97 The decisions underscore how, to quote Sparks, drawing on Garland, risk has “moral, emotive and political as well as calculative” dimensions and is a “mixed discourse”.Footnote 98
Knowledge is formulated as encompassing “the vast assemblage of persons, theories, projects, experiments and techniques that has become such a central component of government” – it is “the ‘know how’ that makes government possible”.Footnote 99 In the context of health research regulation, scientific-technical knowledge defines a space in which specific objects (medical devices, medicines and data) are rendered governable.Footnote 100 Stakeholder knowledges of harm are effectively deemed unnecessary for regulation focused on technological risk. The organisation of knowledge for regulation is also, as Jasanoff explains, “incorporated into practices of state-making, or of governance more broadly”.Footnote 101 As such, knowledge for regulation “is not a transcendent mirror on reality”,Footnote 102 but, similar to the governing arrangements it underpins, remains contingent on the wider social milieu.Footnote 103
More generally, and reflecting its social setting, risk management “embodies ideas about purpose” and is embedded in “larger systems of value and belief”.Footnote 104 Hence, although health research regulation is focused on safety, this is linked to wider goals. Indeed, Laurie describes the way in which:
“law’s regimes do not capture the experience of becoming someone involved in research…from the human perspective, the process of becoming a research participant is a change of status, potentially in very profound ways. Individuals, their bodies, body parts, and other intimate adjuncts such as personal data, are instrumentalised to varying degrees”.Footnote 105
Health research is increasingly part of attempts to optimise the market and economy.Footnote 106 At least at the EU level, which is relevant to each of the examples in this article, the focus on technological risk attempts to deliver market-oriented goals – health through safe products is instrumentalised for the latter.Footnote 107
To achieve these goals, discourses on technological risk,Footnote 108 and their related scientific-technical knowledge base, do more than frame and shape expectations of safety: through this discursive device,Footnote 109 they literally talk and legitimate new health research and technoscience into existence.Footnote 110 In this regard, imaginaries play a central role. Ezrahi explains that “[w]hat renders…[imaginaries] consequential is their capacity to generate performative scripts that orient political behaviour and the making and unmaking of political institutions”.Footnote 111 The instrumental role of imaginaries can be readily seen in the expectations and framing of health research discussed in this article. An imaginary “composes and selects images and metaphors aimed at fulfilling functions in a variety of fields, including scientific research…”.Footnote 112 In terms of the examples considered above, safe medical devices, safe clinical trials for safe medicines, and future innovation to be delivered through the safe use of health data, are part of an imaginary that encompasses these key legitimating images and metaphors (the latter since they might not be entirely literal or wholly based on reality – not least because safety is found wanting).
Technological risk and expectations of safety, including as part of imaginaries, are becoming even more important tools of legitimation for states or supranational entities such as the EU. These sites are tightening their relations with science and technologyFootnote 113 – and through them the public (including stakeholders) who engage with technoscience as a “knowledge society”.Footnote 114 The framing of health research by technological risk and expectations of safety, and the imaginaries they help to generate, thus perpetuate current modes of governance and technoscientific trajectories.Footnote 115 Yet as a key consequence, the organisation of knowledge for regulation marginalises and others stakeholders and their knowledges.
3. Hierarchy of knowledge
Looking more deeply still at the basis for epistemic marginalisation and othering, the organisation of knowledge for regulation, wherein credentialised knowledge and expertise forms the basis for regulation, is underpinned by a hierarchy of knowledge.Footnote 116 As a key part of the framing of technological risk, bioethics plays a central role in constructing the hierarchy of knowledge and masking its construction – and has long been the subject of criticism for it. Bioethics tends to focus on technological development within biomedicine, and principles of individual ethical conductFootnote 117 or so-called “quandary ethics”,Footnote 118 rather than systemic issues related to social (or indeed epistemic) justice. Consequently, bioethics usually privileges and bolsters scientific-technical knowledge, erases social context,Footnote 119 and renders social elements as little more than “epiphenomena”.Footnote 120
Importantly for the present discussion, bioethics operates with risk-based framings to install an expert rationality – the “quasiguardianship” of scientific expertsFootnote 121 – within regulation. The expert rationality reflects and draws upon an imaginary of science. Hurlbut describes how:
“Framed as epistemic matters – that is, as problems of properly assessing the risks of novel technological constructions – problems of governance become questions for experts. The ‘scientific community’ thus acquires a gatekeeping role, based not on some principle of scientific autonomy or purity but on an imagination that science is the institution most capable of governing technological emergence”.Footnote 122
This expert rationality, and related imaginary, configure and legitimate the hierarchy between scientific-technical knowledge and expertise, and in turn the relationship between regulation and its stakeholders.Footnote 123 Stakeholder knowledges and forms of expertise relating to harm are, as Foucault explained, “disqualified … [as] naïve knowledges, hierarchically inferior knowledges, knowledges that are below the required level of erudition or scientificity”.Footnote 124
Table 1 (above) brings together the epistemic lacunae and blindspots, with framing, and the hierarchy of knowledge, and summarises the discussion to this point. To build on the platform provided by the analysis thus far, I turn now to explain how epistemic othering through the marginalisation of stakeholder knowledges of harm may amount to epistemic injustice, and provide both the grounds and means for the contribution of stakeholder knowledges of harm to regulation. Again, I work in broad brush strokes rather than close detail.
IV. Epistemic injustice: grounds and the means for stakeholder participation
1. Grounds for participation
As we have seen, the health research process is a site for harms and failures. The marginalisation of stakeholder knowledge and expertise relating to harm, evident in the lacunae and blindspots, appears to be part of the explanation for failure, and may amount to epistemic injustice. Testimonial injustice is a more specific instance of epistemic injustice, as defined by Fricker,Footnote 125 and it arises when individuals or groups are unjustly denied credibility as knowers. The examples from health research, discussed above, attest to the way in which biomedical researchers or those involved in the design of health research have, as Medina put it, “been trained not to hear or to hear only deficiently and through a lens that filters out the speaker’s perspective”.Footnote 126 The perspectives (and bodily reactions) of the research participants in the TGN1412 trial, and the recipients of metal-on-metal hips and vaginal mesh, are key examples of testimonial injustice. The three-year wait for the new EU legislation applicable to medical devices to move from entry into force to applicability further attests to testimonial injustice. For Heneghan and others, “the long delay in implementation does not represent a timely response to patients’ needs”.Footnote 127
Hermeneutic injustice arises when individuals or groups are unable to articulate and present themselves through the conceptual resources that resonate with biomedical researchers and regulators, and through which those researchers and regulators view reality, think and act. This is apparent across the examples discussed above. Although the symptoms of adverse reaction to TGN1412 were present in early recipients, the trial medicine continued to be given to others. The accounts and complaints of women in receipt of vaginal mesh, for instance, were long dismissed. It took a huge outcry by the medical profession and general public before the need for a social licence for health data collection and use was acknowledged and acted upon.
For the purposes of this article, resolving epistemic injustice is not simply an end in itself. Tackling epistemic injustice, by plugging lacunae and widening what is known, becomes instrumental to constructing a more complete knowledge base, and through it anticipating and preventing future harms, and in turn judgments of them as failures. To this end, stakeholder participation holds much promise. Contrary to what is implied by their marginalisation, stakeholders know about diverse things, such as harms, in different ways, have a range of expertise, and are often reflexively aware of limitations to their comprehension of particular technoscientific developments that they may actively seek to address.Footnote 128 Stakeholders’ insights can inform discussion on the purpose of risk governance and regulation, whom it makes vulnerable by failing and hurting, whom it benefits, and how we might know.Footnote 129 Stakeholder knowledge of harms provides the basis for their participation in health research regulation.Footnote 130
Moreover, scientific knowledge and technologies are “political machines”Footnote 131 which can become embroiled with and further engender biopolitical debates and campaigns.Footnote 132 As the example of medical devices in particular attests, stakeholders may demonstrate “biosociality”Footnote 133 by organising to demand and contest decisions relating to their biology, conditions and experiences.Footnote 134 And since there is injustice (albeit epistemic), participation makes sense.Footnote 135 Indeed, as Medina states, “if one exhibits a complete lack of ‘alertness or sensitivity’ to certain meanings or voices, one’s communicative interactions are likely to contain failures…for which one has to take responsibility”.Footnote 136 This responsibility extends beyond individuals to communities and “[i]nstitutions and people in a position of power”.Footnote 137
There is a growing plethora of approaches, most recently and notably vulnerability,Footnote 138 within which embodied risk and experiential knowledge are central.Footnote 139 These approaches are buttressed by a developing scientific understanding of the significance of environmental factors to genetic predisposition to vulnerability and embodied risk.Footnote 140 This understanding provides a “new biosocial moment”Footnote 141 which the approaches can leverage in order to expand appreciation of the potential for a lack of alertness and communicative failures for which institutions and powerful people must take responsibility. Further, within such approaches, as Thomson explains, the centrality of the human body and experience is foregrounded precisely to recast the objects of bioethical concern.Footnote 142 The goal: to prompt a response from the state to fulfil its responsibilities in respect of rights.Footnote 143 In the context of health research, this growing focus on embodied risk and experiential knowledge, also seen in the turn to personalised medicine,Footnote 144 can be used to reshape the biomedical gaze and practice.
But the years of failed attempts to reshape the bioethical establishment, and the organisation of knowledge for regulation, demonstrate that it can take more than failure, new approaches and growing knowledge, to widen the knowledges that count within risk-based regulation. Bioethics tends to prefer deliberation over resolving problems as ethical matters.Footnote 145 Moreover, recent scholarship, such as Thomson’s,Footnote 146 as well as much work in cognate disciplines such as STS,Footnote 147 considers public bioethics. These discussions notwithstanding, bioethics remains, as Moore observed almost ten years ago, “largely separate from the analysis of public participation in scientific governance”.Footnote 148
On top of this, risk-based regulation, such as that applicable to health research, has a well-known tendency to direct attention towards managing consequences rather than dealing with root causes.Footnote 149 As part of this, organisations focus on their documentary records to ensure they have in place an audit trail that backs up their actions as rational and avoids institutional sanction. Power details how this leads organisations to prioritise “more elaborate formal representations of management practice, including risk management”, over “generating a climate of intelligent challenge and a capacity to abandon existing organisational hierarchies in a crisis”.Footnote 150
Against this background, I find it hard to be as optimistic as Downer is about “epistemic failures”, that is, the failures arising from problems in the knowledge base. He describes how epistemic failures:
“force experts to confront the limits of their knowledge and adjust their theories. Because they are likely to reoccur, they allow engineers to leverage hindsight… turning it into foresight and preventing future accidents”.Footnote 151
As seen across the examples discussed above, within official accounts, at least, the problem in each case has been presented as one of management and governance (albeit not amounting to regulatory failure), rather than the knowledge base. Stakeholder accounts of the examples have tended to foreground knowledge gaps and blindspots relating to harm as part of the explanation for failure (over and above harm). For this reason, and the others just outlined, it becomes necessary to prompt both credentialised experts and regulators to take note of stakeholder knowledges of harm, and make this “disruptive intelligence”Footnote 152 part of the knowledge base for regulation. I turn now to discuss how to achieve this.
2. “Epistemic injustice as risk” as the means for participation
Epistemic injustice, a harm in itself, arises from the framing and organisation of knowledge, and helps to explain lacunae and blindspots in the knowledge base for regulation. Epistemic injustice is thus part of what needs to be resolved in order to manage other kinds of harm, and in turn failure, within technological framings of regulation. Yet, for the reasons just outlined, it is unlikely that epistemic injustice would be tackled within current risk management practices. Nevertheless, the framing of regulation and organisation of knowledge through risk, the threat posed by the construction of failure from harm, and the related imaginary, discussed in the previous section, are also resources for stakeholder participation.Footnote 153 These resources can enable the contribution of stakeholder knowledge and expertise to resolve epistemic injustice, improve the knowledge base, and help anticipate and prevent future failures in health research regulation.Footnote 154
Epistemic injustice can also be understood as an institutional risk to standing and reputation.Footnote 155 As Power points out, risk management also involves governing “unruly perceptions”, against a background of societal anxiety about risk,Footnote 156 and maintaining the “production of legitimacy in the face of these perceptions”.Footnote 157 These risks are secondary to the primary risk or risk event. The risk presented by epistemic injustice is to standing and reputation (a secondary risk). This risk gains added potency from its association with the failures (primary risks) that epistemic injustice may engender. As a reminder, the judgment of failure bears significant normative weight over and above harm. Failure reinterprets harm in risk-based terms and as such it disrupts the societal expectation that risk can be regulated out of existence. An associated (and secondary) risk is legal risk: the actual or potential legal liability that arises where there is a causal contribution to harm.Footnote 158
The management of epistemic injustice as an institutional risk to standing and reputation can derive added urgency from the potential for public blaming and shaming amidst what Power termed the “web of expectations about management and actor responsibility”.Footnote 159 Blaming and shaming threatens to assign responsibility to regulators, not simply for harm but also for failure – that is, to make the “harm” a “regulatory failure”.Footnote 160 As such, blaming can even amplifyFootnote 161 or extend the duration of an institutional risk to standing and reputation.Footnote 162 This may produce a crisis for regulation, quite apart from any interpretation and judgment of failure or regulatory failure.Footnote 163 As discussed above, the examples of failure in health research regulation are not understood as such because of any official pronouncement.
In order to develop epistemic injustice as an institutional risk to standing and reputation, the growing research on embodied risk, and the cluster of references in key legal and bioethical instruments, can be leveraged to shape adverse public perceptions of regulation. These references include those to vulnerabilityFootnote 164 and participation of marginalised groups.Footnote 165 In particular, a key principle in Article 4 of the Universal Declaration on Bioethics and Human Rights is focused on harm and failure. It states that in:
“applying and advancing scientific knowledge, medical practice and associated technologies, direct and indirect benefits to patients, research participants and other affected individuals should be maximized and any possible harm to such individuals should be minimized”.Footnote 166
Regulatory sensitivity to risk to standing and reputation, supported by these kinds of references, draws upon the combined discursive power of human rights and bioethics, and can prompt a response.Footnote 167 The utility of human rights, in particular, in inflicting reputational damage so as to reshape regulation can be seen in the successes of disability rights campaigners, feminists and anti-poverty groups, in their respective challenges to the bioethics establishment.Footnote 168 In these cases, epistemic and wider injustice grounded on risk formed the basis for mobilisation and participation,Footnote 169 and it was often “uninvited” and even disruptive of regulatory structures.Footnote 170 Mobilisation is also apparent across the examples noted above, most recently in respect of medical devices.
In summary, in these various ways, epistemic injustice, stemming from marginalisation, can amount to a significant institutional risk to standing and reputation, and as such it can be used as a tool to deal with harm and failure. This is especially the case when that institutional risk is linked to failure: when harm becomes refracted through calculative techniques and judgments, and reaches a point where the expectations of safety built into technological framings of regulation are seen as thwarted. Indeed, the assessment of failure, over and above harm, threatens to shatter the imaginary of a safe future, and in turn to undermine organisational identity and legitimacy. In essence, epistemic injustice can amount to an existential threat. Epistemic injustice as an institutional risk provides a way to impel the gathering of stakeholder knowledge of harm, and its addition to the knowledge base for regulation.
In this way, epistemic injustice can be used by stakeholders to inscribe their knowledges, expertise, and thus themselves, within the field of knowledge for regulation, and to reshape regulation. As a risk, epistemic injustice helps to widen the knowledges that matter within technological framings of regulation. This strategy thus chimes with ongoing attempts to engender a widening of the bioethical and biomedical viewpoint beyond their dominant objects of concern.Footnote 171 Attention to epistemic injustice as an institutional risk promises to enable regulation to better anticipate and prevent future harm and failure. The contribution of stakeholder knowledges of harm would supplement and bolster the current hierarchy of knowledge for regulation, and organisational identity, and the imaginary that helps to legitimate regulation.
The mechanisms to facilitate epistemic integration, and widen the knowledge base for regulation, include existing techniques of risk assessment and management. For instance, in respect of medical devices, further attention to effectiveness could yield important additional data (ie on producing a therapeutic benefit) on top of performance (ie the device performs as designed and intended). Similar to the example of clinical trials for medicines, this would require far more involvement and data from device recipients. Recipient involvement and data could come pre- or post-marketing – or both.Footnote 172 Involvement pre-marketing seems both desirable and possible:
“The manufacturers’ argument that [randomised controlled trials] are often infeasible and do not represent the gold standard for [medical device] research is clearly refuted. As high-quality evidence is increasingly common for pre-market studies, it is obviously worthwhile to secure these standards through the [Medical Devices Regulation] in Europe and similar regulations in other countries”.Footnote 173
One proposed model for long-term implantable devices, such as those discussed in this article, involves providing limited access to them through temporary licences that restrict use to within clinical evaluations with long follow-up at a minimum of five years.Footnote 174 Wider access could be provided once safety, performance and efficacy have been adequately demonstrated. In addition, wider public access to medical device patient registries, including the EU’s Eudamed database, could be provided so as to ensure transparency, open up public discourse around safety, and tackle epistemic injustice.Footnote 175 As for clinical trials, epistemic injustice as a risk could supplement the changes already introduced to ensure future safety, and develop discussion on other changes that could better safeguard trial participants.
Finally, ways of gathering and sharing health data are moving beyond the model of care.data to others. A recent proposal for an ethics framework for big data in health and research aims to ensure governance structures reflect social values for data gathering and use. Harm minimisation is one of the substantive values to be realised through the outcome of a decision on data governance. Harm minimisation “involves reducing the possibility of real or perceived harms (physical, economic, psychological, emotional, or reputational) to persons”.Footnote 176 Reflexivity is a key procedural value to guide the decision-making process, and “refers to the process of reflecting on and responding to the limitations and uncertainties embedded in knowledge, information, evidence, and data”.Footnote 177 By referencing these values, epistemic injustice as a risk could gain further leverage in health data governance. This could ensure the process of decision-making and the outcome take into account stakeholder knowledge of harms. Overall, epistemic injustice provides the means to develop a knowledge base for regulation that more fully meets expectations of safety.
V. Conclusion
In this article, I described how failures are constructed and become known and recognised through processes that determine whether harm has thwarted the expectations of safety built into technological framings of regulation. Laurie is one of the few legal scholars to illuminate the implications of not viewing health research regulation as a process that constructs objects and transforms its participants. As Laurie explains:
“if we fail to see involvement in health research as an essentially transformative experience, then we blind ourselves to many of the human dimensions of health research. More worryingly, we run the risk of overlooking deeper explanations about why some projects fail and why the entire enterprise continues to operate sub-optimally”.Footnote 178
By looking at the process of framing and the organisation of knowledge, I revealed how epistemic injustice may be part of the deeper explanation for failure and sub-optimal health research practices. Carroll and co-authors describe how: “[f]ailure is a moment of breakage between the reality of the present and the anticipated future” – it “exists as a rich space for the growth and development of new social relations”.Footnote 179 Presenting epistemic injustice as an institutional risk to standing and reputation reconfigures power relations, and creates possibilities for including stakeholder knowledges and expertise relating to harm in the knowledge base for regulation.Footnote 180
I argued that the institutional risk provides a key means for stakeholders to prompt a regulatory response. This response is an attempt to relegitimate the regime, including by reiterating the imaginary of safe technoscience and innovation. In this way, stakeholders can help to widen the knowledge base for regulation, and through it responsibilities and accountabilities. This promises to allow regulation to better address past failures, anticipate and prevent future failures, and maintain and even enhance legitimation.
Why, then, has more not been done to ensure epistemic integration as a way to enhance regulatory capacities to anticipate and prevent failure? Epistemic integration would involve bringing stakeholders within regulatory processes via their knowledges of harm. As such, epistemic integration would seem to disrupt and undermine the dominant position of those deemed expert within extant processes. Taking epistemic integration seriously re-problematises knowledge: what knowledges from across society are required by regulation in order to ensure innovation that is safe and legitimate? The integration of (dis)embodied and experiential knowledges within regulation might threaten the epistemic underpinnings of our current regulatory regimes, and the direction of technoscience.
More deeply, epistemic integration would challenge modernist values on the import of empirically-derived knowledge and the efficacy of society’s technological “fixes” in addressing its problems. Indeed, the limits of their capacity to deal with risk and uncertainty would become clearer to society at large. For this reason, at least, the privileging of credentialised knowledges within regulation may amount to “strategic ignorance”.Footnote 181 However, scientific-technical knowledge and expertise would still be necessary in order to discipline “lay” knowledges and ensure their integration within the epistemic foundations of regulation. To resist epistemic integration is, therefore, essentially to bolster extant power relations. As the analysis in this article suggests, these relations are actually antithetical to addressing harm and failure, and ensuring the success of health research regulation.