1. Introduction
The world stands on the cusp of a new age, heralded by the biotechnological revolution of recent decades. The law as it stands is woefully insufficient to regulate these emerging advances, and in particular those that will stem from the creation of new intelligences.
Gene science, advanced pharmaceuticals, neurotechnologies, robotics and cybernetics, the internet, breakthroughs in artificial intelligence and yet other technologies usually associated with science fiction have in recent years risen to the forefront of science. Not the least of these developments is the potential for the emergence of new types of conscious, intelligent being. Closer to home, perhaps, is the profusion of ‘expert systems’—algorithms and simple artificial intelligences (AI) that are woven through our everyday lives, from smart assistants, to the financial markets, to social media. All these technologies, and more, are collectively and individually poised to present great and fundamental challenges for society and for the law. We have already experienced the disruptive potential of expert systems in politics,Footnote 1 policing,Footnote 2 and economics.Footnote 3 The issues presented by such technologies as may be able to think for themselves could be orders of magnitude greater. Further, these thinking creations may warrant their own protections and freedoms, perhaps to the same degree as those enjoyed by humans. As the stewards of scientific progress, we are beholden to protect all parties—both existing persons and the beings we may create through AI and bio-research.
It is likely that the technologies in question will be the product of public companies and in particular multinational corporations, which operate beyond the bounds set by domestic research ethics and regulation. The main source of regulation for these bodies derives from company law, which is ill-equipped to manage the greater weight of moral responsibility that these technologies will impose on their producers.
Other existing policy and regulation is also ill-prepared for this brave new world of novel intelligent beings. Suggested new regulation is generally piecemeal and problem-specific, with recently proposed documents addressing only existing technologies such as self-driving cars and latterly facial recognition software. What the substance of this regulation will be, how it will function and the precise form it will take is beyond the scope of the present work,Footnote 4 but in order to begin to find answers to these questions we must first identify the problems and gaps which already exist.
Consider the Tyrell Corporation, from the film Blade Runner,Footnote 5 as a scenario we may wish to avoid. It is essentially a law unto itself, able to create thinking, feeling products without any apparent oversight. In the fictional 2019 Los Angeles, the replicants are hated and feared, and are hunted (or ‘retired’) by specialised police squads. Clearly these ‘products’ are judged as societally undesirable and unsound. However, the corporation continues to design and produce its synthetic lifeforms, and suffers no backlash—instead growing and profiting from the production of what amounts to slave labour. Its head, Eldon Tyrell, does not feel any personal responsibility for his decisions—even if his products, embodied in the film by the late, lamented Rutger Hauer’s Roy Batty, hold him to be the source of their suffering. We might assume the continued existence of such institutions as human rights law in this bleak future, but clearly the Tyrell Corporation’s outputs are not subject to it.
This is of course fiction, and an extreme fiction at that, but it serves to illustrate that companies developing this technology must be regulated and held accountable for their products. These technologies are morally significant, much like work on HIV medicines, assisted reproductive techniques, and genome editing. Companies should therefore be required not only to be transparent to the public (and their investors), but also to be responsible for the emerging technology they create. This is not an alien concept; company and securities law, corporate governance structures, and corporate social responsibility doctrines have all expanded considerably (within both formal and self-regulatory structures) to reduce companies’ negative impacts on society at large. The question, then, is how we can enforce minimum moral and ethical standards through legal instruments to ensure the responsible development and operation of this new technology. The obvious answer may be to utilise existing law, such as tort, property, and contract. As we elucidate in what follows, however, this proves to be insufficient, and it may be more appropriate to consider directly controlling companies’ behaviour. This could be accomplished by new standalone legislation and a dedicated regulator, such as the United Kingdom’s Human Fertilisation and Embryology Authority (HFEA), or it may be possible to adapt existing structures of regulation. We must actively choose to regulate. To determine how, we must first decide whether novel beings should be persons or legal beings at all—and what their creators (and we more generally) might owe to them.
2. New technologies
Technologies once deemed science fiction are now much more than just theories. Enhancement technologies are used and developed by militaries,Footnote 6 and cognitive enhancers are becoming increasingly popular and widely used in colleges and universities.Footnote 7 There are frequent stories of star athletes banned from competition for ‘doping.’Footnote 8 Technologies and pharmaceuticals which augment our capabilities are already very much extant, and in use by the general public every day. They are also entering the overt commercial market,Footnote 9 with companies such as Cyborg Nest offering their NorthsenseFootnote 10 implant for retail purchase, and implantable microchips for digital security becoming very affordable.Footnote 11 The first synthetic biological constructs are now in use in industrial applications,Footnote 12 and new means of embryo production regularly feature in the news.Footnote 13 Robotics and artificial intelligences are now commonplace; most of us walk around with a form of AI in our pocket—the ‘personal assistant’ in our smartphone. Corporations such as Google use vastly powerful neural networksFootnote 14 to parse information and perform services online; these networks perform actions that their designers admit they neither understand nor programmed them to perform.Footnote 15 We have seen Google’s Duplex voice assistant arguably pass the Turing Test in a true sense by convincing human respondents that they were interacting with another Homo sapiens (though passing the test is not a morally significant achievement).Footnote 16
These types of technology promise significant effects on our way of life, of working, and of interacting with others—perhaps even as significant as in the science fiction worlds they were once relegated to. They may even bring about the first time we encounter an equal—or a better—through the development of conscious, thinking, sapient machines or organisms. Regulation and policy face different challenges on each of these fronts. Technologies of the first kind raise questions around liability, ownership, employment, and more; but sapient creations present new issues with no precedent. A sapient intelligence may in effect be a novel being, a potential person—and there is good reason to think we should treat it as such,Footnote 17 as will be discussed below.
2.1 Artificial Intelligence
From stock markets,Footnote 18 to autopiloted aircraft and cars,Footnote 19 to our email,Footnote 20 all the way down to what products and videos are recommended to us online,Footnote 21 artificial intelligences surround us in the modern world, and we make use of them constantly. Despite their complexity, none of these AIs can ‘think’ or feel, and all are far from anything we might describe as ‘conscious.’ It is arguable that they ought not be referred to as ‘intelligent’ at all. AI such as those we presently make such widespread use of are more properly termed ‘expert systems’Footnote 22 or ‘applied’ AI (sometimes known as ‘weak’ AIFootnote 23)—based on the combination of a knowledge base and an inference engine. They might well have the potential for great moral harms if the systems fail, but they are not themselves moral actors. Rather, expert systems (to be somewhat reductive) are programmed to recognise data input and respond in a predetermined way, even if that response is one of many options based on many factors. The expert system governing an autonomous car might detect a sudden obstacle ahead and another vehicle pulling alongside on the right, infer the risk of collision, and determine that avoiding damage or harm to its occupants requires it to swerve left. In some sense an expert system is simply reliant on the application of first-order logic,Footnote 24 a flowchart—albeit a highly detailed and complex one—of ‘if this, then that.’
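By way of illustration only, the following is a minimal sketch of the kind of ‘if this, then that’ structure just described, using the swerving-car scenario. The percept names and rules are our own invented examples, not drawn from any real autonomous-vehicle system.

```python
# A minimal, illustrative sketch of an expert system: a small set of
# percepts plus an 'inference engine' of fixed rules. All percept names
# and rules are hypothetical; no real vehicle software is represented.

def infer_action(percepts: dict) -> str:
    """Apply predetermined 'if this, then that' rules to sensor input."""
    if percepts.get("obstacle_ahead"):
        # Collision risk inferred from the percepts.
        if percepts.get("vehicle_on_right"):
            return "swerve_left"   # right-hand escape route blocked
        return "swerve_right"
    return "continue"              # no hazard detected

# The system 'decides', but only by traversing a fixed decision tree:
print(infer_action({"obstacle_ahead": True, "vehicle_on_right": True}))
# -> swerve_left
```

However detailed such a rule base becomes, the system never steps outside it; this is the sense in which it remains an applied, rather than a general, intelligence.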
Contrast these systems with the type of intelligence to which we allude above—new lifeforms possessed of degrees of sentience, even sapience equivalent to our own cognitive level. This type of intelligence, also known as ‘strong’Footnote 25 AI or artificial ‘general’Footnote 26 intelligence (herein AGI), is clearly not within our present grasp. Some figures contend that AGI will never be achieved,Footnote 27 whereas others consider it almost an inevitability.Footnote 28 Whether or not we are ultimately capable of the technical wizardry required to allow a machine to reason and think, great efforts are certainly being made towards that end.
‘Artificial brain’ projects aim to develop our understanding of what would be required for AGI, and to make steps towards realising it. Brain circuitry is mapped through in silico modelling in projects such as the famous Blue Brain, wherein 37 million of the synaptic connections of a rat’s sensory cortexFootnote 29 have been simulated with great success. ‘Deep learning’ neural networks such as the Google BrainFootnote 30 use vast troves of data as a knowledge base, with which the AI can begin to parse things for itself through cross-referencing and recognition. Deepmind, a highly advanced example, taught itself unprompted to recognise human faces in motion, and to identify the same individuals in other video sources.Footnote 31 Developments such as the aforementioned Google Duplex, which builds upon Deepmind’s WavenetFootnote 32 voice generation to produce a convincingly interactive smart assistant capable of handling the vagaries of natural human speech patterns and responding in kind, are laying the groundwork for our interaction with these potential intelligences.
To be truly sapient an AGI would require a huge range of cognitive faculties. To say nothing of the components of moral status discussed below, a novel being of computer origin would need what is known as “knowledge representation”,Footnote 33 the ability to retain, parse, and apply the vast number of discrete facts, truths, and logical paths between them that we take for granted. It would need to be able to understand and process speech and language,Footnote 34 as well as to recognise and contextualise informationFootnote 35—and to learn from it,Footnote 36 altering its future behaviour and knowledge representation accordingly. It would need to be capable of reasoning with this information, of determining what is in its best interests and those of others—as well as possessing subjectivity, perhaps even emotion. These capacities seem far-fetched, but further projects such as Cyc,Footnote 37 in which a database of ‘common knowledge’ equivalent to that of a 30-year-old human is being used to develop a practical ontology allowing independent reasoning, may render sapient AGI much closer to reality.
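To make the notion of ‘knowledge representation’ slightly more concrete, the following toy sketch (in the same illustrative spirit as the one above) stores discrete facts and follows the logical paths between them. The facts and the single inference procedure are invented examples; systems such as Cyc encode many millions of such assertions with far richer semantics.

```python
# A toy illustration of knowledge representation: discrete facts plus a
# procedure for following logical paths between them. The facts below
# are invented examples; real ontologies such as Cyc are vastly larger.

facts = {
    ("canary", "bird"),
    ("bird", "animal"),
    ("animal", "living_thing"),
}

def is_a(kind: str, category: str) -> bool:
    """Chain 'is-a' facts transitively: one simple kind of inference."""
    if (kind, category) in facts:
        return True
    return any(is_a(parent, category)
               for (child, parent) in facts if child == kind)

print(is_a("canary", "living_thing"))  # True, via bird -> animal
```

The gulf between such a lookup and the contextual, self-revising understanding described above is precisely the gap between an expert system and an AGI.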
2.2 Synthetic biology
Synthetic biology, or the “assembl[y of] components that are not natural (therefore synthetic) to generate chemical systems that support Darwinian evolution (therefore biological)”Footnote 38 in order to perform the “rational design of biological systems and living organisms using engineering principles,”Footnote 39 promises the creation of entirely new forms of life. Even though synthetic biology has received far less attention from the social sciences than AI, it seems more likely to lead to sentient or sapient creations, and on a shorter timescale. This type of technology is, after all, a present reality. We have already seen successes in this type of ‘playing God,’ with Craig Venter’s minimal synthetic bacterial cell JCVI-syn3.0Footnote 40 being the best-known example of a novel organism, not found in nature, being designed and built. This was the first successful attempt to design and create a new species from man-made genetic ‘instructions.’ It signified ‘a major step toward our ability to design and build synthetic organisms from the bottom up.’Footnote 41 Much scientific and ethical debate surrounds whether syn3.0 is indeed a ‘lifeform’ and what moral status this or any similar organism developed from this research could be given. Critics, in particular, question why synthetic biology differs from other genetic engineering (such as selective breeding) and consequently why different legal strategies should be implemented.Footnote 42 There are however ‘certain ethical implications of synthetic biology [that] go beyond those of genetic engineering,’Footnote 43 including ‘the range and specificity of human control over the organism’s properties.’Footnote 44 There are also variants on the concept which this critical argument fails to consider: protocell synthetic biology, in particular, aims to produce living organisms from inanimate materials and, if achieved, could be understood as creating life.Footnote 45
More recently, The Human Genome Project—WriteFootnote 46 has presented a definite route towards synthetic humanity, despite the scientists in charge of the project being careful to present their work as not targeting this possibility.Footnote 47 The project aims to synthesise an entire human genetic sequence, and to overcome the technical challenges and existing limitations in genetic technology that stand in the way of doing so. In effect, success in this project may amount to a ‘blueprint’ for the design and construction of new human-equivalent beings.
Thus, more thought must be given to whether, and under what conditions, it is acceptable to allow companies to produce these biological artificial life forms. The law is currently severely under-equipped to deal with these scenarios and the continuance of self-regulation in this instance would be, to say the least, unwise.Footnote 48
3. Social ramifications of new technology
The technologies of concern are well known, and there is a distinct body of academic thought that considers almost exclusively the possible societal implications of current versions of these technologies. As such, we are in a unique position of foresight.
AI has received increasing attention over recent years, and is posited by some as one of the greatest potential threats to humanity, if not the greatest. Academic literature provides familiar arguments for this view, which are broadly speculative and tend to focus on expert systems without self-determinism. Stuart Russell and Peter Norvig tell us that AI “may … evolve into a system with unintended behavior”Footnote 49 which could manifest in any number of ways that threaten our lives or freedoms. This may not be malicious. A common line of reasoning is that “[t]he AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else”Footnote 50—which is to say that an expert system might value the completion of its own goals over the preservation of Homo sapiens, or perhaps would be so driven to complete its task that all other matters become subsidiary.Footnote 51 While there might be attempts to program a moral code to govern such actions and prevent harm to us in pursuit of a specified goal or purpose, critics hold that this would prove almost impossible to accomplish owing to the lack of a perfect ethical theory.Footnote 52 Any directed value system that could be bestowed upon an AI would necessarily be flawed, containing internal conflicts in the face of which we might be forced to concede or capitulate to avoid a harmful outcome. An expert system, in applying the system rigidly, would fail to avoid this harm. There is also a commonly made argument that conflict is inevitable, that peaceful co-existence is impossible,Footnote 53 and that, the motivations and goals of an AI being necessarily incompatible with our own, one species or the other will be forced to dominate. The problems here are foreseen and thoroughly identified, at least as far as non-sapient intelligences are concerned.
Academia is not the only place where the need for action is recognised. Beyond academic journals, there has been an exponential increase in the number of media articles and thinkpieces published over the last two to three years, with the frequency reaching at least several per day in United Kingdom media alone. Many follow the above trend, presenting AI and robotic technologies as looming threats. TitlesFootnote 54 such as ‘The Real Problem with Artificial Intelligence,’Footnote 55 ‘Why You Should Fear Artificial Intelligence,’Footnote 56 ‘Artificial Intelligence: “We’re like children playing with a bomb”,’Footnote 57 ‘Artificial Intelligence to take over half of all jobs in next decade,’Footnote 58 and ‘Has humanity already lost control of artificial intelligence?’Footnote 59 are commonplace, and range from reasonable discussion to tabloid fear mongering—much as with any controversial technology. Public figures in science and technology, those few who possess such a platform, have proffered their fears and warnings to endorse the idea of AI as threat—most notably Elon Musk, Stephen Hawking, and Bill Gates. Gates “cannot understand why some people are not concerned,”Footnote 60 whilst Hawking warned that the technologies “could spell the end of the human race”Footnote 61—an idea mirrored by Musk’s claim that AI is “[p]otentially more dangerous than nukes.”Footnote 62
We see a very similar dialogue regarding advanced biotechnologies. This is not the place to explore fully the vast range of literature expounding on the ethical and existential risks posed by synthetic biology and heritable germline editing technologies such as CRISPR-Cas9 and TALENS, but it is extensive.Footnote 63 In 2017, Jennifer Doudna—head of the lab which developed CRISPR-Cas9 gene editing—spoke out on BBC radio about the potential challenges it poses.Footnote 64 Her words are indicative of the scale of that challenge:
I felt… a responsibility to start a more open discussion about how do we as a culture, we as a species, how do we use a technology that gives us effectively the ability to control evolution?
The sheer expanse of this question is intimidating. Gene editing, artificial intelligences, synthetic biology: as discussed throughout this paper, these technologies present fundamental challenges for the structure of society and indeed for our conception of what it is to be human.
The difference between these emerging technologies, which are attracting such panicked attention, and other potential developments that could fundamentally alter or otherwise affect our society is that we can see this one coming: we can predict that we are likely to see new forms of life, and therefore have time to put ourselves in a position to determine where and how far things will go. As the producers of the underlying science, it is vital that we begin now to develop frameworks, policies, and legal provisions for the potential outcomes of these technologies. In the context of what may be conscious or morally significant technologies, we may wish to centre this development on the rights and moral status of the technological products themselves.
3.1 Personhood and Rights
If a novel being is our cognitive equal or better, it must necessarily possess the same faculties as we do, including those which grant us a certain moral status and value. If the measure of this for Homo sapiens is to have crossed the threshold for personhood (i.e. per Charles Taylor, John Harris, and others: having “a sense of self, a notion of the future and the past, [an ability to] hold values, make choices”Footnote 65 through possessing self-awareness, moral agency, and continuous narrativeFootnote 66), and our novel being matches us by also having done so, then it must, perforce, qualify as a person. It is important to acknowledge that this eventuality is at the far end of the possible spectrum of consciousness that a being might possess, and that there are many nuances that must be applied in the case of novel lifeforms cognitively equivalent to animals of various intelligences—from mice, to dogs, to apes. These nuances will be the subject of future work by the present authors. However, the idea of a sapient novel being is a powerful one, easily encapsulated in our example of the Blade Runner ‘replicants.’ It is also a worthy starting point for regulation and for considering the questions of rights that these beings might raise, as such a being would presumably be the closest to human—and therefore the closest to personhood and the rights we grant ourselves.
There are good reasons for thinking this. Any being possessing human-equivalent intelligence is by default self-aware and conscious: simple reactivity would merely be the domain of an expert system, whereas a synthetic sapient animal or an AGI worthy of the name must be able to act in a considered fashion as a moral agent. Furthermore, a being without narrative identity would be unable to act in any meaningful way, let alone consider its actions. If a digital consciousness fulfils the requirements of personhood, it surely follows that it proves itself deserving of the protections due to a sapiens person.Footnote 67 Where we consider legal protections for a group it is because we see that group as possessing whatever level of moral value is worthy of that protection (i.e. that we consider ourselves to possess), and personhood appears to be the qualifying requirement. The second major argument centres on animal personhood. A number of legal challenges have been brought seeking legal personhood for great apes, some of which have been successful to greater or lesser degrees.Footnote 68 There is no reason that the same consideration ought not be given to other non-Homo sapiens beings. If some animals can be judged to have attained sufficient characteristics to be persons, then it follows that new lifeforms which are demonstrably our cognitive equals would be so too. Just as whatever species gradually succeeds Homo sapiens is likely to continue to think of itself as human, or as belonging to the same group, it seems likely that any other being that emerges which is capable of this type of conscious thought would warrant being called the same.Footnote 69
Where fears are articulated about novel beings, they tend to focus on beings with abilities equal or superior to those possessed by Homo sapiens. For any of the threats they might pose, such as being motivated to eradicate us to further their own agendas, it is presupposed that they have the same sorts of capacities as we do for reason, self-awareness, agency, and identity. These traits are the same as those which qualify Homo sapiens for personhood. It seems unreasonable, then, to assume automatically that a novel being which fulfilled these criteria would be morally different in some way that matters. Possession of the same moral value does not imply that we would agree with such a being, nor that we would not come into conflict with it; though it does suggest that there are grounds for us to treat such beings well to avoid such conflict, and to provide them with the same types of legal protection as we do for ourselves.Footnote 70
Consequently, the moral status of any novel being possessed of intelligence—human-equivalent or not—must be taken into account in any legislative process. We would be guilty of a great moral failing were we to neglect to provide protection to creatures capable of suffering as we do ourselves, and moreover we would betray the jurisprudential reasoning which underpins a great deal of our own rights and freedoms.
4. Legislative Gaps
As has been intimated, technologies and products that are the underpinnings of the emergence of novel beings are already in development by companies, and in some cases are commercially available. A matter of great concern is that no existing legislation appears to have the power to regulate or control the behaviour of these companies with regard to their actions around the creation of novel beings, and we cannot necessarily rely upon the companies to provide this control themselves.
To return to our example of the fictional Tyrell Corporation, we see a situation in which sapient beings are owned and operated in a fashion akin to slave labour. They have no protections, and the mythos in which the corporation exists revolves around the culling—or retiring—of the intelligent ‘replicants’ without repercussion. The film in part focuses on the idea that the replicants are just as valuable as the humans, and yet the corporation is under no obligation to protect them and has no compunction about failing to do so. This is, as mentioned, an extreme version of the issue, but an instructive one.
For our futuristic scenario, nothing relevant is contained in the Companies Act 2006, the UK Corporate Governance Code (2018),Footnote 71 or any other instrument of company law. For instance, it is unclear whether directors have a duty to ensure their company develops and operates emerging technology in a responsible and transparent manner.Footnote 72 Under s.172 Companies Act 2006, directors are required to promote the success of the company for the benefit of its members as a whole; this includes having regard to the impact of the company’s operations on the community and the environment.Footnote 73 Whether this stretches to include the responsible development of its products, or any harm caused by the development or operation of this technology, is unclear. Even if such scenarios did amount to a breach of a duty, that duty is owed directly to the company;Footnote 74 any claim would therefore have to be enforced by the company itself or by a shareholder using a derivative claim.Footnote 75 It is well documented that these claims are difficult to bring and are rarely successful.Footnote 76 It is also highly unlikely that a shareholder would bring such a claim on behalf of our novel beings. The most we can hope for from existing company law is that current rules deter certain behaviour through the threat of reprimands such as civil or criminal liability. This, however, does not address the subtler moral questions raised by emerging technologies.
Perhaps, then, we can turn to instruments that specifically govern the technologies of chief interest to us in order to provide protection and accountability.
4.1 Regulation of biotechnology
There are existing legislative regimes regulating the responsible research, development and utilisation of biomaterials and biotechnologies more generally. These include the Human Fertilisation and Embryology Act 2008 (HFEA), the Human Tissue Act 2004 (HTA), and the Genetically Modified Organisms (Contained Use) Regulations 2014 (GMOR). In the view of these authors, the technological prospects highlighted above fall outside the remit of these instruments.
The HTA is possibly the least applicable of these regimes. Its chief concern is with the “removal, storage, and use of human organs and other tissue”Footnote 77 for research and therapeutic purposes, which may have uses in the development of human-assistive technologies such as neuroprostheses and implantable technologies.Footnote 78 However, it makes no mention of synthesised or otherwise modified tissues, whether these are derived from Homo sapiens material, chimeric, or entirely de novo.
The major focus of the HFEA is on reproductive issues and the licensing of research conducted on human embryos. It does contain specific provisions in connection with genetic material not of human origin, for example permitting research on human admixed embryos,Footnote 79 whilst retaining prohibitions against the implantation of embryos containing non-Homo sapiens genetic material into a woman.Footnote 80 The Act’s protection of the concept of the ‘permitted embryo’ for implantation is its chief contribution to the regulation of genetically modified or synthetic human births; but it may soon be possible to circumvent the need for this process through advances in exogenesis and artificial wombs.Footnote 81
Genetically modified organisms, as defined by the GMOR, seem much closer to the types of technology with which we are concerned here. The Regulations state that an organism is “a biological entity capable of replication or of transferring genetic material, and includes a microorganism, but does not include a human, human embryo, or human admixed embryo”Footnote 82; and that “Genetic modification in relation to an organism means the altering of the genetic material in that organism in a way that does not occur naturally by mating or natural recombination (or both).”Footnote 83
These definitions are very broad, intended to apply to anything necessary for the purposes of containment and biosecurity. Genome editing, or the design of synthetic genes and their incorporation into organisms, is by its nature modification; and so our hypothesised novel beings, be they complex sapiences or simple eukaryotic life, will likely be genetically modified in a technical sense. The Regulations, despite being regularly updated, do not presently mention modern techniques such as CRISPR, but it is possible to understand their definition of ‘modification’ as including such processes. However, it is doubtful whether products of synthetic biology necessarily fall under the auspices of the GMOR, particularly those created using plasmid transfer processes, as is presently common practice. Schedule 2, part 3, paragraph 4(a) of the GMOR states that any process involving
…the removal of nucleic acid sequences from a cell of an organism which may or may not be followed by reinsertion of all or part of that nucleic acid (or a synthetic equivalent), whether or not altered by enzymic or mechanical processes, into cells of the same species or into cells of phylogenetically closely related species which can exchange genetic material by homologous recombination…Footnote 84
is not subject to the regulations at all. This grey area illustrates at the very least the insufficiency of the existing structures to deal with advancements in biotechnology that were, if not unforeseeable, then not a present concern when they were written. A further major flaw of the GMOR as it pertains to the concerns of this paper is its limited scope. Even if it is the case that synthetic organisms do fall within its remit, it only provides for containment and control measures and principles of occupational and environmental safety.Footnote 85 This leaves much to be desired with regard to what can and cannot be developed.
These biolaw regimes are a useful starting point for looking at how to regulate the development of morally significant biotechnologies. Variously, they provide systems of licensing for practitioners and researchers, and for medical devices and drugs. However, they do not go far enough in their scope to be directly applicable. None of the discussed legislation makes specific provision for synthetic biology, focussing instead on ‘human’ or ‘natural’ materials. Furthermore, they broadly apply to existing technologies and their products, in order to regulate their use. They do not, for the most part, regulate what may or may not be developed in the future, and in some cases they specifically allow freedom for research purposes. This is in itself laudable, but it does prevent their being used to control technologies we may decide are undesirable.
Additionally, we cannot neglect the global context for these instruments. The UK has one of the most thorough and successful regulatory regimes for biotechnologies, if not the most; but clearly it does not apply in other territories. It is entirely possible, even probable, that novel being research will be undertaken in other countries with less regulatory oversight,Footnote 86 and that the fruits of that work could be brought to our shores.
4.2 Regulation of Artificial Intelligence
Similarly, despite great media attention and the explosion in our engagement with (minor) AI in our day-to-day lives, there is a lack of useful, enforceable regulation. There are extant Acts which might be pointed to in the realm of digital technology and the computer sciences, but these seem to have little direct applicability to the development of AI itself.
We might consider the Computer Misuse Act 1990, which is chiefly concerned with offences related to unauthorised access to systems and data, and with any intent to commit other offences using this access or to impair the operation of computers. In effect, the Act is intended to counter hacking activities. This could perhaps be applied to charge a sapient AI or its developer with an offence for particular actions it may take, but it does not itself govern or affect the development or deployment of AI.
The Data Protection Act 1998 similarly fails to dictate the actions of companies and those involved in technological development. It targets only the rights of data subjects and the responsibilities of controllers in the collection of information, the intention being to ensure that data accrued is both correct and fit for purpose. It specifically does not engage with the regulation of programming, or indeed with the responsible use of data—for example, the avoidance of bias in coding such as has been found in, amongst others, recidivism software used in the United States.Footnote 87 Despite their absence from the law, codes of ethics have been developed by professional bodies—such as the Association for Computing MachineryFootnote 88 and the IEEEFootnote 89—which could be construed to pertain to code developed by their members. However, these codes are entirely voluntary and unenforceable, and may not reflect differing international standards of ethics.
The House of Commons Science and Technology Committee’s Fifth Report, into Robotics and Artificial Intelligence,Footnote 90 proposed some steps toward constructing a regulatory framework for the development and use of AI, including the institution of a Commission. However, the governmental response to this report was noncommittal, and to date no direct action has been taken in support of the suggestions made. This is not to say that the UK government displays no willingness to regulate AI; for example, in 2018 the House of Lords Select Committee on Artificial IntelligenceFootnote 91 issued a thorough report and recommendations, to which the present authors contributed.Footnote 92 Whilst we do not have the scope to review them here, there has also been action to begin the process of international regulation of AI, with a clear focus on responsibility and human rights issues such as privacy and protection of personal data.Footnote 93, Footnote 94
Ultimately, the existing regulatory structures fundamentally fail to address morally significant technologies. Whilst some proposed instruments and reviews acknowledge the need for responsible development, they do not make significant inroads beyond immediate data protection issues.
5. Conclusion
We stand faced with a stark choice. Through various routes, be they biotechnological or via the computer sciences, we are likely to bring into being new forms of intelligent life. These creatures may be the equivalents of animals to which we today grant minimum protections, or may eventually range up to cognitive equality with Homo sapiens. The law as it exists is insufficient to deal with the issues the advent of these beings will raise, both for society and for the beings themselves, particularly as regards whether we grant them legal personhood and how we control the behaviour of the companies that will profit from their existence. We cannot, in good faith, trust companies to self-regulate in so morally significant an area as the emergence of novel sapient or sentient lifeforms. Something so epochal is too big to be left to private concerns guided by profit margins; rather, it should be subject to collective morality and ordre public. We must choose to regulate, at least through the institution of minimum standards. As part of this we must decide, firstly, whether such beings deserve any degree of legal status, be that personhood or other; and, secondly, how we might best respect that status and the rights contingent upon it, especially in relation to their creation and development by corporations. The setting of minimum standards is a far more urgent need than one may at first think, and should be a high priority. It will require the engagement of a wide array of academic and policy-related fields; and the determination of what those standards should be, what they can be, and why, will be a significant undertaking, requiring a number of years and considerable work in founding a new intersectional discipline.