
Regulating the Tyrell Corporation: the Emergence of Novel Beings

Published online by Cambridge University Press:  10 June 2021

David R Lawrence*
Affiliation:
Centre for Biomedicine, Self and Society, Usher Institute, University of Edinburgh, Edinburgh EH8 9LN, Scotland
Sarah Morley
Affiliation:
Newcastle Law School, Newcastle University, Newcastle Upon Tyne NE1 7RU, United Kingdom
*Corresponding author. Email: d.lawrence@ed.ac.uk

Abstract

Emerging biotechnologies and advances in computer science promise the arrival of novel beings possessed of some degree of moral status, even potentially sentient or sapient life. Such a manifestation will constitute an epochal change, may threaten Homo sapiens’ status as the only being generally considered worthy of personhood and its contingent protections, and will be the root of any number of social and legal issues. The law as it stands is not likely to be capable of managing or adapting to this challenge. This paper highlights the likely societal ramifications of novel beings and the gaps in the legislation that is likely to be relied upon in response. In so doing, the authors make a case for the development of new regulatory structures to manage the moral issues surrounding this new technological upheaval.

Type: Articles
Copyright: © The Author(s), 2021. Published by Cambridge University Press

1. Introduction

The world stands on the cusp of a new age, heralded by the biotechnological revolution of recent decades. The law as it stands is woefully insufficient to regulate these emerging advances, and in particular those that will stem from the creation of new intelligences.

Gene science, advanced pharmaceuticals, neurotechnologies, robotics and cybernetics, the internet, breakthroughs in artificial intelligence, and yet more technologies usually associated with science fiction have in recent years risen to the forefront of science. Not the least of these developments is the potential for the emergence of new types of conscious, intelligent being. Closer to home, perhaps, is the profusion of ‘expert systems’: algorithms and simple artificial intelligences (AI) woven through our everyday lives, from smart assistants, to the financial markets, to social media. All these technologies, and more, are collectively and individually poised to present great and fundamental challenges for society and for the law. We have already experienced the disruptive potential of expert systems in politics,Footnote 1 policing,Footnote 2 and economics.Footnote 3 The issues presented by technologies able to think for themselves could be orders of magnitude greater. Further, these thinking creations may warrant their own protections and freedoms, perhaps to a degree equal to that enjoyed by humans. As the stewards of scientific progress, we are obliged to protect all parties: both existing persons and the beings we may create through AI and bio-research.

It is likely that the technologies in question will be the product of public companies, and in particular of multinational corporations, which operate beyond the bounds set by domestic research ethics and regulation. The main source of regulation for these bodies is company law, which is not equipped to manage the greater weight of moral responsibility that these technologies will impose on their producers.

Other existing policy and regulation is also ill-prepared for this brave new world of novel intelligent beings. Newly proposed regulation is generally piecemeal and problem-specific, with recently proposed documents addressing only existing technologies such as self-driving cars and, latterly, facial recognition software. What the substance of this regulation will be, how it will function, and the precise form it will take is beyond the scope of the present work,Footnote 4 but in order to begin to find answers to these questions we must first identify the problems and gaps which already exist.

Consider the Tyrell Corporation, from the film Blade Runner,Footnote 5 as a scenario we may wish to avoid. It is essentially a law unto itself, able to create thinking, feeling products without any apparent oversight. In the fictional 2019 Los Angeles, the replicants are hated and feared, and are hunted (or ‘retired’) by specialised police squads. Clearly these ‘products’ are judged as societally undesirable and unsound. However, the corporation continues to design and produce its synthetic lifeforms, and suffers no backlash—instead growing and profiting from the production of what amounts to slave labour. Its head, Eldon Tyrell, does not feel any personal responsibility for his decisions—even if his products, embodied in the film by the late, lamented Rutger Hauer’s Roy Batty, hold him to be the source of their suffering. We might assume the continued existence of such institutions as human rights law in this bleak future, but clearly the Tyrell Corporation’s outputs are not subject to it.

This is of course fiction, and an extreme fiction at that, but it serves to illustrate that companies developing this technology must be regulated and held accountable for their products. These technologies are morally significant, much like work on HIV medicines, assisted reproductive techniques, and genome editing. Companies should therefore be required not only to be transparent to the public (and their investors), but also to be responsible for the emerging technology they create. This is not an alien concept; company and securities law, corporate governance structures, and corporate social responsibility doctrines have all expanded considerably (within both formal and self-regulatory structures) to reduce companies’ negative impacts on society at large. The question, then, is how we can enforce minimum moral and ethical standards through legal instruments to ensure the responsible development and operation of this new technology. The obvious answer may be to utilise existing law, such as tort, property, and contract. As we elucidate in what follows, however, this proves to be insufficient, and it may be more appropriate to consider directly controlling companies’ behaviour. This could be accomplished by new standalone legislation and a dedicated regulator, such as the United Kingdom’s Human Fertilisation and Embryology Authority (HFEA), or it may be possible to adapt existing structures of regulation. We must actively choose to regulate. To determine how, we must first decide whether novel beings should be persons or legal beings at all, and what their creators (and we more generally) might owe to them.

2. New technologies

Technologies once deemed science fiction are now much more than just theories. Enhancement technologies are used and developed by militaries,Footnote 6 and cognitive enhancers are becoming increasingly popular and widely used in colleges and universities.Footnote 7 There are frequent stories of star athletes banned from competition for ‘doping.’Footnote 8 Technologies and pharmaceuticals which augment our capabilities are already very much extant, and in use by the general public every day. They are also entering the overt commercial market,Footnote 9 with companies such as Cyborg Nest offering their NorthsenseFootnote 10 implant for retail purchase, and implantable microchips for digital security becoming very affordable.Footnote 11 The first synthetic biological constructs are now in use in industrial applications,Footnote 12 and new means of embryo production regularly feature in the news.Footnote 13 Robotics and artificial intelligences are now commonplace; most of us walk around with a form of AI in our pocket, the ‘personal assistant’ in our smartphone. Corporations such as Google use vastly powerful neural networksFootnote 14 to parse information and perform services online; these networks carry out actions that their designers admit they do not understand and did not program them to perform.Footnote 15 We have seen Google’s Duplex voice assistant arguably pass the Turing Test in a true sense by convincing human respondents that they were interacting with another Homo sapiens (albeit that passing the test is not a morally significant achievement).Footnote 16

These types of technology promise significant effects on our ways of living, working, and interacting with others, perhaps even as significant as in the science fiction worlds to which they were once relegated. They may even bring about the first time we encounter an equal, or a better, through the development of conscious, thinking, sapient machines or organisms. Regulating these two kinds of advancement presents different challenges. The former raises questions around liability, ownership, employment, and more; the latter presents new issues with no precedent. A sapient intelligence may in effect be a novel being, a potential person, and there is good reason to think we should treat it as such,Footnote 17 as will be discussed below.

2.1 Artificial Intelligence

From stock markets,Footnote 18 to autopiloted aircraft and cars,Footnote 19 to our email,Footnote 20 all the way down to what products and videos are recommended to us online,Footnote 21 artificial intelligences surround us in the modern world, and we make use of them constantly. Despite their complexity, none of these AI can ‘think’ or feel, and all are far from anything we might describe as ‘conscious.’ It is arguable that they ought not be referred to as ‘intelligent’ at all. AI such as those we presently make such widespread use of are more properly termed ‘expert systems’Footnote 22 or ‘applied’ AI (sometimes known as ‘weak’ AIFootnote 23), based on the combination of a knowledge base and an inference engine. They might well cause great moral harms if the systems fail, but they are not themselves moral actors. Rather, expert systems (to be somewhat reductive) are programmed to recognise data input and respond in a predetermined way, even if that response is one of many options based on many factors. The expert system governing an autonomous car might detect a sudden obstacle ahead and another vehicle pulling alongside on the right, infer the risk of collision, and determine that avoiding damage or harm to its occupants requires it to swerve left. In some sense an expert system is simply reliant on the application of first-order logic,Footnote 24 a flowchart (albeit a highly detailed and complex one) of ‘if this, then that.’
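To make the flowchart character of such systems concrete, the sketch below (in Python, and purely illustrative: the rules, facts, and swerving scenario are our own invention rather than any real vehicle controller’s logic) shows how a small knowledge base of rules and a trivial inference engine combine to yield the predetermined responses just described.

# A purely illustrative sketch of an expert system: a knowledge base of
# condition -> action rules, plus a trivial inference engine that fires
# the first rule whose condition matches the observed facts.
RULES = [
    ("avoid_obstacle",
     lambda f: f["obstacle_ahead"] and not f["vehicle_left"],
     "swerve left"),
    ("emergency_brake",
     lambda f: f["obstacle_ahead"] and f["vehicle_left"],
     "brake hard"),
    ("cruise",
     lambda f: not f["obstacle_ahead"],
     "maintain course"),
]

def infer(facts):
    # Deterministic 'if this, then that': no deliberation, only rule lookup.
    for name, condition, action in RULES:
        if condition(facts):
            return name, action
    return None, "no rule fired"

# The scenario from the text: sudden obstacle ahead, a vehicle alongside on the right.
facts = {"obstacle_ahead": True, "vehicle_left": False, "vehicle_right": True}
print(infer(facts))  # ('avoid_obstacle', 'swerve left')

However many thousands of such rules a production system contains, the response is fixed in advance by its programmers; nothing in the loop above deliberates, values, or chooses.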

Contrast these systems with the type of intelligence to which we allude above: new lifeforms possessed of degrees of sentience, even sapience equivalent to our own cognitive level. The latter, also known as ‘strong’Footnote 25 AI or artificial ‘general’Footnote 26 intelligence (herein AGI), is clearly not within our present grasp. Some figures contend that AGI will never be achieved,Footnote 27 whereas others consider it almost an inevitability.Footnote 28 Whether or not we are ultimately capable of the technical wizardry required to allow a machine to reason and think, great efforts are being made towards that end.

‘Artificial brain’ projects aim to develop our understanding of what would be required for AGI, and to take steps towards realising it. Brain circuitry is mapped through in silico modelling in projects such as the famous Blue Brain, wherein 37 million synaptic connections of a rat’s sensory cortexFootnote 29 have been simulated with great success. ‘Deep learning’ neural networks such as the Google BrainFootnote 30 use vast troves of data as a knowledge base, allowing the AI to begin to parse things for itself through cross-referencing and recognition. DeepMind, a highly advanced example, taught itself unprompted to recognise human faces in motion, and to identify the same individuals in other video sources.Footnote 31 Developments such as the aforementioned Google Duplex, which builds upon DeepMind’s WavenetFootnote 32 voice generation to produce a convincingly interactive smart assistant capable of handling the vagaries of natural human speech patterns and responding in kind, are laying the groundwork for our interaction with these potential intelligences.

To be truly sapient, an AGI would require a huge range of cognitive faculties. To say nothing of the components of moral status discussed below, a novel being of computer origin would need what is known as “knowledge representation”:Footnote 33 the ability to retain, parse, and apply the extreme number of discrete facts, truths, and logical paths between them that we take for granted. It would need to be able to understand and process speech and language,Footnote 34 as well as to recognise and contextualise informationFootnote 35 and to learn from it,Footnote 36 altering its future behaviour and knowledge representation accordingly. It would need to be capable of reasoning with this information, of determining what is in its best interests and those of others, as well as possessing subjectivity, perhaps even emotion. These capacities seem far-fetched, but projects such as Cyc,Footnote 37 in which a database of ‘common knowledge’ equivalent to that of a 30-year-old human is being used to develop a practical ontology allowing independent reasoning, may bring sapient AGI much closer to reality.
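As a crude indication of what even the simplest knowledge representation involves, consider the following toy sketch (our own invention, vastly simpler than any real common-sense ontology such as Cyc): discrete facts are stored as subject-relation-object triples, and a single inference pattern, the transitivity of ‘is_a’, traces the logical paths between them.

# A toy knowledge base: facts as (subject, relation, object) triples,
# with one inference pattern (transitive 'is_a') traced between them.
# Invented for illustration; real ontologies hold millions of assertions.
FACTS = {
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("animal", "has", "interests"),
}

def is_a(x, y, facts):
    # True if x is_a y directly, or via a chain of is_a links (assumes no cycles).
    if (x, "is_a", y) in facts:
        return True
    return any(is_a(mid, y, facts)
               for (subj, rel, mid) in facts
               if subj == x and rel == "is_a")

print(is_a("dog", "animal", FACTS))  # True, inferred via 'mammal'

Scaling from three facts to the millions of interlinked assertions a human takes for granted, whilst keeping retrieval and reasoning tractable, is precisely the challenge that knowledge representation research confronts.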

2.2 Synthetic biology

Synthetic biology, or the “assembl[y of] components that are not natural (therefore synthetic) to generate chemical systems that support Darwinian evolution (therefore biological)”Footnote 38 in order to perform the “rational design of biological systems and living organisms using engineering principles,”Footnote 39 promises the creation of entirely new forms of life. Even though synthetic biology has received far less attention from the social sciences than AI, it seems more likely to lead to sentient or sapient creations, and on a shorter timescale. This type of technology is, after all, a present reality. We have already seen successes in this type of ‘playing God,’ with Craig Venter’s minimal synthetic bacterial cell JCVI-syn3.0Footnote 40 being the best-known example of a novel organism, one that does not feature in nature, being designed and built. This was the first successful attempt to design and create a new species from man-made genetic ‘instructions,’ and it signified ‘a major step toward our ability to design and build synthetic organisms from the bottom up.’Footnote 41 Much scientific and ethical debate surrounds whether syn3.0 is indeed a ‘lifeform’ and what moral status this or any similar organism developed from this research could be given. Critics, in particular, question why synthetic biology differs from other genetic engineering (such as selective breeding) and consequently why different legal strategies should be implemented.Footnote 42 There are, however, ‘certain ethical implications of synthetic biology [that] go beyond those of genetic engineering,’Footnote 43 including ‘the range and specificity of human control over the organism’s properties.’Footnote 44 There are also variants on the concept which this critical argument fails to consider: protocell synthetic biology, in particular, aims to produce living organisms from inanimate materials and, if achieved, could be understood as creating life.Footnote 45

More recently, The Human Genome Project—WriteFootnote 46 has presented a definite route towards synthetic humanity, despite the scientists in charge being careful to present their work as not targeting this possibility.Footnote 47 The project aims to synthesise an entire human genetic sequence, and to overcome the technical challenges and existing limitations in genetic technology that stand in the way of doing so. In effect, success in this project may amount to a ‘blueprint’ for the design and construction of new human-equivalent beings.

Thus, more thought must be given to whether, and under what conditions, it is acceptable to allow companies to produce these biological artificial life forms. The law is currently severely under-equipped to deal with these scenarios and the continuance of self-regulation in this instance would be, to say the least, unwise.Footnote 48

3. Social ramifications of new technology

The technologies of concern are well known, and there is a distinct body of academic thought that considers almost exclusively the possible societal implications of current versions of these technologies. As such, we are in a unique position of foresight.

AI has received increasing attention over recent years, and is posited by some as one of the greatest potential threats to humanity, if not the greatest. Academic literature provides familiar arguments for this view, which are broadly speculative and tend to focus on expert systems without self-determinism. Stuart Russell and Peter Norvig tell us that AI “may … evolve into a system with unintended behavior”Footnote 49 which could manifest in any number of ways that threaten our lives or freedoms. This may not be malicious. A common line of reasoning is that “[t]he AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else”Footnote 50; which is to say that an expert system might value the completion of its own goals over the preservation of Homo sapiens, or perhaps would be so driven to complete its task that all other matters are subsidiary.Footnote 51 Where there might be attempts to program a moral code to govern such actions and prevent harm to us in pursuit of a specified goal or purpose, critics hold that this would prove almost impossible to accomplish due to the lack of a perfect ethical theory.Footnote 52 Any directed value system that could be bestowed upon an AI would necessarily be flawed, containing internal conflicts in the face of which we might be forced to concede or capitulate to avoid a harmful situation; an expert system, applying the system rigidly, would fail to avoid this harm. There is also a common argument that conflict is inevitable and peaceful co-existence impossible,Footnote 53 the motivations and goals of an AI being necessarily incompatible with our own and thus forcing one species or the other to dominate. The problems here are foreseen and thoroughly identified, at least as far as non-sapient intelligences are concerned.

Academia is not the only place where the need for action is recognised. Beyond academic journals, there has been an exponential increase in the number of media articles and thinkpieces published over the last two to three years, with the frequency reaching at least several per day in United Kingdom media alone. Many follow the above trend, presenting AI and robotic technologies as looming threats. TitlesFootnote 54 such as ‘The Real Problem with Artificial Intelligence,’Footnote 55 ‘Why You Should Fear Artificial Intelligence,’Footnote 56Artificial Intelligence: ‘We’re like children playing with a bomb,’Footnote 57Artificial Intelligence to take over half of all jobs in next decade,’Footnote 58 and ‘Has humanity already lost control of artificial intelligence?Footnote 59 are commonplace, and range from reasonable discussion to tabloid fear mongering—much as with any controversial technology. Public figures in science and technology, those few who possess such a platform, have proffered their fears and warnings to endorse the idea of AI as threat—most notably Elon Musk, Stephen Hawking, and Bill Gates. Gates “cannot understand why some people are not concerned,”Footnote 60 whilst Hawking warned that the technologies “could spell the end of the human race”Footnote 61—an idea mirrored by Musk’s claim that AI is “[p]otentially more dangerous than nukes.”Footnote 62

We see a very similar dialogue regarding advanced biotechnologies. This is not the place to explore fully the vast range of literature expounding on the ethical and existential risks posed by synthetic biology and heritable germline editing technologies such as CRISPR-Cas9 and TALENS, but it is extensive.Footnote 63 In 2017, Jennifer Doudna—head of the lab which developed CRISPR-Cas9 gene editing—spoke out on BBC radio about the potential challenges it poses.Footnote 64 Her words are indicative of the scale of that challenge:

I felt… a responsibility to start a more open discussion about how do we as a culture, we as a species, how do we use a technology that gives us effectively the ability to control evolution?

The sheer expanse of this question is intimidating. Gene editing, artificial intelligence, synthetic biology: as discussed throughout this paper, these technologies present fundamental challenges for the structure of society and indeed for our conception of what it is to be human.

The difference between these emerging technologies, which are gaining such panicked attention, and other potential developments that could fundamentally alter or otherwise affect our society is that here we can predict the arrival of new forms of life: we can see it coming, and therefore have time to put ourselves in a position to determine where, and how far, things will go. As the producers of the underlying science, we must begin now to develop frameworks, policies, and legal provisions for the potential outcomes of these technologies. In the context of what may be conscious or morally significant technologies, we may wish to centre this development on the rights and moral status of the technological products themselves.

3.1 Personhood and Rights

If a novel being is our cognitive equal or better, it must necessarily possess the same faculties as we do, including those which grant us a certain moral status and value. If the measure of this for Homo sapiens is to have crossed the threshold for personhood (i.e., per Charles Taylor, John Harris, and others, having “a sense of self, a notion of the future and the past, [an ability to] hold values, make choices”Footnote 65 through possessing self-awareness, moral agency, and continuous narrativeFootnote 66), and our novel being matches us by also having done so, it must, perforce, qualify as a person. It is important to acknowledge that this eventuality lies at the far end of the possible spectrum of consciousness a being might possess, and that many nuances must be applied in the case of novel lifeforms cognitively equivalent to animals of various intelligences, from mice, to dogs, to apes. These nuances will be the subject of future work by the present authors. However, the idea of a sapient novel being is a powerful one, easily encapsulated in our example of the Blade Runner ‘replicants.’ It is also a worthy starting point for regulation and for considering the questions of rights that these beings might raise, as such a being would presumably be the closest to human, and therefore the closest to personhood and the rights we grant ourselves.

There are good reasons for thinking this. Any being possessing human-equivalent intelligence is by default self-aware and conscious: simple reactivity would merely be the domain of an expert system, whereas a synthetic sapient animal or an AGI worthy of the name must be able to act in a considered fashion as a moral agent. Furthermore, a being without narrative identity would be unable to act in any meaningful way, let alone consider its actions. If a digital consciousness fulfils the requirements of personhood, it surely follows that it proves itself deserving of the protections due to a sapiens person.Footnote 67 Where we consider legal protections for a group, it is because we see that group as possessing whatever level of moral value is worthy of that protection (i.e. that which we consider ourselves to possess), and personhood appears to be the qualifying requirement. The second major argument centres on animal personhood. A number of legal challenges have been brought seeking legal personhood for great apes, some of which have been successful to greater or lesser degrees.Footnote 68 There is no reason that the same consideration ought not be given to other non-Homo sapiens beings. If some animals can be judged to have attained sufficient characteristics to be persons, then it follows that new lifeforms which are demonstrably our cognitive equals would be so too. Just as whatever species gradually succeeds Homo sapiens is likely to continue to think of itself as human, or as belonging to the same group, it seems likely that any other being that emerges which is capable of this type of conscious thought would warrant being called the same.Footnote 69

Where fears are articulated about novel beings, they tend to focus on beings with abilities equal or superior to those possessed by Homo sapiens. For any of the threats they might pose, such as being motivated to eradicate us to further their own agendas, it is presupposed that they have the same sorts of capacities as we do for reason, self-awareness, agency, and identity. These traits are the same as those which qualify Homo sapiens for personhood. It seems unreasonable, then, to assume automatically that a novel being which fulfilled these criteria would be morally different in some way that matters. Possession of the same moral value does not imply that we would agree with such a being, nor that we would not come into conflict with it; but it does suggest that there are grounds for us to treat such beings well to avoid conflict, and to provide them with the same types of legal protection as we provide for ourselves.Footnote 70

Consequently, the moral status of any novel being possessed of intelligence, human-equivalent or not, must be taken into account in any legislative response. We would be guilty of a great moral failing were we to neglect to provide protection to creatures capable of suffering as we do ourselves, and moreover we would betray the jurisprudential reasoning which underpins a great deal of our own rights and freedoms.

4. Legislative Gaps

As has been intimated, the technologies and products that underpin the emergence of novel beings are already in development by companies, and in some cases are commercially available. A matter of great concern is that no existing legislation appears to have the power to regulate or control the behaviour of these companies with regard to the creation of novel beings, and we cannot necessarily rely upon the companies to provide this control themselves.

To return to our example of the fictional Tyrell Corporation, we see a situation in which sapient beings are owned and operated in a fashion akin to slave labour. They have no protections, and the mythos in which the corporation exists revolves around the culling (or ‘retiring’) of the intelligent ‘replicants’ without repercussion. The film in part focuses on the idea that the replicants are just as valuable as the humans, and yet the corporation has no requirement to protect them nor any compunction about failing to do so. This is, as mentioned, an extreme version of the issue, but an instructive one.

Our futuristic scenario is addressed nowhere in the Companies Act 2006, the UK Corporate Governance Code (2018),Footnote 71 nor in any other instrument of company law. For instance, it is unclear whether directors have a duty to ensure their company develops and operates emerging technology in a responsible and transparent manner.Footnote 72 Under s.172 Companies Act 2006, directors are required to promote the success of the company for the benefit of its members as a whole; this includes having regard to the impact of the company’s operations on the community and the environment.Footnote 73 Whether this stretches to include the responsible development of its products, or any harm caused by the development or operation of this technology, is unclear. Even if such scenarios did amount to a breach of a duty, that duty is owed directly to the company,Footnote 74 and therefore any claim would have to be enforced by the company itself or by a shareholder using a derivative claim.Footnote 75 It is well documented that such claims are difficult to bring and rarely successful.Footnote 76 It is also highly unlikely that a shareholder would bring such a claim on behalf of our novel beings. The most we can hope for from existing company law is that current regulations prohibit certain behaviour for fear of reprimand, such as civil or criminal liability. This does not, however, address the subtler moral questions raised by emerging technologies.

Perhaps, then, we can turn to instruments that specifically govern the technologies of chief interest to us in order to provide protection and accountability.

4.1 Regulation of biotechnology

There are existing legislative regimes regulating the responsible research, development, and utilisation of biomaterials and biotechnologies more generally. These include the Human Fertilisation and Embryology Act 2008 (HFE Act), the Human Tissue Act 2004 (HTA), and the Genetically Modified Organisms (Contained Use) Regulations 2014 (GMOR). In the view of these authors, the technological prospects highlighted above fall outside the remit of these instruments.

The HTA is possibly the least applicable of these regimes. Its chief concern is with the “removal, storage, and use of human organs and other tissue”Footnote 77 for research and therapeutic purposes, which may have uses in the development of human-assistive technologies such as neuroprostheses and implantable technologies.Footnote 78 However, it makes no mention of synthesised or otherwise modified tissues, whether derived from Homo sapiens material, chimeric, or entirely de novo.

The HFE Act centres on reproductive issues and the licensing of research conducted on human embryos. It does contain specific provisions in connection with genetic material not of human origin, for example permitting research on human admixed embryos,Footnote 79 whilst retaining prohibitions against the implantation of embryos containing non-Homo sapiens genetic material into a woman.Footnote 80 The Act’s protection of the concept of the ‘permitted embryo’ for implantation is its chief contribution to the regulation of genetically modified or synthetic human births; but it may soon be possible to circumvent the need for this process through advances in exogenesis and artificial wombs.Footnote 81

Genetically modified organisms, as defined by the GMOR, seem much closer to the types of technology with which we are concerned here. The Regulations state that an organism is “a biological entity capable of replication or of transferring genetic material, and includes a microorganism, but does not include a human, human embryo, or human admixed embryo”Footnote 82; and that “Genetic modification in relation to an organism means the altering of the genetic material in that organism in a way that does not occur naturally by mating or natural recombination (or both).”Footnote 83

These definitions are very broad, with the intention that they apply to anything relevant for the purposes of containment and biosecurity. Genome editing, or the design of synthetic genes and their incorporation into organisms, is by its nature modification; and so our hypothesised novel beings, be they complex sapiences or simple eukaryotic life, will likely be genetically modified in a technical sense. The Regulations, despite being regularly updated, do not presently mention modern techniques such as CRISPR, but it is possible to understand their definition of ‘modification’ as including such processes. However, it is doubtful whether products of synthetic biology necessarily fall under the auspices of the GMOR, particularly those created using plasmid transfer processes, as is presently common practice. Schedule 2, part 3, paragraph 4(a) of the GMOR states that a process consisting of

…the removal of nucleic acid sequences from a cell of an organism which may or may not be followed by reinsertion of all or part of that nucleic acid (or a synthetic equivalent), whether or not altered by enzymic or mechanical processes, into cells of the same species or into cells of phylogenetically closely related species which can exchange genetic material by homologous recombination…Footnote 84

is not subject to the regulations at all. This grey area illustrates, at the very least, the insufficiency of the existing structures to deal with advancements in biotechnology that were, if not unforeseeable, then not a present concern when those structures were written. A further major flaw of the GMOR, as it pertains to the concerns of this paper, is its limited scope. Even if synthetic organisms do fall within its remit, it provides only for containment and control measures and for principles of occupational and environmental safety.Footnote 85 This leaves much to be desired with regard to what can and cannot be developed.

These biolaw regimes are a useful starting point for looking at how to regulate the development of morally significant biotechnologies. Variously, they provide systems of licensing for practitioners and researchers, and for medical devices and drugs. However, they do not themselves go far enough in scope to be directly applicable. None of the legislation discussed makes specific provision for synthetic biology, focussing instead on ‘human’ or ‘natural’ materials. Furthermore, these regimes broadly apply to existing technologies and their products, in order to regulate their use. They do not, for the most part, regulate what may or may not be developed in the future, and in some cases they specifically preserve freedom for research purposes. This is in itself laudable; but it does prevent their being used to control technologies we may decide are undesirable.

Additionally, we cannot neglect the global context for these instruments. The UK has one of the most thorough and successful regulatory regimes for biotechnology, if not the most thorough; but clearly it does not apply in other territories. It is entirely possible, even probable, that novel being research will be undertaken in countries with less regulatory oversight,Footnote 86 and that the fruits of that work could be brought to our shores.

4.2 Regulation of Artificial Intelligence

Similarly, despite great media attention and the explosion in our day-to-day engagement with (minor) AI, there is a lack of useful, enforceable regulation. There are extant Acts which might be pointed to in the realm of digital technology and the computer sciences, but these seem to have little direct applicability to the development of AI itself.

We might consider the Computer Misuse Act 1990, which is chiefly concerned with offences related to unauthorised access to systems and data, and with any intent to commit other offences using this access or to impair the operation of computers. In effect, the Act is intended to counter hacking activities. This could perhaps be applied to charge a sapient AI or its developer with an offence for particular actions it may take, but it does not itself govern or affect the development or deployment of AI.

The Data Protection Act 1998 similarly fails to dictate the actions of companies and those involved in technological development. It addresses only the rights of data subjects and the responsibilities of controllers in the collection of information, the intention being to ensure that the data accrued is both correct and fit for purpose. It specifically does not engage with the regulation of programming, or indeed with the responsible use of data, for example the avoidance of bias in coding, as has been found in, amongst others, recidivism software used in the United States.Footnote 87 Despite their absence from the law, codes of ethics have been developed by professional bodies such as the Association for Computing MachineryFootnote 88 and the IEEE,Footnote 89 which could be construed to pertain to code developed by their members. However, these codes are entirely voluntary and unenforceable, and may not reflect differing international standards of ethics.

The House of Commons Science and Technology Committee’s Fifth Report, into Robotics and Artificial Intelligence,Footnote 90 proposed first steps toward constructing a regulatory framework for the development and use of AI, including the institution of a Commission. However, the governmental response to this report was noncommittal, and to date no direct action has been taken in support of the suggestions made. This is not to say that the UK government displays no willingness to regulate AI; for example, in 2018 the House of Lords Select Committee on Artificial IntelligenceFootnote 91 issued a thorough report and recommendations, to which the present authors contributed.Footnote 92 Whilst we do not have the scope to review them here, there has also been action to begin the process of international regulation of AI, with a clear focus on responsibility and on human rights issues such as privacy and the protection of personal data.Footnote 93, Footnote 94

Ultimately, the existing regulatory structures fundamentally fail to address morally significant technologies. Whilst some proposed instruments and reviews acknowledge the need for responsible development, they do not make significant inroads beyond immediate data protection issues.

5. Conclusion

We stand faced with a stark choice. Through various routes, be they biotechnological or via the computer sciences, we are likely to bring into being new forms of intelligent life. These creatures may be the equivalents of animals to which we today grant minimum protections, or may eventually range up to cognitive equality with Homo sapiens. The law as it exists is insufficient to deal with the issues the advent of these beings will raise both for society and for the beings themselves, particularly as regards whether we grant them legal personhood and how we control the behaviour of the companies that will profit from their existence. We cannot, in good faith, trust companies to self-regulate in so morally significant an area as the emergence of novel sapient or sentient lifeforms. Something so epochal is too big to be left to private concerns guided by profit margins; rather, it should be subject to collective morality and ordre public. We must choose to regulate, at least by instituting minimum standards. As part of this we must decide, firstly, whether such beings deserve any degree of legal status, be that personhood or other; and secondly, how we might best respect that status and the rights contingent upon it, especially in relation to their creation and development by corporations. The setting of minimum standards is a far more urgent need than one may at first think, and should be a high priority. It will require the engagement of a wide array of academic and policy-related fields; and the determination of what those standards should be, what they can be, and why will be a significant undertaking, requiring a number of years and considerable work in founding a new intersectional discipline.

Notes

1. Digital Culture, Media and Sport Committee, Disinformation and ‘fake news’ (Final Report, HC 1791, 2019); Hern A, Cambridge Analytica scandal ‘highlights need for AI regulation’ (The Guardian, 16 Apr 2018); available at: https://www.theguardian.com/technology/2018/apr/16/cambridge-analytica-scandal-highlights-need-for-ai-regulation (last accessed 26 Nov 2019).

2. Liptak A, Sent to prison by a software program’s secret algorithms (NYTimes.com, May 1, 2017); available at: https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html (last accessed 26 Nov 2019).

3. Marwala T, Hurwitz E, Artificial Intelligence and Economic Theory: Skynet in the Market (Springer 2017).

4. Embarking on this was the purpose of a Wellcome Trust funded pilot project (WT 208871/Z/17/Z) run by the present authors, which has led to a forthcoming major bid.

5. Blade Runner (Screenplay) (Hauer R, Scott R, Fancher H, Peoples D 1981).

6. Emonson DL, Vanderbeek RD, The use of amphetamines in U.S. Air Force tactical operations during Desert Shield and Storm (1995) 66 Aviation, Space, and Environmental Medicine; Shachtman N, Lockheed unleashes ‘HULC’ super-strength gear (WIRED, 27 February 2009); available at: https://www.wired.com/2009/02/lockheed-exo (last accessed 26 Nov 2019); Raytheon unveils lighter, faster, stronger second generation exoskeleton robotic suit (Raytheon, 27 Sept 2010); available at: http://multivu.prnewswire.com/mnr/raytheon/46273/ (last accessed 26 Nov 2019).

7. Pells R, More UK students turning to banned ‘brain boosting’ drug than ever before (The Independent, 6 June 2016); available at: http://www.independent.co.uk/student/student-life/noopept-study-drug-legal-high-banned-brain-boosting-students-record-numbers-a7068071.html (last accessed 27 Nov 2019).

8. Providing a list of all anti-doping rule violations (Ukad.org.uk, 2016); available at: http://ukad.org.uk/anti-doping-rule-violations/current-violations/ (last accessed 27 Nov 2019).

9. Rather than by a ‘backdoor’; for instance, we are sold various technologies to ease our lives which are by any understanding ‘human enhancements’ (see Lawrence DR, To what extent is the use of human enhancements defended in international human rights legislation? (2013) 13 Medical Law International 254), but rarely if ever are we told explicitly that this is their function.

10. The north sense: intelligently designed evolution by cyborg nest (Cyborg Nest, 2018); available at: https://cyborgnest.net/ (last accessed 27 Nov 2019).

11. BioTeq - Human Re-Engineering NFC RFID Implants. (Bioteq, 2019); available at: https://www.bioteq.co.uk (last accessed 28 Nov 2019).

12. Liu Y, Shin HD, Li J, Liu L. Toward metabolic engineering in the context of systems biology and synthetic biology: Advances and prospects. Applied Microbiology and Biotechnology 2014;99:1109.

13. For example Roberts M, Scientists build ‘synthetic embryos’ (BBC News 3 November 2019); available at: http://www.bbc.co.uk/news/health-43960363 (last accessed 27 Nov 2019); Knapton S, Artificial human life could soon be grown in lab after embryo breakthrough (The Telegraph March 2, 2017); available at: https://www.telegraph.co.uk/science/2017/03/02/artificial-human-life-could-soon-grown-lab-embryo-breakthrough/ (last accessed 27 Nov 2019).

14. Hernandez D, The man behind the google brain: andrew ng and the quest for the new AI (WIRED, 2013); available at: http://www.wired.com/2013/05/neuro-artificial-intelligence/ (last accessed 14 Sept 2017).

15. Knight W, The dark secret at the heart of AI (MIT Technology Review April 11, 2017); available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ (last accessed 27 Nov 2019). It is worth pointing out that this ‘unknown’ may on some level be deliberately overplayed as the programmers provided the AI with the tools by which it performs these actions. However, it is true that these AI are using those tools in unforeseen ways.

16. Pressman A, Deciding whether to fear or celebrate google’s mind-blowing AI demo (Fortune 10 May, 2018); available at: http://fortune.com/2018/05/10/google-duplex-ai-demo/ (last accessed 27 Nov 2019).

17. Lawrence DR. More human than human. Cambridge Quarterly of Healthcare Ethics 2017;26:476.

18. Dymova L, Sevastjanov P, Kaczmarek K. A Forex trading expert system based on a new approach to the rule-base evidential reasoning. Expert Systems with Applications 2016;51:13.

19. Zhu W, Miao J, Hu J, Qing L. Vehicle detection in driving simulation using extreme learning machine. Neurocomputing 2014;128:160–5.

20. Davis R, Silva A, Automatic and predictive management of electronic messages (2018) US Patent 9,894,026.

21. Gomez-Uribe CA, Hunt N. The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems 2015;6(4):19.

22. Cuddy C. Expert systems: The technology of knowledge management and decision making for the 21st century. Library Journal 2002;127(16):82.

23. Searle JR. Minds, brains, and programs. Behavioral and Brain Sciences 1980;3:417.

24. Forgy C. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence 1982;19:17.

25. Kurzweil R. The Singularity is Near. New York: Viking Press; 2005.

26. Newell A, Simon H, Computer science as empirical inquiry: symbols and search (1976) 19 Communications of the ACM 113.

27. Allen P, Paul Allen: the singularity isn’t near (MIT Technology Review October 12, 2011); available at: https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/ (last accessed 27 Nov 2019).

28. Kurzweil R, Kurzweil Responds: don’t underestimate the singularity (MIT Technology Review October 20, 2011); available at: https://www.technologyreview.com/s/425818/kurzweil-responds-dont-underestimate-the-singularity/ (last accessed 27 Nov 2019).

29. Markram H, et al. Reconstruction and simulation of neocortical microcircuitry. Cell 2015;163:456.

30. Op cit note 14.

31. Clark L, Google’s Artificial brain learns to find cat videos (WIRED June 26, 2012); available at: http://www.wired.com/2012/06/google-x-neural-network (last accessed 27 Nov 2019).

32. Wavenet launches in the google assistant | deepmind (DeepMind, 2018); available at: https://deepmind.com/blog/wavenet-launches-google-assistant/ (last accessed 27 Nov 2019).

33. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 2nd edn, Prentice Hall 2003, at 320–363.

34. Cambria E, White B. Jumping NLP curves: a review of natural language processing research. IEEE Computational Intelligence Magazine 2014;9:48.

35. Op cit note 33 at 537–81, 863–98.

36. Langley P. The changing science of machine learning. Machine Learning 2011;82:275.

37. “A knowledge modeling and machine reasoning environment capable of addressing the most challenging problems in industry, government, and academia.” Cycorp: making solutions better (Cycorp, 2016); available at: http://www.cyc.com/ (last accessed 27 Nov 2019); The Word: Common Sense (New Scientist April 11, 2006); available at: https://www.newscientist.com/article/mg19025471.700-the-word-common-sense/ (last accessed 27 Nov 2019). We thank John Harris for informing us of this fascinating endeavour.

38. Benner SA, Sismour AM. Synthetic biology. Nature Reviews Genetics 2005;6(7):533–43.

39. Osbourn AE, O’Maille PE, Rosser SJ, Lindsey K. Synthetic biology. New Phytologist 2012;196(3):671–7.

40. Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, et al. Design and synthesis of a minimal bacterial genome. Science 2016;351(6280):aad6253.

41. ibid.

42. See Christiansen A. Synthetic biology and the moral significance of artificial life: A reply to Douglas, Powell and Savulescu. Bioethics 2016;30(5):372–9.

43. Boldt J, Müller O. Newtons of the leaves of grass. Nature Biotechnology 2008;26(4):387–9.

44. Op cit note 42.

45. See Gómez-Tatay L, Hernández-Andreu JM, Aznar J. A personalist ontological approach to synthetic biology. Bioethics 2016;30(6):397–406.

46. Boeke JD, Church G, Hessel A, Kelley NJ, Arkin A, Cai Y, et al. The Genome Project-Write. Science 2016;353(6295):126–7.

47. Radford T, Davis N, Scientists launch proposal to create synthetic human genome (The Guardian, 2016); available at: https://www.theguardian.com/science/2016/jun/02/scientists-launch-proposal-to-create-synthetic-human-genome-dna (last accessed 27 Nov 2019).

48. See the Opinion of the European Group on Ethics in Science and New Technologies to the European Commission, Ethics of Synthetic Biology (No. 25, 2013) p 81.

49. Op cit note 33, Ch. 26.3 passim.

50. Yudkowsky E. Artificial intelligence as a positive and negative factor in global risk. In Bostrom N, Ćirković MM (eds), Global Catastrophic Risks. Oxford University Press 2011; 184.

51. Bostrom N. Ethical issues in advanced artificial intelligence. In Smit I, et al. (eds), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2. Int. Institute of Advanced Studies in Systems Research and Cybernetics 2003; 12.

52. Muehlhauser L, Helm L. ‘The singularity and machine ethics.’ In Eden A, et al. (eds), Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin: Springer 2012; 101–126.

53. As discussed at length in Lawrence DR, Palacios-González C, Harris J. ‘Artificial intelligence: the Shylock syndrome.’ Cambridge Quarterly of Healthcare Ethics 2016;25(2):250–261.

54. The authors acknowledge that these titles are somewhat cherry-picked for effect. However, the sheer ease of finding such articles is telling, even if they are interspersed with more positive portrayals; all were within one click of a simple Google News search for ‘Artificial intelligence.’

55. Thompson C, The real problem with artificial intelligence. (Business Insider Sept 10, 2015); available at: http://uk.businessinsider.com/autonomous-artificial-intelligence-is-the-real-threat-2015-9?r=US&IR=T (last accessed 27 Nov 2019).

56. Huston, D. Why you should fear artificial intelligence (Techcrunch March 22, 2016); available at: https://techcrunch.com/2016/03/22/why-you-should-fear-artificial-intelligence (last accessed 27 Nov 2019).

57. Adams, T. Nick Bostrom: We’re like small children playing with a bomb. (The Guardian June 12, 2016); available at: https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine. (last accessed 27 Nov 2019).

58. Artificial intelligence to take over half of all jobs in next decade (RT International April 28, 2017); available at: https://www.rt.com/business/386452-ae-replace-half-jobs-technologist/. (last accessed 27 Nov 2019).

59. Best, S. Has Humanity already lost control of artificial intelligence? (Mail Online April 11, 2017); available at: http://www.dailymail.co.uk/sciencetech/article-4401836/Has-humanity-lost-control-artificial-intelligence.html. (last accessed 27 Nov 2019).

60. Rawlinson K. Microsoft’s Bill Gates insists AI is a threat. (BBC News January 29, 2015); available at: http://www.bbc.co.uk/news/31047780 (last accessed 27 Nov 2019).

61. Cellan-Jones R. Stephen Hawking warns artificial intelligence could end mankind. (BBC News December 2, 2014); available at: http://www.bbc.co.uk/news/technology-30290540 (last accessed 27 Nov 2019).

62. Rodgers P. Elon Musk warns of terminator tech. (Forbes August 5, 2014); available at: http://www.forbes.com/sites/paulrodgers/2014/08/05/elon-musk-warns-ais-could-exterminate-humanity/ (last accessed 27 Nov 2019).

63. For expansion see, among others, Harris J and Lawrence DR, New technologies, old attitudes, and legislative rigidity. In Brownsword R, Scotford E, Yeung K (eds), Oxford Handbook on the Law and Regulation of Technology. Oxford University Press 2016.

64. BBC. Jennifer Doudna, The Life Scientific (BBC Radio 4, September 19, 2017) Available at: http://www.bbc.co.uk/programmes/b0952pgy#play (last accessed 27 Nov 2019).

65. Taylor C. The Concept of a Person. Philosophical Papers, Volume 1. Cambridge: Cambridge University Press; 1985, at 97.

66. ibid.

67. A number of domestic and international documents of rights provide these protections, and may well be applicable here to form the basis of any legal policymaking addressing this issue.

68. Most notable successes include Cecilia: Expte. Nro. P-72.254/15 Presentación Efectuada Por A.F.A.D.A Respecto Del Chimpancé “Cecilia”- Sujeto No Humano (2016); Expte. A2174-2015/0 Asociacion de funcionarios y abogados por los derechos de los animales y otros contra gcba sobre amparo (2016); and Sandra: Asociacion de Funcionarios y Abogados por los Derechos de Los Animales y Otros Contra GCBA Sobre Amparo EXPTE A2174-2015/0; see also: ‘Orangutan granted controlled freedom by argentine court.’ (CNN. 2016); available at: http://edition.cnn.com/2014/12/23/world/americas/feat-orangutan-rights-ruling/ (last accessed 27 Nov 2019) Notable failures include: THE NONHUMAN RIGHTS PROJECT, INC., On Behalf Of TOMMY, V PATRICK C. LAVERY. 518336, State of New York Supreme Court 2014 (http://decisions.courts.state.ny.us/ad3/Decisions/2014/518336.pdf); MATTER OF NONHUMAN RIGHTS PROJECT, INC. V. Stanley. N.Y. Slip Op 31419, State of New York Supreme Court 2015 (http://law.justia.com/cases/new-york/other-courts/2015/2015-ny-slip-op-25257.html); see also: McKinley J. ‘Judge orders stony brook university to defend its custody of 2 chimps.’ (Nytimes.com. 2015). http://www.nytimes.com/2015/04/22/nyregion/judge-orders-hearing-for-2-chimps-said-to-be-unlawfully-detained.html. (last accessed 27 Nov 2019).

69. This argument follows a case built in Lawrence DR. Amplio, Ergo Sum. Cambridge Quarterly of Healthcare Ethics 2018;27(4):686–97.

70. It also follows that both they and we would have the right to defend ourselves from the other if such a conflict did come to pass.

71. See UK corporate governance code (Financial Reporting Council, 2018); available at: https://www.frc.org.uk/directors/corporate-governance-and-stewardship/uk-corporate-governance-code (last accessed 27 Nov 2019).

72. See Companies Act 2006 s.171–177.

73. Companies Act 2006 s.172.

74. Companies Act 2006 s.170(1).

75. Companies Act 2006 s.260 (A derivative claim is a claim brought by one or more shareholders, but “on behalf of the company.” It is used to enforce breaches of directors’ duty owed to the company. If the director is found liable, the remedy against them will be awarded to the company, not to the shareholder bringing the claim).

76. Under Companies Act 2006 s.261, claimant shareholders must, immediately upon commencing the claim, apply to the court for permission to continue it; this permission is often difficult to obtain. See Reisberg A, Derivative claims under the Companies Act 2006: much ado about nothing? In Armour J, Payne J (eds), Rationality in Company Law: Essays in Honour of D D Prentice. Hart Publishing, 2008; Dignam A, Lowry J. Company Law. 9th ed, Oxford University Press 2016: 205; see also Keay AR, Loughrey JM, Something Old, Something New, Something Borrowed: An Analysis of the New Derivative Action Under the Companies Act 2006. The Law Quarterly Review 2008: 124; 469; Loughrey JM, Keay AR, An Assessment of the Present State of Statutory Derivative Proceedings. In Loughrey JM (ed.), Directors’ Duties and Shareholder Litigation in the Wake of the Financial Crisis. Edward Elgar Publishing, 2012: 187; Milman D, Shareholder Litigation in the UK: The Implications of Recent Authorities and Other Developments. Company Law Newsletter 2013; 1: 342.

77. Human Tissue Act 2004 Part 1.

78. ibid.

79. Human Fertilisation and Embryology Act 2008 4A(1)(a).

80. ibid 4A.

81. Partridge E, et al. An extra-uterine system to physiologically support the extremely premature lamb. Nature Communications 2017;8:15112.

82. Genetically Modified Organisms (Contained Use) Regulations 2014 Part 1, Para 2(1).

83. ibid.

84. ibid Schedule 2, Part 3, Para 4(a).

85. See ibid Part 3.

86. As was the case with Mitochondrial Replacement Therapy, first performed in Mexico due to a (mistaken) belief that there was a lack of regulations (Palacios-González C, Medina-Arellano M, Mitochondrial Replacement Techniques and Mexico’s Rule of Law: On the Legality of the First Maternal Spindle Transfer Case. Journal of Law and the Biosciences 2017;4:50); and with the first human germline gene editing, undertaken in China (Liang P, Xu Y, Zhang X, Ding C, Huang R, Zhang Z, et al. CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein & Cell 2015;6(5):363–72) before licences were approved for the Crick Institute in the UK (Cressey D, Abbott A and Ledford H, UK scientists apply for licence to edit genes in human embryos (Nature News, 18 September 2015); available at: www.nature.com/news/uk-scientists-apply-for-licence-to-edit-genes-in-human-embryos-1.18394 (last accessed 27 Nov 2019)).

87. State of Wisconsin v Eric Loomis, 881 N.W.2d 749 (Wis. 2016); see also Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science 2017;356(6334):183; and Israni E, Opinion: when an algorithm helps send you to prison (NYTimes.com October 26, 2017); available at: https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html (last accessed 27 Nov 2019).

88. ACM Code of ethics and professional conduct (Association for Computing Machinery, 2018); available at: https://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct (last accessed 27 Nov 2019).

89. IEEE Code of ethics (Institute of Electrical and Electronics Engineers, 2018); available at: https://www.ieee.org/about/corporate/governance/p7-8.html (last accessed 27 Nov 2019).

90. House of Commons, Robotics and artificial intelligence: government response to the committee’s fifth report of session 2016–17 (HC 896) (2017).

91. Artificial intelligence committee (UK Parliament, 2018); available at: https://www.parliament.uk/ai-committee (last accessed 27 Nov 2019).

92. House of Lords, AI in the UK: ready, willing and able? HL 2017–19 (100), paras 315, 379.

93. Forty-two countries adopt new OECD Principles on Artificial Intelligence (Oecd.org. 2019); available at: https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.html (last accessed 30 Nov 2019).

94. Focus on responsible AI: a new Council of Europe study draws attention to the responsibility challenges linked to the use of artificial intelligence (Council of Europe. 2019); available at: https://www.coe.int/en/web/freedom-expression/-/focus-on-responsible-ai-a-new-council-of-europe-study-draws-attention-to-the-responsibility-challenges-linked-to-the-use-of-artificial-intelligence (last accessed 30 Nov 2019).