
Responsibility-Enhancing Assistive Technologies and People with Autism

Published online by Cambridge University Press:  07 September 2020


Abstract

This paper aims to explore the role assistive technologies (ATs) might play in helping people with autism spectrum disorder (ASD) and a concomitant responsibility deficit become more morally responsible. Toward this goal, the authors discuss the philosophical concept of responsibility, relying on Nicole Vincent’s taxonomy of responsibility concepts. They then outline the ways in which ASD complicates ascriptions of responsibility, particularly responsibility understood as a capacity. Further, they explore the ways in which ATs might improve a person’s capacity so that responsibility can be properly ascribed to them. The authors argue that although assistive technologies are likely to be able to enhance a person’s capacity in such a way that responsibility can be ascribed to them, these technologies will also have a number of additional effects on the other aspects of the concept of responsibility.

© The Author(s), 2020. Published by Cambridge University Press

Introduction

Moral responsibility is considered to be a distinguishing feature of personhood.Footnote 1 To say that someone is morally responsible is to say that praise or blame can be properly attributed to them. Of course, in specific circumstances, individuals are not considered to be morally responsible—if, for example, the person is acting under duress or has been brainwashed. Children are not thought to be able to fully consider or completely control their actions, and thus are considered to have diminished responsibility. Similarly, people with dementia or with intellectual or developmental disabilities might not be considered fully morally responsible, or might not be considered responsible at all (depending on the circumstances and the nature and extent of their impairments). Moral responsibility can thus be thought of as existing on a spectrum and admitting of degrees.

Novel technological advances suggest that it might be possible to overcome some of the specific challenges associated with autism spectrum disorder (ASD). Developments in assistive technologies (ATs), such as speech-generating devices that support communication, games that help persons with ASD pick up social cues they would otherwise struggle to identify, and tactile devices designed to help reduce stress, have made life easier for people with ASD in various domains. These technologies might also provide a means of enhancing moral responsibility.

This paper aims to explore the role ATs might play in helping people with ASD and a concomitant responsibility deficit become more morally responsible. In order to do this, in the section “Responsibility,” we briefly introduce the concept of responsibility, relying on Nicole Vincent’s taxonomy of responsibility concepts.Footnote 2, Footnote 3 We then outline the ways in which ASD complicates ascriptions of responsibility, particularly responsibility understood as a capacity. In the section “Enhancing Capacity or Moral Responsibility with Assistive Technologies,” we discuss the ways in which ATs might improve a person’s capacity so that responsibility can be properly ascribed to them. This, as we discuss in the section “Additional Responsibility Effects,” will have a number of additional effects on the other concepts of responsibility outlined by Vincent.

Although this paper focuses on the ways in which ATs will impact on the responsibility of persons with ASD, the philosophical issues addressed will likely have a more general application. Novel technologies such as ATs will be developed in order to address specific issues, such as deficits caused by ASD, but will have impacts far beyond the targeted group. Ascriptions of responsibility will become increasingly difficult in a technologically mediated world, as more and more people are guided by gadgets and devices, and ultimately the algorithms underpinning them.

Responsibility

The term responsibility has numerous (related) meanings in ordinary discourse. Nicole Vincent provides a taxonomy of responsibility, distinguishing capacity, role, outcome, liability, causal, and virtue responsibilities.Footnote 4 Capacity responsibility relates to someone’s capability to perform certain actions. Role responsibility relates to the sorts of roles people might have in society and is somewhat dependent on what capacities they have (or ought to have). Outcome responsibility relates to whether a person is responsible for something happening: a certain outcome might have resulted from actions a person undertook, making that outcome attributable to them and perhaps, though not necessarily, making them blameworthy in that regard. A driver might turn the steering wheel suddenly, causing the car to crash. They would be responsible for this outcome, but if they turned the steering wheel due to a heart attack, or to avoid a small child, they may not be blameworthy. Causal responsibility implies a causal link between the actions of the person and some outcome, but is a “thinner and less morally imbued concept than outcome responsibility.”Footnote 5 A person might be causally responsible for a theft, but if they had been forced to do it, we would be unlikely to hold them morally responsible for the outcome. Virtue responsibility refers to a person’s character resulting from their past actions demonstrating “their manifest commitment to doing what they take to be right.”Footnote 6 Vincent notes that “a fully responsible person (in the capacity sense) can be very irresponsible (in the virtue sense), and a person who is not yet fully responsible (in the capacity sense) may nevertheless be very responsible (in the virtue sense).”Footnote 7 Liability responsibility relates to legal responsibility, focusing on whether someone should be held responsible for an outcome.

Responsibility and Autism

This paper focuses primarily on capacity responsibility, as deficits in this type of responsibility are the focus of attention in relation to ASD. Capacity deficits in persons with ASD have broader implications, however: persons who are not considered to have the capacity to be responsible (because certain deficits mean they do not have the capacity to perform certain actions) will often be excluded from particular role responsibilities, a point we return to below.

The nature of ASD means that persons with ASD have certain cognitive and volitional deficits that affect responsibility. Specifically, in Vincent’s terms, the deficits affect capacity responsibility and causal responsibility, though the discussions quoted below refer simply to responsibility. We will concentrate on the reasons-responsiveness theory, one of “three broad categories” of moral responsibility theory that focus on capacity, that is, on the “features or abilities of agents” thought “to be central for ascriptions of praise and blame (or the appropriateness thereof).”Footnote 8 The other capacity-focused theories of moral responsibility are the quality-of-will theory and the mesh theory. Quality-of-will theories hold that “the matter of an agent’s responsibility for a particular action is decided by determining whether or not the action expressed an objectionable quality of will on the agent’s part”Footnote 9; mesh theories hold that “agents are responsible for acts which flow from the agent’s ‘real self’, or for actions that come from an appropriate mesh between certain key aspects of her agency.”Footnote 10 However, the dominant theory is the reasons-responsiveness theory, the most prominent version of which was advocated by Fischer and Ravizza.Footnote 11 Reasons-responsiveness theories argue that persons are morally responsible for their actions if they have the rational capacity to recognize reasons for acting and are able to guide their actions in accordance with those reasons. Agents “must not simply act on their strongest desires, but be capable of stepping back from their desires, evaluating them, and acting for good reasons. This requires responsible agents to be able to recognize and respond to reasons for action.”Footnote 12 Being reasons-responsive “involves both cognitive capacities to distinguish right from wrong and volitional capacities to conform one’s conduct to that normative knowledge.”Footnote 13

The cognitive and volitional aspects of reasons-responsiveness correspond to being receptive and being reactive: being receptive to reasons is a cognitive power, while being reactive to reasons is an executive (volitional) power. A person above a certain cognitive threshold will be receptive to reasons, that is, will recognize certain facts or conditions in the world. Being reasons-receptive means recognizing reasons in a comprehensible pattern. These reasons might include threats, other people’s emotional states, moral reasons for acting, and so on. Reasons-receptivity will take into account a person’s preferences, likes, values, and beliefs; the person will understand how their reasons fit together. Being reasons-reactive, by contrast, is an executive power: executive competency, or volitional competency, relates to people’s ability to react to reasons.Footnote 14

If a person has both the cognitive and the volitional capacities required, they will be considered normatively competent.Footnote 15 Those who lack such capacities are not considered normatively competent, and so will not be considered morally responsible for their actions. This might be due to a failure in either the cognitive or the volitional component of normative competence. In other words, people who are receptive to reasons and capable of reacting to them, that is, both cognitively and volitionally competent, are considered to be normatively competent.Footnote 16 There are a number of ways, however, in which a person might fail to be normatively competent and therefore fail to qualify as morally responsible. A person fails to be reasons-receptive if they fail to recognize reasons that exist for acting, and fails to be reasons-reactive if they fail to “make a decision which aligns with those reasons, or fails to act on that decision.”Footnote 17 So, a person might have the cognitive capacity to recognize reasons to act in a certain way, and thus be reasons-receptive, but lack the executive power to change their behavior (i.e., to respond to those reasons), and thus fail to be reasons-reactive.
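The structure of this account is a simple conjunction. As a purely illustrative formalization of our own (not drawn from the literature cited here), the following sketch renders it in code: an agent counts as normatively competent only if both the receptive (cognitive) and reactive (volitional) capacities are present, and a failure of either removes competence.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    reasons_receptive: bool  # cognitive capacity: recognizes reasons for acting
    reasons_reactive: bool   # volitional capacity: can act in line with those reasons


def normatively_competent(agent: Agent) -> bool:
    """Toy rendering of the conjunction described above."""
    return agent.reasons_receptive and agent.reasons_reactive


# An agent who recognizes reasons but cannot act on them fails the volitional
# component and so does not count as normatively competent.
print(normatively_competent(Agent(reasons_receptive=True, reasons_reactive=False)))  # False
```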

Because autism “affects the ability to understand and react to the expectations of others, and can lead to atypical reactions to what other people do,”Footnote 18 it is often a test case in questions of who can or cannot be held morally responsible. Indeed, Nathan Stout argues that “there may be individuals with ASD who are moderately reasons-responsive yet fail to be responsible for a range of actions.”Footnote 19 Autism is usually explained according to one of two models: a theory of mind model and an executive function model. The former emphasizes that people with autism have atypically low theory of mind skills, that is, they struggle to intuit the mental states of others and to understand that other persons have beliefs and expectations different from their own.Footnote 20 The latter emphasizes challenges in executive functions (EFs), the mental processes required when a person needs to concentrate and pay attention; they are required when automatic behavior or reliance on instinct would not suffice.Footnote 21 According to Kenneth A. Richman, “The core domains of EF are understood to be inhibitory control, working memory, and cognitive flexibility.”Footnote 22 Inhibitory control pertains to behavioral inhibition, selective attention, and cognitive inhibition; working memory pertains to updating memory content or storing new information for later analysis; and “cognitive flexibility involves the capacity to shift attention and to adapt to changing environments or rules.”Footnote 23 EF challenges in people with ASD can cause failures of cognitive or volitional competence, that is, a person may not be able to distinguish right from wrong or may not be able to conform their conduct to the normative knowledge they possess. In other words, due to EF challenges, people with ASD might, it is suggested, fail to be reasons-reactive (a failure of volition) or fail to be reasons-receptive (a cognitive failure), and thus fail to be normatively competent and, as such, be ineligible to be considered morally responsible.Footnote 24

In addition to the EF issues, the aforementioned theory of mind challenges might also result in a failure of reasons-receptivity, for example, not recognizing that other people might think differently or be hurt by one’s action (or inaction). In short, it is apparent that autism can reduce an agent’s reasons-responsiveness. Richman argues that people with ASD are likely to have reduced reasons-responsiveness in a number of scenarios, while acknowledging that there is a great deal of diversity among people with ASD. However, he does not conclude that this will result in a global exemption from all moral responsibility, but rather that such reductions would likely ground excuses in specific situations.Footnote 25 Stout agrees, arguing that “it is clear that individuals with ASD demonstrate regular patterns of receptivity to reasons yet are not responsible for a range of actions given that their cognitive deficits rule out their having access to a certain range of reasons.”Footnote 26 In summary, persons with ASD will not always meet the requirements of moral responsibility according to the reasons-responsiveness theory.

Enhancing Capacity or Moral Responsibility with Assistive Technologies

Let us assume that (1) the reasons-responsiveness account of moral responsibility is acceptable (at least in terms of capacity responsibility) and (2) that executive function and theory of mind challenges do mean that people with ASD have reduced reasons-responsiveness and hence reduced moral responsibility. If these assumptions hold, it follows that ATs that can improve a person’s reasons-responsiveness might be able to bring that person up to the minimum threshold required for responsibility. The idea of a threshold is that persons with certain capacity deficits are not considered morally responsible—there exists a threshold above which it is reasonable to say that the person has the capacity to be morally responsible. ATs, by addressing executive function and theory of mind challenges, could, in theory, bring a person with ASD up to that threshold level of moral responsibility.

Executive function challenges are physical in nature (i.e., they are directly connected to brain functions such as working memory, inhibitory control, and cognitive flexibility, as mentioned above). A lack of inhibitory control, working memory, or cognitive flexibility means that people with ASD are more likely to suffer from reasons-blockage (either volitional or cognitive), and these can be interconnected.Footnote 27 At times, limited cognitive flexibility might mean that people with ASD struggle to compare the consequences of potential actions—a cognitive challenge leading to a failure of volition, that is, a person might perceive a fact as a moral reason for action, and want to act, but lack the cognitive flexibility to choose an appropriate response. Assistive technologies have the potential to address these challenges by, for instance, removing reasons-blockages that hinder the cognitive aspect of reasons-responsiveness, or by providing guidance or nudges that aid the volitional aspect of reasons-responsiveness. So, for instance, a person with ASD might fail to notice the emotional state of a friend or colleague (a cognitive failure) and therefore fail to respond appropriately to them. The failure might be due to a reasons-blockage caused by their autism, and hence they would not be culpable. An AT might also be able to overcome failures of volition by providing an appropriate response for a person who lacks the cognitive flexibility to choose one.

Indeed, although this paper has so far focused on ATs being used to address executive function challenges, such technologies may also be able to help with the theory of mind challenges characteristic of people with ASD. For example, ATs using facial- or emotion-recognition software might be able to suggest to the person with ASD how a friend or acquaintance is feeling. Assistive devices that could help recognize emotional states in others would make a person with ASD reasons-receptive more often, and thus likely to be more responsible.
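To make the idea concrete, the following is a minimal, hypothetical sketch of how such an emotion-recognition AT might turn a classifier’s output into a gentle cue for the user. The stubbed classify_expression function, the emotion labels, and the confidence threshold are illustrative assumptions, not features of any existing device; a real system would run a trained facial-affect model and use carefully designed prompts.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EmotionEstimate:
    label: str
    confidence: float


def classify_expression(image_bytes: bytes) -> List[EmotionEstimate]:
    """Stub standing in for real facial- or emotion-recognition software."""
    # A deployed AT would run a trained model on the camera frame here.
    return [EmotionEstimate("sad", 0.72), EmotionEstimate("neutral", 0.20)]


def suggest_cue(estimates: List[EmotionEstimate], threshold: float = 0.6) -> Optional[str]:
    """Turn a sufficiently confident estimate into a gentle, non-directive prompt."""
    best = max(estimates, key=lambda e: e.confidence)
    if best.confidence < threshold or best.label == "neutral":
        return None  # stay silent rather than over-prompt the user
    return f"Your friend may be feeling {best.label}. You might ask how they are doing."


if __name__ == "__main__":
    frame = b"camera frame bytes"
    prompt = suggest_cue(classify_expression(frame))
    if prompt:
        print(prompt)
```

The design choice worth noting is the threshold: an AT that prompts only when it is reasonably confident is less likely to overwhelm or mislead the user.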

Projecting further into the future, certain types of brain-computer interface might be used to remove or reduce executive function challenges (this would obviously be easier if using a brain-computer interface did not require any form of surgery and were relatively innocuous). Challenges in prioritizing among relevant reasons to act, or in perceiving the options for action arising from some reason-giving fact, would constitute reasons-blockages, and ATs might be designed to help people with ASD prioritize relevant reasons. Take, for instance, lifelog technologies. These technologies are defined as a “form of pervasive computing consisting of a unified, digital record”Footnote 28 about an individual. Lifelogs consist of “multimodally captured data which are gathered, stored, and processed into meaningful and retrievable information accessible through an interface.”Footnote 29 Lifelogs might be able to suggest to the user which courses of action they would be likely to favor if they had more time to reflect on their decision, or suggest and rank the various courses of action available to them.
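As a purely illustrative sketch of the kind of prioritization just described, the short program below ranks candidate courses of action against preference weights that a lifelog system is assumed to have derived from the user’s recorded history. The feature names, weights, and candidate actions are invented for the example and do not reflect how any particular lifelog product works.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CandidateAction:
    description: str
    features: Dict[str, float]  # feature name -> how strongly the action exhibits it


def score(action: CandidateAction, preference_weights: Dict[str, float]) -> float:
    """Weight each feature of an action by the user's (assumed) logged preferences."""
    return sum(preference_weights.get(name, 0.0) * value
               for name, value in action.features.items())


def rank_actions(actions: List[CandidateAction],
                 preference_weights: Dict[str, float]) -> List[CandidateAction]:
    """Return candidate actions ordered by how well they match past preferences."""
    return sorted(actions, key=lambda a: score(a, preference_weights), reverse=True)


if __name__ == "__main__":
    # Hypothetical weights a lifelog might infer from past choices the user endorsed.
    prefs = {"keeps_promise": 0.9, "avoids_crowds": 0.6, "short_notice_change": -0.3}
    options = [
        CandidateAction("Attend the party as promised",
                        {"keeps_promise": 1.0, "avoids_crowds": -0.5}),
        CandidateAction("Stay home and send an apology",
                        {"keeps_promise": -1.0, "avoids_crowds": 1.0, "short_notice_change": 1.0}),
    ]
    for option in rank_actions(options, prefs):
        print(option.description)
```

Ranking the options, rather than dictating a single "correct" one, leaves the final decision with the user, which matters for the questions about autonomy and virtue discussed below.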

Challenges in cognitive flexibility might prevent people with ASD from fulfilling aspects of volitional competence; that is, they may have difficulty processing new information in order to make a decision after perceiving something as a moral reason for acting. Again, lifelog technologies are an example of a device that might be able to process new information quickly and present it to the person with ASD in a succinct and comprehensible manner.

Additional Responsibility Effects

As we have seen, ATs could be successful in overcoming deficits in capacity and thereby bring persons with ASD above the threshold required to be considered morally responsible. Let us now explore the effects this might have on the other aspects of responsibility as outlined by Vincent.

Role Responsibility

By enhancing a person’s capacity, ATs will make that person capable of acquiring more role responsibilities. Roles can be conventional, institutional, or social; that is, an individual has certain role responsibilities due to the job they have, due to being a parent, or due to being a friend. Role responsibility thus determines what duties a person might have. Of course, as we often have different and conflicting roles, we might have different and conflicting duties. Role responsibility can be said to track capacity responsibility; ideally, roles will only be assigned to people with the requisite capacities. We would not want to assign the role responsibility of a surgeon to someone who lacks the capacity to carry out the role. Lack of capacity means that persons with autism are not always (depending on the severity of their autism) able to carry out certain roles, and we therefore do not expect them to perform the associated duties. ATs that could enhance their capacity would mean that persons with autism could take on and carry out more roles.

Causal Responsibility

Causal responsibility is concerned with causal links. That someone is causally responsible for an event (they caused it to happen, or played a part in causing it) does not necessarily mean that they should be blamed or praised for it happening. For example, a person might have committed a crime, but committed it while deluded by a mental illness, or while brainwashed. Or a person might have caused something to happen but lacked the capacity to do otherwise, as could be the case for people with ASD. If a person with ASD using an AT causes some event to happen, we will have to include both the user and the creators of the AT in our ascription of causal responsibility. This creates a many-hands problem. Within a complex technological system designed by many people, many different actors might have had a role in determining the outcomes produced by that system, or the chain of events actuated by that system; numerous people will have been involved in designing, building, maintaining, and operating the device.Footnote 30 In the event of a bad outcome, the number of people originally involved makes it more difficult to determine who is causally responsible for that outcome. This is not to say that all those involved in causing an event are blameworthy, simply that, with large numbers of people involved, determining precisely who, if anyone, caused what is difficult.

Outcome Responsibility

When we can rightfully attribute a state of affairs to a person as the result of something they did, we are talking about outcome responsibility. Recall that outcome responsibility depends on prior claims about a person’s role responsibility and causal responsibility: they had the cognitive and volitional capacity, caused the event to happen or not happen, and had a duty in relation to that event happening or not happening, either in terms of a social, professional, or institutional role or simply as a capable person. Again, those lacking capacity are normally excluded from having blame attributed to them. ATs are likely to complicate the attribution of outcome responsibility to their users. The many-hands problem (mentioned above), by complicating the ascription of causal responsibility, also complicates the ascription of outcome responsibility.

Even if it were possible to determine the extent to which an AT played a role in the user causing some event to occur, there arises a secondary many-hands problem. It would still be very difficult to allocate individual causal responsibility to all those involved in the design, development, building, and maintenance of the AT, as well as those involved in teaching the user how to use the device. They might all have had some part to play in causing the event to occur.

A further problem in attributing outcome responsibility arises as a result of the “responsibility gap,” which refers to dilemmas arising from the increasing use of computers in essential systems and in everyday life.Footnote 31, Footnote 32 In the many-hands problem, many people can be held responsible for an act, whereas with the responsibility gap there may be a space where no one can be said to be responsible. When a machine acts “autonomously,” it might be impossible to (rightfully) ascribe responsibility to any single person. Machines act autonomously when they are powered by artificial agents that operate without human supervision; examples range from autonomous vehicles to robots such as AIBO (Sony’s robot dog) that adapt to their owner. Because these machines act autonomously, they are less predictable; indeed, it is possible that no human will be able to predict precisely what the machine will do. We might know that a machine caused a certain outcome, but we will not know who is responsible for that outcome, as the actions the machine took were the result of autonomous activity. The responsibility gap is therefore likely to arise when people with ASD use high-tech assistive devices that make use of computerized systems capable of learning, that is, ATs that classify sensory input, assist with navigation, or are designed to adapt to the user and their changing circumstances. Given the advances in AI, and more specifically machine learning, more and more such ATs can be expected.

Virtue Responsibility

ATs will play some role in determining the virtue responsibility of users. In order to illustrate virtue responsibility, Vincent provides the example of Smith, who is “normally a dependable person who took his duties seriously and did the right thing.”Footnote 33 So, saying someone is responsible in this sense is to “say something good about their character, reputation or intentions, as exemplified by their history which testifies to their manifest commitment to doing what they take to be right.”Footnote 34

Broadly construed, then, Vincent is talking about a virtuous person. More narrowly construed, Vincent is talking about someone possessing the virtue of responsibility—someone who takes their duties seriously, is dependable, and so on; that is, a “responsible” person. Such a person may not possess other virtues; for instance, they may lack humility, empathy, or generosity. ATs, by influencing the decision-making of users, will influence virtues both in the sense of making someone more responsible and, in all likelihood, in the sense of making them more generous, empathetic, and so on.

ATs that help users to interact with other people, to read and respond to emotional cues, and to make decisions on how to respond to the actions of others will influence users’ behavior. Although some devices might only enhance capacities, many, in doing so, will also offer guidance for users on doing what is appropriate in a given situation. This places the onus on developers to determine what responses to people (and their actions and emotions) are appropriate. Such ATs can be considered to be offering guidance on how the user should fulfil their social role. ATs might also guide people in relation to truth-telling or lying. This sort of guidance will play a role in determining how the user is viewed by other people, that is, in determining whether they are considered to be good, responsible, or trustworthy, in short, how virtuous they are.

Such technologies might mean that users will be able to behave more responsibly (in the virtue sense) and/or more virtuously in general, that is, be more empathetic, generous, or brave. One practical consequence of this is that people who, due to certain cognitive or volitional deficits, were not held to the same ethical standards as others now will be. However, the use of ATs is likely to raise questions about the authenticity of such (virtuous) behavior. The disability scholar Leslie Francis argues that ATs should not prevent us from considering the actions of a person with intellectual disabilities to be their own autonomous actions.Footnote 35 Francis has discussed the role of devices for people with physical disabilities, noting that we do not question whether people with glasses are actually seeing, or whether people with prosthetic limbs are walking; with cognitive aids, such as reminder notes in a calendar, no one questions whether the person has remembered the event.Footnote 36 She goes on to argue that, in the case of technologies that help people with intellectual disabilities, we should not question their autonomy if they are using such devices.

With responsibility-enhancing devices, Francis’s line of argument suggests that it would be a mistake to question whether the person with ASD recognizes the emotion or chooses the action they perform simply because a device has helped them to do so. A person using such a device would be considered virtue responsible. However, given the executive function challenges of persons with ASD, they may struggle to determine when a device is providing them with bad guidance. This might happen for a number of reasons—the device might be broken or damaged, it might be hacked, it might not have been well designed, or it might have autonomously behaved in a way that was neither predicted nor desired. Whereas a person with full normative competence ought to be able to resist the bad or incorrect advice of an AT, this is likely to be more difficult for persons without full normative competence, for example, persons with ASD. In these scenarios, it seems unjust to declare the user morally responsible for actions based on the device’s guidance. However, this assessment, that is, of a lack of responsibility, would then also apply when the user is behaving in a desirable way (when they are doing something worthy of praise). A further concern is that this sort of habituation amounts to indoctrination rather than moral education; the latter would aim to encourage moral reflection, as part of the general goal of developing morally mature adults. If users are incapable of that sort of reflection (or if reflective capacities are bypassed), claims that a person’s actions are virtuous are less convincing.

Liability Responsibility

These new complications will also make liability responsibility harder to determine. A person using an AT who commits a crime or is negligent in some way is not clearly liable: their use of an AT means that the device may have played an important causal role in whatever outcome is at stake. The responsibility gap and the many-hands problem further complicate the question of liability. For example, if a bad outcome occurs as a result of a user with an AT performing an action and it is determined that the AT malfunctioned, it will still be important to determine whether the malfunction is the result of flawed design by its creators, poor upkeep by service providers, or even a hack by malicious third parties. Practically speaking, legislators will have to determine how to ascribe liability. It will be essential to protect users from being held liable for actions performed under the guidance of malfunctioning or hacked ATs. Similarly, it will be essential to ensure that ATs cannot be used as a way of excusing all bad behavior, or of forcing designers, developers, and others to be accountable for every action a user takes. Ultimately, if the correct balance is not struck, the great potential of responsibility-enhancing ATs will not be realized.

Conclusion

This paper first introduced the concept of responsibility, relying on Nicole Vincent’s taxonomy of responsibility to differentiate the various ways in which we understand the concept. Following this, we focused on one particular type of responsibility—namely, capacity responsibility. We outlined discussions of the reasons-responsiveness theory of moral responsibility (a theory of capacity responsibility), detailing how autism creates cognitive or volitional deficits that mean we should not always consider persons with autism to be fully responsible. We suggested that ATs might be able to overcome those deficits, meaning that persons with ASD would then have the capacity to be morally responsible. We then explored the ways in which this technological transformation would impact on the other kinds of responsibility.

ATs have the potential to provide persons with ASD with aids to ensure that they have the capacities required to be considered morally responsible. This would facilitate their being given more role responsibilities, which would mean that they are able to participate in more social and professional fields than was possible hitherto. This makes a compelling case for the further development of ATs for persons with ASD. However, responsibility-enhancing ATs bring with them new problems relating to outcome, causal, liability, and even virtue responsibility. The benefits of increasing a person’s capacity responsibility are that new roles will be available to them. If people are capable of more responsibility, they are likely to be granted more autonomy in their day-to-day lives—they may get jobs that were previously unavailable to them and so on. In this sense the technologies are inclusive. However, the potential for chaos in relation to responsibility for outcomes and ultimately for liability needs, for practical reasons, to be addressed urgently. Meanwhile, the philosophical debate regarding the virtuous nature (or otherwise) of technologically mediated people is likely to become more prominent as ATs and other novel technologies become more influential at both a personal and political level.

Footnotes

Funding: This research was supported by funding from the charity RESPECT and the People Program (Marie Curie Actions) of the European Union’s Seventh Framework Program (FP7/2007-2013) under REA grant agreement no. PCOFUND-GA-2013-608728.

References

Notes

1. Eshleman A. Moral responsibility. In: Zalta EN, ed. The Stanford Encyclopedia of Philosophy. Stanford: Stanford University; 2016; available at https://plato.stanford.edu/archives/win2016/entries/moral-responsibility/.

2. Vincent NA. A structured taxonomy of responsibility concepts. In: Vincent NA, van de Poel I, Hoven J, Düwell M, eds. Moral Responsibility. The Netherlands: Springer; 2011; available at http://www.springerlink.com/content/m474j701vq4m1871/abstract/.

3. Vincent N. Enhancing responsibility. In: Vincent N, ed. Neuroscience and Legal Responsibility. Oxford: Oxford University Press; 2013, at 305–33.

4. See note 2, Vincent 2011, at 15–35.

5. See note 2, Vincent 2011, at 15–35.

6. See note 2, Vincent 2011, at 15–35.

7. See note 3, Vincent 2013, at 305–33.

8. Stout N. Reasons-responsiveness and moral responsibility: The case of autism. The Journal of Ethics 2016;20(4):401–18.

9. See note 8, Stout 2016, at 401–18.

10. See note 8, Stout 2016, at 401–18.

11. Fischer JM, Ravizza M. Responsibility and Control: A Theory of Moral Responsibility. Cambridge: Cambridge University Press; 1999.

12. Brink DO, Nelkin DK. Fairness and the Architecture of Responsibility [Internet]. Rochester, NY: Social Science Research Network; 2013. Report No.: ID 2313826; available at https://papers.ssrn.com/abstract=2313826.

13. See note 12, Brink, Nelkin 2013.

14. Richman KA. Autism and moral responsibility: Executive function, reasons responsiveness, and reasons blockage. Neuroethics 2017;11(1):23–33.

15. See note 12, Brink, Nelkin 2013.

16. See note 14, Richman 2017, at 23–33.

17. See note 8, Stout 2016, at 401–18.

18. See note 14, Richman 2017, at 23–33.

19. See note 8, Stout 2016, at 401–18.

20. See note 14, Richman 2017, at 23–33.

21. See note 14, Richman 2017, at 23–33.

22. See note 14, Richman 2017, at 23–33.

23. See note 14, Richman 2017, at 23–33.

24. See note 14, Richman 2017, at 23–33.

25. See note 14, Richman 2017, at 23–33.

26. See note 8, Stout 2016, at 401–18.

27. See note 14, Richman 2017, at 23–33.

28. Dodge M, Kitchin R. Outlines of a world coming into existence: Pervasive computing and the ethics of forgetting. Environment and Planning B: Planning and Design 2007;34(3):431–45.

29. Jacquemard T, Novitzky P, O’Brolcháin F, Smeaton AF, Gordijn B. Challenges and opportunities of lifelog technologies: A literature review and critical analysis. Science and Engineering Ethics 2014;20:379–409.

30. Thompson DF. The problem of many hands. In: Restoring Responsibility: Ethics in Government, Business and Healthcare. Cambridge: Cambridge University Press; 2005.

31. Matthias A. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 2004;6(3):175–83.

32. Johnson D. Technology with no human responsibility? Journal of Business Ethics 2015;127(4):707–15.

33. See note 2, Vincent 2011, at 15–35.

34. See note 2, Vincent 2011, at 15–35.

35. Francis L. Disability. In: Frey RG, Wellman CH, eds. A Companion to Applied Ethics. London, UK: Wiley-Blackwell; 2005.

36. Francis L. Understanding autonomy in light of intellectual disability. In: Brownlee K, Cureton A, eds. Disability and Disadvantage. Oxford: Oxford University Press; 2009, at 200–15.