Artificial intelligence (AI) is increasingly adopted in society, creating numerous opportunities but also posing ethical challenges. Many of these challenges are familiar, such as issues of fairness, responsibility, and privacy, but they appear in a new and demanding guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but often fail to suffice because ethical challenges are context-sensitive. Second, the chapter discusses methods to tackle these challenges. The main ethical theories (such as virtue ethics, consequentialism, and deontology) provide a starting point but often lack the detail needed for an actionable AI ethics. Instead, we argue that mid-level philosophical theories, coupled with design approaches such as "design for values" and interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated into the design of AI systems.
This chapter shows how different values, including security, privacy, and safety, have been at stake in the design of whole-body scanners at airports. Value-sensitive design (VSD) and Design for Values are discussed as two approaches to proactively identifying and including values in engineering design. When designing for values, one may run into conflicting values that cannot be accommodated at the same time. Different strategies for dealing with value conflicts are discussed, including designing out the conflict and balancing the conflicting values in a sensible and acceptable way. This chapter does not pretend to offer the holy grail of design for ethics; complex and ethically intricate situations will still emerge in any actual design process. Instead, it offers a way to become more sensitive to these conflicts when they occur in design and to be better equipped to deal with them as far as possible. The chapter further discusses responsible research and innovation as an approach to proactive thinking about technological innovation. In so doing, it extends the notion of design beyond merely technical artifacts and focuses on the process of innovation.