
A Book Review of The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Kearns and Roth; and Thoughts for Legal Information Professionals

Published online by Cambridge University Press:  01 October 2021


Abstract

This article consists of two parts. The first part is a review of the book 'The Ethical Algorithm: The Science of Socially Aware Algorithm Design', written by Professors Michael Kearns and Aaron Roth of the Computer and Information Science Department, University of Pennsylvania, and published in 2019 by Oxford University Press. The second part offers thoughts drawn from the book and how they could be applied to the work of legal information management professionals facing tasks related to ethical algorithms in artificial intelligence (AI) and robotics. The article discusses how online privacy continues to be the main concern in the AI and robot era, as well as the rising concern that AI and robots might make unfair decisions toward vulnerable populations, which could amount to discrimination. Examples of real-world problems with AI and robotics are also noted.

Type: AI, Algorithms and the Legal Information World
Copyright: © The Author(s), 2021. Published by British and Irish Association of Law Librarians

INTRODUCTION

This article continues in the same vein as a systematic review on artificial intelligence (AI) and race that we published in 2020 in Legal Information Management.Footnote 1 The findings of that review, derived from journal articles and conference proceedings, identified four relationships between AI and race: (a) AI causes unequal opportunities for people from certain racial groups, (b) AI helps to detect racial discrimination, (c) AI is applied to study the health conditions of specific racial populations, and (d) AI is used to study demographics and facial images of people from different racial backgrounds. After we finished the systematic review, we learned about the enlightening book 'The Ethical Algorithm: The Science of Socially Aware Algorithm Design', published in 2019 by Oxford University Press and written by Professors Michael Kearns and Aaron Roth of the University of Pennsylvania's Computer and Information Science Department.Footnote 2 The book gave us further answers to our questions on ethics and AI. After reading it, we saw a clear need to write this review article, adding observations from the book that could be useful for legal information professionals as they support their organizations, co-workers, or clients working on the ethics of AI and robotics. The ethical algorithm is a real concern for the high-tech industry, government and society.

The structure of this paper is as follows: a summary of the book, written by the first author, Channarong Intahchomphoo (CI), who read the book; a section of thoughts for legal information management professionals, written by CI and the co-author, Christian Tschirhart; and a conclusion, followed by references and the authors' biographies.

SUMMARY OF THE BOOK

'The Ethical Algorithm: The Science of Socially Aware Algorithm Design' is a book written by Professor Michael Kearns and Professor Aaron Roth. It is an excellent book for anyone interested in learning about social issues caused, intentionally and unintentionally, by AI and robots, particularly around privacy and fairness. The book begins with the issue of privacy in our internet-connected lifestyles. Online privacy is very difficult to protect, as we have seen through evidence released by the whistleblower Edward Snowden about US government surveillance programs run on a global scale using advanced computing technologies, giving governments the ability to collect and store online data in large volumes without asking people for their consent and without informing them about mass surveillance projects. An online privacy example mentioned in the book is how Google has been complying with US government requests to hand over the online activity data of certain persons; despite all of the privacy protection practices and values in American society, Google still hands user data to the government every year. As Kearns and Roth state on pages 47 and 48:

'…there is still much data released to national governments via the ordinary legal process. Google reports that in the one-year period starting July 2016, government authorities requested data for more than 157,000 user accounts. Google produced data in response to roughly 65 percent of these requests. A guarantee of differential privacy in the centralized model promises nothing against these kinds of threats: so long as Google or Apple holds customer data, it is subject to release through nonprivate channels.'

This raises the question of whether there is real online privacy protection on the internet, especially when people are constantly asked to provide personal information when signing up for web tools, and tech companies record and store users' system interaction data. Having a large amount of interaction data is the ideal scenario for many computer and data scientists because it helps them do their jobs better and more easily; naturally, they do not want to give up that 'clean data'. Kearns and Roth also note that linking data from different sources poses further risks, because it makes identifying anyone rapid, accurate and detailed. An analyst could tell a great deal about people and many aspects of their lives from linked datasets.
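To make the quotation's contrast between the centralized and local privacy models concrete, below is a minimal Python sketch (our illustration, not taken from the book) of randomized response, a classic local differential privacy technique of the kind Kearns and Roth discuss: each user adds noise to their own answer before it ever leaves their device, so the collector never holds raw data that could be handed over, yet population-level statistics remain recoverable. The function names and the probability parameter are hypothetical choices for illustration.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report
    a fair coin flip. The noise is added locally, before collection."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports: list, p_truth: float = 0.75) -> float:
    """Invert the noise: observed = p_truth * true + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 simulated users, 30% of whom would truthfully answer "yes":
# no individual report can be trusted, but the aggregate is accurate.
reports = [randomized_response(random.random() < 0.3) for _ in range(10_000)]
print(f"Estimated rate: {estimate_true_rate(reports):.3f}")  # close to 0.300
```

Because the collector only ever sees noisy reports, there is nothing raw to release through the 'nonprivate channels' the quotation describes; this is the practical appeal of the local model over a centrally held dataset.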

The second focus of the book is 'fairness', mainly the gender and racial inequities facing women and racialised communities, especially disadvantaged populations such as African Americans. The book provides real examples of biased algorithms causing problems in selecting who will get job interviews, get into certain schools, be approved for financial loans, and even who is recommended on dating apps. The rest of the book discusses how to find a balance in algorithm design, referred to in the book as 'equilibrium'. One of the goals is to design algorithms that present multiple options depending on each user's situation, or that do not let machines make decisions for humans in crisis situations; instead, decision-making power should be handed back to humans. For example, if a self-driving car is about to unavoidably hit a pedestrian or make a sharp turn that will hurt, and potentially kill, the driver and passengers, that decision should be put back to the human driver, not the AI algorithm. This is another interesting point of view.

THOUGHTS FOR LEGAL INFORMATION MANAGEMENT PROFESSIONALS

Online privacy continues to be the main concern in the AI and robotics era

We have seen much discussion of privacy invasion in academic works from various disciplines, particularly legal studies and philosophy. Even in computer science and engineering, many scholars are looking into the issue and trying to build privacy protection technologies. Privacy on the internet today differs from privacy in the coming AI and robotic era. When we browse websites on the internet, our system interactions are recorded and leave digital footprints of what we have clicked, searched and read. In the social media era, our engagement on social media platforms and our website activity, captured via cookies, are used to improve the interaction experience on these platforms. Facebook also uses this information to present us with ads and a news feed run by AI systems that decide what information should appear on users' walls. The scope of online privacy has thus grown wider than simply tracking website visits. More of us are using Internet of Things and AI products whose sensors detect activities, send data back to computers, and then receive commands telling them to act in certain ways. In general, sensors collect data about objects including humans, temperature, light, sound, pressure and speed, among other things. When humans are being detected, this is a higher level of personal data, and the situation worsens if data about people is collected or used without their consent.

A related example is the recent decision by the Office of the Privacy Commissioner of Canada ruling that Clearview AI violated Canadian privacy law.Footnote 3 Canada is the first country in the world to take such legal action against Clearview. The US high-tech firm offers technologies that quickly identify people's faces by searching photos available on the internet and other sources, including websites and the 'public pages' of people's social media accounts, such as profile photos on Facebook, LinkedIn, Twitter, etc. In Canada, Clearview AI's technology is used in law enforcement: Canadian police used, and might still use, Clearview AI to identify suspects, in the same way that US police and the FBI used the Clearview AI facial recognition app to identify the Capitol rioters of 6 January 2021.Footnote 4 The position of the Office of the Privacy Commissioner of Canada is that because the company gathers and analyzes photos of Canadians on the internet without their consent, Clearview AI infringes Canadians' privacy rights. However, the legal consequences or punishment in this kind of case are still unknown, because Clearview AI is based in the United States and closed its Canadian office some time ago. This raises questions about traditional concepts of legal power and governance over geographical jurisdictions, which may well not apply to an internet-connected world with no real borders. In conclusion, online privacy will continue to be the main concern of the AI era; it is becoming more complicated to regulate and it continues to challenge the rule of law as we traditionally understand it.

Our advice for legal information management professionals, including law librarians, legal researchers, law clerks and policy makers, is to treat privacy as a subject that continues to raise many issues in law. It is important to listen to many opinions on online privacy protection. When a new AI technology is introduced to the market, the first thing to do is to consider how it fits within the privacy regulations in your country, and then to try to test the technology to understand how it is designed. It is then important to think about the risks of privacy invasion and to closely monitor what local and international experts say about a given AI product. The goal is to be well read and informed.

The rise of concerns over AI and robots making unfair decisions toward vulnerable populations, which could amount to discrimination

The fairness issue is a newer concern in AI and robotic development; it did not really exist before the era of the internet and the personal computer. As more AI and robotic technologies are developed and deployed for real use in everyday human life, technologists may unintentionally cause harm by building machines that make unfair decisions toward vulnerable populations, decisions which could be considered discriminatory or biased. A well-known example is Google's AI algorithm in the Google Photos app, which was made to automatically label or tag images with categories. In 2015, the algorithm classified people of African descent as 'gorillas'.Footnote 5 Technologically, this mistake likely happened because there was not a large enough dataset to train the AI algorithms to recognise the physical appearance of dark-skinned people, or because there were not enough programmers of African heritage in the company's AI development process to be fully aware of these human identity issues.

From our point of view, even within the academic community we often collect data on one population group more than another, sometimes along racial, gender, socio-economic or other lines. Data from academic research therefore in some ways adequately represents only certain groups and leaves others out. In some cases, AI algorithms are trained on social media data. It is well known that people often do not filter what they say in such an environment; they post without careful thought or consideration for others, so social media data contains many racist or sexist posts. If we use such data to train AI algorithms, then of course it will create racist, sexist or otherwise discriminatory AI systems, unless we are careful about the design of those algorithms. Another solution is to have more AI programmers from diverse backgrounds participating from early development onward, so that the critical points where algorithm design meets acute cultural and societal issues can be better addressed. This, in turn, will build greater know-how in training and building ethical algorithms.

The last point we want to make is the need for more fairness testing of AI by third parties, which could include NGOs, community organizations, academic researchers and individuals with backgrounds in technology studies, computational social science or similar fields. This part of the process would be helpful because many AI problems are created by mistake, so it is very important to test and evaluate systems and to publicly call out the makers when problems are found. A simple illustration of such a test is sketched below.
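As a concrete illustration of the kind of third-party fairness test we have in mind, here is a minimal Python sketch (our own hypothetical example, not a method from the book) that computes positive-decision rates per demographic group and the gap between them, a simple form of the group fairness measures Kearns and Roth analyse. The data, group labels and function names are invented for illustration.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Positive-decision rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive rates between any two groups (0 = parity)."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = loan approved, 0 = denied.
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(positive_rates(preds, groups))          # {'A': 0.8, 'B': 0.2}
print(demographic_parity_gap(preds, groups))  # 0.6, a large disparity
```

An auditor running such a check against a deployed system's decisions would still need to judge whether a gap reflects bias or legitimate factors; as the book stresses, choosing which fairness definition to enforce is itself a normative decision, not a purely technical one.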

Our advice for legal information management professionals is to pay attention to news reports on biased AI. There are not yet many actual legal cases in the courts on this matter, but they might arise in the future as class-action lawsuits brought by vulnerable populations against tech companies. There are also strong movements pressing governments to introduce regulation of AI and robotic technology. Listening to government positions and responses to these issues, usually communicated through working committee reports, is another vital source for legal information management professionals to monitor in order to keep learning what national governments have to say on this matter; such reports are publicly accessible via the internet. Not all countries make the same policies to regulate AI and robotic tools, as we can see with the Australian government ordering Facebook to pay Australian news publishers an estimated AU$407 million under the new Media Bargaining Law.Footnote 6 This law was introduced because Facebook allows users to share and view news on its platform, which the Australian government believes draws advertising revenue away from Australian professional news publishers. Facebook responded by no longer allowing users in Australia to share and view news on Facebook. Following public backlash, Facebook reversed its decision and came to an agreement with the Australian government and, eventually, with some Australian news corporations such as News Corp and Sky News Australia.Footnote 7 This rule applies only in Australia and does not yet apply to news media in other countries, not even in the US, where Facebook is based. This piecemeal, country-by-country approach to regulating multinational tech companies hints at future challenges as governments and civil society increasingly turn their attention to AI and robotic technology. Furthermore, we will keep seeing issues of internet and computer regulation in the coming years as more and more AI and robots are introduced into human societies and workplaces.

CONCLUSION

This is a book review article with additional thoughts for legal information management professionals based on the book 'The Ethical Algorithm' by Professors Kearns and Roth (2019). The main focuses of the discussion in this article are privacy in the AI and robot era and the rise of fairness concerns due to potentially biased AI algorithms and robots. We also offered sources of information, such as government communications, and examples of potential future challenges, such as inconsistent government regulation, related to the book's subject matter.

References

Footnotes

1 Intahchomphoo, Channarong and Gundersen, Odd Erik, 'Artificial Intelligence and Race: A Systematic Review' (2020) 20(2) LIM 74.

2 Kearns, Michael and Roth, Aaron, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (New York, Oxford University Press 2019).

3 Office of the Privacy Commissioner of Canada, Joint Investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d'accès à l'information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information Privacy Commissioner of Alberta (February 2021), online: <https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001/> [https://perma.cc/7EEE-TQWU].

4 Juliette Rihl, 'Facial Recognition Use Spiked After the Capitol Riot. Privacy Advocates Are Leery' (January 2021), online: <https://www.publicsource.org/capitol-riot-dc-clearview-facial-recognition-privacy/> [https://perma.cc/LL9G-QZ3Y].

5 BBC, 'Google Apologises for Photos App's Racist Blunder' (July 2015), online: <https://www.bbc.com/news/technology-33347866> [https://perma.cc/793J-4N5V].

6 Facebook, 'Changes to Sharing and Viewing News on Facebook in Australia' (February 2021), online: <https://about.fb.com/news/2021/02/changes-to-sharing-and-viewing-news-on-facebook-in-australia/> [https://perma.cc/7L38-7RF8].

7 'News Corp and Facebook Reach News Agreement in Australia' The Wall Street Journal (March 2021), online: <https://www.wsj.com/articles/news-corp-and-facebook-reach-news-agreement-in-australia-11615847666> [https://perma.cc/DL8L-XXYL].