
Adopting robot lawyer? The extending artificial intelligence robot lawyer technology acceptance model for legal industry by an exploratory study

Published online by Cambridge University Press:  13 February 2019

Ni Xu
Affiliation:
School of Management, National Taiwan University of Science and Technology, Taipei, Taiwan (R.O.C.)
Kung-Jeng Wang*
Affiliation:
School of Management, National Taiwan University of Science and Technology, Taipei, Taiwan (R.O.C.)
*Corresponding author. Email: kjwang@mail.ntust.edu.tw

Abstract

The development of artificial intelligence has created new opportunities and challenges in many industries. The competition between robots and humans has elicited extensive attention among legal researchers. In this exploratory study, we addressed issues regarding the introduction of robots to the practice of legal service through semistructured interviews with lawyers, judges, artificial intelligence experts, and potential clients. An extended robot lawyer technology acceptance model with five facets and 11 elements is proposed in this study. This model highlights two dimensions: ‘legal use’ and ‘perception of trust.’ In summary, this study provides new, specific implications for legal services with artificial intelligence and exhibits three characteristics, namely, derivative, macroscopic, and instructive. In addition, artificial intelligence robot lawyers are being developed with some of the abilities necessary to substitute for human beings. Nevertheless, working with human lawyers is imperative to produce benefits from this type of reciprocity.

Type
Research Article
Copyright
Copyright © Cambridge University Press and Australian and New Zealand Academy of Management 2019

Introduction

Artificial intelligence (AI) refers to the theory and application of systems used to simulate, extend, and expand human intelligence. It involves mathematics, informatics, linguistics, philosophy, psychology, and mechanics, among other fields. In the future, AI may even be integrated to simulate human cognition, learning, reasoning, and behavioral ability (Brougham & Haar, Reference Brougham and Haar2018). At present, AI is undergoing rapid development and exerts significant influence on certain industries. Simultaneously, human concern is increasing with regard to the growing number of jobs that will be replaced by AI robots, such as factory workers, bank tellers, and truck drivers (Huang & Rust, Reference Huang and Rust2018). AI brings convenience to people, but it can also lead to the substitution of human workers with robots (Dekker, Salomons, & van der Waal, Reference Dekker, Salomons and van der Waal2017). Some scholars even predict that at least eight professions will be threatened by robots in the next 10–20 years, and lawyers are among these professions (Zlotowski, Yogeeswaran, & Bartneck, Reference Zlotowski, Yogeeswaran and Bartneck2017). The application of AI in the legal industry will continue to increase in the next few years (Remus & Levy, Reference Remus and Levy2017); through AI and other technologies, many tasks that previously required human lawyers are currently being performed by machines (Greenleaf, Mowbray, & Chung, Reference Greenleaf, Mowbray and Chung2018).

The opinion that ‘lawyers are going to lose their jobs’ is common in newspapers and magazines due to the rapid development of AI and the severe challenges that it will create for humans in various professions (McClure, Reference McClure2018). Will human lawyers be easily replaced? What new procedures will be required if the public accepts AI robot lawyers? How will human lawyers overcome the challenges presented by robots? This study explores these issues.

The objectives of this study are as follows.

  1. To identify the capabilities of AI robot lawyers with regard to replacing human lawyers (i.e., which human capabilities are irreplaceable).

  2. To propose an extended technology acceptance model (TAM) for AI robot lawyers (RLTAM) that is closely intertwined with the highly professional practice of legal service and is suitable as a dedicated theory of AI robots in this field.

By examining the extended TAM, this study provides an in-depth discussion of the use of AI robot lawyers in legal practice and presents the possibility of AI robot lawyers replacing human lawyers as a reference for future robot applications. Moreover, it discusses the future development of the profession of human lawyers and offers suggestions for strengthening the advantages of human lawyers.

Literature survey

Development of AI robot lawyers

In 1987, Northeastern University in the United States held the first International Conference on AI and Law to promote interdisciplinary research and the application of AI to law (Rissland, Ashley, & Loui, Reference Rissland, Ashley and Loui2003). Since then, the field of legal informatics has emerged, with AI and law at its core, engaging in interdisciplinary research across law, social science, informatics, intelligent technology, logic, and philosophy (Aguilo-Regla, Reference Aguilo-Regla2005).

JP Morgan Chase & Co. launched COIN, a contract analysis software that can replace lawyers in the task of reviewing contracts, thereby reducing work that originally required 360,000 h to a few dozen seconds (Galeon & Houser, Reference Galeon and Houser2017). ROSS, the world’s first AI robot lawyer, was launched in the United States in 2016; it can provide quick answers to questions related to legal matters (Arruda, Reference Arruda2016). An AI ‘judge’ for the European Court of Human Rights, which draws on a database of 584 cases on fair trial and privacy, can predict case outcomes with an accuracy of up to 79% (Aletras, Tsarapatsanis, Preotiuc-pietro, & Lampos, Reference Aletras, Tsarapatsanis, Preotiuc-pietro and Lampos2016).

Recently, LawGeex, a company that developed software for reviewing legal contracts, invited 20 lawyers to test its AI product. Working simultaneously, the lawyers and the AI reviewed five agreements and identified 30 legal questions. The average human lawyer required 92 min to complete the task and achieved an accuracy rate of 85%. By contrast, the AI achieved 95% accuracy and finished the task in 26 s (LawGeex, 2018). DoNotPay, an online AI platform that offers free legal advice, was launched in the United States in 2017; it is described by its inventor, Joshua Browder, as the ‘UK’s first robot lawyer,’ and it can handle up to 1,000 types of civil cases (Boynton, Reference Boynton2017).

The services provided by lawyers are often unable to meet the needs of the public. AI robot lawyers can solve the problem of imbalanced legal service resources, so the arrival of legal robots is inevitable (Castell, Reference Castell2018). As early as the 1970s, scholars began to discuss the integration of AI technology and law by studying the use of robot judges instead of human judges to eliminate legal uncertainty (D’Amato, Reference D’Amato1976; von der Lieth Gardner, Reference von der Lieth Gardner1987), but the first priority of legal AI is to assist and serve judges or lawyers in handling cases, not to replace them (McGinnis & Pearce, Reference McGinnis and Pearce2014).

AI can assist lawyers in making decisions in four fields of legal service, namely, consultation and guidance, file retrieval, data review, and lawsuit prediction (Greenleaf, Mowbray, & Chung, Reference Greenleaf, Mowbray and Chung2018), and can assist lawyers in making defense decisions in file review, evidence review, and judicial rule mining (McGinnis & Pearce, Reference McGinnis and Pearce2014). Many scholars are relatively optimistic about legal AI (Bersoff & Hofer, Reference Bersoff and Hofer1991; Bintliff, Reference Bintliff1996; Bast & Pyle, Reference Bast and Pyle2001; Alarie, Niblett, & Yoon, Reference Alarie, Niblett and Yoon2016) and affirm its value (Popple, Reference Popple1991; Ben-Shahar & Porat, Reference Ben-Shahar and Porat2016), arguing that AI techniques in natural language processing and machine learning can quickly analyze cases, making AI a good assistant for lawyers and judges (Aletras et al., Reference Aletras, Tsarapatsanis, Preotiuc-pietro and Lampos2016). Human lawyers are unable to compete with AI in terms of the ability to retrieve, analyze, and process data (Greenleaf, Mowbray, & Chung, Reference Greenleaf, Mowbray and Chung2018). AI legal robots allow lawyers to move from traditional defense to intelligent defense, changing the way lawyers work (Barton, Reference Barton2014), so it is wise to use and adapt to new AI technologies as early as possible (Greenleaf, Mowbray, & Chung, Reference Greenleaf, Mowbray and Chung2018; Nissan, Reference Nissan2018).

However, legal AI also faces certain practical dilemmas (Mommers, Voermans, Koelewijn, & Kielman, Reference Mommers, Voermans, Koelewijn and Kielman2009), including incomplete, untrue, and nonobjective data, insufficient structure, and fuzzy, opaque, and inefficient algorithms (Riesen & Serpen, Reference Riesen and Serpen2008; Mcnamar, Reference Mcnamar2009; Hashem, Yaqoob, Anuar, Mokhtar, Gani, & Khan, Reference Hashem, Yaqoob, Anuar, Mokhtar, Gani and Khan2015; Hildebrandt, Reference Hildebrandt2018), which can easily lead to new conflicts such as implicit discrimination (Moses & Chan, Reference Moses and Chan2014). AI is likely to automate general legal customer service, but it is difficult for AI to replace humans in high-level interactions such as negotiation (Bryson & Winfield, Reference Bryson and Winfield2017), because AI has a limited cognitive range and its computational patterns and logic are relatively fixed (Prakken, Reference Prakken2005), rendering it incompetent for judicial work requiring broad knowledge coverage and high technical content (Deedman & Smith, Reference Deedman and Smith1991; Zeleznikow, Reference Zeleznikow2002). It is difficult for AI to possess uniquely human ideology and emotional morality (Valente & Breuker, Reference Valente and Breuker1994; Marcus, Reference Marcus2008; Ashley, Reference Ashley2012), which eventually leads to the alienation of legal and AI professionals from one another (McNally & Inayatullah, Reference McNally and Inayatullah1988; Oskamp & Lauritsen, Reference Oskamp and Lauritsen2002; Strnad, Reference Strnad2007). Legal AI technology is also less favored by capital and practitioner circles (Bertolini & Aiello, Reference Bertolini and Aiello2018).

At present, the introduction of AI into actual legal practice is slow. We should consider the reliability, sharing, and legal supervision of data, among other issues (Rissland, Ashley, & Loui, Reference Rissland, Ashley and Loui2003). We should pay attention to the study of its complex systems (Pham, Madhavan, Righetti, Smart, & Chatila, Reference Pham, Madhavan, Righetti, Smart and Chatila2018), or develop integrated legal decision-support systems rather than ‘expert systems’ or ‘AI robot lawyers’ (Greenleaf, Mowbray, & Chung, Reference Greenleaf, Mowbray and Chung2018). A large number of legal questions in reality have no standard answer; resolving them requires weighing interests, human feelings, and social customs, and understanding real society, which is not easy even for experienced lawyers and is even harder for robots (Evans & Price, Reference Evans and Price2017).

The rapid development of AI will have an impact on structural changes in the legal profession, the market for legal services, and the redistribution of global lawyers’ resources (Bench-Capon et al., Reference Bench-Capon, Araszkiewicz, Ashley, Atkinson, Bex, Borges and Wyner2012). AI will enhance the flexibility of legal services and promote their standardization, systematization, commercialization, transparency, and automation (Adamski, Reference Adamski2018). In the future, AI will become the industrial standard for legal service, thus eliminating the asymmetry of legal resources and achieving wider judicial justice (Hilt, Reference Hilt2017). At this stage, however, AI robot lawyers are not real professionals, and human lawyers need to monitor them at all times (Goodman, Reference Goodman2016). How to transform human experience and thinking into algorithms is one of the biggest problems in AI (Papakonstantinou & De Hert, Reference Papakonstantinou and De Hert2018); it is also a challenge that legal professionals must face, and it will be long-term and arduous work (Alarie, Niblett, & Yoon, Reference Alarie, Niblett and Yoon2018).

In summary, academic research on the integration of law and AI is still at an initial stage: it lacks understanding of how AI applies to specific legal science and technology, and it lacks even more an analysis of the applicability of legal AI technology from the perspective of consumers. Therefore, based on TAM modeling, this study explores the key elements of social acceptance and how AI robots can enter the legal profession, through interviews with and analysis of judges, lawyers, clients, and AI experts. The results of this study will contribute to the practical application and integration of law and AI and fill gaps in academic theory.

TAM

TAM was proposed by Davis in 1989 to study the determinants of personal acceptance of information technology, applying the theory of reasoned action to external and dependent variables (Pavlou, Reference Pavlou2003). TAM aims to explore the degree of individuals’ willingness to use an information technology system in the future (Davis, Bagozzi, & Warshaw, Reference Davis, Bagozzi and Warshaw1989).

TAM posits that users’ actual acceptance of a new technology depends on their intention to use it, and that two key variables, namely, perceived usefulness and perceived ease of use, affect this intention (Roca, Chiu, & Martinez, Reference Roca, Chiu and Martinez2006). Several scholars have introduced consumer trust as a key variable in studies of TAM (Gefen, Karahanna, & Straub, Reference Gefen, Karahanna and Straub2003; Fuller, Serva, & Baroudi, Reference Fuller, Serva and Baroudi2010; Kim, Reference Kim2012). They found that consumer trust exerts a significant impact on perceived usefulness (Abroud, Choong, Muthaiyah, & Fie, Reference Abroud, Choong, Muthaiyah and Fie2015). On the basis of TAM, this study discusses the influences of perceived ease of use, perceived usefulness, and consumer trust on consumers’ willingness to accept AI robot lawyers by introducing the consumer trust variable.
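As an illustration only, the hypothesized TAM relationships described above can be sketched as a toy linear scoring model. The function name, the coefficient values, and the trust-to-usefulness interaction weight below are hypothetical placeholders for exposition; they are not estimates from this study or from Davis (1989).

```python
# Illustrative sketch of the extended-TAM relationships discussed above.
# All weights are arbitrary placeholders, not empirical estimates.

def usage_intention(ease_of_use: float, usefulness: float, trust: float) -> float:
    """Toy model: intention to use rises with perceived ease of use,
    perceived usefulness, and consumer trust (each scored 0..1)."""
    # Trust is hypothesized to also feed into perceived usefulness
    # (cf. Abroud et al., 2015), modeled here as a small additive term.
    adjusted_usefulness = usefulness + 0.2 * trust
    score = 0.3 * ease_of_use + 0.4 * adjusted_usefulness + 0.3 * trust
    return min(score, 1.0)  # cap the composite score at 1.0

# Example: a moderately easy-to-use, useful, and trusted robot lawyer.
print(round(usage_intention(0.7, 0.6, 0.5), 3))  # → 0.64
```

The point of the sketch is only that trust enters the model twice: directly, and indirectly through perceived usefulness, which is the extension this study adopts from the trust-augmented TAM literature.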

Research methods

The primary methods used in this study are secondary data collection (e.g., related news and literature) and semistructured in-depth interviews to conduct qualitative research. This study aims to explore the relationship between AI robot lawyers and human lawyers and to identify the elements of AI robot lawyers that are accepted by human users. Therefore, four practicing lawyers, two professional judges, two AI experts, and two potential clients were selected as subjects in this study. A qualitative analysis was used to determine their theoretical opinions.

The interview period was from October 2017 until the end of March 2018. The average duration of each interview was 70 min. The entire process was recorded and transcribed verbatim, and the transcripts were used as the basis for analyzing data in this study. After interviewing the 10 subjects, we could no longer obtain new information from them and confirmed that we had collected the required data. Therefore, data obtained from the 10 interviewees were further analyzed. The basic information of the interviewees is provided in Appendix 1, and the outline of the interview is presented in Appendix 2.

Research results

Analysis of human and machine capabilities

A lawyer’s work typically features a high-pressure and complex environment, and requires authoritative, intelligent, and professional characteristics (Flower, Reference Flower2018). When facing clients, lawyers should be inquisitive and should exhibit a sense of justice and empathy, logical thinking, analytical and judgment capabilities, and good communication skills and negotiating abilities (Goodman-Delahunty, Granhag, Hartwig, & Loftus, Reference Goodman-Delahunty, Granhag, Hartwig and Loftus2010). With regard to the comparison between human and AI robot lawyers, the interview results can be divided into two aspects: (1) AI robot lawyers may replace human lawyers in terms of the structure of their abilities, and (2) AI robot lawyers cannot, or at least for the time being, be used as a substitute for human lawyers.

Ability to replace human beings

From the results of the interviews, the abilities of AI robot lawyers that can replace those of human lawyers in the practice of law are as follows:

1. Ability to collect and retrieve data: AI can help people in the arduous task of collecting information, including online information searches, filtering of relevant and redundant information, integration of information from a wide variety of information sources, and reinterpretation of information consistency (Wichmann, Korkmaz, & Tosun, Reference Wichmann, Korkmaz and Tosun2018). AI exhibits a high degree of accurate recall and memory ability, thereby allowing it to manage a huge amount of multidimensional information and to explore the correlation among multiple variables; thus, AI is highly suitable for data exploration (Aryabarzan, Minaei-Bidgoli, & Teshnehlab, Reference Aryabarzan, Minaei-Bidgoli and Teshnehlab2018). These functions comprise the preparatory work of lawyers to deal with legal affairs.

AI can provide reproducible labor and alternative legal services, such as preparing and searching a large number of data files, thereby allowing lawyers to devote more time and energy to providing high-quality legal services and improving service efficiency (Interviewee 4).

Lawyers have to exert tremendous effort searching for a large amount of data, which are frequently scattered. Collecting data is extremely difficult. An AI robot lawyer that can help collect data will be great (Interviewee 1).

A robot’s ability to cope with data collection, analysis, and collation is certainly superior to that of humans. Its command cycle is also extremely fast (Interviewee 8).

Based on the massive amount of information in a big data platform, AI can quickly obtain the solution to the problem by using the current data processing technology (Interviewee 5).

AI can immediately locate laws, precedents, and corresponding statistical results; thus, they can assist human lawyers in conducting complex analysis (Interviewee 2).

2. Ability to analyze and predict cases: The analysis and prediction of case trends typically depend on the logic and analytical ability of human lawyers. However, human cognitive ability can be imitated by AI robot lawyers. Robots can learn automatically and find rules from huge amounts of data; they can also make predictions (Chandrinos, Sakkas, & Lagaros, Reference Chandrinos, Sakkas and Lagaros2018). AI is programmed to respond quickly and is good at identifying the best strategies among a multitude of materials and clues; these abilities are conducive to the settlement of cases (Wang, Wang, & Shi, Reference Wang, Wang and Shi2018).

Robots can learn to conduct rational analysis and predict events like humans (Interviewee 9).

A robot lawyer exhibits logical ability and can provide basic analysis and judgment of a case (Interviewee 5).

If the lawyer is a robot, then I think that it is feasible for it to determine the logic of a case and make a rational prediction (Interviewee 6).

In current legal practice, the law contains many disputes and uncertainties, so people are confused about certain cases, which is not conducive to the stable implementation of the law. The development of AI robot lawyers will strengthen the stability and predictability of the law because of the objectivity and fairness of their robotic procedures (Interviewee 2).

Under the current law, even for the same statute, the results of actual cases may vary widely, which troubles lawyers and the public. An AI robot lawyer could analyze the most likely outcome from thousands of cases and increase the accuracy of the forecast, which would also bring stability and consistency to the work of lawyers and the trials of courts. It is worth encouraging (Interviewee 1).

Irreplaceable human abilities

From the results of the interview, AI robot lawyers cannot replace human lawyers in the following aspects.

1. Intuitive strain capacity: When handling cases, human lawyers will combine rational logical analysis and perceptual intuitive reactions to make judgments. By contrast, a robot’s understanding of grammar and wording, along with its emotional intuitive response, is not as responsive as that of humans (Lee & Kim, Reference Lee and Kim2018). In particular, the intuitive response to human emotions is too complex for robots (Menne & Schwab, Reference Menne and Schwab2018).

AI requires more capabilities for case detail recognition and complex legal relationship analysis, but AI currently does not have the capacity to handle complex work (Interviewee 3).

Compared with robots, human lawyers are flexible, more responsive, and remain more unflappable during emergencies (Interviewee 4).

Human beings can react to or judge many simple details directly by intuition, but AI tends to be the opposite. The simpler things are for human beings, the more complex they are for AI. For example, the analysis of expressions, emotional experiences, and intuitive feelings can be performed immediately by humans, but these aspects are too complex for AI and will require a large number of operations and analyses (Interviewee 7).

2. Empathy: Communicating with humans is imperative when dealing with legal matters. Human lawyers feel empathy and can understand the difficulties, anxieties, and expectations of clients, thereby enabling them to think from the perspective of their clients. Empathy allows people to understand the feelings of others from their perspective; hence, they can communicate reasonably with others about their feelings (Alam, Danieli, & Riccardi, Reference Alam, Danieli and Riccardi2017). At present, AI technology requires a long time to develop this aspect of empathy (Wiese, Metta, & Wykowska, Reference Wiese, Metta and Wykowska2017). Future robots should work together with people, not only in terms of providing wisdom to complement those of human beings, but should also learn to show empathy and communicate to achieve this goal (Hofree, Ruvolo, Reinert, Bartlett, & Winkielman, Reference Hofree, Ruvolo, Reinert, Bartlett and Winkielman2018).

No emotion, no expression. These characteristics make AI inferior to humans. Emotions are important for cooperation (Interviewee 5).

Compared with human lawyers, AI can never be as creative and empathetic. Many details and evidence will not be used optimally (Interviewee 4).

It is hard for a robot lawyer to think about its clients and reflect their feelings. It has no emotions (Interviewee 2).

How does a person win against a computer? A lawyer’s role is to act as a bridge between the plaintiff and the defendant. Both parties are volatile, but robots cannot comfort them or listen to them. Human lawyers can (Interviewee 8).

3. Creativity: Creativity refers to the ability to generate new ideas and create new things (Kralik, Mao, Cheng, & Ray, Reference Kralik, Mao, Cheng and Ray2016). It is a unique comprehensive ability of humans that is characterized by novelty, originality, divergent thinking, and an indirect, unfettered way of thinking to explore the unknown without direction. Creativity represents the peak of higher-level cognition (Olteteanu, Falomir, & Freksa, Reference Olteteanu, Falomir and Freksa2018). Good lawyers, instead of using procedural clichés, can create new solutions that can help clients solve problems. AI relies on human programs and software, which results in difficulties in achieving human creativity and imagination (Augello, Infantino, Manfre, Pilato, & Vella, Reference Augello, Infantino, Manfre, Pilato and Vella2016). Humans can integrate and analyze an individual’s knowledge through creativity and experience. Such ability is a valuable human merit that robots lack (Gray & Wegner, Reference Gray and Wegner2012).

Science cannot explain the way people think; thus, technology cannot produce AI robot lawyers who can think like humans (Interviewee 1).

Robots, even with autonomous consciousness, represent logical programming; creativity is difficult for them to achieve (Interviewee 7).

Human lawyers have a flexible mind, but AI robot lawyers work in line with a set program and will not be creative (Interviewee 3).

4. Psychological warfare: In adversarial activities such as war, commerce, and litigation, psychological stimulation and influence will be exerted on opposite sides of the competition in various ways (Lopez, Reference Lopez2017). In the practice of legal service, lawyers frequently use psychological warfare techniques to demoralize their adversaries; they combine these techniques with offensive or defensive methods, launch strategies, or use psychological tactics to realize a victorious litigation (Fells, Caspersz, & Leighton, Reference Fells, Caspersz and Leighton2018).

Human lawyers are cunning in many ways. They are flexible in applying strategies and tactics, but AI robot lawyers are not (Interviewee 6).

AI robot lawyers are simpler and more direct. They do not exhibit hypocrisy, acting skills, or mask their emotions (Interviewee 5).

Many clients lie. Robots will find it more difficult to identify lying clients compared with human lawyers (Interviewee 4).

5. Negotiation ability: This ability refers to the comprehensive and special skill of negotiating; it includes thinking, observing, reacting, and expressing (Reif & Brodbeck, Reference Reif and Brodbeck2017). The negotiation ability used in the work of lawyers requires long-term professional training. Contemporary AI technology can neither express inputs and outputs precisely nor explain contents clearly (Gadanho & Hallam, Reference Gadanho and Hallam2001). Even with language recognition and similar expressive capabilities in the near future, AI robot lawyers cannot perform as well as human lawyers in negotiations, which depend closely on human experience and understanding (Ojha, Williams, & Johnston, Reference Ojha, Williams and Johnston2018).

Most people are more willing to talk to people than to machines, although this assumption is a matter of opinion. But I want to talk with human beings (Interviewee 7).

AI robot lawyers are mechanized; their thinking and creative abilities are not as strong as those of humans. In general, AI robot lawyers can only act based on procedures previously designed by humans; even if they develop autonomous consciousness, they cannot reach the level of human autonomous creation (Interviewee 8).

RLTAM

A new TAM for AI robot lawyers, namely RLTAM, is proposed based on the literature review and the qualitative analysis of the in-depth interviews in this study. The model is divided into five facets and 11 elements. The framework is shown in Figure 1.

Figure 1 Artificial intelligence robot lawyer technology acceptance model

Legal use

The first facet of the model is legal use, which includes two elements: legal permission and legal imputation. The current study found that AI robot lawyers can be widely used in society only if they are legal and recognized by the state, government, society, and the public.

1. Legal permission: Every country has set up a licensing system for the legal profession. Lawyers can only practice after obtaining a license by passing an examination prescribed by the state. Lawyers are supervised and managed by a competent authority. This study found that future AI robot lawyers must be licensed under national law and authorized by the client.

Lawyers need a license. A robot lawyer will not be accepted by a judge without a license (Interviewee 7).

A robot lawyer should have the law’s permission to practice! This should be a requirement (Interviewee 4).

If I am to entrust my case to a robot lawyer, then I would first see whether it is qualified to receive a case, similar to a human lawyer. It must have the qualifications of a lawyer, which is equivalent to the country examining its ability ahead of time. If it is qualified, then a robot lawyer should be able to practice law (Interviewee 10).

2. Legal imputation: In this study, legal imputation refers to the legal principle that AI robot lawyers should bear legal responsibility. If AI robot lawyers make mistakes, then they also have to bear the corresponding legal liabilities. However, traditional means of punishment, such as fines, imprisonment, or death, are not effective for robots. AI robot lawyers are essentially man-made, semiautomatic or fully automatic products, and defining liability is difficult. Thus, the issue of attribution of responsibility still requires a long discussion. The respondents believe that this issue should be considered carefully.

Constructing the legal responsibility of AI robot lawyers from the perspective of law is a critical element of their ‘legal use.’ Through reasonable consideration, assumption, and evaluation of potential risks, difficult problems in the process of technology application and development can be solved. This action can promote the development of technology, protect the legitimate rights and interests of humans, and increase the intention to use AI robot lawyers.

If humans design a self-conscious robot lawyer in the future, then it will be a completely moral subject that is capable of rational analysis and self-examination and is accountable for its own actions. More people will be willing to hire AI robot lawyers (Interviewee 8).

If an AI robot lawyer makes a mistake, then how can it be punished for dereliction of duty? What if the AI is wrong? Who is in charge? Holding AI responsible is difficult (Interviewee 7).

If a human lawyer makes a mistake, then he/she can take full responsibility for himself/herself and his/her law firm. But if a robot lawyer makes a mistake, then who is in charge? (Interviewee 9).

Perceived ease of use

Perceived ease of use is the belief that using a particular system will not be laborious; that is, how easy a particular system is to use (Davis, Bagozzi, & Warshaw, Reference Davis, Bagozzi and Warshaw1989). In this study, it refers to whether AI robot lawyers are easy to operate and use when providing legal services, thereby increasing people’s intention to use them. The interviewees agree that AI robot lawyers are simple to use, which can be concluded from their ability to substitute for humans in data collection and retrieval. Their case prediction and analysis capabilities demonstrate that robots have big data analysis skills, absolute rationality, memory storage, and other ease-of-use characteristics. Therefore, the perceived ease-of-use facet is analyzed in terms of big data analysis, absolute rationality, and memory storage.

I choose to work with AI robot lawyers at the junior level because they are not very complicated (Interviewee 3).

AI robot lawyers may be easier to operate because they can do whatever they are asked to do (Interviewee 8).

If a robot lawyer is easier to use and less complicated to operate, then I think I will be willing to work with it (Interviewee 9).

1. Big data analysis: Big data is defined as a large amount of data processed at high speed, exhibiting diversity, variability, and other characteristics of a data set, along with effective storage, processing, and analysis (Ramirez-Gallego, Fernandez, Garcia, Chen, & Herrera, Reference Ramirez-Gallego, Fernandez, Garcia, Chen and Herrera2018). Many respondents recognized the big data analysis capability of AI, which human lawyers lack. Data collection and retrieval abilities and case analysis and prediction abilities can be regarded as big data analysis ability, which constitutes an important component of perceived ease of use in the RLTAM model; this element has a positive effect on perceived ease of use.

AI can handle a huge amount of data and has a large memory. People cannot do it (Interviewee 7).

AI allows courts to hear cases more efficiently. AI robot lawyers use big data to establish basic facts and apply the law (Interviewee 4).

The data analysis, computing, and memory abilities of robots are considerably stronger than those of humans, which represent their biggest advantage over the latter (Interviewee 8).

2. Absolute rationality: Rationality refers to the human ability to use reason to infer a sound conclusion after careful thinking, as opposed to the concept of sensibility (Handel & Schwartzstein, 2018). In this study, absolute rationality means that a robot lawyer strictly follows the procedure set by the engineer and the target set by the client; a robot lawyer is unaffected by any emotion. The absolutely rational behavior of AI systems emphasizes accomplishing particular goals based on certain known beliefs (Breazeal, 2003). The technology of rapidly searching for relevant information in a database is unaffected by emotional stress. This technology can be applied to face-to-face communication with humans and can also avoid the influence of subjective human factors (Cavallo et al., 2018). The absolutely rational nature of AI robot lawyers allows them to serve the user completely without being distracted from the client's mandate. The absolute rationality of AI robot lawyers exerts a positive effect on perceived ease of use. All the respondents agreed that AI is more rational than humans.

AI has no emotion and experiences no emotional pressure. Thus, communication is relatively without burden (Interviewee 7).

The advantage of AI is its computer rationality; its disadvantage is its excessive rationality. The addition of AI should lead judgment toward a rational direction (Interviewee 8).

AI robot lawyers are very rational. Human lawyers are often affected by emotions and different values (Interviewee 2).

In legal practice, human beings may be treated unequally because of factors such as race, gender, and class, such that legal trials may carry certain biases. The development of robot lawyers can exclude such human interference factors in programming design and development and achieve relatively objective, rational, fair, and just outcomes (Interviewee 5).

The race, gender, occupation, personality, and even appearance of the parties involved will more or less affect a lawyer's perception and subjective attitude. If absolutely objective AI robot lawyers are not interfered with by these emotional factors, they can make reasonable judgments and provide reasonable service (Interviewee 6).

3. Memory storage: Human memory is the ability of the nervous system to store past experiences; robotic cognitive systems are equipped with memory systems similar to human autobiographical memory systems (Pointeau & Dominey, 2017). Robot memory imitates the human emotional memory model; it has powerful storage and memory functions that can support different behaviors, whereas human memory is limited (Masuyama, Loo, & Seera, 2018). Through cloud storage and retrieval, the memory storage ability of AI robot lawyers is powerful, which positively affects perceived ease of use. By contrast, the human lawyer's brain is selective and limited in memory storage.

The advantages of AI lie in memory, storage, and knowledge renewal (Interviewee 3).

AI memory ability is extremely large (cloud) (Interviewee 7).

The memory of the human brain is limited, and the amount of information stored considerably differs from that of AI robot lawyers (Interviewee 8).

Perception of trust

The perception of trust is a subjective feeling of honesty and trustworthiness (Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003). A significant relationship exists between human trust in robots and the performance of robots (Hancock et al., 2011). Trust represents a belief in the integrity, goodwill, ability, and predictability of a new technology platform, the expectations generated by the client in a risky environment, and the execution of important or special activities (Gefen, Karahanna, & Straub, 2003). Given its abstractness and complexity, trust is defined differently in various fields. This study found that the interviewees personified AI robot lawyers, exhibited certain subjective psychological reactions toward them, and formed a one-way emotional bond, that is, trust. Consumers likewise anthropomorphize robots, attributing to them an inner life and emotions they can trust (Arbib & Fellous, 2004). Therefore, the trust of human clients in AI robot lawyers originates from the performance of robots in legal service.

If I trust a robot lawyer, then I will consider choosing it (Interviewee 9).

AI robot lawyers may be more trustworthy than humans because humans may betray you; robots are incapable of betrayal (Interviewee 10).

If a robot lawyer will act as an assistant, then I will trust its ability and I will be willing to hire it (Interviewee 4).

Humans may trust AI robot lawyers more in some ways, thinking that robots will listen more to them than human lawyers (Interviewee 6).

Human beings will always have doubts about computers. The first challenge is to overcome doubts about AI, which is a problem of human psychology and not of technology. Do people trust robots more? No! Showing the analysis process is difficult. Thus, AI is not trusted (Interviewee 8).

If a robot lawyer can express itself like a human being, with the feeling of a soul, then replacement can occur. However, it is also a question of trust, which is always a problem. If people really trust robots, then humans can be replaced (Interviewee 8).

On the basis of the preceding analysis, this study defines trust in the scenario of using AI robot lawyers as follows: when consumers use AI robot lawyers in uncertain and risky environments, they believe that the trusted object is reliable and worthy of trust, and they perceive it as safe, credible, reliable, and capable. This facet contains three elements: machine safety, capability dimension, and human–computer interaction.

1. Machine safety: In this study, machine safety refers to the security of using AI robot lawyers, as reflected in the security of data storage, the safety of information from leaks, and the assurance of not being betrayed. Robots make decisions without emotional interference, without being controlled by emotions, and without prejudice or greed (Canamero, 2005).

Human lawyers have a limited storage for legal files, which can be easily lost. By contrast, as long as the program is set up and a firewall system is installed, basic security and reliability are ensured in AI robot lawyers; thus, they are not susceptible to leaks (Interviewee 8).

AI robot lawyers are less likely to disclose private and secret information (Interviewee 3).

AI robot lawyers usually do not betray humans. They do not betray their clients for money, social relations, politics, and other reasons (Interviewee 6).

2. Capability dimension: The client’s evaluation of a lawyer’s ability is an important indicator affecting the ‘trust’ facet. Robots are predicted to become as intelligent as humans between 2020 and 2050 and to eventually outsmart them (Kurzweil, 2000).

AI robot lawyers are very powerful in data processing, analysis, and retrieval. I believe these characteristics are very useful and will help me with a considerable amount of work; hence, I will be happy to use robots (Interviewee 1).

If a robot lawyer can help me quickly sort out cases and analyze possible results, then I would like to use it as a reference (Interviewee 2).

Human lawyers in court should have a strong ability to adapt and observe; however, these characteristics generally do not apply to AI robot lawyers, which may affect the outcome of a case. I will not dare entrust a case to a robot (Interviewee 10).

If a robot lawyer is sent to negotiate with a human, then the robot may be too rational to perceive the person’s true feelings and ideas, which may not be conducive to my case. I may not hire a robot lawyer (Interviewee 9).

In terms of settlement and negotiation, I believe that human lawyers have an advantage. I trust human lawyers more than AI robot lawyers (Interviewee 7).

3. Human–computer interaction: This element refers to the study of the interaction between users and computer systems. The interactive use of AI robot lawyers by humans involves a combination of disciplines, such as computer science, psychology, sociology, industrial design, and law (Gatteschi, Lamberti, Montuschi, & Sanna, 2016). In addition, contextual perception, gesture recognition, eye tracking, speech recognition, three-dimensional input, facial expression recognition, natural language understanding, and handwriting recognition are all important technologies for user acceptance (Hak & Zeman, 2017). As service-oriented robots, AI robot lawyers require a good interpersonal interaction experience (Hibbeln, Jenkins, Schneider, Valacich, & Weinmann, 2017), which can affect user purchase decisions and customer loyalty (Schoenick, Clark, Tafjord, Turney, & Etzioni, 2017). Therefore, good communication skills and a positive human–computer interaction experience positively affect user trust.

Perceived usefulness

Perceived usefulness refers to the degree to which a new technology system enhances the performance of its users (Davis, Bagozzi, & Warshaw, 1989). New technology systems can help improve operational efficiency, reduce costs, and enhance personnel performance. The interviewees agreed that robot lawyers are efficient and helpful. They concurred that solving clients’ legal problems conveniently, quickly, and at a low cost constitutes the basic facet of perceived usefulness, which consists of the following elements.

1. Problem-solving: Problem-solving is the use of existing knowledge, experiences, skills, and a variety of thoughts and actions to address problems, such that a situation can be changed to a desired state (Vonhippel, 1994). The interviewees agreed that AI robot lawyers can produce the best solution to help solve a problem.

To save time and improve efficiency, AI can quickly locate the involved laws, precedents, and corresponding statistical results. It can assist human lawyers in complex analysis (Interviewee 1).

AI can quickly cope with the massive information in a big data platform by using the current data processing technology (Interviewee 5).

2. Low price: The cognitive value of technology products includes monetary and nonmonetary values, and price affects the relationship between quality and purchase intention (Yang & Peterson, 2004). When assessing the cost and value of AI robot lawyers, the respondents considered the costs of human lawyers to be higher than those of AI robot lawyers. First, the experience of human lawyers is costly to acquire and difficult to imitate. Second, AI robot lawyers are essentially powerful computers that can work around the clock and be replicated on a large scale. The overall cost of research, development, and application will be considerably lower than that of human lawyers, and the price will be relatively low.

In terms of cost, AI may exhibit higher performance than humans. Robots can save considerable labor cost and time (Interviewee 1).

AI robot lawyers cost less than human lawyers. Evidently, robots cost less because they are machines (Interviewee 4).

The cost of developing a robot lawyer may be higher at first, but once the scale effect is developed, the cost is lower than that of humans. Moreover, robots never get tired and do not need rest (Interviewee 6).

If AI robot lawyers are developed, then the costs of lawyers will be considerably reduced, along with the costs of public litigation. Will this result in indiscriminate litigations and an explosion of litigations? That will be another social problem (Interviewee 8).

3. Convenience: Convenience refers to the ease the customer experiences in the process of using a robot lawyer, including convenience of time, place, acquisition, and mode of execution (Brown, 1989). All the interviewees recognized the convenience of using AI robot lawyers.

AI robot lawyers can provide reproducible labor and alternative, transactional legal services (Interviewee 4).

Compared with human lawyers, AI robot lawyers can serve their clients anytime and anywhere (Interviewee 6).

Human lawyers may get tired when they work too much. AI robot lawyers never complain, and they are not governed by emotions (Interviewee 2).

Use intention

Use intention is the strength of a user’s intention to perform a specific technology-related behavior and to perceive it positively (Davis, Bagozzi, & Warshaw, 1989). In this study, use intention refers to the client’s willingness to hire a robot lawyer; that is, the client’s willingness to trust a robot lawyer to represent him/her.

If the state passes a law, then AI robot lawyers can be used on a large scale. They will assume legal responsibility, or the subject of responsibility can be clearly identified. We are willing to hire AI robot lawyers because they are easy to use, low cost, and trustworthy (Interviewee 10).

AI performs the role of a ‘think tank.’ It cannot be replaced, and replacing it is wrong (Interviewee 7).

AI can participate in the work of lawyers, assist human lawyers, complete low-level data collection and retrieval, and provide preliminary judgment. With the advancement of technology, lawyers can considerably reduce routine work and improve efficiency (Interviewee 1).

Conclusion

At present, science and technology continue to develop and evolve. Humans have developed robots since the invention of computers. However, humans require a certain amount of time to adapt to new things, and the speed of accepting new technologies consistently lags behind the rapid changes in technology. The robot lawyer TAM (RLTAM) proposed in this study helps expand the research scope of AI and presents new possibilities for integrating AI into the practice of legal services. Given its efficiency, rationality, and functionality in performing work originally assigned to humans, AI can improve overall efficiency. At this stage, however, working with human lawyers remains necessary to produce beneficial reciprocity.

This study is exploratory and mainly uses secondary data collection (related news and literature), a semistructured interview design, and in-depth interviews with four practicing lawyers, two professional judges, two AI experts, and two potential clients. Based on the interview contents, this study adopts the perspective of the user and uses qualitative research to understand the applications of robots in the practice of law and other related considerations. Predictions and suggestions for the development of human lawyers and AI robot lawyers are also provided. The following conclusions are drawn from this study.

AI robot lawyers possess some abilities that can replace those of humans

AI robot lawyers and human lawyers have their own strengths and weaknesses. Humans have many abilities that AI robot lawyers cannot replace. These abilities include intuition, empathy, creativity, psychological warfare, and negotiation. Meanwhile, the abilities that can be substituted by robots include data collection and retrieval and case analysis and prediction. Therefore, AI robot lawyers cannot completely replace human lawyers at present, but they can assist human lawyers in addressing legal affairs.

RLTAM

TAM is a behavioral intention model based on the theory of reasoned action. It was mainly designed to explain user acceptance of new information systems. This study extends the model: two key influence factors, namely, legal use and trust perception, are proposed to bridge theoretical gaps. From the in-depth interviews, we found that acceptance of AI robot lawyers involves five major facets and 11 elements. The facets are legal use, perceived ease of use, trust perception, perceived usefulness, and use intention. The 11 elements comprise legal permission, legal imputation, big data analysis, absolute rationality, memory storage, machine safety, capability dimension, human–computer interaction, problem-solving, low price, and convenience. A clear legal license and legal liability will have a positive effect on the other four facets. The stronger the ability of AI robot lawyers to replace human lawyers, the stronger the trust of consumers. In terms of willingness to use, this study found that if clients and AI robot lawyers can build a perception of mutual trust, then clients are more likely to hire AI robot lawyers.

Three developable characteristics of RLTAM in the future are recommended as follows.

  1. Derivation: The market for AI robot lawyers remains experimental. The RLTAM presented in this study can be combined with the theory of planned behavior and experimental design methods to simulate situations and explore the behavior of users and human lawyers. Through public hearings, more people can improve their understanding of AI robot lawyers.

  2. Macroscopic: The strategy model presented in this study can explore the macrotheoretical framework from the perspective of legal industry practice and then gradually establish theoretical reliability and validity by combining the literature and expert opinions. Combining industry experience in this way is helpful for coping with the rapid changes in AI in the future. This approach can also serve as a reference for doctors, designers, and other professionals.

  3. Guidance: This study can provide guidance to traditional lawyers. Human lawyers should not worry about being replaced by AI robot lawyers or reject and oppose the development of AI. Instead, they should focus on advancing legal analysis and the flexible application of law. Creativity, critical thinking, and other meaningful soft skills increase the irreplaceability of human lawyers. AI robot lawyers can in turn assist humans in completing the complex processes of information collection, case analysis, and prediction, thereby improving overall work efficiency.

Appendix 1

Appendix 2

  1. What do you know about lawyers?

  2. What is the current development of AI technology?

  3. What is the possibility that current AI technology can create AI robot lawyers? What kind of technology is needed? How can such a breakthrough be accelerated?

  4. Do you think AI robot lawyers need personification? Why?

  5. How do you think AI robot lawyers will work in society? What factors contribute to or deprive them of their job opportunities?

  6. What do you think are the advantages of AI robot lawyers? What are their disadvantages?

  7. What do you think are the differences between AI robot lawyers and human lawyers in terms of rationality, fairness and justice, success rate, privacy, and cost?

  8. What would be the impact on a court if AI robot lawyers and human lawyers appeared simultaneously?

  9. If the judge were AI, what would be the impact of the decision on the case? What is the likelihood of a judge being replaced by AI as opposed to a lawyer?

  10. Is there a possibility that AI robot lawyers will replace human lawyers in the future? What are the deciding factors?

  11. What do you think human lawyers should do under the competitive pressure of AI? What advice would you offer human lawyers?

Footnotes

Note. AI=artificial intelligence.

References

Abroud, A., Choong, Y. V., Muthaiyah, S., & Fie, D. Y. G. (2015). Adopting e-finance: decomposing the technology acceptance model for investors. Service Business, 9(1), 161182.CrossRefGoogle Scholar
Adamski, D. (2018). Lost on the digital platform: Europe’s legal travails with the digital single market. Common Market Law Review, 55(3), 719751.Google Scholar
Aguilo-Regla, J. (2005). Introduction: Legal informatics and the conceptions of the law. In V. R. Benjamins, P. Casanovas, J. Breuker, & A. Gangemi (Eds.), Law and the semantic web: Legal ontologies, methodologies, legal information retrieval, and applications (Vol. 3369, pp. 1824). Berlin: Springer-Verlag Berlin.CrossRefGoogle Scholar
Alam, F., Danieli, M., & Riccardi, G. (2017). Annotating and modeling empathy in spoken conversations. Computer Speech & Language, 50, 4061.CrossRefGoogle Scholar
Alarie, B., Niblett, A., & Yoon, A. H. (2016). Focus feature: Artificial intelligence, big data, and the future of law. University of Toronto Law Journal, 66(4), 423428. https://doi.org/10.3138/utlj.4005.CrossRefGoogle Scholar
Alarie, B., Niblett, A., & Yoon, A. H. (2018). How artificial intelligence will affect the practice of law. University of Toronto Law Journal, 68, 106124. https://doi.org/10.3138/utlj.2017-0052.CrossRefGoogle Scholar
Aletras, N., Tsarapatsanis, D., Preotiuc-pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2, e93.CrossRefGoogle Scholar
Arbib, M. A., & Fellous, J. M. (2004). Emotions: From brain to robot. Trends in Cognitive Sciences, 8(12), 554561. https://doi.org/10.1016/j.tics.2004.10.004.CrossRefGoogle Scholar
Arruda, A. (2016). An ethical obligation to use artificial intelligence: An examination of the use of artificial intelligence in law and the model rules of professional responsibility. The American Journal of Trial Advocacy, 40, 443.Google Scholar
Aryabarzan, N., Minaei-Bidgoli, B., & Teshnehlab, M. (2018). negFIN: An efficient algorithm for fast mining frequent itemsets. Expert Systems with Applications, 105, 129143. https://doi.org/10.1016/j.eswa.2018.03.041.CrossRefGoogle Scholar
Ashley, K. D. (2012). Teaching law and digital age legal practice with an AI and law seminar. Chicago-Kent Law Review, 88, 783.Google Scholar
Augello, A., Infantino, I., Manfre, A., Pilato, G., & Vella, F. (2016). Analyzing and discussing primary creative traits of a robotic artist. Biologically Inspired Cognitive Architectures, 17, 2231. https://doi.org/10.1016/j.bica.2016.07.006.CrossRefGoogle Scholar
Barton, B. H. (2014). The Lawyer’s monopoly-what goes and what stays. Fordham Law Review, 82(6), 30673090.Google Scholar
Bast, C. M., & Pyle, R. C. (2001). Legal research in the computer age: A paradigm shift? Law Library Journal, 93(2), 285302.Google Scholar
Ben-Shahar, O., & Porat, A. (2016). Personalizing negligence law. New York University Law Review, 91(3), 627688.Google Scholar
Bench-Capon, T., Araszkiewicz, M., Ashley, K., Atkinson, K., Bex, F., Borges, F., . . . Wyner, A. Z. (2012). A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. Artificial Intelligence and Law, 20(3), 215319.CrossRefGoogle Scholar
Bersoff, D. N., & Hofer, P. J. (1991). Legal Issues in Computerized Psychological Testing. Ethical conflicts in psychology. Retrieved from http://psycnet.apa.org/doi/10.1037/10329-000.Google Scholar
Bertolini, A., & Aiello, G. (2018). Robot companions: A legal and ethical analysis. Information Society, 34(3), 130140. https://doi.org/10.1080/01972243.2018.1444249.CrossRefGoogle Scholar
Bintliff, B. (1996). From creativity to computerese: Thinking like a lawyer in the computer age. Law Library Journal, 88(3), 338351.Google Scholar
Boynton, S. (2017). DoNotPay, ‘world’s first robot lawyer,’ coming to Vancouver to help fight parking tickets. Online News Producer Global News, p. 1. Retrieved 2018 from https://globalnews.ca/news/3838307/donotpay-robot-lawyer-vancouver-parking-tickets/.Google Scholar
Breazeal, C. (2003). Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, 59(1–2), 119155. https://doi.org/10.1016/s1071-5819(03)00018-1.CrossRefGoogle Scholar
Brougham, D., & Haar, J. (2018). Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ perceptions of our future workplace. Journal of Management & Organization, 24(2), 239257. https://doi.org/10.1017/jmo.2016.55.CrossRefGoogle Scholar
Brown, L. G. (1989). The strategic and tactical implications of convenience in consumer product marketing. Journal of Consumer Marketing, 6(3), 1319.CrossRefGoogle Scholar
Bryson, J., & Winfield, A. (2017). Standardizing ethical design for Artificial Intelligence and autonomous systems. Computer, 50(5), 116119. https://doi.org/10.1109/mc.2017.154.CrossRefGoogle Scholar
Canamero, L. (2005). Emotion understanding from the perspective of autonomous robots research. Neural Networks, 18(4), 445455. https://doi.org/10.1016/j.neunet.2005.03.003.CrossRefGoogle ScholarPubMed
Castell, S. (2018). The future decisions of RoboJudge HHJ Arthur Ian Blockchain: Dread, delight or derision? Computer Law & Security Review, 34(4), 739753. https://doi.org/10.1016/j.clsr.2018.05.011.CrossRefGoogle Scholar
Cavallo, F., Semeraro, F., Fiorini, L., Magyar, G., Sincak, P., & Dario, P. (2018). Emotion modelling for social robotics applications: a review. Journal of Bionic Engineering, 15(2), 185203. https://doi.org/10.1007/s42235-018-0015-y.CrossRefGoogle Scholar
Chandrinos, S. K., Sakkas, G., & Lagaros, N. D. (2018). AIRMS: A risk management tool using machine learning. Expert Systems with Applications, 105, 3448. https://doi.org/10.1016/j.eswa.2018.03.044.CrossRefGoogle Scholar
D’Amato, A. (1976). Can/should computers replace judges. The Georgia Law Review, 11, 1277.Google Scholar
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 9821003.CrossRefGoogle Scholar
Deedman, C., & Smith, J. (1991). The nervous shock advisor: A legal expert system in case-based law. New York: Pergamon Press.Google Scholar
Dekker, F., Salomons, A., & van der Waal, J. (2017). Fear of robots at work: The role of economic self-interest. Socio-Economic Review, 15(3), 539562. https://doi.org/10.1093/ser/mwx005.Google Scholar
Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697718. https://doi.org/10.1016/s1071-5819(03)00038-7.CrossRefGoogle Scholar
Evans, N., & Price, J. (2017). Managing information in law firms: Changes and challenges. Information Research, 22(1), 21.Google Scholar
Fells, R., Caspersz, D., & Leighton, C. (2018). The encouragement of bargaining in good faith - A behavioural approach. Journal of Industrial Relations, 60(2), 266281. https://doi.org/10.1177/0022185617741925.CrossRefGoogle Scholar
Flower, L. (2018). Doing loyalty: defense lawyers’ subtle dramas in the courtroom. Journal of Contemporary Ethnography, 47(2), 226254. https://doi.org/10.1177/0891241616646826.CrossRefGoogle Scholar
Fuller, M. A., Serva, M. A., & Baroudi, J. (2010). Clarifying the integration of trust and TAM in e-commerce environments: implications for systems design and management. IEEE Transactions on Engineering Management, 57(3), 380393.Google Scholar
Gadanho, S. C., & Hallam, J. (2001). Robot learning driven by emotions. Adaptive Behavior, 9(1), 4264. https://doi.org/10.1177/105971230200900102.CrossRefGoogle Scholar
Galeon, D., & Houser, K. (2017). An AI completed 360,000 hours of finance work in just seconds. Retrieved from https://futurism.com/an-ai-completed-360000-hours-of-finance-work-in-just-seconds/ Google Scholar
Gatteschi, V., Lamberti, F., Montuschi, P., & Sanna, A. (2016). Semantics-based intelligent human-computer interaction. IEEE Intelligent Systems, 31(4), 1121. https://doi.org/10.1109/mis.2015.97.CrossRefGoogle Scholar
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. Mis Quarterly, 27(1), 5190.CrossRefGoogle Scholar
Goodman, J. (2016). Meet the AI robot lawyers and virtual assistants. Retrieved from https://www.lexisnexis-es.co.uk/assets/files/legal-innovation.pdf.Google Scholar
Goodman-Delahunty, J., Granhag, P. A., Hartwig, M., & Loftus, E. F. (2010). Insightful or wishful: Lawyers’ ability to predict case outcomes. Psychology Public Policy and Law, 16(2), 133157. https://doi.org/10.1037/a0019060.CrossRefGoogle Scholar
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125130. https://doi.org/10.1016/j.cognition.2012.06.007.CrossRefGoogle ScholarPubMed
Greenleaf, G., Mowbray, A., & Chung, P. (2018). Building sustainable free legal advisory systems: Experiences from the history of AI & law. Computer Law & Security Review, 34(2), 314326. https://doi.org/10.1016/j.clsr.2018.02.007.CrossRefGoogle Scholar
Hak, R., & Zeman, T. (2017). Consistent categorization of multimodal integration patterns during human-computer interaction. Journal on Multimodal User Interfaces, 11(3), 251265. https://doi.org/10.1007/s12193-017-0243-1.CrossRefGoogle Scholar
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517527. https://doi.org/10.1177/0018720811417254.CrossRefGoogle ScholarPubMed
Handel, B., & Schwartzstein, J. (2018). Frictions or mental gaps: what’s behind the information we (don’t) use and when do we care? Journal of Economic Perspectives, 32(1), 155178. https://doi.org/10.1257/jep.32.1.155.CrossRefGoogle ScholarPubMed
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Khan, S. U. J. I. S. (2015). The rise of ‘big data’ on cloud computing: Review and open research issues. 47, 98115.CrossRefGoogle Scholar
Hibbeln, M., Jenkins, J. L., Schneider, C., Valacich, J. S., & Weinmann, M. (2017). How is your user feeling? Inferring emotion through human-computer interaction devices. MIS Quarterly, 41(1), 1.
Hildebrandt, M. (2018). Algorithmic regulation and the rule of law. Philosophical Transactions of the Royal Society A, 376(2128), 11. https://doi.org/10.1098/rsta.2017.0355
Hilt, K. (2017). What does the future hold for the law librarian in the advent of artificial intelligence? Canadian Journal of Information and Library Science, 41(3), 211–227.
Hofree, G., Ruvolo, P., Reinert, A., Bartlett, M. S., & Winkielman, P. (2018). Behind the robot's smiles and frowns: In social context, people do not mirror android's expressions but react to their informational value. Frontiers in Neurorobotics, 12, 11. https://doi.org/10.3389/fnbot.2018.00014
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
Kim, J. B. (2012). An empirical study on consumer first purchase intention in online shopping: Integrating initial trust and TAM. Electronic Commerce Research, 12(2), 125–150.
Kralik, J. D., Mao, T., Cheng, Z., & Ray, L. E. (2016). Modeling incubation and restructuring for creative problem solving in robots. Robotics and Autonomous Systems, 86, 162–173. https://doi.org/10.1016/j.robot.2016.08.025
Kurzweil, R. (2000). The age of spiritual machines: When computers exceed human intelligence. New York: Penguin Books.
LawGeex (2018). AI vs lawyers. Retrieved 2018 from https://www.lawgeex.com/AIvsLawyer/
Lee, W. H., & Kim, J. H. (2018). Hierarchical emotional episodic memory for social human robot collaboration. Autonomous Robots, 42(5), 1087–1102. https://doi.org/10.1007/s10514-017-9679-0
Lopez, A. C. (2017). The evolutionary psychology of war: Offense and defense in the adapted mind. Evolutionary Psychology, 15(4), 23. https://doi.org/10.1177/1474704917742720
Marcus, R. L. (2008). The electronic lawyer. DePaul Law Review, 58, 263.
Masuyama, N., Loo, C. K., & Seera, M. (2018). Personality affected robotic emotional model with associative memory for human-robot interaction. Neurocomputing, 272, 213–225. https://doi.org/10.1016/j.neucom.2017.06.069
McClure, P. K. (2018). 'You're fired,' says the robot: The rise of automation in the workplace, technophobes, and fears of unemployment. Social Science Computer Review, 36(2), 139–156. https://doi.org/10.1177/0894439317698637
McGinnis, J. O., & Pearce, R. G. (2014). The great disruption: How machine intelligence will transform the role of lawyers in the delivery of legal services. Fordham Law Review, 82(6), 3041–3066.
McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20(2), 119–136.
McNamar, R. T. (2009). Methods, systems and computer software utilizing XBRL to identify, capture, array, manage, transmit and display documents and data in litigation preparation, trial and regulatory filings and regulatory compliance. U.S. Patent No. 20090030754.
Menne, I. M., & Schwab, F. (2018). Faces of emotion: Investigating emotional facial expressions towards a robot. International Journal of Social Robotics, 10(2), 199–209. https://doi.org/10.1007/s12369-017-0447-2
Mommers, L., Voermans, W., Koelewijn, W., & Kielman, H. (2009). Understanding the law: Improving legal knowledge dissemination by translating the contents of formal sources of law. Artificial Intelligence and Law, 17(1), 51–78.
Moses, L. B., & Chan, J. (2014). Using big data for legal and law enforcement decisions: Testing the new tools. University of New South Wales Law Journal, 37, 643.
Nissan, E. (2018). Computer tools and techniques for lawyers and the judiciary. Cybernetics and Systems, 49(4), 201–233. https://doi.org/10.1080/01969722.2018.1447766
Ojha, S., Williams, M. A., & Johnston, B. (2018). The essence of ethical reasoning in robot-emotion processing. International Journal of Social Robotics, 10(2), 211–223. https://doi.org/10.1007/s12369-017-0459-y
Olteteanu, A. M., Falomir, Z., & Freksa, C. (2018). Artificial cognitive systems that can answer human creativity tests: An approach and two case studies. IEEE Transactions on Cognitive and Developmental Systems, 10(2), 469–475. https://doi.org/10.1109/tcds.2016.2629622
Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10(4), 227–236.
Papakonstantinou, V., & De Hert, P. (2018). Structuring modern life running on software. Recognizing (some) computer programs as new 'digital persons'. Computer Law & Security Review, 34(4), 732–738. https://doi.org/10.1016/j.clsr.2018.05.032
Pavlou, P. A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce, 7(3), 101–134.
Pham, Q. C., Madhavan, R., Righetti, L., Smart, W., & Chatila, R. (2018). The impact of robotics and automation on working conditions and employment. IEEE Robotics & Automation Magazine, 25(2), 126–128. https://doi.org/10.1109/mra.2018.2822058
Pointeau, G., & Dominey, P. F. (2017). The role of autobiographical memory in the development of a robot self. Frontiers in Neurorobotics, 11, 18. https://doi.org/10.3389/fnbot.2017.00027
Popple, J. (1991). Legal expert systems: The inadequacy of a rule-based approach. Australian Computer Journal, 23, 8.
Prakken, H. (2005). AI & law, logic and argument schemes. Argumentation, 19(3), 303–320.
Ramirez-Gallego, S., Fernandez, A., Garcia, S., Chen, M., & Herrera, F. (2018). Big data: Tutorial and guidelines on information and process fusion for analytics algorithms with MapReduce. Information Fusion, 42, 51–61. https://doi.org/10.1016/j.inffus.2017.10.001
Reif, J. A. M., & Brodbeck, F. C. (2017). When do people initiate a negotiation? The role of discrepancy, satisfaction, and ability beliefs. Negotiation and Conflict Management Research, 10(1), 46–66. https://doi.org/10.1111/ncmr.12089
Remus, D., & Levy, F. (2017). Can robots be lawyers: Computers, lawyers, and the practice of law. Georgetown Journal of Legal Ethics, 30, 501.
Riesen, M., & Serpen, G. (2008). Validation of a Bayesian belief network representation for posterior probability calculations on national crime victimization survey. Artificial Intelligence and Law, 16(3), 245–276.
Rissland, E. L., Ashley, K. D., & Loui, R. P. (2003). AI and law: A fruitful synergy. Artificial Intelligence, 150(1–2), 1–15.
Roca, J. C., Chiu, C. M., & Martinez, F. J. (2006). Understanding e-learning continuance intention: An extension of the Technology Acceptance Model. International Journal of Human-Computer Studies, 64(8), 683–696. https://doi.org/10.1016/j.ijhcs.2006.01.003
Schoenick, C., Clark, P., Tafjord, O., Turney, P., & Etzioni, O. (2017). Moving beyond the Turing test with the Allen AI science challenge. Communications of the ACM, 60(9), 60–64. https://doi.org/10.1145/3122814
Strnad, J. (2007). Should legal empiricists go Bayesian? American Law and Economics Review, 9(1), 195–303.
Valente, A., & Breuker, J. (1994). Ontologies: The missing link between legal theory and AI & law. Legal Knowledge Based Systems JURIX 94, 138–150.
von der Lieth Gardner, A. (1987). An artificial intelligence approach to legal reasoning. Cambridge, MA: MIT Press.
von Hippel, E. (1994). Sticky information and the locus of problem solving: Implications for innovation. Management Science, 40(4), 429–439. https://doi.org/10.1287/mnsc.40.4.429
Wang, D., Wang, P., & Shi, J. Z. (2018). A fast and efficient conformal regressor with regularized extreme learning machine. Neurocomputing, 304, 1–11. https://doi.org/10.1016/j.neucom.2018.04.012
Wichmann, A., Korkmaz, T., & Tosun, A. S. (2018). Robot control strategies for task allocation with connectivity constraints in wireless sensor and robot networks. IEEE Transactions on Mobile Computing, 17(6), 1429–1441. https://doi.org/10.1109/tmc.2017.2766635
Wiese, E., Metta, G., & Wykowska, A. (2017). Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8, 19. https://doi.org/10.3389/fpsyg.2017.01663
Yang, Z. L., & Peterson, R. T. (2004). Customer perceived value, satisfaction, and loyalty: The role of switching costs. Psychology & Marketing, 21(10), 799–822. https://doi.org/10.1002/mar.20030
Zeleznikow, J. (2002). An Australian perspective on research and development required for the construction of applied legal decision support systems. Artificial Intelligence and Law, 10(4), 237–260.
Zlotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100, 48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008
Figure 1. Artificial intelligence robot lawyer technology acceptance model