
Conversation 3 - The Entrepreneur’s Perspective on Innovation

Published online by Cambridge University Press:  06 February 2025

Felix Steffek
Affiliation:
University of Cambridge
Mihoko Sumida
Affiliation:
Hitotsubashi University

Summary

This chapter aims to understand the entrepreneurial perspective on legal innovation. How do those who create legal innovation make the services and products that improve the way law is used? The conversation centres on topics such as passion for innovation, identifying needs for new products and services, dealing with failure, keys to success in the market, innovation networks, regulatory approaches to innovation and innovation in the financial services industry. The chapter also considers aspects of innovation that are particular to legal services. Among them are the creation and availability of legal datasets for the application of artificial intelligence and the transferability of legal innovation to other jurisdictions with different laws.

Type
Chapter
Information
Legal Innovation, pp. 64–87
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/
Speakers

Ludwig Bull, Holli Sargeant and Wojtek Buczynski

Moderators

Felix Steffek and Mihoko Sumida

Concluding Conversation

Felix Steffek and Mihoko Sumida

Comments

Ryutaro Ohara and Yuya Ishihara

Questions for Further Thought

Felix Steffek

Introduction

Sumida: Let us start the third session on ‘The Entrepreneur’s Perspective on Innovation’. I pass the baton to Felix.

Steffek: A very warm welcome to everyone. Today, we are looking at legal innovation from the entrepreneurial side. We will have three speakers. First, we will listen to an entrepreneur. Then, we will hear about PhD research projects, looking at aspects of the use of artificial intelligence (AI) in the commercial world.

It is a great pleasure to welcome Ludwig Bull. He is the CEO of CourtCorrect, a LawTech start-up based in London. Yesterday, I told you about the Case Crunch Lawyer Challenge. Ludwig Bull was one of the people responsible for making that happen. In addition to his entrepreneurial activities, Ludwig also engages in research and teaching. In particular, he is a research partner of the joint research project of Hitotsubashi University and the University of Cambridge on ‘Legal Systems and Artificial Intelligence’. We invited Ludwig to tell us about how he innovates law as an entrepreneur, and we are very grateful that he accepted our invitation. Ludwig, please, the virtual floor is all yours.

Bull: Thank you, Felix. Good evening. I am Ludwig Bull, the founder and CEO of CourtCorrect. I am now twenty-five years old. Three years ago, while studying law at university, I started to learn programming on my own initiative because I was interested in exploring the mathematical properties of legal theory using quantitative methods. As I read large numbers of cases and papers every day at university, I thought that more could be done with AI. However, there was no useful dataset that could be used in this context. Data are essential for the use of AI, but there was no dataset. So, I decided to create a tool to find and collect legal data from the internet. That became the database of my first start-up, from which I created many AI applications. By the way, did you see the Lawyer Challenge reported on the BBC?

Sumida: Yes. Last time, Professor Steffek gave us an overview.

Bull: The Case Crunch Lawyer Challenge is based on the dataset I created. Datasets are probably the most important thing in AI. For example, when you want to establish a legal technology start-up, datasets are going to be really important. Am I right in thinking that obtaining legal datasets is a bit difficult in Japan?

Sumida: Yes, that is correct. In Japan, legal datasets are available on a commercial basis. Courts have also created datasets, but none of them covers all decisions, and the percentage of cases included is quite low. The datasets available are limited to judgments that contain important issues. So, they contain only a selected collection of relatively complex cases.

Bull: That is a bit of a problem, because you always need a sufficient amount of data. By the way, my major was law, not technology. But when I wanted to study law with AI, I did not know enough about technology, so I decided to study programming online while studying law at university. For example, Stanford University has a very good online course on programming and another on AI. It was hard during that period because there was just so much to study, but learning is fun. Even if you do not know about an area, for example, technology or programming, there is a lot of information on the internet, and if you are willing to learn, there are many good online courses. The knowledge you need is not hard to find with such courses and resources. In fact, it can be an enjoyable experience.

Going back to entrepreneurship, when we were looking for data to use for AI, we found a large legal dataset, but it was unorganised and not very user-friendly. So, we built an AI system to organise the data. For example, we let the AI read the legal cases in the dataset and extract the lawyers’ names. It turned out to be a really good dataset allowing good-quality predictions. As a result, the creation of this dataset led to the start of the business. Three years have already passed since then, and the dataset is still active. We also use it in research projects at the University of Cambridge and Hitotsubashi University. It is very useful for innovation, and we plan to continue using it in the future.
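
As an illustration of the kind of organisation step described here, the sketch below pulls person names out of raw judgment text with an off-the-shelf named-entity recogniser. It is a minimal example under stated assumptions (the spaCy library and its small English model are installed), not CourtCorrect’s actual pipeline.

```python
# Minimal illustration (not CourtCorrect's actual pipeline): extract person
# names, such as counsel, from the raw text of a judgment using spaCy's
# pretrained named-entity recogniser.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_person_names(case_text: str) -> list[str]:
    """Return the distinct PERSON entities found in a judgment text."""
    doc = nlp(case_text)
    return sorted({ent.text for ent in doc.ents if ent.label_ == "PERSON"})

sample = "Before Mr Justice Smith. Mr John Doe appeared for the claimant."
print(extract_person_names(sample))  # e.g. ['John Doe', 'Smith']
```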

Passion

Bull: Many law firms and research projects now use legal datasets, but two years ago, I was the only one using them. I am really happy that the dataset has grown. I believe that the most important thing in my work is passion. Passion made me start my own business, and I am still working with passion. That is why I love my job.

At the same time, in my start-up, I experiment a lot. Every day, everything is an experiment, such as the dataset collection. Feedback on experiments is really important because start-up work is always an experiment.

If you are thinking of starting a business, passion is the most important thing. Find something that you are passionate about. Follow your passion and start your start-up. If you have passion, experimentation and a lot of feedback, you can create innovations. The beginning is passion.

In my case, I started my first company with a friend. We grew the company by creating a dataset and providing that dataset to law firms. Then, we proved the technology and its reliability in a competition. That was the Case Crunch Lawyer Challenge. As Felix explained in the last session, this challenge created a competition between lawyers and our AI.

The challenge earned us new clients and a very good reputation. But, in fact, not many companies approached us. The reason was that our service was quite expensive. The licence fee for one year was €40,000, which is about £35,000. It was so expensive that it was unaffordable for many people, especially individual users. This was not the way I wanted to go. I wanted to bring technology to all citizens. However, it was difficult to change the direction of the company, so I started another new company, CourtCorrect, of which I am now the CEO.

Two Keys to Business Success

Bull: When CourtCorrect was set up, one of my friends and business partners was against it. His view was that a small number of customers was fine. Such differences in business policy are common in start-ups. My friend and I had different visions of the company, and our opinions remained divided. I think passion and ideas are important drivers to overcome such differences in opinion and keep the business going.

At the same time, running a business requires a vision. When you set up a new company, it is often small at first, with one or two colleagues, if any, and often you are alone. The vision you want to achieve through your company is really important, as this is what motivates you to keep running it. Vision is the shape of the future. You cannot create innovation without a vision.

In my first company, my friends and I did not have the same vision. The company was born before we had a vision and grew very quickly, partly because the way we used technology to solve problems was simple and straightforward. But I wanted to develop the business true to my passion, so I started a new company. Although the start of the second company was more difficult than the first, it is now growing well.

CourtCorrect is a robot lawyer that functions as a search engine, like Google. People are currently researching WhatsApp and Telegram issues because they want better data protection. Our search engine scanned WhatsApp’s legal documents, summarised them into a clear explanation and uploaded the result. For example, end-to-end encryption is a key feature of WhatsApp’s secret chat, but experts found that its privacy policies are actually the same as before. Users can search CourtCorrect to find the new privacy policies and learn about the relevant legal information.

We also have a lot of user information and have understood essential trends. For example, users are very much interested in the issue of the United Kingdom leaving the European Union and digital problems connected to services such as WhatsApp and Facebook. They search for these issues and related words very frequently.

Many users now search for legal issues using robot lawyers, and we provide legal explanations as clearly as possible to help them understand. The number of searches has reached one million, and we are very pleased that we are helping so many users through our legal services.

There is a term in the United Kingdom called ‘access to justice for all’, and that is what we are trying to achieve. Many people in the United Kingdom do not ask for a lawyer because they consider lawyers to be expensive and they perceive law as a bit scary. They do not really understand law. It is probably the same in Japan. This perception of law is really not a good thing. Law is important for everyone. My dream is to use new technology to teach everyone about legal issues. That is my passion. Now that I can do that, I really enjoy it. I think using legal technology to teach people law is a good idea. Do you have any questions so far?

A Growth Challenge for AI Lawyers Is the Lack of Data

Sumida: Could you please raise your hand if you have a question?

Student A: Yes, thank you. You referred to a giant dataset which you found fun to study. While using the dataset to study, are there any cases that left a deep impression on you?

Bull: We made a lot of interesting discoveries while analysing the dataset. For example, when you analyse the judgments, you discover characteristics of the judges in charge of the cases. US Supreme Court judges, for instance, use language grounded in morality and ideology; perhaps these judges are less fond of doctrinal arguments. We are always looking at new cases, and new cases and new legislation are coming out all the time. It is really a challenge to know exactly what the law is at any given point in time.

I was also surprised that the dataset-based AI reached a prediction accuracy of 70–80 per cent. Artificial intelligence is already really good.

We have analysed 400,000 cases over the past five years. We have also extracted the names of the lawyers from all 400,000 cases and analysed the results. The results include the number of cases litigated, won, lost, withdrawn etc. We can, for example, identify the success rate for each lawyer. It was an interesting analysis, but it was not well received by all lawyers.
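
A toy example of the kind of aggregation mentioned above: given a structured table of cases with the lawyer and the outcome of each, per-lawyer statistics such as a success rate follow from a simple group-by. The column names and rows are invented for illustration and assume the pandas library is available.

```python
# Toy aggregation of per-lawyer outcomes; the column names and rows are invented.
import pandas as pd

cases = pd.DataFrame({
    "lawyer":  ["A. Jones", "A. Jones", "B. Patel", "B. Patel", "B. Patel"],
    "outcome": ["won", "lost", "won", "won", "withdrawn"],
})

# One row per lawyer, one column per outcome.
summary = cases.groupby("lawyer")["outcome"].value_counts().unstack(fill_value=0)
summary["success_rate"] = summary["won"] / summary.sum(axis=1)
print(summary)
```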

Student B: What was the most difficult part of creating an AI lawyer?

Bull: Current robot lawyers using AI are at a basic level. They can answer simple cases quickly and accurately, but when the circumstances become more complex, they start to struggle. Human lawyers have a wide range of work; some of them are legally complex and difficult, while others are simple and easy to understand. At present, we are using robot lawyers to do these simple but time-consuming and labour-intensive things. This will probably change in the future, since AI will certainly grow and get better if it is allowed to learn.

Human lawyers have a lot of work to do: they read documents, talk to clients and negotiate contracts. Today’s robot lawyers can read documents in datasets but cannot really talk to clients. Until AI evolves a bit more, what we can currently handle are basic text-based tasks.

What AI needs to grow is training data. The problem is that there is a lack of these data. There are a large number of law firms in the United Kingdom, each of which has its own record of previous court cases. There are enough data, but they are not being shared. If such data could be collected and used as training data, AI would see tremendous growth. I started studying law and AI a couple of years ago. Between then and today, technology has progressed rapidly. I am sure that legal technology will continue to grow, and we will be able to do things that seem impossible now. So, I recommend studying AI and technology.

Student C: You mentioned that you gather a lot of information on the internet, but how can you gather accurate information on case law when the information on the internet has some variation in credibility?

Bull: Information obtained on the internet is indeed not always reliable. That is a really good observation. That is why I used a public source and built the dataset based on information created by the government and courts. Hence, we have a good-quality dataset.

Sumida: Is it official information?

Bull: Yes, it is. But there are also problems with ‘official data’. There are 400,000 court cases. It is quite probable that there are still small mistakes in the data provided by official sources. Whenever you use information, you need to check the source and make sure that the data are correct.

Different Laws, but the Conflicts Remain the Same

Sumida: The following question was asked in the chat: ‘I have a lot of respect for Ludwig because he uses many languages and is well versed in both law and technology, while I am just trying to understand the differences between the laws and systems in Japanese, English and my mother tongue, Vietnamese. When you are expanding CourtCorrect’s services to foreign markets by making them multilingual, do you see any challenges in customising them to other languages and other countries’ legal systems?’

Bull: Laws are rooted in a country’s culture and history and are country-specific systems. Laws in the United Kingdom and France are different, and laws in Japan and Germany are not the same. Yet, everywhere in the world, the roots of conflict are the same: shopping, insurance, crime, etc. The legal systems may be different, but the essence of the problem is the same. In legal innovation, the focus is on the essence of the problem.

Artificial intelligence can understand Vietnamese, Japanese, German and English, of course. However, it needs data to understand these languages. The same applies to services. If you want to create a legal technology application in Vietnam, you need data in Vietnamese. Once the data are ready, it is quite straightforward to expand to other countries around the world. If we have an opportunity, we would like to experiment in Vietnam.

Experiment, Experiment, Experiment!

Bull: In fact, we are now experimenting because our clients have found new problems. Experimentation is the most important part of entrepreneurship. Passion and experimentation are really important. Innovation is always based on experiments. We listen to feedback and then create a new product.

At CourtCorrect, we talk a lot with our clients and users. That is really interesting and fun. We learn a lot. The difficulty here is that – while it is very important to listen to feedback in order to innovate – sometimes the feedback is not correct and misrepresents the problem. In such cases, the entrepreneur needs to act a bit like a doctor. What I mean by this is that the entrepreneur has to ‘examine’ the feedback like a doctor. People see symptoms. The entrepreneur’s job is to find the disease. The user tells you the symptoms, but you have to find the cause. This is my story.

Engineers Are Essential for Modern Entrepreneurship

Bull: For those interested in starting a business, I would like to talk a little about founding a company. In today’s world, an essential part of starting a business is software engineering. In my case, I did my own software engineering. Most of my work was not professional lawyering but engineering. If you are going to start a legal technology business with a friend who is a computer science student, tell them a lot about your vision. If you plan to do it on your own, do as I did and study through an online course. Engineers and vision – those are the most important things.

In my case, I became an engineer myself, but I did not have enough time to talk about my vision with my business partner. If partners do not have the same vision, there can be problems even if the business runs well. Misalignment of vision can be a big problem. So, talk to each other about your vision, as you can never discuss it enough.

Actually, getting started is really easy in the United Kingdom, where there is a government online portal that you can access, and it only costs £14 to set up a company. What about Japan?

Sumida: A procedure that can be completed online has recently been launched, but it involves a notary public for the certification of the articles of association. The notary system in Japan is a measure to prevent corporations from being misused for money laundering and other criminal activity. However, it is possible to found a company online.

Bull: Starting a business is very easy nowadays, with low costs, but the real start is creating and selling a product. That is the real beginning. Once you have finalised your product, you have to find clients and get them to buy your product. It can be fun, but it can also be difficult.

And if you and your friends start a company and create a software product, do not be satisfied with the finished product, but rather keep trying to improve it afterwards. The way to do that is to listen to feedback from clients and users. That is really important. Constantly listen to feedback and keep improving; as the product gets better, the company will probably grow. Always look at the product, talk about it and keep improving it. Do not be afraid to put in the time and money to do that. Thanks to this, CourtCorrect has grown.

Investor funding can help your company or product grow faster. If investors give you more money, you can focus on research and experimentation without having to worry about the cost of experimentation, which may allow you to grow faster. However, investors in start-ups also expect a return, typically ten times their investment in roughly five years. Entrepreneurs need to let investors know that their business is worth it. This is what it means to talk about the future that must be realised – to talk about your vision.

By the way, have you heard of Airbnb? It is a popular company. Here is the material Airbnb prepared for investors in its first presentation. Let us take a look at a few things. First of all, in the introduction, vision is emphasised. Airbnb’s innovation is to use a platform to offer cheaper accommodation to guests and money to hosts. Here is a new vision, a ‘share culture’ that hotels have not been able to cover so far. In its first presentation, Airbnb presents its original product and demonstrates that the market has huge potential. The latter is important for investors who aim for ten times their investment in five years. Now, Airbnb has grown into a very big company within an $80 billion rental market.

Growth in the Technology Industry

Bull: Finally, let us talk about the technology sector market as a whole. Technology is becoming more and more important every day and is growing faster than ever. Take the capital expenditure of giant technology companies. Amazon, for example, has almost tripled its technology expenditure in the last few years to 32 billion USD. In 2020, the COVID-19 pandemic started, but it had very little impact on the technology industry. This also applies to my company. CourtCorrect is a technology company, and the pandemic did not create real business problems for us. Technology is very important for all sorts of business.

Take another example. The share price of Alphabet, which owns Google, Waymo, and YouTube, has strongly grown since the start of the pandemic. Compare this with the struggling growth rates of IAG, which owns British Airways, over the same period. You can see how much more growth there is in technology companies.

Technology will become even more important in the legal sector. It will be really important for lawyers, judges, engineers, everyone. It will be used everywhere in the legal world. Without an understanding of technology, it will be difficult to understand the legal world.

There are strong companies in Japan. Have you heard of Vision Fund? Vision Fund is a Softbank fund, the biggest start-up fund in the world. It has a capital of more than 100 billion USD. Some of the most interesting start-ups that the fund has invested in are Airbnb and Uber. Masayoshi Son, the CEO of Softbank, has a great vision. I think he is a genuine investor.

That is all I would like to say today. Please, everyone, more than ever, learn about technology, whether online or at university. Technology will be an essential part of the world of the future.

Don’t Innovate for Innovation’s Sake

Steffek: Thank you, Ludwig. That was great. I noted one sentence that you said. You said, ‘My dream is to use new technology to teach everyone about legal issues.’ When you said this, I wondered whether we are doing the same thing at the end of the day, the two of us, just in different contexts: you in business and I at university. I really enjoyed your presentation taking the perspective of the entrepreneur.

Now, it is time to move on to Holli Sargeant. It is a great pleasure introducing her. Holli is a PhD student at the Faculty of Law of the University of Cambridge. Before coming to Cambridge, Holli took her undergraduate education in law and international relations at Bond University in Australia and then worked as a solicitor at Herbert Smith Freehills. There, she was seconded to the Australian Human Rights Commission to contribute to the Human Rights and Technology Project. Holli will be so kind to present some ideas, insights and questions from her PhD, and then, we will have an opportunity to ask questions. Holli, over to you.

Sargeant: Good morning to all of you. Thank you so much to Professor Sumida and to Professor Steffek for inviting me to give a brief overview of what I am researching for my PhD. Hopefully, this offers you some interesting insight following Ludwig’s presentation about how you can explore different areas in a variety of different ways.

By way of introduction, I did my undergraduate studies in Australia: a double degree in law and international relations, with a particular focus on human rights. In my final year, I went to Singapore, which is where I really discovered the importance of law and technology while exploring the LawTech sector, which is currently booming there. I spent a year in Singapore and saw many start-ups facing some of the challenges that Ludwig outlined: trying to find their vision while also solving important problems for lawyers, rather than just innovating for the sake of innovating.

I then moved back to Australia, where I started private practice as a trainee with Herbert Smith Freehills. I spent a lot of time practising law in relation to digital technology. Many businesses, whether mining companies or banks, are spending a lot of time building a stronger technology focus into their teams, irrespective of whether they are traditional technology companies. All these experiences came full circle when I was seconded to the Australian Human Rights Commission and worked with them to draft a report on how technology actually impacts human rights. That was a huge undertaking, and it took us across a lot of different sectors and types of work, which was very interesting exposure. This experience led me to want to explore these questions at an academic level, to dig deeper and spend some more time teaching myself new skills.

Against this background, I am interested in the bigger research question of how law should adapt to AI. What I am trying to establish through this research is where the shortcomings of current regulation lie and which areas do not need to change because the law can be neutral to technology. But there are other areas of law that may be inconsistent and fail to prevent harms when we look at different economic and human rights interests. I wanted to take a broader approach in my PhD research and use a number of different research methods. You may or may not have come across some of these methods already in your undergraduate studies. I have picked three key ones that I wanted to explore, and I will walk you through some of the logic behind them.

The first method is working through an economic analysis. Artificial intelligence systems exist in a unique marketplace where you have a lot of different relationships. As Ludwig described, data are important, and data sometimes come from the company or sometimes from elsewhere. Sometimes consumers are direct customers of your company, and sometimes they are more tangentially connected. I wanted to look at that network of relationships and analyse where outcomes are optimal and where they are not. I have a visual which I think can be helpful to understand it. I use economic analysis to show where there may be impacts on human rights or, perhaps more broadly, on some of these liberal democratic norms, such as access to justice and other important values.

Where we highly value personal interests in privacy and autonomy, we want these to be protected even where there is an economic incentive to do otherwise. This is where a normative analysis comes in. Finally, I use these prior methods to inform an analysis of the role and the application that positive law currently has to AI. My PhD research looks at these issues more broadly and investigates how law currently works and how it might need to change.

Balancing Economic Benefits and Human Rights

Sargeant: To understand the intersection between law and AI, it is helpful to understand the various actors involved in designing and deploying AI. This network of actors is what a lot of people call the digital economy. I am a visual person, so I hope that Figure 3.1 is a helpful way to try and conceptualise some of the connections and relationships that you have in this new and emerging marketplace.

Figure 3.1 Network of relationships and interests

Figure 3.1 presents the most relevant stakeholders. You have corporations, each of which involves people with significant interests and roles to play: shareholders, managers, boards of directors and creditors. In relation to start-ups, you might have a different set of stakeholders, as Ludwig mentioned. You have investors who advise companies in particular directions, and then you have founders like Ludwig, who have their own set of interests that are a bit different from those of a normal employee within a big company.

Then, when you work through the system, you start to see that there is a difference between companies that supply AI, whether they develop it themselves, or outsource it, and the buyers of AI systems. I have pulled out some of these key relationships in Figure 3.1 to try and really dig into what the behaviours and interests are across these parties and how we analyse them.

Let us take an example where you have a start-up that has its founder who, obviously, believes in its vision and wants to push it in a particular direction. You have employees who help develop and build the AI system. And then, of course, you have other economic stakeholders such as investors, creditors and managers. They might build this piece of software and then sell it to a buyer, which here, as an example, might be a big platform corporation – such as Alphabet or Softbank or one of the other big tech companies which we have talked about today. They have their own separate managers, creditors and shareholders, and then, of course, they have their customers, who might be either a direct purchaser of the product or another company, which is a commercial client.

In looking at these relationships, I want to explore what the different interests and behaviours are, the way the actors use the information available to them and the way they contract privately. In other words, I undertake an economic analysis of the relationships involved. I then aim to connect these insights to how the economic behaviours of these actors might infringe on the human rights interests of the consumers or other individuals connected to these transactions. That is what I am trying to do with the exploration of different methods. Human rights, if we think about it, are interests we protect because we value them so highly that we say one cannot pursue an economic outcome to the detriment of human rights. What I am trying to show through this analysis is the importance of achieving an optimal outcome in which both the start-up and the big tech company achieve their goals, but not in a way that harms the human rights of their consumers or of other individuals whose data they might be using or who are users of these systems.

Following these two pieces of analysis, I plan to look at the way law currently works, whether there are sufficient legislative frameworks in different countries, whether we can just rely on the protections under contracts and private ordering or whether we need recommendations about the way we regulate these kinds of relationships. I hope, in the end, to bring together policy-based recommendations or principles that will guide lawmakers as regulation emerges in this unique set of relationships. It is a challenging exercise to bring together all of these really interesting considerations.

It is hard to bring all these ideas together into a short presentation, but I hope that this sparks some interest in you and gives you a few ideas to think about in relation to law and AI.

Making Law Accessible to Vulnerable Groups

Steffek: Thank you very much, Holli. You mention access to justice as a human right or procedural right. Applications of AI, for example those Ludwig mentioned, impact human and procedural rights in two different ways. On the one hand, access to justice gets cheaper, so more people have access to justice. On the other hand, AI is currently limited and has trouble answering very complex questions. Thus, while more people have access to justice as it gets cheaper, the quality differs. Currently, the advice that some LawTech applications provide is not very well tailored to the specific question at hand; rather, more generic legal advice is provided. Hence, on the one hand, access to justice is provided to more people, but on the other hand, the quality provided may be somewhat lower. I find that a fascinating aspect of your research.

Sargeant: Yes, absolutely. Access to justice is very important and very interesting, and as you say, there is only so much you can deliver with technological solutions, such as a chatbot, when people have really complex legal challenges. Hopefully, technology does at least start to bridge barriers in some ways.

Steffek: I am sure you are aware of the saying that ‘Access to justice is open to everyone like the Ritz.’ If you have the money, then you will get fantastic advice, but not everyone can go to the Ritz. Thank you very much again, Holli.

We are now ready to move on to Wojtek. Wojtek is an AI regulation consultant during the day, and at night, he becomes a PhD student at the University of Cambridge. He is currently working on regulating AI as regards investment management. Wojtek, we are looking forward to your presentation.

Should the Financial Industry Be Regulated?

Buczynski: Moshi-moshi, good evening, everyone. It is a pleasure and an honour to be here. Many thanks to Professor Sumida and Professor Steffek for the opportunity. My name is Wojtek Buczynski and, as Professor Steffek kindly introduced me, I live a double life. During the day, I work as a financial technology and regulation consultant, and at night, I am a PhD student researching the ethics and applications of AI in financial services. I have always had an interest in law, probably because my mother is a lawyer. My PhD is quite interdisciplinary: it draws on several disciplines, and my supervisors come from different fields as well. For my current project, I am looking at AI regulations for financial services, more specifically in my own industry, investment management.

My project mainly has a commercial focus. I am looking at the regulatory landscape of AI in financial services, specifically in investment management. We have all heard about the great promise and great excitement of AI in various industries. Financial services is one of the industries that is very optimistic about AI and has a lot of hopes pinned on it. However, one of the challenges is that there are no clear regulations regarding AI, and as you probably know, in financial services we are quite sensitive about regulations since the financial crash of 2008. So, there is this uncertain situation in which it is not clear what regulations apply or if any regulations apply at all. This lack of clarity is a challenge for implementation, and therefore, AI is not implemented as much as it could be because there are concerns about regulation. This is where my work steps in, hopefully.

My work has three primary angles. First, I am looking at the existing regulations. Some of them you may have heard about – GDPR for privacy, MIFID II for investments and a couple of others specifically for investment management. I am also looking at a number of proposed regulations. Currently, there are about fifteen proposed regulations worldwide that pertain to AI specifically in financial services or investment management. Interestingly, Japan, which is stereotypically seen as a hyper-technological and very innovative country, is not one of the countries currently proposing regulation. The Japanese regulator, the Financial Services Agency, does not actually have any publication or consultation on AI in financial services – at least not yet. However, there are about fifteen regulators worldwide that have published regulation proposals for AI in finance. The more original part of my work is to identify the different themes that emerge from the existing and proposed regulations. I am interested in establishing, for example, whether there are a lot of common themes. I will also try to propose, from my own perspective and my own experience, what should be included in a comprehensive regulation of AI in financial services. Possible topics are, for example, transparency and explainability.

What Is Needed to Clarify the Regulations

Buczynski: Figure 3.2 shows the regulatory themes that have emerged so far from my research. Some of them may look familiar to you. There are topics such as transparency, explainability and security. All of these issues are relevant within the context of the financial services industry, but you can see that almost all of them are quite universal and applicable to other industries as well. There is a lot of interest and focus on skills in financial services because there is a huge lack of technology skills in general, and even more so in AI; in particular, there is a dramatic shortage of AI skills in financial services. As a result, regulators are interested in and concerned about making sure that people who use AI actually understand AI. The more you investigate regulatory themes, the more you identify. Globally, we have approximately ten to twenty regulatory themes emerging from different jurisdictions. One aspect that is spoken about a lot in AI, regardless of the industry, is AI ethics. Discrimination, bias and racism are other important themes, as we have seen some very unfortunate examples, such as AI that favours white males or, conversely, is much worse at recognising black or female faces. There have also been some unfortunate cases concerning discrimination in recruitment.

Figure 3.2 Regulatory themes

Buczynski: However, I want to draw a very clear distinction between regulations and ethics. They are often lumped together, but they are actually different things. Ethics is about values, our own sense of right or wrong. Ethics can concern universal values, the things that we share and the views that we share regardless of our cultures or our religions. Regulations are different. Regulations are about what the State tells us we should do and what we should not do. There will be some overlap between regulations and ethics, but I want to maintain a clear distinction between the two and focus on regulations.

We do have some existing technology regulations in financial services, for example, on cloud outsourcing, and a little on the cybersecurity and technology risk side. It is challenging to regulate technology because it moves so fast, but it can be done. I focus on regulations in my work because there is already a lot of very good research out there on AI ethics in different industries. However, there is very little or, perhaps, no work focusing on regulations in financial services. This is the area which I would like to explore and, hopefully, the gap that I would like to fill.

Now, since this is a conversation series for students and we are all students, I think that there is a huge role that academia can play in this field, not just in financial services, but generally as regards AI regulations because AI is huge and very interdisciplinary. If only the industry, whether it is financial services or another industry, shapes AI, then there will be limits in terms of coming up with regulations. If we just have the relevant regulators trying to regulate, they will do their best, but they may sometimes lack the subject matter expertise because, particularly with AI, it is very complicated and fast-moving. So, this is a great area for academia to step in and add a lot of value with benefit to the industry and ultimately to society. It is really important that academia is consulted on regulations and also that academics proactively seek engagement. We are quite fortunate that we are beginning to see that. In financial services, we have academics consulted by regulators, and I think it is the best practice to follow internationally to make sure that regulations are not developed without subject matter experts.

This is all I had for this brief presentation. I am also a part-time blogger; you can find my blog at https://wbuczynski.com, where I write about technology and, in particular, AI. If you have any questions or comments, you are welcome to get in touch with me.

Coordinating General and Sector-Specific Regulation

Sumida: Mr Buczynski, thank you very much for your interesting presentation. Let me ask you a question. Tomorrow’s session will include a discussion on corporate governance and AI governance. I am also a member of the AI governance expert group, and I wonder how to improve the relationship between corporate and financial governance on the one hand and AI governance on the other. This concerns regulating AI issues in the context of financial regulation and risk regulation in the context of AI governance. It seems to me that there is a vertical division by industry. Artificial intelligence governance has a strong connection with human rights issues and data governance, but it is difficult to know how to relate it to finance. In Japan, these aspects are not yet well connected. How are you trying to coordinate AI governance and financial regulation?

Buczynski: I think the answer is that we should have a couple of levels of regulation. We may need some general regulations on AI that are universal. These would be more along the lines of principles and ethics, such as human rights, human centricity, avoiding bias and ensuring fairness. They should be imposed on all industries, not just financial services. Against the background of these overarching sets of rules that apply universally, you can have an additional smaller set of rules that apply to specific industries. We should have very specific and very detailed regulations, for example, for the autonomous car industry to make sure that self-driving cars are safe.

We should have a similar set of regulations for financial services. These regulations should be much more focused and specific to the industry. They should relate to the areas that are more finance-specific, such as data governance, transparency, advertising and marketing. This combination of general rules applicable to everyone and sector-specific rules based on the nature of a given industry is the way that I see it work.

Artificial intelligence governance in finance may be new, but governance in financial services is not new. It is a matter of taking what we already do in financial services – in compliance, in legal and generally in management and governance – and applying it to AI.

Sumida: Thank you. Is there anyone who would like to ask the next question?

How Should Data Confidentiality Be Handled?

Student D: You said that massive amounts of data are indispensable for AI development, and I think the challenge is the same in Japan. As I understand the process, public information is collected first, and AI is trained on that publicly available information. But what about private information? For example, a B2C company may hold undisclosed information, and law firms may have some undisclosed information, too. Such private data are another source of information, and I wonder what kind of initiatives are now being rolled out to make use of such data.

Sumida: Ludwig, could you answer this question please? Have any efforts been made to collect private, rather than public information, to upgrade the datasets?

Bull: That is a good question. We have so far relied on public data, but client data can be used as well. However, client data can only be used within the system of that client. Different clients have different systems, and they do not share their information with other people. So, private data can be used, but that information cannot be shared with other parties. In future, ideally speaking, law firms should be able to share anonymised data. That will allow quicker development of AI.
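
As a very rough illustration of what sharing anonymised data could involve at the simplest level, the sketch below redacts party names matching a naive ‘Title Surname’ pattern before a document leaves the firm. Real anonymisation is far harder, since dates, addresses and unusual facts can all re-identify a party; the pattern and placeholder here are assumptions made for the example.

```python
# Very rough sketch of name redaction before sharing case text; genuine
# anonymisation must handle far more than titled surnames.
import re

NAME_PATTERN = re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.? [A-Z][a-z]+\b")

def redact(text: str) -> str:
    """Replace naive 'Title Surname' matches with a placeholder."""
    return NAME_PATTERN.sub("[PARTY]", text)

print(redact("Mr Tanaka sued Mrs Suzuki over an unpaid invoice."))
# -> [PARTY] sued [PARTY] over an unpaid invoice.
```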

Sumida: Would someone like to ask the last question?

Student E: I have a question for Holli. If a private company runs an AI system regarding law, then there will be risks. Is there a way to manage such risk if LawTech is provided by private businesses?

Sargeant: That is an excellent question, and it is one that is up for a lot of debate and is dealt with differently in different jurisdictions. To practise law, you need certain qualifications, generally your admission to legal practice in whichever country you are in. In some jurisdictions, governments have started to expand the definition of practising law. In the United Kingdom, service providers can provide some legal services under the oversight of a regulatory board, which is slightly different to other jurisdictions. The United Kingdom and Singapore, where there are big LawTech industries, have expanded the rules to allow start-ups and AI systems to engage with the law. I think that is very interesting, and I hope that it offers some protection, but there will need to be a lot more work in this area to provide the necessary protections for these types of generic, AI-based delivery of legal services.

Sumida: Thank you very much, Holli. It has been a great session joined by three young presenters. They have been a great source of inspiration for our young students. They have taken up interesting challenges. Thank you very much for your participation this afternoon. With this, I would now like to officially close the session today.

Concluding Conversation

Steffek: I very much enjoyed the contribution by Ludwig Bull, explaining the entrepreneur’s perspective on using technology to improve access to law. For me, his emphasis on the passion for one’s work and having a vision for what one aims to achieve stood out.

Sumida: I was very impressed by Ludwig’s vision. I was particularly impressed by the fact that he left his first company, which had grown enough to be viable, to start a new one with the determination to innovate access to justice through technology. This vision includes the common good, which is much more than just ‘making money for ourselves’. This is why Ludwig has fully committed to our research project, while at the same time making his company CourtCorrect a successful business.

Steffek: Interestingly, Ludwig has said that the core of his business is teaching people law. This is a very thought-provoking statement because it is generally believed that law is taught in universities and colleges. The statement indicates that there is a market for legal education in the everyday lives of citizens. It also raises the interesting question of which market actor is best placed in which situation to teach citizens and businesses law.

Sumida: It is all about creating new markets. As a Japanese organiser, I would also like to point out that Ludwig conducted the session in Japanese, without simultaneous interpretation. As a German, he speaks German and English, of course, but also French, Chinese and Russian! His view is that the roots of legal problems in people’s daily lives are the same and that they stay the same across borders. When lawyers look at the laws of other countries, they tend to look at the differences between countries because they start their comparison with the ‘legal system’. His idea of starting with the ‘problem’ was innovative.

Steffek: I agree. I also found Ludwig’s statement that two things are important to be innovative as an entrepreneur – to keep experimenting and to listen to others’ opinions – very interesting. I am fascinated by the process that entrepreneurs use to innovate. Ludwig further cautioned that feedback is sometimes incorrect. In essence, he said: ‘If a patient complains to a doctor about a symptom, it does not necessarily mean that the symptom is the problem. Rather, like a doctor, you have to find the disease.’

Sumida: According to the socialisation, externalisation, combination and internalisation (SECI) model of Ikujiro Nonaka and Hirotaka Takeuchi, the key to achieving sustainable innovation is to turn knowledge into wisdom, turn practical knowledge into flexibility and create a ‘place’ for creative interaction. Wisdom is not merely logical knowledge, but tacit knowledge at a higher level, enabling us to ‘see the essence of things’. I think the metaphor of the doctor is a perfect expression of this. And I took the repeated ‘experiments’ as a dynamic SECI spiral driven by flexibility, the very ‘wisdom’ that is useful in practice.¹

Steffek: I strongly agree with Ludwig’s argument that ‘data availability’ should be improved. He believes that one of the main challenges for future progress is the availability of data. This is not, strictly speaking, an AI issue, but a data issue. In future, he hopes, AI will be able to answer more complex questions than at present. I think it is important for governments to understand that they can play an active role in supporting the start-up and research community by providing an attractive ecosystem. Supporting the availability of legal data is a key element of such an ecosystem.

Sumida: Our collaborator, AI researcher Dr Yamada of the Tokunaga Lab at the Tokyo Institute of Technology, makes the same point, and in Japan, the Ministry of Justice is taking the lead in considering this issue.

The focus of the next generation of AI researchers was also very innovative.

Steffek: Holli Sargeant is interested in how the legal framework should be adapted to achieve optimal results in relation to the development, use and deployment of AI systems. An important part of her analysis is to look at the different actors and their relationships: suppliers of AI systems, their investors, data brokers, platform companies, buyers of AI systems, consumers, etc. Mapping these relationships is essential to understand whether AI systems are useful for society. In particular, we need to track how the interests of private equity investors affect the products that LawTech start-ups develop.

Wojtek Buczynski has drawn on his experience of working in the investment management industry to conduct research on how to regulate a very fast-moving sector while protecting clients and consumers and not inhibiting technological progress. We look forward to hearing his suggestions for improving the beneficial use of AI applications in this area.

Sumida: The students were very excited and inspired by the new role models who appeared in this session. At the same time, there are also some young people in Japan who are daring to take up the challenge of LawTech. Two young experts have been working with us on our research project and they have contributed the following comments.

Comments

Expectations for the Use of Machine Learning in the Legal Field

Ohara: The move to use machine learning in the legal field is no longer a temporary bubble, but an irreversible trend. Today, legal services such as ‘AI-based contract checking’ are being offered on a large scale. In the near future, the use of machine learning in the legal field will become commonplace.

Machine learning uses large amounts of data as training data, learns abstract patterns of facts and rules that can be read from the data and predicts results by applying these patterns to new cases. This mode of prediction is precisely the kind of analysis that legal professionals perform every day. Since the amount of data that a human lawyer’s brain can learn is limited, machine learning, which can learn from a much larger amount of data at a much faster rate than humans, can be expected to make more accurate predictions than human lawyers can.
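
The paragraph above describes the standard supervised-learning loop. The sketch below is a minimal, assumption-laden example rather than the research project’s actual model: it trains a simple text classifier on a handful of invented judgment snippets and applies the learned patterns to a new case. It assumes scikit-learn is available.

```python
# Minimal sketch of outcome prediction from judgment text (invented data, not
# the project's model): learn patterns from past cases, then apply them to a
# new one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_judgments = [
    "The claimant proved breach of contract; damages were awarded.",
    "The claim was dismissed for lack of standing.",
    "The defendant failed to perform; the court found for the claimant.",
    "No duty of care was established; the claim was dismissed.",
]
outcomes = ["claimant wins", "claim dismissed", "claimant wins", "claim dismissed"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(past_judgments, outcomes)

new_case = "The defendant did not perform its obligations under the contract."
print(model.predict([new_case])[0])     # predicted outcome label
print(model.predict_proba([new_case]))  # probabilities for each label
```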

One area where machine learning is expected to be used is in the prediction of the court’s judicial decisions. Predicting how a court will decide on a specific case is not easy, even for legal professionals. However, legal professionals can predict to some extent what the likely outcomes will be by referring to past court precedents. It is difficult for those who do not have legal knowledge to make such predictions. For example, in civil cases, those who are parties to a legal dispute must spend a considerable amount of money to obtain an attorney’s opinion in order to take a dispute to court. Often, people give up on filing a lawsuit because they believe that the benefits of filing such a suit are not worth the legal fees. Although access to lawyers has become easier today than it used to be, there are still many cases where parties give up because they cannot find an affordable lawyer. From this perspective, the cost of litigating a case in a court is often still too high for the parties involved.

If models that have learned from a large number of past court precedents through machine learning can predict the court’s judicial decisions at a level close to that of lawyers, even those who do not have legal knowledge will be able to predict the benefits and costs of filing a judicial claim (or responding to a claim against them). This would make it easier and significantly less expensive than it is today to ascertain the costs and benefits of disputes. If the other party understands that it is clear that they will lose the case based on machine learning predictions, there is no need to fight the case in court, and the dispute may be concluded or resolved without both parties incurring the costs of going to court.
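
The cost-benefit reasoning in this paragraph can be made concrete with a small expected-value calculation. All figures and the decision rule below are invented for illustration only.

```python
# Worked example of the cost-benefit reasoning above; every figure is invented.
def expected_value_of_claim(p_win: float, amount_claimed: float,
                            legal_costs: float) -> float:
    """Expected net recovery if the claim is litigated to judgment."""
    return p_win * amount_claimed - legal_costs

p_win = 0.72            # e.g. a model's predicted probability of success
claim = 1_000_000       # amount claimed, in any currency unit
costs = 300_000         # expected legal fees and court costs

ev = expected_value_of_claim(p_win, claim, costs)
print(f"Expected value of litigating: {ev:,.0f}")
print("Filing looks worthwhile" if ev > 0 else "Settlement may be preferable")
```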

This would greatly reduce the litigation costs for both parties and greatly improve accessibility to judicial services by the courts. Such initiatives also contribute to the Sustainable Development Goals (SDGs), which have become a guideline for action by governments and private companies.

Challenges to Overcome

Ohara: The ‘Legal Systems and Artificial Intelligence’ project, of which the editor of this book, Professor Mihoko Sumida, is the project leader, is attempting to predict judicial decisions using machine learning. At this point in time, the project is still in the process of constructing the prediction model of machine learning. I will describe some of the technical challenges that must be overcome in order to use machine learning to predict judicial decisions. I was not familiar with machine learning before participating in this project, and what follows in this section is only what I consider to be possible issues from the standpoint of a legal practitioner.

First, especially in Japan, there is a high hurdle in obtaining data on court cases. The data, which are provided by courts and are accessible to all, contain much fewer court cases than the datasets provided by private companies, and the amount of data obtainable from open sources is still insufficient for machine learning. In Japan, the government and the Japan Federation of Bar Associations are currently discussing the conversion of court records into open data. We hope that this will be done promptly.

Second, the nature of the data used for learning in the legal field is quite unique, in that they include sentences written in judicial decisions, which are different from those used in everyday life and thus require special consideration. For example, a single sentence in a court case can be extremely long. It is thought that sentences become long in order to make them as unambiguous as possible while taking into consideration the arguments of both parties. It will be a challenge to learn the exact meaning of such special sentences via machine learning. Additionally, not only court cases but also legal writings consistently use unique terms. Whether a model of machine learning can accurately learn certain ‘rules’ peculiar to such legal writing seems to be another issue to be addressed.
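
One common workaround for very long sentences and documents, offered here as a general technique rather than the project’s chosen approach, is to split the text into overlapping chunks so that a length-limited model still sees every part of it.

```python
# Generic sketch: split a very long passage into overlapping word chunks so a
# length-limited model can process all of it. The parameters are arbitrary.
def chunk_words(text: str, max_words: int = 200, overlap: int = 50):
    words = text.split()
    step = max_words - overlap
    for start in range(0, max(len(words) - overlap, 1), step):
        yield " ".join(words[start:start + max_words])

long_passage = "the court, having considered the submissions of both parties, " * 60
for i, chunk in enumerate(chunk_words(long_passage)):
    print(i, len(chunk.split()), "words")
```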

Third, attention must be paid to the handling of data from the Japanese Court of Appeals and the Supreme Court. Since the Court of Appeals and the Supreme Court examine the judgments of the lower courts, the data of the lower courts must also be included in the learning process; otherwise, the learning will not be accurate. Since the Japanese Court of Appeals uses expressions such as ‘[t]he following decision is revised from line X to line Y on page Z of the original judgment’ and takes the form of revising only a part of the judgment of the lower court, the judgment text of the Court of Appeals itself does not provide a complete picture. Instead, it needs to be checked against the language of the original judgment of the lower court. How to handle data from the Court of Appeals and the Supreme Court, which have such special characteristics, is, hence, an issue to be addressed.

Fourth, studying court decisions alone does not allow us to take into account social circumstances that are not recorded in the judgment but nevertheless influence the court’s decision. For example, if the parties are a giant corporation and an individual consumer, the court may tend to rule in favour of the individual consumer. In a type of case that has attracted social criticism, the court may make a harsh decision against one of the parties. Legal professionals make predictions based on such out-of-court circumstances, but there are technical challenges in teaching machine learning models about them.

However, none of the above issues is technically insurmountable, and they will probably be resolved through repeated attempts at prediction.

How to Deal with New Technologies

Ohara: There is a concern that new technologies such as machine learning will take away lawyers’ work. Indeed, in some fields machine learning will make human involvement unnecessary. For example, tasks such as reviewing routine contracts (e.g. non-disclosure agreements) may no longer be a job for lawyers.

One example of a technology that has had a significant impact on the work of lawyers is the case database. A senior lawyer told me that before the advent of databases, he used to stay in the library all night, searching through every published casebook for any case relevant to the matter he was handling. Now, thanks to case databases, we can search by keyword in an instant and no longer need to spend the night in libraries. The time lawyers need to research precedents has been greatly reduced, and the amount of time chargeable to clients has decreased. There is no doubt that the efficiency gains have more than outweighed the downsides.

The use of machine learning has a far broader and deeper range of applications than case databases. However, it is no different from databases in that it is just one of the useful tools that lawyers will be able to employ. The question is how to deal with new technologies such as machine learning.

The use of machine learning in the legal field is still in its infancy. It is an area where lawyers and machine learning experts work together to create something new, confronting the challenges they discover along the way. Out of simple curiosity, I jumped into a project to predict judicial decisions using machine learning, even though I was a lawyer with no prior knowledge of it. As a result, I am constantly stimulated by discussions with the project’s diverse members, who have the courage to tackle difficult problems boldly and without fear of failure. The attempt to read court cases from the perspective of machine learning has also led me to reconsider the depth of judicial decisions, for example the many issues on which judges seem to avoid giving clear rulings. In today’s society, where new technologies, not limited to machine learning, are spreading rapidly around the world, actively engaging with the areas they create will broaden the scope of a legal practitioner’s work. The more colleagues are willing to dive into this new world, the more enjoyable the discussions and the better the new things that can be created.

Regarding the concern voiced at the beginning of this section that machine learning will take away lawyers’ work: if machine learning reduces the time spent on tasks previously done manually, lawyers will be able to devote their time to more creative work. Even if machine learning matches legal professionals in the accuracy of predicting judicial decisions, for example, it will not eliminate the work of litigation lawyers. If the opposing party makes similar predictions, victory or defeat will turn on litigation strategy in more phases than it does now. Using machine learning predictions, it may become easier to strategically decline to argue certain issues and focus on those with a realistic chance of winning.
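
As a purely illustrative sketch of that last point (the issues and probabilities are invented, and no real prediction model is assumed), counsel might rank issues by a model’s predicted chance of success and concentrate argument on those above some cut-off:

```python
# Illustrative sketch only: rank issues by a model's predicted win probability
# so counsel can concentrate argument where there is a realistic chance.
# The issues, probabilities and threshold are all invented for the example.
issue_predictions = {
    "limitation period": 0.15,
    "breach of duty": 0.62,
    "quantum of damages": 0.48,
}

threshold = 0.4  # assumed cut-off below which an issue is de-emphasised
focus = sorted(((p, issue) for issue, p in issue_predictions.items() if p >= threshold),
               reverse=True)
for probability, issue in focus:
    print(f"Prioritise '{issue}' (predicted success {probability:.0%})")
```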

I think the best way to deal with new technology is not to passively accept it but to think ahead and take advantage of it.

Applying Machine Learning to Law

Ishihara: Some years have passed since the start of the ‘Third AI Summer’, and many systems supporting decision-making with mathematical and statistical techniques, with deep learning at the forefront, have been introduced across industry. This trend has been driven by improvements in computing performance and by the spread of cloud computing, which makes high-performance, sophisticated machines available instantly and at lower cost, without the up-front expense of building data centres. It has also been supported by the growing number of engineers, often redeployed from other areas, who now build systems that use these technologies.

In recent years, epoch-making progress has been made in natural language processing, which aims to enable machines to acquire abstract knowledge of human language, such as context and concepts, and to solve problems expressed in it, through the ‘Attention’ mechanism and its applications. Both the variety of tasks machines can handle and their precision in working with natural language have improved greatly, and new applications are being examined intensively every day.
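
For readers unfamiliar with the ‘Attention’ mechanism, the following minimal NumPy sketch shows the scaled dot-product attention at its core (single head, no masking); it is a simplification for illustration, not a description of any particular production model.

```python
# Minimal sketch of scaled dot-product attention (single head, no masking).
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Q, K, V each have shape (sequence_length, d_model)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarity of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of value vectors

# Example: self-attention over three token embeddings of dimension 4.
x = np.random.randn(3, 4)
context = scaled_dot_product_attention(x, x, x)
```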

In the world of law, there is also a growing movement towards innovation through cutting-edge technology, including academic efforts such as the ‘Legal Systems and Artificial Intelligence’ project, to which I have the privilege of contributing. I see the same trend in the launch of several legal technology start-ups.

The Challenge to Create a Mathematical Model of the Law

Ishihara: The project mentioned above includes an advanced study of how introducing machine learning into the field of law can change the process of legal decision-making. Asked to imagine how AI might change the judiciary, even people who have never been involved in a dispute or studied law would most likely think first of ‘automated judging by AI’.

From the viewpoint of legal stability, it is desirable that legal conclusions be reproducible, provided that all the information related to the case is available without gaps. In other words, we expect the legal system to behave deterministically: if all the inputs are the same, the output should also be the same, without any stochastic variation.

During the second AI boom in the 1980s, judicial versions of expert systems were considered, which would describe and automate legal judgments in clear logical form by writing down every condition relevant to legal decision-making. Ever since computers and the concept of AI emerged, there have been attempts, typified by expert systems, to identify the factors that influence case outcomes by expressing all the information relevant to the judgment in mathematical terms. Today, various attempts are being made to extract the information used in judgments quickly and at scale with natural language processing, or to express causal relationships effectively using directed graphs.
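
A toy sketch may help convey the expert-system style of encoding described here: legal conditions written down explicitly as rules over case facts. The rule below is an invented simplification of the elements of negligence, chosen only to illustrate the form such encodings take.

```python
# Toy sketch of the expert-system approach: conditions for a legal conclusion
# written down explicitly as a rule over case facts. The rule is an invented
# simplification, purely to illustrate the style of encoding.
def negligence_established(facts: dict) -> bool:
    """Toy rule: duty + breach + causation + damage => liability."""
    return all(facts.get(element, False)
               for element in ("duty_of_care", "breach", "causation", "damage"))

case_facts = {"duty_of_care": True, "breach": True, "causation": True, "damage": False}
print(negligence_established(case_facts))  # False: no damage established
```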

However, there is a limit to how much reproducibility the legal system can be expected to exhibit. The legal system deliberately leaves room to adjust its behaviour to the individual features of a case and to long-term changes in society and culture. This idea of leaving sufficient room for legal interpretation conflicts with the earlier idea of creating models that predict outcomes by expressing all information mathematically.

Systems, AI and Humans

Ishihara: Confronting these aspects of law, it is not enough simply to translate the knowledge and concepts required for legal judgment, such as general common sense, into mathematical or symbolic expressions. It is also necessary to adapt these expressions and models continuously to the dynamics of society.

In this context, I would like to introduce the concept of Machine-Learning Operations (ML-Ops), and especially that of Human-In-the-Loop (HITL), which have become important topics in recent years in the practice of building and operating AI-based systems. ML-Ops refers to all the practices involved in operating AI-based systems. HITL refers to designs that bring human judgement into processes such as evaluating system performance and continuously retraining (refitting) the AI, so that the whole system operates in a complementary and collaborative manner. Assessing the validity and accuracy of AI predictions and monitoring them against changes in the external environment, for example, are key points at which humans can complement AI-based systems.
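
A minimal sketch of one HITL pattern, under assumptions of my own (the class, threshold and labels are invented), is a routing step in which predictions below a confidence threshold are deferred to a human reviewer, and the human’s corrections are collected for the next round of refitting:

```python
# Hedged sketch of a Human-In-the-Loop routing step: low-confidence predictions
# go to a human reviewer, and reviewed labels feed the next model refit.
# The class, threshold and labels are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class HITLRouter:
    confidence_threshold: float = 0.8
    review_queue: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

    def route(self, case_id: str, prediction: str, confidence: float) -> str:
        if confidence >= self.confidence_threshold:
            return prediction                      # accept the model's output
        self.review_queue.append((case_id, prediction, confidence))
        return "PENDING_HUMAN_REVIEW"              # defer to a human decision

    def record_human_decision(self, case_id: str, label: str) -> None:
        # Human corrections become training data for the next refit.
        self.feedback.append((case_id, label))

router = HITLRouter()
print(router.route("case-001", "claim allowed", 0.93))    # accepted automatically
print(router.route("case-002", "claim dismissed", 0.55))  # routed to a human
router.record_human_decision("case-002", "claim allowed")
```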

Given the dynamic nature of law and the demands for its explainability, humans will continue to play an important role even if mathematical techniques such as AI are applied to the making of legal judgments. From this perspective, the key issues are how far decisions should be delegated to machines and how humans and machines can feed back to one another to improve the whole system.

The concepts of ML-Ops and HITL are ideas from industrial sectors where AI technology is leading the way. They will also be important when considering innovation in the legal system in the near future.

Invitations to the Frontier

Ishihara: Innovation through technology is the process of understanding what a new technology can achieve and using it to restructure existing processes. New technology brings with it new operations, practices and ideas. Working on projects that apply AI to the legal system, I have found it exciting that the process of assessing what this new technology can do is only just getting underway.

For example, automating the trial process can be broken down into fact-finding, inference and the determination of conclusions. Drawing the decision boundary that turns the fact-finding and inference elements into a conclusion is simple; in such a system, it is the accuracy of those upstream elements that matters.
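
Purely to illustrate that decomposition (the facts, the rule inside the model and the decision boundary below are all assumptions, not anything the project has built), a sketch might look like this: facts are established by a human, an inference model maps them to a probability, and a decision boundary turns the probability into a conclusion.

```python
# Illustrative decomposition: human fact-finding, a model for inference,
# and a decision boundary for the conclusion. All names, facts and values
# are assumptions made for this sketch.
def human_fact_finding(case_record: str) -> dict:
    # Placeholder: in practice a human extracts and verifies the facts.
    return {"contract_signed": True, "payment_made": False}

def inference_model(facts: dict) -> float:
    # Placeholder for a learned model returning P(claim succeeds).
    return 0.8 if facts["contract_signed"] and not facts["payment_made"] else 0.2

def determine_conclusion(probability: float, boundary: float = 0.5) -> str:
    return "claim allowed" if probability >= boundary else "claim dismissed"

facts = human_fact_finding("…case record…")
print(determine_conclusion(inference_model(facts)))  # 'claim allowed'
```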

Fact-finding is usually done manually, but there is scope to build a model for making inferences. It may be possible to build an inference model in which the facts of a given case lead to a concrete conclusion; this is what an expert system does. In some cases, however, concrete conclusions are hard to reach. Examples include the controversial question of whether gig economy workers qualify as ‘employees’ of the company concerned and the long-standing question of how the concept of ‘spouse’ should be defined.

To create a mathematical model that can make reasonable inferences in such hard cases, we need a large number of inputs covering different types of cases. Related methods and technologies, such as knowledge graphs, have emerged and are being tested, but gathering a large dataset remains a necessary task.
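
To give a flavour of the knowledge-graph idea, the small sketch below represents case information as a directed graph of entities and relations; the entities and relations are invented, and this is only one of many ways such information could be structured.

```python
# Small sketch of case information as a directed graph of the kind a
# knowledge graph would hold; the entities and relations are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edge("Driver A", "Pedestrian B", relation="injured")
G.add_edge("Driver A", "Company C", relation="employed_by")
G.add_edge("Company C", "Pedestrian B", relation="vicariously_liable_to?")

for head, tail, data in G.edges(data=True):
    print(f"{head} --{data['relation']}--> {tail}")
```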

It is essential to dream about what might become possible with advances in technology and to keep searching for ways to realise it. This requires not only knowledge of the technology but also an unpicking of current practices and of resources such as data. That, to me, is the essence of innovation and something I am enthusiastic about. The field of AI and law is full of that enthusiasm and innovation.

Questions for Further Thought

  • How do entrepreneurs facilitate innovation? In LawTech start-ups, how does the ‘spark’ happen that creates a new commercial idea?

  • Are there particular ways to facilitate legal innovation in commercial settings?

  • Where does technical legal innovation happen beyond start-ups?

  • What are the barriers to using technology to create legal innovation?

  • If technology makes justice cheaper, but in the first years perhaps less responsive to the specific issues of a case, what does that mean for society and is such progress desirable?

  • How do the interests of those investing in LawTech start-ups shape the products that are developed?

  • Does LawTech require new ways of regulation, or can we properly regulate LawTech with the regulatory tools used thus far?


Figure 3.1 Network of relationships and interests

Figure 3.2 Regulatory themes
