I never dreamt of becoming an academic; Footnote 1 as a child nothing was further from my mind. For one thing I was terribly “bad” at school in both the academic and behavioral senses of that term. I was good only at cricket.
What I loved best was playing with toy soldiers, first those made of lead, very dangerously toxic, and then those made of plastic. I had a huge playroom in the top of the house where I lived with my family in the small market town of Cheltenham in Gloucestershire. In the center of this room was a homemade castle inhabited by soldiers of all periods, from knights in armor to modern soldiers with machine guns, tanks, and artillery pieces. I would stage battles and challenge my friends to bring their toy soldiers to be massacred. I loved weapons of all sorts, and by the time I was twelve I had a small collection of real swords, daggers, an African spear, an air rifle, and many toy pistols. I still have some of the swords. I loved to read war stories and war comics, I read all the Hornblower books—stories about a British naval officer of the Napoleonic period—and many books about the navy in the Second World War. I wanted to become a naval officer “when I grew up.”
The death of my father in 1957, when I was 12, changed everything. I was moved from a public school (what Americans call a “private” school) to the local grammar or “free” school, this despite having twice failed the grammar school entrance exam (the infamous “11-Plus”). My mother simply descended on the headmaster of the local grammar school and explained to him that because my father had thoughtlessly died at the young age of 57, there was simply no more money for private education, and I would be moving forthwith to the grammar school. The headmaster was, I am sure, suitably grateful to my mother.
My mother died a year later, aged 46. My older sister and I comforted ourselves with the idea that she died of a broken heart, although why that was supposed to be comforting no one explained. My mother’s passing completed my transition to adulthood without any intervening adolescence.
I changed—or rather was moved—between schools and towns many times, and by the time I was finally ejected from school with only minimal qualifications some five years later (I certainly did not “graduate” from high school), I was fully grown. I had become involved in the Campaign for Nuclear Disarmament (CND), had been on a number of mass antinuclear demonstrations—the famous Aldermaston marches (https://en.wikipedia.org/wiki/Aldermaston_Marches)—and had started thinking about the ethics and legitimacy of direct action and violent demonstrations—subjects that were to appear in my doctor of philosophy thesis a decade later. On leaving school in 1962 I went to Montreal, following my sister, who had emigrated there. I enrolled in high school for the fall semester with the hope of finally achieving entrance requirements for university. It was not to be. I had outgrown school as it then was. The Cuban missile crisis erupted that October. My class teacher in Montreal insisted on “my country right or wrong,” which seemed to me to be such a monumentally stupid and indefensible position as to beggar belief.
I bunked off school to protest outside the U.S. consulate in downtown Montreal. I expected an Aldermaston-sized turnout, with tens of thousands protesting; but the streets were deserted. I was a lone demonstrator marching, rather pathetically, up and down, with a poster improvised from a cardboard shirt stiffener that said “no war over Cuba” and wearing my CND badge (also improvised from cardboard).
I demanded the opportunity to make my protest in person. Eventually the consul (or more probably a much more lowly official) invited me in to hear me out. I made my protest, explaining that because U.S. missiles were in Turkey, they could hardly object to Soviet missiles in Cuba, and the brinkmanship would probably (in my view) trigger a nuclear exchange and World War III. We agreed to differ. When I emerged from the consulate, the street scene had changed. The road was suddenly lined with police, but I was still a lone protestor. Eventually a police officer came over and told me that they expected thousands of students from McGill at any moment who all supported JFK and the idea of war, and I should “beat it,” because they could not promise to protect me. I thought we’d all be dead by the next day anyway, so I continued my lonely march up and down. Eventually thousands of “students” did appear, chanting for JFK to give those commies a bloody nose. I stayed, and the police did protect me, though it didn’t take much—I was obviously a joke. Eventually the “students” all left and then all but one or two of the policemen, and I called it a day and went home to prepare for doomsday.
It never came, but I withdrew from high school and started work at Canadian Pratt and Whitney, operating a blueprint machine for (civil) aircraft engines. I was laid off after a few months and used the money saved to buy a $99 ticket good for 99 days on the Greyhound buses around America. “I went everywhere man!” Back in Montreal, I got a job selling ladies’ shoes in a big department store, Eaton’s of Canada. I learned to appreciate a good shoe and a fine ankle and to speak French Canadian (not to be confused with French). I saved enough to return to England that August. I was going to become a lawyer, the only profession that was then open to someone who had failed to graduate from high school and possessed no worthwhile academic qualifications.
August 1963 found me back in London, fresh off the boat from Montreal with all my worldly possessions, including the swords and daggers, in two large suitcases (no issues with “security” in those days). In 1963, seven days on a Cunard liner (The Carmania) was the same price as a flight! Seven days’ holiday for nothing! I checked into Holland Park Youth Hostel on Friday, found a bedsit on Saturday, and reported for work in a lawyer’s office on Monday—as an articled clerk (you could do that in those days with no qualifications, and, with luck, I would become a solicitor after five years). I stuck it out for three years but hated it. I spent my spare time trying to educate myself. I would lock myself in the office strong room for hours and read novels and plays; when I left there were piles of untouched files under my desk. Almost every lunch hour I went to an art gallery and tried to teach myself about paintings. My office was in the West End of London and was easy walking distance from the British Museum, the National Gallery, Sir John Soane’s Museum, the Wallace Collection, the Courtauld Institute, and many others . . . and they were all free! Almost every evening I went to the theatre and stood “in the Gods” for a shilling or two. I was literally one of les enfants du paradis. Footnote 2 It was a wonderful time to be in London—it was the sixties. The new National Theatre had just launched at the Old Vic, soon followed by the Royal Shakespeare Company at the Aldwych. I went to everything, including more demonstrations.
In 1965 I had finally decided the law was not for me, and the law had probably by then also taken a related view. I wrote to every professor of English literature in the U.K., saying I had no qualifications but had been studying art and literature on my own and would they please take me on as a student. I remember promising to work hard. Most said “no”; some said “apply and we’ll see.” I applied to the six universities that had sent the least depressing replies. Five turned me down flat and one offered me an interview. That one was the University of Kent, which had just opened. I had applied to study English with philosophy. I chose philosophy because Bertrand Russell had inspired me by his leadership in CND and then in the formation of the Committee of 100 (https://en.wikipedia.org/wiki/Committee_of_100_(United_Kingdom)). I had heard him speak in Trafalgar Square and at other places, and on the strength of that I had read his pamphlet Has Man a Future? and his History of Western Philosophy. As luck would have it, the professor of philosophy at Kent, Patrick Nowell-Smith, interviewed me. He took a chance and offered me a place, and my formal education began.
I have had an education marked by being admitted to places I had no right to be. I went to grammar school despite having twice failed the entrance examination (with no subsequent passes) and to university with no entrance qualifications. Equal opportunities legislation would today certainly have put a stop to any such wickedness and injustice.
Kent proved a happy accident. It had been the last chance saloon, so to speak, my only chance of a university education, but in the event it proved worthy of having been a first choice. I had been ejected from school because, in my headmaster’s words, I was “too stupid to be allowed to waste any more of the school’s time.” Without those extra two years at school and the confidence that graduating with “advanced levels” (“A-levels,” as they were called) might have given me, I expected to struggle at university and perhaps fail. But it was a breeze, not least because the philosophy tutors, in particular, made it an intellectual adventure, and as a result, to my astonishment, I found I was doing rather well. But I still never considered an academic life.
When my father died I had become an atheist overnight. It seemed obvious that God was either wicked or dead, or possibly both. Footnote 3 With my mother’s death a year later, a world that had seemed stable, secure, and blessedly predictable had become dangerously insecure.
Although initially I was living with relatives—first with an uncle in London and then with my sister in Birmingham—I felt responsible only for and to myself. This was liberating, if lonely. I wanted to understand this far from brave new world, if for no better reason than self-protection; and self-awareness was part of that self-protection. I had become aware of threats previously unnoticed, at least by me: nuclear weapons; poverty; illness and premature, sudden death; and also indifference. Prejudice, in the form of anti-Semitism, I had already experienced firsthand. All of these threats engaged both my prudential concern and my blossoming intellectual curiosity. How could they be resisted or mitigated, for myself and for others?
Violence and Responsibility
In 1961, the year I had heard Bertrand Russell speak in Trafalgar Square, Adolf Eichmann went on trial in Jerusalem. I followed the trial, mesmerized by the drama of his capture in Argentina by Mossad and by the facts as they emerged. Until then, despite being a Jew, I knew little of the Holocaust. My consciousness of my Jewish origins and of the paradoxes of being a Jewish atheist became part of my persona. My initial reaction to Judaism following my father’s death was “no God, no religion,” end of story. But I soon discovered there were two powerful groups that would not let me cease to be Jewish. The first were the anti-Semites who wouldn’t let me “pass,” and the second the Jews who wouldn’t let me go!
Sometime much later, probably in 1965, I read Eichmann in Jerusalem, Hannah Arendt’s book of her New Yorker articles covering the trial. In the epilogue, Arendt imagines answering Eichmann’s denial that he ever hated Jews and his insistence that he never had any inclination to kill anybody:
Let us assume, for the sake of argument, that it was nothing more than misfortune that made you a willing instrument in the organization of mass murder; there still remains the fact that you have carried out, and therefore actively supported, a policy of mass murder. For politics is not like the nursery; in politics obedience and support are the same. Footnote 4
Like Arendt, I had been struck not just by what she memorably called, in the last line of her book, “the banality of evil” Footnote 5 but more by the disconnect between intention and responsibility.
This disconnect later became a preoccupation of my research in Oxford between 1969 and 1974, which culminated in my doctor of philosophy degree (1976) and then in my first book, Violence and Responsibility, in 1980. Footnote 6 It took me a long time to find the courage to turn my thesis into a book, a mistake that I vowed not to repeat, and I have been a compulsive publisher ever since—for me, publication is a sort of exorcism, a way of moving on.
While at Kent I continued trying to do as much of everything as I could, like so many university students, but, perhaps because I had been made to think I didn’t deserve to be there, I may have overcompensated! I took up squash, which was later to prove an important part of my life; I edited Incant, the student newspaper; and I decided to become a journalist. I was also, briefly, Footnote 7 president of the Students’ Union. In my final year my philosophy tutors, particularly Anne Seller and Colin Radford, persuaded me I was a good enough philosopher to do graduate work at Oxford and that I should apply. This had simply not occurred to me until then. But I had nothing better to do, so I thought I might postpone my journalistic career. Balliol and eventually the subfaculty of philosophy accepted me, and, because I obtained a major state studentship that paid all my fees and provided enough to live on, I could afford to follow my education where it led. Really I was following a line of least resistance, not a vocation.
My entire education from the age of 12 was effectively paid for by the state. My life as an autodidact in London was possible because art galleries and libraries were entirely free and much of the theatre was subsidized. My fees, undergraduate and postgraduate, were paid with grants. I could never have afforded it otherwise. Someone in my predicament today (or at any time since Mrs. Thatcher) would not have had a chance to have the education I have had and the career it opened up for me. I am conscious of the great good fortune I have enjoyed and of the debt it imposes.
A few years ago, to my amazement, my old university, Kent, awarded me an honorary doctorate of letters, the Hon. D.Litt. I had taken my bachelor of arts in absentia, and in 1969, when it was awarded, the degree ceremony would probably have been in a draughty gymnasium. But in 2010 the degree ceremony took place in Canterbury Cathedral, and those receiving the honorary doctorate of letters were allowed to give a 10-minute oration from the pulpit before a full congregation of parents and students receiving their degrees. As an atheist I simply could not resist the opportunity to “preach” from the pulpit at which Thomas Becket had spoken. I told some of this story in my sermon—about being an accidental academic and a school failure, and not deserving my place at Kent all those years before, unlike all the graduands sitting in my audience. I ended by saying that I had been immensely lucky to have received a university education that I by no means deserved and that I hoped Kent had changed their lives for the better in the way that it had certainly changed mine. The parents loved it . . . I am not sure quite why—they should perhaps have been appalled at the injustice of it all, or perhaps I should!
Three mentors in Oxford have had a permanent influence on me. My first tutor at Balliol, Tony Kenny, was (and is) frighteningly clever and I learned much from him. But one encounter in particular taught me a lesson of lifelong benefit. He had lent me a typescript (there were no computers in those days), and when I returned it at my next tutorial, I asked him if it was going to be published, so that I could reference it properly. He replied, “I haven’t decided yet.” I realized at that moment that there were people for whom academic publication was not passive, something that happened to them if they were lucky (or very good), but something active, something they did if they chose. I resolved then and there that I would become such an academic.
When at the end of my first year I decided to change degree programs and had to find a supervisor for my proposed doctoral thesis, there was no contest. Ronnie Dworkin had just arrived from New York as professor of jurisprudence in succession to Herbert Hart. I had been to his inaugural lecture. He was a breath of fresh air and fresh ideas, and I knew he was something special. I made an appointment and asked if he would supervise my thesis, which I had planned to be on the ethics of violence as an instrument of political and social change. He warned me that he wouldn’t be a very good supervisor but took me on anyway. He was right; he might have been a disaster. He never advised me on appropriate reading, on structure, or on the topics I needed to tackle; he never read a whole draft of the thesis and never told me it was ready to submit; and he often forced me to travel from Oxford to London, where he lived, for supervision. In all other respects, however, he was wonderful! Indeed, many of these features of our relationship were in my case entirely satisfactory, despite their being contrary to most currently accepted standards. I was delighted to go to his house in Chester Row, not least because he was often late and sometimes failed to show up at all. I would instead have the pleasure of being entertained with coffee and conversation by his wife, Betsy. I got to know her well; she was utterly charming, wonderful company, and would have made a good second supervisor if they had had such people in those days.
It was Ronnie also who bullied or shamed the Oxford Faculty of Literae Humaniores into accepting my doctoral topic. They had initially rejected it as totally unsuitable for a doctorate of philosophy, far too relevant! Ronnie became a strong mentor and support to me all his life. He gave me one priceless gift. He would never discuss a chapter or a paper as a whole and comment on its quality or fitness for purpose. He would simply take an idea or an argument of mine that interested him, often one with which he disagreed, and we would argue it out for an hour or sometimes two. This was stimulating and immensely useful, not least because I thought his was the smartest mind in Oxford, and I felt if I could at least partially hold my own for an hour with him, I could do that with (or against) anyone. This, for really the first time in my life, gave me intellectual confidence. I very much miss his invisible hand, although I still feel its effects.
My third mentor is Jonathan Glover. He never officially taught me, but we became friends while I was at Oxford, a friendship that has endured, from which I have learned, and that I value immensely. He sets an example of intellectual generosity, honesty, and integrity that I try hard to emulate, but that I can never equal.
The Survival Lottery
While working on my doctorate of philosophy I wrote a long paper on our responsibility for the harm we fail to prevent, which I called “The Marxist Conception of Violence,” and sent it to Philosophy and Public Affairs, which was then a new journal. I received an enthusiastic letter back from Marshall Cohen, the editor. He thought it too long but encouraged me to shorten it and resubmit. I reread the piece and found I could remove a complete section without loss. I resubmitted it, and it was published under the same title. The piece I removed I submitted to Philosophy under what had been its section heading in the larger paper. It’s called “The Survival Lottery.” Footnote 8
These two papers contain not only the heart of my first book, Violence and Responsibility, but a number of themes on which I continue to work today. I won’t try to summarize “The Survival Lottery” here. It contains an argument that uses organ transplants and discusses a dilemma that others have thought of as dimensions of the so-called trolley problem. Footnote 9 It also deploys the lottery as a device for choosing without discriminating—in short, a device for ensuring that we treat people as equals, Footnote 10 a device that has informed my work on resource allocation and the infamous Quality-Adjusted Life Years (QALYs). Footnote 11 “The Survival Lottery” is a paper that has been very kind to me and one without which no one would be interested in these thoughts.
“The Survival Lottery,” a rather theoretical and speculative piece, is incidentally responsible for my coming to be a “bioethicist” as well as a philosopher. My first job when my studentship ran out in 1974 was teaching aesthetics to art students and art teachers at Birmingham Poly. I took the job because I needed work; because Birmingham was within striking distance of Oxford, where I was settled; and because I could go on playing squash for Oxfordshire while earning a living. (Yes, my squash had improved rather more rapidly than my philosophy!) I never intended the academic job to become permanent. While teaching aesthetics I began to get invitations from medical schools and medical societies, all of which said, roughly: “We hear you have an interest in organ transplants; would you like to come and speak to our group?” I responded that my interest in transplants was limited to making a theoretical point, rather than being an interest in the ethical and scientific challenges of transplantation; but I asked them to tell me what their most pressing ethical issues were and what they would like me to speak about. Their problems were fascinating.
My long relationship with the Journal of Medical Ethics (JME) also began as a happy accident soon after I had moved to the University of Manchester in 1979. My first self-conscious paper in medical ethics was a reply to a paper by the surgeon John Lorber on the “selective non-treatment of handicapped newborns”—a subject that remains central to contemporary debate Footnote 12 —which had been published in the Journal of the Royal College of Physicians of London. I received a very “sniffy” letter back from the then editor, saying something like, “Your paper seems to be a paper within medical ethics . . . there is a journal for that sort of thing—the JME.” The editor did not seem to think that the paper to which I was replying should never have been published in his journal either—that author was a medic, and a surgeon to boot. I sent my paper to the JME and received a warm reply from Raanan Gillon, the editor, accepting it, and we have been close friends ever since. Eventually, I myself became editor of the JME (with Søren Holm) between 2004 and 2011. I also received, some years ago, a letter from a subsequent editor of the Journal of the Royal College of Physicians asking me to let them have a paper if I had something suitable, and asking why I had never submitted one.
Another long relationship with a journal and journal editor should also be recorded here: my relationship with CQ and its wonderful editor, Tomi Kushner. Tomi is an author’s ideal editor—unobtrusive, constructive, and brutally honest with the lightest of touches. She has made the CQ into a top journal publishing some of the most interesting work in bioethics today.
What Is Bioethics For?
If there is a theme that unites all my philosophical work, it is an exploration of the responsibility, shared by all moral agents, to make the world a better place. Karl Marx Footnote 13 is noted for the idea that the purpose of philosophy cannot simply be to understand the world, but must also be to change it. This thought, however, is not original to Marx; it is implicit in the writings of many philosophers. Plato certainly wanted to change the world for the better, and The Republic is devoted to systematic ways to achieve a better society. Locke, Rousseau, and Bentham would all have been equally at home with the idea. Indeed, as Bertrand Russell said, talking of Jeremy Bentham: “There can be no doubt that nine-tenths of the people living in England in the latter part of the last century were happier than they would have been if he had never lived. So shallow was his philosophy that he would have regarded this as a vindication of his activities.” Footnote 14 Russell’s irony will not be lost on even the most literal of readers. It is a sad comment on the philosophy of the twentieth (and twenty-first) century that in the fourscore years since Russell’s essay was written, concerns with the real world, no less than with attempts to make it better, have continued to be seen as evidence of lack of philosophical depth by the majority of professional philosophers, and Russell’s own attempts to make the world better are not, even now, ranked by most philosophers as among his significant philosophical contributions.
My particular interest, perhaps specialty, has been considering the impact of new and probable technologies and of policies concerning them and in attempting to judge, as objectively as I can, the quality of the reasons for and against their introduction. In particular, I have found myself criticizing the plethora of bad arguments that are always advanced as obstacles to change. This is not, I believe, because I am a natural radical but, rather, because I am a natural sceptic. I have found that all too many people are like the mother who said to her daughter, “Go and see what your little brother is doing and tell him to stop.” When I go and see what the scientists are doing, which I have made it part of my business to do, I much more often find that they are doing a good job and that we should remove rather than increase the obstacles in the way of their progress.
On May 12, 2008, John Sulston and I gave a public lecture at the Sheldonian Theatre in Oxford entitled “What Is Science For?” In that lecture we advanced two fairly commonplace propositions: that in the future there would be no more human beings and no more planet earth. Why will there be no more human beings? Either we will have been wiped out by our own foolishness or by brute forces of nature or, I hope, we will have further evolved by a process more rational and much quicker than Darwinian evolution, a process I described in my book Enhancing Evolution. Even more certain is that there will be no more planet earth. Our sun will die and with it all possibility of life on this planet. By the time this happens, we may hope that our better-evolved successors will have developed the science and the technology needed to survive and to enable us to find and colonize another planet or perhaps even to build another planet, and in the meanwhile to cope better with the problems presented by living on this planet. Either way, not only are these not things we should worry about, they are things we need actively to plan for if we or our successors are to survive into the far future.
People often believe that there is some moral imperative to be ultra-cautious in permitting new research and in introducing new technology. This approach is commonly understood as respecting the precautionary principle. However, it is not unusual to find this so-called precautionary principle being invoked in circumstances in which it is far from clear in which direction (if any) caution lies. We cannot know which way lies caution without having some rational basis for establishing the scale of likely dangers that will result from pursuing particular programs of research and innovation, and comparing those with the ongoing costs of failing to pursue the research to a successful conclusion. If the so-called precautionary principle had held sway in the Garden of Eden, it is doubtful if any of us would be here now. For there was then simply no rational basis for forecasting the success or failure of our species. And as for the deity’s recklessly engineering a woman out of a man’s rib . . . where was the evidence base for that? The challenge is the following: how can we accept our responsibility to make the world a better place and ensure that life on earth flourishes?
As Giuseppe di Lampedusa had Tancredi remark in The Leopard, “Se vogliamo che tutto rimanga come è, bisogna che tutto cambi” (“If we want things to stay as they are, things will have to change”). Footnote 15 How, not whether, they should change is the challenge for all of us interested in neuroethics. The future of ethics is the future of humanity.