Today it is virtually impossible to find a biological laboratory without computers; they are essential tools for the study of life. But fifty years ago, most biologists believed that computers were incompatible with biological research. For computers to enter the lab, biology had to be rendered fit for computation, and computers had to be adapted to research. Joseph November's Biomedical Computing narrates the early history of these intertwined processes, revealing a diversity of post-war disciplinary, infrastructural and national political agendas that shaped both computing and biology. Focusing on the 1950s and 1960s, his analysis deals with big institutions – such as the National Institutes of Health (NIH) and Stanford University – from which a small cast of individuals emerges as particularly important. November does a fine job of highlighting resistance to (and the failures of) their agendas, setting all of this against a background of post-war American optimism and the Cold War. In so doing, he brings out rich local detail, particularly about the diverse professionals involved and the places they worked: computer visionaries, physicians, biologists, technicians, federal administrators and computer manufacturers in laboratories, hospitals and clerical offices.
November's account begins with Second World War operations research (OR) – a constellation of quantitative, statistical and managerial methods first developed to optimize British radar systems, and later incorporated into post-war science on both sides of the Atlantic. Partly through the guidance of two innovators steeped in OR – Robert S. Ledley and Lee B. Lusted – and stimulated by the 1957 launch of Sputnik, the NIH began actively promoting computer development and use. From 1960, with direct support from the US Congress and with guidance from Ledley's National Biomedical Research Foundation – a non-profit organization dedicated to promoting the use of computers in biomedicine – the NIH sought nothing less than to transform the life sciences.
Initially the NIH concentrated on the multi-million-dollar funding and development of large-scale computer centres – an infrastructural model drawn from physics. But the anticipated multitudes demanding to use these facilities never appeared; more work was needed both to convince biologists of their utility and to make biological data more amenable to computation. So the NIH changed tack, directing investment to the development of much smaller, cheaper, programmable computers in the hope that this would nudge biologists in more quantitative directions. Charting this shift, November focuses on the successful career of the NIH-sponsored Laboratory Instrument Computer (LINC) – designed by Wesley Clark of the military-funded MIT Lincoln Laboratory – which had a graphical interface, was responsive and adaptable, and allowed real-time intervention and calibration: qualities that its developers and promoters believed were essential if computers were to be of use in biology.
For historians of post-war biology, Biomedical Computing is richest in its discussion of the LINC programme and its potential to promote and disrupt research agendas. In stark contrast to large mainframe computers, the LINC was intended to be ‘just another laboratory instrument’ (p. 178): it was small (‘refrigerator sized’) and flexible, integrating seamlessly into existing research programmes without new staff or infrastructure. But despite researchers' reported delight in the LINC's speed and reliability, their feedback to the NIH reveals multiple ways in which the machines had the potential to determine the tempo, labour and skills of the lab. Graduate students and technicians had limited time to invest in training, while lab heads became increasingly accustomed to hands-on computer work, welcoming into the laboratory a new professional: the computer expert. Scientists with commitments to the LINC reported radical new departures in their research, but also regretted that they were increasingly unable to talk to colleagues in their respective fields. Moreover, large-scale investments in training and personnel had the potential to lock laboratories into agendas that could only be sustained with ever more powerful computers and federal funding to match (strikingly reminiscent of the dynamics of sequencing research today). November also gives us tantalizing glimpses of the visions and failures of computers in the setting of hospitals, which were orders of magnitude bigger than research laboratories, far more diverse and much more public. Despite notable attempts to make computers into diagnostic tools and technologies for simplifying hospital administration (such as at Massachusetts General Hospital), change was arduous and slow.
These insights and the drama of November's narrative left me wanting to learn more about how computers shaped knowledge at the level of questions and laboratory practices – historical work with this focus would help to connect this account to, for example, recent books on the co-development of computing and biological sequencing, such as Hallam Stevens's Life Out of Sequence (2013) and Miguel García-Sancho's Biology, Computing, and the History of Molecular Sequencing (2012). As November points out, in the 1960s there were no established communities of computing biologists. For this reason, many of the pioneering projects of that decade and the next were local and small-scale. It seems likely that – as with the adoption of home computing and radio – the story of the eventual integration of computing into biomedicine had as much to do with small-scale researchers and local enthusiasts as it did with top-down projects, visionaries and entrepreneurs. Biomedical Computing offers an essential framework for marrying the bigger picture with case-by-case local analysis.