1. HUMANS AND TECHNOLOGY
In seeking to optimise the performance and safety of marine vessels, we must try to understand what humans do well and what technology does well. The underlying thought is to allocate to humans the tasks that humans do better, and to let technology complement them in the areas where technological solutions perform best.
Basically, it seems best to complement humans with adapted technological devices rather than to let the advancement of technology decide what tasks are left over for humans. Advanced technology can do a great deal today, and many designers appear to have a certain preference for technological solutions over solutions based on humans. Investments are often made in technological solutions whenever such solutions are available, which, in line with technological advancement, must eventually result in an increasingly narrow field of tasks left over for operators or ships' officers.
The contradiction is that by introducing so-called “safe” technological systems, the task of the mariner is made narrower, involving fewer human capacities, and the risk of human error may therefore increase. Most of us function best when our capacities are needed. When we need to act, think, analyse and make use of our knowledge and our mental as well as physical capacities, we stay alert. Our overall level of functioning normally decreases when only a limited number of our capacities are needed, when our engagement becomes unnecessary, or when we are deprived of the need to stay alert.
There is a major difference between task execution by technical devices and by humans. Technical devices are limited to executing the tasks they are designed for, whereas humans can execute a variety of tasks in a variety of ways. Humans are also capable of learning, as well as of inventing new strategies and methods of problem-solving. Generally speaking, this makes technical devices predictable and one-sided, while human performance remains unpredictable and flexible. It also makes operation by technical devices limited and well-defined, and operation by humans broader and continuously adaptable.
2. TECHNOLOGY ERROR IS HUMAN ERROR
Most technological devices are designed to expand the potential of human life, human senses and human motor skills. All technological devices have, however, certain characteristics in common:
• They are designed and constructed by humans.
• They are designed to solve tasks specified and defined by humans.
• They are subject to ageing and wear.
• Their maintenance is decided and taken care of by humans.
• Their performance must be supervised by humans.
The above-mentioned facts make it somewhat difficult to claim that technology is safer than humans. As long as humans – engineers and other professionals – are involved in the intellectual as well as the practical work necessary for designing and constructing a technological device, such a device might inherit disadvantages emanating from human shortcomings. We know it is human and not unusual to miscalculate, to misunderstand, and to misjudge. Furthermore, our human technological creativity is limited to addressing tasks we can think of. Sometimes, however, the unthinkable happens, an event the technological aid is not designed to deal with, and in such circumstances the technology is doomed to failure. When technology fails, it must therefore be looked upon and analysed as a subdivision of human error.
3. THE GROUNDING OF THE M/V ROYAL MAJESTY
The rather well-documented grounding of the passenger ship Royal Majesty outside Nantucket Island one evening in June 1995 might serve as an illustration (NTSB, 1997).
The Royal Majesty was en route from St George's, Bermuda, to Boston, USA, with 1,509 passengers and crew onboard. The officers on watch had navigated the vessel using the autopilot and the automated radar plotting aid (ARPA), which showed the ship's position and the pre-programmed track together with a map overlay on its display. They also plotted the voyage every hour on a chart, using data from the display of the ship's GPS. The voyage was planned in two legs: one long leg from Bermuda to the Boston traffic separation area, and a shorter, traffic-separated leg towards the port of Boston. The entrance as well as the traffic separation lanes were marked by buoys. At the moment of grounding, the vessel turned out to be 17 nautical miles west of track, while the ARPA display still showed the vessel on its intended track, well positioned in the proper lane of the Boston traffic separation.
The accident investigation concluded that several errors, human as well as technological, contributed to the accident. The GPS receiver, which fed the autopilot, the integrated bridge system and the ARPA with position data, had become disconnected from its antenna. Lacking a satellite signal, the receiver automatically switched to dead reckoning (DR) mode, calculating positions from speed and gyro-compass only; the only warning was a very faint and discreet one-second audio alarm that sounded in the chart room when the satellite signal was lost. In DR mode, the GPS did not adjust for the effects of current, sea and wind. During the 34 hours the receiver was disconnected, current and wind therefore gradually set the vessel west of the intended track, without any of the officers noticing it and without the ARPA display showing anything but the vessel exactly where it should be. The officers also relied on the false GPS information when plotting positions on the chart and did not detect the small indications on the display showing that it was in DR mode and not receiving proper satellite signals. Thus, from an instrument point of view, everything looked normal.
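The scale of the error is easy to reconstruct from the two figures the report cites, 34 hours and 17 nautical miles. The following back-of-the-envelope sketch, in Python with round numbers and variable names of our own choosing, shows how dead reckoning lets an unmodelled set accumulate linearly into cross-track error:

```python
# Back-of-the-envelope sketch. Only the 34 hours and the 17 nautical miles
# come from the accident report; everything else is assumed for illustration.
# Dead reckoning integrates heading and log speed only, so any unmodelled
# set accumulates linearly as cross-track error.

hours_in_dr_mode = 34.0        # time the receiver ran on dead reckoning
cross_track_error_nm = 17.0    # offset west of track at the grounding

implied_set_knots = cross_track_error_nm / hours_in_dr_mode
print(f"implied average westerly set: {implied_set_knots:.2f} knots")  # 0.50

# Hour by hour the error is small enough to be invisible on a display
# that keeps plotting the dead-reckoned position as if it were a fix:
for hour in (1, 6, 12, 24, 34):
    print(f"after {hour:2d} h: {implied_set_knots * hour:4.1f} nm off track")
```

An implied average set of about half a knot is unremarkable in itself, which helps to explain how the divergence could pass unnoticed from one hourly plot to the next.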
The officers on watch failed to positively identify two buoys at the southern entrance of the traffic separation lanes: one officer mistakenly thought he had seen the first buoy on radar, and the second officer lied to the master about having seen the following buoy. The same officer also failed to react to other ominous signs, such as reports from the lookout about mysterious red lights on the port side of the vessel and, later on, white and blue water dead ahead. Information like this should have alerted this officer, or at least made him suspicious, because lights on the port side could mean land, whilst white and blue water are signs of the sea grounding up.
The vessel had a second navigational aid, a radio-based system, Loran-C, which the officers didn't consult and which could have informed them of the true position of the ship. To make matters worse, a third technological system was not activated: the ship's echo-sounder, designed to record depth and give the officers an audio alarm when the sea was grounding up, was set at zero. This was the normal setting in port, to prevent continuous alarms. The echo-sounder had not been reset to the usual three-metre depth at the time of departure, nor had the recorder been turned on.
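The consequence of the zero setting is easy to make concrete. The sketch below is hypothetical, not the ship's actual echo-sounder logic; only the three-metre figure comes from the text above. A threshold of zero can never be undershot, so the alarm is disabled without appearing disabled:

```python
# Hypothetical shallow-water alarm logic, sketched for illustration only;
# the three-metre figure comes from the text, the rest is assumed.

IN_PORT_THRESHOLD_M = 0.0  # alongside setting, used to stop continuous alarms
AT_SEA_THRESHOLD_M = 3.0   # the usual setting, to be restored on departure

def shallow_water_alarm(depth_m: float, threshold_m: float) -> bool:
    """Alarm when the sounding drops below the configured threshold."""
    return depth_m < threshold_m

# A shoaling approach: with the threshold left at zero, no sounding can
# ever fall below it, so the alarm never fires.
for depth in (50.0, 20.0, 8.0, 2.0):
    print(depth,
          shallow_water_alarm(depth, IN_PORT_THRESHOLD_M),  # always False
          shallow_water_alarm(depth, AT_SEA_THRESHOLD_M))   # True at 2.0 m
```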
4. THE ACCIDENT INVESTIGATION FINDINGS
The accident investigation commission identified and detailed all the mistakes made by the officers and concluded that the master, as well as the individual officers, had behaved in a complacent manner. It also concluded that, had the officers adhered to good navigational practice, they would not have relied solely on one aid, the GPS, for their navigation but would have used the Loran-C as a back-up system, monitoring the GPS data. The investigation also pointed to the officers' “overreliance on the automated features of the integrated bridge system” (p. 47) as the probable cause of the grounding.
Regarding the technical failures, the accident commission found that the GPS antenna cable had become detached from its connection to the antenna. The antenna was located on top of the fly bridge, and about half a metre of the cable was routed openly on the roof, neither protected nor fixed. It could easily be stepped on or kicked, putting it under strain and at risk of being pulled loose from the antenna.
The accident commission also investigated the integrated bridge system and found an incompatibility of technical standards which made the integrated bridge system unable to differentiate GPS data generated by true satellite signals from data generated by dead reckoning (DR data) in the disconnected receiver itself. The integrated bridge system thus presented DR data on the ARPA display as if they were real, and the display consequently looked normal, showing the vessel on its intended track. The investigation commission learned that the designers of the integrated bridge system did not expect that data from the GPS could be anything but data derived from GPS satellites and “particularly not DR-derived position data” (p. 17).
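The missing safeguard can be illustrated in miniature. The sketch below shows a consumer-side validity check of the kind the integrated bridge system lacked. It borrows the public NMEA 0183 GGA sentence layout purely for illustration; the actual message formats and interfaces aboard the Royal Majesty may well have differed:

```python
# Hypothetical consumer-side validity check, sketched in Python. The
# sentence layout follows the public NMEA 0183 GGA convention; it is not
# a description of the actual equipment aboard the Royal Majesty.

TRUSTED_QUALITIES = {1, 2}  # 1 = GPS fix, 2 = differential GPS fix
# 0 = no fix; 6 = estimated (dead reckoning) in later NMEA revisions

def gga_fix_quality(sentence: str) -> int:
    """Return the fix-quality indicator (comma-field 6) of a GGA sentence."""
    fields = sentence.split(',')
    if not fields[0].endswith('GGA'):
        raise ValueError('not a GGA sentence')
    return int(fields[6])

def position_is_trustworthy(sentence: str) -> bool:
    """A consumer such as an autopilot or chart overlay should refuse to
    treat a position as a true fix unless the quality flag says it is."""
    try:
        return gga_fix_quality(sentence) in TRUSTED_QUALITIES
    except (ValueError, IndexError):
        return False

good = '$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,'
dr   = '$GPGGA,123519,4807.038,N,01131.000,E,6,00,,,M,,M,,'
print(position_is_trustworthy(good))  # True: safe to steer by
print(position_is_trustworthy(dr))    # False: alarm rather than silently steer
```

The point is not the particular protocol but the design principle: a system that consumes position data should inspect, and act upon, the quality flags that come with them.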
The only visible indications that the GPS was working with data calculated from speed and gyro-compass were the abbreviations “SOL” and “DR” on the numeric GPS display, indicating that the data were not derived from proper satellite signals. These letters were displayed at less than a tenth of the size of the two six-digit rows showing position and could therefore easily be overlooked by the navigating officers.
In brief, the accident investigation commission justifiably analysed the shortcomings of the officers, i.e. the master, the chief officer, the navigator and the second officer. The commission also analysed the technological failures and concluded that the officers showed an “overreliance on the automated features of the integrated bridge system”.
5. WHEN RELIANCE BECOMES OVERRELIANCE
It is interesting to note from a human error perspective that this accident report, like other reports both before and after this investigation, identifies the operational mistakes and points a finger at individual officers. Among other things, the officers are blamed for “overreliance” on technological aids. At the same time, the individual engineers responsible for the technological failures remain anonymous, hidden behind trademarks and company names. In effect, this investigation commission tells officers not to rely on technological aids; if they do, and the technology fails, the individual officer, not the people behind the technology, will bear the burden of blame.
Technological aids should, of course, be reliable; otherwise we do not need them. In this accident investigation, the officers are blamed for showing “overreliance”, i.e. too much reliance. It seems quite clear that they should have checked their navigational data, because an alternative, the Loran-C, was available. In particular, they should have checked their position when meeting unexpected indications such as the red lights on the port side and the white water ahead, and they should have positively identified the buoys.
The problem with reliance versus overreliance is, however, that it is impossible for any officer to judge at any specific moment what is too much and what is an adequate level of reliance. Semantically speaking, when technology works and an officer trusts it, this must be labelled reliance; if, on the other hand, technology fails and the officer still trusts it, it must be labelled overreliance. Whether the degree of reliance was adequate or excessive therefore seems, in hindsight, to be defined by the accuracy of the technology rather than by the performance of the operator. The consistent advice to operators never to trust technology may, however, be hard to follow in an increasingly technological world. Furthermore, if mariners are expected always to distrust technology, we are creating a thoroughly abnormal professional and psychological situation, in which no person is able to maintain his or her mental health for any length of time.
6. QUESTIONS AIMING AT THE ROOT CAUSE
A normal strategy when investigating accidents is to find answers to logical sequences of questions beginning with the words “why” and “who”. Why was the ship off track? Why didn't the officers notice it? Why didn't the officers use alternative means to keep track of the voyage? Who was responsible for this? And so on. Such a sequence of questions and answers helps an investigator to dig deeper into the chain of events and eventually to reach a root cause. When focusing on the bridge officers' behaviour, this commission asks a sequence of “why” and “who” questions and analyses the answers. The answer to their final “why” leads them to the final accident cause: in this case, the officers' overreliance on the technological aids.
It is important to bear in mind that the answers one gets depend on the questions one asks. Furthermore, it is of vital importance not to become satisfied and stop questioning, but always to try to establish whether the last answer might give rise to a new follow-up question. If this is not borne in mind, the final cause often turns out to be the point where the investigators felt satisfied with an answer and therefore discontinued the sequence of questions. The investigators of the behaviour of the Royal Majesty deck officers could, for example, have continued with the following question: Why were the officers overreliant on the technological aids?
7. INVESTIGATING HUMANS AND TECHNOLOGY ALIKE
Commissions normally don't adopt this technique of asking sequences of “why” and “who” questions when focusing on technology. In this accident investigation, they asked only a few “whys”. For example, they asked why the GPS led the officers astray. The answer: because it was on dead reckoning. Why was it on dead reckoning? Answer: because the antenna cable had become detached and there was no reception of a satellite signal. Why was the antenna cable detached? Answer: because the cable lay unprotected on the fly bridge and someone may have stepped on it. And here the commission stops; this final answer is thus elevated to the technological cause as far as the GPS is concerned.
However, there may be a few more questions to be asked. For example:
• Why was the antenna cable openly routed and not protected?
• Who was responsible for the cable installation?
• Who was responsible for supervising the mounting of the antenna cable?
Continued questioning like this digs deeper and will eventually unveil bad practices elsewhere, ending with someone, perhaps not even onboard, who should have been more thoughtful, i.e. another human error, in this case concealed behind impersonal technology. When investigating the technology, the commission could have tried to find answers using a more complete sequence of “why” and “who” questions:
• Why was the GPS display designed with such relatively small indicator letters for the DR mode?
• Why was the GPS equipped with such a short and faint alarm sound when it shifted to dead reckoning?
• Who was responsible for designing the GPS display and the alarm?
• Why was the integrated bridge system constructed using incompatible technical standards?
• Why was such an incompatibility not observed or corrected?
• Why didn't the designers of the integrated bridge system anticipate that GPS data “could be anything but data derived from GPS-satellites”?
• Why “particularly not DR-derived position data”?
• Why was this omission not observed earlier?
• Why was knowledge about GPS receivers lacking?
• Who was responsible for the design of the integrated bridge system?
Questions like these could, perhaps, elucidate the responsibilities of the chief designer at company “X”, or those of a chief technical supervisor or of the technical department of Majesty Cruise Line (the owner). Neglect and omissions are as blameworthy in technicians as in operators, but operators are normally the ones singled out. Using similar techniques to investigate operational as well as technological causes would, however, treat design engineers and officers alike and eliminate the bias against operators.
8. A BROADENING OF THE CONCEPT OF HUMAN ERROR
We should move away from conducting investigations in which engineers are excused and not seen as a link in the human error chain. Abandoning that practice would broaden the concept of human error and, above all, teach us more about how to counteract accidents and incidents. Safety-critical and wrong decisions are not made only onboard, nor only at a point in time close to an accident. Investigators have to broaden their perception and look beyond the simple fact that an operator is always present when an accident occurs while a designer or technician normally is not.
The lesson for a mariner is to maintain a constantly sceptical attitude towards technological aids, to use as many sources of information as possible and to continuously check and double-check both the information from, and the functioning of, technological devices. It would benefit safety at sea enormously if accident investigators ceased to accept technology, or “the technological state of the art” as it is sometimes called, as a physical fact, and started investigating it more thoroughly, using the same analytical acuity as when scrutinising other results of human activity. Accident investigators must be given the responsibility of moving to the forefront in changing attitudes, because they are the interpreters of reality and thus policy setters. They have the power to lead a change in attitude towards technology and towards understanding technological achievements for what they are: physical representations, or materialisations, of human thinking and acting.
So far, the message from accident investigators to mariners has been to adapt to imperfect or malfunctioning technology as if it were something given by nature. The widespread opinion that technology is more reliable than human operators and that more technology automatically means more safety, is simply not valid as long as technology is created by humans. It is especially invalid as long as equipment producers are allowed to supply and integrate technical devices that are unreliable.
9. TECHNOLOGICAL AIDS AND ERGONOMIC DESIGN
It is worth bearing in mind the driving forces of technological designers and suppliers. They compete with others to sell their products in order to make money, within a business climate where the competitive edges are economy, user-friendliness and technological product development. Designers seek to gain the upper hand by constantly utilising new technologies in order to produce what is possible and marketable rather than what is needed, and to give their products features as modern and exact as possible. Computerised “exactness” may even act contrary to safety and indirectly lead a mariner astray. Margareta Lützhöft puts it quite poignantly: “[Principles of exactness] … imply that the data are accurate and that the technology that represents them can provide mariners with exact, precise information about a particular thing.” (Lützhöft, 2004, p. 65)
Maritime equipment companies market their products with arguments about economy, technological advancement and reliability, probably also emphasising that their particular products are indispensable and the best on the market. Such companies would never hand a prospective customer a brochure or an information sheet explicitly stating that their equipment may be unreliable or imperfect, or that officers or operators should bear in mind not to trust it.
Equipment should be designed for, and adapted to, use by humans. In this accident we may well wonder whether the GPS display and its faint warning signal ever passed a normal ergonomic review during design. In this context it is worthwhile to quote, at some length, what Moray (1994, p. 69) has to say about design and user-friendliness.
“Enough is now known about the relation between ergonomic design, and the way that error is caused by ignoring that knowledge, to make it certain that almost all errors involving these aspects of a system should be laid directly at the door of designers and manufacturers. It is trivially easy to come by the relevant information that is needed to reduce errors in the design of equipment to be used by humans. This is the field of ergonomics of equipment design. Anyone who manufactures or installs equipment that violates the published data on these matters is directly responsible for the vast majority of the errors that arise in its use. Errors arising from violations of well-known design rules are the responsibility of the designer, not of the user.”
Moray concludes that “Almost every problem … of design can be solved by existing data,” and he even recommends specific books and databases as suitable sources of information for designers.
10. THE BENEFITS OF RELENTLESS QUESTIONING
When investigating technological systems, accident investigators should require manufacturers to present the particular data or ergonomics research they drew on in designing a given aid the way they did. Examining written documentation, and following a chain of “why” and “who” questions into the development of a given set of technological aids, would then, besides focusing on mariners, hold individual constructors and designers accountable for technological failures when an accident occurs. Maritime safety would undoubtedly benefit enormously from such an even-handed investigative approach.
As the grounding of the Royal Majesty has demonstrated, however, there is a particular human misconception that can lead us astray in an increasingly technological world. Pre-interpreted, indirect information, presented with apparent exactness by a technological aid, tends to make us disregard normal, direct human observation. Technologically aided information, in particular, takes precedence in cases where direct human observation needs interpretation, or is ambiguous, vague or contradictory.
Our reliance on technology is growing, and with it the importance of questioning its reliability. This applies to those who use technology as well as to those who attempt to trace the sources of “human error”.
ACKNOWLEDGEMENTS
This paper is a part of a larger project financially supported by Vinnova (The Swedish Governmental Agency for Innovation Systems). This support is gratefully acknowledged as well as the support from Stena Line AB, The Swedish Club, Star Cruises, Marine Profile AB and Force Technology.