Article 36 of Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts provides:
In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.Footnote 1
As weapons become more technologically complex, the challenges of complying with this apparently simple requirement of international law become more daunting. If a lawyer were to conduct a legal review of a sword, there would be little need for the lawyer to be concerned with the design characteristics beyond those that can be observed by the naked eye. The intricacies of the production and testing methods would equally be legally uninteresting, and even a lawyer could grasp the method of employment in combat. The same cannot be said about some modern weapons, let alone those under development. The use of a guided weapon with an autonomous firing option requires an understanding of the legal parameters; the engineering design, production, and testing (or validation) methods; and the way in which the weapon might be employed on the battlefield.Footnote 2 While somewhat tongue-in-cheek, there is some truth to the view that a person becomes a lawyer due to not understanding maths, another becomes an engineer due to not understanding English, and the third a soldier due to not understanding either!
Our purpose in writing this article is to break down those barriers through a multidisciplinary approach that identifies the key legal issues associated with employing weapons, sets out important features of emerging weapons, and then analyses how engineering tests and evaluations can be used to inform the weapon review process. Through the combination of these methods, we hope to provide a general framework by which the legal and engineering issues associated with weapon development and employment can be understood, regardless of the simplicity or complexity of the weapon.
We commence with a brief review of the key legal factors for employing and reviewing weapons, followed by three substantive parts. The first part deals with the target authorization process, regardless of the choice of weapon to be employed. The second part looks at some emerging weapons and the legal issues associated with those weapons. The final part considers the engineering issues associated with weapon reviews and, in particular, how an understanding of engineering processes can assist when reviewing highly complex weapons.
Key legal factors
The key legal steps under international humanitarian lawFootnote 3 when conducting an attack can be summarized as:
1. collecting information about the target;
2. analysing that information to determine whether the target is a lawful target for attack at the time of the attack;
3. appreciating the potential incidental effects of the weapon and taking feasible precautions to minimize those effects;
4. assessing the ‘proportionality’ of any expected incidental effects against the anticipated military advantage of the overall attack (not just the particular attack of the individual weapon);Footnote 4
5. firing, releasing, or otherwise using the weapon such that its effects are directed against the desired target;
6. monitoring the situation and cancelling or suspending the attack if the incidental effects are disproportionate.Footnote 5
In addition, consideration must also be given to the type of weapon to be employed, and particularly relevant to this article is that there are also ways of employing (using) an otherwise lawful weapon that might result in a banned effect (e.g., indiscriminately firing a rifle). The key legal factors when conducting the review of new weapons (including means and methods of combat) are whether the weapon itself is banned or restricted by international law;Footnote 6 and if not, whether the effects of the weapon are banned or restricted by international law.Footnote 7 Finally, the ‘principles of humanity and the dictates of the public conscience’ must also be kept in mind.Footnote 8
From an operational point of view, the key points can be expressed as: achieving correct target-recognition, determining how to exercise weapon release authorization, and controlling (or limiting) the weapon effect.
With weapons of relatively simple design, the associated legal issues are simple. With the sword example above, the only real issues are whether it is a ‘banned weapon’;Footnote 9 and if not, whether the person who wields it does so with discrimination. Any design flaws (e.g., poorly weighted) or manufacturing defects (e.g., metal is too brittle) are unlikely to affect the legal analysis and are primarily the worry of the person using the sword. With more complex weapons like crossbows, the complexity of the weapon design introduces the potential for discrimination to be affected by:
• design errors (e.g., the weapon does not fire straight or consistently with any sighting mechanism because the design is flawed); or
• manufacturing errors (e.g., the weapon does not fire straight or consistently with any sighting mechanism because the weapon was not built, within tolerance, to the design).
These types of errors have the potential to be magnified with long-range weapons (such as artillery), and batch variation also becomes a significant factor, as small differences between individual munitions are amplified over the longer range of the weapon. Further, modern weapons have a variety of aiming mechanisms that are not solely dependent on the operator, such as inertial guidance, the global positioning system (GPS), and electro-optical guidance. Finally, as discussed below, there is even the capacity for the weapon itself to select a target.
Weapon technology is advancing in many different areas and there is limited public material available on the avenues of research and the capabilities of the weapons being developed.Footnote 10 The following emerging weapons are, therefore, purely representative. In any event, the exact capabilities are of less importance to the discussion than are the general modes of operation.
Target recognition and weapon release authorization
The following discussion deals with weapons and weapon systems that have some level of functionality to discriminate between targets and, in appropriate circumstances, might attack a target without further human input. For example, a non-command-detonated landmine is a weapon that once placed and armed, explodes when it is triggered by a pressure plate, trip wire, etcetera. Such landmines have a very basic level of target recognition (e.g., a pressure plate landmine is triggered when a plate is stepped upon with a certain minimum amount of weight – e.g., 15 kilograms – and is clearly unlikely to be triggered by a mouse) and require no human weapon-release authorization.Footnote 11 More complex weapon systems purport to distinguish between civilian trucks and military vehicles such as tanks.Footnote 12 Automated and autonomous weapon systems need to be distinguished from remotely operated weapon systems. While there has been much discussion lately of unmanned combat systems, these are just remotely operated weapon platforms and the legal issues depend far more on the manner in which they are used than on anything inherent to the technology.Footnote 13 The following discussion differentiates automated weapons from autonomous weapons, briefly reviews some key legal issues associated with each type of weapon system, and concludes by outlining some methods for the lawful employment of such weapon systems.
Automated weapons
Automated weapon systems:Footnote 14
are not remotely controlled but function in a self-contained and independent manner once deployed. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate. An automated sentry gun, for instance, may fire, or not, following voice verification of a potential intruder based on a password.Footnote 15
In short, automated weapons are designed to fire automatically at a target when predetermined parameters are detected. Automated weapons serve three different purposes. Weapons such as mines allow a military to provide area denial without having forces physically present. Automated sentry guns free up combat capability and can perform what would be tedious work for long hours and without the risk of falling asleep.Footnote 16 Sensor-fused weapons enable a ‘shoot and scoot’ option and can be thought of as an extension of beyond-visual-range weapons.Footnote 17
The principal legal issue with automated weapons is their ability to discriminate between lawful targets and civilians and civilian objects.Footnote 18 The second main concern is how to deal with expected incidental injury to civilians and damage to civilian objects.Footnote 19
Starting with the issue of discrimination, it is worth noting that automated weapons are not new. Mines, booby traps, and even something as simple as a stake at the bottom of a pit are all examples of weapons that, once in place, do not require further control or ‘firing’ by a person. Some of these weapons also have an element of discrimination in the way they are designed. Anti-vehicle mines, for example, are designed to explode only when triggered by a certain weight. Naval mines were initially contact mines, and then advanced to include magnetic mines and acoustic mines. Of course, the problem with such mines is that there is no further discrimination between military objectives or civilian objects that otherwise meet the criteria for the mine to explode.Footnote 20 One way to overcome this is to combine various trigger mechanisms (sensors) and tailor the combination towards ships that are more likely to be warships or other legitimate targets than to be civilian shipping.
As weapons have become more capable and can be fired over a longer range, the ability to undertake combat identification of the enemy at greater distances has become more important. Non-cooperative target recognition (also called automatic target recognition) is the ability to use technology to identify distinguishing features of enemy equipment without having to visually observe that equipment.Footnote 21 A combination of technology like radar, lasers, communication developments, and beyond-visual-range weapon technology allows an ever-increasing ability to identify whether a detected object is friendly, unknown, or enemy and to engage that target. With each advance though, there is not ‘a single problem but rather … a continuum of problems of increasing complexity ranging from recognition of a single target type against benign clutter to classification of multiple target types within complex clutter scenes such as ground targets in the urban environment’.Footnote 22 Significant work is underway to produce integrated systems where cross-cueing of intelligence, surveillance, and reconnaissance sensors allows for improved detection rates, increased resolution, and ultimately better discrimination.Footnote 23 Multi-sensor integration can achieve up to 10 times better identification and up to 100 times better geolocation accuracy compared with single sensors.Footnote 24
With something as simple as a traditional pressure-detonated landmine, the initiating mechanism is purely mechanical. If a weight equal to or greater than the set weight is applied, the triggering mechanism will be activated and the mine will explode. This type of detonation mechanism cannot, by itself, discriminate between civilians and combatants (or other lawful targets). The potential for incidental injury at the moment of detonation is also not part of the ‘detonate/do-not-detonate’ equation. While this equation can be considered with command-detonated landmines, that is clearly a qualitatively different detonation mechanism. With pressure-detonated landmines, the two main ways of limiting incidental damage are either by minimizing the blast and shrapnel, or by placing the mines in areas where civilians are not present or are warned of the presence of mines.Footnote 25
However, the triggering mechanisms for mines have progressively become more complex. For example, anti-vehicle mines exist that are designed to distinguish between friendly vehicles and enemy vehicles based on a ‘signature’ catalogue. Mines that are designed to initiate against only military targets, and are deployed consistent with any design limitations, address the issue of discrimination. Nevertheless, that still leaves the potential for incidental injury and damage to civilians and civilian objects. The authors are not aware of any weapon that has sensors and/or algorithms designed to detect the presence of civilians or civilian objects in the vicinity of ‘targets’. So, while some weapons claim to be able to distinguish a civilian object from a military objective and only ‘fire’ at military objectives, the weapon does not also look for the presence of civilian objects in the vicinity of the military objective before firing. Take the hypothetical example of a military vehicle travelling in close proximity to a civilian vehicle. While certain landmines might be able to distinguish between the two types of vehicles and only detonate when triggered by the military vehicle, the potential for incidental damage to the civilian vehicle is not a piece of data that is factored into the detonate/do-not-detonate algorithm. This is not legally fatal to the use of such automated weapons, but does restrict the manner in which they should be employed on the battlefield.
Along with discrimination there is the second issue of the potential for incidental injury to civilians and damage to civilian objects. The two main ways of managing this issue for automated weapons are controlling how they are used (e.g., in areas with a low likelihood of civilians or civilian objects) and/or retaining human overwatch. Both points are discussed further below under the heading ‘Methods for the lawful employment of automated and autonomous weapons’. A third option is to increase the ‘decision-making capability’ of the weapon system, which leads us to autonomous weapons.
Autonomous weapons
Autonomous weapons are a sophisticated combination of sensors and software that ‘can learn or adapt their functioning in response to changing circumstances’.Footnote 26 An autonomous weapon can loiter in an area of interest, search for targets, identify suitable targets, prosecute a target (i.e., attack the target), and report the point of weapon impact.Footnote 27 This type of weapon can also act as an intelligence, surveillance, and reconnaissance asset. An example of a potential autonomous weapon is the Wide Area Search Autonomous Attack Miniature Munition (WASAAMM). The WASAAMM:
would be a miniature smart cruise missile with the ability to loiter over and search for a specific target, significantly enhancing time-critical targeting of moving or fleeting targets. When the target is acquired, WASAAMM can either attack or relay a signal to obtain permission to attack.Footnote 28
There are a number of technical and legal issues with weapons such as the WASAAMM.Footnote 29 While most of the engineering aspects of such a weapon are likely to be achievable in the next twenty-five years, the ‘autonomous’ part of the weapon still poses significant engineering issues. In addition, there are issues with achieving compliance with international humanitarian law, and resulting rules of engagement, that are yet to be resolved.Footnote 30 Of course, if the WASAAMM operated in the mode where it relayed a signal to obtain permission to attack,Footnote 31 that would significantly reduce the engineering and international humanitarian law (and rules of engagement) compliance issues – but in that mode it would not be a true autonomous weapon.
An area that is related to autonomous weapons is the development of artificial intelligence assistants to help humans shorten the observe, orient, decide, act (OODA) loop. The purpose of such decision-support systems is to address the fact that while ‘speed-ups in information gathering and distribution can be attained by well-implemented networking, information analysis, understanding and decision making can prove to be severe bottlenecks to the operational tempo’.Footnote 32 There is very limited publicly available information on how such decision-support systems might operate in the area of targeting.
The key issue is how to use ‘computer processing to attempt to automate what people have traditionally had to do’.Footnote 33 Using sensors and computer power to periodically scan an airfield for changes, and thereby cue a human analyst, has been more successful than using sensors such as synthetic aperture radar to provide automatic target recognition.Footnote 34 A clear difficulty is that the law relating to targeting is generally expressed in broad terms with a range of infinitely varying facts, rather than as precise formulas with constrained variables, which is why a commander's judgement is often needed when determining whether an object or person is subject to lawful attack.Footnote 35 As Taylor points out, it is this ‘highly contextual nature’ of targeting that results in there not being a simple checklist of lawful targets.Footnote 36 However, if a commander were prepared to forgo some theoretical capability, it is possible in a particular armed conflict to produce a subset of objects that are targetable at any given time. As long as the list is maintained and reviewed, at any particular moment in an armed conflict it is certainly possible to decide that military vehicles, radar sites, etcetera are targetable. In other words, a commander could choose to confine the list of targets that are subject to automatic target recognition to a narrow list of objects that are clearly military objectives by their nature – albeit thereby forgoing automatic target recognition of other objects that require more nuanced judgement to determine status as military objectives through their location, purpose, or use.Footnote 37
The next step is to move beyond a system that is programmed, to a system that, like a commander, learns the nature of military operations and how to apply the law to targeting activities. As communication systems become more complex, not ‘only do they pass information, they have the capacity to collate, analyse, disseminate … and display information in preparation for and in the prosecution of military operations’.Footnote 38 Where a system is ‘used to analyse target data and then provide a target solution or profile’,Footnote 39 then the ‘system would reasonably fall within the meaning of “means and methods of warfare” as it would be providing an integral part of the targeting decision process’.Footnote 40
What might a system look like that does not require detailed programming but rather learns? Suppose an artificial intelligence system scans the battlespace and looks for potential targets (let's call it the ‘artificial intelligence target recognition system’ (AITRS)). Rather than needing to be preprogrammed, the AITRS learns the characteristics of targets that have previously been approved for attack.Footnote 41 With time, the AITRS gets better at excluding low-probability targets and better at cueing different sensors and applying algorithms to defeat the enemy's attempt at camouflage, countermeasures, etcetera. In one example, the outcome of the process is that the AITRS presents a human operator with a simplified view of the battlespace where only likely targets and their characteristics are presented for human analysis and decision whether to attack. Importantly though, all of the ‘raw information’ (e.g., imagery, multispectral imagery, voice recordings of intercepted conversations, etcetera) is available for human review. In example two, while the AITRS still presents a human operator with a simplified view of the battlespace with likely targets identified for approval to attack, the human decision-maker is not presented with ‘raw information’ but rather analysed data.Footnote 42 For example, the human might be presented with a symbol on a screen that represents a motor vehicle along with the following:
• probability of one human rider: 99 per cent
• probability of body-match to Colonel John Smith:Footnote 43 75 per cent
• probability of voice-match to Colonel John Smith: 90 per cent.Footnote 44
And finally, in example three it is the AITRS itself that decides whether to prosecute an attack. Assuming the AITRS is also linked to a weapon system, the combination is an autonomous weapon system.
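To make these examples concrete, the following sketch (in Python) illustrates, in a highly simplified form, how analysed data of the kind listed above might be represented and tested against an identification threshold. The field names and threshold values are purely hypothetical and do not describe any existing system; in example two the summary and the result of the test would be presented to a human operator for decision, whereas in example three the same test would feed directly into the system's own engagement logic.

from dataclasses import dataclass

@dataclass
class TargetSummary:
    # Analysed (not raw) data presented for a single detected object.
    track_id: str
    p_single_occupant: float  # probability of one human rider
    p_body_match: float       # probability of body-match to the named individual
    p_voice_match: float      # probability of voice-match to the named individual

# Hypothetical engagement thresholds; in practice these would be derived from the
# adopted identification standard and the applicable rules of engagement.
THRESHOLDS = {"p_single_occupant": 0.95, "p_body_match": 0.90, "p_voice_match": 0.90}

def meets_identification_standard(t: TargetSummary) -> bool:
    # True only if every analysed confidence value meets its threshold.
    return (t.p_single_occupant >= THRESHOLDS["p_single_occupant"]
            and t.p_body_match >= THRESHOLDS["p_body_match"]
            and t.p_voice_match >= THRESHOLDS["p_voice_match"])

track = TargetSummary("track-017", p_single_occupant=0.99, p_body_match=0.75, p_voice_match=0.90)
print(meets_identification_standard(track))  # False: the 75 per cent body-match falls short

On the illustrative figures above, the object would not meet the standard; in example two that shortfall would be visible to the operator, whereas the point of example three is that no human would see it.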
It would seem beyond current technology to be able to program a machine to make the complicated assessments required to determine whether or not a particular attack would be lawful if there is an expectation of collateral damage.Footnote 45 Indeed, one would wonder even where to start as assessing anticipated military advantage against expected collateral damage is like comparing apples and oranges.Footnote 46 For now, that would mean any such weapon system should be employed in such a manner as to reduce the risk of collateral damage being expected.Footnote 47 However, a true AITRS that was initially operated with human oversight could presumably ‘learn’ from the decisions made by its human operators on acceptable and unacceptable collateral damage.Footnote 48
As pointed out at footnote 46 above, collateral damage assessments are not just about calculating and comparing numbers – a function well suited to current computers. Instead, they involve a clearly qualitative assessment, and one where the things being compared are not even alike. How could a machine ever make such judgements? Perhaps not through direct programming but rather by pursuing the artificial intelligence route. So, along with learning what constitutes a lawful target, our hypothetical AITRS would also learn how to make a proportionality assessment in the same way humans do – through observation, experience, correction in the training environment (e.g., war games), and so on. An AITRS that failed to make reasonable judgements (in the view of the instructing staff) might be treated the same as a junior officer who never quite makes the grade (perhaps kept on staff but not given decision-making authority), whereas an AITRS that proved itself on course and in field exercises could be promoted, entrusted with increasing degrees of autonomy, etcetera.
Another technical problem is that the required identification standard for determining whether a person or object is a lawful target is not clear-cut. The standard expressed by the International Criminal Tribunal for the Former Yugoslavia is that of ‘reasonable belief’.Footnote 49 In their rules of engagement, at least two states have adopted the standard of ‘reasonable certainty’.Footnote 50 A third approach, reflected in the San Remo Rules of Engagement Handbook, is to require identification by visual and/or certain technical means.Footnote 51 The commander authorizing deployment of an autonomous weapon, and any operator providing overwatch of it, will need to know what standard was adopted to ensure that both international law and any operation-specific rules of engagement are complied with. It is also possible to combine the requirement for a particular level of certainty (e.g., reasonable belief or reasonable certainty) with a complementary requirement for identification to be by visual and/or certain technical means.
Presumably, for any identification standard to be codedFootnote 52 into a computer program, that standard would need to be turned into a quantifiable criterion expressed as a statistical probability. For example, ‘reasonable belief’ would need to be transformed from a subjective concept into an objective and measurable quantity – say, a ‘95 per cent degree of confidence’. This would then be used as the benchmark against which field experience (including historical data) could produce an empirical equation to profile a potential target. New battlespace data could then be compared to quantify (assess) the strength of correlation against the required degree of confidence (in the current example, 95 per cent or greater correlation). However, the uncertainty of measurement associated with the battlespace feedback sensors would also need to be quantified as a distinctly separate acceptance criterion. For example, assume that in certain operational circumstances the uncertainty of measurement is plus or minus 1 per cent, whereas in other operational circumstances it is plus or minus 10 per cent. In the first circumstance, to be confident of 95 per cent certainty, the measured correlation would need to be not less than 96 per cent. In the second case, though, the required degree of confidence of 95 per cent could never be achieved due to the measurement uncertainty.Footnote 53
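The arithmetic in the example above can be made explicit. The following sketch (in Python) treats the sensor measurement uncertainty as a simple additive margin on top of the required confidence level, which is a deliberate simplification of a full uncertainty-of-measurement analysis; the function and figures are illustrative only.

def required_correlation(confidence_pct: float, uncertainty_pct: float):
    # Minimum measured correlation needed to assure the required degree of
    # confidence once measurement uncertainty is set aside as a separate
    # acceptance criterion. Returns None when the standard is unachievable.
    needed = confidence_pct + uncertainty_pct
    return needed if needed <= 100.0 else None

print(required_correlation(95.0, 1.0))   # 96.0: achievable with a +/- 1 per cent sensor
print(required_correlation(95.0, 10.0))  # None: 95 per cent confidence can never be assured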
Methods for the lawful employment of automated and autonomous weapons
Most weapons are not unlawful as such – it is how a weapon is used and the surrounding circumstances that affect legality.Footnote 54 This applies equally to automated and autonomous weapons, unless such weapons were to be banned by treaty (e.g., like non-command-detonated anti-personnel landmines). There are various ways to ensure the lawful employment of such weapons.
[The] absence of what is called a ‘man in the loop’ does not necessarily mean that the weapon is incapable of being used in a manner consistent with the principle of distinction. The target detection, identification and recognition phases may rely on sensors that have the ability to distinguish between military and non-military targets. By combining several sensors the discriminatory ability of the weapon is greatly enhanced.Footnote 55
One method of reducing the target recognition and programming problem is to not try to achieve the full range of targeting options provided for by the law. For example, a target recognition system might be programmed to only look for high-priority targets such as mobile air defence systems and surface-to-surface rocket launchers – objects that are military objectives by nature and, therefore, somewhat easier to program as lawful targets compared to objects that become military objectives by location, purpose, or use.Footnote 56 As these targets can represent a high priority, the targeting software might be programmed to only attack these targets and not prosecute an attack against an otherwise lawful target that was detected first but is of lower priority.Footnote 57 If no high-priority target is detected, the attack could be aborted or might be prosecuted against other targets that are military objectives by nature. Adopting this type of approach would alleviate the need to resolve such difficult issues as how to program an autonomous system to not attack an ambulance except where that ambulance has lost protection from attack due to location, purpose, or use.Footnote 58
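A minimal sketch (in Python) of this narrowing approach is set out below. The target classes listed are hypothetical examples of objects that are military objectives by their nature; classes whose status depends on location, purpose, or use are deliberately absent, so the hard cases are never presented for autonomous engagement at all.

# Hypothetical catalogue of classes treated as military objectives by nature.
ENGAGEABLE_BY_NATURE = {
    "mobile air defence system",
    "surface-to-surface rocket launcher",
    "main battle tank",  # a further by-nature class, included for illustration
}
HIGH_PRIORITY = {"mobile air defence system", "surface-to-surface rocket launcher"}

def select_target(detections, allow_lower_priority=False):
    # Consider only detections whose class is a military objective by nature,
    # prefer the high-priority classes, and otherwise abort unless the mission
    # profile explicitly permits engaging lower-priority by-nature targets.
    by_nature = [d for d in detections if d["target_class"] in ENGAGEABLE_BY_NATURE]
    high = [d for d in by_nature if d["target_class"] in HIGH_PRIORITY]
    if high:
        return high[0]
    return by_nature[0] if (allow_lower_priority and by_nature) else None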
A further safeguard includes having the weapon ‘“overwatched” and controlled remotely, thereby allowing for it to be switched off if considered potentially dangerous to non-military objects’.Footnote 59 Such overwatch is only legally (and operationally) useful if the operators provide a genuine review and do not simply trust the system's output.Footnote 60 In other words, the operator has to add value. For example, if an operator is presented with an icon indicating that a hostile target has been identified, then the operator would be adding to the process if that person separately considered the data, observed the target area for the presence of civilians, or in some other way did more than simply authorize or prosecute an attack based on the analysis produced by the targeting software. That is, the operator is either double-checking whether the target itself may be lawfully attacked, or is ensuring that the other precautions in attack (minimizing collateral damage, assessing any remaining collateral damage as proportional, issuing a warning to civilians where required, etcetera) are being undertaken. A problem arises where the operator is provided with large volumes of data,Footnote 61 as his or her ability to provide meaningful oversight could be compromised by information overload.Footnote 62 A way to manage this would be for the targeting software to be programmed in such a way that the release of a weapon is recommended only when the target area is clear of non-military objects.Footnote 63 In other circumstances, the targeting software might simply identify the presence of a target and of non-military objects and not provide a weapon release recommendation, but only a weapon release solution. In other words, the targeting software identifies how a particular target could be hit, but is neutral on whether or not the attack should be prosecuted, thereby making it clear to the operator that there are further considerations that still need to be taken into account prior to weapon release.
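The distinction drawn above between a weapon release recommendation and a weapon release solution can be expressed very simply in software terms. In the sketch below (in Python, with hypothetical field names), a firing solution is always computed, but a release recommendation is withheld whenever non-military objects are detected, signalling to the operator that further precautions remain to be weighed.

def targeting_output(target_identified: bool, non_military_objects_detected: bool, firing_solution: dict) -> dict:
    # Always return the solution (how the target could be hit); recommend
    # release only when the target area is assessed clear of non-military objects.
    return {
        "firing_solution": firing_solution,
        "release_recommended": target_identified and not non_military_objects_detected,
    }

# Civilian vehicles detected near the target: solution provided, no recommendation.
print(targeting_output(True, True, {"bearing_deg": 42.0, "range_m": 3200}))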
Two further legal aspects of automated and autonomous weapons (and remotely operated weapons) that require further consideration are the rules relating to self-defenceFootnote 64 and how the risk to own forces is considered when assessing the military advantage from an attack and the expected collateral damage.
The issue of self-defence has two aspects: national self-defence (which is principally about what a state can do in response to an attack) and individual self-defence (which is principally about what an individual can do in response to an attack).Footnote 65 Prior to an armed conflict commencing, the first unlawful use of force against a state's warships and military aircraft may be considered as amounting to an armed attack on that state, thereby allowing it to invoke the right of national self-defence. Would the same conclusion be reached if the warship or military aircraft were unmanned? Imagine an attack on a warship that for whatever reason had none of the ship's company on board at the time of the attack. What is it about attacks on warships that is of legal significance: the mere fact that it is a military vessel that is flagged to the state, the likelihood that any attack on the warship also imperils the ship's company, or a combination of the two?
Second, consider the different legal authorities for using lethal force. In broad terms, individual self-defence allows Person A to use lethal force against Person B when Person B is threatening the life of Person A.Footnote 66 Whether Persons A and B are opposing enemy soldiers or not is an irrelevant factor. Compare this to international humanitarian law, which allows Soldier A to use lethal force against Soldier B purely because Soldier B is the enemy.Footnote 67 Soldier B need not be posing any direct threat to Soldier A at all. Indeed, Soldier B may be asleep and Soldier A might be operating a remotely piloted armed aircraft. However, Soldier A must be satisfied, to the requisite legal standard, that the target is in fact an enemy soldier. Identification, not threat, is the key issue. However, during rules of engagement briefings military members are taught that during an armed conflict not only can they fire upon identified enemy, but also that nothing in international humanitarian law (or other law for that matter) prevents them from returning fire against an unidentifiedFootnote 68 contact in individual self-defence.Footnote 69 This well-known mantra will require reconsideration when briefing operators of unmanned assets. In all but the most unusual of circumstances, the remote operator of an unmanned asset will not be personally endangered if that unmanned asset is fired upon. This issue will need to be carefully considered by drafters of rules of engagement and military commanders, as generally returning fire to protect only equipment (and not lives) would be illegal under the paradigm of individual self-defence.Footnote 70 Compare this to the international humanitarian law paradigm that arguably would allow use of lethal force to protect certain types of property and equipment from attack, based on an argument that whoever is attacking the property and equipment must be either (1) an enemy soldier, or (2) a civilian taking a direct part in hostilities.Footnote 71
Similarly, how to treat an unmanned asset under international humanitarian law when considering the ‘military advantage’ to be gained from an attack is not straightforward. While risk to attacking forces is a factor that can be legitimately considered as part of the military advantage assessment,Footnote 72 traditionally that has been thought of as applying to the combatants and not the military equipment. While it is logical that risk of loss of military equipment is also a factor, it will clearly be a lesser factor compared with risk to civilian life.
In conclusion, it is the commander who has legal responsibility ‘for ensuring that appropriate precautions in attack are taken’.Footnote 73 Regardless of how remote in time or space from the moment of an attack, individual and state responsibility attaches to those who authorize the use of an autonomous weapon system.Footnote 74 It should be noted that this does not mean a commander is automatically liable if something goes wrong. In war, accidents happen. The point under discussion is who could be found liable, not who is guilty.
The above discussion has focused on the intended target of a weapon. The following discussion deals with emerging weapons that highlight the legal issue of weapon effect even where the target is an otherwise lawful target.
Weapon effect
Directed energy weapons
Directed energy weapons use the electromagnetic spectrum (particularly ultraviolet through to infrared and radio-frequency (including microwave)) or sound waves to conduct attacks.Footnote 75 As a means of affecting enemy combat capability, directed energy weapons can be employed directly against enemy personnel and equipment, or indirectly as anti-sensor weapons. For example, laser systems could be employed as ‘dazzlers’ against aided and unaided human eyesight, infrared sensors, and space-based or airborne sensors,Footnote 76 and as anti-equipment weapons.Footnote 77 High-powered microwaves can be employed against electronic components and communications equipment. Lasers and radars are also used for target detection, target tracking, and finally for providing target guidance for other conventional weapons.
When directed energy weapons are employed against enemy communication systems, the legal issues are not significantly different from those that would arise if kinetic means were used. Is the target (e.g., a communication system) a lawful military objective and have incidental effects on the civilian population been assessed? As directed energy weapons have the clear potential to reduce the immediate collateral effects commonly associated with high-explosive weapons (e.g., blast and fragmentation),Footnote 78 the main incidental effect to consider is the second-order consequences of shutting down a communication system such as air traffic control or emergency services. While it is common to state that second-order effects must be considered when assessing the lawfulness of an attack, a proper understanding of what is ‘counted’ as collateral damage for the purpose of proportionality assessments is required. It is a mistake to think that any inconvenience caused to the civilian population must be assessed. Along with death and injury, it is only ‘damage’ to civilian objects that must be considered.Footnote 79 Therefore, a directed energy weapon attack on an air traffic control system that affected both military and civilian air trafficFootnote 80 need only consider the extent to which civilian aircraft would be damaged, along with associated risk of injury or death to civilians, and need not consider mere inconvenience, disruption to business, etcetera.Footnote 81
Directed energy weapons are also being developed as non-lethal (also known as less-lethal) weapons to provide a broader response continuum for a controlled escalation of force.Footnote 82 For a variety of operational and legal reasons, it is preferable to have an option to preserve life while still achieving a temporary or extended incapacitation of the targeted individual. However, the very terms used to describe these weapons can cause problems beyond any particular legal or policy constraints.Footnote 83 The unintended consequences of the weapons (particularly due to the unknown health characteristics of the target) can lead to permanent injury or death. Such consequences are then used to stigmatize the concept of a non-lethal/less-than-lethal weapon. The important point to remember is that as for any other combat capability (including kinetic weapons), use of directed energy weapons during an armed conflict is governed by international humanitarian law and by any applicable rules of engagement and directions from the combat commander.Footnote 84
Non-lethal directed energy weapons can be used in combination with traditional, lethal weapons. For example, it is reported that:
Another weapon … can broadcast deafening and highly irritating tones over great distances. The long-range device precisely emits a high-energy acoustic beam as far as five football fields away. To a reporter standing across the airstrip from where it was set up in a hangar here, it sounded as if someone was shouting directly into his ear.
The device ‘has proven useful for clearing streets and rooftops during cordon and search … and for drawing out enemy snipers who are subsequently destroyed by our own snipers’, the 361st Psychological Operations Company, which has tested the system in Iraq, told engineers in a report.Footnote 85
This form of directed energy weapon demonstrates two key issues associated with non-lethal weapon technology. First, such weapons are likely to be used against a civilian population – in this case, to clear streets and rooftops.Footnote 86 Second, the non-lethal weapon may be employed in conjunction with existing weapons to achieve a lethal effect.
Other directed energy weapons include active denial systems.Footnote 87
One of the weapons that has been successfully tested is a heat beam … that can ‘bake’ a person by heating the moisture in the first one-64th of an inch of the epidural layer of the skin. It was originally developed for the Department of Energy to keep trespassers away from nuclear facilities.Footnote 88
The ‘irresistible heating sensation on the adversary's skin [causes] an immediate deterrence effect’;Footnote 89 because the heating sensation causes ‘intolerable pain [the body's] natural defense mechanisms take over’.Footnote 90 The ‘intense heating sensation stops only if the individual moves out of the beam's path or if the beam is turned off’.Footnote 91 Because flamethrowers and other incendiary weapons are only regulated and not specifically banned by international humanitarian law, there is no legal reason to deny the use of the active denial system in combat.Footnote 92
Where active denial systems are used as an invisible ‘fence’, it is clearly a matter for the individual whether to approach the fence and, if so, whether to try to breach the perimeter.Footnote 93 However, if active denial systems are aimed at a person or group to clear an area,Footnote 94 an issue that needs consideration is how a person being subjected to this type of attack could either surrender or consciously choose to leave the area when they cannot see the beam,Footnote 95 may be unaware that this type of technology even exists, and are reacting to intolerable pain like the ‘feeling … [of] touching a hot frying pan’.Footnote 96 Reacting instinctively to intolerable pain seems likely to make a person incapable of rational thought.Footnote 97 Employment of such weapons will need to be well regulated through a combination of tactics, techniques and procedures, and rules of engagement to ensure that unnecessary suffering is not caused through continued use of the weapon because a person has not cleared the target area.Footnote 98 In this respect, and noting that the active denial system has ‘successfully undergone legal, treaty and US Central Command rules of engagement reviews’,Footnote 99 it is worth recalling that as states’ legal obligations vary, and as states may employ weapons differently, the legal review by one state is not determinative of the issue for other states.Footnote 100 This may prove interesting in the sale of highly technical equipment, as the details of a weapon's capability are often highly classified and compartmentalized. The state conducting the review may not control access to the necessary data. As discussed below, this may require lawyers, engineers, and operators to work together cooperatively and imaginatively to overcome security classification and compartmental access limitations.
A similar directed energy weapon using different technology is ‘a high-powered white light so intense as to send any but the most determined attackers running in the opposite direction’.Footnote 101 Concepts for employment of the weapon appear to include using it as a means to identify hostile forces, as evidenced by the statement: ‘If anyone appears willing to withstand the discomfort, “I know your intent”, [Colonel Wade] Hall [a top project official] said. “I will kill you.”’Footnote 102 While initially such statements appear quite concerning, it is instructive to consider whether this is in reality any different from the ‘traditional’ warnings and escalation of force scenarios such as ‘stop or I will shoot’ or employment of flares and dazzlers to warn vehicles not to approach too close to military convoys.
Where directed energy weapons are used to counter (often improvised) explosive devices,Footnote 103 the issue is primarily about consequences. If the directed energy weapon is causing a detonation at a safe range from friendly forces, there is a requirement to consider whether any civilians or other non-combatants are in the vicinity of the detonation and, therefore, at risk of injury or death.Footnote 104
Cyber operations
Cyber operations are:
operations against or via a computer or a computer system through a data stream.Footnote 105 Such operations can aim to do different things, for instance to infiltrate a system and collect, export, destroy, change, or encrypt data or to trigger, alter or otherwise manipulate processes controlled by the infiltrated computer system. By these means, a variety of ‘targets’ in the real world can be destroyed, altered or disrupted, such as industries, infrastructures, telecommunications, or financial systems.Footnote 106
Cyber operations are conducted via software, hardware, or via a combination of software and personnel. A recent example of a cyber operation that was essentially conducted purely by software is the Stuxnet virus. Once in place, the Stuxnet virus appears to have operated independently of any further human input.Footnote 107 Compare this to a software program that is designed to allow a remote operator to exercise control over a computer – allowing, among other things, the upload of data or modification of data on the target computer. Finally, a non-military example of a cyber operation that requires both hardware and software is credit card skimming.
The application of specific international humanitarian law rules to cyber warfare remains a topic of debate.Footnote 108 However, for the purposes of this article, it is assumed that the key international humanitarian law principles of distinction, proportionality, and precaution apply, as a minimum, to those cyber attacks that have physical consequences (e.g., the Stuxnet virus altered the operating conditions for the Iranian uranium enrichment centrifuges, which ultimately resulted in physical damage to those centrifuges).Footnote 109 Four particular legal aspects of cyber weapons are worth mentioning.
First, cyber weapons have the distinct possibility of being operated by civilians.Footnote 110 The ‘weapon’ is likely to be remote from the battlefield, is technologically sophisticated, and does not have an immediate association with death and injury. The operation of the cyber weapon exposes a civilian operator to lethal targeting (as a civilian taking a direct part in hostilities),Footnote 111 as well as potential criminal prosecution for engaging in acts not protected by the combatant immunity enjoyed by members of the armed forces.Footnote 112 These issues are discussed in detail in a recent article by Watts who raises, among other things, the possibility of the need for a complete rethink of how the law on direct participation in hostilities applies in the area of cyber warfare.Footnote 113 It could also be queried what training such civilian operators might have in the relevant rules of international humanitarian law.Footnote 114
Second, cyber attacks can have consequences in the real world and not just the virtual world.Footnote 115 Where those consequences affect the civilian population by causing loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, those consequences must be considered under international humanitarian law.Footnote 116 The discussion of this point for directed energy weapon attacks applies equally to cyber attacks. A further related consideration is that where it could reasonably be expected that a virus introduced into a military system might find its way into civilian systems and cause infrastructure damage, that collateral damage must also be considered.Footnote 117 A common example of a possible cyber attack that would directly affect civilians is disabling a power station – either just by shutting it down, or by overloading or shutting down a fail-safe, thereby damaging hardware. This can potentially happen to any infrastructure maintained by software.
Third, cyber weapons need to be considered not only in relation to international humanitarian law, but also very importantly under jus ad bellum.Footnote 118 As Blake and Imburgia point out, even if a cyber attack has no kinetic effects, the attack might still be contrary to the UN Charter specifically or international law generallyFootnote 119 and may, if amounting to an ‘armed attack’, legitimize the use of force by the affected state in self-defence.
Finally, the very nature of cyber warfare can make it hard to determine who initiated an attack, and issues of attribution go to the very heart of both state responsibility and individual accountability.Footnote 120
Nanotechnology and weaponization of neurobiology
Nano-weapons are hard to define, but encompass not only objects and devices using nanotechnology that are designed or used for harming humans, but also those causing harmful effects in nano-scale if those effects characterise the lethality of the weapon.Footnote 121
An example of the latter is the Dense Inert Metal Explosive (DIME):
DIME involves an explosive spray of superheated micro shrapnel made from milled and powdered Heavy Metal Tungsten Alloy (HMTA), which is highly lethal within a relatively small area. The HMTA powder turns to dust (involving even more minute particles) on impact. It loses inertia very quickly due to air resistance, burning and destroying through a very precise angulation everything within a four-meter range – and it is claimed to be highly carcinogenic and an environmental toxin. This new weapon was developed originally by the US Air Force and is designed to reduce collateral damage in urban warfare by limiting the range of explosive force.Footnote 122
The ‘capacity [of DIME] to cause untreatable and unnecessary suffering (particularly because no shrapnel is large enough to be readily detected or removed by medical personnel) has alarmed medical experts’.Footnote 123 The other concern with nanotechnology is that elements and chemicals that on a macro scale are not directly harmful to humans can be highly chemically reactive on the nanoscale. This may require a review of what international humanitarian law considers as chemical weapons.
Similarly, with the current advances in the understanding of the human genome and in neuroscience, there exists the very real possibility of militarization of this knowledge.Footnote 124 One of the legal consequences is a need to reappraise maintaining a legal distinction between chemical and biological weapons. It may be that based on the manner in which they can be used we should legally view these weapons as part of a ‘continuous biochemical threat spectrum, with the Chemical Weapons Convention and Biological and Toxin Weapons Convention (CWC and BTWC) overlapping in their coverage of mid-spectrum agents such as toxins and bioregulators’.Footnote 125
There are competing tensions in this area. Quite understandably, chemical and biological weapons have a ‘bad name’. At the same time, research is underway into non-lethal weapons such as incapacitating biochemical weapons.
Although there is currently no universally agreed definition, incapacitating biochemical agents can be described as substances whose chemical action on specific biochemical processes and physiological systems, especially those affecting the higher regulatory activity of the central nervous system, produce a disabling condition (e.g., can cause incapacitation or disorientation, incoherence, hallucination, sedation, loss of consciousness). They are also called chemical incapacitating agents, biotechnical agents, calmatives, and immobilizing agents.Footnote 126
A key point to note is that while traditional biological and chemical agents were used against enemy soldiers or non-cooperative civilians, and clearly would be classified as weapons, modern agents may be used to ‘enhance’ the capability of a state's own military forces. In such cases, it is much less likely that the agents would amount to weapons.Footnote 127 For example:
within a few decades we will have performance enhancement of troops which will almost certainly be produced by the use of diverse pharmaceutical compounds, and will extend to a range of physiological systems well beyond the sleep cycle. Reduction of fear and pain, and increase of aggression, hostility, physical capabilities and alertness could significantly enhance soldier performance, but might markedly increase the frequency of violations of humanitarian law. For example, increasing a person's aggressiveness and hostility in conflict situations is hardly likely to enhance restraint and respect for legal prohibitions on violence.Footnote 128
Similar concerns have already been expressed about remotely operated weapons. And in a manner similar to using directed energy weapons to disperse civilian crowds, there is also the potential to pacify civilians in occupied territories through chemicals included in food distributions.Footnote 129 Perhaps of even more concern, as it goes directly to the ability to enforce international humanitarian law, particularly command responsibility, is the possibility of ‘memories of atrocities committed [being] chemically erased in after-action briefings’.Footnote 130
The need to understand the role of engineering in the weapon review process
The above overview of emerging weapons highlights that, as weapons become more complex, it becomes increasingly difficult for non-experts to understand the manner in which a weapon operates. This part of the article focuses on engineering issues and how an understanding of those issues can be factored into the legal review of weapons.
Why a weapon may not perform as intended
A weapon may not perform as intended or in accordance with the ‘product design specification’Footnote 131 for a variety of reasons. Those reasons include: inadequate technical specification, design flaws, or poor manufacturing quality control (batch variation). Other factors include ‘age of the munition, storage conditions, environmental conditions during employment, and terrain conditions’.Footnote 132
A simple example of specification failure, or at least a specification that will not be 100 per cent reliable, is an anti-vehicle mine that is not intended to explode when stepped on by a human. For example, if it is a load-activated mine, the load might be set to 150 kg. However, biomechanical research:
shows very strong evidence that a human being can very easily exert an equivalent force close to and above such pressures. For example, an 8-year-old boy weighing 30 kg, running downhill in his shoes, exerts a ground force of 146 kg. A 9-year-old girl weighing 40 kg running downhill in her bare feet exerts 167 kg of force. An adult male running will exert 213 kg.Footnote 133
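The mismatch between a static load specification and the dynamic forces quoted above can be shown in a few lines. The sketch below (in Python) simply compares the quoted peak ground forces against a hypothetical 150 kg trigger setting; the point is that a specification framed in terms of static mass does not guarantee the intended ‘not triggered by a human’ behaviour.

TRIGGER_THRESHOLD_KG = 150.0  # hypothetical load setting for an anti-vehicle mine

# Peak ground forces (kilograms-force) from the biomechanical data quoted above.
quoted_ground_forces = {
    "8-year-old boy (30 kg) running downhill in shoes": 146,
    "9-year-old girl (40 kg) running downhill barefoot": 167,
    "adult male running": 213,
}

for person, force_kg in quoted_ground_forces.items():
    status = "would trigger" if force_kg >= TRIGGER_THRESHOLD_KG else "below threshold"
    print(f"{person}: {force_kg} kg -> {status}")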
Alternatively, the specification might be correct but the design, manufacturing process, or integration of systems does not consistently lead to the intended result. This may be an engineering quality issue, where the implemented engineering processes were not sufficiently robust and led to product flaws, and as such it presents a reliability issue.
Where a weapon does not perform as intended, two prime consequences are:
• The desired combat effect is not achieved. If the weapon fails to perform, own forces are put at risk. If the weapon does not perform to specification, civilians and civilian property are put at risk.Footnote 134
• Where civilians are injured or killed or civilian property is damaged, liability may be incurred.Footnote 135 State liability may be incurred for an internationally wrongful act (i.e., a breach of international humanitarian law), and criminal liability potentially attaches to the commander who authorized the use of the weapon, to the person who employed it, or to both.
As weapons systems become more complex, an understanding of reliability analysis will need to become part of the legal review process.
Reliability: test and evaluation
The purpose of test and evaluation is to provide an objective measurement of whether a system (or a component thereof) performs reliably to a specification. Reliability is the probability of correct functioning to a specified life (measured in time, cycles of operation, etcetera) at a given confidence level. Understanding that reliability is a key factor in weapon performance is intuitively simple but in fact has a level of complexity not always immediately grasped by those unfamiliar with reliability engineering.Footnote 136 Quantifying reliability is not a ‘yes’ or ‘no’ proposition,Footnote 137 nor can it be achieved by a single pass/fail test, but rather ‘is subject to statistical confidence bounds’.Footnote 138 For example, to obtain an appropriate level of statistical confidence that the failure rate for a given weapon population is acceptable, a minimum number of tests is required. But as resources are always finite, the question for responsible engineering practice is how to optimize resources and determine the minimum testing required to assure acceptable reliability. Suppose that undertaking the required number of tests would be too time-consuming or beyond the budget allocation. A naïve approach would simply reduce the number of tests to meet budget requirements and presume that the test will still give some useful information. But that may not be the case. Arguably, the compromised test can only provide misleading conclusions if the result does not achieve the required level of confidence. For certification purposes, either the required level of confidence is achieved or it is not. While the statistical confidence level may be set appropriately low for non-lethal weapon components where a failure has a low operational impact and minor to no safety implications (e.g., failure of a tracer bullet), the target recognition system on an autonomous weapon may require a very high statistical confidence to minimize lethal weapon deployment on civilians while still ensuring engagement of enemy targets. If a high statistical assurance is deemed necessary for civilian safety while budgetary constraints preclude the corresponding necessary development testing, then appropriate limits should be implemented regarding the approved applications for that weapon until field experience provides appropriate reliability confidence.
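One standard way to see why the number of tests matters is the zero-failure (‘success-run’) demonstration, in which a reliability figure is demonstrated at a given confidence level only if a minimum number of independent tests all pass. The sketch below (in Python) uses the standard binomial relationship; the reliability and confidence figures are illustrative only and are not drawn from any particular weapon programme.

import math

def tests_required(reliability: float, confidence: float) -> int:
    # Minimum number of independent pass/fail tests, all of which must pass,
    # to demonstrate the stated reliability at the stated confidence level
    # (from the binomial relation: confidence = 1 - reliability ** n).
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(tests_required(0.90, 0.80))   # 16 tests: a modest requirement (e.g., a tracer element)
print(tests_required(0.999, 0.95))  # 2995 tests: a demanding requirement (e.g., target recognition)

Reducing the number of tests to fit a budget does not yield a slightly weaker version of the same assurance; below the computed minimum, the required confidence level is simply not demonstrated, which is the point made above about compromised testing.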
How should this be applied in practice? The main steps of weapon acquisition are usefully outlined by McClelland, including the various testing stages during ‘demonstration’, ‘manufacture’, and ‘in-service’.Footnote 139 As McClelland notes, this is not a legal process but rather part of the acquisition process; but nonetheless these steps provide decision points that are ‘important stages for the input of formal legal advice’.Footnote 140 For testing to be meaningful, critical issues of performance must be translated into testable elements that can be objectively measured. While many smaller nations might be little more than purchasers of off-the-shelf weapons,Footnote 141 other governments are involved in envisaging, developing, and testing emerging weapons technology. While the degree of that involvement will vary, that is a choice for governments.Footnote 142 So, rather than being passive recipients of test results and other weapons data, one pro-active step that could be taken as part of the legal review process is for lawyers to input into the test and evaluation phases by identifying areas of legal concern that could then be translated into testable elements. This may be one way to at least partly address the security and compartmented access difficulties associated with high-technology weapons that were raised above. For example, it is appropriate to require increased confidence in reliability for military applications involving higher risk factors for civilians. This could be cross-referenced against existing weapons system reliability data as an input to the decision-making process when determining whether a new targeting procedure may be considered lawful.
To be effective, the legal requirements need to be expressed in terms that are ‘testable, quantifiable, measurable, and reasonable’.Footnote 143 Part of the challenge will be bridging the disconnect that often exists between the definitions of technical requirements and the desired operational performance. This disconnect can often be ‘traced to the terminology used to define the level of performance required, under what conditions and how it is [to be] measured’.Footnote 144 This is where lawyers working with systems engineers can influence the process so that the use of tests, demonstrations, and analysis can be adopted as valid methods to predict actual performance.
Once a system is in service, further testing may be conducted to gain additional insights into the capability and to ensure that the system is actually meeting the requirements of the user. This phase of test and evaluation is particularly critical as it is the only phase that truly relates to the ‘real world’ use of a system.Footnote 145 By having lawyers provide meaningful legal criteria against which a class of weapons could be judged, the ongoing legal compliance of that weapon could be factored into an already existing process. Another area for useful input is the evaluation and analysis of system and subsystem integration and interaction. When it comes to a system-of-systems, US military experience is that there is no ‘single program manager who “owns” the performance or the verification responsibility across the multiple constituent systems, and there is no widely used adjudication process to readily assign responsibility for [system-of-systems] capabilities, with the exception of command and control systems’.Footnote 146 Compare this with other industries: leading automotive companies have highly sophisticated design, production, testing, and quality-approval processes for every component that goes into a vehicle, with a resulting detailed assignment of responsibility by component, by system, and for the whole product (comprising multiple systems). Working with systems engineers, lawyers could help design layers of quality-control processes that identify the critical legal issues requiring both testing and an assignment of responsibility (for example, in the event of non-compliance with international humanitarian law) among the weapon manufacturer and the various military stakeholders.
Reliability and automatic target recognition
Weapons that are designed to explode but fail to do so when used operationally, and that are left on the field after the cessation of hostilities, are known as explosive remnants of war.Footnote 147 Indeed, munition reliability is even defined as ‘a measure of the probability of successful detonation’.Footnote 148 Because of the effects of unexploded ordnance on the civilian population, legal regulation already exists in this area.Footnote 149 Less well understood is that weapon reliability associated with automatic target recognition has another important aspect: it is not just about a weapon that does not explode, but also about one that selects the wrong target.
Here we are trying to determine whether it is reasonable to conclude from the analysis of reconnaissance data that the target possesses certain enemy properties or characteristics, and when it is reasonable to reach such a conclusion. Suppose the difference between the hypothesized enemy characteristic and the reconnaissance measurements is neither so large that we automatically reject the target, nor so small that we readily accept it. In such a case, a more sophisticated statistical analysis, such as hypothesis testing, may be required. Suppose that experience indicates that a 90 per cent match between reconnaissance data and existing information about an enemy target type has proven to be a reliable criterion for confirming an enemy target. If the data were a 100 per cent match or a 30 per cent match, we could probably come to an acceptable conclusion using common sense. Now suppose that the data match is 81 per cent: that may be considered relatively close to 90 per cent, but is it close enough to accept the object as a lawful target? Whether we accept or reject it as a lawful target, we cannot be absolutely certain of our decision; we have to deal with uncertainty. The higher we set our data-match acceptance criterion, the less likely it is that an automatic target recognition system will identify non-targets as lawful targets, but the more probable it is that the system will fail to identify lawful targets as such.Footnote 150
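The trade-off can be illustrated numerically. In the sketch below, the match-score distributions assumed for genuine enemy targets and for non-targets, and the thresholds tried, are entirely hypothetical; the point is only to show how moving the acceptance criterion shifts error from one kind to the other.

```python
from statistics import NormalDist

# Hypothetical assumptions: match scores for genuine enemy targets cluster
# around 90 per cent, scores for non-targets around 60 per cent, with spreads
# reflecting reconnaissance measurement uncertainty.
enemy      = NormalDist(mu=0.90, sigma=0.05)
non_target = NormalDist(mu=0.60, sigma=0.10)

for threshold in (0.75, 0.81, 0.85, 0.90):
    false_negative = enemy.cdf(threshold)           # lawful target rejected
    false_positive = 1 - non_target.cdf(threshold)  # non-target accepted as lawful
    print(f"threshold {threshold:.2f}: "
          f"miss lawful target {false_negative:.1%}, "
          f"engage non-target {false_positive:.1%}")
```

Raising the threshold drives the false-positive rate down but the false-negative rate up; where that balance should sit is a legal and operational judgement, not a purely statistical one.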
The desired level for whether or not a weapon explodes might be a ‘reliable functioning rate of 95 per cent’.Footnote 151 Applied to target recognition, the same rate would correspond to an autonomous weapon system that fires at an unlawful target, having misclassified it as ‘lawful’, one time in every twenty. Would this be considered acceptable performance for discriminating between lawful and protected targets? When a weapon system is looked at in this way, the better definition of reliability is whether the weapon system ‘performs its intended function’,Footnote 152 and as the ‘fuzing and guidance capabilities become more integrated, the reliability of target acquisition must be measured and assessed’.Footnote 153 It has been suggested that what is required is a ‘very high probability of correct target identification … and a very low probability of friendly or civilian targets being incorrectly identified as valid (i.e., enemy) targets’.Footnote 154 As there is an inherent trade-off between sensitivity and specificity, consideration also needs to be given to how a weapon will be employed. If a human provides go/no-go authorization based on an independent review, thereby providing an additional safeguard against false recognition, then a greater number of false positives generated by the automatic recognition system may be acceptable. However, if the weapon system is autonomous, combat effect (correct employment against identified enemy targets) must be more carefully balanced against risk to civilians. This is especially so because one of the purposes of automated and autonomous systems is to undertake high-volume observations that would overwhelm a human operator; where ‘observations [are] in the millions … even very-low-probability failures could result in regrettable fratricide incidents’.Footnote 155 Confidence in the ability of an autonomous system to work in the real world might be developed by deploying such systems in a semi-autonomous mode where a human operator has to give the final approval for weapons release.Footnote 156 Rigorous post-mission analysis of data would allow, over time, a statistically significant assessment of the reliability of the system in correctly identifying lawful targets.
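A back-of-the-envelope calculation, again with invented numbers, shows why a failure rate that sounds small still matters at the observation volumes quoted above, and how an independent human approval step changes the arithmetic (on the optimistic assumption that human and machine errors are independent).

```python
# Hypothetical figures for illustration only.
protected_objects   = 1_000_000  # protected (non-target) objects assessed by the system
false_positive_rate = 0.05       # the 1-in-20 misclassification rate discussed above
human_miss_rate     = 0.10       # chance an independent human reviewer also errs

fully_autonomous    = protected_objects * false_positive_rate
with_human_approval = fully_autonomous * human_miss_rate  # assumes independent errors

print(f"Expected wrongful engagements, fully autonomous: {fully_autonomous:,.0f}")
print(f"Expected wrongful engagements, human-approved:   {with_human_approval:,.0f}")
```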
A final point on testing:
Achieving these gains [capability increases, manpower efficiencies, and cost reductions available through far greater use of autonomous systems] will depend on development of entirely new methods for enabling ‘trust in autonomy’ through verification and validation (V&V) of the near-infinite state systems that result from high levels of adaptability and autonomy. In effect, the number of possible input states that such systems can be presented with is so large that not only is it impossible to test all of them directly, it is not even possible to test more than an insignificantly small fraction of them. Development of such systems is thus inherently unverifiable by today's methods, and as a result their operation in all but comparatively trivial applications is uncertifiable.
It is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use. Potential adversaries, however, may be willing to field systems with far higher levels of autonomy without any need for certifiable V&V, and could gain significant capability advantages over the Air Force by doing so. Countering this asymmetric advantage will require as-yet undeveloped methods for achieving certifiably reliable V&V.Footnote 157
A distinctly separate consideration from weapons testing is weapons research. Should weapons research (as opposed to development) be limited or constrained by legal issues? Generally, there is no legal reason (budgets aside) why research cannot take potential weapons as far as the bounds of science and engineering will allow, not least because laws change.Footnote 158 The time for imposing limits based on law is at the production and employment stages. Of course, some may, and do, argue differently on moral and ethical grounds.Footnote 159 That is where such arguments are best made and debated.
Conclusion
With the ever-increasing technological complexity of weapons and weapon systems, it is important that, among others, computer scientists, engineers, and lawyers engage with one another whenever a state conducts a review of weapons pursuant to Article 36 of the Protocol Additional to the Geneva Conventions of 12 August 1949 and relating to the Protection of Victims of International Armed Conflicts (API).Footnote 160 The reviews cannot be compartmentalized, with each discipline looking in isolation at its own technical area. Rather, those conducting legal reviews will require ‘a technical understanding of the reliability and accuracy of the weapon’,Footnote 161 as well as of how it will be operationally employed.Footnote 162 While that does not mean that lawyers, engineers, computer science experts, and operators each need to be multidisciplinary, it does mean that each must have enough understanding of the other fields to appreciate potential interactions, facilitate meaningful discussion, and understand their own decisions in the context of impacts on other areas of development.
Those who develop weapons need to be aware of the key international humanitarian law principles that apply to the employment of weapons. Lawyers providing the legal input into the review of weapons need to be particularly aware of how a weapon will be operationally employed and use this knowledge to help formulate meaningful operational guidelines in light of any technological issues identified with the weapon in terms of international humanitarian law. Furthermore, all parties require an understanding of how test and validation methods, including measures of reliability, need to be developed and interpreted, not just in the context of operational outcomes, but also in compliance with international humanitarian law.
As the details of a weapon's capability are often highly classified and compartmentalized, lawyers, engineers, and operators may need to work cooperatively and imaginatively to overcome security classification and compartmented access limitations. One approach might be to develop clearly expressed legal parameters that can be the subject of meaningful systems testing. Another may be to devise multi-parameter acceptance criterion equation sets. Such equation sets would allow for hypothesis testing that factors in reliability data, confidence levels, and risk, using inputs such as anticipated military advantage, weapon reliability, reconnaissance measurement uncertainty, and civilian risk factors.
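What such a multi-parameter acceptance criterion might look like can only be sketched here. The function below is a toy placeholder of our own devising, not a proposal for an actual engagement rule; it is intended merely to show how identification confidence, weapon reliability, and civilian risk could be combined into a single testable criterion.

```python
def engagement_acceptable(target_id_confidence: float,
                          weapon_reliability: float,
                          civilian_risk: float,
                          baseline_assurance: float = 0.90) -> bool:
    """Toy acceptance criterion: the joint probability that the target is
    correctly identified AND the weapon functions as intended must exceed
    an assurance level that rises with the assessed risk to civilians.
    All parameters and weightings are illustrative placeholders."""
    joint_assurance = target_id_confidence * weapon_reliability
    required = baseline_assurance + (1 - baseline_assurance) * civilian_risk
    return joint_assurance >= required

# e.g. 97% identification confidence, 99% weapon reliability, moderate civilian risk:
print(engagement_acceptable(0.97, 0.99, 0.30))  # True under these invented figures
```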