Distinctive Ethical Challenges of Cyberweapons
Neil C. Rowe
Department of Computer Science
U.S. Naval Postgraduate School
Monterey, California, United States
Abstract—Cyberweapons raise new problems in ethics. We first discuss the peculiarities of cyberweapons in the array of modern weapons. We then discuss five areas of ethical issues that are primarily unique to cyberweapons: attribution, product tampering, unreliability, damage repair, and collateral damage, with special attention to the latter. Although cyberweapons are generally nonlethal, they can hurt large numbers of civilians; we estimate that the collateral damage of the Stuxnet attacks on Iran in U.S. dollars was $2.9 million, similar in cost to that of a human death. Cyberattacks raise additional ethical issues in their need to impersonate civilians, what can be called cyber perfidy; in the difficulty of tracking down their damage; and in the problem of accurately measuring the damage of an often widely-distributed attack. We conclude that many of the ethical issues of cyberweapons are intractable, and international agreements should be sought to control them.
Index terms—ethics, laws of war, cyberwar, cyberattacks, cyberweapons, impersonation, attribution, perfidy, product tampering, damage, operating systems, software, botnets
This paper is to appear in the Research Handbook on Cyber Space and International Law, R. Buchan and N. Tsagorias, eds., Edward Elgar Publishing, UK, 2015.
Governments have been aggressively pursuing development of cyberweapons in recent years, most notably the United States (Gjelten, 2013), China, and Russia. Cyberweapons are a new kind of weapon (Goel, 2011). As with all new weapons, decisions should be made about how ethical it is to employ them in various situations. We will focus here on their use by militaries; so far there has been little threat from nongovernmental groups such as terrorist organizations, though some businesses have expressed interest in using them against their foreign competition. There is much debate about how well the current laws of warfare (ICRC, 2007) apply to cyberweapons (Belk and Noyes, 2012). First, however, we need to identify what the ethical issues are, since they guide lawmaking. Key ethical issues with cyberweapons arise in their implementation, targeting, and damage recovery. The discussion is evolving quickly since our earlier survey (Rowe, 2010).
To limit the size of this paper, we will focus on characteristics of cyberweapons that significantly differ from those of conventional weapons. Much also has been written about the ethics of conventional warfare (Walzer, 1977; Lee, 2012), and much of that can be applied to cyberweapons (Barrett, 2013). But the special characteristics of cyberweapons raise some new ethical problems, especially in the conduct of warfare (jus in bello) (Owens et al, 2009). We shall follow here a negative utilitarian approach to ethics in which we try to assess the negative cost to societies of various policies with cyberweapons and try to recommend policies that result in the least negative cost on the average. Negative utilitarianism is appropriate here because the benefits of cyberattacks are not especially diverse, being confined to disabling adversary computer systems, networks, and what they control.
The Stuxnet cyberattack on Iran (Gross, 2011) provides an example of a problematic cyberattack. Some have lauded this as an example of "clean" cyberwarfare since it appeared to have a narrow target and achieved some tactical success against it. The target appears to have been software for industrial-control machines for Iranian centrifuges used for processing uranium, and the attacks appeared to cause destruction of at least some of the centrifuges. But to achieve this success, many machines had to be infected with the malicious code ("malware") with the hope of eventually transferring their infection to the target machines. This mode of infection was like a virus, and the attack code spread to millions of machines, so the attack involved widespread (albeit small) product tampering. Because the propagation methods were unreliable, multiple redundant methods were used. Once the attack was recognized, the new propagation and attack methods were analyzed and reports were published. This enabled criminal attackers to exploit these new attacks for their own purposes (Kaplan, 2011). So there was direct collateral damage from the propagation of the attack (small damage to many machines adding up) as well as from the criminal applications of the methods.
This case suggests we should be skeptical of claims that cyberweapons are precise weapons that do not cause significant or lasting collateral damage. This should not be a surprise, because precision is often difficult for new weapons. Consider for instance air targeting with drones in Pakistan (IHRCR-SLS and GJC-NYUSL, 2012), for which "very few" of the attacks (though estimates vary) were on militants, and for which similar results could likely have been obtained without violence through judicial proceedings. The problem is that there are just too many possible errors in targeting when the attacker is not near the target and does not understand the culture. We suspect cyberattacks will be similarly errant, as they appear similarly low-risk to a remote perpetrator.
Cyberweapons are weapons that primarily use software to achieve their effects. They are usually in the form of modified programs to control computers and devices. Usually these modifications are for the purpose of sabotage, to prevent a victim from using their computer systems or certain aspects of those systems. A cyberweapon might prevent a missile defense system from functioning properly so that it will be destroyed by a missile. A cyberweapon could also delete key data or programs from a computer system so that it cannot do its tasks, or it could block network access so a system cannot communicate with others. Usually cyberweapons do not actively create false data, as that is more difficult to do. So sabotage is usually the goal.
Use of cyberweapons is a special case of a cyberattack. Cyberattacks are any methods that attempt to subvert computer software for some gain to the perpetrator. Most cyberattacks today are by criminals and support financial fraud. But they are also important today to intelligence agencies as a way to automate their collection of difficult-to-obtain or secret information. The United States has been subjected to extensive intelligence gathering of this kind recently. But these are not cyberweapons because they do not try to disable systems. Thus they fall within the range of intelligence gathering in the laws of war, and do not justify counterattack in response.
Military commanders want control of their weapons, so many cyberweapons are designed to be controlled remotely over the Internet using the techniques of "botnets", where the attacker's machine sends orders to victim machines to direct their espionage or sabotage (Elisan, 2012). (Gjelten, 2013) reports the U.S. and China are eagerly pursuing this approach. If a target is not attached to the Internet, timing mechanisms or specifications of trigger events can be used to control it (Dipert, 2013).
Software techniques proposed (and occasionally used) for cyberweapons are similar to those used for criminal cyberattacks (Malin, Casey, and Aquilina, 2012). These usually involve impersonation to gain unauthorized access using flaws in the software of computers, followed by installing of malicious software on those computers. However, the ultimate goals of cybercriminals are different. They mostly want to steal rather than sabotage, and high priorities for them are stealing bank-card numbers and other personal information and sending spam (unsolicited email), good ways to earn quick money. Criminals are becoming increasingly professional in probing well-known software for flaws and developing attack methods, and their discoveries are just what cyberweapons developers need.
Some have argued that cyberweapons are "nonlethal" and thus raise few ethical questions. Mortality is not the only ethics metric in warfare, as people can be unethically harmed without being killed, an issue we will discuss further here. However, allegedly "nonlethal" weapons can definitely kill people, as for instance crowd-control foam from which people can suffocate, since nearly any method of human control can be lethal in unexpected circumstances. Cyberweapons can kill directly, as when they interfere with operations at hospitals or in industry (Iran alleges that Stuxnet caused an explosion of a centrifuge that killed a worker). Cyberweapons, like other weapons, can also kill indirectly in many ways by damaging the infrastructure of a society. For instance, it has been estimated that the damage to Iraqi society by the U.S. invasion resulted in 654,000 additional deaths in 2003-2006, of which 92% were due to violence (Burnham et al, 2006), plus another estimated 300,000 deaths through 2011 due to damage to the Iraqi infrastructure (Hagopian et al, 2013). We need to evaluate cyberweapons with most of the same standards as other weapons to see how harmful they are. Indeed, they have a number of analogies to biological weapons in their dissemination methods and their difficulty of control (Rowe, 2010), and biological weapons are prohibited by several treaties.
We can enumerate several key differences between cyberweapons and most traditional weapons.
D1. Cyberweapons do not require physical proximity of the attacker to the victim, since attacks can be accomplished over the Internet, or else planted well in advance of the attack (as "Trojan horses") and triggered when the attacker is long gone. Thus they are analogous to long-range guided missiles, and they raise some of the same ethical concerns about the ability of the attacker to remotely confirm their target before attacking.
D2. Cyberweapons are easy to conceal, even easier than biological weapons, since they are just abstract patterns of bits. They can also operate very quickly and then destroy all evidence of their presence. This means disarmament of cyberweapons is very difficult, though some methods can help (Rowe et al, 2014).
D3. Cyberattacks can be very difficult to attribute to the attacking country. That is because if attacks are launched across the Internet, it is difficult to trace them backward through the enormous traffic of the Internet; and if attacks are launched from Trojan horses already inside software, it is difficult to tell from examining them who is responsible for putting them there. This means that a pure cyberwar not accompanied by traditional attacks is virtually impossible to justify according to the standards of proof required by the laws of war.
D4. Cyberweapons require flaws in their victim or they do not work at all. That is because computer systems and networks normally have many defenses against sabotage. The situation is quite different from a bullet or a bomb, which only exploits the binding force of the atoms in its target, a force that is the same for a given material no matter what the setting.
D5. Cyberweapons have considerably more variety than conventional munitions. Guns and bombs have a single purpose of violating the physical integrity of objects and beings and rendering them inoperative by means of projectiles and explosions. Cyberweapons can sabotage operations of computer systems in many different ways, some quite subtle.
D6. Cyberweapons technology is very similar to cyberespionage technology. That is because to do both, you must gain and maintain administrator access to a victim computer system or device. Sabotage of computer systems is usually easy once you have done that, so escalation from cyberespionage to cyberattack is tempting for aggressors. The major countries of the world are engaging in an increasing amount of cyberespionage (Segal, 2013).
D7. Cyberweapons tend to have unexpected consequences. That is because computer systems depend on billions of component instructions working consistently every time they are used, and just one error can upset the whole chain of instructions unless unusual precautions are taken such as adding redundant functionality. Similarly, when computers and devices interact over a network, failure of just one may cause failure of others since there is so much interdependence of systems today.
D8. Cyberweapons have no legitimate uses, unlike guns and explosives. (A possible exception is "red teaming", or deliberately attacking a system to find its flaws, but this is too blunt to be a very helpful tool for diagnosis.) Thus finding cyberweapons is prima facie evidence of offensive intent.
These features have many ethical implications. In this paper we will focus on those which we consider the most important: justifying a cyberattack in cyberwar, the product tampering required for cyberweapons, the unreliability of cyberweapons, the difficulty of recovering from the damage of cyberweapons, and the ease of collateral damage with cyberweapons. These relate to the classic ethical issues in the conduct of war of having adequate justification for an attack, proportionality of the attack to the provocation, discrimination of combatants from noncombatants, and ability to assign responsibility for conduct (Moseley, 2014).
Many of the ethical principles about starting traditional warfare (jus ad bellum) apply to cyberwarfare (Barrett, 2013). A nation must be attacked first, suffer serious harm, and have no other feasible alternatives before it can consider cyberwarfare. But there is a special problem of the greater difficulty of attribution of a cyberattack compared to conventional attacks. Traditional land, water, and air warfare involve large military entities that have a clear direction of origin which makes it clear who is attacking whom. This is less true of cyberwarfare, following D1, D2, and D3 above. The malware used will likely have no obvious signs of origin, and its implantation on victim computer systems may not be recorded; and if the Internet was used to deliver it, its backtracing will be ambiguous. There are some ways to prove attribution if the attacking country's computer systems can be accessed, but that can be very difficult. To be sure, a country may have suspicions about who attacked it, but the world community needs legal-quality evidence to endorse a response following the laws of war. Thus it is very difficult to ethically justify counterattack in response to a pure cyberattack when applying the ethical principle of responsibility.
However, it certainly makes sense that a traditional military attack could justify a cyber-counterattack. Indeed, cyberattack methods could be an important force multiplier for military commanders, allowing them to achieve the same results with fewer forces and with possibly less harm to both sides (Strawser and Denning, 2014). It also makes sense to combine the cyber-counterattacks with traditional counterattacks since there are important targets not in cyberspace. However, achieving less harm with cyberattacks than traditional attacks is not automatic, and each case needs to be evaluated carefully according to the criteria we discuss below.
Another key ethical problem with cyberweapons that follows from D4, D6, and D8 is that setting them up usually needs to subvert existing software and data to achieve effects, a form of product tampering (Rowe, 2013). This is a violation of the ethical principle of discrimination of noncombatants from combatants since nearly all software is neutral and does not serve military purposes any more than civilian purposes; tampering with the tools like software used by noncombatants is ethically similar to hurting noncombatants directly. Nearly all countries have laws against product tampering as it can cause widespread harm including death. In the United States for instance, the Federal Anti-Tampering Law (USC Title 18 Chapter 65, "Malicious Mischief") defines several categories including "whoever, with intent to cause serious injury to the business of any persons, taints any consumer product or renders materially false or misleading the labeling of, or container for, a consumer product". This has a penalty of up to three years in prison; tampering that causes bodily harm receives up to twenty years. These laws apply to software under the classification of "device" where malicious modification renders misleading the labeling of the software. Users of software are generally required to sign end-user license agreements that prohibit tampering with it.
The reason that product tampering is virtually essential to cyberweapons is that computer systems and digital devices are generally very reliable and strongly resistant to attempts to subvert them for unanticipated purposes. Malicious instructions sent directly to a computer system will usually be ignored. Flooding it with too much data rarely works because most computer systems limit the rate at which they accept information or requests. So an attacker must almost always impersonate a legitimate user of a computer or digital device to perpetrate a cyberattack on it. Usually the goal is to subvert and impersonate the operating system, the software that controls the entire computer. Particular targets are the security kernel of the operating system, the file management system, the update manager, the networking software, and the electronic mail system. All these are civilian artifacts almost entirely implemented by civilians, and there are very few alternatives due to the near-monopolies by a few companies. Making them military targets is similar to poisoning a well that a community must use.
Unfortunately, that is perfidy if done during war. Perfidy is impersonation of civilians by military forces, and is prohibited by the laws of war because permitting it would result in far higher civilian casualties. Guns, bombs, and missiles do not need to impersonate people to work, so cyberweapons are fundamentally different. Thus virtually any cyberweapon installation or use appears to be unethical on the perfidy criterion.
There are alternatives to perfidy in cyberweapons. Denial-of-service attacks involving flooding of a victim with information may not require perfidy. Perfidy is also unnecessary for defense against attacks. Since botnets are the preferred method of controlling cyberweapons, all the defender need do is interfere with the communications of the botnet to prevent an attack from being launched; they do not need to get onto the attacker’s system to stop the attack. Interference can be accomplished by packet filtering. Even if the attacker encrypts their communications, the sources and destinations of their packets cannot be encrypted if they are to reach their targets, and the sizes and signatures of their packets provide clues to recognize them.
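The packet-filtering defense described above can be illustrated with a toy filter. The address and port below are hypothetical illustrations (drawn from documentation-reserved ranges), not real botnet indicators:

```python
# Toy packet filter for blocking suspected botnet command traffic.
# The address and port below are hypothetical illustrations.
BLOCKED_SOURCES = {"203.0.113.7"}   # suspected command-and-control host
BLOCKED_PORTS = {6667}              # a port historically used for botnet control (IRC)

def allow_packet(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Return False for traffic from known C2 hosts or to suspicious control ports."""
    if src_ip in BLOCKED_SOURCES:
        return False
    if dst_port in BLOCKED_PORTS:
        return False
    return True
```

A real filter would run in a firewall or router, and would also weigh the packet sizes and signatures mentioned in the text as clues even when payloads are encrypted.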
One objection to this notion of cyber perfidy is that cyberweapons do not involve as much threat to human life as perfidy with people would. We disputed this nonlethality claim above. But the Geneva Conventions do not require a death threat to civilians in their definition of perfidy; more general harm such as threats of injury and capture are also included. Just-war theory does permit "dual-use" weapons that harm civilians as well as military targets when the military benefit is high, but attacking almost-exclusively civilian targets is considered unethical. Cyberwarfare is questionable ethically on that basis.
(Strawser and Denning, 2014) argue that cyberweapons should be ethically superior to traditional weapons because of their relative nonlethality, and thus their perfidy could be excused. Certainly, if one could achieve the effect of a kinetic attack with a cyberweapon, it could be preferable. However, the unreliability of cyberweapons discussed in the next section means that a cyberweapon is less likely to be effective, and that trades off with its nonlethality. Cyberweapons also tend to cause plenty of collateral damage, as we discuss later. Human life has a finite value, usually assessed by economists as a few million dollars, and cyberattacks can cause millions of dollars in collateral damage. So a nonlethal cyberweapon may be worse than a weapon that kills.
Another key problem with cyberweapons that follows from D1, D4, and D7 is that they must exploit flaws in human artifacts, generally flaws in software. That means that some well-defended targets are unattackable no matter what their strategic importance, and other targets may or may not be attackable based on chance: the chance that the flaw has been found, the chance that a fix for the flaw has been found, the chance that the fix has been disseminated, and the chance that the target has installed the fix. Control of the cyberweapon may also be unreliable, since it may depend on Internet connections that are untested until conflict or deliberately broken once a conflict begins. This means that cyberweapons are generally unreliable. They can be more reliable if the victim standardizes their software and hardware too much, as the U.S. military tends to do, but even then many surprise responses can foil an attack. Routine maintenance on software can install new versions, reorganization of a network can give new configurations, or the target may engage in deliberate deception. Reliability of attacks can also be improved by using novel attack methods for which fixes are unlikely to be known, as with Stuxnet, but those require expensive research to find, and may be within the capabilities of only a few countries.
Unreliability risks both the ethical principles of discrimination of noncombatants and proportionality of attacks to the provocation. Cyberweapons will also be unreliable due to errors in targeting, as was mentioned earlier in analogy to air targeting with bombs and missiles, raising the issue of discrimination of noncombatants. Errors occur with air targeting when intelligence is outdated, as with the U.S. bombing of the Chinese embassy in Belgrade in 1999, or when people or buildings are misidentified, as with bombing of a Red Cross warehouse in Afghanistan in 2001 (Bellamy, 2006). Cyberspace provides additional opportunities for mistakes in targeting because the range of operations is large, camouflage is easy, and large changes in the target can be made quickly. The Red Cross warehouse was bombed even though it had a large red cross on its roof because the bombing pilots flew too high to see it. There will likely be even more problems with identifying the context of a target in cyberspace and metaphorically seeing a "red cross".
The unreliability of cyberweapons encourages commanders to use massive and multifaceted cyberattacks to ensure a desired effect. For instance, Stuxnet used at least five different novel propagation methods. Overkill tends to result in unnecessary damage when the attacker underestimates the chances of success, and unnecessary damage requires more work to repair. Underestimation of effects will be frequent with cyberweapons because the successes of different weapons are not independent events but positively correlated in unpredictable ways, since there can be hidden common underlying vulnerabilities that multiple weapons exploit. Thus overkill can easily happen with cyberweapons and can violate the principle of proportionality in the laws of war.
Another reliability problem is that automated or semi-automated cyberweapons may not be stoppable after termination of hostilities, a violation of the laws of war regarding armistices (which derive from the ethical principle of responsibility for warfare). Cyberweapons not controlled from the Internet, as by a timer, would be examples. But even for botnet-controlled cyberattacks, once a victim recognizes they are under attack, they would likely terminate running processes and close network ports; such activity could very well break the Internet connection being used to control the attack and prevent it from being stopped. This suggests that an ethical cyberattacker must either use self-limiting attacks, such as attacks that destroy a set of data and then stop, or use mechanisms of control that are highly resistant to victim manipulation such as unsuspected timing channels for conveying commands. (Timing channels convey information in the time delay between external events such as the appearance of a packet from a particular address.)
Unreliability also has strategic implications. There is not much deterrence value in advertising the possession of a cache of cyberweapons, since many of them will not work, and advertising enables potential victims to better defend themselves since they can start hardening their defenses and filtering out traffic from the threatening country. This means a country would get the same deterrence effect whether or not it has cyberweapons – so there is no point getting them unless intending to use them soon. But that means cyberweapons are only good for unprovoked attacks, a violation of the laws of war. Another issue is that unreliable weapons are poor choices in a conflict since there are many reliable weapons that better ensure proportionate and discriminatory responses. It thus could be unethical for a commander to use a cyberweapon.
The United States thinks it has an answer to the unreliability of cyberweapons: It will launch massive cyberweapons campaigns. This is consistent with U.S. policy to stage massive traditional attacks, as in Iraq in 2003 to create "shock and awe" (alias terror). It is then hoping that the victim country cannot respond with counter-cyberattacks on a similar scale. But the U.S. military does not have the resources it once did due to the declining U.S. economy in the face of foreign competition. It can no longer rely on enormous shows of force to advance its national interests. Of course, software vendors promise the U.S. military all kinds of capabilities. Cyberweapons are a very appealing product for software vendors to sell to the U.S. government – software that is rarely used, can be made to look good in artificial test conditions, and will likely fail in real conflict long after the vendor has been paid.
A major ethical problem with cyberweapons is repairing their damage after the end of hostilities (jus post bellum), something that can be a serious challenge due to D2, D3, D5, and D7. This risks the ethical principle of proportionality of attacks in a different way. It can be hard to find the damage of a cyberweapon since it is not visible to the naked eye. So indiscriminate cyberweapons could be even worse than land mines, which are only concealed before the attack. Even when damage is clear, its cause may be difficult to track down. Software and systems are interconnected, and what may appear to be a flaw in one may actually be a flaw in another. Thus using some kinds of cyberweapons may be sentencing the victim to years of damage repair. This is particularly true for less-developed victim countries without much technical infrastructure.
The usual approach to damage repair from criminal cyberattacks is to try to restore the damaged systems from backup data. This can be done for cyberwar attacks too, but there are possible obstacles that are even more likely with cyberwar attacks. Backup restoration will not work if the attack code remains in the hardware, firmware, or boot code (such as BIOS). Restoring will also generally take a long time since usually an entire operating system should be restored to remove all possible sources of reinfection. Furthermore, attacks can hide in data as well, and the data may need to be restored from backup too. Not all systems make adequate backups, since taking backups is a tedious and little-rewarded task; backups may be incomplete, faulty, or entirely missing, in which case most of the damage of a cyberweapon could be permanent. In addition, cyberattacks can also cause permanent damage in the form of opportunities missed while the system was under attack.
Another problem is that new damage from a cyberweapon may continue long after hostilities have ceased if it used automatic propagation. The Stuxnet attacks continue to circulate today; though their volume has been reduced by antivirus software, not everyone runs antivirus software, nor is it always configured correctly. So even if a system is restored from backup to a clean copy, it may be reinfected if not all the propagation methods have been discovered and countermeasures applied. Since cyberweapons will tend to use novel methods of attack, knowing all the right countermeasures may be a challenge.
For these reasons an ethical cyberattacker should try to facilitate damage repair after the cessation of hostilities. Several things can be done. First, an ethical cyberattacker should acknowledge their attack after hostilities because the attacker knows what they attacked and how, and can provide the best guidance as to what to repair. This is a form of the ethical principle of responsibility for conduct in warfare. Acknowledgement can be done by announcement, but it is better to prove it by attaching cryptographic signatures to the attack so false claims of responsibility can be prevented. In fact, it may be desirable for a country to take responsibility for an attack at the time of the attack to obtain political leverage. The overly close connection of cyberespionage to cyberwarfare makes this difficult for intelligence agencies to accept, since espionage tries to avoid attribution, but warfare is fundamentally different from espionage and subject to considerably more international agreements. Second, at the cessation of hostilities an ethical cyberattacker should provide information to the victim about exactly what was attacked so the damage can be found, perhaps in the form of lists of sites and software on those sites. The issue is similar to that of keeping records of where land mines have been emplaced. Third, an ethical cyberattacker should use attack methods that are easy to repair. That means avoiding autonomous propagation methods as much as possible, and employing attack methods that are easily reversible (Rowe et al, 2014). An example is encryption of key data by an attacker using a key that only the attacker knows; then the attacker can decrypt the data at the cessation of hostilities and exactly restore what was there before, since encryption preserves information.
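The reversibility property of encryption-based attacks can be illustrated with a toy stream cipher: XORing data with a keystream derived from a secret key, so that applying the same transformation twice restores the original bytes exactly. This sketch illustrates the principle only and is not a cryptosystem an attacker would actually field:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key by chained SHA-256 hashing."""
    out = bytearray()
    block = key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:length])

def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR data with the key-derived stream; the operation is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Since only the holder of the key can undo the transformation, the attacker can deny the victim its data during hostilities and then restore it exactly afterward, because encryption preserves information.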
Perhaps the most serious ethical problem with cyberweapons is their potential for collateral damage to civilians due to D1, D5, D6, and D7. This can violate the ethical principle of discrimination of noncombatants.
Cyberattack methods have been employed so extensively against civilians by criminals that an important question is how well cyberattacks in cyberwar can be focused on exclusively military targets. (Kaurin, 2007) points out that the definition of a civilian is increasingly unclear in modern warfare, and (Smith, 2002) notes increasing rates of collateral damage in modern warfare. The U.S. has claimed for many of its drone strikes in Pakistan that any military-age males killed must be combatants, so we will undoubtedly hear a similar argument from any aggressor about the victims of their cyberattacks during cyberwarfare. But we can generally identify civilians as people not contributing substantially to warfare by their activities or products.
Cyberattacks involve manipulation of programs and data, and there are many ways to do this. Ideally, a cyberattack should modify a single target program or file on a computer system, something critical to warfighting. We call these type-0 cyberattacks. This is consistent with the standard goal of warfare of interfering with a victim's ability to wage war, a more ethical goal, by the notion of proportionality, than simply damaging a victim. Consider the software for a missile-defense system, as in the Israeli quasi-cyberattack on Syria in 2007 (Clarke and Knake, 2010); an intelligence operative could put a copy of software for missile defense on a victim's system that, unlike a correct copy, will malfunction in critical situations. Unfortunately, it is very difficult to do this, as software critical to a victim's ability to wage war will be highly protected. Certainly a potential victim should calculate hash values such as the 160-bit SHA-1 values on their critical software and periodically check that the values on the software they run are the same as the values on the software they installed. So type-0 cyberattacks are usually infeasible.
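The hash check recommended here can be sketched in a few lines of Python (a minimal illustration; a real deployment would also protect the stored digests themselves from tampering):

```python
import hashlib

def sha1_of_file(path: str) -> str:
    """Compute the 160-bit SHA-1 digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, digest_at_install: str) -> bool:
    """True if the file still matches the digest recorded when it was installed."""
    return sha1_of_file(path) == digest_at_install
```

A defender would record the digest of each critical program at installation time and periodically rerun `verify`; any type-0 modification of the program changes its digest and is detected.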
A cyberattack could modify data on which the target depends, particularly data whose contents vary over users and installations. If this reaches civilians, we can call this type-1 collateral damage. An example would be what is known as a "configuration file" that defines the operating parameters for some software. But many modifications of configuration files will not disable software, and those that do will usually reveal their presence by causing failure in more than the targeted conditions.
A more effective kind of cyberattack is to induce a user to download a copy of Trojaned (modified) software or data from some repository such as an update site. If civilians also download it, we call this type-2 collateral damage. If downloading from the repository is concealed from the victim, as when files from a USB thumb drive are automatically transferred to a computer into which the drive is inserted, we call this type-3 collateral damage. This appears to have been a key mechanism in propagating the Stuxnet attack (USCCU, 2009). Collateral damage of type-3 attacks can be worse than that of type-2 attacks because tracing the source of the malware is harder, making the damage more difficult to diagnose and repair.
Finally, attacks can spread to a victim from viruses and worms that commandeer parts of legitimate programs to enable their spreading. We call these type-4 collateral damage, since there are often inadequate controls within the virus or worm on how the attack spreads and a good number of targets are hit randomly. Type-4 attacks are too blunt an instrument to be appropriate for most cyberwarfare.
The traditional meaning of collateral damage is direct damage to a civilian entity by errors in targeting. Judged on this criterion alone, Stuxnet did well, as it looked for certain specialized process-control operating systems before attempting sabotage, though it did try to propagate itself to other network sites using type-4 methods. (Raymond et al, 2013) argues that a cyberweapon can carry this inspection of its environment further, checking the address of the site it is on, the kind of software the site is running, the kinds of network protocols it is running, the kinds of files on the system, the kinds of encoding used for transmissions, and so on, to be even more sure of what system it is on before doing damage.
This sounds like an encouraging way to limit the collateral damage of cyberweapons, but the promise is illusory. The information that a cyberweapon can extract about its environment will often be technical information that does not reveal the true nature of a potential target. Just knowing the Internet (IP) address cannot confirm it is a military target, since even if it is in a block of addresses owned by the military, it could be a military hospital, a public-relations office, or a housing complex. Rarely is an explicit statement of purpose attached to a computer system and visible from inside it. And any potential target should take pains to ensure its range of critical military addresses is kept secret, so it will be hard to get useful address information. Furthermore, a potential target may also distribute false address information, may deliberately create decoys to encourage attacks on the wrong targets, or may camouflage the right targets, as these are standard military tactics. Or a criminal may spoof a military site to gain leverage. Cyber-sabotage could be preceded by a long period of espionage to thoroughly confirm the identity of a machine before attacking it, but the more espionage done, the more likely it will be discovered, and the less likely an attack will be effective.
Just knowing what software is on a system is similarly ambiguous. How can it be proven that a particular software product is used by military organizations? This would require extensive intelligence to confirm, and the whole appeal of cyberweapons is in not having to visit the target. Military organizations generally use a considerable amount of civilian software because it is cheaper than the alternatives. Even if there are distinctive features of the cyber-environment of a military organization, identically-named or similarly-named software or data entities could be confused. This is analogous to the mistargeting of drone strikes when civilians have names similar to those of insurgents. A site may also be judged a military target on the grounds of its behavior or its associations with known military sites, neither of which sufficiently justifies the use of deadly force under the principle of discrimination.
We should note the unfortunate trend in modern warfare to reclassify civilian targets as military. (Smith, 2002) points out that electrical grids are increasingly considered military objectives. This created a good deal of collateral damage to civilians in Iraq in 2003-2004, since many utilities such as communications, water systems, and refrigeration depended on the electrical grid; in fact, civilians depend more on the electrical grid than militaries do, since militaries have better backups like generators and batteries. Disabling the grid in Iraq did quickly achieve military results, but it took much time to restore infrastructure and there was much unnecessary civilian suffering. We should expect to see even more civilian targeting with cyberweapons since cyberweapons appear more benign.
Denial-of-service attacks on Internet sites involve damage to availability of a network service for a period of time but where no product tampering need be done. The cyberattacks on Georgia in 2008 were predominantly denial-of-service attacks against civilians (USCCU, 2009). Collateral damage can still occur with them because denial of service is a rather crude weapon, and it will slow down not just the direct targets but also any sites trying to connect to the targets, and some of these could be civilian. And again, it may be difficult to correctly identify a civilian site.
Let us now consider the costs of collateral damage of cyberweapons so we can apply utilitarian ethics. One category is the direct cost of repair. Recovering from a cyberattack requires removal of malware-infected files and termination of running malicious processes, but this is often not simple, as we discussed above in the section on repair, and the cost can vary considerably with the system. Restoring an operating system requires some time, though not all of it human-supervised, and if there are backups, some tedious labor will be needed to find and restore them. Typically the restoration also destroys the data on the system, which will cost additional effort to restore if it can be restored at all. The people doing this work need a certain level of technical knowledge, so the overall cost should be several thousand U.S. dollars per system. However, if there are no backups for some or all of the files, the cost of permanent data loss can be considerably more. A lack of backups, in fact, could be a good reason to target a system in cyberwar.
Alternatively, the responder to the attack can try to "patch" (repair) just the programs and data having unauthorized modifications. Sites such as cert.org provide guidance on how to do such patches for specific types of attacks with specific indicators. But these are unlikely to be useful against cyberwar attacks as the attacks are likely to be novel and resistant to known patch methods. A country that follows a patching strategy will be subjected to years of repair activities.
Cyberspace is highly interconnected, and an attack that is designed to propagate itself could easily spread from a military system to civilian systems; (Raymond et al, 2013) does not consider this kind of damage. Stuxnet needed to propagate onto many civilian machines to have a chance of reaching its few targets since it could not access them directly. Even when the propagation does not affect the functionality of the target system, it can hurt it by burdening the operating system with added processes and files. Those additions may be flagged by anomaly-based malware detection mechanisms, and may require time-consuming administrator inspection well out of proportion to their size. An ethical cyberattacker should design attacks to remove themselves from stepping-stone sites after their goals have been accomplished, much in the way that the police should not linger on private property in pursuit of a criminal transiting the property; but this was not done with Stuxnet. Propagation of damage to civilian systems is made easier by the fact that militaries use much of the same software as civilians, such as the Windows operating system, Internet Explorer and Mozilla browsers, Adobe Acrobat, and Google. These provide a steady stream of discovered vulnerabilities that attackers can exploit. Thus it is not difficult for an attack disabling a military site to disable a civilian site that it reaches by mistake.
If we assume that the average network node has K connections, and that the chance of propagating the attack to a new node is p, then from a single attack we should expect pK infections among immediate neighbors, and (pK)^M infections among neighbors at M degrees of separation. If pK < 1, the total number of infections is the sum of the geometric series pK + (pK)^2 + (pK)^3 + ..., which is pK/(1-pK), so we can multiply this by the cost of a single cleanup to get the overall cost of an attack. But if pK > 1 the propagation is unbounded, and that was apparently true of Stuxnet. If malware can be analyzed carefully, its propagation methods may be identified, and its damage might be more easily traced; but this is very time-consuming and may not succeed. Because tracing is so difficult, if an attack is detected on one computer of a local-area network, the rest of the network usually receives repair or restoration too. This multiplies the cost of an individual system repair by the size of the network, which can considerably increase costs.
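The geometric-series model above can be expressed as a short calculation. This is an illustrative sketch of the model only, with arbitrary example parameters (p, K, and the per-system cleanup cost are not estimates from any real attack):

```python
def expected_infections(p: float, K: int) -> float:
    """Expected total infections beyond the initial node, where each node
    has K neighbors and each is infected with probability p.  The per-hop
    expectations (p*K)**M form a geometric series with ratio p*K."""
    r = p * K
    if r >= 1:
        # Geometric series diverges: propagation is unbounded,
        # as the text notes was apparently true of Stuxnet.
        raise ValueError("p*K >= 1: propagation is unbounded")
    return r / (1 - r)  # pK + (pK)^2 + ... = pK / (1 - pK)

def expected_cleanup_cost(p: float, K: int, cost_per_system: float) -> float:
    """Overall attack cost: expected infections times per-system repair cost."""
    return expected_infections(p, K) * cost_per_system
```

For example, with hypothetical values p = 0.02 and K = 20 (so pK = 0.4), the model predicts 0.4/0.6 ≈ 0.67 secondary infections per seed; at a few thousand dollars of repair per system, even this sub-critical case adds roughly $2,000 of collateral cost per initial infection.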
Countermeasures are eventually found for attacks, and the propagation rate should decrease over time. However, many computer systems and devices never get any countermeasures, due to lack of funds, carelessness, or a previous cyberattack, and so there is always a residuum of machines willing and able to spread even an old cyberattack. This means that propagation of attacks can continue a long time.
If the attack came in over the Internet, it can be politically important to identify the country that is responsible for the attack. Tracing across the Internet can be difficult because routing information is generally short-lived and attackers will go to great lengths to conceal their origins by using proxy servers, spoofed servers, and long chains of control. But the increasing international cooperation on sharing routing information can be helpful, and intelligence data comparing attack code to that on likely sources of the attacks can be helpful in clinching a case against a country. This kind of investigation requires significant effort, however.
Another important source of collateral damage not considered in (Raymond et al, 2013) is the effort to understand the cyberattack. This includes analyzing the attack, finding methods to repair its damage, finding ways to strengthen defenses against it, extracting signatures of the attack for use by anti-malware software, and publishing and discussing the findings. It may also include attribution of the source of the attack to enable better countermeasures. All these steps are considered standard practice for novel attack methods (TechRepublic, 2005), and cyberweapons will almost invariably use novel methods to increase their chances of success. As with Stuxnet, methods of cyberattack against military systems will often be usable against civilian systems, so some degree of reporting and analysis of military cyberattacks is ethically imperative. And it needs to be done quickly, since once an attack has occurred its techniques spread rapidly and can be used in copycat attacks.
Cyberweapons will likely incur higher costs for analysis than criminal cyberattacks. That is because they have the resources of nation-states for their development. Thus they will be more likely to be novel, will be better tested, will be more likely obfuscated to make analysis difficult, and will require more work to find countermeasures. Hence even if cyberattacks on military targets do not propagate much to civilian systems, they may incur significant cost to civilians.
Will some military victims try to conceal attacks, thereby depriving the world community of useful intelligence about the attack methods? It is unlikely because most victims will be weaker than their attackers (else they would be less likely to be attacked) and often want all the international help they can get. Even if the attack is on a secret facility whose details the victim does not want to reveal, often the attacked software is civilian, and can be fully revealed without violating secrecy. In addition, an attack on a secret facility may still be detected elsewhere by its collateral damage, as with Stuxnet.
Much work on analyzing attacks, determining countermeasures, extracting signatures, and disseminating reports about them is done by civilian infrastructure today, since civilians are bearing the brunt of cyberattacks. This kind of work needs to be done by highly trained professionals, so it can be expensive. Initial clues are provided by observations of odd behavior reported on bulletin boards and blogs such as Bugtraq (at www.securityfocus.com) and those of software vendors for their user communities. Discussion on those sites tries to narrow down the circumstances of the behavior and attribute it to particular features of the software. Professionals at Mitre study these discussions and decide whether the behavior represents a new vulnerability (which is usually due to a software bug); if so, they give it a CVE number at cve.mitre.org, which becomes the vulnerability's de facto label in subsequent discussions. Security professionals then try to plan remediation efforts; usually the responsibility is that of the software vendor, but in the case of serious problems, governments may also be involved. Remediation usually requires testing by the vendor to find a complete set of fixes. The fixes usually involve changing the original vulnerable software or its configuration parameters. If the vulnerability is serious enough, it is posted along with details, including signatures for identifying it and proposed remediation measures, at sites such as www.kb.cert.org (the US-CERT Vulnerability Notes Database) and web.nvd.nist.org, which represent de facto official recognition.
Attempting to attribute an attack to a particular country or group entails additional costs. If malware came in over the Internet, log files for the intervening computers can be inspected for evidence. There will need to be some distinctive features of the malware for this to work, and malware tries hard to conceal itself, so the success rate in tracing it will be low. Cyberwar malware will almost certainly be encrypted to eliminate the possibility of recognizing signatures in its contents, though matching its occurrences can still be done to trace its path across the Internet. If malware was installed surreptitiously without coming through the Internet, tracing possible installation times can be done, and analysis of the text of the malware may suggest distinctive national or personal code-writing styles. But all this takes a good deal of time. A thousand hours of the time of trained personnel would be a reasonable guess, for a total cost of US $100,000.
It thus costs at least $100,000 for the world to analyze and find countermeasures for a new attack method. Stuxnet was sufficiently novel that considerably more effort was expended in its analysis, so $1,000,000 is a better figure for it. If Stuxnet were primarily an Israeli operation as it appears to be, Israel should be paying this cost to the professionals around the world for its perpetration of the attacks, since no state of war exists between Israel and the countries of nearly all those professionals; in fact, much initial analysis of Stuxnet was done in Russia, so Israel owes Russia a significant sum. This sort of widespread collateral damage has no counterpart with bombs and missiles.
Analysis of a vulnerability or attack is essential to finding remediation. Posting information about the vulnerability or attack is essential to enabling people to defend against it, and the more open the information exchange, the more quickly thorough remediation measures will be found. But this openness has a price: It facilitates new attacks using the same methods. Once criminals know where the vulnerability is and roughly what it involves, their search for how to exploit it is greatly simplified. The consensus of the information-security community today is that openness is more important than preventing new attacks in the short term, and only a few vendors disagree with that. Once vulnerability information is posted, remediation methods usually follow quickly. But not all systems are updated quickly, and cyberattackers have a window of opportunity ranging from a few days to a few weeks for systems that are lagging in updates. Millions of dollars' worth of theft can occur in this period.
Damage can be caused by countermeasures themselves. (Anonymous, 2012) identifies worldwide damage to DNS services due to China’s efforts to prevent its citizens from going to certain Web sites. This kind of collateral damage can occur after attacks when victim states attempt to block network services that led to the attack.
Cyberweapons are well suited for inducing a victim's mistrust of its own military systems, since attack machinery can be well hidden. When a victim is attacked or discovers a weapon, they may suspect that other weapons are concealed. Some of the damage of a cyberweapon can thus be psychological, and this can be an effective deterrent against the victim going to war (Libicki, 2011). Interestingly, this damage can occur even before going to war, as for instance when many countries overestimate U.S. cyber capabilities. But the damage is greatest when cyberweapons have actually been demonstrated on a victim in a convincing manner. This damage can extend well beyond military systems, because an effective cyberattack will likely be reported by the victim to gain international sympathy, and this reporting will reduce the confidence of civilians in their digital infrastructure. People may start to blame every problem with their computers and devices on malware. While from a military effectiveness standpoint this provides a multiplier on the effect of a weapon, it is bad from a collateral damage standpoint because the psychological damage may be unpredictable. Militaries prefer predictable weapons.
Psychological collateral damage is particularly difficult to measure because it may have little relationship to the actual damage. Overwhelming damage to a country's military can have dramatic political consequences, as seen in Argentina after the Falklands war. Or civilians may just interpret the damage as the military's own problem with little impact on themselves. If a country continues to live in fear for a long time after a cyberattack, this could have profound consequences for people in that country. The Stuxnet attacks led Iran to retaliate by attacking U.S. targets to very little effect (Gorman and Yadron, 2013), wasting resources that could have been better spent elsewhere in Iran. The terrorist attacks on the United States in September 2001 induced an overreaction that has done little good for the United States in light of the few terrorist threats it has had subsequently (Kimmel and Stout, 2006). Cyberweapons, with their mysterious modes of operation and concealed effects, are even better at inducing fear in a populace. Cyberweapons with broad psychological damage, such as unnecessarily powerful cyberweapons that shut down cyberspace for weeks, could be unethical regardless of their targets.
Many of the costs we have mentioned above can be measured, so we can often apply utilitarian ethics. Let us take Stuxnet as an example. We conservatively estimate that 10,000,000 machines were infected worldwide by the multiple propagation mechanisms. Even though these infections did not sabotage most of these machines, it was not obvious at first that this was true, so quick detection and removal on each machine was important. The detection and removal were bundled with handling other threats, and their amortized time cost was probably around a second of a user’s time, hence their cost was around 10,000,000 * $100 per hour / 3600 seconds = $277,000. The cost of discovery and mitigation of Stuxnet was at least $1,000,000 as discussed above. Attempts at attribution probably cost around $100,000 since there was not much concern about it for this attack. The reuse of the attack methods by criminals probably resulted in 10,000 incidents worldwide having varying costs but probably averaging at least $100 per incident for $1,000,000 total. The psychological cost to Iran was at least $500,000 just from the cost of the wasted attacks on the U.S. Thus the total collateral damage of Stuxnet was at least $2.9 million. This needs to be compared to the numeric benefit of Stuxnet, a delay of a few months in Iran’s nuclear development, something difficult but not impossible to estimate.
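The tally above can be laid out explicitly. The following is only a back-of-envelope restatement of the chapter's own rough assumptions (all figures are the estimates given in the text, not measured data):

```python
# The chapter's conservative assumptions for Stuxnet's collateral damage:
INFECTED_MACHINES = 10_000_000   # machines infected worldwide
SECONDS_PER_MACHINE = 1          # amortized detection/removal time per machine
HOURLY_RATE = 100                # U.S. dollars per hour of user time

detection_cost = INFECTED_MACHINES * SECONDS_PER_MACHINE * HOURLY_RATE / 3600
analysis_cost = 1_000_000        # discovery and mitigation of the malware
attribution_cost = 100_000       # modest, as attribution drew little concern
copycat_cost = 10_000 * 100      # 10,000 criminal reuses at ~$100 each
psychological_cost = 500_000     # Iran's largely wasted retaliatory attacks

total = (detection_cost + analysis_cost + attribution_cost
         + copycat_cost + psychological_cost)
print(round(total))  # about $2.9 million, matching the chapter's estimate
```

Laying the figures out this way also shows which assumptions dominate: halving the per-machine cleanup time changes the total by only about 5 percent, whereas the analysis and copycat estimates each account for roughly a third of it.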
Thus the cost of a cyberattack that does not kill anyone could easily reach into the millions of U.S. dollars. (Viscusi, 2004) assigns a value to a U.S. human life of $4.7 million based on economic impact, and the U.S. government has recently assigned values ranging from $6 million to $9 million (Appelbaum, 2011). Since this value has a large economic component, it will be less where per capita income is less. The international standard for insurance purposes is $50,000 per future year of human life (Kingsbury, 2008), which for the roughly forty remaining years of an average adult gives a value of $2 million. This value should be decreasing on an increasingly overpopulated planet since additional humans are increasingly redundant. Thus the damage of Stuxnet was similar to that of killing a person, even if we do not believe Iranian claims that it killed someone. Thus the harm of a less well planned cyberattack could be greater than the cost of traditional military operations to achieve the same goals.
Cyberweapons are yet another step in the institutionalization of cowardice. With them the attacker is even more remote from their victim than with extrajudicial executions by drones in Pakistan today. This means further opportunities for errors and misjudgments in targeting, and further opportunities for collateral and persistent damage. This is consistent with the argument of (Smith, 2002) that modern high-technology warfare will and must increasingly target civilians. Worse, much of the damage will be hidden or disseminated across networks, so aggressors will not get adequate feedback about what they have done, and victims may find it difficult to remove the damage after the end of hostilities. Cyberweapons also embody some serious ethical problems of their own beyond collateral damage: they are so close to cyberespionage in their methods that it is very tempting to use them for small provocations, they usually demand cyber perfidy, and their dependence on software flaws makes them unreliable and inviting of overkill.
Thus a civilized country needs to think carefully before embarking on cyberwarfare. It is becoming increasingly important to resolve cyberconflicts without the violence of cyberattacks, as reducing violence is a desirable goal in cyberspace just as for other forms of warfare (Christie et al, 2008). Because of this host of ethical challenges associated with cyberweapons, their use should be discouraged, and international agreements should be sought to restrict their use, much in the manner of nuclear, chemical, and biological weapons.
This work was supported by the U.S. National Science Foundation under grant 1318126 of the Secure and Trustworthy Cyberspace Program. The views expressed are those of the author and do not represent those of the U.S. Government.
Anonymous, ‘The collateral damage of Internet censorship by DNS injection’ (July 2012) 42 (3) ACM SIGCOMM Computer Communications Review 22-27
B. Appelbaum, ‘As U.S. agencies put more value on a life, businesses fret’ The New York Times (February 16, 2011)
E. Barrett, ‘Warfare in a new domain: the ethics of military cyber-operations’ (2013) 12(1) Journal of Military Ethics 4-17
R. Belk and M. Noyes, ‘On the use of offensive cyber capabilities: A policy analysis on offensive US cyber policy’ (March 2012) <http://belfercenter.ksg.harvard.edu/experts/2633/robert_belk.html> accessed November 8, 2013.
A. Bellamy, Just wars: from Cicero to Iraq (Cambridge, UK: Cambridge University Press, 2006)
G. Burnham, R. Lafta, S. Doocy, and L. Roberts, ‘Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample’ (October 11, 2006) 368 (9545) The Lancet 1421-1428
D. Christie, B. Tint, R. Wagner, and D. Winter, ‘Peace psychology for a peaceful world’ (2008) 63 (6) American Psychologist 540-552.
R. Clarke and R. Knake, Cyber War: The Next Threat to National Security and What To Do About It (New York: HarperCollins, 2010)
R. Dipert, ‘Other-than-Internet (OTI) cyberwarfare: challenges for ethics, law, and policy’ (2013) 12 (1) Journal of Military Ethics, 34-53
C. Elisan, Malware, Rootkits, and Botnets: A Beginner’s Guide (New York: McGraw-Hill Osborne, 2012)
T. Gjelten, ‘First strike: US cyber warriors seize the offensive’ World Affairs Journal, (January/February 2013) <www.worldaffairsjournal.org/article/first-strike-us-cyber-warriors-seize-the-offensive> accessed November 8, 2013
S. Goel, ‘Cyberwarfare: connecting the dots in cyber intelligence’ (August 2011) 54 (8) Communications of the ACM 132-140
S. Gorman and D. Yadron, ‘Banks seek U.S. help on Iran cyberattacks’ Wall Street Journal (January 16, 2013) <online.wsj.com/news/articles/SB10001424127887324734904578244302923178548> accessed November 20, 2013
M. Gross, ‘A declaration of cyber-war’ Vanity Fair (April 2011) <www.vanityfair.com/culture/features/2011/04/stuxnet-201104> accessed May 12, 2012
A. Hagopian, A. Flaxman, T. Takaro, E. Shatari, A. Sahar, J. Rajaratnam, S. Becker, A. Levin-Rector, L. Galway, H. Al-Yasseri, J. Berg, W. Weiss, C. Murray, G. Burnham, and E. Mills, ‘Mortality in Iraq associated with the 2003-2011 war and occupation: findings from a national cluster sample survey by the University Collaborative Iraq Mortality Study’ (October 15, 2013) 10 (10) PLoS Medicine
<www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001533> accessed November 9, 2013
ICRC (International Committee of the Red Cross) ‘International humanitarian law – treaties and documents’ <www.icrc.org/icl.nsf> accessed 1 December 2007
IHRCR-SLS (International Human Rights and Conflict Resolution Clinic, Stanford Law School), and GJC-NYUSL (Global Justice Clinic, New York University School of Law) ‘Living under Drones: Death, Injury, and Trauma to Civilians from U.S. Drone Practices in Pakistan’ (September 2012) <www.livingunderdrones.org> accessed October 26, 2013
D. Kaplan, ‘New malware appears carrying Stuxnet code’ SC Magazine (October 18, 2011) <www.scmagazine.com/new-malware-appears-carrying-stuxnet-code/article/214707> accessed August 1, 2012
P. Kaurin, ‘When less is more: expanding the combatant/noncombatant distinction’, Chapter 6 in M. Brough, J. Lango, and H. van der Linden (eds.), Rethinking the Just War Tradition (New York: SUNY Press, 2007)
P. Kimmel and C. Stout, (eds.), Collateral Damage: The Psychological Consequences of America's War on Terrorism (Westport, CT: Praeger, 2006)
K. Kingsbury, ‘The value of a human life: $129,000’ Time (May 20, 2008) <content.time.com/time/health/article/0,8599,1808049,00.html> accessed November 23, 2013
S. Lee, Ethics and War: An Introduction (Cambridge, UK: Cambridge University Press, 2012)
M. Libicki, ‘Cyberwar as a confidence game’ (Spring 2011) 5 (1) Strategic Studies Quarterly 132-146
C. Malin, E. Casey, and J. Aquilina, Malware Forensics Field Guide for Windows Systems: Digital Forensics Field Guides (New York: Syngress, 2012)
A. Moseley, ‘Just War Theory’ Internet Encyclopedia of Philosophy, <www.iep.utm.edu/justwar> accessed March 14, 2014
W. Owens, K. Dam, and H. Lin, (eds.), Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (Washington, DC: National Academies Press, 2009)
D. Raymond, G. Conti, T. Cross, and R. Fanelli, ‘A control measure framework to limit collateral damage and propagation of cyber weapons’ Fifth Intl. Conference on Cyber Conflict (2013), Tallinn, Estonia
N. Rowe, ‘The ethics of cyberweapons in warfare’ (January-March 2010) 1 (1) International Journal of Technoethics 20-31
N. Rowe, ‘Cyber perfidy’, Chapter 29 in F. Allhoff, N. Evans, and A. Henschke (eds.), The Routledge Handbook of War and Ethics (New York: Routledge, 2013) 394-404
N. Rowe, S. Garfinkel, R. Beverly, and P. Yannakogeorgos, ‘Challenges in monitoring cyberarms compliance’ in P. Yannakogeorgos and A. Lowther (eds.), Conflict and Cooperation in Cyberspace: The Challenge to National Security in Cyberspace (New York: Taylor and Francis, 2014) 81-100
E. Segal, ‘The code not taken: China, the United States, and the future of espionage’ (2013) 69 (5) Bulletin of Atomic Scientists 38-45
T. Smith, ‘The new law of war: Legitimizing hi-tech and infrastructural violence’ (2002) 46 International Studies Quarterly 355-374
B. Strawser and D. Denning, ‘Moral cyber weapons: the duty to employ cyberattacks’, Chapter 6 in M. Taddeo and L. Floridi (eds.), The Ethics of Information Warfare (Berlin, Germany: Springer, 2014)
TechRepublic, ‘Flaw finders go their own way’ (January 26, 2005) <www.techrepublic.com/forum/discussions/9-167221> accessed August 1, 2012
USCCU (United States Cyber Consequences Unit), ‘Overview by the US-CCU of the Cyber Campaign against Georgia in August of 2008. US-CCU Special Report’ (August 2009) <www.usccu.org> accessed November 2, 2009
W. Viscusi, ‘The value of life: estimates of risk by occupation and industry’ (January 2004) 42 (1) Economic Inquiry 29-48
M. Walzer, Just and Unjust Wars: A Moral Argument with Historical Illustrations (New York: Basic Books, 1977)