War Crimes from Cyberweapons
Neil C. Rowe
Center for Information Security Research
U.S. Naval Postgraduate School
As information warfare capabilities have grown in recent years, the possibilities of war crimes committed with cyber-attacks have increased. This paper examines the main ethical problems of cyberweapons in regard to ruses, secrecy, and collateral damage, and draws analogies to biological weapons. It argues that most cyber-attacks are instances of perfidy, and that they spread so easily that they can approach biological weapons in their uncontrollability. Mitigation techniques for cyberweapons in the form of more precise targeting, reversibility, and self-attribution are then considered. The paper concludes with a survey of methods for the prosecution and punishment of cyberwar crimes, including forensics, interventions, cyberblockades, and reparations, and proposes a new kind of pacifism called 'cyber-pacifism'.
Keywords: war crimes, information warfare, cyber-attacks, biological warfare, collateral damage, ruses, attribution, repair, intervention, cyberblockades, reparations
This paper appeared in the Journal of Information Warfare, Vol. 6, No. 3, pp. 15-25, 2007.
Information warfare provides a new set of weapons different in many ways from those of conventional warfare. The focus here is on cyber-attacks or the 'computer-network attack' methods of information warfare. They can destroy adversary data, computer systems, and networks, and can have a major effect on an adversary’s ability to wage war (Bayles, 2001). An important question is whether unethical use of such cyberweapons could constitute war crimes. This is important because war crimes have been increasingly prosecuted in recent years.
International laws of the conduct of war (jus in bello) try to regulate how wars can be legally fought (Gutman & Rieff, 1999). The Hague Conventions of 1899 and 1907 and Geneva Conventions of 1949 and 1977 (ICRC, 2007) are the most important. Article 51 of the 1977 Additional Protocols of the Geneva Conventions prohibits attacks that employ methods and means of combat whose effects cannot be controlled or whose damage to civilians is disproportionate, and Article 57 says “Constant care shall be taken to spare the civilian population, civilians, and civilian objects”. The Hague Conventions prohibit weapons that cause unnecessary suffering. Nearly all authorities agree that international law does apply to cyberwarfare (Schmitt, 2002). Beyond laws, a wide body of doctrines, policies, and ethical precepts are associated with the waging of war (Nardin, 1998).
As an example, consider that country A attacks country B with a virus in cyberspace a few minutes before an invasion with conventional aircraft, naval forces, and troops. The cyber-attack could target B's military command-and-control structure, but could also include the computers and networks that control power delivery, water supplies, and civilian media communications (since militaries exploit them too). In fact, Country A may prefer to deliberately attack civilian systems as a more cost-effective way to hurt B's ability to wage war, since B's military networks are likely to be considerably harder to attack than their civilian ones. Clever viruses can take a long time to eradicate completely if they use good camouflage and infect backup copies of software as well, particularly if the victim country does not have much technical expertise due to poor training, dependence on foreign support no longer available, or casualties to its technical personnel. If it takes a month for B to repair the damage of the cyber-attack, this delay would hurt B's military; but lack of electrical power, water, and media communications for a month would cause much unnecessary suffering of the civilian population. It could lead to starvation, epidemics, and panic among civilians. This can be a war crime if the sufferings of civilians are severe and unnecessary.
Unfortunately, existing international law provides inconsistent and conflicting precedents for prosecuting cyberwar crimes (Hollis, 2008). This paper therefore suggests how the law should be applied, in the manner of Walzer (1977), not what is likely to be prosecuted under current law. The argument is that war crimes are easier to accomplish in cyberspace than is commonly thought, and that proactive steps can be taken now to prevent some of them. In fact, there are good arguments to prohibit all cyberweapons, a position that could be termed 'cyber-pacifism'.
Most cyber-attack methods currently used by amateurs ('hackers') exploit ruses of various kinds. Ruses in conventional armed conflict that feign civilian or noncombatant status are called 'perfidy' and are prohibited by article 37 of the 1977 Additional Protocols of the Geneva Conventions (Gutman & Rieff, 1999). Most current cyber-attacks similarly pretend to be normal Internet traffic while hiding malicious intent. This can include insincere requests for services that contain hidden Trojan horses, or code that causes little-known side effects leading to eventual attacker control of a system. Usually such side effects are due to defects in a system's software design that are known to the attacker but not the defender. A common example is the deliberate buffer overflow: supplying overly large inputs to programs that, due to programmer error, fail to adequately check the size of their inputs. Thus most cyber-attacks are ruses that feign normal operating-system operations and Internet protocols. Operating systems and networks are 'civilians' in cyber-warfare since they are neutral parties supporting the tasks of everyone. Most attacks on computer systems for purposes of warfare must target them, and hence are perfidious and are war crimes. It does not matter that operating systems and networks are not people, since they support people, much in the way that bombing a building containing people is an attack on the people too.
The ease of cyberspace ruses makes accurate damage assessment difficult for cyber-attacks. Computers can behave oddly for many legitimate reasons, such as programming bugs and overloading of the operating system, and a clever attacker may ensure that serious damage becomes apparent only at critical times, so it may be hard to tell whether one has been attacked at all. Then once attacked, locating the damage may be difficult, since a modification to one piece of software may manifest in many features seemingly unrelated to the original site of damage. So cyber-attacks are not only difficult to repair but also difficult for attackers to assess in their results. Thus victims may not be prompted to fix the damage, and attackers may tend to use unnecessarily strong cyber-attacks just to ensure that they get some obvious effects, risking violation of the Geneva Conventions' stipulation of proportionate attacks and counterattacks, and worsening war crimes. Another aggravating factor is that military commanders have been overpromised the benefits of cyberweapons for some time now; they may thus feel they need to launch multiple kinds of cyber-attacks simultaneously to obtain one that works.
Bissett (2004) points out that modern warfare rarely achieves its promise of precise 'surgical strikes' with any new technology, for several reasons: political pressures to use something new whether or not it is appropriate, the inevitable mistakes in implementing new technology, a diminished feeling of responsibility in the attacker due to the technological remoteness of the target, and the inevitable surprises in warfare that were not encountered during testing in controlled environments. To these can be added equipment breakdowns and deliberate deception by an enemy. So despite hopes, U.S. forces attacking Iraq in 2003 and later occupying it did not often demonstrate high-precision targeting with their munitions. A major issue with cyberweapons is thus their targeting. With many conventional weapons like guns, one needs to be near the target to damage it. But a cyberweapon could be controlled from anywhere -- perhaps anywhere on the Internet. This means there are many ways to mistakenly hit a civilian target and cause collateral damage.
Needless to say, it is difficult to hit a military target from the Internet. Those that are not 'air gapped' or inaccessible will have careful gatekeepers preventing entry or control of their networks from outside (Westwood, 1997; Arquilla, 1999). Then once inside military networks, it may be difficult to figure out which systems are doing what unless you work for the organization. Although the U.S. military does not, it is desirable for weak militaries to camouflage their military and government cyber-targets (machines and software) with misleading names and addresses, or even put them inside civilian sites. So when a cyberweapon is launched it could possibly hit a civilian target nearby or one with an accidentally similar name or address. What exactly is a civilian target can be argued; Jensen (2003) uses a broader definition of “military target” that includes the civilian infrastructure of software companies such as Microsoft, but attacks on them will entail preponderantly collateral damage to civilians. Civilian targets are tempting anyway since they are easier to attack and can have more impact on an adversary’s willingness to fight, so cyber-attackers may be encouraged to be careless with collateral damage (Knapp & Boulton, 2007).
Many of the current hacker attacks like viruses, worms, and bots can also spread autonomously from one computer to another. This is desirable in cyber-attacks because it gives more power to a single attack, and allows some flexibility in the attack if it is hard to reach the target machine. So even if a military site is accurately targeted by a cyber-attack, its damage may spread to civilian sites through electronic mail, shared storage media, Web sites, or shared network infrastructure. It could also spread through clever hackers who discover the attack method and modify it for their own purposes (information-warfare attacks are likely to use powerful new methods). Judging by the often poor preparedness of civilian machines against hacker attacks today, civilians will likely not be prepared with appropriate countermeasures for these propagations to their machines (Ericsson, 1999). Civilian machines are particularly vulnerable to propagation involving new network protocols since most do not prohibit such connections. In addition, the magnitude of the effect of cyberweapons is difficult to control, and can easily violate the 'proportionality' of a counterattack that is given as a justification in the Geneva Conventions (Gardam, 2004). Even without autonomously spreading attacks, destroying software or network functionality may have unexpected consequences since networks are multiuse (Molander and Siang, 1998). For instance, destroying a router that serves command-and-control nodes could also stop functions in a military hospital, a prisoner-of-war camp, or a power system that serves humanitarian purposes, all geographically dispersed.
Exacerbating the danger of collateral damage is the increasing similarity of the software on military and civilian computers. For instance, with the U.S. Navy's standardization on the Windows operating system for nontactical operations under the NMCI (Navy Marine Corps Intranet) standard, many attacks on the Navy would also work against civilian computers. While standardization does simplify deployment of countermeasures against known attacks, it makes a system more vulnerable to new attack methods – and new methods are likely what we will see in information warfare, because they are considerably more effective and militaries have the resources to develop good ones. New exploits for Windows are found all the time by hackers, so there are usually plenty of methods that will work against both Navy and civilian machines.
Information-warfare attacks also can involve manipulating other innocent computers to serve as camouflage or shields for the attacker. Often this must involve trespassing on computers and networks that have not previously agreed to cooperate. Trespassing is generally unethical and illegal in cyberspace as it is in physical space (Himma, 2004), much as people generally do not have a right to cross someone else's real estate to reach their own. (In the U.S., police are not even allowed to enter a house in pursuit of a fleeing criminal if the unrelated homeowner does not consent.) But there are exceptions for public resources such as World Wide Web sites. If cyberspace trespassing causes severe disruption to its victims, it could constitute a war crime.
Cyber-attack methods require more secrecy than missiles and artillery, since most attack methods can be easily foiled if it is known what mechanism is used (Denning, 1999). Time and place do not provide much surprise, since everyone knows attacks can occur anytime and anywhere, and automated defenses can operate equally effectively anytime and anywhere. Thus cyber-attack methods must be kept highly secret – which makes them highly desirable targets for espionage, entailing still more secrecy to protect them. Bok (1983) cites many disadvantages of secrecy, including the encouragement of an elite out of touch with the changing needs of their society. The capabilities of secret cyberweapons are also more likely to be misunderstood by their users, which means the weapons can more easily get out of control and cause war crimes than conventional weapons whose capabilities are well known.
It is useful to compare cyberweapons to biological weapons (Lederberg, 1999). Both are nonstandard weapons that have seen little use in conflict, but which could be developed to become powerful and frightening tools. Both are difficult to target and significantly risk collateral damage because their attacks can spread inadvertently (though some control of spreading is possible for both). Biological weapons engender considerable fear which cyberweapons could deserve as well due to their great flexibility.
Biological weapons are currently outlawed by the 1972 Biological and Toxin Weapons Convention, although (unlike the nuclear-weapons and chemical-weapons conventions) it contains no enforcement machinery. Specifically, it prohibits the development, production, and stockpiling of microbial or other biological agents or toxins that have no justification for prophylactic, protective, or other peaceful purposes, as well as weapons, equipment, and other means of delivery of such agents. The rationale for the prohibition is that such weapons are primarily weapons of terror, most effective against civilian populations that are not expecting an attack and are unvaccinated, lack protective gear, or are otherwise unprepared. Analogously, many cyberweapons are considerably more effective against civilian targets, since civilians do not maintain their computer security at the level of military and commercial computers: they install security software less often and patch security flaws less promptly.
The Biological and Toxin Weapons Convention bans in particular the spreading of plant diseases as a form of warfare (Whitby, 2002), a concept even more analogous to that of cyber-attacks. The rationale for prohibiting such agricultural warfare is that it affects civilians equally since everyone needs to eat, and the effects might not be immediate but could be long-term. Similarly, cyberweapons attack the information-processing infrastructure of society. The effects might not be immediate since societies manage when a few computers break down occasionally, but matters could be serious if outages last a day or weeks. So many services of a society depend on computers today that widespread failures could bring an economy quickly to a halt.
Given the above concerns, can technology help us employ cyberweapons ethically so they will not or cannot be used for war crimes? Several technical developments can help.
Mitigating collateral damage
Cyber-attacks can and should be designed to be selective in which systems they attack and what they attack within those systems. Much as uniforms identify military personnel in conventional warfare, military systems should carry information identifying what each computer is and who owns it (they need this anyway for the security of their own networks). Absence of such information, or better yet, clear identifying information to the contrary, should cause a computer to be labeled as civilian. Some such information is routinely requested in setting up an operating system on a computer, and almost all U.S. military computer systems display clear identifying 'banners' on login. Even without explicit information, military and civilian systems and networks can often be distinguished by their Internet (IP) addresses as well as by inspection of their structure and files. Then if an attacker persists in attacking a computer identified as, or likely to be, civilian, this should constitute collateral damage and could be a war crime.
Many military command-and-control systems claim to be unconnected to the Internet (air-gapped), but many such claims have been shown to be false because of hidden modems, debugging interfaces, and forgotten features. Attacks on truly air-gapped military networks involve less risk of collateral damage – so paradoxically, highly secure systems are excellent targets. However, naturally an adversary will make it difficult through camouflage and deception to get accurate information about their computer systems, as this is their 'electronic order of battle'.
Attacks also can and should be limited to a few mission-critical parts of software. So an attack might disable most of the network protocols for communications but leave intact the software for slow services like email since those could be needed for humanitarian operations while not being much help for military needs. Undesirable ethically are 'denial-of-service' attacks that swamp an adversary's resources with requests, as are attacks on routers, since their effects are broad and could more easily cause collateral damage.
Attackers accused of war crimes may be able to shift some blame to the victim when the victim has not followed prudent designs and procedures, as with the adoption of vulnerable software by the U.S. Navy's NMCI standard.
Ability to stop the attack
An important feature of an ethical cyberweapon is the ability to stop it. Much like a long-range missile, an ethical cyberweapon should be able to be 'called off' when circumstances change after it has been triggered and before it has finished, or even started, its damage. This issue particularly applies to cyberweapons because attacks often require long preparation and have a long duration.
Cyberweapons could be provided with codes to disable them. But it is not so easy to control attacks whose targets are air-gapped or highly inaccessible. When an attack cannot be controlled, it could be a war crime to perpetrate it, much like using a nuclear weapon that cannot be targeted accurately.
An intriguing possibility for ethical cyber-attacks, not possible in conventional warfare, is to design their damage to be quickly repairable, perhaps even completely reversible. Encryption is a good attack tool for this because it is easily reversible with a secret key which could be known only to the attacker. For instance, damage could take the form of an encryption of critical data or programs, making them unusable in a reversible way. Or a virus or worm could store with itself, in encrypted form, the code it has replaced, enabling later reconstruction of the original code by the attacker who can supply the decryption key. But repair is harder when many targets have been attacked, or when an attack targets things like startup code that reinstalls the attack every time a machine is restarted. This suggests that ethical cyber-attacks be constrained in how many sites or software objects they target, much like air bombardment today. Repair procedures could be designed to be triggerable by the attacker at a time of their choosing, or could be kept in escrow by a neutral party such as the United Nations until the termination of hostilities.
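To make the reversibility idea concrete, the following sketch (in Python, purely illustrative, with hypothetical names) shows how a keyed XOR transform can render data unusable yet fully restorable by whoever holds the key. The keystream construction here is a simple counter-mode hash, chosen only to keep the example self-contained; it does not represent any fielded weapon.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing the key with a counter.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def xor_transform(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying it twice with the same key restores the data.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

original = b"critical configuration data"
key = b"attacker-held secret"          # hypothetical attacker-held key
scrambled = xor_transform(original, key)   # data rendered unusable
restored = xor_transform(scrambled, key)   # supplying the key reverses the damage
assert restored == original
```

The same key (or a repair routine keyed by it) could be held in escrow by a neutral party, as suggested above, so repair does not depend on the attacker's goodwill.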
Self-attribution of attacks
Perfidy is prohibited by the laws of warfare because the risk of collateral damage increases when combatants cannot recognize one another. Perfidy can occur with cyber-attacks since it is hard to tell who is attacking from just a stream of bits. While the international situation may suggest a culprit, no uniforms or vehicle markings identify attackers in cyberspace. Effective attacks in cyberspace usually need to be staged through a series of neutral computers, further obscuring the origin of the attack. This uncertainty about the attacker can cause retaliation against innocent countries which have been 'framed' or made to look like attackers, or just countries the retaliator dislikes and is looking for an excuse to attack, like Spain by the U.S. in the Spanish-American War of 1898. Furthermore, attacks can be launched by small groups of people or even individuals that do not represent a government. So with most of today's cyber-attacks, it is generally illegal and unfair to attack the government of the country that seems to be the origin of the attack without considerable further justification.
That suggests that ethical cyberweapons should use methods of cryptographic authentication to identify themselves. A good way is to include a digital signature with attack code (as a form of 'self-attribution'). Digital signatures are cryptographic hash functions of the bits of a document (including its date) where any change to the document will have a different digital signature. Public-key cryptography is the easiest way to implement them; the signer of the document uses their private key to encrypt the signature, and the viewers decrypt this with the published public key to confirm the identity of the sender. An ethical justification for this is that people should be responsible for the actions of code they write just as much as for the actions they do personally (Orwant, 1994). The presence of a signature is somewhat similar to the chemicals added to plastic explosives by international convention (TRB, 2004) to make them easier to detect.
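As an illustrative sketch only: the proposal above calls for public-key signatures, but the fragment below uses a symmetric keyed hash (HMAC, from the Python standard library) as a stand-in for the signing step, since it demonstrates the essential property that any change to the signed code invalidates the signature. A real self-attribution scheme would use an asymmetric algorithm such as RSA or DSA, so that third parties could verify with the published public key without being able to forge signatures. All names and keys here are hypothetical.

```python
import hmac
import hashlib

def sign_attack_code(code: bytes, secret_key: bytes) -> bytes:
    # A keyed hash over the code serves as the signature; any change
    # to the code yields a completely different signature.
    return hmac.new(secret_key, code, hashlib.sha256).digest()

def verify(code: bytes, signature: bytes, secret_key: bytes) -> bool:
    # Constant-time comparison avoids leaking information to an observer.
    return hmac.compare_digest(sign_attack_code(code, secret_key), signature)

payload = b"2007-06-01: attack module (hypothetical)"  # includes a date, as proposed
key = b"signer's secret key"
sig = sign_attack_code(payload, key)
assert verify(payload, sig, key)              # authentic code verifies
assert not verify(payload + b"x", sig, key)   # any tampering is detected
```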
However, attacks are most effective when the victim does not even know they are being attacked -- surprise is important in warfare. Concealment is not a war crime, just the normal conduct of war. So digital signatures could be hidden steganographically (Wayner, 2002). Within computer code, this could be in inessential bits of data or in the pattern of the code itself. For instance, it could be in the least-significant bits of picture brightnesses, or in the particular permutation of four successive interchangeable commands (there are 24 ways to order four commands, so each ordering can encode more than four bits of information). With attacks comprised of simpler data, such as short packets of denial-of-service traffic, a steganographic signature could be concealed in the timings of the packets.
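A minimal sketch of the least-significant-bit scheme just described, assuming pixel brightnesses are integers in the range 0-255; the payload here is a hypothetical stand-in for a fragment of a signature:

```python
def embed_bits(pixels, bits):
    # Overwrite the least-significant bit of each pixel brightness
    # with one payload bit; remaining pixels are left untouched.
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_bits(pixels, n):
    # Read the payload back from the low bits of the first n pixels.
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 77, 154, 9, 230, 101, 42]   # hypothetical brightnesses
payload = [1, 0, 1, 1, 0, 1, 0, 0]             # signature fragment to hide
stego = embed_bits(pixels, payload)
assert extract_bits(stego, len(payload)) == payload
# Each brightness changes by at most 1, so the carrier looks unchanged.
assert all(abs(a - b) <= 1 for a, b in zip(stego, pixels))
```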
Steganographic signatures (and who is responsible for each) could be stored in a neutral-country site like one used by the United Nations. To prevent them from being forged, they can be cryptographic signatures that are a function of a secret key as well as the contents of the associated data. Alternatively, the neutral site could store some of the attack code signed by the secret key affirming where it came from. A country could have several incentives to submit their attack signatures or code to such a site. Unattributed attacks are characteristic of terrorism, and nations do not want to be accused of engaging in terrorist tactics. Code signatures on malicious code could be a mandated part of disarmament or sanctions against a country, and could be enforced by inspections of the country's laboratories much like inspections for nuclear weapons material. Note that it is easier to detect malicious code than nuclear materials because code must be in some kind of military computer storage and automated software tools can be designed to search storage systematically while ignoring other secrets there. Also, a country may want its legitimate attacks to be recognized so they cannot be blamed on scapegoats. For instance, the U.S. may wish its attacks in the Middle East to be recognized and provable so they cannot be blamed on Israel.
When a war crime is committed in cyberspace, several responses are possible. Conventional methods such as arresting the perpetrators and trying them in a court of law may work if perpetrators can be identified and found. This may be possible under the perpetrator's legal system, as in the United States where treaties approved by the U.S. Senate have the force of U.S. laws, but most international crimes require an international tribunal. The principle of 'universal jurisdiction' is increasingly being applied to serious war crimes, meaning that they are international crimes that can be prosecuted in any country (Berman, 2002). Legal responses to cyberwar crimes will require that the international community apply the computer-forensics methods used today to prosecute local and national cybercrimes. However, responses more specific to cyberspace can be useful.
Forensics for war crimes
Forensics methods for prosecuting war crimes need to be considerably more thorough than conventional forensics methods for hacker attacks (Mandia & Prosise, 2003). For one thing, it is not likely to be possible to obtain a search warrant to confirm the origin of an attack. The best that can be done is to capture international traffic involving third parties that were used as stepping stones in the attack. This will not work if the attack did not require them (as when viruses are planted directly on machines within the victim country) or if the traffic is encrypted by other than link-to-link methods (though such encryption is suspicious by itself). A variety of additional circumstantial evidence will be helpful in forensic analysis, such as timings of events (which are difficult to disguise) and exact copies of code found in different victim sites (identified by cryptographic hashes). Lack of ethics in the attack, such as lack of self-attribution and broad collateral damage, can be documented and used as factors increasing the penalty for the crime.
Useful circumstantial evidence for war crimes is the record of network events, since most military actions in cyberspace will require control by the attacker over the victim, and the chain of control may be traceable (though not always, because traffic in an attack can be encrypted). Control is usually necessary since preprogrammed attacks for particular times are scattershot and usually ineffective in what are often highly fluid international crises. While tracing is not easy, most routers store data about recent connections, and this can be accessed if done quickly enough; critical-infrastructure networks often store much more of this kind of data. This data provides connection information for the routes taken. Alternatively, statistical analysis may suggest where an attack is coming from, an approach particularly good for denial-of-service attacks. If for instance an attack is propagating from a single site with a branching factor of B, we should expect to see a number of attack packets proportional to B^K at a distance of K nodes (for small K) from the propagation site, which gives clues for tracking the attack source. However, proving that a country is responsible for an attack is harder than proving that a single hacker is responsible, since just because attacks originate within a country does not mean that its government is responsible. A successful prosecution will likely involve widespread or repeated attacks where the origin cannot be denied.
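The geometric-growth relationship between branching factor and observed packet counts can be illustrated with a small calculation; the numbers below are hypothetical, and the inversion assumes the small-K regime before the propagation saturates:

```python
import math

def expected_packets(branching: int, distance: int) -> int:
    # An attack propagating with branching factor B produces a packet
    # count proportional to B**K at K hops from its source (small K).
    return branching ** distance

def estimated_distance(packet_count: int, branching: int) -> int:
    # Invert the relationship: an observed count at a monitoring point
    # suggests how many hops away the propagation site is.
    return round(math.log(packet_count) / math.log(branching))

# With branching factor 3, a monitor seeing ~81 attack packets
# is consistent with being 4 hops from the source.
assert expected_packets(3, 4) == 81
assert estimated_distance(81, 3) == 4
```

In practice, counts at several monitoring points would be compared, since noise and saturation make any single measurement unreliable.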
Cyberwar crimes raise a problem of apportioning blame since it is likely that many perpetrators are involved: the originators of the exploits used in the attacks, the software engineers that weaponize attacks, the people that deliver the weapons to the target through one means or another, and perhaps even personnel of the victim that have colluded with the attack. International law usually treats all participants as guilty to varying degrees. Certainly originators of exploits can be just as guilty as others involved in cyberwar crimes because very few exploits have legitimate uses, being usually highly specific software tricks that have little potential for software engineering or in defending information systems (Spafford, 1992). This is different than inventing dynamite, which has many civilian uses. So just research on new cyber-attack methods could be considered a war crime, much like research on fundamentally new biological weapons. The supervisors and managers of such research can be similarly guilty. Nazis in World War II that planned inhumane medical experiments on prisoners were guilty of crimes as much as those who implemented their orders.
Interventions
A way to stop a cyberwar crime in progress might be to intervene against the criminal. A justification often used for U.S. military interventions into other countries is the prevention of crimes such as denial of human rights (Walzer, 1977). So the United Nations, a neutral country, or even an activist group (Manion & Goodrum, 2000) could intervene in the cyber-attack or cyber-attack preparations of an aggressive power to stop it, or at least could help the country defend its cyberspace through technical support. For instance, a neutral country could have tried to disable the U.S. command-and-control network before the latter's unprovoked attack on Iraq in 2003 (acknowledging that this might be an act of war against the U.S.), or it could have provided modern firewall technology to Iraq. However, such interventions would be attacks themselves, and if done in cyberspace, subject to the same objections we have raised elsewhere in this paper.
Cyberblockades
Misuse of tools such as computers, networks, and software on a local level is often punished by denial of those resources. The same idea can be applied on a broader level. A nation that unjustifiably attacks others could have its Internet connections to other countries cut. This is often not difficult since nations are often assigned ranges of Internet IP addresses to make it possible to attribute their Internet traffic, and these assignments are listed in public directories. Even if attackers give false IP addresses ('spoof'), international agencies could use a variety of available information-security techniques to detect most such spoofing by noticing various inconsistencies in the traffic and by correlating arrival times of packets at different sites. Disabling of ranges of IP addresses is done today in 'blacklisting' of sites that send large amounts of spam email, by refusing their email messages. Cyberblockades can also be made selective, so that for instance they apply only to financial transactions.
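A sketch of how such address-range blockading might be checked in software, using Python's standard ipaddress module; the blocked ranges below are reserved documentation addresses, standing in for a sanctioned country's hypothetical assignments:

```python
import ipaddress

# Hypothetical ranges assigned to a blockaded country (illustrative only;
# these are reserved documentation networks, not real assignments).
BLOCKED_RANGES = [ipaddress.ip_network(n)
                  for n in ("198.51.100.0/24", "203.0.113.0/24")]

def is_blockaded(addr: str) -> bool:
    # A border router or gateway would drop traffic whose source or
    # destination falls inside any blockaded range.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)

assert is_blockaded("198.51.100.17")   # inside a blocked range: dropped
assert not is_blockaded("192.0.2.5")   # outside: passed through
```

Selective blockades, such as one on financial transactions only, would additionally filter on port or protocol rather than on address alone.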
Reparations and financial penalties
Proved aggressors in cyberspace should pay reparations for the damage they have caused, since cybercrimes are generally costly to repair. Reparations are an increasingly popular concept in international law (Torpey, 2006). Tort cases in the victim country can provide guidance on appropriate penalties; one can calculate lost GNP or replacement costs for damaged data and software. If attacks are not reversible, reparations for cyberspace operations should include reinstallation of damaged operating systems and software – if a country is technologically sophisticated enough to attack, it should be able to repair. Self-attributable and reversible attacks will simplify reparations, so it is in the interest of attackers to adopt them in a world in which international law is starting to be enforced.
Financial penalties are appropriate punishment for many kinds of crimes, even when no harm has been caused, so they may play a useful role with cyberwar crimes as well. Beyond direct methods like fines, nations have other economic measures they can employ as instruments of policy, such as boycotts, tariffs, and reciprocity policies.
Conclusions
Cyber-attacks are a tempting tool of warfare for many countries because they are novel and can be effective, at least initially, for very little money. But just as with other kinds of warfare, there are both ethical and unethical ways to employ cyberweapons. Used unethically, they can wreak unfair havoc on a country and amount to war crimes. To reduce this danger, policies defining ethical cyber-attacks can be developed and encouraged, and tools can be built to analyze, stop, or penalize serious breaches of the laws of war in cyberspace. Given the current uncertainties, however, it seems poor judgment to use cyber-attacks at all, and cyber-attack methods should be prohibited much as biological weapons are. This point of view is cyber-pacifism.
References
Arquilla J. (1999) Ethics and Information Warfare. In Khalilzad Z., White J., and Marshall A. (Eds.), Strategic Appraisal: The Changing Role of Information in Warfare, 379-401. Rand Corporation, Santa Monica, CA, USA.
Bayles W. (2001) Network Attack. Parameters, US Army War College Quarterly, 31: 44-58.
Berman P. (2002) The Globalization of Jurisdiction. University of Pennsylvania Law Review, 151 (2): 311-545.
Bissett A. (2004) High Technology War and 'Surgical Strikes'. Computers and Society (ACM SIGCAS), 32 (7): 4.
Bok S. (1986) Secrets, Oxford University Press, Oxford, UK.
Denning D. (1999) Information Warfare and Security, Addison-Wesley, Boston, MA, USA.
Ericsson E. (1999) Information Warfare: Hype or Reality? The Nonproliferation Review, 6 (3): 57-64.
Gardam J. (2004) Necessity, Proportionality, and the Use of Force by States, Cambridge University Press, Cambridge, UK.
Gutman R. and Rieff D. (1999) Crimes of War: What the Public Should Know, Norton, New York, NY, USA.
Himma K. (2004) The Ethics of Tracing Hacker Attacks through the Machines of Innocent Persons. International Journal of Information Ethics, 2 (11): 1-13.
Hollis, D. (2007) New Tools, New Rules: International Law and Information Operations. In David G. & McKeldin T. (Eds.), The Message of War: Information, Influence, and Perception in Armed Conflict. Temple University Legal Studies Research Paper No. 2007-15, Philadelphia, PA, USA.
ICRC (International Committee of the Red Cross) (2007) International Humanitarian Law – Treaties and Documents. Retrieved December 1, 2007 from www.icrc.org/icl.nsf.
Jensen, E. (2003) Unexpected Consequences from Knock-On Effects: A Different Standard for Computer Network Operations? American University International Law Review, 18: 1145-1188.
Knapp K. and Boulton W. (2007) Ten Information Warfare Trends. In Janczewski L. & Colarik A. (Eds.), Cyber Warfare and Cyber Terrorism, 17-25. IDG Global, Hershey, PA, USA.
Lederberg J. (Ed.) (1999) Biological Weapons: Limiting the Threat, MIT Press, Cambridge, MA, USA.
Mandia K. and Prosise C. (2003) Incident Response and Computer Forensics, McGraw-Hill / Osborne, New York, NY, USA.
Manion M. and Goodrum A. (2000) Terrorism or Civil Disobedience: Toward a Hacktivist Ethic. Computers and Society (ACM SIGCAS), 30 (2): 14-19.
Molander R. and Siang S. (1998, Fall) The Legitimization of Strategic Information Warfare: Ethical Considerations. AAAS Professional Ethics Report, 11 (4). Retrieved November 23, 2005 from www.aaas.org/spp/sfrl/sfrl.htm.
Nardin T. (Ed.) (1998) The Ethics of War and Peace, Princeton University Press, Princeton, NJ, USA.
Orwant C. (1994) EPER Ethics. Proc. of the Conference on Ethics in the Computer Age, Gatlinburg, Tennessee, 105-108.
Schmitt M. (2002) Wired Warfare: Computer Network Attack and Jus in Bello. International Review of the Red Cross, 84 (846): 365-399.
Spafford E. (1992) Are Computer Hacker Break-Ins Ethical? Journal of Systems and Software, 17: 41-47.
Torpey J. (2006) Making Whole What Has Been Smashed: On Reparations Politics, Harvard University Press, Cambridge, MA, USA.
TRB (U.S. Transportation Research Board of the National Academies) (2004) Public Transportation Security, Volume 6: Applicability of Portable Explosive Detection Devices in Transit Environments, TCRP Report 86. Retrieved August 31, 2007 from onlinepubs.trb.org/onlinepubs/tcrp/tcrp_rpt_86v6.pdf.
Walzer M. (1977) Just and Unjust Wars: A Moral Argument with Historical Illustrations, Basic Books, New York, NY, USA.
Wayner P. (2002) Disappearing Cryptography: Information Hiding: Steganography and Watermarking, Morgan Kaufmann, San Francisco, CA, USA.
Westwood C. (1997) The Future Is Not What It Used To Be: Conflict in the Information Age, Air Power Studies Center, Fairbairn, ACT, Australia.
Whitby S. (2002) Biological Warfare against Crops, Palgrave, Houndmills, UK.
Acknowledgements: This work was supported by the U.S. National Science Foundation under grant 0429411 of the Cybertrust Program. The views expressed are those of the author and do not necessarily represent those of any part of the U.S. Government. Thanks to Dorothy Denning for helpful comments.