A Taxonomy of Norms in Cyberconflict for Government Policymakers

 

 

NC Rowe

 

Computer Science Department

U.S. Naval Postgraduate School

Monterey, California, USA

 

 

E-mail: ncrowe@nps.edu

 

 

Abstract: Cyberconflict provides a new set of challenges to the Law of Armed Conflict. The proposals in the recent Tallinn Manual 2.0 provide a good start, but they are incomplete and do not address important issues on which there is no international consensus on policy. Where laws are lacking, states adopt norms to provide consistency and deterrence. This article provides a broad taxonomy of cyberconflict norms for use by government policymakers, including norms for low-level cyberconflict, norms for starting cyberconflict, norms for conducting it, and norms for post-conflict operations. It also introduces the concept of 'metanorms', norms for handling other norms.

 

Keywords: Norms, Policy, Cyberconflict, Tallinn Manual, States, Discrimination, Neutrality, Escalation, Repair

 

This paper appeared in Journal of Information Warfare, Vol. 17, No. 1, Winter 2018.

 

 

Introduction: Cyberwarfare and Cyberconflict

Before getting into details, it is necessary to provide some standard definitions. 'Cyber' is an adjective used to refer to computers, digital devices, and digital networks. 'Cyberattacks' are malicious activities targeting cyber entities for political, military, or monetary gain. 'Cyberwarfare' is warfare involving cyberattacks (Carr 2011; Singer & Friedman 2014). 'Cyberconflict' includes cyberwarfare, but also a range of actions with coercive intent such as espionage and attempts to influence using cyber means (Robinson, Jones & Janicke, 2015). The activities comprising cyberconflict are also termed 'cyber operations'. The distinction between cyberwarfare and cyberconflict is important because the first version of the Tallinn Manual (Schmitt 2013) labelled its subject 'cyberwarfare', while the recent second version (Schmitt 2017) labelled its subject 'cyber operations'. Both manuals are proposals by international legal experts for laws governing cyberwarfare and cyber operations that would apply to nation-states and would extend the 'Just War' theory. The shift in emphasis in the second version was intended to broaden the coverage of the proposals to apply to most of today's cyber operations, including peacetime regimes (Leetaru 2017), so the second version is considerably longer. While cyberwarfare has been rare, other cyber operations occur continually.

 

For several reasons, the Tallinn Manual 2.0 (Schmitt 2017) does not cover most possible laws, rules, and norms for cyberconflict. First, the 2017 manual is focused on legal definitions that extend non-cyber law, so it only addresses issues having some analogy in current non-cyber law (Schmitt & Vihul 2014). Second, it tries to represent international consensus, but states have different values and must always have the ability to set their own norms on issues that particularly matter to them, such as Internet censorship. Third, cyberspace is inherently transnational (as data and programs do not respect borders of states); attempts to regulate it using traditional notions of nation-states may be doomed to failure (Dipert 2010).

 

Formalised Norms for Cyberwarfare and Cyberconflict

Norms are principles or policies that people or groups follow to resolve ethical or policy problems, or "collective understandings of the proper behaviour of actors" (Thomas 2001, p.27). Norms have traditionally arisen from ethics (Lucas 2016), but they can also come from politics, science, or other sources (Thomas 2001). Norms are invoked when law does not apply, so they are close to what is termed 'customary law'. Since cyberwarfare has only recently become possible, international law such as humanitarian law (International Committee of the Red Cross 2012) is only gradually catching up with its new legal implications, as in both Tallinn Manuals (Morgus 2016). Meanwhile, norms can provide guidance to decisionmakers.

 

Norms can be informal or formal, can pertain to an individual or a group, and can be advisory or mandatory (Thomas 2001). This article focuses on norms for government policymakers: the generally mandatory written norms of the government of a nation-state. The reason for this narrowing of focus is to enable an exploration of what is missing from the two versions of the Tallinn Manual. Thus, the norms discussed here will have most of the force of laws within their states. The hope with most such norms, as with both Tallinn Manuals, is that they become de facto international standards and eventually international law (Nye 2016/2017). Standardisation of norms makes it easier for states to be consistent and provides models for states that do not have time to investigate cyberconflict issues in detail. Still, there are sceptics about cyber norms (Mazanec 2016).

 

Cyberconflict involves software, and norms of a different sort are common for many kinds of software, for instance, norms for file formats and network protocols. Some of these are endorsed by international standards agreements (such as those of the IEEE) or specialised organisations (such as ICANN, which regulates Internet addresses), but others are just standard practice. Thus, norms for cyberconflict are consistent with broader software practice.

 

The United States has been trying to formalise its norms for cyberconflict (Stevens 2012), but it has much work ahead (Crowley & Gerstein 2014). Norms for policymakers in all countries will likely change more often than laws since they are more flexible; for instance, a country that suffers a serious cyberattack will likely change its norms, as did Estonia after 2007. Norms can also be emergent, gradually arising from the experiences of each state and analysis of its successes and failures; furthermore, norms arising this way may be a better basis for successful international agreements (Lucas 2016).

 

Many norms for cyberconflict have been proposed. Some relate to cyberconflict operations short of war, some relate to starting cyberwarfare (jus ad bellum), some to fighting cyberwarfare (jus in bello), and some to the aftermath of cyberconflict. This article considers them in that order. Many, but not all, of the issues in the second and third contexts relate to 'Just War' theory, and most of the issues in the first and fourth do not. This article is based on issues observed in a broad range of discussions in the literature.

 

Tables 1 and 2, below, summarise the author's taxonomy of cyberconflict norms that can and should be put into government policies, the specifics of which are discussed in the remainder of this article. Here 'T' means the norm is covered in a reasonable level of detail in the Tallinn Manual 2.0 (Schmitt 2017); 'O' means the norm is primarily for offensive operations; and 'D' means the norm is primarily for defensive operations. It can be seen that norms include many issues beyond the manual.

 

Metanorms

It is first useful to identify norms for the use of other norms, what the author defines as 'metanorms' (Table 1, box A, below). These metanorms may also be part of a government policy.

 

One metanorm is whether a state's norm is public or private. For instance, a state can announce publicly that any attacks on its power plants will be met by retaliation, while privately setting a policy of non-retaliation to avoid escalation of conflict. The public stance provides a measure of deterrence and permits other states to provide feedback on whether they consider the norm acceptable. On the other hand, a state that often behaves inconsistently with its public stance will undermine the deterrent effect of its public norms. For instance, when states proclaim that they will not negotiate with hostage-takers but then do negotiate, their actions undermine their credibility in future hostage situations.

 

Another metanorm is the degree to which applications of norms depend on context. If a state is influenced by political pressures, it may counterattack more harshly against certain countries than others in response to the same provocation. States may also be more inclined to attack certain targets than others, as seen, for instance, in North Korea's eagerness to attack sources of any criticism of its leaders. The laws of war do not permit many exceptions based on such external considerations.

 

Related to this, a metanorm is the consistency with which a state applies a norm even when context is taken into account. A state that exhibits inconsistent and unpredictable norms will find it difficult to conduct diplomacy. Laws are intended to enforce predictability, but norms have no such mechanism, though international outcry or retaliation can provide some feedback when norms are applied inconsistently. However, two-person adversarial game theory suggests advantages in randomising norms as a "mixed strategy": unpredictability makes it harder for an adversary to counter a state's actions (Harrington 2014).
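
As a purely illustrative sketch of this mixed-strategy idea (the payoff numbers, posture names, and two-option game below are hypothetical assumptions, not drawn from any real policy), the following Python fragment computes the randomising probability that leaves an adversary indifferent between its options in a simple 2x2 zero-sum game and then samples a response posture from it:

    import random

    # Hypothetical defender payoffs (rows = defender postures, columns = attacker actions).
    # Higher numbers are better for the defender; the attacker is assumed to minimise them.
    PAYOFF = [
        [3.0, -1.0],   # posture 'harsh public response' against attacker actions A and B
        [-2.0, 2.0],   # posture 'quiet containment' against attacker actions A and B
    ]

    def indifference_mix(m):
        """Probability of playing row 0 that leaves the attacker indifferent
        between its two actions in a 2x2 zero-sum game."""
        a, b = m[0]
        c, d = m[1]
        denom = (a - b) + (d - c)
        if denom == 0:
            return 0.5                    # degenerate game: any mixture works
        p = (d - c) / denom
        return min(max(p, 0.0), 1.0)      # clamp when a pure strategy dominates

    def choose_posture(m, rng=random.SystemRandom()):
        """Sample a response posture according to the mixed strategy."""
        if rng.random() < indifference_mix(m):
            return 'harsh public response'
        return 'quiet containment'

    print('P(harsh public response) =', indifference_mix(PAYOFF))   # 0.5 for these numbers
    print('sampled posture:', choose_posture(PAYOFF))

The point of the sketch is only that a declared norm can be randomised in a principled, reproducible way; the same indifference calculation generalises to larger games.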

 

Another metanorm is the degree to which a state conducts its cyber operations fairly. Will a state conduct cyberattacks against countries that cannot defend themselves adequately because they lack technical knowledge and resources? Will the state allow its attack targets a fair opportunity to use their own exploits to conduct cyberattacks in return? Otherwise the state could be institutionalising cowardice.

 

 

A.     What metanorms does a state use for managing other norms?

Does the state publicise its norms?

Are norms context-dependent?

Are norms randomised?

Do methods permit counterattack? (O)

Does the state want other countries to use its norms as well?

Which norms require reciprocity?

B.     What role does cyberconflict play in a national strategy?

Does the state do cyber operations at all? (O)

Does it think cyber-operation capabilities deter aggression? (O)

Is it willing to risk costly counterattacks from cyberconflict? (O)

Does it allow cyberconflict to entail perfidy? (O)

C.     When and how does a state conduct low-level cyber operations?

Will the state conduct cyber-espionage?

Will it conduct cyber coercion? (O)

What targets will it consider? (T, O)

How will it reduce the danger of escalation? (T, O)

How willing is it to share information about vulnerabilities it discovers in cyberspace, both publicly and within its government? (D)

Can entities other than governments do cyber operations? Will the government police them? (O)

D.    When does a state use cyberattack as an instrument of national policy?

What level of damage over what time period warrants a state's response? (D)

How much certainty in the attribution of cyberattacks does it require before it counterattacks? (T, D)

How does it rate the importance of attack targets? (O)

Can a non-cyberattack on a state entail a cyberattack response? (O)

Can a cyberattack on a state entail a non-cyber response? (O)

To what extent will it attack dual-use (jointly military and civilian) targets? (T, O)

Which of a state's counter-cyberattacks will be automated responses? (O, D)

 

Table 1: Key cyberconflict policy norms, part 1 (T = covered in (Schmitt 2017); O = applies primarily to offensive cyberconflict; D = applies primarily to defensive cyberconflict)

 

E.     How does a state conduct cyberconflict when it must?

Can attacks anticipate a conflict? (T, O)

To what extent will a state intervene in the internal affairs of another state with its attack? (T, O)

Will a state purchase cyberweapons commercially? (O)

To what extent will it acknowledge its cyberattacks? (O)

How will it try to limit escalation of counterattacks? (T, O)

Will it exploit neutral states as stepping stones and how? (T, O)

How much overkill will it design in its cyberweapons? (T, O)

Does it avoid attacking certain kinds of targets? (O)

What will it do if it hurts civilians significantly? (T, O)

Will it camouflage its cyber resources and how? (D)

F.     How does a state behave after cyberconflict?

    Will a state assist other states in attributing and analysing cyberattacks on them? How much is it willing to reveal about its own attribution methods in doing so? (D)

    Does a state acknowledge an obligation to assist in repair of cyber damage it has caused? (T, O)

    Will it do things to make it easier to repair its cyber damage? (O)

    Will it criminally prosecute cyberattacks short of war against it? (D)

    Does it seek international agreements to limit the spread of malware and cyberweapons? (D)

 

Table 2: Key cyberconflict policy norms, part 2 (T = covered in (Schmitt 2017); O = applies primarily to offensive cyberconflict; D = applies primarily to defensive cyberconflict)

 

Another metanorm is the degree to which a state's norms respect international standards. For instance, North Korea has little respect for norms against mounting unprovoked cyberattacks on other states. States whose norms respect international standards will find more support and aid from the international community.

 

Finally, a metanorm is the degree to which norms are based on reciprocity, that is, on adversaries following similar norms (Thomas 2001). This represents the degree to which norms have a primarily external rather than internal origin. Internally derived norms are more likely to remain consistent as international situations change.

 

Norms about Using Cyber Operations Instead of Other Methods

Next it is necessary to consider specific norms. A first issue is whether a state should conduct cyberwarfare and cyber operations at all (Table 1, box B). Interstate warfare has been decreasing over the last 50 years because it has rarely achieved anything (Regehr 2015). Furthermore, warfare has been conducted for centuries without cyber operations, so there is no military necessity for them. International agreements have effectively banned chemical and biological weapons, and nuclear weapons have been all but banned by a taboo against their use (Sauer 2016). So, it stands to reason that a state could adopt a norm against the use of cyberweapons and cyber operations. Cyberweapons are costly to develop (Rid & Arquilla 2012), unreliable (Rowe 2016), and can easily hurt civilians indiscriminately (Rowe 2016). Certainly, there is a fad for them right now, but nations do not need to follow fads. Today every state needs a thorough cyber-defence program, but this program need not be coupled with cyberattack development. 'The best defence is a good offense' is a maxim of tactics, not strategy.

 

Another reason for adopting a norm forswearing cyber operations is that doing so is consistent with the Geneva Conventions on the issue of perfidy (Rowe 2013). Nearly all cyberattacks involve impersonating a neutral party, such as an operating system or software. Impersonation of neutral parties is perfidy and is outlawed by the Conventions because it erodes trust in neutral agents essential to warfare, such as the Red Cross, not because it necessarily causes them bodily harm. Operating systems and software are the neutral agents in cyberspace upon which everything depends.

 

Another reason is that, for specific adversaries, a state can be more vulnerable in cyberspace than its adversaries. The United States and Europe have such extensive cyberinfrastructure that it would be foolish for them to attack a country such as North Korea with few cyber assets, because large cyber assets are not necessary for mounting damaging cyber counterattacks (cyberattack software can be purchased). Finally, cyber operations are like drone warfare in that they can be waged at a distance without risking the attacker's health. Peace advocates have argued that nations that exhibit cowardice cannot claim to be role models for the rest of the world.

 

The difficulty of deterrence with cyberweapons

Perhaps the best reason for forswearing cyber operations is that cyberweapons rarely deter in cyberspace (Cooper 2012); therefore, there is little justification for assembling arsenals of them. The large nuclear arsenal of the United States, of which all countries are aware, serves as a deterrent against nuclear attack because attackers would be sure of retaliation in kind. The United States also has formidable cyberweapons capabilities, but this has had little deterrent effect on Chinese and Russian cyberattacks against it, though some have argued it may deter other states from cyberattacks (Elliot 2011). Deterrence only works when a threat is clear and credible (Quinlan 2006), and using cyberweapons and cyber operations as deterrents has multiple problems. First, cyberweapons are not visible objects, and providing details of an attack capability in an attempt to deter an adversary makes it easier for that adversary to block the attack, since revealing a capability almost necessarily reveals the vulnerabilities it exploits (Libicki 2013). Second, attribution of an attack is essential for a counterattack, but attribution is difficult in cyberspace (Rowe 2015). In addition, repeatability of a counterattack is also essential to deterrence, but cyberattacks are generally one-shot weapons since they are usually based on flaws that will be fixed quickly once demonstrated (Libicki 2009). Moreover, even if a cyberweapon is never used, it can be perishable, since fixes may independently be found for the flaws it exploits. Finally, the number of cyberweapons a state possesses does not correlate with their combined effectiveness, since cyberweapons differ considerably in effectiveness, and much about their effectiveness against specific targets is unknown until they are used because it is hard to simulate an adversary's computer systems exactly. Matters are quite different with nuclear weapons, since there are only a few ways they can work, they are not especially perishable, and more nuclear weapons mean more potential attack targets.

 

Deterrence is possible in cyberspace, but the best methods are defensive, not offensive (Radunovic 2013). A site with an effective cyber defence can be tested by an adversary and shown to be effective, suggesting to the adversary that attacking it is not cost-effective even if an attack could eventually succeed. This is one reason the Chinese expend so much more effort attacking American civilian sites than U.S. military sites; the U.S. military has better defences. Another good deterrent is to deploy large numbers of honeypots (decoy targets) that attackers eventually recognise, so that attackers know they will have a low success rate in reaching real targets.

 

Norms for Low-Level Cyberconflict

Attacks by nation-states not rising to the level of an armed attack (low-level cyberconflict) have increasingly occurred in recent years. Espionage is often considered part of the normal business of governments, but it can be destabilising; this is especially true for cyber espionage, and a state may formulate a norm forswearing it. More serious forms of cyberconflict usually involve 'cyber coercion', threatening or demonstrating a capability to cause cyber harm (Flemming & Rowe 2015). Nations differ considerably on what they consider acceptable norms for cyber coercion and responding to it (Table 1, box C, above). Using cyber operations to interfere with the production of nuclear weapons by a country such as North Korea (as North Korea alleges against the United States) can be considered reasonable, but interfering with a power plant (as Ukraine alleged of Russia in 2015) or an election (as the United States alleged of Russia in 2016) is a serious provocation. So, a complicated set of norms for justifying cyber operations is necessary. These norms will depend on the seriousness of the threat, the type of the threat, and the level of attribution of the threat. Even though low-level attacks are easier to implement than all-out cyberwarfare, and thus tempting for many states, there is always a risk of escalation. Norms can thus limit the scope of cyberattacks to reduce the danger. Bear in mind that most cyberattacks qualify as cybercrime in most states, and there are international agreements on cybercrime (Maurer 2011), so cyberattacks can often be prosecuted as crimes in victim states (Grama 2010; International Court of Justice 2015).

 

All states have a responsibility to police their citizens. Some cyberattacks by citizens of a state can be attributed to their state because of connections between the citizens and the government (Schmitt 2017, rules 14-19); rule 17 says that acts "pursuant to the instructions of a state" are attributable to the state. But nations differ in the degree to which they police their citizens for particular kinds of crimes. For instance, bots (remotely controlled compromised computers) are hard to find; how much effort should a state be required to expend to find bots in its territory to stop or prevent attacks on other states? Norms could establish the required degree of effort. The United Nations (United Nations General Assembly 2015, p.7) says states should "prevent ICT (information and communications technology) practices that are acknowledged to be harmful or that may pose threats to international peace and security", but that could cover many things. Malware development by a country is likely included, but it might also include prohibiting Microsoft and Adobe products, since those have high rates of malware attack. Note that the cost of developing effective cyberweapons is becoming increasingly high as simple vulnerabilities get fixed, and cyberweapons development is increasingly beyond the capabilities of small groups of people such as terrorist cells. And although cyberweapons can be purchased, it takes skill to target them effectively, so effective major cyberattacks are very likely to be state-sponsored. But lesser attacks could have all kinds of attackers, from 'patriots' in a country (as reported by China and Russia) to victimised businesses (Lin, Allhoff & Abney 2014).

 

Norms can also apply to the handling of cyberattack intelligence. Since nearly all cyberattacks exploit errors and flaws in software design, and known errors and flaws get fixed quickly, secrecy about methods is important to the effectiveness of attacks. Thus, publishing information about newly discovered cyberattack methods is a good way to neutralise them, albeit not immediately (Nye 2015). Publishing is generally done after fixes are found and disseminated, so some deliberate delay to reduce further exploitation of the attack method is often a norm (Libicki 2012). Thus, a state needs norms for its dissemination policy, including its degree of sharing information with other states or even within its own government (Owens, Dam & Lin 2009). Fortunately, most errors and flaws in software can be quickly fixed or neutralised once identified, and automatic software updates provide quick dissemination, so most delays are only a few days. Steps in this direction of quick disclosure are already being taken by the U.S. CERT (Computer Emergency Readiness Team) and the U.S. Department of Homeland Security.

 

Norms can also be set or announced for commercial cyber espionage. All governments engage in some kind of espionage for government information, but, when a state like China subsidises commercial cyber espionage, it is restraining free trade in a manner that most states do not consider acceptable (Libicki 2012).

 

Norms about Going to War in Cyberspace (Jus ad Bellum)

Both Tallinn Manuals (Schmitt 2013; Schmitt 2017) devote a significant amount of space to the issue of when a cyberattack is legally justified.

 

Defining an armed attack

States now generally agree that a sufficiently damaging cyberattack can be an act of war (U.S. White House 2011; Schmitt 2017, rule 69). A contentious issue is what level of cyberattack constitutes an "armed attack" under the laws of war (Banks 2013) (Table 1, box D, above). Because of this dispute, norms for each state for starting cyberwar (jus ad bellum) are especially important. Since cyberattacks could cause many deaths in some circumstances, and most experts would agree this would represent an armed attack, states must define a threshold (Nguyen 2013). On the other hand, cyberattacks (such as phishing) by criminals are an unfortunate part of the normal activity of the Internet and would not represent an armed attack even if initiated by a state. Hence, thresholds must be defined in norms, varying with the nature of the target (United Nations General Assembly 2015) and the nature of the attack (Fidler 2012). Critical infrastructure demanding lower thresholds includes utilities, banks, medical facilities, food production and delivery, as well as cyberinfrastructure such as networking and cyber emergency response teams.

 

A problem with identifying cyberattacks as 'armed attacks' is that they often consist of multiple steps over a long period. Typically, access is gained to a system; malicious software is installed; the software gains control of the system; and then it damages something. This process may take years. At what point could a state counterattack in response to discovery of a cyberattack? The traditional laws of war say not until damage has occurred, although exceptions are allowed in crises. That is because a potential attack may be neutralised before it occurs, in which case a counterattack is unjustified. This is certainly possible for cyberattacks, as alert system administrators could remove the attack from the systems it has infected before it is ever used. On the other hand, the attack code may use a new technique that is difficult for a state to understand and remove in time, in which case a counterattack is the only reasonable response. Therefore, norms for counterattacks need to address potential attacks as well as actual attacks.

 

Another problem with multiple attacks is that no single attack may rise to the threshold of an armed attack, but the cumulative effect does. For instance, North Korea has engaged in repeated disruptive cyberattacks on South Korea whose serious cumulative effects on utilities and banks amount to an armed attack (Geers et al. 2013). Another example could be an attack that repeatedly diverts funds from a bank account to an adversary over many years, where the individual transfers are small but the total amount is large. Thus, states need to specify norms of response for both individual attacks and groups of attacks.
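
As a toy illustration of how such norms might be operationalised (the target categories, thresholds, rolling window, and incident figures below are hypothetical assumptions, not proposed values), the following Python sketch accumulates reported damage per target category and flags when either a single incident or the cumulative total crosses a declared threshold:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical per-category thresholds, in arbitrary 'damage units'.
    SINGLE_INCIDENT_THRESHOLD = {'bank': 50, 'utility': 30}
    CUMULATIVE_THRESHOLD      = {'bank': 100, 'utility': 80}
    WINDOW = timedelta(days=365)                          # rolling window for cumulative effects

    class NormTracker:
        """Tracks reported incidents and tests them against declared thresholds."""
        def __init__(self):
            self.incidents = defaultdict(list)            # category -> [(time, damage), ...]

        def report(self, category, damage, when):
            self.incidents[category].append((when, damage))
            if damage >= SINGLE_INCIDENT_THRESHOLD[category]:
                return 'single-incident threshold crossed'
            total = sum(d for t, d in self.incidents[category] if t >= when - WINDOW)
            if total >= CUMULATIVE_THRESHOLD[category]:
                return 'cumulative threshold crossed'
            return 'below threshold'

    tracker = NormTracker()
    start = datetime(2017, 1, 1)
    for month in range(6):                                # six small bank incidents of 20 units each
        status = tracker.report('bank', 20, start + timedelta(days=30 * month))
    print(status)                                         # 'cumulative threshold crossed'

No single 20-unit incident reaches the 50-unit single-incident threshold, but the rolling total crosses the 100-unit cumulative threshold, which is exactly the pattern the North Korean and bank-diversion examples illustrate.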

 

Cyberattack targets

Schmitt (2017) provides several criteria for unacceptable targets of a cyberattack that are analogous to those in conventional warfare. Attacks are prohibited on civilians (rules 93-98), civilian objects including cyber infrastructure (rules 99-102), other kinds of infrastructure (rules 140-43), medical and religious personnel and infrastructure (rules 131-34), detained personnel (rules 135-37), children (rule 138), and journalists (rule 139). Attacks must not be indiscriminate (rules 105-06), cause unnecessary suffering (rules 104 and 107), or represent reprisals (rule 108). These criteria are more difficult to apply to the cyber domain, as they require some judgment which could be formalised as norms. States may wish to augment the list of unacceptable targets with those that raise political issues for them (Libicki, 2016).

 

A key problem offensively is identification of the target (Schmitt 2017, rule 115), since identities in cyberspace can be obscure; computer systems and files rarely identify clearly what they are. A key problem defensively is identification of who is attacking. This is critical for cyber counterattacks, since a nation cannot counterattack an unknown adversary. Norms can specify the degree of certainty of attribution of a target or attacker required before proceeding with an attack or a counterattack (Libicki 2012). Although attribution can be difficult for criminal cyberattacks, states are increasingly making their state-sponsored attacks easy to attribute to get a better political effect (Libicki 2016). Moreover, attacks are increasingly traceable with today's better network software; the large volume of traffic necessary for effective state-sponsored attacks can often betray them.

 

Counterattacks in cyberspace are intrinsically problematic because there is often no visible target under the adversary's control to counterattack. If an adversary in conventional warfare fires artillery at an opponent, that opponent can fire back at the spot or in the direction from which the artillery came. However, with cyberattacks, there is often only a vague notion of where the attack originated. Sites co-opted for the attack (as in denial-of-service attacks) may be found, but these could be sites in neutral countries, and attacking them would violate their neutrality (Schmitt 2017, rules 150-54). Total avoidance of neutral countries is difficult in cyberspace because of the ubiquitous automatic routing of data packets, but attacking neutral countries is still a clear violation of the laws of war. Encryption of attack packets can prevent malware attacks on intermediate sites, but may not prevent denial-of-service attacks on them since intermediate sites can be flooded with encrypted packets, too. Thus, norms for counterattack need to set high thresholds in attack severity, targets, and methods before response.

 

A useful norm for cyberconflict is some form of the traditional diplomatic policy of non-intervention, meaning that a state will not get involved in internal affairs of another state through cyber means (Schmitt 2017, rule 66). But there are exceptions in conventional warfare that can also apply to cyberwarfare, such as when the chaos in a state such as Syria spills across its borders and threatens other states. This example illustrates how norms could define the exceptions.

 

Mode of response to a cyberattack

It has been argued that, consistent with the Law of Armed Conflict, an attack in cyberspace that reaches the level of an armed attack should only entail a response in cyberspace, because a different kind of response could seem escalatory. But that is difficult to follow since cyberattacks generally require considerable lead time, and a much-delayed counterattack is hard to claim as self-defence (Schmitt 2017, rule 73). Non-cyber counterattacks in response to cyberattacks have been called 'spillover' (Maness & Valeriano 2016). Since cyberattacks vary widely in scale and effects, it would be important to make the spillover proportional. However, the broad, albeit mistaken, perception that cyberattacks are not serious attacks could mean that any spillover counterattack could be perceived as an escalation, so nations should probably set norms avoiding this kind of response. In reality, very little spillover of current cyberconflicts has been observed, even when there were ample opportunities (Maness & Valeriano 2016; Libicki 2016).

 

In the other direction, it might make more sense for a traditional attack to receive a cyber-counterattack (Nye 2015). Nonetheless, a 'no-first-use' norm for cyberattacks for a powerful country such as the United States makes sense, because it is so much more vulnerable in cyberspace than many of its adversaries, and a first use of cyberattack by the United States would encourage damaging cyber counterattacks against it.

 

The speed of cyberconflict is high, and attacks can appear and disappear in a fraction of a second. Critical network log data of attacks can disappear in a matter of hours. This suggests that effective counterattacks will not likely result from the normal chain of approvals for military operations; instead, counterattacks will need to be largely pre-planned and will need to be automated so they are ready to be used against likely adversaries. The degree of such pre-planning, and the degree to which it is publicised, can be norms that a state adopts for cyber operations.
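
A minimal sketch of what such machine-readable pre-planning might look like appears below (Python); the severity scale, attribution-confidence bands, and response options are invented for illustration and do not endorse any particular automated action:

    # Hypothetical pre-approved response rules: (minimum severity, minimum attribution
    # confidence, pre-approved action). Severity is a 0-10 damage assessment; attribution
    # confidence is the assessed probability that the attacker is correctly identified.
    RESPONSE_RULES = [
        (9, 0.95, 'escalate to national command authority'),
        (7, 0.90, 'launch pre-planned counter-cyber operation'),
        (4, 0.60, 'issue diplomatic protest and share evidence'),
        (0, 0.00, 'defensive hardening and monitoring only'),
    ]

    def preplanned_response(severity, attribution):
        """Return the first pre-approved action whose thresholds are both met."""
        for min_severity, min_attribution, action in RESPONSE_RULES:
            if severity >= min_severity and attribution >= min_attribution:
                return action
        return 'defensive hardening and monitoring only'

    # A severe attack with weak attribution does not trigger a counterattack under these rules.
    print(preplanned_response(severity=9, attribution=0.5))
    # -> defensive hardening and monitoring only

The design point is that the rules, not a human in the loop at attack time, encode the state's norms about how much certainty and severity are required before each level of automated response.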

 

Legal or diplomatic responses to any kind of attack are always acceptable and may be better responses than a cyber counterattack, particularly when the original attack is not severe or is primarily symbolic. States may adopt norms on this situation.

 

Purchase of cyberweapons

It can be cost-effective for a state to obtain cyberweapons from freelance hackers at various online exploit markets, where there is price competition and quick delivery, instead of developing its own cyberweapons (Stockton & Golabek-Goldman 2013). However, when states purchase cyberweapons, they are supporting the development of cyberattacks against everyone. Most of these cyberattacks could be used equally for cybercrime, which the purchasing state could be inclined to overlook because it needs to preserve a relationship with a supplier. Thus, states have a dilemma—develop their own cyberweapons at high cost and deter cybercrime, or buy cyberweapons at low cost and encourage cybercrime (Herzog & Schmid 2016). Each state will need to set norms for this issue. However, the United Nations has argued that states have a responsibility to share vulnerability and remedy information (United Nations General Assembly 2015), and most cyberweapons are ineffective unless vulnerabilities and remedies are kept secret. Such sharing will increase the cost of buying cyberweapons and will make state development of cyberweapons more desirable.

 

Norms about Conducting Warfare in Cyberspace (Jus in Bello)

The norms most discussed concern the conduct of ongoing cyberwarfare (Table 2, box E, above) since there are many issues that differ from conventional warfare.

 

Military necessity

Cyberattacks, like other forms of warfare, must satisfy a test of military necessity in international law (Schmitt 2017, rules 111-13). A cyberattack can disable broad infrastructure like a power grid, but this is rarely necessary; narrower targets will likely impede a state's military capabilities just as effectively. So a state needs norms regarding the scope of its cyberattacks. Military necessity also generally prohibits anticipatory cyberattacks, even though surprise is an important part of military operations; not everyone agrees with this prohibition (Lee 2014).

 

Recent concerns about terrorism have encouraged development of international law on that subject. Attacks whose primary purpose is to create terror (or "shock and awe") are outlawed and cannot be justified by any military necessity. This would appear to apply to cyberterrorism (Banks 2013) although the term has no consensus definition, so each state must establish norms for defining it.

 

Confining cyberattacks to military objectives can be difficult because there is no reliable way to identify an Internet site as military. A state may camouflage its military sites as civilian to hide them, or its civilian sites as military to provoke international outrage if they are cyberattacked. In addition, cyberattacks can be hard to control. Most cyberattacks are based on flaws in software, and those flaws can get fixed unexpectedly, rendering the attack useless. So most cyberweapons rely on multiple methods, which increases the risk of hitting unnecessary targets. 'Active defences' that try to disable attacking machinery by counterattacks have an increased chance of hitting unnecessary targets themselves, since cyberattacks are usually launched from well-concealed locations that cannot be easily inferred. Automated attacks and counterattacks can be difficult to stop, so they can continue causing unnecessary damage for a long time; viruses and worms do not check the international situation or whether a truce has been signed. Consequently, norms should specify the targeting algorithm, the acceptable number of methods used by a cyberattack, the aggressiveness of the cyberattack, and the degree to which a state will consider automated cyberattack (plus the method and process by which to stop an attack when necessary).
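
One way to see why bundling multiple attack methods raises the risk to unintended targets is a simple calculation under an independence assumption (the numbers are hypothetical): if each of k methods independently reaches an unintended target with probability p, the chance that at least one does is 1 - (1 - p)^k. A short Python illustration:

    # Hypothetical illustration: probability that at least one of k bundled attack methods
    # reaches an unintended target, assuming independence and per-method risk p.
    def collateral_risk(p, k):
        return 1 - (1 - p) ** k

    for k in (1, 3, 5):
        print(k, 'methods ->', round(collateral_risk(0.05, k), 3))
    # 1 methods -> 0.05
    # 3 methods -> 0.143
    # 5 methods -> 0.226

Even a modest per-method risk grows quickly as more methods are bundled, which is why a norm limiting the number of methods per cyberweapon directly limits collateral risk.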

 

Distinction of combatants from non-combatants

Particularly important norms about cyberconflict relate to the principle of distinction, that is, the distinguishing of combatants from non-combatants (Schmitt 2017, rules 94-102). Unfortunately, in cyberspace it is easy to hit civilians in a cyberattack (Kelsey 2008). Rowe (2016) enumerates nine major reasons: (1) the ubiquity of cyberspace, (2) temptations due to the relative ease of attacking civilian targets, (3) the preponderance of dual-use targets, (4) the ease of damaging the environment of the target by cyber methods, (5) the need to use civilians as intermediaries to reach targets, (6) the unreliability of cyberattack methods, which encourages overkill, (7) the possibilities of reusing cyberattack methods for cybercrime, (8) the frequent use of automated methods that have difficulty distinguishing targets when disseminating attacks, and (9) the frequent spoofing of identities in cyberattacks, which causes errant counterattacks.

 

A state can set several kinds of norms with respect to how it will target cyberattacks by considering the issues below.

 

   What kinds of dual-use targets will it attack? Is it willing to attack infrastructure such as Internet services and financial systems if they are significantly contributing to a military capability?

   What proportion of a dual-use target must be military for it to be considered attackable? This can vary with the military significance of the military portion. For example, cloud servers providing mostly civilian services and backup storage to the military are poor targets.

   What degree of certainty is necessary as to whether something is a legitimate military target before launching a cyberattack on it? Mistargeted attacks can appear as reprisals (Schmitt 2017, rule 109).

   To what degree will cyberattackers respect neutrality of states (Kelsey 2008)? Will they use neutral countries as intermediate steps or staging areas (as with botnets) in the attack? Will they use techniques such as encryption to reduce possible damage to neutral states? Schmitt (2017, rules 150-52) appears to prohibit using neutral states in cyberattacks, but this is almost a necessity for attackers in a networked world.

   To what degree will a neutral state in a cyberconflict try to find cyberattacks of that conflict passing through its cyberspace? To what extent will it cooperate with belligerents to stop attacks and counterattacks passing through?

   Will psychological and physiological damage to non-combatants from a cyberattack, such as post-traumatic stress disorder and increased anxiety, be considered in targeting (Canetti, Gross & Waismel-Manor 2016)?

   What limits on side effects such as degradation of cyberspace and even the physical environment due to the attack are permissible? Attacks on utilities can cause widespread environmental effects.

   Will a state camouflage its cyber resources? Certain kinds of camouflage, such as making a non-military site look like a military site, could invite violations of the laws of war (Libicki 2016).

   To what extent will camouflage of a state's attacks be used to complicate attribution? There are political advantages to acknowledging attacks.

   To what extent will attacks impersonate civilian entities (risking perfidy)? Software technology permits new kinds of perfidy, such as changing the targeting of an adversary's weapons systems to target hospitals. New methods of camouflage are increasingly sophisticated in making militaries look like civilians (Al-Rodhan 2015).

   How much overkill will be acceptable when the unreliability of cyberweapons requires many methods simultaneously, and too many of them work?

   Will the attacks use automated methods, and how will they be controlled? The Stuxnet attack on Iran propagated autonomously onto many machines having nothing to do with Siemens' process control; effort was required to analyse it and remove it. Autonomous propagation is one of many attack options, so it should be a last resort.

   How can the chances of the attack being reused for criminal attacks on civilians be reduced? Stuxnet was precise once it got onto Siemens' hardware. However, it used general-purpose attack methods to disseminate across civilian sites, and those methods were reused by criminal organizations later (Kaplan 2011). Attacks can be engineered to check for military hardware and software or to check if they are on known military sites, but these methods are imperfect.

 

Norms after Cyberconflict

It is also important to consider norms for jus post bellum, after cyberconflict has ceased (Orend 2016) (Table 2, box F, above). Lack of consideration of post-conflict issues was a serious problem in the American presence in Iraq in 2003-2010, and recent international conventions on land mines impose removal requirements that extend long after conflict has ceased. Relevant norms specific to cyberconflict include the following:

 

·    Attack tracing

Victim states may need help in tracing cyberattacks on them and determining what has been damaged and by whom. The United Nations says that states have an obligation to assist other states that are victims of cyberattacks (United Nations General Assembly 2015).

·   Damage repair assistance

Assistance may include help with repairs after cyberattacks, particularly for victim states that do not have much technological sophistication.

·   Reparability

Some kinds of cyberattack damage can be relatively easy to fix (Rowe 2010). An example is an attack that encrypts data, since decryption can restore the original data. A state can specify this kind of attack as a norm, as a more humane form of cyberconflict, and also as a way to reduce the costs for which it may be liable in restoring services after a conflict.

·    Attribution

Cyberattacks can be acknowledged by the attacking state when it is to its political advantage. Attribution can be proved by having the attacking state embed steganographic (concealed) information in its attack data or programs that can be revealed when the attacker wishes, so that no other state can take credit (Rowe 2015); a minimal code sketch of this idea follows this list. Attribution to a state assigns responsibility for the damage. Attribution to a narrower organisation may be important when an attack goes awry, since criminal proceedings may then be appropriate. However, a convincing attribution may risk revealing intelligence information, as when the United States claimed that North Korea was responsible for cyberattacks on Sony Corporation in 2014 but said it could not explain how it knew. Norms are thus needed regarding how much a state is willing to document its attributions.

·    Removal of cyberaggression tools

Combatants may agree to be stripped of their cyberweapons and other cyberaggression tools as part of an agreement for cessation of hostilities (Orend 2016).

·    Reparations

Beyond immediate assistance in remediating the effects of an attack, international legal authority may authorise reparations for unjustified cyberattacks. Reparations are increasingly recognised in international law (Evans 2012; Schmitt 2017, rules 28-29).

·    Prosecution

Cyberattacks short of war can be criminally prosecuted (Owens, Dam & Lin 2009), either using the law of the victim country or the Convention on Cybercrime.
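
As a minimal sketch of the attribution idea above, and not the specific method of Rowe (2015), a state could embed a keyed marker in its attack data at the time of the operation and later reveal the key to prove authorship. The Python fragment below uses a standard HMAC from the standard library; the key, payload, and names are purely hypothetical, and the sketch glosses over how the tag would actually be concealed:

    import hashlib, hmac, os

    # At attack time: the state keeps SECRET_KEY private and conceals only the tag
    # (steganographically) inside the attack data or program.
    SECRET_KEY = os.urandom(32)                     # hypothetical state-held secret
    payload = b'attack data or program bytes ...'   # placeholder, not a real payload

    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

    # Later, if the state chooses to claim the attack, it reveals SECRET_KEY; anyone can then
    # verify that the concealed tag matches, and no other state could have produced it.
    def verify(claimed_key, data, embedded_tag):
        expected = hmac.new(claimed_key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, embedded_tag)

    print(verify(SECRET_KEY, payload, tag))         # True: the claiming state holds the key
    print(verify(os.urandom(32), payload, tag))     # False: another state cannot take credit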

 

Conclusions

The formulation of explicit norms by the governments of states is important for the new challenge of cyberconflict because of the many ethical choices required and the lack of international laws. Announced and respected norms reduce uncertainty about the consequences of cyberconflict and should eventually reduce international tensions. Although the Tallinn Manual 2.0 (Schmitt 2017) is a step in the right direction, this article has shown that many additional norms for cyberconflict need to be formulated by a government, preferably in written policy, to provide consistency. Many norms listed here have a major impact on the conduct of cyberconflict. Furthermore, many of these norms do not have good analogies in conventional warfare and will require some thinking to articulate. To be sure, significant work lies ahead for states.

 

Acknowledgements

This work was supported by the U.S. National Science Foundation under the Secure and Trustworthy Cyberspace program. Some ideas came from Patrick Lin, Fritz Allhoff, and Bradley Strawser. The views expressed are those of the author and do not represent the views of the U.S. Government.

 

References

Al-Rodhan, N 2015, 'Future wars: Reshaping the ethics and norms of war', The Wilson Quarterly, Summer, viewed 31 December 2017, <https://www.wilsonquarterly.com/quarterly/summer-2015-an-age-of-connectivity/future-wars-reshaping-the-ethics-and-norms-of-war/>.

 

Banks, W 2013, 'The role of counterterrorism law in shaping ad bellum norms for cyber warfare', International Law Studies of the U.S. Naval War College, vol. 89, pp. 157-97.

 

Canetti, D, Gross, M & Waismel-Manor, I 2016, 'Immune from cyberfire: The psychological and physiological effects of cyberwarfare', Binary bullets: The ethics of cyberwarfare, eds. F Allhoff, A Henschke & B Strawser, Oxford University Press, Oxford, UK, pp. 157-76.

 

Carr, J 2011, Inside cyberwarfare, 2nd edition, O'Reilly Media, Sebastopol, CA, US.

 

Cooper, J 2012, 'A new framework for cyber deterrence', Cyberspace & National Security, ed. D Reveron, Georgetown University Press, Washington, DC, US, pp. 105-20.

 

Crowley, M & Gerstein, J 2014, 'No rules for cyberwar', Politico, 12 December, viewed 10 October 2016, <http://www.politico.com/story/2014/12/no-rules-of-cyber-war-113785>.

 

Dipert, R 2010, 'The ethics of cyberwarfare', Journal of Military Ethics, vol. 9, no. 4, pp. 384-410.

 

Elliot, D 2011, 'Deterring strategic cyberattack', IEEE Security and Privacy, vol. 9, September/October, pp. 36-40.

 

Evans, C 2012, The right to reparation in international law for victims of armed conflict, Cambridge University Press, Cambridge, UK.

 

Fidler, D 2012, 'Inter arma silent leges redux? The law of armed conflict and cyber conflict', Cyberspace and National Security, ed. D Reveron, Georgetown University Press, Washington, DC, US, pp. 71-87.

 

Flemming, D & Rowe, N 2015, 'Cyber coercion: Cyber operations short of cyberwar', Proceedings of the 10th international conference on Cyberwarfare and Security, ICCWS-2015, March, pp. 95-101.

 

Geers, K, Kindlund, D, Moran, N & Rachwald, R 2013, 'World War C: Understanding nation-state motives behind today's advanced cyberattacks', viewed 16 February 2015, <http://www.fireeye.com/resources/pdfs/fireeye-wwc-report.pdf>.

 

Grama, J 2010, Legal issues in information security, Jones and Bartlett Learning, Sudbury, MA, US.

 

Harrington, J 2014, Games, strategies, and decision making, 2nd edition, Worth Publishing, New York, NY, US.

 

Herzog, M & Schmid, J 2016, 'Who pays for zero-days? Balancing long-term stability in cyberspace against short-term national security benefits', Conflict in cyber space: Theoretical, strategic, and legal perspectives, eds. K Friis & J Ringsmose, Routledge, Abingdon, UK.

 

International Committee of the Red Cross 2012, International humanitarian law: answers to your questions, viewed 11 July 2015, <http://www.redcross.org/images/MEDIA_CustomProductCatalog/m22303661_IHL-FAQ.pdf>.

International Court of Justice 2015, Basic documents: statute of the court, viewed 7 September 2015, <http://www.icj-cij.org/documents/?p1=4&p2=2>.

 

Kaplan, D 2011, 'New malware appears carrying Stuxnet code', SC Magazine, 18 October, viewed 1 August 2012, <http://www.scmagazine.com/new-malware-appears-carrying-stuxnet-code/article/214707>.

 

Kelsey, J 2008, 'Hacking into international humanitarian law: The principles of distinction and neutrality in the age of cyber warfare', Michigan Law Review, vol. 106, no. 7, pp. 1427-51.

 

Lee, S 2014, 'The ethics of cyberattack', Ethics of Information Warfare, eds. L Floridi & M Taddeo, Springer, New York, NY, US, pp. 105-22.

 

Leetaru, K 2017, 'What Tallinn Manual 2.0 teaches us about the new cyber order', Forbes, viewed 18 December 2017, <https://www.forbes.com/sites/kalevleetaru/2017/02/09/what-tallinn-manual-2-0-teaches-us-about-the-new-cyber-order/#63d2fac2928b>.

 

Libicki, M 2009, Cyberdeterrence and cyberwar, RAND Corporation, Santa Monica, CA, US.

 

——2012, Crisis and escalation in cyberspace, RAND Corporation, Santa Monica, CA, US.

 

——2013, Brandishing cyberattack capabilities, Rand Corporation, viewed 18 March 2015, <http://www.rand.org/content/dam/rand/pubs/research_reports/RR100/RR175/RAND_RR175.pdf>.

 

——2016, Cyberspace in peace and war, Naval Institute Press, Annapolis, MD, US.

 

Lin, P, Allhoff, F & Abney, K 2014, 'Is warfare the right frame for the cyber debate?', Ethics of information warfare, eds. L Floridi & M Taddeo, Springer, New York, NY, US, pp. 39-59.

 

Lucas, G 2016, 'Emerging norms for cyberwarfare', Binary bullets: The ethics of cyberwarfare, eds. F Allhoff, A Henschke & B Strawser, Oxford University Press, Oxford, UK, pp. 13-33.

 

Maness, R & Valeriano, B 2016, 'Cyber spillover conflicts: Transitions from cyber conflict to conventional foreign policy disputes', Conflict in cyber space: Theoretical, strategic, and legal perspectives, eds. K Friis & J Ringsmose, Routledge, Abingdon, UK.

 

Mazanec, B 2016, 'Constraining norms for cyber warfare are unlikely', Georgetown Journal of International Affairs, vol. 17, no. 3, pp. 100-09.

 

Maurer, T 2011, 'Cyber norm emergence at the United Nations–An analysis of the UN's activities regarding cyber-security', Discussion Paper 2011-11, Belfer Center for Science and International Affairs, Harvard Kennedy School, Harvard University, Cambridge, MA, US.

 

Morgus, R 2016, 'Rules of cyber engagement', Slate, 10 March, viewed 20 October 2016, <http://www.slate.com/articles/technology/future_tense/2016/03/the_fuzzy_international_rules_for_war_in_cyberspace.html>.

 

Nguyen, R 2013, 'Navigating jus ad bellum in the age of cyber warfare', California Law Review, vol. 101, no. 4, pp. 1079-129.

 

Nye, J 2015, 'The world needs new norms on cyberwarfare', Washington Post, October 1, Opinion section, viewed 18 March 2017, <https://www.washingtonpost.com/opinions/the-world-needs-an-arms-control-treaty-for-cybersecurity/2015/10/01/20c3e970-66dd-11e5-9223-70cb36460919_story.html?utm_term=.e94f5ab1130d>.

 

——2016/2017, 'Deterrence and dissuasion in cyberspace', International Security, vol. 41, no. 3, pp. 44-71.

 

Orend, B 2016, 'Postcyber: Dealing with the aftermath of cyberattacks', Binary bullets: The ethics of cyberwarfare, eds. F Allhoff, A Henschke & B Strawser, Oxford University Press, Oxford, UK, pp. 115-35.

 

Owens, W, Dam, K & Lin, H (eds.) 2009, Technology, policy, law, and ethics regarding U.S. acquisition and use of cyberattack capabilities, The National Academies Press, Washington, DC, US.

 

Quinlan, M 2006, 'Deterrence and deterrability', Deterrence and the new global security environment, eds. IR Kenyon & J Simpson, Taylor and Francis, Oxon, UK.

 

Radunovic, V 2013, 'DDoS – Available weapon of mass disruption', Proceedings of the 2013 21st Telecommunications Forum (TELFOR), Geneva, CH.

 

Regehr, E 2015, Disarming conflict, Zed Books, London, UK.

 

Robinson, M, Jones, K & Janicke, H 2015, 'Cyberwarfare: Issues and challenges', Computers and Security, vol. 49, pp. 70-94.

 

Rid, T & Arquilla, J 2012, 'Think again: Cyberwar', Foreign Policy, vol. 192, March/April, pp. 80-84.

 

Rowe, N 2010, 'Towards reversible cyberattacks', Proceedings of the 9th European conference on Information Warfare and Security, Thessaloniki, GR.

 

——2013, 'Friend or foe? Perfidy in cyberwarfare', The Routledge handbook of ethics and war: Just war theory in the twenty-first century, eds. F Allhoff, N Evans & A Henschke, Routledge, Oxon, UK, pp. 394-404.

 

——2015, 'Attribution of cyberwarfare', Cyber warfare: a multidisciplinary analysis, ed. J Green, Routledge, Oxon, UK, pp. 61-72.

 

——2016, 'Challenges of civilian distinction in cyberwarfare', Ethics and policies for cyber warfare: A NATO Cooperative Cyber Defence Centre of Excellence initiative, Philosophical Studies Series, vol. 124, eds. M Taddeo & L Glorioso, Springer, New York, NY, US, pp. 33-48.

 

Sauer, F 2016, Atomic anxiety: Deterrence, taboo, and the non-use of U.S. nuclear weapons, Palgrave Macmillan UK, Houndmills, Basingstoke, UK.

 

Schmitt, M (ed.) 2013, The Tallinn manual on the international law applicable to cyber warfare, Cambridge University Press, Cambridge, UK.

 

——2017, The Tallinn manual on the international law applicable to cyber operations, Cambridge University Press, Cambridge, UK.

 

Schmitt, M & Vihul, L 2014, 'The nature of international law cyber norms', Tallinn Paper No. 5, NATO Cooperative Cyber Defence Centre of Excellence, Tallinn, EE.

 

Singer, P & Friedman, A 2014, Cybersecurity and cyberwar: What everyone needs to know, Oxford University Press, New York, NY, US.

 

Stevens, T 2012, 'A cyberwar of ideas? Deterrence and norms in cyberspace', Contemporary Security Policy, vol. 33, no. 1, pp. 148-70.

 

Stockton, P & Golabek-Goldman, M 2013, 'Curbing the market for cyber weapons', Yale Law and Policy Review, vol. 32, no. 1, pp. 239-66.

 

Thomas, W 2001, The ethics of destruction: Norms and force in international relations, Cornell University Press, Ithaca, NY, US.

 

United Nations General Assembly 2015, Report of the group of governmental experts on developments in the field of information and telecommunications in the context of information security, A/70/174, viewed 2 January 2018, <http://undocs.org/A/70/174>.

 

U.S. White House 2011, International strategy for cyberspace: prosperity, security, and openness in a networked world, May, viewed 8 December 2016, <http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf>.