OFFENSE-IN-DEPTH: AN ANALYSIS OF CYBER COERCION (excerpted)
Daniel R. Flemming
Naval Postgraduate School
Monterey, CA 93943–5000
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
The U.S. military needs to find alternative ways to exert influence abroad because of the current war-fatigued and budget-constrained environment. This paper explores whether cyber coercion should be considered. Cyber coercion can give the U.S. the ability to use phased aggression to compel adversaries to the bargaining table and avoid unnecessary escalation while expending fewer resources than alternative methods. With cyber as the instrument, users can flexibly tailor an attack with varying levels of force, attribution, and reversibility, depending on the situation. This paper first surveys the space of possible cyber coercion methods. Then, through a hypothetical scenario involving the U.S. and China, it examines specifics of how cyber coercion could be used. It finds that while there are major risks of abuse, and a smaller-than-expected window of applicability, the benefits of responsible use make cyber coercion worth adopting as part of a larger strategic framework.
Subject terms: cyber attack, coercion, influence, attribution, escalation, reversibility, morality, use of force
OFFENSE-IN-DEPTH: AN ANALYSIS OF CYBER COERCION (excerpted)
Daniel R. Flemming
Lieutenant, United States Navy
B.S., United States Naval Academy, 2007
Submitted in partial fulfillment of the
requirements for the degree of
MASTER OF SCIENCE IN CYBER SYSTEMS AND OPERATIONS
NAVAL POSTGRADUATE SCHOOL
Author: Daniel R. Flemming
Approved by: Neil C. Rowe
Wade L. Huntley
Cynthia E. Irvine
Chair, Cyber Academic Group
TABLE OF CONTENTS
I. Introduction
   A. Background and Justification
   B. Purpose
   C. Offense-in-Depth Defined
   D. Department of Defense Applicability
   E. Outline
II. Coercion
   A. Coercion Defined
   B. Traditional Instruments
   C. Deterrence Defined
   D. Compellence Defined
   E. Role of Force in Cyber Coercion
   F. Actors in Cyber Coercion
   G. Additional Variables for Cyber Coercion
      1. Costs and Investments
      2. Legality and Morality
      3. Reversibility
      4. Attribution
      5. Escalation
   H. Counterargument and Response to Cyber Coercion
   I. Conclusion
III. Methods of Cyber Coercion
   A. Anatomy of a Hack
   B. Intelligence Collection
   C. Cyber Attack Methods
      1. Software Vulnerabilities
      2. Hardware Vulnerabilities
      3. Human Vulnerabilities
   D. Targeting for Coercion
      1. Outcome
      2. Goals
   E. Conclusion
IV. Scenario
   A. Introduction
   B. Scenario Background
      1. Chinese
      2. Recent Events
   C. Scenario Events
      1. Targeting
      2. Application of Force
   D. Analysis
V. Conclusion
List of References
Initial Distribution List
LIST OF FIGURES
Figure 1. “Notional Operation Plan Phases” (from DOD, 2011b, p. III-39)
Figure 2. “The Coercion Dynamic and Policy” (from Hare, 2012, p. 131)
Figure 3. Conflict Spectrum (from Department of the Army, 2008, p. 2–1)
Figure 4. “U.S. Cyber Command Annual Budget” (from Fung, 2014)
Figure 5. “Anatomy of a Hack” (from McClure et al., 2012)
Figure 6. Coercion Framework (from Byman & Waxman, 2002, p. 27)
Figure 7. Incursion and Deception (from SSG XXXI, 2013, p. 3-16)
Figure 8. “Possible Pathways into the Control System” (from Byres, 2012, p. 2)
LIST OF TABLES
Table 1. Types of Threats (from Bratton, 2005, p. 100)
Table 2. Role of Force (from Bratton, 2005, p. 103)
Table 3. Actors (from Bratton, 2005, p. 108)
Table 4. Definition of Success (from Bratton, 2005, p. 111)
LIST OF ACRONYMS AND ABBREVIATIONS
ADIZ air-defense identification zone
AESA active electronically scanned array
ASEAN Association of Southeast Asian Nations
C2 command and control
C4ISR command, control, communications, computers, intelligence, surveillance, and reconnaissance
CNA computer network attack
CNO computer network operations
CNO Chief of Naval Operations
DDoS distributed denial-of-service
DNS domain name system
DOD Department of Defense
IADS integrated air-defense system
IP Internet protocol
IW information warfare
JOAC Joint Operational Access Concept
LOAC Law of Armed Conflict
MCS machinery-control systems
NATO North Atlantic Treaty Organization
NSA National Security Agency
OS operating system
PLA People’s Liberation Army
PLC programmable logic controller
SSG Strategic Studies Group
UAV unmanned aerial vehicle
UUV unmanned underwater vehicle
USB universal serial bus
USV unmanned surface vehicle
I would like to thank my wife, Jenny, for the love and support she has shown during my time at NPS. Furthermore, I would like to thank my advisor, Dr. Rowe, for his useful comments, remarks, and engagement throughout the thesis learning process. I would also like to thank Dr. Huntley for his assistance and guidance with this paper.
You can play the stock market on-line. You can apply for a job on-line. You can shop for lingerie on-line. You can work on-line. You can learn on-line. You can borrow money on-line. You can engage in sexual activity on-line. You can barter on-line. You can buy and sell real estate on-line. You can purchase plane tickets on-line. You can gamble on-line. You can find long-lost friends on-line. You can be informed, enlightened, and entertained on-line. You can order pizza online. You can do your banking on-line. In some places, you can even vote online.
You can perform financial fraud on-line. You can steal secrets on-line. You can blackmail and extort on-line. You can trespass on-line. You can stalk on-line. You can vandalize someone’s property on-line. You can commit libel on-line. You can rob a bank on-line. You can frame someone on-line. You can engage in character assassination on-line. You can commit hate crimes on-line. You can sexually harass someone on-line. You can molest children on-line. You can ruin someone else’s credit on-line. You can disrupt commerce on-line. You can pillage and plunder on-line. You could incite to riot on-line. You could even start a war on-line. (Power, 2000, pp. 3–4)
As the opening quote illustrates, the Internet can be used for good and for ill. To find the dark side, one need only turn on the news for examples of how the cyber domain is being exploited. Symantec (2013) reported in its annual threat report a 42% increase in targeted attacks from 2011 to 2012, with threats not only increasing but also expanding to social media and mobile devices. These numbers cover civilian targets, because reporting on military targets remains highly classified, but it is reasonable to assume that both kinds of networks share similar risks. The debate over whether these intrusions constitute crimes, espionage, or attacks is ongoing; the bottom line is that the threat is chronic. Moreover, many nations, including China, Russia, India, Iran, and North Korea, have developed offensive and intelligence-collection cyber capabilities (Technolytics, 2010). Non-state actors, proxy nations, hacktivists, and even terrorist organizations also pose a threat. The question remains: what to do about it? Can the U.S. military use cyber force for good, or at the very least to its own advantage? “The establishment of advanced information infrastructures allows the use of digital attacks based on micro applications of force to potentially have strategic influence” (Rattray, 2001, p. 463). This thesis explores that idea further.
The simplest notion of coercion is the use of threats to modify behavior: initiating an action that will stop only if an adversary complies. Coercion can include both deterrence and compellence. Cyber deterrence has not been very effective against intelligence gathering and theft, since these types of intrusions continue to occur frequently. This paper therefore examines the compellence side from the perspective of the United States. Compellence can be used either in kind, to stop specific patterns of network intrusions, or to alter an adversary’s behavior more generally, expressly outside of U.S. networks. The vulnerability of civilian networks is reflected in the statistics above. The U.S. military admittedly does not have a mission to protect civilians from their own careless security practices, which enable most of these attacks, and it is unlikely that coercion could stop specific kinds of intrusions anyway. Instead, it is more beneficial to view cyber as the persuasive instrument in a larger strategic coercion framework for influencing other nations abroad. This paper analyzes existing thinking to answer the primary question: under what conditions is cyber coercion useful?
Coercion rarely results in total capitulation, but using cyber as the instrument of coercion to bring an adversary to the bargaining table while avoiding more costly war is worth exploring. Does the U.S. want the cyber act to be nothing more than a slap on the wrist, or should it carry the weight of an airstrike? Can it serve as a faster, short-term economic sanction? Can cyber acts be included among the existing political, economic, and kinetic instruments of coercion to show additional resolve, signal commitment, and bolster the threat? “The use of nuclear, conventional (specifically air power), and economic coercion in recent history parallels global transformations” (Byman & Waxman, 2002, p. 17), and this paper argues that cyber coercion will continue that trend as offensive cyber operations take an increasing role in military operations. Absent extensive use-case examples, our method tests the utility of cyber coercion against a hypothetical scenario. It concludes that, given two near-peer nations on the brink of war, the use of limited, attributable, and reversible acts of cyber coercion, coupled with the threat of subsequent traditional military operations, could be not only useful but also de-escalatory. Adding cyber threats and the limited use of cyber force to the list of tools the U.S. can use to coerce its adversaries is therefore desirable and should be explored in greater detail.
The U.S. currently has the capability to digitally exploit and attack its adversaries, arguably better than anyone else in the world. This cyber skillset is primarily focused on gaining access to foreign networks to gather intelligence or to enable traditional kinetic operations. Based on recent trends, this paper anticipates that the first volleys of a conflict, particularly those preceding kinetic strikes, will most likely come through the cyber domain. This thesis then looks at another possible mission: to stop or modify an existing activity that the U.S. does not like, or to cause an adversary to start a new activity that is more beneficial to the U.S., all from the cyber domain. This is something the U.S. has not openly pursued on a large scale. The ideal would be for cyber coercion to take place in the early stages of conflict to prevent additional hostilities. This is the idea of offense-in-depth. In much the same way that a layered defense offers the most protection, a progression in offensive operations could be most effective. Aggression is typically phased, as nations do not jump straight to nuclear weapons or total war at the outset of conflict. The purpose is to truly allow kinetic war to be a last resort, as international law dictates. For instance, as seen in Figure 1, the U.S. currently uses a six-phase approach in planning military operations (DOD, 2011b). In a similar fashion, this paper foresees coercive cyber acts being used at the outset of a conflict to give nation-states an outlet to resolve their differences and move on. The use of additional force and more costly war remain options, but because of cyber coercion, hopefully avoidable ones. As the Chinese military theorist Sun Tzu famously wrote, “For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill” (pp. 77–78).
Some precedent exists for this “offense-in-depth” idea. Prior to the NATO-led airstrikes in Libya, a cyber attack was considered in the “hopes of bringing them down without a shot” (Sanger, 2012, p. 343). The reasons a cyber attack was not conducted are twofold, and most likely the same ones that plague decision makers today: the time needed to employ it successfully and the subsequent loss of capability. A successful and accurate cyber attack requires extensive knowledge of the adversary’s infrastructure, which takes time to develop. The other issue is that a cyber attack can generally be used only once before the vulnerability is patched and the attack’s effects nullified. The capability to attack a specific vulnerability is therefore lost after launch, until a new vulnerability is found and access to it established. Weighing the pros and cons of using cyber exploits will be situationally dependent. For Libya, other options that could achieve a similar goal were deemed more appropriate. That may not be the case in the future.
The situation leading up to Stuxnet, which targeted Iran’s nuclear production facility at Natanz, also exemplifies the concept of offense-in-depth. Sanger (2012) attributes Stuxnet, codenamed Olympic Games, to the United States and Israel. Bombing the Iranian nuclear facility at the outset, as the Israelis had done against Syria in 2007, was estimated to cause only a two-year delay because the Iranians were further along in the nuclear process. A kinetic strike might also cause the Iranians to emerge more unified and determined than before, hardening both their bunkers and their resolve, and would thus require additional, undoubtedly less effective, kinetic strikes in the future. The idea behind using a cyber attack at the outset was that Iran would probably not anticipate it, and if the attack failed, Iran could always be bombed later as originally planned. While Stuxnet was an act of sabotage and does not correlate perfectly to coercion, its ultimate objective and the thought process behind it can translate to future coercion applications. The fallout from Stuxnet may have played a role in forcing Iran to the bargaining table over its nuclear program; the accord reached in November 2013 put a temporary freeze on the Iranian nuclear program (Gordon, 2013). By adding offensive cyber capabilities to the foreign-diplomacy toolbox, the U.S. can be better prepared and more effective in the way it exerts influence abroad.
Operation Olympic Games demonstrated improved cyber capabilities and bolstered attack credibility, and even its weak attribution created an aura around U.S. cyberwarfare. This provides a unique advantage in approaching future conflicts and can aid the effectiveness of coercion. Rather than jumping straight to dangerous and expensive kinetic operations or enacting slow economic sanctions, the U.S. can take advantage of what cyberwarfare provides. Cyber operations can then become the first and hopefully last step, or the first of many steps, in attempting to coerce adversaries and avoid more costly war. This paper analyzes the utility of such a concept and the increased possibilities it offers a larger coercion framework. In the end, cyber coercion is just one of many options the U.S. can use to exert influence: an option that will be useful in some instances and useless in others.
The concept of coercion is nothing new and should not be considered counter to military thought and practice. Freedman (1998) writes that “military strategy in and of itself is defined as the use of force by one entity to impose its will on another” (p. 36). This, he continues, is nothing more than “organized coercion” on a massive scale, located somewhere on a spectrum bookended by consent and total control (p. 36). In the current political landscape, “when confronted with a direct threat to American security, Obama has shown he is willing to act unilaterally—in a targeted, get in-and-out fashion, that avoids, at all costs, the kind of messy ground wars and lengthy occupation that have drained America’s treasury and spirit for the past decades” (Sanger, 2012, p. xiv). This is demonstrated by the bin Laden raid, escalating drone strikes, and Stuxnet. Cyber operations are one of the few areas to receive increased funding from the president as part of his “light-footprint” strategy (Sanger, 2012, p. 243; DOD, 2012a). As Commander-in-Chief, the president has proven his embrace of “hard, covert power” (Sanger, 2012, p. xvi). With a war-fatigued public and a budget-constrained environment, cyber coercion offers one way for the U.S. to continue to ensure military dominance around the world.
It is critical to plan for this type of warfare now so that it is available when time-sensitive targets arise. For instance, it took about eight months to put together the first plan of execution for Stuxnet. Overcoming air-gapped networks like the one attacked in Iran explains some of that time. The U.S. and Israel supposedly also took extra time to make the malware as accurate and stealthy as possible to comply with the Law of Armed Conflict (LOAC). This matters because the worm caused physical damage and could be interpreted as a use of force; in taking that care, the creators were setting the standard for future applications. It is dangerous to have nations freely launching cyber munitions at each other over the slightest perceived fault. If the U.S. takes the lead, it can continue to lay the groundwork for future applications. Even if the U.S. never launches another cyber attack, continuing to establish footholds in adversary networks can provide persistent intelligence-gathering opportunities.
A military cyber coercion strategy is important because it has the potential of “reducing the risks of war” and “limiting the costs of war that do occur” (Cimbala, 1998, p. 181). The end goal of a coercive act is to show the target the futility of its current strategy and induce concessions, not necessarily to contribute directly to the coercer’s military victory (Freedman, 1998). It is not a demonstration of simple sabotage or brute force; rather, it attaches a message and meaning to the coercive act. “Even a potentially promising use of force can be squandered by an ineffectual diplomacy,” and the use of force tends to complicate, compromise, and harden attitudes (Freedman, 1998, p. 33). Any coercive strategy is therefore a balance, or even a competition, between the political and the military, whether short of war or in war (Cimbala, 1998). The complexity of that relationship is compounded by the fact that cyber expertise resides in the military and intelligence organizations, so the capabilities and use-case examples live in the classified world and are not readily accessible or well known to the political leaders who will decide whether to employ cyber force. For instance, Stuxnet “…was originally under military command until after President Obama settled into the Presidency” (Sanger, 2012, pp. 200–201).
It is likely that decision makers will second-guess the benefit of giving up a valuable cyber capability, outside of an actual military conflict or covert intelligence-collection operation, to “maybe” coerce an adversary. The military simply cannot control every aspect of cyber use. Byman and Waxman (2002) argue that “success in coercive contests seldom turns on superior firepower” (p. 229); otherwise, in recent years the U.S. would rarely lose and would almost always get what it wants. Problems will undoubtedly be political. Traditionally, military coercion has been weakened by leadership’s aversion to committing adequate force and to incurring casualties. When cyber is the instrument, however, it offers leaders the ability to limit or eliminate casualties, to act unilaterally, and to require a smaller military footprint.
A more looming question is whether the effect created by the cyber act can strike a balance between being strong enough to coerce and being so strong that it escalates the conflict. Sometimes the most influential act would only invite retaliation. The U.S. could be one of the most vulnerable nations to cyber attacks, as the statistics in the opening paragraph suggest, so it must weigh the risks of retaliation before trying to compel adversaries through cyber coercion. The U.S. also fears more dangerous scenarios lurking around the corner, for which it may feel it must hold its cyber attacks in reserve. It does make sense to avoid launching attacks frequently, since that would send the wrong message to adversaries and increase the risk of retaliation. The answer to all international conflict problems cannot be found exclusively in the cyber domain, and cyber coercion might be appropriate for only a small subset of conflicts. The appropriate threshold for use has yet to be established, and the U.S. should take this opportunity to set a precedent. Since the world is in the early stages of nation-states conducting offensive cyber operations, with little regulation and few treaties, it is important for the U.S. to realize that other nations will not necessarily take the time to comply with international norms, or use any judgment at all, before launching their own form of Stuxnet (Sanger, 2012).
The thesis is organized as follows. Chapter I introduces the justification and purpose of the study; it also explains the meaning of the title term, offense-in-depth, and describes the relevance of the topic to the DOD. Chapter II defines coercion, exploring traditional instruments, many of the variables that help determine successful implementation in a cyber application, and a counterargument to using cyber coercion. Chapter III surveys examples of cyber attacks that might be employed and common mechanisms for achieving the desired level of influence over an adversary. Chapter IV analyzes the utility of cyber coercion in a hypothetical scenario involving China and the U.S. Finally, Chapter V concludes with a summary and recommended future work.
Cyberwarfare has the potential to change the way we fight the wars of the future…questions raised by these weapons need to be part of the public conversation. (Sanger, 2012, p. 270).
The ability to hurt gives a nation bargaining power, and coercion is one way a nation can choose to exercise that power (Schelling, 1966). Coercion at its core is the threat of more to come, so understanding what the enemy values is critical to using it successfully. The end goal is to get the enemy to abandon its current course of action because continuing would be too painful or costly. Coercion does not always have to inflict costs; when compensation is the norm, coercion can also withhold benefits (Nye, 2011). Freedman (1998) argues that a nation’s interference in another’s affairs lies on a spectrum whose extremes are marked by consent and control. Coercion, he continues, “lacks the legitimacy provided by consent or the certainty promised by control”; worse, if it fails, it then requires either a more aggressive approach that more closely resembles control or a more conciliatory approach that resembles consent (Freedman, 1998, p. 17).
The strategies of the actors and the attributes of the conflict environment are central to determining the success of coercion. These variables reflect each unique situation, making coercion a complex issue to dissect and hampering any prescriptive implementation. As Byman and Waxman (2002) put it, “writing on coercion requires modesty…the topic itself defies easy description” (p. 23). Definitions of coercion vary by author and by the term used, and this variation helps explain its previous failures. Even with sufficient understanding, the biggest difficulty is always in the application. This will be especially true when translating traditional coercion characteristics to the cyber domain. Bratton (2005) provides a useful summary of the differences he found in how experts define coercion, dividing it into the four categories shown in Tables 1–4: the types of threats, the role of force, the actors, and the definition of success.
Table 1. What types of threats are involved in coercion?
  Only compellent threats (i.e., coercion is different from deterrence): Alexander George, Janice Gross Stein, Robert Pape
  Both compellent and deterrent threats (deterrence and compellence are both types of coercion): Thomas Schelling, Daniel Ellsberg, Wallace Thies, Lawrence Freedman, Daniel Byman, and Matthew Waxman

Table 2. What role does force play in coercion?
  Coercion before the use of force: George, Gross Stein
  Coercion only through force
  Coercion through diplomacy and force

Table 3. Who are the actors?
  Best thought of as identical, unitary, rational calculating actors: Schelling, George, Pape,* and Daniel Drezner
  Rational actors that can be somewhat different (democracies vs. authoritarian governments) or are made up of a few simple parts (government, military, public, etc.): Pape,* Risa Brooks, Byman and Waxman
  Complex governments that both threaten and respond to threats differently: Thies, David Auerswald

Table 4. How is success defined?
  Full compliance with coercer’s demands, independent of any other factors
  Need to distinguish degrees of success; possible to have partial success or to secure secondary objectives without securing primary objectives: Kimberly Elliot, Drezner, Karl Mueller, and Byman and Waxman
This thesis follows Schelling’s definition of coercion most closely, which includes both compellance and deterrence. Compellance causes an adversary to stop an action, while deterrence causes an adversary not to take an action in the first place. This paper focuses on the compellance aspect of coercion, and the two words will therefore be used interchangeably, though the codependence of compellance and deterrence in the larger coercion discussion is fully recognized. Coercion will include limited force, but with a caveat: in successful coercion, the target should retain the ability to use violence yet choose not to employ it (Byman & Waxman, 2002). It is not violence for the sake of violence; this is the difference between sabotage and coercion. Coercion is conducted between nation-states, although the specific target can vary within that nation (e.g., the military or the government). Lastly, coercion is a dynamic process with degrees of success that vary across the situations in which it is used. For instance, forcing an adversary to the bargaining table, while not a complete victory, can still be a valuable accomplishment, especially if the alternative is a more costly war. Schelling’s framework primarily deals with the nuclear age, and although cyber weapons share some similarities, Libicki (2011) correctly points out that treating the two the same can be misleading. This chapter aims to show that Schelling’s conceptual framing of coercion has broader resonance, beginning with a description of traditional coercion and then transitioning to the unique characteristics of cyber coercion.
The most common non-lethal, non-military coercive instruments are economic sanctions and diplomatic demarches. Sanctions attempt to create broad discontent within the targeted nation while also weakening the financial assets of its power base (Byman & Waxman, 2002). Sanctions are a low-cost option that is usually most effective with the help of other countries; because of its powerful economy, the U.S. has been able to enact them unilaterally in the past, although this ability is arguably dwindling. Since most crises lack urgency, sanctions are a way to act decisively without escalating. They are, however, easier for third parties to undermine internally and can raise humanitarian concerns. Sanctions are difficult to target specifically and so have had mixed results. For instance, Syria is currently subject to a wide range of U.S. sanctions that have clearly been unable to stop the ongoing civil war, whereas international sanctions appear to be working against Iran (Kessler, 2011). Diplomatic demarches are another frequently used instrument of coercion. Byman and Waxman (2002) write that this instrument “carries the lowest cost, demands little, and carries little risk” (p. 115). Demarches are an attempt at political isolation, but used alone their impact can be negligible. They are the tactic governments use when they want to send a message to another government with as little risk as possible, attempting to influence the power base’s perception of international opinion. The problem is that most countries simply do not care.
Non-lethal, military instruments of coercion include the threat of conventional and nuclear retaliation. Conventional threats often require a use of force to be taken credibly, which the coercer may be unable to follow through on. Even when the coercer possesses clear superiority, “it often lacks the ability or will to apply that strength and inflict costs” (Byman & Waxman, 2002, p. 19). Conventional examples provided in the U.N. Charter include “demonstrations, blockade, and other operations by air, sea, or land forces” (U.N., 1945, 7:42). This phrase provides important use-of-force exemptions that embrace military coercion as a way to settle disputes and recognize an implicit concept of limited war (Lango, 2006). Conventional military coercion is the back and forth between the military plans of the coercer and the target (Pape, 1992); this back-and-forth nature stresses the importance of strategy, because the adversary still gets a vote in coercion. This is best demonstrated in the Cold War, when “small-scale wars and subversion and counter-subversion waged through local proxies became a common mode of superpower conflict” (Waxman, 2011, p. 441).
The idea behind nuclear threats is that their use punishes the society with more pain than it could possibly stand, “so much so that battlefield outcomes become unimportant by comparison” (Pape, 1992, p. 464). Nuclear compellance has questionable effectiveness because a rational leader would be unlikely ever to actually use nuclear weapons if the target nation called the bluff. At the same time, nuclear deterrence has helped avoid direct interstate conventional military action between major powers since 1945. Military coercion can also include lethal use of force, but such tactics are normally reserved for actual conflicts. At one end of the lethal spectrum is support for insurgencies, an escalatory act because international norms typically discourage instigating a civil war in another sovereign territory. More lethal options include air strikes, invasions, and land grabs, which all border on sabotage and brute force rather than coercion.
As part of a larger coercion framework, deterrence is closely tied to compellance, as shown in Figure 2. Although deterrence is not the focus of this paper, it must still be briefly defined. Freedman (1998) writes that coercion is the merging of deterrence and compellance, and that to coerce is “to deter continuance of something the opponent is already doing” (p. 19). Byman and Waxman (2002) also write about the codependence of deterrence and compellance, best demonstrated when the success of previous coercion attempts validates the coercing nation’s credibility, which in turn affects its ability to deter. For instance, if a nation can compel an adversary to stop existing attacks, that adversary might ultimately decide future attacks are not worth it, leading to better deterrence.
With regard to cyber attacks, deterrence is difficult because of constantly changing technology, a diverse threat environment, attribution difficulties, the difficulty of communicating credible threats, the lack of clear thresholds, and the risk of harming innocents with automated responses. Some of these variables also apply to cyber coercion and will be explored later in this chapter. The secrecy surrounding cyber capabilities also makes it difficult to work with allies and partners to share new techniques and expose enemy vulnerabilities. Cyber deterrence does have some prospect of success against nation-states, because major attacks rarely happen in a vacuum and will involve other political, military, or economic goals (Kugler, 2009). Attackers may be unsure whether the post-attack environment will be better or worse for them and therefore choose not to attack, which means uncertainty can be as powerful as projected certainty in cyber deterrence (Lukasik, 2010). Ultimately, deterring intrusions into U.S. digital networks is failing because of a disconnect between U.S. cyber capabilities and any credible threat of retaliatory punishment. Therefore, at least at this time, compellance is the more pressing issue. Focusing on the compellance side of coercion can allow the U.S. to be proactive, using cyber operations to shape world events and take the fight to its adversaries.
Schaub (1998) writes that compellance is “designed to alter the status quo” (p. 58). It is “an act that could not forcibly accomplish its aims but one that hurts the target enough to induce compliance anyways” (Schaub, 1998, p. 42). The goal is to simultaneously increase the perception of the coercer’s capabilities and will to continue while limiting the target’s capabilities just enough to influence its will not to continue. It is a blend of threats to use force and the use of limited force in connection with a threat, and it has a wide spectrum of application. Thies (2003) writes that the relationship between threats and force can be complex: initial compellent threats can fail, yet a coercion strategy can still succeed, as NATO demonstrated in Yugoslavia. The verbal threats and posturing of the Clinton administration were unable to compel the Serbs until force was actually applied in the NATO air war. Adding cyber to the U.S.’s coercive toolkit will hopefully open opportunities not possible with only the traditional economic, diplomatic, or kinetic tools. Even threatening destructive military force is banned by international law, whereas the nuances of cyber use of force have yet to be clearly defined by international norms; the U.S. can capitalize on these legal ambiguities and set precedents.
Before attempting to compel, each side must consider the “value of the target, the power that can be demonstrated, and the extent they can credibly communicate this information” (Schaub, 1998, p. 58). Schaub (1998) describes six additional tenets that need to be considered when attempting to compel an adversary: “demands, motivation, balance of capability, clarity of demands, publicness of demands, and illustrative use of force” (p. 58). The demands can include inducing the target to stop an action, to alter or undo an action, or to take another action altogether. Assessments of motivation and the balance of capability should account for both the aggressor and the target. The clarity of demands must also include what additional actions will follow non-compliance. The publicness of the demands and the potential use of force act as signals aimed at shaping the bargaining process. These tenets represent the science of compellance that must be understood before actual use; any hope of successful application depends on understanding these factors. The art of coercion is then applying them successfully in the absence of perfect knowledge.
Due to the breadth of cyber capabilities, initial thoughts seem to favor threats over actual force for use in coercion. Using threats alone is more in line with traditional nonlethal coercion; in theory, truly successful threats do not need to be carried out (Byman & Waxman, 2002). Cyber threats can force targets to doubt their ability to wage information warfare successfully because of the perceived capabilities of the coercer. Threats inject fear and doubt into the performance and security of a network, and the mere thought of an adversary inside a nation’s networks can be exploited for coercive purposes. Just maintaining a publicized standing cyber force serves to taunt institutions (Libicki, 2011). In peacetime, this effect is not wholly debilitating, as every network accepts a certain level of risk. Digital infrastructures are not shut down at the first sign of network penetration, but these threats do drive the cyber defense industry and generate fear of ever-looming cyberwar. This simply reflects the constant back-and-forth relationship of attacker and defender found in any domain. In wartime, however, fear and doubt are magnified by the increased risk of death. If warfighters begin to doubt the accuracy of the information on their computers, or the performance of the military systems to which they have entrusted their lives, cyber threats can create the tipping point at which warfighters become unwilling to engage the enemy.
Given that there are no actual real-world examples of cyber coercion, however, there is an even stronger need for credibility and demonstration to generate the desired effect. Byman and Waxman (2002) point out that sometimes no amount of threatened force will compel an adversary to yield; to establish credibility, force will most likely be required. The notable cyber incidents in Estonia, Georgia, and Iran have shown that preemptive cyber aggression can produce powerful effects. These incidents can create the necessary credibility and proof of concept for future uses of cyber coercion, even though they were employed differently. Whether through threats or actual force, the tailorability of options found in cyber coercion allows a wider spectrum of application than traditional instruments. Cyber acts can incorporate elements of all other types of coercion and add unique elements of their own. The downside is that, among peer or near-peer competitors, embracing offensive cyber capabilities could encourage more competition and the militarization of cyberspace. This has arguably already occurred, and the U.S. now needs to figure out how to succeed and maintain an advantage in the current environment.
The amount of actual force required to coerce via cyber will undoubtedly be situationally dependent and one of the most difficult aspects of employment. Too weak, and it can lead to embarrassment and capability loss; too strong, and it can lead to retaliation and escalation. The promising news is that the threshold for military retaliation against cyber acts will likely be higher than for any kinetic equivalent (Waxman, 2013). Although cyber attacks can clearly cause damage, it is arguably less than that of other strategic weapons, and cyber attacks used alone would most likely be only an irritant to the targeted nation (Lewis, 2011). Multiple examples of distributed denial-of-service (DDoS) attacks and web defacements in the news over the past decade seem to support this claim. Nation-states are simply not prepared to act in response to these low-level intrusions. The risk of unintended propagation also helps curb the extensive use of more powerful attacks.
The problem with low-level intrusion examples is that they do not carry any coercive weight. Liff (2012) writes that only a limited number of circumstances will be possible for cyberwarfare to be used as a tool to pursue strategic objectives. He continues that, “no examples of irrefutable effective coercive CNA exist” (Liff, 2012, p. 426). The more powerful attacks in history have provided only brute force measures during or precipitating conflict. There is a thin but distinguishable line between targeting for coercion and targeting for destruction (Cimbala, 1998). Brute force sabotage eliminates the target’s options, and coercion is built upon choices. Byman and Waxman (2002) write that, “simple pain inflicted does not translate into leverage” (p. 48). The problem is that this creates a continuum of approaches in which the coercer must find something short of total destruction but powerful enough to compel the desired behavior. Coercion is most successful when targeting what the enemy values, so “there is the risk of the most sensitive pressure point, if pressed, provokes the most dangerous response” (Byman & Waxman, 2002, p. 224). Figuring out the right balance of threats and force will require evolutionary understanding. Coercion is a process more than a singular event because conflicts with two determined parties tend to last a long time.
Power is the ability to get others to do something they would not normally do; this is a critical tenet of coercion. The power to hurt is nothing new, but thanks to technology proliferation, many people can inflict it on anyone from anywhere (Schelling, 1966). In the cyber domain, power is diffused across varied threat vectors. Nye (2010) writes that nation-states will remain the most dominant actors, but the gap will shrink and control will be more difficult to exert. A reduction in power is not the same as equalization, though, as nation-states will still have the larger resources needed to mount the most dangerous types of attacks. There is a clear demarcation between the quantity and quality of actors in cyberspace. If a country wishes to exert its power to coerce via cyber most effectively, it must simultaneously defend its own gaping vulnerabilities. Nye (2010) writes of a paradox: more cyber power also creates more vulnerability, because wielding cyber power requires reliance on highly technical systems. A less technical country may have limited cyber capabilities, but it is also harder for the U.S. to coerce through cyber means; the U.S., with arguably formidable cyber power, also has inherent vulnerabilities and is therefore itself susceptible to cyber coercion. Liff (2012) suggests that the most successful use of cyber coercion will be in peer or near-peer disputes because of the ability to sustain cyber operations, the ability to bargain due to the fear of escalation, and the credibility of threats when reinforced by conventional or nuclear forces.
Although acquiring cyber capabilities has a low barrier to entry, effective use of cyber coercion is limited to the dominant conventional powers of today. Nation-states represent the most dangerous cyber threat. Cyberwarfare is expensive and labor-intensive, and it is unlikely that any actor below the level of the nation-state could use it to coerce successfully (Lucas, 2011). Peer or near-peer nations to the U.S. also have less interest in creating instability in the world and undermining the authority of sovereign nations; they recognize the risks of instigating and escalating major conflicts (Libicki, 2011). This thesis further concentrates on adversaries rather than peers or near-peers generally. Cyber coercion is also difficult to conduct against countries with which the U.S. shares legal frameworks for cross-border civil liability, because the U.S. could then be sued for any damage caused; it is more plausible between pairs such as the U.S. and China that lack mutual extradition arrangements covering sabotage and product tampering. Otherwise, the costs of paying damages could make offensive cyber prohibitively expensive. This all places the emphasis on using cyber coercion at the higher end of the spectrum of conflict, as shown in Figure 3, or, in the case of this thesis, hopefully the avoidance of total war.
A problem with coercion is that it likely will not go according to plan. As the German military strategist Helmuth von Moltke famously stated, “No plan survives contact with the enemy.”[†] Nations operate with incomplete and inaccurate information, and decision making is further short-circuited by the strong emotions present in any conflict. Nations are just as likely as individuals to stray from rational decision making, especially when forced to make decisions under external international pressure (Byman & Waxman, 2002). Schelling assumes that states act rationally. This could be perceived as a fault, but an enemy that might not seem rational is not necessarily “wholly irrational,” especially with regard to something like retaining power (Kugler, 2009, p. 325). Successful coercion will depend on understanding both the adversary, to better predict its strategy, and the strengths and weaknesses of the instrument used to coerce. Some of the important variables in the cyber domain will now be investigated.
Freedman (1998) describes some cost elements that come with coercion. The first deals with resistance and enforcement. The victim always has a say, and “mutual coercion” is likely (p. 30). The victim will surely attempt to resist and even retaliate. Coercion becomes a “trial of strength and confirms the balance of power” (Freedman, 1998, p. 29). The second involves compliance. These costs change over time, reflect the back and forth volleying, and are “attached to the eventual settlement” (p. 29). This is where cyber coercion can assist kinetic operations. Through the assistance of a cyber attack, an adversary network is not completely destroyed, but the attacker can potentially enjoy the physical freedom to maneuver undetected. The victim can then see that continuing its existing strategy will be unsuccessful and will potentially be more willing to come to the bargaining table. This compromise represents a partial success. The victim retains the choice to alter its existing strategy and continue fighting, or halt operations altogether. The attacker also retains options to increase the appropriate blend of threats and level of force to influence the desired behavior.
The actual financial costs are reflected in one example of USCYBERCOM’s budget, shown in Figure 4. According to the Washington Post, the House of Representatives approved a recent spending bill that plans to double the budget for USCYBERCOM from last year’s $212M to this year’s $447M, a four-fold increase from USCYBERCOM’s inception just four years ago (Fung, 2014). The command also plans to add around 4,000 personnel over the next few years. In a time of fiscal constraint and military sequestration, this budgetary increase signals new capabilities and the importance the nation is placing on cyber operations. While demonstrating growth, it is also important to keep in perspective that USCYBERCOM’s budget in 2012 was less than the cost of one new fighter jet (Sanger, 2012). Investment in cyber coercion can potentially allow nations to exert influence in a cost-effective manner.
Importantly, no breakdown of offensive versus defensive spending is provided; investing in both is required, since mutual coercion will likely occur. A key tenet of coercion is to find and threaten an enemy’s pressure points, areas that are important to an adversary but cannot be impenetrably defended. Because a network is difficult to secure completely without diminishing its functionality, cyber attacks have an initial advantage as an instrument of coercion. For instance, zero-day exploits are by definition previously unknown vulnerabilities, so the cost of development, ease of implementation, and speed of attack favor the offense (Liff, 2012). This does not discount the value of defense. Even offensive effects can be used defensively, by either preventing an adversary advantage or maintaining an attacker advantage. The inability of cyber to serve as a standalone tool to change the outcome of an extended conflict also favors continued investment in defensive capabilities. Given the sheer volume of intrusions happening daily, every organization needs strong defensive measures that are resilient and redundant enough to absorb the inevitable retaliation. Arquilla (2011) writes that the future balance of offense and defense will most likely be an action-reaction cycle, while Hichkad and Bowie (2012) draw parallels to air warfare weapons and countermeasures. Successful coercion depends on the close relationship of both.
When investigating the offensive cyber advantages found in coercion, it is important to remain balanced and not fall into the “cult of the cyber offensive” (Singer & Friedman, 2014). A similar offensive obsession gripped militaries around the world at the turn of the 20th century. Technologies like the railroad, the telegraph, and the machine gun gave the false impression that whoever could mobilize fastest and go on the offensive would gain the advantage. This obsession drove military competition that escalated to war. Once WWI actually broke out, the very technologies the world thought would favor the offensive instead favored trench warfare and the defensive. These are important lessons for the cyber domain. It is no small feat to launch a devastating offensive exploit; it requires a great deal of time, money, and expertise not readily available. “Neither Rome nor Stuxnet was built in a day,” so the limiting factor for implementing cyber coercion will be time rather than cost (Singer & Friedman, 2014).
Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations. (U.N., 1945, 1:2(4))
This article from the U.N. Charter states that nations should refrain from the use of force, but how that relates to cyber operations is an ongoing debate. Article 2(4) does not clearly define what constitutes a use of force; it generally favors a narrower, instrument-based definition rather than a subjective interpretation of the degree of force. Article 51 deals with the issue of an armed attack. An armed attack, a narrower subset of force, is what triggers a state’s right to use force in self-defense. Even something interpreted as a use of force does not require a state to respond: many experts feel that because Stuxnet caused physical damage it constituted a use of force, yet Iran never openly retaliated. In general, the U.S. adopts a narrower interpretation of force that includes only military attacks, but a broader view of what qualifies for Article 51 self-defense (Waxman, 2011). Determining which actions qualify as mere irritants and which threaten a state’s existence is even less clearly defined in the cyber domain.
At the extremes, it is clear that certain cyber activities fit within existing international legal frameworks. “If state-sponsored cyber activities constitute a use of force, then international law governing the use of force (jus ad bellum) and the Law of Armed Conflict (jus in bello) apply” (Foltz, 2012). The problem arises from the large gray area covering many cyber activities. For instance, how the law applies to peacetime state-sponsored cyber coercion is a more difficult question. The difficulty with coercion is determining the line between unlawful coercion and lawful pressure. Foltz (2012) writes that a use of force is typically interpreted as lying between physical harm and the economic and political instruments of coercion. Broad, sweeping bans on coercion as a use of force have never been widely accepted. The U.N. Charter specifically mentions economic and political coercive instruments, and even military acts of coercion like demonstrations and blockades, as acceptable use-of-force exemptions. Determining the right point at which to stop using alternative measures is one of the most difficult decisions to make (Lango, 2006).
The ban on force appears to apply primarily to armed military violence. Violent acts do not necessarily cause more suffering than nonviolent ones, however, and are therefore not always the greatest evil. “Most economic and diplomatic measures, even if they exact tremendous costs on target states (including significant loss of life), are generally not barred by the U.N. Charter” (Waxman, 2011, p. 422). That is a weakness of Article 2(4): in today’s precise military world, is a targeted military strike really worse than long-term economic sanctions? With cyber, the non-physical domain is arguably as important as the physical one, further complicating legal interpretation (Taddeo, 2012). The nature of cyber attacks, and the fact that we live in a highly connected world, means that a nonviolent attack could actually have more detrimental consequences than a violent one. It is easier for a cyber attack to take down a financial system or power grid and cause panic than to inflict mass casualties. So it seems that in some instances cyber coercion would qualify for a reasonable use-of-force exemption, but there is an enormous margin for abuse.
Even threatening the use of force is not allowed under Article 2(4), so using threatened force to prevent potential future hostilities places the idea of offense-in-depth in a legal gray area. There will be a thin line between a use of force prior to conflict and one at the outset of conflict. In the digital world, it is arguable that all attacks are imminent; how does one even assess imminence when human intent is often the only thing distinguishing legal exploitation from an illegal attack? Historical statecraft is rife with bluffs and threats that are easy to misinterpret. “Cyberweapons would seem to be an excellent choice for an unprovoked surprise strike” (Lin, Allhoff, & Rowe, 2012). But if nations were able to use cyber as a non-kinetic warning shot at the outset of a conflict, or as a threat of more to come, to modify an adversary’s behavior without unnecessary loss of life, then it seems worth permitting.
Preventative or preemptive attacks have always challenged the philosophy of Just War Theory. Arquilla (2011) writes of a cyber-paradox in which the ease of conducting cyberwar might override the principle of last resort in favor of the possible efficiency and morality of its use. To remain in accordance with Just War Theory, the question is not whether physical war should be a last resort, but whether cyberwarfare should be a next-to-last resort. A tactical military cyber attack could instill fear in the enemy soldier, and when coupled with threatened kinetic strikes for noncompliance, it can create strategic and de-escalatory potential. Aucsmith (2012) agrees that the end goal of cyberwarfare is not to win in that domain per se, but to garner enough control to affect conflict in the other domains. Hichkad and Bowie (2012) counter slightly, noting that throughout history new weapons have sometimes altered the battle-space initially, but none has been able to prevent future war.
Cyber coercion gives the military another tool of influence, but its use will require balancing the judged effect against the motivation behind it because of the enormous risk of abuse. Questions remain whether motivation and intent can even be proven, and whether the potential benefits to the U.S. are worth the potential risks of abuse by adversaries. Schelling (1966) writes that war is a contest of who can survive pain the longest, and that technology can make that pain even worse. He was speaking of nuclear weapons, but with regard to cyberwarfare, it may be possible to use technology in a manner that actually avoids unnecessary suffering. This depends on the responsibility and motivations of the user in tailoring the offensive cyber act to reduce the risk of future bloodshed. If there were a way to use cyber instead of kinetic strikes to accomplish the same goals, why would cyber not be pursued? Denning and Strawser (2013) write that, all things being equal, if a military objective can be achieved with either a kinetic or a non-kinetic option, the commander has a moral obligation to employ the cyber option.
An interesting option for cyber coercion is reversibility. An offensive cyber act is not beyond recall once initiated: damaged data and programs can be overwritten with the correct information. This is arguably the strongest evidence of the value of cyber coercion. Reversibility becomes a positive inducement for the target to submit to the pressure and be no worse off, at least physically. It can provide face-saving cooperation, because an adversary can play down the cost of concessions (Byman & Waxman, 2002). This allows cyber coercion to include force with the initial threat yet risk no permanent damage to the victim. No other coercive instrument makes such an extensive and temporary use of force possible. In fact, the most notable kinetic forms of coercion, nuclear strikes and air strikes, have clearly limited utility in anything short of war without causing war itself (Byman & Waxman, 2002). This is primarily because the fear of kinetic and nuclear strikes is rooted in sensitivity to casualties; if targeted responsibly, cyber attacks carry less risk of casualties. Creating reversible attacks also helps the coercer manage the risk of retaliation. Even if the ease of launch might encourage increased use, if retaliatory attacks are in-kind, then at least counter-attackers are encouraged to ensure that the damage to the coercer’s network would also be repairable.
Just because an attack is reversible does not mean its temporary effects cannot be powerful, which further strengthens the coercive effect. Any initial use of force should be tied to an actual objective and clearly communicated to the target. This is why the best application for cyber coercion will be an operational-level attack against the military rather than a strategic attack against the civilian population. The target must understand why it is being targeted so it can choose to comply or resist. Shutting down a nation’s early-warning radars could mean heavy casualties if the targeted nation does not submit to the coercer’s demands and the conflict is allowed to proceed. Interfering with logistics could increase rates of disease and injury from accidents if the target does not comply quickly. By using these tactics at the outset of a conflict, the goal is to get the target nation to abandon its current path and come to the bargaining table. The target nation still retains the capability to defend itself; it is just weakened. That weakness becomes temporary by including a reversibility option, which then increases the likelihood of compliance. In this way, continuing the conflict might be avoidable, which represents a limited but valuable success.
A drawback of reversibility is that it could create a cyber “fire-and-forget” mentality. Nations could release malware with a shrug, insisting they could always “turn it off” at a later time. China already seems to be doing this with advanced persistent threat attacks targeting data exploitation. Another difficulty in repairing cyber attacks is localizing their effects in order to reverse them. Cyber attacks are by their nature difficult to target precisely, so reversing every area of damage could be time-consuming if an attack is not well planned. This delay hurts coercion because even if the target acquiesces to the demands, the time it takes to fully reverse the damage could be deemed unacceptable by the target, which might then still retaliate. This could lead to unintended escalation, which will be discussed shortly. The target may also doubt that all traces were actually removed. This argues for the importance of self-attributing attacks. If a signature is applied, then it will be far easier for the coercer to prove that all traces of the attack have been removed because nothing remaining in the target system carries the coercer’s digital signature.
There is a relationship between the strength and effectiveness of a cyber attack and the resulting level of attribution. By using “technical data, all-source intelligence, and logical inference,” most major attacks should be attributable (Kugler, 2009, p. 319). Even with anonymizing services, digital forensics can discern attackers given enough time (Knake, 2010). The problem is that a full-blown cyberwar could happen very quickly, and indisputable attribution might take days rather than minutes, so a country could be destroyed by the time the attack is attributed. Hare (2012) argues that unequivocal technical attribution is not required to compel because of the importance of communication. Another indispensable attribute of successful coercion is conveying a credible commitment to future action. Violence is used to strengthen a message, not simply to cause pain. By attributing the act of cyber coercion, the coercer can signal both commitment and credibility.
It may suffice that the coercer clearly communicate to the adversary that the cyber act is in retaliation for a specified fault and that it will only stop after the adversary meets the demands, much like extortion. This ensures that the intended target clearly understands the origin of the threat and its options. For coercion, countries that make the origin of their attacks clear will be better able to produce the desired effects (Rowe, Garfinkel, Beverly, & Yannakogeorgos, 2011). These considerations are often more important than the pure military value of the target because coercion carries both force and a message. Clarity of origin can be accomplished by attaching digital signatures to the attack malware, or even by concealing them steganographically. Without the clarity that attribution brings, some effectiveness could be lost. The counterargument is that vague demands can be less humiliating for the enemy. An overtly attributed attack will be embarrassing for the target, as there is a perception of being strong-armed by a more powerful nation, which might also encourage retaliation to save face. An attacker might therefore approach an adversary covertly and allow it to back down without the rest of the world clearly knowing the cause. Ultimately, attribution will be situationally dependent and will be but one of many political considerations.
Many of the creators of Stuxnet felt that it would be more valuable if attributed, because Iran would realize that the U.S. could unmistakably penetrate Iranian networks, sending a stronger message. Sanger (2012) believes that the president “fears acknowledging the capability creates pretext for other countries, terrorists, or teenage hackers to justify their own attacks” (p. 265). Non-state groups, let alone terrorist organizations, do not sign international treaties, and many seem willing to conduct hit-and-run attacks. The absence of international norms or treaties is not unique to cyberwarfare in comparison with other new technologies. Although not for any lack of effort, the first treaty limiting nuclear testing was not signed until nearly 20 years after the invention of the nuclear bomb (Sanger, 2012). Operating without clearly defined norms will not prevent other nations from taking advantage of the opportunities provided by this domain, and so the U.S. should not be afraid to either.
Byman and Waxman’s (2002) idea of “escalation dominance” is magnified when considering cyber as the instrument of coercion (p. 30). They define this as “the ability to increase the threatened costs to an adversary while denying the adversary the opportunity to neutralize those costs or to counter escalate” (p. 30). Since the U.S. is so vulnerable to cyber attacks, it is unlikely to be able to overwhelm an adversary before that adversary counterattacks in-kind and exploits U.S. vulnerabilities. The fact that many nations, and individuals for that matter, are able to acquire these capabilities makes the task all the more difficult. That suggests that to be effective, cyber coercion will require combining the initial cyber use of force with the threat of conventional force. With regard to state relationships, Liff (2012) writes that underestimating resolve and misperceiving vulnerabilities could lead to escalation, but that coupling the cyber threat with the threat of follow-on kinetic strikes could prevent it. The U.S. has an advantage in being able to threaten its powerful nuclear and conventional assets in case cyber acts cannot coerce by themselves. This gives the threat greater credibility. Liff (2012) foresees an increase in protracted wars only between two weak states, because of the lack of strong conventional military backing. Even when an asymmetric advantage does exist for a weaker state against a stronger one, the weaker state might gain temporary leverage but will lack the ability to sustain it.
Even if kinetic threats strengthen the credibility of cyber uses of force and possibly prevent more costly kinetic wars, the initial use of cyber force might still create opportunities for escalation to unrestricted cyberwar. Maneuvering exclusively in the cyber domain can generate extreme and debilitating effects, as discussed in the legality and morality section. Non-kinetic escalation could be substantial if the victim is a highly technology-dependent nation. For instance, the U.S. might think that a major cyber attack could coerce North Korea. North Korea might then assume the following: kinetic escalation is not worth the risk because of the threatened conventional U.S. strikes; the best method of counterattacking is in the cyber domain because of known U.S. vulnerabilities; and it has nothing to lose by unleashing its cyber counterattack in one massive undertaking before the U.S. can exert increased pressure. The resulting North Korean counterattack could destroy U.S. infrastructure, and the attempted U.S. coercion would not only be a failure but would also create an even worse situation. This again suggests that cyber attacks need to have limited objectives and clearly communicated military targets to minimize escalation. It is arguable that North Korea might still respond similarly even if the U.S. only attacked its military. Therefore, a better solution would be promising in-kind retaliation through a declaratory policy at the outset of a conflict to help reduce the possibility of miscalculation while trying to maneuver exclusively in the cyber domain (Mazanec, 2009).
Playing a dangerous game of risk that capitalizes on escalation fears is also what helps de-escalate. Schelling (1966) writes that a credible threat can often initiate something that might quickly get out of hand; this risk of escalation, and the fact that it is shared, is part of brinkmanship. The unpredictable nature of cyber effects lends itself to brinkmanship, and cyber attacks convey the power of the coercive threat of escalation. The point of the coercive threat is to give the adversary a choice. If the proposed conditions are not met, the threat of additional and more costly war makes the initial cyber act more powerful because of the knowledge that war could happen whether each side intended it or not (Schelling, 1966). Byman and Waxman (2002) also discuss balancing escalation. The inability to maintain cyber pressure can embolden the target, but sudden escalation and being strong at the outset create a powerful sense of urgency. The problem is finding the perfect blend: an act powerful enough to be noticed, but not so strong as to force the adversary to escalate to unrestricted cyberwar or total war.
When coercing, Schelling (1966) writes that it is critical to give “the last clear chance” (p. 44). By doing so, a nation maneuvers in such a way that the enemy now has the initiative to act or not. It gives the enemy a way to escape and save face. Freedman (1998) concurs that nations should want a bargain and need a bargain to save face; “total compliance is an act of submission” (p. 34). If the target nation complied only out of fear of a superior force, then it is unlikely any agreement would last. Take, for example, the Cuban Missile Crisis. The U.S. used bargaining and demonstrative military action but also gave the Soviet Union the option of a face-saving retreat (Cimbala, 1998). The blockade was not strong enough to remove the missiles already in Cuba, but it was able to stop future shipments through threats of escalation.
A major problem is that if the enemy believes an attack will happen regardless, and not in response to something it did, then it has no incentive to be cautious and will probably attempt to guarantee that it gets the first shot. Schelling (1966) writes that this is the worst position a country can be in, where both sides are “trapped by a technology that can convert a likelihood of war into a certainty” (p. 225). A major difficulty with cyber coercion is whether it can be done without both sides concluding that cyber attacks will only work for whichever side gets the jump on the other. There is also a paradox within cyberwar: because of the nature of the targets, the more powerful the attack, the less likely the victim will be able to communicate and control its response, which can lead to escalation and more war. Communication problems will only occur with particular kinds of cyber attacks that target networks, so attackers do not need to cause communication problems if they do not want to. Ensuring there is a constant communications link between the two countries, in a manner similar to the red “hotline” between the U.S. and the U.S.S.R. during the Cold War, is critical (Mathers, 2007). Some of this uncertainty is necessary to prompt cautiousness; the inability to accurately predict the post-attack environment creates an important sense of hesitation.
A problem with carelessly designed offensive cyber acts is unintended effects, like spillage onto the general Internet. Code can also be reverse-engineered to work against the attacker’s own poorly defended networks. The benefits of offensive cyber operations could be overshadowed by the risk of having one’s own network vulnerabilities exploited. Attacks may not be able to find their targets effectively, and assumed vulnerabilities could be unknowingly patched, making reliable and successful malware difficult. Defensive measures like passwords, encryption, loggers, and access controls complicate precise targeting. Networks are layered with firewalls or disconnected outright, so knowing how an attack might behave in the target environment is difficult. Anything new comes with large upfront costs, and this is especially true of new software and new cyber weapons, which both have high error rates and low reliability. Even after the extensive preparation it took to develop Stuxnet, it still reached many non-targeted networks. The creators wanted to maximize the effect of the code by applying it to a “large array of centrifuges in Natanz” rather than focusing on more discreet targeting; “the worm then didn’t recognize when its environment changed from secret to Internet” (Sanger, 2012, p. 204). The U.S. also lost some credibility, as it now appears hypocritical when it complains about other nations snooping around its own networks. Yet the willingness to release Stuxnet, knowing that no code is perfect and that collateral damage is always possible, could also be interpreted as a signal of resolve, which could bolster future coercive threats.
Another problem for adopting cyber coercion is what Byman and Waxman (2002) call an asymmetry of constraints, in that some countries are more constrained by legal norms than others. Less powerful countries will always be more interested in upsetting the status quo and trying to capitalize on a means of attack to which the more powerful are quite vulnerable. Even conflicts that seem small to the U.S. might seem overwhelming to certain adversaries, and so they might be more willing to cast aside international law at the outset. The U.S. could also use its technological advantage to create a new form of unilateralism, which, unless properly managed, could invite a cyber arms race (Sanger, 2012).
Designing a strategy that accounts for all the different nations, and even the fluctuations within a single country, is another major difficulty. It is what makes coercion so hard and why creating blanket foreign policies is impossible. Based on recent military history, an adversary could conclude that the U.S. is unwilling to escalate beyond air strikes. Therefore, an adversary need simply wait for the strikes to subside and then continue on its current path. This view might also translate to the way an adversary reacts to cyber coercion. Air strikes, sanctions, diplomatic pressure, and other forms of coercion can be interpreted as low-risk, low-commitment measures that are only used when policy-makers want to demonstrate they are not ignoring a problem. These low-cost tools are then sometimes interpreted as being used only when U.S. commitment is weak, which weakens overall coercive credibility (Byman & Waxman, 2002). However, such limited uses of force also flow from calculations of efficiency: why use more than is needed to achieve the desired effect?
Cyber coercion clearly will not always work. It will likely be effective only in, or on the brink of, conflict. For coercion to work, traditional or cyber alike, the attacker must be highly motivated to see the threat or use of force through to completion and to prevent additional hostilities by both sides. Perception can help shape reality. For instance, in the brief history of offensive cyber acts, there has been limited or no escalation and generally poor responses. This strengthens the determination of an aggressor attempting an act of cyber coercion. There are also few examples of unequivocally attributable cyber acts by nation-states, especially serious events that would necessitate a military response. This favors the determination of potential victims. It is likely that target nations would not perceive a cyber act as worthy of compliance unless it was so strong that it would then escalate hostilities.
The employment of offensive cyber actions is relatively new, and more and more actors are acquiring the capabilities to employ them. Adding cyber coercion to the list of options does not mean advocating that a country act as a bully and continually act unilaterally; rather, it means pursuing all viable options to be better prepared for the future. Not doing so is simply poor planning. Byman and Waxman (2002) write that “as a capability waxes, opportunities to use it wane” (p. 230). Never using offensive capabilities might completely nullify any coercive benefit they could bring to the fight, since the underlying technologies are quite unreliable and require demonstration. The current U.S. use of dominant air power and drone strikes might work for fighting terrorists in third-world countries, but those tactics will not allow the U.S. to influence peer or near-peer adversaries without increased risk of escalation. In the future, a peer competitor will most likely want to interact in an area where the U.S. has clear vulnerabilities, so the U.S. needs to be prepared to operate in the cyber domain and position itself to find niche advantages ahead of time.
Cyber coercion can provide a range of physical effects not available to traditional instruments of coercion outside of war. This is because cyber coercion can create an initial threat with limited force that is temporary and reversible. Air strikes, while quick and powerful, are more escalatory because of the physical damage; diplomatic demarches, while symbolic, might not carry much weight; and the easiest and most cost-effective instrument of coercion, sanctions, takes a while to work. Cyber acts can be faster to implement, can carry varying levels of attribution, and can scale to physical destruction when deemed politically necessary. The initial threat can bring with it some limited demonstration that by itself may not be all that impressive or powerful, but when combined with the promise of additional force, the necessary leverage could be created.
History demonstrates that technology continually changes the nature of warfare. Those who adapt best have won wars and extended their influence. (Rattray, 2001, p. 461)
This chapter looks at the process of conducting cyber coercion: the types of cyber attacks that could be used to coerce, and how they could be used to exert influence against the target nation. In Figure 5, McClure, Scambray, and Kurtz (2012) present a diagram they entitle “Anatomy of a Hack.” While hacking methods overlap somewhat with cyber attack methods, they do not represent a complete model for operational planning. Offensive cyber operations are about conducting reconnaissance to find vulnerabilities, building cyber munitions to match, and then delivering the exploits (Singer & Friedman, 2014). Each step opens up the possibility of detection. With coercion, being detected is not necessarily the biggest problem, as the attack will often be directly attributed and the desired future behavior communicated to the target. For cyber coercion, being detected prematurely means that the intended effect of the attack could be thwarted at the technical level before it generates the expected adversary behavior. Without behavior modification, coercion is unsuccessful. The title term, offense-in-depth, stresses the importance of consequences, or the effect of military actions on the enemy. The amount of violence used will be based on the desired adversary behavior. When deciding how best to employ force, all actors operate with incomplete knowledge, and there is no way to know for sure exactly how events will evolve; this emphasizes the need for extensive prior intelligence gathering. The operating picture is incomplete, and the attacker needs to try to make it less imperfect. Finding adversary vulnerabilities in advance provides the U.S. with more options when deciding how best to exert its influence worldwide.
Detailed technical knowledge of enemy systems and networks is required before any exploit can succeed. Acquiring that knowledge can take months and even years of preparation. “Developing the offensive means to assess and target centers of gravity may prove very difficult because of the fast-changing information infrastructure of an adversary” (Rattray, 2001, p. 464). This information is not only difficult to acquire but must also be continuously updated and verified. For instance, an exploit might succeed against only one model of radar and be completely ineffective against all others. The process gets even more complicated when attempting to gather information about classified networks and highly guarded military networks and systems. Without finding the vulnerabilities that create opportunities to apply pressure, it is impossible to coerce effectively. This highlights the importance of constant intelligence collection. For cyber coercion, these steps correspond to footprinting, scanning, and enumeration in Figure 5. All of them try to identify significant characteristics of the operating environment.
The purpose of footprinting is to learn about employees and business practices, and then to gather information about target IP addresses, network topology, phone numbers, etc. Tools like search engines, FingerGoogle, WHOis, arin.net, and NSlookup are good places to start (McClure et al., 2012). Many networks employ Snort or another intrusion-detection system to try to catch intruders snooping for data. The purpose of scanning is to focus on the most promising avenues of entry into a network. Scans can be bulk or specific assessments; the larger and more intrusive the scan, the “noisier” it will be to the target. Tools like fping, Nmap, and SuperScan allow packets to be shaped to minimize the noise signature (McClure et al., 2012). Scans should provide the status of a machine and any open ports. Enumeration tries to find user accounts and identify specific applications that are running, and it correlates known bugs and software weaknesses. Finding target databases, applications, and OS-specific information, and baselining network activity, connectivity, and bandwidth, are all important. More intrusive probes, using banner grabbing with netcat, PSTools, and DumpSec, should reveal less protected resources and user groups that could be targeted (McClure et al., 2012). The actual physical location of the target and its security policies are also important for coercion. Servers might be in third-party countries, which could complicate targeting, and simply knowing how an adversary might respond based on its doctrine will help drive the coercion mission plan. Finding as many vulnerabilities as possible is beneficial in case some have been patched by the time of the actual attack, or in case the adversary tries to outlast the initial attacks and more attacks are needed on short notice.
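The scanning step above can be made concrete with a short sketch. This is a minimal TCP connect scan in Python, written as an illustration only and assuming permission to probe the host; real tools like Nmap shape raw packets to reduce the noise signature discussed above, whereas a plain connect scan such as this one is the "noisiest" possible probe.

```python
# Minimal illustration of the scanning step: a TCP connect scan of a
# few ports. The host and port list are illustrative placeholders.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return a map of port -> True if a TCP connection succeeded."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            results[port] = (s.connect_ex((host, port)) == 0)
    return results

if __name__ == "__main__":
    # Probe the local machine for a few well-known service ports.
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A connect scan completes the full TCP handshake and so appears in target logs, which is exactly why the text notes that more intrusive scans are noisier.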
Once enough data has been collected, the attacker can attempt to gain access to the target or conduct a denial-of-service attack against it. Actually launching an attack is a complicated, multi-step process. It can be done remotely or with physical access. Sometimes just being on the target machine is sufficient for the mission, but at times it will be important to escalate to root-level privileges; grabbing and cracking passwords can assist this process. Information can be modified, stolen, or deleted; false information can be inserted; or whole systems can be shut down. Depending on the level of attribution desired for the act of coercion, the attackers may need to cover their tracks by clearing logs and hiding the tools used. If access is obtained before effects need to be generated, backdoors into the system will have to be set up and maintained. This can include setting up rogue administrator accounts, modifying startup scripts, and leaving behind corrupted files.
Pre-conflict cyber operations, like passive signals collection, should be expanded to include more preparation of the environment to facilitate the required intelligence support. Then, the coercing military’s manned and unmanned platforms loitering off a target’s coast can not only effectively sense, manage, and utilize the dynamic environment autonomously, but also find vulnerabilities and deliver countermeasures independently. The DOD’s Joint Operational Access Concept (JOAC) states, “Cyberspace operations likely will commence well in advance of other operations. In fact, even in the absence of open conflict, operations to gain and maintain cyberspace superiority and space control will be continuous requirements” (DOD, 2012b, p. 12). This seems to imply a willingness to make preemptive cyber attacks a part of U.S. military doctrine. Publicizing this fact to adversaries can have a powerful effect. In much the same way that adversaries knew individual U.S. nuclear submarines had the power to launch ballistic missiles independently, so too should tactical commands be given the authority to employ cyber weapons. Exploits can be delivered and placed in reserve onboard deployed units for use during any stage of a conflict; for instance, they could be target packages specifically designed for use on command and control (C2) networks. New information and updates can be promulgated from theater commands down to the tactical level. This allows deployed forces not only to gain understanding of closed networks and collect potential system vulnerabilities, but also to deliver cyber payloads that can produce mission-specific effects when required.
Attacks can be accomplished by exploiting discovered software, hardware, and human vulnerabilities. Software vulnerabilities are the most common; attacks on them take advantage of credentials gathered during intelligence collection, security flaws in particular software versions, and misconfigured settings. Exploits can be delivered in one or multiple stages. A single-stage exploit results in some type of executable being run immediately on the target machine, while a multi-stage exploit initially inserts code that can later pull in additional exploits. The more stages, the more complicated the delivery and the lower the chances of success, but some difficult targets do require more stages. Hardware vulnerabilities involve the manipulation of physical devices by spies, manufacturers, or even unsuspecting users. Human vulnerabilities result from social-engineering attacks, in which users mistakenly provide access information to attackers.
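The single- versus multi-stage trade-off can be sketched as an abstract model. Nothing here is working exploit code; the "server," stage names, and callables are all hypothetical placeholders used only to show why each extra fetch is one more step in the chain that can fail.

```python
# Toy model of staged delivery: a single-stage exploit carries its whole
# payload, while a multi-stage exploit lands a small loader that pulls
# further stages later. Every name below is an illustrative abstraction.
def single_stage(payload):
    """The entire payload executes immediately on delivery."""
    return payload()

def multi_stage(stage_server, stage_names):
    """Pull and run each stage in order; any missing stage aborts the chain."""
    result = None
    for name in stage_names:
        stage = stage_server.get(name)
        if stage is None:
            return None  # one failed fetch defeats the whole delivery
        result = stage()
    return result

if __name__ == "__main__":
    server = {"loader": lambda: "loaded", "payload": lambda: "effect"}
    print(multi_stage(server, ["loader", "payload"]))             # both stages succeed
    print(multi_stage(server, ["loader", "missing", "payload"]))  # chain aborts
```

The model makes the text's point mechanical: success requires every stage to resolve, so reliability drops as stages are added, even though staging keeps the initial footprint small.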
Software flaws that have been accidentally or deliberately introduced can subvert the intended purpose of a design (Lin, 2010). Zero-day attacks, for instance, exploit software weaknesses that have not been publicly disclosed and that can be bought and sold on the black market. By definition, they are most likely unknown to the target and can be exploited by the attacker. They are rare, and cyber coercion should not necessarily require these limited yet debilitating avenues of attack. Another common software flaw is failing to check for buffer overflows in loops, which can result in unauthorized privilege escalation (Rowe, 2009). Using escalated privileges to take control of the target machine undetected can be accomplished through rootkits. These delivery weapons can be hard to spot because rootkits often resemble other software modules. Once in a network, the malware can act as command-and-control, provide future remote access, conduct data exfiltration, manipulate data, or simply monitor actively, depending on the operational phase (as shown in Figure 1).
Military software targets can include C2 nodes, weapons systems, and communication links. Access to these targets is often possible from many nodes within the network. The presence of less secure, bring-your-own devices like smartphones and tablets can introduce vulnerabilities into even well-defended networks. These devices are often inconsistently configured and do not always comply with the requirements placed on enterprise-owned devices; unsuspecting users can download and introduce bad or poorly configured applications onto the network. Some of the most valuable targets, like weapons systems, will most likely be on disconnected networks, complicating their targeting. Isolated networks might have a web component or administrative interfaces that could allow attackers another way into the system. For instance, an attacker-owned website could be set up and DNS settings redirected to it. Malicious downloads could then occur from the fake site and make the system susceptible to SQL injection and cross-site scripting attacks (Mateski et al., 2012). Civilian infrastructure is also a possible target for cyber weapons but is not ideal for coercion or de-escalatory practices, as will be explained later. While not primarily targeted, civilians can still be used to provide easier access to more secure military networks. For instance, a home computer used to remotely log into a secure work network is an easier target to compromise than the edge devices governed by the stricter security policies found within the actual secure network.
Other common types of attacks include Trojans, viruses, worms, and logic bombs. Trojans are executable code hidden in commonly used applications and unknowingly launched when the innocent application is opened. They are normally just a payload and can be implanted through any access vector. Viruses and worms are more destructive in nature and would only be used at the outset of a conflict unless their payloads simply collected information. Viruses and worms have varying levels of replication and are commonly spread through shared files, e-mail attachments, and web downloads. When such malware relays data back to the attacker, victims might become aware that their systems are compromised. A logic bomb, by contrast, can be planted in software to go off at a specified time or under specified external conditions, and it needs no two-way communication, which is what most easily reveals the presence of an exploit. This allows access to be gained through a previously discovered exploit and triggered only when needed. Success of the attack will still be contingent on the level of defensive measures in place at the target, including access control, file monitoring, and up-to-date patches.
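The logic-bomb idea can be shown with a harmless sketch of just the trigger logic: the payload stays dormant until an external condition is met, so no command-and-control traffic is needed. The condition, dates, and payload below are all hypothetical placeholders, not any real malware design.

```python
# Conceptual sketch of a logic-bomb trigger. The implant checks a local
# condition (here, a date) and only then runs its payload; until then it
# executes nothing and communicates nothing, which is what makes logic
# bombs hard to detect by traffic analysis.
from datetime import date

def logic_bomb(today, trigger, payload):
    """Run payload only once the trigger condition is satisfied."""
    if today >= trigger:
        payload()
        return True
    return False  # dormant: no effect, no revealing traffic

if __name__ == "__main__":
    fired = []
    logic_bomb(date(2014, 1, 1), date(2014, 6, 1), lambda: fired.append("effect"))
    print(fired)  # trigger date not yet reached, so nothing fired
```

In practice the trigger could be any locally observable condition, which is why the text pairs logic bombs with coercion: the coercer can pre-position the effect and activate it by letting the condition come true.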
Depending on the level of attribution desired, event logs and timestamps can be modified to hide the presence of the attacker. Sometimes leaving behind code in a target system to show that an attacker was there, like a calling card, can create a great deal of doubt and uncertainty about the future reliability and performance of the target network. Force does not always need to be explicitly expressed if the underlying message is clear. The recent revelations in the news by Edward Snowden, while damaging to the U.S. government’s reputation, can actually strengthen U.S. cyber credibility abroad because the whole world now views the U.S. as a ubiquitous cyber force with many offensive capabilities. For instance, The New York Times recently reported that the NSA has implanted software into thousands of computers worldwide (Sanger & Shanker, 2014). The U.S. can capitalize on these suspicions of compromise when it attempts to tailor its threats and force to achieve a coercive purpose.
Techniques for reversible cyber attacks include encryption, obfuscation, withholding information, and resource deception (Rowe, 2010). Encryption involves encrypting the victim’s programs or data with a key only the attacker knows. Then, once conditions are met, the coercer can provide the decryption key. These encryption techniques can be implemented through rootkits and can even be set up to bypass conventional antivirus solutions. Rootkits hide low-level, sub-OS system resources to give attackers unrestricted network access. The encryption tactic can be defeated by resorting to backups, but confusion over which backup is most accurate, and the time needed to restore it, aids the coercer. Nations will undoubtedly try to deceive intruders by using confusing network and asset labeling. Encryption-based attacks will also be visible to the target, which can help facilitate negotiation. Obfuscation attacks attempt to confuse the target’s data organization. This is accomplished through random data insertions and remapping of data values. Reversible obfuscation requires careful record-keeping by the coercer. The target can work out the method of obfuscation, but the goal is to make this very time-consuming. If the attacker combines many techniques, it becomes even more difficult for the target to recover.
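The reversibility mechanism can be demonstrated with a small sketch: data is scrambled under a key only the coercer holds, and releasing the key repairs the damage exactly. A real attack would use a vetted cipher; this stdlib-only XOR keystream (SHA-256 in counter mode) is an assumption made purely to show that the transform is its own inverse.

```python
# Sketch of encryption-based reversible attack: the same function both
# scrambles and restores, because XOR with an identical keystream is
# exactly invertible. The key and data are illustrative placeholders.
import hashlib

def keystream_xor(data, key):
    """XOR data with a keystream derived from SHA-256(key || counter)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

if __name__ == "__main__":
    original = b"targeting database contents"
    scrambled = keystream_xor(original, b"coercer-only-key")
    # Handing over the key reverses the attack completely.
    restored = keystream_xor(scrambled, b"coercer-only-key")
    print(restored == original)
```

The point of the sketch is the guarantee in the text: if the target complies, the coercer can provably undo every byte of damage by releasing one small piece of information.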
Withholding-information attacks include denial-of-service and man-in-the-middle attacks. Denial-of-service is conducted by flooding targets with large numbers of packets and does not require the target to respond; common variants are smurf attacks, ping floods, and SYN floods. The DDoS attacks against Estonia in 2007 proved their effectiveness. The source address is typically spoofed, but this can be tailored to the level of attribution needed to aid coercion. For instance, an encoded signature can be placed in the low-order bits of the times of the attacks (Rowe et al., 2011). Denial-of-service can be accomplished with a military-owned and managed botnet. If a military creates its own botnet as a weapon system, that botnet will undoubtedly be a prime target for any adversary who wants to turn it against its controllers; control can be protected with cryptographic keys to make hijacking more difficult. Because denial-of-service is hard to make discriminate, and propagation risks collateral damage to civilian infrastructure, it is not always a viable option for coercion. Target sites may support multiple functions; by impacting the site, an attacker impacts all of them, some of which may serve the civilian population. Other sites might also be inadvertently affected through network connections to the target.
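The low-order-bit signature idea cited from Rowe et al. (2011) can be sketched as simple steganography over attack timestamps. This is a hypothetical simplification for illustration, not the actual scheme from the cited paper: the least-significant bit of each planned attack time carries one bit of a pre-agreed signature that the target, once told where to look, can verify.

```python
def embed_signature(base_times: list, signature_bits: list) -> list:
    """Set the low-order bit of each planned attack timestamp to one signature bit."""
    return [(t & ~1) | bit for t, bit in zip(base_times, signature_bits)]

def extract_signature(times: list) -> list:
    """Recover the signature from the low-order bits of observed attack times."""
    return [t & 1 for t in times]

# Hypothetical planned attack times (Unix seconds) and a pre-agreed bit pattern
# that proves authorship without overt attribution.
planned = [1_700_000_000, 1_700_000_060, 1_700_000_124, 1_700_000_190]
signature = [1, 0, 1, 1]
stamped = embed_signature(planned, signature)
assert extract_signature(stamped) == signature
```

Shifting each attack by at most one second is operationally invisible, yet it lets the coercer later claim the attacks in a verifiable way, which is exactly the tailored-attribution lever the paragraph describes.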
A man-in-the-middle attack is more precise. It is accomplished by injecting hardware or software into the target and intercepting subsequent communication. The attacker can then determine what information is allowed to leave or return; data can be encrypted or simply withheld altogether and returned once the coercer is satisfied. Data can also be sent to a location the attacker controls, which mitigates onsite storage problems. The target can attempt to bypass the man-in-the-middle, but the attacker can try to use servers and protocols the target cannot avoid; this requires extensive prior knowledge of adversary networks.
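The withholding behavior just described can be sketched abstractly (a toy model with hypothetical names, ignoring the injection and interception machinery itself): intercepted traffic is buffered rather than delivered, and released once the coercer’s conditions are met.

```python
class ManInTheMiddle:
    """Toy interception point: traffic is withheld rather than delivered
    until the coercer decides the target has met its conditions."""

    def __init__(self):
        self.withheld = []
        self.released = []
        self.conditions_met = False

    def intercept(self, message):
        if self.conditions_met:
            self.released.append(message)   # deliver normally
        else:
            self.withheld.append(message)   # withhold from the target

    def concede(self):
        """Target meets the coercer's conditions; withheld data is returned."""
        self.conditions_met = True
        self.released.extend(self.withheld)
        self.withheld.clear()

mitm = ManInTheMiddle()
mitm.intercept("logistics update 1")
mitm.intercept("logistics update 2")
assert mitm.withheld == ["logistics update 1", "logistics update 2"]
mitm.concede()
assert mitm.withheld == [] and len(mitm.released) == 2
```

The design choice worth noting is that nothing is destroyed: the coercive leverage comes entirely from delayed delivery, which keeps the attack reversible.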
Resource deception attacks attempt to deceive the victim; the resulting lack of confidence in one’s own information systems can aid the coercive effect. Through spyware or adware that injects false error messages or data, the target is convinced something is wrong with its systems. The attack is reversed by removing the malware and revealing the deception to the target. Components of the target operating system can be wrapped in coercer-designed software that behaves transparently until triggered by the coercer. The disadvantages of this tactic are that the target can simply reinstall programs to bypass such attacks, and the target may discern that the errors are a ruse. The tactic can nevertheless be scaled to generate the appropriate coercive effect.
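The idea of wrapping system components in coercer-designed software that behaves transparently until triggered can be sketched as a simple function wrapper (a hypothetical illustration of the control flow only, not an implant): until a trigger is armed, calls pass through unchanged; afterward they return false errors, and removing the wrapper reverses the attack.

```python
deception_armed = False  # flag flipped remotely by the hypothetical coercer

def deceptive_wrapper(func):
    """Pass calls through transparently until the trigger is armed,
    then inject false error messages to undermine confidence."""
    def wrapped(*args, **kwargs):
        if deception_armed:
            # Fabricated error: the hardware is actually fine.
            raise OSError("disk controller failure: sector unreadable")
        return func(*args, **kwargs)
    return wrapped

@deceptive_wrapper
def read_record(record_id):
    return f"record-{record_id}"

assert read_record(7) == "record-7"   # transparent while disarmed
deception_armed = True
try:
    read_record(7)
    raised = False
except OSError:
    raised = True
assert raised                          # false error once triggered
# Reversal: delete the wrapper (the malware) and reveal the ruse to the target.
```

This mirrors the paragraph’s point: the component itself is untouched, so the deception is fully reversible, but it is also why reinstalling the program defeats the attack.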
Many large-scale attacks exploit human error through psychology. The human element is often the weakest link in security, and common attacks include social engineering and exploiting insider access. Initial information can be collected by something as simple as “dumpster-diving,” going through the trash to collect sensitive information on targeted individuals. Other attackers pose as someone the target trusts and attempt to extract information through social engineering, as in phishing attacks. A typical lure is an official-looking e-mail with an attached file that the target is not expecting but that appears to come from someone it recognizes; when opened, the attachment installs malware that sends credentials and system information back to the attacker. These attacks are difficult to detect and can turn the network’s own applications against its users. The Syrian Electronic Army recently used this tactic to gain access to various online U.S. media sites (Dunnigan, 2013). Phishing attacks are difficult to defend against because of their scale, but they need not be broad-blanketing: spear-phishing, for example, targets a specific individual such as a high-ranking government official. Another human vulnerability is insider access, gained through a recruited agent or an unwitting user. This more closely resembles traditional espionage and, in the cyber sense, provides valuable access to closed networks. The insider can install malware through a removable physical medium such as a CD or USB drive, which is how Stuxnet was rumored to have been implanted at Natanz. Physical access is difficult to acquire but, once obtained, offers one of the easiest and most reliable methods of installing malware.
Applying the previously described technical cyber methods to achieve a broader coercive goal will now be explored. Cimbala (1998) identifies five key attributes of implementing a coercive strategy that are also applicable to cyber coercion: “influencing the will of the opponent, openness to revision, perspective taking, symbolic manipulation, and moral influence” (pp. 162–168). He writes that influencing the will of the opponent should be the primary objective, since all war is ultimately a contest of wills. The goal is to make the adversary rethink the risks and potential costs of continuing its current path through threats and the use of force; he emphasizes “calibrated, but not necessarily small amounts of destruction” (1998, p. 162). The second attribute, openness to revision, involves being willing to compromise and revise initial objectives without conceding to the opponent. Perspective-taking and symbolic manipulation involve a deep understanding of the adversary: it is critical to know what the enemy values in order to manipulate it, and predicting how it will respond makes for more successful bargaining. Successful coercion involves “avoiding battle rather than prevailing in it” (Cimbala, 1998, p. 165). Cimbala writes that the fifth tenet, moral influence, is perhaps the most important; it means finding the locus of easiest control, whether that be the ruling party, military leadership, or another center of power. There are parallels between these traditional coercion tenets and cyber coercion, and Figure 6 will be used for comparison. For this paper, the instrument will be cyber, and the desired outcome will be concessions of some kind. In actual application, cyber will be combined with other coercion instruments to strengthen the threat and use of force.
“Concessions of some kind” is admittedly nonspecific. In real-world application, concessions will take the form of an altered status quo or a return to the status quo. Measuring success is difficult for two major reasons: it is impossible to know whether an adversary would have yielded even without attempts to coerce, and it is difficult to assess a partial yield (Byman & Waxman, 2002). This is the reason for accepting degrees of success; it reiterates the bargaining aspect of state interaction. Byman and Waxman (2002) cite North Korea as an example. Another invasion of South Korea has never occurred, and North Korea has made little nuclear progress; yet its nuclear program remains intact, and its regime remains in power. Outcomes are always more complex than simple successes or failures. At times, a failure can be magnified by threats backfiring and creating a situation that is even worse. For instance, cyber attacks on North Korea might cause it to isolate itself further as a default protective mode, making future interaction more difficult. Ultimately, bargaining should identify the most dangerous outcomes and proceed from there. For the North Korea example, decision makers must weigh questions such as, “Is it worth allowing North Korea to acquire and maintain a nuclear program to prevent war with South Korea?” The end result will rarely be full compliance, but even forcing an adversary to the bargaining table can be a valuable accomplishment.
The mechanism in Figure 6 is finding the right target to influence. The goal is to tailor the threat and use of limited force toward the desired mechanism in order to increase the likelihood of generating the desired behavior. The ideal target is one the enemy values and one the coercing country can influence at minimal cost to itself. Usually, the things the adversary holds most valuable will also be the most heavily guarded. Cyberspace, however, allows a great deal of maneuver with regard to targeting: the leadership, the military, the economy, and the people are all candidates. Byman and Waxman (2002) mention five common areas of influence for coercion: “power base erosion, unrest, decapitation, weakening, and denial” (p. 50).
Power base erosion targets members of the ruling political group and attempts to lessen their influence. This is typically referred to as subversion, the deliberate attempt to weaken the established authority or order (Rid, 2011). It is possible with cyber coercion because world leaders are now more accessible and can more easily be targeted for influence operations. The NSA has recently proven the capability to hack foreign government leaders’ e-mail and mobile devices in Germany, Mexico, and Brazil (Welch, 2013). Since the human element is often the weakest link, these tactics provide a simpler way to gain additional access to internal systems. Going after the leadership is easier in the centralized structure of authoritarian regimes and could be done with something as simple as spear-phishing. For instance, the U.S. targeted the leadership of Iraq prior to attacking in 2003 by sending thousands of e-mails telling leaders they could not win a war against the U.S. and providing instructions to defect (Star, 2003). It is unlikely that social-engineering attacks or a misinformation campaign could achieve coercive success in isolation. Having only one target within a country can allow the enemy to ignore it, but the cumulative effect of many targets can force the enemy to yield. The downside to multiple targets is the risk of sending mixed messages; for instance, attempting to weaken the military can instead bolster public support for it and create a strong sense of nationalism. Power base erosion is also related to “decapitation,” the targeted killing of the leaders themselves. Although less applicable to cyber coercion, it was recently reported that the Iranian cyberwarfare commander was killed in a suspected assassination (McElroy & Vahdat, 2013).
Unrest is attempting to compel the adversary to yield from the bottom up by targeting the general population. Arquilla (2011) writes that the end goal should not be the total destruction of a society, but rather a surprising and notable effect. For cyber coercion, operations like publishing propaganda and messaging the general populace are easy to conduct but unlikely to be very powerful, or at least unlikely to get citizens to force their government to enact the desired change. Targeting the general population could also backfire by making the adversary government more popular for taking a stand against the meddling foreign coercer. North Korea reportedly maintains an online army of around 3,000 agents to garner support for the administration and undermine regime opponents (Firn, 2013). The agents attempt to create unrest by demoralizing the general population of South Korea; for instance, they steal the identities of South Korean Internet users and post North Korean propaganda on message boards and websites. This tactic attempts to foster a more North Korea-friendly government in South Korea by influencing its people. These operations try to influence the general population without getting overly messy; however, the mess is often what makes an operation provocative enough to generate the desired behavior.
When transitioning from information operations to force, Schelling (1966) writes that a nation should first target military forces rather than the civilian population. Even outside of the ethical dilemma of targeting the civilian population, its actual effectiveness has been questioned. For instance, nations showed impressive resolve during civilian bombing in World War II (Byman & Waxman, 2002). Another problem with cyber attacks is that because so many infrastructures are dual-use, it is hard to discern legitimate targets and isolate noncombatants. This is not to say that the threat of affecting a nation’s civilian population is useless. Schelling’s distinction was that targeting military forces retained the threat to target cities in the future, thereby retaining a coercive capability. This also reflects the phased aggression idea in offense-in-depth.
Weakening is targeting the social and economic cohesion of a country as a whole (Byman & Waxman, 2002). This has traditionally been accomplished through nuclear threats or economic sanctions. Cyber attacks can potentially provide both a counter-force option, “destruction of forces and military capability,” and a counter-value option, “destruction of social and economic values,” in one instrument (Cimbala, 1998, p. 88). Counter-force’s role in cyber coercion will be explored shortly in the denial section. While technically feasible, counter-value threats are weakened by a lack of credible follow-through, and counter-value attacks can create such desperation within the victim that they ultimately lead to war. Arquilla (2011) fears that some cyber attacks could so debilitate a nation that it responds with whatever weapon is still available, even a weapon of mass destruction. Although the denial-of-service attacks against Estonia in 2007 were merely protest, they showed a potential future use case for weakening. If, for instance, the Russians had stated they did not want the statue moved from Tallinn and threatened a denial-of-service attack if Estonia did not comply, that would qualify as cyber coercion. At the very least, the episode proved the vulnerability of a digitally interconnected nation such as Estonia to these types of attacks. Denial-of-service and web-defacement attacks can both be used as a nonthreatening form of influence since their effects can be turned off or repaired.
Denial strategies are focused on preventing military or political victory (Byman & Waxman, 2002). Pape (1992) states that “the coercer can damage the target state’s capabilities in ways that undermine the target’s expectation of military success” (p. 464). This is the counter-force option described previously. The intended audience is adversary leadership, and the message is that they will continue to suffer without concessions. There is an important distinction here: if the attack demonstrates that the adversary cannot succeed, it is coercion, but if the attack focuses on physically preventing success, it is brute force (Byman & Waxman, 2002). Rarely does any cyber attack used in isolation completely prevent an enemy from waging war. It will either need to be combined with the threat of kinetic operations or be used to enable other operations; it is a force multiplier. This has already been seen in warfare, in the Georgian crisis of 2008.
Pape’s denial strategies were focused on fielded forces in battle. Cyber acts can also influence war-making capabilities in real time by affecting military production, rerouting supplies to the battlefield, disabling air defenses, and disrupting communications from a distance, with the added benefit of not being over enemy airspace. There also seems to be potential for cyber coercion to impede the adversary’s strategy for victory prior to conflict, preventing any need for conventional operations to even begin. Unfortunately, “adversaries might be willing to pay the price of a military defeat to score a political victory” (Byman & Waxman, 2002, p. 80). During 1993–1994, the U.S. attempted to force North Korea to stop pursuing its nuclear program. Because North Korea was a neighbor and ally of China, and because of the fear of an invasion of South Korea, a U.S. ally, any airstrike or decapitation strategy by the U.S. was unfeasible; sanctions were the only option. At that time, cyber capabilities were not powerful enough to be effective, but now, with Stuxnet-like capabilities, the U.S. could use cyber coercion in a similar manner. Cyber coercion allows effects to be demonstrated quickly, but speed is also a weakness because cyber acts may not have the persistence needed to encourage lasting change. To be truly successful, the coercer must make the adversary realize its entire strategy is failing and therefore concede to some degree to the coercer’s demands or risk continued pressure. A risk is that the adversary might concede only temporarily, then start again later.
This chapter described examples of cyber attack methods and then introduced areas on which to focus those attacks to exert the most influence. The emphasis on intelligence gathering and operations early in a conflict is twofold. First, it highlights many of the technical requirements needed to successfully use cyber as an instrument of coercion. Second, it focuses on the overarching goal of avoiding costly wars while still exerting influence through limited cyber force and kinetic threats. Using a cyber capability for coercion is not contingent on its being used early in a conflict. Choices of operations early in a conflict will certainly shape outcomes, but coercion implies an intentional effort to steer behavior in a specific direction; attacks must be tangibly linked to a desired adversarial behavior and could be used at any time in a conflict. Operations late in a conflict can still be coercive: the bombing of Hiroshima, for instance, was meant to coerce a Japanese surrender. That example, however, reflects the increased force required to compel an adversary to concede once it has invested more in the conflict. The goal of offense-in-depth is to use cyber coercion as one instrument in a phased aggression model to ultimately avoid total war. Admittedly, that is an idealistic assertion, but since the opportunities for coercion recede as total war approaches, avoidance of total war is at least a boundary condition for cyber’s coercive potential.
Before undertaking any actions, the coercer must ask whether it has the means and the will to influence the subject’s estimate of cost and benefits. This underscores the importance of understanding the enemy, gathering intelligence, prepping the battle-space, and knowing the coercer’s own military and political limitations. “Technology allows the U.S. to use force selectively but still decisively,” and “can be an enabling force for the development of a coercive military strategy” (Cimbala, 1998, p. 161). Coercion may be ineffective in some cases depending on the adversary and its state of mind, and this can be an equal if not greater problem when cyber is the instrument used to coerce because of its degree of indirection. As to whether cyber coercion should operate by itself, the answer is most likely no; it is merely one instrument of coercion. Depending on the situation, it may be more or less difficult to cripple each target area and successful coercion will most likely require multiple instruments.
The real target in war is the mind of the enemy commander, not the bodies of his troops. (Hart, 1944, p. 48)
The goal of this scenario is to investigate conducting an attributable coercive cyber attack, with the threat of additional kinetic operations, in the hope of de-escalating a conflict between the U.S. and China. “A coercive strategy is necessary when one side cannot clearly conquer or subdue the other” (Cimbala, 1998, pp. 158–159). “The more symmetrical the relationship, the smaller the margin of error in any military encounter undertaken as part of a coercive project and thus the higher the potential implementation costs” (Freedman, 1998, p. 32). This is applicable to a scenario involving the forceful unification of Taiwan by China and attempts by the U.S. to intervene. Without a cyber option, the U.S. will be either less likely to intervene, because doing so would seem an unnecessary waste of life and resources, or more likely to skip straight to a kinetic response. Cyber coercion gives the U.S. an avenue of response that forces the Chinese to decide whether to continue or even escalate the conflict. The U.S. must communicate to the Chinese the link between the initial cyber threats and force, and the U.S. desire for the Chinese to stop unification efforts and return to the pre-unification status quo. The conflict may not be permanently resolved and true lasting peace may not be established, but that level of success is preferable to more war. The risk is that use of a cyber capability might be a “bridge” to a shooting war that would otherwise be avoided. It is always hard to control the situation when trying to persuade an enemy to do something it would prefer not to do, which is why this scenario focuses on using coercion selectively and only in a most-dangerous type of situation.
The Chinese are developing the capability to affect the U.S. cyber domain. “PLA leaders have embraced the idea that successful warfighting is predicated on the ability to exert control over an adversary’s information and information systems, often preemptively” (Krekel, Adams, & Bakos, 2012, p. 8). The Chinese have a concept of “information confrontation” that seeks to integrate both electronic and non-electronic, and both offensive and defensive operations, under a single warfare commander (Krekel et al., 2012, p. 8). Beijing also understands the legal and policy gray area surrounding cyber operations and may seek to exploit it against the U.S. Most of the operational control for the PLA’s computer network operations resides in the Third (3PLA) and Fourth (4PLA) Departments of the General Staff. The 3PLA is most likely responsible for defensive and exploitation operations, while the 4PLA is in charge of attack missions (Krekel et al., 2012, p. 9; Ball, 2011). The Mandiant (2013) report highlighted one of the most prolific cyber espionage groups out of the General Staff 3rd Department known as Unit 61398.
Chinese government and civilian sector information technology often have a symbiotic relationship, which shows a nation making the most of cyberspace. “Military writings on information confrontation and CNO in particular highlight…the value these tools offer, particularly as a preemptive measure to force the enemy to concede before escalation to full-scale combat” (Krekel et al., 2012, p. 29). China seems to be preparing to protect against the cyber vulnerabilities it sees in the U.S. and elsewhere. Microsoft’s security blog reported that “China had the lowest malware infection rate…of any of the 105 locations included in volume 13 of the [Microsoft] Security Intelligence Report,” which referred back to 2012 (Rains, 2013). This may be due to China’s inherent resistance to English-language phishing and English-language malicious websites, a major source of malware in the Western world. There are still many holes in China’s cyber defense networks due to the widespread use of pirated software, which is even present in government industries and state-owned companies (Wan, 2013). The National Computer Network Emergency Response Technical Coordination Centre in Beijing reported, in March 2011, that more than 4,600 Chinese government websites had content modified by hackers in 2010, which is a 68% increase from the previous year (Ball, 2011). Chinese businesses also operate on highly fragmented networks, which could contribute to a weakness in network security; bank accounts and cellphone accounts are often on different systems depending on the provinces the employees work in (Wan, 2013). The issue of the degree of vulnerability of the Chinese military is more difficult to ascertain.
With the U.S. military undergoing a pivot to the Pacific and planning to increase its presence in the region, China now realizes it will have to come face to face with the U.S. again (DOD, 2012b). Some in China see the U.S. as in decline and believe it is China’s turn to reassert itself in the Pacific region. “China may be far wealthier and more influential than at any time in history, but its leadership is at its weakest since the beginning of the revolution that swept in the communists” (Sanger, 2012, p. 372). The military is also more autonomous than it once was. There are three major options for China: the first is to bide its time and focus on domestic issues; the second is to openly compete with the U.S. while still working on overlapping issues like climate change, piracy, and the protection of intellectual property; the third is for China and its military to take the lead in the region and be openly hostile to the West (Sanger, 2012).
Regardless, the U.S. and China need to make sure they do not fall into the Thucydides trap. This is in reference to Thucydides’ statement that the Peloponnesian War was started by Spartan fears of the Athenian rise to power (Holmes, 2013). There is now fear on both sides of the Pacific that war is inevitable between China and the U.S. This fear is a reflection of power transition theory. This theory states that throughout history, as the international order is altered amongst great powers, the potential for war increases (Lai, 2011). This “trap” metaphor does not provide a direct correlation to U.S. foreign policy with regards to China. In the near term, China seems content to become a regional rather than a global military power (Holmes, 2013). This does not remove the risk of conflict as local countries may solicit the U.S. every time China tries to assert itself within the region.
The current U.S. Pacific Command commander, Adm. Locklear, stated that the Pacific is becoming “the most militarized region in the world” (Munoz, 2014), and the National Intelligence Council states that the “risks of interstate conflict are increasing” (NIC, 2012, p. viii). China demonstrated coercive diplomacy on November 23, 2013, when it announced an “air defense identification zone” (ADIZ) around the Senkaku Islands (Hsu, 2014). An ADIZ gives a state time to identify an aircraft’s type, nature, and location before it enters national airspace. China could be instigating small-scale disputes with surrounding nations to distract the Chinese general population from domestic issues like pollution and corruption; such foreign distractions help instill a sense of nationalism, especially against a country as hated as Japan. To the U.S., the Senkakus may seem like meaningless islands in the East China Sea, but confronting the Chinese over a small issue now can prevent reacting too late over something more meaningful later. Some sources speculate that China hopes to set up another ADIZ in the South China Sea, where it has disputes with Brunei, Malaysia, the Philippines, Vietnam, and Taiwan (Hsu, 2014).
The Chinese rise represents the “largest shift in the distribution of world power since the fall of the Soviet Union” (Miller, 2013). China is not an outright enemy of the U.S., but it is clearly a competitor; therefore, any demonstration of power by the Chinese, no matter the location, should concern the U.S. The U.S. is not required to get involved in every incident, but communicating perceived faults to the Chinese is important. That is the reasoning behind the U.S. administration’s decision to fly B-52s through the Chinese ADIZ (Miller, 2013; Shanker, 2013). Coupling this act with public criticism and rallying regional support exemplifies traditional U.S. coercive techniques. By remaining proactive, the U.S. can continue to exert influence in the region. The most difficult decision for the U.S. is balancing the high cost of actively intervening all over the world against the risk of shying away from overseas commitments and thereby minimizing U.S. influence (Sanger, 2012). The U.S. cannot risk being perceived as in forced retreat internationally, even during a military drawdown and a constrained budget.
The Chinese have also used small-scale patriotic hacking in response to regional island disputes in the South and East China Seas throughout the 2000s. When a Chinese trawler collided with two Japanese Coast Guard vessels patrolling off the Senkaku Islands in 2010, patriotic hackers in both China and Japan responded by defacing websites (Minemura, 2011; Zhou, 2012). The Filipino government and news agencies had their websites defaced and suffered denial-of-service attacks in April and May of 2012 in response to a disagreement with the Chinese over the Scarborough Shoal region (Segal, 2012). Disagreements with Vietnam in the South China Sea in 2008 resulted in Chinese invasion plans for Vietnam appearing on Chinese websites (Nye, 2011). Smaller ASEAN countries often request U.S. protection from Chinese regional aggression, but the Chinese view this as foreign interference in their backyard. “The biggest check on Chinese power is China itself. Every time Beijing overreaches, its neighbors get scared and seek deeper alliances with Washington” (Sanger, 2012, p. 416).
These examples show China’s willingness to use technology in military acts to force a political outcome (Norton, 2013). This may indicate dissatisfaction with purely diplomatic overtures and may suggest how China would react in future conflicts. The Chinese seem to have no desire to escalate but rather to compel regional nations to the bargaining table and deflect the international spotlight. The last Taiwan Straits Crisis occurred in 1995–1996, when China reacted to a perceived growth in Taiwanese independence. The period was marked by large-scale exercises, underground nuclear tests, ballistic missile tests, amphibious exercises, and live-fire exercises (Norton, 2013). China sent a clear message to the U.S. and the rest of the world that it did not want any external influence. The U.S. responded by mobilizing a large portion of the Navy to the region. Ultimately, China’s intimidation tactics proved unsuccessful, but its recent actions in the South and East China Seas show a pattern of increased aggression. Without a strong U.S. presence, the Chinese could feel emboldened to once again try to reclaim Taiwan.
Suppose China declares its plan to reunify Taiwan with the mainland and begins to mobilize its forces. Because of the inflammatory nature of these actions, China does not employ quick-strike kinetic forces, given the potential for the entire region to erupt in conflict. Suppose the U.S. plans to use cyber coercion to cause China to draw down its mobilization and prevent an actual invasion that would force the U.S. to honor the Taiwan Relations Act. The Act (1979) states the U.S. will “consider any effort to determine the future of Taiwan by other than peaceful means,” including boycotts and embargoes, a grave concern (Kan, 2011, p. 35). This provides some ambiguity in promising military support to Taiwan. The document specifically mentions the common instruments of military coercion of its time, so it is reasonable to assume that cyber coercion could be covered in a modern application.
The mobilization of Chinese forces is the ideal time to use cyber coercion, because once China actually deploys against Taiwan, it is demonstrating extreme resolve, which lessens the likelihood of U.S. coercive success. This explains the emphasis placed on the outset of the conflict. Coercion is also successful only if the expected behavior is clearly communicated; in this case, the use of cyber force must be tied to the U.S. desire for the Chinese to return to the status quo ante. Any cyber acts must also be combined with a statement of consequences promising direct military intervention should China not concede. Target selection for cyber coercion is not much different from that of conventional air strategies, but with the added incentive of reversibility. Since successfully acquiring Taiwan would be impossible without the military, U.S. cyber threats and limited force should be prioritized against operational targets on which the Chinese war-waging capability depends (Cimbala, 1998). Targets could include integrated air-defense systems (IADS), military and political leadership, and deployed forces. The U.S. should also be prepared for preemptive and retaliatory cyber strikes from the Chinese.
As described in Chapter III, the best mechanism is a denial strategy focused on military targets and the Chinese war-making ability. Since the behavior the U.S. objects to is military aggression aimed at Taiwan, targeting the capabilities on which that behavior depends is logical. This assumes sufficient intelligence preparation has been completed in advance to provide the necessary access. The U.S. must be mindful that some forms of cyber attack could be viewed as acts of war, especially a first strike without provocation. Based on Chinese cyber doctrine, however, the Chinese will most likely try to conduct a quick cyber strike against Taiwan while simultaneously delaying and degrading the U.S. armed forces (Mazanec, 2009). For instance, targeting U.S. logistical systems could misdirect materials and delay resupply operations. “China’s cyber-warfare capabilities are very destructive, but could not compete in extended scenarios of sophisticated IW operations” (Ball, 2011, pp. 101–102). Even if prolonged cyber operations are unlikely, the U.S. will need to be prepared to operate initially in a denied environment. Determining who fired the first true shot will be difficult because both China and the U.S. are assumed to be using cyber attacks to influence each other.
In this scenario, the U.S. is acting in support of an ally in the hope that an initial show of force will signal to the Chinese that reunification is not worth the costs. The goal is to play off the reasonable fear of a costly war breaking out between the U.S. and China. Launching a strategic cyber attack on the Chinese mainland would most likely be too strong an action at the outset. By focusing on an operational, attributable cyber attack, the message to the Chinese is that the U.S. is getting involved only because of its disapproval of China’s regional aggression. The primary objective is to complicate and delay Chinese success rather than remove the Chinese war capability outright. This should encourage a return to the pre-conflict status quo rather than an escalation driven by Chinese fear of total defeat. A phased-aggression approach using various acts of cyber coercion would, with luck, allow both sides to air their differences in the cyber domain without the need to go kinetic or engage in unrestricted cyberwar. This course of action is explored further below.
Targeting the power base, in this case the political leaders, may not be best in this scenario because it could anger the leadership and bolster its resolve to continue down the path of reunification. Traditional demarches should be issued at the outset; this demonstrates the conventional political confrontation that would occur in any conflict of this scale. Government decisions are rarely unanimous, however; undoubtedly, factions within the Chinese government oppose the reunification effort. Selective messaging through targeted e-mail could be attempted to bolster the opinions of these groups. Draining the bank accounts of the most powerful and wealthy supporters of the regime, or sending e-mails announcing the presence of the U.S. fleet and some of its unclassified capabilities, would have powerful effects. Spear-phishing attempts can place spyware and adware on machines to annoy users and create doubt in the security of their systems. By injecting false error messages or data, targets can become convinced something is wrong with their systems. These represent reversible, resource-deception-type attacks that can be removed later. They are not the primary focus and would be used only to strengthen the overall denial course of action.
Encouraging unrest should also not be the primary focus because unrest is already a sensitive domestic issue within China. If all China has to look forward to is stepping back from Taiwan and then quelling widespread domestic unrest, it would be less inclined to end its unification effort. The ruling party’s fear of losing power is nonetheless a motivating bargaining chip that could be used later if other measures fail, because maintaining regime security is arguably more important to Beijing than regaining Taiwan. DNS redirects to pro-U.S. and anti-reunification sites could be set up, along with web defacements, to increase the quantity of attacks without being too inflammatory. A military-operated botnet could also stand by for denial-of-service attacks. China’s strict Internet censorship represents a good single point of failure for the U.S. to target. For instance, on January 21, 2014, nearly all of China’s 500 million Internet users could not load websites for almost eight hours (Perlroth, 2014). The incident was most likely triggered when the nation’s DNS redirected users to normally blocked sites, causing the Great Firewall to conduct, in effect, an internal denial-of-service. Since a denial-of-service could also affect civilian access to the Internet, it should be used sparingly at the outset of the conflict. This scenario would not be just a military or political fight: China has its own patriotic hacking netizens ready to join with or without their government’s blessing, an escalation risk that both the U.S. and China should fear.
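A mass-misdirection event like the January 2014 outage is visible in resolver data alone: when a large share of unrelated domains suddenly resolve to a single address, resolution has almost certainly been redirected wholesale. The following is a minimal detection sketch; the domain names, addresses, and threshold are illustrative assumptions, not data from the incident.

```python
from collections import Counter

def detect_dns_sinkhole(resolutions, threshold=0.8):
    """Flag a mass-misdirection event: if at least `threshold` of
    unrelated domains resolve to the same address, resolution is
    likely being redirected wholesale. Returns the suspect address,
    or None if resolution looks normal."""
    if not resolutions:
        return None
    ip, count = Counter(resolutions.values()).most_common(1)[0]
    return ip if count / len(resolutions) >= threshold else None

# Hypothetical resolver output during a mass-misdirection event:
observed = {
    "news.example.cn": "203.0.113.7",
    "shop.example.cn": "203.0.113.7",
    "mail.example.cn": "203.0.113.7",
    "blog.example.cn": "203.0.113.7",
    "cdn.example.cn": "198.51.100.14",  # one legitimate answer
}
print(detect_dns_sinkhole(observed))  # → 203.0.113.7
```

In the 2014 outage, most lookups in China reportedly returned one address for hours, which is exactly the single-sink pattern this check flags.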
The Chinese have embraced an anti-access/area-denial (A2/AD) strategy that relies on high levels of systems integration (Libicki, 2011). The Chinese value their sensor, surveillance, and missile systems and believe these provide the best chance of deterring, and ultimately defeating, the U.S. in a scenario just like this one. Targeting, and ideally disabling, these systems would send a powerful message that could encourage bargaining. These systems, however, are most likely well defended and indigenously produced, which limits the available attack vectors and reinforces the importance of continuous digital intelligence collection. By targeting these systems at the outset of a conflict as part of a denial strategy, the U.S. can prove to the Chinese that their military strategy is no longer worth pursuing. The confidence the Chinese have, or do not have, in their information systems will directly contribute to the success of cyber coercion. If this is effectively communicated, the costs of continuing may come to outweigh the costs of returning to the status quo ante.
It is assumed that the U.S. has taken the necessary steps in phase 0/1 operations (see Figure 1) to collect intelligence and find vulnerabilities in Chinese systems. Waiting until phase 2 to introduce malware makes the task increasingly difficult: deployed U.S. forces might not be able to safely acquire the proximity needed for malware delivery, and an increased Chinese defensive posture could make remote operations less successful. The U.S. military might have to rely on national intelligence agency support from the NSA and the CIA, since these organizations have likely established additional footholds in adversary networks that can transition from exploitation to attack during conflicts. Malware can be placed through the primary access vectors of infected storage media, phishing, malicious software, and supply-chain interference. Infected storage media and supply-chain interference require the most lead time and capitalize on the human and hardware vulnerabilities described in Chapter III. Such attacks can take advantage of Stuxnet-like propagation through removable storage devices placed on Chinese systems by recruited spies or unwitting users. Additional access could be established by previously seeding Chinese forums with free copies of software containing built-in vulnerabilities.
Once access is acquired, targets like IADS and deployed naval vessels could be compromised at the outset of a conflict. Malware could be implanted through phishing scams or traditional network hacks and then lie dormant in memory-resident programs. Since persistent presence is the primary objective, dormant malware must ensure it executes when the system initializes or when the operating system is configured through system calls (Purisima, 2010). Logic bombs can then remain in memory space until required, avoiding security defenses by spoofing process names, modifying registry entries, and placing links in AutoStart entries. Logic bombs lying dormant in memory could be activated by hijacking Chinese shipboard access points to reach the wireless network. Such attacks can also use encryption and obfuscation to make their effects reversible, offering an incentive to the Chinese should they return to the pre-conflict status quo. Chinese warfighters might not want to bet their lives on weapons systems and platforms that may have been infiltrated and may no longer be reliable. The reversibility of the attacks does not have to be disclosed immediately and can itself be used as a bargaining chip.
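The reversibility property this paragraph leans on can be illustrated abstractly: an effect implemented as an invertible transform can be undone completely once the key is handed over. A toy sketch follows, using XOR purely to demonstrate invertibility; the data and key are invented, and a real reversible effect would rest on proper cryptography, not this scheme.

```python
def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR each byte against a repeating key. The transform is its own
    inverse, so applying it twice with the same key restores the
    original bytes exactly -- the property that lets an actor
    guarantee restoration simply by disclosing the key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

original = b"navigation waypoint table"  # hypothetical target data
key = b"bargaining-chip"                 # withheld until concessions
scrambled = xor_transform(original, key)
restored = xor_transform(scrambled, key)

print(scrambled != original, restored == original)  # → True True
```

The design point is that reversibility is a property of the mechanism, not a promise: the coercer can prove restoration is possible without having to trust, or be trusted by, the target.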
If simply modifying China’s operating picture and data does not induce compliance, more pressure can be exerted, as long as the increase is also communicated to the Chinese as a consequence of their failure to yield. This follows the offense-in-depth model described in Chapter I. The U.S. military could conduct denial-of-service attacks against the 4PLA, the department in charge of attack missions in China, to weaken its offensive capabilities. This would be similar to the attack of August 2013, when a massive DDoS reportedly took down China’s .cn country-code top-level domain for several hours (CSIS, 2014).
More specific attacks could focus on air or seawater intakes, inhibit power systems, or disable and disrupt electronics onboard vessels bound for Taiwan. Shipboard machinery-control systems (MCS) are a form of industrial-control system. In March 2007, at the behest of the Department of Homeland Security, the Idaho National Laboratory demonstrated the ability to destroy a diesel generator by remotely hacking it (Ireland, 2012). The test is codenamed Project Aurora, and a video of the attack can be found online. It showed the vulnerability of industrial-control systems to remote attacks, and the threat leapt from theory to reality with Stuxnet, which “used at least eight different propagation mechanisms, including USB drives, PLC project files, and print servers, to work its way into the victim’s control system” (Byres, 2012, p. 2). Because the reliability of industrial-control systems is so important, the equipment is rarely replaced and is often several generations behind the latest technology. This presents a security problem: while threats evolve with the latest technology, older systems can lack the ability to implement current security practices in a timely manner (Galloway & Hancke, 2013). Figure 8 shows many of the possible paths, denoted by red lines, into a control system (Byres, 2012, p. 2). The bring-your-own devices found in many networks add further attack vectors; even low-power smartphones present opportunities for detection and targeting.
Militaries are plagued by problems with patching systems to fix security flaws because patching often interferes with ongoing mission readiness. The same is likely true for Chinese industrial-control systems. Indigenous Chinese equipment is simply not as reliable as foreign hardware at this time; the more critical the network, the more likely it will require better-made foreign equipment, which increases the possibility of the U.S. compromising the supply chain. Documents leaked by Edward Snowden revealed that China was among the top targets for cyber operations by U.S. intelligence services in 2011 (Wan, 2013). China’s Ministry of State Security admitted in October 2007 that foreign hackers originating from Taiwan and the U.S. had been stealing information in key Chinese areas, and “in 2006, when China’s China Aerospace Science & Industry Corporation (CASIC) Intranet Network was surveyed, spywares were found in the computers of classified departments and corporate leaders” (CSIS, 2014, p. 2). In March 2013, malware-infected computers in mainland China totaled nearly 1.4 million, of which 0.4 million were controlled by Trojans or botnets and the other one million by Conficker (Paganini, 2013). These reports primarily concern network intrusions for espionage in the Chinese civilian sector. While espionage does not equate to warfare, access for both objectives is obtained similarly: if foreign actors can gain entry into poorly defended systems to steal, it can be assumed they could also gain access to destroy or modify, given sufficient motivation. The claim that the military is better prepared and more secure than civilian networks is unconvincing because practices in one sector are often mirrored in the other.
It is important to make MCS attacks reversible, or at least relatively fixable, because many of these systems have safety implications. Shutting down a Chinese vessel in the middle of the Taiwan Strait could be both provocative and dangerous, and creating just enough risk to demonstrate U.S. resolve without forcing the Chinese to escalate will be a fragile balance. Since the Chinese have launched their own cyber attacks in response to lesser provocations, attacking their maritime platforms can be expected to incur retaliation. A new U.S. cybersecurity initiative emphasized the importance of the government and the private sector working closely together to share best practices and better manage the risk to U.S. critical infrastructure (Castelli, 2014). This should allow the U.S. military to benefit from new private-sector defensive measures in the future. The U.S. military will also have to ensure that its intranets are insulated and that it has trained against these types of attacks so its own networks are sufficiently survivable.
“By 2030, no country—whether the U.S., China, or any other large country—will be a hegemonic power” (NIC, 2012, p. 19). Since neither the U.S. nor China will have outright superiority, the Pacific region will exist in a state of artificial stability. The “History of power transition is filled with bloodshed,” and while both the U.S. and China currently seem willing to proceed without unnecessary loss of life, Taiwan represents a contentious and unsettled conflict (Lai, 2011, p. ix). The outcome will depend largely on the motivation levels of both countries. The greatest risk is total war and the subsequent collapse of the global economy. By capitalizing on those fears, it is unlikely that a rational nation like China would escalate a conflict in which the U.S. is using only operational-level cyber attacks for coercion. By simultaneously promising increased cyber escalation and eventual conventional military force if the coercive attempts are ignored, the U.S. can hope the Chinese will see the costs of yielding fall below the costs of prolonged resistance. China is a country whose people and government are not in perfect equilibrium, and the U.S. has an opportunity to apply just enough external stress to tip the scale back toward the current balance of power. Cyber force provides a less violent opportunity to resolve a conflict and return to the status quo ante without significant loss of face. While this has the potential to spiral out of control, the threat of total war between two peers and the greater risk of worldwide economic instability outweigh the escalation option. The two nations’ economic interdependence is akin to the Cold War principle of mutually assured destruction.
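The cost logic running through this paragraph can be made explicit as a Schelling-style expected-cost comparison: coercion succeeds when yielding becomes cheaper than resisting. The sketch below is a deliberately simple model, and every number in it is an illustrative assumption, not an estimate of any real quantity.

```python
def coercion_succeeds(p_war, cost_of_war, cost_of_cyber, cost_of_yielding):
    """The target concedes when the expected cost of continued
    resistance (ongoing cyber pressure plus the probability-weighted
    cost of escalation to conventional war) exceeds the cost of
    yielding. All inputs are in arbitrary, illustrative units."""
    return cost_of_cyber + p_war * cost_of_war > cost_of_yielding

# A credible threat of escalation (p_war = 0.4) tips the balance:
print(coercion_succeeds(p_war=0.4, cost_of_war=100,
                        cost_of_cyber=10, cost_of_yielding=30))   # → True
# With no credible threat (p_war = 0.05), resistance stays cheaper:
print(coercion_succeeds(p_war=0.05, cost_of_war=100,
                        cost_of_cyber=10, cost_of_yielding=30))   # → False
```

The comparison captures why the coupled threat of conventional force matters: cyber costs alone (10 units here) never exceed the cost of yielding; only the probability-weighted war term makes concession rational.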
In the absence of unequivocal ethical guidelines, the actor who launches first could still face rebuke from the international community. This risk can be reduced if targeting focuses only on the adversary military and the systems it uses, avoiding dual-use infrastructure entirely. By using phased aggression in this scenario, the U.S. demonstrated its commitment and resolve and forced the decision to act or not to act onto the Chinese. The U.S. attempted deception tactics and reversible data modifications at the outset, and only when the Chinese refused to yield did it initiate actual destruction. Even then, targets like shipboard MCS are consistent with coercive strategy because they do not completely eliminate the Chinese ability to fight. Targeting MCS also reinforces the link between the target and the expected behavior modification: if China wants to take Taiwan, it will require amphibious support ships, making them an ideal target for a denial strategy. If the Chinese also view cyber operations through a coercive lens, they might be unwilling to escalate at first use; by using cyber force coercively, the U.S. would be employing a tactic the Chinese recognize. By coupling that limited coercive use of force with the threat of follow-on kinetic operations, even with in-kind Chinese retaliation, both sides might be able to step back after firing cyber volleys.
Another issue, outside this scenario, is that decision makers might be unwilling to use limited cyber exploits in the early stages of a conflict, preferring to save them for when war actually breaks out. This might be one of the biggest obstacles to cyber coercion. Military capability, and especially cyber capability, is not inexhaustible; some malware is highly perishable, and as stockpiles deteriorate, so too does bargaining power (Freedman, 1998). Adversaries like China might think they can wait out coercion, and they can use counter-strikes or counter-coercion to shape the U.S. decision whether to intervene further. Targeting the Chinese military’s ability to fight might compel it to abandon its objectives, or it could create feelings of despair in the ruling faction as the risk of unrest or regime change increases. A military-denial strategy might also backfire; for instance, China might concede on Taiwan only if the U.S. pulls its military out of Japan. In the end, Libicki writes that if the target state believes it has been cyber attacked, its new estimate of the potential war’s outcome is now worse, and it still retains the choice to go to war, then “one might conclude that its desire to go to war would be reduced” and “war is inhibited” (2011, p. 139).
Past efforts to wage strategic warfare have often failed because of inadequate understanding of the ways that applying force related to the political objectives sought rather than because of the shortcomings of the technological tools for inflicting pain. (Rattray, 2001, p. 480)
Existing thought on traditional coercion is varied and lacks overarching commonalities. This does not take away from its potential application in the cyber domain, and Schelling provides an excellent model to build from. Cyber coercion could play out in two ways: it might create an outlet for states to air their grievances and then move on, or it could create escalatory pressure that exacerbates an existing conflict. Which of these prevails remains to be seen. States sometimes jump to kinetic attacks regardless of whether it is in their best interest; cyber coercion should be viewed as just another tool, not necessarily a progressive step on the path to war. Even weak instruments can be combined to comprehensive effect in a larger coercion framework (Byman & Waxman, 2002).
The importance of credibility, persuasiveness, and clear communication in cyber attacks should be explored further. Only a few historical examples strengthen cyber credibility for future coercive use: the power of brute-force sabotage in Stuxnet, the vulnerability of digitally interconnected nations like Estonia, and the force-multiplying effect of cyber acts paired with follow-on kinetic strikes in Georgia. The danger and importance of coupling conventional might with cyber threats also deserves further study: on one hand, it enhances the power and persuasiveness of the cyber threat; on the other, it might make adversaries think they have no choice but to act first. Self-attribution is a double-edged sword for communication. It opens the door wider for retaliation, but it is vital for effective signaling. Private communication channels between adversaries offer good potential here.
As mentioned in the introduction, the use of cyber compellence to stop continual intrusions by nation-states for intelligence gathering was not explored in this paper. Byman and Waxman (2002) describe “second-order coercion,” which might be a method to stop existing intrusions. They draw parallels to how the Israelis dealt with incidents caused by non-state actors that fell below the threshold of actual attacks: the Israelis used reprisals against whatever nation was backing or harboring the non-state actors, especially Jordan when it became a base for Palestinian operations after the 1967 war (Byman & Waxman, 2002). In the cyber domain, the U.S. could hold Russia or China responsible for the groups within their borders that continue to penetrate U.S. networks, and respond in kind with attributable cyber acts. The problem remains that a majority of nation-state-sponsored network intrusions are for intelligence gathering and theft. The U.S. may never be able to launch acts of cyber reprisal costly enough to outweigh the benefits adversaries receive by allowing persistent theft and intelligence collection to operate from within their borders. Another reason compellence is ineffective in stopping persistent network intrusions is that the strength of the threat derives from the backing of conventional and nuclear forces. Outside of an actual cyber attack, it seems unlikely that cyber coercion could be used in this manner: the military is not usually considered a response to espionage or the theft of intellectual property, and without the threat of military escalation there would be no credibility in trying to compel through cyber.
For cyber coercion to work, constant surveillance and intelligence gathering against adversaries is needed; that is the only way to provide sufficient lead time for exploits in time-sensitive operations. In addition, as vulnerabilities are exposed, they will be patched and eliminated as avenues for exploitation, which further stresses the importance of network intelligence gathering. This is also critical on U.S. networks, because whether or not the U.S. adopts a cyber coercion strategy, these tactics will undoubtedly be used against it. “The data obtained from a successful penetration test often uncovers issues that no architecture review or vulnerability assessment would be able to identify…findings include shared passwords, cross-connected networks, and troves of sensitive data sitting in the clear” (Kennedy et al., 2011, pp. xiii–xiv).
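The kind of finding the Kennedy et al. quotation describes, sensitive data sitting in the clear, can be surfaced by even a crude audit pass over configuration files. The sketch below is a minimal illustration; the field names and sample config are invented, and a real audit would use a far richer rule set than this one regular expression.

```python
import re

# Flag configuration lines that appear to store credentials in the
# clear. Field names here are common conventions, not an exhaustive
# or authoritative list.
SECRET_FIELD = re.compile(
    r"^\s*(password|passwd|secret|api_key)\s*=\s*(\S+)", re.IGNORECASE)

def find_cleartext_secrets(config_text):
    """Return (line_number, field_name) for every line that looks
    like a credential stored in plaintext."""
    hits = []
    for lineno, line in enumerate(config_text.splitlines(), start=1):
        m = SECRET_FIELD.match(line)
        if m:
            hits.append((lineno, m.group(1).lower()))
    return hits

# Hypothetical config file contents:
sample = """\
host = db01.internal
password = hunter2
api_key = ABCD-1234
timeout = 30
"""
print(find_cleartext_secrets(sample))  # → [(2, 'password'), (3, 'api_key')]
```

Checks like this are exactly the defensive counterpart of the surveillance the paragraph calls for on U.S. networks: the same weaknesses an adversary would catalogue can be found first by the defender.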
If the U.S. begins to use cyber coercion, it could reap short-term benefits and begin to set a precedent. Its use could actually lead to more security and better international cooperation, similar to the dynamic triggered by Edward Snowden’s disclosures. With the recent exposure of the NSA’s online surveillance, some leaders of the Internet’s technical infrastructure want to reduce the U.S. role in managing the Internet because they feel the U.S. can no longer be trusted (GCN, 2013). Although regulating the Internet could stifle economic and innovative growth, more examples of offensive cyber operations could accelerate international discussion and generate norms of behavior. While today there may be clear benefits to employing offensive cyber capabilities, especially for the U.S., which plays a dominant role in the domain, the future benefits of actual attacks could dwindle until the only option is an outright ban or criminalization of digital attack activities.
The U.S. should strive to exert influence internationally more efficiently while avoiding costly wars and unnecessary loss of life. If cyber coercion proves a viable strategy, the U.S. should organize a force structure that can accomplish it effectively. For this to happen, deployed forces must be trained in advance so that lessons are learned before actual combat. Deployed vessels provide the forward presence required for access to many closed networks and the long-duration persistence needed to collect information and deliver payloads. Although cyber expertise primarily resides at the strategic level today, it would need to be decentralized to allow the application of force at the operational and even tactical levels. Future conflicts might occur in a denied environment where deployed forces cannot receive support from commands at home. It is important to stay ahead of the enemy in peacetime, because when war comes, the U.S. needs to have already found and established the niche advantages required to maintain dominance. Cyber coercion may provide one option for achieving that in the future.
Anderson, R. (2012, March 22). Cyber and drone attacks may change warfare more than the machine gun. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2012/03/cyber-and-drone-attacks-may-change-warfare-more-than-the-machine-gun/254540/.
Arquilla, J. (2011, October). From blitzkrieg to bitskrieg: The military encounter with computers. Communications of the ACM, 54(10), 58–65.
Aucsmith, D. (2012, May-June). War in cyberspace: A theory of war in the cyber domain. Cyberbelli.com. Retrieved from https://cle.nps.edu CY4410 portal.
Aziz, A. (2013). The evolution of cyber attacks and next generation threat protection. RSA Conference 2013. San Francisco, CA. Retrieved from http://www.rsaconference.com/events/us13/agenda/sessions/350/next-generation-threat-protection-to-counter.
Ball, D. (2011). China’s cyber warfare capabilities. Security Challenges, 7(2), 81–103.
Bratton, P. C. (2005). When is coercion successful? And why can’t we agree on it? Naval War College Review, 58(3), 99–120.
Byman, D., & Waxman, M. (2002). The dynamics of coercion: American foreign policy and the limits of military might. UK: Cambridge University Press (RAND).
Byres, E. (2012, May). Using ANSI/ISA-99 standards to improve control system security. Tofino Security White Paper, Tofino Security Product Group, Version 1.1.
Castelli, C. J. (2014, February 19). NATO cybersecurity center praises U.S. framework initiative. Inside Cybersecurity. Retrieved from http://insidecybersecurity.com/Cyber-General/Cyber-Public-Content/nato-cybersecurity-center-praises-us-framework-initiative/menu-id-1089.html.
Center for Strategic & International Studies (CSIS). (2014, February 4). Significant cyber events. Retrieved from http://csis.org/publication/cyber-events-2006.
Cilluffo, F. J. (2013, March 20). Cyber threats from China, Russia and Iran: Protecting American critical infrastructure. Testimony before the subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies, United States House of Representatives.
Cimbala, S. J. (1998). Coercive military strategy. College Station: Texas A&M University Press.
Clarke R. A., & Knake R. K. (2010). Cyber war: The next threat to national security and what to do about it. New York, NY: HarperCollins.
Cooke, J. (2010). ‘Cyberation’ and the Just War Doctrine: A response to Randall Dipert. Journal of Military Ethics, 9(4), 411–423. DOI 10.1080/15027570.2010.536406.
Darnstaedt, T., Rosenbach, M., & Schmitz, G. P. (2013, April 4). The dangerous new rules of cyberwar. Spiegel Online. Retrieved from http://www.spiegel.de/international/world/expanding-combat-zone-the-dangerous-new-rules-of-cyberwar-a-892238.html.
Denning, D. E., & Strawser, B. J. (2013). Moral cyber weapons: The duty to employ cyber attacks. Retrieved from https://cle.nps.edu CY4410 portal.
Department of the Army, HQ. (2008, February 27). Field manual 3–0, operations. Washington, DC.
Department of Defense (DOD). (2011, July). Department of Defense strategy for operating in cyberspace. Washington, DC.
Department of Defense (DOD). (2011, August 11). Joint publication 5–0: Joint operational planning. Washington, DC.
Department of Defense (DOD). (2012, January). Sustaining U.S. global leadership: Priorities for 21st century defense. Washington, DC.
Department of Defense (DOD). (2012, January 17). Joint operational access concept (JOAC). Washington, D.C.
Dipert, R. R. (2006). Preventative war and the epistemological dimension of the morality of war. Journal of Military Ethics, 5(1), 32–54. DOI 10.1080/15027570500465728.
Dunnigan, J. (Ed.). (2013, September 23). Identifying the big dogs of cyber war. StrategyPage.com. Retrieved from http://www.strategypage.com/htmw/htiw/20130923.aspx.
Firn, M. (2013, August 13). North Korea builds online troll army of 3,000. The Telegraph. Retrieved from http://www.telegraph.co.uk/news/worldnews/asia/northkorea/10239283/North-Korea-builds-online-troll-army-of-3000.html.
Foltz, A. C. (2012). Stuxnet, Schmitt analysis, and the cyber “Use-of-Force” debate. JFQ, 67, 4th quarter, 40–48.
Freedman, L. (1998). Strategic coercion. In Freedman, L. (Ed.). Strategic coercion concepts and cases (pp. 15–36). New York, NY: Oxford University Press.
Fung, B. (2014, January 15). Cyber command’s exploding budget, in 1 chart. The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/the-switch/wp/2014/01/15/cyber-commands-exploding-budget-in-1-chart/.
Galloway, B., & Hancke, G. P. (2013). Introduction to Industrial Control Networks. IEEE, 15(2), 860–880.
GCN. (2013, October). Could NSA spying cost U.S. control of Internet infrastructure? GCN: Technology, Tools and Tactics for Public Sector IT. Retrieved from http://gcn.com/Blogs/Pulse/2013/10/Internet-governance.aspx?p=1.
Gonsalves, A. (2013, February 6). Preemptive cyberattack disclosure a warning to China. CSO Security and Risk. Retrieved from http://www.csoonline.com/article/print/728341.
Goodin, D. (2013, October 31). Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps. ARStechnica risk assessment/security & hacktivism. Retrieved from http://arstechnica.com/security/2013/10/meet-badbios-the-mysterious-mac-and-pc-malware-that-jumps-airgaps/.
Gordon, M. R. (2013, November 23). Accord reached with Iran to halt nuclear program. The New York Times. Retrieved from http://www.nytimes.com/2013/11/24/world/middleeast/talks-with-iran-on-nuclear-deal-hang-in-balance.html?_r=0.
Halle, A. M. (2009, June). Cyberpower as a coercive instrument. (Master’s thesis, School of Advanced Air and Space Studies, Air University). Retrieved from http://www.dtic.mil/dtic.
Hare, F. (2012). The significance of attribution to cyberspace coercion: A political perspective. Paper presented at the 2012 4th International Conference on Cyber Conflict, NATO CCD COE Publications, Tallinn.
Hart, L. (1944). Thoughts on war. London: Faber and Faber Ltd.
Hichkad, R. R., & Bowie, C. J. (2012, July). Secret weapons & cyberwar: History shows the promise and limits of hidden capabilities. Armed Forces Journal. Retrieved from https://cle.nps.edu CY4410 portal.
Holmes, J. R. (2013, June 13). Beware the “Thucydides Trap” trap: Why the U.S. and China aren’t necessarily Athens and Sparta or Britain and Germany before WWI. The Diplomat. Retrieved from http://thediplomat.com/2013/06/beware-the-thucydides-trap-trap/.
Hsu, K. (2014, January 14). Air defense identification zone intended to provide China greater flexibility to enforce East China Sea claims. U.S.-China Economic and Security Review Commission Staff Report. Retrieved from http://origin.www.uscc.gov/sites/default/files/Research/China%20ADIZ%20Staff%20Report.pdf.
Hughes, D. (Ed.). (1993). Moltke on the art of war: Selected writings. Novato, CA: Presidio.
Ireland, B. (2012, January 1). Security risks. Electrical Construction and Maintenance. Retrieved from http://ecmweb.com/computers-amp-software/security-risks.
Johansson, J. (2005). Anatomy of a hack: How a criminal might infiltrate your network. TechNet Magazine, Winter. Retrieved from http://technet.microsoft.com/en-us/magazine/2005.01.anatomyofahack(printer).aspx.
Kan, S. A. (2011, June 24). China/Taiwan: Evolution of the “One China” policy—Key statements from Washington, Beijing, and Taipei. CRS Report for Congress (RL30341). Retrieved from http://www.taiwandocuments.net/crs/2011/RL30341.pdf.
Kennedy, D., O’Gorman, J., Kearns, D., & Aharoni, M. (2011). Metasploit: The penetration tester’s guide. San Francisco: No Starch Press, Inc.
Kessler, G. (2011, April 27). How effective are sanctions in ‘changing behavior’? The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/fact-checker/post/how-effective-are-sanctions-in-changing-behavior/2011/04/26/AFCwRktE_blog.html.
Knake, R. K. (2010, July 15). Untangling attribution: Moving to accountability in cyberspace. Prepared statement before the Subcommittee on Technology and Innovation, Committee on Science and Technology, United States House of Representatives, 111th Congress, 2nd session.
Krekel, B., Adams, P., & Bakos, G. (2012, March 7). Occupying the information high ground: Chinese capabilities for computer network operations and cyber espionage. Prepared for the U.S.-China Economic and Security Review Commission by Northrop Grumman Corp. Retrieved from http://www.washingtonpost.com/r/2010-2019/WashingtonPost/2012/03/08/National-Security/Graphics/USCC_Report_Chinese_Capabilities_for_Computer_Network_Operations_and_Cyber_%20Espionage.pdf.
Kugler, R. L. (2009). Deterrence of cyber attacks. In F. D. Kramer, S. H. Starr, & L. K. Wentz (Eds.), Cyberpower and national security (pp. 309–340). Dulles, VA: Potomac Books and National Defense University Press.
Lai, D. (2011, December). The United States and China in power transition. Carlisle, PA: U.S. Army War College.
Lango, J. W. (2006). Last resort and coercive threats: Relating a Just War Principle to a military practice. Paper presented at the 2006 Joint Services Conference on Professional Ethics (JSCOPE), Washington, D.C. Retrieved from http://isme.tamu.edu/JSCOPE06/Lango06.pdf.
Lewis, J. A. (2011). Cyberwar thresholds and effects. IEEE Security & Privacy, September/October, 23–29.
Libicki, M. C. (2011, January 27). Chinese use of cyberwar as an anti-access strategy: Two scenarios. Testimony presented before the U.S. China Economic and Security Review Commission. RAND Corporation. Retrieved from http://www.rand.org/content/dam/rand/pubs/testimonies/2011/RAND_CT355.pdf.
Libicki, M. C. (2011). Cyberwar as a confidence game. Strategic Studies Quarterly, Spring, 132–146.
Lin, H. S. (2010). Offensive cyber operations and the use of force. Journal of National Security Law & Policy, 4(63), 63–86.
Lin, P., Allhoff, F., & Rowe, N. (2012). Is it possible to wage a just cyberwar? The Atlantic. Retrieved from http://theatlantic.com/technology/archive/2012/06/is-it-possible-to-wage-a-just-cyberwar/258106/.
Liff, A. P. (2012). Cyberwar: A new ‘absolute weapon’? The proliferation of cyberwarfare capabilities and interstate war. Journal of Strategic Studies, 35(3), 401–428.
Lucas, G. R. (2011, October). Permissible preventative cyberwar: Restricting cyber conflict to justified military targets. Proceedings of the Air Force Research Institute on the Future of Cyber Power, Montgomery, AL.
Lukasik, S. J. (2010). A framework for thinking about cyber conflict and cyber deterrence with possible declaratory policies for these domains. In Proceedings of a Workshop on Deterring CyberAttacks: Informing Strategies and Developing Options for U.S. Policy (pp. 99–121). Washington, DC: The National Academies Press.
Majumdar, D., & LaGrone, S. (2014, January 17). Navy’s next generation radar could have future electronic attack capabilities. USNI News. Retrieved from http://news.usni.org/2014/01/17/navys-next-generation-radar-future-electronic-attack-abilities.
Mandiant. (2013). APT1: Exposing one of China’s cyber espionage units. Retrieved from intelreport.mandiant.com.
Mateski, M., Trevino, C. M., Veitch, C. K., Michalski J., Harris, J. M., Maruoka, S., & Frye, J. (2012, March). Cyber threat metrics (Sandia Report SAND2012–2427). Retrieved from https://www.fas.org/irp/eprint/metrics.pdf.
Mathers, R. F. (2007, November 6). Cyberspace coercion in phase 0/1: How to deter armed conflict (Master’s thesis, Naval War College).
Mazanec, B. M. (2009). The art of (cyber) war. The Journal of International Security Affairs, Number 16. Retrieved from http://www.securityaffairs.org/issues/2009/16/mazanec.php.
McClure, S., Scambray, J., & Kurtz, G. (2012). Hacking exposed 7: Network security secrets and solutions. Berkeley, CA: McGraw-Hill.
McElroy, D., & Vahdat, A. (2013, October 2). Iranian cyber warfare commander shot dead in suspected assassination. The Telegraph. Retrieved from http://www.telegraph.co.uk/news/worldnews/middleeast/iran/10350285/Iranian-cyber-warfare-commander-shot-dead-in-suspected-assassination.html.
Miller, P. D. (2013, December 27). China, the United States, and great power diplomacy. RAND Corporation. Retrieved from http://www.rand.org/blog/2013/12/china-the-united-states-and-great-power-diplomacy.html.
Minemura, K. (2011, November 7). China bolstering cyber defenses for modern-day warfare. The Asahi Shimbun Asia & Japan Watch. Retrieved from http://ajw.asahi.com/article/asia/china/AJ2011110716782.
Munoz, C. (2014, January 23). Locklear: Asia-Pacific is becoming ‘most militarized region in the world.’ USNI News. Retrieved from http://news.usni.org/2014/01/23/locklear-asia-pacific-becoming-militarized-region-world.
National Intelligence Council (NIC). (2012, December). Global trends 2030: Alternative worlds. Office of the Director of National Intelligence. Retrieved from www.dni.gov/nic/globaltrends.
Cobos, M. A. (2012). Nodes and codes: The reality of cyber warfare (Monograph, United States Army Command and General Staff College). Fort Leavenworth, KS. Retrieved from https://www.hsdl.org/?view&did=735527.
Norton, J. M. (2013, August 18). China’s “warfare” strategies and tactics. The Diplomat. Retrieved from http://thediplomat.com/2013/08/18/chinas-warfare-strategies-and-tactics/.
Nutter, M. C. (2013, November 19). Navy: Acoustic hackers could halt fleets. Business Insider. Retrieved from http://www.businessinsider.com/navy-acoustic-hackers-could-halt-fleets-2013-11.
Nye, J. S. (2010, May). Cyber power. Belfer Center for Science and International Affairs. Cambridge, MA: Harvard Kennedy School.
Nye, J. S. (2011, November 8). The changing nature of coercive power. World Politics Review, Coercive Diplomacy Feature Report, 3–6.
Obama, B. (2011, May). International strategy for cyberspace: Prosperity, security, and openness in a networked world. Washington, DC: President of the United States. Retrieved from http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
Paganini, P. (2013). Also China under attack. Cyber Defense Magazine. Retrieved from http://www.cyberdefensemagazine.com/also-china-under-attack/.
Pape, R. A., Jr. (1992). Coercion and military strategy: Why denial works and punishment doesn’t. Journal of Strategic Studies, 15(4), 423–475. DOI: 10.1080/01402399208437495.
Perlroth, N. (2014, January 22). Big web crash in China: Experts suspect great firewall. The New York Times. Retrieved from http://bits.blogs.nytimes.com/2014/01/22/big-web-crash-in-china-experts-suspect-great-firewall/?_php=true&_type=blogs&_r=0.
Pincus, W. (2013, April 3). A rational approach to managing nuclear weapons: Deterrence. The Washington Post. Retrieved from http://www.washingtonpost.com/world/national-security/managing-the-bomb/2013/04/03/6724c370-9bc0-11e2-9bda-edd1a7fb557d_story.html.
Power, R. (2000). Tangled web: Tales of digital crime from the shadows of cyberspace. Indianapolis: Que Corporation.
Purisima, J. (2010, November 2). Are you infected? Detecting malware infection. Symantec Connect. Retrieved from http://www.symantec.com/connect/articles/are-you-infected-detecting-malware-infection.
Rains, T. (2013, March 11). The threat landscape in China: A paradox. Microsoft Security Blog. Retrieved from https://blogs.technet.com/b/security/archive/2013/03/11/the-threat-landscape-in-china-a-paradox.aspx?Redirected=true.
Rattray, G. J. (2001). Strategic warfare in cyberspace. Cambridge, MA: MIT Press.
Rid, T. (2011). Cyber war will not take place. Journal of Strategic Studies, 35(1), 5–32. DOI: 10.1080/01402390.2011.608939.
Rowe, N. C. (2009). The ethics of cyberweapons in warfare. International Journal of Cyberethics, 1(1), 20–31.
Rowe, N. C. (2010, July). Towards reversible cyberattacks. Proceedings of the 9th European Conference on Information Warfare and Security. Thessaloniki, Greece.
Rowe, N. C., Garfinkel, S. L., Beverly, R., & Yannakogeorgos, P. (2011, July). Steps towards monitoring cyberarms compliance. Proceedings of the 10th European Conference on Information Warfare and Security, Tallinn, Estonia.
Sanger, D. E. (2012). Confront and conceal: Obama’s secret wars and surprising use of American power. New York: Crown Publishers.
Sanger, D. E., & Shanker, T. (2014, January 14). NSA devises radio pathway into computers. The New York Times. Retrieved from http://www.nytimes.com/2014/01/15/us/nsa-effort-pries-open-computers-not-connected-to-Internet.html.
Schaub, G., Jr. (1998). Compellence: Resuscitating the concept. In Freedman, L. (Ed.), Strategic coercion: Concepts and cases (pp. 37–60). New York, NY: Oxford University Press.
Schelling, T. (1966). Arms and influence. New Haven and London: Yale University Press.
Schmitt, M. N. (2011). Cyber operations and the jus ad bellum revisited. Villanova Law Review, 56, 569–605.
Schmitt, M. N. (2012, December). International law in cyberspace: The Koh speech and Tallinn Manual juxtaposed. Harvard International Law Journal, 54, 13–37.
Segal, A. (2012, September 5). China and Japan’s cyber detente. The Diplomat. Retrieved from http://thediplomat.com/flashpoints-blog/2012/09/05/china-and-japans-cyber-detente/.
Shanker, T. (2013, November 26). U.S. sends two B-52 bombers into air zone claimed by China. The New York Times. Retrieved from http://www.nytimes.com/2013/11/27/world/asia/us-flies-b-52s-into-chinas-expanded-air-defense-zone.html?_r=0.
Simpson, M. T., Backman, K., & Corley, J. (2011). Hands-On ethical hacking and network defense (2nd ed.). Boston, MA: Course Technology, Cengage Learning.
Singer, P. W., & Friedman, A. (2014, January 15). Cult of the cyber offensive: Why belief in first-strike advantage is as misguided today as it was in 1914. Foreign Policy. Retrieved from http://www.foreignpolicy.com/articles/2014/01/15/cult_of_the_cyber_offensive_first_strike_advantage.
SSG XXXI. (2013, January). (U) EM maneuver warfare. Newport, RI: CNO’s Strategic Studies Group. This document is classified FOUO.
Starr, B. (2003, January 12). U.S. e-mail attack targets key Iraqis. CNN. Retrieved from http://www.cnn.com/2003/WORLD/meast/01/11/sproject.irq.e-mail/.
Sun Tzu. (1963). The art of war. Ed. and trans. Samuel B. Griffith. New York: Oxford University Press.
Symantec Corporation. (2012, May 31). Flamer: A recipe for bluetoothache, Symantec Security Response Blog. Retrieved from http://www.symantec.com/connect/blogs/flamer-recipe-bluetoothache.
Symantec Corporation. (2013, April). Internet security threat report 2013, Volume 18, 2012 Trends. Mountain View, CA: Symantec Corporation.
Taddeo, M. (2012). An analysis for a just cyber warfare. Paper presented at the 4th International Conference on Cyber Conflict, Tallinn: NATO CCD COE Publications.
Technolytics. (2010). The cyber commander’s handbook: The weaponry & strategies of digital conflict. McMurray, PA: Technolytics.
Thies, W. (2003). Compellence failure or coercive success? The case of NATO and Yugoslavia. Comparative Strategy, 22(3), 243–267. DOI: 10.1080/01495930390215171.
United Nations. (1945). Charter of the United Nations and statute of the international court of justice, 1 UNTS XVI. Retrieved from https://www.un.org/en/documents/charter/chapter2.shtml.
Wan, W. (2013, September 4). After Snowden revelations, China worries about cyberdefense, hackers. The Washington Post. Retrieved from http://washingtonpost.com/world/after-snowden-revelations-china-worries-about-cyberdefense-hackers/2013/09/04.
Waxman, M. C. (2011). Cyber-attacks and the use of force: Back to the future of article 2(4). The Yale Journal of International Law, 36(42), 421–459.
Waxman, M. C. (2013). Self-defensive force against cyber attacks: Legal, strategic and political dimensions. International Law Studies, 89, 109–122. Newport, RI: U.S. Naval War College.
Welch, C. (2013, October 20). NSA hacked Mexican president’s e-mail, according to latest leaks. The Verge. Retrieved from http://mobile.theverge.com/2013/10/20/4858566/nsa-hacked-mexican-presidential-e-mail.
Winterfield, S. P. (2001, December). Cyber IPB. Global Information Assurance Certification. Bethesda, MD: SANS Institute.
Zhou, D. (2012, July). Senkaku islands dispute pushes Japan and China closer to war, and America may get sucked in. Retrieved from
[*] “In some cases Pape treats the target as unitary rational actor, but in others he distinguishes between different factions that react differently to coercive threats (as in the case of Japan and Germany in World War II). See his Bombing to Win, pp. 108-27 and 283-313.”
[†] This is a common paraphrasing of the actual quote which is “…no plan of operations extends with any certainty beyond the first contact with the main hostile force” (Hughes, 1993, p. 92).