Cyber Coercion: Cyber Operations Short of Cyberwar
Daniel R. Flemming, Neil C. Rowe
U.S. Naval Postgraduate School, Monterey, California, USA
Countries should find ways to exert strong influence without resorting to excessive violence and warfare. Cyber coercion is a new option: the use of computer networks and software to influence other countries with cyber attacks short of full warfare. It could involve demonstrations of capabilities as a sample of what warfare could entail. An example would be when country A is threatening invasion of another country B, and B disables the computer networking on a single ship of A by using emplaced Trojan horses, to demonstrate that A's entire navy could be similarly disabled. Cyber coercion could enable phased aggression to compel adversaries to the bargaining table and avoid unnecessary escalation while expending fewer resources than alternative methods. Cyber coercion could permit flexible tailoring of an operation with varying levels of force, and could include features of attributability and reversibility. This paper surveys the potential for cyber coercion. It considers the possible goals, the possible targets, and the possible methods from a strategic point of view. Additional issues discussed are the actors involved, possible ineffectiveness, reversibility, and attribution. Risks discussed are the coercee ignoring the coercion, the coercee escalating the conflict, stalemates, and spread of the conflict to civilian targets. Despite these risks, we argue there is a role for cyber coercion in the strategic arsenal of militaries.
Keywords: coercion, cyber, operations, attacks, nonlethal, compellence
This paper appeared in the proceedings of the 10th International Conference on Cyber Warfare and Security, Skukuza, South Africa, March 2015.
Coercion is the use of actions that imply threats that are intended to modify coercee behavior in a conflict situation between states. It involves initiating an action that will stop only if the coercee complies ("compellence") or threatening an action if the coercee does something ("deterrence"). Coercion can work to prevent actions, stop ongoing actions, or encourage changes to a coercee's behavior in general. Coercion is rarely decisive in a conflict, but its use can force the coercee to back down or negotiate while avoiding a more costly armed conflict for both sides. Usually there is a single coercer and single coercee, but there can be multiple coercers, as for instance today with multinational coercion of Iran. We will use the term "cyber attack" to refer to cyber operations just as it refers to criminal exploits in cyberspace (Power 2000), but we do not intend to suggest these attacks are necessarily acts of war.
Military strategy "is nothing more than 'organized coercion' on a massive scale located somewhere on a spectrum bookended by consent and total control" according to (Freedman 1998). The end goal of a coercive act is to show the coercee the futility of its current strategy and induce concessions, not necessarily to contribute directly to the military victory of the coercer. It is not simple sabotage or brute force but rather the attachment of a message and meaning to the coercive act. However, any coercive strategy balances the political and the military (Cimbala 1998). Coercion is the threat of more to come, so it is critical to understand what the enemy values to use it successfully. The end goal is to get the enemy not to proceed with its current course of action because it will be too painful or costly. Coercion does not always have to inflict costs; when compensation is the norm, coercion can also withhold benefits (Nye 2011).
Cyber coercion, or coercion using computers and networks, is particularly appealing because limited cyber operations can be inexpensive compared to equivalent non-cyber operations (Clarke and Knake 2010). Another appealing feature of cyber coercion is its wide spectrum of possibilities. An act of cyber coercion can range from a mere demonstration of capabilities to something equivalent in damage to an airstrike. Cyber coercion can also choose to focus on military, economic, or political effects. Cyber coercion can also be intrinsically limited, can be self-attributing, and can be reversible, all features that are not always easy to accomplish with traditional coercion methods. Cyber coercion can be thought of as a way to achieve "offense in depth" analogous to the "defense in depth" accomplished by having multiple lines of cyber defense. Rather than committing the usually serious step of employing munitions on even a limited scale, cyber coercion can offer a new range of tactics for military planners. However, it is not always effective, and is less often effective than outright warfare. Furthermore, the dividing line between cyber coercion and acts of war is unclear, and a nation that wishes to avoid war must choose its cyber-coercion methods cautiously.
As an example of cyber coercion, a cyber attack was considered prior to conducting the NATO-led air strikes in Libya in 2011 (Sanger 2012). It was not implemented due to the time needed to employ it successfully and the risk of losing future capabilities by employing it. The cyberattacks on Georgia in 2008 functioned as an unsuccessful attempt at cyber coercion preceding a conventional military invasion by Russia. A better-known instance of cyber coercion was the Stuxnet operation, which targeted Iran's nuclear production facility in Natanz. Bombing the Iranian nuclear facility at the outset, as the Israelis had done against Syria in 2007, was estimated to cause only a two-year delay because the Iranians were further along in the nuclear process. Cyber attacks for the purposes of cyber coercion were probably not anticipated by the Iranians, and if they failed, Iran could always be bombed later.
Definitions of coercion contrast along four dimensions: the types of threats, the role of force, the actors, and the definition of success (Bratton 2005). For the role of force, he distinguishes coercion before the use of force, coercion only through force, and coercion through diplomacy and force. For actors, he distinguishes cases where they are assumed to be rational similar actors, rational different actors, and complex actors not necessarily rational. For the definition of success, he distinguishes full compliance with the coercer's demands from situations where there can be a degree of success. This paper follows most closely Schelling's definition of coercion (Schelling 1966), which includes both compellence and deterrence. It also includes limited force, with the caveat that in the successful coercion of an adversary, the coercee should retain the ability to use violence yet choose not to employ it (Byman and Waxman 2002); it is not violence for the sake of violence. Schelling addresses coercion conducted between nation-states, although the specific target can be inside a nation, such as its military. He also agrees that coercion is a dynamic process that has degrees of success. For instance, forcing a coercee to the bargaining table, while not a complete victory, can be an accomplishment if it avoids a costly war. Schelling's framework primarily deals with the nuclear age, and (Libicki 2011) points out that treating cyber weapons similarly can be misleading. Nonetheless, there are many good analogies.
Military instruments of coercion include the threat of military retaliation and the support of insurgencies. The U.N. Charter authorizes “demonstrations, blockade, and other operations by air, sea, or land forces”. This implicitly endorses military coercion as a way to settle disputes and recognizes an implicit concept of limited war (Lango 2006). However, even when the coercer possesses a clear superiority, it can lack the will to follow through, and the coercee may recognize this. For instance, nuclear capabilities have questionable coercive power. Nonmilitary coercive instruments include economic sanctions and diplomatic demarches. Sanctions do require the help of other countries to be most effective; they can be undermined by third parties; and they can also raise humanitarian issues. Sanctions are difficult to target and so have had mixed results; Syria is currently subject to a wide range of sanctions which have not stopped its ongoing civil war, and sanctions have had an unclear effect against Iran’s nuclear program (Kessler 2011). Diplomatic demarches are used by governments when they want to send a message to another government with as little risk as possible, but when used alone their impact can be small.
Compellence is coercion to act; deterrence is coercion to prevent action. The goal of compellence is to increase the perception of the coercer's capabilities and will to continue while limiting the coercee's capabilities just enough to encourage it not to continue (Schaub 1998); it is a blend of threats to use force and the use of limited force in connection with a threat. Compellence is easier than deterrence for cyber coercion because it involves demonstrations of power; the necessary secrecy for cyber capabilities makes it hard for a nation to deter by threatening cyber operations without giving away technical details that will reduce the effectiveness of any subsequent coercion (for instance, any clues to the attack target will encourage the potential coercee to harden that target or take it off the Internet). In addition, cyber operations are unpredictable in their effectiveness, and a victim will not be deterred unless the probability of damage is sufficient. But there are possibilities of success for cyber deterrence against nation-states because major attacks rarely happen in a vacuum and will involve other political, military, or economic goals (Kugler 2009). Coercees may be unsure whether the post-attack environment will be better or worse for them and may therefore yield to the deterrence, so uncertainty can be as powerful as projecting certainty in cyber deterrence (Lukasik 2010).
Byman and Waxman note the codependence of deterrence and compellence that is best demonstrated when the success of previous compellence validates the coercing nation’s credibility, which then improves its ability to deter. For instance, if a nation can compel an adversary into stopping existing attacks, then that adversary is less likely to attack in the future.
Cyber coercion itself can involve either threats or actual force against a coercee. Threats are less costly. Cyber threats alone may cause coercees to doubt their ability to wage any kind of warfare successfully because of the perceived threat of the coercer's capabilities. Threats inject fear and doubt into the performance and security of a network, and the mere thought of an adversary inside a nation's networks can be exploited for a coercive purpose. Just maintaining a publicized standing cyber force has some deterrence effect. Every network has a certain level of risk, but in wartime fear and doubt are magnified because of the increased risks. If warfighters begin to doubt the accuracy of information on their computers, or the performance of the military systems to which they have entrusted their lives, then cyber threats can create the tipping point at which warfighters are unwilling to engage the enemy.
If cyber coercion involves actual force against a coercee, the amount of force required is difficult to judge because it is situationally dependent. Too weak, and it can waste time and entail capability loss; too strong, and it can lead to retaliation. One mitigating factor is that the threshold for military retaliation after cyber acts will likely be higher than that for non-cyber attacks (Waxman 2013), because the damage of cyber attacks tends to be less obvious than that of other strategic weapons (Lewis 2011) and provides less political motivation to respond. Many examples of politically motivated denial-of-service attacks and web defacements in the news recently support this claim. Nation-states get too many low-level intrusions to respond to every one, whatever its motivation. However, that also means that low-level intrusions do not carry much coercive weight (Liff 2012), so effective cyber coercion needs to be somewhat dramatic. Yet there is a thin line between targeting for coercion and targeting for destruction (Cimbala 1998). Coercion is most successful when targeting what the enemy values, but this is more likely to trigger escalation; brute-force sabotage eliminates the coercee's options, and coercion is built upon choices. So the cyber coercer must find something short of total destruction but powerful enough to compel the desired behavior, and this is not always easy.
Cyber coercion will employ attack methods similar to those of the "hacking" used by computer criminals for financial gain (Power 2000). However, hacking methods are only part of the necessary operational planning. Offensive cyber operations include intelligence gathering about target computer systems and their vulnerabilities, building cyber exploits to match, and then delivering the exploits (Singer and Friedman 2014). Each step opens up the possibility of detection. With cyber coercion, being detected is not necessarily the biggest problem, as often the attack will be directly attributed and the desired future behavior will be well communicated to the coercee. Also, for cyber coercion, premature detection means that the intended effect of the attack could be thwarted, but it could still have some coercive power because it might show an adversary that it is more vulnerable than it thought.
Intelligence gathering to obtain the technical knowledge needed of vulnerabilities in adversary systems can take months or even years of preparation, and the results must be continuously updated and verified. For instance, an exploit might only be successful against one model of radar and completely ineffective against all others. Intelligence gathering is even more difficult when attempting to get information about classified networks and highly guarded military networks and systems, so it requires a major investment and must be done continually. The main steps in intelligence gathering are footprinting, scanning, and enumeration. Footprinting learns about employees and business practices, and then gathers information about target IP addresses, network topology, phone numbers, and so on; tools like search engines, FingerGoogle, WHOis, arin.net, and NSlookup are used (McClure et al. 2012). Scanning tries to infer more detailed characteristics of a target machine, such as its open ports; scanning tools like fping, Nmap, and SuperScan allow tailoring packets to minimize their noticeability. Enumeration tries to find user accounts and identify specific applications that are running, and correlates this information with known bugs and software weaknesses; more intrusive probes, such as banner grabbing with netcat, PSTools, and DumpSEC, can reveal less-protected resources and user groups that could be targeted. Footprinting, scanning, and enumeration should try to find as many vulnerabilities as possible in target machines, since vulnerabilities may be fixed unexpectedly or quickly disappear once they are discovered.
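As a concrete illustration of the scanning step, the following is a minimal sketch of a TCP connect scan in Python. The target address shown is hypothetical, and such probing should only ever be run against systems one is authorized to test; real tools such as Nmap craft raw packets so as to be far less noticeable than this crude approach.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Report which of the given ports accept a full TCP connection.
    This is the crudest form of scanning; tools like Nmap shape
    packets to minimize their noticeability."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))
                open_ports.append(port)   # connection succeeded: port is open
            except OSError:
                pass                      # refused or timed out: treat as closed
    return open_ports

# Hypothetical, authorized target only:
# print(tcp_connect_scan("192.0.2.10", [22, 80, 443]))
```

Enumeration would then take the open ports found here and interrogate the services behind them for versions and accounts, correlating the results with known vulnerabilities.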
Once intelligence gathering is complete, cyber exploits can be built to take advantage of the known vulnerabilities in software, hardware, or humans. Software vulnerabilities are the most common; exploits against them take advantage of credentials gathered during intelligence collection, security flaws in particular software versions, and misconfigured settings. Exploits can then be delivered in one or multiple stages. A single-stage exploit results in some type of executable being run immediately on the target machine, while a multi-stage exploit initially inserts code that can later pull in additional payloads. The more stages, the more complicated the exploit and the lower its chances of success, but some difficult targets do require more stages. Hardware vulnerabilities involve the manipulation of physical devices by spies, manufacturers, or even unsuspecting users. Human vulnerabilities are exploited through social-engineering attacks, typically with users mistakenly providing information to attackers.
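The staging structure can be sketched abstractly. In the toy Python below, ordinary functions stand in for payloads (all names are hypothetical); the point is only that a minimal first stage retrieves what comes next rather than carrying everything itself.

```python
# Toy simulation of multi-stage delivery: ordinary functions stand in
# for payloads.  The first stage stays minimal; later stages are pulled
# in only when needed, though each extra stage is another point of failure.

def stage_two(state):
    """Hypothetical second-stage payload, fetched after initial access."""
    state["capability"] = "collect-config"
    return state

def stage_one(fetch):
    """Minimal first-stage loader: its only job is to retrieve the next stage."""
    state = {"implanted": True}
    next_stage = fetch("stage_two")        # in a real operation, a network pull
    return next_stage(state)

REMOTE_STAGES = {"stage_two": stage_two}   # stands in for the attacker's server
result = stage_one(REMOTE_STAGES.get)
```

A single-stage exploit would instead inline the payload in `stage_one`; the trade-off is a larger and more detectable initial footprint against fewer points of failure.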
Influencing the will of the opponent should be the primary objective of coercion, since all war is ultimately a contest of wills, according to (Cimbala 1998). The goal is to make the coercee rethink the risks and potential costs of continuing on its current path. He emphasizes "calibrated, but not necessarily small amounts of destruction". It is also critical to know what the enemy values in order to manipulate it, and predicting how it will respond makes for more successful bargaining. Successful coercion also avoids conflict, and the desired outcome is concessions of some kind, either an altered status quo or a return to the status quo.
Usually the things the coercee holds most valuable will also be the most heavily guarded. Cyberspace, however, allows for a great deal of maneuver with regards to targeting; the leadership, the military, the economy, and the people are all examples. Byman and Waxman mention five tactics for coercion: attempting power-base erosion, encouraging unrest, decapitation of the leadership, weakening of the country, and denial of important capabilities. All these can be done with cyber coercion. Power-base erosion targets members of the ruling political group and attempts to lessen their influence. This is possible with cyber coercion because world leaders are now more accessible and can more easily be targeted for influence operations. The NSA has recently proven the capability to intercept foreign government leaders’ e-mail and mobile devices in Germany, Mexico, and Brazil (Welch 2013). Going after the leadership is easier in the centralized structure found in authoritarian regimes and could be done with something as simple as spear-phishing. For instance, the U.S. targeted the leadership of Iraq prior to attacking in 2003 by sending thousands of e-mails telling leaders that they could not win a war against the U.S., and providing instructions on how to defect.
Encouraging unrest is attempting to compel the adversary by targeting the general population. While warfare waged against the general population is against the laws of warfare, coercion can often be distinguished from warfare. (Arquilla 2011) writes that the end goal should be surprise and notable effect. For cyber coercion, operations like publishing propaganda and messaging the general populace are easy to do but unlikely to get citizens to force the government to enact the desired change. Targeting the general population could also backfire by making the adversary government more popular for taking a stand against a meddling foreign coercer. North Korea reportedly has an online army of around 3,000 to garner support for the administration and undermine regime opponents (Firn 2013), but its attempted cyber coercion is mostly ineffective. Cyber coercion against North Korea, on the other hand, might be effective because North Koreans rarely hear opposing views.
"Decapitation" of the leadership of a country means isolating them from their country, as by attacking their command communications lines. This damages their political and military control and can help coerce them away from actions that depend on those lines and now appear impossible.
Coercion by weakening targets the social and economic cohesion of a country as a whole. This has traditionally been accomplished through nuclear threats or economic sanctions, a "counter-value" option. While technically feasible, counter-value threats are weakened by a lack of credible follow-through, and counter-value attacks can create such desperation within the victim that they ultimately lead to war. Arquilla fears that some cyber attacks could debilitate a nation such that it responds with whatever weapon is still available, even a weapon of mass destruction. Although the denial-of-service attacks against Estonia in 2007 were just a protest, if the Russians had stated that they did not want the statue moved from Tallinn and had threatened a denial-of-service attack if Estonia did not comply, that would have qualified as cyber coercion. At the least, the episode was proof of the vulnerability of a digitally interconnected nation such as Estonia to these types of attacks.
Denial of important capabilities is focused on preventing military or political victory. Cyber acts can affect military production, reroute supplies to the battlefield, disable air defenses, and disrupt communications from a distance. The intended audience is coercee leadership, and the message is that they will likely be unable to achieve the goals that they plan. Rarely can a cyber attack itself prevent an adversary from waging war, but the loss of important capabilities can suggest that waging war could be costlier and more difficult than the adversary anticipated. It was tried against Georgia in 2008.
Actors: Although acquiring cyber capabilities has a low barrier to entry, effective use of cyber coercion is mostly limited to the dominant conventional powers of today. That is because intelligence gathering for cyberwarfare is labor-intensive and expensive, particularly since its methods become obsolete quickly once used (Lucas 2011). Coercion is most likely from a dominant state to a less-powerful state. Peer-to-peer cyber coercion among dominant states is also possible, but such states may recognize the risks of possible escalation (Libicki 2011). Coercion by cyber attack is unlikely between countries that have agreements on criminal tort law because the victim could sue for damages; this applies to the U.S. and Germany but not the U.S. and China, for instance.
Reusability: As with cyberwarfare, the most effective coercion methods will involve cyberattack methods not seen before. That is because new attacks are usually reported to world information-security sites and quickly analyzed to find countermeasures. Unfortunately, that means that once most cyber coercion methods are used, they will be much less effective if used again. Hence cyber coercion, like cyber warfare, will be costly because a new method needs to be found for each new employment, and so should be used sparingly.
Ineffectiveness: Another problem with coercion is that it likely will not go according to plan due to the many uncertainties of new technology, and the vulnerabilities exploited may be fixed without warning. Furthermore, the cyber domain is poorly understood by many decisionmakers, so an attempted cyber coercion may not be recognized by the victims since computer systems can malfunction on their own without external prodding. Alternatively, victims may recognize the coercion attempt but figure they can ignore it because coercion is often used by an attacker that does not intend to escalate.
Reversibility: Cyber coercion can be made less risky by making it easily reversible, something not true of conventional warfare. For instance, coercee assets can be encrypted with a key only the attacker knows; encryption is a completely reversible operation (Rowe 2010). Other reversible methods are obfuscation, withholding information, and resource deception. Reversibility does not necessarily mean the attack is not powerful. It could encourage faster coercee capitulation if they know that damage can be fixed if they do. However, it may take some time to reverse the damage and may need to be done by a third party that the victim trusts, and some damage such as lost opportunities may not be repairable.
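As a sketch of how a reversible effect might work, the toy Python below XORs data with a keystream derived from a key only the coercer holds; applying the same operation again with the key restores the data exactly. This is illustrative only: the hash-based keystream is purely for demonstration, and a real operation would use a vetted cipher such as AES.

```python
import hashlib

def keystream(key, length):
    """Derive a pseudorandom keystream from the key by hashing a counter.
    Illustrative only -- not a substitute for a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toggle(data, key):
    """XOR the data with the keystream.  Applying toggle() twice with the
    same key is the identity, so the 'attack' is completely reversible."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

original = b"adversary logistics records"
locked = toggle(original, b"coercer-held key")    # data rendered unusable
restored = toggle(locked, b"coercer-held key")    # reversed upon compliance
assert restored == original
```

Without the key, the coercee cannot undo the effect itself, which is exactly the bargaining position reversible coercion aims to create.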
Attribution: Attribution is a key problem with cyber coercion, as with cyberwarfare; it is much more problematic than in conventional warfare (Hare 2012). It can be difficult to tell what country is responsible for a cyber-coercive action because proximity of the attacker to the victim is not necessary, attacks can be impossible to trace across networks, the data may contain no particular clues of origin, and isolating malicious code is challenging (Rowe 2015). But if the victim does not know who is attacking them, the goals of the coercion may be unclear, and it is harder for the victim to know what actions are being coerced. Furthermore, knowing the identity of the attacker can contribute to the credibility of implied threats of further actions by the attacker. We have argued that voluntary acknowledgement by the perpetrator is desirable for most cyberattacks during cyberwarfare, and this should also apply to cyber coercion. Voluntary attribution can be achieved without compromising the effectiveness of coercion by using steganography (concealed messages) and public-key encryption (signed messages readable by everyone but guaranteed to be signable by only one entity). Many of the creators of Stuxnet felt that it would be more valuable if attributed, because Iran would realize that the U.S. could unmistakably penetrate Iranian networks, but it took several years for the U.S. to acknowledge it.
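Signed voluntary attribution can be illustrated with a toy digital signature. The Python sketch below uses deliberately tiny RSA parameters so the arithmetic is visible; a real operation would use a standard cryptographic library with full-size keys, and the message text is of course hypothetical.

```python
import hashlib

# Toy RSA keypair for illustration only: p=61, q=53, so n=3233, e=17, d=2753.
# Real signatures require 2048-bit or larger keys from a standard library.
N, E, D = 3233, 17, 2753

def sign(message):
    """Attacker signs with the private exponent D, which only they hold."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message, signature):
    """Anyone can check the signature with the public key (N, E), so the
    attribution claim is readable by everyone yet signable by only one party."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

claim = b"Operation X was ours; comply and the effects will be reversed."
sig = sign(claim)
assert verify(claim, sig)                  # genuine signature checks out
assert not verify(claim, (sig + 1) % N)    # altered signature is rejected
```

The signature could be embedded steganographically in the attack itself, so the attribution travels with the coercive act without alerting third parties.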
Escalation risk: Cyber coercion can induce escalation to stronger actions or even outright warfare. The powerful countries like the U.S. that are most capable of cyber coercion are also the most vulnerable to cyberattacks, so it will be tempting for a victim of cyber coercion to strike back in kind. After all, the cost of doing some simple cyber coercion is low although sustained and effective cyber coercion can be costly. (Liff 2012) suggests that escalation can occur when resolve is underestimated and vulnerabilities are misperceived. That suggests that to be effective, cyber coercion will require combining the initial cyber use of force with the threat of conventional force. Even so, the especially unpredictable nature of cyberattacks could make for accidental escalations. And if a victim believes that the cyber coercion is just a prelude to an inevitable broader attack, they may escalate regardless of how the cyber coercion is implemented.
Stalemates: Another risk of cyber coercion is that it may result in a stalemate as each side attempts to get back at the other with low-level cyberattacks. This could go on for a long time if both sides have similar cyber capabilities. However, when a stalemate exists for a long time, there could be incentive to escalate to try to resolve the conflict.
Civilian spillage: Code used in cyber coercion can be reused and reengineered for criminal cyberattacks. This danger is greatest when new kinds of attacks are used, and new kinds of attacks will be essential if cyber coercion of one nation by another is to have a high likelihood of effectiveness. The code for Stuxnet eventually appeared in criminal cyberattacks. In addition, even after the extensive preparation it took to develop Stuxnet, it still reached many civilian machines because it could not distinguish them from military targets, and this required worldwide countermeasures.
Costs: Preparing cyber coercion will be expensive. Exploits similar to those in cyberwarfare need to be designed and implemented in advance. The U.S. Congress recently quadrupled the budget of its Cyber Command (USCYBERCOM) to $447 million. Using nearly any form of cyber coercion also has a major secondary cost in making its methods unusable for any subsequent attacks, as discussed above, so development or purchase costs will be high. This means that countries may not have the political will to follow through on cyber coercion.
Despite the risks of cyber coercion, it is likely to be increasingly prevalent in the future because it can defuse conflicts short of outright warfare. Coercion can be accomplished in many ways, giving the coercer considerable flexibility. Thus even though its planning and indirect costs are high, it could have net benefits in saving on the potentially enormous costs, human and otherwise, of warfare. But cyber coercion is more problematic than traditional coercion because of the unreliable nature of the underlying technology, the difficulty of gauging what the response will be, and the questions about when it becomes an act of war.
Arquilla, J. (2011, October) “From Blitzkrieg to Bitskrieg: The Military Encounter with Computers”, Communications of the ACM, 54(10), pp 58–65.
Bratton, P. (2005) “When Is Coercion Successful? And Why Can’t We Agree on It?”, Naval War College Review, 58(3), pp 99–120.
Byman, D., and Waxman, M. (2002) The Dynamics of Coercion: American Foreign Policy and the Limits of Military Might, Cambridge University Press (RAND), Cambridge, UK.
Cimbala, S. (1998) Coercive Military Strategy, Texas A&M University Press, College Station, Texas, USA.
Clarke R. A., and Knake R. K. (2010) Cyber War: The Next Threat to National Security and What To Do about It, HarperCollins, New York, NY.
Firn, M. (2013, August 13) “North Korea Builds Online Troll Army of 3,000”, The Telegraph, retrieved from www.telegraph.co.uk/news/worldnews/asia/northkorea/10239283/North-Korea-builds-online-troll-army-of-3000.html.
Freedman, L. (1998) “Strategic Coercion”, in Freedman, L. (Ed.), Strategic Coercion Concepts and Cases, pp 15–36, Oxford University Press, New York, NY.
Hare, F. (2012) “The Significance of Attribution to Cyberspace Coercion: A Political Perspective”, 4th Intl. Conf. on Cyber Conflict, pp 125–129.
Kessler, G. (2011, April 27) “How Effective are Sanctions in ‘Changing Behavior’?”, The Washington Post, retrieved from http://www.washingtonpost.com/blogs/fact-checker/post/how-effective-are-sanctions-in-changing-behavior/2011/04/26/AFCwRktE_blog.html.
Kugler, R. (2009) “Deterrence of Cyber Attacks”, in Kramer, F., Starr, S., and Wentz, L. (Eds.), Cyberpower and National Security, pp 309–340, Potomac Books and National Defense University Press, Dulles, VA, USA.
Lango, J. (2006) “Last Resort and Coercive Threats: Relating a Just War Principle to a Military Practice”, Proc. 2006 Joint Services Conference on Professional Ethics (JSCOPE), Washington, DC, USA, retrieved from http://isme.tamu.edu/JSCOPE06/Lango06.pdf.
Lewis, J. (2011) “Cyberwar Thresholds and Effects”, IEEE Security & Privacy, September/October, pp 23–29.
Libicki, M. (2011) “Cyberwar As a Confidence Game”, Strategic Studies Quarterly, Spring, pp 132–146.
Liff, A. (2012) “Cyberwar: A New ‘Absolute Weapon’? The Proliferation of Cyberwarfare Capabilities and Interstate war”, Journal of Strategic Studies, 35(3), pp 401–428.
Lukasik, S. (2010) “A Framework for Thinking about Cyber Conflict and Cyber Deterrence with Possible Declaratory Policies for These Domains”, in Proceedings of a Workshop on Deterring Cyber Attacks: Informing Strategies and Developing Options for U.S. Policy, pp 99–121, The National Academies Press, Washington, DC, USA.
McClure, S., Scambray, J., and Kurtz, G. (2012) Hacking Exposed 7: Network Security Secrets and Solutions, McGraw-Hill, Berkeley, CA.
Nye, J. (2011, November 8) “The Changing Nature of Coercive Power”, World Politics Review, Coercive Diplomacy Feature Report, pp 3–6.
Power, R. (2000) Tangled Web: Tales of Digital Crime from the Shadows of Cyberspace, Que Corporation, Indianapolis, IN, USA.
Rowe, N. (2011) “Towards Reversible Cyberattacks”, in Leading Issues in Information Warfare and Security Research, Volume I, Academic Publishing, pp 145–158, originally published in the Proceedings of the European Conference on Information Warfare, Thessaloniki, Greece, July 2010.
Rowe, N. (2015) “Attribution of Cyberwarfare”, Chapter 3 in J. Green (ed.), Cyber Warfare: A Multidisciplinary Analysis, Routledge, London, UK.
Sanger, D. (2012) Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power, Crown Publishers, New York, NY.
Schaub Jr., G. (1998) “Compellence: Resuscitating the Concept”, in Freedman, L. (Ed.), Strategic Coercion Concepts and Cases, pp 37–60, Oxford University Press, New York, NY.
Schelling, T. (1966) Arms and Influence, Yale University Press, New Haven, CT, USA.
Singer, P., and Friedman, A. (2014, January 15) “Cult of the Cyber Offensive: Why Belief in First-Strike Advantage Is As Misguided Today As It Was In 1914”, Foreign Policy, retrieved from http://www.foreignpolicy.com/articles/2014/01/15/cult_of_the_cyber_offensive_first_strike_advantage.
Waxman, M. (2013) “Self-Defensive Force against Cyber Attacks: Legal, Strategic and Political Dimensions”, International Law Studies, 89, pp 109–122, U.S. Naval War College, Newport, RI, USA.
Welch, C. (2013, October 20) “NSA Hacked Mexican President’s E-mail, According to Latest Leaks”, The Verge, retrieved from http://mobile.theverge.com/2013/10/20/4858566/nsa-hacked-mexican-presidential-e-mail.