NAVAL POSTGRADUATE SCHOOL

MONTEREY, CALIFORNIA

THESIS

 

TESTING DECEPTION WITH A COMMERCIAL TOOL SIMULATING CYBERSPACE

Sasha K. Drew
Chief Petty Officer, United States Navy
BS, American Military University, 2013

Charles W. Heinen
Chief Petty Officer, United States Navy
BS, University of Maryland University College, 2016
M, Colorado State University-Global Campus, 2018

Submitted in partial fulfillment of the
requirements for the degree of

MASTER OF SCIENCE IN APPLIED CYBER OPERATIONS

from the

NAVAL POSTGRADUATE SCHOOL
March 2021

Approved by: Armon C. Barton
             Advisor

             Neil C. Rowe
             Co-Advisor

             Alex Bordetsky
             Chair, Department of Information Sciences

ABSTRACT


Deception methods have been applied to the traditional domains of war (air, land, sea, and space); however, within the newest domain (cyber), there is a need to study how deception can best be employed. We will use a cyberspace simulator to evaluate some methods of deception in cyberspace and assess their impact on cyberspace operations. This could be beneficial since the military's digital infrastructure is threatened daily with cyber-attacks. This study aims to evaluate the costs and benefits of integrating military deception into cyberspace operations.


I. Introduction

Now that cyberspace is a warfare domain like air, land, sea, and space, military leadership is thinking about how defense and offense can be applied in a virtual environment. Even though cyberspace is evolving and constantly changing with the mass influx of internet-connected devices coming online daily, there are no Department of Defense (DOD) or commercial virtual wargame simulations that cover basic cyberspace operations for planning purposes. One of the biggest challenges in Defensive Cyberspace Operations (DCO) is analysts' ability to understand, interpret, and act on the massive amounts of data flowing across our networks. The purpose of this research and experiment is to help bridge that gap.

Over the course of a few months, we, the authors, experimented with modeling cyberwar tactics as a wargame to prepare leadership and operators to succeed in cyberspace operations. The ability to customize scenarios and propose realistic ones to Soar Technologies is an important factor in wargaming, since not every participant will have the same skill level or base knowledge given their role in the organization. With the emergence of cyber warfare, the concept of deception needs to be revisited. Deception has rarely been considered in cyberwar planning, yet the knowledge gained from a wargame is indicative of how a conflict might evolve and what drives its outcome. Using deception within cyberspace provides an additional level of protection to defenders: it improves the defender's rate of success while limiting the attacker's ability to disrupt activities.

II. Background

Cyberwar games have proven useful for a broad range of purposes. Cyberspace wargames are helpful for planning because they simulate a military operation between opposing forces using procedures, data, and rules to approximate actual conditions. The benefit of a wargame is that it allows military leaders and operators to work through realistic virtual simulations without real-world penalties. Educational institutions and corporations also conduct wargames because of these benefits; the games allow them to test their processes, train employees, and identify unforeseen vulnerabilities. There are unexpected scenarios in wargames, just as in actual warfare. Seeing and experiencing the unthinkable in a wargame physically and mentally prepares an agent, student, or player to respond efficiently in real life. Wargames give agents, operators, and students hands-on practical experience as either a cyber defender or attacker.

The simulated experiment in this research involves many runs with random choices in each run. The tool developed by Soar Technologies offers a level of randomization so that agents get a different outcome each time the simulation is played. This stochastic testing is useful for developing both offensive and defensive tactics: simulated players or agents let participants explore concepts and make decisions in a dynamic environment. Our expectation is that simulating cyberspace threat scenarios will strengthen existing cybersecurity programs and provide an engaging and practical application of cyberspace operations for cyberspace doctrine, theory, and planning.

Wargames are executed with established rules within a structured framework and have been a role-playing method to anticipate, plan, understand, and prepare for war. Wargames support learning objectives; they teach and reinforce training of specific tasks or maneuvers. Emulating planned changes in architecture and strategy while exposing them to a cyberattack in the wargaming environment is beneficial. An organization can simulate the impacts and effects of modifications under consideration before dedicating time and effort to implementing them in their environment. These measures grant an opportunity to identify threats incurred with new capabilities and approaches rather than learning them after deployment in the real world. The DOD, corporations, and educational institutions use wargames to simulate problem-solving, creative thinking, and decision-making when applied to real-world situations. These elements are what the attacker and defender agents in the simulated commercial tool seek to accomplish.

As a team of two for this experiment, our preliminary step was to create the cyber wargame foundation with as much detail as possible about the expansion plan for the simulated commercial tool. The analysis, design, development, implementation, and evaluation (ADDIE) model was used as a guideline during the thesis proposal stage. We set the framework with initial mechanics and dynamics for the simulated commercial tool, understanding that not all elements within the framework would remain the same during the experiment's research and initial design phase. Chapter II references previous experiments on cyberspace deception, honeypots, game theory, artificial intelligence, and attack modeling. Following this, in collaboration with Soar Technologies, Chapter III describes our experiments' methodology, including the tested scenarios. The attacker's and defenders' actions are measured in terms of offensive strategies, defensive strategies, situational awareness, and timestamps for each scenario. Chapter IV breaks down the results from the proposed scenarios. Chapter V discusses considerations and recommendations for future work.

III. Literature Review

A. Deception in Cyberspace

Successful network defense is becoming harder to accomplish within today's cyberspace because of the increasing sophistication of attacks, the number of new vulnerabilities discovered, and the number of devices requiring Internet connectivity. Network defense must therefore be increasingly automated. One useful form of automated response could be implementing cyberspace deception within a network. Automated deception can allow defenders to conceal high-value targets while encouraging attackers to go after low-value targets (Gartzke & Lindsay, 2015). It gives the defenders more time to identify attacks and figure out how to counter future attacks. This method allows a strategic advantage for defenders over their attackers (Park & Kim, 2019).

Penetration testing or "red teaming" can test implemented network deceptions. It can reveal weaknesses in the network and its systems, and it can identify vulnerable or exploitable systems that require remediation (Randhawa et al., 2018). This type of testing is completed by an organization different from the targeted network owner to provide an unbiased test. One example of software supporting such testing is Trogdor (Randhawa et al., 2018), an automated system that uses a model and critical-node analysis. It compiles a visual picture of the defenders and vulnerable resources within the network. Trogdor can also provide feedback to the defender about the network's vulnerabilities on both the real network devices and the deceptive ones. This software makes deceptive elements appear more realistic.

Traditional deception taxonomy practices are still applicable to cyberspace (Rowe and Rothstein, 2004). Some taxonomies are most useful when referring to cyber defense and offense. Object-based offensive deception is common and useful for the attacker (Rowe & Rrushi, 2016). Examples include messages, attachments to messages, and the data itself. Spear-phishing is one of the most common and effective methods attackers use to employ messages as a form of deception.

Along with messages, there can also be attachments to those messages. Hiding malicious software within an attachment that appeals to the user is an efficient way for an attacker to conceal it. The final example is the data itself. Data can be overlooked but can be malicious on its own (Rowe and Rrushi, 2016); this can be accomplished using malformed packets to compromise the target's device, and it can also be effective against software.

Effective defensive deception examples include using camouflage and fakes within the network. Camouflage is mainly used for offensive purposes, but it has great value on the defensive side. An intelligent way to use camouflage is hiding honeypots within the defender's network (Rowe and Rrushi, 2016). Attackers wish to avoid honeypots, so effectively hiding a defender's honeypot can give the defender an edge.

Using fake documents or devices is an effective way for the defender to protect information and throw attackers off the trail. One way to use this is by planting fake user information in files to bait the attackers, providing alerts to defenders showing them potential malicious activity occurring on the network (Rowe and Rrushi, 2016).

Setting up fake network devices provides a similar effect as the fake files, except that network devices are being accessed. This setup could give the defender more time to react to an attacker. Even though both offensive and defensive cyberspace deception methods are similar to each other, they are different enough to affect each side positively.

B. Other Related Work

Attackers use cyber deception as a tool to mask reconnaissance and infiltrate networks while remaining hidden from defenders. Cyber deception strategies for the defenders include misinformation and disinformation to sabotage the early stage of attack reconnaissance. A project used game theory to reason about a cyberattack scenario where a defender used lightweight decoys to hide and defend real hosts (Major et al., 2019). The defender and attacker played a game with resources consisting of real and decoy systems, possible actions for each player, and a method for defining and evaluating individual player strategies. This work included multiple game trees and an explicit representation of each player�s knowledge of the game structure and payoffs.

Other previous work tested a deceptive-response framework that categorizes the types of responses used against intruders and shows how intrusion deception fits within the framework (Goh, 2007). Another thesis assembled tools and technologies such as Snort, VMware, and honeypots into a testbed open to attacks from the Internet (Chong and Koh, 2018). The results showed that attackers had interesting reactions to deceptions. Another article proposed a deep-learning method to conduct automated cyber-deceptive defenses (Matthew, 2020).

Deceptive responses can be incorporated into network defenses at the application layer by adapting to Web traffic. One project conducted a controlled human-subject cyber deception experiment to examine the value of human interactions and teamwork in large-scale red team cognitive and behavioral testing (Bruggen et al., 2019). This research concluded that red team members' cyber and psychological deception might be more effective on an operational network; these deceptive efforts create confusion through complexities and anomalies (Bruggen et al., 2019).

Decoys deployed within a network can potentially entrap attackers and divert their attacks from the assets that most impact the organization. A decoy in cyberspace is an asset placed within the network to mislead an attacker and trap them. One project introduced a new hybrid decoy architecture that involved moving decoys (Sun et al., 2019). In this configuration, front-end decoys would constrain the attackers and forward the malicious commands to decoy servers, providing the attackers with plausible appearances from their attacks. This project showed that such decoys require very little overhead.

Fake documents can also provide deception in cyberspace. They can contain disinformation or misleading but correct information. Mixing real documents with fake ones can provide defenders an early warning of potential data exfiltration. One method to generate believable fake text documents uses a genetic algorithm to develop plausible text; experiments showed that the algorithm successfully created fakes that appear to be legitimate documents. Another idea is to confuse people trying to steal code by generating code that looks similar but has no function (Park and Stolfo, 2012).

Honeypots receive attacks and deceive attackers into thinking that they are accessing a real system while the honeypot is studying the attack pattern. Honeypots can detect new attacks and previously unknown vulnerabilities based on an attacker�s behavior. One project implemented deception techniques such as fake files, defensive camouflage, delays, and false excuses in a Web honeypot built with SNARE and TANNER software and an SSH honeypot built with Cowrie software (Chong and Koh, 2018). The results showed that most attackers performed only vulnerability scanning and fingerprinting of honeypots. They did not respond to customized deception and were primarily non-interactive.

The effectiveness of two free honeypot tools in real networks was analyzed in an experiment by varying their location and virtualization and by adding more deception to them (Rowe et al., 2015). The experiment tested and evaluated a Web honeypot tool, Glastopf, and an SSH honeypot tool, Kippo. The results showed that including deception in the Web honeypot generated more interest by attackers in additional linked Web pages and interactive features. For comparison, the experiment examined log files of a legitimate website, www.cmand.org.

Game theory provides tools for a framework of deception and allows the attacker and defender to simulate strategies. Game theory is the study of how interacting agents' choices produce outcomes (Ross, 2019). In a game model, defenders use security methods to secure the network, and attackers try to get around those defenses. Both defender and attacker actions can be assigned costs. Costs are often site-specific because each organization has its own information assets, security policies, and risk factors. One measure of how well the attacker won the game is the difference between the total cost of the attack and the total cost of the defense.

Deception games are examples of imperfect-information stochastic games that have been examined in the literature. These game models are beneficial for both attackers and defenders because they recreate real-life situations in a simulated environment and provide information on how each side can perform better; this is where minimax game theory comes into play. Minimax is used to minimize the worst-case potential loss by having the agents consider the opponent's response to their strategy and select the strategy whose guaranteed payoff is as large as possible (Strategies of play, n.d.).
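
As a simple illustration of the minimax idea, the short Python sketch below computes each player's worst-case-optimal pure strategy for a small, hypothetical zero-sum payoff matrix. The matrix values and strategy labels are invented for illustration and are not taken from the thesis experiments or the cited sources.

```python
# Minimax on a small, hypothetical zero-sum game.
# Rows = attacker strategies, columns = defender strategies.
# Entries are payoffs to the attacker (the defender receives the negative).
payoffs = [
    [3, -1, 2],   # attacker strategy A0
    [1,  4, -2],  # attacker strategy A1
    [0,  2,  1],  # attacker strategy A2
]

# Attacker's maximin: best guaranteed payoff assuming the defender
# responds with the column that hurts the attacker most.
attacker_guarantees = [min(row) for row in payoffs]
attacker_choice = max(range(len(payoffs)), key=lambda i: attacker_guarantees[i])

# Defender's minimax: smallest worst-case loss assuming the attacker
# responds with the row that hurts the defender most.
columns = list(zip(*payoffs))
defender_worst_cases = [max(col) for col in columns]
defender_choice = min(range(len(columns)), key=lambda j: defender_worst_cases[j])

print(f"Attacker picks strategy {attacker_choice}, guaranteeing {attacker_guarantees[attacker_choice]}")
print(f"Defender picks strategy {defender_choice}, capping loss at {defender_worst_cases[defender_choice]}")
```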

Modeling attacks enables simulation and testing of them. One project built attack models using deep learning, with numerical simulations to verify accuracy (Najada, 2018). Another study generated an agent taxonomy by applying topological data analysis to simulation outputs (Swarup, 2019). The Malicious Activity Simulation Tool (MAST) is a scalable, flexible, and interoperable architecture for training cybersecurity specialists (Swiatocha, 2018).

Attack modeling needs to include methods to allow movement throughout the network to establish footholds for malicious activities ("lateral movement") (Bai et al., 2019). This activity leaves footprints on hosts as well as the network, so the defender can detect these movements and implement countermeasures. Defenders can make decoys look like legitimate network assets to confuse the attacker (Amin et al., 2020). Previous indicators of compromise can help network defenders identify and prevent this activity. One approach identifies lateral movement by finding shortest paths using biased random walks (Wilkens et al., 2019). Cyber deception in honeypots can help detect lateral movement within networks, since anyone accessing them is suspicious.
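
A minimal sketch of the path-based idea, under simplifying assumptions: given a hypothetical host-to-host authentication log, a breadth-first search over the login graph finds the shortest chain of logins from an initially compromised host to a sensitive asset. This is only an illustrative stand-in for the biased-random-walk reconstruction cited above; the host names and events are invented.

```python
from collections import deque

# Hypothetical authentication events: (source_host, destination_host).
auth_events = [
    ("workstation-7", "file-server-1"),
    ("workstation-7", "workstation-12"),
    ("workstation-12", "domain-controller"),
    ("file-server-1", "domain-controller"),
    ("workstation-3", "mail-server"),
]

def shortest_login_path(events, start, target):
    """Breadth-first search for the shortest chain of logins from start to target."""
    graph = {}
    for src, dst in events:
        graph.setdefault(src, []).append(dst)
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# A short login chain from a compromised workstation to the domain controller
# is a candidate lateral-movement path worth investigating.
print(shortest_login_path(auth_events, "workstation-7", "domain-controller"))
```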

Artificial intelligence (AI) is the intelligence displayed by machines when they behave like humans, in tasks such as speech recognition, decision-making, visual perception, and translation between languages. Machines use it for problem-solving and learning. AI technologies can enhance security by defending against intrusions at various network layers. AI can improve a network's security by collecting data from logs, records, and alerts and using this information to correlate events on the network (Using artificial intelligence in cybersecurity, 2020). Linking the events occurring on a network efficiently and effectively is challenging. Security information and event management (SIEM) devices provide a place for defenders to correlate events, but AI can do this at larger scale and with fewer missed correlations. Additionally, AI learns from previous events and actions, enhancing the overall security the more it works.
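
As a toy illustration of event correlation (not any particular SIEM or AI product), the sketch below groups hypothetical alerts from several log sources by source IP address, so repeated activity from one address across different sensors surfaces as a single correlated incident. All field names and values are invented for illustration.

```python
from collections import defaultdict

# Hypothetical alerts drawn from different log sources.
alerts = [
    {"source": "firewall", "ip": "203.0.113.7",  "event": "port scan"},
    {"source": "ids",      "ip": "203.0.113.7",  "event": "exploit attempt"},
    {"source": "web_log",  "ip": "198.51.100.4", "event": "login failure"},
    {"source": "ids",      "ip": "203.0.113.7",  "event": "beaconing"},
]

# Correlate alerts by source IP address.
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["ip"]].append((alert["source"], alert["event"]))

# An IP seen across multiple sensors is a stronger signal than any single alert.
for ip, events in incidents.items():
    if len({src for src, _ in events}) > 1:
        print(f"Correlated incident for {ip}: {events}")
```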

An example of AI enhancing security is a program that can defend itself from various network attacks and perform intrusion detection with several multitasking processes (Anitha et al., 2016). A software model was developed to represent, capture, and learn to recognize attacks. AI technologies can process large amounts of data to identify patterns, and AI methods can be used in various cyber defense phases (Trifonov, 2020). Another project analyzed cyber-physical systems (CPS) with AI concepts, compared them to CPS modeling approaches, and suggested AI based on reinforcement learning. Also, a framework was created to analyze AI-based cyberattacks on smart grids (Kaloudi, 2020).

Machine learning is a subarea of AI that trains machines to learn, usually by identifying patterns. Cyber threats advance yearly in sophistication and automation, making automated learning about them desirable. Conventional security measures that rely on manual analysis of attack development and security incidents cannot provide timely protection from these threats, leaving systems unprotected for extended periods (Rieck, 2011). Machine learning could provide much faster protection.

Reinforcement learning is a field within machine learning and can be used in many ways. Two iterative reinforcement-learning algorithms, one for the attacker and one for the defender, allowed them to confront one another (Zhu, 2014). Intelligent agents take actions in an environment to maximize a notion of cumulative reward. One project used reinforcement learning to provide an optimal defense policy in a more accurate and timely way (Feng, 2017). Reinforcement learning has been shown to cope with increasingly complex cybersecurity problems (Nguyen, 2020).
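
A minimal sketch of the reinforcement-learning idea behind such agents: a tabular Q-learning update for a defender choosing among a few response actions. The states, actions, and reward values are invented for illustration and are not taken from the cited projects.

```python
import random

actions = ["monitor", "block_ip", "deploy_decoy"]
states = ["quiet", "scanning_detected", "intrusion_detected"]

# Q-table: expected cumulative reward for each (state, action) pair.
q_table = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard Q-learning update toward reward plus discounted best next value."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])

# One illustrative training step: blocking during a scan yields a positive reward.
s = "scanning_detected"
a = choose_action(s)
q_update(s, a, reward=5.0 if a == "block_ip" else -1.0, next_state="quiet")
print(q_table)
```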

IV. Methodology

A. Overview

We collaborated with Soar Technology Inc. (SoarTech) to use their proprietary tool, the Cyberspace Course of Action Tool (CCAT). CCAT simulates deception within a computer network and permits the testing of security measures; in particular, it can test the impact that deception has within a network. This provides actionable insights and helps support the cyberspace decision-making process. In our experiments with CCAT, we proposed three realistic scenarios and used actions developed from previous experiments (Green, 2020). The effectiveness of deception was then measured using a set of metrics based on the cost and reward values discussed later in this chapter.

B. Scenario Design

CCAT is a proprietary tool developed by SoarTech and was designed as a cyber wargame between attackers and defenders. There are various actions that both the attacker and defender can choose depending on the specific situation. The scenarios are designed with network assets and nodes to simulate a real-life network. Empirical game-theoretic analysis (EGTA) was used with reinforcement learning to train agent behavior over many runs. During training, an agent becomes less likely to choose actions that have been observed to make achieving its objectives more costly and more likely to choose actions observed to make them less costly.
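
The sketch below illustrates, under simplifying assumptions, the kind of cost-driven behavior shaping described here: actions with lower average observed cost receive higher selection probability via a softmax over negative costs. It is only an illustrative stand-in, not SoarTech's actual EGTA or training code, and the action names and cost values are invented.

```python
import math

# Hypothetical average costs observed for each attacker action across past runs.
observed_costs = {"spear_phish": 4.0, "exploit_public_app": 7.5, "web_crawl": 3.0}

def action_probabilities(costs, temperature=2.0):
    """Softmax over negative costs: cheaper actions become more likely."""
    weights = {a: math.exp(-c / temperature) for a, c in costs.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

probs = action_probabilities(observed_costs)
print(probs)  # web_crawl gets the highest probability because it has the lowest cost

# After a run where spear_phish proved cheaper than expected, average in the
# new observation and recompute; its selection probability rises.
observed_costs["spear_phish"] = (observed_costs["spear_phish"] + 2.0) / 2
print(action_probabilities(observed_costs))
```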

C. Scenario Components

The network map, seen in Figure 1, is a simulated local-area network with segmentation and security; it is the network map used for our experiments. It has three simulated local-area subnetworks (VLANs) containing simulated servers, workstations, a firewall, and other network devices commonly found within military networks. Front-end servers support Web, mail, and applications that access the backend servers for those specific tasks. The server VLAN contains two simulated Program of Record (POR) servers, two domain controllers, two Web database servers, two file servers, and one router, along with the application and mail backend servers. Figure 2 is the defender map, which shows the flow of response actions based on the attack experienced, and Figures 3 through 5 are the attack maps, which show the flow an attacker can take depending on the specific attack chosen (Green, 2020).

Figure 1. Network Map. Source: Green (2020).

Figure 2. Defender Map. Source: Green (2020).

Figure 3. Attack Map (Part 1). Source: Green (2020).

Figure 4. Attack Map (Part 2). Source: Green (2020).

Figure 5. Attack Map (Part 3). Source: Green (2020).

D. Scenarios Tested

The first scenario tested included a delayed effect, which tested the attacker's perceptiveness. An example of such an effect is a user clicking on a phishing link after the attacker sends a phishing email. The attacker's subsequent actions in this scenario could be to exfiltrate data or destroy the server. The defender could use actions to disable the links and attachments that were sent. The attack map is shown in Figures 6 through 8.

Figure 6. Attack Map for Proposed Scenario 1 (Part 1)

Figure 7. Attack Map for Proposed Scenario 1 (Part 2)

Figure 8. Attack Map for Proposed Scenario 1 (Part 3)

The second scenario set up an attack using a distributed denial-of-service (DDoS) attack as a distraction, otherwise known as a smokescreen. The attacker performs a denial-of-service attack on an isolated part of the network to distract the defenders from an exploit against either the mail server or the application server, or from a web crawl against the web server; these are the attacks that give the attacker access to the network. As shown in Figures 9 through 11, the attacker can choose which server to attack when the DDoS is performed; they then have several options for subsequent attacks to gain access to the network and perform other tasks. The defender's options are to blacklist the IP address, perform denial-of-service mitigations, and install tools to detect and prevent further such attacks.

Figure 9. Attack Map for Proposed Scenario 2 (Part 1)

Figure 10. Attack Map for Proposed Scenario 2 (Part 2)

Figure 11. Attack Map for Proposed Scenario 2 (Part 3)

The third scenario involved an attacker spoofing a network IP address by creating IP packets with that address (Figures 12 through 14). The attacker used a trusted IP address that did not require the usual degree of authentication. Spoofed IP addresses can be made more challenging to recognize if each packet appears to come from a different address. One method is for the attacker to scan the network to identify the network's addresses, then construct an address that matches an internal network subdomain. The defender needs to detect and block all inbound traffic from the attacker's IP. This could succeed or fail based on the probability that the defender chooses the correct IP to block, given the evidence generated by the attacker.
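
A simplified sketch of the defender's check, assuming hypothetical subnets and addresses: inbound packets whose source addresses claim to belong to internal subnets are flagged as candidates for blocking, since legitimate internal traffic should not arrive on an external interface. This is not CCAT's detection logic, only an illustration of the anti-spoofing idea.

```python
import ipaddress

# Hypothetical internal subnets for the simulated network.
internal_subnets = [ipaddress.ip_network("10.10.1.0/24"),
                    ipaddress.ip_network("10.10.2.0/24")]

# Hypothetical source addresses seen on the external interface.
inbound_sources = ["203.0.113.50", "10.10.1.23", "198.51.100.9", "10.10.2.77"]

def is_spoofed_internal(src):
    """An external packet claiming an internal source address is likely spoofed."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in internal_subnets)

blocklist = [src for src in inbound_sources if is_spoofed_internal(src)]
print("Candidate addresses to block:", blocklist)
```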

Figure 12. Attack Map for Proposed Scenario 3 (Part 1)

Figure 13. Attack Map for Proposed Scenario 3 (Part 2)

Figure 14. Attack Map for Proposed Scenario 3 (Part 3)

E. Assigning Costs and Benefits to the Scenario Options

Each action has associated costs, and each specific action also brings benefits. Each attacker action has a cost subtracted from the overall score at the end of the game; similarly, the defender has a cost based on the defender action chosen. Costs, rewards, and the associated actions are shown in Tables 1 through 5. These cost and reward values were taken from a previous experiment (Green, 2020) and were set in consultation with SoarTech to identify values that would bring realism to the game within realistic scenarios.

Table 1. Cost Attacker. Adapted from Green (2020).

Table 2. Cost Defender. Adapted from Green (2020).

Table 3. Attacker Decoy Scenario Actions. Adapted from Green (2020).

Table 4. Defender Decoy Scenario Actions. Adapted from Green (2020).

Table 5. Defender Decoy Scenario Actions. Adapted from Green (2020).

 

Figures 3 through 5 show the attacker's flow based on the actions chosen. Attackers start the game with a set of preparation actions by selecting "Detecting Decoy," which means that the attacker determines whether a decoy exists within the environment, or "Maintain IP Address," which allows the attacker to continue forward with the attack. The follow-on action is "Map Network." This simulates a network scan on the defender network, revealing open ports on the servers located within the DMZ. Following completion of "Map Network," the attacker can select a specific network attack based on the security conditions discovered during that phase. These actions include spear-phishing attacks, exploiting public-facing applications through servers, and a web crawl (collecting an inventory) of the web server through the open port. Upon completion of this action, the attacker can discover local networks, users, and processes, search local data, or determine decoys. This leads the attacker to gain vital information from the network, such as usernames, personally identifiable information (PII), or network connections. The action taken here relies on the result of the attack, with the two final possibilities being "Exfil Data" or "Server Destroyed." The costs and rewards for these actions are tallied, and a total score is presented.
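
To make the tallying concrete, the sketch below scores a hypothetical attacker run against illustrative per-action costs and outcome rewards. The numbers and action names are placeholders, not the actual values in the Green (2020) or SoarTech tables.

```python
# Illustrative per-action costs and outcome rewards (placeholder values).
attacker_costs = {"map_network": 2, "spear_phish": 3, "discover_local_users": 1, "exfil_data": 5}
attacker_rewards = {"exfil_data": 20, "server_destroyed": 15}

def score_run(actions_taken, outcomes):
    """Total score = rewards for achieved outcomes minus costs of actions taken."""
    cost = sum(attacker_costs.get(a, 0) for a in actions_taken)
    reward = sum(attacker_rewards.get(o, 0) for o in outcomes)
    return reward - cost

run_actions = ["map_network", "spear_phish", "discover_local_users", "exfil_data"]
run_outcomes = ["exfil_data"]
print("Attacker score:", score_run(run_actions, run_outcomes))  # 20 - 11 = 9
```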

Like the attackers, the defenders also have a set of actions (Figure 2) that can be chosen and accomplished, graded via the costs and rewards in Tables 4 through 6. The defender's overall goals are to identify the actions being taken against their network and prevent the attacker from accomplishing their primary objective. At the beginning of the game, the defender can select from six options, including file monitoring, process monitoring, "Audit Logon Events," "Audit HIPS Logs," and "Check Firewall Alerts." These actions give the defender the ability to determine whether anything malicious has occurred or is occurring on the network. Once the defender has determined an attack has occurred, they can perform follow-on actions to stop the current attack from progressing and prevent data exfiltration or server destruction. Within the defender actions, there are options to create decoy servers, create decoy POR servers, change file names, and change hostnames. Adding the ability for deception can significantly increase the attacker's costs, helping the defender win the game.

Specifically, for the denial-of-service scenario, the defender's reward for detecting and mitigating the first attack is minimal, since the attacker's primary goal is to gain access using a different technique than the DDoS. The main benefit is in preventing the attacker from gaining critical-system access. The longer the denial of service occurs, the easier it is to figure out how to block it, so there is a small reward if the defender does not block the attack. However, there is a competing negative cost to the defender of suffering denial of service; an alternative cost-benefit accounting is that the attacker accrues points for each timestamp that a node is under DDoS. The new defender DDoS scenario cost table can be seen in Table 6, and the new costs and rewards can be seen in Table 7. The defender could accrue a few points of cost each time a DDoS attack has not been blocked, added to their overall cost. The costs and benefits for the attacker are different if they are a military adversary, since their primary goals can be sabotage and espionage. The defender's goal against denial of service should be to end excess network connections.
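
A small sketch of the alternative accounting described above, with invented values: the attacker accrues points for every timestamp in which a node remains under denial of service, and the defender accrues a matching cost until the attack is blocked. The point values and timeline are illustrative only, not the Table 6 or Table 7 values.

```python
# Hypothetical timeline: True means the node is under DDoS at that timestamp.
ddos_timeline = [True, True, True, True, False, False]  # blocked after 4 timestamps

POINTS_PER_TIMESTAMP = 2   # attacker reward per timestamp of unmitigated DDoS
DEFENDER_PENALTY = 1       # cost added to the defender per unmitigated timestamp

attacker_points = sum(POINTS_PER_TIMESTAMP for under_attack in ddos_timeline if under_attack)
defender_cost = sum(DEFENDER_PENALTY for under_attack in ddos_timeline if under_attack)

print(f"Attacker accrues {attacker_points} points; defender accrues {defender_cost} cost")
```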

Table 6. Proposed Defender DDoS Scenario Cost Table

Table 7. Proposed Actions for Attacker and Defender


 

LIST OF REFERENCES

Amin, M. A. R. A., Shetty, S., Njilla, L. L., Tosh, D. K., & Kamhoua, C. A. (2020). Dynamic cyber deception using partially observable Monte-Carlo planning framework. In Modeling and Design of Secure Internet of Things (pp. 331–355). IEEE. https://doi.org/10.1002/9781119593386.ch14

 

Anitha, A., Paul, G., & Kumari, S. (2016). Cyber defense using artificial intelligence. International Journal of Pharmacy and Technology, 8(4), 25352–25357.

 

Aybar, L., Singh, G., & Shaffer, A. (2018, March 8). Developing simulated cyberattack scenarios against virtualized adversary networks. Proceedings of the 13th International Conference on Cyber Warfare and Security ICCWS 2018. http://hdl.handle.net/10945/59147

 

Bai, T., Bian, H., Daya, A. A., Salahuddin, M. A., Limam, N., & Boutaba, R. (2019). A machine learning approach for RDP-based lateral movement detection. 2019 IEEE 44th Conference on Local Computer Networks (LCN), 242–245. https://doi.org/10.1109/LCN44214.2019.8990853

 

Bonetto, R., & Latzko, V. (2020). Machine learning. In F. H. P. Fitzek, F. Granelli, and P. Seeling (Eds.), Computing in communication networks (pp. 135–167). Academic Press. https://doi.org/10.1016/B978-0-12-820488-7.00021-9

 

Bruggen, D., Ferguson-Walter, K., Major, M., Fugate, S., & Gutzwiller, R. (2019). The world of CTF is not enough data: Lessons learned from a cyber deception experiment. IEEE Workshop on Human Aspect of Cyber Security (HACS), 1–8. https://doi.org/10.1109/CIC48465.2019.00048

 

Cammarota, R., Banerjee, I., & Rosenberg, O. (2018). Machine learning IP protection. Proceedings of the International Conference on Computer-Aided Design, 1–3. https://doi.org/10.1145/3240765.3270589

 

Chang, X. S., & Chua, K. Y. (2011). A CyberCIEGE traffic analysis extension for teaching network security [Master's thesis, Naval Postgraduate School]. https://calhoun.nps.edu/bitstream/handle/10945/10578/11Dec%255FChang.pdf?sequence=1&isAllowed=y

 

Park, C., & Kim, Y. (2019). Deception tree model for cyber operation. 2019 International Conference on Platform Technology and Service (PlatCon), 1–4. https://doi.org/10.1109/PlatCon.2019.8669410

 

Chen, H., Han, Q., Jajodia, S., Lindelauf, R., Subrahmanian, V. S., & Xiong, Y. (2020). Disclose or exploit? A game-theoretic approach to strategic decision-making in cyber-warfare. IEEE Systems Journal, 14(3), 3779–3790. https://doi.org/10.1109/JSYST.2020.2964985

 

Chong, W., & Koh, C. K. R. (2018). Learning cyberattack patterns with active honeypots [Master's thesis, Naval Postgraduate School]. https://calhoun.nps.edu/bitstream/handle/10945/60377/18Sep_Chong_Koh.pdf?sequence=1&isAllowed=y

 

Ciancioso, R., Budhwa, D., & Hayajneh, T. (2017). A framework for zero-day exploit detection and containment. 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), 663–668. https://doi.org/10.1109/DASC-PICom-DataCom-CyberSciTec.2017.116

 

Farar, A., Bahsi, H., & Blumbergs, B. (2017). A case study about the evaluation of cyber deceptive methods against highly targeted attacks. 2017 International Conference on Cyber Incident Response, Coordination, Containment & Control (Cyber Incident), 1–7. https://doi.org/10.1109/CYBERINCIDENT.2017.8054640

 

Feng, M., & Xu, H. (2017). Deep reinforcement learning-based optimal defense for cyber-physical system in presence of unknown cyberattack. 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 1–8. https://doi.org/10.1109/SSCI.2017.8285298

 

Gartzke, E., & Lindsay, J. R. (2015). Weaving tangled webs: Offense, defense, and deception in cyberspace. Security Studies, 24(2), 316–348. https://doi.org/10.1080/09636412.2015.1038188

 

Goh, H. (2007). Intrusion deception in defense of computer systems [Master's thesis, Naval Postgraduate School]. http://hdl.handle.net/10945/3534

 

Green, J. (2020). The fifth masquerade: An integration experiment of military deception theory and the emergent cyber domain [Master's thesis, Naval Postgraduate School]. NPS Archive: Calhoun. http://hdl.handle.net/10945/66078

 

 

Jiang, W., Fang, B., Zhang, H., Tian, Z., & Song, X. (2009). Optimal network security strengthening using attack-defense game model. 2009 Sixth International Conference on Information Technology: New Generations, 475–480. https://doi.org/10.1109/ITNG.2009.300

 

Kaloudi, N., & Li, J. (2020). The AI-based cyber threat landscape: A survey. ACM Computing Surveys, 53(1), 20:1–20:34. https://doi.org/10.1145/3372823

 

Karuna, P., Purohit, H., Jajodia, S., Ganesan, R., & Uzuner, O. (2020). Fake document generation for cyber deception by manipulating text comprehensibility. IEEE Systems Journal, 99, 1–11. https://doi.org/10.1109/JSYST.2020.2980177

Major, M., Fugate, S., Mauger, J., & Ferguson-Walter, K. (2019). Creating cyber deception games. 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI), 102–111. https://doi.org/10.1109/CogMI48466.2019.00023

 

 

Mangalraj, P. (2019). Implications of machine learning in cybersecurity. IEEE/WIC/ACM International Conference on Web Intelligence - WI '19 Companion, 142–143. https://doi.org/10.1145/3358695.3360893

 

Matthew, A. (2020). Automation in cyber-deception evaluation with deep learning. Researchgate. https://www.researchgate.net/publication/340061861_Automation_in_Cyber-Deception_Evaluation_with_Deep_Learning

 

Mirkovic, J., & Reiher, P. (2004). A taxonomy of DDoS attack and DDoS defense mechanisms. ACM SIGCOMM Computer Communication Review, 34(2), 39–53. https://doi.org/10.1145/997150.997156

 

Morin, M. (2016). Protecting networks by automated defense of cyber systems [Master's thesis, Naval Postgraduate School]. NPS Archive: Calhoun. https://calhoun.nps.edu/bitstream/handle/10945/50600/16Sep_Morin_Matthew.pdf?sequence=1&isAllowed=y

 

Najada, H. A., Mahgoub, I., & Mohammed, I. (2018). Cyber intrusion prediction and taxonomy system using deep learning and distributed big data processing. 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 631–638. https://doi.org/10.1109/SSCI.2018.8628685

 

Nguyen, T. T., & Reddi, V. J. (2020). Deep reinforcement learning for cybersecurity. ArXiv:1906.05799 [Cs, Stat]. http://arxiv.org/abs/1906.05799

 

Ning, X., & Jiang, J. (2020). In the mind of an insider attacker on cyber-physical systems and how not being fooled. IET Cyber-Physical Systems: Theory & Applications, 5(2), 153–161. https://doi.org/10.1049/iet-cps.2019.0087

 

O'Loughlin, M. (2017). Three if by Internet: Exploring the utility of a hacker militia [Master's thesis, Naval Postgraduate School]. https://calhoun.nps.edu/bitstream/handle/10945/53027/17Mar_O%27Loughlin_Matthew.pdf?sequence=1&isAllowed=y

 

Park, Y., & Stolfo, S. J. (2012). Software decoys for insider threat. Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security, 93–94. https://doi.org/10.1145/2414456.2414511

 

Prasad, R., & Rohokale, V. (2020). Artificial intelligence and machine learning in cybersecurity. In R. Prasad & V. Rohokale, Cyber Security: The Lifeline of Information and Communication Technology (pp. 231–247). Springer International Publishing. https://doi.org/10.1007/978-3-030-31703-4_16

 

Randhawa, S., Turnbull, B., Yuen, J., & Dean, J. (2018). Mission-centric automated cyber red teaming. Proceedings of the 13th International Conference on Availability, Reliability and Security, 1–11. https://doi.org/10.1145/3230833.3234688

 

Rieck, K. (2011). Computer security and machine learning: Worst enemies or best friends? 2011 First SysSec Workshop, 107–110. https://doi.org/10.1109/SysSec.2011.16

 

Ross, D. (2019, March 08). Game theory. Retrieved February 09, 2021, from https://plato.stanford.edu/entries/game-theory/

 

Rowe, N. C., & Rothstein, H. S. (2004). Two taxonomies of deception for attacks on information systems. Journal of Information Warfare, 3(2), 27–39.

 

Rowe, N. C., & Rrushi, J. (2016). Introduction to cyberdeception. Berlin: Springer International Publishing.

 

 

Ruff, T. (2016, October 4). 6 Must-Haves for Fed Bug Bounty Programs. FederalTimes. http://www.federaltimes.com/articles/20-agencies-can-streamline-software-for-savings-says-gao

 

Salazar, D. (2018). Leveraging machine-learning to enhance network security [Thesis, Monterey, CA; Naval Postgraduate School]. https://calhoun.nps.edu/handle/10945/59578

 

Shiva, S., Roy, S., & Dasgupta, D. (2010). Game theory for cybersecurity. Proceedings of the Sixth Annual Workshop on Cyber Security and Information Intelligence Research - CSIIRW '10, 1. https://doi.org/10.1145/1852666.1852704

 

Strategies of play. (n.d.). Retrieved February 09, 2021, from https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/game-theory/Minimax.html

 

Sun, J., Liu, S., & Sun, K. (2019). A scalable high fidelity decoy framework against sophisticated cyberattacks. Proceedings of the 6th ACM Workshop on Moving Target Defense, 37–46. https://doi.org/10.1145/3338468.3356826

 

Swarup, S., & Rezazadegan, R. (2019). Generating an agent taxonomy using topological data analysis. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2204–2205.

 

Swiatocha, T. L. (2018). Attack graphs for modeling and simulating sophisticated cyberattacks [Thesis, Monterey, CA; Naval Postgraduate School]. https://calhoun.nps.edu/handle/10945/59599

 

Trifonov, R., Manolov, S., Tsochev, G., & Pavlova, G. (2020). Recommendations concerning the choice of artificial intelligence methods for increasing of cyber-security. Proceedings of the 21st International Conference on Computer Systems and Technologies '20, 51–55. https://doi.org/10.1145/3407982.3407986

 

Using artificial intelligence in cybersecurity. (2020, August 18). Retrieved February 09, 2021, from https://www.balbix.com/insights/artificial-intelligence-in-cybersecurity/

 

Veith, E. M., Fischer, L., Tröschel, M., & Nieße, A. (2019). Analyzing cyber-physical systems from the perspective of artificial intelligence. Proceedings of the 2019 International Conference on Artificial Intelligence, Robotics and Control, 85–95. https://doi.org/10.1145/3388218.3388222

 

Wilkens, F., Haas, S., Kaaser, D., Kling, P., & Fischer, M. (2019). Towards efficient reconstruction of attacker lateral movement. Proceedings of the 14th International Conference on Availability, Reliability and Security, 1–9. https://doi.org/10.1145/3339252.3339254

 

Yahyaoui, A., & Rowe, N. C. (2015). Testing simple deceptive honeypot tools (I. V. Ternovskiy & P. Chin, Eds.; p. 945803). https://doi.org/10.1117/12.2179793

 

Zhu, M., Hu, Z., & Liu, P. (2014). Reinforcement learning algorithms for adaptive cyber defense against Heartbleed. Proceedings of the First ACM Workshop on Moving Target Defense, 51–58. https://doi.org/10.1145/2663474.2663481