Honeypot Deception Tactics

 

Neil C. Rowe

U.S. Naval Postgraduate School

Monterey, California, USA

 

Abstract

 

Honeypots on computer networks are most effective when they use deception to fool cyberadversaries into thinking they are not decoys collecting intelligence. Honeypot deception is more effective when applied with variety. We discuss the range of deception tactics of which honeypots can take advantage. Ideas can come from deception theory, and honeypot deceptions can benefit from planning and experimentation.

 

This paper appeared as chapter 3 in E. Al-Shaer, J. Wei, K. Hamlen, and C. Wang (Eds.), Autonomous Cyber Deception: Reasoning, Adaptive Planning, and Evaluation of HoneyThings, Springer, Cham, Switzerland, 2018, pp. 35-45.

 

1. Introduction

 

Defensive cyber deception is increasingly used against adversaries in cyberspace, both cyberattackers and cyberspies (Wang and Lu, 2018). Honeypots and honeynets (Girtler, 2013) are an efficient way to test defensive deception tactics for computer systems, mobile devices, and networks. Honeypots can be defined as network nodes with no purpose beyond collecting network-security intelligence. A honeypot testbed offers much flexibility, since a computer or device that is not burdened with many routine tasks can be reconfigured more easily. Detailed scripting of some interactions with adversaries is possible in advance, and most automated attacks will not notice deceptive details since they are not looking for them and rarely encounter deception. Furthermore, minimizing links to the honeypot other than its address means there will be little chance of it being encountered by normal non-malicious, non-spying users.

 

Honeypots are helpful for both cyberattacker adversaries (collecting intelligence about attack methods) and cyberspy adversaries (finding out what they are interested in). Honeypots can be extended to "honeynets" which include network resources as well (Urias, Stout, and Lin, 2016). Defensive deceptions tested on honeypots can also be used on non-honeypot or "production" systems (De Gaspari et al, 2016) to help protect them. And even if a honeypot is detected, that knowledge by adversaries may provide good protection for a "fake honeypot" (Rowe, Custy, and Duong, 2007), defined as any production system subsequently installed there, since it will receive fewer future attacks (Zarras, 2014).

 

Deception enhances honeypots because cyberadversaries want to avoid honeypots. Honeypots do not have useful information, they record attack methods to enable later thwarting of them, and they have safeguards to prevent them from being exploitable as launch points for other activity. Thus a honeypot will be more useful if it conceals its purpose and the purpose of its data.

 

 

 

2. Honeypot deception options

 

Many deception taxonomies have been proposed. (Sztompka, 1999) is a classic analysis of trust; (Dunnigan and Nofi, 2001) outlines military deception, which is useful because cyberattacks are like low-level warfare; (Fowler and Nesbit, 1995) gives practical advice on mounting deceptions in general; and (Almeshekah and Spafford, 2014) provides a recent taxonomy. These can provide a menu of choices from which to construct plausible and convincing deceptions. It has often been observed that deception is more effective when it is varied and surprising (Fowler and Nesbit, 1995), so having a large menu is desirable.

 

Here are specific suggestions for honeypot deceptions based on the "semantic" taxonomy of (Rowe and Rrushi, 2016, chapter 4):

• Deception in superconcept: Honeypots can masquerade just as cyberadversaries do. They can pretend to be particular kinds of sites, concealing or deceiving as to who owns them, who uses them, whether they are servers, and their purposes (Borders, Falk, and Prakash, 2007).

• Deception in object: Honeypots can offer bait in the form of fake files and fake data within files (John et al, 2011; Rowe and Rrushi, 2016, chapter 7). In the military world this is an accepted part of counterintelligence. Other organizations and businesses are increasingly recognizing that some counterintelligence is useful for learning about their adversaries in cyberspace.

• Deception in purpose: Honeypots can run scripted or program-based interactions to confuse and thwart cyberadversaries (Tammi, Rauti, and Leppanen, 2017). This is especially helpful with industrial-control-system honeypots because they run quite specific processes.

• Deception in external precondition: When an adversary asks it to do something dangerous, too revealing, or too hard to simulate, a honeypot can provide an excuse for not doing it (Rowe and Rrushi, 2016, chapter 9). There are many possible excuses, but a good one should still leave the adversary with hope of achieving their primary goals. This can be done, for instance, by giving the adversary a sense of progress, such as asking for a password or some other form of authorization to suggest that part of the previous request was acceptable.

• Deception in result: Lying about the results of actions may also be effective when honeypots are asked to do something they do not want to do (a minimal sketch appears after this list). The easiest lie is to say that an action has been performed, so that a cyberadversary will waste time trying to exploit the nonexistent result, as by reporting nonexistent network nodes (Al-Gharabally et al, 2009). A response can also simulate a vulnerability so the adversary thinks their attack has succeeded (Han, Kheir, and Balzarotti, 2017). The main difficulty with deception in result is maintaining consistency subsequently. This can require secondary excuses, such as claiming protection violations when the adversary tries to open a nonexistent file. Alternatively, the honeypot may be able to model effects and predict secondary ones, as when a honeypot simulates a cyber-physical system with a mathematical model. Even a crude simulation may fool automated adversaries.

• Deception in time-through: In time-critical situations, delaying an attack can be critical to getting resources to fight it. Deliberate delays are often not considered suspicious by cyberadversaries because unexpected delays occur frequently with networks and automated software processes (Pal et al, 2017; Rowe and Rrushi, 2016, chapter 6).

• Deception in time-from and time-at: Bait such as documents, messages, and logs can be assigned times to give the appearance of realistic activity.

• Deception in agent: Bait can refer to fake people, sites, and services.

• Deception in recipient and location-at: An adversary's activities can be routed to a "sandbox" site where they can be better controlled (Borders, Falk, and Prakash, 2007; Stoecklin et al, 2018).

• Deception in value: Arguments to adversary commands can be changed as a variant on deception in result. For instance, files can be stored in a different directory than specified, giving the excuse of a cloud-based system. Or the results of an operation can be deliberately modified to make them unusable or less usable, for instance by deliberately changing the encoding scheme on some file that is downloaded.

• Deception in experiencer: Recording adversary interactions with the honeypot is central to honeypot design.
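
The "deception in result" item above noted that the main difficulty is maintaining consistency. Here is a minimal Python sketch of a command handler that claims success and remembers its lies so that later answers stay consistent; the commands, the file path, and the responses are invented for illustration:

claimed_deleted = set()   # remember lies so later answers stay consistent

def handle(cmd, arg):
    """Claim success on dangerous commands; keep later answers consistent."""
    if cmd == "delete":
        claimed_deleted.add(arg)
        return "deleted " + arg            # lie: nothing was actually deleted
    if cmd == "open":
        if arg in claimed_deleted:
            return "error: no such file"   # consistent with the earlier lie
        return "error: permission denied"  # secondary excuse
    return "error: unknown command"

print(handle("delete", "/var/log/auth.log"))
print(handle("open", "/var/log/auth.log"))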

 

Once a deception method has been chosen, honeypot deceptions should identify an object, a presentation method, a purpose, and a target.

• Deceptions can be distinguished as to the digital object with which they are associated. This can be a large object such as a computer, mobile device, or network, or it can be a particular directory, file, or packet. The importance of surprise in deception suggests putting it in unexpected objects like software patches (Araujo et al, 2014) and databases (Wegerer and Tjoa, 2016).

• Deceptions can be distinguished by how they are presented. They can be overt (stated in verbal messages) or indirect (inferred from failures or unexpected consequences of commands). They can be single events or sequences forming a campaign, as with "second-order" deceptions where one deception, designed to be discovered, is a setup for a more complex deception.

• Deceptions can be distinguished as to whether their purpose is to encourage, discourage, or manipulate. Traditional honeypots want to encourage exploration by visitors so they can collect more data. Hence their deceptions should try to offer what the visitor wants, which is easier for a high-interaction honeypot that offers or simulates an operating system. However, giving users everything they want right away is not usually as good as giving it to them in small pieces so they will reveal more of their methods. "Fake honeypots" can try to discourage visitors as a means of protecting themselves or other nodes on their network. Alternatively, honeypots can try to manipulate the user in other ways, such as trying to identify them or deliberately making them angry. A sophisticated honeypot could use automated planning to better manipulate the user.

• We should also choose deceptions based on knowledge of our adversaries. The skill level of the adversary is often apparent, and deceptions can be tailored to it (Hassan and Guha, 2017). Most adversaries connecting to a honeypot are cybercriminals, and we can expect automated attacks from them, often with a low level of sophistication and a tendency to be easily discouraged. Cybercriminals have many easy targets on the Internet, so if they cannot do much with our honeypot, they will give up quickly and move on. If we want to encourage them so as to better collect data on their attack methods, we need to provide at least simple forms of bait. On the other hand, a nation-state adversary is less likely to be discouraged. Such adversaries have teams of professionals with long-term plans involving both espionage and cyberattacks, and may be willing to explore even low-interaction honeypots thoroughly. These are the kinds of adversaries with which the honeypot can most effectively play games. Many of these adversaries have quotas to fulfill, so sophisticated bait can be very effective.

 

3. Some example tactics

 

Deception tactics like those described are mostly active honeypot tactics, in contrast to the traditional passive tactics of waiting to be attacked and collecting data about the attack. Active tactics may provide intelligence to the defender more quickly. Many of these tactics involve generating data to give to adversaries; this can be displays, system or network configurations, error messages, misinformation, or files.

 

Short messages can be generated by stochastic context-free grammars in which each rule has a probability of being used in a top-down expansion of a starting symbol (Rowe and Rrushi, 2016, chapter 7). For instance, a quick grammar for short error messages like "Error at 392257016" can generate a random error-describing string drawn from a set of common real error messages, followed by "at" and a random number of 6-12 digits. Such a message could be sent to the honeypot adversary whenever an excuse is needed to avoid doing something. Random error messages can be made unpredictable and thus hard for an adversary to anticipate. Most cyberattacks are automated, and unexpected responses can be difficult for them to handle.
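
A minimal Python sketch of such a generator follows; the rule set, probabilities, and error strings are invented for illustration rather than taken from the chapter:

import random

# Stochastic context-free grammar: each nonterminal maps to a list of
# (probability, expansion) pairs; anything not in the table is a terminal.
GRAMMAR = {
    "MESSAGE": [(0.6, ["ERRWORD", " at ", "NUMBER"]),
                (0.4, ["ERRWORD"])],
    "ERRWORD": [(0.4, ["Error"]),
                (0.3, ["Segmentation fault"]),
                (0.3, ["I/O failure"])],
}

def expand(symbol):
    """Top-down expansion: pick a rule by its probability and recurse."""
    if symbol == "NUMBER":                              # terminal generator
        return str(random.randrange(10**5, 10**12))     # 6-12 digits
    if symbol not in GRAMMAR:                           # ordinary terminal
        return symbol
    r, total = random.random(), 0.0
    for prob, expansion in GRAMMAR[symbol]:
        total += prob
        if r <= total:
            return "".join(expand(s) for s in expansion)
    return ""  # unreachable if rule probabilities sum to 1

print(expand("MESSAGE"))   # e.g. "I/O failure at 392257016"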

 

Randomization can be used to generate files of random characters. These tend to look like encrypted files, and adversaries may waste time trying to decrypt them. However, adversaries can be kept interested much longer if files contain convincing real data. This can be achieved if the data is almost real, as when it is real data slightly modified to be harmless, or real data from a previous time period that is useless now. For instance, we have been running a honeypot that appears to present pages from our school library (McKenna, 2016) with real technical documents that are out of date, with the goal of seeing which documents cyberspies are most interested in. To build this we collected a representative set of technical documents on a range of current topics for the honeypot. A similar idea is giving out-of-date vehicle positions in military data files, since timeliness is critical for planning of military operations.
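
A sketch of the random-file idea, under the assumption that a file of uniformly random bytes passes for ciphertext under casual or automated inspection; the function and file names are hypothetical:

import os

def make_fake_encrypted_file(path, size_bytes=4096):
    """Write uniformly random bytes; high-entropy content resembles
    an encrypted file to most inspection tools."""
    with open(path, "wb") as f:
        f.write(os.urandom(size_bytes))

make_fake_encrypted_file("finances_2013.gpg")  # stale-looking name as bait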

 

A more dynamic approach to honeypot deception is to use "software wrappers" to manage it (Rowe and Rrushi, 2016, chapter 13). Wrappers are code that intervenes before entry into some software and after exit from it. They are helpful for debugging and have good infrastructure support. Wrappers can implement deceptions based on their context. For instance, they could generate false error messages if asked to work with a suspicious resource, if asked repeatedly about a resource, or if they receive what appears to be malicious code. They could also delay in such cases, or substitute a different executable for the one referenced. Wrappers permit a graduated response to the degree of a cyberthreat because of the range of options and parameters at their disposal.
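
A minimal sketch of a deceptive wrapper in Python, assuming a simple keyword-based suspicion test and a repeat-access counter; the names, thresholds, and responses are hypothetical, not from the chapter:

import collections
import functools
import random
import time

access_counts = collections.Counter()

def deceptive_wrapper(func):
    """Intervene before entry to the wrapped call and decide whether
    to behave normally, delay, or return a false error."""
    @functools.wraps(func)
    def wrapper(resource, *args, **kwargs):
        access_counts[resource] += 1
        suspicious = "passwd" in resource or access_counts[resource] > 3
        if suspicious:
            time.sleep(random.uniform(1, 5))   # deliberate delay
            raise OSError("Error at %d" % random.randrange(10**5, 10**12))
        return func(resource, *args, **kwargs)  # normal behavior otherwise
    return wrapper

@deceptive_wrapper
def read_file(resource):
    with open(resource) as f:
        return f.read()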

 

Specialized honeypots such as those for cyber-physical systems can offer additional deception tactics from their modeling of the physical system (Rowe and Rrushi, 2016, chapter 14). For instance, suppose a cyberattacker of a honeypot power plant sends commands intended to close valves supplying coolant to the plant. The honeypot has several options:

• It could give an error message and refuse to do anything. This is a typical tactic of low-interaction honeypots. That could encourage the adversary to try something else, but often it will discourage adversaries who have not envisioned alternatives.

• It could fail to do anything but not say so. This is another common tactic of low-interaction honeypots. This would discourage the adversary eventually, but for a while the honeypot could collect additional intelligence.

• It could simulate the effects of the command if the intended effect could be seen easily by the adversary (see the sketch after this list). So if told to close a coolant valve, it could report increasing temperatures in routine packets and eventually alarm messages. This will generally encourage the adversary, but only until their goals are achieved, which may not be long. It would require building a simulation of the physical system, though perhaps only a partial one covering the features adversaries are interested in. This can be done for simple sensor networks but will be more difficult for a complex power plant. However, if the intended effect should be confirmable by another source of information (such as catastrophic failure of the power plant), the honeypot could be discovered when that confirmation does not appear.

• It could simulate the effects of the command but slowly. For instance, it could demand a password, or respond slowly while giving periodic updates ("Working on your request"). This could actually be encouraging because many adversaries expect and like a challenge. Eventually a password can be accepted to see what the adversary will do next.

• It could simulate some of the intended effect but appear to thwart it. For instance, it could simulate closing a valve, then quickly opening it to suggest manual intervention; temperatures would go up and then down. This will encourage the adversary to try again, perhaps with a different method that will offer new insights about them. This may be better than refusing or delaying execution of commands because the adversary had partial success, may feel invested in the process, and may be more inclined to increase their efforts. It also may encourage an adversary who likes the challenge of active opposition, and may increase their sense of self-worth.

• It could create some new effect that the adversary did not anticipate. For instance, it could send alarm traffic across the simulated network (suggesting the adversary has been detected), or it could cause a valve to open wider instead of closing (suggesting the adversary used the wrong argument in the command), or it could cause new generators to start up (suggesting the adversary used the wrong command). This will encourage the adversary to try again with a different method, increasing the intelligence collected.
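
As an illustration of the simulate-the-effects option above, here is a minimal Python sketch of a toy coolant-loop model; the thermal constants, command names, and alarm threshold are invented and are far simpler than any real plant model:

class CoolantLoopSim:
    """Toy thermal model for a honeypot power plant: temperature relaxes
    toward a setpoint that depends on the coolant valve state."""
    def __init__(self):
        self.valve_open = True
        self.temp_c = 80.0

    def command(self, cmd):
        if cmd == "CLOSE_VALVE":
            self.valve_open = False
            return "OK"                    # deception in result: claim success
        return "Error: unknown command"

    def tick(self):
        target = 80.0 if self.valve_open else 400.0
        self.temp_c += 0.1 * (target - self.temp_c)   # exponential approach
        if self.temp_c > 150.0:
            print("ALARM: core temperature %.0f C" % self.temp_c)
        return self.temp_c                 # reported in routine status packets

sim = CoolantLoopSim()
sim.command("CLOSE_VALVE")
for _ in range(20):
    sim.tick()   # temperatures rise and alarms appear over the next ticks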

 

4. Deception as a game

 

Some researchers have modeled cyberdeception as a formal game and taken advantage of results in game theory (Wang et al, 2012; De Faveri, Moreira, and Amaral, 2016; Rowe and Rrushi, 2016, chapters 11 and 12). Deception has some distinctive features compared to other games, however. Psychologists have noted that distrust propagates much better than trust: it takes only one untrustworthy act to destroy trust that took many acts to build up. Thus honeypots should be very cautious about offering obvious clues to deception. That means going to some lengths to conceal the honeypot mechanisms in software, as with many products of the Honeynet Project (www.honeynet.org). In addition, honeypots should be careful with what bait they offer. With automated attacks the bait may not be inspected, but abnormally low or high quantities of bait strongly indicate a honeypot and should be avoided.

 

The easy propagation of distrust has implications for deception taxonomies as well. Figure 1 shows part of a honeypot deception taxonomy. We can model a cyberadversary as initially sensing one or more of the leaves of this tree. As they experience more of the honeypot, they may generalize their knowledge to nodes above their initial observations in this tree. The degree to which this distrust is propagated across a link can be expressed as a conditional probability. It can be estimated from the semantic similarity of the two concepts; alternatively, if we have statistics on instances of these concepts in the real world (the "extensions" of the concepts), it can be estimated as the size of the overlap between the extensions. The simplest overlap measure is the Jaccard formula, the ratio of the size of the set intersection to the size of the set union. Note that these estimates should be based, if possible, on what we think the adversary knows, and can be aided by experiments with human subjects.
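
The Jaccard formula is simple to compute; here is a short Python sketch with invented example extensions:

def jaccard(a, b):
    """Overlap of two concept extensions: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# e.g. hosts observed as instances of two related concepts
print(jaccard({"h1", "h2", "h3"}, {"h2", "h3", "h4", "h5"}))  # 0.4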

 

Figure 1: Example deception taxonomy.

 

 

Different observations by the cyberadversary can propagate independently and reinforce a conclusion, much in the way associations propagate in Bayesian network models. So if an adversary sees both odd behavior and an unusual delay, they may realize that both are indicators of a honeypot. Eventually an adversary will accumulate enough evidence to be confident they are viewing a honeypot. Then its value is decreased, since the adversary will likely leave and stop providing data, and may tell others. Thus we should try to prevent that by minimizing the clues.
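
A minimal sketch of this evidence accumulation, under the assumption that the indicators are conditionally independent so that suspicion combines by a noisy-OR rule; the per-indicator probabilities are invented for illustration:

def combined_suspicion(indicator_probs):
    """P(honeypot | all indicators) under a noisy-OR combination:
    one minus the product of the per-indicator miss probabilities."""
    p_miss = 1.0
    for p in indicator_probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# odd behavior alone: 0.3; unusual delay alone: 0.4; together: 0.58
print(combined_suspicion([0.3, 0.4]))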

 

Game theory is more useful when we have a set of honeypots (a honeynet) rather than just one. Then we can test more tactics independently to see how well each works and what risks each entails (Fraunholz and Schotten, 2018). Figure 2 shows an example decision tree for a honeynet. To analyze this, suppose p_i is the probability that the adversary will encounter deception i when accessing the honeypot, q_i is the probability that deception i succeeds in fooling the adversary, and b is the benefit to intelligence collection of each deception encountered (counting an opportunity for an additional deception when the adversary has been fooled twice). Assuming independence of the probabilities, the expected benefit is the sum of the expected benefits over all leaf nodes n1-n9:

We can calculate this for different honeypot and honeynet designs and choose the one with the highest average benefit.� We should also recalculate periodically as adversaries readjust their tactics based on what they have seen.
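
Because the exact expression depends on the decision tree of Figure 2, the following is only a hedged numerical sketch of this kind of calculation, for a hypothetical honeynet with two deceptions; the probabilities, the benefit value, and the rule crediting an extra opportunity after a successful deception are invented assumptions, not the chapter's formula:

from itertools import product

p = [0.5, 0.3]   # p[i]: probability deception i is encountered
q = [0.8, 0.6]   # q[i]: probability deception i fools the adversary
b = 1.0          # benefit per deception encountered

def expected_benefit(p, q, b):
    """Enumerate leaf outcomes (each deception encountered or not),
    assuming independence, and sum benefit weighted by leaf probability."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(p)):   # 1 = encountered
        prob, benefit = 1.0, 0.0
        for i, enc in enumerate(outcome):
            prob *= p[i] if enc else (1.0 - p[i])
            if enc:
                benefit += b * (1.0 + q[i])   # extra opportunity if fooled
        total += prob * benefit
    return total

print(expected_benefit(p, q, b))   # 1.38 for these invented numbers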

Figure 2: Example decision tree for honeypot deceptions.

 

5. Honeypot experiments

 

Researchers are now conducting experiments with honeypot simulations (Hassan and Guha, 2017) and with live adversaries against honeypots (Aggarwal, Gonzalez, and Dutt, 2016; Han, Kheir, and Balzarotti, 2017). We have run honeypots for fifteen years (Frederick, Rowe, and Wong, 2012; Rowe and Rrushi, 2016, chapter 13). We encourage everyone doing network security to run them, as the hardware and software need not be up to date, they are easy to set up, and you often encounter surprises when you start monitoring them. We have run mostly ordinary machines as honeypots, plus a variety of honeypot software mostly from the Honeynet Project (www.honeynet.org), including Web-server, secure-shell, and industrial-control-system honeypots.

 

It often makes a considerable difference how a honeypot is configured. Small changes to the honeypot and its services can make major differences in the traffic observed. Also, the variety of traffic (not necessarily the volume) on a new honeypot is usually high at first and then decreases rapidly over a few weeks; a similar spike is observed when the honeypot returns after being turned off for a while. Clearly visitors are footprinting the machine and testing its susceptibility. This is useful for planning deceptions. A traditional honeypot that wants to encourage attacks should keep changing its appearance and keep going offline; a fake honeypot that wants to discourage attacks should avoid changing anything and try to stay online. The rate of change must be faster for low-interaction honeypots since they cannot keep a visitor's interest as long.

 

It is also useful to compare the same honeypot in different environments to see how much cyberadversaries exploit its context. Our experiments comparing a honeypot running at our school with a honeypot running the same hardware and software at a student's home, both using the same Internet service provider, showed considerable differences in the traffic. This suggests that many adversaries routinely exploit Internet registry information, and that deceptions in this information or in DNS data could aid honeypot effectiveness. Our current work is focused on industrial control systems with an emphasis on power plants. We are getting a significantly higher rate of traffic than with the conventional SSH honeypots we have run. This work focuses on the simulation of processes as discussed in section 3.

 

References

 

Aggarwal, P., Gonzalez, C., and Dutt, V., Looking from the hacker's perspective: Role of deceptive strategies in cyber security. Proc. Intl. Conf. on Cyber Situational Awareness, Data Analytics, and Assessment, June 2016, p. 1.

Al-Gharabally, N., El-Sayed, N., Al-Mulla, S., and Ahmad, I., Wireless honeypots: Survey and assessment. Proc. Intl. Conf. on Information Science, Technology, and Applications, Kuwait, Kuwait, March 2009, pp. 45-52.

Almeshekah, M., and Spafford, E., Planning and integrating deception into computer security defenses. Proc. New Security Paradigms Workshop, Victoria, BC, Canada, September 2014, pp. 127-137.

Araujo, F., Hamlen, K., Biedermann, S., and Katzenbeisser, S., From patches to honey-patches: Lightweight attacker misdirection, deception, and disinformation. Proc. ACM SIGSAC Conf. on Computer and Communications Security, Scottsdale, AZ, US, November 2014, pp. 942-953.

Borders, K., Falk, L., and Prakash, A., OpenFire: Using deception to reduce network attacks. Proc. 3rd Intl. Conf. on Security and Privacy in Communications Networks, Nice, France, September 2007.

De Gaspari, F., Jajodia, S., Mancini, L., and Panico, A., AHEAD: A new architecture for active defense. Proc. ACM Workshop on Automated Decision Making for Active Cyber Defense, Vienna, Austria, October 2016, pp. 11-16.

Dunnigan, J., and Nofi, A., Victory and Deceit, Second Edition: Deception and Trickery in War. Writers Club Press, San Jose, CA, US, 2001.

De Faveri, C., Moreira, A., and Amaral, V., Goal-driven deception tactics design. Proc. 27th Intl. Symposium on Software Reliability Engineering, Ottawa, ON, Canada, October 2016, pp. 264-275.


Fowler, C., and Nesbit, R., Tactical deception in air-land warfare, Journal of Electronic Defense 18 (6): 37-44 and 76-79, 1995.

Fraunholz, D., and Schotten, H., Strategic defense and attack in deception based network security. Proc. Intl. Conf. on Information Networking, Barcelona, Spain, March 2018, pp. 156-181.

Frederick, E., Rowe, N., and Wong, A., Testing deception tactics in response to cyberattacks. National Symposium on Moving Target Research, Annapolis, MD, US, June 2012.

Girtler, F., Efficient Malware Detection by a Honeypot Network. AV Akademikerverlag, 2013.

John, J., Yu, F., Xie, Y., Krishnamurthy, A., and Abadi, M., Heat-seeking honeypots: Design and experience. Proc. International World Wide Web Conference, Hyderabad, India, March 2011, pp. 207-216.

Han, X., Kheir, N., and Balzarotti, D., Evaluation of deception-based Web attacks detection. Proc. ACM Workshop on Moving Target Defense, Dallas, TX, US, October 2017, pp. 65-73.

Hassan, S., and Guha, R., A probabilistic study on the relationship of deceptions and attacker skills. Proc. 15th Intl. Conf. on Dependable, Autonomic, and Secure Computing, Orlando, FL, US, November 2017, pp. 693-698.

McKenna, S., Detection and classification of Web robots with honeypots. Retrieved from faculty.nps.edu/ncrowe/oldstudents/28Mar_McKenna_Sean_thesis.htm, March 2016.

Pal, P., Soule, N., Lageman, N., Clark, S., Carvalho, M., Granados, A., and Alves, A., Adaptive resource management enabling deception. Proc. 12th Intl. Conf. on Availability, Reliability, and Security, Reggio Calabria, Italy, August 2017, p. 52.

Rowe, N., Custy, E., and Duong, B., Defending cyberspace with fake honeypots. Journal of Computers, Vol. 2, No. 2, 2007, pp. 25-36.

Rowe, N., and Rrushi, J., Introduction to Cyberdeception, Springer, 2016.

Stoecklin, M., Zhang, J., Araujo, F., and Taylor, T., Dressed up: Baiting attackers through endpoint service projection. Proc. ACM Intl. Workshop on Security in Software Defined Networks and Network Function Virtualization, Tempe, AZ, US, March 2018, pp. 23-28.

Sztompka, P., Trust, Cambridge University Press, London, UK, 1999.

Tammi, J., Rauti, S., and Leppanen, V., Practical challenges in building fake services with the record and play approach. Proc. 10th Intl. Conf. on Security of Information and Networks, Jaipur, India, October 2017, pp. 235-239.

Urias, V., Stout, W., and Lin, H., Gathering threat intelligence through computer network deception. Proc. Intl. Symposium on Technologies for Homeland Security, Boston, MA, US, May 2016.

Wang, C., and Lu, Z., Cyber deception: Overview and the road ahead. IEEE Security and Privacy, Vol. 16, No. 2, March/April 2018, pp. 80-85.

Wang, W., Bickford, J., Murynets, I., Subbaraman, R., Forte, A., and Singaraju, G., Catching the wily hacker: A multilayer deception system. Proc. 35th IEEE Sarnoff Symposium, Newark, NJ, US, June 2012.

Wegerer, M., and Tjoa, S., Defeating the database adversary using deception: A MySQL database honeypot. Proc. Intl. Conf. on Software Security and Assurance, St. Polten, Austria, August 2016.

Zarras, A., The art of false alarms in the game of deception: Leveraging fake honeypots for enhanced security. Proc. Intl. Carnahan Conf. on Security Technology, Rome, Italy, 2014, pp. 1-6.

 

Exercises

 

1. Networked home-monitoring systems are a possible target of cybercriminals for harassment or extortion purposes. A honeypot home-monitoring system not associated with a real home could collect intelligence on what cybercriminals are trying to do. Assume a design to control the heating, air conditioning, lighting, and alarm system of a house.

 

(a) Suggest possible deceptions in "result" that could be effective for such systems, and explain how they would be implemented.

 

(b) Suggest possible deceptions in "object" different from "result" that could be effective for such systems, and explain how they would be implemented.

 

(c) How could the honeypot respond to attempts to turn off all the lights in a way that could encourage further interaction?

 

(d) How could the honeypot respond to periodic attempts to modify parameters, such as every day, in such a way that the adversary will keep returning?

 

2. Consider the problem of measuring the effectiveness of a honeypot's deceptions.

 

(a) If we measure traffic volume, what should we compare to assess effectiveness?

 

(b) How could it be useful to measure traffic to nodes other than the honeypot to assess the effectiveness of the honeypot?

 

(c) If we measure changes made to the honeypot, what should we examine to assess effectiveness?