Deception in defense of computer systems from cyber-attack

Neil C. Rowe

U.S. Naval Postgraduate School

 

 

Abstract

 

While computer systems can be quite susceptible to deception by attackers, deception by defenders has increasingly been investigated in recent years. Military history has classic examples of defensive deceptions, but not all such tactics and strategies have analogies in cyberspace. Honeypots are the most important example today; they are decoy computer systems designed to encourage attacks so that data can be collected about attack methods. We examine the opportunities for deception in honeypots, and then opportunities for deception in ordinary computer systems through tactics like fake information, false delays, false error messages, and identity deception. We conclude with possible strategic deceptions.

 

This is a chapter in Cyber War and Cyber Terrorism, ed. A. Colarik and L. Janczewski, Hershey, PA: The Idea Group, 2007.

 

Introduction

 

Defense from cyber-attacks ("exploits") in cyberspace is difficult because this kind of warfare is inherently asymmetric, with the advantage going to the attacker. The attacker can choose the time, place, and methods with little warning to the defender. Thus a multilayered defense ("defense in depth") is important (Tirenin & Faatz, 1999). Securing one's cyberspace assets by access controls and authentication methods is the first line of defense, but other strategies and tactics from conventional warfare are also valuable, including deception.

 

(Dunnigan & Nofi, 2001) provides a useful taxonomy of nine kinds of military deception, similar to several other published ones: concealment, camouflage, disinformation, ruses, displays, demonstrations, feints, lies, and manipulation of the adversary through insight into their reasoning and goals. (Rowe & Rothstein, 2004) proposes an alternative taxonomy based on case theory from linguistics. Table 1 shows those categories of deception they argue are feasible for defense from cyber-attack, with revised assessments of suitability on a scale of 1 (unsuitable) to 10 (suitable). Some of these deceptions can also be used in a "second-order" way, after initial deceptions have been detected by the adversary; an example is creating inept deceptions with obviously false error messages while also modifying attacker files in a subtle way.

 

Table 1: A taxonomy of deception in cyberspace.

Deception method | Suitability in cyberspace | Example
Agent | 4 | Pretend to be a naive consumer to entrap identity thieves
Object | 7 | Camouflage key targets or make them look unimportant, or disguise software as different software
Instrument | 1 | Do something in an unexpected way
Accompaniment | 4 | Induce the attacker to download a Trojan horse
Experiencer | 8 | Secretly monitor attacker activities
Direction | 3 | Transfer Trojan horses back to the attacker
Location-from | 2 | Try to frighten the attacker with false messages from authorities
Location-to | 6 | Transfer the attack to a safer machine like a honeypot
Frequency | 7 | Swamp the attacker with messages or requests
Time-at | 2 | Associate false times with files
Time-from | 1 | Falsify file-creation times
Time-to | 1 | Falsify file-modification times
Time-through | 8 | Deliberately delay in processing commands
Cause | 7 | Lie that you cannot do something, or do something unrequested
Effect | 9 | Lie that a suspicious command succeeded
Purpose | 8 | Lie about reasons for asking for an additional password
Content | 9 | Plant disinformation; redefine executables; give false system data
Material | 3 | "Emulate" the hardware of a machine in software for increased safety
Measure | 6 | Send data too large or requests too hard back to the attacker
Value | 7 | Systematically misunderstand attacker commands, as by losing characters
Supertype | 5 | Be a decoy site for the real site
Whole | 2 | Ask questions that include a few attacker-locating ones
Precondition | 10 | Give false excuses why you cannot execute attacker commands
Ability | 6 | Pretend to be an inept defender, or to have easy-to-subvert software

Honeypots

 

Honeypots are the best-known example of defensive deception in cyberspace (The Honeynet Project, 2004; Spitzner, 2003). These computer systems serve no purpose other than collecting data about attacks on them. That means they have no legitimate users besides system administrators, so anyone else who uses them is inherently suspicious. Honeypots record all their activity in secure audit files for later analysis, and the lack of legitimate traffic makes this a rich source of attack data. Honeypot data is one of the few means by which new ("zero-day") attacks can be detected. Honeypots can also serve as decoys that imitate important systems, like those of command-and-control networks.
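
As a concrete illustration of the principle that any use of a honeypot is suspicious, consider the following minimal sketch in Python (the port number and log-file name are illustrative assumptions, not from any particular honeypot product): it listens on a port that no legitimate user has reason to contact and records every connection for later analysis.

    import socket, datetime

    def run_listener(port=2222, logfile="honeypot.log"):
        # The machine has no legitimate users, so every contact
        # is recorded as a potential attack.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", port))
        server.listen(5)
        while True:
            conn, addr = server.accept()
            with open(logfile, "a") as log:
                log.write("%s contact from %s:%d\n" %
                          (datetime.datetime.now().isoformat(),
                           addr[0], addr[1]))
            conn.close()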

 

Honeypots are often used in groups called "honeynets" to provide plenty of targets for attacks and to study how attacks spread from computer to computer. Software for building honeynets is provided by the Honeynet Project (a consortium of researchers that provides open-source software) as well as by some commercial vendors. Honeypots and honeynets can be "low-interaction" (simulating just the first steps of network protocols, like (Cohen & Koike, 2004)) or "high-interaction" (permitting logins and access to most resources of their systems, like Sebek (The Honeynet Project, 2004)). Low-interaction honeypots can fool attackers into thinking there are many good targets by simulating many Internet addresses and many vulnerable-looking services, as "decoys". For instance, low-interaction honeypots could implement a decoy military command-and-control network so that adversaries would attack it rather than the real network. Low-interaction honeypots like HoneyD pose little risk to the deployer, but they are not very deceptive since they usually offer only a limited preprogrammed set of responses. High-interaction honeypots like Sebek are more work to install and entail more risk of propagating an attack (since countermeasures cannot be perfect), but they fool more attackers and provide more useful data. A safer form of high-interaction honeypot is a "sandbox", a simulated environment that appears like a real computer environment; sandboxes are important for forensics on malicious code.
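
The essence of low-interaction behavior can be sketched as follows (this is an illustration of the idea, not the actual HoneyD implementation; the port and banner string are assumptions): the service answers only the first step of a protocol, here an SSH-style version banner, records the attacker's first move, and disconnects.

    import socket

    def fake_service(port=2223, banner=b"SSH-2.0-OpenSSH_5.1\r\n"):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", port))
        server.listen(5)
        while True:
            conn, addr = server.accept()
            conn.sendall(banner)       # look like a vulnerable service
            data = conn.recv(1024)     # capture the attacker's first step
            print("probe from %s: %r" % (addr[0], data))
            conn.close()               # no deeper interaction is offered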

 

Counterdeception and counter-counterdeception for honeypots

 

Deception is necessary for honeypots because attackers do not want their activities recorded: this could expose their tricks as well as permit legal action against them. So some attackers search for evidence of honeypots on systems into which they trespass, a form of counterdeception (McCarty, 2003). Analogously to intrusion-detection systems for cyberspace (Proctor, 2001), this counterdeception can look either for statistical anomalies or for features ("signatures") that suggest a honeypot. Anomalies can be found in statistics on the types, sizes, and dates of files and directories; for instance, a system with no email files is suspicious. Counter-counterdeception in designing good honeypots then requires ensuring realistic statistics on the honeypot, and a tool that calculates statistical metrics on typical computer systems is useful here (Rowe, 2006). One good way to build a honeypot is to copy its file system from a typical real computer system. However, exactly identical file systems are suspicious too, so it is important to introduce at least random differences between honeypots.
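
The kind of statistical check involved can be sketched as follows (a simplification; the particular metrics chosen here are illustrative assumptions, not the full set in (Rowe, 2006)): it summarizes the number, sizes, and ages of files under a directory tree, so that a honeypot's profile can be compared against that of a typical real system.

    import os, statistics, time

    def filesystem_profile(root):
        sizes, ages = [], []
        now = time.time()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue
                sizes.append(st.st_size)
                ages.append((now - st.st_mtime) / 86400.0)  # age in days
        return {"files": len(sizes),
                "mean_size": statistics.mean(sizes) if sizes else 0,
                "mean_age_days": statistics.mean(ages) if ages else 0}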

 

Honeypot signatures can be found in main memory, secondary storage, and network packets (Holz & Raynal, 2005). The Honeynet Project has put much thought into signature-concealing methods for honeypots, many of which involve deception. Since good honeypots should log data through several independent methods (packet dumps, intrusion-detection-system alerts, and keystroke logging), it is especially important to conceal logging. Sebek uses specially modified operating systems and applications software rather than calling standard utilities; for instance, it implements the UDP communications protocol directly rather than calling a UDP utility. It specially conceals the honeynet software when listing operating-system files. It also implements a firewall (protective network filter) that deceptively does not decrement the "time to live" of packets traversing it, as most firewalls do, which helps conceal it. Honeypots can also conceal logging by sending data by indirect routes, such as to nonexistent computer addresses where it can be picked up in transit by network-packet "sniffers". Honeypots implemented in hardware can be even better at avoiding software clues. Signatures of honeypots can also be concealed by putting key data in unusual places, encrypting it, or frequently overwriting it with legitimate data. But steganography (concealed messages) is unhelpful with honeypots, since it is the act of sending log data that is suspicious, not the data's contents.
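
The indirect-logging idea can be sketched as follows (this is not Sebek's actual mechanism; the unused address and port are illustrative assumptions): log records are sent as UDP datagrams to an address where no machine answers, and a passive sniffer elsewhere on the network, not shown here, captures them in transit.

    import socket

    NONEXISTENT_HOST = "192.0.2.99"   # assumed unused address
    LOG_PORT = 40000                  # assumed collection port

    def covert_log(record):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # UDP is connectionless, so the send succeeds even though no
        # machine answers; the packet still crosses the wire, where a
        # sniffer can capture it.
        sock.sendto(record.encode("utf-8"), (NONEXISTENT_HOST, LOG_PORT))
        sock.close()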

 

Deception to prevent the spread of attacks from honeypots

 

Honeypots must try to prevent attacks on them from spreading to legitimate computers, since attackers frequently use compromised systems as bases for new attacks. This means honeypots should have a "reverse firewall" that controls only the data leaving them. Deception is essential for reverse firewalls because they are rarely found on legitimate systems and are obvious clues to honeypots. Sebek and other "Gen II honeynets" use several deception tactics. They impose a hidden limit on the number of outbound connections. They can drop (lose) outgoing packets according to destination or the presence of known malicious signatures. They can also modify packets so that known malicious code is made ineffective and/or more visible to its targets. Modification is particularly good when the packets an attacker is sending are malformed or otherwise unusual, since a good excuse to the attacker for why they do not work is that a new signature has been recognized.
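
The decision logic of such a reverse firewall can be sketched as follows (a simplification working at the level of payloads; the connection limit and example signatures are illustrative assumptions, and real Gen II honeynets operate in the network stack):

    MAX_OUTBOUND = 15                              # assumed hidden limit
    KNOWN_BAD = [b"\x90\x90\x90\x90", b"/bin/sh"]  # example signatures

    class ReverseFirewall:
        def __init__(self):
            self.outbound_count = 0

        def allow(self, payload):
            if self.outbound_count >= MAX_OUTBOUND:
                return False     # silently drop further outbound traffic
            if any(sig in payload for sig in KNOWN_BAD):
                return False     # drop packets with malicious signatures
            self.outbound_count += 1
            return True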

 

Disinformation

 

Deception can be used by ordinary computer systems too. As with print media, disinformation (false information) can be planted on computers for enemy spies to discover, as a counterintelligence tactic (Gerwehr et al., 2000). This includes fake mission plans, fake logistics data, fake intelligence, and fake orders; it can also include fake operating-system data such as fake temporary files and fake audit logs constructed to make it appear that a system is being used for normal purposes, analogous to the fake radio messages used by the Allies before D-Day (Cruikshank, 1979). Disinformation can work well in cyberspace because, unlike handwritten materials, electronic data does not provide clues to deception in style and provenance (how it was obtained), and methods for detecting text inconsistencies (Zhou et al., 2003; Kaza, Murthy, & Hu, 2003) do not work well for fixed-format audit records. While operating systems do record who copied a file and when, the author may be masquerading as someone else, and dates can easily be faked by changing the system clock. Since timing and location are often critical to military operations, a useful tactic for creating disinformation is to copy a previous real message but change the times and/or places mentioned in a systematic way.
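
The last tactic can be sketched very simply (the message format and the six-hour offset are illustrative assumptions): take a real message and shift every HH:MM time in it by a fixed number of hours.

    import re

    def shift_times(text, offset_hours=6):
        # Systematically alter every HH:MM timestamp in the message.
        def repl(match):
            hours = (int(match.group(1)) + offset_hours) % 24
            return "%02d:%s" % (hours, match.group(2))
        return re.sub(r"\b(\d{1,2}):(\d{2})\b", repl, text)

For example, shift_times("Convoy departs 04:30 and arrives 09:15") yields "Convoy departs 10:30 and arrives 15:15", a message internally consistent but systematically false.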

 

Disinformation can also be used to fight spam with "spam honeypots" (Krawetz, 2004). These sites collect large amounts of email traffic to detect identical messages sent to large numbers of addresses; they identify such messages as spam and quickly report them to email servers for blacklisting. Spam honeypots deceive by publicizing their fake email addresses widely, as on Web sites. Also, sites designed for phishing (identity theft via spam) can be counterattacked by overwhelming them with large amounts of false identity data.
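
The core of such detection can be sketched as follows (the hashing approach and the threshold of 25 decoy recipients are illustrative assumptions): each message body is hashed, and a hash seen at many distinct decoy addresses is reported as spam.

    import hashlib
    from collections import defaultdict

    SPAM_THRESHOLD = 25                     # assumed reporting threshold
    recipients_by_hash = defaultdict(set)

    def observe(message_body, recipient):
        digest = hashlib.sha256(message_body.encode("utf-8")).hexdigest()
        recipients_by_hash[digest].add(recipient)
        # True means "report this message for blacklisting".
        return len(recipients_by_hash[digest]) >= SPAM_THRESHOLD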

 

Since attackers want to avoid honeypots, other useful disinformation could be false indicators of a honeypot (Rowe, Duong, & Custy, 2006). For instance, one can put in secondary storage the executables and data files of monitoring software like VMware even if they are not being run. Disinformation could also be a false report on a Web page that a site uses honeypots, scaring attackers away. Amusingly, hackers created disinformation themselves when they distributed a fake journal issue with a fake technical article claiming that Sebek did not work (McCarty, 2003).
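
Planting such false indicators is straightforward, as this sketch suggests (the file names and directory are illustrative assumptions; (Rowe, Duong, & Custy, 2006) describes the tactic, not this code): files whose names suggest monitoring software are created even though no such software is running.

    import os

    DECOY_FILES = ["vmware-tools.log", "vmnetdhcp.conf", "honeyd.conf"]

    def plant_indicators(directory="decoys"):
        # Create artifacts that a trespasser scanning the disk would
        # read as evidence of a monitored honeypot.
        os.makedirs(directory, exist_ok=True)
        for name in DECOY_FILES:
            with open(os.path.join(directory, name), "w") as f:
                f.write("# placeholder configuration\n")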

 

Deceptive delays

 

Deceptive delaying is a useful tactic when a defender needs time to assemble a defense or await reinforcements. It can mean just waiting before responding, or giving an attacker extra questions to answer or information to read before they can proceed. Delaying helps a defender who is suspicious about a situation but not certain, since it gives time to collect more evidence. Deception is necessary to delay effectively in cyberspace because computers do not deliberate before acting, though they may seek authorizations (Somayaji & Forrest, 2000). One possible excuse for deceptive delays is that a computation requires a long time. This can be done, for instance, in a Web site (Julian, Rowe, & Michael, 2003): if input to a form is unusually long or contains what looks like program code, delays will simulate a successful denial-of-service attack while simultaneously foiling it. Delays are also used in the LaBrea tool (www.hackbusters.net) to slow attacks that query nonexistent Internet addresses. Plausible delays should be a monotonically increasing function of the expected processing time for an input, so that they seem causally related to it. A delay that is a quadratic or exponential function of the expected processing time is good because it penalizes very suspicious situations more than mildly suspicious ones.
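
A delay policy of this kind can be sketched as follows (the scale constant and the use of a suspicion score in [0, 1] are illustrative assumptions): the imposed delay grows quadratically with the expected processing time, weighted by suspiciousness, so it still appears causally related to the input.

    import time

    def deceptive_delay(expected_seconds, suspicion, scale=2.0):
        # suspicion in [0, 1]; benign requests are not delayed at all.
        delay = scale * suspicion * expected_seconds ** 2
        time.sleep(delay)

For instance, a request expected to take one second waits an extra 0.6 seconds at suspicion 0.3 but 1.8 seconds at suspicion 0.9, penalizing the more suspicious case more heavily while remaining plausible.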

 

Defensive lies

 

Lies can also be an effective way to defend computer systems from attack. While not often recognized, software does deliberately lie to users on occasion to manipulate them; for instance, most Web browsers will suggest that a site is not working when given a misspelled Web link, apparently to flatter the user. Information systems can lie to protect themselves against dangerous actions. Useful lies can give excuses for resource denial, like saying "the network is down" in response to a command requiring the network, much as the lie "the boss just stepped out" can be used to avoid confrontations in a workplace. Attackers must exploit certain key resources of a victim information system, like passwords, file-system access, and networking access. If we can deny them these resources by deception, they may conclude that their attack cannot succeed and just go away without a fight. False resource denial has an advantage over traditional mandatory and discretionary access controls in that it does not tell attackers that their suspiciousness has been detected, so they may keep wasting time trying to use the resource. Resource denial can also be fine-tuned to the degree of suspiciousness, unlike access control, permitting more flexibility in security.
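
A minimal sketch of false resource denial follows (the command names, excuses, and suspicion threshold are all illustrative assumptions): suspicious commands receive a plausible excuse rather than an honest refusal, so detection is never revealed.

    EXCUSES = {
        "ftp": "error: the network is down",
        "scp": "error: the network is down",
        "cat": "error: disk quota exceeded",
        "su":  "error: authentication server not responding",
    }

    def respond(command_name, suspicion, threshold=0.7):
        if suspicion >= threshold and command_name in EXCUSES:
            # Lie: report a plausible failure rather than reveal detection.
            return EXCUSES[command_name]
        return "ok"    # stand-in for normal execution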

 

Deception planning is essential to the use of deliberate lies because good lies require consistency. So cyber-deception planners need to analyze what attackers have been told so far to figure out which lies are best to tell them next; decision theory can be used to rank the suspiciousness of alternative excuses (Rowe, 2004). For attacks that are known in advance, one can construct more detailed defensive plans. (Michael, Fragkos, & Auguston, 2003) used a model of a file-transfer attack to design a program that would fake succumbing to the attack, and (Cohen & Koike, 2004) used "attack graphs" to channel the activities of attackers with situation-dependent lies about the status of services on a virtual honeynet.
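
Consistency can be enforced with simple bookkeeping, as in this sketch (the data structure is an assumption, and taking the first candidate stands in for the decision-theoretic ranking of (Rowe, 2004)): each attacker session is always given the same excuse for the same resource, so later lies never contradict earlier ones.

    lies_told = {}   # (session_id, resource) -> excuse already given

    def consistent_excuse(session_id, resource, candidate_excuses):
        key = (session_id, resource)
        if key not in lies_told:
            # Placeholder for ranking candidate excuses by suspiciousness.
            lies_told[key] = candidate_excuses[0]
        return lies_told[key]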

 

Deception to identify attackers

 

A serious problem in defense against cyber-attack is finding its source ("attribution") so that one can stop it, counterattack, impose sanctions, or start legal proceedings. But the design of the Internet makes it very difficult to trace attacks. Military networks do generally track routing information by using modified networking protocols, but it is often impossible to get civilian sites to do this since it requires significant storage, so an attacker that comes in through a civilian network can remain anonymous.

 

"Spyware" that remotely reports user activities (Thompson, 2005) is useful for source attribution.� �Trojan horses� or programs containing concealed processing can be used to insert spyware onto attacker machines by offering free software (like attack tools) through "hacker" sites or via email; or spyware can be installed by surreptitious loading onto attacker computers.� Spyware can track when a user is logged in, what programs they run, and what Internet sites they visit.� Spyware can also be designed to delay or impede an adversary when they attempt attacks, but then it is likely to be discovered, and once it is discovered, any subsequent deceptions are much less effective.

 

One can also try common cyber-scams (Grazioli & Jarvenpaa, 2003) on an attacker. One can plant "bait" like passwords and track their use, or plant credit-card numbers and watch where the goods are delivered. One may be able to fool an attacker into submitting a form with personal data. Or one can just try to chat with an attacker to fool them into revealing information about themselves, since many hackers love to boast about their exploits.
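
Bait planting can be sketched as follows (the token format and alert mechanism are illustrative assumptions): decoy passwords that no legitimate user holds are generated, and any login attempt that presents one raises an alert that attributes the attacker.

    import secrets

    bait_passwords = set()

    def make_bait():
        token = secrets.token_hex(8)
        bait_passwords.add(token)
        return token     # plant this where an attacker may find it

    def check_login(username, password):
        if password in bait_passwords:
            # Only someone who stole the bait could present it.
            print("ALERT: bait credential used by %r" % username)
        return False     # real authentication is not part of this sketch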

 

Strategic deception

 

Deception can also be used at a strategic level, to make the enemy think you have information-system capabilities you do not have, or vice versa. (Dunnigan & Nofi, 2001) argues that the Strategic Defense Initiative of the United States in the 1980s was a strategic deception, since it was then infeasible to shoot down missiles from space, and it served only to panic the Soviet Union into overspending on its military. Something similar could be done with information technology by claiming, say, special software to find attackers that one does not have (Erdie & Michael, 2005). Conversely, one could advertise technical weaknesses in one's information systems in the hope of inducing an attack that one knows could be handled. Dissemination of reports that an organization uses honeypots could make attackers take extra precautions in attacking its systems, thereby slowing them down, even if the organization does not in fact use honeypots. Strategic deceptions can be difficult to implement because they may require the coordination of large numbers of people and records.

 

Conclusion

 

Deception has long been an important aspect of warfare, so it is not surprising to see it emerging as a defensive tactic for cyber-warfare. Honeypots have pioneered in providing a platform for experimentation with deceptive techniques, because they need deception to encourage attackers to use them and show off their exploits. But any computer system can benefit from deception to protect itself, since attackers expect computers to be obedient servants. However, deceptions must be convincing to attackers and cost-effective considering their impact on legitimate users.

 

References

 

Cohen, F., & Koike, D. (2004, June).� Misleading attackers with deception.� Proc. 5th Information Assurance Workshop, West Point, New York, 30-37.

Cruikshank, C. G. (1979).� Deception in World War Two.� New York: Oxford.

Dunnigan, J., & Nofi, A. (2001).� Victory and Deceit, second edition: Deception and Trickery in War.� San Jose, California: Writers Club Press.

Erdie, P, & Michael, J. (2005, June).� Network-centric strategic-level deception.� Proceedings of the 10th International Command and Control Research and Technology Symposium, McLean, VA.

Gerwehr, S., Weissler, R., Medby, J. J., Anderson, R. H., & Rothenberg, J. (2000, November). �Employing deception in information systems to thwart adversary reconnaissance-phase activities.� Project Memorandum, National Defense Research Institute, Rand Corp., PM-1124-NSA.

Grazioli, S., & Jarvenpaa, S. (2003, December).� Deceived: under target online.� Communications of the ACM, 46 (12), 196-205.�

Julian, D., Rowe, N., & Michael, J. (2003, June).� Experiments with deceptive software responses to buffer-based attacks.� Proceedings of the IEEE-SMC Workshop on Information Assurance, West Point, New York, 43-44.

The Honeynet Project (2004).� Know your enemy, 2nd edition. Boston: Addison-Wesley.

Holz, T., & Raynal, F. (2005, March).� Defeating honeypots: system issues, parts 1 and 2.� Retrieved October 25, 2005, from www.securityfocus.com/infocus/1826.

Kaza, S., Murthy, S., & Hu, G. (2003, October).� Identification of deliberately doctored text documents using frequent keyword chain (FKC) model.� IEEE Intl. Conf. on Information Reuse and Integration, Las Vegas, NV, 398-405.

Krawetz, N. (2004, January-February).� Anti-honeypot technology.� IEEE Security and Privacy, 2(1), 76-79.

McCarty, B. (2003, November/December).� The honeynet arms race.� IEEE Security and Privacy, 1(6), 79-82.

Michael, J. B., Fragkos, G., & Auguston, M. (2003, May).�� An experiment in software decoy design: intrusion detection and countermeasures via system call instrumentation.� Proc. IFIP 18th International Information Security Conference, Athens, Greece, 253-264.

Proctor, P. E. (2001) Practical intrusion detection handbook.� Upper Saddle River, NJ: Prentice-Hall PTR.

Rowe, N. (2004, December).� Designing good deceptions in defense of information systems.� Computer Security Applications Conference, Tucson, Arizona, 418-427.

Rowe, N. (2006, January).� Measuring the effectiveness of honeypot counter-counterdeception.� Proc. Hawaii International Conference on Systems Sciences, Koloa,Hawaii.

Rowe, N., Duong, B., & Custy, E. (2006).� Fake honeypots: a defensive tactic for cyberspace.� Proc. 7th IEEE Workshop on Information Assurance, West Point, New York, June 2006.

Rowe, N., & Rothstein, H. (2004, July).� Two taxonomies of deception for attacks on information systems.� Journal of Information Warfare, 3 (2), 27-39.

Somayaji, A., and Forrest, S., (2000, August).� Automated response using system-call delays.� Proc. 9th Usenix Security Symposium, Denver, Colorado, 185-198.

Spitzner, L. (2003).� Honeypots: tracking hackers.� Boston: Addison-Wesley.

Thompson, R. (2005, August).� Why spyware poses multiple threats to security.� Communications of the ACM, 48 (8), 41-43.

Tirenin, W., and Faatz, D. (1999).� A concept for strategic cyber defense.� Proc. Conf. on Military Communications, Atlantic City, New Jersey, October 1999, Vol. 1, 458-463.

Zhou, L., Twitchell, D., Qin, T., Burgoon, J., & Nunamaker, J. (2003, January).� An exploratory study into deception detection in text-based computer-mediated communication.� Proc. 36th Hawaii International Conference on Systems Sciences, Waikoloa, Hawaii, 10.

 

Terms

 

Deception: Misleading someone into believing something that is false.

Disinformation: False information deliberately planted for spies to obtain.

Exploit: An attack method used by a cyber-attacker.

Honeypot: A computer system whose only purpose is to collect data on trespassers.

Honeynet: A network of honeypots, generally more convincing than a single honeypot.

Intrusion-detection system: Software that monitors a computer system or network for suspicious behavior and reports the instances that it finds.

Lie: Deception by deliberately stating something false.

Low-interaction honeypot: A honeypot that simulates only the initial steps of protocols and does not give attackers full access to operating systems on sites.

Reverse firewall: A computer that controls outgoing traffic from a local-area computer network, important with honeynets to prevent attacks from spreading.

Sniffer: Software that eavesdrops on data traffic on a computer network, essential for network-based intrusion-detection systems but also useful to attackers.

Spyware: Software that secretly transmits information about what a user does to a remote site.