The ethics of deception in cyberspace

Neil C. Rowe

U.S. Naval Postgraduate School

 

 

Abstract

 

We examine the main ethical issues concerning deception in cyberspace. We first discuss the concept of deception and survey ethical theories applicable to cyberspace. We then examine deception for commercial gain such as spam, phishing, spyware, deceptive commercial software, and dishonest games. We next examine deception used in attacks on computer systems, including identity deception, Trojan horses, denial of service, eavesdropping, record manipulation, and social engineering. We then consider several less well-known types of deception used for defensive purposes, including honeypots, honeytokens, defensive obstructionism, false excuses, deceptive intelligence collection, and strategic deception. In each case we assess the ethical issues pro and con for the use of deception. We argue that deception in cyberspace is sometimes unethical and sometimes ethical.

 

This paper appeared in the Handbook of Research on Technoethics, ed. R. Luppicini, Hershey, PA: Information Science Reference, 2008.

Introduction

 

Deception is a ubiquitous human phenomenon (Ford, 1996). As the Internet has evolved and diversified, it is not surprising to find increasing deception in cyberspace. The increasing range of Internet users in particular, and the development of Internet commerce, have provided many opportunities and incentives.

 

Deception is a technique for persuading or manipulating people. We will define deception as anything done to cause people to have incorrect knowledge of the state of the world. This includes misleading as well as outright lying. Deception in cyberspace can be used both offensively (to manipulate or attack computer systems, networks, and their users) and defensively (to defend against manipulations or attacks). Most ethical theories proscribe most forms of deception while permitting some kinds (Bok, 1978). Deception smooths social interactions, controls malicious people, and enables doing something for someone's unrecognized benefit (Nyberg, 1994). The harms of deception are the failure to accomplish desired goals, and the often long-term damage to the trust necessary to sustain social relationships, without which much human activity could not be accomplished.

 

Quinn (2006) provides a useful categorization of ethical theories applicable to computer technology. He identifies subjective and cultural relativism, divine-command theory, Kantian rule-based ethics, social-contract theory, and act and rule utilitarianism. Subjective relativism, cultural relativism, and divine-command theory do not fit well with cyberspace because cyberspace is a social resource that spans diverse cultures with diverse opinions, and it needs cooperation to work properly. Kantian rule-based ethics is difficult to apply in general, though it helps resolve specific issues. Social-contract theory is useful but may not provide specific enough guidance to resolve a particular ethical dilemma. Alternative formulations of cyberethics such as the "disclosive" approach of Brey (2000) can also be explored, but they are relatively new.

 

That leaves utilitarianism, which attempts to decide ethical questions by assessing the net benefit to society of a particular act or ethical rule. So we will follow a utilitarian approach here, and rule utilitarianism in particular. We shall say a particular policy of deception is ethical if its net benefit (benefits minus costs) to a society in the long term exceeds that of not deceiving; otherwise it is unethical (Artz, 1994). Benefits include achieving the goals of the deceiver and the value of those goals. Costs include the direct damage caused by the deception when its goals are malicious, direct costs of the deception being discovered such as retaliation, and indirect costs of discovery such as increased distrust between the parties. In cyberspace, for instance, if someone is attacking your computer, a deception that could make them go away could be justified if the cost of a successful attack on the computer is hours of work to reinstall the operating system and damaged files. Both benefits and costs must be multiplied by probabilities to obtain expected values when they are uncertain due to such factors as whether the deception will succeed or whether it will be discovered.
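
To make this calculus concrete, here is a minimal sketch in Python; all the probabilities and hour figures are hypothetical illustrations, not measurements.

```python
# A minimal sketch of the rule-utilitarian expected-value test for a
# deception policy; every number here is a hypothetical illustration.

def expected_net_benefit(p_success, benefit, p_discovery, cost_discovery):
    """Weight each benefit and cost by the probability that it occurs."""
    return p_success * benefit - p_discovery * cost_discovery

# Example: a deception that chases an attacker away 70% of the time,
# saving 10 hours of reinstalling the operating system and files, but
# is discovered 20% of the time at a cost of 3 hours (retaliation and
# lost trust).  The baseline of not deceiving is taken as zero.
value = expected_net_benefit(p_success=0.7, benefit=10.0,
                             p_discovery=0.2, cost_discovery=3.0)
print(f"expected net benefit: {value:.1f} hours")  # 6.4 hours
print("policy passes the test" if value > 0 else "policy fails the test")
```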

Background

 

Deception can be verbal or nonverbal (Vrij, 2000). Verbal methods include outright lying, equivocation, failing to state key information, false claims, and false excuses. Nonverbal methods include mimicry, decoying, and various other forms of pretense. People use deception every day without being aware of it, and many areas of human activity could not function without deliberate deception, such as police work, law, politics, business negotiation, military actions, and entertainment. Much deception as practiced is unjustified, however. Hence there is an extensive literature on detection of deception (Vrij, 2000). Human deceivers try to control the information they reveal, but it is hard to control all the channels of communication, and the truth often "leaks out" through secondary channels. For instance, people who lie tend to fidget, hold their bodies rigidly, and use an unusual tone of voice. Deception can also be detected in verbal utterances from the use of vagueness, exaggeration, a high frequency of negative terms, and especially inconsistency between different assertions. But deception detection is difficult in general, and attempts to build automated "lie detectors" have not been very successful.

 

Cyberspace is well suited for many forms of deception because of the difficulty of obtaining corroborating information when assertions are made (Fogg, 2003). For instance, a policeman can pretend to be a 14-year-old girl online to entrap pedophiles. At the same time, a "phisher" can pretend to be a bank by implementing a fake Web site to steal personal data from victims. In addition, the connectivity of the Internet enables social interactions over long international distances, and the speed of the Internet permits damage to be done and criminals to disappear quickly (Spinello, 2003). Cyberspace relationships are usually weaker than real-world relationships because of the limited communications bandwidth, so there is less pressure of social obligation to act in a trustworthy manner (Castelfranchi, 2000). So many problems of deception are worse in cyberspace than in the real world, although there are important differences between offensive (attacking) and defensive deception (Rowe and Rothstein, 2004).

 

Software itself can deceive, since deception methods can be programmed. We will argue that the ethical issues with such software devolve on the programmer, since the software acts as their agent. Thus deceptive software raises much the same issues as the deceptive people who program it. However, new developments in artificial intelligence are continuing to increase the human-like characteristics of software, and we can now conceive of automated agents with their own distinct ethics (Allen, Wallach, and Smit, 2006).

 

The two main professional societies for information-technology professionals, the ACM and the IEEE Computer Society, have a joint policy on ethics (ACM/IEEE-CS, 1999). But it does not address many key issues. For instance, it proscribes deception in speaking to the public about software, but says nothing about writing deliberately deceptive software or mounting malicious cyber-attacks. So we need to develop further ethical principles for cyberspace.

Deception in electronic commerce and commercial software

 

The first association most people make with the term "deception in cyberspace" is fraudulent electronic commerce. Cyberspace supports analogs of many common scams. However, new ethical problems occur in electronic commerce as well.

Electronic-commerce scams

 

Commercial email and Web sites are susceptible to a wide variety of deceptions ranging from misleading advertising to outright fraud (Leitch & Warren, 2000). Since the bandwidth of the Internet limits the amount of information one can obtain about a product for sale or a customer in electronic commerce, deception is more of a danger than in traditional commerce. For instance, ineffective medicines or cheap imitation electronic products can be advertised, customers can take products without paying by various ploys, and both buyers and sellers in electronic auctions can be cheated by parties failing to deliver goods or payment. Poorly designed policies of an electronic business can foster deception by clients (Harris & Spence, 2002). The goal of almost all electronic-commerce deceptions is unjustified material gain, and such deception is unethical. Electronic commerce is also unethical when key information for a transaction is deliberately withheld.

 

Laws regulate electronic commerce in most countries similarly to non-electronic commerce, and many deceptions in cyberspace are illegal under similar laws. For instance, it is unethical to fool customers in electronic commerce by showing a picture of a related but not identical product to the one a customer wants to buy. The most serious unethical deceptions satisfy the legal definition of fraud, such as selling stock in nonexistent companies.

Email deceptions: spam and phishing

 

Unwanted commercial email or "spam" often involves deception, since it is advertising and most advertising is deliberately deceptive. Advertising is usually considered acceptable business practice unless it becomes too obviously deceptive. For instance, it is considered acceptable for a vehicle manufacturer to claim that their truck is "tough" despite any evidence, but it is unacceptable to give false statistics showing that it has a longer lifetime than the average truck. However, antispam laws in the United States and other countries broadly prohibit spam because the automated breadth of its delivery makes it a more serious nuisance than traditional forms of advertising.

 

The nature of cyberspace makes certain kinds of deceptive commercial activity more effective. For instance, links in email and on the World Wide Web can be deliberately misleading so that you think you are going to a different site than you really are. This can be done by making the wording of a text link false or vague, or by making the link an image. Misleading links often occur in spam. Such deceptions are unethical because they abuse the trust necessary for online commerce.

 

Phishing is an especially serious email threat (Berghel, 2006). This kind of scam is designed to accomplish identity theft by inducing a victim, by email, to visit a Web site where their personal information such as account numbers is collected. Phishing requires mimicry of both the email and Web site of a legitimate business, usually a bank or other financial service. The victim is encouraged to visit the site through deceptive urgency in the form of alleged emergencies or serious threats, as well as by deceptive links. This works best when the mimicry is very precise, as small details such as the address of the mimicry site can betray the deception. For instance, a phisher might use "bankofamerica.net" or "bank_of_america.com" to imitate the legitimate site "bankofamerica.com". Phishing deceptions must be considered very unethical since the deceptions are for outright fraud with a large potential payoff.
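
As an illustration of how defenders can screen for such near-miss addresses, here is a minimal sketch in Python using edit distance; the list of legitimate domains and the distance threshold are hypothetical choices, not part of any standard.

```python
# A minimal sketch of flagging look-alike phishing domains by their
# edit distance to known legitimate domains.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEGITIMATE = {"bankofamerica.com"}   # hypothetical whitelist

def looks_like_phish(domain: str) -> bool:
    """Flag domains that are near, but not equal to, a real domain."""
    return any(0 < edit_distance(domain, real) <= 4 for real in LEGITIMATE)

print(looks_like_phish("bank_of_america.com"))  # True: close mimic
print(looks_like_phish("bankofamerica.com"))    # False: the real site
```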

Adware and spyware

 

Adware is software that displays advertisements on your computer. Often it is tied to a desirable free software product; the adware is then the price you pay to run the product. This is ethical if the user is aware of the tradeoff, but many adware suppliers deceptively conceal the purpose of the software in long user agreements so that the user cannot provide informed consent.

 

Spyware is a more serious and rapidly growing problem (Awad & Fitzgerald, 2005). It is software that secretly reports details of what a user is doing on their computer to remote Web sites. It can provide valuable marketing data about what Web sites a user visits, but it almost always violates privacy and can enable identity theft. Again, some spyware vendors deceptively force the user to sign an authorizing agreement that is long and complex. Deception is more necessary than with adware because spyware does not benefit the victim. Awad and Fitzgerald (2005) cite four major deceptive behaviors of spyware: (1) changing settings or key system parameters, (2) surreptitious downloading, (3) bundling with legitimate software, and (4) slowing systems and causing crashes. These are unethical because they violate privacy, hurt the reliability of a system, and violate the principle that owners of something (a computer system) should understand and approve its settings and configuration, much as the owner of a house has the right to approve all activities taking place in it.

Deception by commercial software

 

Though infrequently acknowledged, legitimate and legal commercial software can be deliberately deceptive. Pressures of the marketplace can make software try to look better than it really is (Gotterbarn, 1999). For instance, help facilities may provide little real help, or most of the control buttons may be useless or redundant.

 

Certain practices lend themselves to deceptive exploitation. For instance, many commercial products take advantage of their "captive audience" to advertise other products of the same manufacturer. Often the main purpose of free products is to market non-free products. The user can be harassed repeatedly with "Do you want to upgrade?" requests or advertising images scattered over a page, or can actually be misled into thinking an additional product must be purchased. Again, it is a matter of degree as to whether this is unethical.

 

Commercial software can also deceive to discourage users from using other products. Microsoft, for instance, has repeatedly added its own enhancements to the open standard HTML for Web pages in an attempt to lock users into its own fee-based Web products, and has used similar ploys with several other open-source products. Such techniques can be considered unethical interference with fair competition, and are often described and implemented deceptively.

 

Commercial software can also provide deceptively concealed unnecessary services. For instance, the Microsoft Windows operating system now contains a large number of features that most users never use and are not even aware of, apparently as a reason to encourage users to keep buying updates although its basic features have not changed significantly since Windows 95 in 1995. Such unnecessary code increases the vulnerability of the software to malicious attacks, and can be considered unethical in much the same way as a doctor who performs unnecessary surgery on patients to increase his or her income.

Deception in electronic games

 

Computer games are ubiquitous, and deception is becoming a serious problem with them. Most cheating in games is acquisition of additional information about the game known to only a few players ("cheat codes") and is not usually deceptive. But now with the growth of distributed multiplayer games, deceptions occur where players manipulate their computer software to unfairly give themselves additional capabilities and knowledge; this is possible because some of the game software must be kept on the player's computer for the game to run sufficiently fast (Hoglund and McGraw, 2007). Deceptions can involve finding loopholes in game rules in abnormal situations, such as by aborting play halfway through an action. Many deceptions are considered unethical by game companies and other gamers, and perpetrators can be blacklisted from games because of them. Deception can also occur in multiplayer games through collusion between players to form alliances or build up credits by staged encounters.

Deception in cyber-attacks

 

We now consider deception in direct attacks on computers and software in cyberspace, where deception is used to control a computer or network for personal or group gain.

Eavesdropping

 

Since users of computers and networks often share the same hardware, there is a danger of one user being able to read the private information of another if the software is not designed well. An ethical person should refuse to do this. However, malicious people can deliberately eavesdrop on computer systems to learn secrets. They can do this by surreptitiously installing "sniffers", software that displays all traffic sent over a computer network. Sniffers are a legitimate and useful tool for system administrators, so they cannot be outlawed. The ethical problems of eavesdropping in cyberspace are similar to those of eavesdropping in general; users should be told if system administrators are doing it. And there are reliable technical solutions for preserving privacy on computer systems, such as passwords and encryption.

Personal-identity deception

 

It is easy to assume a new identity on the Internet, where (unless you connect a video camera) no one can see what you look like or how you are acting. Using new online names is actually encouraged in many online communities where anonymity is beneficial, as in online dating or in support groups for people with stigmatizing problems. Some users of the Internet go beyond this to develop online personas different from their real ones. Such personas may be harmless and can provide psychological benefits through role-playing, which can be helpful in psychotherapy or just in helping to understand a new point of view. For instance, a teenager can adopt the name "gamemaster" and pretend to be more knowledgeable than they really are to experience more adult interactions. When such deceptions have net benefits, they are usually ethical. However, role playing can grow to interfere with relationships in the real world, as when someone lies repeatedly about their accomplishments in discussion groups to reinforce serious psychological problems, in which case it becomes unethical. Deceptive role playing is key to the "social engineering" attacks on computer systems discussed below.

Denial of service

 

A common attack in cyberspace tries to tie up resources of a victim computer or site with useless processing, in what is called "denial of service". Its goal is to disable a site of which the attacker disapproves. This usually involves deception in the form of insincere requests for services. For instance, a denial-of-service attack on a Web site can involve sending millions of identical requests for the same Web page within a second from millions of sites. Denial of service exploits timing and surprise.

 

Denial of service has been used as a political tool against organizations that the attacker dislikes, as a form of civil disobedience (Manion & Goodrum, 2000). But it is really a form of vandalism that attacks the usability of a resource, and is thus more invasive than traditional civil disobedience such as boycotts. Thus most security professionals consider it unethical even when used against ethically questionable targets. Denial of service could create serious harm if targeted against a critical resource such as a hospital.

Address spoofing

 

A frequent feature of malicious attacks on computer systems is deception about the attacker's location. Data packets on computer networks are supposed to identify the Internet address (IP address) of their source. When attackers are engaged in illegal activities, they often give false addresses. While ordinary software cannot fake addresses, some illegal software can. Most authorities consider this unethical in the same way that malicious anonymous letters are unethical; the author of provocative data should assume responsibility for it. Accountability is important in maintaining trust in a society.

Record manipulation and privilege escalation

 

Some attacks target data on computers, modifying or deleting it in unauthorized ways. Since computers can record everything that occurs on them, some cyber-attackers delete such "audit" data; since audit data is supposed to be an objective record, tampering with it is definitely unethical. Similarly unethical or illegal are tampering with job rating reports with the goal of improving one's own rating, vandalizing Web pages to discredit the owner, and changing business financial records for personal financial gain.

 

An important kind of record manipulation often done by attackers concerns their rights on a system. Attackers generally try to get themselves authorized as system administrators of a system, because that category of users has the special privilege of being able to modify the operating system. A number of classic techniques such as buffer overflows can enable such "privilege escalation" to facilitate other deceptions. Again, this is abnormal usage of a computer system with potentially dangerous consequences and should be considered unethical.

Trojan horses

 

Many common cyber-attacks involve Trojan horses. These are programs that conceal a malicious intent by clever design. Common Trojan horses are computer viruses (malicious code inserted into programs), worms (malicious self-reproducing processes), and spyware. Viruses and worms are decreasing in importance, while spyware is increasing (Sipior, Ward, & Roselli, 2005). It is often difficult to trace the source of Trojan horses because they attempt to subvert accountability. Trojan horses are generally both unethical and illegal, since they are a form of fraud as to the nature of computer programs.

 

Trojan horses can use steganography to communicate with controllers at other sites. Steganography comprises methods for sending concealed messages in innocent-looking data, such as a message concealed in the least-significant bits of a picture. Steganography is ethically questionable because activities on a computer system should be known to the system and auditable.
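
As a minimal illustration of the least-significant-bit technique, the following Python sketch hides a message in a raw byte buffer standing in for pixel data; real steganography tools must also handle image file formats.

```python
# A minimal sketch of least-significant-bit steganography over a raw
# byte buffer (standing in for pixel data).

def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Overwrite the low bit of each byte with one bit of the message."""
    bits = [(byte >> (7 - k)) & 1 for byte in message for k in range(8)]
    assert len(bits) <= len(pixels), "cover data too small"
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | bit  # change is visually negligible
    return pixels

def reveal(pixels: bytes, length: int) -> bytes:
    """Reassemble `length` bytes from the low bits of the cover data."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for k in range(8):
            byte = (byte << 1) | (pixels[i * 8 + k] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 2      # fake "image" data
stego = hide(cover, b"meet at dawn")
print(reveal(stego, 12))               # b'meet at dawn'
```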

 

The most dangerous Trojan horses are rootkits, programs used to replace key components of the operating system of a computer. They invisibly take control of the computer, exploiting it for purposes such as identity theft, sending spam email, or attacking and taking control of other computers. Rootkits can also "hijack" sessions, connecting you to Internet sites other than those you intend. Sets of computers with installed rootkits can form "botnets", armies of computers that attackers can control remotely for purposes such as denial of service. Botnets have grown quickly as a problem in the last few years. Rootkits and botnets are definitely unethical and illegal.

Hacking

 

"Hacking" is often cited as a key ethical issue for computers and networks.� While the term is also used for software modifications by anyone besides the authors of the software, it usually means malicious attempts of amateur computer aficianados to break into computers for which they are not authorized.� Almost always deception is used to circumvent the authorization.� Since most hackers begin when young, hacking is often similar to teenage trespassing in the physical world: People do it for the excitement and thrill the forbidden.

 

Malicious hackers have proposed various ethical justifications (Spafford, 1992). One is the "socialistic" idea that computer systems are public resources available to whoever wants them. This made sense in the 1960s and 1970s when computer systems were rare resources, but is not valid today when computers are everywhere and are often filled with sensitive personal data. Another is the claim that hackers are honing valuable skills for the software industry. But the skills necessary for hacking are quite specialized and have little relevance to software design, since software rarely embodies stealth and deceit. Another claim is that hackers offer free testing of computer systems to alert their owners to security flaws. A few small commercial companies do use hacking methods ("red teaming") to test the security of computers and networks. However, such information is often hard to use since it does not suggest clear countermeasures. Beyond that, we rarely see altruistic behavior from hackers, such as reporting flaws to clearinghouses like www.cert.org. On the contrary, malicious hackers are often very competitive and want to conceal their attack tricks as much as possible. Discoveries of attack methods are almost always made by victims through inspection of log records and running intrusion-detection systems, not through hacker confessions. Thus we conclude that malicious hacking is unethical.

 

Note that many of the justifications advanced by hackers involve self-deception. Much cyber-attacking involves willful disregard of the implications of actions, such as not thinking it strange to need to collect your paycheck at a small Pacific island with liberal banking laws (Spammer X, 2004). Having so little understanding of what one is doing is unethical in itself.

Information warfare

 

Coordinated cyber-attacks can be used as tactics and strategies of warfare, in what is called "information warfare" (Bayles, 2001). This raises ethical issues common to all forms of warfare (Nardin, 1998), but also some new ones (Arquilla, 1999). Ethical issues in warfare are covered in part by the Geneva Conventions and other international agreements.

 

One ethical stance on warfare is pacifism, which views all warfare, including information warfare, as unethical (Miller, 1991). Even for those who accept limited warfare, information warfare raises serious ethical problems because it is difficult to limit its collateral damage, a key issue under the Geneva Conventions. When one drops a bomb, the damage is limited to a small physical area; but when one uses a computer virus, the damage can easily spread from attacked computers to civilian computers. Spreading of damage is facilitated by the widespread use of key software, such as the Microsoft Windows operating system, so that if an attack succeeds on one machine it can succeed on many others.

 

The Geneva Conventions also prohibit unprovoked attacks and say that counterattacks must be proportionate to the attack and of a similar type. Thus cyber-attacks cannot be counterattacks to conventional attacks. Even legitimate counterattacks can easily become disproportionate in cyberspace because of the difficulty of predicting and monitoring how far an attack will spread (Molander & Siang, 1998), and thus become illegal. Another problem is that enemies in cyberspace try to be inaccessible to one another, so to reach an adversary an attacker must trespass through many intermediate computers, few of which would cooperate if they knew (Himma, 2004). Trespassing is illegal and unethical.

 

Damage assessment is another key ethical problem with information warfare. Unlike that of bombs, the damage caused by cyber-attacks may be subtle, and it may be difficult to find and fix. As is well known to programmers, what seems to be a bug in one module may actually be a result of a bug in another, and attackers can exploit this to hide damage. So cyber-attacks may continue to harm victims long after a peace treaty is signed, much like chemical weapons or battlefield mines, and thus may be unethical. Since the source of a cyber-attack is difficult to trace, it also may be impossible to attribute an attack to a particular nation, or false evidence may be planted to implicate an innocent nation. Finally, most cyber-attacks can only be used once against an adversary, since once the attack is recognized, forensics experts can usually quickly determine its mechanism and devise countermeasures (Ranum, 2004). This means that cyber-weapons are very cost-ineffective; thus they are unethical considering all the more useful things on which a society could spend money.

Social engineering

 

On the border between cyberspace deception and conventional human deception are various forms of "social engineering" designed to fool people into revealing key information for cyberspace such as passwords (Mitnick, 2002). The ethical issues are similar to those of conventional deception. Since many users of cyberspace do not understand the reasons for security procedures, social engineering can be a very effective alternative for accomplishing attacks beyond the purely technical means described above.

Defensive deceptions in response to cyber-attacks

 

Offensive deceptions are difficult to justify ethically from a utilitarian standpoint because their benefit to society is small and their harms can be large. But the balance is different for defensive deceptions. We often see deception in military tactics as the recourse of a weaker force defending itself against a stronger one (Dunnigan and Nofi, 2001). Defenders in cyberspace are often weaker than attackers because today's attackers can exploit a wide range of tools with the element of surprise, even against highly secure computers and networks. So deception might be a good defensive tactic in cyberspace.

Honeypots and honeynets

 

The most common defensive deception today is the honeypot. This is a computer designed solely to be an attack victim, with the goal of collecting data about attack methods (The Honeynet Project, 2004). Honeypots can provide the first notice of new kinds of attacks and permit experimentation with countermeasures. Honeypots are often implemented in groups on local-area networks called honeynets. Honeypots do not need to entice attacks, as they get sufficient numbers of attacks just from normal attacker network reconnaissance.
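
As an illustration of how simple the data-collection core of a honeypot can be, here is a minimal Python sketch of a listener that logs whatever attackers send; the port number and log-file name are arbitrary choices, and a real honeypot would add the containment and concealment discussed below.

```python
# A minimal sketch of a honeypot listener: it accepts connections on
# an otherwise-unused port and logs the source address and payload.

import datetime
import socket

def honeypot(port: int = 2222, logfile: str = "honeypot.log") -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        conn.settimeout(5.0)
        try:
            data = conn.recv(4096)       # capture the attacker's first move
        except socket.timeout:
            data = b""
        with open(logfile, "a") as log:  # record source and payload
            log.write(f"{datetime.datetime.now()} {addr[0]} {data!r}\n")
        conn.close()

if __name__ == "__main__":
    honeypot()
```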

 

Honeypots must deceive to be effective, because attackers want to avoid them. Attackers know their activities will be recorded on a honeypot, revealing secrets of attack methods and exposing them to possible legal retaliation. Furthermore, honeypots appear easy to attack, but this is illusory. Honeypots must control the spread of attacks from them to other machines, since honeypots are only permitted on a network if they do not hurt their neighbor machines much. Hence most honeypots limit the number of outgoing network connections in any one time period. Since attackers will avoid a honeypot if they can recognize it, defenders should camouflage honeypots to look like ordinary machines. For example, the Sebek tool of the Honeynet Project (www.honeynet.org) uses ideas from rootkit technology to modify the listing features of the operating system to conceal itself and its data. Honeypots should also conceal their logging mechanisms, which tend to be more elaborate than on ordinary machines, and they should conceal the mechanisms that limit outgoing attacks. Such deceptions can be considered ethical since all users of honeypots except the administrator are malicious, and the deceptions help reduce attacks both in the short run (by wasting attacker time in fruitless activity) and in the long run (by recording attacks so countermeasures can be found). Note that users of any computer system must implicitly consent to some forms of monitoring for the system to work well.
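
As a sketch of the outgoing-connection limiting just mentioned, consider the following Python fragment; the cap of 15 connections per hour is an illustrative figure, as actual honeynet installations choose their own limits.

```python
# A minimal sketch of the outbound rate limiting that keeps a
# compromised honeypot from attacking its neighbors; the hourly cap
# is a hypothetical choice.

import time

class OutboundLimiter:
    def __init__(self, max_per_hour: int = 15):
        self.max_per_hour = max_per_hour
        self.stamps = []   # times of recent outgoing connections

    def allow(self) -> bool:
        """Permit a new outgoing connection only if the hourly cap
        has not been reached; otherwise silently drop it."""
        now = time.time()
        self.stamps = [t for t in self.stamps if now - t < 3600]
        if len(self.stamps) < self.max_per_hour:
            self.stamps.append(now)
            return True
        return False

limiter = OutboundLimiter()
for i in range(20):
    print(i, limiter.allow())   # the first 15 pass, the rest are blocked
```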

 

Since attackers want to avoid honeypots, another useful deception for defenders is to make ordinary systems appear to be honeypots: "fake honeypots" (Rowe, Duong, & Custy, 2006). Real honeypots may be detected by smart attackers from subtle clues in the code of their operating system, unusual delays in responding to commands, and the absence of evidence of typical user activity in email and document files (Holz & Raynal, 2005). Many of these clues can be faked, inducing attackers to go away without a fight. Such deceptions appear quite ethical because only the most malicious users will look for them.

 

Defensive disinformation

 

Another defensive ploy is to provide attackers with false information to confuse them. "Low-interaction" honeypots do this by pretending to offer services, ports, and sites that they do not actually have (Cohen & Koike, 2003). They do this by mimicking the initial steps by which the real protocols would respond. Then when an attacker tries to use these fake resources, they are met after a certain point in the interaction with error messages or failures to respond. As long as innocent users are not given the addresses of such machines, this should be quite ethical.
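
The following Python sketch illustrates this mimicry of initial protocol steps with a fake FTP service that plays along through the greeting and login, then fails; the port number and message wording are illustrative choices.

```python
# A minimal sketch of a "low-interaction" fake service: it mimics the
# first steps of an FTP dialog, then fails once the attacker tries to
# go further.  A real deployment would use port 21, which requires
# administrative privileges; an unprivileged port is used here.

import socket

def fake_ftp(port: int = 21021) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()
        conn.sendall(b"220 FTP server ready.\r\n")   # plausible greeting
        conn.recv(1024)                              # read USER command
        conn.sendall(b"331 Password required.\r\n")  # keep playing along
        conn.recv(1024)                              # read PASS command
        # after the initial steps, the pretense ends with an error:
        conn.sendall(b"421 Service not available, closing connection.\r\n")
        conn.close()

if __name__ == "__main__":
    fake_ftp()
```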

 

To be more convincing, honeypot file systems can be populated with real files and data from ordinary systems, as a form of defensive mimicry. Directory design is more important here than file design, since attackers are unlikely to examine many files. Convincing directories can be constructed by simulating the average number of directories, number of files per directory, depth of the directory tree, lengths of directory and file names, sizes of files, creation and modification dates of directories and files, and so on (Rowe, 2006). To intrigue attackers, passwords can be associated with fake files, or their contents can be random characters suggesting encryption. These deceptions should be ethical since only malicious users such as spies should be examining the directories and files of a honeypot.
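
A minimal Python sketch of such directory simulation follows; the distributions for file counts, name lengths, and sizes are hypothetical stand-ins for statistics that would be measured on real systems.

```python
# A minimal sketch of populating a honeypot with a statistically
# plausible fake directory tree.

import os
import random
import string

def random_name(mean_len: int = 8) -> str:
    """Draw a file or directory name of roughly typical length."""
    n = max(3, int(random.gauss(mean_len, 2)))
    return "".join(random.choices(string.ascii_lowercase, k=n))

def build_tree(root: str, depth: int = 3) -> None:
    """Recursively create directories and dummy files whose counts
    and sizes imitate an ordinary user's file system."""
    os.makedirs(root, exist_ok=True)
    for _ in range(random.randint(2, 6)):      # files per directory
        size = random.randint(200, 20000)      # plausible file size
        with open(os.path.join(root, random_name() + ".doc"), "wb") as f:
            f.write(os.urandom(size))          # random bytes suggest encryption
    if depth > 0:
        for _ in range(random.randint(1, 3)):  # subdirectories
            build_tree(os.path.join(root, random_name()), depth - 1)

build_tree("fake_home")
```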

 

Disinformation can also be provided to an attacker of an ordinary machine rather than a honeypot. If intrusion-detection software assesses a user as sufficiently suspicious, deceptions can be foisted on them to interfere with their typical attack goals. Easy deceptions that are hard to disprove are false error messages claiming unavailability of a resource that the attacker needs for their attack, such as network access or system-administrator privileges (Rowe, 2007). Such deceptions may be more effective in inducing an attacker to leave than outright resource denial, because they can sound like temporary obstacles and thus encourage an attacker to keep wasting time. These deceptions may not be ethical if the intrusion-detection assessment of a legitimate user is often incorrect, but the cost of a false positive is only wasted time.
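
A minimal Python sketch of this tactic follows; the suspicion threshold and the wording of the false error messages are hypothetical.

```python
# A minimal sketch of foisting false error messages on a sufficiently
# suspicious user while serving everyone else normally.

import random

FALSE_ERRORS = [   # hard to disprove, and they sound temporary
    "Error: network unreachable, try again later.",
    "Error: insufficient privileges for this operation.",
    "Error: resource temporarily locked by another process.",
]

def handle_request(suspicion: float, action) -> str:
    """Run the action for apparently legitimate users, but give
    sufficiently suspicious users a plausible false excuse."""
    if suspicion > 0.8:            # threshold from intrusion detection
        return random.choice(FALSE_ERRORS)
    return action()

print(handle_request(0.95, lambda: "file transferred"))  # gets an excuse
print(handle_request(0.10, lambda: "file transferred"))  # real service
```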

Deceptive delaying

 

A classic ploy of people faced with suspicious requests is to delay responding until more information is obtained about the request or requester. This is useful in cyberspace too. For instance, an Internet site under a denial-of-service attack should respond very slowly to requests; this not only deceives attackers into thinking they are slowing the system, but provides more time to implement countermeasures like blocking certain incoming addresses. Such defensive tactics should be mostly ethical because they only slow activities. However, they could be triggered by legitimate users who accidentally do something suspicious, such as downloading a large number of files at once. If the likelihood of such behavior is nonnegligible, the ethics of delaying tactics is more questionable.
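
A minimal Python sketch of such deceptive delaying (a "tarpit") follows; the time constants are arbitrary illustrations.

```python
# A minimal sketch of deceptive delaying: response time grows with the
# recent request rate, so a flood is slowed while the attacker still
# believes the service is merely overloaded.

import time

class Tarpit:
    def __init__(self):
        self.recent = []   # timestamps of recent requests

    def respond(self, handler):
        now = time.time()
        self.recent = [t for t in self.recent if now - t < 10.0]
        self.recent.append(now)
        delay = 0.1 * len(self.recent) ** 2   # quadratic slowdown
        time.sleep(min(delay, 30.0))          # cap the worst case
        return handler()

pit = Tarpit()
for _ in range(5):
    print(pit.respond(lambda: "page served"))  # each reply gets slower
```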

Deceptive intelligence gathering on the attacker

 

Deception can also be used to gather information about an attacker to help track them or to prepare a legal case against them. One tactic is "honeytokens", digital bait for attackers (Spitzner, 2005): for instance, a bogus account credential or a fake database record that no legitimate user has any reason to touch, so that any access to it signals an intruder.
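
A minimal Python sketch of a honeytoken follows; the token value and the alerting action are hypothetical.

```python
# A minimal sketch of a honeytoken: a bogus record planted in a data
# store, plus a check that raises an alarm whenever it is retrieved.

HONEYTOKEN = ("jsmith-reserve-account", "4929-9999-8888-7777")  # fake data

def alert(msg: str) -> None:
    print("ALERT:", msg)   # in practice: log, page an administrator, etc.

def fetch_record(database: dict, key: str, requester: str):
    record = database.get(key)
    if record == HONEYTOKEN:
        # no legitimate business process ever touches the fake record,
        # so any access is high-confidence evidence of an intruder
        alert(f"honeytoken accessed by {requester}")
    return record

db = {"acct1001": ("alice", "4539-1111-2222-3333"),
      "acct9999": HONEYTOKEN}
fetch_record(db, "acct9999", requester="10.1.2.3")  # triggers the alarm
```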

 

Again, it can be argued that these deceptions are ethical because they are passive measures that require additional illegal actions by the attacker. However, they border on "entrapment", and besides the ethical concerns there are legal limitations on how much law enforcement can use them.

Strategic deception

 

Deception in cyberspace can also be broader in scope. For instance, an organization could pretend that it has better cyberspace defensive capabilities than it really does, to discourage adversaries from attacking it. It could falsely announce installation of a new software technology or a plan to buy highly secure machines. Such strategic deception does raise serious ethical problems, because broad-scale deception affects many people and some are likely to be hurt by it. For instance, contractors may waste time trying to sell the organization software for the nonexistent capability, or the public may assume the organization is impregnable and underestimate risks to it. It is also difficult to maintain a secret involving many people, so many such deceptions will be revealed early.

Future trends

 

Deception will increase in cyberspace in the future as more human activities shift there. For cyber-attacks and deceptive electronic commerce in particular, increasing countermeasures mean that attacks must become more deceptive to remain effective. At the same time, defensive deception is a new idea with much potential, and it will show significant increases as well. This means that ethical issues of what is acceptable deception in cyberspace will be increasingly important. But as deception increases, awareness of it will increase as well, and the ethical issues will be better formulated and understood. Following consensus on key ethical issues, we will see increasing numbers of laws and policies relating to cyberspace that enforce these ethical insights.

Conclusion

 

Many kinds of deception are possible in cyberspace, as enumerated here. Offensive deceptions are generally unethical, and defensive deceptions are generally ethical. Deceptions in electronic commerce may or may not be ethical depending on their nature and the consensus of society.

References

 

ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices (1999). Software engineering code of ethics and professional practice. Retrieved April 15, 2007, from www.acm.org/service/se/code.htm.

Allen, C., Wallach, W., & Smit, I. (2006, July-August). Why machine ethics? IEEE Intelligent Systems, 21 (4), 12-17.

Arquilla, J. (1999). Ethics and information warfare. In Khalilzad, Z., White, J., & Marshall, A. (Eds.), Strategic appraisal: the changing role of information in warfare (pp. 379-401). Santa Monica, California: Rand Corporation.

Artz, J. (1994). Virtue versus utility: alternative foundations for computer ethics. In Proc. of Conference on Ethics in the Computer Age, Gatlinburg, Tennessee, USA, 16-21.

Awad, N., & Fitzgerald, K. (2005, August). The deceptive behaviors that offend us most about spyware. Communications of the ACM, 48 (8), 55-60.

Bayles, W. (2001, Spring). Network attack. Parameters, US Army War College Quarterly, 31, 44-58.

Berghel, H. (2006, April). Phishing mongers and posers. Communications of the ACM, 49 (4), 21-25.

Bok, S. (1978). Lying: moral choice in public and private life. New York: Pantheon.

Brey, P. (2000, December). Disclosive computer ethics. Computers and Society, 10-16.

Castelfranchi, C. (2000, June). Artificial liars: why computers will (necessarily) deceive us and each other. Ethics and Information Technology, 2 (2), 113-119.

Cohen, F., & Koike, D. (2003). Leading attackers through attack graphs with deceptions. Computers and Security, 22 (5), 402-411.

Dunnigan, J., & Nofi, A. (2001). Victory and deceit, second edition: deception and trickery in war. San Jose, CA: Writers Club Press.

Fogg, B. (2003). Persuasive technology: using computers to change what we think and do. San Francisco, CA: Morgan Kaufmann.

Ford, C. V. (1996). Lies! Lies!! Lies!!! The psychology of deceit. Washington, DC: American Psychiatric Press.

Gotterbarn, D. (1999, November-December). How the new Software Engineering Code of Ethics affects you. IEEE Software, 16 (6), 58-64.

Harris, L., & Spence, L. (2002). The ethics of ebanking. Journal of Electronic Commerce Research, 3 (2), 59-66.

Himma, K. (2004). The ethics of tracing hacker attacks through the machines of innocent persons. International Journal of Information Ethics, 2 (11).

Hoglund, G., & McGraw, G. (2007). Exploiting online games: cheating massively distributed systems. Reading, MA: Addison-Wesley.

Holz, T., & Raynal, F. (2005, June). Detecting honeypots and other suspicious environments. In Proc. 6th SMC Information Assurance Workshop, West Point, NY, 29-36.

The Honeynet Project (2004). Know your enemy, 2nd ed. Boston: Addison-Wesley.

Leitch, S., & Warren, M. (2000). Ethics and electronic commerce. In Selected papers from the second Australian Institute conference on computer ethics, Canberra, Australia (pp. 56-59).

Manion, M., & Goodrum, A. (2000, June). Terrorism or civil disobedience: toward a hacktivist ethic. Computers and Society (ACM SIGCAS), 30 (2), 14-19.

Miller, R. (1991). Interpretations of conflict: ethics, pacifism, and the just-war tradition. Chicago, IL: University of Chicago Press.

Mitnick, K. (2002). The art of deception. New York: Cyber Age Books.

Molander, R., & Siang, S. (1998, Fall). The legitimization of strategic information warfare: ethical considerations. AAAS Professional Ethics Report, 11 (4). Retrieved November 23, 2005, from www.aaas.org/spp/sfrl/sfrl.htm.

Nardin, T. (Ed.) (1998). The ethics of war and peace. Princeton, NJ: Princeton University Press.

Quinn, M. (2006). Ethics for the information age. Boston: Pearson Addison-Wesley.

Ranum, M. (2004). The myth of homeland security. Indianapolis, IN: Wiley.

Rowe, N. (2006, January). Measuring the effectiveness of honeypot counter-counterdeception. In Proc. Hawaii International Conference on Systems Sciences, Poipu, HI.

Rowe, N. (2007, May). Finding logically consistent resource-deception plans for defense in cyberspace. In Proc. 3rd International Symposium on Security in Networks and Distributed Systems, Niagara Falls, Ontario, Canada. New York: IEEE Press.

Rowe, N., Duong, B., & Custy, E. (2006, June). Fake honeypots: a defensive tactic for cyberspace. In Proc. 7th IEEE Workshop on Information Assurance, West Point, New York. New York: IEEE Press.

Rowe, N., & Rothstein, H. (2004, July). Two taxonomies of deception for attacks on information systems. Journal of Information Warfare, 3 (2), 27-39.

Sipior, J., Ward, B., & Roselli, G. (2005, August). A United States perspective on the ethical and legal issues of spyware. In Proc. of ICEC, Xi'an, China.

Spafford, E. (1992). Are computer hacker break-ins ethical? Journal of Systems and Software, 17, 41-47.

"Spammer X" (2004). Inside the spam cartel. Rockland, Massachusetts: Syngress.

Spinello, R. (2003). CyberEthics: morality and law in cyberspace. Sudbury, MA: Jones and Bartlett.

Spitzner, L. (2005). Honeytokens: the other honeypot. Retrieved May 30, 2005, from www.securityfocus.com/infocus/1713.

Vrij, A. (2000). Detecting lies and deceit: the psychology of lying and the implications for professional practice. Chichester, UK: Wiley.

Key terms

 

Botnet: A network of computers with rootkits that are secretly controlled by a cyber-attacker.

Disinformation: False information deliberately supplied to mislead an adversary.

Hacker: An amateur attacker in cyberspace.

Honeypot: A computer system intended solely to be a victim of cyber-attacks so as to collect valuable intelligence about attack methods.

Identity deception: Pretending to be someone you are not.

Information warfare: Warfare attacking computers and networks, usually by software exploits.

Phishing: A deception involving email as bait to get victims to go to a Web site where their personal information can be stolen.

Rootkit: Software that secretly permits a cyber-attacker to control a computer remotely.

Social engineering: Techniques to manipulate people to obtain information from them that they would not give you voluntarily.

Spoofing: Pretending to operate from a different Internet address than the one you are really at.

Spyware: Software with a Trojan horse that secretly reports user activities over the Internet to a remote site.

Trojan horse: Software that conceals a malicious intent.

 

Biographical sketch

 

Neil C. Rowe is Professor of Computer Science at the U.S. Naval Postgraduate School, where he has been since 1983. He has a Ph.D. in Computer Science from Stanford University (1983), and E.E. (1978), S.M. (1978), and S.B. (1975) degrees from the Massachusetts Institute of Technology. His main research interest is the role of deception in information processing, and he has also done research on surveillance systems, intelligent access to multimedia databases, image processing, robotic path planning, and intelligent tutoring systems. He is the author of over 140 technical papers and a book.