The Ethics of Deception in Virtual Communities
Neil C. Rowe
U.S. Naval Postgraduate School
Code CS/Rp, 833 Dyer Road, Monterey, California 93943 USA
ncrowe@nps.edu
We examine ethical issues of deception within virtual communities. Opinions differ as to the degree to which lying is permissible in a social context, and this same difference of opinion extends to cyberspace and to other forms of deception. Deception can seriously harm a community since it damages trust, a necessary condition for many community activities, and it can gravely hurt the deceiver's standing in the community. But much online deception is harmless and can provide justifiable benefits to the deceiver, as with the identity deceptions that are common online. Such justifications are not tenable, however, for deceptions that violate laws, such as those involving virtual assaults.
This article appeared in the Encyclopedia of Virtual Communities and Technologies, Hershey, PA: Idea Group, 2005.
Deception is an infrequent but inevitable part of human social interaction, and it fulfills important social needs despite its disadvantages. An obvious question is to what extent deception can be justified in virtual communities, and whether the justification differs from that for deception in traditional societies. While animals and plants blithely use deception (Mitchell & Thompson, 1986), humans are subject to many social constraints that affect the feasibility and suitability of deception.
Deception is a key issue in ethics, with many important applications in law, business, politics, and psychology. It has several potential negative consequences (Ford, 1996). Once discovered, it damages relationships, which depend on trust; it can hurt a community by focusing its attention on false issues and devaluing its communications; it can hurt the deceiver's reputation and leave them unable to function in the community; and even if undiscovered, it reinforces the deceiver's self-deception and can ultimately hurt them (Sztompka, 1999).
Several studies have focused on the ethics of one form of deception, lying. Bok (1978) has been influential in arguing for more discriminating use of lying. This work analyzes a wide range of cases for lying and suggests relatively stringent guidelines, with the main categories being:
- White lies (small lies that are seemingly harmless). These are often unnecessary, since carefully chosen truthful statements or silence may easily serve the same purposes.
- False excuses (Snyder, Higgins, & Stucky, 1983). Although these are passive lies, told to prevent something else, they can indirectly cause as much harm as active lies.
- Lies to prevent harm in a crisis. Serious crises do not occur very often, so it is tempting to mislabel noncritical situations as critical.
- Lies to liars in retaliation. But this lowers the retaliator to the same moral level as the offender.
- Lies to enemies on general principles. But "enemy" is a fluid and poorly defined concept that is often used to justify bigotry.
- Lies protecting peers and clients. Again, carefully chosen truthful statements or silence is often possible and preferable.
- Lies for the public good (as by politicians) (Levi & Stocker, 2000). These are very difficult to justify since everyone has a different definition of the "public good".
- Paternalistic lies (as to children). Guidance and persuasion can often eliminate the need for such lies.
- Lies to the sick and dying. This violates the right of patients to make informed decisions.
As a rule of thumb, Bok suggests that a justifiable lie must satisfy three criteria: (1) that there are no alternative courses of action to lying; (2) that the moral arguments for the lie outweigh the moral arguments against it; and (3) that a "reasonable person" with no personal interest in the outcome would approve of the lie.
Nyberg (1993) takes a more tolerant view of lying, arguing that truth telling is only an instrumental value, not an intrinsic moral value. Most arguments against deception, including Bok's, make a "slippery slope" argument that permitting any deception will encourage more deception. But in fact, deception is intrinsic to all societies, and few societies have collapsed in a cycle of increasing deception. Deception is often necessary in law (including police work), business (including negotiation), politics (including diplomacy), and psychology (as an object of therapy). Deception helps maintain the civility of a society by permitting concealment of thoughts, often more effectively than silence, thereby judiciously regulating the information conveyed from one member to another. Deception is an essential tool in maintaining privacy, as an alternative to creating ambiguity about one's self. Deception is essential in maintaining friendships as a way of avoiding hurt feelings; contrary to popular belief, friends do not expect truth from friends but expect that friends serve their best interests. Deception is also essential in crises when one is confronted with evil forces.
A question is to what extent the previous analysis applies to deception in virtual communities. There is both more deception and more opportunity for deception in a virtual society, where the visual and aural presence of members is usually lacking and greater degrees of anonymity are possible (Friedman, Kahn, & Howe, 2000). But opportunity does not excuse deception.
Identity deception is considered harmless in many virtual communities (Donath, 1998). Does it really matter that someone claiming to be a 21-year-old female model is actually a 40-year-old overweight male? If interaction within a virtual community is entirely virtual, such impersonation might seem harmless, and perhaps even beneficial, because its role-playing permits a form of psychotherapy. But usually virtual communities relate in some way to the real world, as when members are looking for other members to date. And some deceptions involving serious matters like death can be emotionally devastating (Brundage, 2001). So every community must set boundaries for acceptable identity deception (Katz & Rice, 2002).
Mimicking of data and processes can be dangerous to virtual communities because the confirmatory information that often reveals such mimicry in the real world can be lacking. For instance, posting a fake memo from a boss can hurt all concerned. As for trolling, it does benefit the perpetrator: it provides an outlet for aggression, a problem in civilized societies, and gives the perpetrator a feeling of power, a problem of adolescents everywhere. Nonetheless, trolling and other online insincerity are antisocial behavior and should be treated as such. Virtual communities especially need sincerity because of the ease of anonymity, so insincerity can be disruptive, even highly disruptive. Communities need to set "netiquette" guidelines to reduce the problem.
False promises and excuses are another problem for virtual communities because it is hard to monitor promise fulfillment and justifications. For instance, people may repeatedly promise online to meet in person without any intention of doing so. Some of this can be covered by netiquette, and false excuses are usually ignorable. However, other promises in a virtual community, such as contracts between members, can be just as serious as in the real world. An example would be an agreement between two players of a fantasy game to provide one resource in exchange for another. Many virtual communities provide valuable services, and even those intended primarily for entertainment are often taken very seriously by their members. So violation of a contract in a virtual world should be treated just as seriously as in the real world.
Deception can also occur in serious criminal activities in virtual communities. Such activities may involve fraud (Boni & Kovacich, 1999). They may also involve attacks directly on computer systems (Schneier, 2000), either for entertainment, as by "hackers", or to advance personal agendas, as with disgruntled employees or terrorists. Most of these exploit identity deception.
Many software defenses against attacks on computer systems are available, such as passwords, encryption, and access controls. Most defenses impose some restrictions on the user, and most have flaws that can be exploited by knowledgeable attackers. Many attacks on computers are governed by criminal law (Loader & Thomas, 2000); for instance, damaging of data is generally subject to the laws protecting property. However, laws require time to enforce and prosecute, and that may be insufficient redress for serious damage. For that reason, deception has been suggested as a defense method itself. For instance, a law-enforcement agent may pretend online to be a 12-year-old child to catch pedophiles, or fake credit card numbers may be distributed in a borderline-legal newsgroup to catch anyone who uses them. For people trying to attack computer systems, a decoy computer system called a "honeypot" (one not used for any other purpose) can be made easy to attack, and all activity on it can be recorded to obtain clues about attack methods (The Honeynet Project, 2002). Such deceptions by law enforcement may be ruled entrapment, however.
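To make the honeypot idea concrete, here is a minimal sketch of a low-interaction decoy listener, written in Python (the article itself presents no code, so the language, the port number, the fake service banner, and the log file name are all illustrative assumptions, not features of any particular honeypot product). Because the machine serves no legitimate purpose, any connection to it is suspicious and worth recording.

    # Minimal low-interaction honeypot sketch (illustrative assumptions
    # throughout: port, banner, and log file are not a standard).
    import datetime
    import socket

    LISTEN_PORT = 2222                         # decoy port (assumption)
    FAKE_BANNER = b"SSH-2.0-OpenSSH_3.4\r\n"   # plausible-looking service banner
    LOG_FILE = "honeypot.log"                  # where attacker activity is recorded

    def log(message: str) -> None:
        """Append a timestamped record of activity to the log file."""
        stamp = datetime.datetime.now().isoformat()
        with open(LOG_FILE, "a") as f:
            f.write(f"{stamp} {message}\n")

    def main() -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("0.0.0.0", LISTEN_PORT))
            server.listen()
            log(f"decoy listening on port {LISTEN_PORT}")
            while True:
                conn, addr = server.accept()
                with conn:
                    # Every connection is logged; nobody has a reason to be here.
                    log(f"connection from {addr[0]}:{addr[1]}")
                    conn.sendall(FAKE_BANNER)   # present the fake service
                    conn.settimeout(10)         # do not wait forever for input
                    try:
                        data = conn.recv(1024)  # record whatever the visitor sends
                        if data:
                            log(f"received from {addr[0]}: {data!r}")
                    except socket.timeout:
                        log(f"no data from {addr[0]} before timeout")

    if __name__ == "__main__":
        main()

A production honeypot of the kind described by The Honeynet Project (2002) records far more, such as full network traffic captures, but the essential deception is the same: an apparently attackable service whose only real function is observation.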
Virtual communities are becoming larger and more diverse in their membership, and ethical problems, violations of netiquette, and even crime will continue to increase. Forms of deception that are especially common in virtual communities, such as fake identities and false claims, must be anticipated and countered. We will see increased specification of appropriate behavior in virtual communities through netiquette and other policies. We will also see the increased appearance or imposition of moderators and leaders in virtual communities to enforce these policies.
Deception is a fundamental problem for virtual communities. Some of it is benign, some of it can be controlled or prevented by administrators enforcing policies, and some of it is criminal and needs to be prosecuted. Virtual communities primarily need to address the second category.
Bok, S. (1978). Lying: moral choice in public and private life. New York: Pantheon.
Boni, W., & Kovacich, G. (1999). I-way robbery: crime on the Internet. Boston: Butterworth-Heinemann.
Brundage, S. (2001, February). Playing with death. Computer Gaming World, 29-31.
Donath, J. (1998). Identity and deception in the virtual community. In Kollock, P., & Smith, M. (Eds.), Communities in cyberspace. London: Routledge.
Ford, C. (1996). Lies! Lies!! Lies!!! The psychology of deceit. Washington, DC: American Psychiatric Press.
Friedman, B., Kahn, P., & Howe, D. (2000, December). Trust online. Communications of the ACM, 43(12), 34-40.
The Honeynet Project (2002). Know your enemy. Boston: Addison-Wesley.
Katz, J., & Rice, R. (2002). Social consequences of Internet use. Cambridge, MA: MIT Press.
Levi, M., & Stocker, L. (2000). Political trust and trustworthiness. Annual Review of Political Science, 3, 475-508.
Loader, B., & Thomas, D. (2000). Cybercrime. London: Routledge.
Mitchell, R., & Thompson, N. (Eds.) (1986). Deception: perspectives on human and nonhuman deceit. Albany, NY: SUNY Press.
Nyberg, D. (1993). The varnished truth: truth telling and deceiving in ordinary life. Chicago: University of Chicago Press.
Schneier, B. (2000). Secrets and lies: digital security in a networked world. New York: Wiley.
Snyder, C. R., Higgins, R. L., & Stucky, R. J. (1983). Excuses: masquerades in search of grace. New York: Wiley.
Sztompka, P. (1999). Trust: a sociological theory. Cambridge, UK: Cambridge University Press.
access control: Software control of the use of a computer.
anonymity: Ability to conceal one's identity.
deception: Conveying or implying false information to other people.
encryption: Concealing data by encoding it in a form that requires a secret "key" to decode.
hacker: Someone who breaks into a computer system for fun.
honeypot: A deceptive computer system that entraps attackers into revealing their methods.
netiquette: Informal policies for behavior in a virtual community, analogous to etiquette.
paternalistic lies: Lies told for the ostensible good of the person deceived.
white lies: Lies that are minor and supposedly harmless.
This work was supported by the National Science Foundation under the Cyber Trust program.