Neil C. Rowe
U.S. Naval Postgraduate School
Deception is a frequent but underappreciated aspect of human society (Ekman, 2001). Deception in electronic goods and services is facilitated by the difficulty of verifying details in the limited information available in cyberspace (Mintz, 2002). Fear of being deceived (often unjustified) is in fact a major obstacle to wider use of e-commerce and e-government by the public. One survey reported consumers thought fraud on the Internet was 12 times more common than offline fraud, and 3 out of 5 people thought their credit-card number could be stolen in most online transactions (Allen, 2001); both are overestimates. We assess here the nature of the deception threat, how deception can be detected, and what can be done about it.
This chapter appeared in the Encyclopedia of E-Commerce, E-Government, and Mobile Commerce, M. Khosrow-Pour, ed., Hershey, PA: The Idea Group, 2006.
Deception is common in many areas of human endeavor (Ford, 1996). Deception is in fact essential to the normal operation of business, law, government, and entertainment as a way to manipulate people (Nyberg, 1993). But there is a complex boundary between acceptable deception and unacceptable or illegal deception.
Deception can be practiced either by the purveyor (offeror) of goods or services or by the customer (buyer), and it strongly affects trust in a transaction (Friedman, Kahn, & Howe, 2000). Some examples in online activities include:
Usually the motivation for deception in goods and services is financial gain, but other reasons include revenge and self-glorification.
Unfortunately, the rather anonymous nature of cyberspace encourages deception. One problem is that the communications bandwidth, or amount of information that can be transmitted between people, is considerably less than in face-to-face human interactions, even with video cameras. Studies indicate that people are more deceptive the smaller the bandwidth (Burgoon et al., 2003); for instance, people are more deceptive on the telephone than in videoconferencing. The detection of deception in online interactions is made difficult by the absence of many useful visual and aural clues; careful studies of consumer behavior have confirmed this difficulty (Grazioli & Jarvenpaa, 2000). This raises problems for electronic commerce and government.
We can distinguish five major categories of deception in online transactions: puffery or overstated claims, insincerity of promises or claims, trespassing, masquerading, and fraud (Grazioli & Jarvenpaa, 2003). Most instances occur on the World Wide Web, with some in email and other uses of the Internet.
Puffery includes most advertising since it rarely accurately summarizes the merits of a product or service. Deceptive advertising is encouraged by the nature of online interaction: It is hard for a customer to know with whom they are dealing. An impressive Web site is no guarantee of a reliable business, unlike an impressive real-world store or shop. Furthermore, the customer cannot hold and touch the merchandise, and the images, audio, or video provided of it are typically limited. So it is tempting for an online purveyor to make unsupportable claims. Puffery also includes indirect methods such as a Web site for a children's television show that is designed to sell a particular toy, or people who endorse products in online discussion groups without revealing they work for the purveyor ("shilling").
Insincerity has many forms online. Many Web search engines list pages they have been paid to display but that are not the best matches to the given keywords. A purveyor can promise "extras" to a sale they have no intention of delivering, or a customer can promise large future purchases. Emotions can also be faked, even love (Cornwell & Lundgren, 2001). False excuses like "being busy" are easy to make on the Internet. Negative puffery, where a customer or other business says bad things about a product or service (Floridi, 1996), as for revenge or to manipulate stock prices, is another form of insincerity. And "Remove me from the mailing list" links can actually be scams to get your name onto a mailing list.
Trespassing is breaking into computer systems to steal their time, memory, or other resources, and it usually involves deception. It is commonly associated with "hackers", people who break in for fun, but is increasingly practiced by spyware, and by criminals seeking staging sites for attacks on other computers (Chirillo, 2002; Bosworth & Kabay, 2002).
Masquerading or "identity deception" is pretending to be someone that one is not. There are many forms online:
The most serious electronic deceptions in goods and services are crimes of fraud (Boni & Kovacich, 1999; Loader & Thomas, 2000). McEvoy, Albro, and McCracken (2001) and Fraudwatch (2005) survey specific popular techniques. Unscrupulous Web purveyors can collect money without providing a promised good or service since it is easy to appear and disappear on the Web; fake charities are a notorious example. Purveyors may not feel much consumer pressure because it is hard for customers to complain about long-distance transactions. The Internet is well suited to many classic scams, notably the many forms of the "Nigerian letter" asking for money in the promise of receiving much more money in the future. Electronic voting is a special concern for fraud (Kofler, Krimmer, & Prosser, 2003).
Online transactions benefit from the trust of the participants. Deception subverts trust and makes online businesses less efficient because of the subsequent need to check credentials and promises. Because of similar costs to society in general, ethical theories usually hold that most forms of deception are unethical (Bok, 1999), and every society's laws define some forms of deception as fraud. American law uses the doctrine of the "implied warranty of merchantability" to say that a contracted good or service must be provided adequately or the money must be refunded. Waivers of responsibility that consumers must approve before proceeding on a Web site do not have much weight in court because consumers rarely can be said to give informed consent. But there are many other issues; see the many Internet-related publications of the U.S. Federal Trade Commission (FTC, 2005).
Studies have shown that most people are poor at detecting deception (Ford, 1996). Thus in cyberspace with its limited bandwidth, deception is even more of a problem. Most training of people (such as law-enforcement personnel) to recognize deception in human interactions focuses on clues that are absent in cyberspace such as the visual ones of increased pupil dilation, blinking, and self-grooming, and vocal clues such as higher voice pitch, more frequent speech errors, and hesitation. However, some traditional clues to deception do apply to cyberspace (Zhou & Zhang, 2004), including:
All these are common in deceptive advertising, as in "Never need a proscription [sic] again with Viocks, the secret of celebrities [sic] long health!!!!" Here is a real phishing email with deliberate randomness and both deliberate and accidental spelling and punctuation errors (and invisible "normal" text as camouflage for spam filters):
From: Pasquale Pham [email@example.com]
Sent: Tuesday, November 16, 2004 10:17 AM
Subject: Why are you silent,
You can . r e finance your mortga g e . with 4.15 % . ra t e
and reduce your monthly payment at least twice. One minute can
save you t h ousands. Your application is . approv e d.
Besides "low-level" clues which reflect the difficulty of the deceiver controlling all their channels of indirect communication, cognitive clues reflect the difficulty of constructing and maintaining deceptions (Heuer, 1982; Whaley & Busby, 2002). Logical inconsistencies are the most important of these. The above example shows inconsistency between the email address of the sender, their name, and the company they claim to represent; inconsistency between the two parts of the sender's name; and inconsistency between the company, the clickable link text "uxqydujs", and the site the link actually takes you to, www.qolkamdnt.com. The Web registry www.whois.sc reports that this site was registered in Baku, Azerbaijan, for only 21 days, so it is unlikely to be a reputable lender for mortgages in the United States, the country to which it was sent. In addition, it is logically inconsistent to solicit a mortgage and also say "Your application is approved."
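The link-text inconsistency just described can be checked mechanically. The sketch below, a minimal illustration rather than any deployed filter, flags a link whose visible anchor text names one domain while the underlying URL points at another; the function name `link_mismatch` and the matching rule are assumptions for this example.

```python
from urllib.parse import urlparse

def link_mismatch(visible_text: str, href: str) -> bool:
    """Flag a link whose visible text names a different domain than the
    one the href actually targets -- a classic phishing clue."""
    actual = urlparse(href).netloc.lower().removeprefix("www.")
    visible = visible_text.lower().strip()
    # Only domain-like anchor text gives a comparable clue.
    if "." in visible:
        return not (visible == actual or visible.endswith("." + actual))
    return False  # plain anchor text ("click here") names no domain

# Consistent link text vs. the chapter's example of a mismatched target:
print(link_mismatch("www.qolkamdnt.com", "http://www.qolkamdnt.com/x"))  # False
print(link_mismatch("www.mybank.com", "http://www.qolkamdnt.com/x"))     # True
```

A real filter would also resolve redirects and compare the sender's email domain against the claimed company, as in the mortgage example above.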
Lack of links from reputable Web sites is another clue that a site is suspicious, and that can be checked with a Web browser. The cliché that "if it seems too good to be true, it probably is" is always helpful. Techniques derived from crime investigation are helpful for detecting dangerous deceptions (MacVittie, 2002), including computer forensics (Prosise & Mandia, 2001) and criminal profiling (Wang, Chen, & Atabakhsh, 2004).
Because deception can occur in such a range of online activities, a variety of countermeasures should be considered:
· Ignoring it: Deceptive businesses and sites should not be patronized, and critical review can ensure they are not indexed or linked to. Ignoring works well against lesser deceptions.
· Education: Both customers and purveyors can benefit from learning about possible deceptions. For instance, people should know not to give their passwords or identification numbers to anyone, no matter what emergency is alleged. Posting of statements of "netiquette", or etiquette for the Internet, also can educate customers as to acceptable behavior.
· Passwords: To reduce identity deception by users, passwords can be required in accessing a resource such as a Web site.
· Encryption: To maintain data privacy, sensitive data like credit-card numbers should be encrypted in files and in transmission on the Internet (Schneier, 2000). Most Web vendors implement this for transactions.
· Signatures: To authenticate a message (prove who it came from), an unforgeable electronic signature can be attached to it. This encrypts a complicated function of the contents of the message, and can be decrypted by the receiver to prove that the message came unmodified from the sender.
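The sign-then-verify exchange can be illustrated compactly. True electronic signatures use public-key cryptography, which needs a dedicated library; as a minimal stdlib sketch of the same check, the example below uses a keyed digest (HMAC): the sender attaches a digest computed over the message contents, and the receiver recomputes it to confirm the message is unmodified and came from a key holder. The shared key and message are hypothetical.

```python
import hmac
import hashlib

# Hypothetical shared key; real signatures use a private/public key pair
# so that the verifier need not hold the signing secret.
SECRET = b"key-agreed-in-advance"

def sign(message: bytes) -> str:
    # Attach a keyed digest computed over the full message contents.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Recompute the digest; any change to the message changes it.
    return hmac.compare_digest(sign(message), tag)

order = b"Ship 100 units to warehouse 7"
tag = sign(order)
print(verify(order, tag))                             # True: unmodified
print(verify(b"Ship 900 units to warehouse 7", tag))  # False: tampered
```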
· Intrusion detection: Trespassing that breaks through first lines of defense can be recognized by software called intrusion-detection systems (Proctor, 2001).
· Third-party escrow: Utilities like PayPal can manage contracts between customers and purveyors as a neutral third-party broker.
· Protocol design: Deception in electronic commerce can be reduced with good "protocols" (scripts and rules) for interactions. For instance, interruption of an online purchase at any time should not allow other users to see private information, and this can be aided by "cookies" and time limits on responses.
· Reputation-management systems: eBay and a number of Internet businesses have buyers and sellers rate one another. Observed deception affects these ratings, which are visible to future buyers and sellers (Yu & Singh, 2003). While there are ways to deliberately manipulate ratings, doing so is difficult.
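The mechanism such systems share can be sketched in a few lines. The toy ledger below, an illustration only and not any real marketplace's algorithm, lets each completed transaction record a positive or negative rating and exposes the aggregate that future traders would see; the class and trader names are invented for the example.

```python
from collections import defaultdict

class ReputationLedger:
    """Toy eBay-style reputation record: counterparties rate each
    transaction +1 or -1, and the aggregate is publicly visible."""
    def __init__(self):
        self.ratings = defaultdict(list)

    def rate(self, trader: str, score: int) -> None:
        assert score in (+1, -1), "one rating per completed transaction"
        self.ratings[trader].append(score)

    def summary(self, trader: str) -> tuple:
        # (positives, negatives) -- the pair a prospective buyer inspects.
        r = self.ratings[trader]
        return (sum(1 for s in r if s > 0), sum(1 for s in r if s < 0))

ledger = ReputationLedger()
for s in (+1, +1, +1, -1):
    ledger.rate("seller42", s)
print(ledger.summary("seller42"))  # (3, 1)
```

Manipulation resistance in real systems comes from tying ratings to completed transactions, which this sketch only hints at with the assertion.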
· Automated deception detection: Some automated tools assess text for deception (Qin, Burgoon, and Nunamaker, 2004). Some tools eliminate advertising in displaying Web pages (Rowe et al, 2002).
· Manual Web-site assessments: Several organizations rate Web sites (e.g., the U.S. Better Business Bureau's "BBBOnline Reliability Program"). Adequately rated sites get a "seal of approval" which they can display on their sites, but such graphics are easy for a malicious site to copy. Pacifici (2002) offers suggestions for rating sites yourself.
· Getting background on a Web site: The sites www.whois.sc and www.betterwhois.com provide information about who registered a U.S. Web site and where, its description, certification, and "blacklist status". Similar information is available for most countries; see www.iana.org/cctld/cctld-whois.htm.
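One useful signal in such registry records is the domain's age: the mortgage-scam site discussed earlier had been registered for only 21 days. The sketch below, assuming a simplified "key: value" record layout and the common "Creation Date" field name (both vary by registry), extracts a creation date and computes the domain's age in days; the function name is invented for the example.

```python
from datetime import date

def registration_age_days(whois_text: str, today: date):
    """Extract a creation date from whois-style output and return the
    domain's age in days, or None if no recognizable field is found.
    Field names vary by registry; a few common forms are checked."""
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("creation date", "created", "registered on"):
            # Assume an ISO-style YYYY-MM-DD date at the start of the value.
            y, m, d = (int(p) for p in value.strip()[:10].split("-"))
            return (today - date(y, m, d)).days
    return None

record = "Domain Name: qolkamdnt.com\nCreation Date: 2004-10-26\n"
print(registration_age_days(record, date(2004, 11, 16)))  # 21
```

A very young domain is not proof of deception, but for a site claiming to be an established lender it is a strong warning sign.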
· Alerting authorities: Deceptive electronic commerce should be reported to government or consumer agencies.
· Legal recourse: Particularly bad cases of deception should be handled by the courts. But jurisdictional conflicts can occur when the parties are in different countries or administrative areas.
· Counterattacks and revenge: Not recommended because they are usually illegal, risk escalation, and may attack an innocent target because of the difficulty of confirming identity in cyberspace. Counterattacks have been tried in the form of deliberately garbled files posted to music-sharing utilities (Kushner, 2003).
· Defensive deception: One can deceive a deceptive user to entrap or monitor them. For instance, one can post fake credit-card numbers online and see if trespassers use them. "Honeypots" (The Honeynet Project, 2004) are fake Internet sites that entrap trespassers to collect data about them.
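For the fake credit-card bait to be convincing, the planted numbers must at least pass the format validation a thief would apply. Card numbers carry a Luhn check digit, so a plausible honeytoken generator, sketched below as an illustration (the helper names are invented, and no real issuer prefix is implied), computes that digit for a random body.

```python
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the final digit that makes partial+digit pass the Luhn
    check, so a planted fake number survives format validation."""
    digits = [int(c) for c in partial + "0"]  # placeholder check digit
    total = 0
    # Double every second digit from the right, summing digit-wise.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_honeytoken(prefix: str = "4", length: int = 16) -> str:
    # Hypothetical bait generator: a Luhn-valid but unissued-looking
    # number to plant and then watch for attempted use.
    body = prefix + "".join(random.choice("0123456789")
                            for _ in range(length - len(prefix) - 1))
    return body + luhn_check_digit(body)
```

Monitoring for any later use of a planted number is what turns the fake data into evidence of trespassing.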
The suitability of the Internet and the Web to particular kinds of deception means that this phenomenon will be with us for a long time. But a variety of new technology will increasingly provide help in combating it. Rating and information services will be increasingly available for Web sites and other Internet business methods, and operating systems will offer increasingly effective built-in protection against hidden attacks. Laws eventually catch up with new technology, so we expect that legal recourses will steadily improve as precedents are gradually made for cyberspace. How quickly customers and purveyors will overcome their fears of being deceived is another story, however. It may simply take time, much as automobiles were distrusted by the public for years because powerful new technology often looks dangerous.
Deception is always a danger in electronic transactions involving goods and services. Much of the public is reluctant to buy or contract online for this reason. But quite a variety of methods can detect deception and, better yet, prevent it. Wider use of these methods, and greater familiarity with them, can build public trust in online transactions. Trust is a key but complex issue for societies (Sztompka, 1999).
Allen, D. (2001). EConsumers: get lots of credit. Retrieved March 9, 2005 from www.ecognito.net/article2.htm.
Bok, S. (1999). Lying: moral choice in public and private life (updated edition). New York: Vintage.
Boni, W., & Kovacich, G. (1999). I-way robbery: crime on the Internet. Boston: Butterworth-Heinemann.
Bosworth, S., & Kabay, M. (Eds.) (2002). The computer security handbook. New York: Wiley.
Burgoon, J., Stoner, G., Bonito, J., & Dunbar, N. (2003). Trust and deception in mediated communication. Proc. 36th Hawaii Intl. Conf. on System Sciences, Honolulu, HI, 44.1.
Chirillo, J. (2002). Hack attacks revealed. New York: Wiley.
Cornwell, B., & Lundgren, D. (2001). Love on the Internet: Involvement and misrepresentation in romantic relationships in cyberspace versus realspace. Computers in Human Behavior, 17, 197-211.
Ekman, P. (2001). Telling lies: clues to deceit in the marketplace, politics, and marriage. New York: Norton.
FTC (Federal Trade Commission of the United States) (2005). E-commerce and the Internet. Retrieved March 11, 2005, from www.ftc.gov/bcp/menu-internet.htm.
Floridi, L. (1996). Brave.net.world: the Internet as a disinformation superhighway? The Electronic Library, 14, 509-514.
Ford, C. V. (1996). Lies! Lies!! Lies!!! The psychology of deceit. Washington, DC: American Psychiatric Press.
Fraudwatch.com (2005). Internet fraud. Retrieved March 11, 2005 from www.fraudwatchinternational.com/internetfraud/internet.htm.
Friedman, B., Kahn, P., & Howe, D. (2000, December). Trust online. Communications of the ACM, 43 (12), 34-40.
Grazioli, S., & Jarvenpaa, S. (2000, July). Perils of Internet fraud: an empirical investigation of deception and trust with experienced Internet consumers. IEEE Trans. on Systems, Man, and Cybernetics, Part A, 30 (4), 395-410.
Grazioli, S., & Jarvenpaa, S. (2003, December). Deceived: under target online. Communications of the ACM, 46 (12), 196-205.
Heuer, R. J. (1982). Cognitive factors in deception and counterdeception. In Strategic military deception, ed. Daniel, D. C., & Herbig, K. L. (New York: Pergamon), pp. 31-69.
The Honeynet Project (2004). Know your enemy (2nd ed.). Boston: Addison-Wesley.
Kaza, S., Murthy, S., & Hu, G. (2003, October). Identification of deliberately doctored text documents using frequent keyword chain (FKC) model. IEEE Intl. Conf. on Information Reuse and Integration, 398-405.
Kofler, R., Krimmer, R., & Prosser, A. (2003, January). Electronic voting: algorithmic and implementation issues. Proc. of 36th Hawaii Intl Conf. on System Sciences, Honolulu, HI, 142a.
Kushner, D. (2003, May). Digital decoys. IEEE Spectrum, 40 (5), 27.
Loader, B., & Thomas, D. (2000). Cybercrime. London: Routledge.
MacVittie, L. (2002). Online fraud detection takes diligence. Network Computing, 13 (4), 80-83.
McEvoy, A., Albro, E., & McCracken, H. (2001, May). Dot cons -- Auction scams, dangerous downloads, investment and credit-card hoaxes. PC World, 19 (5), 107-116.
Mintz, A. P. (ed.) (2002) Web of deception: misinformation on the Internet. CyberAge Books, New York.
Nyberg, D. (1993). The varnished truth: truth telling and deceiving in ordinary life. Chicago: University of Chicago Press.
Pacifici, S. (2002). Getting it right: verifying sources on the net. Retrieved March 2, 2005 from www.llrc.com/features/verifying.htm.
Proctor, P. E. (2001). Practical intrusion detection handbook. Upper Saddle River, NJ: Prentice-Hall PTR.
Prosise, C., & Mandia, K. (2001). Incident response. New York: McGraw-Hill Osborne Media.
Qin, T., Burgoon, J., & Nunamaker, J. (2004). An exploratory study of promising cues in deception detection and application of decision tree. Proc. 37th Hawaii Intl. Conf. on System Sciences.
Rowe, N., Coffman, J., Degirmenci, Y., Hall, S., Lee, S., & Williams, C. (2002, July). Automatic removal of advertising from Web-page display. Proc. ACM-IEEE Conf. on Digital Libraries, Portland, OR, 406.
Schneier, B. (2000). Secrets and lies: digital security in a networked world. New York: Wiley.
Sztompka, P. (1999). Trust. London: Cambridge University Press.
Wang, G., Chen, H., & Atabakhsh, H. (2004, March). Automatically detecting deceptive criminal identities. Communications of the ACM, 47 (3), 71-76.
Whaley, B., & Busby, J. (2002). Detecting deception: practice, practitioners, and theory. In Godson, R., & Wirtz, J. (Eds.), Strategic denial and deception (New Brunswick: Transaction Publishers), pp. 181-221.
Yu, B., & Singh, M. (2003, July). Detecting deception in reputation management. Proc. Conf. Multi-Agent Systems (AAMAS), Melbourne, AU, 73-80.
Zhou, L., & Zhang, D. (2004, January). Can online behavior reveal deceivers? An exploratory investigation of deception in instant messaging. Proc. 37th Hawaii Intl. Conf. on System Sciences, 22-30.
deception: Conveying or implying false information to other people.
fraud: Criminal deception leading to unjust enrichment of the deceiver.
honeypot: A deceptive computer system that entraps trespassers into revealing their methods.
identity deception: Pretending to be someone or some category of person that one is not.
intrusion-detection system: Software for detecting when suspicious behavior occurs on a computer or network.
netiquette: Informal policies for behavior in a virtual community, analogous to etiquette.
phishing: Inducing people (often by email) to visit a Web site that steals personal information about them.
shilling: Making claims (pro or con) for something without revealing that you have a financial stake in it.
signature, electronic: A code someone supplies electronically that confirms their identity.
social engineering: Using social interactions to deceptively steal information like passwords from people.