Logical Modeling of Deceptive Negative Persuasion

Neil C. Rowe

U.S. Naval Postgraduate School, Code CS/Rp, 1411 Cunningham Road,

Monterey, CA 93943 USA

ncrowe at nps.edu

Abstract. It is often easier to persuade someone that something is impossible to do than that it is possible, since the absence of one necessary resource suffices. This makes lying a tempting tactic for negative persuasion. We consider the problem of finding convincing lies for it as one of maintaining consistency of a set of logical assertions; we can track that consistency with a computer program. We use an example of negative persuasion against electronic voting in elections, where automated analysis then suggests ways to prevent it.

This paper appeared in the Second Conference on Persuasive Technology, Stanford, California, USA, April 2007.

Negative persuasion is convincing someone that a goal of theirs is impossible. Usually the persuader attempts to show that one or more necessary conditions on the goal cannot be achieved. Often these conditions concern needed resources. For instance, we can try to persuade someone to stop smoking by pointing out that a new tax on cigarettes makes them too expensive to afford.

Negative persuasion is so effective at motivating people that there is a great temptation to lie to accomplish it. Lies can be more flexible than just denying a resource outright because they can provide excuses rather than just prohibitions. Our previous research investigated a wide spectrum of useful lies for defense of computer systems against attacks (Rowe and Rothstein, 2004). Computers can be excellent liars since they can be well prepared, think quickly, have a good memory, avoid feelings of guilt, and pretend effectively (Vrij, 2000). False excuses are good lies for computers since many features of cyberspace are hard to confirm. But convincing excuses must be consistent.

Our theory of deceptive negative persuasion addresses goal resources that an agent wants to obtain (such as a vote in an election). A counteragent that controls the resources can try to interfere with the agent's plans by lying about secondary-resource availability in a "counterplan" (Carbonell, 1981). Denial of secondary resources can be more effective at foiling plans than denial of primary ones because (a) it is less suggestive of deliberate manipulation, and (b) it is less discouraging, so the agent is more likely to make further attempts of the same kind and thereby waste more of its resources. We assume that the counteragent only wants to interfere with some kinds of agents, and decides whether to do so once it learns about the agent. To avoid contradicting itself and revealing its deceptions, the counteragent should track the assertions it makes. Deceivers should conceal their deceptions because people can have emotional reactions to them and may engage in retaliatory or violent behavior.

Consistency can be maintained by tracking assertions about resources such as physical objects, data, credentials, authorizations, and knowledge. Most resources change status rarely; for instance, a working computer is highly likely to still be working an hour later if it has remained on. We propose six "facets" of the status of each resource, in order of appearance: existence, authorization for use, intrinsic readiness for use, operability (functional correctness), compatibility with other resources, and moderation (reasonability of its parameters). When an agent successfully completes an action, this implies that all the facets are valid for each input resource. But when an action fails, a particular facet can usually be blamed, and all subsequent facets are then invalid. For instance, if a voting machine will not start up, then it cannot be operable or compatible with other resources. Inferences of facet invalidity also proceed upward from a part to a whole containing it, and inferences of validity proceed in the opposite direction. So if one page of a ballot does not work properly, the entire ballot does not work properly. Task-specific inferences can also apply; for instance, when a voting machine is inoperable, it cannot produce ballots.
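These inference rules are easy to express in logic programming. The following is a minimal Prolog sketch (illustrative only; the predicate names are not those of our implementation):

    % Facets of a resource in order of appearance.  A failure blamed on one
    % facet invalidates that facet and all facets after it.
    facet_order([existence, authorization, readiness, operability,
                 compatibility, moderation]).

    later_facet(F1, F2) :-                  % F2 follows F1 in the facet ordering
        facet_order(Order),
        nth0(I1, Order, F1),
        nth0(I2, Order, F2),
        I2 > I1.

    invalid(Resource, Facet) :-             % the blamed facet and all later ones are invalid
        blamed(Resource, Blamed),
        ( Facet = Blamed ; later_facet(Blamed, Facet) ).
    invalid(Whole, Facet) :-                % invalidity propagates from part to whole
        part_of(Part, Whole),
        invalid(Part, Facet).

    valid(Resource, _Facet) :-              % a successful action validates all facets of its inputs
        succeeded_with(Resource).
    valid(Part, Facet) :-                   % validity propagates from whole to part
        part_of(Part, Whole),
        valid(Whole, Facet).

    % Example facts: page 1 of the ballot fails to display (operability).
    part_of(ballot_page1, ballot).
    blamed(ballot_page1, operability).
    succeeded_with(voting_machine).

Under these facts, the query invalid(ballot, compatibility) succeeds, matching the ballot example above.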

To carry out deceptive negative persuasion against an agent, the counteragent should choose some secondary resources necessary to the agent's plan and some facets of those resources not yet described, and deny those resources by lying about the validity of the facets. A key issue is when to start deceiving. Although the deceivee must first be judged as worthy of deception, the longer the delay, the fewer resources remain on which to deceive without introducing inconsistencies. Deception should also be avoided at key agent actions (such as when a voter submits a ballot), since the link between those actions and the deceptions is then more obvious.

We can rate acceptable deceptions by the product of four factors in a form of Naive Bayes inference:
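A minimal sketch of such a product-of-factors rating follows; the factor names and numbers are placeholders for illustration only, not the factors used in our implementation.

    % Rate a candidate deception as the product of its factor scores,
    % treating the factors as independent in the style of Naive Bayes.
    deception_score(Deception, Score) :-
        findall(P, factor_score(Deception, _Factor, P), Ps),
        product(Ps, Score).

    product([], 1.0).
    product([P|Ps], Total) :-
        product(Ps, Rest),
        Total is P * Rest.

    % Placeholder factor scores for one hypothetical deception.
    factor_score(d1, plausibility,        0.8).
    factor_score(d1, consistency,         0.9).
    factor_score(d1, resource_importance, 0.5).
    factor_score(d1, timing,              0.4).

Here deception_score(d1, S) gives a rating of about 0.144.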

There are also several tactics for implementing deceptions, to provide variety:

- To deceive on existence, claim inability to find the resource.

- To deceive on authorization, deny outright that the agent is authorized to use the resource, or ask for additional passwords or codes that the agent does not possess.

- To deceive on readiness, issue an immediate error message on attempted use, such as "cannot be opened" or a cryptic code string, or silently do nothing.

- To deceive on operability, state that the resource is not working, stop partway through use with an error message, or appear never to terminate.

- To deceive on compatibility, pick another needed resource and cite a pair of allegedly incompatible attributes of the two.

- To deceive on moderation, cite a parameter limit that is exceeded.
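In an implementation, such tactics can be kept as a simple fact base indexed by facet and chosen at random to supply the desired variety. A minimal sketch (the tactic names merely summarize the list above):

    % Tactic options for each facet.
    tactic(existence,     claim_resource_not_found).
    tactic(authorization, deny_authorization_outright).
    tactic(authorization, request_unknown_password).
    tactic(readiness,     immediate_error_message).
    tactic(readiness,     silent_nonresponse).
    tactic(operability,   report_not_working).
    tactic(operability,   abort_with_error_message).
    tactic(operability,   appear_never_to_terminate).
    tactic(compatibility, cite_incompatible_attribute_pair).
    tactic(moderation,    cite_exceeded_parameter_limit).

    % Choose one tactic for a facet at random (random_member/2 in SWI-Prolog).
    choose_tactic(Facet, Tactic) :-
        findall(T, tactic(Facet, T), Choices),
        random_member(Tactic, Choices).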

Deceivees have options of their own for detecting and foiling deceptions, including thorough testing of automated systems against misuse, designing systems to log all events (especially unusual ones) so that deceptions can be detected and attributed later, asking probing questions to catch deceivers in inconsistencies, providing alternate resources for allegedly unavailable ones, and enforcing policies and laws against deceptive practices while ensuring that all potential deceivees know what those policies and laws are.

Our previous work applied these ideas to the defense of a computer system from a network attack, by persuading the attacker that critical resources needed to continue were not available (Rowe, 2007). In the current work we studied an example of the reverse problem: figuring out how to thwart negative deception that tries to frustrate attempts to vote electronically. Keyssar (2000) recounts the tortured history of suffrage in the United States, which includes many methods of denial such as difficult registration sites, difficult registration hours, poll taxes, difficult required documents, difficult poll locations, ballot shortages, inadequate polling facilities, and lost ballots (Piven & Cloward, 1985). Electronic voting (Gritzalis, 2003) could provide more secure election management, but its implementation is difficult. Voting is an ideal target for negative deception because it is run by government bureaucracies with a reputation for frustrating inefficiency (Barton, 1980). We would like to provide systematic guidance for electoral commissions, watchdog groups, policies, and laws to safeguard the electoral process.

We implemented a Prolog program to model this problem using a two-agent model (voter and election commission). The process modeled is one where voters register to get a password, then use this password on election day to vote on machines at a polling place. The top-level goals for the voter were to submit a ballot and get it acknowledged; the top-level goals for the commission were to present a count of the vote and to correctly acknowledge that the vote of each voter was included in the total. We implemented a logical model of the voting process with formal definitions of 15 action types permitting a total of 35 distinct actions. The resources required were ballots, votes, voting machines, voting networks, counts, "logged-in" status, proofs of residency, proofs of age, passwords, registrars, knowledge of the locations of the registration office, the poll, and the information desk, and knowledge of the time of the election and of when a voting machine was free. With some random choices included to model real-world uncertainty, the calculated voting plans averaged 38.2 steps. Here are example specifications for the act of person X marking document Y:
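In outline, each action specification lists the action's preconditions on resource facets and its effects on the state. The Prolog sketch below is illustrative only; the predicate names are placeholders rather than the exact code of our implementation.

    % Illustrative sketch of an action specification: person X marks
    % document Y with marker M.  Preconditions name the resource facets
    % that must be valid; effects describe the resulting state change.
    person(voter1).   document(ballot_page1).   marker(marker1).

    action_type(mark(X, Y, M)) :-
        person(X), document(Y), marker(M).

    precondition(mark(X, Y, M),
                 [ holds(Y, existence),   holds(M, existence),
                   authorized(X, Y),      authorized(X, M),
                   holds(Y, operability), holds(M, operability) ]).

    add_effects(mark(X, Y, _M),     [ marked(Y, X) ]).
    delete_effects(mark(_X, Y, _M), [ unmarked(Y) ]).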

We rated plausible and logically consistent ways to falsely deny resources for a random voting plan. Our rating method described above prefers deceptions that look like normal breakdowns of the voting process rather than a conspiracy to prevent voting, which could cause public outcry. Here are the highest-rated deceptions found by our implementation; the full analysis found 116 deception opportunities. Ensuring logical consistency means that each deception must begin with the first use of its resource and be continued with every subsequent use of that resource.

- When the voter tries to vote on page 1 of the ballot, abort execution with an error message about the ballot. [weight 0.129]

- When the voter tries to log in to the voting machine, refuse with an error message about invalid credentials. [weight 0.128]

- When the electoral commission tries to count the vote, abort with an error message saying the count is invalid. [weight 0.126]

- When the electoral commission tries to acknowledge a voter ballot, abort with an error message. [weight 0.126]

- When the voter tries to use a marker to mark their ballot, have this fail. [weight 0.125]

As mentioned, each of these can be implemented in several ways. For instance, the first deception could mean displaying an obviously damaged ballot image, having the machine crash (stop working), or having the machine wait indefinitely without displaying anything.
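To honor the consistency requirement noted above, an implementation can record each deception the first time it is told and replay the same story on every later use of the resource. A minimal sketch (predicate names illustrative):

    :- dynamic deception_told/2.           % deception_told(Resource, Facet)

    % On each use of a resource, repeat any deception already told about it;
    % otherwise record the chosen deception before delivering it.
    handle_use(Resource, _ChosenFacet) :-
        deception_told(Resource, Facet), !,
        deliver_deception(Resource, Facet).
    handle_use(Resource, ChosenFacet) :-
        assertz(deception_told(Resource, ChosenFacet)),
        deliver_deception(Resource, ChosenFacet).

    deliver_deception(Resource, Facet) :-
        format("~w: problem reported with ~w.~n", [Resource, Facet]).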

Each of the ploys we have found (especially the highly rated ones) then needs to be addressed by specific planned countermeasures. For instance, voting machines should record all actions on them in a file on each machine, especially unusual actions such as ballot failures. Replacement voting machines or alternate procedures for obtaining voting credentials should be provided. Manipulation of vote counts could be reduced by requiring formal verification of electoral software to show that it works correctly in all situations, including interruptions. Obstruction of legitimate voter-registration attempts can be reduced by voters asking for precise legal justifications for any obstructive behavior encountered, then checking these with independent authorities. Deception involving voter registration can also be reduced by laws requiring the media to announce voter-registration offices, and by instituting penalties for deliberately disseminating incorrect information.

 

Acknowledgements. This work was supported by the National Science Foundation under the Cyber Trust Program. Opinions expressed are those of the author alone.

References

Barton, A.: A Diagnosis of Bureaucratic Maladies. In: Weiss, C., and Barton, A. (eds.): Making Bureaucracies Work. Sage, Beverly Hills, California (1980) 27-36

Carbonell, J.: Counterplanning: A Strategy-Based Model of Adversary Planning in Real-World Situations. Artificial Intelligence 16 (1981) 295-329

Gritzalis, D.: Secure Electronic Voting. Kluwer, Norwell, Massachusetts (2003)

Keyssar, A.: The Right to Vote: The Contested History of Democracy in the United States. Basic Books, New York (2000)

Piven, F., Cloward, R.: Prospects for Voter Registration Reform: A Report on the Experiences of the Human SERVE Campaign. Political Science 18 (3) (1985) 582-593

Rowe, N.: Finding Logically Consistent Resource-Deception Plans for Defense in Cyberspace. 3rd International Symposium on Security in Networks and Distributed Systems, Niagara Falls, Ontario, Canada (May 2007)

Rowe, N., Rothstein, H.: Two Taxonomies of Deception for Attacks on Information Systems. Journal of Information Warfare 3 (2) (July 2004) 27-39

Vrij, A.: Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice. Wiley, Chichester, UK (2000)