Thinking about rationality of voting, and cooperative systems in general, in terms of the design of intelligent agents

Matt Ginsberg writes:

I saw your mention on 538.com [see also this article and this with Edlin and Kaplan]; a long time ago (the '80s), I [Ginsberg] wrote an article with Mike Genesereth and Jeff Rosenschein about rationality for automated agents in collaborative environments. The punch line, which probably bears on this issue as well, is that the strategy "Act in such a way that, if all the other agents were designed identically, we'd do optimally" is provably a Pareto-optimal way to design such agents. It's a nice result: it handles the prisoner's dilemma, why you should vote, why you should throw yourself on the grenade, etc.

Ginsberg’s papers on the topic are here and here. I like the idea of framing the problem in terms of designing intelligent agents. This bypasses some of the normative vs. descriptive issues that cloud the analysis of rationality in human behavior.
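To make the punch line concrete, here is a minimal sketch of a one-shot prisoner's dilemma (in Python; the language and the payoff numbers are my own illustrative choices, not taken from Ginsberg's papers). An agent designed by the rule "act as if every other agent were designed identically" compares outcomes along the diagonal of the payoff matrix and cooperates, while an agent that best-responds to a fixed opponent defects.

    # Illustrative payoff matrix for a symmetric two-player prisoner's dilemma.
    # Entries are (my payoff, opponent's payoff); the numbers are arbitrary but
    # satisfy the usual ordering: temptation > reward > punishment > sucker.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    ACTIONS = ("C", "D")

    def symmetric_design_choice():
        """Pick the action that is best if every other agent, being designed
        identically, picks the same action I do."""
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, a)][0])

    def best_response(opponent_action):
        """Classical reasoning: maximize my payoff with the opponent's action
        held fixed.  This recommends defection no matter what they do."""
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

    a = symmetric_design_choice()
    print("identically designed agents both play:", a, PAYOFFS[(a, a)])
    print("best responders play:", best_response("C"), best_response("D"))

Read this way, the voting case is the same comparison along the diagonal: you evaluate turning out conditional on everyone who decides the way you do also turning out.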

2 thoughts on “Thinking about rationality of voting, and cooperative systems in general, in terms of the design of intelligent agents”

  1. This solves the prisoner's dilemma if it is common knowledge that the participants are agents built to this design. Other restrictions work as well; for example, in a prisoner's dilemma repeated exactly n times, you can reach the Pareto-optimal outcome if each agent is known to be a finite-state machine with fewer than n states. In general, software agents have a big advantage in that they can credibly reveal their policy in a way that humans cannot (a sketch of this appears after the comments). Some humans (e.g. Kim Jong Il) enjoy another advantage: they can credibly threaten irrational behavior; software agents mostly miss out on that one.

  2. Peter: If you see some of the software I've worked on, you'll realize that a threat of destructively irrational behavior on the part of software agents is completely credible!
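To illustrate the first commenter's point about credibly revealed policies, here is a sketch of my own (reusing the illustrative payoffs from the earlier block): once an agent's grim-trigger policy is public in a 10-round repeated prisoner's dilemma, an opponent who reads that policy can see that defecting from the start does far worse than cooperating; as the printout shows, the best of these fixed scripts saves its one defection for the final round.

    # Illustrative n-round repeated prisoner's dilemma against a *published*
    # grim-trigger policy: cooperate until the opponent defects once, then
    # defect forever.  Per-round payoffs reuse the matrix from the sketch above.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def grim_trigger(opponent_history):
        """Published policy: defect forever once the opponent has defected."""
        return "D" if "D" in opponent_history else "C"

    def play(my_moves, rounds=10):
        """Total payoff for an opponent whose moves are scripted in advance,
        playing against the published grim trigger for `rounds` rounds."""
        my_history, total = [], 0
        for r in range(rounds):
            grim = grim_trigger(my_history)
            me = my_moves[r]
            total += PAYOFFS[(me, grim)][0]
            my_history.append(me)
        return total

    n = 10
    strategies = {
        "always defect": ["D"] * n,
        "always cooperate": ["C"] * n,
        "cooperate, defect last round": ["C"] * (n - 1) + ["D"],
    }
    for name, moves in strategies.items():
        print(f"{name:30s} -> total payoff {play(moves, n)}")

The commenter's finite-state-machine condition points at the same mechanism, at least on the usual intuition: an automaton with fewer than n states cannot count down to the final round, so the end-game unraveling that kills cooperation in the finitely repeated game is unavailable to it.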
