# Revelation principle

The **revelation principle** is a fundamental principle in mechanism design. It states that if a social choice function can be implemented by an arbitrary mechanism (i.e., if that mechanism has an equilibrium outcome that corresponds to the outcome of the social choice function), then the same function can be implemented by an incentive-compatible direct mechanism (i.e., one in which players truthfully report their types) with the same equilibrium outcome (payoffs).^{[1]}^{:224–225}

In mechanism design, the revelation principle is of utmost importance in finding solutions: the researcher need only look at the set of equilibria characterized by incentive compatibility. That is, if the mechanism designer wants to implement some outcome or property, they can restrict their search to direct mechanisms in which agents are willing to reveal their private information truthfully and which have that outcome or property. If no such direct and truthful mechanism exists, then by contraposition no mechanism whatsoever can implement this outcome/property. By narrowing the space that needs to be searched, the problem of finding a mechanism becomes much easier.

The principle comes in two variants corresponding to the two flavors of incentive-compatibility:

- The *dominant-strategy revelation principle* says that every social choice function that can be implemented in dominant strategies can be implemented by a dominant-strategy incentive-compatible (DSIC) mechanism (introduced by Allan Gibbard^{[2]}).
- The *Bayesian–Nash revelation principle* says that every social choice function that can be implemented in Bayesian–Nash equilibrium (i.e., in a Bayesian game of incomplete information) can be implemented by a Bayesian–Nash incentive-compatible (BNIC) mechanism. This broader solution concept was introduced by Dasgupta, Hammond and Maskin,^{[3]} Holmstrom,^{[4]} and Myerson.^{[5]}

## Example

Consider the following example. There is a certain item that Alice values at [math]\displaystyle{ v_A }[/math] and Bob values at [math]\displaystyle{ v_B }[/math]. The government needs to decide who will receive the item and on what terms.

- A *social choice function* is a function that maps a set of individual *preferences* to a social outcome. An example is the utilitarian function, which says "give the item to the person who values it the most". We denote a social choice function by **Soc** and its recommended outcome given a set of preferences by **Soc(Prefs)**.
- A *mechanism* is a rule that maps a set of individual *actions* to a social outcome. A mechanism **Mech** induces a game which we denote by **Game(Mech)**.
- A mechanism **Mech** is said to *implement* a social choice function **Soc** if, for every combination of individual preferences, there is a Nash equilibrium in **Game(Mech)** in which the outcome is **Soc(Prefs)**. Two example mechanisms are:
  - "Each individual says a number between 1 and 10. The item is given to the individual who says the lowest number; if both say the same number, the item is given to Alice". This mechanism does NOT implement the utilitarian function: for every individual who wants the item, it is a dominant strategy to say "1" regardless of his/her true value, so in equilibrium the item is always given to Alice, even if Bob values it more.
  - A first-price sealed-bid auction is a mechanism that implements the utilitarian function. For example, if [math]\displaystyle{ v_B\gt v_A }[/math], then any action profile in which Bob bids more than Alice and both bids are in the range [math]\displaystyle{ (v_A,v_B) }[/math] is a Nash equilibrium in which the item goes to Bob. Additionally, if the valuations of Alice and Bob are random variables drawn independently from the same distribution, then there is a Bayesian–Nash equilibrium in which the item goes to the bidder with the highest value.
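The failure of the first mechanism can be verified by brute force. The following sketch (function names and the sample valuation are illustrative, not from the source) checks that for Bob, saying "1" is always among his best responses, yet Alice wins whenever both say "1":

```python
# The "lowest number wins, ties go to Alice" mechanism from the text.
def lowest_number_mechanism(a_says, b_says):
    """Return the winner: Alice wins ties and strictly lower numbers."""
    return "Alice" if a_says <= b_says else "Bob"

def bob_utility(v_B, a_says, b_says):
    """Bob gets his value if he wins, zero otherwise (no payments here)."""
    return v_B if lowest_number_mechanism(a_says, b_says) == "Bob" else 0

v_B = 7  # an arbitrary positive valuation; any v_B > 0 behaves the same
for a_says in range(1, 11):
    best = max(range(1, 11), key=lambda b: bob_utility(v_B, a_says, b))
    # Saying "1" does at least as well as Bob's best response to every a_says,
    # i.e. it is a weakly dominant strategy.
    assert bob_utility(v_B, a_says, 1) == bob_utility(v_B, a_says, best)

# But in the resulting equilibrium (both say "1"), Alice always wins.
assert lowest_number_mechanism(1, 1) == "Alice"
```

Since the equilibrium winner is Alice regardless of the valuations, the mechanism cannot implement the utilitarian function.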

- A *direct mechanism* is a mechanism in which the set of actions available to each player is just the set of possible preferences of the player.
- A direct mechanism **Mech** is said to be *Bayesian–Nash incentive-compatible (BNIC)* if there is a Bayesian–Nash equilibrium of **Game(Mech)** in which all players reveal their true preferences. Some example direct mechanisms are:
  - "Each individual says how much he values the item. The item is given to the individual who said the highest value. In case of a tie, the item is given to Alice". This mechanism is NOT BNIC, since a player who wants the item is better off reporting the highest possible value, regardless of his true value.
  - A first-price sealed-bid auction is also NOT BNIC, since the winner is always better off bidding the lowest value that is slightly above the loser's bid.
  - However, if the distribution of the players' valuations is known, then there is a variant which is BNIC and implements the utilitarian function.
  - Moreover, the second-price auction is known to be BNIC (it is even incentive-compatible in the stronger, dominant-strategy sense), and it also implements the utilitarian function.
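The dominant-strategy incentive compatibility of the second-price auction can likewise be checked exhaustively on a small grid. A minimal sketch (helper names are illustrative): for every opponent bid, bidding one's true value yields at least as much utility as any deviation.

```python
# Second-price auction from one bidder's perspective. We resolve ties against
# this bidder, which is the worst case, so the dominance check is conservative.
def second_price_utility(value, my_bid, other_bid):
    """If we win (strictly higher bid), we pay the other bid; else we pay
    nothing and get nothing."""
    if my_bid > other_bid:
        return value - other_bid
    return 0

grid = range(0, 11)
for v in grid:                       # our true valuation
    for other in grid:               # whatever the opponent bids
        truthful = second_price_utility(v, v, other)
        for deviation in grid:       # any alternative bid we might try
            assert truthful >= second_price_utility(v, deviation, other)
```

Overbidding risks winning at a price above one's value, and underbidding risks losing a profitable win; truthful bidding avoids both, which is what the exhaustive check confirms.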

## Proof

Suppose we have an arbitrary mechanism **Mech** that implements **Soc**.

We construct a direct mechanism **Mech'** that is truthful and implements **Soc**.

**Mech'** simply simulates the equilibrium strategies of the players in **Game(Mech)**. That is:

- **Mech'** asks the players to report their valuations.
- Based on the reported valuations, **Mech'** calculates each player's equilibrium strategy in **Mech**.
- **Mech'** returns the outcome that **Mech** returns for those equilibrium actions.

Reporting the true valuations in **Mech'** is equivalent to playing the equilibrium strategies in **Mech**. Hence, reporting the true valuations is a Nash equilibrium in **Mech'**, as desired. Moreover, the equilibrium payoffs are the same, as desired.
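The construction above is just function composition, and can be sketched in a few lines. All names below are illustrative; the "bid half your value" strategy is the standard Bayesian–Nash equilibrium of a two-bidder first-price auction under the common assumption of independent uniform valuations.

```python
def make_direct_mechanism(mech, equilibrium_strategies):
    """Build Mech' from Mech: ask for types, apply each player's equilibrium
    strategy on their behalf, and return Mech's outcome.

    mech: maps a list of actions to an outcome.
    equilibrium_strategies: one function per player, mapping type -> action.
    """
    def direct_mech(reported_types):
        actions = [s(t) for s, t in zip(equilibrium_strategies, reported_types)]
        return mech(actions)
    return direct_mech

# Example Mech: a first-price sealed-bid auction between two players.
def first_price(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]  # (who gets the item, price paid)

# Mech' simulates the (assumed) equilibrium strategy "bid half your value".
direct = make_direct_mechanism(first_price, [lambda v: v / 2, lambda v: v / 2])

# Truthfully reporting types in Mech' reproduces the outcome of playing the
# equilibrium strategies in Mech.
assert direct([10, 6]) == first_price([5, 3])
```

Because **Mech'** plays the equilibrium for the players, a player who could gain by misreporting to **Mech'** could equally gain by deviating from equilibrium in **Mech**, contradicting the assumption that those strategies form an equilibrium.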

The revelation principle also applies to *coordinating devices* (also known as correlating devices): for every arbitrary coordinating device there exists another, direct device in which the state space equals the action space of each player. The coordination is then done by directly informing each player of his action.

## See also

- Mechanism design
- Incentive compatibility
- The Market for Lemons
- Nash equilibrium
- Game theory
- Constrained Pareto efficiency
- Myerson–Satterthwaite theorem

## References

- ↑ Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). *Algorithmic Game Theory*. Cambridge, UK: Cambridge University Press. ISBN 0-521-87282-0. http://www.cs.cmu.edu/~sandholm/cs15-892F13/algorithmic-game-theory.pdf
- ↑ Gibbard, A. 1973. Manipulation of voting schemes: a general result. *Econometrica* 41, 587–601.
- ↑ Dasgupta, P., Hammond, P. and Maskin, E. 1979. The implementation of social choice rules: some results on incentive compatibility. *Review of Economic Studies* 46, 185–216.
- ↑ Holmstrom, B. 1977. On incentives and control in organizations. Ph.D. thesis, Stanford University.
- ↑ Myerson, R. 1979. Incentive-compatibility and the bargaining problem. *Econometrica* 47, 61–73.

Original source: https://en.wikipedia.org/wiki/Revelation principle.