Cooperative game theory
In game theory, a cooperative game (or coalitional game) is a game with competition between groups of players ("coalitions") due to the possibility of external enforcement of cooperative behavior (e.g. through contract law). Cooperative games stand in contrast to non-cooperative games, in which there is either no possibility to forge alliances or all agreements need to be self-enforcing (e.g. through credible threats).[1]
Cooperative games are often analysed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take and the resulting collective payoffs. It is opposed to the traditional non-cooperative game theory which focuses on predicting individual players' actions and payoffs and analyzing Nash equilibria.[2][3]
Cooperative game theory provides a high-level approach, as it describes only the structure, strategies and payoffs of coalitions, whereas non-cooperative game theory also looks at how bargaining procedures will affect the distribution of payoffs within each coalition. Because non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold), provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation. While it would thus be possible to express all games under a non-cooperative framework, in many instances insufficient information is available to accurately model the formal procedures available to the players during the strategic bargaining process, or the resulting model would be too complex to offer a practical tool in the real world. In such cases, cooperative game theory provides a simplified approach that allows the analysis of the game at large without any assumptions about bargaining power.
Mathematical definition
A cooperative game is given by specifying a value for every coalition. Formally, the coalitional game consists of a finite set of players [math]\displaystyle{ N }[/math], called the grand coalition, and a characteristic function [math]\displaystyle{ v : 2^N \to \mathbb{R} }[/math] [4] from the set of all possible coalitions of players to a set of payments that satisfies [math]\displaystyle{ v( \emptyset ) = 0 }[/math]. The function describes how much collective payoff a set of players can gain by forming a coalition, and the game is sometimes called a value game or a profit game.
Conversely, a cooperative game can also be defined with a characteristic cost function [math]\displaystyle{ c: 2^N \to \mathbb{R} }[/math] satisfying [math]\displaystyle{ c( \emptyset ) = 0 }[/math]. In this setting, players must accomplish some task, and the characteristic function [math]\displaystyle{ c }[/math] represents the cost of a set of players accomplishing the task together. A game of this kind is known as a cost game. Although most cooperative game theory deals with profit games, all concepts can easily be translated to the cost setting.
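For illustration, a characteristic function on a small player set can be stored as a table indexed by coalitions. The following Python sketch uses a hypothetical three-player profit game; the numerical worths are made up for the example.

```python
from itertools import combinations

# Hypothetical 3-player profit game: map each coalition (a frozenset) to its worth.
players = [1, 2, 3]
v = {frozenset(): 0.0}                     # v(empty set) = 0 by definition
for coalition, worth in {(1,): 10, (2,): 10, (3,): 10,
                         (1, 2): 30, (1, 3): 30, (2, 3): 30,
                         (1, 2, 3): 60}.items():
    v[frozenset(coalition)] = float(worth)

# A characteristic function must assign a value to every coalition in 2^N.
for r in range(len(players) + 1):
    for coalition in combinations(players, r):
        assert frozenset(coalition) in v
```

A cost game would be stored the same way, with a cost function c in place of v.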
Cooperative game theory definition
Cooperative game theory is a branch of game theory that deals with the study of games where players can form coalitions, cooperate with one another, and make binding agreements. The theory offers mathematical methods for analysing scenarios in which two or more players are required to make choices that will affect other players' wellbeing.[5] The key idea is that players can achieve superior outcomes by working together rather than working against each other. The following points provide a detailed explanation of the four key features of cooperative game theory:
Common interests: In cooperative games, players share a common interest in achieving a specific goal or outcome. The players must identify and agree on a common interest to establish the foundation and reasoning for cooperation. Once the players have a clear understanding of their shared interest, they can work together to achieve it.
Necessary information exchange: Cooperation requires communication and information exchange among the players. Players must share information about their preferences, resources, and constraints to identify opportunities for mutual gain. By sharing information, players can better understand each other's goals and work towards achieving them together.
Voluntariness, equality, and mutual benefit: In cooperative games, players voluntarily come together to form coalitions and make agreements. The players must be equal partners in the coalition, and any agreements must be mutually beneficial. Cooperation is only sustainable if all parties feel they are receiving a fair share of the benefits.
Compulsory contract: In cooperative games, agreements between players are binding and mandatory. Once the players have agreed to a particular course of action, they have an obligation to follow through. The players must trust each other to keep their commitments, and there must be mechanisms in place to enforce the agreements. By making agreements binding and mandatory, players can ensure that they will achieve their shared goal.
Harsanyi dividend
The Harsanyi dividend (named after John Harsanyi, who used it to generalize the Shapley value in 1963[6]) identifies the surplus that is created by a coalition of players in a cooperative game. To specify this surplus, the worth of this coalition is corrected by the surplus that is already created by subcoalitions. To this end, the dividend [math]\displaystyle{ d_v(S) }[/math] of coalition [math]\displaystyle{ S }[/math] in game [math]\displaystyle{ v }[/math] is recursively determined by
[math]\displaystyle{ \begin{align} d_v(\{i\})&= v(\{i\}) \\ d_v(\{i,j\})&= v(\{i,j\})-d_v(\{i\})-d_v(\{j\}) \\ d_v(\{i,j,k\})&= v(\{i,j,k\})-d_v(\{i,j\})-d_v(\{i,k\})-d_v(\{j,k\})-d_v(\{i\})-d_v(\{j\})-d_v(\{k\})\\ &\vdots \\ d_v(S) &= v(S) - \sum_{T\subsetneq S }d_v(T) \end{align} }[/math]
An explicit formula for the dividend is given by [math]\displaystyle{ d_v(S)=\sum_{T\subseteq S }(-1)^{|S\setminus T|}v(T) }[/math]. The function [math]\displaystyle{ d_v:2^N \to \mathbb{R} }[/math] is also known as the Möbius inverse of [math]\displaystyle{ v:2^N \to \mathbb{R} }[/math].[7] Indeed, we can recover [math]\displaystyle{ v }[/math] from [math]\displaystyle{ d_v }[/math] by help of the formula [math]\displaystyle{ v(S) = d_v(S) + \sum_{T\subsetneq S }d_v(T) }[/math].
Harsanyi dividends are useful for analyzing both games and solution concepts, e.g. the Shapley value is obtained by distributing the dividend of each coalition among its members, i.e., the Shapley value [math]\displaystyle{ \phi_i(v) }[/math] of player [math]\displaystyle{ i }[/math] in game [math]\displaystyle{ v }[/math] is given by summing up a player's share of the dividends of all coalitions that she belongs to, [math]\displaystyle{ \phi_i(v)=\sum_{S\subseteq N: i \in S }{d_v(S)}/{|S|} }[/math].
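The recursion and the Möbius-inversion formula above translate directly into code. The following sketch computes the dividends of a hypothetical three-player game and recovers the Shapley value by splitting each coalition's dividend equally among its members; the game's values are illustrative only.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, including the empty set and s itself."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def harsanyi_dividends(v, players):
    """Moebius inverse: d_v(S) = sum over T subseteq S of (-1)^{|S|-|T|} v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v[T] for T in subsets(S))
            for S in subsets(players) if S}

def shapley_from_dividends(v, players):
    """phi_i(v): each coalition's dividend is split equally among its members."""
    d = harsanyi_dividends(v, players)
    return {i: sum(share / len(S) for S, share in d.items() if i in S) for i in players}

# Hypothetical symmetric 3-player game (numbers for illustration only).
N = frozenset({1, 2, 3})
v = {frozenset(): 0, frozenset({1}): 10, frozenset({2}): 10, frozenset({3}): 10,
     frozenset({1, 2}): 30, frozenset({1, 3}): 30, frozenset({2, 3}): 30, N: 60}
print(shapley_from_dividends(v, N))   # symmetry => each player gets 20.0
```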
Duality
Let [math]\displaystyle{ v }[/math] be a profit game. The dual game of [math]\displaystyle{ v }[/math] is the cost game [math]\displaystyle{ v^* }[/math] defined as
- [math]\displaystyle{ v^*(S) = v(N) - v( N \setminus S ), \forall~ S \subseteq N. }[/math]
Intuitively, the dual game represents the opportunity cost for a coalition [math]\displaystyle{ S }[/math] of not joining the grand coalition [math]\displaystyle{ N }[/math]. A dual cost game [math]\displaystyle{ c^* }[/math] can be defined identically for a cost game [math]\displaystyle{ c }[/math]. A cooperative game and its dual are in some sense equivalent, and they share many properties. For example, the core of a game and its dual are equal. For more details on cooperative game duality, see for instance (Bilbao 2000).
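Given a characteristic function stored as a coalition-to-value table, the dual game is a one-line transformation; the example game below is hypothetical.

```python
def dual_game(v, N):
    """Dual game: v*(S) = v(N) - v(N \\ S) for every coalition S."""
    N = frozenset(N)
    return {S: v[N] - v[N - S] for S in v}

# Hypothetical 3-player majority game (values for illustration only).
N = {1, 2, 3}
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1, frozenset(N): 1}
print(dual_game(v, N)[frozenset({1})])   # v(N) - v({2,3}) = 1 - 1 = 0
```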
Subgames
Let [math]\displaystyle{ S \subsetneq N }[/math] be a non-empty coalition of players. The subgame [math]\displaystyle{ v_S : 2^S \to \mathbb{R} }[/math] on [math]\displaystyle{ S }[/math] is naturally defined as
- [math]\displaystyle{ v_S(T) = v(T), \forall~ T \subseteq S. }[/math]
In other words, we simply restrict our attention to coalitions contained in [math]\displaystyle{ S }[/math]. Subgames are useful because they allow us to apply solution concepts defined for the grand coalition on smaller coalitions.
Properties for characterization
Superadditivity
Characteristic functions are often assumed to be superadditive (Owen 1995). This means that the value of a union of disjoint coalitions is no less than the sum of the coalitions' separate values:
[math]\displaystyle{ v ( S \cup T ) \geq v (S) + v (T) }[/math] whenever [math]\displaystyle{ S, T \subseteq N }[/math] satisfy [math]\displaystyle{ S \cap T = \emptyset }[/math].
Monotonicity
Larger coalitions gain more:
[math]\displaystyle{ S \subseteq T \Rightarrow v (S) \le v (T) }[/math].
This follows from superadditivity if payoffs are normalized so that singleton coalitions have zero value.
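Both properties can be checked by brute force on small games. A minimal sketch, using a hypothetical game with v(S) = |S|²:

```python
from itertools import chain, combinations

def all_coalitions(players):
    players = list(players)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))]

def is_superadditive(v, players):
    """v(S u T) >= v(S) + v(T) for all disjoint coalitions S and T."""
    cs = all_coalitions(players)
    return all(v[S | T] >= v[S] + v[T] for S in cs for T in cs if not (S & T))

def is_monotone(v, players):
    """S subseteq T implies v(S) <= v(T)."""
    cs = all_coalitions(players)
    return all(v[S] <= v[T] for S in cs for T in cs if S <= T)

# Hypothetical game v(S) = |S|^2: both superadditive and monotone.
N = [1, 2, 3]
v = {frozenset(c): len(c) ** 2 for c in chain.from_iterable(
    combinations(N, r) for r in range(len(N) + 1))}
print(is_superadditive(v, N), is_monotone(v, N))   # True True
```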
Properties for simple games
A coalitional game v is considered simple if payoffs are either 1 or 0, i.e. coalitions are either "winning" or "losing".[8]
Equivalently, a simple game can be defined as a collection W of coalitions, where the members of W are called winning coalitions and the others losing coalitions. It is sometimes assumed that W is nonempty or that it does not contain the empty set. In other areas of mathematics, simple games are also called hypergraphs or Boolean functions (logic functions).
- A simple game W is monotonic if any coalition containing a winning coalition is also winning, that is, if [math]\displaystyle{ S \in W }[/math] and [math]\displaystyle{ S\subseteq T }[/math] imply [math]\displaystyle{ T \in W }[/math].
- A simple game W is proper if the complement (opposition) of any winning coalition is losing, that is, if [math]\displaystyle{ S \in W }[/math] implies [math]\displaystyle{ N\setminus S \notin W }[/math].
- A simple game W is strong if the complement of any losing coalition is winning, that is, if [math]\displaystyle{ S \notin W }[/math] implies [math]\displaystyle{ N\setminus S \in W }[/math].
- If a simple game W is proper and strong, then a coalition is winning if and only if its complement is losing, that is, [math]\displaystyle{ S \in W }[/math] iff [math]\displaystyle{ N\setminus S \notin W }[/math]. (If v is a coalitional simple game that is proper and strong, [math]\displaystyle{ v(S) = 1 - v(N \setminus S) }[/math] for any S.)
- A veto player (vetoer) in a simple game is a player that belongs to all winning coalitions. Supposing there is a veto player, any coalition not containing a veto player is losing. A simple game W is weak (collegial) if it has a veto player, that is, if the intersection [math]\displaystyle{ \bigcap W := \bigcap_{S\in W} S }[/math] of all winning coalitions is nonempty.
- A dictator in a simple game is a veto player such that any coalition containing this player is winning. The dictator does not belong to any losing coalition. (Dictator games in experimental economics are unrelated to this.)
- A carrier of a simple game W is a set [math]\displaystyle{ T \subseteq N }[/math] such that for any coalition S, we have [math]\displaystyle{ S \in W }[/math] iff [math]\displaystyle{ S\cap T \in W }[/math]. When a simple game has a carrier, any player not belonging to it is ignored. A simple game is sometimes called finite if it has a finite carrier (even if N is infinite).
- The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection. According to Nakamura's theorem, the number measures the degree of rationality; it is an indicator of the extent to which an aggregation rule can yield well-defined choices.
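For small player sets, the axioms above and the Nakamura number can be checked directly from the collection W of winning coalitions. A minimal sketch, using the hypothetical three-player simple-majority game:

```python
from itertools import chain, combinations

def all_coalitions(N):
    N = list(N)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(N, r) for r in range(len(N) + 1))]

def is_monotonic(W, N):
    """Every superset of a winning coalition is winning."""
    Wset = set(W)
    return all(T in Wset for S in Wset for T in all_coalitions(N) if S <= T)

def is_proper(W, N):
    """The complement of every winning coalition is losing."""
    Wset = set(W)
    return all(frozenset(N) - S not in Wset for S in Wset)

def is_strong(W, N):
    """The complement of every losing coalition is winning."""
    Wset = set(W)
    return all(S in Wset or frozenset(N) - S in Wset for S in all_coalitions(N))

def is_weak(W):
    """There is a veto player: the intersection of all winning coalitions is nonempty."""
    return bool(frozenset.intersection(*W)) if W else False

def nakamura_number(W):
    """Minimal number of winning coalitions with empty intersection (infinite if weak)."""
    for k in range(1, len(W) + 1):
        if any(not frozenset.intersection(*combo) for combo in combinations(W, k)):
            return k
    return float("inf")

# Hypothetical example: simple majority among three players.
N = frozenset({1, 2, 3})
W = [frozenset(s) for s in ({1, 2}, {1, 3}, {2, 3}, {1, 2, 3})]
print(is_monotonic(W, N), is_proper(W, N), is_strong(W, N), is_weak(W))  # True True True False
print(nakamura_number(W))                                                # 3
```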
A few relations among the above axioms have widely been recognized, such as the following (e.g., Peleg, 2002, Section 2.1[9]):
- If a simple game is weak, it is proper.
- A simple game is dictatorial if and only if it is strong and weak.
More generally, a complete investigation of the relation among the four conventional axioms (monotonicity, properness, strongness, and non-weakness), finiteness, and algorithmic computability[10] has been made (Kumabe and Mihara, 2011[11]), whose results are summarized in the Table "Existence of Simple Games" below.
Existence of simple games[12]
Type | Finite non-computable | Finite computable | Infinite non-computable | Infinite computable |
---|---|---|---|---|
1111 | No | Yes | Yes | Yes |
1110 | No | Yes | No | No |
1101 | No | Yes | Yes | Yes |
1100 | No | Yes | Yes | Yes |
1011 | No | Yes | Yes | Yes |
1010 | No | No | No | No |
1001 | No | Yes | Yes | Yes |
1000 | No | No | No | No |
0111 | No | Yes | Yes | Yes |
0110 | No | No | No | No |
0101 | No | Yes | Yes | Yes |
0100 | No | Yes | Yes | Yes |
0011 | No | Yes | Yes | Yes |
0010 | No | No | No | No |
0001 | No | Yes | Yes | Yes |
0000 | No | No | No | No |
The restrictions that various axioms for simple games impose on their Nakamura number were also studied extensively.[13] In particular, a computable simple game without a veto player has a Nakamura number greater than 3 only if it is a proper and non-strong game.
Relation with non-cooperative theory
Let G be a strategic (non-cooperative) game. Then, assuming that coalitions have the ability to enforce coordinated behaviour, there are several cooperative games associated with G. These games are often referred to as representations of G. The two standard representations are:[14]
- The α-effective game associates with each coalition the sum of gains its members can 'guarantee' by joining forces. By 'guaranteeing', it is meant that the value is the max-min, i.e. the maximal value of the minimum taken over the opposition's strategies.
- The β-effective game associates with each coalition the sum of gains its members can 'strategically guarantee' by joining forces. By 'strategically guaranteeing', it is meant that the value is the min-max, i.e. the minimal value of the maximum taken over the opposition's strategies.
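A brute-force sketch of both representations for a finite strategic game follows. The two-player coordination game at the end is a hypothetical example whose payoffs are chosen only to show that the α- and β-values of a coalition can differ.

```python
from itertools import product

def coalition_value(payoff, strategies, players, coalition, mode="alpha"):
    """Alpha/beta value of a coalition in a finite strategic game.
    payoff[i] maps a full strategy profile (tuple ordered like `players`) to i's payoff;
    strategies[i] is player i's finite strategy set.
    alpha: max over the coalition's joint strategies of the min over the opposition's.
    beta:  min over the opposition's joint strategies of the max over the coalition's."""
    S = [i for i in players if i in coalition]
    T = [i for i in players if i not in coalition]

    def total(s_S, s_T):
        profile = dict(zip(S, s_S))
        profile.update(zip(T, s_T))
        full = tuple(profile[i] for i in players)
        return sum(payoff[i](full) for i in S)

    S_joint = list(product(*(strategies[i] for i in S)))
    T_joint = list(product(*(strategies[i] for i in T)))
    if mode == "alpha":
        return max(min(total(s, t) for t in T_joint) for s in S_joint)
    return min(max(total(s, t) for s in S_joint) for t in T_joint)

# Hypothetical coordination game: each player picks 0 or 1 and gets 1 if the picks match.
players = [1, 2]
strategies = {1: [0, 1], 2: [0, 1]}
payoff = {1: lambda s: 1.0 if s[0] == s[1] else 0.0,
          2: lambda s: 1.0 if s[0] == s[1] else 0.0}
print(coalition_value(payoff, strategies, players, {1}, "alpha"))  # 0.0: opponent can mismatch
print(coalition_value(payoff, strategies, players, {1}, "beta"))   # 1.0: player 1 can always match
```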
Solution concepts
The main assumption in cooperative game theory is that the grand coalition [math]\displaystyle{ N }[/math] will form.[15] The challenge is then to allocate the payoff [math]\displaystyle{ v(N) }[/math] among the players in some fair way. (This assumption is not restrictive, because even if players split off and form smaller coalitions, we can apply solution concepts to the subgames defined by whatever coalitions actually form.) A solution concept is a vector [math]\displaystyle{ x \in \mathbb{R}^N }[/math] (or a set of vectors) that represents the allocation to each player. Researchers have proposed different solution concepts based on different notions of fairness. Some properties to look for in a solution concept include:
- Efficiency: The payoff vector exactly splits the total value: [math]\displaystyle{ \sum_{ i \in N } x_i = v(N) }[/math].
- Individual rationality: No player receives less than what he could get on his own: [math]\displaystyle{ x_i \geq v(\{i\}), \forall~ i \in N }[/math].
- Existence: The solution concept exists for any game [math]\displaystyle{ v }[/math].
- Uniqueness: The solution concept is unique for any game [math]\displaystyle{ v }[/math].
- Marginality: The payoff of a player depends only on the marginal contribution of this player, i.e., if these marginal contributions are the same in two different games, then the payoff is the same: [math]\displaystyle{ v( S \cup \{ i \} ) = w( S \cup \{ i \} ), \forall~ S \subseteq N \setminus \{ i \} }[/math] implies that [math]\displaystyle{ x_i }[/math] is the same in [math]\displaystyle{ v }[/math] and in [math]\displaystyle{ w }[/math].
- Monotonicity: The payoff of a player increases if the marginal contribution of this player increase: [math]\displaystyle{ v( S \cup \{ i \} ) \leq w( S \cup \{ i \} ), \forall~ S \subseteq N \setminus \{ i \} }[/math] implies that [math]\displaystyle{ x_i }[/math] is weakly greater in [math]\displaystyle{ w }[/math] than in [math]\displaystyle{ v }[/math].
- Computational ease: The solution concept can be calculated efficiently (i.e. in polynomial time with respect to the number of players [math]\displaystyle{ |N| }[/math].)
- Symmetry: The solution concept [math]\displaystyle{ x }[/math] allocates equal payments [math]\displaystyle{ x_i = x_j }[/math] to symmetric players [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math]. Two players [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math] are symmetric if [math]\displaystyle{ v( S \cup \{ i \} ) = v( S \cup \{ j \} ), \forall~ S \subseteq N \setminus \{ i, j \} }[/math]; that is, we can exchange one player for the other in any coalition that contains only one of the players and not change the payoff.
- Additivity: The allocation to a player in a sum of two games is the sum of the allocations to the player in each individual game. Mathematically, if [math]\displaystyle{ v }[/math] and [math]\displaystyle{ \omega }[/math] are games, the game [math]\displaystyle{ ( v + \omega ) }[/math] simply assigns to any coalition the sum of the payoffs the coalition would get in the two individual games. An additive solution concept assigns to every player in [math]\displaystyle{ ( v + \omega ) }[/math] the sum of what he would receive in [math]\displaystyle{ v }[/math] and [math]\displaystyle{ \omega }[/math].
- Zero Allocation to Null Players: The allocation to a null player is zero. A null player [math]\displaystyle{ i }[/math] satisfies [math]\displaystyle{ v( S \cup \{ i \} ) = v( S ), \forall~ S \subseteq N \setminus \{ i \} }[/math]. In economic terms, a null player's marginal value to any coalition that does not contain him is zero.
An efficient payoff vector is called a pre-imputation, and an individually rational pre-imputation is called an imputation. Most solution concepts are imputations.
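Efficiency and individual rationality are straightforward to verify for a candidate payoff vector. A minimal sketch (the tolerance constant is an implementation choice):

```python
def is_pre_imputation(x, v, N, tol=1e-9):
    """Efficiency: the payoffs exactly split v(N)."""
    return abs(sum(x[i] for i in N) - v[frozenset(N)]) <= tol

def is_imputation(x, v, N, tol=1e-9):
    """Efficiency plus individual rationality: x_i >= v({i}) for every player i."""
    return (is_pre_imputation(x, v, N, tol)
            and all(x[i] >= v[frozenset({i})] - tol for i in N))
```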
The stable set
The stable set of a game (also known as the von Neumann-Morgenstern solution (von Neumann Morgenstern)) was the first solution proposed for games with more than 2 players. Let [math]\displaystyle{ v }[/math] be a game and let [math]\displaystyle{ x }[/math], [math]\displaystyle{ y }[/math] be two imputations of [math]\displaystyle{ v }[/math]. Then [math]\displaystyle{ x }[/math] dominates [math]\displaystyle{ y }[/math] if some coalition [math]\displaystyle{ S \neq \emptyset }[/math] satisfies [math]\displaystyle{ x_i \gt y _i, \forall~ i \in S }[/math] and [math]\displaystyle{ \sum_{ i \in S } x_i \leq v(S) }[/math]. In other words, players in [math]\displaystyle{ S }[/math] prefer the payoffs from [math]\displaystyle{ x }[/math] to those from [math]\displaystyle{ y }[/math], and they can threaten to leave the grand coalition if [math]\displaystyle{ y }[/math] is used because the payoff they obtain on their own is at least as large as the allocation they receive under [math]\displaystyle{ x }[/math].
A stable set is a set of imputations that satisfies two properties:
- Internal stability: No payoff vector in the stable set is dominated by another vector in the set.
- External stability: All payoff vectors outside the set are dominated by at least one vector in the set.
Von Neumann and Morgenstern saw the stable set as the collection of acceptable behaviours in a society: None is clearly preferred to any other, but for each unacceptable behaviour there is a preferred alternative. The definition is very general allowing the concept to be used in a wide variety of game formats.
Properties
- A stable set may or may not exist (Lucas 1969), and if it exists it is typically not unique (Lucas 1992). Stable sets are usually difficult to find. This and other difficulties have led to the development of many other solution concepts.
- A positive fraction of cooperative games have unique stable sets consisting of the core (Owen 1995).
- A positive fraction of cooperative games have stable sets which discriminate [math]\displaystyle{ n-2 }[/math] players. In such sets at least [math]\displaystyle{ n-3 }[/math] of the discriminated players are excluded (Owen 1995).
The core
Let [math]\displaystyle{ v }[/math] be a game. The core of [math]\displaystyle{ v }[/math] is the set of payoff vectors
- [math]\displaystyle{ C( v ) = \left\{ x \in \mathbb{R}^N: \sum_{ i \in N } x_i = v(N); \quad \sum_{ i \in S } x_i \geq v(S), \forall~ S \subseteq N \right\}. }[/math]
In words, the core is the set of imputations under which no coalition has a value greater than the sum of its members' payoffs. Therefore, no coalition has incentive to leave the grand coalition and receive a larger payoff.
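Core membership of a given payoff vector can be checked by enumerating all coalitions (exponential in the number of players, so only practical for small games). The example below is the hypothetical three-player majority game, whose core is empty.

```python
from itertools import chain, combinations

def in_core(x, v, N, tol=1e-9):
    """x is in the core if it is efficient and no coalition S gets less than v(S)."""
    N = list(N)
    if abs(sum(x[i] for i in N) - v[frozenset(N)]) > tol:
        return False
    coalitions = (frozenset(c) for c in chain.from_iterable(
        combinations(N, r) for r in range(1, len(N) + 1)))
    return all(sum(x[i] for i in S) >= v[S] - tol for S in coalitions)

# Hypothetical three-player majority game: its core is empty, so even the equal
# split is blocked (any two players can obtain 1 > 2/3 on their own).
N = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1, frozenset(N): 1}
print(in_core({1: 1/3, 2: 1/3, 3: 1/3}, v, N))   # False
```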
Properties
- The core of a game may be empty (see the Bondareva–Shapley theorem). Games with non-empty cores are called balanced.
- If it is non-empty, the core does not necessarily contain a unique vector.
- The core is contained in any stable set, and if the core is stable it is the unique stable set; see (Driessen 1988) for a proof.
The core of a simple game with respect to preferences
For simple games, there is another notion of the core, when each player is assumed to have preferences on a set [math]\displaystyle{ X }[/math] of alternatives. A profile is a list [math]\displaystyle{ p=(\succ_i^p)_{i \in N} }[/math] of individual preferences [math]\displaystyle{ \succ_i^p }[/math] on [math]\displaystyle{ X }[/math]. Here [math]\displaystyle{ x \succ_i^p y }[/math] means that individual [math]\displaystyle{ i }[/math] prefers alternative [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math] at profile [math]\displaystyle{ p }[/math]. Given a simple game [math]\displaystyle{ v }[/math] and a profile [math]\displaystyle{ p }[/math], a dominance relation [math]\displaystyle{ \succ^p_v }[/math] is defined on [math]\displaystyle{ X }[/math] by [math]\displaystyle{ x \succ^p_v y }[/math] if and only if there is a winning coalition [math]\displaystyle{ S }[/math] (i.e., [math]\displaystyle{ v(S)=1 }[/math]) satisfying [math]\displaystyle{ x \succ_i^p y }[/math] for all [math]\displaystyle{ i \in S }[/math]. The core [math]\displaystyle{ C(v,p) }[/math] of the simple game [math]\displaystyle{ v }[/math] with respect to the profile [math]\displaystyle{ p }[/math] of preferences is the set of alternatives undominated by [math]\displaystyle{ \succ^p_v }[/math] (the set of maximal elements of [math]\displaystyle{ X }[/math] with respect to [math]\displaystyle{ \succ^p_v }[/math]):
- [math]\displaystyle{ x \in C(v,p) }[/math] if and only if there is no [math]\displaystyle{ y\in X }[/math] such that [math]\displaystyle{ y \succ^p_v x }[/math].
The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection. Nakamura's theorem states that the core [math]\displaystyle{ C(v,p) }[/math] is nonempty for all profiles [math]\displaystyle{ p }[/math] of acyclic (alternatively, transitive) preferences if and only if [math]\displaystyle{ X }[/math] is finite and the cardinal number (the number of elements) of [math]\displaystyle{ X }[/math] is less than the Nakamura number of [math]\displaystyle{ v }[/math]. A variant by Kumabe and Mihara states that the core [math]\displaystyle{ C(v,p) }[/math] is nonempty for all profiles [math]\displaystyle{ p }[/math] of preferences that have a maximal element if and only if the cardinal number of [math]\displaystyle{ X }[/math] is less than the Nakamura number of [math]\displaystyle{ v }[/math]. (See Nakamura number for details.)
The strong epsilon-core
Because the core may be empty, a generalization was introduced in (Shapley Shubik). The strong [math]\displaystyle{ \varepsilon }[/math]-core for some number [math]\displaystyle{ \varepsilon \in \mathbb{R} }[/math] is the set of payoff vectors
- [math]\displaystyle{ C_\varepsilon( v ) = \left\{ x \in \mathbb{R}^N: \sum_{ i \in N } x_i = v(N); \quad \sum_{ i \in S } x_i \geq v(S) - \varepsilon, \forall~ \emptyset \neq S \subsetneq N \right\}. }[/math]
In economic terms, the strong [math]\displaystyle{ \varepsilon }[/math]-core is the set of pre-imputations where no coalition can improve its payoff by leaving the grand coalition, if it must pay a penalty of [math]\displaystyle{ \varepsilon }[/math] for leaving. [math]\displaystyle{ \varepsilon }[/math] may be negative, in which case it represents a bonus for leaving the grand coalition. Clearly, regardless of whether the core is empty, the strong [math]\displaystyle{ \varepsilon }[/math]-core will be non-empty for a large enough value of [math]\displaystyle{ \varepsilon }[/math] and empty for a small enough (possibly negative) value of [math]\displaystyle{ \varepsilon }[/math]. Following this line of reasoning, the least-core, introduced in (Maschler Peleg), is the intersection of all non-empty strong [math]\displaystyle{ \varepsilon }[/math]-cores. It can also be viewed as the strong [math]\displaystyle{ \varepsilon }[/math]-core for the smallest value of [math]\displaystyle{ \varepsilon }[/math] that makes the set non-empty (Bilbao 2000).
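A membership test for the strong ε-core follows the same pattern as the core test, with the penalty ε subtracted from each proper coalition's worth. In the hypothetical three-player majority game the least-core value works out to 1/3:

```python
from itertools import chain, combinations

def in_strong_eps_core(x, v, N, eps, tol=1e-9):
    """x is efficient and every proper nonempty coalition's shortfall v(S) - x(S) is at most eps."""
    N = list(N)
    if abs(sum(x[i] for i in N) - v[frozenset(N)]) > tol:
        return False
    proper = (frozenset(c) for c in chain.from_iterable(
        combinations(N, r) for r in range(1, len(N))))
    return all(sum(x[i] for i in S) >= v[S] - eps - tol for S in proper)

# Hypothetical three-player majority game: the equal split enters the strong
# eps-core exactly at eps = 1/3, which is therefore the least-core value.
N = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1, frozenset(N): 1}
x = {1: 1/3, 2: 1/3, 3: 1/3}
print(in_strong_eps_core(x, v, N, 0.2), in_strong_eps_core(x, v, N, 1/3))  # False True
```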
The Shapley value
The Shapley value is the unique payoff vector that is efficient, symmetric, and satisfies monotonicity.[16] It was introduced by Lloyd Shapley (Shapley 1953) who showed that it is the unique payoff vector that is efficient, symmetric, additive, and assigns zero payoffs to dummy players. The Shapley value of a superadditive game is individually rational, but this is not true in general. (Driessen 1988)
The kernel
Let [math]\displaystyle{ v : 2^N \to \mathbb{R} }[/math] be a game, and let [math]\displaystyle{ x \in \mathbb{R}^N }[/math] be an efficient payoff vector. The maximum surplus of player i over player j with respect to x is
- [math]\displaystyle{ s_{ij}^v(x) = \max \left\{ v(S) - \sum_{ k \in S } x_k : S \subseteq N \setminus \{ j \}, i \in S \right\}, }[/math]
the maximal amount player i can gain without the cooperation of player j by withdrawing from the grand coalition N under payoff vector x, assuming that the other players in i's withdrawing coalition are satisfied with their payoffs under x. The maximum surplus is a way to measure one player's bargaining power over another. The kernel of [math]\displaystyle{ v }[/math] is the set of imputations x that satisfy
- [math]\displaystyle{ ( s_{ij}^v(x) - s_{ji}^v(x) ) \times ( x_j - v(\{j\}) ) \leq 0 }[/math], and
- [math]\displaystyle{ ( s_{ji}^v(x) - s_{ij}^v(x) ) \times ( x_i - v(\{i\}) ) \leq 0 }[/math]
for every pair of players i and j. Intuitively, player i has more bargaining power than player j with respect to imputation x if [math]\displaystyle{ s_{ij}^v(x) \gt s_{ji}^v(x) }[/math], but player j is immune to player i's threats if [math]\displaystyle{ x_j = v(\{j\}) }[/math], because he can obtain this payoff on his own. The kernel contains all imputations where no player has this bargaining power over another. This solution concept was first introduced in (Davis Maschler).
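The maximum surpluses, and hence the kernel conditions, can be evaluated by enumeration for small games; in the hypothetical symmetric majority game the equal split satisfies them.

```python
from itertools import combinations

def max_surplus(v, N, x, i, j):
    """s_ij(x): the largest excess v(S) - x(S) over coalitions S containing i but not j."""
    others = [k for k in N if k not in (i, j)]
    excesses = []
    for r in range(len(others) + 1):
        for rest in combinations(others, r):
            S = frozenset(rest) | {i}
            excesses.append(v[S] - sum(x[k] for k in S))
    return max(excesses)

def in_kernel(x, v, N, tol=1e-9):
    """Check (s_ij - s_ji)(x_j - v({j})) <= 0 for every ordered pair of players."""
    for i in N:
        for j in N:
            if i != j:
                diff = max_surplus(v, N, x, i, j) - max_surplus(v, N, x, j, i)
                if diff * (x[j] - v[frozenset({j})]) > tol:
                    return False
    return True

# Hypothetical three-player majority game: by symmetry the equal split is in the kernel.
N = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1, frozenset(N): 1}
print(in_kernel({1: 1/3, 2: 1/3, 3: 1/3}, v, N))   # True
```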
The nucleolus
Let [math]\displaystyle{ v : 2^N \to \mathbb{R} }[/math] be a game, and let [math]\displaystyle{ x \in \mathbb{R}^N }[/math] be a payoff vector. The excess of [math]\displaystyle{ x }[/math] for a coalition [math]\displaystyle{ S \subseteq N }[/math] is the quantity [math]\displaystyle{ v(S) - \sum_{ i \in S } x_i }[/math]; that is, the gain that players in coalition [math]\displaystyle{ S }[/math] can obtain if they withdraw from the grand coalition [math]\displaystyle{ N }[/math] under payoff [math]\displaystyle{ x }[/math] and instead take the payoff [math]\displaystyle{ v(S) }[/math]. The nucleolus of [math]\displaystyle{ v }[/math] is the imputation for which the vector of excesses of all coalitions (a vector in [math]\displaystyle{ \mathbb{R}^{2^N} }[/math]) is smallest in the leximin order. The nucleolus was introduced in (Schmeidler 1969).
(Maschler Peleg) gave a more intuitive description: Starting with the least-core, record the coalitions for which the right-hand side of the inequality in the definition of [math]\displaystyle{ C_\varepsilon( v ) }[/math] cannot be further reduced without making the set empty. Continue decreasing the right-hand side for the remaining coalitions, until it cannot be reduced without making the set empty. Record the new set of coalitions for which the inequalities hold at equality; continue decreasing the right-hand side of remaining coalitions and repeat this process as many times as necessary until all coalitions have been recorded. The resulting payoff vector is the nucleolus.
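The excess vector that the nucleolus minimizes is easy to compute and compare for candidate allocations. The sketch below sorts the excesses in non-increasing order and compares two allocations of a hypothetical game lexicographically; a full nucleolus computation would additionally require solving a sequence of linear programs.

```python
from itertools import chain, combinations

def sorted_excesses(x, v, N):
    """Excesses v(S) - x(S) of all nonempty coalitions, in non-increasing order."""
    N = list(N)
    coalitions = (frozenset(c) for c in chain.from_iterable(
        combinations(N, r) for r in range(1, len(N) + 1)))
    return sorted((v[S] - sum(x[i] for i in S) for S in coalitions), reverse=True)

# Hypothetical three-player majority game: the equal split has a lexicographically
# smaller sorted excess vector than (0.5, 0.5, 0), consistent with it being the nucleolus.
N = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1, frozenset(N): 1}
print(sorted_excesses({1: 1/3, 2: 1/3, 3: 1/3}, v, N) <
      sorted_excesses({1: 0.5, 2: 0.5, 3: 0.0}, v, N))   # True
```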
Properties
- Although the definition does not explicitly state it, the nucleolus is always unique. (See Section II.7 of (Driessen 1988) for a proof.)
- If the core is non-empty, the nucleolus is in the core.
- The nucleolus is always in the kernel, and since the kernel is contained in the bargaining set, it is always in the bargaining set (see (Driessen 1988) for details.)
Convex cooperative games
Introduced by Shapley in (Shapley 1971), convex cooperative games capture the intuitive property some games have of "snowballing". Specifically, a game is convex if its characteristic function [math]\displaystyle{ v }[/math] is supermodular:
- [math]\displaystyle{ v( S \cup T ) + v( S \cap T ) \geq v(S) + v(T), \forall~ S, T \subseteq N. }[/math]
It can be shown (see, e.g., Section V.1 of (Driessen 1988)) that the supermodularity of [math]\displaystyle{ v }[/math] is equivalent to
- [math]\displaystyle{ v( S \cup \{ i \} ) - v(S) \leq v( T \cup \{ i \} ) - v(T), \forall~ S \subseteq T \subseteq N \setminus \{ i \}, \forall~ i \in N; }[/math]
that is, "the incentives for joining a coalition increase as the coalition grows" (Shapley 1971), leading to the aforementioned snowball effect. For cost games, the inequalities are reversed, so that we say the cost game is convex if the characteristic function is submodular.
Properties
Convex cooperative games have many nice properties:
- Supermodularity trivially implies superadditivity.
- Convex games are totally balanced: The core of a convex game is non-empty, and since any subgame of a convex game is convex, the core of any subgame is also non-empty.
- A convex game has a unique stable set that coincides with its core.
- The Shapley value of a convex game is the center of gravity of its core.
- An extreme point (vertex) of the core can be found in polynomial time using the greedy algorithm: Let [math]\displaystyle{ \pi: N \to N }[/math] be a permutation of the players, and let [math]\displaystyle{ S_i = \{ j \in N: \pi(j) \leq i \} }[/math] be the set of players ordered [math]\displaystyle{ 1 }[/math] through [math]\displaystyle{ i }[/math] in [math]\displaystyle{ \pi }[/math], for any [math]\displaystyle{ i = 0, \ldots, n }[/math], with [math]\displaystyle{ S_0 = \emptyset }[/math]. Then the payoff [math]\displaystyle{ x \in \mathbb{R}^N }[/math] defined by [math]\displaystyle{ x_i = v( S_{\pi(i)} ) - v( S_{\pi(i) - 1} ), \forall~ i \in N }[/math] is a vertex of the core of [math]\displaystyle{ v }[/math]. Any vertex of the core can be constructed in this way by choosing an appropriate permutation [math]\displaystyle{ \pi }[/math].
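Concretely, the greedy construction pays each player their marginal contribution in the chosen order. A minimal sketch using a hypothetical convex game v(S) = |S|²:

```python
from itertools import chain, combinations, permutations

def marginal_vector(v, order):
    """Pay each player their marginal contribution to the set of predecessors in `order`."""
    x, S = {}, frozenset()
    for i in order:
        x[i] = v[S | {i}] - v[S]
        S = S | {i}
    return x

# Hypothetical convex game v(S) = |S|^2 with three players.
N = [1, 2, 3]
v = {frozenset(c): len(c) ** 2 for c in chain.from_iterable(
    combinations(N, r) for r in range(len(N) + 1))}

# For a convex game every marginal vector is a vertex of the core.
for order in permutations(N):
    print(order, marginal_vector(v, order))
# e.g. order (1, 2, 3) yields {1: 1, 2: 3, 3: 5}: every coalition S receives at least v(S).
```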
Similarities and differences with combinatorial optimization
Submodular and supermodular set functions are also studied in combinatorial optimization. Many of the results in (Shapley 1971) have analogues in (Edmonds 1970), where submodular functions were first presented as generalizations of matroids. In this context, the core of a convex cost game is called the base polyhedron, because its elements generalize base properties of matroids.
However, the optimization community generally considers submodular functions to be the discrete analogues of convex functions (Lovász 1983), because the minimization of both types of functions is computationally tractable. Unfortunately, this conflicts directly with Shapley's original definition of supermodular functions as "convex".
The relationship between cooperative game theory and the firm
Corporate strategic decisions can develop and create value through cooperative game theory.[17] This means that cooperative game theory can become the strategic theory of the firm, and its different solution concepts can simulate different institutions.
See also
- Consensus decision-making
- Coordination game
- Intra-household bargaining
- Hedonic game
- Linear production game
- Minimum-cost spanning tree game - a class of cooperative games.
References
- ↑ Shor, Mike. "Non-Cooperative Game - Game Theory .net". http://www.gametheory.net/dictionary/Non-CooperativeGame.html.
- ↑ Chandrasekaran, R.. "Cooperative Game Theory". http://www.utdallas.edu/~chandra/documents/6311/coopgames.pdf.
- ↑ Brandenburger, Adam. "Cooperative Game Theory: Characteristic Functions, Allocations, Marginal Contribution". http://www.uib.cat/depart/deeweb/pdi/hdeelbm0/arxius_decisions_and_games/cooperative_game_theory-brandenburger.pdf.
- ↑ [math]\displaystyle{ 2^N }[/math] denotes the power set of [math]\displaystyle{ N }[/math].
- ↑ Javier Muros, Francisco (2019) (in English). Cooperative Game Theory Tools in Coalitional Control Networks (1 ed.). Springer Cham. pp. 9–11. ISBN 978-3-030-10488-7.
- ↑ Harsanyi, John C. (1982). "A Simplified Bargaining Model for the n-Person Cooperative Game" (in en). Papers in Game Theory. Theory and Decision Library. Springer, Dordrecht. pp. 44–70. doi:10.1007/978-94-017-2527-9_3. ISBN 9789048183692.
- ↑ Grabisch, Michel (2016). Set Functions, Games and Capacities in Decision Making. Theory and Decision Library C. Springer. ISBN 9783319306889. https://www.springer.com/de/book/9783319306889.
- ↑ Georgios Chalkiadakis; Edith Elkind; Michael J. Wooldridge (25 October 2011). Computational Aspects of Cooperative Game Theory. Morgan & Claypool Publishers. ISBN 978-1-60845-652-9. https://books.google.com/books?id=bN9aC0uabBAC.
- ↑ Peleg, B. (2002). "Chapter 8 Game-theoretic analysis of voting in committees". Handbook of Social Choice and Welfare Volume 1. Handbook of Social Choice and Welfare. 1. pp. 395–423. doi:10.1016/S1574-0110(02)80012-1. ISBN 9780444829146.
- ↑ See a section for Rice's theorem for the definition of a computable simple game. In particular, all finite games are computable.
- ↑ Kumabe, M.; Mihara, H. R. (2011). "Computability of simple games: A complete investigation of the sixty-four possibilities". Journal of Mathematical Economics 47 (2): 150–158. doi:10.1016/j.jmateco.2010.12.003. Bibcode: 2011arXiv1102.4037K. http://mpra.ub.uni-muenchen.de/29000/1/MPRA_paper_29000.pdf.
- ↑ Modified from Table 1 in Kumabe and Mihara (2011). The sixteen types are defined by the four conventional axioms (monotonicity, properness, strongness, and non-weakness). For example, type 1110 indicates monotonic (1), proper (1), strong (1), weak (0, because not nonweak) games. Among type 1110 games, there exist no finite non-computable ones, there exist finite computable ones, there exist no infinite non-computable ones, and there exist no infinite computable ones. Observe that except for type 1110, the last three columns are identical.
- ↑ Kumabe, M.; Mihara, H. R. (2008). "The Nakamura numbers for computable simple games". Social Choice and Welfare 31 (4): 621. doi:10.1007/s00355-008-0300-5. http://econpapers.repec.org/paper/pramprapa/3684.htm.
- ↑ Aumann, Robert J. "The core of a cooperative game without side payments." Transactions of the American Mathematical Society (1961): 539-552.
- ↑ Peters, Hans (2008). Game theory: a multi-leveled approach. Springer. pp. 123. doi:10.1007/978-3-540-69291-1_17. ISBN 978-3-540-69290-4. https://archive.org/details/gametheorymultil00pete.
- ↑ Young, H. P. (1985-06-01). "Monotonic solutions of cooperative games" (in en). International Journal of Game Theory 14 (2): 65–72. doi:10.1007/BF01769885. ISSN 0020-7276.
- ↑ Ross, David Gaddis (2018-08-01). "Using cooperative game theory to contribute to strategy research". Strategic Management Journal 39 (11): 2859–2876. doi:10.1002/smj.2936.
Further reading
- Bilbao, Jesús Mario (2000), Cooperative Games on Combinatorial Structures, Kluwer Academic Publishers, ISBN 9781461543930, https://books.google.com/books?id=ssfkBwAAQBAJ&q=%22Cooperative+Games+on+Combinatorial+Structures%22&pg=PR9
- Davis, M.; Maschler, M. (1965), "The kernel of a cooperative game", Naval Research Logistics Quarterly 12 (3): 223–259, doi:10.1002/nav.3800120303
- Driessen, Theo (1988), Cooperative Games, Solutions and Applications, Kluwer Academic Publishers, ISBN 9789401577878, https://books.google.com/books?id=1yDtCAAAQBAJ&q=%22cooperative+games%22
- Edmonds, Jack (1970), "Submodular functions, matroids and certain polyhedra", in Guy, R.; Hanani, H.; Sauer, N. et al., Combinatorial Structures and Their Applications, New York: Gordon and Breach, pp. 69–87
- Lovász, László (1983), "Submodular functions and convexity", in Bachem, A., Mathematical Programming—The State of the Art, Berlin: Springer, pp. 235–257
- Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-59829-593-1, http://www.gtessentials.org. An 88-page mathematical introduction; see Chapter 8. Free online at many universities.
- Lucas, William F. (1969), "The Proof That a Game May Not Have a Solution", Transactions of the American Mathematical Society 136: 219–229, doi:10.2307/1994798.
- Lucas, William F. (1992), "Von Neumann-Morgenstern Stable Sets", in Aumann, Robert J.; Hart, Sergiu, Handbook of Game Theory, Volume I, Amsterdam: Elsevier, pp. 543–590
- Luce, R.D. and Raiffa, H. (1957) Games and Decisions: An Introduction and Critical Survey, Wiley & Sons. (see Chapter 8).
- Maschler, M.; Peleg, B.; Shapley, Lloyd S. (1979), "Geometric properties of the kernel, nucleolus, and related solution concepts", Mathematics of Operations Research 4 (4): 303–338, doi:10.1287/moor.4.4.303
- Osborne, M.J. and Rubinstein, A. (1994) A Course in Game Theory, MIT Press (see Chapters 13,14,15)
- Moulin, Herve (1988), Axioms of Cooperative Decision Making (1st ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-42458-5
- Owen, Guillermo (1995), Game Theory (3rd ed.), San Diego: Academic Press, ISBN 978-0-12-531151-9
- Schmeidler, D. (1969), "The nucleolus of a characteristic function game", SIAM Journal on Applied Mathematics 17 (6): 1163–1170, doi:10.1137/0117107.
- Shapley, Lloyd S. (1953), "A value for [math]\displaystyle{ n }[/math]-person games", in Kuhn, H.; Tucker, A.W., Contributions to the Theory of Games II, Princeton, New Jersey: Princeton University Press, pp. 307–317
- Shapley, Lloyd S. (1971), "Cores of convex games", International Journal of Game Theory 1 (1): 11–26, doi:10.1007/BF01753431
- Shapley, Lloyd S.; Shubik, M. (1966), "Quasi-cores in a monetary economy with non-convex preferences", Econometrica 34 (4): 805–827, doi:10.2307/1910101
- Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, http://www.masfoundations.org. A comprehensive reference from a computational perspective; see Chapter 12. Downloadable free online.
- von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton: Princeton University Press
- Yeung, David W.K. and Leon A. Petrosyan. Cooperative Stochastic Differential Games (Springer Series in Operations Research and Financial Engineering), Springer, 2006. Softcover-ISBN:978-1441920942.
- Yeung, David W.K. and Leon A. Petrosyan. Subgame Consistent Economic Optimization: An Advanced Cooperative Dynamic Game Analysis (Static & Dynamic Game Theory: Foundations & Applications), Birkhäuser Boston; 2012. ISBN:978-0817682613
External links
- Hazewinkel, Michiel, ed. (2001), "Cooperative game", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4, https://www.encyclopediaofmath.org/index.php?title=p/c026450
Original source: https://en.wikipedia.org/wiki/Cooperative game theory.