Common knowledge (logic)

Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.[1] It can be denoted as [math]\displaystyle{ C_G p }[/math].

The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper.[2] It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists took an interest in epistemic logic in general – and in common knowledge in particular – starting in the 1980s.[1] There are numerous puzzles based upon the concept, which have been extensively investigated by mathematicians such as John Conway.[3]

The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" ([math]\displaystyle{ E_G p }[/math]), which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge".[4] If a trustworthy announcement is made in public, then it becomes common knowledge; however, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows p" ([math]\displaystyle{ E_G p }[/math]) is transmitted to each agent in private, it is still not common knowledge: [math]\displaystyle{ E_G E_G p \not \Rightarrow C_G p }[/math]. But, if any agent [math]\displaystyle{ a }[/math] publicly announces their knowledge of p, then it becomes common knowledge that they know p (viz. [math]\displaystyle{ C_G K_a p }[/math]). If every agent publicly announces their knowledge of p, p becomes common knowledge: [math]\displaystyle{ C_G E_G p \Rightarrow C_G p }[/math].

Example

Puzzle

The idea of common knowledge is often introduced by some variant of induction puzzles (e.g. Muddy children puzzle):[2]

On an island, there are k people who have blue eyes, and the rest of the people have green eyes. At the start of the puzzle, no one on the island knows their own eye color. By rule, if a person on the island ever discovers they have blue eyes, that person must leave the island at dawn; anyone not making such a discovery always sleeps until after dawn. On the island, each person knows every other person's eye color, there are no reflective surfaces, and there is no communication of eye color.

At some point, an outsider comes to the island, calls together all the people on the island, and makes the following public announcement: "At least one of you has blue eyes". The outsider, furthermore, is known by all to be truthful, and all know that all know this, and so on: it is common knowledge that he is truthful, and thus it becomes common knowledge that there is at least one islander who has blue eyes ([math]\displaystyle{ C_G[\exists x\! \in\! G ( Bl_{x})] }[/math]). The problem: finding the eventual outcome, assuming all persons on the island are completely logical (every participant's knowledge obeys the axiom schemata for epistemic logic) and that this too is common knowledge.

Solution

The answer is that, on the kth dawn after the announcement, all the blue-eyed people will leave the island.

Proof

The solution can be seen with an inductive argument. If k = 1 (that is, there is exactly one blue-eyed person), the person will recognize that they alone have blue eyes (by seeing only green eyes in the others) and leave at the first dawn. If k = 2, no one will leave at the first dawn, and the inaction (and the implied lack of knowledge for every agent) is observed by everyone, which then becomes common knowledge as well ([math]\displaystyle{ C_G[\forall x\! \in\! G ( \neg K_{x}Bl_{x})] }[/math]). The two blue-eyed people, each seeing only one person with blue eyes and seeing that no one left on the first dawn (and thus that k > 1; equivalently, that the other blue-eyed person does not believe that everyone apart from themselves has green eyes, [math]\displaystyle{ \neg K_{a}[\forall x\! \in\! (G-a) (\neg Bl_{x})] }[/math], so there must be another blue-eyed person, [math]\displaystyle{ \exists x\! \in\! (G-a) (Bl_{x}) }[/math]), will leave on the second dawn. Inductively, it can be reasoned that no one will leave at the first k − 1 dawns if and only if there are at least k blue-eyed people. Those with blue eyes, seeing k − 1 blue-eyed people among the others and knowing there must be at least k, will reason that they must have blue eyes and leave.
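The inductive argument can also be checked mechanically. The sketch below is illustrative only (the function name and the encoding of the islanders are invented, not taken from the cited sources); it hard-codes the conclusion of the induction, namely that a blue-eyed islander who sees m blue-eyed people leaves as soon as dawn m passes with no departures, and confirms that everyone with blue eyes leaves on the kth dawn.

<syntaxhighlight lang="python">
def blue_eyed_departures(k, n):
    """Simulate the island after the outsider's announcement.

    k: number of blue-eyed islanders (k >= 1), n: total islanders.
    Each blue-eyed islander sees k - 1 blue-eyed people and, by the
    inductive argument, concludes their own eyes are blue exactly when
    dawn k - 1 has passed with nobody leaving.
    Returns (dawn of departure, who leaves).
    """
    blue = set(range(k))                 # islanders 0 .. k-1 have blue eyes
    for dawn in range(1, n + 1):
        leaving = set()
        for i in blue:
            seen_blue = len(blue) - 1    # blue-eyed people islander i can see
            if dawn == seen_blue + 1:    # dawn seen_blue has passed with no departures
                leaving.add(i)
        if leaving:
            return dawn, sorted(leaving)
    return None


print(blue_eyed_departures(k=3, n=10))   # (3, [0, 1, 2]): all blue-eyed leave on dawn 3
</syntaxhighlight>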

For k > 1, the outsider is only telling the island citizens what they already know: that there are blue-eyed people among them. However, before this fact is announced, the fact is not common knowledge, but instead mutual knowledge.

For k = 2, it is merely "first-order" knowledge ([math]\displaystyle{ E_G[\exists x\! \in\! G ( Bl_{x})] }[/math]). Each blue-eyed person knows that there is someone with blue eyes, but each blue-eyed person does not know that the other blue-eyed person has this same knowledge.

For k = 3, it is "second order" knowledge ([math]\displaystyle{ E_GE_G[\exists x\! \in\! G ( Bl_{x})]=E_G^2[\exists x\! \in\! G ( Bl_{x})] }[/math]). Each blue-eyed person knows that a second blue-eyed person knows that a third person has blue eyes, but no one knows that there is a third blue-eyed person with that knowledge, until the outsider makes their statement.

In general: For k > 1, it is "(k − 1)th order" knowledge ([math]\displaystyle{ E_G^{k-1} [\exists x\! \in\! G ( Bl_{x})] }[/math]). Each blue-eyed person knows that a second blue-eyed person knows that a third blue-eyed person knows that... (repeat for a total of k − 1 levels) a kth person has blue eyes, but no one knows that there is a "kth" blue-eyed person with that knowledge, until the outsider makes the statement. The notion of common knowledge therefore has a palpable effect: knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all, unless k = 1, in which case the lone blue-eyed person would not know it until the announcement) becomes common knowledge, the blue-eyed people on this island eventually deduce their status and leave. A worked instance for k = 3 is given after the list below.

In particular:

  1. [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq j\,] }[/math] is free (i.e. known prior to the outsider's statement) iff [math]\displaystyle{ i + j \leq k }[/math].
  2. [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq j\,] }[/math], with a passing day where no one leaves, implies the next day [math]\displaystyle{ E_G^{i - 1} [\,|\{x \in G : Bl_{x}\}| \geq j + 1\,] }[/math].
  3. [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq j\,] }[/math] for [math]\displaystyle{ j \geq k }[/math] is thus reached iff it is reached for [math]\displaystyle{ i + j \gt k }[/math].
  4. The outsider gives [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq j\,] }[/math] for [math]\displaystyle{ i = \infty,\ j = 1 }[/math].
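As a worked instance of these rules, take k = 3. Before the announcement the strongest "free" statements are [math]\displaystyle{ E_G^{0} [\,|\{x \in G : Bl_{x}\}| \geq 3\,] }[/math], [math]\displaystyle{ E_G^{1} [\,|\{x \in G : Bl_{x}\}| \geq 2\,] }[/math] and [math]\displaystyle{ E_G^{2} [\,|\{x \in G : Bl_{x}\}| \geq 1\,] }[/math] (rule 1). The announcement supplies [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq 1\,] }[/math] for every i (rule 4); after the first dawn passes with no departures this yields [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq 2\,] }[/math], and after the second dawn [math]\displaystyle{ E_G^{i} [\,|\{x \in G : Bl_{x}\}| \geq 3\,] }[/math] (rule 2). Each blue-eyed person, seeing only two others with blue eyes, then concludes that they themselves are the third and leaves on the third dawn.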

Formalization

Modal logic (syntactic characterization)

Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. The extension consists of the introduction of a group G of agents and of n modal operators Ki (with i = 1, ..., n), with the intended meaning that "agent i knows." Thus Ki [math]\displaystyle{ \varphi }[/math] (where [math]\displaystyle{ \varphi }[/math] is a formula of the logical calculus) is read "agent i knows [math]\displaystyle{ \varphi }[/math]." We can define an operator EG, with the intended meaning of "everyone in group G knows", by the axiom

[math]\displaystyle{ E_G \varphi \Leftrightarrow \bigwedge_{i \in G} K_i \varphi }[/math]

By abbreviating the expression [math]\displaystyle{ E_GE_G^{n-1} \varphi }[/math] with [math]\displaystyle{ E_G^n \varphi }[/math] and defining [math]\displaystyle{ E_G^0 \varphi = \varphi }[/math], common knowledge could then be defined with the axiom

[math]\displaystyle{ C_G \varphi \Leftrightarrow \bigwedge_{i = 0}^\infty E_G^i \varphi }[/math]

There is, however, a complication. The languages of epistemic logic are usually finitary, whereas the axiom above defines common knowledge as an infinite conjunction of formulas, hence not a well-formed formula of the language. To overcome this difficulty, a fixed-point definition of common knowledge can be given. Intuitively, common knowledge is thought of as the fixed point of the "equation" [math]\displaystyle{ C_G \varphi=[\varphi\wedge E_G (C_G \varphi)] }[/math]. In this way, it is possible to find a formula [math]\displaystyle{ \psi }[/math] implying [math]\displaystyle{ E_G (\varphi \wedge \psi) }[/math] from which, in the limit, we can infer common knowledge of [math]\displaystyle{ \varphi }[/math].
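One common way to make this precise (following, for example, the axiomatization in Fagin, Halpern, Moses and Vardi; the exact formulation varies across textbooks) is to adopt the fixed-point axiom

[math]\displaystyle{ C_G \varphi \Leftrightarrow E_G (\varphi \wedge C_G \varphi) }[/math]

together with the induction rule: from [math]\displaystyle{ \psi \Rightarrow E_G (\varphi \wedge \psi) }[/math] infer [math]\displaystyle{ \psi \Rightarrow C_G \varphi }[/math]. The rule captures the "in the limit" step: any formula [math]\displaystyle{ \psi }[/math] that propagates itself together with [math]\displaystyle{ \varphi }[/math] through one application of [math]\displaystyle{ E_G }[/math] already guarantees common knowledge of [math]\displaystyle{ \varphi }[/math].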

From this definition it can be seen that if [math]\displaystyle{ E_G \varphi }[/math] is common knowledge, then [math]\displaystyle{ \varphi }[/math] is also common knowledge ([math]\displaystyle{ C_G E_G \varphi \Rightarrow C_G \varphi }[/math]).

This syntactic characterization is given semantic content through so-called Kripke structures. A Kripke structure is given by a set of states (or possible worlds) S; n accessibility relations [math]\displaystyle{ R_1,\dots,R_n }[/math], defined on [math]\displaystyle{ S \times S }[/math], intuitively representing which states agent i considers possible from any given state; and a valuation function [math]\displaystyle{ \pi }[/math] assigning a truth value, in each state, to each primitive proposition in the language. The Kripke semantics for the knowledge operator is given by stipulating that [math]\displaystyle{ K_i \varphi }[/math] is true at state s iff [math]\displaystyle{ \varphi }[/math] is true at all states t such that [math]\displaystyle{ (s,t) \in R_i }[/math]. The semantics for the common knowledge operator is then given by taking, for each group of agents G, the reflexive (modal axiom T) and transitive (modal axiom 4) closure of the union of the [math]\displaystyle{ R_i }[/math] for all agents i in G, calling this relation [math]\displaystyle{ R_G }[/math], and stipulating that [math]\displaystyle{ C_G \varphi }[/math] is true at state s iff [math]\displaystyle{ \varphi }[/math] is true at all states t such that [math]\displaystyle{ (s,t) \in R_G }[/math].
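As a concrete, purely illustrative rendering of these definitions, the sketch below evaluates [math]\displaystyle{ K_i }[/math], [math]\displaystyle{ E_G }[/math] and [math]\displaystyle{ C_G }[/math] on a tiny hand-made Kripke structure; the states, relations and the proposition p are invented for the example and are not taken from the literature.

<syntaxhighlight lang="python">
from itertools import product

# A toy Kripke structure (hypothetical example data).
states = {"s1", "s2", "s3"}
R = {   # one accessibility relation per agent, as sets of (from, to) pairs
    "a": {("s1", "s1"), ("s2", "s2"), ("s3", "s3"), ("s1", "s2"), ("s2", "s1")},
    "b": {("s1", "s1"), ("s2", "s2"), ("s3", "s3"), ("s2", "s3"), ("s3", "s2")},
}
p = {"s1", "s2"}   # states where the primitive proposition p is true

def K(agent, phi):
    """States where the agent knows phi: phi holds at every accessible state."""
    return {s for s in states if all(t in phi for (u, t) in R[agent] if u == s)}

def E(group, phi):
    """States where everyone in the group knows phi."""
    out = set(states)
    for agent in group:
        out &= K(agent, phi)
    return out

def common_relation(group):
    """Reflexive-transitive closure of the union of the group's relations."""
    rel = {(s, s) for s in states} | {pair for a in group for pair in R[a]}
    changed = True
    while changed:
        changed = False
        for (s, t), (u, v) in product(list(rel), repeat=2):
            if t == u and (s, v) not in rel:
                rel.add((s, v))
                changed = True
    return rel

def C(group, phi):
    """States where phi is common knowledge in the group."""
    rel = common_relation(group)
    return {s for s in states if all(t in phi for (u, t) in rel if u == s)}

print(sorted(K("a", p)))         # ['s1', 's2']
print(sorted(K("b", p)))         # ['s1']
print(sorted(E(["a", "b"], p)))  # ['s1'] : everyone knows p at s1 ...
print(sorted(C(["a", "b"], p)))  # []     : ... but p is common knowledge nowhere
</syntaxhighlight>

In this example everyone knows p at s1, yet p is common knowledge nowhere: agent a considers s2 possible at s1, agent b considers s3 possible at s2, and p fails at s3, so already "a knows that b knows p" does not hold at s1.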

Set theoretic (semantic characterization)

Alternatively (yet equivalently) common knowledge can be formalized using set theory (this was the path taken by the Nobel laureate Robert Aumann in his seminal 1976 paper). Start with a set of states S. An event E can then be defined as a subset of the set of states S. For each agent i, define a partition Pi of S. This partition represents agent i's state of knowledge in each state. Intuitively, if two states s1 and s2 are elements of the same part of agent i's partition, then s1 and s2 are indistinguishable to that agent. In general, in state s, agent i knows that one of the states in Pi(s) obtains, but not which one. (Here Pi(s) denotes the unique element of Pi containing s. This model excludes cases in which agents know things that are not true.)

A knowledge function K can now be defined in the following way:

[math]\displaystyle{ K_i(e) = \{ s \in S \mid P_i(s) \subset e\} }[/math]

That is, Ki(e) is the set of states in which the agent knows that event e obtains. It is always a subset of e, since s is an element of Pi(s) for every s.

Similar to the modal logic formulation above, an operator for the idea that "everyone in the group knows e" can be defined as

[math]\displaystyle{ E(e) = \bigcap_i K_i(e) }[/math]

As with the modal operator, we will iterate the E function, [math]\displaystyle{ E^1(e) = E(e) }[/math] and [math]\displaystyle{ E^{n+1}(e) = E(E^{n}(e)) }[/math]. Using this we can then define a common knowledge function,

[math]\displaystyle{ C(e) = \bigcap_{n=1}^{\infty} E^n(e). }[/math]
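Because S is finite in typical examples, these definitions can be computed directly. The following sketch uses an invented four-state example (the partitions and the event are not from Aumann's paper) to implement Ki, E and C for the partition model; it exhibits an event that every agent knows at some state without the event ever being common knowledge.

<syntaxhighlight lang="python">
# Aumann's partition model on a small, invented state space.
S = {1, 2, 3, 4}
partitions = {          # one partition of S per agent
    "1": [{1, 2}, {3, 4}],
    "2": [{1, 3}, {2, 4}],
}

def cell(agent, s):
    """P_i(s): the cell of agent i's partition that contains state s."""
    return next(block for block in partitions[agent] if s in block)

def K(agent, e):
    """K_i(e): states in which agent i knows that event e obtains."""
    return {s for s in S if cell(agent, s) <= e}

def E(e):
    """E(e): states in which every agent knows e."""
    out = set(S)
    for agent in partitions:
        out &= K(agent, e)
    return out

def C(e):
    """C(e): since E(e) is a subset of e, iterating E is decreasing on finite S,
    so the fixed point reached here equals the infinite intersection of the E^n(e)."""
    current = E(e)
    while E(current) != current:
        current = E(current)
    return current

event = {1, 2, 3}
print(K("1", event))   # {1, 2}
print(K("2", event))   # {1, 3}
print(E(event))        # {1}     : at state 1 both agents know the event ...
print(C(event))        # set()   : ... but it is common knowledge nowhere
</syntaxhighlight>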

The equivalence with the syntactic approach sketched above can easily be seen: consider an Aumann structure such as the one just defined. We can define a corresponding Kripke structure by taking the same space S, accessibility relations [math]\displaystyle{ R_i }[/math] whose equivalence classes are the cells of the partitions [math]\displaystyle{ P_i }[/math], and a valuation function that assigns the value true to the primitive proposition p in all and only the states s such that [math]\displaystyle{ s \in E^p }[/math], where [math]\displaystyle{ E^p }[/math] is the event of the Aumann structure corresponding to the primitive proposition p. It is not difficult to see that the common knowledge accessibility relation [math]\displaystyle{ R_G }[/math] defined in the previous section corresponds to the finest common coarsening of the partitions [math]\displaystyle{ P_i }[/math] for all [math]\displaystyle{ i \in G }[/math], which is the finitary characterization of common knowledge also given by Aumann in the 1976 article.

Applications

Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention. In this sense, common knowledge is a concept still central for linguists and philosophers of language (see Clark 1996) maintaining a Lewisian, conventionalist account of language.

Robert Aumann introduced a set-theoretical formulation of common knowledge (theoretically equivalent to the one given above) and proved the so-called agreement theorem, according to which: if two agents have a common prior probability, and their posterior probabilities for a certain event are common knowledge, then those posterior probabilities are equal. A result based on the agreement theorem and proven by Milgrom shows that, given certain conditions on market efficiency and information, speculative trade is impossible.
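The theorem can be illustrated on the partition model sketched earlier. In the toy example below (an invented uniform prior and event, not taken from Aumann's paper), the meet of the two partitions is the whole state space and each agent's posterior for the event A is constant across states, so the posteriors are common knowledge; as the theorem requires, they coincide.

<syntaxhighlight lang="python">
from fractions import Fraction

# Toy check of the agreement theorem on an invented example.
S = {1, 2, 3, 4}
partitions = {"1": [{1, 2}, {3, 4}], "2": [{1, 3}, {2, 4}]}
prior = {s: Fraction(1, 4) for s in S}    # common prior (uniform)
A = {1, 4}                                # the event the agents reason about

def cell(agent, s):
    return next(block for block in partitions[agent] if s in block)

def posterior(agent, s):
    """Agent's posterior probability of A given their private information at s."""
    block = cell(agent, s)
    return sum(prior[t] for t in block & A) / sum(prior[t] for t in block)

for s in sorted(S):
    print(s, posterior("1", s), posterior("2", s))   # 1/2 and 1/2 at every state
</syntaxhighlight>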

The concept of common knowledge is central in game theory. For many years it was thought that the assumption of common knowledge of rationality for the players in the game was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies.

Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic; see Wooldridge, Reasoning about Rational Agents, 2000 (in which he uses a first-order logic incorporating epistemic and temporal operators), or van der Hoek et al., "Alternating Time Epistemic Logic".

In his 2007 book, The Stuff of Thought: Language as a Window into Human Nature, Steven Pinker uses the notion of common knowledge to analyze the kind of indirect speech involved in innuendoes.

In popular culture

The comedy movie Hot Lead and Cold Feet has an example of a chain of logic that is collapsed by common knowledge. The Denver Kid tells his allies that Rattlesnake is in town, but that he [the Kid] has "the edge": "He's here and I know he's here, and he knows I know he's here, but he doesn't know I know he knows I know he's here." So both protagonists know the main fact (Rattlesnake is here), but it is not "common knowledge". Note that this holds even if the Kid is wrong: perhaps Rattlesnake does know that the Kid knows that he knows that the Kid knows he is here; the chain still breaks, because the Kid does not know that. Moments later, Rattlesnake confronts the Kid, and we see the Kid realizing that his carefully constructed "edge" has collapsed into common knowledge.

Notes

  1. See the textbooks Reasoning about Knowledge by Fagin, Halpern, Moses and Vardi (1995), and Epistemic Logic for Computer Science by Meyer and van der Hoek (1995).
  2. A structurally identical problem is provided by Herbert Gintis (2000); he calls it "The Women of Sevitan".

References

  1. Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. Cambridge, MA: MIT, 1994. Print.
  2. Morris Friedell, "On the Structure of Shared Awareness," Behavioral Science 14 (1969): 28–39.
  3. Ian Stewart (2004). "I Know That You Know That...". Math Hysteria. OUP. 
  4. Stephen Schiffer, Meaning, 2nd edition, Oxford University Press, 1988. The first edition was published by OUP in 1972. For a discussion of both Lewis's and Schiffer's notions, see Russell Dale, The Theory of Meaning (1996).
