Epistemic modal logic
Epistemic modal logic is a subfield of modal logic that is concerned with reasoning about knowledge. While epistemology has a long philosophical tradition dating back to Ancient Greece, epistemic logic is a much more recent development with applications in many fields, including philosophy, theoretical computer science, artificial intelligence, economics and linguistics. While philosophers since Aristotle have discussed modal logic, and Medieval philosophers such as Avicenna, Ockham, and Duns Scotus developed many of their observations, it was C. I. Lewis who created the first symbolic and systematic approach to the topic, in 1912. It continued to mature as a field, reaching its modern form in 1963 with the work of Kripke.
Historical development
Many papers were written in the 1950s that spoke of a logic of knowledge in passing, but the Finnish philosopher G. H. von Wright's 1951 paper titled An Essay in Modal Logic is seen as a founding document. It was not until 1962 that another Finn, Hintikka, would write Knowledge and Belief, the first book-length work to suggest using modalities to capture the semantics of knowledge rather than the alethic statements typically discussed in modal logic. This work laid much of the groundwork for the subject, but a great deal of research has taken place since that time. For example, epistemic logic has been combined recently with some ideas from dynamic logic to create dynamic epistemic logic, which can be used to specify and reason about information change and exchange of information in multi-agent systems. The seminal works in this field are by Plaza, Van Benthem, and Baltag, Moss, and Solecki.
Standard possible worlds model
Most attempts at modeling knowledge have been based on the possible worlds model. In order to do this, we must divide the set of possible worlds into those that are compatible with an agent's knowledge and those that are not. This generally conforms with common usage. If I know that it is either Friday or Saturday, then I know for sure that it is not Thursday: there is no possible world compatible with my knowledge where it is Thursday, since in all of those worlds it is either Friday or Saturday. While we will primarily be discussing the logic-based approach to accomplishing this task, it is worthwhile to mention here the other primary method in use, the event-based approach. In this usage, events are sets of possible worlds, and knowledge is an operator on events. Though the strategies are closely related, there are two important distinctions to be made between them:
- The underlying mathematical model of the logic-based approach is Kripke semantics, while the event-based approach employs the related Aumann structures, which are based on set theory.
- The event-based approach does away with logical formulas altogether, while the logic-based approach uses a system of modal logic.
Typically, the logic-based approach has been used in fields such as philosophy, logic and AI, while the event-based approach is more often used in fields such as game theory and mathematical economics. In the logic-based approach, a syntax and semantics have been built using the language of modal logic, which we will now describe.
Syntax
The basic modal operator of epistemic logic, usually written K, can be read as "it is known that," "it is epistemically necessary that," or "it is inconsistent with what is known that not." If there is more than one agent whose knowledge is to be represented, subscripts can be attached to the operator ([math]\displaystyle{ \mathit{K}_1 }[/math], [math]\displaystyle{ \mathit{K}_2 }[/math], etc.) to indicate which agent one is talking about. So [math]\displaystyle{ \mathit{K}_a\varphi }[/math] can be read as "Agent [math]\displaystyle{ a }[/math] knows that [math]\displaystyle{ \varphi }[/math]." Thus, epistemic logic is an example of multimodal logic applied to knowledge representation.[1] The dual of K, which stands in the same relationship to K as [math]\displaystyle{ \Diamond }[/math] does to [math]\displaystyle{ \Box }[/math], has no specific symbol, but can be represented by [math]\displaystyle{ \neg K_a \neg \varphi }[/math], which can be read as "[math]\displaystyle{ a }[/math] does not know that not [math]\displaystyle{ \varphi }[/math]" or "it is consistent with [math]\displaystyle{ a }[/math]'s knowledge that [math]\displaystyle{ \varphi }[/math]". The statement "[math]\displaystyle{ a }[/math] does not know whether or not [math]\displaystyle{ \varphi }[/math]" can be expressed as [math]\displaystyle{ \neg K_a\varphi \land \neg K_a\neg\varphi }[/math].
In order to accommodate notions of common knowledge and distributed knowledge, three other modal operators can be added to the language. These are [math]\displaystyle{ \mathit{E}_\mathit{G} }[/math], which reads "every agent in group G knows" (mutual knowledge); [math]\displaystyle{ \mathit{C}_\mathit{G} }[/math], which reads "it is common knowledge to every agent in G"; and [math]\displaystyle{ \mathit{D}_\mathit{G} }[/math], which reads "it is distributed knowledge to the whole group G." If [math]\displaystyle{ \varphi }[/math] is a formula of our language, then so are [math]\displaystyle{ \mathit{E}_G \varphi }[/math], [math]\displaystyle{ \mathit{C}_G \varphi }[/math], and [math]\displaystyle{ \mathit{D}_G \varphi }[/math]. Just as the subscript after [math]\displaystyle{ \mathit{K} }[/math] can be omitted when there is only one agent, the subscript after the modal operators [math]\displaystyle{ \mathit{E} }[/math], [math]\displaystyle{ \mathit{C} }[/math], and [math]\displaystyle{ \mathit{D} }[/math] can be omitted when the group is the set of all agents.
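As an illustration of this grammar, the following sketch encodes epistemic formulas as a small abstract syntax tree in Python. The class names and layout are purely illustrative, not part of any standard library:

```python
from dataclasses import dataclass

# Illustrative encoding of the grammar described above (hypothetical names).
# Requires Python 3.10+ for the `|` union syntax.
@dataclass(frozen=True)
class Prop:            # a primitive proposition p
    name: str

@dataclass(frozen=True)
class Not:             # negation
    sub: "Formula"

@dataclass(frozen=True)
class And:             # conjunction
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Knows:           # K_a phi: "agent a knows that phi"
    agent: str
    sub: "Formula"

@dataclass(frozen=True)
class EveryoneKnows:   # E_G phi: "every agent in group G knows phi"
    group: frozenset
    sub: "Formula"

Formula = Prop | Not | And | Knows | EveryoneKnows

# "Agent a does not know whether or not phi": not K_a(phi) and not K_a(not phi)
phi = Prop("phi")
example = And(Not(Knows("a", phi)), Not(Knows("a", Not(phi))))
```

The group operators [math]\displaystyle{ \mathit{C}_G }[/math] and [math]\displaystyle{ \mathit{D}_G }[/math] could be added as further node types in exactly the same way.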
Semantics
As mentioned above, the logic-based approach is built upon the possible worlds model, the semantics of which are often given definite form in Kripke structures, also known as Kripke models. A Kripke structure [math]\displaystyle{ \mathcal{M}=\langle S,\pi,\mathcal{K}_1,\dots,\mathcal{K}_n \rangle }[/math] for n agents over [math]\displaystyle{ \Phi }[/math], the set of all primitive propositions, is an [math]\displaystyle{ (n+2) }[/math]-tuple, where [math]\displaystyle{ S }[/math] is a nonempty set of states or possible worlds, [math]\displaystyle{ \pi }[/math] is an interpretation, which associates with each state [math]\displaystyle{ s\in S }[/math] a truth assignment to the primitive propositions in [math]\displaystyle{ \Phi }[/math], and [math]\displaystyle{ \mathcal{K}_1, ..., \mathcal{K}_n }[/math] are binary relations on [math]\displaystyle{ S }[/math], one for each of the n agents. It is important here not to confuse [math]\displaystyle{ K_i }[/math], our modal operator, and [math]\displaystyle{ \mathcal{K}_i }[/math], our accessibility relation.
The truth assignment tells us whether a proposition [math]\displaystyle{ p }[/math] is true or false in a given state. So [math]\displaystyle{ \pi (s)(p) }[/math] tells us whether [math]\displaystyle{ p }[/math] is true in state [math]\displaystyle{ s }[/math] in model [math]\displaystyle{ \mathcal{M} }[/math]. Truth depends not only on the structure, but on the current world as well: a formula may be true in one world and false in another. To state that a formula [math]\displaystyle{ \varphi }[/math] is true at a certain world, one writes [math]\displaystyle{ (\mathcal{M},s) \models \varphi }[/math], normally read as "[math]\displaystyle{ \varphi }[/math] is true at [math]\displaystyle{ (\mathcal{M},s) }[/math]" or "[math]\displaystyle{ (\mathcal{M},s) }[/math] satisfies [math]\displaystyle{ \varphi }[/math]".
It is useful to think of the binary relation [math]\displaystyle{ \mathcal{K}_i }[/math] as a possibility relation, because it is meant to capture which worlds or states agent i considers possible; in other words, [math]\displaystyle{ w\mathcal{K}_i v }[/math] if and only if [math]\displaystyle{ \forall \varphi[(w\models K_i\varphi) \implies (v \models \varphi)] }[/math], and such worlds [math]\displaystyle{ v }[/math] are called epistemic alternatives for agent i. In idealized accounts of knowledge (e.g., describing the epistemic status of perfect reasoners with infinite memory capacity), it makes sense for [math]\displaystyle{ \mathcal{K}_i }[/math] to be an equivalence relation, since this is the strongest assumption and the one appropriate for the greatest number of applications. An equivalence relation is a binary relation that is reflexive, symmetric, and transitive. The accessibility relation does not have to have these properties; other choices are certainly possible, such as those used when modeling belief rather than knowledge.
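As a concrete illustration of these definitions, here is a minimal, self-contained Python sketch (all names are illustrative) of a single-agent Kripke structure and the satisfaction clause for [math]\displaystyle{ K_a }[/math], applied to the "Friday or Saturday" example from above:

```python
# A minimal Kripke-model sketch for one agent (all names are illustrative).
states = {"fri", "sat", "thu"}

# pi: truth assignment per state for the primitive propositions.
pi = {
    "fri": {"friday": True,  "saturday": False, "thursday": False},
    "sat": {"friday": False, "saturday": True,  "thursday": False},
    "thu": {"friday": False, "saturday": False, "thursday": True},
}

# Accessibility relation K_a: from the actual world "fri", the agent
# considers only "fri" and "sat" possible (it is Friday or Saturday).
access = {
    "fri": {"fri", "sat"},
    "sat": {"fri", "sat"},
    "thu": {"thu"},
}

def knows(world, prop):
    """(M, world) |= K_a prop  iff  prop holds at every accessible world."""
    return all(pi[v][prop] for v in access[world])

def knows_not(world, prop):
    """(M, world) |= K_a not-prop."""
    return all(not pi[v][prop] for v in access[world])

print(knows_not("fri", "thursday"))                      # True: the agent knows it is not Thursday
print(knows("fri", "friday"), knows("fri", "saturday"))  # False False: the agent does not know which day it is
```

The `knows` clause mirrors the semantic definition: [math]\displaystyle{ K_a\varphi }[/math] holds at a world exactly when [math]\displaystyle{ \varphi }[/math] holds at every world the agent considers possible from there.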
The properties of knowledge
Assuming that [math]\displaystyle{ \mathcal{K}_i }[/math] is an equivalence relation, and that the agents are perfect reasoners, a few properties of knowledge can be derived. The properties listed here are often known as the "S5 Properties," for reasons described in the Axiom Systems section below.
The distribution axiom
This axiom is traditionally known as K. In epistemic terms, it states that if an agent knows [math]\displaystyle{ \varphi }[/math] and knows that [math]\displaystyle{ \varphi \implies \psi }[/math], then the agent must also know [math]\displaystyle{ \,\psi }[/math]. So,
- [math]\displaystyle{ (K_i\varphi \land K_i(\varphi \implies \psi)) \implies K_i\psi }[/math]
This axiom is valid on any frame in relational semantics. This axiom logically establishes modus ponens as a rule of inference for every epistemically possible world.
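To see why it is valid on every frame, suppose [math]\displaystyle{ (\mathcal{M},w) \models K_i\varphi }[/math] and [math]\displaystyle{ (\mathcal{M},w) \models K_i(\varphi \implies \psi) }[/math]. Then every world [math]\displaystyle{ v }[/math] with [math]\displaystyle{ w\mathcal{K}_i v }[/math] satisfies both [math]\displaystyle{ \varphi }[/math] and [math]\displaystyle{ \varphi \implies \psi }[/math], and hence [math]\displaystyle{ \psi }[/math]; therefore [math]\displaystyle{ (\mathcal{M},w) \models K_i\psi }[/math]. No condition on [math]\displaystyle{ \mathcal{K}_i }[/math] is needed.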
The knowledge generalization rule
Another property we can derive is that if [math]\displaystyle{ \varphi }[/math] is valid (true at every world of every model), then [math]\displaystyle{ K_i\varphi }[/math] is valid as well. This does not mean that if [math]\displaystyle{ \varphi }[/math] is true, then agent i knows [math]\displaystyle{ \varphi }[/math]. What it means is that if [math]\displaystyle{ \varphi }[/math] is true in every world that an agent considers possible, then the agent must know [math]\displaystyle{ \varphi }[/math] at every possible world. This principle is traditionally called N (the necessitation rule).
- [math]\displaystyle{ \text{if }\models \varphi\text{ then }M \models K_i \varphi.\, }[/math]
This rule always preserves truth in relational semantics.
The knowledge or truth axiom
This axiom is also known as T. It says that if an agent knows facts, the facts must be true. This has often been taken as the major distinguishing feature between knowledge and belief. We can believe a statement to be true when it is false, but it would be impossible to know a false statement.
- [math]\displaystyle{ K_i \varphi \implies \varphi }[/math]
This axiom can also be expressed through its contrapositive, which says that agents cannot know a false statement:
- [math]\displaystyle{ \varphi \implies \neg K_i \neg \varphi }[/math]
This axiom is valid on any reflexive frame.
The positive introspection axiom
This property and the next state that an agent has introspection about its own knowledge; they are traditionally known as 4 and 5, respectively. The Positive Introspection Axiom, also known as the KK Axiom, says specifically that agents know that they know what they know. This axiom may seem less obvious than the ones listed previously, and Timothy Williamson has argued forcefully against its inclusion in his book Knowledge and Its Limits.
- [math]\displaystyle{ K_i \varphi \implies K_i K_i \varphi }[/math]
Equivalently, the contrapositive of modal axiom 4 says that if an agent does not know that they know [math]\displaystyle{ \varphi }[/math], then the agent does not know [math]\displaystyle{ \varphi }[/math]:
- [math]\displaystyle{ \neg K_i K_i \varphi \implies \neg K_i \varphi }[/math]
This axiom is valid on any transitive frame.
The negative introspection axiom
The Negative Introspection Axiom says that agents know that they do not know what they do not know.
- [math]\displaystyle{ \neg K_i \varphi \implies K_i \neg K_i \varphi }[/math]
Or, equivalently, the contrapositive of modal axiom 5 says that if an agent does not know that they do not know [math]\displaystyle{ \varphi }[/math], then the agent knows [math]\displaystyle{ \varphi }[/math]:
- [math]\displaystyle{ \neg K_i \neg K_i \varphi \implies K_i \varphi }[/math]
This axiom is valid on any Euclidean frame.
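The frame conditions named above (reflexivity for T, transitivity for 4, Euclideanness for 5) can be checked directly on a finite accessibility relation. The sketch below (illustrative code, not a library API) tests them for the equivalence relation used in the semantics example earlier:

```python
# Illustrative checks of the frame properties discussed above, applied to a
# finite accessibility relation given as a set of (world, world) pairs.
def is_reflexive(worlds, rel):
    return all((w, w) in rel for w in worlds)

def is_transitive(rel):
    # if u sees v and v sees x, then u sees x
    return all((u, x) in rel for (u, v) in rel for (v2, x) in rel if v == v2)

def is_euclidean(rel):
    # if some world sees both u and v, then u must see v
    return all((u, v) in rel for (w, u) in rel for (w2, v) in rel if w == w2)

worlds = {"fri", "sat", "thu"}
rel = {("fri", "fri"), ("fri", "sat"), ("sat", "fri"),
       ("sat", "sat"), ("thu", "thu")}

# An equivalence relation satisfies all three, matching the S5 properties.
print(is_reflexive(worlds, rel), is_transitive(rel), is_euclidean(rel))
# True True True
```

A check for symmetry, the condition associated with axiom B in the next section, would follow the same pattern.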
Axiom systems
Different modal logics can be derived by taking different subsets of these axioms, and these logics are normally named after the axioms employed, though this is not always the case: KT45, the modal logic obtained by combining K, T, 4, 5, and the knowledge generalization rule, is better known as S5. This is why the properties of knowledge described above are often called the S5 properties. Moreover, the modal axiom B is a theorem of S5 (viz. [math]\displaystyle{ S5 \vdash \mathbf {B} }[/math]); it says that what an agent does not know that they do not know is true: [math]\displaystyle{ \neg K_i \neg K_i \varphi \implies \varphi }[/math]. The modal axiom B is valid on any symmetric frame, but it is very counterintuitive in epistemic logic: how can ignorance of one's own ignorance imply truth? For this reason it is debatable whether S4, rather than S5, better describes knowledge.
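B is a theorem of S5 because it already follows from T and 5; a short derivation sketch:

- From T in its contraposed form (see above), [math]\displaystyle{ \varphi \implies \neg K_i \neg \varphi }[/math].
- Applying axiom 5 to [math]\displaystyle{ \neg\varphi }[/math] gives [math]\displaystyle{ \neg K_i \neg \varphi \implies K_i \neg K_i \neg \varphi }[/math].
- Chaining the two yields [math]\displaystyle{ \varphi \implies K_i \neg K_i \neg \varphi }[/math], which is axiom B; substituting [math]\displaystyle{ \neg\varphi }[/math] for [math]\displaystyle{ \varphi }[/math] and contraposing gives the form [math]\displaystyle{ \neg K_i \neg K_i \varphi \implies \varphi }[/math] quoted above.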
Epistemic logic also deals with belief, not just knowledge. The basic modal operator is usually written B instead of K. In this case, though, the knowledge axiom no longer seems right—agents only sometimes believe the truth—so it is usually replaced with the Consistency Axiom, traditionally called D:
- [math]\displaystyle{ \neg B_i \bot }[/math]
which states that the agent does not believe a contradiction. When D replaces T in S5, the resulting system is known as KD45. This results in different properties for [math]\displaystyle{ \mathcal{K}_i }[/math] as well. For example, in a system where an agent "believes" something to be true that is not actually true, the accessibility relation would be non-reflexive. The logic of belief is called doxastic logic.
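The point about non-reflexivity can be made concrete with a two-world sketch (illustrative names): the accessibility relation below is serial, transitive, and Euclidean, the frame conditions usually associated with KD45, but not reflexive at the actual world, so the agent believes [math]\displaystyle{ p }[/math] even though [math]\displaystyle{ p }[/math] is false there.

```python
# A two-world belief model (illustrative): the actual world "w" is not
# accessible from itself, so a false belief is possible at "w".
pi = {"w": {"p": False}, "v": {"p": True}}   # truth assignment per world
access = {"w": {"v"}, "v": {"v"}}            # serial, transitive, Euclidean; not reflexive at "w"

def believes(world, prop):
    """(M, world) |= B prop  iff  prop holds at every accessible world."""
    return all(pi[u][prop] for u in access[world])

print(believes("w", "p"), pi["w"]["p"])   # True False: the agent believes p, but p is false
```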
Multi-agent systems
When there are multiple agents in the domain of discourse, each agent i corresponds to its own epistemic modal operator [math]\displaystyle{ K_i }[/math]. In addition to the axiom schemata above, which describe the rationality of each individual agent, it is usually also assumed that the rationality of each agent is common knowledge.
Problems with the possible world model and modal model of knowledge
If we take the possible worlds approach to knowledge, it follows that our epistemic agent a knows all the logical consequences of their beliefs (a problem known as logical omniscience[2]). If [math]\displaystyle{ Q }[/math] is a logical consequence of [math]\displaystyle{ P }[/math], then there is no possible world where [math]\displaystyle{ P }[/math] is true but [math]\displaystyle{ Q }[/math] is not. So if a knows that [math]\displaystyle{ P }[/math] is true, it follows that all of the logical consequences of [math]\displaystyle{ P }[/math] are true in all of the possible worlds compatible with a's beliefs. Therefore, a knows [math]\displaystyle{ Q }[/math]. It is not epistemically possible for a that not-[math]\displaystyle{ Q }[/math] given their knowledge that [math]\displaystyle{ P }[/math]. This consideration was a part of what led Robert Stalnaker to develop two-dimensionalism, which can arguably explain how we might not know all the logical consequences of our beliefs even if there are no worlds where the propositions we know come out true but their consequences false.[3]
Even when we ignore possible world semantics and stick to axiomatic systems, this peculiar feature holds. With K and N (the distribution axiom and the knowledge generalization rule, respectively), which hold in all normal modal logics, we can prove that we know all the logical consequences of our beliefs. If [math]\displaystyle{ Q }[/math] is a logical consequence of [math]\displaystyle{ P }[/math] (i.e. we have the validity [math]\displaystyle{ \models (P \rightarrow Q) }[/math]), then we can derive [math]\displaystyle{ K_a (P \rightarrow Q) }[/math] with N, and then derive [math]\displaystyle{ K_a P \rightarrow K_a Q }[/math] with K. Translated into epistemic terms, this says that if [math]\displaystyle{ Q }[/math] is a logical consequence of [math]\displaystyle{ P }[/math], then a knows that it is, and if a knows [math]\displaystyle{ P }[/math], a knows [math]\displaystyle{ Q }[/math]. That is to say, a knows all the logical consequences of every proposition a knows. This holds in every normal modal logic. But then, for example, if a knows that prime numbers are divisible only by themselves and the number one, then a knows that 8683317618811886495518194401279999999 is prime (since this number is only divisible by itself and the number one). That is to say, under the modal interpretation of knowledge, when a knows the definition of a prime number, a knows that this number is prime. This generalizes to any provable theorem of any axiomatic theory (i.e. if a knows all the axioms of a theory, then a knows all the provable theorems of that theory). It should be clear at this point that a is not human (otherwise there would not be any unsolved conjectures in mathematics, such as the P versus NP problem or Goldbach's conjecture). This shows that epistemic modal logic is an idealized account of knowledge, and explains objective, rather than subjective, knowledge (if anything).[4]
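Spelled out, the derivation sketched in this paragraph runs as follows:

- 1. [math]\displaystyle{ \models (P \rightarrow Q) }[/math] (assumption: [math]\displaystyle{ Q }[/math] is a logical consequence of [math]\displaystyle{ P }[/math])
- 2. [math]\displaystyle{ K_a (P \rightarrow Q) }[/math] (from 1 by the rule N)
- 3. [math]\displaystyle{ K_a (P \rightarrow Q) \implies (K_a P \implies K_a Q) }[/math] (instance of axiom K)
- 4. [math]\displaystyle{ K_a P \implies K_a Q }[/math] (from 2 and 3 by modus ponens)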
Epistemic fallacy (masked-man fallacy)
In philosophical logic, the masked-man fallacy (also known as the intensional fallacy or epistemic fallacy) is committed when one makes an illicit use of Leibniz's law in an argument. The fallacy is "epistemic" because it posits an immediate identity between a subject's knowledge of an object and the object itself, failing to recognize that Leibniz's law is not capable of accounting for intensional contexts.
Examples
The name of the fallacy comes from the example:
- Premise 1: I know who Bob is.
- Premise 2: I do not know who the masked man is.
- Conclusion: Therefore, Bob is not the masked man.
The premises may be true and the conclusion false if Bob is the masked man and the speaker does not know that. Thus the argument is a fallacious one.
In symbolic form, the above argument is:
- Premise 1: I know who X is.
- Premise 2: I do not know who Y is.
- Conclusion: Therefore, X is not Y.
Note, however, that this syllogism takes place within the reasoning of the speaker "I"; therefore, in formal modal logic, it becomes:
- Premise 1: The speaker believes he knows who X is.
- Premise 2: The speaker believes he does not know who Y is.
- Conclusion: Therefore, the speaker believes X is not Y.
Premise 1, [math]\displaystyle{ \mathcal{B}_s \forall t (t=X\rightarrow K_s(t=X)) }[/math], is a very strong one, as it is logically equivalent to [math]\displaystyle{ \mathcal{B}_s\forall t (\neg K_s(t=X)\rightarrow t\not=X) }[/math]. It is very likely a false belief: [math]\displaystyle{ \forall t (\neg K_s(t=X)\rightarrow t\not=X) }[/math] is likely a false proposition, since ignorance of the proposition [math]\displaystyle{ t=X }[/math] does not imply that its negation is true.
Another example:
- Premise 1: Lois Lane thinks Superman can fly.
- Premise 2: Lois Lane thinks Clark Kent cannot fly.
- Conclusion: Therefore, Superman and Clark Kent are not the same person.
Expressed in doxastic logic, the above syllogism is:
- Premise 1: [math]\displaystyle{ \mathcal{B}_\text{Lois}\text{Fly}_\text{(Superman)} }[/math]
- Premise 2: [math]\displaystyle{ \mathcal{B}_\text{Lois}\neg \text{Fly}_\text{(Clark)} }[/math]
- Conclusion: [math]\displaystyle{ \text{Superman} \neq \text{Clark} }[/math]
The above reasoning is invalid (not truth-preserving). The valid conclusion to be drawn is [math]\displaystyle{ \mathcal{B}_\text{Lois}(\text{Superman}\neq \text{Clark}) }[/math].
See also
- Epistemic closure
- Epistemology
- Dynamic epistemic logic
- Logic in computer science
- Philosophical Explanations
Notes
- ↑ p. 257 in: Ferenczi, Miklós (2002) (in Hungarian). Matematikai logika. Budapest: Műszaki könyvkiadó. ISBN 963-16-2870-1.
- ↑ Rendsvig, Rasmus; Symons, John (2021), Zalta, Edward N., ed., Epistemic Logic (Summer 2021 ed.), Metaphysics Research Lab, Stanford University, https://plato.stanford.edu/archives/sum2021/entries/logic-epistemic/, retrieved 2022-03-06
- ↑ Stalnaker, Robert. "Propositions." Issues in the Philosophy of Language. Yale UP, 1976. p. 101.
- ↑ See Ted Sider's Logic for Philosophy. Currently page 230 but subject to change following updates.
References
- Anderson, A. and N. D. Belnap. Entailment: The Logic of Relevance and Necessity. Princeton: Princeton University Press, 1975. ASIN B001NNPJL8.
- Brown, Benjamin. Thoughts and Ways of Thinking: Source Theory and Its Applications. London: Ubiquity Press, 2017.
- van Ditmarsch, Hans; Halpern, Joseph Y.; van der Hoek, Wiebe; Kooi, Barteld (eds.). Handbook of Epistemic Logic. London: College Publications, 2015.
- Fagin, Ronald; Halpern, Joseph; Moses, Yoram; Vardi, Moshe (2003). Reasoning about Knowledge. Cambridge: MIT Press. ISBN 978-0-262-56200-3. A classic reference.
- Fagin, Ronald; Halpern, Joseph; Vardi, Moshe. "A nonstandard approach to the logical omniscience problem." Artificial Intelligence, Volume 79, Number 2, 1995, pp. 203–240.
- Hendricks, V. F. Mainstream and Formal Epistemology. New York: Cambridge University Press, 2007.
- Hintikka, Jaakko (1962). Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca: Cornell University Press. ISBN 978-1-904987-08-6. https://archive.org/details/knowledgebeliefi00hint_0.
- Meyer, J.-J. C., 2001, "Epistemic Logic," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
- Montague, R. "Universal Grammar." Theoria, Volume 36, 1970, pp. 373–398.
- Rescher, Nicholas (2005). Epistemic Logic: A Survey of the Logic of Knowledge. University of Pittsburgh Press. ISBN 978-0-8229-4246-7.
- Shoham, Yoav; Leyton-Brown, Kevin (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York: Cambridge University Press. ISBN 978-0-521-89943-7. http://www.masfoundations.org. See Chapters 13 and 14; downloadable free online.
External links
- "Dynamic Epistemic Logic". Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/de-logic.
- Hendricks, Vincent; Symons, John. "Epistemic Logic". in Zalta, Edward N.. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/logic-epistemic/.
- Garson, James. "Modal logic". in Zalta, Edward N.. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/logic-modal/.
- Vanderschraaf, Peter. "Common Knowledge". in Zalta, Edward N.. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/common-knowledge/.
- Epistemic modal logic at PhilPapers
- "Epistemic modal logic"—Ho Ngoc Duc.
Original source: https://en.wikipedia.org/wiki/Epistemic modal logic.