Transferable belief model
The transferable belief model (TBM) is an elaboration on the Dempster–Shafer theory (DST), a mathematical model used to evaluate the probability that a given proposition is true from other propositions that are assigned probabilities. It was developed by Philippe Smets, who proposed his approach as a response to Zadeh’s example against Dempster's rule of combination. In contrast to the original DST, the TBM adopts the open-world assumption, which relaxes the assumption that all possible outcomes are known. Under the open-world assumption, Dempster's rule of combination is adapted such that there is no normalization. The underlying idea is that the probability mass pertaining to the empty set indicates an unexpected outcome, e.g. the belief in a hypothesis outside the frame of discernment. This adaptation violates the probabilistic character of the original DST and also of Bayesian inference. Therefore, the authors replaced terminology such as probability masses and probability update with terms such as degrees of belief and transfer, giving rise to the name of the method: the transferable belief model.[1][2]
Zadeh’s example in TBM context
Lotfi Zadeh describes an information fusion problem.[3] A patient has an illness that can be caused by three different factors A, B or C. Doctor 1 says that the patient's illness is very likely caused by A (very likely, meaning probability p = 0.95), while B is also possible but not likely (p = 0.05). Doctor 2 says that the cause is very likely C (p = 0.95), while B is also possible but not likely (p = 0.05). How is one to form one's own opinion from this?
Updating the first opinion with the second by Bayesian conditioning (or the other way round) implies certainty that the cause is B. Dempster's rule of combination leads to the same result. This can be seen as paradoxical, since although the two doctors point at different causes, A and C, they both agree that B is not likely. (For this reason the standard Bayesian approach is to adopt Cromwell's rule and avoid the use of 0 or 1 as probabilities.)
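Reading each doctor's opinion as a mass function over {A, B, C} (one common formalization of the example), i.e. [math]\displaystyle{ m_1(\{A\})=0.95 }[/math], [math]\displaystyle{ m_1(\{B\})=0.05 }[/math] and [math]\displaystyle{ m_2(\{C\})=0.95 }[/math], [math]\displaystyle{ m_2(\{B\})=0.05 }[/math], the only non-conflicting intersection is [math]\displaystyle{ \{B\}\cap\{B\}=\{B\} }[/math], so the conjunctive combination defined in the formal definition below yields
- [math]\displaystyle{ m_{1,2}(\{B\}) = 0.05 \cdot 0.05 = 0.0025, \qquad m_{1,2}(\emptyset) = 0.95 \cdot 0.95 + 0.95 \cdot 0.05 + 0.05 \cdot 0.95 = 0.9975. }[/math]
Dempster's rule normalizes the conflicting mass away and returns [math]\displaystyle{ m(\{B\}) = 0.0025/0.0025 = 1 }[/math], i.e. certainty in B, whereas the unnormalized TBM combination keeps [math]\displaystyle{ m(\emptyset) = 0.9975 }[/math], read as strong support for a cause outside the frame {A, B, C}.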
Formal definition
The TBM describes beliefs at two levels:[4]
- a credal level where beliefs are entertained and quantified by belief functions,
- a pignistic level where beliefs can be used to make decisions and are quantified by probability functions.
Credal level
According to the DST, a probability mass function [math]\displaystyle{ m }[/math] is defined such that:[1]
- [math]\displaystyle{ m: 2^X \rightarrow [0,1] \,\! }[/math]
with
- [math]\displaystyle{ \sum_{A \in 2^X} m(A) = 1 \,\! }[/math]
where the power set [math]\displaystyle{ 2^X }[/math] contains all possible subsets of the frame of discernment [math]\displaystyle{ X }[/math]. In contrast to the DST, the mass [math]\displaystyle{ m }[/math] allocated to the empty set [math]\displaystyle{ \emptyset }[/math] is not required to be zero, and hence in general [math]\displaystyle{ 0 \leq m(\emptyset) \leq 1 }[/math] holds. The underlying idea is that the frame of discernment is not necessarily exhaustive, and thus belief allocated to a proposition [math]\displaystyle{ A \in 2^X }[/math] is in fact allocated to [math]\displaystyle{ A \cup \{e\} }[/math], where [math]\displaystyle{ \{e\} }[/math] is the set of unknown outcomes. Consequently, the combination rule underlying the TBM corresponds to Dempster's rule of combination, except for the normalization that grants [math]\displaystyle{ m(\emptyset)=0 }[/math]. Hence, in the TBM any two independent mass functions [math]\displaystyle{ m_1 }[/math] and [math]\displaystyle{ m_2 }[/math] are combined to a single function [math]\displaystyle{ m_{1,2} }[/math] by:[5]
- [math]\displaystyle{ m_{1,2}(A) = (m_1 \otimes m_2) (A) = \sum_{B \cap C = A} m_1(B) m_2(C) \, \! }[/math]
where
- [math]\displaystyle{ A,B,C \in 2^X. \, \! }[/math]
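A minimal Python sketch of this unnormalized combination; representing mass functions as dictionaries keyed by frozensets is an illustrative choice, not something prescribed by the cited sources:
```python
from itertools import product

def conjunctive(m1, m2):
    """Unnormalized (TBM) conjunctive combination: mass on the empty set is kept."""
    out = {}
    for (b, v1), (c, v2) in product(m1.items(), m2.items()):
        a = b & c  # set intersection; may be empty
        out[a] = out.get(a, 0.0) + v1 * v2
    return out

# Zadeh's example: the two doctors' opinions over the frame {A, B, C}.
m1 = {frozenset({"A"}): 0.95, frozenset({"B"}): 0.05}
m2 = {frozenset({"C"}): 0.95, frozenset({"B"}): 0.05}

m12 = conjunctive(m1, m2)
print(m12[frozenset({"B"})])  # ~0.0025
print(m12[frozenset()])       # ~0.9975, the mass transferred to the empty set
```
Dividing the non-empty masses by [math]\displaystyle{ 1-m_{1,2}(\emptyset) }[/math] would recover Dempster's normalized rule and hence the certainty in B noted in Zadeh's example above.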
In the TBM the degree of belief in a hypothesis [math]\displaystyle{ H \in 2^X }[/math] is defined by a function:[1]
- [math]\displaystyle{ \operatorname{bel}: 2^X \rightarrow [0,1] \, \! }[/math]
with
- [math]\displaystyle{ \operatorname{bel}(H)= \sum_{\emptyset \ne A \subseteq H} m(A) }[/math]
- [math]\displaystyle{ \operatorname{bel}(\emptyset)=0. \, \! }[/math]
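Under the same dictionary representation, the belief function can be sketched as follows; the masses in the usage example are hypothetical and chosen only for illustration:
```python
def bel(m, h):
    """Belief in hypothesis h: total mass committed to non-empty subsets of h."""
    return sum(v for a, v in m.items() if a and a <= h)

# Hypothetical mass function on a frame {A, B}.
m = {frozenset(): 0.1, frozenset({"A"}): 0.3, frozenset({"A", "B"}): 0.6}
print(bel(m, frozenset({"A"})))       # 0.3
print(bel(m, frozenset({"A", "B"})))  # ~0.9 (the mass on the empty set is excluded)
```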
Pignistic level
When a decision must be made the credal beliefs are transferred to pignistic probabilities by:[4]
- [math]\displaystyle{ P_\text{Bet}(x)= \frac{1}{1-m(\emptyset)} \sum_{x \in A \subseteq X} \frac {m(A)} {|A|} \, \! }[/math]
where [math]\displaystyle{ x \in X }[/math] denotes the atoms (also called singletons)[6] and [math]\displaystyle{ |A| }[/math] is the number of atoms that appear in [math]\displaystyle{ A }[/math]. Hence, the probability mass [math]\displaystyle{ m(A) }[/math] is distributed equally among the atoms of [math]\displaystyle{ A }[/math], and the factor [math]\displaystyle{ 1-m(\emptyset) }[/math] renormalizes the result so that the pignistic probabilities sum to one (the transformation is defined as long as the mass on the empty set is strictly less than one). This strategy corresponds to the principle of insufficient reason (also known as the principle of maximum entropy), according to which an unknown distribution most probably corresponds to a uniform distribution.
In the TBM pignistic probability functions are described by functions [math]\displaystyle{ P_\text{Bet} }[/math]. Such a function satisfies the probability axioms:[4]
- [math]\displaystyle{ P_\text{Bet}: X \rightarrow [0,1] \,\! }[/math]
with
- [math]\displaystyle{ \sum_{x \in X} P_\text{Bet}(x) = 1 \,\! }[/math]
- [math]\displaystyle{ P_\text{Bet}(\emptyset)=0 \,\! }[/math]
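A sketch of the pignistic transformation under the same dictionary representation, following the formula above (and thus assuming the mass on the empty set is strictly less than one); the usage example reuses the hypothetical mass function from the belief-function sketch:
```python
def pignistic(m):
    """Spread each focal mass evenly over its atoms, renormalized by 1 - m(empty set)."""
    empty = m.get(frozenset(), 0.0)
    bet = {}
    for a, v in m.items():
        if not a:
            continue  # the mass on the empty set is handled by the renormalization
        share = v / (len(a) * (1.0 - empty))
        for x in a:
            bet[x] = bet.get(x, 0.0) + share
    return bet

m = {frozenset(): 0.1, frozenset({"A"}): 0.3, frozenset({"A", "B"}): 0.6}
print(pignistic(m))  # {'A': ~0.667, 'B': ~0.333}; the values sum to 1
```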
Philippe Smets introduced them as pignistic to stress that these probability functions are based on incomplete data, whose only purpose is a forced decision, e.g. to place a bet. This is in contrast to the credal beliefs described above, whose purpose is to represent the actual belief.[1]
Open world example
When tossing a coin one usually assumes that Head or Tail will occur, so that [math]\displaystyle{ \Pr(\text{Head}) + \Pr(\text{Tail}) = 1 }[/math]. Under the open-world assumption the coin can be stolen in mid-air, disappear, break apart or otherwise fall sideways so that neither Head nor Tail occurs. The power set of {Head, Tail} is then considered, and the overall probability (i.e. 1) decomposes as:
- [math]\displaystyle{ \Pr(\emptyset) + \Pr(\text{Head}) + \Pr(\text{Tail}) + \Pr(\text{Head,Tail}) = 1. }[/math]
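A hypothetical numeric instance of this decomposition, written as a mass function in the dictionary representation used above (the particular values are illustrative only):
```python
# Illustrative open-world mass assignment for the coin toss; the small mass on
# the empty set stands for "neither Head nor Tail occurs" (coin lost, lands on edge, ...).
m = {
    frozenset(): 0.02,
    frozenset({"Head"}): 0.49,
    frozenset({"Tail"}): 0.49,
    frozenset({"Head", "Tail"}): 0.00,
}
assert abs(sum(m.values()) - 1.0) < 1e-9  # the four terms above sum to one
```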
Notes
1. Smets, Ph. (1990). "The combination of evidence in the transferable belief model". IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (5): 447–458. doi:10.1109/34.55104.
2. Dempster, A. P. (2007). "The Dempster–Shafer calculus for statisticians". International Journal of Approximate Reasoning 48 (2): 365–377. doi:10.1016/j.ijar.2007.03.004.
3. Zadeh, L. A. (1984). "Review of Shafer's A Mathematical Theory of Evidence". AI Magazine 5 (3).
4. Smets, Ph.; Kennes, R. (1994). "The transferable belief model". Artificial Intelligence 66 (2): 191–234. doi:10.1016/0004-3702(94)90026-4.
5. Haenni, R. (2006). "Uncover Dempster's Rule Where It Is Hidden". In: Proceedings of the 9th International Conference on Information Fusion (FUSION 2006), Florence, Italy.
6. Shafer, Glenn (1976). A Mathematical Theory of Evidence. Princeton University Press. ISBN 0-608-02508-9.
References
- Smets, Ph. (1988). "Belief function". In: Smets, Ph.; Mamdani, A.; Dubois, D.; Prade, H. (eds.) Non-Standard Logics for Automated Reasoning. Academic Press, London.
- Smets, Ph. (1990). "The combination of evidence in the transferable belief model". IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (5): 447–458. doi:10.1109/34.55104.
- Smets, Ph. (1993). "An axiomatic justification for the use of belief function to quantify beliefs". IJCAI'93 (International Joint Conference on AI), Chambéry, 598–603.
- Smets, Ph.; Kennes, R. (1994). "The transferable belief model". Artificial Intelligence 66 (2): 191–234. doi:10.1016/0004-3702(94)90026-4.
- Smets, Ph.; Kruse, R. (1995). "The transferable belief model for belief representation". In: Smets, Ph.; Motro, A. (eds.) Uncertainty Management in Information Systems: From Needs to Solutions. Kluwer, Boston.
- Haenni, R. (2006). "Uncover Dempster's Rule Where It Is Hidden". In: Proceedings of the 9th International Conference on Information Fusion (FUSION 2006), Florence, Italy.
- Ramasso, E.; Rombaut, M.; Pellerin, D. (2007). "Forward-Backward-Viterbi procedures in the Transferable Belief Model for state sequence analysis using belief functions". ECSQARU 2007, Hammamet, Tunisia.
- Touil, K.; Zribi, M.; Benjelloun, M. (2007). "Application of transferable belief model to navigation system". Integrated Computer-Aided Engineering 14 (1): 93–105. doi:10.3233/ICA-2007-14108. http://iospress.metapress.com/content/FW6KLYTMAB49FNXH.
- Dempster, A. P. (2007). "The Dempster–Shafer calculus for statisticians". International Journal of Approximate Reasoning 48 (2): 365–377. doi:10.1016/j.ijar.2007.03.004.