Evidence-based subjective logic

Evidence-based subjective logic (EBSL) is a variant[1] of subjective logic in which the transitivity of opinions (discounting) is handled by applying weights to the evidence underlying the opinions. Subjective logic is based on Dempster–Shafer belief theory. The discounting rule in EBSL makes it possible to handle arbitrary trust networks.[2]

Relation between evidence and opinions

Consider a proposition P. Let p be the amount of evidence supporting P, and n the amount of evidence supporting ¬P. We write the evidence as a vector (p, n). Let c be a positive constant representing a "unit" of evidence. An opinion (b, d, u) is formed on the basis of the evidence (p, n), in which b, d, and u quantify the levels of belief, disbelief, and uncertainty in P, respectively. There is a one-to-one mapping between the opinion and the evidence,

[math]\displaystyle{ (b,d,u)=\frac{(p,n,c)}{p+n+c} \quad\quad (p,n)=c\frac{(b,d)}{u}. \quad\quad (1) }[/math]

In the original literature on subjective logic the constant was set to c = 2. The mapping (1) is the unique solution of the following set of constraints,[1]

  • b/d = p/n.
  • b + d + u = 1.
  • p + n = 0 implies u = 1.
  • [math]\displaystyle{ p\to\infty }[/math] implies [math]\displaystyle{ u\to 0 }[/math].

Alternatively, (1) can be derived from an analysis of a posteriori probability distributions[3] (beta distributions).
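
The mapping is easy to compute directly. Below is a minimal Python sketch of (1), assuming the conventional value c = 2; the function names are illustrative and not taken from the cited papers.

```python
C = 2.0  # the "unit" of evidence; c = 2 in the original subjective-logic literature

def evidence_to_opinion(p, n, c=C):
    """Map an evidence vector (p, n) to an opinion (b, d, u) via Eq. (1)."""
    s = p + n + c
    return (p / s, n / s, c / s)

def opinion_to_evidence(b, d, u, c=C):
    """Inverse mapping; requires u > 0, i.e. a non-dogmatic opinion."""
    return (c * b / u, c * d / u)

# Example: 8 observations supporting P and 2 supporting not-P.
b, d, u = evidence_to_opinion(8, 2)   # (0.666..., 0.166..., 0.166...)
assert abs(b + d + u - 1.0) < 1e-12   # constraint b + d + u = 1
```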

There are three "corner points" in opinion space:

  • full Belief B = (1,0,0),
  • full Disbelief D = (0,1,0),
  • full Uncertainty U = (0,0,1).

Opinions on the line between B and D (including B and D) are called "dogmatic opinions". They have zero uncertainty, which is achievable only with an infinite amount of evidence. Dogmatic opinions are often excluded from the algebra.

Consensus / fusion

The consensus operation combines two opinions about the same proposition into one opinion by adding up the underlying evidence. Let x = (xb,xd,xu) and y = (yb,yd,yu) be the opinions that are to be fused, and z = x ⊕ y the result. We denote their evidence vectors as (px, nx), (py, ny) and (pz, nz) respectively. In evidence space the consensus is straightforwardly defined as

[math]\displaystyle{ (p_z,n_z) = (p_x,n_x)+(p_y,n_y). }[/math]

In opinion space this yields

[math]\displaystyle{ x\oplus y = \frac{(p_x+p_y,n_x+n_y,c)}{p_x+p_y+n_x+n_y+c} }[/math]

which, using (1), can be rewritten as

[math]\displaystyle{ x\oplus y = \frac{(x_{\rm u} y_{\rm b}+y_{\rm u} x_{\rm b},x_{\rm u} y_{\rm d}+y_{\rm u} x_{\rm d},x_{\rm u} y_{\rm u})} {x_{\rm u}+y_{\rm u}-x_{\rm u} y_{\rm u}}. \quad\quad (2) }[/math]

The consensus rule can only be applied if the evidence underlying x and y is independent, otherwise double counting of evidence occurs.
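
A sketch of the fusion rule in both spaces, reusing evidence_to_opinion from the sketch above; the example evidence values are arbitrary.

```python
def fuse_evidence(ev_x, ev_y):
    """Consensus in evidence space: add the two evidence vectors."""
    return (ev_x[0] + ev_y[0], ev_x[1] + ev_y[1])

def fuse_opinions(x, y):
    """Consensus x ⊕ y in opinion space, Eq. (2); assumes independent evidence."""
    xb, xd, xu = x
    yb, yd, yu = y
    k = xu + yu - xu * yu
    return ((xu * yb + yu * xb) / k,
            (xu * yd + yu * xd) / k,
            (xu * yu) / k)

# Both routes agree: fusing opinions equals fusing the underlying evidence.
x, y = evidence_to_opinion(8, 2), evidence_to_opinion(1, 3)
z_direct = fuse_opinions(x, y)
z_via_ev = evidence_to_opinion(*fuse_evidence((8, 2), (1, 3)))
assert all(abs(a - b) < 1e-12 for a, b in zip(z_direct, z_via_ev))
```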

Discounting

Old discounting rule (⊗)

The traditional discounting operation in Subjective Logic is denoted as ⊗ and defined as

[math]\displaystyle{ x\otimes y \stackrel{\rm def}{=} (x_{\rm b}y_{\rm b}, x_{\rm b}y_{\rm d},1-x_{\rm b}y_{\rm b} - x_{\rm b}y_{\rm d}). }[/math]

This operation suffers from a number of serious problems, e.g.

  • There is no natural interpretation of the evidence underlying x ⊗ y when (1) is applied.
  • Consider the following. Alice has trust x in Bob. Bob gathers two independent evidence vectors, (p1, n1) and (p2, n2), about some proposition P.
Scenario I: Bob forms two independent opinions, y1 and y2, based on the evidence. He sends y1 and y2 to Alice. Alice forms opinion [math]\displaystyle{ (x\otimes y_1)\oplus(x\otimes y_2) }[/math] about P.
Scenario II: Bob combines his evidence and forms opinion [math]\displaystyle{ y_1\oplus y_2 }[/math]. He then sends [math]\displaystyle{ y_1\oplus y_2 }[/math] to Alice. Alice forms opinion [math]\displaystyle{ x\otimes (y_1\oplus y_2) }[/math] about P.

It is obvious that these scenarios should yield the same result for Alice. Yet the traditional discounting rule gives: [math]\displaystyle{ x\otimes(y_1\oplus y_2)\neq (x\otimes y_1)\oplus(x\otimes y_2). }[/math] These problems make it very difficult to handle trust networks in subjective logic.
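
The mismatch is easy to reproduce numerically. The sketch below implements the traditional ⊗ rule and compares the two scenarios, reusing the helpers from the previous sketches; the trust and evidence values are assumed purely for illustration.

```python
def discount_old(x, y):
    """Traditional subjective-logic discounting x ⊗ y."""
    xb = x[0]
    yb, yd, _ = y
    return (xb * yb, xb * yd, 1.0 - xb * yb - xb * yd)

x  = (0.7, 0.1, 0.2)                # Alice's trust in Bob (assumed values)
y1 = evidence_to_opinion(4, 1)      # Bob's opinion from evidence (4, 1)
y2 = evidence_to_opinion(2, 2)      # Bob's opinion from evidence (2, 2)

scenario_1 = fuse_opinions(discount_old(x, y1), discount_old(x, y2))
scenario_2 = discount_old(x, fuse_opinions(y1, y2))
print(scenario_1)   # approx. (0.430, 0.222, 0.348)
print(scenario_2)   # approx. (0.382, 0.191, 0.427): the two results differ
```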

The product of a scalar and an opinion

Let x = (xb, xd, xu) be an opinion based on evidence (p, n). Let λ ≥ 0 be a scalar. The product λ ⋅ x is defined[1] as (λ p, λ n) in evidence space, which corresponds to

[math]\displaystyle{ \lambda\cdot x=\frac{(\lambda x_{\rm b},\lambda x_{\rm d},x_{\rm u})} {\lambda(x_{\rm b}+x_{\rm d})+x_{\rm u}} \quad\quad (3) }[/math]

in opinion space.
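
A short sketch of (3), again with illustrative names, together with a check that scaling the opinion is the same as scaling the underlying evidence:

```python
def scale_opinion(lam, x):
    """Scalar product λ · x, Eq. (3): scales the underlying evidence by λ >= 0."""
    b, d, u = x
    s = lam * (b + d) + u
    return (lam * b / s, lam * d / s, u / s)

# Scaling the opinion equals scaling the evidence: 0.5 · opinion(8, 2) == opinion(4, 1).
assert all(abs(a - b) < 1e-12
           for a, b in zip(scale_opinion(0.5, evidence_to_opinion(8, 2)),
                           evidence_to_opinion(4, 1)))
```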

The discounting rule (☒) in EBSL

Let x and y be opinions. Let g be a mapping from opinion space to [0,1] satisfying g(B) = 1 and g(D) = 0.

In EBSL the discounting of y through x is denoted as x ☒ y and defined as[1]

[math]\displaystyle{ \quad x \boxtimes y \stackrel{\rm def}{=} g(x)\cdot y, \quad \quad (4) }[/math]

with the "dot" product as specified in (3).

The function g can be chosen at will, depending on the context. The ☒ rule has a very simple interpretation in evidence space: Due to the disbelief and uncertainty present in x, only a fraction g(x) of the evidence in y survives.
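
As a sketch, one admissible choice is g(x) = xb, which satisfies g(B) = 1 and g(D) = 0; the actual choice of g is application-dependent. The code below builds on scale_opinion from the previous sketch.

```python
def g(x):
    """One admissible weight function: g(x) = b (satisfies g(B) = 1, g(D) = 0)."""
    return x[0]

def discount_ebsl(x, y):
    """EBSL discounting x ☒ y = g(x) · y, Eq. (4), using the scalar product (3)."""
    return scale_opinion(g(x), y)
```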

The ☒ operation avoids all the inconsistencies of the ⊗ operation. The following properties hold,

  • [math]\displaystyle{ x\boxtimes (y_1\oplus y_2)=(x\boxtimes y_1)\oplus (x\boxtimes y_2) }[/math]
  • [math]\displaystyle{ x_1\boxtimes(x_2\boxtimes y)=x_2\boxtimes(x_1\boxtimes y) }[/math].

There is no associativity, i.e. [math]\displaystyle{ x_1\boxtimes(x_2\boxtimes y)\neq(x_1\boxtimes x_2)\boxtimes y }[/math], in contrast to the ⊗ operation. This is not a problem, since the flow of information in a trust network has a well-defined direction.

Also, we have

[math]\displaystyle{ (x_1\oplus x_2)\boxtimes y \neq (x_1\boxtimes y)\oplus (x_2\boxtimes y). }[/math]
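
The first property (distributivity of ☒ over ⊕ in its right argument) can be checked numerically with the sketches above; the example opinions are assumed.

```python
x  = (0.7, 0.1, 0.2)
y1 = evidence_to_opinion(4, 1)
y2 = evidence_to_opinion(2, 2)

lhs = fuse_opinions(discount_ebsl(x, y1), discount_ebsl(x, y2))
rhs = discount_ebsl(x, fuse_opinions(y1, y2))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))   # x ☒ (y1 ⊕ y2) == (x ☒ y1) ⊕ (x ☒ y2)
```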

Computation of opinions in arbitrary trust networks

EBSL makes it possible to compute trust values even when the graph connecting the users in the trust network is complicated. This makes EBSL interesting e.g. for reputation systems.

Let Aij be the opinion that user i has about the trustworthiness of user j, based on direct evidence, e.g. direct interactions between i and j. We set Aii = U. Let every user publish these direct opinions in a reliable way; the matrix A is public and its integrity is guaranteed. Based on all the available trust information, direct as well as indirect, what should a user conclude about the trustworthiness of all the other users? In general this is a nontrivial problem because of the complicated connection graphs, in which loops may occur. The problem is to find a "reputation" matrix R that consistently combines the direct and indirect evidence. In EBSL the following self-consistent equation must be satisfied[1] by R,

[math]\displaystyle{ \mbox{For }i\neq j:\quad R_{ij}=A_{ij}\oplus \sum_{k:\,k\neq i}R_{ik}\boxtimes A_{kj} \quad\quad (5) }[/math]
[math]\displaystyle{ \mbox{Diagonal: }R_{ii}=B. }[/math]

Here the "Σ" stands for ⊕ operations. The diagonal is set to full belief since everybody trusts himself implicitly, independent of other users' opinions.

User i forms an opinion about j by combining his direct opinion Aij with other users' opinions Akj. The indirect evidence is weighted with a scalar that depends on the reputation of the intermediary: g(Rik).

Equation (5) can be written compactly in matrix form,

[math]\displaystyle{ R = B{\mathbf 1}\oplus (R\boxtimes A).\quad\quad (6) }[/math]

Here [math]\displaystyle{ {\mathbf 1} }[/math] is the identity matrix, and the summation in the matrix product is executed with ⊕. Eq. (6) is a fixed-point equation, similar to those arising for Markov chains. It can be solved iteratively. Let X be a square matrix with the same dimensions as A. Define a function f as

[math]\displaystyle{ f(X)=B{\mathbf 1}\oplus (X\boxtimes A) }[/math].

Pick a starting matrix [math]\displaystyle{ X_0 }[/math] and repeatedly apply f until the result does not change any more, i.e. a fixed point [math]\displaystyle{ R=f(R) }[/math] is reached. Independent of the choice of [math]\displaystyle{ X_0 }[/math], after one iteration the diagonal is B1 and stays B1 in all further iterations.
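
A minimal sketch of this iteration for a three-user network, built on the helpers above; the direct-trust values in A are assumed purely for illustration, and g(x) = xb is used as before.

```python
B, U = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)              # full belief, full uncertainty

def f(X, A):
    """One iteration of Eq. (6): f(X) = B·1 ⊕ (X ☒ A), evaluated entrywise as in Eq. (5)."""
    n = len(A)
    R = [[B] * n for _ in range(n)]                  # diagonal entries stay at B
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            acc = A[i][j]                            # direct opinion of i about j
            for k in range(n):
                if k != i:                           # indirect evidence via user k
                    acc = fuse_opinions(acc, discount_ebsl(X[i][k], A[k][j]))
            R[i][j] = acc
    return R

# Direct-trust matrix for three users (assumed example values); A[i][i] = U.
A = [[U, evidence_to_opinion(8, 1), evidence_to_opinion(1, 1)],
     [evidence_to_opinion(5, 1), U, evidence_to_opinion(6, 2)],
     [evidence_to_opinion(0, 4), evidence_to_opinion(2, 2), U]]

R = A
for _ in range(100):                                 # iterate toward the fixed point R = f(R)
    R = f(R, A)
```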

If Eq.(6) were an ordinary matrix equation [math]\displaystyle{ R={\mathbf 1}+RA }[/math] for real-valued (or complex) Aij, it would have solution [math]\displaystyle{ R=({\mathbf 1}-A)^{-1}={\mathbf 1}+\sum_{k\geq 1}A^k }[/math]. However, the opinion algebra does not allow for such a simple expression. Instead we have:

[math]\displaystyle{ f^2(X_0) = B{\mathbf 1}\oplus \big((B{\mathbf 1}\oplus (X_0\boxtimes A))\boxtimes A\big) }[/math]

[math]\displaystyle{ f^3(X_0) = B{\mathbf 1}\oplus \Big(\big(B{\mathbf 1}\oplus ((B{\mathbf 1}\oplus (X_0\boxtimes A))\boxtimes A)\big)\boxtimes A\Big) }[/math]

which in general cannot be simplified.

References

  1. Skoric, B.; Zannone, N. "Flow-based reputation with uncertainty: Evidence-Based Subjective Logic". International Journal of Information Security. doi:10.1007/s10207-015-0298-5.
  2. arXiv:1402.3319.
  3. Jøsang, A. "A Logic for Uncertain Probabilities". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9(3), pp. 279–311, June 2001.