Uncertainty theory

From HandWiki

Uncertainty theory is a branch of mathematics based on the axioms of normality, monotonicity, self-duality, countable subadditivity, and product measure.

Mathematical measures of the likelihood of an event include probability measure, capacity, fuzzy measure, possibility measure, and credibility measure, as well as uncertain measure.

Four axioms

Axiom 1. (Normality Axiom) [math]\displaystyle{ \mathcal{M}\{\Gamma\}=1\text{ for the universal set }\Gamma }[/math].

Axiom 2. (Self-Duality Axiom) [math]\displaystyle{ \mathcal{M}\{\Lambda\}+\mathcal{M}\{\Lambda^c\}=1\text{ for any event }\Lambda }[/math].

Axiom 3. (Countable Subadditivity Axiom) For every countable sequence of events [math]\displaystyle{ \Lambda_1,\Lambda_2,\ldots }[/math], we have

[math]\displaystyle{ \mathcal{M}\left\{\bigcup_{i=1}^\infty\Lambda_i\right\}\le\sum_{i=1}^\infty\mathcal{M}\{\Lambda_i\} }[/math].

Axiom 4. (Product Measure Axiom) Let [math]\displaystyle{ (\Gamma_k,\mathcal{L}_k,\mathcal{M}_k) }[/math] be uncertainty spaces for [math]\displaystyle{ k=1,2,\ldots,n }[/math]. Then the product uncertain measure [math]\displaystyle{ \mathcal{M} }[/math] is an uncertain measure on the product σ-algebra satisfying

[math]\displaystyle{ \mathcal{M}\left\{\prod_{i=1}^n\Lambda_i\right\}=\underset{1\le i\le n}{\operatorname{min} }\mathcal{M}_i\{\Lambda_i\} }[/math].

Principle. (Maximum Uncertainty Principle) For any event, if there are multiple reasonable values that an uncertain measure may take, then the value as close to 0.5 as possible is assigned to the event.
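The axioms can be checked mechanically on a small finite example. The sketch below (Python; the three-point space and the measure values are hypothetical, chosen by hand so the axioms hold) verifies normality, self-duality, and finite subadditivity over every event:

```python
from itertools import combinations

# Toy uncertainty space: a three-point universal set with a hand-picked
# (hypothetical) measure value for every event.
GAMMA = frozenset({"g1", "g2", "g3"})
M = {
    frozenset(): 0.0,
    frozenset({"g1"}): 0.6,
    frozenset({"g2"}): 0.3,
    frozenset({"g3"}): 0.2,
    frozenset({"g1", "g2"}): 0.8,
    frozenset({"g1", "g3"}): 0.7,
    frozenset({"g2", "g3"}): 0.4,
    GAMMA: 1.0,
}
EVENTS = [frozenset(c) for r in range(4) for c in combinations(sorted(GAMMA), r)]

assert M[GAMMA] == 1.0                                # Axiom 1: normality
for ev in EVENTS:                                     # Axiom 2: self-duality
    assert abs(M[ev] + M[GAMMA - ev] - 1.0) < 1e-9
for a in EVENTS:                                      # Axiom 3: subadditivity
    for b in EVENTS:                                  # (finite case)
        assert M[a | b] <= M[a] + M[b] + 1e-9
```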

Uncertain variables

An uncertain variable is a measurable function ξ from an uncertainty space [math]\displaystyle{ (\Gamma,L,M) }[/math] to the set of real numbers, i.e., for any Borel set B of real numbers, the set [math]\displaystyle{ \{\xi\in B\}=\{\gamma \in \Gamma\mid \xi(\gamma)\in B\} }[/math] is an event.

Uncertainty distribution

An uncertainty distribution is introduced to describe an uncertain variable.

Definition: The uncertainty distribution [math]\displaystyle{ \Phi(x):R \rightarrow [0,1] }[/math] of an uncertain variable ξ is defined by [math]\displaystyle{ \Phi(x)=M\{\xi\leq x\} }[/math].

Theorem (Peng and Iwamura, Sufficient and Necessary Condition for Uncertainty Distribution): A function [math]\displaystyle{ \Phi(x):R \rightarrow [0,1] }[/math] is an uncertainty distribution if and only if it is a monotone increasing function other than [math]\displaystyle{ \Phi (x) \equiv 0 }[/math] and [math]\displaystyle{ \Phi (x)\equiv 1 }[/math].
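A standard concrete example is the linear uncertain variable L(a, b), whose distribution rises from 0 at a to 1 at b. A minimal sketch (the helper name `linear_cdf` is illustrative):

```python
def linear_cdf(a, b):
    # Uncertainty distribution of the linear uncertain variable L(a, b):
    # 0 below a, rising linearly to 1 at b.
    return lambda x: min(1.0, max(0.0, (x - a) / (b - a)))

phi = linear_cdf(1.0, 3.0)
assert phi(0.0) == 0.0 and phi(4.0) == 1.0   # constant tails at 0 and 1
assert phi(2.0) == 0.5                       # increasing on (a, b)
```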

Independence

Definition: The uncertain variables [math]\displaystyle{ \xi_1,\xi_2,\ldots,\xi_m }[/math] are said to be independent if

[math]\displaystyle{ M\{\cap_{i=1}^m(\xi_i \in B_i)\}=\min_{1\leq i \leq m}M\{\xi_i \in B_i\} }[/math]

for any Borel sets [math]\displaystyle{ B_1,B_2,\ldots,B_m }[/math] of real numbers.

Theorem 1: The uncertain variables [math]\displaystyle{ \xi_1,\xi_2,\ldots,\xi_m }[/math] are independent if and only if

[math]\displaystyle{ M\{\cup_{i=1}^m(\xi_i \in B_i)\}=\max_{1\leq i \leq m}M\{\xi_i \in B_i\} }[/math]

for any Borel sets [math]\displaystyle{ B_1,B_2,\ldots,B_m }[/math] of real numbers.

Theorem 2: Let [math]\displaystyle{ \xi_1,\xi_2,\ldots,\xi_m }[/math] be independent uncertain variables, and [math]\displaystyle{ f_1,f_2,\ldots,f_m }[/math] measurable functions. Then [math]\displaystyle{ f_1(\xi_1),f_2(\xi_2),\ldots,f_m(\xi_m) }[/math] are independent uncertain variables.

Theorem 3: Let [math]\displaystyle{ \Phi_i }[/math] be the uncertainty distributions of uncertain variables [math]\displaystyle{ \xi_i,\quad i=1,2,\ldots,m }[/math], respectively, and [math]\displaystyle{ \Phi }[/math] the joint uncertainty distribution of the uncertain vector [math]\displaystyle{ (\xi_1,\xi_2,\ldots,\xi_m) }[/math]. If [math]\displaystyle{ \xi_1,\xi_2,\ldots,\xi_m }[/math] are independent, then we have

[math]\displaystyle{ \Phi(x_1, x_2, \ldots, x_m)=\min_{1\leq i \leq m}\Phi_i(x_i) }[/math]

for any real numbers [math]\displaystyle{ x_1, x_2, \ldots, x_m }[/math].
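Theorem 3 makes the joint distribution of independent variables easy to compute: it is the pointwise minimum of the marginals. A short sketch (linear uncertain variables used as illustrative marginals; helper names are mine):

```python
def linear_cdf(a, b):
    # Distribution of the linear uncertain variable L(a, b) (illustrative).
    return lambda x: min(1.0, max(0.0, (x - a) / (b - a)))

def joint_distribution(marginals):
    # Joint distribution of independent uncertain variables:
    # the pointwise minimum of the marginal distributions.
    return lambda *xs: min(p(x) for p, x in zip(marginals, xs))

phi = joint_distribution([linear_cdf(0, 2), linear_cdf(1, 5)])
assert phi(1.0, 3.0) == 0.5    # min(0.5, 0.5)
assert phi(0.5, 5.0) == 0.25   # min(0.25, 1.0)
```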

Operational law

Theorem: Let [math]\displaystyle{ \xi_1,\xi_2,\ldots,\xi_n }[/math] be independent uncertain variables, and [math]\displaystyle{ f: R^n \rightarrow R }[/math] a measurable function. Then [math]\displaystyle{ \xi=f(\xi_1,\xi_2,\ldots,\xi_n) }[/math] is an uncertain variable such that

[math]\displaystyle{ \mathcal{M}\{\xi\in B\}=\begin{cases} \underset{f(B_1,B_2,\cdots,B_n)\subset B}{\sup }\;\underset{1\le k\le n}{\min }\mathcal{M}_k\{\xi_k\in B_k\}, & \text{if } \underset{f(B_1,B_2,\cdots,B_n)\subset B}{\sup }\;\underset{1\le k\le n}{\min }\mathcal{M}_k\{\xi_k\in B_k\} \gt 0.5 \\ 1-\underset{f(B_1,B_2,\cdots,B_n)\subset B^c}{\sup }\;\underset{1\le k\le n}{\min }\mathcal{M}_k\{\xi_k\in B_k\}, & \text{if } \underset{f(B_1,B_2,\cdots,B_n)\subset B^c}{\sup }\;\underset{1\le k\le n}{\min }\mathcal{M}_k\{\xi_k\in B_k\} \gt 0.5 \\ 0.5, & \text{otherwise} \end{cases} }[/math]

where [math]\displaystyle{ B, B_1, B_2, \ldots, B_n }[/math] are Borel sets, and [math]\displaystyle{ f( B_1, B_2, \ldots, B_n)\subset B }[/math] means [math]\displaystyle{ f(x_1, x_2, \ldots, x_n) \in B }[/math] for any [math]\displaystyle{ x_1 \in B_1, x_2 \in B_2, \ldots, x_n \in B_n }[/math].
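For a strictly increasing function f, the operational law reduces to composing inverse distributions: the inverse distribution of [math]\displaystyle{ \xi=f(\xi_1,\ldots,\xi_n) }[/math] is [math]\displaystyle{ \Psi^{-1}(\alpha)=f(\Phi_1^{-1}(\alpha),\ldots,\Phi_n^{-1}(\alpha)) }[/math]. A sketch for f = sum with linear uncertain variables, so L(a₁, b₁) + L(a₂, b₂) should behave like L(a₁ + a₂, b₁ + b₂) (helper names are mine):

```python
def linear_inv(a, b):
    # Inverse uncertainty distribution of the linear variable L(a, b).
    return lambda alpha: a + alpha * (b - a)

def sum_inv(invs):
    # Operational law for a strictly increasing f (here f = sum):
    # Psi^{-1}(alpha) = f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(alpha)).
    return lambda alpha: sum(inv(alpha) for inv in invs)

psi_inv = sum_inv([linear_inv(1, 3), linear_inv(2, 6)])
assert psi_inv(0.0) == 3 and psi_inv(1.0) == 9   # endpoints of L(3, 9)
assert psi_inv(0.5) == 6.0                       # midpoint of L(3, 9)
```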

Expected Value

Definition: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable. Then the expected value of [math]\displaystyle{ \xi }[/math] is defined by

[math]\displaystyle{ E[\xi]=\int_0^{+\infty}M\{\xi\geq r\}dr-\int_{-\infty}^0M\{\xi\leq r\}dr }[/math]

provided that at least one of the two integrals is finite.

Theorem 1: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with uncertainty distribution [math]\displaystyle{ \Phi }[/math]. If the expected value exists, then

[math]\displaystyle{ E[\xi]=\int_0^{+\infty}(1-\Phi(x))dx-\int_{-\infty}^0\Phi(x)dx. }[/math]

Theorem 2: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with regular uncertainty distribution [math]\displaystyle{ \Phi }[/math]. If the expected value exists, then

[math]\displaystyle{ E[\xi]=\int_0^1\Phi^{-1}(\alpha)d\alpha. }[/math]
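Theorem 2 gives a direct numerical recipe: integrate the inverse distribution over (0, 1). A midpoint-rule sketch on the linear variable L(1, 3), whose expected value is (1 + 3)/2 = 2 (helper names are mine):

```python
def expected_value(inv, n=100000):
    # Midpoint rule for E[xi] = integral of Phi^{-1}(alpha) over (0, 1).
    h = 1.0 / n
    return sum(inv((i + 0.5) * h) for i in range(n)) * h

inv = lambda alpha: 1 + 2 * alpha          # inverse distribution of L(1, 3)
assert abs(expected_value(inv) - 2.0) < 1e-6
```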

Theorem 3: Let [math]\displaystyle{ \xi }[/math] and [math]\displaystyle{ \eta }[/math] be independent uncertain variables with finite expected values. Then for any real numbers [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], we have

[math]\displaystyle{ E[a\xi+b\eta]=aE[\xi]+bE[\eta]. }[/math]

Variance

Definition: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with finite expected value [math]\displaystyle{ e }[/math]. Then the variance of [math]\displaystyle{ \xi }[/math] is defined by

[math]\displaystyle{ V[\xi]=E[(\xi-e)^2]. }[/math]

Theorem: If [math]\displaystyle{ \xi }[/math] is an uncertain variable with finite expected value, and [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are real numbers, then

[math]\displaystyle{ V[a\xi+b]=a^2V[\xi]. }[/math]
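For a regular distribution, a commonly used formula (a stipulation in this framework, since the distribution does not uniquely determine the expected value of (ξ − e)²) is [math]\displaystyle{ V[\xi]=\int_0^1(\Phi^{-1}(\alpha)-e)^2d\alpha }[/math]. The sketch below uses it to check [math]\displaystyle{ V[a\xi+b]=a^2V[\xi] }[/math] numerically on a linear variable (helper names are mine):

```python
def variance(inv, n=200000):
    # V[xi] = integral of (Phi^{-1}(alpha) - e)^2 over (0, 1): a commonly
    # used stipulation for regular distributions; e is the expected value.
    h = 1.0 / n
    e = sum(inv((i + 0.5) * h) for i in range(n)) * h
    return sum((inv((i + 0.5) * h) - e) ** 2 for i in range(n)) * h

inv = lambda alpha: 1 + 2 * alpha              # L(1, 3); variance (b-a)^2/12
scaled = lambda alpha: 3 * inv(alpha) + 5      # inverse distribution of 3*xi + 5
assert abs(variance(inv) - 4.0 / 12.0) < 1e-6
assert abs(variance(scaled) - 9 * variance(inv)) < 1e-6   # V[3 xi + 5] = 9 V[xi]
```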

Critical value

Definition: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable, and [math]\displaystyle{ \alpha\in(0,1] }[/math]. Then

[math]\displaystyle{ \xi_{sup}(\alpha)=\sup \{r \mid M\{\xi\geq r\}\geq\alpha\} }[/math]

is called the α-optimistic value to [math]\displaystyle{ \xi }[/math], and

[math]\displaystyle{ \xi_{inf}(\alpha)=\inf \{r \mid M\{\xi\leq r\}\geq\alpha\} }[/math]

is called the α-pessimistic value to [math]\displaystyle{ \xi }[/math].

Theorem 1: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with regular uncertainty distribution [math]\displaystyle{ \Phi }[/math]. Then its α-optimistic value and α-pessimistic value are

[math]\displaystyle{ \xi_{sup}(\alpha)=\Phi^{-1}(1-\alpha) }[/math],
[math]\displaystyle{ \xi_{inf}(\alpha)=\Phi^{-1}(\alpha) }[/math].

Theorem 2: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable, and [math]\displaystyle{ \alpha\in(0,1] }[/math]. Then we have

  • if [math]\displaystyle{ \alpha\gt 0.5 }[/math], then [math]\displaystyle{ \xi_{inf}(\alpha)\geq \xi_{sup}(\alpha) }[/math];
  • if [math]\displaystyle{ \alpha\leq 0.5 }[/math], then [math]\displaystyle{ \xi_{inf}(\alpha)\leq \xi_{sup}(\alpha) }[/math].
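Theorem 1 turns critical values into simple inverse-distribution lookups. A sketch on the linear variable L(1, 3), also checking the α > 0.5 case of Theorem 2 (helper names are mine):

```python
def critical_values(inv):
    # Theorem 1: xi_sup(alpha) = Phi^{-1}(1 - alpha),
    #            xi_inf(alpha) = Phi^{-1}(alpha).
    return (lambda a: inv(1 - a)), (lambda a: inv(a))

inv = lambda alpha: 1 + 2 * alpha           # inverse distribution of L(1, 3)
optimistic, pessimistic = critical_values(inv)
assert abs(optimistic(0.9) - 1.2) < 1e-9    # Phi^{-1}(0.1)
assert abs(pessimistic(0.9) - 2.8) < 1e-9   # Phi^{-1}(0.9)
assert pessimistic(0.9) >= optimistic(0.9)  # alpha > 0.5: inf >= sup
```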

Theorem 3: Suppose that [math]\displaystyle{ \xi }[/math] and [math]\displaystyle{ \eta }[/math] are independent uncertain variables, and [math]\displaystyle{ \alpha\in(0,1] }[/math]. Then we have

[math]\displaystyle{ (\xi + \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)+\eta_{sup}(\alpha) }[/math],

[math]\displaystyle{ (\xi + \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)+\eta_{inf}(\alpha) }[/math],

[math]\displaystyle{ (\xi \vee \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)\vee\eta_{sup}(\alpha) }[/math],

[math]\displaystyle{ (\xi \vee \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)\vee\eta_{inf}(\alpha) }[/math],

[math]\displaystyle{ (\xi \wedge \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)\wedge\eta_{sup}(\alpha) }[/math],

[math]\displaystyle{ (\xi \wedge \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)\wedge\eta_{inf}(\alpha) }[/math].

Entropy

Definition: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with uncertainty distribution [math]\displaystyle{ \Phi }[/math]. Then its entropy is defined by

[math]\displaystyle{ H[\xi]=\int_{-\infty}^{+\infty} S(\Phi(x))dx }[/math]

where [math]\displaystyle{ S(t) = -t \ln(t) - (1-t) \ln(1-t) }[/math].

Theorem 1 (Dai and Chen): Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with regular uncertainty distribution [math]\displaystyle{ \Phi }[/math]. Then

[math]\displaystyle{ H[\xi]=\int_0^1\Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha} d\alpha. }[/math]
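The definition can be evaluated numerically; for the linear variable L(a, b) the entropy works out to (b − a)/2. A midpoint-rule sketch (function names are illustrative):

```python
import math

def entropy(phi, lo, hi, n=200000):
    # H[xi] = integral of S(Phi(x)) with S(t) = -t ln t - (1-t) ln(1-t);
    # the integrand vanishes where Phi is 0 or 1, so [lo, hi] suffices.
    def S(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return -t * math.log(t) - (1 - t) * math.log(1 - t)
    h = (hi - lo) / n
    return sum(S(phi(lo + (i + 0.5) * h)) for i in range(n)) * h

phi = lambda x: min(1.0, max(0.0, (x - 1.0) / 2.0))   # distribution of L(1, 3)
assert abs(entropy(phi, 1.0, 3.0) - 1.0) < 1e-4       # (b - a)/2 = 1
```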

Theorem 2: Let [math]\displaystyle{ \xi }[/math] and [math]\displaystyle{ \eta }[/math] be independent uncertain variables. Then for any real numbers [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], we have

[math]\displaystyle{ H[a\xi+b\eta] = |a|H[\xi] + |b|H[\eta]. }[/math]

Theorem 3: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with an arbitrary uncertainty distribution but finite expected value [math]\displaystyle{ e }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math]. Then

[math]\displaystyle{ H[\xi]\leq\frac{\pi\sigma}{\sqrt{3}}. }[/math]

Inequalities

Theorem 1 (Liu, Markov Inequality): Let [math]\displaystyle{ \xi }[/math] be an uncertain variable. Then for any given numbers [math]\displaystyle{ t \gt 0 }[/math] and [math]\displaystyle{ p \gt 0 }[/math], we have

[math]\displaystyle{ M\{|\xi|\geq t\}\leq \frac{E[|\xi|^p]}{t^p}. }[/math]

Theorem 2 (Liu, Chebyshev Inequality) Let [math]\displaystyle{ \xi }[/math] be an uncertain variable whose variance [math]\displaystyle{ V[\xi] }[/math] exists. Then for any given number [math]\displaystyle{ t \gt 0 }[/math], we have

[math]\displaystyle{ M\{|\xi-E[\xi]|\geq t\}\leq \frac{V[\xi]}{t^2}. }[/math]

Theorem 3 (Liu, Hölder's Inequality): Let [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] be positive numbers with [math]\displaystyle{ 1/p + 1/q = 1 }[/math], and let [math]\displaystyle{ \xi }[/math] and [math]\displaystyle{ \eta }[/math] be independent uncertain variables with [math]\displaystyle{ E[|\xi|^p]\lt \infty }[/math] and [math]\displaystyle{ E[|\eta|^q] \lt \infty }[/math]. Then we have

[math]\displaystyle{ E[|\xi\eta|]\leq \sqrt[p]{E[|\xi|^p]} \sqrt[q]{E[|\eta|^q]}. }[/math]

Theorem 4 (Liu, Minkowski Inequality): Let [math]\displaystyle{ p }[/math] be a real number with [math]\displaystyle{ p\geq 1 }[/math], and let [math]\displaystyle{ \xi }[/math] and [math]\displaystyle{ \eta }[/math] be independent uncertain variables with [math]\displaystyle{ E[|\xi|^p] \lt \infty }[/math] and [math]\displaystyle{ E[|\eta|^p] \lt \infty }[/math]. Then we have

[math]\displaystyle{ \sqrt[p]{E[|\xi+\eta|^p]}\leq \sqrt[p]{E[|\xi|^p]}+\sqrt[p]{E[|\eta|^p]}. }[/math]

Convergence concept

Definition 1: Suppose that [math]\displaystyle{ \xi,\xi_1,\xi_2,\ldots }[/math] are uncertain variables defined on the uncertainty space [math]\displaystyle{ (\Gamma,L,M) }[/math]. The sequence [math]\displaystyle{ \{\xi_i\} }[/math] is said to be convergent a.s. to [math]\displaystyle{ \xi }[/math] if there exists an event [math]\displaystyle{ \Lambda }[/math] with [math]\displaystyle{ M\{\Lambda\} = 1 }[/math] such that

[math]\displaystyle{ \lim_{i\to\infty}|\xi_i(\gamma)-\xi(\gamma)|=0 }[/math]

for every [math]\displaystyle{ \gamma\in\Lambda }[/math]. In that case we write [math]\displaystyle{ \xi_i\to \xi }[/math] a.s.

Definition 2: Suppose that [math]\displaystyle{ \xi,\xi_1,\xi_2,\ldots }[/math] are uncertain variables. We say that the sequence [math]\displaystyle{ \{\xi_i\} }[/math] converges in measure to [math]\displaystyle{ \xi }[/math] if

[math]\displaystyle{ \lim_{i\to\infty}M\{|\xi_i-\xi|\geq \varepsilon \}=0 }[/math]

for every [math]\displaystyle{ \varepsilon\gt 0 }[/math].

Definition 3: Suppose that [math]\displaystyle{ \xi,\xi_1,\xi_2,\ldots }[/math] are uncertain variables with finite expected values. We say that the sequence [math]\displaystyle{ \{\xi_i\} }[/math] converges in mean to [math]\displaystyle{ \xi }[/math] if

[math]\displaystyle{ \lim_{i\to\infty}E[|\xi_i-\xi|]=0 }[/math].

Definition 4: Suppose that [math]\displaystyle{ \Phi,\Phi_1,\Phi_2,\ldots }[/math] are the uncertainty distributions of uncertain variables [math]\displaystyle{ \xi,\xi_1,\xi_2,\ldots }[/math], respectively. We say that the sequence [math]\displaystyle{ \{\xi_i\} }[/math] converges in distribution to [math]\displaystyle{ \xi }[/math] if [math]\displaystyle{ \Phi_i\rightarrow\Phi }[/math] at any continuity point of [math]\displaystyle{ \Phi }[/math].

Theorem 1: Convergence in mean [math]\displaystyle{ \Rightarrow }[/math] convergence in measure [math]\displaystyle{ \Rightarrow }[/math] convergence in distribution. However, convergence in mean and convergence almost surely do not imply each other, and the same holds for convergence almost surely and convergence in distribution.

Conditional uncertainty

Definition 1: Let [math]\displaystyle{ (\Gamma,L,M) }[/math] be an uncertainty space, and [math]\displaystyle{ A,B\in L }[/math]. Then the conditional uncertain measure of A given B is defined by

[math]\displaystyle{ \mathcal{M}\{A\vert B\}=\begin{cases} \displaystyle\frac{\mathcal{M}\{A\cap B\} }{\mathcal{M}\{B\} }, &\displaystyle\text{if }\frac{\mathcal{M}\{A\cap B\} }{\mathcal{M}\{B\} }\lt 0.5 \\ \displaystyle 1 - \frac{\mathcal{M}\{A^c\cap B\} }{\mathcal{M}\{B\} }, &\displaystyle\text{if } \frac{\mathcal{M}\{A^c\cap B\} }{\mathcal{M}\{B\} }\lt 0.5 \\ 0.5, & \text{otherwise} \end{cases} }[/math]
[math]\displaystyle{ \text{provided that } \mathcal{M}\{B\}\gt 0 }[/math]

Theorem 1: Let [math]\displaystyle{ (\Gamma,L,M) }[/math] be an uncertainty space, and B an event with [math]\displaystyle{ M\{B\} \gt 0 }[/math]. Then M{·|B} defined by Definition 1 is an uncertain measure, and [math]\displaystyle{ (\Gamma,L,M\{\mbox{·}|B\}) }[/math] is an uncertainty space.

Definition 2: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable on [math]\displaystyle{ (\Gamma,L,M) }[/math]. A conditional uncertain variable of [math]\displaystyle{ \xi }[/math] given B is a measurable function [math]\displaystyle{ \xi|_B }[/math] from the conditional uncertainty space [math]\displaystyle{ (\Gamma,L,M\{\mbox{·}|B\}) }[/math] to the set of real numbers such that

[math]\displaystyle{ \xi|_B(\gamma)=\xi(\gamma),\forall \gamma \in \Gamma }[/math].

Definition 3: The conditional uncertainty distribution [math]\displaystyle{ \Phi(x|B): R \rightarrow[0, 1] }[/math] of an uncertain variable [math]\displaystyle{ \xi }[/math] given B is defined by

[math]\displaystyle{ \Phi(x|B)=M\{\xi\leq x|B\} }[/math]

provided that [math]\displaystyle{ M\{B\}\gt 0 }[/math].

Theorem 2: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with regular uncertainty distribution [math]\displaystyle{ \Phi(x) }[/math], and [math]\displaystyle{ t }[/math] a real number with [math]\displaystyle{ \Phi(t) \lt 1 }[/math]. Then the conditional uncertainty distribution of [math]\displaystyle{ \xi }[/math] given [math]\displaystyle{ \xi\gt t }[/math] is

[math]\displaystyle{ \Phi(x\vert(t,+\infty))=\begin{cases} 0, & \text{if }\Phi(x)\le\Phi(t)\\ \displaystyle\frac{\Phi(x)}{1-\Phi(t)}\land 0.5, & \text{if }\Phi(t)\lt \Phi(x)\le(1+\Phi(t))/2 \\ \displaystyle\frac{\Phi(x)-\Phi(t)}{1-\Phi(t)}, & \text{if }(1+\Phi(t))/2\le\Phi(x) \end{cases} }[/math]
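A direct transcription of this case analysis (the `conditional_cdf_given_gt` name and the linear example are mine):

```python
def conditional_cdf_given_gt(phi, t):
    # Conditional uncertainty distribution of xi given xi > t (Theorem 2).
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

phi = lambda x: min(1.0, max(0.0, (x - 1.0) / 2.0))   # distribution of L(1, 3)
cond = conditional_cdf_given_gt(phi, 2.0)             # condition on xi > 2
assert cond(1.5) == 0.0      # below the conditioning threshold
assert cond(2.4) == 0.5      # capped middle branch
assert cond(3.0) == 1.0      # upper branch reaches 1 at b
```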

Theorem 3: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable with regular uncertainty distribution [math]\displaystyle{ \Phi(x) }[/math], and [math]\displaystyle{ t }[/math] a real number with [math]\displaystyle{ \Phi(t)\gt 0 }[/math]. Then the conditional uncertainty distribution of [math]\displaystyle{ \xi }[/math] given [math]\displaystyle{ \xi\leq t }[/math] is

[math]\displaystyle{ \Phi(x\vert(-\infty,t])=\begin{cases} \displaystyle\frac{\Phi(x)}{\Phi(t)}, & \text{if }\Phi(x)\le\Phi(t)/2 \\ \displaystyle\frac{\Phi(x)+\Phi(t)-1}{\Phi(t)}\lor 0.5, & \text{if }\Phi(t)/2\le\Phi(x)\lt \Phi(t) \\ 1, & \text{if }\Phi(t)\le\Phi(x) \end{cases} }[/math]

Definition 4: Let [math]\displaystyle{ \xi }[/math] be an uncertain variable. Then the conditional expected value of [math]\displaystyle{ \xi }[/math] given B is defined by

[math]\displaystyle{ E[\xi|B]=\int_0^{+\infty}M\{\xi\geq r|B\}dr-\int_{-\infty}^0M\{\xi\leq r|B\}dr }[/math]

provided that at least one of the two integrals is finite.

Sources

  • Xin Gao, Some Properties of Continuous Uncertain Measure, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol.17, No.3, 419-426, 2009.
  • Cuilian You, Some Convergence Theorems of Uncertain Sequences, Mathematical and Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
  • Yuhan Liu, How to Generate Uncertain Measures, Proceedings of Tenth National Youth Conference on Information and Management Sciences, August 3–7, 2008, Luoyang, pp. 23–26.
  • Baoding Liu, Uncertainty Theory, 4th ed., Springer-Verlag, Berlin, 2009.
  • Baoding Liu, Some Research Problems in Uncertainty Theory, Journal of Uncertain Systems, Vol.3, No.1, 3-10, 2009.
  • Yang Zuo, Xiaoyu Ji, Theoretical Foundation of Uncertain Dominance, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 827–832.
  • Yuhan Liu and Minghu Ha, Expected Value of Function of Uncertain Variables, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 779–781.
  • Zhongfeng Qin, On Lognormal Uncertain Variable, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 753–755.
  • Jin Peng, Value at Risk and Tail Value at Risk in Uncertain Environment, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 787–793.
  • Yi Peng, U-Curve and U-Coefficient in Uncertain Environment, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 815–820.
  • Wei Liu, Jiuping Xu, Some Properties on Expected Value Operator for Uncertain Variables, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 808–811.
  • Xiaohu Yang, Moments and Tails Inequality within the Framework of Uncertainty Theory, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 812–814.
  • Yuan Gao, Analysis of k-out-of-n System with Uncertain Lifetimes, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 794–797.
  • Xin Gao, Shuzhen Sun, Variance Formula for Trapezoidal Uncertain Variables, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 853–855.
  • Zixiong Peng, A Sufficient and Necessary Condition of Product Uncertain Null Set, Proceedings of the Eighth International Conference on Information and Management Sciences, Kunming, China, July 20–28, 2009, pp. 798–801.