Uncertainty Theory
Baoding Liu
Fourth Edition
Springer Uncertainty Research
Springer Uncertainty Research is a book series that seeks to publish high-quality
monographs, texts, and edited volumes on a wide range of topics in both
fundamental and applied research on uncertainty. New publications are always
solicited. This book series provides rapid publication with worldwide distribution.
Editor-in-Chief
Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
http://orsc.edu.cn/liu
Email: [email protected]
Executive Editor-in-Chief
Kai Yao
School of Management
University of Chinese Academy of Sciences
Beijing 100190, China
http://orsc.edu.cn/~kyao
Email: [email protected]
Contents

Preface
0 Introduction
  0.1 Indeterminacy
  0.2 Frequency
  0.3 Belief Degree
  0.4 Summary
1 Uncertain Measure
  1.1 Measurable Space
  1.2 Event
  1.3 Uncertain Measure
  1.4 Uncertainty Space
  1.5 Product Uncertain Measure
  1.6 Independence
  1.7 Polyrectangular Theorem
  1.8 Conditional Uncertain Measure
  1.9 Bibliographic Notes
2 Uncertain Variable
  2.1 Uncertain Variable
  2.2 Uncertainty Distribution
  2.3 Independence
  2.4 Operational Law
  2.5 Expected Value
  2.6 Variance
  2.7 Moment
  2.8 Entropy
  2.9 Distance
  2.10 Conditional Uncertainty Distribution
  2.11 Uncertain Sequence
  2.12 Uncertain Vector
  2.13 Bibliographic Notes
Bibliography
Index
Preface
Uncertain Measure
Uncertain Variable
Uncertain Programming
Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting expert's
experimental data by uncertainty theory. Chapter 4 will present a questionnaire
survey for collecting expert's experimental data. In order to determine
uncertainty distributions from those expert's experimental data, Chapter 4 will
also introduce the empirical uncertainty distribution, the principle of least
squares, the method of moments, and the Delphi method.
Uncertain Set
An uncertain set is a set-valued function on an uncertainty space, and attempts
to model "unsharp concepts". The main difference between an uncertain set and
an uncertain variable is that the former takes sets as its values while the
latter takes points. Uncertain set theory will be introduced in Chapter 9. In
order to determine membership functions, Chapter 9 will also provide some
methods of uncertain statistics.
Preface xiii
Uncertain Logic
Some knowledge in human brain is actually an uncertain set. This fact en-
courages us to design an uncertain logic that is a methodology for calculating
the truth values of uncertain propositions via uncertain set theory. Uncertain
logic may provide a flexible means for extracting linguistic summary from a
collection of raw data. Chapter 10 will be devoted to uncertain logic and
linguistic summarizer.
Uncertain Inference
Uncertain inference is a process of deriving consequences from human knowl-
edge via uncertain set theory. Chapter 11 will present a set of uncertain
inference rules, uncertain system, and uncertain control with application to
an inverted pendulum system.
Uncertain Process
An uncertain process is essentially a sequence of uncertain variables indexed
by time. Thus an uncertain process is usually used to model uncertain phe-
nomena that vary with time. Chapter 12 is devoted to basic concepts of
uncertain process and uncertainty distribution. In addition, extreme value
theorem, first hitting time and time integral of uncertain processes are also
introduced. Chapter 13 deals with uncertain renewal process, renewal reward
process, and alternating renewal process. Chapter 13 also provides block re-
placement policy, age replacement policy, and an uncertain insurance model.
Uncertain Calculus
Uncertain calculus is a branch of mathematics that deals with differentiation
and integration of uncertain processes. Chapter 14 will introduce Liu process
that is a stationary independent increment process whose increments are
normal uncertain variables, and discuss Liu integral that is a type of uncertain
integral with respect to Liu process. In addition, the fundamental theorem of
uncertain calculus will be proved in this chapter from which the techniques
of chain rule, change of variables, and integration by parts are also derived.
time, and time integral of solution are provided. Furthermore, some numeri-
cal methods for solving general uncertain differential equations are designed.
Uncertain Finance
As applications of uncertain differential equation, Chapter 16 will discuss
uncertain stock model, uncertain interest rate model, and uncertain currency
model.
Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.
Purpose
The purpose of this book is to equip readers with a branch of axiomatic
mathematics to deal with belief degrees. The textbook is suitable for
researchers, engineers, and students in the fields of mathematics, information
science, operations research, industrial engineering, computer science,
artificial intelligence, automation, economics, and management science.
Acknowledgment
This work was supported by National Natural Science Foundation of China
Grant No.61273044.
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
May 2014
To My Wife Jinlan
Chapter 0
Introduction
Real decisions are usually made in the state of indeterminacy. For model-
ing indeterminacy, there exist two mathematical systems, one is probability
theory (Kolmogorov, 1933) and the other is uncertainty theory (Liu, 2007).
Probability is interpreted as frequency, while uncertainty is interpreted as
personal belief degree.
What is indeterminacy? What is frequency? What is belief degree? This
chapter will answer these questions, and show in what situation we should use
probability theory and in what situation we should use uncertainty theory.
Finally, it is concluded that a rational man behaves as if he used uncertainty
theory.
0.1 Indeterminacy
By indeterminacy we mean the phenomena whose outcomes cannot be ex-
actly predicted in advance. For example, we cannot exactly predict which
face will appear before we toss dice. Thus “tossing dice” is a type of in-
determinate phenomenon. As another example, we cannot exactly predict
tomorrow’s stock price. That is, “stock price” is also a type of indetermi-
nate phenomenon. Some other instances of indeterminacy include “roulette
wheel”, “product lifetime”, “market demand”, “bridge strength”, “travel dis-
tance”, etc.
Indeterminacy is absolute, while determinacy is relative. This is the rea-
son why we say real decisions are usually made in the state of indeterminacy.
How to model indeterminacy is thus an important research subject in not
only mathematics but also science and engineering.
In order to describe an indeterminate quantity, personally I think there
exist only two ways, one is frequency generated by samples (i.e., historical
data), and the other is belief degree evaluated by domain experts. Could you
imagine a third way?
0.2 Frequency
Assume we have collected a set of samples for some indeterminate quantity
(e.g., stock price). By cumulative frequency we mean a function representing
the percentage of all samples that fall into the left side of the current
point. The cumulative frequency is thus a step function (see Figure 1) that
increases as the current point moves from left to right.
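As a quick numerical illustration (not from the book), the cumulative frequency can be computed directly from a sample set. The price samples below are invented, and "falls into the left side" is read as "≤ the current point":

```python
def cumulative_frequency(samples, x):
    """Percentage of samples falling to the left of the point x (a step function of x)."""
    return sum(1 for s in samples if s <= x) / len(samples)

# hypothetical stock-price samples
prices = [92, 95, 95, 98, 101, 103, 107, 110]
print(cumulative_frequency(prices, 98))   # 0.5
print(cumulative_frequency(prices, 120))  # 1.0
```

As the point x sweeps from left to right, the value jumps by 1/n at each sample, which is exactly the step shape of Figure 1.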
[Figure 1: a cumulative frequency histogram — a step function rising from 0 to 1]
but to invite some domain experts to evaluate the belief degree that each
event will happen.
Belief degrees are familiar to all of us. The object of belief is an event
(i.e., a proposition). For example, "the sun will rise tomorrow", "it will be
sunny next week", and "John is a young man" are all objects of belief.
A belief degree represents the strength with which we believe the event will
happen. If we completely believe the event will happen, then the belief degree
is 1 (complete belief). If we think it is completely impossible, then the belief
degree is 0 (complete disbelief). If the event and its complementary event
are equally likely, then the belief degree for the event is 0.5, and that for the
complementary event is also 0.5. Generally, we will assign a number between
0 and 1 to the belief degree for each event. The higher the belief degree is,
the more strongly we believe the event will happen.
Assume a box contains 100 balls, each of which is known to be either red
or black, but we do not know how many of the balls are red and how many
are black. In this case, it is impossible for us to determine the probability of
drawing a red ball. However, the belief degree can be evaluated by us. For
example, the belief degree for drawing a red ball is 0.5 because “drawing a
red ball” and “drawing a black ball” are equally likely. Besides, the belief
degree for drawing a black ball is also 0.5.
The belief degree depends heavily on the personal knowledge (even includ-
ing preference) concerning the event. When the personal knowledge changes,
the belief degree changes too.
[Figure 2: a belief degree function]
that the bridge strength falls into the left side of the point x? The answer is
affirmative. For example, a reasonable value is
Φ(x) = 0,             if x < 80
       (x − 80)/40,   if 80 ≤ x ≤ 120        (1)
       1,             if x > 120.
See Figure 3. From the function Φ(x), we may infer that the belief degree
of “the bridge strength being less than 90 tons” is 0.25. In other words, it is
reasonable to infer that “I am 25% sure that the bridge strength is less than
90 tons”, or equivalently “I am 75% sure that the bridge strength is greater
than 90 tons”.
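The inference above can be checked numerically. The function below is a direct transcription of the piecewise belief degree function (1), nothing more:

```python
def belief_degree(x):
    """Belief degree that the bridge strength falls to the left of x (in tons), per (1)."""
    if x < 80:
        return 0.0
    if x <= 120:
        return (x - 80) / 40
    return 1.0

print(belief_degree(90))      # 0.25 — "25% sure the strength is less than 90 tons"
print(1 - belief_degree(90))  # 0.75 — "75% sure it is greater than 90 tons"
```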
[Figure 3: the belief degree function Φ(x), rising linearly from 0 at x = 80 to 1 at x = 120; x in tons]
However, when there do not exist any observed samples of the bridge strength
at the moment, we have to invite some bridge engineers to evaluate the belief
degrees about it. As we stated before, human beings usually estimate a much
wider range of values than the bridge strength actually takes because of
conservatism. Assume the belief degree function is

Φ(x) = 0,             if x < 80
       (x − 80)/40,   if 80 ≤ x ≤ 120        (3)
       1,             if x > 120.
See Figure 4. Let us imagine what will happen if the belief degree function is
treated as a probability distribution. At first, we have to regard the 50
bridge strengths as iid uniform random variables on [80, 120] in tons. If we
have the truck cross over the 50 bridges one by one, then we immediately have

Pr{"the truck can cross over the 50 bridges"} = 0.75^50 ≈ 0. (4)

Thus it is almost impossible that the truck crosses over the 50 bridges
successfully. Unfortunately, the results (2) and (4) are at opposite poles.
This example shows that, by inappropriately using probability theory, a sure
event becomes an impossible one. This error seems intolerable. Hence belief
degrees cannot be treated as subjective probability.
That is to say, we are 75% sure that the truck can cross over the 50 bridges
successfully. Here the degree 75% does not reach the true value 100%. But this
error is caused by the difference between belief degree and frequency, and is
not further magnified by uncertainty theory.
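The contrast between (4) and the 75% figure can be reproduced in a few lines. Here the minimum operation stands in for the uncertainty-theory treatment of independent events (via the product axiom of Chapter 1), while the product reproduces the probabilistic treatment:

```python
beliefs = [0.75] * 50   # degree 0.75 for each of the 50 bridges

prob = 1.0
for b in beliefs:       # probability treatment: multiply the 50 degrees
    prob *= b

unc = min(beliefs)      # uncertainty treatment: take the minimum

print(prob)  # ≈ 5.7e-07 — "almost impossible"
print(unc)   # 0.75 — "75% sure"
```

The product collapses toward 0 as the number of bridges grows, while the minimum stays at 0.75 no matter how many bridges there are.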
0.4 Summary
In order to model indeterminacy, many theories have been invented. What
theories are considered acceptable? Personally, I think that an acceptable
theory should be not only theoretically self-consistent but also the best among
others for solving at least one practical problem. On the basis of this principle,
I may conclude that there exist two mathematical systems, one is probability
theory and the other is uncertainty theory. It is emphasized that probability
theory is only applicable to modeling frequencies, and uncertainty theory
is only applicable to modeling belief degrees. In other words, frequency is
the empirical basis of probability theory, while belief degree is the empirical
[Figure 5: two histograms with fitted curves, left panel labeled Probability, right panel labeled Uncertainty]
Figure 5: When the sample size is large enough, the estimated probability
distribution (left curve) may be close enough to the cumulative frequency (left
histogram). In this case, probability theory is the only legitimate approach.
When the belief degrees are available (no samples), the estimated uncertainty
distribution (right curve) usually deviates far from the cumulative frequency
(right histogram but unknown). In this case, uncertainty theory is the only
legitimate approach.
Uncertain Measure
Uncertainty theory was founded by Liu [122] in 2007 and subsequently studied
by many researchers. Nowadays uncertainty theory has become a branch of
axiomatic mathematics for modeling belief degrees. This chapter will present
normality, duality, subadditivity and product axioms of uncertainty theory.
From those four axioms, this chapter will also introduce an uncertain measure
that is a fundamental concept in uncertainty theory. In addition, product
uncertain measure and conditional uncertain measure will be explored at the
end of this chapter.
∪_{i=1}^{n} Λi ∈ L. (1.1)
Example 1.1: The collection {∅, Γ} is the smallest σ-algebra over Γ, and
the power set (i.e., all subsets of Γ) is the largest σ-algebra.
Example 1.3: Let L be the collection of all finite disjoint unions of all
intervals of the form
Then L is an algebra over ℜ (the set of real numbers), but not a σ-algebra
because Λi = (0, (i − 1)/i] ∈ L for all i but

∪_{i=1}^{∞} Λi = (0, 1) ∉ L. (1.4)
Example 1.5: Let ℜ be the set of real numbers. Then L = {∅, ℜ} is a
σ-algebra over ℜ. Thus (ℜ, L) is a measurable space. Note that there exist
only two measurable sets in this space, one is ∅ and another is ℜ. Keep in
mind that intervals like [0, 1] and (0, +∞) are not measurable in this space!
Example 1.6: Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra
over Γ. Thus (Γ, L) is a measurable space. Furthermore, {a} and {b, c} are
measurable sets in this space, but {b}, {c}, {a, b}, {a, c} are not.
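On a finite universal set, Example 1.6 can be verified mechanically: a finite family is a σ-algebra iff it contains Γ and is closed under complement and pairwise union (countable unions reduce to pairwise ones on a finite universe). A minimal sketch:

```python
def is_sigma_algebra(family, gamma):
    """Check the σ-algebra conditions for a finite family of frozensets over gamma."""
    if gamma not in family:
        return False
    if any(gamma - s not in family for s in family):
        return False  # not closed under complement (this also forces ∅ = Γ^c in)
    return all(s | t in family for s in family for t in family)

gamma = frozenset('abc')
L = {frozenset(), frozenset('a'), frozenset('bc'), gamma}
print(is_sigma_algebra(L, gamma))                     # True, as in Example 1.6
print(is_sigma_algebra(L | {frozenset('b')}, gamma))  # False: complement {a, c} is missing
```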
Example 1.7: It has been proved that intervals, open sets, closed sets,
rational numbers, and irrational numbers are all Borel sets.
Example 1.8: There exists a non-Borel set over ℜ. Let [a] represent the set
of all rational numbers plus a. Note that if a1 − a2 is not a rational number,
then [a1 ] and [a2 ] are disjoint sets. Thus < is divided into an infinite number
of those disjoint sets. Let A be a new set containing precisely one element
from them. Then A is not a Borel set.
sup_{1≤i<∞} fi(γ);  inf_{1≤i<∞} fi(γ);  limsup_{i→∞} fi(γ);  liminf_{i→∞} fi(γ). (1.7)

Especially, if lim_{i→∞} fi(γ) exists for each γ, then the limit is also a
measurable function.
1.2 Event
Let (Γ, L) be a measurable space. Recall that each element Λ in L is called
a measurable set. The first action we take is to rename measurable set as
event in uncertainty theory.
How do we understand those terminologies? Let us illustrate them by an
indeterminate quantity (e.g. bridge strength). At first, the universal set Γ
consists of all possible outcomes of the indeterminate quantity. If we believe
that the possible bridge strengths range from 80 to 120 in tons, then the
universal set is
Γ = [80, 120]. (1.8)
Note that you may replace the universal set with an enlarged interval, and
it would have no impact.
The σ-algebra L should contain all events we are concerned about. Note
that event and proposition are synonymous although the former is a set and
the latter is a statement. Assume the first event we are concerned about
corresponds to the proposition “the bridge strength is less than or equal to
100 tons”. Then it may be represented by
Also assume the second event we are concerned about corresponds to the
proposition “the bridge strength is more than 100 tons”. Then it may be
represented by
Λ2 = (100, 120]. (1.10)
If we are only concerned about the above two events, then we may construct
a σ-algebra L containing the two events Λ1 and Λ2 , for example,
0.6, then all of us will think that the proposition is false with belief degree
0.4.
Remark 1.3: Given two events with known belief degrees, it is frequently asked
how the belief degree for their union is generated from the individuals.
Personally, I do not think there exists any rule to make it. A lot of surveys
showed that, generally speaking, the belief degree of a union of events is
neither the sum of the belief degrees of the individual events (e.g.
probability measure) nor their maximum (e.g. possibility measure). Perhaps
there is no explicit relation between the union and the individuals except for
the subadditivity axiom.
Remark 1.5: Although probability measure satisfies the above three axioms,
probability theory is not a special case of uncertainty theory because the
product probability measure does not satisfy the fourth axiom, namely the
product axiom on Page 17.
Definition 1.5 (Liu [122]) The set function M is called an uncertain mea-
sure if it satisfies the normality, duality, and subadditivity axioms.
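On a finite space the three axioms of Definition 1.5 can be checked by enumeration. The measure values below are hypothetical, chosen only to make all three axioms hold:

```python
gamma = frozenset('ab')
M = {frozenset(): 0.0, frozenset('a'): 0.6, frozenset('b'): 0.4, gamma: 1.0}

# normality: M{Γ} = 1
assert M[gamma] == 1.0
# duality: M{Λ} + M{Λc} = 1 for every event Λ
assert all(abs(M[s] + M[gamma - s] - 1.0) < 1e-12 for s in M)
# subadditivity: M{Λ ∪ ∆} ≤ M{Λ} + M{∆} for every pair of events
assert all(M[s | t] <= M[s] + M[t] + 1e-12 for s in M for t in M)
print("all three axioms hold")
```

Replacing 0.6/0.4 by, say, 0.6/0.5 would break duality, so the checks are not vacuous.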
Exercise 1.2: Suppose that λ(x) is a nonnegative function on ℜ (the set of
real numbers) such that

sup_{x∈ℜ} λ(x) = 0.5. (1.15)
Proof: The normality axiom says M{Γ} = 1, and the duality axiom says
M{Λc1} = 1 − M{Λ1}. Since Λ1 ⊂ Λ2, we have Γ = Λc1 ∪ Λ2. By using the
subadditivity axiom, we obtain

1 = M{Γ} ≤ M{Λc1} + M{Λ2} = 1 − M{Λ1} + M{Λ2}.

That is, M{Λ1} ≤ M{Λ2}.
Theorem 1.2 Suppose that M is an uncertain measure. Then the empty set
∅ has an uncertain measure zero, i.e.,
M{∅} = 0. (1.20)
Proof: Since ∅ = Γc and M{Γ} = 1, it follows from the duality axiom that
M{∅} = 1 − M{Γ} = 1 − 1 = 0.
Theorem 1.3 Suppose that M is an uncertain measure. Then for any event
Λ, we have
0 ≤ M{Λ} ≤ 1. (1.21)
Example 1.9: Assume Γ is the set of real numbers. Let α be a number with
0 < α ≤ 0.5. Define a set function as follows,
M{Λ} = 0,       if Λ = ∅
       α,       if Λ is upper bounded
       0.5,     if both Λ and Λc are upper unbounded        (1.25)
       1 − α,   if Λc is upper bounded
       1,       if Λ = Γ.
Γ = Γ1 × Γ2 × · · · (1.30)

that is the set of all ordered tuples of the form (γ1, γ2, · · · ), where
γk ∈ Γk for k = 1, 2, · · ·. A measurable rectangle in Γ is a set

Λ = Λ1 × Λ2 × · · · (1.31)

L = L1 × L2 × · · · (1.32)
Remark 1.6: Note that (1.33) defines a product uncertain measure only for
rectangles. How do we extend the uncertain measure M from the class of
rectangles to the product σ-algebra L? For each event Λ ∈ L, we have
M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},
           if sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5
       1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk},          (1.34)
           if sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} > 0.5
       0.5, otherwise.
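The extension (1.34) can be made concrete on two finite component spaces, where the suprema become maxima over all measurable rectangles. The component measures below are hypothetical, and events are represented as sets of pairs; this is a sketch for illustration, not the book's construction for infinite products:

```python
def product_measure(m1, m2, event):
    """Sketch of (1.34) for two finite uncertainty spaces.

    m1, m2 map events (frozensets) to their uncertain measures;
    event is a set of (γ1, γ2) pairs in the product space."""
    gamma1, gamma2 = max(m1, key=len), max(m2, key=len)   # the universal sets
    full = {(g1, g2) for g1 in gamma1 for g2 in gamma2}

    def sup_min(target):
        # sup over measurable rectangles Λ1 × Λ2 contained in target
        best = 0.0
        for l1 in m1:
            for l2 in m2:
                if {(g1, g2) for g1 in l1 for g2 in l2} <= target:
                    best = max(best, min(m1[l1], m2[l2]))
        return best

    inside, outside = sup_min(event), sup_min(full - event)
    if inside > 0.5:
        return inside
    if outside > 0.5:
        return 1.0 - outside
    return 0.5

# hypothetical component measures on Γ1 = {a, b} and Γ2 = {c, d}
m1 = {frozenset(): 0.0, frozenset('a'): 0.7, frozenset('b'): 0.3, frozenset('ab'): 1.0}
m2 = {frozenset(): 0.0, frozenset('c'): 0.6, frozenset('d'): 0.4, frozenset('cd'): 1.0}
print(product_measure(m1, m2, {('a', 'c')}))  # 0.6 = M1{a} ∧ M2{c}
```

Evaluating the complement {('a','d'), ('b','c'), ('b','d')} gives 0.4, so the duality M{Λ} + M{Λc} = 1 is preserved, as Theorem 1.6 asserts.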
Remark 1.7: Note that the sum of the uncertain measures of the maximum
rectangles in Λ and Λc is always less than or equal to 1, i.e.,
[Figure: the maximum rectangle Λ1 × Λ2 inside an event Λ on the product space Γ1 × Γ2]
Remark 1.8: If the sum of the uncertain measures of the maximum rect-
angles in Λ and Λc is just 1, i.e.,
Theorem 1.6 (Peng and Iwamura [185]) The product uncertain measure
defined by (1.34) is an uncertain measure.
Proof: In order to prove that the product uncertain measure (1.34) is indeed
an uncertain measure, we should verify that the product uncertain measure
satisfies the normality, duality and subadditivity axioms.
Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.
Step 2: We prove the duality, i.e., M{Λ} + M{Λc } = 1. The argument
breaks down into three cases. Case 1: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

It follows from (1.34) that M{Λ} = M{Λc} = 0.5, which proves the duality in
this case.
Step 3: Let us prove that M is an increasing set function. Suppose Λ
and ∆ are two events in L with Λ ⊂ ∆. The argument breaks down into
three cases. Case 1: Assume

sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then, since every rectangle contained in Λ is also contained in ∆,

M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk}
     ≤ sup_{∆1×∆2×···⊂∆} min_{1≤k<∞} Mk{∆k} = M{∆}.

Case 2: Assume

sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} > 0.5.

Then, since ∆c ⊂ Λc,

M{Λ} = 1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk}
     ≤ 1 − sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} = M{∆}.
Case 3: Assume
sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} ≤ 0.5.
Then
M{Λ} ≤ 0.5 ≤ 1 − M{∆c } = M{∆}.
Λ1 × Λ2 × · · · ⊂ Λc , ∆1 × ∆2 × · · · ⊂ ∆c
such that

1 − min_{1≤k<∞} Mk{Λk} ≤ M{Λ} + ε/2,   1 − min_{1≤k<∞} Mk{∆k} ≤ M{∆} + ε/2.
Note that
(Λ1 ∩ ∆1 ) × (Λ2 ∩ ∆2 ) × · · · ⊂ (Λ ∪ ∆)c .
It follows from the duality and subadditivity axioms that
Mk{Λk ∩ ∆k} = 1 − Mk{(Λk ∩ ∆k)c} = 1 − Mk{Λck ∪ ∆ck}
            ≥ 1 − (Mk{Λck} + Mk{∆ck})
            = 1 − (1 − Mk{Λk}) − (1 − Mk{∆k})
            = Mk{Λk} + Mk{∆k} − 1
            ≥ 1 − M{Λ} − M{∆} − ε

for each k, and therefore

M{Λ ∪ ∆} = 1 − M{(Λ ∪ ∆)c} ≤ 1 − min_{1≤k<∞} Mk{Λk ∩ ∆k} ≤ M{Λ} + M{∆} + ε.

Letting ε → 0, we obtain the subadditivity M{Λ ∪ ∆} ≤ M{Λ} + M{∆}.
Case 2: Assume M{Λ} ≥ 0.5 and M{∆} < 0.5. When M{Λ ∪ ∆} = 0.5, the
subadditivity is obvious. Now we consider the case M{Λ ∪ ∆} > 0.5, i.e.,
M{Λc ∩ ∆c } < 0.5. By using Λc ∪ ∆ = (Λc ∩ ∆c ) ∪ ∆ and Case 1, we get
Thus
Case 3: If both M{Λ} ≥ 0.5 and M{∆} ≥ 0.5, then the subadditivity is
obvious because M{Λ} + M{∆} ≥ 1. The theorem is proved.
1.6 Independence
Definition 1.10 (Liu [129]) The events Λ1 , Λ2 , · · · , Λn are said to be inde-
pendent if
M{∩_{i=1}^{n} Λ∗i} = ∧_{i=1}^{n} M{Λ∗i} (1.36)

where Λ∗i are arbitrarily chosen from {Λi, Λci}, i = 1, 2, · · · , n, respectively.
Remark 1.9: Especially, two events Λ1 and Λ2 are independent if and only
if
M {Λ∗1 ∩ Λ∗2 } = M{Λ∗1 } ∧ M{Λ∗2 } (1.37)
where Λ∗i are arbitrarily chosen from {Λi , Λci }, i = 1, 2, respectively. That is,
the following four equations hold:

M{Λ1 ∩ Λ2} = M{Λ1} ∧ M{Λ2},    M{Λc1 ∩ Λ2} = M{Λc1} ∧ M{Λ2},
M{Λ1 ∩ Λc2} = M{Λ1} ∧ M{Λc2},  M{Λc1 ∩ Λc2} = M{Λc1} ∧ M{Λc2}.
where Λ∗i are arbitrarily chosen from {Λi , Λci , ∅}, i = 1, 2, · · · , n, respectively.
The equation (1.38) is proved. Conversely, if the equation (1.38) holds, then
M{∩_{i=1}^{n} Λ∗i} = 1 − M{∪_{i=1}^{n} Λ∗ci} = 1 − ∨_{i=1}^{n} M{Λ∗ci} = ∧_{i=1}^{n} M{Λ∗i}.
where Λ∗i are arbitrarily chosen from {Λi , Λci , Γ}, i = 1, 2, · · · , n, respectively.
The equation (1.36) is true. The theorem is proved.
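The four equations of Remark 1.9 can be verified mechanically on a finite space. The measure values below are hypothetical and chosen to satisfy the minimum rule:

```python
def independent(M, l1, l2, gamma):
    """Check (1.37): M{Λ1* ∩ Λ2*} = M{Λ1*} ∧ M{Λ2*} for all four
    choices of Λi* from {Λi, Λi^c}."""
    return all(M[a & b] == min(M[a], M[b])
               for a in (l1, gamma - l1)
               for b in (l2, gamma - l2))

gamma = frozenset({1, 2, 3, 4})
l1, l2 = frozenset({1, 2}), frozenset({1, 3})
# hypothetical uncertain measure values on the events we need
M = {l1: 0.6, gamma - l1: 0.4, l2: 0.7, gamma - l2: 0.3,
     frozenset({1}): 0.6, frozenset({2}): 0.3,
     frozenset({3}): 0.4, frozenset({4}): 0.3}
print(independent(M, l1, l2, gamma))  # True
```

Changing any single intersection value (say M{1} from 0.6 to 0.5) breaks one of the four equations and the check fails.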
are always independent in the product uncertainty space. That is, the events
Λ1 , Λ2 , · · · , Λn (1.40)
[Figure: the rectangle Λ1 × Λ2 on the product space Γ1 × Γ2]
Proof: For simplicity, we only prove the case of n = 2. It follows from the
product axiom that the product uncertain measure of the intersection is
Definition 1.11 (Liu [137]) Let (Γ1 , L1 , M1 ) and (Γ2 , L2 , M2 ) be two un-
certainty spaces. A set on Γ1 × Γ2 is called a polyrectangle if it has the form
Λ = ∪_{i=1}^{m} (Λ1i × Λ2i) (1.42)
[Figure: a polyrectangle on Γ1 × Γ2]
Thus
M{Λ1k × Λ2k } + M{Λc1k × Λc2,k+1 } = 1.
Case II: If
M{Λ1k × Λ2k } = M2 {Λ2k },
then the maximum rectangle in Λc is Λc1,k−1 × Λc2k , and
M{Λc1,k−1 × Λc2k } = M2 {Λc2k } = 1 − M2 {Λ2k }.
Thus
M{Λ1k × Λ2k } + M{Λc1,k−1 × Λc2k } = 1.
No matter what case happens, the sum of the uncertain measures of the
maximum rectangles in Λ and Λc is always 1. It follows from the product
axiom that (1.46) holds.
Remark 1.12: The conditional uncertain measure M{A|B} yields the pos-
terior uncertain measure of A after the occurrence of event B.
If
M{A ∩ B}/M{B} ≥ 0.5 and M{Ac ∩ B}/M{B} ≥ 0.5,
then M{A|B} + M{Ac |B} = 0.5 + 0.5 = 1. If
M{A ∩ B}/M{B} < 0.5 < M{Ac ∩ B}/M{B},
then we have
M{A|B} + M{Ac |B} = M{A ∩ B}/M{B} + (1 − M{A ∩ B}/M{B}) = 1.
That is, M{·|B} satisfies the duality axiom. Finally, for any countable se-
quence {Ai } of events, if M{Ai |B} < 0.5 for all i, it follows from (1.51) and
the subadditivity axiom that
M{∪i Ai | B} ≤ M{(∪i Ai ) ∩ B}/M{B} ≤ (Σi M{Ai ∩ B})/M{B} = Σi M{Ai |B}.
If M{∪i Ai |B} > 0.5, we may prove the above inequality by the following
facts:
Ac1 ∩ B ⊂ (∪i≥2 (Ai ∩ B)) ∪ (∩i≥1 Aci ∩ B),

M{Ac1 ∩ B} ≤ Σi≥2 M{Ai ∩ B} + M{∩i≥1 Aci ∩ B},

M{∪i Ai | B} = 1 − M{∩i≥1 Aci ∩ B}/M{B},

Σi M{Ai |B} ≥ 1 − M{Ac1 ∩ B}/M{B} + (Σi≥2 M{Ai ∩ B})/M{B}.
If there are at least two terms greater than 0.5, then the subadditivity is
clearly true. Thus M{·|B} satisfies the subadditivity axiom. Hence M{·|B} is
an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.
Uncertain Variable
is an uncertain variable.
Example 2.3: Let ξ1 and ξ2 be two uncertain variables. Then the sum
ξ = ξ1 + ξ2 is an uncertain variable defined by
ξ(γ) = ξ1 (γ) + ξ2 (γ), ∀γ ∈ Γ.
Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.
Note that such a sequence is not unique. Thus the set function M{B} is defined by

M{B} = inf over {B ⊂ ∪i Ai } of Σi M{Ai }, if that infimum is less than 0.5;
M{B} = 1 − inf over {B c ⊂ ∪i Ai } of Σi M{Ai }, if that infimum is less than 0.5;
M{B} = 0.5, otherwise.
We may prove that the set function M is indeed an uncertain measure on <,
and the uncertain variable defined by the identity function ξ(γ) = γ from the
uncertainty space (<, L, M) to < has the uncertainty distribution Φ.
Φ(x) = (1/2)(1 + exp(1000 − x))−1    (2.5)
for any real number x.
Someone thinks John is neither younger than 24 nor older than 28, and
presents an uncertainty distribution of John’s age as follows,
0, if x ≤ 24
Φ(x) = (x − 24)/4, if 24 ≤ x ≤ 28 (2.7)
1, if x ≥ 28.
Someone thinks James’ height is between 180 and 185 centimeters, and
presents the following uncertainty distribution,
0, if x ≤ 180
Φ(x) = (x − 180)/5, if 180 ≤ x ≤ 185 (2.8)
1, if x ≥ 185.
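Distributions (2.7) and (2.8) are two members of the same family, the linear uncertainty distribution of L(a, b). A minimal sketch in Python (the function name `linear_cdf` is ours, not the book's):

```python
def linear_cdf(x, a, b):
    """Uncertainty distribution of a linear uncertain variable L(a, b):
    0 below a, linear on [a, b], and 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# John's age (2.7) is L(24, 28); James' height (2.8) is L(180, 185).
print(linear_cdf(26, 24, 28))      # belief degree that John is at most 26: 0.5
print(linear_cdf(183, 180, 185))   # belief degree that James is at most 183 cm
```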
Example 2.5: John’s age (2.7) is a linear uncertain variable L(24, 28), and
James’ height (2.8) is another linear uncertain variable L(180, 185).
Proof: It follows from the subadditivity of uncertain measure and the measure inversion theorem that
Φ(b) = M{ξ ≤ b} ≤ M{ξ ≤ a} + M{a ≤ ξ ≤ b} = Φ(a) + M{a ≤ ξ ≤ b}.
That is,
M{a ≤ ξ ≤ b} + Φ(a) ≥ Φ(b).
Thus the inequality on the left hand side is verified. It follows from the monotonicity of uncertain measure and the measure inversion theorem that
M{a ≤ ξ ≤ b} ≤ M{ξ ≤ b} = Φ(b).
Remark 2.2: Perhaps some readers would like to obtain an exact scalar value of the uncertain measure M{a ≤ ξ ≤ b}. Generally speaking, this is impossible (except when a = −∞ or b = +∞) if only an uncertainty distribution is available. One may ask whether such a value is really needed. In fact, it is not necessary for practical purposes. Would you believe it? I hope so!
Definition 2.13 (Liu [129]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ(x). Then the inverse function Φ−1 (α) is called the
inverse uncertainty distribution of ξ.
Note that the inverse uncertainty distribution Φ−1 (α) is well defined on the
open interval (0, 1). If needed, we may extend the domain to [0, 1] via
Φ−1 (0) = lim α↓0 Φ−1 (α),    Φ−1 (1) = lim α↑1 Φ−1 (α).    (2.19)
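For a regular uncertainty distribution, the inverse Φ−1 (α) can also be obtained numerically; a sketch by bisection (the name `invert_cdf` is ours, and the distribution passed in is assumed continuous and strictly increasing on [lo, hi]):

```python
def invert_cdf(cdf, alpha, lo, hi, tol=1e-10):
    """Numerically invert a continuous, strictly increasing uncertainty
    distribution on [lo, hi]: find x with cdf(x) = alpha, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Linear L(24, 28): the inverse distribution is (1 - alpha)*24 + alpha*28,
# so invert_cdf at alpha = 0.25 should return a value near 25.
cdf = lambda x: (x - 24) / 4
x = invert_cdf(cdf, 0.25, 24, 28)
```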
Conversely, suppose Φ−1 meets (2.24). Write x = Φ−1 (α). Then α = Φ(x)
and
M{ξ ≤ x} = α = Φ(x).
That is, Φ is the uncertainty distribution of ξ and Φ−1 is its inverse uncer-
tainty distribution. The theorem is verified.
Theorem 2.6 (Liu [134], Sufficient and Necessary Condition) A function
Φ−1 (α) : (0, 1) → < is an inverse uncertainty distribution if and only if it is
a continuous and strictly increasing function with respect to α.
Proof: Suppose Φ−1 (α) is an inverse uncertainty distribution. It follows
from the definition of inverse uncertainty distribution that Φ−1 (α) is a con-
tinuous and strictly increasing function with respect to α ∈ (0, 1).
Conversely, suppose Φ−1 (α) is a continuous and strictly increasing func-
tion on (0, 1). Define
Φ(x) = 0, if x ≤ lim α↓0 Φ−1 (α);
Φ(x) = α, if x = Φ−1 (α);
Φ(x) = 1, if x ≥ lim α↑1 Φ−1 (α).
and Φ−1 (α) is a continuous and strictly increasing function with respect to
α ∈ (0, 1) even though it is not.
2.3 Independence
Example 2.10: Let ξ1 (γ1 ) and ξ2 (γ2 ) be uncertain variables on the uncer-
tainty spaces (Γ1 , L1 , M1 ) and (Γ2 , L2 , M2 ), respectively. It is clear that they
are also uncertain variables on the product uncertainty space (Γ1 , L1 , M1 ) ×
(Γ2 , L2 , M2 ). Then for any Borel sets B1 and B2 , we have
M{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )}
= M {(γ1 , γ2 ) | ξ1 (γ1 ) ∈ B1 , ξ2 (γ2 ) ∈ B2 }
= M {(γ1 | ξ1 (γ1 ) ∈ B1 ) × (γ2 | ξ2 (γ2 ) ∈ B2 )}
= M1 {γ1 | ξ1 (γ1 ) ∈ B1 } ∧ M2 {γ2 | ξ2 (γ2 ) ∈ B2 }
= M {ξ1 ∈ B1 } ∧ M {ξ2 ∈ B2 } .
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn ,
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn ,
f (x1 , x2 , · · · , xn ) = x1 + x2 + · · · + xn ,
f (x1 , x2 , · · · , xn ) = x1 x2 · · · xn , x1 , x2 , · · · , xn ≥ 0.
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.30)
Proof: For simplicity, we only prove the case n = 2. At first, we always have
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
ξ = ξ1 × ξ2 × · · · × ξn (2.34)
The product of a linear uncertain variable L(a, b) and a scalar number k > 0
is also a linear uncertain variable L(ka, kb), i.e.,
Φ1−1 (α) = (1 − α)a1 + αb1 ,    Φ2−1 (α) = (1 − α)a2 + αb2 .
It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is
Ψ−1 (α) = (1 − α)(a1 + a2 ) + α(b1 + b2 ).
It follows from the operational law that the inverse uncertainty distribution of ξ1 + ξ2 is
Ψ−1 (α) = (1 − 2α)(a1 + a2 ) + 2α(b1 + b2 ), if α < 0.5;
Ψ−1 (α) = (2 − 2α)(b1 + b2 ) + (2α − 1)(c1 + c2 ), if α ≥ 0.5.
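The operational law for a strictly increasing f composes inverse distributions pointwise: the inverse distribution of f(ξ1, · · · , ξn) is α ↦ f(Φ1−1(α), · · · , Φn−1(α)). A minimal sketch, with the sum of two linear variables as the check (all names are ours):

```python
def inverse_of_increasing(f, inv_cdfs):
    """Operational law for strictly increasing f: return the inverse
    distribution alpha -> f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(alpha))."""
    return lambda alpha: f(*(inv(alpha) for inv in inv_cdfs))

inv1 = lambda a: (1 - a) * 1 + a * 3   # inverse distribution of L(1, 3)
inv2 = lambda a: (1 - a) * 2 + a * 6   # inverse distribution of L(2, 6)
psi_inv = inverse_of_increasing(lambda x, y: x + y, [inv1, inv2])
# psi_inv(alpha) should equal (1 - alpha)*3 + alpha*9, the inverse of L(3, 9)
```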
The product of a lognormal uncertain variable LOGN (e, σ) and a scalar num-
ber k > 0 is also a lognormal uncertain variable LOGN (e + ln k, σ), i.e.,
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.48)
ξ = ξ1 + ξ2 + · · · + ξn (2.51)
ξ = ξ1 ξ2 · · · ξn (2.53)
Si = ξ1 + ξ2 + · · · + ξi (2.59)
S = f (ξ1 , ξ2 , · · · , ξn ).
= min 1≤i≤n Ψi (x).
S = f (ξ1 , ξ2 , · · · , ξn ).
= max 1≤i≤n Ψi (x).
f (x) = −x,
f (x) = exp(−x),
f (x) = 1/x, x > 0.
Theorem 2.16 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f
is a strictly decreasing function, then the uncertain variable
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.66)
Proof: For simplicity, we only prove the case n = 2. At first, we always have
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
f (x1 , x2 ) = x1 − x2 ,
f (x1 , x2 ) = x1 /x2 , x1 , x2 > 0,
f (x1 , x2 ) = x1 /(x1 + x2 ), x1 , x2 > 0.
Note that both strictly increasing function and strictly decreasing function
are special cases of strictly monotone function.
On the one hand, since the function f (x1 , x2 ) is strictly increasing with respect to x1 and strictly decreasing with respect to x2 , we obtain
On the other hand, since the function f (x1 , x2 ) is strictly increasing with respect to x1 and strictly decreasing with respect to x2 , we obtain
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
Ψ−1 (α) = Φ1−1 (α) / Φ2−1 (1 − α).    (2.81)

Ψ−1 (α) = Φ1−1 (α) / (Φ1−1 (α) + Φ2−1 (1 − α)).    (2.82)
Remark 2.3: Keep in mind that sometimes the equation (2.89) may not have a root. In this case, if
f (Φ1−1 (α), · · · , Φm−1 (α), Φm+1−1 (1 − α), · · · , Φn−1 (1 − α)) < 0    (2.90)
for all α, then we set the root α = 1; and if
f (Φ1−1 (α), · · · , Φm−1 (α), Φm+1−1 (1 − α), · · · , Φn−1 (1 − α)) > 0    (2.91)
for all α, then we set the root α = 0.
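Since the left-hand side of (2.89) is strictly increasing in α (increasing in the Φi−1(α) arguments and, through 1 − α, in the remaining ones), the root can be found by bisection. A sketch implementing Remark 2.3's conventions for the no-root cases (the name `root_alpha` is ours):

```python
def root_alpha(g, tol=1e-10):
    """Find alpha in (0, 1) with g(alpha) = 0 by bisection, where
    g(alpha) = f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(1 - alpha)) is
    strictly increasing.  If g < 0 everywhere the root is set to 1,
    and if g > 0 everywhere it is set to 0 (Remark 2.3)."""
    lo, hi = tol, 1 - tol
    if g(hi) < 0:
        return 1.0
    if g(lo) > 0:
        return 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x1, x2) = x1 - x2 with xi1, xi2 ~ L(0, 1):
# g(alpha) = alpha - (1 - alpha), whose root is alpha = 0.5.
alpha = root_alpha(lambda a: a - (1 - a))
```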
Remark 2.5: Keep in mind that sometimes the equation (2.93) may not have a root. In this case, if
f (Φ1−1 (1 − α), · · · , Φm−1 (1 − α), Φm+1−1 (α), · · · , Φn−1 (α)) < 0    (2.94)
for all α, then we set the root α = 1; and if
f (Φ1−1 (1 − α), · · · , Φm−1 (1 − α), Φm+1−1 (α), · · · , Φn−1 (α)) > 0    (2.95)
for all α, then we set the root α = 0.
M {f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} ≥ α (2.96)
if and only if
f (Φ1−1 (α), · · · , Φm−1 (α), Φm+1−1 (1 − α), · · · , Φn−1 (1 − α)) ≤ 0.    (2.97)
Proof: It follows from Theorem 2.18 that the inverse uncertainty distribution
of f (ξ1 , ξ2 , · · · , ξn ) is
Thus (2.96) holds if and only if Ψ−1 (α) ≤ 0. The theorem is thus verified.
for i = 1, 2, · · · , n, respectively.
Then we have
sup over f (B1 ,··· ,Bn )={0} of min 1≤i≤n M{ξi ∈ Bi } = 1 − sup over f (x1 ,··· ,xn )=1 of min 1≤i≤n νi (xi ) > 0.5.
Case 2: Assume
sup over f (x1 ,··· ,xn )=1 of min 1≤i≤n νi (xi ) > 0.5.
Then we have
sup over f (B1 ,··· ,Bn )={1} of min 1≤i≤n M{ξi ∈ Bi } = 1 − sup over f (x1 ,··· ,xn )=0 of min 1≤i≤n νi (xi ) > 0.5.
Case 3: Assume
sup over f (x1 ,··· ,xn )=1 of min 1≤i≤n νi (xi ) = 0.5,
Then we have
sup over f (B1 ,··· ,Bn )={1} of min 1≤i≤n M{ξi ∈ Bi } = 0.5,
Case 4: Assume
sup over f (x1 ,··· ,xn )=1 of min 1≤i≤n νi (xi ) = 0.5,
Then we have
sup over f (B1 ,··· ,Bn )={1} of min 1≤i≤n M{ξi ∈ Bi } = 1 − sup over f (x1 ,··· ,xn )=0 of min 1≤i≤n νi (xi ) > 0.5.
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.105)
M{ξ = 1} = a1 ∧ a2 ∧ · · · ∧ an , (2.106)
M{ξ = 0} = (1 − a1 ) ∨ (1 − a2 ) ∨ · · · ∨ (1 − an ). (2.107)
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (2.108)
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.110)
M{ξ = 1} = a1 ∨ a2 ∨ · · · ∨ an , (2.111)
M{ξ = 0} = (1 − a1 ) ∧ (1 − a2 ) ∧ · · · ∧ (1 − an ). (2.112)
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (2.113)
for i = 1, 2, · · · , n. Then
ξ = 1 if ξ1 + ξ2 + · · · + ξn ≥ k, and ξ = 0 if ξ1 + ξ2 + · · · + ξn < k    (2.115)
has
M{ξ = 1} = k-max [a1 , a2 , · · · , an ]    (2.116)
and
M{ξ = 0} = k-min [1 − a1 , 1 − a2 , · · · , 1 − an ]    (2.117)
where k-max represents the kth largest value, and k-min represents the kth
smallest value.
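The k-max/k-min pair is straightforward to compute; a sketch (names are ours, and we assume, as the k-min formula (2.117) suggests, that the companion formula for M{ξ = 1} is the k-th largest of the ai):

```python
def k_max(values, k):
    """The kth largest value in a list."""
    return sorted(values, reverse=True)[k - 1]

def k_min(values, k):
    """The kth smallest value in a list."""
    return sorted(values)[k - 1]

a = [0.9, 0.8, 0.7]   # M{xi_i = 1} for three Boolean components
k = 2                 # the system works iff at least k components work
m_one = k_max(a, k)                      # M{xi = 1}
m_zero = k_min([1 - x for x in a], k)    # M{xi = 0}, per (2.117)
# duality check: m_one + m_zero == 1
```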
Definition 2.15 (Liu [122]) Let ξ be an uncertain variable. Then the ex-
pected value of ξ is defined by
E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx    (2.120)
Proof: It follows from the measure inversion theorem that for almost all
numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.
Figure 2.16: E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx. Reprinted from Liu [129].
Proof: It follows from integration by parts and Theorem 2.27 that the expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx = ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).
Figure 2.17: E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ−1 (α) dα. Reprinted from Liu [129].
Theorem 2.29 (Liu [129]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ. Then
E[ξ] = ∫_0^1 Φ−1 (α) dα.    (2.125)
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.28 that the expected value is
E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ−1 (α) dα.
Exercise 2.26: Show that the linear uncertain variable ξ ∼ L(a, b) has an
expected value
E[ξ] = (a + b)/2.    (2.126)
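Theorem 2.29 also makes expected values easy to approximate numerically: integrate the inverse distribution over (0, 1). A sketch using the midpoint rule (the function name is ours), checked against Exercise 2.26:

```python
def expected_value(inv_cdf, n=100000):
    """Approximate E[xi] = integral of Phi^{-1}(alpha) d alpha over (0, 1)
    (Theorem 2.29) by the midpoint rule with n subintervals."""
    return sum(inv_cdf((i + 0.5) / n) for i in range(n)) / n

# Linear L(24, 28): Phi^{-1}(alpha) = (1 - alpha)*24 + alpha*28,
# so the expected value should come out near (24 + 28)/2 = 26.
ev = expected_value(lambda a: (1 - a) * 24 + a * 28)
```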
Exercise 2.27: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an expected value
E[ξ] = (a + 2b + c)/4.    (2.127)
Exercise 2.28: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an expected value e, i.e.,
E[ξ] = e. (2.128)
Exercise 2.29: Show that the lognormal uncertain variable ξ ∼ LOGN (e, σ)
has an expected value
E[ξ] = σ√3 exp(e) csc(σ√3), if σ < π/√3;
E[ξ] = +∞, if σ ≥ π/√3.    (2.129)
This formula was first discovered by Dr. Zhongfeng Qin with the help of
Maple software, and was verified again by Dr. Kai Yao through a rigorous
mathematical derivation.
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.
It is easy to verify that E[ξ] = 0.9, E[η] = 0.8, and E[ξ + η] = 1.9. Thus we
have
E[ξ + η] > E[ξ] + E[η].
If the uncertain variables are defined by
0,
if γ = γ1 0, if γ = γ1
ξ(γ) = 1, if γ = γ2 η(γ) = 3, if γ = γ2
2, if γ = γ3 , 1, if γ = γ3 .
Then
0, if γ = γ1
(ξ + η)(γ) = 4, if γ = γ2
3, if γ = γ3 .
It is easy to verify that E[ξ] = 0.5, E[η] = 0.9, and E[ξ + η] = 1.2. Thus we
have
E[ξ + η] < E[ξ] + E[η].
It is easy to verify that (i) any function is comonotonic with any positive constant multiple of that function; (ii) any two monotone increasing functions are comonotonic with each other; and (iii) any two monotone decreasing functions are also comonotonic with each other.
Proof: Without loss of generality, suppose f (ξ) and g(ξ) have regular un-
certainty distributions Φ and Ψ, respectively. Otherwise, we may give the
uncertainty distributions a small perturbation such that they become regu-
lar. Since f and g are comonotonic functions, at least one of the following
relations is true,
Some Inequalities
Theorem 2.33 (Liu [122]) Let ξ be an uncertain variable, and let f be a
nonnegative function. If f is even and increasing on [0, ∞), then for any
given number t > 0, we have
M{|ξ| ≥ t} ≤ E[f (ξ)]/f (t).    (2.142)
Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
f (x, y) − f (x0 , y0 ) ≤ a(x − x0 ) + b(y − y0 ), ∀x ≥ 0, y ≥ 0.
Letting x0 = E[|ξ|p ], y0 = E[|η|q ], x = |ξ|p and y = |η|q , we have
f (|ξ|p , |η|q ) − f (E[|ξ|p ], E[|η|q ]) ≤ a(|ξ|p − E[|ξ|p ]) + b(|η|q − E[|η|q ]).
Taking the expected values on both sides, we obtain
E[f (|ξ|p , |η|q )] ≤ f (E[|ξ|p ], E[|η|q ]).
Hence the inequality (2.144) holds.
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|p ] > 0 and E[|η|p ] > 0. It is easy to prove that the function f (x, y) = (x^{1/p} + y^{1/p})^p is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
2.6 Variance
The variance of uncertain variable provides a degree of the spread of the
distribution around its expected value. A small value of variance indicates
that the uncertain variable is tightly concentrated around its expected value;
and a large value of variance indicates that the uncertain variable has a wide
spread around its expected value.
Definition 2.16 (Liu [122]) Let ξ be an uncertain variable with finite ex-
pected value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (2.148)
This definition tells us that the variance is just the expected value of
(ξ − e)2 . Since (ξ − e)2 is a nonnegative uncertain variable, we also have
V [ξ] = ∫_0^{+∞} M{(ξ − e)2 ≥ x} dx.    (2.149)
M{(ξ − e)2 = 0} = 1.
That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we
immediately have M{(ξ − e)2 = 0} = 1 and M{(ξ − e)2 ≥ x} = 0 for any
x > 0. Thus
V [ξ] = ∫_0^{+∞} M{(ξ − e)2 ≥ x} dx = 0.
The theorem is proved.
Proof: This theorem is based on Stipulation 2.3 that says the variance of ξ is
V [ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.
Substituting e + √y with x and y with (x − e)2 , the change of variables and integration by parts produce
∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)2 = ∫_e^{+∞} (x − e)2 dΦ(x).
Similarly, substituting e − √y with x and y with (x − e)2 , we obtain
∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)2 = ∫_{−∞}^e (x − e)2 dΦ(x).
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.42 that the variance is
V [ξ] = ∫_{−∞}^{+∞} (x − e)2 dΦ(x) = ∫_0^1 (Φ−1 (α) − e)2 dα.
Exercise 2.39: Show that the linear uncertain variable ξ ∼ L(a, b) has a
variance
V [ξ] = (b − a)2 /12.    (2.156)
Exercise 2.40: Show that the normal uncertain variable ξ ∼ N (e, σ) has a
variance
V [ξ] = σ 2 . (2.157)
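Exercise 2.40 can be checked numerically through the formula V[ξ] = ∫_0^1 (Φ−1(α) − e)2 dα used in the proof above. The sketch below assumes (this is not stated in this section) that the inverse distribution of N(e, σ) is Φ−1(α) = e + (σ√3/π) ln(α/(1 − α)); function names are ours:

```python
import math

def variance(inv_cdf, e, n=200000):
    """Approximate V[xi] = integral of (Phi^{-1}(alpha) - e)^2 d alpha
    over (0, 1) by the midpoint rule."""
    return sum((inv_cdf((i + 0.5) / n) - e) ** 2 for i in range(n)) / n

# Assumed inverse distribution of the normal uncertain variable N(e, sigma):
# Phi^{-1}(alpha) = e + (sigma * sqrt(3) / pi) * ln(alpha / (1 - alpha))
e, sigma = 0.0, 1.0
inv = lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))
v = variance(inv, e)   # should be close to sigma**2
```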
Remark 2.8: If ξ and η are independent linear uncertain variables, then the
condition (2.158) is met. If they are independent normal uncertain variables,
then the condition (2.158) is also met.
2.7 Moment
Definition 2.17 (Liu [122]) Let ξ be an uncertain variable and let k be a
positive integer. Then E[ξ k ] is called the k-th moment of ξ.
Proof: When k is an odd number, Theorem 2.45 says that the k-th moment is
E[ξ k ] = ∫_0^{+∞} (1 − Φ(y^{1/k})) dy − ∫_{−∞}^0 Φ(y^{1/k}) dy.
Substituting y^{1/k} with x and y with xk , the change of variables and integration by parts produce
∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dxk = ∫_0^{+∞} xk dΦ(x)
and
∫_{−∞}^0 Φ(y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dxk = − ∫_{−∞}^0 xk dΦ(x).
Thus we have
E[ξ k ] = ∫_0^{+∞} xk dΦ(x) + ∫_{−∞}^0 xk dΦ(x) = ∫_{−∞}^{+∞} xk dΦ(x).
When k is an even number, the theorem is based on Stipulation 2.4 that says the k-th moment is
E[ξ k ] = ∫_0^{+∞} (1 − Φ(y^{1/k}) + Φ(−y^{1/k})) dy.
Substituting y^{1/k} with x and y with xk , the change of variables and integration by parts produce
∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dxk = ∫_0^{+∞} xk dΦ(x).
Similarly, substituting −y^{1/k} with x and y with xk , we obtain
∫_0^{+∞} Φ(−y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dxk = ∫_{−∞}^0 xk dΦ(x).
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.46 that the k-th moment is
E[ξ k ] = ∫_{−∞}^{+∞} xk dΦ(x) = ∫_0^1 (Φ−1 (α))k dα.
Exercise 2.42: Show that the second moment of linear uncertain variable
ξ ∼ L(a, b) is
E[ξ 2 ] = (a2 + ab + b2 )/3.    (2.164)
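Exercise 2.42 follows from the inverse-distribution formula for moments, E[ξ k ] = ∫_0^1 (Φ−1(α))k dα, proved above. A numeric sketch (names are ours):

```python
def kth_moment(inv_cdf, k, n=100000):
    """Approximate E[xi^k] = integral of (Phi^{-1}(alpha))^k d alpha
    over (0, 1) by the midpoint rule."""
    return sum(inv_cdf((i + 0.5) / n) ** k for i in range(n)) / n

# Linear L(1, 4): Phi^{-1}(alpha) = (1 - alpha)*1 + alpha*4.
# Closed form (2.164): (a*a + a*b + b*b)/3 = (1 + 4 + 16)/3 = 7.
a, b = 1.0, 4.0
m2 = kth_moment(lambda al: (1 - al) * a + al * b, 2)
```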
Exercise 2.43: Show that the second moment of normal uncertain variable
ξ ∼ N (e, σ) is
E[ξ 2 ] = e2 + σ 2 . (2.165)
2.8 Entropy
This section provides a definition of entropy to characterize the uncertainty
of uncertain variables.
(Figure: the function S(t), which attains its maximum ln 2 at t = 0.5.)
Example 2.16: Let ξ be a linear uncertain variable L(a, b). Then its entropy
is
H[ξ] = − ∫_a^b [ ((x − a)/(b − a)) ln((x − a)/(b − a)) + ((b − x)/(b − a)) ln((b − x)/(b − a)) ] dx = (b − a)/2.    (2.168)
Exercise 2.44: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an entropy
H[ξ] = (c − a)/2.    (2.169)
Exercise 2.45: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an entropy
H[ξ] = πσ/√3.    (2.170)
Theorem 2.49 Let ξ be an uncertain variable. Then H[ξ] ≥ 0 and equality
holds if ξ is essentially a constant.
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.
Theorem 2.54 (Dai and Chen [27]) Let ξ and η be independent uncertain variables. Then for any real numbers a and b, we have
H[aξ + bη] = |a|H[ξ] + |b|H[η].
Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
The theorem is proved.
Theorem 2.55 (Chen and Dai [15]) Let ξ be an uncertain variable whose uncertainty distribution is arbitrary but whose expected value is e and variance is σ2 . Then
H[ξ] ≤ πσ/√3    (2.175)
and the equality holds if ξ is a normal uncertain variable N (e, σ).
    2 ∫_e^{+∞} (x − e)Ψ(x) dx = (1 − κ)σ^2.
2.9 Distance
Definition 2.19 (Liu [122]) The distance between uncertain variables ξ and
η is defined as
d(ξ, η) = E[|ξ − η|]. (2.176)
That is, the distance between ξ and η is just the expected value of |ξ − η|.
Since |ξ − η| is a nonnegative uncertain variable, we always have

    d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx.                       (2.177)
Theorem 2.56 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the dis-
tance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the subadditivity axiom that

    d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx
            ≤ ∫_0^{+∞} M{|ξ − τ| + |τ − η| ≥ x} dx
            ≤ ∫_0^{+∞} M{(|ξ − τ| ≥ x/2) ∪ (|τ − η| ≥ x/2)} dx
            ≤ ∫_0^{+∞} (M{|ξ − τ| ≥ x/2} + M{|τ − η| ≥ x/2}) dx
            = 2d(ξ, τ) + 2d(τ, η).
It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus

    d(ξ, η) = (3/2)(d(ξ, τ) + d(τ, η)).

A conjecture is that d(ξ, η) ≤ 1.5(d(ξ, τ) + d(τ, η)) holds for arbitrary
uncertain variables ξ, η and τ. This is an open problem.
Since Υ^{-1}(α) = Φ^{-1}(α) − Ψ^{-1}(1 − α), we immediately obtain the result.
Exercise 2.49: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ > t is

    Φ(x|(t, +∞)) = 0,                        if x ≤ t
                 = ((x − a)/(b − t)) ∧ 0.5,  if t < x ≤ (b + t)/2
                 = ((x − t)/(b − t)) ∧ 1,    if (b + t)/2 ≤ x.
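The piecewise formula can be checked numerically. A sketch (not from the book) with hypothetical values a = 0, b = 4, t = 1, verifying that the two middle branches meet continuously at x = (b + t)/2 and that the distribution reaches 1 at x = b:

```python
# Conditional uncertainty distribution of xi ~ L(a, b) given xi > t,
# following Exercise 2.49.  Illustration with hypothetical parameters.

def cond_given_gt(x, a, b, t):
    """Phi(x | (t, +infinity)) for xi ~ L(a, b), a < t < b."""
    if x <= t:
        return 0.0
    if x <= (b + t) / 2:
        return min((x - a) / (b - t), 0.5)
    return min((x - t) / (b - t), 1.0)

a, b, t = 0.0, 4.0, 1.0
mid = (b + t) / 2                      # = 2.5, where both branches give 0.5
print(cond_given_gt(mid, a, b, t))     # 0.5
print(cond_given_gt(b, a, b, t))       # 1.0
```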
    [Figure: the conditional uncertainty distribution Φ(x|(t, +∞))]
Thus

    Φ(x|(−∞, t]) = M{(ξ ≤ x) ∩ (ξ ≤ t)} / M{ξ ≤ t} = Φ(x)/Φ(t).

When Φ(t)/2 ≤ Φ(x) < Φ(t), we have x < t and

    M{(ξ > x) ∩ (ξ ≤ t)} / M{ξ ≤ t} ≤ (1 − Φ(x))/Φ(t),

i.e.,

    1 − M{(ξ > x) ∩ (ξ ≤ t)} / M{ξ ≤ t} ≥ (Φ(x) + Φ(t) − 1)/Φ(t).

It follows from the maximum uncertainty principle that

    Φ(x|(−∞, t]) = ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5.

Finally, when x ≥ t, we have M{(ξ > x) ∩ (ξ ≤ t)} = 0, and thus

    Φ(x|(−∞, t]) = 1 − M{(ξ > x) ∩ (ξ ≤ t)} / M{ξ ≤ t} = 1 − 0 = 1.

The theorem is proved.
Exercise 2.50: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ ≤ t is

    Φ(x|(−∞, t]) = ((x − a)/(t − a)) ∨ 0,        if x ≤ (a + t)/2
                 = (1 − (b − x)/(t − a)) ∨ 0.5,  if (a + t)/2 ≤ x < t
                 = 1,                             if x ≥ t.
    [Figure: the conditional uncertainty distribution Φ(x|(−∞, t])]
Definition 2.21 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that

    lim_{i→∞} |ξi(γ) − ξ(γ)| = 0

for every γ ∈ Λ.

Definition 2.22 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent in measure to ξ if

    lim_{i→∞} M{|ξi − ξ| ≥ ε} = 0

for every ε > 0.

Definition 2.23 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent in mean to ξ if

    lim_{i→∞} E[|ξi − ξ|] = 0.                                  (2.183)
Proof: It follows from the Markov inequality that for any given number
ε > 0, we have

    M{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|]/ε → 0

as i → ∞. Thus {ξi} converges in measure to ξ. The theorem is proved.
Example 2.22: Convergence a.s. does not imply convergence in mean. Take
an uncertainty space (Γ, L, M) to be {γ1, γ2, · · ·} with

    M{Λ} = Σ_{γi ∈ Λ} 1/2^i.
Example 2.23: Convergence in mean does not imply convergence a.s. Take
an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue
measure. For any positive integer i, there is an integer j such that i = 2^j + k,
where k is an integer between 0 and 2^j − 1. The uncertain variables are
defined by

    ξi(γ) = 1, if k/2^j ≤ γ ≤ (k + 1)/2^j
          = 0, otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then

    E[|ξi − ξ|] = 1/2^j → 0

as i → ∞. That is, the sequence {ξi} converges in mean to ξ. However, for
any γ ∈ [0, 1], there are infinitely many intervals of the form [k/2^j, (k + 1)/2^j]
containing γ. Thus ξi(γ) does not converge to 0. In other words, the
sequence {ξi} does not converge a.s. to ξ.
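The dyadic construction of Example 2.23 is easy to verify by computer. A Python sketch (illustration only): the mean E[|ξi − ξ|] shrinks like 1/2^j, yet for a fixed point such as γ = 0.3 the sequence ξi(γ) keeps returning to 1, once per dyadic level:

```python
# Example 2.23 in code: convergence in mean without convergence a.s.

def xi(i, gamma):
    """xi_i(gamma) with i = 2^j + k, 0 <= k < 2^j."""
    j = i.bit_length() - 1          # the largest j with 2^j <= i
    k = i - 2 ** j
    return 1 if k / 2 ** j <= gamma <= (k + 1) / 2 ** j else 0

def mean_abs(i):
    """E[|xi_i - 0|] = Lebesgue measure of the interval where xi_i = 1."""
    j = i.bit_length() - 1
    return 1 / 2 ** j

gamma = 0.3
hits = [i for i in range(1, 200) if xi(i, gamma) == 1]

print(mean_abs(199))   # 1/128: the means tend to zero
print(hits)            # but xi_i(0.3) equals 1 once per level j
```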
It is clear that Φi (x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξi } does not converge in distribution to ξ.
    B = { B ⊂ ℜ^k | {ξ ∈ B} is an event }.

Next, the class B is a σ-algebra over ℜ^k because (i) we have
ℜ^k ∈ B since {ξ ∈ ℜ^k} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and

    {ξ ∈ B^c} = {ξ ∈ B}^c
Remark 2.9: However, the equation (2.188) does not imply that the uncer-
tain variables are independent. For example, let ξ be an uncertain variable
with uncertainty distribution Φ. Then the joint uncertainty distribution Ψ
of the uncertain vector (ξ, ξ) is

    Ψ(x1, x2) = Φ(x1) ∧ Φ(x2)

for any real numbers x1 and x2. But, generally speaking, an uncertain vari-
able is not independent of itself.
τ = (τ1 , τ2 , · · · , τm ) (2.191)
ξ = e + στ (2.197)
for some real vector e and some real matrix σ, where τ is a standard normal
uncertain vector. Note that ξ, e and τ are understood as column vectors.
Please also note that for every index i, the component ξi is a normal uncertain
variable with expected value ei and standard deviation

    Σ_{j=1}^m |σij|.                                            (2.198)
η = c + Dξ (2.199)
Uncertain Programming

Uncertain programming was founded by Liu [124] in 2009. This chapter will
provide a theory of uncertain programming and present some uncertain pro-
gramming models for the machine scheduling problem, the vehicle routing
problem, and the project scheduling problem.
for j = 1, 2, · · · , p.
    g(x, Φ_1^{-1}(α), · · · , Φ_k^{-1}(α), Φ_{k+1}^{-1}(1 − α), · · · , Φ_n^{-1}(1 − α)) ≤ 0.   (3.9)
where

    h_i^+(x) = hi(x), if hi(x) > 0                              (3.16)
             = 0,     if hi(x) ≤ 0,

    h_i^−(x) = −hi(x), if hi(x) < 0                             (3.17)
             = 0,      if hi(x) ≥ 0

for i = 1, 2, · · · , n.
    max_{x1,x2,x3} ∫_0^1 ( √(x1 + Φ_1^{-1}(α)) + √(x2 + Φ_2^{-1}(α)) + √(x3 + Φ_3^{-1}(α)) ) dα
    subject to:
        (x1 + Ψ_1^{-1}(0.9))^2 + (x2 + Ψ_2^{-1}(0.9))^2 + (x3 + Ψ_3^{-1}(0.9))^2 ≤ 100
        x1, x2, x3 ≥ 0

where Φ_1^{-1}, Φ_2^{-1}, Φ_3^{-1}, Ψ_1^{-1}, Ψ_2^{-1}, Ψ_3^{-1} are the inverse uncertainty
distributions of the uncertain variables ξ1, ξ2, ξ3, η1, η2, η3, respectively. The
Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve
this model and obtain an optimal solution
Example 3.2: Assume that x1 and x2 are decision variables, and ξ1 and ξ2 are
iid linear uncertain variables L(0, π/2). Consider the uncertain programming
model

    min_{x1,x2} E[x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2)]
    subject to:
        0 ≤ x1 ≤ π/2,  0 ≤ x2 ≤ π/2.

Since the objective is strictly decreasing in ξ1 and strictly increasing in ξ2,
the model is equivalent to

    min_{x1,x2} ∫_0^1 ( x1 sin(x1 − Φ_1^{-1}(1 − α)) − x2 cos(x2 + Φ_2^{-1}(α)) ) dα
    subject to:
        0 ≤ x1 ≤ π/2,  0 ≤ x2 ≤ π/2

where Φ_1^{-1}, Φ_2^{-1} are the inverse uncertainty distributions of ξ1, ξ2, respectively.
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may
solve this model and obtain an optimal solution
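A numeric sketch (illustration only, not the book's Matlab toolbox): the expected value in Example 3.2 can be approximated by discretizing the inverse-distribution integral. Since the objective is decreasing in ξ1 and increasing in ξ2, level α pairs Φ^{-1}(1 − α) for ξ1 with Φ^{-1}(α) for ξ2:

```python
import math

# Evaluate E[x1 sin(x1 - xi1) - x2 cos(x2 + xi2)] for iid L(0, pi/2)
# by a midpoint sum over alpha.  Illustration only.

inv = lambda alpha: (math.pi / 2) * alpha     # Phi^{-1} of L(0, pi/2)

def objective(x1, x2, n=10000):
    total = 0.0
    for i in range(n):
        alpha = (i + 0.5) / n
        total += x1 * math.sin(x1 - inv(1 - alpha)) \
               - x2 * math.cos(x2 + inv(alpha))
    return total / n

print(objective(0.0, 0.0))           # 0 by inspection
print(objective(0.0, math.pi / 2))   # close to 1 (hand calculation)
```

Feeding this function to any numerical optimizer over the feasible box would give an approximate optimal solution.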
    [Figure: a Gantt chart — machine M1 processes jobs J1, J2, J3; M2
    processes J4, J5; M3 processes J6, J7; the makespan is the time at
    which the last machine finishes]
Figure 3.1: A Machine Schedule with 3 Machines and 7 Jobs. Reprinted from
Liu [129].
jobs x_{yk−1+1}, x_{yk−1+2}, · · · , x_{yk} in turn. Thus the schedule of all
machines is as follows:

    [Diagram: machine M-1 processes jobs x1, · · · , x_{y1}; M-2 processes
    x_{y1+1}, · · · , x_{y2}; M-3 processes x_{y2+1}, · · · , x_{y3}]
Completion Times
Let Ci(x, y, ξ) be the completion times of jobs i, i = 1, 2, · · · , n, respectively.
For each k with 1 ≤ k ≤ m, if the machine k is used (i.e., yk > yk−1), then
we have

    C_{x_{yk−1+1}}(x, y, ξ) = ξ_{x_{yk−1+1} k}                  (3.20)

and

    C_{x_{yk−1+j}}(x, y, ξ) = C_{x_{yk−1+j−1}}(x, y, ξ) + ξ_{x_{yk−1+j} k}   (3.21)

for 2 ≤ j ≤ yk − yk−1.
If the machine k is used, then the completion time C_{x_{yk−1+1}}(x, y, ξ) of
job x_{yk−1+1} is an uncertain variable whose inverse uncertainty distribution is

    Ψ^{-1}_{x_{yk−1+1}}(x, y, α) = Φ^{-1}_{x_{yk−1+1} k}(α).    (3.22)

Generally, suppose the completion time C_{x_{yk−1+j−1}}(x, y, ξ) has an in-
verse uncertainty distribution Ψ^{-1}_{x_{yk−1+j−1}}(x, y, α). Then the completion
time C_{x_{yk−1+j}}(x, y, ξ) has an inverse uncertainty distribution

    Ψ^{-1}_{x_{yk−1+j}}(x, y, α) = Ψ^{-1}_{x_{yk−1+j−1}}(x, y, α) + Φ^{-1}_{x_{yk−1+j} k}(α).   (3.23)
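The recursions (3.22)-(3.23) say that, along one machine's job sequence, inverse uncertainty distributions of completion times simply add. A minimal Python sketch with hypothetical linear processing times (illustration, not from the book):

```python
# Inverse distribution of the final completion time on one machine,
# per (3.22)-(3.23): the jobs' inverse distributions add.
# Hypothetical processing times L(1, 3) and L(2, 4).

def inv_linear(a, b):
    """Phi^{-1} of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + (b - a) * alpha

def machine_inv_completion(times, alpha):
    """Inverse distribution of the machine's last completion time."""
    return sum(phi(alpha) for phi in times)

jobs = [inv_linear(1, 3), inv_linear(2, 4)]
print(machine_inv_completion(jobs, 0.5))   # median completion time: 5.0
```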
Makespan

Note that, for each k (1 ≤ k ≤ m), the value C_{x_{yk}}(x, y, ξ) is just the time
at which the machine k finishes all jobs assigned to it. Thus the makespan of
the schedule (x, y) is determined by

    f(x, y, ξ) = max_{1≤k≤m} C_{x_{yk}}(x, y, ξ).

Since Υ^{-1}(x, y, α) is the inverse uncertainty distribution of f(x, y, ξ), the
machine scheduling model is simplified as follows,

    min_{x,y} ∫_0^1 Υ^{-1}(x, y, α) dα
    subject to:
        1 ≤ xi ≤ n, i = 1, 2, · · · , n                         (3.27)
        xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
        0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
        xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers.
Numerical Experiment
Assume that there are 3 machines and 7 jobs with the following linear un-
certain processing times
where i is the index of jobs and k is the index of machines. The Matlab
Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the
optimal solution is
Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers.
Reprinted from Liu [129].
i = 1, 2, · · · , n: customers;
k = 1, 2, · · · , m: vehicles;
Dij: travel distance from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Tij: uncertain travel time from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Φij: uncertainty distribution of Tij, i, j = 0, 1, 2, · · · , n;
[ai, bi]: time window of customer i, i = 1, 2, · · · , n.
Operational Plan
Liu [114] suggested that an operational plan should be represented by three
decision vectors x, y and t, where
x = (x1 , x2 , · · · , xn ): integer decision vector representing n customers
with 1 ≤ xi ≤ n and xi 6= xj for all i 6= j, i, j = 1, 2, · · · , n. That is, the
sequence {x1 , x2 , · · · , xn } is a rearrangement of {1, 2, · · · , n};
y = (y1 , y2 , · · · , ym−1 ): integer decision vector with y0 ≡ 0 ≤ y1 ≤ y2 ≤
· · · ≤ ym−1 ≤ n ≡ ym ;
t = (t1 , t2 , · · · , tm ): each tk represents the starting time of vehicle k at
the depot, k = 1, 2, · · · , m.
We note that the operational plan is fully determined by the decision
vectors x, y and t in the following way. For each k (1 ≤ k ≤ m), if yk = yk−1 ,
then vehicle k is not used; if yk > yk−1 , then vehicle k is used and starts from
the depot at time tk , and the tour of vehicle k is 0 → xyk−1 +1 → xyk−1 +2 →
· · · → xyk → 0. Thus the tours of all vehicles are as follows:
    [Diagram: vehicle V-1 visits customers x1, · · · , x_{y1}; V-2 visits
    x_{y1+1}, · · · , x_{y2}; V-3 visits x_{y2+1}, · · · , x_{y3}; each tour
    starts and ends at the depot]
It is clear that this type of representation is intuitive, and the total number
of decision variables is n + 2m − 1. We also note that the above decision
variables x, y and t ensure that: (a) each vehicle will be used at most once;
(b) all tours begin and end at the depot; (c) each customer will be visited
by one and only one vehicle; and (d) there is no subtour.
Arrival Times

Let fi(x, y, t) be the arrival time of a vehicle at customer i for
i = 1, 2, · · · , n. We remind readers that fi(x, y, t) are determined by the
decision variables x, y and t, i = 1, 2, · · · , n. Since unloading can start either
immediately or later when a vehicle arrives at a customer, the calculation of
fi(x, y, t) is heavily dependent on the operational strategy. Here we assume
that the customer does not permit a delivery earlier than the time window.
That is, the vehicle will wait to unload until the beginning of the time window
if it arrives before the time window. If a vehicle arrives at a customer after
the beginning of the time window, unloading will start immediately. For each
k with 1 ≤ k ≤ m, if vehicle k is used (i.e., yk > yk−1), then we have
    f_{x_{yk−1+1}}(x, y, t) = tk + T_{0 x_{yk−1+1}}

and

    f_{x_{yk−1+j}}(x, y, t) = f_{x_{yk−1+j−1}}(x, y, t) ∨ a_{x_{yk−1+j−1}} + T_{x_{yk−1+j−1} x_{yk−1+j}}

for 2 ≤ j ≤ yk − yk−1. If the vehicle k is used, i.e., yk > yk−1, then the arrival
time f_{x_{yk−1+1}}(x, y, t) at the customer x_{yk−1+1} is an uncertain variable
whose inverse uncertainty distribution is

    Ψ^{-1}_{x_{yk−1+1}}(x, y, t, α) = tk + Φ^{-1}_{0 x_{yk−1+1}}(α).

Generally, suppose the arrival time f_{x_{yk−1+j−1}}(x, y, t) has an inverse uncer-
tainty distribution Ψ^{-1}_{x_{yk−1+j−1}}(x, y, t, α). Then f_{x_{yk−1+j}}(x, y, t) has
an inverse uncertainty distribution

    Ψ^{-1}_{x_{yk−1+j}}(x, y, t, α) = Ψ^{-1}_{x_{yk−1+j−1}}(x, y, t, α) ∨ a_{x_{yk−1+j−1}} + Φ^{-1}_{x_{yk−1+j−1} x_{yk−1+j}}(α).
Travel Distance

Let g(x, y) be the total travel distance of all vehicles. Then we have

    g(x, y) = Σ_{k=1}^m gk(x, y)                                (3.29)

where

    gk(x, y) = D_{0 x_{yk−1+1}} + Σ_{j=yk−1+1}^{yk−1} D_{xj xj+1} + D_{x_{yk} 0}, if yk > yk−1
             = 0,                                                                 if yk = yk−1

for k = 1, 2, · · · , m.
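The total travel distance (3.29) is a purely deterministic function of the decision vectors. A Python sketch (illustration only) that evaluates it with Dij = |i − j|, the distance used in the numerical experiment, at the decision vectors x* = (1, 3, 2, 5, 7, 4, 6), y* = (2, 5):

```python
# Total travel distance g(x, y) of (3.29).  x is a permutation of the
# customers; y splits it into per-vehicle tours, as in the book.

def tour_distance(tour, D):
    """Distance of one tour: 0 -> tour[0] -> ... -> tour[-1] -> 0."""
    if not tour:
        return 0                      # unused vehicle, g_k = 0
    total = D(0, tour[0]) + D(tour[-1], 0)
    total += sum(D(tour[j], tour[j + 1]) for j in range(len(tour) - 1))
    return total

def g(x, y, D, m):
    bounds = [0] + list(y) + [len(x)]
    return sum(tour_distance(list(x[bounds[k]:bounds[k + 1]]), D)
               for k in range(m))

D = lambda i, j: abs(i - j)
x = (1, 3, 2, 5, 7, 4, 6)
y = (2, 5)
print(g(x, y, D, 3))   # 6 + 14 + 12 = 32
```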
If we want to minimize the total travel distance of all vehicles subject to the
time window constraint, then we have the following vehicle routing model,

    min_{x,y,t} g(x, y)
    subject to:
        M{fi(x, y, t) ≤ bi} ≥ αi, i = 1, 2, · · · , n
        1 ≤ xi ≤ n, i = 1, 2, · · · , n                         (3.31)
        xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
        0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
        xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers

which is equivalent to

    min_{x,y,t} g(x, y)
    subject to:
        Ψ_i^{-1}(x, y, t, αi) ≤ bi, i = 1, 2, · · · , n
        1 ≤ xi ≤ n, i = 1, 2, · · · , n                         (3.32)
        xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
        0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
        xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers.
Numerical Experiment
Assume that there are 3 vehicles and 7 customers with time windows shown in
Table 3.1, and each customer is visited within time windows with confidence
level 0.90.
We also assume that the distances are Dij = |i − j| for i, j = 0, 1, 2, · · · , 7,
and the travel times are normal uncertain variables
x∗ = (1, 3, 2, 5, 7, 4, 6),
y ∗ = (2, 5), (3.33)
t∗ = (6 : 18, 4 : 18, 8 : 18).
    [Figure 3.5: a project network with 8 milestones; activities lead from
    node 1 through nodes 2-7 to node 8]
Starting Times

For simplicity, we write ξ = {ξij : (i, j) ∈ A} and x = (x1, x2, · · · , xn). Let
Ti(x, ξ) denote the starting time of all activities (i, j) in A. According to the
assumptions, the starting time of the total project (i.e., the starting time of
all activities (1, j) in A) should be

    T1(x, ξ) = x1                                               (3.34)

whose inverse uncertainty distribution is

    Ψ_1^{-1}(x, α) = x1.                                        (3.35)

From the starting time T1(x, ξ), we deduce that the starting time of activity
(2, 5) is

    T2(x, ξ) = x2 ∨ (x1 + ξ12)                                  (3.36)

whose inverse uncertainty distribution may be written as

    Ψ_2^{-1}(x, α) = x2 ∨ (x1 + Φ_12^{-1}(α)).                  (3.37)

Generally, suppose that the starting time Tk(x, ξ) of all activities (k, i) in A
has an inverse uncertainty distribution Ψ_k^{-1}(x, α). Then the starting time
Ti(x, ξ) of all activities (i, j) in A should be

    Ti(x, ξ) = xi ∨ max_{(k,i)∈A} (Tk(x, ξ) + ξki)              (3.38)

whose inverse uncertainty distribution is

    Ψ_i^{-1}(x, α) = xi ∨ max_{(k,i)∈A} (Ψ_k^{-1}(x, α) + Φ_ki^{-1}(α)).   (3.39)

Completion Time

The completion time T(x, ξ) of the total project (i.e., the finish time of all
activities (k, n + 1) in A) is

    T(x, ξ) = max_{(k,n+1)∈A} (Tk(x, ξ) + ξ_{k,n+1}).
Total Cost

Based on the completion time T(x, ξ), the total cost of the project can be
written as

    C(x, ξ) = Σ_{(i,j)∈A} cij (1 + r)^⌈T(x,ξ)−xi⌉               (3.42)

where ⌈a⌉ represents the minimal integer greater than or equal to a. Note that
C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution
is

    Υ^{-1}(x, α) = Σ_{(i,j)∈A} cij (1 + r)^⌈Ψ^{-1}(x,α)−xi⌉     (3.43)
Numerical Experiment
Consider a project scheduling problem shown by Figure 3.5 in which there are
8 milestones and 11 activities. Assume that all duration times of activities
are linear uncertain variables,
cij = i + j, ∀(i, j) ∈ A.
In addition, we also suppose that the interest rate is r = 0.02, the due date is
T0 = 60, and the confidence level is α0 = 0.85. The Matlab Uncertainty Tool-
box (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution
is
    x∗ = (7, 24, 17, 16, 35, 33, 30).                           (3.46)

In other words, the optimal allocating times of all loans needed for all activ-
ities are shown in Table 3.2, and the expected total cost is 190.6:

    Date:  7   16   17   24   30   33   35
    Node:  1    4    3    2    7    6    5
    Loan: 12   11   27    7   15   14   13
and E[fj (x, ξ)] < E[fj (x∗ , ξ)] for at least one index j.
and

    d_i^− = bi − E[fi(x, ξ)], if E[fi(x, ξ)] < bi               (3.54)
          = 0,                 otherwise

for each i. Sometimes, the objective function in the goal programming model
is written as follows,
    lexmin { Σ_{i=1}^m (ui1 d_i^+ + vi1 d_i^−), Σ_{i=1}^m (ui2 d_i^+ + vi2 d_i^−), · · · , Σ_{i=1}^m (uil d_i^+ + vil d_i^−) }
Definition 3.5 Suppose that x∗ is a feasible control vector of the leader and
(y ∗1 , y ∗2 , · · · , y ∗m ) is a Nash equilibrium of followers with respect to x∗ . We call
the array (x∗ , y ∗1 , y ∗2 , · · · , y ∗m ) a Stackelberg-Nash equilibrium to the uncertain
multilevel programming (3.57) if
Uncertain Statistics
    [Figure: an expert's experimental datum (x, α), where α = M{ξ ≤ x}
    and 1 − α = M{ξ ≥ x}]
Figure 4.1: Expert’s Experimental Data (x, α). Reprinted from Liu [129].
Q1: May I ask you how far it is from Beijing to Tianjin? What do you think
is the minimum distance?
A3: 130km.
Q4: What is the belief degree that the real distance is less than 130km?
A5: 140km.
Q6: What is the belief degree that the real distance is less than 140km?
A7: 120km.
Q8: What is the belief degree that the real distance is less than 120km?
A8: 0.3. (an expert’s experimental data (120, 0.3) is acquired)
Q9: Is there another number this distance may be?
A9: No idea.
By using the questionnaire survey, five expert’s experimental data of the
travel distance between Beijing and Tianjin are acquired from the domain
expert,
(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1). (4.4)
Example 4.1: Recall that the five expert’s experimental data (100, 0),
(120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Bei-
jing and Tianjin have been acquired in Section 4.2. Based on those expert’s
experimental data, an empirical uncertainty distribution of travel distance is
shown in Figure 4.3.
    [Figure 4.3: the empirical uncertainty distribution Φ(x) through the
    expert's experimental data (x1, α1), (x2, α2), (x3, α3), (x4, α4), (x5, α5)]
The optimal solution θ̂ of (4.11) is called the least squares estimate of θ, and
then the least squares uncertainty distribution is Φ(x|θ̂).
(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95). (4.13)
    [Figure: the least squares uncertainty distribution Φ(x|θ̂) fitted to the
    expert's experimental data]
(0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9). (4.16)
Φ(x|θ1 , θ2 , · · · , θp ) (4.18)
Wang and Peng [230] proposed a method of moments to estimate the un-
known parameters of an uncertainty distribution. At first, the k-th empirical
moment of the expert's experimental data is defined as that of the corre-
sponding empirical uncertainty distribution, i.e.,

    ξ̄^k = α1 x1^k + (1/(k+1)) Σ_{i=1}^{n−1} Σ_{j=0}^k (αi+1 − αi) xi^j x_{i+1}^{k−j} + (1 − αn) xn^k.   (4.21)
The moment estimates θ̂1, θ̂2, · · · , θ̂p are then obtained by equating the first
p moments of Φ(x|θ1, θ2, · · · , θp) to the corresponding first p empirical mo-
ments. In other words, the moment estimates θ̂1, θ̂2, · · · , θ̂p should solve the
system of equations,

    ∫_0^{+∞} (1 − Φ(x^{1/k} | θ1, θ2, · · · , θp)) dx = ξ̄^k,  k = 1, 2, · · · , p.   (4.22)
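The empirical moments (4.21) are straightforward to compute. A Python sketch (illustration only) evaluating them on the expert's experimental data of the example below; the first two moments come out as 2.5100 and 7.7226:

```python
# Empirical k-th moment of expert's experimental data, per (4.21).

def empirical_moment(data, k):
    xs = [x for x, _ in data]
    alphas = [a for _, a in data]
    n = len(data)
    m = alphas[0] * xs[0] ** k + (1 - alphas[-1]) * xs[-1] ** k
    for i in range(n - 1):
        inner = sum(xs[i] ** j * xs[i + 1] ** (k - j)
                    for j in range(k + 1))
        m += (alphas[i + 1] - alphas[i]) * inner / (k + 1)
    return m

data = [(1.2, 0.1), (1.5, 0.3), (1.8, 0.4),
        (2.5, 0.6), (3.9, 0.8), (4.6, 0.9)]
print(empirical_moment(data, 1))   # 2.5100
print(empirical_moment(data, 2))   # close to 7.7226
```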
(1.2, 0.1), (1.5, 0.3), (1.8, 0.4), (2.5, 0.6), (3.9, 0.8), (4.6, 0.9). (4.23)
Then the first three empirical moments are 2.5100, 7.7226 and 29.4936. We
also assume that the uncertainty distribution to be determined has a zigzag
form with three unknown parameters a, b and c, i.e.,

    Φ(x|a, b, c) = 0,                        if x ≤ a
                 = (x − a)/(2(b − a)),       if a ≤ x ≤ b       (4.24)
                 = (x + c − 2b)/(2(c − b)),  if b ≤ x ≤ c
                 = 1,                        if x ≥ c.

From the expert's experimental data, we may believe that the unknown pa-
rameters must be positive numbers. Thus the first three moments of the
zigzag uncertainty distribution Φ(x|a, b, c) are

    (a + 2b + c)/4,
    (a^2 + ab + 2b^2 + bc + c^2)/6,
    (a^3 + a^2 b + ab^2 + 2b^3 + b^2 c + bc^2 + c^3)/8.
It follows from the method of moments that the unknown parameters a, b, c
should solve the system of equations,

    a + 2b + c = 4 × 2.5100
    a^2 + ab + 2b^2 + bc + c^2 = 6 × 7.7226                     (4.25)
    a^3 + a^2 b + ab^2 + 2b^3 + b^2 c + bc^2 + c^3 = 8 × 29.4936.
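Systems like (4.25) are solved numerically. A Python sketch (illustration only): as a self-check, take a zigzag distribution with known parameters (a, b, c) = (1, 2, 4), compute its first three moments by the closed forms above, and recover the parameters with a small Newton iteration; a library solver such as scipy.optimize.fsolve would do the same job:

```python
# Method of moments as root-finding, self-checked on known parameters.

def moments(a, b, c):
    return [(a + 2 * b + c) / 4,
            (a * a + a * b + 2 * b * b + b * c + c * c) / 6,
            (a ** 3 + a * a * b + a * b * b + 2 * b ** 3
             + b * b * c + b * c * c + c ** 3) / 8]

target = moments(1.0, 2.0, 4.0)        # moments of the known zigzag

def residual(v):
    return [m - t for m, t in zip(moments(*v), target)]

def solve3(J, f):
    """Solve the 3x3 linear system J d = f by Gaussian elimination."""
    A = [J[r][:] + [f[r]] for r in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            factor = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= factor * A[col][c]
    d = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        d[r] = (A[r][3]
                - sum(A[r][c] * d[c] for c in range(r + 1, 3))) / A[r][r]
    return d

v, h = [0.8, 2.3, 3.7], 1e-6           # starting guess, difference step
for _ in range(60):
    f = residual(v)
    J = [[(residual([v[j] + h if j == i else v[j]
                     for j in range(3)])[r] - f[r]) / h
          for i in range(3)] for r in range(3)]
    d = solve3(J, f)
    v = [v[i] - d[i] for i in range(3)]

print([round(t, 6) for t in v])        # should recover (1, 2, 4)
```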
    wi = 1/m, ∀i = 1, 2, · · · , m.                             (4.28)
Since Φ1 (x), Φ2 (x), · · ·, Φm (x) are uncertainty distributions, they are increas-
ing functions taking values in [0, 1] and are not identical to either 0 or 1. It
is easy to verify that their convex combination Φ(x) is also an increasing
function taking values in [0, 1] and Φ(x) 6≡ 0, Φ(x) 6≡ 1. Hence Φ(x) is also
an uncertainty distribution by Peng-Iwamura theorem.
Step 2. Use the i-th expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · ,
(xini , αini ) to generate the uncertainty distributions Φi of the i-
th domain experts, i = 1, 2, · · · , m, respectively.
Step 3. Compute Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) where
w1 , w2 , · · · , wm are convex combination coefficients representing
weights of the domain experts.
Step 4. If |αij − Φ(xij )| are less than a given level ε > 0 for all i and j, then
go to Step 5. Otherwise, the i-th domain experts receive the sum-
mary (for example, the function Φ obtained in the previous round
and the reasons of other experts), and then provide a set of revised
expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · , (xini , αini ) for
i = 1, 2, · · · , m. Go to Step 2.
Step 5. The last function Φ is the uncertainty distribution to be determined.
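Step 3 above, the convex combination of the experts' distributions, can be sketched in a few lines of Python (illustration with two hypothetical linear experts and equal weights, not from the book):

```python
# Convex combination Phi = w1*Phi1 + w2*Phi2 of two hypothetical
# expert distributions; the result is again an uncertainty
# distribution (increasing, valued in [0, 1]).

def linear_cdf(a, b):
    return lambda x: min(max((x - a) / (b - a), 0.0), 1.0)

experts = [linear_cdf(100, 140), linear_cdf(110, 150)]
weights = [0.5, 0.5]

combined = lambda x: sum(w * phi(x) for w, phi in zip(weights, experts))

print([combined(x) for x in (100, 120, 150)])   # 0.0, 0.375, 1.0
```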
The term risk has been used in different ways in the literature. Here the risk
is defined as the "accidental loss" plus the "uncertain measure of such loss".
Uncertain risk analysis is a tool to quantify risk via uncertainty theory. One
main feature of this topic is to model events that almost never occur. This
chapter will introduce a definition of risk index and provide some useful
formulas for calculating risk index. This chapter will also discuss structural
risk analysis and investment risk analysis in uncertain environments.
Example 5.1: Consider a series system in which there are n elements whose
lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works whenever
all elements work. Thus the system lifetime is
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (5.2)
If the loss is understood as the case that the system fails before the time T ,
then we have a loss function
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (5.3)
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (5.4)
If the loss is understood as the case that the system fails before the time T ,
then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (5.5)
If the loss is understood as the case that the system fails before the time T,
then the loss function is

    f(ξ1, ξ2, · · · , ξn) = T − k-max [ξ1, ξ2, · · · , ξn].

Hence the system fails if and only if f(ξ1, ξ2, · · · , ξn) > 0. Note that a series
system is an n-out-of-n system, and a parallel system is a 1-out-of-n system.
is active, and one of the redundant elements begins to work only when the
active element fails. Thus the system lifetime is

    ξ = ξ1 + ξ2 + · · · + ξn.                                   (5.8)

If the loss is understood as the case that the system fails before the time T,
then the loss function is

    f(ξ1, ξ2, · · · , ξn) = T − (ξ1 + ξ2 + · · · + ξn).
Definition 5.2 (Liu [128]) Assume that a system contains uncertain factors
ξ1, ξ2, · · · , ξn and has a loss function f. Then the risk index is the uncertain
measure that the system is loss-positive, i.e.,

    Risk = M{f(ξ1, ξ2, · · · , ξn) > 0}.
Theorem 5.1 (Liu [128], Risk Index Theorem) Assume a system contains
independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distri-
butions Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is
strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn , then the risk index is just the root α of the
equation
    f(Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) = 0.   (5.11)
Remark 5.2: Keep in mind that sometimes the equation (5.11) may not
have a root. In this case, if

    f(Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) < 0   (5.12)

for all α, then we set the root α = 0; and if

    f(Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) > 0   (5.13)

for all α, then we set the root α = 1.
k-max [Φ−1 −1 −1
1 (α), Φ2 (α), · · · , Φn (α)] = T. (5.24)
Φ1⁻¹(α) + Φ2⁻¹(α) + · · · + Φn⁻¹(α) = T. (5.28)
142 Chapter 5 - Uncertain Risk Analysis
Example 5.5: (The Simplest Case) Assume there is only a single strength
variable ξ and a single load variable η with continuous uncertainty distribu-
tions Φ and Ψ, respectively. In this case, the structural risk index is
Risk = M{ξ < η}.
It follows from the risk index theorem that the risk index is just the root α
of the equation
Φ−1 (α) = Ψ−1 (1 − α). (5.30)
Especially, if the strength variable ξ has a normal uncertainty distribution
N (es , σs ) and the load variable η has a normal uncertainty distribution
N (el , σl ), then the structural risk index is
Risk = (1 + exp(π(es − el )/(√3 (σs + σl ))))⁻¹. (5.31)
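The closed form (5.31) can be checked numerically: plugging it back into the root equation Φ⁻¹(α) = Ψ⁻¹(1 − α) with the inverse normal uncertainty distributions should give equality. A sketch (Python; helper names are ours):

```python
import math

def normal_inv(e, sigma, alpha):
    # Inverse of the normal uncertainty distribution N(e, sigma)
    return e + (sigma * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

def structural_risk(es, ss, el, sl):
    # Closed form (5.31) for strength N(es, ss) and load N(el, sl)
    return 1.0 / (1.0 + math.exp(math.pi * (es - el) / (math.sqrt(3) * (ss + sl))))

es, ss, el, sl = 10.0, 1.0, 8.0, 0.5
alpha = structural_risk(es, ss, el, sl)
# alpha solves Phi^{-1}(alpha) = Psi^{-1}(1 - alpha):
print(abs(normal_inv(es, ss, alpha) - normal_inv(el, sl, 1 - alpha)))
```

Since the strength exceeds the load on average (es > el), the computed risk index lies below 0.5, as expected.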
That is,
Risk = Φ1 (c1 ) ∨ Φ2 (c2 ) ∨ · · · ∨ Φn (cn ). (5.32)
That is,
Risk = α1 ∨ α2 ∨ · · · ∨ αn (5.33)
where αi are the roots of the equations
Φi⁻¹(α) = Ψi⁻¹(1 − α) (5.34)
for i = 1, 2, · · · , n, respectively.
However, generally speaking, the load variables η1 , η2 , · · · , ηn are neither
constants nor independent. For example, the load variables η1 , η2 , · · · , ηn
may be functions of independent uncertain variables τ1 , τ2 , · · · , τm . In this
case, the formula (5.33) is no longer valid. Thus we have to deal with those
structural systems case by case.
f (ξ1 , ξ2 , · · · , ξn , η) = η − ξ1 ∧ ξ2 ∧ · · · ∧ ξn .
Then
Risk = M{f (ξ1 , ξ2 , · · · , ξn , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , · · · , ξn , it follows from the risk index theo-
rem that the risk index is just the root α of the equation
Ψ⁻¹(1 − α) − Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ · · · ∧ Φn⁻¹(α) = 0. (5.35)
Ψ⁻¹(1 − α) = Φi⁻¹(α) (5.36)
Risk = α1 ∨ α2 ∨ · · · ∨ αn . (5.37)
Example 5.9: Consider a structural system shown in Figure 5.5 that consists
of 2 rods and an object. Assume that the strength variables of the left and
right rods are uncertain variables ξ1 and ξ2 with uncertainty distributions
Φ1 and Φ2 , respectively. We also assume that the gravity of the object is an
uncertain variable η with uncertainty distribution Ψ. In this case, the load
variables of left and right rods are respectively equal to
η sin θ2 / sin(θ1 + θ2),   η sin θ1 / sin(θ1 + θ2).
Thus the structural system fails whenever the load variable of any rod
exceeds its strength variable. Hence the structural risk index is
Risk = M{(ξ1 < η sin θ2 / sin(θ1 + θ2)) ∪ (ξ2 < η sin θ1 / sin(θ1 + θ2))}
     = M{(ξ1 / sin θ2 < η / sin(θ1 + θ2)) ∪ (ξ2 / sin θ1 < η / sin(θ1 + θ2))}
     = M{(ξ1 / sin θ2) ∧ (ξ2 / sin θ1) < η / sin(θ1 + θ2)}.
Section 5.8 - Investment Risk Analysis 145
Risk = α1 ∨ α2 . (5.41)
(Figure 5.5: a structural system of two rods, with angles θ1 and θ2 , suspending an object.)
Φ1⁻¹(α) + Φ2⁻¹(α) + · · · + Φn⁻¹(α) = c. (5.43)
5.9 Value-at-Risk
As a substitute for the risk index (5.10), the concept of value-at-risk is given
by the following definition.
Definition 5.3 (Peng [183]) Assume that a system contains uncertain fac-
tors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the value-at-risk is defined
as
VaR(α) = sup{x | M{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} ≥ α}. (5.44)
Note that VaR(α) represents the maximum possible loss when α percent of
the right tail distribution is ignored. In other words, the loss f (ξ1 , ξ2 , · · · , ξn )
will exceed VaR(α) with uncertain measure α. See Figure 5.6. If Φ(x) is the
uncertainty distribution of f (ξ1 , ξ2 , · · · , ξn ), then
VaR(α) = Φ⁻¹(1 − α).
(Figure 5.6: the uncertainty distribution Φ(x) of the loss, with VaR(α) marked on the x-axis at level α.)
Proof: Let α1 and α2 be two numbers with 0 < α1 < α2 ≤ 1. Then for any
number r < VaR(α2 ), we have
M {f (ξ1 , ξ2 , · · · , ξn ) ≥ r} ≥ α2 > α1 .
VaR(α) = f (Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)). (5.47)
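Formula (5.47) turns value-at-risk into a single evaluation of the loss function at the inverse distributions. A Python sketch (helper names are ours):

```python
def value_at_risk(f, inv_incr, inv_decr, alpha):
    # (5.47): plug 1 - alpha into the inverse distributions of the
    # variables f is increasing in, and alpha into those it decreases in.
    args = [F(1 - alpha) for F in inv_incr] + [F(alpha) for F in inv_decr]
    return f(*args)

# Series system with T = 1 and loss T - min(xi1, xi2), where
# xi1 ~ L(0, 1) and xi2 ~ L(0, 2) (inverse distributions a -> a, a -> 2a).
T = 1.0
loss = lambda x1, x2: T - min(x1, x2)   # strictly decreasing in x1 and x2
print(value_at_risk(loss, [], [lambda a: a, lambda a: 2 * a], 0.2))  # 0.8
```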
Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution
Ψ⁻¹(α) = f (Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).
Since VaR(α) = Ψ⁻¹(1 − α), the result (5.47) follows immediately.
Definition 5.4 (Liu and Ralescu [151]) Assume that a system contains un-
certain factors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the expected loss
is defined as
L = ∫_0^{+∞} M{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} dx. (5.48)
If its inverse uncertainty distribution Φ⁻¹(α) exists, then the expected loss
is
L = ∫_0^1 Φ⁻¹(α)⁺ dα. (5.50)
Theorem 5.4 (Liu and Ralescu [154]) Assume a system contains indepen-
dent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is strictly
increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect
to ξm+1 , ξm+2 , · · · , ξn , then the expected loss is
L = ∫_0^1 f ⁺(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) dα. (5.51)
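Integral (5.51) rarely has a closed form, but it is easy to approximate by a Riemann sum over α. A sketch (Python; names are ours):

```python
def expected_loss(f, inv_incr, inv_decr, n=100000):
    # Midpoint approximation of (5.51), keeping the positive part of f.
    total = 0.0
    for k in range(n):
        a = (k + 0.5) / n
        args = [F(a) for F in inv_incr] + [F(1 - a) for F in inv_decr]
        total += max(f(*args), 0.0)
    return total / n

# Loss xi ~ linear L(-1, 1) with inverse distribution a -> 2a - 1:
# L = integral_0^1 (2a - 1)^+ da = 1/4.
print(expected_loss(lambda x: x, [lambda a: 2 * a - 1], []))
```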
Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution
Ψ⁻¹(α) = f (Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).
Then (5.51) follows from (5.50) immediately.
Exercise 5.1: Let ξ be a linear uncertain variable L(a, b), and t a real
number with a < t < b. Show that the hazard distribution at time t is
Φ(x|t) =
  0, if x ≤ t
  ((x − a)/(b − t)) ∧ 0.5, if t < x ≤ (b + t)/2
  ((x − t)/(b − t)) ∧ 1, if (b + t)/2 ≤ x.
Section 5.12 - Bibliographic Notes 149
Theorem 5.5 (Liu [128], Conditional Risk Index Theorem) Assume that a
system contains uncertain factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f .
Suppose ξ1 , ξ2 , · · · , ξn are independent uncertain variables with uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively, and f (ξ1 , ξ2 , · · · , ξn ) is strictly in-
creasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to
ξm+1 , ξm+2 , · · · , ξn . If it is observed that all elements are working at some
time t, then the risk index is just the root α of the equation
f (Φ1⁻¹(1 − α|t), · · · , Φm⁻¹(1 − α|t), Φm+1⁻¹(α|t), · · · , Φn⁻¹(α|t)) = 0 (5.53)
where
Φi (x|t) =
  0, if Φi (x) ≤ Φi (t)
  (Φi (x)/(1 − Φi (t))) ∧ 0.5, if Φi (t) < Φi (x) ≤ (1 + Φi (t))/2
  (Φi (x) − Φi (t))/(1 − Φi (t)), if (1 + Φi (t))/2 ≤ Φi (x) (5.54)
for i = 1, 2, · · · , n.
Proof: It follows from Definition 5.5 that the hazard distribution of each
element is determined by (5.54). Thus the conditional risk index is obtained
by Theorem 5.1 immediately.
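The hazard distribution (5.54) is straightforward to compute from an element's distribution Φi and the observation time t. A sketch (Python; names are ours):

```python
def hazard_distribution(Phi, t):
    # Conditional (hazard) uncertainty distribution (5.54) of an element
    # that is known to be working at time t.
    pt = Phi(t)

    def Phi_cond(x):
        px = Phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)

    return Phi_cond

# Linear lifetime L(0, 1), observed working at t = 0.2 (cf. Exercise 5.1):
Phi = lambda x: max(0.0, min(1.0, x))
Pc = hazard_distribution(Phi, 0.2)
print(Pc(0.4), Pc(0.9))
```

At x = 0.4 this gives 0.5 and at x = 0.9 it gives 0.875, matching the formulas of Exercise 5.1 with a = 0, b = 1, t = 0.2.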
Uncertain Reliability Analysis
Example 6.1: For a series system, the structure function is a mapping from
{0, 1}n to {0, 1}, i.e.,
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (6.4)
Example 6.2: For a parallel system, the structure function is a mapping
from {0, 1}n to {0, 1}, i.e.,
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (6.5)
(Figure: a system of elements 1, 2 and 3 connected between Input and Output.)
Example 6.3: For a k-out-of-n system that works whenever at least k of the
n elements work, the structure function is a mapping from {0, 1}n to {0, 1},
i.e.,
f (x1 , x2 , · · · , xn ) =
  1, if x1 + x2 + · · · + xn ≥ k
  0, if x1 + x2 + · · · + xn < k. (6.6)
Especially, when k = 1, it is a parallel system; when k = n, it is a series
system.
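The structure function (6.6) is a one-liner in code, and the series and parallel systems fall out as the k = n and k = 1 cases. A sketch (Python):

```python
def structure_kofn(k, xs):
    # (6.6): the system works iff at least k of the n elements work
    return 1 if sum(xs) >= k else 0

xs = [1, 0, 1]                 # elements 1 and 3 work, element 2 fails
print(structure_kofn(3, xs))   # series (n-out-of-n): 0
print(structure_kofn(1, xs))   # parallel (1-out-of-n): 1
print(structure_kofn(2, xs))   # 2-out-of-3: 1
```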
Definition 6.2 (Liu [128]) Assume a Boolean system has uncertain ele-
ments ξ1 , ξ2 , · · · , ξn and a structure function f . Then the reliability index
is the uncertain measure that the system is working, i.e.,
Reliability = M{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (6.8)
Theorem 6.1 (Liu [128], Reliability Index Theorem) Assume that a system
contains uncertain elements ξ1 , ξ2 , · · ·, ξn , and has a structure function f . If
ξ1 , ξ2 , · · · , ξn are independent uncertain elements with reliabilities a1 , a2 , · · · ,
an , respectively, then the reliability index is
Reliability =
  sup_{f(x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ), if sup_{f(x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ) < 0.5
  1 − sup_{f(x1 ,x2 ,··· ,xn )=0} min_{1≤i≤n} νi (xi ), if sup_{f(x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ) ≥ 0.5 (6.9)
where νi (xi ) = ai if xi = 1 and νi (xi ) = 1 − ai if xi = 0, for i = 1, 2, · · · , n,
respectively.
Proof: Since ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and
f is a Boolean function, the equation (6.9) follows from Definition 6.2 and
Theorem 2.23 immediately.
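For small n, formula (6.9) can be evaluated by brute force over all Boolean vectors. In this sketch (Python) we use νi(1) = ai and νi(0) = 1 − ai, and check the series case, where the reliability index reduces to the smallest ai:

```python
from itertools import product

def reliability_index(f, a):
    # Formula (6.9), with nu_i(1) = a_i and nu_i(0) = 1 - a_i.
    n = len(a)

    def sup_min(target):
        best = 0.0
        for xs in product((0, 1), repeat=n):
            if f(*xs) == target:
                nu = min(a[i] if x == 1 else 1 - a[i] for i, x in enumerate(xs))
                best = max(best, nu)
        return best

    s1 = sup_min(1)
    return s1 if s1 < 0.5 else 1 - sup_min(0)

# Series system: the reliability index should equal min(a_i).
a = [0.9, 0.8, 0.7]
print(reliability_index(lambda x, y, z: x & y & z, a))
```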
It follows from the reliability index theorem that the reliability index is the
kth largest value of a1 , a2 , · · · , an , i.e.,
Reliability = k-max [a1 , a2 , · · · , an ].
Uncertain Propositional Logic
Example 7.1: “Tom is tall with truth value 0.7” is an uncertain proposition,
where “Tom is tall” is a statement, and its truth value is 0.7 in uncertain
measure.
Example 7.2: “John is young with truth value 0.8” is an uncertain propo-
sition, where “John is young” is a statement, and its truth value is 0.8 in
uncertain measure.
Example 7.3: “Beijing is a big city with truth value 0.9” is an uncertain
proposition, where “Beijing is a big city” is a statement, and its truth value
is 0.9 in uncertain measure.
Connective Symbols
In addition to the proposition symbols X and Y , we also need the negation
symbol ¬, the conjunction symbol ∧, the disjunction symbol ∨, the conditional
symbol →, and the biconditional symbol ↔.
Definition 7.2 (Li and Liu [100]) Let X be an uncertain proposition. Then
the truth value of X is defined as the uncertain measure that X is true, i.e.,
T (X ∨ ¬X) = 1. (7.15)
Proof: It follows from the definition of truth value and the property of
uncertain measure that
T (X ∨ ¬X) = M{(X ∨ ¬X) is true} = M{Γ} = 1.
T (X ∧ ¬X) = 0. (7.16)
Proof: It follows from the definition of truth value and the property of
uncertain measure that
T (X ∧ ¬X) = M{(X ∧ ¬X) is true} = M{∅} = 0.
Z = f (X1 , X2 , · · · , Xn ). (7.23)
for i = 1, 2, · · · , n, respectively.
Z = X1 ∧ X2 ∧ · · · ∧ Xn (7.26)
T (Z) = α1 ∧ α2 ∧ · · · ∧ αn . (7.27)
Z = X1 ∨ X2 ∨ · · · ∨ Xn (7.28)
162 Chapter 7 - Uncertain Propositional Logic
T (Z) = α1 ∨ α2 ∨ · · · ∨ αn . (7.29)
Z = X1 ↔ X2 (7.30)
At first, we have
Thus we have
T (Z) =
  α1 ∧ α2 , if α1 ≥ 0.5 and α2 ≥ 0.5
  (1 − α1 ) ∨ α2 , if α1 ≥ 0.5 and α2 < 0.5
  α1 ∨ (1 − α2 ), if α1 < 0.5 and α2 ≥ 0.5
  (1 − α1 ) ∧ (1 − α2 ), if α1 < 0.5 and α2 < 0.5. (7.31)
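The four cases of (7.31) translate directly into code. A sketch (Python):

```python
def truth_biconditional(a1, a2):
    # Piecewise formula (7.31) for T(X1 <-> X2)
    if a1 >= 0.5 and a2 >= 0.5:
        return min(a1, a2)
    if a1 >= 0.5:                     # a2 < 0.5
        return max(1 - a1, a2)
    if a2 >= 0.5:                     # a1 < 0.5
        return max(a1, 1 - a2)
    return min(1 - a1, 1 - a2)        # both below 0.5

print(truth_biconditional(0.7, 0.6))  # 0.6
print(truth_biconditional(0.7, 0.2))
```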
A run of Boolean System Calculator shows that the truth value of X is 0.7
in uncertain measure.
(∃a)X(a) = “At least one of Beijing and Tianjin is a big city”. (7.36)
Theorem 7.9 (Zhang and Li [267], Law of Excluded Middle) Let X(a) be
an uncertain predicate proposition. Then
Theorem 7.11 (Zhang and Li [267], Law of Truth Conservation) Let X(a)
be an uncertain predicate proposition. Then
Proof: The argument breaks into two cases. Case 1: If X(b) = 0, then
(∀a)X(a) = 0 and ¬(∀a)X(a) = 1. Thus
Proof: The argument breaks into two cases. Case 1: If X(b) = 0, then
¬X(b) = 1 and
Proof: The argument breaks into two cases. Case 1: If (∀a)X(a) = 0, then
¬(∀a)X(a) = 1 and
Uncertain Entailment
Yj = fj (X1 , X2 , · · · , Xn ) (8.1)
0 ≤ αi ≤ 1, i = 1, 2, · · · , n. (8.3)
T (Yj ) = cj (8.4)
for j = 1, 2, · · · , m and
νi (xi ) = αi , if xi = 1;   νi (xi ) = 1 − αi , if xi = 0 (8.6)
Since the truth values α1 , α2 , · · · , αn are not uniquely determined, the truth
value T (Z) is not unique either. In this case, we have to use the maximum
uncertainty principle to determine the truth value T (Z). That is, T (Z)
should be assigned a value as close to 0.5 as possible. In other words,
we should minimize |T (Z) − 0.5| by choosing appropriate values of
α1 , α2 , · · · , αn . The uncertain entailment model is thus written by Liu [126]
as follows,
min |T (Z) − 0.5|
subject to:
(8.8)
0 ≤ αi ≤ 1, i = 1, 2, · · · , n
T (Yj ) = cj , j = 1, 2, · · · , m
Y1 = A ∨ B, Y2 = A ∧ B, Z = A → B.
It is clear that
T (Y1 ) = α1 ∨ α2 = a,
T (Y2 ) = α1 ∧ α2 = b,
T (Z) = (1 − α1 ) ∨ α2 .
In this case, the uncertain entailment model (8.8) becomes
min |(1 − α1 ) ∨ α2 − 0.5|
subject to:
0 ≤ α1 ≤ 1
(8.10)
0 ≤ α2 ≤ 1
α1 ∨ α2 = a
α1 ∧ α2 = b.
When a ≥ b, there are only two feasible solutions (α1 , α2 ) = (a, b) and
(α1 , α2 ) = (b, a). If a + b < 1, the optimal solution produces T (Z) = 1 − a;
if a + b = 1, both T (Z) = a and T (Z) = b are optimal; and if a + b > 1, the
optimal solution produces T (Z) = b.
When a < b, there is no feasible solution and the truth values are ill-assigned.
In summary, from T (A ∨ B) = a and T (A ∧ B) = b we entail
T (A → B) =
  1 − a, if a ≥ b and a + b < 1
  a or b, if a ≥ b and a + b = 1
  b, if a ≥ b and a + b > 1
  illness, if a < b. (8.11)
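The entailment (8.11) can be reproduced by examining the two feasible assignments of model (8.10) and applying the maximum uncertainty principle. A sketch (Python; None stands for the "illness" case):

```python
def entail_implication(a, b):
    # From T(A or B) = a and T(A and B) = b, infer T(A -> B)
    # by the maximum uncertainty principle; None means ill-assigned.
    if a < b:
        return None
    # The only feasible assignments are (alpha1, alpha2) = (a, b) or (b, a).
    candidates = [max(1 - a1, a2) for (a1, a2) in ((a, b), (b, a))]
    return min(candidates, key=lambda t: abs(t - 0.5))

print(entail_implication(0.8, 0.1))   # a + b < 1: gives 1 - a
print(entail_implication(0.8, 0.6))   # a + b > 1: gives b
print(entail_implication(0.3, 0.6))   # a < b: None
```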
and b, respectively. What is the truth value of B? Denote the truth values
of A and B by α1 and α2 , respectively, and write
Y1 = A, Y2 = A → B, Z = B.
It is clear that
T (Y1 ) = α1 = a,
T (Y2 ) = (1 − α1 ) ∨ α2 = b,
T (Z) = α2 .
In this case, the uncertain entailment model (8.8) becomes
min |α2 − 0.5|
subject to:
0 ≤ α1 ≤ 1
(8.12)
0 ≤ α2 ≤ 1
α1 = a
(1 − α1 ) ∨ α2 = b.
When a + b > 1, there is a unique feasible solution and then the optimal
solution is
α1∗ = a, α2∗ = b.
Thus T (B) = α2∗ = b. When a + b = 1, the feasible set is {a} × [0, b] and the
optimal solution is
α1∗ = a, α2∗ = 0.5 ∧ b.
Thus T (B) = α2∗ = 0.5 ∧ b. When a + b < 1, there is no feasible solution and
the truth values are ill-assigned. In summary, from
T (A) = a, T (A → B) = b (8.13)
we entail
T (B) =
  b, if a + b > 1
  0.5 ∧ b, if a + b = 1
  illness, if a + b < 1. (8.14)
This result coincides with the classical modus ponens that if both A and
A → B are true, then B is true.
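The modus ponens rule (8.14) as a function, with the classical case as a check (Python sketch; None again marks the illness case):

```python
def modus_ponens(a, b):
    # T(B) from T(A) = a and T(A -> B) = b, per (8.14)
    if a + b > 1:
        return b
    if a + b == 1:
        return min(0.5, b)
    return None  # truth values ill-assigned

# Classical case: A true and A -> B true entail B true.
print(modus_ponens(1.0, 1.0))  # 1.0
```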
and b, respectively. What is the truth value of A? Denote the truth values
of A and B by α1 and α2 , respectively, and write
Y1 = A → B, Y2 = B, Z = A.
It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = α2 = b,
T (Z) = α1 .
When a > b, there is a unique feasible solution and then the optimal solution
is
α1∗ = 1 − a, α2∗ = b.
Thus T (A) = α1∗ = 1 − a. In summary, from
T (A → B) = a, T (B) = b (8.16)
we entail
T (A) =
  1 − a, if a > b
  (1 − a) ∨ 0.5, if a = b
  illness, if a < b. (8.17)
This result coincides with the classical modus tollens that if A → B is true
and B is false, then A is false.
174 Chapter 8 - Uncertain Entailment
Y1 = A → B, Y2 = B → C, Z = A → C.
It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = (1 − α2 ) ∨ α3 = b,
T (Z) = (1 − α1 ) ∨ α3 .
In this case, the uncertain entailment model (8.8) becomes
min |(1 − α1 ) ∨ α3 − 0.5|
subject to:
0 ≤ α1 ≤ 1
0 ≤ α2 ≤ 1 (8.18)
0 ≤ α3 ≤ 1
(1 − α1 ) ∨ α2 = a
(1 − α2 ) ∨ α3 = b.
Write the optimal solution by (α1∗ , α2∗ , α3∗ ). When a ∧ b ≥ 0.5, we have
T (A → C) = (1 − α1∗ ) ∨ α3∗ = a ∧ b.
When a + b < 1, there is no feasible solution and the truth values are ill-
assigned. In summary, from
T (A → B) = a, T (B → C) = b (8.19)
we entail
T (A → C) =
  a ∧ b, if a ≥ 0.5 and b ≥ 0.5
  0.5, if a + b ≥ 1 and a ∧ b < 0.5
  illness, if a + b < 1. (8.20)
This result coincides with the classical hypothetical syllogism that if both
A → B and B → C are true, then A → C is true.
Section 8.5 - Bibliographic Notes 175
Uncertain Set
Uncertain set was first proposed by Liu [127] in 2010 for modeling unsharp
concepts. This chapter will introduce the concepts of uncertain set, mem-
bership function, independence, expected value, variance, entropy, and dis-
tance. This chapter will also introduce the operational law for uncertain sets
via membership functions or inverse membership functions, and uncertain
statistics for determining membership functions.
Remark 9.1: It is clear that uncertain set (Liu [127]) is very different from
random set (Robbins [198] and Matheron [167]) and fuzzy set (Zadeh [260]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.
Remark 9.2: What is the difference between uncertain variable and un-
certain set? Both of them belong to the same broad category of uncertain
concepts. However, they are differentiated by their mathematical definitions:
the former refers to one value, while the latter refers to a collection of values.
Essentially, the difference between uncertain variable and uncertain set
focuses on the property of exclusivity.
(Figure: an uncertain set ξ(γ) shown at sample points γ1 , γ2 , γ3 ∈ Γ.)
Theorem 9.1 Let ξ be an uncertain set and let B be a Borel set. Then the
set
{B ⊄ ξ} = {γ ∈ Γ | B ⊄ ξ(γ)} (9.3)
is an event.
Section 9.1 - Uncertain Set 179
Theorem 9.2 Let ξ be an uncertain set and let B be a Borel set. Then the
set
{ξ ⊄ B} = {γ ∈ Γ | ξ(γ) ⊄ B} (9.4)
is an event.
their intersection is
∅, if γ = γ1
(ξ ∩ η)(γ) = (2, 3], if γ = γ2
(2, 4], if γ = γ3 ,
180 Chapter 9 - Uncertain Set
Theorem 9.3 Let ξ be an uncertain set and let ℜ be the set of real numbers.
Then
ξ ∪ ℜ = ℜ, ξ ∩ ℜ = ξ. (9.8)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ℜ)(γ) = ξ(γ) ∪ ℜ = ℜ.
Thus we have ξ ∪ ℜ = ℜ. In addition, the intersection is
(ξ ∩ ℜ)(γ) = ξ(γ) ∩ ℜ = ξ(γ).
Thus we have ξ ∩ ℜ = ξ.
Theorem 9.4 Let ξ be an uncertain set and let ∅ be the empty set. Then
ξ ∪ ∅ = ξ, ξ ∩ ∅ = ∅. (9.9)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ∅)(γ) = ξ(γ) ∪ ∅ = ξ(γ).
Thus we have ξ ∪ ∅ = ξ. In addition, the intersection is
(ξ ∩ ∅)(γ) = ξ(γ) ∩ ∅ = ∅.
Thus we have ξ ∩ ∅ = ∅.
Theorem 9.5 (Idempotent Law) Let ξ be an uncertain set. Then we have
ξ ∪ ξ = ξ, ξ ∩ ξ = ξ. (9.10)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).
Thus we have ξ ∪ ξ = ξ. In addition, the intersection is
(ξ ∩ ξ)(γ) = ξ(γ) ∩ ξ(γ) = ξ(γ).
Thus we have ξ ∩ ξ = ξ.
ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ), ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ). (9.15)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
Thus we have ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ).
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
we get ξ ∩ (ξ ∪ η) = ξ.
Theorem 9.12 (De Morgan’s Law) Let ξ and η be uncertain sets. Then
we get (ξ ∩ η)c = ξ c ∪ η c .
Section 9.2 - Membership Function 183
Figure 9.2: M{B ⊂ ξ} = inf_{x∈B} µ(x) and M{ξ ⊂ B} = 1 − sup_{x∈Bc} µ(x).
Reprinted from Liu [133].
Remark 9.4: The value of µ(x) represents the membership degree that x
belongs to the uncertain set ξ. If µ(x) = 1, then x completely belongs to ξ;
if µ(x) = 0, then x does not belong to ξ at all. Thus the larger the value of
µ(x) is, the more strongly x belongs to ξ.
Exercise 9.1: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ.
Show that such an uncertain set has a membership function µ(x) ≡ 1.
Exercise 9.2: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Show
that such an uncertain set has a membership function µ(x) ≡ 0.
ξ(γ) = [γ − 1, 1 − γ] (9.27)
Exercise 9.5: It is not true that every uncertain set has a membership
function. Show that the uncertain set
ξ = { [2, 4] with uncertain measure 0.6, [1, 3] with uncertain measure 0.4 } (9.29)
(Figure: a triangular membership function with vertices a, b, c and a trapezoidal membership function with vertices a, b, c, d.)
What is “young”?
Sometimes we say “those students are young”. What ages can be considered
“young”? In this case, “young” may be regarded as an uncertain set whose
membership function is
0, if x ≤ 15
(x − 15)/5, if 15 ≤ x ≤ 20
µ(x) = 1, if 20 ≤ x ≤ 35 (9.32)
(45 − x)/10, if 35 ≤ x ≤ 45
0, if x ≥ 45.
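The trapezoidal membership function (9.32) as code (Python sketch):

```python
def mu_young(x):
    # Trapezoidal membership function (9.32) for "young"
    if x <= 15:
        return 0.0
    if x <= 20:
        return (x - 15) / 5
    if x <= 35:
        return 1.0
    if x <= 45:
        return (45 - x) / 10
    return 0.0

print([mu_young(x) for x in (10, 17.5, 25, 40, 50)])  # [0.0, 0.5, 1.0, 0.5, 0.0]
```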
(Figure: the membership function (9.32) of “young”, with breakpoints at 15, 20, 35 and 45 years.)
What is “tall”?
Sometimes we say “those sportsmen are tall”. What heights (centimeters)
can be considered “tall”? In this case, “tall” may be regarded as an uncertain
set whose membership function rises from 0 at 180cm to 1 at 185cm, stays
at 1 until 195cm, and falls to 0 at 200cm.
(Figure: the membership function of “tall”, with breakpoints at 180cm, 185cm, 195cm and 200cm.)
What is “warm”?
Sometimes we say “those days are warm”. What temperatures can be con-
sidered “warm”? In this case, “warm” may be regarded as an uncertain set
whose membership function is
0, if x ≤ 15
(x − 15)/3, if 15 ≤ x ≤ 18
µ(x) = 1, if 18 ≤ x ≤ 24 (9.34)
(28 − x)/4, if 24 ≤ x ≤ 28
0, if 28 ≤ x.
What is “most”?
Sometimes we say “most students are boys”. What percentages can be con-
sidered “most”? In this case, “most” may be regarded as an uncertain set
whose membership function rises from 0 at 70% to 1 at 75%, stays at 1 until
85%, and falls to 0 at 90%.
(Figure: the membership function (9.34) of “warm”, with breakpoints at 15°C, 18°C, 24°C and 28°C.)
(Figure: the membership function of “most”, with breakpoints at 70%, 75%, 85% and 90%.)
Figure 9.8: Take (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue mea-
sure. Then ξ(γ) = {x ∈ ℜ | µ(x) ≥ γ} has the membership function µ. Keep
in mind that ξ is not the unique uncertain set whose membership function is
µ.
Proof: Since the membership function µ exists, it follows from the measure
inversion formula that
M{ξ = ∅} = 1 − sup_{x∈∅c} µ(x) = 1 − sup_{x∈ℜ} µ(x).
Figure 9.9: Inverse Membership Function µ⁻¹(α). Reprinted from Liu [133].
Theorem 9.17 (Liu [133]) Let ξ be an uncertain set with inverse member-
ship function µ−1 (α). Then for each α ∈ [0, 1], we have
Proof: For each x ∈ µ−1 (α), we have µ(x) ≥ α. It follows from the measure
inversion formula that
For each x ∉ µ⁻¹(α), we have µ(x) < α. It follows from the measure inversion
formula that
µl⁻¹(α) = inf µ⁻¹(α) (9.49)
is called the left inverse membership function, and
µr⁻¹(α) = sup µ⁻¹(α) (9.50)
is called the right inverse membership function. It is clear that the left inverse
membership function µl⁻¹(α) is increasing, and the right inverse membership
function µr⁻¹(α) is decreasing with respect to α.
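For the trapezoidal membership functions used throughout this chapter, the left and right inverse membership functions (9.49)–(9.50) have simple closed forms. A sketch (Python; the trapezoid corners (a, b, c, d) are an assumed parameterization):

```python
def trapezoid_inverses(a, b, c, d):
    # Left/right inverse membership functions of a trapezoidal
    # membership function with corners a <= b <= c <= d.
    left = lambda alpha: a + alpha * (b - a)    # increasing in alpha
    right = lambda alpha: d - alpha * (d - c)   # decreasing in alpha
    return left, right

# The "young" set (9.32) has corners (15, 20, 35, 45):
l, r = trapezoid_inverses(15, 20, 35, 45)
print(l(0.5), r(0.5))  # 17.5 40.0
```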
Conversely, suppose an uncertain set ξ has a left inverse membership
function µl⁻¹(α) and a right inverse membership function µr⁻¹(α). Then the
membership function µ is determined by
µ(x) =
  0, if x ≤ µl⁻¹(0)
  α, if µl⁻¹(0) ≤ x ≤ µl⁻¹(1) and µl⁻¹(α) = x
  1, if µl⁻¹(1) ≤ x ≤ µr⁻¹(1)
  β, if µr⁻¹(1) ≤ x ≤ µr⁻¹(0) and µr⁻¹(β) = x
  0, if x ≥ µr⁻¹(0). (9.51)
Note that the values of α and β may not be unique. In this case, we will take
the maximum values.
9.3 Independence
Definition 9.9 (Liu [136]) The uncertain sets ξ1 , ξ2 , · · · , ξn are said to be
independent if for any Borel sets B1 , B2 , · · · , Bn , we have
M{(ξ1∗ ⊂ B1 ) ∩ (ξ2∗ ⊂ B2 ) ∩ · · · ∩ (ξn∗ ⊂ Bn )} = M{ξ1∗ ⊂ B1 } ∧ M{ξ2∗ ⊂ B2 } ∧ · · · ∧ M{ξn∗ ⊂ Bn } (9.52)
and
M{(ξ1∗ ⊂ B1 ) ∪ (ξ2∗ ⊂ B2 ) ∪ · · · ∪ (ξn∗ ⊂ Bn )} = M{ξ1∗ ⊂ B1 } ∨ M{ξ2∗ ⊂ B2 } ∨ · · · ∨ M{ξn∗ ⊂ Bn } (9.53)
where ξi∗ are arbitrarily chosen uncertain sets from {ξi , ξic }, i = 1, 2, · · · , n,
respectively.
Remark 9.7: Note that (9.52) represents 2n equations. For example, when
n = 2, the four equations are
M{(ξ1 ⊂ B1 ) ∩ (ξ2 ⊂ B2 )} = M{ξ1 ⊂ B1 } ∧ M{ξ2 ⊂ B2 },
M{(ξ1c ⊂ B1 ) ∩ (ξ2 ⊂ B2 )} = M{ξ1c ⊂ B1 } ∧ M{ξ2 ⊂ B2 },
M{(ξ1 ⊂ B1 ) ∩ (ξ2c ⊂ B2 )} = M{ξ1 ⊂ B1 } ∧ M{ξ2c ⊂ B2 },
M{(ξ1c ⊂ B1 ) ∩ (ξ2c ⊂ B2 )} = M{ξ1c ⊂ B1 } ∧ M{ξ2c ⊂ B2 }.
Also note that (9.53) represents other 2n equations. For example, when
n = 2, the four equations are
M{(ξ1 ⊂ B1 ) ∪ (ξ2 ⊂ B2 )} = M{ξ1 ⊂ B1 } ∨ M{ξ2 ⊂ B2 },
M{(ξ1c ⊂ B1 ) ∪ (ξ2 ⊂ B2 )} = M{ξ1c ⊂ B1 } ∨ M{ξ2 ⊂ B2 },
M{(ξ1 ⊂ B1 ) ∪ (ξ2c ⊂ B2 )} = M{ξ1 ⊂ B1 } ∨ M{ξ2c ⊂ B2 },
M{(ξ1c ⊂ B1 ) ∪ (ξ2c ⊂ B2 )} = M{ξ1c ⊂ B1 } ∨ M{ξ2c ⊂ B2 }.
Theorem 9.18 Let ξ1 , ξ2 , · · · , ξn be uncertain sets, and let ξi∗ be arbitrar-
ily chosen uncertain sets from {ξi , ξic }, i = 1, 2, · · · , n, respectively. Then
ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are independent.
Proof: Let ξi∗∗ be arbitrarily chosen uncertain sets from {ξi∗ , ξi∗c }, i =
1, 2, · · · , n, respectively. Then ξ1∗ , ξ2∗ , · · · , ξn∗ and ξ1∗∗ , ξ2∗∗ , · · · , ξn∗∗ represent
the same 2ⁿ combinations. This fact implies that (9.52) and (9.53) are equivalent to
M{(ξ1∗∗ ⊂ B1 ) ∩ · · · ∩ (ξn∗∗ ⊂ Bn )} = M{ξ1∗∗ ⊂ B1 } ∧ · · · ∧ M{ξn∗∗ ⊂ Bn }, (9.54)
M{(ξ1∗∗ ⊂ B1 ) ∪ · · · ∪ (ξn∗∗ ⊂ Bn )} = M{ξ1∗∗ ⊂ B1 } ∨ · · · ∨ M{ξn∗∗ ⊂ Bn }. (9.55)
Hence ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are indepen-
dent.
Exercise 9.6: Show that the following four statements are equivalent: (i)
ξ1 and ξ2 are independent; (ii) ξ1c and ξ2 are independent; (iii) ξ1 and ξ2c are
independent; and (iv) ξ1c and ξ2c are independent.
Theorem 9.19 The uncertain sets ξ1 , ξ2 , · · · , ξn are independent if and only
if for any Borel sets B1 , B2 , · · · , Bn , we have
M{(ξ1∗ ⊄ B1 ) ∩ · · · ∩ (ξn∗ ⊄ Bn )} = M{ξ1∗ ⊄ B1 } ∧ · · · ∧ M{ξn∗ ⊄ Bn } (9.56)
and
M{(ξ1∗ ⊄ B1 ) ∪ · · · ∪ (ξn∗ ⊄ Bn )} = M{ξ1∗ ⊄ B1 } ∨ · · · ∨ M{ξn∗ ⊄ Bn } (9.57)
where ξi∗ are arbitrarily chosen from {ξi , ξic }, i = 1, 2, · · · , n, respectively.
Proof: Since {ξi∗ ⊄ Bi }c = {ξi∗ ⊂ Bi } for i = 1, 2, · · · , n, it follows from the
duality of uncertain measure that
M{(ξ1∗ ⊄ B1 ) ∩ · · · ∩ (ξn∗ ⊄ Bn )} = 1 − M{(ξ1∗ ⊂ B1 ) ∪ · · · ∪ (ξn∗ ⊂ Bn )}, (9.58)
M{ξ1∗ ⊄ B1 } ∧ · · · ∧ M{ξn∗ ⊄ Bn } = 1 − (M{ξ1∗ ⊂ B1 } ∨ · · · ∨ M{ξn∗ ⊂ Bn }), (9.59)
M{(ξ1∗ ⊄ B1 ) ∪ · · · ∪ (ξn∗ ⊄ Bn )} = 1 − M{(ξ1∗ ⊂ B1 ) ∩ · · · ∩ (ξn∗ ⊂ Bn )}, (9.60)
M{ξ1∗ ⊄ B1 } ∨ · · · ∨ M{ξn∗ ⊄ Bn } = 1 − (M{ξ1∗ ⊂ B1 } ∧ · · · ∧ M{ξn∗ ⊂ Bn }). (9.61)
It follows from (9.58), (9.59), (9.60) and (9.61) that (9.56) and (9.57) are
valid if and only if
M{(ξ1∗ ⊂ B1 ) ∩ · · · ∩ (ξn∗ ⊂ Bn )} = M{ξ1∗ ⊂ B1 } ∧ · · · ∧ M{ξn∗ ⊂ Bn }, (9.62)
M{(ξ1∗ ⊂ B1 ) ∪ · · · ∪ (ξn∗ ⊂ Bn )} = M{ξ1∗ ⊂ B1 } ∨ · · · ∨ M{ξn∗ ⊂ Bn }. (9.63)
The above two equations are also equivalent to the independence of the un-
certain sets ξ1 , ξ2 , · · · , ξn . The theorem is thus proved.
and
M{(B1 ⊂ ξ1∗) ∪ · · · ∪ (Bn ⊂ ξn∗)} = M{B1 ⊂ ξ1∗} ∨ · · · ∨ M{Bn ⊂ ξn∗}.  (9.65)
Proof: Since {Bi ⊂ ξi∗} = {ξi∗c ⊂ Bic} for i = 1, 2, · · · , n, we have
M{(B1 ⊂ ξ1∗) ∩ · · · ∩ (Bn ⊂ ξn∗)} = M{(ξ1∗c ⊂ B1c) ∩ · · · ∩ (ξn∗c ⊂ Bnc)},  (9.66)
M{B1 ⊂ ξ1∗} ∧ · · · ∧ M{Bn ⊂ ξn∗} = M{ξ1∗c ⊂ B1c} ∧ · · · ∧ M{ξn∗c ⊂ Bnc},  (9.67)
M{(B1 ⊂ ξ1∗) ∪ · · · ∪ (Bn ⊂ ξn∗)} = M{(ξ1∗c ⊂ B1c) ∪ · · · ∪ (ξn∗c ⊂ Bnc)},  (9.68)
M{B1 ⊂ ξ1∗} ∨ · · · ∨ M{Bn ⊂ ξn∗} = M{ξ1∗c ⊂ B1c} ∨ · · · ∨ M{ξn∗c ⊂ Bnc}.  (9.69)
It follows from (9.66), (9.67), (9.68) and (9.69) that (9.64) and (9.65) are
valid if and only if
M{(ξ1∗c ⊂ B1c) ∩ · · · ∩ (ξn∗c ⊂ Bnc)} = M{ξ1∗c ⊂ B1c} ∧ · · · ∧ M{ξn∗c ⊂ Bnc},  (9.70)
M{(ξ1∗c ⊂ B1c) ∪ · · · ∪ (ξn∗c ⊂ Bnc)} = M{ξ1∗c ⊂ B1c} ∨ · · · ∨ M{ξn∗c ⊂ Bnc}.  (9.71)
The above two equations are also equivalent to the independence of the un-
certain sets ξ1 , ξ2 , · · · , ξn . The theorem is thus proved.
and
M{(B1 ⊄ ξ1∗) ∪ · · · ∪ (Bn ⊄ ξn∗)} = M{B1 ⊄ ξ1∗} ∨ · · · ∨ M{Bn ⊄ ξn∗}  (9.73)
Proof: Since {Bi 6⊂ ξi∗ }c = {Bi ⊂ ξi∗ } for i = 1, 2, · · · , n, it follows from the
duality of uncertain measure that
M{(B1 ⊄ ξ1∗) ∩ · · · ∩ (Bn ⊄ ξn∗)} = 1 − M{(B1 ⊂ ξ1∗) ∪ · · · ∪ (Bn ⊂ ξn∗)},  (9.74)
M{B1 ⊄ ξ1∗} ∧ · · · ∧ M{Bn ⊄ ξn∗} = 1 − (M{B1 ⊂ ξ1∗} ∨ · · · ∨ M{Bn ⊂ ξn∗}),  (9.75)
M{(B1 ⊄ ξ1∗) ∪ · · · ∪ (Bn ⊄ ξn∗)} = 1 − M{(B1 ⊂ ξ1∗) ∩ · · · ∩ (Bn ⊂ ξn∗)},  (9.76)
M{B1 ⊄ ξ1∗} ∨ · · · ∨ M{Bn ⊄ ξn∗} = 1 − (M{B1 ⊂ ξ1∗} ∧ · · · ∧ M{Bn ⊂ ξn∗}).  (9.77)
It follows from (9.74), (9.75), (9.76) and (9.77) that (9.72) and (9.73) are
valid if and only if
M{(B1 ⊂ ξ1∗) ∩ · · · ∩ (Bn ⊂ ξn∗)} = M{B1 ⊂ ξ1∗} ∧ · · · ∧ M{Bn ⊂ ξn∗},  (9.78)
M{(B1 ⊂ ξ1∗) ∪ · · · ∪ (Bn ⊂ ξn∗)} = M{B1 ⊂ ξ1∗} ∨ · · · ∨ M{Bn ⊂ ξn∗}.  (9.79)
The above two equations are also equivalent to the independence of the un-
certain sets ξ1 , ξ2 , · · · , ξn . The theorem is thus proved.
Thus
M{B ⊂ (ξ ∪ η)} ≥ inf_{x∈B} (µ(x) ∨ ν(x)).  (9.81)
Thus
M{B ⊂ (ξ ∪ η)} ≤ inf_{x∈B} (µ(x) ∨ ν(x)).  (9.82)
The first measure inversion formula is verified. Next we prove the second
measure inversion formula. By the independence of ξ and η, we have
That is,
M{(ξ ∪ η) ⊂ B} = 1 − sup_{x∈Bc} (µ(x) ∨ ν(x)).  (9.84)
[Figure: membership functions µ(x) and ν(x) and their maximum λ(x) = µ(x) ∨ ν(x)]
That is,
M{B ⊂ (ξ ∩ η)} = inf_{x∈B} (µ(x) ∧ ν(x)).  (9.86)
The first measure inversion formula is verified. In order to prove the second
measure inversion formula, we write
Letting ε → 0, we get
Thus
M{(ξ ∩ η) ⊂ B} ≤ 1 − sup_{x∈Bc} (µ(x) ∧ ν(x)).  (9.88)
[Figure: membership functions µ(x) and ν(x) and their minimum λ(x) = µ(x) ∧ ν(x)]
Theorem 9.24 (Liu [133]) Let ξ be an uncertain set with membership function µ. Then its complement ξc has a membership function λ(x) = 1 − µ(x).
[Figure: membership function µ(x) and the membership function λ(x) = 1 − µ(x) of the complement ξc]
ξ = f (ξ1 , ξ2 , · · · , ξn ) (9.91)
Proof: For simplicity, we only prove the case n = 2. Let B be any Borel set, and write
β = inf_{x∈B} λ(x).
M{B ⊂ ξ} ≥ M{(µ1⁻¹(β) ⊂ ξ1) ∩ (µ2⁻¹(β) ⊂ ξ2)}
= M{µ1⁻¹(β) ⊂ ξ1} ∧ M{µ2⁻¹(β) ⊂ ξ2}
≥ β ∧ β = β.
Thus
M{B ⊂ ξ} ≥ inf_{x∈B} λ(x).  (9.93)
On the other hand, for any given number ε > 0, we have B ⊄ λ⁻¹(β + ε). Since λ⁻¹(β + ε) = f(µ1⁻¹(β + ε), µ2⁻¹(β + ε)), we obtain
M{B ⊄ ξ} ≥ M{(ξ1 ⊂ µ1⁻¹(β + ε)) ∩ (ξ2 ⊂ µ2⁻¹(β + ε))}
= M{ξ1 ⊂ µ1⁻¹(β + ε)} ∧ M{ξ2 ⊂ µ2⁻¹(β + ε)}
≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε
and then
M{B ⊂ ξ} = 1 − M{B ⊄ ξ} ≤ β + ε.
Letting ε → 0, we get M{B ⊂ ξ} ≤ inf_{x∈B} λ(x).
The first measure inversion formula is verified. In order to prove the second measure inversion formula, we write
β = sup_{x∈Bc} λ(x).
Then for any given number ε > 0, we have λ⁻¹(β + ε) ⊂ B. Please note that λ⁻¹(β + ε) = f(µ1⁻¹(β + ε), µ2⁻¹(β + ε)). By the independence of ξ1 and ξ2, we obtain
M{ξ ⊂ B} ≥ M{ξ ⊂ λ⁻¹(β + ε)} = M{ξ ⊂ f(µ1⁻¹(β + ε), µ2⁻¹(β + ε))}
≥ M{(ξ1 ⊂ µ1⁻¹(β + ε)) ∩ (ξ2 ⊂ µ2⁻¹(β + ε))}
= M{ξ1 ⊂ µ1⁻¹(β + ε)} ∧ M{ξ2 ⊂ µ2⁻¹(β + ε)}
≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.
Letting ε → 0, we get M{ξ ⊂ B} ≥ 1 − sup_{x∈Bc} λ(x).
On the other hand, for any given number ε > 0, we have λ⁻¹(β − ε) ⊄ B. Since λ⁻¹(β − ε) = f(µ1⁻¹(β − ε), µ2⁻¹(β − ε)), we obtain
M{ξ ⊄ B} ≥ M{(µ1⁻¹(β − ε) ⊂ ξ1) ∩ (µ2⁻¹(β − ε) ⊂ ξ2)}
= M{µ1⁻¹(β − ε) ⊂ ξ1} ∧ M{µ2⁻¹(β − ε) ⊂ ξ2}
≥ (β − ε) ∧ (β − ε) = β − ε
and then
M{ξ ⊂ B} = 1 − M{ξ ⊄ B} ≤ 1 − β + ε.
Letting ε → 0, we get M{ξ ⊂ B} ≤ 1 − sup_{x∈Bc} λ(x).
ξ = f (ξ1 , ξ2 , · · · , ξn ) (9.111)
λl⁻¹(α) = f(µ1l⁻¹(α), · · · , µml⁻¹(α), µm+1,r⁻¹(α), · · · , µnr⁻¹(α)),  (9.112)
λr⁻¹(α) = f(µ1r⁻¹(α), · · · , µmr⁻¹(α), µm+1,l⁻¹(α), · · · , µnl⁻¹(α)),  (9.113)
where λl⁻¹, µ1l⁻¹, µ2l⁻¹, · · · , µnl⁻¹ are left inverse membership functions, and λr⁻¹, µ1r⁻¹, µ2r⁻¹, · · · , µnr⁻¹ are right inverse membership functions of ξ, ξ1, ξ2, · · · , ξn, respectively.
is also an interval. Thus ξ has a regular membership function, and its left and
right inverse membership functions are determined by (9.112) and (9.113),
respectively.
Exercise 9.8: Let ξ and η be independent uncertain sets with left inverse membership functions µl⁻¹ and νl⁻¹ and right inverse membership functions µr⁻¹ and νr⁻¹, respectively. Show that the sum ξ + η has left and right inverse membership functions
λl⁻¹(α) = µl⁻¹(α) + νl⁻¹(α),  (9.114)
λr⁻¹(α) = µr⁻¹(α) + νr⁻¹(α).  (9.115)
Exercise 9.9: Let ξ and η be independent uncertain sets with left inverse membership functions µl⁻¹ and νl⁻¹ and right inverse membership functions µr⁻¹ and νr⁻¹, respectively. Show that the difference ξ − η has left and right inverse membership functions
λl⁻¹(α) = µl⁻¹(α) − νr⁻¹(α),  (9.116)
λr⁻¹(α) = µr⁻¹(α) − νl⁻¹(α).  (9.117)
Exercise 9.10: Let ξ and η be independent and positive uncertain sets with left inverse membership functions µl⁻¹ and νl⁻¹ and right inverse membership functions µr⁻¹ and νr⁻¹, respectively. Show that
ξ/(ξ + η)  (9.118)
has left and right inverse membership functions
λl⁻¹(α) = µl⁻¹(α) / (µl⁻¹(α) + νr⁻¹(α)),  (9.119)
λr⁻¹(α) = µr⁻¹(α) / (µr⁻¹(α) + νl⁻¹(α)).  (9.120)
Definition 9.10 (Liu [127]) Let ξ be a nonempty uncertain set. Then the
expected value of ξ is defined by
E[ξ] = ∫_0^{+∞} M{ξ ≽ x} dx − ∫_{−∞}^0 M{ξ ≼ x} dx  (9.121)
M{ξ ≼ x} ≡ 0, ∀x ≤ 0.
Thus
E[ξ] = ∫_0^1 1 dx + ∫_1^2 0.7 dx + ∫_2^3 0.3 dx + ∫_3^4 0.1 dx = 2.1.
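Since the measure x ↦ M{ξ ≽ x} is constant on each unit interval in this example, the expected value is just a sum of rectangle areas; a minimal arithmetic check (assuming the measure values 1, 0.7, 0.3, 0.1 from the example):

```python
# Each tuple is (lower end, upper end, constant value of M{xi >= x} on that interval).
segments = [(0, 1, 1.0), (1, 2, 0.7), (2, 3, 0.3), (3, 4, 0.1)]

# Expected value = sum of rectangle areas under the measure curve.
E = sum((hi - lo) * m for lo, hi, m in segments)
assert abs(E - 2.1) < 1e-9
```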
Proof: Since the uncertain set ξ has a membership function µ, the second
measure inversion formula tells us that
M{ξ ≥ x} = 1 − sup_{y<x} µ(y),
Thus (9.124) follows from (9.122) immediately. We may also prove (9.125)
similarly.
Theorem 9.28 (Liu [129]) Let ξ be an uncertain set with regular member-
ship function µ. Then
E[ξ] = x0 + (1/2) ∫_{x0}^{+∞} µ(x) dx − (1/2) ∫_{−∞}^{x0} µ(x) dx  (9.126)
where x0 is a point such that µ(x0 ) = 1.
Proof: Since µ is increasing on (−∞, x0 ] and decreasing on [x0 , +∞), it
follows from Theorem 9.27 that for almost all x, we have
M{ξ ≽ x} = 1 − µ(x)/2 if x ≤ x0, and M{ξ ≽ x} = µ(x)/2 if x ≥ x0,  (9.127)
and
M{ξ ≼ x} = µ(x)/2 if x ≤ x0, and M{ξ ≼ x} = 1 − µ(x)/2 if x ≥ x0  (9.128)
for any real number x. If x0 ≥ 0, then
E[ξ] = ∫_0^{+∞} M{ξ ≽ x} dx − ∫_{−∞}^0 M{ξ ≼ x} dx
= ∫_0^{x0} (1 − µ(x)/2) dx + ∫_{x0}^{+∞} (µ(x)/2) dx − ∫_{−∞}^0 (µ(x)/2) dx
= x0 + (1/2) ∫_{x0}^{+∞} µ(x) dx − (1/2) ∫_{−∞}^{x0} µ(x) dx.
If x0 < 0, then
E[ξ] = ∫_0^{+∞} M{ξ ≽ x} dx − ∫_{−∞}^0 M{ξ ≼ x} dx
= ∫_0^{+∞} (µ(x)/2) dx − ∫_{−∞}^{x0} (µ(x)/2) dx − ∫_{x0}^0 (1 − µ(x)/2) dx
= x0 + (1/2) ∫_{x0}^{+∞} µ(x) dx − (1/2) ∫_{−∞}^{x0} µ(x) dx.
Exercise 9.11: Show that the triangular uncertain set ξ = (a, b, c) has an expected value
E[ξ] = (a + 2b + c)/4.  (9.130)
Exercise 9.12: Show that the trapezoidal uncertain set ξ = (a, b, c, d) has an expected value
E[ξ] = (a + b + c + d)/4.  (9.131)
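Formula (9.126) can be checked numerically against the closed form of Exercise 9.11. The sketch below assumes the triangular set (a, b, c) = (0, 2, 6); the membership function and the midpoint-rule integrator are illustrative, not from the text.

```python
def mu(x):
    """Membership function of the triangular uncertain set (a, b, c) = (0, 2, 6)."""
    if 0 <= x <= 2:
        return x / 2
    if 2 < x <= 6:
        return (6 - x) / 4
    return 0.0

def integral(f, lo, hi, n=20000):
    """Midpoint-rule integral of f on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

x0 = 2  # the point where mu(x0) = 1
# Formula (9.126); mu vanishes outside [0, 6], so the infinite tails contribute nothing.
E = x0 + 0.5 * integral(mu, x0, 6) - 0.5 * integral(mu, 0, x0)
assert abs(E - (0 + 2 * 2 + 6) / 4) < 1e-6  # Exercise 9.11: (a + 2b + c)/4 = 2.5
```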
Theorem 9.29 (Liu [133]) Let ξ be a nonempty uncertain set with member-
ship function µ. Then
E[ξ] = (1/2) ∫_0^1 (inf µ⁻¹(α) + sup µ⁻¹(α)) dα  (9.132)
where inf µ−1 (α) and sup µ−1 (α) are the infimum and supremum of the α-cut,
respectively.
Proof: Since ξ is a nonempty uncertain set and has a finite expected value,
we may assume that there exists a point x0 such that µ(x0 ) = 1 (perhaps
after a small perturbation). It is clear that the two integrals
It is clear that the two integrals
∫_{x0}^{+∞} sup_{y≥x} µ(y) dx  and  ∫_0^1 (sup µ⁻¹(α) − x0) dα
are equal, since both give the area between the right branch of µ and the vertical line x = x0; a symmetric identity holds to the left of x0. Combining the two identities with Theorem 9.28 yields
E[ξ] = (1/2) ∫_0^1 (inf µ⁻¹(α) + sup µ⁻¹(α)) dα.
The theorem is thus verified.
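Formula (9.132) can likewise be checked against Exercise 9.12. The trapezoidal set (1, 2, 4, 7) below is an assumed example; its α-cut is the interval [a + (b − a)α, d − (d − c)α].

```python
inf_cut = lambda al: 1 + al      # a + (b - a) * alpha for (a, b, c, d) = (1, 2, 4, 7)
sup_cut = lambda al: 7 - 3 * al  # d - (d - c) * alpha

# Midpoint-rule evaluation of (9.132).
n = 1000
E = 0.0
for i in range(n):
    al = (i + 0.5) / n
    E += 0.5 * (inf_cut(al) + sup_cut(al)) / n

assert abs(E - (1 + 2 + 4 + 7) / 4) < 1e-9  # Exercise 9.12: (a + b + c + d)/4 = 3.5
```

The integrand is linear in α, so the midpoint rule is exact here up to rounding.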
Theorem 9.30 (Liu [133]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets
with regular membership functions µ1 , µ2 , · · · , µn , respectively. If the func-
tion f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · , xm and
strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then the uncertain set
ξ = f (ξ1 , ξ2 , · · · , ξn ) (9.133)
has an expected value
E[ξ] = (1/2) ∫_0^1 (µl⁻¹(α) + µr⁻¹(α)) dα  (9.134)
where µl⁻¹(α) and µr⁻¹(α) are determined by
µl⁻¹(α) = f(µ1l⁻¹(α), · · · , µml⁻¹(α), µm+1,r⁻¹(α), · · · , µnr⁻¹(α)),  (9.135)
µr⁻¹(α) = f(µ1r⁻¹(α), · · · , µmr⁻¹(α), µm+1,l⁻¹(α), · · · , µnl⁻¹(α)).  (9.136)
Proof: It follows from Theorems 9.26 and 9.29 immediately.
Exercise 9.14: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that
E[ξ/η] = (1/2) ∫_0^1 ( µl⁻¹(α)/νr⁻¹(α) + µr⁻¹(α)/νl⁻¹(α) ) dα.  (9.138)
Exercise 9.15: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that
E[ξ/(ξ + η)] = (1/2) ∫_0^1 ( µl⁻¹(α)/(µl⁻¹(α) + νr⁻¹(α)) + µr⁻¹(α)/(µr⁻¹(α) + νl⁻¹(α)) ) dα.  (9.139)
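As a numeric check of (9.139), take ξ and η to be identical positive triangular sets (1, 2, 3) (an assumption for illustration); by symmetry the integrand is constant and E[ξ/(ξ + η)] should be 1/2.

```python
mu_l = lambda al: 1 + al   # left inverse membership function of (1, 2, 3)
mu_r = lambda al: 3 - al   # right inverse membership function
nu_l, nu_r = mu_l, mu_r    # eta has the same membership function

# Midpoint-rule evaluation of (9.139).
n = 1000
E = 0.0
for i in range(n):
    al = (i + 0.5) / n
    E += 0.5 * (mu_l(al) / (mu_l(al) + nu_r(al))
                + mu_r(al) / (mu_r(al) + nu_l(al))) / n

assert abs(E - 0.5) < 1e-9  # symmetric case: E[xi/(xi + eta)] = 1/2
```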
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.
9.7 Variance
The variance of an uncertain set provides a degree of the spread of the membership function around its expected value.
Definition 9.11 (Liu [130]) Let ξ be an uncertain set with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].
This definition says that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain set, we also have
V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x} dx.  (9.142)
Here (ξ − e)² ≽ x is understood as the event that (ξ − e)² falls into the interval [x, +∞). What is the appropriate value of M{(ξ − e)² ≽ x}? Intuitively, it is too conservative if we take the value M{(ξ − e)² ≥ x}, and it is too adventurous if we take the value 1 − M{(ξ − e)² < x}. Thus we assign M{(ξ − e)² ≽ x} the middle value between them. That is,
M{(ξ − e)² ≽ x} = (1/2)( M{(ξ − e)² ≥ x} + 1 − M{(ξ − e)² < x} ).  (9.143)
Theorem 9.32 If ξ is an uncertain set with finite expected value, a and b
are real numbers, then
V [aξ + b] = a2 V [ξ]. (9.144)
Theorem 9.33 Let ξ be an uncertain set with expected value e. Then V [ξ] =
0 if and only if ξ = {e} almost surely.
Proof: We first assume V[ξ] = 0. It follows from the equation (9.142) that
∫_0^{+∞} M{(ξ − e)² ≽ x} dx = 0
which implies M{(ξ − e)² ≽ x} = 0 for any x > 0. Hence M{ξ = {e}} = 1. Conversely, assume M{ξ = {e}} = 1. Then we have M{(ξ − e)² ≽ x} = 0 for any x > 0. Thus
V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x} dx = 0.
9.8 Entropy
This section provides a definition of entropy to characterize the uncertainty
of uncertain sets.
Definition 9.12 (Liu [130]) Suppose that ξ is an uncertain set with membership function µ. Then its entropy is defined by
H[ξ] = ∫_{−∞}^{+∞} S(µ(x)) dx  (9.147)
where S(t) = −t ln t − (1 − t) ln(1 − t).
Remark 9.9: Note that the entropy (9.147) has the same form as de Luca and Termini's entropy for fuzzy sets [32].
and entropy is
H[ξ] = ∫_{−∞}^{+∞} S(µ(x)) dx = ∫_{−∞}^{+∞} 0 dx = 0.
Exercise 9.16: Let ξ = (a, b, c) be a triangular uncertain set. Show that its entropy is
H[ξ] = (c − a)/2.  (9.149)
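Exercise 9.16 can be verified by integrating S(µ(x)) directly. The sketch assumes the standard entropy integrand S(t) = −t ln t − (1 − t) ln(1 − t), consistent with the maximum value ln 2 at t = 0.5 used in Theorem 9.37, and a triangular set (1, 2, 4) chosen for illustration.

```python
from math import log

def S(t):
    """Entropy integrand; S(0) = S(1) = 0 by convention (t*ln(t) -> 0)."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * log(t) - (1 - t) * log(1 - t)

def mu(x):
    """Membership function of the triangular uncertain set (a, b, c) = (1, 2, 4)."""
    if 1 <= x <= 2:
        return x - 1
    if 2 < x <= 4:
        return (4 - x) / 2
    return 0.0

# Midpoint-rule integral of S(mu(x)) over the support [1, 4].
n = 4000
h = 3.0 / n
H = sum(S(mu(1 + (i + 0.5) * h)) for i in range(n)) * h

assert abs(H - (4 - 1) / 2) < 1e-3  # Exercise 9.16: H = (c - a)/2 = 1.5
```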
Theorem 9.37 Let ξ be an uncertain set on the interval [a, b]. Then
H[ξ] ≤ (b − a) ln 2 (9.151)
and equality holds if ξ has a membership function µ(x) = 0.5 on [a, b].
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum value ln 2 at t = 0.5.
Theorem 9.38 Let ξ be an uncertain set, and let ξc be its complement. Then H[ξc] = H[ξ].
Theorem 9.39 (Yao [249]) Let ξ be an uncertain set with regular membership function µ. Then
H[ξ] = ∫_0^1 (µl⁻¹(α) − µr⁻¹(α)) ln(α/(1 − α)) dα.  (9.153)
Proof: Without loss of generality, assume the uncertain sets ξ and η have
regular membership functions µ and ν, respectively.
Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right inverse membership functions of aξ are
λl⁻¹(α) = aµl⁻¹(α),  λr⁻¹(α) = aµr⁻¹(α),
and
H[aξ] = ∫_0^1 (aµl⁻¹(α) − aµr⁻¹(α)) ln(α/(1 − α)) dα = aH[ξ] = |a|H[ξ].
If a < 0, then the left and right inverse membership functions of aξ are
λl⁻¹(α) = aµr⁻¹(α),  λr⁻¹(α) = aµl⁻¹(α),
and
H[aξ] = ∫_0^1 (aµr⁻¹(α) − aµl⁻¹(α)) ln(α/(1 − α)) dα = (−a)H[ξ] = |a|H[ξ].
Step 2: We prove H[ξ + η] = H[ξ] + H[η]. The left and right inverse membership functions of ξ + η are
λl⁻¹(α) = µl⁻¹(α) + νl⁻¹(α),  λr⁻¹(α) = µr⁻¹(α) + νr⁻¹(α),
and it follows from (9.153) that H[ξ + η] = H[ξ] + H[η].
Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
The theorem is proved.
Exercise 9.18: Let ξ be an uncertain set, and let A be a crisp set. Show
that
H[ξ + A] = H[ξ]. (9.155)
That is, the entropy is invariant under arbitrary translations.
9.9 Distance
Definition 9.13 (Liu [130]) The distance between uncertain sets ξ and η is
defined as
d(ξ, η) = E[|ξ − η|]. (9.156)
That is, the distance between ξ and η is just the expected value of |ξ − η|.
Since |ξ − η| is a nonnegative uncertain set, we have
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≽ x} dx.  (9.157)
Theorem 9.42 Let ξ and η be uncertain sets. Then the distance between ξ
and η is
d(ξ, η) = (1/2) ∫_0^{+∞} ( sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y) ) dx  (9.160)
where λ is the membership function of ξ − η.
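Formula (9.160) can be evaluated numerically once λ is known. Assuming ξ and η are independent triangular sets (0, 1, 2) (illustrative parameters), the difference ξ − η is triangular (−2, 0, 2) by Exercise 9.9, and the exact value of the integral is 0.5; the grid-based suprema below only approximate (9.160).

```python
def lam(y):
    """Membership function of xi - eta, triangular (-2, 0, 2)."""
    return max(0.0, 1 - abs(y) / 2)

def distance(lam, x_max=4.0, n=400):
    """Midpoint-rule evaluation of (9.160), with sups approximated over a y-grid."""
    h = x_max / n
    ys = [j * h - x_max for j in range(2 * n + 1)]  # grid on [-x_max, x_max]
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        outer = max((lam(y) for y in ys if abs(y) >= x), default=0.0)
        inner = max((lam(y) for y in ys if abs(y) < x), default=0.0)
        total += 0.5 * (outer + 1 - inner) * h
    return total

d = distance(lam)
assert abs(d - 0.5) < 0.02  # exact value is 0.5 for this lam
```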
Assume the expert’s belief degree is α in uncertain measure. Note that the
expert’s belief degree of x not belonging to ξ must be 1 − α due to the duality
of uncertain measure. An expert’s experimental data (x, α) is thus acquired
from the domain expert. Repeating the above process, the following expert’s
experimental data are obtained by the questionnaire,
[Figure: expert's experimental data (x, α) plotted as dots, together with a membership function µ(x) fitted to them]
minimizes the sum of the squares of the distances of the expert's experimental data to the membership function. Suppose the expert's experimental data are
(1, 0.15), (2, 0.45), (3, 0.90), (6, 0.85), (7, 0.60), (8, 0.20).  (9.168)
Q1: May I ask you what distances belong to “about 100km”? What do you
think is the minimum distance?
A3: 95km.
Q4: What is the belief degree that 95km belongs to “about 100km”?
A5: 105km.
Q6: What is the belief degree that 105km belongs to “about 100km”?
A7: 90km.
Q8: What is the belief degree that 90km belongs to “about 100km”?
A9: 110km.
Q10: What is the belief degree that 110km belongs to “about 100km”?
A11: No idea.
By now, six expert's experimental data (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0) have been acquired from the domain expert. Based on those expert's experimental data, an empirical membership function of “about 100km” is produced and shown in Figure 9.15.
[Figure 9.15: empirical membership function µ(x) of “about 100km”, piecewise linear through the points (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0)]
Uncertain Logic
A = {21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40} (10.3)
whose elements are ages in years. When we talk about “those sportsmen
are tall”, we should know the individual feature data of all sportsmen, for
example,
A = {175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196}  (10.4)
whose elements are heights in centimeters.
whose elements are ages and heights in years and centimeters, respectively.
Example 10.3: The quantifier “there does not exist one” on the universe A
is a special uncertain quantifier
Q ≡ {0} (10.12)
Q ≡ {m, m + 1, · · · , n} (10.16)
Q ≡ {0, 1, 2, · · · , m} (10.18)
[Figure: membership function λ(x) of an uncertain quantifier, with x-axis marks n − 5, n − 2 and n]
[Figure: membership function λ(x) of an uncertain quantifier, with x-axis marks 2 and 5]
[Figure: membership function λ(x) of the uncertain quantifier “about 10”, with x-axis marks 7, 9, 10, 11 and 13]
[Figure: membership function λ(x) of the uncertain quantifier “about 70%”, with x-axis marks 60%, 65%, 75% and 80%]
The uncertain quantifiers “almost all” and “almost none” are monotone, but “about 10” and “about 70%” are not. Note that both increasing uncertain quantifiers and decreasing uncertain quantifiers are monotone. In addition, any monotone uncertain quantifier is unimodal.
Negated Quantifier
What is the negation of an uncertain quantifier? The following definition
gives a formal answer.
Definition 10.4 Let Q be an uncertain quantifier. Then the negated quan-
tifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,
¬Q = Qc . (10.24)
Example 10.12: Let ∀ = {n} be the universal quantifier. Then its negated
quantifier
¬∀ ≡ {0, 1, 2, · · · , n − 1}. (10.25)
[Figure: membership function λ(x) of an uncertain quantifier and the membership function ¬λ(x) = 1 − λ(x) of its negation, with x-axis marks n − 5 and n − 2]
[Figure: membership function λ(x) of an uncertain quantifier and the membership function ¬λ(x) of its negation, with x-axis marks 60%, 65%, 75% and 80%]
Dual Quantifier
Definition 10.5 Let Q be an uncertain quantifier. Then the dual quantifier
of Q is
Q∗ = ∀ − Q. (10.30)
Remark 10.1: Note that Q and Q∗ are dependent uncertain sets such that
Q + Q∗ ≡ ∀. Since the cardinality of the universe A is n, we also have
Q∗ = n − Q. (10.31)
(¬∀)∗ ≡ ∃. (10.33)
Proof: This theorem follows from the operational law of uncertain set im-
mediately.
[Figure: membership function λ(x) of an uncertain quantifier and the membership function λ∗(x) of its dual quantifier, with x-axis marks 5 and n − 5]
Example 10.22: “Warm days are here again” is a statement in which “warm
days” is an uncertain subject that is an uncertain set on the universe of “all
[Figure: membership function ν(x) of an uncertain subject, with x-axis marks 15°C, 18°C, 24°C and 28°C]
[Figure: membership function ν(x) of an uncertain subject, with x-axis marks 15yr, 20yr, 35yr and 45yr]
[Figure: membership function ν(x) of an uncertain subject, with x-axis marks 180cm, 185cm, 195cm and 200cm]
that
M{ai ∈ S} = ν(ai ), i = 1, 2, · · · , n. (10.42)
In many cases, we are interested in some individuals a’s with ν(a) ≥ ω, where
ω is a confidence level. Thus we have a subuniverse,
Sω = {a ∈ A | ν(a) ≥ ω} (10.43)
that serves as the new universe of individuals we are talking about; the individuals outside Sω are ignored at the confidence level ω.
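The subuniverse is a simple threshold filter on ν. The sketch below uses made-up individuals and ν-values (purely hypothetical) and checks that subuniverses shrink as the confidence level grows.

```python
# Hypothetical nu-values (membership degree in the uncertain subject S)
# for made-up individuals a1, a2, a3.
nu = {"a1": 0.9, "a2": 0.6, "a3": 0.3}

def subuniverse(omega):
    """S_omega = {a in A | nu(a) >= omega}."""
    return {a for a, v in nu.items() if v >= omega}

assert subuniverse(0.8) == {"a1"}
assert subuniverse(0.5) == {"a1", "a2"}
# Nesting: raising the confidence level can only remove individuals.
assert subuniverse(0.8) <= subuniverse(0.5) <= subuniverse(0.2)
```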
Theorem 10.7 Let ω1 and ω2 be confidence levels with ω1 > ω2, and let Sω1 and Sω2 be subuniverses with confidence levels ω1 and ω2, respectively. Then Sω1 ⊂ Sω2.
[Figure: membership function µ(x) of an uncertain predicate, with x-axis marks 15°C, 18°C, 24°C and 28°C]
[Figure: membership function µ(x) of an uncertain predicate, with x-axis marks 15yr, 20yr, 35yr and 45yr]
[Figure: membership function µ(x) of an uncertain predicate, with x-axis marks 180cm, 185cm, 195cm and 200cm]
Negated Predicate
Definition 10.8 Let P be an uncertain predicate. Then its negated predicate
¬P is the complement of P in the sense of uncertain set, i.e.,
¬P = P c . (10.48)
Theorem 10.8 Let P be an uncertain predicate with membership function
µ. Then its negated predicate ¬P has a membership function
¬µ(x) = 1 − µ(x). (10.49)
Proof: The theorem follows from the definition of negated predicate and the
operational law of uncertain set immediately.
[Figure: membership function µ(x) of an uncertain predicate and its negation ¬µ(x) = 1 − µ(x), with x-axis marks 15°C, 18°C, 24°C and 28°C]
[Figure: membership function µ(x) of an uncertain predicate and its negation ¬µ(x), with x-axis marks 15yr, 20yr, 35yr and 45yr]
[Figure: membership function µ(x) of an uncertain predicate and its negation ¬µ(x), with x-axis marks 180cm, 185cm, 195cm and 200cm]
where
Kω = {K ⊂ Sω | λ(|K|) ≥ ω} , (10.59)
K∗ω = {K ⊂ Sω | λ(|Sω | − |K|) ≥ ω} , (10.60)
Sω = {a ∈ A | ν(a) ≥ ω} . (10.61)
Remark 10.5: Keep in mind that the truth value formula (10.58) is vacuous
if the individual feature data of the universe A are not available.
Remark 10.6: The symbol |K| represents the cardinality of the set K. For
example, |∅| = 0 and |{2, 5, 6}| = 3.
where
Kω = {K ⊂ A | λ(|K|) ≥ ω} , (10.67)
K∗ω = {K ⊂ A | λ(|A| − |K|) ≥ ω} . (10.68)
Show that
T (∀, A, P ) = inf µ(a). (10.70)
a∈A
Show that
T (¬∃, A, P ) = 1 − sup µ(a). (10.78)
a∈A
Theorem 10.11 (Liu [130], Truth Value Theorem) Let (Q, S, P ) be an un-
certain proposition in which Q is a unimodal uncertain quantifier with mem-
bership function λ, S is an uncertain subject with membership function ν,
and P is an uncertain predicate with membership function µ. Then the truth
value of (Q, S, P ) is
where
kω = min {x | λ(x) ≥ ω} , (10.80)
∆(kω ) = the kω -th largest value of {µ(ai ) | ai ∈ Sω }, (10.81)
kω∗ = |Sω | − max{x | λ(x) ≥ ω}, (10.82)
∆∗(kω∗) = the kω∗-th largest value of {1 − µ(ai) | ai ∈ Sω}. (10.83)
Proof: Since the supremum is achieved at the subset with minimum cardi-
nality, we have
where
kω = min {x | λ(x) ≥ ω} , (10.87)
10.7 Algorithm
In order to calculate T (Q, S, P ) based on the truth value formula (10.58), a
truth value algorithm is given as follows:
Step 1. Set ω = 1 and ε = 0.01 (a predetermined precision).
Step 2. Calculate Sω = {a ∈ A | ν(a) ≥ ω} and k = min{x | λ(x) ≥ ω} as
well as k ∗ = |Sω | − max{x | λ(x) ≥ ω}.
Step 3. If ∆(k) ∧ ∆∗ (k ∗ ) < ω, then ω ← ω − ε and go to Step 2. Otherwise,
output the truth value ω and stop.
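For readers who want to experiment, the three steps above can be sketched in Python. This is our own illustration, not code from the book: the convention ∆(k) = 1 when k ≤ 0 and ∆(k) = 0 when k exceeds |Sω| are assumptions made for the finite-data case.

```python
def truth_value(A, lam, nu, mu, eps=0.01):
    """Approximate T(Q, S, P) over a finite universe A.

    lam: quantifier membership on cardinalities 0..len(A)
    nu:  subject membership on individuals
    mu:  predicate membership on individuals
    """
    omega = 1.0                                  # Step 1: start from omega = 1
    while omega > 0:
        # Step 2: omega-cut of the subject and the cardinality bounds k, k*.
        S = [a for a in A if nu(a) >= omega]
        xs = [x for x in range(len(A) + 1) if lam(x) >= omega]
        if xs:
            k = min(xs)
            k_star = len(S) - max(xs)
            vals = sorted((mu(a) for a in S), reverse=True)
            cov = sorted((1 - mu(a) for a in S), reverse=True)
            # Assumed conventions: Delta(k) = 1 if k <= 0, 0 if k > |S|.
            delta = 1.0 if k <= 0 else (vals[k - 1] if k <= len(vals) else 0.0)
            delta_star = 1.0 if k_star <= 0 else (cov[k_star - 1] if k_star <= len(cov) else 0.0)
            # Step 3: output omega once Delta(k) ∧ Delta*(k*) >= omega.
            if min(delta, delta_star) >= omega:
                return omega
        omega -= eps                             # otherwise lower omega and retry
    return 0.0
```

With the quantifier Q = {2, 3} and a "warm" membership function like (10.94), a week in which exactly two days are fully warm yields truth value 1.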
Example 10.35: Assume that the daily temperatures of a certain week from
Monday to Sunday are
Note that the uncertain quantifier is Q = {2, 3}. We also suppose the uncer-
tain predicate P = “warm” has a membership function
µ(x) = 0,            if x ≤ 15
       (x − 15)/3,   if 15 ≤ x ≤ 18
       1,            if 18 ≤ x ≤ 24      (10.94)
       (28 − x)/4,   if 24 ≤ x ≤ 28
       0,            if x ≥ 28.
It is clear that Monday and Tuesday are warm with truth value 1, and
Wednesday is warm with truth value 0.75. But Thursday to Sunday are
not “warm” at all (in fact, they are “hot”). Intuitively, the uncertain propo-
sition “two or three days are warm” should be completely true. The truth
value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that the truth
value is
T (“two or three days are warm”) = 1. (10.95)
This is an intuitively expected result. In addition, we also have
Example 10.36: Assume that in a class there are 15 students whose ages
are
21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 (10.98)
in years. Consider an uncertain proposition
Example 10.38: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (10.108)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
Note that each individual is described by a feature data (y, z), where y rep-
resents ages and z represents heights. In this case, the uncertain subject
S = “young students” has a membership function
ν(y) = 0,            if y ≤ 15
       (y − 15)/5,   if 15 ≤ y ≤ 20
       1,            if 20 ≤ y ≤ 35      (10.111)
       (45 − y)/10,  if 35 ≤ y ≤ 45
       0,            if y ≥ 45.
The truth value algorithm yields that the uncertain proposition has a truth
value
T (“most young students are tall”) = 0.8. (10.113)
A = {a1 , a2 , · · · , an }. (10.114)
Next, we should have some linguistic terms to represent quantifiers, for exam-
ple, “most” and “all”. Denote them by a collection of uncertain quantifiers,
Q = {Q1 , Q2 , · · · , Qm }. (10.115)
Then, we should have some linguistic terms to represent subjects, for exam-
ple, “young students” and “old students”. Denote them by a collection of
uncertain subjects,
S = {S1 , S2 , · · · , Sn }. (10.116)
Last, we should have some linguistic terms to represent predicates, for exam-
ple, “short” and “tall”. Denote them by a collection of uncertain predicates,
P = {P1 , P2 , · · · , Pk }. (10.117)
Find Q, S and P
subject to:
    Q ∈ Q
    S ∈ S                              (10.119)
    P ∈ P
    T(Q, S, P) ≥ β.
Example 10.39: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (10.120)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
λmost(x) = 0,            if 0 ≤ x ≤ 0.7
           20(x − 0.7),  if 0.7 ≤ x ≤ 0.75
           1,            if 0.75 ≤ x ≤ 0.85     (10.122)
           20(0.9 − x),  if 0.85 ≤ x ≤ 0.9
           0,            if 0.9 ≤ x ≤ 1,

λall(x) = 1,  if x = 1                          (10.123)
          0,  if 0 ≤ x < 1,
Finally, we suppose that there are two linguistic terms “short” and “tall” as
uncertain predicates whose membership functions are
µshort(z) = 0,            if z ≤ 145
            (z − 145)/5,  if 145 ≤ z ≤ 150
            1,            if 150 ≤ z ≤ 155      (10.129)
            (160 − z)/5,  if 155 ≤ z ≤ 160
            0,            if z ≥ 160,

µtall(z) = 0,             if z ≤ 180
           (z − 180)/5,   if 180 ≤ z ≤ 185
           1,             if 185 ≤ z ≤ 195      (10.130)
           (200 − z)/5,   if 195 ≤ z ≤ 200
           0,             if z ≥ 200,
and then extracts a linguistic summary “most young students are tall”.
Chapter 11

Uncertain Inference
Let X and Y be two concepts. It is assumed that we only have a single if-then
rule,
“if X is ξ then Y is η” (11.1)
where ξ and η are two uncertain sets. We first introduce the following infer-
ence rule.
Inference Rule 11.1 (Liu [127]) Let X and Y be two concepts. Assume a
rule “if X is an uncertain set ξ then Y is an uncertain set η”. From X is a
constant a we infer that Y is an uncertain set
η ∗ = η|a∈ξ (11.2)
Inference Rule 11.2 (Gao, Gao and Ralescu [49]) Let X, Y and Z be three
concepts. Assume a rule “if X is an uncertain set ξ and Y is an uncertain set
η then Z is an uncertain set τ ”. From X is a constant a and Y is a constant
b we infer that Z is an uncertain set
τ ∗ = τ |(a∈ξ)∩(b∈η) (11.5)
which is the conditional uncertain set of τ given a ∈ ξ and b ∈ η. The
inference rule is represented by
Rule: If X is ξ and Y is η then Z is τ
From: X is a and Y is b (11.6)
Infer: Z is τ ∗ = τ |(a∈ξ)∩(b∈η)
Proof: It follows from the inference rule 11.2 that τ ∗ has a membership
function
λ∗ (z) = M{z ∈ τ |(a ∈ ξ) ∩ (b ∈ η)}.
By using the definition of conditional uncertainty, M{z ∈ τ |(a ∈ ξ) ∩ (b ∈ η)}
is
M{z ∈ τ}/M{(a ∈ ξ) ∩ (b ∈ η)},       if M{z ∈ τ}/M{(a ∈ ξ) ∩ (b ∈ η)} < 0.5
1 − M{z ∉ τ}/M{(a ∈ ξ) ∩ (b ∈ η)},   if M{z ∉ τ}/M{(a ∈ ξ) ∩ (b ∈ η)} < 0.5
0.5,                                  otherwise.
The theorem follows from M{z ∈ τ} = λ(z), M{z ∉ τ} = 1 − λ(z) and
M{(a ∈ ξ) ∩ (b ∈ η)} = µ(a) ∧ ν(b) immediately.
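The three-branch case analysis in this proof is easy to mechanize. The sketch below (our own naming; it assumes µ(a) ∧ ν(b) > 0) evaluates λ∗(z) from λ(z), µ(a) and ν(b) exactly as in the branches above.

```python
def conditional_membership(lam_z, mu_a, nu_b):
    """lambda*(z) of tau* = tau | (a in xi) ∩ (b in eta).

    Assumes the matching degree c = mu(a) ∧ nu(b) is positive.
    """
    c = min(mu_a, nu_b)          # M{(a in xi) ∩ (b in eta)} = mu(a) ∧ nu(b)
    if lam_z / c < 0.5:          # M{z in tau} / c < 0.5
        return lam_z / c
    if (1 - lam_z) / c < 0.5:    # M{z not in tau} / c < 0.5
        return 1 - (1 - lam_z) / c
    return 0.5                   # otherwise
```

For instance, with λ(z) = 0.1 and µ(a) ∧ ν(b) = 0.8, the first branch applies and λ∗(z) = 0.125.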
Inference Rule 11.3 (Gao, Gao and Ralescu [49]) Let X and Y be two
concepts. Assume two rules “if X is an uncertain set ξ1 then Y is an uncertain
set η1 ” and “if X is an uncertain set ξ2 then Y is an uncertain set η2 ”. From
X is a constant a we infer that Y is an uncertain set
Rule 1: If X is ξ1 then Y is η1
Rule 2: If X is ξ2 then Y is η2
(11.9)
From: X is a constant a
Infer: Y is η ∗ determined by (11.8)
η∗ = (µ1(a)/(µ1(a) + µ2(a))) η1∗ + (µ2(a)/(µ1(a) + µ2(a))) η2∗    (11.10)
where η1∗ and η2∗ are uncertain sets whose membership functions are respec-
tively given by
ν1∗(y) = ν1(y)/µ1(a),                if ν1(y) < µ1(a)/2
         (ν1(y) + µ1(a) − 1)/µ1(a),  if ν1(y) > 1 − µ1(a)/2      (11.11)
         0.5,                        otherwise,

ν2∗(y) = ν2(y)/µ2(a),                if ν2(y) < µ2(a)/2
         (ν2(y) + µ2(a) − 1)/µ2(a),  if ν2(y) > 1 − µ2(a)/2      (11.12)
         0.5,                        otherwise.
Proof: It follows from the inference rule 11.3 that the uncertain set η ∗ is
just
η∗ = (M{a ∈ ξ1} · η1|a∈ξ1)/(M{a ∈ ξ1} + M{a ∈ ξ2}) + (M{a ∈ ξ2} · η2|a∈ξ2)/(M{a ∈ ξ1} + M{a ∈ ξ2}).
The theorem follows from M{a ∈ ξ1 } = µ1 (a) and M{a ∈ ξ2 } = µ2 (a)
immediately.
Theorem 11.4 Assume ξi1 , ξi2 , · · · , ξim , ηi are independent uncertain sets
with membership functions µi1 , µi2 , · · · , µim , νi , i = 1, 2, · · · , k, respectively.
If ξ1∗, ξ2∗, · · · , ξm∗ are constants a1, a2, · · · , am, respectively, then the inference
rule 11.4 yields

η∗ = Σ_{i=1}^{k} ci · ηi∗ / (c1 + c2 + · · · + ck)    (11.16)
where ηi∗ are uncertain sets whose membership functions are given by

νi∗(y) = νi(y)/ci,              if νi(y) < ci/2
         (νi(y) + ci − 1)/ci,   if νi(y) > 1 − ci/2      (11.17)
         0.5,                   otherwise
1. inputs that are crisp data to be fed into the uncertain system;
5. outputs that are crisp data yielded from the expected value operator.
Now let us consider an uncertain system in which there are m crisp inputs
α1 , α2 , · · · , αm , and n crisp outputs β1 , β2 , · · · , βn . At first, we infer n un-
certain sets η1∗ , η2∗ , · · · , ηn∗ from the m crisp inputs by the rule-base (i.e., a set
of if-then rules),
If ξ11 and ξ12 and · · · and ξ1m then η11 and η12 and · · · and η1n
If ξ21 and ξ22 and · · · and ξ2m then η21 and η22 and · · · and η2n
· · ·                                                             (11.19)
If ξk1 and ξk2 and · · · and ξkm then ηk1 and ηk2 and · · · and ηkn
βj = E[ηj∗ ] (11.22)
Theorem 11.5 Assume ξi1 , ξi2 , · · · , ξim , ηi1 , ηi2 , · · · , ηin are independent un-
certain sets with membership functions µi1 , µi2 , · · · , µim , νi1 , νi2 , · · · , νin , i =
1, 2, · · · , k, respectively. Then the uncertain system from (α1 , α2 , · · · , αm ) to
(β1 , β2 , · · · , βn ) is
βj = Σ_{i=1}^{k} ci · E[ηij∗] / (c1 + c2 + · · · + ck)    (11.24)
for j = 1, 2, · · · , n, where ηij∗ are uncertain sets whose membership functions
are given by
νij∗(y) = νij(y)/ci,              if νij(y) < ci/2
          (νij(y) + ci − 1)/ci,   if νij(y) > 1 − ci/2      (11.25)
          0.5,                    otherwise
for i = 1, 2, · · · , k, j = 1, 2, · · · , n, respectively.
Proof: It follows from the inference rule 11.4 that the uncertain sets ηj∗ are
ηj∗ = Σ_{i=1}^{k} ci · ηij∗ / (c1 + c2 + · · · + ck)

for j = 1, 2, · · · , n. Since ηij∗, i = 1, 2, · · · , k, j = 1, 2, · · · , n are independent
uncertain sets, we get the theorem immediately by the linearity of expected
value operator.
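As a rough illustration of (11.24)-(11.25), the sketch below builds the modified membership function and combines precomputed expected values. The function names are ours, and the expected values E[ηij∗] are assumed to be supplied by the caller, since evaluating them would require the expected value operator of uncertain sets.

```python
def modified_membership(nu, c):
    """Return y -> nu*(y) per (11.25) for a matching degree c in (0, 1]."""
    def nu_star(y):
        v = nu(y)
        if v < c / 2:            # low membership: scale down
            return v / c
        if v > 1 - c / 2:        # high membership: scale up
            return (v + c - 1) / c
        return 0.5               # middle band collapses to 0.5
    return nu_star

def system_output(cs, expected_values):
    """beta_j = sum_i c_i * E[eta_ij*] / (c_1 + ... + c_k), per (11.24).

    Assumes at least one matching degree c_i is positive.
    """
    total = sum(cs)
    return sum(c * e for c, e in zip(cs, expected_values)) / total
```

For example, a triangular membership ν(y) = max(0, 1 − |y|) with c = 0.5 is mapped to ν∗(0) = 1 and ν∗(0.9) = 0.2.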
Remark 11.1: The uncertain system allows the uncertain sets ηij in the
rule-base (11.19) to become constants bij, i.e.,
for j = 1, 2, · · · , n.
Remark 11.2: The uncertain system allows the uncertain sets ηij in the
rule-base (11.19) to become functions hij of the inputs α1, α2, · · · , αm, i.e.,
Figure 11.4: An Inverted Pendulum in which A(t) represents the angular po-
sition and F (t) represents the force that moves the cart at time t. Reprinted
from Liu [129].
(Figure: triangular membership functions of the five states NL, NS, Z, PS and PL of the angular position, on the interval from −π/2 to π/2 (rad).)
Intuitively, when the inverted pendulum has a large clockwise angle and
a large clockwise angular velocity, we should give it a large force to the right.
Thus we have an if-then rule,
Note that each input or output has 5 states and each state is represented by
an uncertain set. This implies that the rule-base contains 5 × 5 if-then rules.
In order to balance the inverted pendulum, the 25 if-then rules in Table 11.1
are accepted.
(Figure: triangular membership functions of the five states NL, NS, Z, PS and PL of the angular velocity, on the interval from −π/4 to π/4 (rad/sec).)
(Figure: triangular membership functions of the five states NL, NS, Z, PS and PL of the force, on the interval from −60 to 60 (N).)
Extensive simulation results show that the uncertain controller can balance
the inverted pendulum successfully.
are universal approximators and then demonstrated that the uncertain controller
is a reasonable tool. As a successful application, Gao [52] balanced an
inverted pendulum by using the uncertain controller.
Chapter 12
Uncertain Process
The study of uncertain process was started by Liu [123] in 2008 for modeling
the evolution of uncertain phenomena. This chapter will give the concept of
uncertain process, and introduce sample path, uncertainty distribution, in-
dependent increment, stationary increment, extreme value, first hitting time,
and time integral of uncertain process.
Definition 12.1 (Liu [123]) Let (Γ, L, M) be an uncertainty space and let T
be a totally ordered set (e.g. time). An uncertain process is a function Xt (γ)
from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is an event
for any Borel set B at each time t.
Sample Path
Definition 12.2 (Liu [123]) Let Xt be an uncertain process. Then for each
γ ∈ Γ, the function Xt (γ) is called a sample path of Xt .
Note that each sample path is a real-valued function of time t. In addition,
an uncertain process may also be regarded as a function from an uncertainty
space to a collection of sample paths.
Figure 12.1: A Sample Path of Uncertain Process. Reprinted from Liu [129].
Uncertain Field
Uncertain field is a generalization of uncertain process when the index set T
becomes a partially ordered set (e.g. time × space, or a surface).
Definition 12.4 (Liu [139]) Let (Γ, L, M) be an uncertainty space and let T
be a partially ordered set (e.g. time × space). An uncertain field is a function
Xt (γ) from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is
an event for any Borel set B at each time t.
Example 12.5: The linear uncertain process Xt ∼ L(at, bt) has an uncer-
tainty distribution,
Φt(x) = 0,                    if x ≤ at
        (x − at)/((b − a)t),  if at ≤ x ≤ bt      (12.5)
        1,                    if x ≥ bt.
Example 12.6: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an
uncertainty distribution,
Φt(x) = 0,                           if x ≤ at
        (x − at)/(2(b − a)t),        if at ≤ x ≤ bt
        (x + ct − 2bt)/(2(c − b)t),  if bt ≤ x ≤ ct      (12.6)
        1,                           if x ≥ ct.
Example 12.7: The normal uncertain process Xt ∼ N (et, σt) has an un-
certainty distribution,
Φt(x) = (1 + exp(π(et − x)/(√3 σt)))⁻¹.    (12.7)
Example 12.8: The lognormal uncertain process Xt ∼ LOGN (et, σt) has
an uncertainty distribution,
Φt(x) = (1 + exp(π(et − ln x)/(√3 σt)))⁻¹.    (12.8)
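The four uncertainty distributions (12.5)-(12.8) are straightforward to evaluate numerically; the sketch below mirrors them term by term. Function names and the parameter values in any test are our own choices, not from the book.

```python
import math

def linear_cdf(x, a, b, t):
    """Uncertainty distribution (12.5) of Xt ~ L(at, bt)."""
    if x <= a * t:
        return 0.0
    if x >= b * t:
        return 1.0
    return (x - a * t) / ((b - a) * t)

def zigzag_cdf(x, a, b, c, t):
    """Uncertainty distribution (12.6) of Xt ~ Z(at, bt, ct)."""
    if x <= a * t:
        return 0.0
    if x <= b * t:
        return (x - a * t) / (2 * (b - a) * t)
    if x <= c * t:
        return (x + c * t - 2 * b * t) / (2 * (c - b) * t)
    return 1.0

def normal_cdf(x, e, s, t):
    """Uncertainty distribution (12.7) of Xt ~ N(et, st)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3) * s * t)))

def lognormal_cdf(x, e, s, t):
    """Uncertainty distribution (12.8) of Xt ~ LOGN(et, st), x > 0."""
    return 1.0 / (1.0 + math.exp(math.pi * (e * t - math.log(x)) / (math.sqrt(3) * s * t)))
```

As a sanity check, each distribution equals 0.5 at its central point, e.g. the normal process at x = et.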
and say Φ0 (x) is a continuous and strictly increasing function with respect
to x at which 0 < Φ0 (x) < 1 even though it is discontinuous at X0 .
Note that at each time t, the inverse uncertainty distribution Φt⁻¹(α) is
well defined on the open interval (0, 1). If needed, we may extend the domain
to [0, 1] via

Φt⁻¹(0) = lim_{α↓0} Φt⁻¹(α),    Φt⁻¹(1) = lim_{α↑1} Φt⁻¹(α).    (12.14)
(Figure: curves of the inverse uncertainty distribution Φt⁻¹(α) against time t for α = 0.1, 0.2, · · · , 0.9.)
Example 12.10: The linear uncertain process Xt ∼ L(at, bt) has an inverse
uncertainty distribution,
Φt⁻¹(α) = (1 − α)at + αbt.    (12.15)
Example 12.11: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an
inverse uncertainty distribution,
Φt⁻¹(α) = (1 − 2α)at + 2αbt,         if α < 0.5      (12.16)
          (2 − 2α)bt + (2α − 1)ct,   if α ≥ 0.5.
Example 12.13: The lognormal uncertain process Xt ∼ LOGN (et, σt) has
an inverse uncertainty distribution,
Φt⁻¹(α) = exp(et + (σt√3/π) ln(α/(1 − α))).    (12.18)
Xt = ξt , ∀t ∈ T.
Remark 12.2: Note that we stipulate that a crisp initial value X0 has an
inverse uncertainty distribution
Φ0⁻¹(α) ≡ X0    (12.19)

and say Φ0⁻¹(α) is a continuous and strictly increasing function with respect
to α ∈ (0, 1) even though it is not.
Proof: At any time t, it is clear that X1t, X2t, · · · , Xnt are independent uncertain
variables with inverse uncertainty distributions Φ1t⁻¹(α), Φ2t⁻¹(α), · · · ,
Φnt⁻¹(α), respectively. The theorem follows from the operational law of uncertain
variables immediately.
That is, an independent increment process means that its increments are
independent uncertain variables whenever the time intervals do not overlap.
Please note that the increments are also independent of the initial state.
Yt = aXt + b (12.28)
Φt⁻¹(β) − Φt⁻¹(α) ≥ Φs⁻¹(β) − Φs⁻¹(α).

That is,

Φt⁻¹(β) − Φs⁻¹(β) ≥ Φt⁻¹(α) − Φs⁻¹(α).

Hence Φt⁻¹(α) − Φs⁻¹(α) is a monotone (not strictly) increasing function with
respect to α.
Conversely, let us prove that there exists an independent increment process
whose inverse uncertainty distribution is just Φt⁻¹(α). Without loss of
generality, we only consider the range of t ∈ [0, 1]. Let n be a positive
integer. Since Φt⁻¹(α) is a continuous and strictly increasing function and
Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α, there
exist independent uncertain variables ξ0n, ξ1n, · · · , ξnn such that ξ0n has an
inverse uncertainty distribution

Υ0n⁻¹(α) = Φ0⁻¹(α)
Remark 12.4: It follows from Theorem 12.8 that the uncertainty distribution
of an independent increment process has a horn-like shape, i.e., for any
s < t and α < β, we have

Φs⁻¹(β) − Φs⁻¹(α) < Φt⁻¹(β) − Φt⁻¹(α). (12.29)
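For the linear uncertain process L(at, bt) this horn-like property is easy to check numerically, since Φt⁻¹(β) − Φt⁻¹(α) = (β − α)(b − a)t grows with t. A small illustration of our own, not from the book:

```python
def linear_inv(alpha, a, b, t):
    """Inverse uncertainty distribution of Xt ~ L(at, bt)."""
    return (1 - alpha) * a * t + alpha * b * t

def spread(alpha, beta, a, b, t):
    """Width of the alpha-to-beta band at time t; equals (beta-alpha)(b-a)t."""
    return linear_inv(beta, a, b, t) - linear_inv(alpha, a, b, t)
```

With a = 1, b = 3 the band between α = 0.2 and β = 0.8 has width 1.2 at t = 1 and 2.4 at t = 2, so the curves fan out over time as (12.29) requires.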
(Figure: the horn-like family of curves Φt⁻¹(α) against time t for α = 0.1, 0.2, · · · , 0.9, fanning out as t grows.)
Exercise 12.6: Show that there does not exist an independent increment
process with lognormal uncertainty distribution.
have the same length, i.e., for any given t > 0, the increments Xs+t − Xs are
identically distributed uncertain variables for all s > 0.
Yt = aXt + b (12.30)
Ys+t − Ys = a(Xs+t − Xs )
are also identically distributed uncertain variables for all s > 0, and Yt is a
stationary increment process. Hence Yt is a stationary independent increment
process.
Then
In addition,
It follows from (12.31) and (12.32) that Xt −X0 and t(X1 −X0 ) are identically
distributed, and so are Xt and (1 − t)X0 + tX1 .
Xt/t ∼ Φ(x)    (12.33)

or equivalently,

Xt ∼ Φ(x/t)    (12.34)

for any time t > 0. Note that Φ is just the uncertainty distribution of X1.
Φt⁻¹(α) = µ(α) + ν(α)t.    (12.35)
uncertainty distribution ν(α) for all rational numbers r in (0, 1]. For each
positive integer n, we define an uncertain process
Xt = ξ(0) + (1/n) Σ_{i=1}^{k} ξ(i/n),   if t = k/n (k = 1, 2, · · · , n)
     linear,                            otherwise.
(Figure: curves of the inverse uncertainty distribution Φt⁻¹(α) against time t for α = 0.1, 0.2, · · · , 0.9.)
Exercise 12.10: Show that there does not exist a stationary independent
increment process with lognormal uncertainty distribution.
E[Xt ] = a + bt (12.36)
Proof: It follows from Theorem 12.12 that there exists a real number b such
that E[Xt ] = bt for any time t ≥ 0. Hence
Proof: It follows from Theorem 12.10 that Xt and (1 − t)X0 + tX1 are
identically distributed uncertain variables. Since X0 is a constant, we have
Proof: It follows from Theorem 12.14 that there exists a real number b such
that V [Xt ] = bt2 for any time t ≥ 0. Hence
√(V[Xs+t]) = √b (s + t) = √b s + √b t = √(V[Xs]) + √(V[Xt]).
are independent uncertain variables, it follows from Theorem 2.15 that the
maximum
max_{1≤i≤n} Xti

and

min_{1≤i≤n} Φti(x) → inf_{0≤t≤s} Φt(x)
and

max_{1≤i≤n} Φti(x) → sup_{0≤t≤s} Φt(x)

= inf_{0≤t≤s} Φt(f⁻¹(x)).
Similarly, we have

Ψ(x) = M{ inf_{0≤t≤s} f(Xt) ≤ x } = M{ inf_{0≤t≤s} Xt ≤ f⁻¹(x) }
     = sup_{0≤t≤s} Φt(f⁻¹(x)).
Similarly, we have

Ψ(x) = M{ inf_{0≤t≤s} f(Xt) ≤ x } = M{ sup_{0≤t≤s} Xt ≥ f⁻¹(x) }
     = 1 − M{ sup_{0≤t≤s} Xt < f⁻¹(x) } = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).
(Figure: a sample path of Xt reaching the level z for the first time at the hitting time τz.)
Proof: When X0 < z, it follows from the definition of first hitting time that
When X0 > z, it follows from the definition of first hitting time that
strictly increasing function and z is a given level, then the first hitting time
τz that f (Xt ) reaches the level z has an uncertainty distribution,
When z < f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = sup_{0≤t≤s} Φt(f⁻¹(z)).
When z < f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)).
Definition 12.12 (Liu [123]) Let Xt be an uncertain process. For any par-
tition of closed interval [a, b] with a = t1 < t2 < · · · < tk+1 = b, the mesh is
written as
∆ = max_{1≤i≤k} |ti+1 − ti|.    (12.76)
provided that the limit exists almost surely and is finite. In this case, the
uncertain process Xt is said to be time integrable.
Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval
[a, b]. Since the uncertain process Xt is sample-continuous, almost all sample
paths are continuous functions with respect to t. Hence the limit
lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti)
exists almost surely and is finite. On the other hand, since Xt is an uncertain
variable at each time t, the above limit is also a measurable function. Hence
the limit is an uncertain variable and then Xt is time integrable.
For the partition

a = t1 < · · · < tm = a′ < tm+1 < · · · < tn = b′ < tn+1 < · · · < tk+1 = b,

the limit

lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti)

exists almost surely and is finite. Thus the limit

lim_{∆→0} Σ_{i=m}^{n−1} Xti (ti+1 − ti)

exists almost surely and is finite. Hence Xt is time integrable on the subinterval [a′, b′]. Next, for the partition

a = t1 < · · · < tm = c < tm+1 < · · · < tk+1 = b,

we have

Σ_{i=1}^{k} Xti (ti+1 − ti) = Σ_{i=1}^{m−1} Xti (ti+1 − ti) + Σ_{i=m}^{k} Xti (ti+1 − ti).

Note that

∫_a^b Xt dt = lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti),
∫_a^c Xt dt = lim_{∆→0} Σ_{i=1}^{m−1} Xti (ti+1 − ti),
∫_c^b Xt dt = lim_{∆→0} Σ_{i=m}^{k} Xti (ti+1 − ti).

Hence the equation (12.78) is proved.
Theorem 12.24 (Linearity of Time Integral) Let Xt and Yt be time integrable uncertain processes on [a, b], and let α and β be real numbers. Then

∫_a^b (αXt + βYt) dt = α ∫_a^b Xt dt + β ∫_a^b Yt dt.  (12.79)

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. It follows from the definition of time integral that

∫_a^b (αXt + βYt) dt = lim_{∆→0} Σ_{i=1}^{k} (αXti + βYti)(ti+1 − ti)
 = lim_{∆→0} α Σ_{i=1}^{k} Xti (ti+1 − ti) + lim_{∆→0} β Σ_{i=1}^{k} Yti (ti+1 − ti)
 = α ∫_a^b Xt dt + β ∫_a^b Yt dt.
Uncertain Renewal Process

[Figure 13.1: A Sample Path of Renewal Process. Reprinted from Liu [129]. The step function Nt increases from 0 to 4 at the arrival times S1, S2, S3, S4, with interarrival times ξ1, ξ2, ξ3, ξ4.]
Then we have
Nt ≥ n ⇔ Sn ≤ t (13.2)
for any time t and integer n. Furthermore, we also have
Nt ≤ n ⇔ Sn+1 > t. (13.3)
It follows from the fundamental relationship that Nt ≥ n is equivalent to
Sn ≤ t. Thus we immediately have
M{Nt ≥ n} = M{Sn ≤ t}. (13.4)
Since Nt ≤ n is equivalent to Sn+1 > t, by using the duality axiom, we also
have
M{Nt ≤ n} = 1 − M{Sn+1 ≤ t}. (13.5)
Theorem 13.2 (Liu [129]) Let Nt be a renewal process with uncertain interarrival times ξ1, ξ2, · · · If those interarrival times have a common uncertainty distribution Φ, then Nt has an uncertainty distribution

Υt(x) = 1 − Φ( t/(⌊x⌋ + 1) ), ∀x ≥ 0  (13.6)

where ⌊x⌋ represents the maximal integer less than or equal to x.

Proof: Note that Sn+1 has an uncertainty distribution Φ(x/(n + 1)). It follows from (13.5) that

M{Nt ≤ n} = 1 − M{Sn+1 ≤ t} = 1 − Φ( t/(n + 1) ).

Since Nt takes integer values, for any x ≥ 0, we have

Υt(x) = M{Nt ≤ x} = M{Nt ≤ ⌊x⌋} = 1 − Φ( t/(⌊x⌋ + 1) ).

The theorem is verified.
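As a quick numerical illustration (not from the book), the distribution (13.6) can be evaluated directly once Φ is known. The sketch below assumes, purely as an example, linear interarrival times L(1, 3) with Φ(x) = (x − 1)/2 on [1, 3]:

```python
import math

def linear_cdf(x, a=1.0, b=3.0):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def renewal_dist(t, x, phi=linear_cdf):
    """Upsilon_t(x) = 1 - Phi(t / (floor(x) + 1)), equation (13.6)."""
    if x < 0:
        return 0.0
    return 1.0 - phi(t / (math.floor(x) + 1))

# M{N_10 <= 4} with L(1, 3) interarrival times: 1 - Phi(10/5) = 0.5
print(renewal_dist(10.0, 4.0))
```

The step-function shape of Υt is visible by evaluating renewal_dist at non-integer x: the value only changes when x crosses an integer.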
[Figure: the uncertainty distribution Υt(x) of the renewal process Nt is a right-continuous step function with jumps at x = 0, 1, 2, 3, 4, 5, taking the values Υt(0), Υt(1), · · · , Υt(5).]
Theorem 13.3 (Liu [129]) Let Nt be a renewal process with uncertain interarrival times ξ1, ξ2, · · · Then the average renewal number

Nt/t → 1/ξ1  (13.7)

in the sense of convergence in distribution as t → ∞.

Proof: The uncertainty distribution Υt of Nt has been given by Theorem 13.2 as follows,

Υt(x) = 1 − Φ( t/(⌊x⌋ + 1) )

where Φ is the uncertainty distribution of ξ1. It follows from the operational law that the uncertainty distribution of Nt/t is

Ψt(x) = 1 − Φ( t/(⌊tx⌋ + 1) )

where ⌊tx⌋ represents the maximal integer less than or equal to tx. Thus at each continuity point x of 1 − Φ(1/x), we have

lim_{t→∞} Ψt(x) = 1 − Φ(1/x)

which is just the uncertainty distribution of 1/ξ1. Hence Nt/t converges in distribution to 1/ξ1 as t → ∞.

Theorem 13.4 (Liu [129], Elementary Renewal Theorem) Let Nt be a renewal process with uncertain interarrival times ξ1, ξ2, · · · If E[1/ξ1] exists, then

lim_{t→∞} E[Nt]/t = E[1/ξ1].  (13.8)
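The convergence in Theorem 13.3 can be observed numerically. The sketch below (an illustration, not from the book) again assumes linear interarrival times L(1, 3) and evaluates Ψt(x) for growing t against the limit 1 − Φ(1/x):

```python
import math

def phi(x, a=1.0, b=3.0):
    """CDF of a linear uncertain variable L(1, 3) (assumed example)."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def psi_t(t, x):
    """Distribution of N_t/t at x: 1 - Phi(t / (floor(t*x) + 1))."""
    return 1.0 - phi(t / (math.floor(t * x) + 1))

x = 0.5
limit = 1.0 - phi(1.0 / x)          # distribution of 1/xi_1 at x
print([round(psi_t(t, x), 4) for t in (10, 100, 10000)], limit)
```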
The uncertain process

Rt = Σ_{i=1}^{Nt} ηi

is called a renewal reward process, where Nt is the renewal process with uncertain interarrival times ξ1, ξ2, · · ·

Theorem 13.6 (Liu [129]) Let Rt be a renewal reward process with uncertain interarrival times ξ1, ξ2, · · · and uncertain rewards η1, η2, · · · Assume those interarrival times and rewards have uncertainty distributions Φ and Ψ, respectively. Then Rt has an uncertainty distribution

Υt(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).  (13.21)

Here we set x/k = +∞ and Ψ(x/k) = 1 when k = 0.

Proof: It follows from the definition of renewal reward process that the renewal process Nt is independent of the uncertain rewards η1, η2, · · · , and Rt has an uncertainty distribution

Υt(x) = M{ Σ_{i=1}^{Nt} ηi ≤ x } = M{ ∪_{k=0}^{∞} (Nt = k) ∩ ( Σ_{i=1}^{k} ηi ≤ x ) }
 = M{ ∪_{k=0}^{∞} (Nt = k) ∩ ( η1 ≤ x/k ) }  (this is a polyrectangle)
 = max_{k≥0} M{ (Nt ≤ k) ∩ (η1 ≤ x/k) }  (polyrectangular theorem)
 = max_{k≥0} M{Nt ≤ k} ∧ M{η1 ≤ x/k}  (independence)
 = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).

The theorem is proved.
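Equation (13.21) reduces to a one-dimensional maximization over k, which can be evaluated by truncating at a large k (the tail terms vanish because Ψ(x/k) decreases toward Ψ(0)). The distributions below are assumed examples, not from the book:

```python
def linear_cdf(x, a, b):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def reward_dist(t, x, phi, psi, kmax=1000):
    """Eq. (13.21): Upsilon_t(x) = max_{k>=0} (1 - Phi(t/(k+1))) ^ Psi(x/k),
    with the convention Psi(x/0) = 1."""
    best = min(1.0 - phi(t), 1.0)                 # k = 0 term
    for k in range(1, kmax + 1):
        best = max(best, min(1.0 - phi(t / (k + 1)), psi(x / k)))
    return best

phi = lambda x: linear_cdf(x, 1.0, 3.0)   # interarrival times L(1, 3) (assumed)
psi = lambda x: linear_cdf(x, 0.0, 2.0)   # rewards L(0, 2) (assumed)
print(reward_dist(10.0, 8.0, phi, psi))
```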
[Figure: the uncertainty distribution Υt(x) of the renewal reward process Rt, a continuous piecewise curve approaching 1 as x grows.]
Theorem 13.7 (Liu [129]) Assume that Rt is a renewal reward process with uncertain interarrival times ξ1, ξ2, · · · and uncertain rewards η1, η2, · · · Then the reward rate

Rt/t → η1/ξ1  (13.22)

in the sense of convergence in distribution as t → ∞.

Proof: It follows from Theorem 13.6 that the uncertainty distribution of Rt is

Υt(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).

Then Rt/t has an uncertainty distribution

Ψt(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(tx/k).

When t → ∞, we have

Ψt(x) → sup_{y≥0} ( 1 − Φ(y) ) ∧ Ψ(xy)

which is just the uncertainty distribution of η1/ξ1. Hence Rt/t converges in distribution to η1/ξ1 as t → ∞.

Note that Ft(x) → G(x) and Ft(x) ≥ G(x). It follows from the Lebesgue dominated convergence theorem and the existence of E[η1/ξ1] that

lim_{t→∞} E[Rt]/t = lim_{t→∞} ∫_0^{+∞} (1 − Ft(x)) dx = ∫_0^{+∞} (1 − G(x)) dx = E[η1/ξ1].

If those interarrival times and rewards have regular uncertainty distributions Φ and Ψ, respectively, then the inverse uncertainty distribution of η1/ξ1 is

G⁻¹(α) = Ψ⁻¹(α) / Φ⁻¹(1 − α),
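With regular Φ and Ψ, the limit E[η1/ξ1] can be computed by integrating G⁻¹(α) = Ψ⁻¹(α)/Φ⁻¹(1 − α) over (0, 1). A minimal sketch with assumed linear distributions (rewards L(0, 2), interarrival times L(1, 3)):

```python
def expected_reward_rate(psi_inv, phi_inv, n=100000):
    """E[eta_1/xi_1] = integral_0^1 Psi^{-1}(a) / Phi^{-1}(1-a) da, midpoint rule."""
    total = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        total += psi_inv(a) / phi_inv(1.0 - a)
    return total / n

psi_inv = lambda a: 2.0 * a           # inverse CDF of L(0, 2) (assumed)
phi_inv = lambda a: 1.0 + 2.0 * a     # inverse CDF of L(1, 3) (assumed)
print(expected_reward_rate(psi_inv, phi_inv))   # exact value: (3/2)*ln(3) - 1
```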
Rt = Σ_{i=1}^{Nt} ηi  (13.25)

with iid uncertain interarrival times ξ1, ξ2, · · · and iid uncertain claim amounts η1, η2, · · · Then the capital of the insurance company at time t is

Zt = a + bt − Rt.  (13.26)
[Figure: a sample path of the insurance risk process Zt, starting from the initial capital a, increasing at the premium rate b, and dropping at the claim arrival times S1, S2, S3, S4.]
Ruin Index

The ruin index is the uncertain measure that the capital of the insurance company becomes negative.

Definition 13.3 (Liu [135]) Let Zt be an insurance risk process. Then the ruin index is defined as the uncertain measure that Zt eventually becomes negative, i.e.,

Ruin = M{ inf_{t≥0} Zt < 0 }.  (13.27)

It is clear that the ruin index is a special case of the risk index in the sense of Liu [128].
Proof: For each positive integer k, it is clear that the arrival time of the kth
claim is
Sk = ξ1 + ξ2 + · · · + ξk
whose uncertainty distribution is Φ(s/k). Define an uncertain process in-
dexed by k as follows,
Yk = a + bSk − (η1 + η2 + · · · + ηk ).
Ruin Time
Definition 13.4 (Liu [135]) Let Zt be an insurance risk process. Then the ruin time is determined by

τ = inf{ t ≥ 0 | Zt < 0 }.  (13.29)

If Zt ≥ 0 for all t ≥ 0, then we define τ = +∞. Note that the ruin time is just the first hitting time that the total capital Zt becomes negative. Since inf_{t≥0} Zt < 0 if and only if τ < +∞, the relation between ruin index and ruin time is

Ruin = M{ inf_{t≥0} Zt < 0 } = M{τ < +∞}.

Then

αk = sup{ α | kΦ⁻¹(α) ≤ t } ∧ sup{ α | a + bkΦ⁻¹(α) − kΨ⁻¹(1 − α) < 0 }.
On the one hand, it follows from the definition of the ruin time τ that

M{τ ≤ t} = M{ inf_{0≤s≤t} Zs < 0 } = M{ ∪_{k=1}^{∞} (Sk ≤ t, Yk < 0) }
 = M{ ∪_{k=1}^{∞} ( Σ_{i=1}^{k} ξi ≤ t, a + b Σ_{i=1}^{k} ξi − Σ_{i=1}^{k} ηi < 0 ) }
 ≥ M{ ∪_{k=1}^{∞} ∩_{i=1}^{k} (ξi ≤ Φ⁻¹(αk)) ∩ (ηi > Ψ⁻¹(1 − αk)) }
 ≥ ∨_{k=1}^{∞} M{ ∩_{i=1}^{k} (ξi ≤ Φ⁻¹(αk)) ∩ (ηi > Ψ⁻¹(1 − αk)) }
 = ∨_{k=1}^{∞} ∧_{i=1}^{k} M{ (ξi ≤ Φ⁻¹(αk)) ∩ (ηi > Ψ⁻¹(1 − αk)) }
 = ∨_{k=1}^{∞} ∧_{i=1}^{k} M{ ξi ≤ Φ⁻¹(αk) } ∧ M{ ηi > Ψ⁻¹(1 − αk) }
 = ∨_{k=1}^{∞} ∧_{i=1}^{k} (αk ∧ αk) = ∨_{k=1}^{∞} αk.

Thus we obtain

M{τ ≤ t} = ∨_{k=1}^{∞} αk

and the theorem is verified.
Then f(ξi ∧ s) is just the cost of replacing the ith element, and the average replacement cost before the time t is

(1/t) Σ_{i=1}^{Nt} f(ξi ∧ s).  (13.34)

Theorem 13.11 (Yao and Ralescu [245]) Assume ξ1, ξ2, · · · are iid uncertain lifetimes and s is a positive number. Then

(1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) → f(ξ1 ∧ s)/(ξ1 ∧ s)  (13.35)

in the sense of convergence in distribution as t → ∞.

Proof: Note that

(1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) = ( Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s) ) × ( Σ_{i=1}^{Nt} (ξi ∧ s) / t ).  (13.36)
On the one hand, we have

{ Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s) ≤ x }
 = ∪_{n=1}^{∞} (Nt = n) ∩ ( Σ_{i=1}^{n} f(ξi ∧ s) / Σ_{i=1}^{n} (ξi ∧ s) ≤ x )
 ⊃ ∪_{n=1}^{∞} (Nt = n) ∩ ∩_{i=1}^{n} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )
 ⊃ ∪_{n=1}^{∞} (Nt = n) ∩ ∩_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )
 ⊃ ∩_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )

and

M{ Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s) ≤ x } ≥ M{ ∩_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x ) } = M{ f(ξ1 ∧ s)/(ξ1 ∧ s) ≤ x }.

On the other hand, we have

{ Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s) ≤ x }
 = ∪_{n=1}^{∞} (Nt = n) ∩ ( Σ_{i=1}^{n} f(ξi ∧ s) / Σ_{i=1}^{n} (ξi ∧ s) ≤ x )
 ⊂ ∪_{n=1}^{∞} (Nt = n) ∩ ∪_{i=1}^{n} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )
 ⊂ ∪_{n=1}^{∞} (Nt = n) ∩ ∪_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )
 ⊂ ∪_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x )

and

M{ Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s) ≤ x } ≤ M{ ∪_{i=1}^{∞} ( f(ξi ∧ s)/(ξi ∧ s) ≤ x ) } = M{ f(ξ1 ∧ s)/(ξ1 ∧ s) ≤ x }.

Hence

Σ_{i=1}^{Nt} f(ξi ∧ s) / Σ_{i=1}^{Nt} (ξi ∧ s)  and  f(ξ1 ∧ s)/(ξ1 ∧ s)

are identically distributed uncertain variables. Since

Σ_{i=1}^{Nt} (ξi ∧ s) / t → 1

as t → ∞, it follows from (13.36) that (13.35) holds. The theorem is verified.
Theorem 13.12 (Yao and Ralescu [245]) Assume ξ1, ξ2, · · · are iid uncertain lifetimes with a common continuous uncertainty distribution Φ, and s is a positive number. Then the long-run average replacement cost is

lim_{t→∞} E[ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ] = b/s + ((a − b)/s) Φ(s) + a ∫_0^s Φ(x)/x² dx  (13.37)

and

E[ f(ξ1 ∧ s)/(ξ1 ∧ s) ] = ∫_0^{+∞} (1 − Ψ(x)) dx = b/s + ((a − b)/s) Φ(s) + a ∫_0^s Φ(x)/x² dx.

Since

Σ_{i=1}^{Nt} (ξi ∧ s) / t ≤ 1,

it follows from (13.36) that

M{ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ≤ x } ≥ M{ f(ξ1 ∧ s)/(ξ1 ∧ s) ≤ x }

for any real number x. By using the Lebesgue dominated convergence theorem, we get

lim_{t→∞} E[ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ] = lim_{t→∞} ∫_0^{+∞} ( 1 − M{ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ≤ x } ) dx
 = ∫_0^{+∞} ( 1 − M{ f(ξ1 ∧ s)/(ξ1 ∧ s) ≤ x } ) dx
 = E[ f(ξ1 ∧ s)/(ξ1 ∧ s) ].

Hence the theorem is proved. Please also note that

lim_{s→0+} lim_{t→∞} E[ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ] = +∞,  (13.38)

lim_{s→+∞} lim_{t→∞} E[ (1/t) Σ_{i=1}^{Nt} f(ξi ∧ s) ] = a ∫_0^{+∞} Φ(x)/x² dx.  (13.39)
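The limit (13.37) only involves Φ and the constants a, b, s, so it can be evaluated by one-dimensional quadrature. In the sketch below the lifetime distribution L(1, 3) and the cost constants are assumptions chosen for illustration:

```python
def long_run_cost(a, b, s, phi, n=200000):
    """Right-hand side of (13.37):
    b/s + ((a - b)/s) * Phi(s) + a * integral_0^s Phi(x)/x^2 dx (midpoint rule)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * s / n
        total += phi(x) / (x * x)
    integral = total * s / n
    return b / s + (a - b) / s * phi(s) + a * integral

phi = lambda x: min(1.0, max(0.0, (x - 1.0) / 2.0))   # lifetimes L(1, 3) (assumed)
print(long_run_cost(a=5.0, b=1.0, s=2.0, phi=phi))
```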
Note that the alternating renewal process At is just the total time at which the system is on up to time t. It is clear that

Σ_{i=1}^{Nt} ξi ≤ At ≤ Σ_{i=1}^{Nt+1} ξi  (13.42)

for each time t. We are interested in the limit property of the rate at which the system is on.
Since

ξk+1/t → 0 as t → ∞

and

Σ_{i=1}^{k+1} ηi ∼ (k + 1)η1,  Σ_{i=1}^{k} ξi ∼ kξ1,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ξi ≤ x }
 ≤ lim_{t→∞} M{ ∪_{k=0}^{∞} ( η1 > t(1 − x)/(k + 1) ) ∩ ( ξ1 ≤ tx/k ) }
 = lim_{t→∞} sup_{k≥0} M{ η1 > t(1 − x)/(k + 1) } ∧ M{ ξ1 ≤ tx/k }
 = lim_{t→∞} sup_{k≥0} ( 1 − Ψ( t(1 − x)/(k + 1) ) ) ∧ Φ( tx/k )
 = sup_{y>0} Φ(xy) ∧ ( 1 − Ψ(y − xy) ) = Υ(x).

That is,

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ξi ≤ x } ≤ Υ(x).  (13.44)
Since

ξk+1/t → 0 as t → ∞

and

Σ_{i=1}^{k} ηi ∼ kη1,  Σ_{i=1}^{k+1} ξi ∼ (k + 1)ξ1,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ξi > x }
 ≤ lim_{t→∞} M{ ∪_{k=0}^{∞} ( η1 ≤ t(1 − x)/k ) ∩ ( ξ1 > tx/(k + 1) ) }
 = lim_{t→∞} sup_{k≥0} M{ η1 ≤ t(1 − x)/k } ∧ M{ ξ1 > tx/(k + 1) }
 = lim_{t→∞} sup_{k≥0} Ψ( t(1 − x)/k ) ∧ ( 1 − Φ( tx/(k + 1) ) )
 = sup_{y>0} ( 1 − Φ(xy) ) ∧ Ψ(y − xy).

That is,

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ξi ≤ x } ≥ Υ(x).  (13.45)
Since

(1/t) Σ_{i=1}^{Nt} ξi ≤ At/t ≤ (1/t) Σ_{i=1}^{Nt+1} ξi,

we obtain

M{ (1/t) Σ_{i=1}^{Nt} ξi ≤ x } ≥ M{ At/t ≤ x } ≥ M{ (1/t) Σ_{i=1}^{Nt+1} ξi ≤ x }.

It follows from (13.44) and (13.45) that for any real number x, we have

lim_{t→∞} M{ At/t ≤ x } = Υ(x).

Hence the availability rate At/t converges in distribution to ξ1/(ξ1 + η1). The theorem is proved.

Theorem 13.14 (Yao and Li [242], Alternating Renewal Theorem) Assume At is an alternating renewal process with uncertain on-times ξ1, ξ2, · · · and uncertain off-times η1, η2, · · · If E[ξ1/(ξ1 + η1)] exists, then

lim_{t→∞} E[At]/t = E[ ξ1/(ξ1 + η1) ].  (13.46)

If those on-times and off-times have regular uncertainty distributions Φ and Ψ, respectively, then

lim_{t→∞} E[At]/t = ∫_0^1 Φ⁻¹(α) / ( Φ⁻¹(α) + Ψ⁻¹(1 − α) ) dα.  (13.47)
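Formula (13.47) is likewise a one-dimensional integral in α. A minimal numerical sketch, with assumed on-times L(2, 4) and off-times L(0, 1):

```python
def limit_availability(phi_inv, psi_inv, n=100000):
    """Eq. (13.47): integral_0^1 Phi^{-1}(a) / (Phi^{-1}(a) + Psi^{-1}(1-a)) da."""
    total = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        on, off = phi_inv(a), psi_inv(1.0 - a)
        total += on / (on + off)
    return total / n

phi_inv = lambda a: 2.0 + 2.0 * a     # inverse CDF of on-times L(2, 4) (assumed)
psi_inv = lambda a: a                 # inverse CDF of off-times L(0, 1) (assumed)
print(limit_availability(phi_inv, psi_inv))   # exact value: 2 - 4*ln(4/3)
```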
Uncertain Calculus

[Figure 14.1: the inverse uncertainty distribution Φt⁻¹(α) of a canonical Liu process, drawn for α = 0.1, 0.2, · · · , 0.9; each α-curve is a straight line through the origin.]
These are homogeneous linear functions of time t for any given α; see Figure 14.1.

A canonical Liu process is defined by three properties in the above definition. Does such an uncertain process exist? The following theorem answers this question.

Theorem 14.1 (Liu [129], Existence Theorem) There exists a canonical Liu process.

Proof: It follows from Theorem 12.11 that there exists a stationary independent increment process Ct whose inverse uncertainty distribution is

Φt⁻¹(α) = (√3 t / π) ln( α/(1 − α) ).

Furthermore, Ct has a Lipschitz continuous version. It is also easy to verify that every increment Cs+t − Cs is a normal uncertain variable with expected value 0 and variance t². Hence there exists a canonical Liu process.

Theorem 14.2 Let Ct be a canonical Liu process. Then for each time t > 0, the ratio Ct/t is a normal uncertain variable with expected value 0 and variance 1. That is,

Ct/t ∼ N(0, 1)  (14.3)

for any t > 0.

Proof: Since Ct is a normal uncertain variable N(0, t), the operational law tells us that Ct/t has an uncertainty distribution

Ψ(x) = Φt(tx) = ( 1 + exp( −πx/√3 ) )⁻¹.
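Theorem 14.2 can be checked numerically: Φt(tx) does not depend on t. A minimal sketch using the normal uncertainty distribution Φ(x) = (1 + exp(π(e − x)/(√3 σ)))⁻¹:

```python
import math

def normal_udist(x, e=0.0, s=1.0):
    """Uncertainty distribution of a normal uncertain variable N(e, s)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3.0) * s)))

# For C_t ~ N(0, t), Phi_t(t*x) equals the N(0, 1) distribution at x for every t
for t in (0.5, 2.0, 10.0):
    print(normal_udist(t * 1.0, 0.0, t), normal_udist(1.0))
```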
Theorem 14.3 (Liu [129]) Let Ct be a canonical Liu process. Then for each time t, we have

t²/2 ≤ E[Ct²] ≤ t².  (14.4)

Proof: Note that Ct is a normal uncertain variable and has an uncertainty distribution Φt(x) in (14.1). It follows from the definition of expected value that

E[Ct²] = ∫_0^{+∞} M{Ct² ≥ x} dx = ∫_0^{+∞} M{ (Ct ≥ √x) ∪ (Ct ≤ −√x) } dx.
Proof: Let q be the expected value of Ct². On the one hand, it follows from the definition of variance that

V[Ct²] = ∫_0^{+∞} M{ (Ct² − q)² ≥ x } dx
 ≤ ∫_0^{+∞} M{ Ct ≥ √(q + √x) } dx + ∫_0^{+∞} M{ Ct ≤ −√(q + √x) } dx + ∫_0^{+∞} M{ −√(q − √x) ≤ Ct ≤ √(q − √x) } dx.

Since t²/2 ≤ q ≤ t², we have

First Term = ∫_0^{+∞} M{ Ct ≥ √(q + √x) } dx ≤ ∫_0^{+∞} M{ Ct ≥ √(t²/2 + √x) } dx
 = ∫_0^{+∞} ( 1 − ( 1 + exp( −π √(t²/2 + √x) / (√3 t) ) )⁻¹ ) dx ≤ 1.725 t⁴,

Second Term = ∫_0^{+∞} M{ Ct ≤ −√(q + √x) } dx ≤ ∫_0^{+∞} M{ Ct ≤ −√(t²/2 + √x) } dx
 = ∫_0^{+∞} ( 1 + exp( π √(t²/2 + √x) / (√3 t) ) )⁻¹ dx ≤ 1.725 t⁴,

Third Term = ∫_0^{+∞} M{ −√(q − √x) ≤ Ct ≤ √(q − √x) } dx ≤ ∫_0^{+∞} M{ Ct ≤ √(q − √x) } dx
 ≤ ∫_0^{+∞} M{ Ct ≤ √(t² − √x) } dx = ∫_0^{+∞} ( 1 + exp( −π √(t² − √x) / (√3 t) ) )⁻¹ dx < 0.86 t⁴.
> 1.24t4 .
Definition 14.2 Let Ct be a canonical Liu process. Then for any real num-
bers e and σ > 0, the uncertain process
At = et + σCt (14.6)
is called an arithmetic Liu process, where e is called the drift and σ is called
the diffusion.
Similarly, the uncertain process

Gt = exp(et + σCt)

is called a geometric Liu process, where e is called the log-drift and σ is called the log-diffusion.
Note that the geometric Liu process Gt has a lognormal uncertainty dis-
tribution, i.e.,
Gt ∼ LOGN (et, σt) (14.11)
The Liu integral of Xt with respect to Ct is defined as

∫_a^b Xt dCt = lim_{∆→0} Σ_{i=1}^{k} Xti (Cti+1 − Cti)  (14.16)

provided that the limit exists almost surely and is finite. In this case, the uncertain process Xt is said to be integrable.

Since Xt and Ct are uncertain variables at each time t, the limit in (14.16) is also an uncertain variable provided that the limit exists almost surely and is finite. Hence an uncertain process Xt is integrable with respect to Ct if and only if the limit in (14.16) is an uncertain variable.
Example 14.1: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows from (14.16) that

∫_0^s dCt = lim_{∆→0} Σ_{i=1}^{k} (Cti+1 − Cti) ≡ Cs − C0 = Cs.

That is,

∫_0^s dCt = Cs.  (14.17)

Example 14.2: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows from (14.16) that

Cs² = Σ_{i=1}^{k} ( Cti+1² − Cti² )
 = Σ_{i=1}^{k} ( Cti+1 − Cti )² + 2 Σ_{i=1}^{k} Cti ( Cti+1 − Cti )
 → 0 + 2 ∫_0^s Ct dCt

as ∆ → 0. That is,

∫_0^s Ct dCt = Cs²/2.  (14.18)

Example 14.3: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows from (14.16) that

sCs = Σ_{i=1}^{k} ( ti+1 Cti+1 − ti Cti )
 = Σ_{i=1}^{k} Cti+1 (ti+1 − ti) + Σ_{i=1}^{k} ti ( Cti+1 − Cti )
 → ∫_0^s Ct dt + ∫_0^s t dCt

as ∆ → 0. That is,

∫_0^s Ct dt + ∫_0^s t dCt = sCs.  (14.19)
Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. Since the uncertain process Xt is sample-continuous, almost all sample paths are continuous functions with respect to t. Hence the limit

lim_{∆→0} Σ_{i=1}^{k} Xti (Cti+1 − Cti)

exists almost surely and is finite. On the other hand, since Xt and Ct are uncertain variables at each time t, the above limit is also a measurable function. Hence the limit is an uncertain variable and then Xt is integrable with respect to Ct.

For the partition

a = t1 < · · · < tm = a′ < tm+1 < · · · < tn = b′ < tn+1 < · · · < tk+1 = b,

the limit

lim_{∆→0} Σ_{i=1}^{k} Xti (Cti+1 − Cti)

exists almost surely and is finite. Thus Xt is integrable with respect to Ct on the subinterval [a′, b′]. Next, for the partition

a = t1 < · · · < tm = c < tm+1 < · · · < tk+1 = b,

we have

Σ_{i=1}^{k} Xti (Cti+1 − Cti) = Σ_{i=1}^{m−1} Xti (Cti+1 − Cti) + Σ_{i=m}^{k} Xti (Cti+1 − Cti).

Note that

∫_a^b Xt dCt = lim_{∆→0} Σ_{i=1}^{k} Xti (Cti+1 − Cti),
∫_a^c Xt dCt = lim_{∆→0} Σ_{i=1}^{m−1} Xti (Cti+1 − Cti),
∫_c^b Xt dCt = lim_{∆→0} Σ_{i=m}^{k} Xti (Cti+1 − Cti).

Hence the equation (14.20) is proved.
Theorem 14.7 (Linearity of Liu Integral) Let Xt and Yt be integrable uncertain processes on [a, b], and let α and β be real numbers. Then

∫_a^b (αXt + βYt) dCt = α ∫_a^b Xt dCt + β ∫_a^b Yt dCt.  (14.21)

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. It follows from the definition of Liu integral that

∫_a^b (αXt + βYt) dCt = lim_{∆→0} Σ_{i=1}^{k} (αXti + βYti)(Cti+1 − Cti)
 = lim_{∆→0} α Σ_{i=1}^{k} Xti (Cti+1 − Cti) + lim_{∆→0} β Σ_{i=1}^{k} Yti (Cti+1 − Cti)
 = α ∫_a^b Xt dCt + β ∫_a^b Yt dCt.

That is, the sum is also a normal uncertain variable. Since f is an integrable function, we have

Σ_{i=1}^{k} |f(ti)| (ti+1 − ti) → ∫_0^s |f(t)| dt
Exercise 14.1: Let s be a given time with s > 0. Show that the Liu integral

∫_0^s t dCt  (14.24)

is a normal uncertain variable N(0, s²/2).

Exercise 14.2: For any real number α with 0 < α < 1, the uncertain process

Fs = ∫_0^s (s − t)^{−α} dCt  (14.26)

Example 14.4: It follows from the equation (14.17) that the canonical Liu process Ct can be written as

Ct = ∫_0^t dCs.
Thus Ct is a Liu process with drift 0 and diffusion 1, and has an uncertain
differential dCt .
Example 14.5: It follows from the equation (14.18) that Ct² can be written as

Ct² = 2 ∫_0^t Cs dCs.

Thus Ct² is a Liu process with drift 0 and diffusion 2Ct, and has an uncertain differential

d(Ct²) = 2Ct dCt.

Example 14.6: It follows from the equation (14.19) that tCt can be written as

tCt = ∫_0^t Cs ds + ∫_0^t s dCs.

Thus tCt is a Liu process with drift Ct and diffusion t, and has an uncertain differential

d(tCt) = Ct dt + t dCt.
Proof: Let Zt be a Liu process. Then there exist two uncertain processes µt and σt such that

Zt = Z0 + ∫_0^t µs ds + ∫_0^t σs dCs.

dZt = (∂h/∂t)(t, Ct) dt + (∂h/∂c)(t, Ct) dCt.  (14.31)

Theorem 14.11 (Liu [125], Chain Rule) Let f(c) be a continuously differentiable function. Then f(Ct) has an uncertain differential

df(Ct) = f′(Ct) dCt.

That is,

∫_0^s f′(Ct) dCt = f(Cs) − f(C0).  (14.41)
Xt = exp(t),  Yt = Ct².

Then

dXt = exp(t) dt,  dYt = 2Ct dCt.

Example 14.17: The integration by parts may also calculate the uncertain differential of

Zt = sin(t + 1) ∫_0^t s dCs.

Then

dXt = cos(t + 1) dt,  dYt = t dCt.

Then

dXt = f′(t) dt,  dYt = g′(Ct) dCt.
Uncertain Differential Equation

has a solution

Xt = X0 + ∫_0^t us ds + ∫_0^t vs dCs.  (15.4)

Example 15.1: Let a and b be real numbers. Consider the uncertain differential equation

dXt = a dt + b dCt.  (15.5)

It follows from Theorem 15.1 that the solution is

Xt = X0 + ∫_0^t a ds + ∫_0^t b dCs.

That is,

Xt = X0 + at + bCt.  (15.6)

Theorem 15.2 Let ut and vt be two integrable uncertain processes. Then the uncertain differential equation

dXt = ut Xt dt + vt Xt dCt  (15.7)

has a solution

Xt = X0 exp( ∫_0^t us ds + ∫_0^t vs dCs ).  (15.8)

Example 15.2: Let a and b be real numbers. Consider the uncertain differential equation

dXt = aXt dt + bXt dCt.  (15.9)

It follows from Theorem 15.2 that the solution is

Xt = X0 exp( ∫_0^t a ds + ∫_0^t b dCs ).

That is,

Xt = X0 exp(at + bCt).  (15.10)
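The closed form (15.10) can be sanity-checked pathwise: along any fixed Lipschitz sample path of Ct, the Liu integral is an ordinary Riemann–Stieltjes integral, so an Euler scheme should converge to X0 exp(at + bCt). The sample path sin(t) and the constants below are assumptions for illustration only:

```python
import math

a, b, x0 = 0.5, 0.2, 1.0
c = math.sin                     # one assumed Lipschitz sample path of C_t

n, T = 200000, 1.0
dt = T / n
x = x0
for i in range(n):               # Euler scheme for dX = a*X dt + b*X dC
    t = i * dt
    x += a * x * dt + b * x * (c(t + dt) - c(t))

closed_form = x0 * math.exp(a * T + b * c(T))
print(x, closed_form)
```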
has a solution

Xt = Ut ( X0 + ∫_0^t (u2s/Us) ds + ∫_0^t (v2s/Us) dCs )  (15.12)

where

Ut = exp( ∫_0^t u1s ds + ∫_0^t v1s dCs ).  (15.13)

At first, we have

Ut = exp( ∫_0^t (−a) ds + ∫_0^t 0 dCs ) = exp(−at).

That is,

Xt = m/a + exp(−at)( X0 − m/a ) + σ exp(−at) ∫_0^t exp(as) dCs  (15.15)

That is,

Xt = exp(σCt)( X0 + m ∫_0^t exp(−σCs) ds ).  (15.18)

and

dXt = αt Xt dt + g(t, Xt) dCt.  (15.20)

Theorem 15.4 (Liu [148]) Let f be a function of two variables and let σt be an integrable uncertain process. Then the uncertain differential equation

dXt = f(t, Xt) dt + σt Xt dCt  (15.21)

has a solution

Xt = Yt⁻¹ Zt  (15.22)

where

Yt = exp( −∫_0^t σs dCs )  (15.23)

and Zt is the solution of the uncertain differential equation

dZt = Yt f(t, Yt⁻¹ Zt) dt  (15.24)

with initial value Z0 = X0.

Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

dYt = −exp( −∫_0^t σs dCs ) σt dCt = −Yt σt dCt.

Theorem 15.4 says the uncertain differential equation (15.25) has a solution Xt = Yt⁻¹ Zt, i.e.,

Xt = exp(σCt) ( X0^{1−α} + (1 − α) ∫_0^t exp((α − 1)σCs) ds )^{1/(1−α)}.
Theorem 15.5 (Liu [148]) Let g be a function of two variables and let αt be an integrable uncertain process. Then the uncertain differential equation

dXt = αt Xt dt + g(t, Xt) dCt  (15.26)

has a solution

Xt = Yt⁻¹ Zt  (15.27)

where

Yt = exp( −∫_0^t αs ds )  (15.28)

and Zt is the solution of the uncertain differential equation

dZt = Yt g(t, Yt⁻¹ Zt) dCt  (15.29)

with initial value Z0 = X0.

Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

dYt = −exp( −∫_0^t αs ds ) αt dt = −Yt αt dt.

That is,

d(Xt Yt) = Yt g(t, Xt) dCt.

Defining Zt = Xt Yt, we obtain Xt = Yt⁻¹ Zt and dZt = Yt g(t, Yt⁻¹ Zt) dCt. Furthermore, since Y0 = 1, the initial value Z0 is just X0. The theorem is thus verified.

Since β ≠ 1, we have

Theorem 15.5 says the uncertain differential equation (15.30) has a solution Xt = Yt⁻¹ Zt, i.e.,

Xt = exp(αt) ( X0^{1−β} + (1 − β) ∫_0^t exp((β − 1)αs) dCs )^{1/(1−β)}.
and

dXt = αt dt + g(t, Xt) dCt.  (15.32)

Theorem 15.6 (Yao [247]) Let f be a function of two variables and let σt be an integrable uncertain process. Then the uncertain differential equation

dXt = f(t, Xt) dt + σt dCt  (15.33)

has a solution

Xt = Yt + Zt  (15.34)

where

Yt = ∫_0^t σs dCs  (15.35)

and Zt is the solution of the uncertain differential equation

dZt = f(t, Yt + Zt) dt  (15.36)

with initial value Z0 = X0.

Hence

Xt = X0 + σCt − ln( 1 − α ∫_0^t exp(X0 + σCs) ds ).

Theorem 15.7 (Yao [247]) Let g be a function of two variables and let αt be an integrable uncertain process. Then the uncertain differential equation

dXt = αt dt + g(t, Xt) dCt  (15.38)

has a solution

Xt = Yt + Zt  (15.39)

where

Yt = ∫_0^t αs ds  (15.40)

and Zt is the solution of the uncertain differential equation

dZt = g(t, Yt + Zt) dCt  (15.41)

with initial value Z0 = X0.

That is,

d(Xt − Yt) = g(t, Xt) dCt.

Defining Zt = Xt − Yt, we obtain Xt = Yt + Zt and dZt = g(t, Yt + Zt) dCt. Furthermore, since Y0 = 0, the initial value Z0 is just X0. The theorem is proved.

Since σ ≠ 0, we have

d exp(−Zt) = σ exp(αt) dCt.

Hence

Xt = X0 + αt − ln( 1 − σ ∫_0^t exp(X0 + αs) dCs ).
has a unique solution if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition

|f(t, x)| + |g(t, x)| ≤ L(1 + |x|), ∀x ∈ ℜ, t ≥ 0  (15.44)

and the Lipschitz condition

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|, ∀x, y ∈ ℜ, t ≥ 0  (15.45)

for some constant L.

For each γ ∈ Γ, it follows from the linear growth condition and the Lipschitz condition that

Dt^{(0)}(γ) = max_{0≤s≤t} | ∫_0^s f(v, X0) dv + ∫_0^s g(v, X0) dCv(γ) |
 ≤ ∫_0^t |f(v, X0)| dv + Kγ ∫_0^t |g(v, X0)| dv
 ≤ (1 + |X0|) L (1 + Kγ) t.

Next we prove that the solution is unique. Assume that both Xt and Xt∗ are solutions of the uncertain differential equation. Then for each γ ∈ Γ, it follows from the linear growth condition and the Lipschitz condition that

|Xt(γ) − Xt∗(γ)| ≤ L(1 + Kγ) ∫_0^t |Xv(γ) − Xv∗(γ)| dv.
15.4 Stability

Definition 15.2 (Liu [125]) An uncertain differential equation is said to be stable if for any two solutions Xt and Yt, we have

lim_{|X0−Y0|→0} M{ |Xt − Yt| < ε for all t ≥ 0 } = 1  (15.46)

for any given number ε > 0.

Example 15.10: Some uncertain differential equations are not stable. For example, consider

dXt = Xt dt + b dCt.  (15.48)

It is clear that two solutions with different initial values X0 and Y0 are

Xt = exp(t) X0 + b exp(t) ∫_0^t exp(−s) dCs,
Yt = exp(t) Y0 + b exp(t) ∫_0^t exp(−s) dCs.

Then for any given number ε > 0, we have

lim_{|X0−Y0|→0} M{ |Xt − Yt| < ε for all t ≥ 0 }

Theorem 15.9 (Yao, Gao and Gao [243], Stability Theorem) The uncertain differential equation is stable if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition and the strong Lipschitz condition

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L(t)|x − y|, ∀x, y ∈ ℜ, t ≥ 0  (15.51)

where L(t) is a bounded and integrable function on [0, +∞).

Proof: Since L(t) is bounded on [0, +∞), there is a constant R such that L(t) ≤ R for any t. Then the strong Lipschitz condition (15.51) implies the following Lipschitz condition,

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ R|x − y|, ∀x, y ∈ ℜ, t ≥ 0.  (15.52)

where K(γ) is the Lipschitz constant of the sample path Ct(γ). It follows that

|Xt(γ) − Yt(γ)| ≤ |X0 − Y0| exp( (1 + K(γ)) ∫_0^{+∞} L(s) ds ).

Since

M{ |X0 − Y0| exp( (1 + K(γ)) ∫_0^{+∞} L(s) ds ) < ε } → 1

as |X0 − Y0| → 0, we obtain the stability of the equation.

Exercise 15.1: Suppose u1t, u2t, v1t, v2t are bounded functions with respect to t such that

∫_0^{+∞} |u1t| dt < +∞,  ∫_0^{+∞} |v1t| dt < +∞.  (15.53)

is stable.
15.5 α-Path

Definition 15.3 (Yao and Chen [246]) Let α be a number with 0 < α < 1. An uncertain differential equation

dXt = f(t, Xt) dt + g(t, Xt) dCt

is said to have an α-path Xtα if it solves the corresponding ordinary differential equation

dXtα = f(t, Xtα) dt + |g(t, Xtα)| Φ⁻¹(α) dt

where Φ⁻¹(α) is the inverse standard normal uncertainty distribution, i.e.,

Φ⁻¹(α) = (√3/π) ln( α/(1 − α) ).

Remark 15.2: Note that each α-path Xtα is a real-valued function of time t, but is not necessarily one of the sample paths. Furthermore, almost all α-paths are continuous functions with respect to time t.

Example 15.11: The uncertain differential equation dXt = a dt + b dCt with X0 = 0 has an α-path

Xtα = at + |b| Φ⁻¹(α) t  (15.58)

where Φ⁻¹ is the inverse standard normal uncertainty distribution.

Example 15.12: The uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an α-path

Xtα = exp( at + |b| Φ⁻¹(α) t ).
Xtα = exp(at + |b|Φ−1(α)t) (15.59)

where Φ−1 is the inverse standard normal uncertainty distribution.

[Figure: α-paths of the uncertain differential equation for α = 0.1, 0.2, · · · , 0.9, plotted against time t.]
Theorem 15.10 (Yao-Chen Formula [246]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt,

respectively. Then

M{Xt ≤ Xtα, ∀t} = α, (15.61)

M{Xt > Xtα, ∀t} = 1 − α. (15.62)
Proof: At first, for each α-path Xtα, we divide the time interval into two parts,

T+ = { t | g(t, Xtα) ≥ 0 }, T− = { t | g(t, Xtα) < 0 },

and write

Λ1+ = { γ | dCt(γ)/dt ≤ Φ−1(α) for any t ∈ T+ },

Λ1− = { γ | dCt(γ)/dt ≥ Φ−1(1 − α) for any t ∈ T− }

where Φ−1 is the inverse standard normal uncertainty distribution. Since T+ and T− are disjoint sets and Ct has independent increments, we get

M{Λ1+} = α, M{Λ1−} = α, M{Λ1+ ∩ Λ1−} = α.

For any γ ∈ Λ1+ ∩ Λ1−, we always have

g(t, Xt(γ)) dCt(γ)/dt ≤ |g(t, Xtα)|Φ−1(α), ∀t.

Hence Xt(γ) ≤ Xtα for all t and

M{Xt ≤ Xtα, ∀t} ≥ M{Λ1+ ∩ Λ1−} = α. (15.63)

Similarly, write

Λ2+ = { γ | dCt(γ)/dt > Φ−1(α) for any t ∈ T+ },

Λ2− = { γ | dCt(γ)/dt < Φ−1(1 − α) for any t ∈ T− }.

Then

M{Λ2+} = 1 − α, M{Λ2−} = 1 − α, M{Λ2+ ∩ Λ2−} = 1 − α.

For any γ ∈ Λ2+ ∩ Λ2−, we always have

g(t, Xt(γ)) dCt(γ)/dt > |g(t, Xtα)|Φ−1(α), ∀t.

Hence Xt(γ) > Xtα for all t and

M{Xt > Xtα, ∀t} ≥ M{Λ2+ ∩ Λ2−} = 1 − α. (15.64)

Note that {Xt ≤ Xtα, ∀t} and {Xt ≰ Xtα, ∀t} are opposite events of each other. By using the duality axiom, we obtain

M{Xt ≤ Xtα, ∀t} + M{Xt ≰ Xtα, ∀t} = 1. (15.65)

It follows from the event inclusion {Xt > Xtα, ∀t} ⊂ {Xt ≰ Xtα, ∀t} and the monotonicity theorem that

M{Xt > Xtα, ∀t} ≤ M{Xt ≰ Xtα, ∀t}.

Thus (15.61) and (15.62) follow from (15.63), (15.64) and (15.65) immediately.
334 Chapter 15 - Uncertain Differential Equation
Exercise 15.2: Show that the solution of the uncertain differential equation dXt = adt + bdCt with X0 = 0 has an inverse uncertainty distribution

Ψt−1(α) = at + |b|Φ−1(α)t (15.75)

where Φ−1 is the inverse standard normal uncertainty distribution.

Exercise 15.3: Show that the solution of the uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an inverse uncertainty distribution

Ψt−1(α) = exp(at + |b|Φ−1(α)t) (15.76)

where Φ−1 is the inverse standard normal uncertainty distribution.
Section 15.6 - Yao-Chen Formula 335
Thus we have

E[J(Xt)] = ∫₀¹ Υt−1(α)dα = ∫₀¹ J(Xtα)dα.

When J is a strictly decreasing function, the inverse uncertainty distribution of J(Xt) is

Υt−1(α) = J(Xt^{1−α}).

Thus we have

E[J(Xt)] = ∫₀¹ Υt−1(α)dα = ∫₀¹ J(Xt^{1−α})dα = ∫₀¹ J(Xtα)dα.
Exercise 15.4: Let Xt and Xtα be the solution and α-path of some uncertain differential equation. Show that

E[Xt] = ∫₀¹ Xtα dα, (15.79)

E[(Xt − K)+] = ∫₀¹ (Xtα − K)+ dα, (15.80)

E[(K − Xt)+] = ∫₀¹ (K − Xtα)+ dα. (15.81)
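Formulas (15.79)–(15.81) lend themselves to a quick numerical check. The sketch below is a minimal illustration, assuming the equation of Example 15.12 so that the α-path has the closed form Xtα = exp(at + |b|Φ−1(α)t); all function names are ours, not the book's.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution:
    # Phi^{-1}(alpha) = (sqrt(3)/pi) * ln(alpha / (1 - alpha))
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def alpha_path(a, b, t, alpha):
    # alpha-path of dXt = a*Xt dt + b*Xt dCt with X0 = 1 (equation 15.59)
    return math.exp(a * t + abs(b) * inv_phi(alpha) * t)

def expected_value(a, b, t, n=100_000):
    # E[Xt] = integral_0^1 X_t^alpha d(alpha), midpoint rule on (0, 1)
    return sum(alpha_path(a, b, t, (i + 0.5) / n) for i in range(n)) / n

print(expected_value(0.1, 0.2, 1.0))
```

The midpoint rule avoids the endpoints α = 0 and α = 1, where Φ−1(α) is unbounded; the integral still converges whenever |b|t < π/√3.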
Theorem 15.13 (Yao [244]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt,

respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum

sup_{0≤t≤s} J(Xt) (15.83)

has an inverse uncertainty distribution

Ψs−1(α) = sup_{0≤t≤s} J(Xtα); (15.84)

and the infimum

inf_{0≤t≤s} J(Xt)

has an inverse uncertainty distribution

Ψs−1(α) = inf_{0≤t≤s} J(Xtα). (15.86)
Similarly, we have

M{ sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xtα) } ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.88)

Similarly, we have

M{ inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xtα) } ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.91)
Exercise 15.5: Let r and K be real numbers. Show that the supremum

sup_{0≤t≤s} exp(−rt)(Xt − K)

has an inverse uncertainty distribution

Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(Xtα − K).
Theorem 15.14 (Yao [244]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt,

respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum

sup_{0≤t≤s} J(Xt) (15.94)

has an inverse uncertainty distribution

Ψs−1(α) = sup_{0≤t≤s} J(Xt^{1−α}); (15.95)

and the infimum

inf_{0≤t≤s} J(Xt)

has an inverse uncertainty distribution

Ψs−1(α) = inf_{0≤t≤s} J(Xt^{1−α}). (15.97)
Similarly, we have

M{ sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xt^{1−α}) } ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (15.99)

Similarly, we have

M{ inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xt^{1−α}) } ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (15.102)
Exercise 15.6: Let r and K be real numbers. Show that the supremum

sup_{0≤t≤s} exp(−rt)(K − Xt)

has an inverse uncertainty distribution

Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(K − Xt^{1−α}).
Theorem 15.15 (Yao [244]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt

with an initial value X0, respectively. Then for any given level z and strictly increasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution

Ψ(s) = { 1 − inf { α | sup_{0≤t≤s} J(Xtα) ≥ z },  if z > J(X0)
       { sup { α | inf_{0≤t≤s} J(Xtα) ≤ z },      if z < J(X0).      (15.105)

Proof: Assume z > J(X0) and write α0 = inf { α | sup_{0≤t≤s} J(Xtα) ≥ z }. Then we have

sup_{0≤t≤s} J(Xt^{α0}) = z,

{τz ≤ s} = { sup_{0≤t≤s} J(Xt) ≥ z } ⊃ {Xt ≥ Xt^{α0}, ∀t},

{τz > s} = { sup_{0≤t≤s} J(Xt) < z } ⊃ {Xt < Xt^{α0}, ∀t}.
When z < J(X0), let α0 = sup { α | inf_{0≤t≤s} J(Xtα) ≤ z }. Then we have

inf_{0≤t≤s} J(Xt^{α0}) = z,

{τz ≤ s} = { inf_{0≤t≤s} J(Xt) ≤ z } ⊃ {Xt ≤ Xt^{α0}, ∀t},

{τz > s} = { inf_{0≤t≤s} J(Xt) > z } ⊃ {Xt > Xt^{α0}, ∀t}.
Theorem 15.16 (Yao [244]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt

with an initial value X0, respectively. Then for any given level z and strictly decreasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution

Ψ(s) = { sup { α | sup_{0≤t≤s} J(Xtα) ≥ z },       if z > J(X0)
       { 1 − inf { α | inf_{0≤t≤s} J(Xtα) ≤ z },   if z < J(X0).     (15.107)
Proof: Assume z > J(X0) and write α0 = sup { α | sup_{0≤t≤s} J(Xtα) ≥ z }. Then we have

sup_{0≤t≤s} J(Xt^{α0}) = z,

{τz ≤ s} = { sup_{0≤t≤s} J(Xt) ≥ z } ⊃ {Xt ≤ Xt^{α0}, ∀t},

{τz > s} = { sup_{0≤t≤s} J(Xt) < z } ⊃ {Xt > Xt^{α0}, ∀t}.
When z < J(X0), let α0 = inf { α | inf_{0≤t≤s} J(Xtα) ≤ z }. Then we have

inf_{0≤t≤s} J(Xt^{α0}) = z,

{τz ≤ s} = { inf_{0≤t≤s} J(Xt) ≤ z } ⊃ {Xt ≥ Xt^{α0}, ∀t},

{τz > s} = { inf_{0≤t≤s} J(Xt) > z } ⊃ {Xt < Xt^{α0}, ∀t}.
Theorem 15.17 (Yao [244]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation

dXt = f(t, Xt)dt + g(t, Xt)dCt,

respectively. Then for any time s > 0 and strictly increasing function J(x), the time integral

∫₀ˢ J(Xt)dt (15.109)

has an inverse uncertainty distribution

Ψs−1(α) = ∫₀ˢ J(Xtα)dt.
Similarly, we have

M{ ∫₀ˢ J(Xt)dt > ∫₀ˢ J(Xtα)dt } ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.112)
Exercise 15.7: Let r and K be real numbers. Show that the time integral

∫₀ˢ exp(−rt)(Xt − K)dt

has an inverse uncertainty distribution

Ψs−1(α) = ∫₀ˢ exp(−rt)(Xtα − K)dt.
Similarly, we have

M{ ∫₀ˢ J(Xt)dt > ∫₀ˢ J(Xt^{1−α})dt } ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (15.118)
Exercise 15.8: Let r and K be real numbers. Show that the time integral

∫₀ˢ exp(−rt)(K − Xt)dt

has an inverse uncertainty distribution

Ψs−1(α) = ∫₀ˢ exp(−rt)(K − Xt^{1−α})dt.
Step 2. Solve dXtα = f(t, Xtα)dt + |g(t, Xtα)|Φ−1(α)dt by any method of ordinary differential equations and obtain the α-path Xtα, for example, by using the recursion formula

X_{i+1}^α = X_i^α + f(ti, X_i^α)h + |g(ti, X_i^α)|Φ−1(α)h (15.121)

where h is the step length.

Remark 15.4: Shen and Yao [209] designed a Runge-Kutta method that replaces the recursion formula (15.121) with

X_{i+1}^α = X_i^α + (h/6)(k1 + 2k2 + 2k3 + k4) (15.122)

where

k1 = f(ti, X_i^α) + |g(ti, X_i^α)|Φ−1(α), (15.123)

k2 = f(ti + h/2, X_i^α + hk1/2) + |g(ti + h/2, X_i^α + hk1/2)|Φ−1(α), (15.124)

k3 = f(ti + h/2, X_i^α + hk2/2) + |g(ti + h/2, X_i^α + hk2/2)|Φ−1(α), (15.125)

k4 = f(ti + h, X_i^α + hk3) + |g(ti + h, X_i^α + hk3)|Φ−1(α). (15.126)
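The Euler recursion (15.121) can be sketched in a few lines. The code below is an illustrative implementation (names are ours); it solves the α-path ODE for the equation of Example 15.12 and compares the result with the closed form exp(at + |b|Φ−1(α)t).

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def euler_alpha_path(f, g, x0, alpha, t_end, n):
    # Recursion (15.121): X_{i+1} = X_i + f(t_i, X_i) h + |g(t_i, X_i)| Phi^{-1}(alpha) h
    h = t_end / n
    x, t = x0, 0.0
    for _ in range(n):
        x += f(t, x) * h + abs(g(t, x)) * inv_phi(alpha) * h
        t += h
    return x

a, b, alpha, T = 0.5, 0.3, 0.8, 1.0
approx = euler_alpha_path(lambda t, x: a * x, lambda t, x: b * x, 1.0, alpha, T, 20000)
exact = math.exp(a * T + abs(b) * inv_phi(alpha) * T)  # alpha-path of Example 15.12
print(approx, exact)
```

With 20 000 steps the Euler approximation agrees with the closed form to roughly four decimal places; the Runge-Kutta recursion (15.122) would reach the same accuracy with far fewer steps.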
Uncertain Finance
This chapter will introduce the uncertain stock model, uncertain interest rate model, and uncertain currency model by using the tool of uncertain differential equations.
Liu [125] assumed that the bond price Xt and the stock price Yt follow the uncertain stock model

dXt = rXt dt, dYt = eYt dt + σYt dCt (16.1)

where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and Ct is a canonical Liu process. Note that the bond price is Xt = X0 exp(rt) and the stock price is

Yt = Y0 exp(et + σCt) (16.2)

whose inverse uncertainty distribution is

Φt−1(α) = Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ). (16.3)
European Option
Definition 16.1 A European call option is a contract that gives the holder
the right to buy a stock at an expiration time s for a strike price K.
The payoff from a European call option is (Ys − K)+ since the option is rationally exercised if and only if Ys > K. Considering the time value of money resulting from the bond, the present value of the payoff is exp(−rs)(Ys − K)+. Hence the European call option price should be the expected present value of the payoff.

Definition 16.2 Assume a European call option has a strike price K and an expiration time s. Then the European call option price is

fc = exp(−rs)E[(Ys − K)+]. (16.4)
[Figure: a sample path of the stock price Yt reaching Ys at the expiration time s, with strike price K and initial price Y0; horizontal axis t.]
Theorem 16.1 (Liu [125]) Assume a European call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European call option price is

fc = exp(−rs) ∫₀¹ ( Y0 exp( es + (σs√3/π) ln(α/(1 − α)) ) − K )+ dα. (16.5)
It follows from Definition 16.2 that the European call option price formula is
just (16.5).
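Formula (16.5) is a one-dimensional integral over α and is easy to evaluate numerically. The sketch below uses a midpoint rule; the parameter values are illustrative, not the book's worked example.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def european_call(r, e, sigma, Y0, K, s, n=20000):
    # fc = exp(-r s) * integral_0^1 ( Y0 exp(e s + sigma s Phi^{-1}(alpha)) - K )^+ d alpha
    total = 0.0
    for i in range(n):
        alpha = (i + 0.5) / n
        payoff = Y0 * math.exp(e * s + sigma * s * inv_phi(alpha)) - K
        total += max(payoff, 0.0)
    return math.exp(-r * s) * total / n

# illustrative parameters (assumed here, not taken from the book's example)
print(european_call(r=0.08, e=0.06, sigma=0.32, Y0=20.0, K=25.0, s=1.0))
```

Note that (σs√3/π) ln(α/(1 − α)) is exactly σs·Φ−1(α), which is how the integrand is coded.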
Remark 16.1: It is clear that the European call option price is a decreasing function of the interest rate r. That is, the European call option will lose value if the interest rate is raised, and will gain value if the interest rate is reduced. In addition, the European call option price is also a decreasing function of the strike price K.
Section 16.1 - Uncertain Stock Model 349
fc = 6.91.
Definition 16.3 A European put option is a contract that gives the holder
the right to sell a stock at an expiration time s for a strike price K.
The payoff from a European put option is (K − Ys)+ since the option is rationally exercised if and only if Ys < K. Considering the time value of money resulting from the bond, the present value of this payoff is exp(−rs)(K − Ys)+.
Hence the European put option price should be the expected present value
of the payoff.
Definition 16.4 Assume a European put option has a strike price K and
an expiration time s. Then the European put option price is
fp = exp(−rs)E[(K − Ys )+ ]. (16.6)
Theorem 16.2 (Liu [125]) Assume a European put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the European put option price is

fp = exp(−rs) ∫₀¹ ( K − Y0 exp( es + (σs√3/π) ln(α/(1 − α)) ) )+ dα. (16.7)

Proof: It follows from Definition 16.4 that the European put option price is

fp = exp(−rs) ∫₀¹ ( K − Y0 exp( es + (σs√3/π) ln((1 − α)/α) ) )+ dα

= exp(−rs) ∫₀¹ ( K − Y0 exp( es + (σs√3/π) ln(α/(1 − α)) ) )+ dα.
fp = 4.40.
American Option
Definition 16.5 An American call option is a contract that gives the holder
the right to buy a stock at any time prior to an expiration time s for a strike
price K.
It is clear that the payoff from an American call option is the supremum of (Yt − K)+ over the time interval [0, s]. Considering the time value of money resulting from the bond, the present value of this payoff is

sup_{0≤t≤s} exp(−rt)(Yt − K)+. (16.8)
Hence the American call option price should be the expected present value
of the payoff.
Definition 16.6 Assume an American call option has a strike price K and an expiration time s. Then the American call option price is

fc = E[ sup_{0≤t≤s} exp(−rt)(Yt − K)+ ]. (16.9)
Theorem 16.3 (Chen [13]) Assume an American call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American call option price is

fc = ∫₀¹ sup_{0≤t≤s} exp(−rt) ( Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ) − K )+ dα.

Proof: It follows from Theorem 15.13 that sup_{0≤t≤s} exp(−rt)(Yt − K)+ has an inverse uncertainty distribution

Ψs−1(α) = sup_{0≤t≤s} exp(−rt) ( Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ) − K )+.

Hence the American call option price formula follows from Definition 16.6 immediately.
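The American call price replaces the terminal payoff by a supremum over time, which can be approximated on a time grid. The sketch below is an illustration with assumed parameters (the grid sizes and names are ours).

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def american_call(r, e, sigma, Y0, K, s, n_alpha=2000, n_t=200):
    # fc = integral_0^1 sup_{0<=t<=s} exp(-r t)(Y_t^alpha - K)^+ d alpha,
    # where Y_t^alpha = Y0 exp(e t + sigma t Phi^{-1}(alpha))
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        best = 0.0
        for j in range(n_t + 1):
            t = s * j / n_t
            payoff = math.exp(-r * t) * max(
                Y0 * math.exp(e * t + sigma * t * inv_phi(alpha)) - K, 0.0)
            best = max(best, payoff)
        total += best
    return total / n_alpha

am = american_call(0.08, 0.06, 0.32, 20.0, 25.0, 1.0)
print(am)
```

Since the time grid includes t = s, this approximation is always at least the European call price computed on the same α grid, as the theory requires.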
fc = 19.8.
Definition 16.7 An American put option is a contract that gives the holder
the right to sell a stock at any time prior to an expiration time s for a strike
price K.
It is clear that the payoff from an American put option is the supremum of (K − Yt)+ over the time interval [0, s]. Considering the time value of money resulting from the bond, the present value of this payoff is

sup_{0≤t≤s} exp(−rt)(K − Yt)+. (16.10)
Hence the American put option price should be the expected present value
of the payoff.
Definition 16.8 Assume an American put option has a strike price K and an expiration time s. Then the American put option price is

fp = E[ sup_{0≤t≤s} exp(−rt)(K − Yt)+ ]. (16.11)
Theorem 16.4 (Chen [13]) Assume an American put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the American put option price is

fp = ∫₀¹ sup_{0≤t≤s} exp(−rt) ( K − Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ) )+ dα.
Hence the American put option price formula follows from Definition 16.8
immediately.
Asian Option
Definition 16.9 An Asian call option is a contract whose payoff at the expiration time s is

( (1/s) ∫₀ˢ Yt dt − K )+ (16.12)

where K is a strike price.

Considering the time value of money resulting from the bond, the present value of the payoff from an Asian call option is

exp(−rs) ( (1/s) ∫₀ˢ Yt dt − K )+. (16.13)

Hence the Asian call option price should be the expected present value of the payoff.

Definition 16.10 Assume an Asian call option has a strike price K and an expiration time s. Then the Asian call option price is

fc = exp(−rs) E[ ( (1/s) ∫₀ˢ Yt dt − K )+ ]. (16.14)
Theorem 16.5 (Sun and Chen [218]) Assume an Asian call option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the Asian call option price is

fc = exp(−rs) ∫₀¹ ( (Y0/s) ∫₀ˢ exp( et + (σt√3/π) ln(α/(1 − α)) ) dt − K )+ dα.

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

∫₀ˢ Yt dt

is

Ψs−1(α) = ∫₀ˢ Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ) dt.

Hence the Asian call option price formula follows from Definition 16.10 immediately.
Theorem 16.6 (Sun and Chen [218]) Assume an Asian put option for the uncertain stock model (16.1) has a strike price K and an expiration time s. Then the Asian put option price is

fp = exp(−rs) ∫₀¹ ( K − (Y0/s) ∫₀ˢ exp( et + (σt√3/π) ln(α/(1 − α)) ) dt )+ dα.

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

∫₀ˢ Yt dt

is

Ψs−1(α) = ∫₀ˢ Y0 exp( et + (σt√3/π) ln(α/(1 − α)) ) dt.

Hence the Asian put option price formula follows from Definition 16.12 immediately.
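The Asian call price involves the time average of the α-path, which can be approximated by a midpoint rule in t inside a midpoint rule in α. The sketch below is illustrative; the parameters and grid sizes are assumptions of ours.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def asian_call(r, e, sigma, Y0, K, s, n_alpha=500, n_t=200):
    # fc = exp(-r s) * integral_0^1 ( (1/s) int_0^s Y_t^alpha dt - K )^+ d alpha
    total = 0.0
    h = s / n_t
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        # midpoint rule in time for (1/s) int_0^s Y_t^alpha dt
        avg = sum(Y0 * math.exp(e * (j + 0.5) * h
                                + sigma * (j + 0.5) * h * inv_phi(alpha))
                  for j in range(n_t)) * h / s
        total += max(avg - K, 0.0)
    return math.exp(-r * s) * total / n_alpha

price = asian_call(0.08, 0.06, 0.32, 20.0, 25.0, 1.0)
print(price)
```

Because the payoff depends on the running average rather than the terminal value, the Asian call is typically much cheaper than the European call with the same strike.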
More generally, we may assume the stock price Yt follows a general uncertain differential equation and obtain a general stock model in which

dYt = F(t, Yt)dt + G(t, Yt)dCt

where r is the riskless interest rate, F and G are two functions, and Ct is a canonical Liu process.

Note that the α-path Ytα of the stock price Yt can be calculated by some numerical methods. Assume the strike price is K and the expiration time is s. It follows from Definition 16.2 and Theorem 15.12 that the European call option price is

fc = exp(−rs) ∫₀¹ (Ysα − K)+ dα. (16.19)

It follows from Definition 16.4 and Theorem 15.12 that the European put option price is

fp = exp(−rs) ∫₀¹ (K − Ysα)+ dα. (16.20)

It follows from Definition 16.6 and Theorem 15.13 that the American call option price is

fc = ∫₀¹ sup_{0≤t≤s} exp(−rt)(Ytα − K)+ dα. (16.21)

It follows from Definition 16.8 and Theorem 15.14 that the American put option price is

fp = ∫₀¹ sup_{0≤t≤s} exp(−rt)(K − Ytα)+ dα. (16.22)

It follows from Definition 16.9 and Theorem 15.17 that the Asian call option price is

fc = exp(−rs) ∫₀¹ ( (1/s) ∫₀ˢ Ytα dt − K )+ dα. (16.23)

It follows from Definition 16.11 and Theorem 15.18 that the Asian put option price is

fp = exp(−rs) ∫₀¹ ( K − (1/s) ∫₀ˢ Ytα dt )+ dα. (16.24)
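The pricing formulas above combine into a simple pipeline: compute each α-path numerically (here by the Euler recursion (15.121)), then integrate the payoff over α. The sketch below prices a European call under a general model; when F(t, y) = ey and G(t, y) = σy it should reproduce Theorem 16.1. All names and parameters are illustrative.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def terminal_alpha_path(F, G, Y0, alpha, s, n=1000):
    # Euler scheme for the alpha-path ODE of dY = F(t,Y)dt + G(t,Y)dCt:
    # Y_{i+1} = Y_i + F h + |G| Phi^{-1}(alpha) h
    h = s / n
    y, t = Y0, 0.0
    for _ in range(n):
        y += F(t, y) * h + abs(G(t, y)) * inv_phi(alpha) * h
        t += h
    return y

def european_call_general(F, G, Y0, K, r, s, n_alpha=400):
    # (16.19): fc = exp(-r s) * integral_0^1 (Y_s^alpha - K)^+ d alpha
    total = sum(max(terminal_alpha_path(F, G, Y0, (i + 0.5) / n_alpha, s) - K, 0.0)
                for i in range(n_alpha))
    return math.exp(-r * s) * total / n_alpha

fc = european_call_general(lambda t, y: 0.06 * y, lambda t, y: 0.32 * y,
                           20.0, 25.0, 0.08, 1.0)
print(fc)
```

The same skeleton handles the put, American, and Asian prices by swapping the payoff functional applied to the α-path.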
Faced with multiple stocks, we may employ the multifactor stock model

dXt = rXt dt,
dYit = ei Yit dt + Σ_{j=1}^n σij Yit dCjt, i = 1, 2, · · · , m (16.25)

where r is the riskless interest rate, ei are the log-drifts, σij are the log-diffusions, and Cjt are independent Liu processes, i = 1, 2, · · · , m, j = 1, 2, · · · , n.
Portfolio Selection
For the multifactor stock model (16.25), we have the choice of m + 1 different investments. At each time t we may choose a portfolio (βt, β1t, · · · , βmt) (i.e., the investment fractions meeting βt + β1t + · · · + βmt = 1). Then the wealth Zt at time t should follow the uncertain differential equation

dZt = rβt Zt dt + Σ_{i=1}^m ei βit Zt dt + Σ_{i=1}^m Σ_{j=1}^n σij βit Zt dCjt. (16.26)

That is,

Zt = Z0 exp(rt) exp( ∫₀ᵗ Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫₀ᵗ Σ_{i=1}^m σij βis dCjs ).
No-Arbitrage
The stock model (16.25) is said to be no-arbitrage if there is no portfolio
(βt , β1t , · · · , βmt ) such that for some time s > 0, we have
M{exp(−rs)Zs ≥ Z0 } = 1 (16.27)
and
M{exp(−rs)Zs > Z0 } > 0 (16.28)
where Zt is determined by (16.26) and represents the wealth at time t.
Theorem 16.7 The multifactor stock model (16.25) is no-arbitrage if and only if the system of linear equations

σ11 x1 + σ12 x2 + · · · + σ1n xn = e1 − r
σ21 x1 + σ22 x2 + · · · + σ2n xn = e2 − r
· · ·
σm1 x1 + σm2 x2 + · · · + σmn xn = em − r (16.29)

has a solution, i.e., (e1 − r, e2 − r, · · · , em − r) is a linear combination of the column vectors (σ11, σ21, · · · , σm1), (σ12, σ22, · · · , σm2), · · · , (σ1n, σ2n, · · · , σmn).

Proof: When the portfolio (βt, β1t, · · · , βmt) is accepted, the wealth at each time t is

Zt = Z0 exp(rt) exp( ∫₀ᵗ Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫₀ᵗ Σ_{i=1}^m σij βis dCjs ).
356 Chapter 16 - Uncertain Finance
Thus

ln(exp(−rt)Zt) − ln Z0 = ∫₀ᵗ Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫₀ᵗ Σ_{i=1}^m σij βis dCjs,

which is a normal uncertain variable with expected value

∫₀ᵗ Σ_{i=1}^m (ei − r)βis ds

and variance

( Σ_{j=1}^n ∫₀ᵗ | Σ_{i=1}^m σij βis | ds )².
Assume the system (16.29) has a solution. The argument breaks down into two cases. Case I: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose

Σ_{j=1}^n ∫₀ᵗ | Σ_{i=1}^m σij βis | ds = 0.

Then

Σ_{i=1}^m σij βis = 0, j = 1, 2, · · · , n, s ∈ (0, t]

and, since the system (16.29) has a solution,

∫₀ᵗ Σ_{i=1}^m (ei − r)βis ds = 0.

It follows that

ln(exp(−rt)Zt) − ln Z0 = 0

and

M{exp(−rt)Zt > Z0} = 0.

That is, the stock model (16.25) is no-arbitrage. Case II: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose

Σ_{j=1}^n ∫₀ᵗ | Σ_{i=1}^m σij βis | ds ≠ 0.

Then ln(exp(−rt)Zt) − ln Z0 is a normal uncertain variable with nonzero variance, so (16.27) and (16.28) cannot hold simultaneously. That is, the stock model (16.25) is again no-arbitrage.

Conversely, assume the system (16.29) has no solution. Then there exist real numbers α1, α2, · · · , αm such that

Σ_{i=1}^m σij αi = 0, j = 1, 2, · · · , n
and

Σ_{i=1}^m (ei − r)αi > 0.
Now we take a portfolio

(βt, β1t, · · · , βmt) ≡ (1 − (α1 + α2 + · · · + αm), α1, α2, · · · , αm).

Then

ln(exp(−rt)Zt) − ln Z0 = ∫₀ᵗ Σ_{i=1}^m (ei − r)αi ds > 0.

Thus we have

M{exp(−rt)Zt > Z0} = 1.

Hence the multifactor stock model (16.25) admits an arbitrage. The theorem is thus proved.
Theorem 16.8 The multifactor stock model (16.25) is no-arbitrage if its log-diffusion matrix

σ11 σ12 · · · σ1n
σ21 σ22 · · · σ2n
 ·   ·  · · ·  ·
σm1 σm2 · · · σmn      (16.30)

has rank m, i.e., the row vectors are linearly independent.
Proof: If the log-diffusion matrix (16.30) has rank m, then the system of
equations (16.29) has a solution. It follows from Theorem 16.7 that the
multifactor stock model (16.25) is no-arbitrage.
Theorem 16.9 The multifactor stock model (16.25) is no-arbitrage if its
log-drifts are all equal to the interest rate r, i.e.,
ei = r, i = 1, 2, · · · , m. (16.31)
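The rank condition of Theorem 16.8 is mechanical to check. The sketch below uses a small Gaussian-elimination rank routine (written out so the example stays dependency-free; the function names are ours) to test whether a given log-diffusion matrix certifies no-arbitrage.

```python
def matrix_rank(rows, eps=1e-12):
    # rank of a list-of-lists matrix by Gaussian elimination
    m = [list(r) for r in rows]
    rank, n_cols = 0, len(m[0])
    for col in range(n_cols):
        pivot = next((i for i in range(rank, len(m)) if abs(m[i][col]) > eps), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and abs(m[i][col]) > eps:
                f = m[i][col] / m[rank][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

def is_no_arbitrage(sigma):
    # Theorem 16.8 (sufficient condition): log-diffusion matrix has full row rank m
    return matrix_rank(sigma) == len(sigma)

print(is_no_arbitrage([[0.3, 0.0], [0.1, 0.2]]))  # prints True
```

Note this checks only the sufficient condition of Theorem 16.8; the full characterization of Theorem 16.7 also depends on the log-drifts ei and the interest rate r.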
358 Chapter 16 - Uncertain Finance
The uncertain interest rate Xt may be described by the uncertain differential equation

dXt = (m − aXt)dt + σdCt (16.32)

where m, a, σ are positive numbers. Besides, Jiao and Yao [75] investigated the uncertain interest rate model,

dXt = (m − aXt)dt + σ√Xt dCt. (16.33)

More generally, we may assume the interest rate Xt follows a general uncertain differential equation and obtain a general interest rate model,

dXt = f(t, Xt)dt + g(t, Xt)dCt. (16.34)
Zero-Coupon Bond

A zero-coupon bond is a bond bought at a price lower than its face value, which is the amount it promises to pay at the maturity date. For simplicity, we assume the face value is always 1 dollar. One problem is how to price a zero-coupon bond.

Definition 16.13 Let Xt be the uncertain interest rate. Then the price of a zero-coupon bond with a maturity date s is

f = E[ exp( − ∫₀ˢ Xt dt ) ]. (16.35)

Theorem 16.10 Let Xtα be the α-path of the uncertain interest rate Xt. Then the price of a zero-coupon bond with maturity date s is

f = ∫₀¹ exp( − ∫₀ˢ Xtα dt ) dα. (16.36)
Section 16.3 - Uncertain Currency Model 359
Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral

∫₀ˢ Xt dt

is

Ψs−1(α) = ∫₀ˢ Xtα dt.

Hence the price formula of the zero-coupon bond follows from Definition 16.13 immediately.
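For the interest rate model (16.32) the α-path ODE dX = (m − aX)dt + σΦ−1(α)dt is linear and solves in closed form, so the bond price (16.36) reduces to a single integral over α. The sketch below is illustrative; the parameter values are assumptions of ours.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def bond_price(m, a, sigma, X0, s, n_alpha=1000):
    # f = integral_0^1 exp( - int_0^s X_t^alpha dt ) d alpha   (16.36)
    # alpha-path ODE: X' = (m + sigma*Phi^{-1}(alpha)) - a X, whose solution is
    # X_t^alpha = c + (X0 - c) exp(-a t) with c = (m + sigma*Phi^{-1}(alpha))/a,
    # so int_0^s X_t^alpha dt = c s + (X0 - c)(1 - exp(-a s))/a.
    total = 0.0
    for i in range(n_alpha):
        c = (m + sigma * inv_phi((i + 0.5) / n_alpha)) / a
        integral = c * s + (X0 - c) * (1.0 - math.exp(-a * s)) / a
        total += math.exp(-integral)
    return total / n_alpha

price = bond_price(m=0.04, a=1.0, sigma=0.02, X0=0.05, s=5.0)
print(price)
```

With these parameters the rate hovers near 4%, so the five-year bond price comes out near exp(−0.2), i.e., roughly 0.81.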
On the other hand, the bank receives f for selling the contract at time 0, and pays (1 − K/Zs)+ in foreign currency at the expiration time s. Thus the expected return of the bank at time 0 is

f − exp(−vs)Z0 E[(1 − K/Zs)+].

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

−f + exp(−us)E[(Zs − K)+] = f − exp(−vs)Z0 E[(1 − K/Zs)+].
Thus the European currency option price is given by the definition below.
Definition 16.15 (Liu, Chen and Ralescu [152]) Assume a European cur-
rency option has a strike price K and an expiration time s. Then the Euro-
pean currency option price is
f = (1/2) exp(−us)E[(Zs − K)+] + (1/2) exp(−vs)Z0 E[(1 − K/Zs)+]. (16.43)
Theorem 16.11 (Liu, Chen and Ralescu [152]) Assume a European currency option for the uncertain currency model (16.37) has a strike price K and an expiration time s. Then the European currency option price is

f = (1/2) exp(−us) ∫₀¹ ( Z0 exp( es + (σs√3/π) ln(α/(1 − α)) ) − K )+ dα
  + (1/2) exp(−vs) ∫₀¹ ( Z0 − K/exp( es + (σs√3/π) ln(α/(1 − α)) ) )+ dα.

Proof: Since (Zs − K)+ and Z0(1 − K/Zs)+ are increasing functions with respect to Zs, they have inverse uncertainty distributions

Ψs−1(α) = ( Z0 exp( es + (σs√3/π) ln(α/(1 − α)) ) − K )+,

Υs−1(α) = ( Z0 − K/exp( es + (σs√3/π) ln(α/(1 − α)) ) )+,

respectively. Thus the European currency option price formula follows from Definition 16.15 immediately.
Remark 16.5: The European currency option price of the uncertain cur-
rency model (16.37) is a decreasing function of K, u and v.
Example 16.5: Assume the domestic interest rate u = 0.08, the foreign in-
terest rate v = 0.07, the log-drift e = 0.06, the log-diffusion σ = 0.32, the ini-
tial exchange rate Z0 = 5, the strike price K = 6 and the expiration time s =
f = 0.977.
On the other hand, the bank receives f for selling the contract, and pays (1 − K/Zt)+ in foreign currency if the option is exercised at time t.
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

−f + E[ sup_{0≤t≤s} exp(−ut)(Zt − K)+ ] = f − E[ sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+ ]. (16.48)
Thus the American currency option price is given by the definition below.
Definition 16.17 (Liu, Chen and Ralescu [152]) Assume an American cur-
rency option has a strike price K and an expiration time s. Then the Amer-
ican currency option price is
f = (1/2) E[ sup_{0≤t≤s} exp(−ut)(Zt − K)+ ] + (1/2) E[ sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+ ].
Theorem 16.12 (Liu, Chen and Ralescu [152]) Assume an American currency option for the uncertain currency model (16.37) has a strike price K and an expiration time s. Then the American currency option price is

f = (1/2) ∫₀¹ sup_{0≤t≤s} exp(−ut) ( Z0 exp( et + (σt√3/π) ln(α/(1 − α)) ) − K )+ dα
  + (1/2) ∫₀¹ sup_{0≤t≤s} exp(−vt) ( Z0 − K/exp( et + (σt√3/π) ln(α/(1 − α)) ) )+ dα.

Proof: It follows from Theorem 15.13 that sup_{0≤t≤s} exp(−ut)(Zt − K)+ and sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+ have inverse uncertainty distributions

Ψs−1(α) = sup_{0≤t≤s} exp(−ut) ( Z0 exp( et + (σt√3/π) ln(α/(1 − α)) ) − K )+,

Υs−1(α) = sup_{0≤t≤s} exp(−vt) ( Z0 − K/exp( et + (σt√3/π) ln(α/(1 − α)) ) )+,

respectively. Thus the American currency option price formula follows from Definition 16.17 immediately.
More generally, we may assume the exchange rate Zt follows a general uncertain differential equation and obtain a general currency model in which

dZt = F(t, Zt)dt + G(t, Zt)dCt

where u and v are the domestic and foreign interest rates, F and G are two functions, and Ct is a canonical Liu process.
Note that the α-path Ztα of the exchange rate Zt can be calculated by some numerical methods. Assume the strike price is K and the expiration time is s. It follows from Definition 16.15 and Theorem 15.12 that the European currency option price is

f = (1/2) ∫₀¹ [ exp(−us)(Zsα − K)+ + exp(−vs)Z0(1 − K/Zsα)+ ] dα.

It follows from Definition 16.17 and Theorem 15.13 that the American currency option price is

f = (1/2) ∫₀¹ [ sup_{0≤t≤s} exp(−ut)(Ztα − K)+ + sup_{0≤t≤s} exp(−vt)Z0(1 − K/Ztα)+ ] dα.
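For the currency model (16.37), where Zt has the lognormal-type form Z0 exp(et + σCt), the European currency option price of Theorem 16.11 is again a one-dimensional integral. The sketch below evaluates it by a midpoint rule; the parameters mirror Example 16.5 except the expiration time s, which is truncated in the text and assumed here.

```python
import math

def inv_phi(alpha):
    # inverse standard normal uncertainty distribution
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def euro_currency_option(u, v, e, sigma, Z0, K, s, n=20000):
    # Theorem 16.11: average of the two discounted payoff integrals
    part1 = part2 = 0.0
    for i in range(n):
        g = math.exp(e * s + sigma * s * inv_phi((i + 0.5) / n))
        part1 += max(Z0 * g - K, 0.0)       # investor leg, (Z_s - K)^+
        part2 += max(Z0 - K / g, 0.0)       # bank leg, Z0 (1 - K/Z_s)^+
    return 0.5 * math.exp(-u * s) * part1 / n + 0.5 * math.exp(-v * s) * part2 / n

f = euro_currency_option(u=0.08, v=0.07, e=0.06, sigma=0.32, Z0=5.0, K=6.0, s=2.0)
print(f)
```

As Remark 16.5 states, the price decreases in K, which the test below checks.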
Section 16.4 - Bibliographic Notes 363
Probability Theory
Definition A.2 Let Ω be a nonempty set, let A be a σ-algebra over Ω, and let
Pr be a probability measure. Then the triplet (Ω, A, Pr) is called a probability
space.
Example A.3: Let Ω = {ω1 , ω2 , · · · }, let A be the power set of Ω, and let Pr
be a probability measure defined by (A.2). Then (Ω, A, Pr) is a probability
space.
Example A.4: Let Ω = [0, 1], let A be the Borel algebra over Ω, and let Pr
be the Lebesgue measure. Then (Ω, A, Pr) is a probability space. For many
purposes it is sufficient to use it as the basic probability space.
= lim_{k→∞} Pr{Ak}.

Note that

∩_{i=k}^∞ Ai ↑ A, ∪_{i=k}^∞ Ai ↓ A.
Product Probability
Let (Ωk , Ak , Prk ), k = 1, 2, · · · be a sequence of probability spaces. Now we
write
Ω = Ω1 × Ω2 × · · · , A = A1 × A2 × · · · (A.6)
It has been proved that there is a unique probability measure Pr on the
product σ-algebra A such that
Pr{ Π_{k=1}^∞ Ak } = Π_{k=1}^∞ Prk{Ak} (A.7)
368 Appendix A - Probability Theory
Remark A.1: Note that the product probability theorem cannot be deduced from the three axioms unless we presuppose that the product probability meets the three axioms. If I were allowed to reconstruct probability theory, I would like to replace the product probability theorem with Axiom 4: Let (Ωk, Ak, Prk) be probability spaces for k = 1, 2, · · · The product probability measure Pr is a probability measure satisfying

Pr{ Π_{k=1}^∞ Ak } = Π_{k=1}^∞ Prk{Ak} (A.9)
Independence of Events
Definition A.4 The events A1, A2, · · · , An are said to be independent if

Pr{ ∩_{i=1}^n Ai* } = Π_{i=1}^n Pr{Ai*} (A.10)

where Ai* are arbitrarily chosen events from {Ai, Ω}, i = 1, 2, · · · , n, respectively.
Remark A.2: Especially, two events A1 and A2 are independent if and only
if
Pr {A1 ∩ A2 } = Pr{A1 } × Pr{A2 }. (A.11)
Example A.7: Take (Ω, A, Pr) to be {ω1, ω2} with Pr{ω1} = Pr{ω2} = 0.5. Then the function

ξ(ω) = { 0, if ω = ω1
       { 1, if ω = ω2
is a random variable.
Example A.8: Take (Ω, A, Pr) to be the interval [0, 1] with Borel algebra
and Lebesgue measure. We define ξ as an identity function from [0, 1] to
[0, 1]. Since ξ is a measurable function, it is a random variable.
Definition A.6 Let ξ1, ξ2, · · · , ξn be random variables on the probability space (Ω, A, Pr), and let f be a real-valued measurable function. Then

ξ = f(ξ1, ξ2, · · · , ξn) (A.14)

is a random variable defined by

ξ(ω) = f(ξ1(ω), ξ2(ω), · · · , ξn(ω)), ∀ω ∈ Ω. (A.15)
Example A.9: Take (Ω, A, Pr) to be {ω1, ω2} with Pr{ω1} = Pr{ω2} = 0.5. We now define a random variable as follows,

ξ(ω) = { 0, if ω = ω1
       { 1, if ω = ω2.

Then ξ has a probability distribution

Φ(x) = { 0,   if x < 0
       { 0.5, if 0 ≤ x < 1
       { 1,   if x ≥ 1.
Definition A.8 The probability density function φ: < → [0, +∞) of a ran-
dom variable ξ is a function such that
Φ(x) = ∫_{−∞}^x φ(y)dy (A.18)
holds for any real number x, where Φ is the probability distribution of the
random variable ξ.
Theorem A.4 (Probability Inversion Theorem) Let ξ be a random variable
whose probability density function φ exists. Then for any Borel set B, we
have

Pr{ξ ∈ B} = ∫_B φ(y)dy. (A.19)
Section A.3 - Probability Distribution 371
Proof: Assume that C is the class of all subsets C of ℜ for which the relation

Pr{ξ ∈ C} = ∫_C φ(y)dy (A.20)

holds. We will show that C contains all Borel sets. On the one hand, we may prove that C is a monotone class (if Ai ∈ C and Ai ↑ A or Ai ↓ A, then A ∈ C). On the other hand, we may verify that C contains all intervals of the form (−∞, a], (a, b], (b, ∞) and ∅ since

Pr{ξ ∈ (−∞, a]} = Φ(a) = ∫_{−∞}^a φ(y)dy,

Pr{ξ ∈ (b, +∞)} = Φ(+∞) − Φ(b) = ∫_b^∞ φ(y)dy,

Pr{ξ ∈ (a, b]} = Φ(b) − Φ(a) = ∫_a^b φ(y)dy,

Pr{ξ ∈ ∅} = 0 = ∫_∅ φ(y)dy
where Φ is the probability distribution of ξ. Let F be the algebra consisting of
all finite unions of disjoint sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note
that for any disjoint sets C1 , C2 , · · · , Cm of F and C = C1 ∪ C2 ∪ · · · ∪ Cm ,
we have

Pr{ξ ∈ C} = Σ_{j=1}^m Pr{ξ ∈ Cj} = Σ_{j=1}^m ∫_{Cj} φ(y)dy = ∫_C φ(y)dy.
φ(x) = (1/(σ√(2π))) exp( −(x − µ)²/(2σ²) ), −∞ < x < +∞ (A.23)

φ(x) = (1/(xσ√(2π))) exp( −(ln x − µ)²/(2σ²) ), x > 0 (A.24)
A.4 Independence
Definition A.13 The random variables ξ1, ξ2, · · · , ξn are said to be independent if

Pr{ ∩_{i=1}^n (ξi ∈ Bi) } = Π_{i=1}^n Pr{ξi ∈ Bi} (A.25)

for any Borel sets B1, B2, · · · , Bn.
Example A.10: Let ξ1 (ω1 ) and ξ2 (ω2 ) be random variables on the probabil-
ity spaces (Ω1 , A1 , Pr1 ) and (Ω2 , A2 , Pr2 ), respectively. It is clear that they
are also random variables on the product probability space (Ω1 , A1 , Pr1 ) ×
(Ω2 , A2 , Pr2 ). Then for any Borel sets B1 and B2 , we have
Pr{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )}
= Pr {(ω1 , ω2 ) | ξ1 (ω1 ) ∈ B1 , ξ2 (ω2 ) ∈ B2 }
= Pr {(ω1 | ξ1 (ω1 ) ∈ B1 ) × (ω2 | ξ2 (ω2 ) ∈ B2 )}
= Pr1 {ω1 | ξ1 (ω1 ) ∈ B1 } × Pr2 {ω2 | ξ2 (ω2 ) ∈ B2 }
= Pr {ξ1 ∈ B1 } × Pr {ξ2 ∈ B2 } .
ξ = f (ξ1 , ξ2 , · · · , ξn ) (A.26)
Proof: It follows from the additivity axiom of probability measure and the
independence of the random variables ξ1 , ξ2 , · · · , ξn that
where

µi(xi) = { ai,     if xi = 1
         { 1 − ai, if xi = 0      (A.38)

for i = 1, 2, · · · , n.
Proof: It follows from the additivity axiom of probability measure and the
independence of the random variables ξ1 , ξ2 , · · · , ξn that
Pr{ξ = 1} = Σ_{(x1,x2,··· ,xn)∈{0,1}ⁿ} Pr{ ∩_{i=1}^n (ξi = xi) } I(f(x1, x2, · · · , xn) = 1)

= Σ_{(x1,x2,··· ,xn)∈{0,1}ⁿ} ( Π_{i=1}^n Pr{ξi = xi} ) f(x1, x2, · · · , xn)

= Σ_{(x1,x2,··· ,xn)∈{0,1}ⁿ} ( Π_{i=1}^n µi(xi) ) f(x1, x2, · · · , xn)

where

µi(xi) = { ai,     if xi = 1
         { 1 − ai, if xi = 0      (A.45)

for i = 1, 2, · · · , n.
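The formula above is a finite sum over all 0/1 vectors, so it can be evaluated directly for small n. The sketch below implements it for an arbitrary Boolean function f of independent Bernoulli variables (function names are ours).

```python
from itertools import product

def boolean_prob(f, a):
    # Pr{f(xi_1,...,xi_n) = 1} for independent Bernoulli variables with Pr{xi_i = 1} = a[i]:
    # sum over all 0/1 vectors x of f(x) * prod_i mu_i(x_i), where mu_i(1) = a_i, mu_i(0) = 1 - a_i
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        if f(*x):
            p = 1.0
            for ai, xi in zip(a, x):
                p *= ai if xi == 1 else 1.0 - ai
            total += p
    return total

# f = logical OR of two independent events
p = boolean_prob(lambda x1, x2: x1 or x2, [0.3, 0.5])
print(p)  # 1 - 0.7*0.5 = 0.65
```

The enumeration costs 2ⁿ terms, so this is practical only for modest n, but it reproduces familiar identities such as the inclusion-exclusion value for OR.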
Proof: It follows from the probability inversion theorem that for almost all numbers x, we have Pr{ξ ≥ x} = 1 − Φ(x) and Pr{ξ ≤ x} = Φ(x). By using the definition of the expected value operator, we obtain

E[ξ] = ∫₀^∞ Pr{ξ ≥ x}dx − ∫_{−∞}^0 Pr{ξ ≤ x}dx

= ∫₀^∞ (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx.
Proof: It follows from integration by parts and Theorem A.8 that the expected value is

E[ξ] = ∫₀^∞ (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx

= ∫₀^∞ x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^∞ x dΦ(x).
Section A.6 - Expected Value 377
Proof: Substituting Φ(x) with α and x with Φ−1(α), it follows from the change of variables of integral and Theorem A.8 that the expected value is

E[ξ] = ∫_{−∞}^∞ x dΦ(x) = ∫₀¹ Φ−1(α)dα.
Proof: It follows from the operational law of random variables that ξ has a probability distribution

Φ(x) = ∫_{f(x1,x2,··· ,xn)≤x} dΦ1(x1)dΦ2(x2) · · · dΦn(xn)

= ∫_{ℜⁿ} I(f(x1, x2, · · · , xn) ≤ x) dΦ1(x1)dΦ2(x2) · · · dΦn(xn)
Theorem A.14 Let ξ and η be random variables with finite expected values. Then for any numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η].

Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. When b ≥ 0, we have

E[ξ + b] = ∫₀^∞ Pr{ξ + b ≥ x}dx − ∫_{−∞}^0 Pr{ξ + b ≤ x}dx

= ∫₀^∞ Pr{ξ ≥ x − b}dx − ∫_{−∞}^0 Pr{ξ ≤ x − b}dx

= E[ξ] + ∫₀^b ( Pr{ξ ≥ x − b} + Pr{ξ < x − b} ) dx

= E[ξ] + b.
If a < 0, we have

E[aξ] = ∫₀^∞ Pr{aξ ≥ x}dx − ∫_{−∞}^0 Pr{aξ ≤ x}dx

= ∫₀^∞ Pr{ξ ≤ x/a}dx − ∫_{−∞}^0 Pr{ξ ≥ x/a}dx

= a ∫₀^∞ Pr{ξ ≥ x/a} d(x/a) − a ∫_{−∞}^0 Pr{ξ ≤ x/a} d(x/a)

= aE[ξ].
= E[ξ] + E[η].
Step 4: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are
nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define
$$\xi_i(\omega)=\begin{cases}\dfrac{k-1}{2^i},&\text{if }\dfrac{k-1}{2^i}\le\xi(\omega)<\dfrac{k}{2^i},\ k=1,2,\cdots,i2^i\\[1ex] i,&\text{if }i\le\xi(\omega),\end{cases}$$
$$\eta_i(\omega)=\begin{cases}\dfrac{k-1}{2^i},&\text{if }\dfrac{k-1}{2^i}\le\eta(\omega)<\dfrac{k}{2^i},\ k=1,2,\cdots,i2^i\\[1ex] i,&\text{if }i\le\eta(\omega).\end{cases}$$
Then {ξi }, {ηi } and {ξi + ηi } are three sequences of nonnegative simple
random variables such that ξi ↑ ξ, ηi ↑ η and ξi + ηi ↑ ξ + η as i → ∞. Note
that the functions Pr{ξi > x}, Pr{ηi > x}, Pr{ξi + ηi > x}, i = 1, 2, · · · are
also simple. It follows from the probability continuity theorem that
$$\Pr\{\xi_i>x\}\uparrow\Pr\{\xi>x\},\quad\forall x\ge 0$$
as $i\to\infty$. Since the expected value $E[\xi]$ exists, we have
$$E[\xi_i]=\int_0^{+\infty}\Pr\{\xi_i>x\}\,dx\to\int_0^{+\infty}\Pr\{\xi>x\}\,dx=E[\xi]$$
as i → ∞. Similarly, we may prove that E[ηi ] → E[η] and E[ξi +ηi ] → E[ξ+η]
as i → ∞. It follows from Step 3 that E[ξ + η] = E[ξ] + E[η].
Step 5: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are arbitrary
random variables. Define
$$\xi_i(\omega)=\begin{cases}\xi(\omega),&\text{if }\xi(\omega)\ge -i\\ -i,&\text{otherwise},\end{cases}\qquad \eta_i(\omega)=\begin{cases}\eta(\omega),&\text{if }\eta(\omega)\ge -i\\ -i,&\text{otherwise}.\end{cases}$$
Since the expected values E[ξ] and E[η] are finite, we have
$$\lim_{i\to\infty}E[\xi_i]=E[\xi],\quad\lim_{i\to\infty}E[\eta_i]=E[\eta],\quad\lim_{i\to\infty}E[\xi_i+\eta_i]=E[\xi+\eta].$$
Note that (ξi + i) and (ηi + i) are nonnegative random variables. It follows
from Steps 1 and 4 that
$$E[\xi+\eta]=\lim_{i\to\infty}E[\xi_i+\eta_i]=\lim_{i\to\infty}\left(E[(\xi_i+i)+(\eta_i+i)]-2i\right)=\lim_{i\to\infty}\left(E[\xi_i+i]+E[\eta_i+i]-2i\right)$$
$$=\lim_{i\to\infty}\left(E[\xi_i]+i+E[\eta_i]+i-2i\right)=\lim_{i\to\infty}E[\xi_i]+\lim_{i\to\infty}E[\eta_i]=E[\xi]+E[\eta].$$
Thus we have
$$\lim_{x\to\infty}\int_{x^t/2}^{\infty}\Pr\{|\xi|^t\ge r\}\,dr=0.$$
$$x^t\Pr\{|\xi|\ge x\}\le 1,\quad\forall x\ge a.$$
Thus we have
$$E[|\xi|^s]=\int_0^{a}\Pr\{|\xi|^s\ge r\}\,dr+\int_a^{+\infty}\Pr\{|\xi|^s\ge r\}\,dr$$
$$\le\int_0^{a}\Pr\{|\xi|^s\ge r\}\,dr+\int_a^{+\infty}sr^{s-1}\Pr\{|\xi|\ge r\}\,dr$$
$$\le\int_0^{a}\Pr\{|\xi|^s\ge r\}\,dr+s\int_a^{+\infty}r^{s-t-1}\,dr<+\infty$$
by $\int_a^{\infty}r^p\,dr<\infty$ for any $p<-1$.
Example A.11: The condition (A.55) does not ensure that E[|ξ|t ] < ∞.
We consider the positive random variable
$$\xi=\sqrt[t]{\frac{2^i}{i}}\quad\text{with probability }\frac{1}{2^i},\ i=1,2,\cdots$$
It is clear that
$$\lim_{x\to\infty}x^t\Pr\{\xi\ge x\}=\lim_{n\to\infty}\left(\sqrt[t]{\frac{2^n}{n}}\right)^t\sum_{i=n}^{\infty}\frac{1}{2^i}=\lim_{n\to\infty}\frac{2}{n}=0.$$
However, the $t$-th moment is
$$E[\xi^t]=\sum_{i=1}^{\infty}\frac{2^i}{i}\cdot\frac{1}{2^i}=\sum_{i=1}^{\infty}\frac{1}{i}=\infty.$$
A.7 Variance
Definition A.15 Let ξ be a random variable with finite expected value e.
Then the variance of ξ is defined by V [ξ] = E[(ξ − e)2 ].
Proof: We first assume $V[\xi]=0$. It follows from the equation (A.58) that
$$\int_0^{+\infty}\Pr\{(\xi-e)^2\ge x\}\,dx=0$$
which implies $\Pr\{(\xi-e)^2\ge x\}=0$ for any $x>0$. Hence
$$\Pr\{(\xi-e)^2=0\}=1.$$
E[ξ1 + ξ2 + · · · + ξn ] = e1 + e2 + · · · + en .
Proof: Without loss of generality, assume that E[ξi ] = 0 for each i. We set
$$\ge t^2\Pr\{A_k\}.$$
$$V[S_n]\ge t^2\sum_{i=1}^{n}\Pr\{A_i\}=t^2\Pr\{A\}$$
Proof: It follows from the additivity of probability measure that the variance is
$$V[\xi]=\int_0^{+\infty}\Pr\{(\xi-e)^2\ge x\}\,dx=\int_0^{+\infty}\Pr\{(\xi\ge e+\sqrt{x})\cup(\xi\le e-\sqrt{x})\}\,dx$$
$$=\int_0^{+\infty}\left(\Pr\{\xi\ge e+\sqrt{x}\}+\Pr\{\xi\le e-\sqrt{x}\}\right)dx=\int_0^{+\infty}\left(1-\Phi(e+\sqrt{x})+\Phi(e-\sqrt{x})\right)dx.$$
The theorem is proved.
Theorem A.24 Let ξ be a random variable with probability distribution Φ
and expected value e. Then
$$V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,d\Phi(x).\qquad(A.64)$$
Proof: For the equation (A.63), substituting $e+\sqrt{y}$ with $x$ and $y$ with $(x-e)^2$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1-\Phi(e+\sqrt{y}))\,dy=\int_e^{+\infty}(1-\Phi(x))\,d(x-e)^2=\int_e^{+\infty}(x-e)^2\,d\Phi(x).$$
Similarly, substituting $e-\sqrt{y}$ with $x$ and $y$ with $(x-e)^2$, we obtain
$$\int_0^{+\infty}\Phi(e-\sqrt{y})\,dy=\int_e^{-\infty}\Phi(x)\,d(x-e)^2=\int_{-\infty}^{e}(x-e)^2\,d\Phi(x).$$
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem A.24 that the variance is
$$V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,d\Phi(x)=\int_0^1(\Phi^{-1}(\alpha)-e)^2\,d\alpha.$$
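The quantile form of the variance can also be verified numerically. This sketch (illustrative, not from the book) approximates $\int_0^1(\Phi^{-1}(\alpha)-e)^2\,d\alpha$ by a midpoint sum for a normal distribution with known variance.

```python
import statistics

def variance_via_quantile(inv_cdf, e, n=200_000):
    """Approximate V[xi] = integral over (0,1) of (Phi^{-1}(alpha) - e)^2 d(alpha)."""
    return sum((inv_cdf((k + 0.5) / n) - e) ** 2 for k in range(n)) / n

# N(0, 3): the quantile integral should recover sigma^2 = 9.
dist = statistics.NormalDist(mu=0.0, sigma=3.0)
v = variance_via_quantile(dist.inv_cdf, 0.0)
assert abs(v - 9.0) < 0.05
```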
A.8 Moment
$$E[\xi^k]=\int_0^{+\infty}(1-\Phi(\sqrt[k]{x}))\,dx-\int_{-\infty}^{0}\Phi(\sqrt[k]{x})\,dx.\qquad(A.67)$$
$$E[\xi^k]=\int_0^{+\infty}\Pr\{\xi^k\ge x\}\,dx-\int_{-\infty}^{0}\Pr\{\xi^k\le x\}\,dx$$
$$=\int_0^{+\infty}\Pr\{\xi\ge\sqrt[k]{x}\}\,dx-\int_{-\infty}^{0}\Pr\{\xi\le\sqrt[k]{x}\}\,dx$$
$$=\int_0^{+\infty}(1-\Phi(\sqrt[k]{x}))\,dx-\int_{-\infty}^{0}\Phi(\sqrt[k]{x})\,dx.$$
$$E[\xi^k]=\int_0^{+\infty}(1-\Phi(\sqrt[k]{x})+\Phi(-\sqrt[k]{x}))\,dx.\qquad(A.68)$$
Proof: When $k$ is an odd number, Theorem A.26 says that the $k$-th moment is
$$E[\xi^k]=\int_0^{+\infty}(1-\Phi(\sqrt[k]{y}))\,dy-\int_{-\infty}^{0}\Phi(\sqrt[k]{y})\,dy.$$
Substituting $\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1-\Phi(\sqrt[k]{y}))\,dy=\int_0^{+\infty}(1-\Phi(x))\,dx^k=\int_0^{+\infty}x^k\,d\Phi(x)$$
and
$$\int_{-\infty}^{0}\Phi(\sqrt[k]{y})\,dy=\int_{-\infty}^{0}\Phi(x)\,dx^k=-\int_{-\infty}^{0}x^k\,d\Phi(x).$$
Thus we have
$$E[\xi^k]=\int_0^{+\infty}x^k\,d\Phi(x)+\int_{-\infty}^{0}x^k\,d\Phi(x)=\int_{-\infty}^{+\infty}x^k\,d\Phi(x).$$
When $k$ is an even number, Theorem A.27 says that the $k$-th moment is
$$E[\xi^k]=\int_0^{+\infty}(1-\Phi(\sqrt[k]{y})+\Phi(-\sqrt[k]{y}))\,dy.$$
Substituting $\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1-\Phi(\sqrt[k]{y}))\,dy=\int_0^{+\infty}(1-\Phi(x))\,dx^k=\int_0^{+\infty}x^k\,d\Phi(x).$$
Similarly, substituting $-\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, we obtain
$$\int_0^{+\infty}\Phi(-\sqrt[k]{y})\,dy=\int_0^{-\infty}\Phi(x)\,dx^k=\int_{-\infty}^{0}x^k\,d\Phi(x).$$
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem A.28 that the k-th moment is
$$E[\xi^k]=\int_{-\infty}^{+\infty}x^k\,d\Phi(x)=\int_0^1(\Phi^{-1}(\alpha))^k\,d\alpha.$$
A.9 Entropy
Given a random variable, what is the degree of difficulty of predicting the
specified value that the random variable will take? In order to answer this
question, Shannon [205] defined a concept of entropy as a measure of uncer-
tainty.
Definition A.17 Let ξ be a random variable with probability density func-
tion φ. Then its entropy is defined by
Z +∞
H[ξ] = − φ(x) ln φ(x)dx. (A.71)
−∞
Given some constraints, for example, expected value and variance, there are
usually multiple compatible probability distributions. For this case, we would
like to select the distribution that maximizes the value of entropy and satisfies
the prescribed constraints. This method is often referred to as the maximum
entropy principle (Jaynes [70]).
subject to the natural constraint $\int_a^b\phi(x)\,dx=1$. The Lagrangian is
$$L=-\int_a^b\phi(x)\ln\phi(x)\,dx-\lambda\left(\int_a^b\phi(x)\,dx-1\right).$$
It follows from the Euler-Lagrange equation that the maximum entropy prob-
ability density function meets
ln φ(x) + 1 + λ = 0
and has the form φ(x) = exp(−1 − λ). Substituting it into the natural
constraint, we get
$$\phi^*(x)=\frac{1}{b-a},\qquad a\le x\le b$$
which is just a uniform probability density function, and the maximum en-
tropy is H[ξ ∗ ] = ln(b − a).
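The maximum-entropy result for the uniform density can be checked numerically. The following sketch (illustrative, not from the book) evaluates $H=-\int_a^b\phi(x)\ln\phi(x)\,dx$ by a midpoint rule and compares it with $\ln(b-a)$.

```python
import math

def entropy(phi, a, b, n=10_000):
    """Approximate H = -integral_a^b phi(x) ln(phi(x)) dx by midpoints."""
    h = (b - a) / n
    return -sum(phi(a + (k + 0.5) * h) * math.log(phi(a + (k + 0.5) * h))
                for k in range(n)) * h

a, b = 2.0, 5.0
uniform = lambda x: 1.0 / (b - a)   # the maximum entropy density on [a, b]
assert abs(entropy(uniform, a, b) - math.log(b - a)) < 1e-9
```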
The Lagrangian is
$$L=-\int_{-\infty}^{+\infty}\phi(x)\ln\phi(x)\,dx-\lambda_1\left(\int_{-\infty}^{+\infty}\phi(x)\,dx-1\right)$$
$$-\lambda_2\left(\int_{-\infty}^{+\infty}x\phi(x)\,dx-\mu\right)-\lambda_3\left(\int_{-\infty}^{+\infty}(x-\mu)^2\phi(x)\,dx-\sigma^2\right).$$
The maximum entropy probability density function is
$$\phi^*(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),\qquad x\in\Re$$
It is clear that
$$A=\bigcup_{\varepsilon>0}\left(\bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty}A_i(\varepsilon)\right).$$
Note that $\xi_i\to\xi$, a.s. if and only if $\Pr\{A\}=0$. That is, $\xi_i\to\xi$, a.s. if and only if
$$\Pr\left\{\bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty}A_i(\varepsilon)\right\}=0$$
Theorem A.31 If the random sequence {ξi } converges a.s. to ξ, then {ξi }
converges in probability to ξ.
Proof: It follows from the convergence a.s. and Theorem A.30 that
$$\lim_{n\to\infty}\Pr\left\{\bigcup_{i=n}^{\infty}\{|\xi_i-\xi|\ge\varepsilon\}\right\}=0$$
$$\Pr\{|\xi_i-\xi|\ge\varepsilon\}=\frac{1}{2^j}\to 0$$
as $i\to\infty$. That is, the sequence $\{\xi_i\}$ converges in probability to $\xi$. However, for any $\omega\in[0,1]$, there are infinitely many intervals of the form $[k/2^j,(k+1)/2^j]$ containing $\omega$. Thus $\xi_i(\omega)\not\to 0$ as $i\to\infty$. In other words, the sequence $\{\xi_i\}$ does not converge a.s. to $\xi$.
Proof: It follows from the Markov inequality that, for any given number
ε > 0,
$$\Pr\{|\xi_i-\xi|\ge\varepsilon\}\le\frac{E[|\xi_i-\xi|]}{\varepsilon}\to 0$$
as i → ∞. Thus {ξi } converges in probability to ξ.
Example A.19: Convergence a.s. does not imply convergence in mean. For example, take $(\Omega,\mathcal A,\Pr)$ to be $\{\omega_1,\omega_2,\cdots\}$ with $\Pr\{\omega_j\}=1/2^j$ for $j=1,2,\cdots$ The random variables are defined by
$$\xi_i(\omega_j)=\begin{cases}2^i,&\text{if }j=i\\0,&\text{otherwise}\end{cases}$$
for $i=1,2,\cdots$ and $\xi=0$. Then $\{\xi_i\}$ converges a.s. to $\xi$. However, $E[|\xi_i-\xi|]=2^i\cdot 2^{-i}=1$ for each $i$, so the sequence $\{\xi_i\}$ does not converge in mean to $\xi$.
Example A.20: Convergence in mean does not imply convergence a.s. For
example, take (Ω, A, Pr) to be the interval [0, 1] with Borel algebra and
Lebesgue measure. For any positive integer i, there is an integer j such
that i = 2j + k, where k is an integer between 0 and 2j − 1. We define a
random variable by
$$\xi_i(\omega)=\begin{cases}1,&\text{if }k/2^j\le\omega\le(k+1)/2^j\\0,&\text{otherwise}\end{cases}$$
for $i=1,2,\cdots$ and $\xi=0$. Then
$$E[|\xi_i-\xi|]=\frac{1}{2^j}\to 0$$
It follows from (A.77) and (A.78) that Φi (x) → Φ(x) as i → ∞. The theorem
is proved.
Proof: For any given $\varepsilon>0$, it follows from the Chebyshev inequality that
$$\Pr\left\{\left|\frac{S_n-E[S_n]}{n}\right|\ge\varepsilon\right\}\le\frac{1}{\varepsilon^2}V\left[\frac{S_n}{n}\right]=\frac{V[S_n]}{\varepsilon^2 n^2}\le\frac{a}{\varepsilon^2 n}\to 0$$
Proof: For each i, since the expected value of ξi is finite, there exists β > 0
such that E[|ξi |] < β < ∞. Let α be an arbitrary positive number, and let n
be an arbitrary positive integer. We define
$$\xi_i^*=\begin{cases}\xi_i,&\text{if }|\xi_i|<n\alpha\\0,&\text{otherwise}\end{cases}$$
then
$$\frac{S_n-E[S_n]}{n}\to 0,\quad a.s.\qquad(A.85)$$
as n → ∞.
Proof: Since ξ1 , ξ2 , · · · are independent random variables with finite ex-
pected values, for every given ε > 0, we have
$$\Pr\left\{\bigcup_{j=0}^{\infty}\left\{\left|\sum_{i=n}^{n+j}\frac{\xi_i-E[\xi_i]}{i}\right|\ge\varepsilon\right\}\right\}$$
$$=\lim_{m\to\infty}\Pr\left\{\bigcup_{j=0}^{m}\left\{\left|\sum_{i=n}^{n+j}\frac{\xi_i}{i}-E\left[\sum_{i=n}^{n+j}\frac{\xi_i}{i}\right]\right|\ge\varepsilon\right\}\right\}$$
$$=\lim_{m\to\infty}\Pr\left\{\max_{0\le j\le m}\left|\sum_{i=n}^{n+j}\frac{\xi_i}{i}-E\left[\sum_{i=n}^{n+j}\frac{\xi_i}{i}\right]\right|\ge\varepsilon\right\}$$
$$\le\lim_{m\to\infty}\frac{1}{\varepsilon^2}V\left[\sum_{i=n}^{n+m}\frac{\xi_i}{i}\right]\quad\text{(by the Kolmogorov inequality)}$$
$$=\lim_{m\to\infty}\frac{1}{\varepsilon^2}\sum_{i=n}^{n+m}\frac{V[\xi_i]}{i^2}=\frac{1}{\varepsilon^2}\sum_{i=n}^{\infty}\frac{V[\xi_i]}{i^2}\to 0\quad\text{as }n\to\infty.$$
Thus $\sum_{i=1}^{\infty}(\xi_i-E[\xi_i])/i$ converges a.s. Applying the Kronecker lemma, we obtain
$$\frac{S_n-E[S_n]}{n}=\frac{1}{n}\sum_{i=1}^{n}i\cdot\frac{\xi_i-E[\xi_i]}{i}\to 0,\quad a.s.$$
$$\le 2+2e<\infty.$$
It follows from Theorem A.36 that
$$\frac{S_n^*-E[S_n^*]}{n}\to 0,\quad a.s.\qquad(A.87)$$
as $n\to\infty$. Note that $\xi_i^*\uparrow\xi_i$ as $i\to\infty$. Using the Lebesgue dominated convergence theorem, we conclude that $E[\xi_i^*]\to e$. It follows from the Toeplitz lemma that
$$\frac{E[S_n^*]}{n}=\frac{E[\xi_1^*]+E[\xi_2^*]+\cdots+E[\xi_n^*]}{n}\to e.\qquad(A.88)$$
Since $(\xi_i-\xi_i^*)\to 0$, a.s. as $i\to\infty$, the Toeplitz lemma states that
$$\frac{S_n-S_n^*}{n}=\frac{1}{n}\sum_{i=1}^{n}(\xi_i-\xi_i^*)\to 0,\quad a.s.\qquad(A.89)$$
$$\Pr\{A|B\}=\frac{\Pr\{A\cap B\}}{\Pr\{B\}}\qquad(A.90)$$
which means that the conditional probability is identical to the original prob-
ability. This is the so-called memoryless property of exponential distribution.
In other words, it is as good as new if it is functioning on inspection.
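The memoryless property can be illustrated by simulation. The following sketch (not from the book; the rate, threshold, and sample size are arbitrary choices) checks that $\Pr\{\xi>s+t\mid\xi>s\}\approx\Pr\{\xi>t\}$ for an exponential lifetime.

```python
import math, random

random.seed(0)
lam, s, t = 0.5, 1.0, 2.0
samples = [random.expovariate(lam) for _ in range(200_000)]

survived_s = [x for x in samples if x > s]          # items still working at time s
cond = sum(x > s + t for x in survived_s) / len(survived_s)
uncond = sum(x > t for x in samples) / len(samples)

# Both probabilities should be close to exp(-lam * t).
assert abs(cond - math.exp(-lam * t)) < 0.01
assert abs(uncond - math.exp(-lam * t)) < 0.01
```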
$$\Pr\{A_k|B\}=\frac{\Pr\{A_k\}\Pr\{B|A_k\}}{\displaystyle\sum_{i=1}^{n}\Pr\{A_i\}\Pr\{B|A_i\}}\qquad(A.91)$$
for $k=1,2,\cdots,n$.
which is also called the formula for total probability. Thus, for any k, we have
Remark A.6: Especially, let A and B be two events with Pr{A} > 0 and
Pr{B} > 0. Then A and Ac form a partition of the space Ω, and the Bayes
formula is
$$\Pr\{A|B\}=\frac{\Pr\{A\}\Pr\{B|A\}}{\Pr\{B\}}.\qquad(A.92)$$
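The Bayes formula can be evaluated directly for a finite partition. This small numeric illustration (not from the book; the prior and likelihood values are arbitrary) computes the posterior over a three-event partition.

```python
# Pr{A_k | B} = Pr{A_k} Pr{B | A_k} / sum_i Pr{A_i} Pr{B | A_i}
prior = [0.5, 0.3, 0.2]          # Pr{A_i}: a partition of Omega
likelihood = [0.9, 0.5, 0.1]     # Pr{B | A_i}

total = sum(p * l for p, l in zip(prior, likelihood))   # Pr{B} by total probability
posterior = [p * l / total for p, l in zip(prior, likelihood)]

assert abs(sum(posterior) - 1.0) < 1e-12   # the posterior is a distribution
assert abs(posterior[0] - 0.45 / 0.62) < 1e-12
```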
Example A.24: Let (ξ, η) be a random vector with joint probability density
function ψ. Then the marginal probability density functions of ξ and η are
$$f(x)=\int_{-\infty}^{+\infty}\psi(x,y)\,dy,\qquad g(y)=\int_{-\infty}^{+\infty}\psi(x,y)\,dx,$$
$$\phi(x|\eta=y)=\frac{\psi(x,y)}{g(y)}=\frac{\psi(x,y)}{\displaystyle\int_{-\infty}^{+\infty}\psi(x,y)\,dx},\quad a.s.\qquad(A.96)$$
Note that (A.95) and (A.96) are defined only for g(y) 6= 0. In fact, the set
{y|g(y) = 0} has probability 0. Especially, if ξ and η are independent random
variables, then ψ(x, y) = f (x)g(y) and φ(x|η = y) = f (x).
Example A.27: Take a probability space (Ω, A, Pr) to be the interval [0, 1]
with Borel algebra and Lebesgue measure. Then the random set
$$\xi(\omega)=\left[-\sqrt{1-\omega},\ \sqrt{1-\omega}\right]\qquad(A.102)$$
is a membership function of some random set. It is easy to verify that µ−1 (α)
is the inverse membership function of the random set. The theorem is proved.
Proof: For each x ∈ µ−1 (α), we have µ(x) ≥ α. It follows from the proba-
bility inversion formula that
For each x 6∈ µ−1 (α), we have µ(x) < α. It follows from the probability
inversion formula that
Definition A.28 Let (Ω, A, Pr) be a probability space and let T be a totally
ordered set (e.g. time). A stochastic process is a function Xt (ω) from T ×
(Ω, A, Pr) to the set of real numbers such that {Xt ∈ B} is an event for any
Borel set B at each time t.
For each fixed ω, the function Xt (ω) is called a sample path of the stochas-
tic process Xt . A stochastic process Xt is said to be sample-continuous if
almost all sample paths are continuous with respect to t.
Definition A.29 A stochastic process Xt is said to have independent incre-
ments if
$$X_{t_0},\ X_{t_1}-X_{t_0},\ X_{t_2}-X_{t_1},\ \cdots,\ X_{t_k}-X_{t_{k-1}}\qquad(A.111)$$
are independent random variables where t0 is the initial time and t1 , t2 , · · ·, tk
are any times with t0 < t1 < · · · < tk .
Definition A.30 A stochastic process Xt is said to have stationary incre-
ments if, for any given t > 0, the increments Xs+t − Xs are identically
distributed random variables for all s > 0.
A stationary independent increment process is a stochastic process that
has not only independent increments but also stationary increments. If Xt is
a stationary independent increment process, then
Yt = aXt + b
is also a stationary independent increment process for any numbers a and b.
Renewal Process
Let ξi denote the times between the (i − 1)th and the ith events, known as
the interarrival times, i = 1, 2, · · · , respectively. Define S0 = 0 and
Sn = ξ1 + ξ2 + · · · + ξn , ∀n ≥ 1. (A.112)
Then Sn can be regarded as the waiting time until the occurrence of the nth
event after time t = 0.
Definition A.31 Let ξ1 , ξ2 , · · · be iid positive interarrival times. Define
S0 = 0 and Sn = ξ1 + ξ2 + · · · + ξn for n ≥ 1. Then the stochastic pro-
cess
$$N_t=\max_{n\ge 0}\{n\ |\ S_n\le t\}\qquad(A.113)$$
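The renewal counting process is easy to simulate. The following sketch (illustrative, not from the book; the exponential interarrival choice is an assumption that makes $N_t$ a Poisson process with $E[N_t]=t$) draws sample paths and estimates the mean count.

```python
import random

random.seed(1)

def renewal_count(t, interarrival):
    """Return N_t = max{n >= 0 | S_n <= t} for one sample path."""
    n, s = 0, 0.0
    while True:
        s += interarrival()          # S_n = xi_1 + ... + xi_n
        if s > t:
            return n
        n += 1

t = 10.0
mean_nt = sum(renewal_count(t, lambda: random.expovariate(1.0))
              for _ in range(20_000)) / 20_000
assert abs(mean_nt - t) < 0.2        # E[N_t] = t for rate-1 exponential interarrivals
```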
Wiener Process
In 1827 Robert Brown observed the irregular movement of pollen grains suspended in liquid. This movement is now known as Brownian motion. In 1923 Norbert
Wiener modeled Brownian motion by the following Wiener process.
Note that almost all sample paths of a Wiener process have infinite length on any fixed time interval and are nowhere differentiable. Furthermore, the squared variation of a Wiener process on $[0,t]$ is equal to $t$ both in mean square and almost surely.
$$\Delta=\max_{1\le i\le k}|t_{i+1}-t_i|.$$
provided that the limit exists in mean square and is a random variable.
for any t ≥ 0, then Zt is called an Ito process with drift µt and diffusion σt .
Furthermore, Zt has a stochastic differential
Theorem A.43 (Ito Formula) Let Wt be a standard Wiener process, and let
h(t, w) be a twice continuously differentiable function. Then Xt = h(t, Wt )
is an Ito process and has a stochastic differential
$$dX_t=\frac{\partial h}{\partial t}(t,W_t)\,dt+\frac{\partial h}{\partial w}(t,W_t)\,dW_t+\frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t,W_t)\,dt.\qquad(A.118)$$
Proof: Since the function $h$ is twice continuously differentiable, by using the Taylor series expansion, the infinitesimal increment of $X_t$ has a second-order approximation
$$\Delta X_t=\frac{\partial h}{\partial t}(t,W_t)\Delta t+\frac{\partial h}{\partial w}(t,W_t)\Delta W_t+\frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t,W_t)(\Delta W_t)^2$$
$$+\frac{1}{2}\frac{\partial^2 h}{\partial t^2}(t,W_t)(\Delta t)^2+\frac{\partial^2 h}{\partial t\partial w}(t,W_t)\Delta t\Delta W_t.$$
Since we can ignore the terms $(\Delta t)^2$ and $\Delta t\Delta W_t$ and replace $(\Delta W_t)^2$ with $\Delta t$, the Ito formula is obtained because it makes
$$X_s=X_0+\int_0^s\frac{\partial h}{\partial t}(t,W_t)\,dt+\int_0^s\frac{\partial h}{\partial w}(t,W_t)\,dW_t+\frac{1}{2}\int_0^s\frac{\partial^2 h}{\partial w^2}(t,W_t)\,dt$$
for any $s\ge 0$.
d(tWt ) = Wt dt + tdWt ,
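The identity $d(tW_t)=W_t\,dt+t\,dW_t$ can be sanity-checked on a discretized path. This sketch (illustrative, not from the book; step count and seed are arbitrary) accumulates the left-point sums of $W_t\,dt+t\,dW_t$ and compares with $T\,W_T$.

```python
import random

random.seed(2)
n, T = 10_000, 1.0
dt = T / n

t, w = 0.0, 0.0
integral = 0.0
for _ in range(n):
    dw = random.gauss(0.0, dt ** 0.5)   # increment of the Wiener process
    integral += w * dt + t * dw         # W_t dt + t dW_t (left-point rule)
    t += dt
    w += dw

# The two sides agree up to discretization error (no second-order term
# appears here because the cross derivative of h(t, w) = t*w is constant).
assert abs(integral - T * w) < 0.05
```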
Chance Theory
is clearly an event in L. Thus the uncertain measure M{Θω } exists for each
ω ∈ Ω. However, unfortunately, M{Θω } is not necessarily a measurable
function with respect to ω. In other words, for a real number x, the set
[Figure: an event $\Theta$ in the chance space $\Gamma\times\Omega$ and its cross section $\Theta_\omega=\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta\}$ at a point $\omega\in\Omega$]
Definition B.1 (Liu [149]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space, and
let Θ ∈ L × A be an event. Then the chance measure of Θ is defined as
$$\mathrm{Ch}\{\Theta\}=\int_0^1\Pr\left\{\omega\in\Omega\ \Big|\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta\}\ge x\right\}dx.\qquad(B.7)$$
Theorem B.1 (Liu [149]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space. Then
Proof: Let us first prove the identity (B.8). For each ω ∈ Ω, we immediately
have
{γ ∈ Γ | (γ, ω) ∈ Λ × A} = Λ
and
M{γ ∈ Γ | (γ, ω) ∈ Λ × A} = M{Λ}.
For any real number x, if M{Λ} ≥ x, then
Thus
$$\mathrm{Ch}\{\Lambda\times A\}=\int_0^1\Pr\left\{\omega\in\Omega\ |\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Lambda\times A\}\ge x\right\}dx$$
$$=\int_0^{\mathcal M\{\Lambda\}}\Pr\{A\}\,dx+\int_{\mathcal M\{\Lambda\}}^{1}0\,dx=\mathcal M\{\Lambda\}\times\Pr\{A\}.$$
Theorem B.2 (Liu [149], Monotonicity Theorem) Let (Γ, L, M)×(Ω, A, Pr)
be a chance space. Then the chance measure Ch{Θ} is a monotone increasing
function with respect to Θ.
Theorem B.3 (Liu [149], Duality Theorem) The chance measure is self-
dual. That is, for any event Θ, we have
Proof: Since both uncertain measure and probability measure are self-dual,
we have
$$\mathrm{Ch}\{\Theta\}=\int_0^1\Pr\left\{\omega\in\Omega\ |\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta\}\ge x\right\}dx$$
$$=\int_0^1\Pr\left\{\omega\in\Omega\ |\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta^c\}\le 1-x\right\}dx$$
$$=\int_0^1\left(1-\Pr\left\{\omega\in\Omega\ |\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta^c\}>1-x\right\}\right)dx$$
$$=1-\int_0^1\Pr\left\{\omega\in\Omega\ |\ \mathcal M\{\gamma\in\Gamma\ |\ (\gamma,\omega)\in\Theta^c\}>x\right\}dx$$
$$=1-\mathrm{Ch}\{\Theta^c\}.$$
$$=\sum_{i=1}^{\infty}\mathrm{Ch}\{\Theta_i\}.$$
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (B.14)
Theorem B.7 (Liu [149]) Let ξ be an uncertain random variable. Then the
chance measure Ch{ξ ∈ B} is a monotone increasing function of B and
Theorem B.8 (Liu [149]) Let ξ be an uncertain random variable. Then for
any Borel set B, we have
Theorem B.9 (Liu [149], Sufficient and Necessary Condition for Chance
Distribution) A function Φ : < → [0, 1] is a chance distribution if and only if
it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.
Proof: The equation Ch{ξ ≤ x} = Φ(x) follows from the definition of chance
distribution immediately. By using the duality of chance measure, we get
Proof: For any given numbers y1 , · · · , ym , it follows from the operational law
of uncertain variables that f (y1 , · · · , ym , τ1 , · · · , τn ) is an uncertain variable
with uncertainty distribution F (x; y1 , · · ·, ym ). By using (B.28), the chance
distribution of ξ is
$$\Phi(x)=\int_{\Re^m}\mathcal M\{f(y_1,\cdots,y_m,\tau_1,\cdots,\tau_n)\le x\}\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
$$=\int_{\Re^m}F(x;y_1,\cdots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$
f (η1 , · · · , ηm , τ1 , · · · , τn )
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))$$
ξ = η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn (B.31)
where
$$\Psi(y)=\int_{y_1+y_2+\cdots+y_m\le y}d\Psi_1(y_1)\,d\Psi_2(y_2)\cdots d\Psi_m(y_m)\qquad(B.33)$$
ξ = η1 η2 · · · ηm τ1 τ2 · · · τn (B.35)
where
$$\Psi(y)=\int_{y_1y_2\cdots y_m\le y}d\Psi_1(y_1)\,d\Psi_2(y_2)\cdots d\Psi_m(y_m)\qquad(B.37)$$
ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (B.39)
where
Ψ(x) = 1 − (1 − Ψ1 (x))(1 − Ψ2 (x)) · · · (1 − Ψm (x)) (B.41)
is the probability distribution of η1 ∧ η2 ∧ · · · ∧ ηm , and
ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (B.43)
where
Ψ(x) = Ψ1 (x)Ψ2 (x) · · · Ψm (x) (B.45)
is the probability distribution of η1 ∨ η2 ∨ · · · ∨ ηm , and
Υ(x) = Υ1 (x) ∧ Υ2 (x) ∧ · · · ∧ Υn (x) (B.46)
is the uncertainty distribution of τ1 ∨ τ2 ∨ · · · ∨ τn .
Proof: It follows from the definition of chance measure that for any numbers
y1 , · · · , ym , the theorem is true if the function G is
G(y1 , · · · , ym ) = M{f (y1 , · · · , ym , τ1 , · · · , τn ) ≤ 0}.
Furthermore, by using Theorem 2.20, we know that G is just the root α. The
theorem is proved.
Remark B.6: Sometimes, the equation may not have a root. In this case, if
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))<0$$
for all $\alpha$, then we set the root $\alpha=1$; and if it is positive for all $\alpha$, then we set the root $\alpha=0$.
Remark B.7: The root $\alpha$ may be estimated by the bisection method because $f(y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))$ is a strictly increasing function with respect to $\alpha$. See Figure B.2.
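The bisection step can be sketched generically. In the code below (illustrative, not the book's code) `g(alpha)` stands for the strictly increasing function $\alpha\mapsto f(y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))$; the toy distribution and numbers are assumptions for the demonstration.

```python
def bisect_root(g, lo=0.0, hi=1.0, tol=1e-10):
    """Root of a strictly increasing g on (lo, hi), assuming a sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid          # root lies to the right
        else:
            hi = mid          # root lies to the left
    return (lo + hi) / 2

# Toy case: one uncertain variable with linear uncertainty distribution
# L(0, 1), so Upsilon^{-1}(alpha) = alpha, and f(y, z) = y + z - 1 with
# y = 0.3 fixed.  The root of 0.3 + alpha - 1 = 0 is alpha = 0.7.
alpha = bisect_root(lambda a: 0.3 + a - 1.0)
assert abs(alpha - 0.7) < 1e-8
```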
[Figure B.2: the root $\alpha$ as the zero-crossing of a strictly increasing curve on $(0,1)$]
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))=0.$$
Proof: It follows from the definition of chance measure that for any numbers
y1 , · · · , ym , the theorem is true if the function G is
Furthermore, by using Theorem 2.21, we know that G is just the root α. The
theorem is proved.
Remark B.8: Sometimes, the equation may not have a root. In this case, if
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))<0$$
for all $\alpha$, then we set the root $\alpha=1$; and if
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))>0$$
for all $\alpha$, then we set the root $\alpha=0$.
Remark B.9: The root $\alpha$ may be estimated by the bisection method because $f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))$ is a strictly decreasing function with respect to $\alpha$. See Figure B.3.
[Figure B.3: the root $\alpha$ as the zero-crossing of a strictly decreasing curve on $(0,1)$]
where
$$f^*(x_1,\cdots,x_m)=\begin{cases}\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)<0.5\\[2ex]1-\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=0}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)\ge 0.5,\end{cases}\qquad(B.51)$$
$$\mu_i(x_i)=\begin{cases}a_i,&\text{if }x_i=1\\1-a_i,&\text{if }x_i=0\end{cases}\quad(i=1,2,\cdots,m),\qquad(B.52)$$
$$\nu_j(y_j)=\begin{cases}b_j,&\text{if }y_j=1\\1-b_j,&\text{if }y_j=0\end{cases}\quad(j=1,2,\cdots,n).\qquad(B.53)$$
Remark B.11: When the random variables disappear, the operational law
becomes
$$\mathcal M\{\xi=1\}=\begin{cases}\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)<0.5\\[2ex]1-\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=0}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)\ge 0.5.\end{cases}\qquad(B.55)$$
ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (B.58)
where
Proof: It follows from the chance inversion theorem that for almost all
numbers x, we have Ch{ξ ≥ x} = 1 − Φ(x) and Ch{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
$$E[\xi]=\int_0^{+\infty}\mathrm{Ch}\{\xi\ge x\}\,dx-\int_{-\infty}^{0}\mathrm{Ch}\{\xi\le x\}\,dx=\int_0^{+\infty}(1-\Phi(x))\,dx-\int_{-\infty}^{0}\Phi(x)\,dx.$$
Proof: It follows from the change of variables of integral and Theorem B.16
that the expected value is
$$E[\xi]=\int_0^{+\infty}(1-\Phi(x))\,dx-\int_{-\infty}^{0}\Phi(x)\,dx=\int_0^{+\infty}x\,d\Phi(x)+\int_{-\infty}^{0}x\,d\Phi(x)=\int_{-\infty}^{+\infty}x\,d\Phi(x).$$
Proof: It follows from the change of variables of integral and Theorem B.16
that the expected value is
$$E[\xi]=\int_0^{+\infty}(1-\Phi(x))\,dx-\int_{-\infty}^{0}\Phi(x)\,dx=\int_{\Phi(0)}^{1}\Phi^{-1}(\alpha)\,d\alpha+\int_0^{\Phi(0)}\Phi^{-1}(\alpha)\,d\alpha=\int_0^1\Phi^{-1}(\alpha)\,d\alpha.$$
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.68)
That is,
E[η + τ ] = E[η] + E[τ ]. (B.70)
Proof: Since τ1 and τ2 are independent uncertain variables, for any real
numbers y1 and y2 , the functions f1 (y1 , τ1 ) and f2 (y2 , τ2 ) are also independent
uncertain variables. Thus
$$E[f_1(y_1,\tau_1)+f_2(y_2,\tau_2)]=E[f_1(y_1,\tau_1)]+E[f_2(y_2,\tau_2)].$$
Let $\Psi_1$ and $\Psi_2$ be the probability distributions of random variables $\eta_1$ and $\eta_2$, respectively. Then we have
$$E[f_1(\eta_1,\tau_1)+f_2(\eta_2,\tau_2)]=\int_{\Re^2}E[f_1(y_1,\tau_1)+f_2(y_2,\tau_2)]\,d\Psi_1(y_1)\,d\Psi_2(y_2)$$
$$=\int_{\Re^2}\left(E[f_1(y_1,\tau_1)]+E[f_2(y_2,\tau_2)]\right)d\Psi_1(y_1)\,d\Psi_2(y_2)$$
$$=\int_{\Re}E[f_1(y_1,\tau_1)]\,d\Psi_1(y_1)+\int_{\Re}E[f_2(y_2,\tau_2)]\,d\Psi_2(y_2)$$
$$=E[f_1(\eta_1,\tau_1)]+E[f_2(\eta_2,\tau_2)].$$
Exercise B.10: Assume η1 and η2 are random variables, and τ1 and τ2 are
independent uncertain variables. Show that
E[η1 ∨ τ1 + η2 ∧ τ2 ] = E[η1 ∨ τ1 ] + E[η2 ∧ τ2 ]. (B.76)
B.6 Variance
Definition B.5 (Liu [149]) Let ξ be an uncertain random variable with finite
expected value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (B.77)
Since $(\xi-e)^2$ is a nonnegative uncertain random variable, we also have
$$V[\xi]=\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,dx.\qquad(B.78)$$
Theorem B.23 (Liu [149]) Let ξ be an uncertain random variable with ex-
pected value e. Then V [ξ] = 0 if and only if Ch{ξ = e} = 1.
Proof: We first assume $V[\xi]=0$. It follows from the equation (B.78) that
$$\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,dx=0$$
which implies $\mathrm{Ch}\{(\xi-e)^2\ge x\}=0$ for any $x>0$. Hence
$$\mathrm{Ch}\{(\xi-e)^2=0\}=1.$$
Stipulation B.1 (Guo and Wang [57]) Let ξ be an uncertain random vari-
able with chance distribution Φ and finite expected value e. Then
$$V[\xi]=\int_0^{+\infty}\left(1-\Phi(e+\sqrt{x})+\Phi(e-\sqrt{x})\right)dx.\qquad(B.80)$$
Theorem B.24 (Sheng and Yao [211]) Let ξ be an uncertain random vari-
able with chance distribution Φ and finite expected value e. Then
$$V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,d\Phi(x).\qquad(B.81)$$
Proof: This theorem is based on Stipulation B.1 that says the variance of $\xi$ is
$$V[\xi]=\int_0^{+\infty}(1-\Phi(e+\sqrt{y}))\,dy+\int_0^{+\infty}\Phi(e-\sqrt{y})\,dy.$$
Substituting $e+\sqrt{y}$ with $x$ and $y$ with $(x-e)^2$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1-\Phi(e+\sqrt{y}))\,dy=\int_e^{+\infty}(1-\Phi(x))\,d(x-e)^2=\int_e^{+\infty}(x-e)^2\,d\Phi(x).$$
Similarly, substituting $e-\sqrt{y}$ with $x$ and $y$ with $(x-e)^2$, we obtain
$$\int_0^{+\infty}\Phi(e-\sqrt{y})\,dy=\int_e^{-\infty}\Phi(x)\,d(x-e)^2=\int_{-\infty}^{e}(x-e)^2\,d\Phi(x).$$
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem B.24 that the variance is
$$V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,d\Phi(x)=\int_0^1(\Phi^{-1}(\alpha)-e)^2\,d\alpha.$$
The argument breaks into two cases. Case 1: Assume f (y, z) is strictly
increasing with respect to z. Let Υ denote the common uncertainty distri-
bution of τ1 , τ2 , · · · It is clear that
In addition, since $f(\eta_1,z), f(\eta_2,z),\cdots$ form a sequence of iid random variables, the law of large numbers for random variables tells us that
$$\frac{f(\eta_1,z)+f(\eta_2,z)+\cdots+f(\eta_n,z)}{n}\to\int_{-\infty}^{+\infty}f(y,z)\,d\Psi(y),\quad a.s.$$
as $n\to\infty$. Thus
$$\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}\le\int_{-\infty}^{+\infty}f(y,z)\,d\Psi(y)\right\}=\Upsilon(z).\qquad(B.91)$$
It follows from (B.90) and (B.91) that (B.89) holds. Case 2: Assume f (y, z)
is strictly decreasing with respect to z. Then −f (y, z) is strictly increasing
with respect to z. By using Case 1 we obtain
$$\lim_{n\to\infty}\mathrm{Ch}\left\{-\frac{S_n}{n}<-z\right\}=\mathcal M\left\{-\int_{-\infty}^{+\infty}f(y,\tau_1)\,d\Psi(y)<-z\right\}.$$
That is,
$$\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}>z\right\}=\mathcal M\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,d\Psi(y)>z\right\}.$$
Show that
$$\frac{S_n}{n}\to E[\eta_1]+\tau_1\qquad(B.93)$$
$$S_n=\eta_1\tau_1+\eta_2\tau_2+\cdots+\eta_n\tau_n.\qquad(B.94)$$
Show that
$$\frac{S_n}{n}\to E[\eta_1]\tau_1\qquad(B.95)$$
in the sense of convergence in distribution as $n\to\infty$.
for j = 1, 2, · · · , p.
$$f(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha)).$$
$$g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))=0.\qquad(B.104)$$
$$\mathrm{Ch}\{g_j(x,\eta_1,\cdots,\eta_m,\tau_1,\cdots,\tau_n)\le 0\}$$
Hence the chance constraint (B.102) holds if and only if (B.103) is true. The
theorem is verified.
Remark B.14: Sometimes, the equation (B.104) may not have a root. In this case, if
$$g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))<0\qquad(B.105)$$
for all $\alpha$, then we set the root $\alpha=1$; and if
$$g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))>0\qquad(B.106)$$
for all $\alpha$, then we set the root $\alpha=0$.
Remark B.15: The root $\alpha$ may be estimated by the bisection method because $g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))$ is a strictly increasing function with respect to $\alpha$.
$$g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))=0.$$
$$g_j(x,y_1,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))=0\qquad(B.107)$$
for $j=1,2,\cdots,p$, respectively.
Definition B.8 (Liu and Ralescu [151]) Assume that a system contains un-
certain random factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f . Then the risk
index is the chance measure that the system is loss-positive, i.e.,
If all uncertain random factors degenerate to random ones, then the risk
index is the probability measure that the system is loss-positive (Roy [199]).
If all uncertain random factors degenerate to uncertain ones, then the risk
index is the uncertain measure that the system is loss-positive (Liu [128]).
Theorem B.31 (Liu and Ralescu [151], Risk Index Theorem) Assume a
system contains independent random variables η1 , η2 , · · · , ηm with probability
distributions Ψ1 , Ψ2 , · · ·, Ψm and independent uncertain variables τ1 , τ2 , · · ·, τn
with regular uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. If the loss
function f (η1 , · · ·, ηm , τ1 , · · ·, τn ) is strictly increasing with respect to τ1 , · · · , τk
and strictly decreasing with respect to τk+1 , · · · , τn , then the risk index is
$$\text{Risk}=\int_{\Re^m}G(y_1,\cdots,y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)\qquad(B.109)$$
where $G(y_1,\cdots,y_m)$ is the root $\alpha$ of the equation
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))=0.$$
Remark B.17: Sometimes, the equation may not have a root. In this case, if
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))<0$$
for all $\alpha$, then we set the root $\alpha=1$; and if
$$f(y_1,\cdots,y_m,\Upsilon_1^{-1}(1-\alpha),\cdots,\Upsilon_k^{-1}(1-\alpha),\Upsilon_{k+1}^{-1}(\alpha),\cdots,\Upsilon_n^{-1}(\alpha))>0$$
for all $\alpha$, then we set the root $\alpha=0$.
Exercise B.14: (Series System) Consider a series system in which there are
m elements whose lifetimes are independent random variables η1 , η2 , · · · , ηm
with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements whose lifetimes
are independent uncertain variables τ1 , τ2 , · · · , τn with uncertainty distribu-
tions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is understood as the case that
the system fails before the time T , then the loss function is
f = T − η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (B.110)
Show that the risk index is
$$\text{Risk}=a+b-ab\qquad(B.111)$$
where
$$a=1-(1-\Psi_1(T))(1-\Psi_2(T))\cdots(1-\Psi_m(T))\qquad(B.112)$$
$$b=\Upsilon_1(T)\vee\Upsilon_2(T)\vee\cdots\vee\Upsilon_n(T).\qquad(B.113)$$
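The series-system risk index $a+b-ab$ can be evaluated numerically. The sketch below is illustrative, not the book's code: it assumes $a=1-(1-\Psi_1(T))\cdots(1-\Psi_m(T))$ for the random part, and uses hypothetical exponential probability distributions and linear uncertainty distributions.

```python
import math

T = 1.0
# Hypothetical random lifetimes: exponential distributions Psi_i.
psi = [lambda x: 1 - math.exp(-0.2 * x), lambda x: 1 - math.exp(-0.5 * x)]
# Hypothetical uncertain lifetimes: linear distributions L(0, c).
upsilon = [lambda x: min(max(x / 3.0, 0.0), 1.0),
           lambda x: min(max(x / 5.0, 0.0), 1.0)]

prod_survival = 1.0
for p in psi:
    prod_survival *= 1 - p(T)      # every random element survives past T
a = 1 - prod_survival              # a = 1 - prod(1 - Psi_i(T))
b = max(u(T) for u in upsilon)     # b = max of Upsilon_j(T)

risk = a + b - a * b
assert 0.0 <= risk <= 1.0 and risk >= max(a, b)
```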
f = T − η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (B.114)
f = T − (η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn ). (B.118)
$$\Upsilon_1^{-1}(\alpha)+\Upsilon_2^{-1}(\alpha)+\cdots+\Upsilon_n^{-1}(\alpha)=T-(y_1+y_2+\cdots+y_m).\qquad(B.120)$$
Remark B.19: As a substitute of risk index, Liu and Ralescu [153] suggested
a concept of value-at-risk,
Note that VaR(α) represents the maximum possible loss when α percent
of the right tail distribution is ignored. In other words, the loss will ex-
ceed VaR(α) with chance measure α. If Φ(x) is the chance distribution of
f (ξ1 , ξ2 , · · · , ξn ), then
Remark B.20: Liu and Ralescu [151] proposed a concept of expected loss
that is the expected value of the loss f (ξ1 , ξ2 , · · · , ξn ) given f (ξ1 , ξ2 , · · · , ξn ) >
0. That is,
$$L=\int_0^{+\infty}\mathrm{Ch}\{f(\xi_1,\xi_2,\cdots,\xi_n)\ge x\}\,dx.\qquad(B.124)$$
If its inverse chance distribution $\Phi^{-1}(\alpha)$ exists, then the expected loss is
$$L=\int_0^1\left(\Phi^{-1}(\alpha)\right)^+d\alpha.\qquad(B.126)$$
Definition B.9 (Wen and Kang [232]) Assume a Boolean system has un-
certain random elements ξ1 , ξ2 , · · · , ξn and a structure function f . Then the
reliability index is the chance measure that the system is working, i.e.,
Theorem B.32 (Wen and Kang [232], Reliability Index Theorem) Assume
that a system has a structure function f and contains independent random
elements η1 , η2 , · · · , ηm with reliabilities a1 , a2 , · · · , am , and independent un-
certain elements τ1 , τ2 , · · · , τn with reliabilities b1 , b2 , · · · , bn , respectively.
Then the reliability index is
m
!
X Y
Reliability = µi (xi ) f ∗ (x1 , · · · , xm ) (B.128)
(x1 ,··· ,xm )∈{0,1}m i=1
where
$$f^*(x_1,\cdots,x_m)=\begin{cases}\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)<0.5\\[2ex]1-\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=0}\ \min_{1\le j\le n}\nu_j(y_j),&\text{if }\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\ \min_{1\le j\le n}\nu_j(y_j)\ge 0.5,\end{cases}\qquad(B.129)$$
$$\mu_i(x_i)=\begin{cases}a_i,&\text{if }x_i=1\\1-a_i,&\text{if }x_i=0\end{cases}\quad(i=1,2,\cdots,m),\qquad(B.130)$$
$$\nu_j(y_j)=\begin{cases}b_j,&\text{if }y_j=1\\1-b_j,&\text{if }y_j=0\end{cases}\quad(j=1,2,\cdots,n).\qquad(B.131)$$
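The reliability index (B.128)–(B.131) can be computed by brute force for small systems. The sketch below is illustrative, not the book's code: it enumerates the random states $x$, weights each by $\prod_i\mu_i(x_i)$, and evaluates $f^*$ over the uncertain states $y$ via the sup-min formula.

```python
from itertools import product

def reliability(f, a, b):
    """Reliability index for structure function f(x, y), random reliabilities
    a = (a_1,...,a_m), uncertain reliabilities b = (b_1,...,b_n)."""
    m, n = len(a), len(b)
    def nu(y):  # min_j nu_j(y_j) for an uncertain state vector y
        return min(b[j] if y[j] == 1 else 1 - b[j] for j in range(n))
    def f_star(x):
        s1 = max((nu(y) for y in product((0, 1), repeat=n) if f(x, y)),
                 default=0.0)
        if s1 < 0.5:
            return s1
        s0 = max((nu(y) for y in product((0, 1), repeat=n) if not f(x, y)),
                 default=0.0)
        return 1 - s0
    total = 0.0
    for x in product((0, 1), repeat=m):
        w = 1.0
        for i in range(m):
            w *= a[i] if x[i] == 1 else 1 - a[i]   # prod_i mu_i(x_i)
        total += w * f_star(x)
    return total

# Series system: working iff every element works.
series = lambda x, y: all(x) and all(y)
a, b = [0.9, 0.8], [0.7, 0.6]
r = reliability(series, a, b)
assert abs(r - 0.9 * 0.8 * 0.6) < 1e-12   # prod(a_i) * min(b_j)
```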
Exercise B.17: (Series System) Consider a series system in which there are
m independent random elements η1 , η2 , · · ·, ηm with reliabilities a1 , a2 , · · ·, am ,
f = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (B.132)
f = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (B.134)
where
exist with some degrees in uncertain measure. In order to model this type of
graph, Liu [138] presented a concept of uncertain random graph.
We say a graph is of order n if it has n vertices labeled by 1, 2, · · · , n. In
this section, we assume the graph is always of order n, and has a collection
of vertices,
V = {1, 2, · · · , n}. (B.140)
Let us define two collections of edges: a collection R of random edges and a collection U of uncertain edges.
Please note that the uncertain random graph becomes a random graph
(Erdős and Rényi [38], Gilbert [56]) if the collection U of uncertain edges
vanishes; and becomes an uncertain graph (Gao and Gao [50]) if the collection
R of random edges vanishes.
$$X=\begin{pmatrix}x_{11}&x_{12}&\cdots&x_{1n}\\ x_{21}&x_{22}&\cdots&x_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n1}&x_{n2}&\cdots&x_{nn}\end{pmatrix} \tag{B.144}$$
and
$$\mathbb{X}=\left\{X\ \middle|\ \begin{array}{ll}x_{ij}=0\text{ or }1, & \text{if }(i,j)\in R\\ x_{ij}=0, & \text{if }(i,j)\in U\\ x_{ij}=x_{ji}, & i,j=1,2,\cdots,n\\ x_{ii}=0, & i=1,2,\cdots,n\end{array}\right\}. \tag{B.145}$$
where
$$f^*(Y)=\begin{cases}\displaystyle\sup_{X\in Y^*,\,f(X)=1}\ \min_{(i,j)\in U}\nu_{ij}(X), & \text{if }\displaystyle\sup_{X\in Y^*,\,f(X)=1}\ \min_{(i,j)\in U}\nu_{ij}(X)<0.5\\[10pt] 1-\displaystyle\sup_{X\in Y^*,\,f(X)=0}\ \min_{(i,j)\in U}\nu_{ij}(X), & \text{if }\displaystyle\sup_{X\in Y^*,\,f(X)=1}\ \min_{(i,j)\in U}\nu_{ij}(X)\ge 0.5,\end{cases}$$
$$\nu_{ij}(X)=\begin{cases}\alpha_{ij}, & \text{if } x_{ij}=1\\ 1-\alpha_{ij}, & \text{if } x_{ij}=0\end{cases}\quad (i,j)\in U, \tag{B.149}$$
$$f(X)=\begin{cases}1, & \text{if } I+X+X^2+\cdots+X^{n-1}>0\\ 0, & \text{otherwise,}\end{cases} \tag{B.150}$$
𝕏 is the class of matrices satisfying (B.145), and Y* is the extension class of Y satisfying (B.147).
where
$$\mathbb{X}=\left\{X\ \middle|\ \begin{array}{l}x_{ij}=0\text{ or }1,\ i,j=1,2,\cdots,n\\ x_{ij}=x_{ji},\ i,j=1,2,\cdots,n\\ x_{ii}=0,\ i=1,2,\cdots,n\end{array}\right\}. \tag{B.152}$$
Remark B.22: (Gao and Gao [50]) If the uncertain random graph becomes an uncertain graph, then the connectivity index is
$$\rho=\begin{cases}\displaystyle\sup_{X\in\mathbb{X},\,f(X)=1}\ \min_{1\le i<j\le n}\nu_{ij}(X), & \text{if }\displaystyle\sup_{X\in\mathbb{X},\,f(X)=1}\ \min_{1\le i<j\le n}\nu_{ij}(X)<0.5\\[10pt] 1-\displaystyle\sup_{X\in\mathbb{X},\,f(X)=0}\ \min_{1\le i<j\le n}\nu_{ij}(X), & \text{if }\displaystyle\sup_{X\in\mathbb{X},\,f(X)=1}\ \min_{1\le i<j\le n}\nu_{ij}(X)\ge 0.5\end{cases}$$
where 𝕏 becomes
$$\mathbb{X}=\left\{X\ \middle|\ \begin{array}{l}x_{ij}=0\text{ or }1,\ i,j=1,2,\cdots,n\\ x_{ij}=x_{ji},\ i,j=1,2,\cdots,n\\ x_{ii}=0,\ i=1,2,\cdots,n\end{array}\right\}. \tag{B.153}$$
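For graphs of small order, the connectivity index can be computed by enumerating all adjacency configurations in (B.153). The Python sketch below does so for a hypothetical uncertain triangle; connectedness is tested by graph search, which is equivalent to the positivity condition on I + X + X² + · · · + X^(n−1) in (B.150):

```python
from itertools import product

def connectivity_index(n, alpha):
    """Connectivity index ρ of an uncertain graph (Remark B.22) by brute force.

    alpha maps each potential edge (i, j), i < j, to its uncertain measure
    α_ij; pairs not listed are treated as surely absent.
    """
    edges = list(alpha)

    def connected(present):
        # f(X) = 1 iff the graph is connected, per (B.150)
        adj = {v: set() for v in range(1, n + 1)}
        for i, j in present:
            adj[i].add(j)
            adj[j].add(i)
        seen, stack = {1}, [1]
        while stack:
            for w in adj[stack.pop()] - seen:
                seen.add(w)
                stack.append(w)
        return len(seen) == n

    def sup_min(target):
        # sup over configurations X with f(X) = target of min ν_ij(X)
        best = 0.0
        for bits in product((0, 1), repeat=len(edges)):
            if connected([e for e, bit in zip(edges, bits) if bit]) == target:
                best = max(best, min(alpha[e] if bit else 1 - alpha[e]
                                     for e, bit in zip(edges, bits)))
        return best

    s1 = sup_min(True)
    return s1 if s1 < 0.5 else 1 - sup_min(False)

# Hypothetical triangle with edge uncertainties 0.8, 0.7, 0.6
rho = connectivity_index(3, {(1, 2): 0.8, (1, 3): 0.7, (2, 3): 0.6})  # → 0.7 (up to rounding)
```

The enumeration grows as 2 to the number of edges, so this is only feasible for small orders.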
Exercise B.20: An Euler circuit in a graph is a circuit that passes through each edge exactly once. In other words, a graph has an Euler circuit if it can be drawn on paper without ever lifting the pencil and without retracing any edge. It has been proved that a graph has an Euler circuit if and only if it is connected and each vertex has an even degree (i.e., the number of edges adjacent to that vertex is even). In order to measure how likely it is that an uncertain random graph has an Euler circuit, the Euler index is defined as the chance measure that the uncertain random graph has an Euler circuit. Please give a formula for calculating the Euler index.
Assume the network has a collection of nodes,
$$N=\{1,2,\cdots,n\} \tag{B.154}$$
where “1” is always the source node and “n” is always the destination node. Let us define two collections of arcs, a collection U of uncertain arcs and a collection R of random arcs, together with the collection W of arc weights.
Please note that the uncertain random network becomes a random net-
work (Frank and Hakimi [43]) if all weights are random variables; and be-
comes an uncertain network (Liu [129]) if all weights are uncertain variables.
Figure: An uncertain random network of order 6, with source node 1, destination node 6, and arcs as given by (B.159) and (B.160).
$$U=\{(1,2),(1,3),(2,4),(2,5),(3,4),(3,5)\}, \tag{B.159}$$
$$R=\{(4,6),(5,6)\}, \tag{B.160}$$
$$W=\{w_{12},w_{13},w_{24},w_{25},w_{34},w_{35},w_{46},w_{56}\}, \tag{B.161}$$
and f may be calculated by the Dijkstra algorithm (Dijkstra [34]) for each given α.
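As an illustration, here is a standard Dijkstra routine run on the arcs (B.159)–(B.160); the numeric weights below are hypothetical stand-ins for the values that the weights in (B.161) would take at a given α:

```python
import heapq

def dijkstra(n, arcs, source, target):
    """Standard Dijkstra shortest-path sketch (Dijkstra [34]) over the
    directed arcs {(i, j): weight} of a network with nodes 1..n."""
    dist = {v: float('inf') for v in range(1, n + 1)}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry, skip it
        for (i, j), w in arcs.items():
            if i == v and d + w < dist[j]:
                dist[j] = d + w
                heapq.heappush(heap, (d + w, j))
    return dist[target]

# Arcs of (B.159)-(B.160) with hypothetical weight values
arcs = {(1, 2): 3, (1, 3): 2, (2, 4): 4, (2, 5): 1,
        (3, 4): 2, (3, 5): 6, (4, 6): 2, (5, 6): 5}
dijkstra(6, arcs, 1, 6)  # → 6.0, along the path 1 → 3 → 4 → 6
```

Repeating this for a grid of α values traces out the shortest-path distribution of the uncertain random network.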
Exercise B.21: (Sheng and Gao [212]) The maximum flow problem is to find a flow with maximum value from the source node to the destination node in an uncertain random network. What is the maximum flow distribution?
Definition B.12 (Gao and Yao [47]) Let (Γ, L, M) × (Ω, A, Pr) be a chance
space and let T be a totally ordered set (e.g. time). An uncertain random
process is a function Xt (γ, ω) from T × (Γ, L, M) × (Ω, A, Pr) to the set of
real numbers such that {Xt ∈ B} is an event in L × A for any Borel set B
at each time t.
If Yt is an uncertain process and Zt is a stochastic process, then
$$X_t=f(Y_t,Z_t) \tag{B.167}$$
is an uncertain random process for any measurable function f.
Definition B.13 (Gao and Yao [47]) Let η1, η2, · · · be iid random variables, let τ1, τ2, · · · be iid uncertain variables, and let f be a positive and strictly monotone function. Define S0 = 0 and
$$S_n=\sum_{i=1}^{n}f(\eta_i,\tau_i) \tag{B.168}$$
for n ≥ 1. Then
$$N_t=\max_{n\ge 0}\left\{n\ \middle|\ S_n\le t\right\} \tag{B.169}$$
is called an uncertain random renewal process.
Theorem B.33 (Gao and Yao [47]) Let η1, η2, · · · be iid random variables with a common probability distribution Ψ, let τ1, τ2, · · · be iid uncertain variables, and let f be a positive and strictly monotone function. Assume Nt is an uncertain random renewal process with interarrival times f(η1, τ1), f(η2, τ2), · · · Then the average renewal number
$$\frac{N_t}{t}\to\left(\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\right)^{-1} \tag{B.170}$$
in the sense of convergence in distribution as t → ∞.
where ⌊tx⌋ represents the maximal integer less than or equal to tx. Since ⌊tx⌋ ≤ tx < ⌊tx⌋ + 1, we immediately have
$$\frac{\lfloor tx\rfloor}{\lfloor tx\rfloor+1}\cdot\frac{1}{x}\le\frac{t}{\lfloor tx\rfloor+1}<\frac{1}{x}$$
and then
$$\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor+1}>\frac{1}{x}\right\}\le\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor+1}>\frac{t}{\lfloor tx\rfloor+1}\right\}\le\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor}>\frac{1}{x}\right\}.$$
It follows from the law of large numbers for uncertain random variables that
$$\lim_{t\to\infty}\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor+1}>\frac{1}{x}\right\}=1-\lim_{t\to\infty}\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor+1}\le\frac{1}{x}\right\}=1-\mathrm{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\frac{1}{x}\right\}=\mathrm{M}\left\{\left(\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\right)^{-1}\le x\right\}$$
and
$$\lim_{t\to\infty}\mathrm{Ch}\left\{\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor}>\frac{1}{x}\right\}=1-\lim_{t\to\infty}\mathrm{Ch}\left\{\frac{\lfloor tx\rfloor+1}{\lfloor tx\rfloor}\cdot\frac{S_{\lfloor tx\rfloor+1}}{\lfloor tx\rfloor+1}\le\frac{1}{x}\right\}=1-\mathrm{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\frac{1}{x}\right\}=\mathrm{M}\left\{\left(\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\right)^{-1}\le x\right\}.$$
and then
$$\lim_{t\to\infty}\mathrm{Ch}\left\{\frac{N_t}{t}\le x\right\}=\mathrm{M}\left\{\left(\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\right)^{-1}\le x\right\}.$$
In particular, if the interarrival times are ηi + τi, then
$$\frac{N_t}{t}\to\frac{1}{E[\eta_1]+\tau_1} \tag{B.171}$$
and if the interarrival times are ηi τi with positive ηi and τi, then
$$\frac{N_t}{t}\to\frac{1}{E[\eta_1]\tau_1} \tag{B.172}$$
in the sense of convergence in distribution. Furthermore, consider an uncertain random renewal reward process
$$R_t=\sum_{i=1}^{N_t}\tau_i \tag{B.173}$$
whose interarrival times are the random variables η1, η2, · · · and whose rewards are the uncertain variables τ1, τ2, · · · Then the reward rate satisfies
$$\frac{R_t}{t}\to\frac{\tau_1}{E[\eta_1]} \tag{B.174}$$
in the sense of convergence in distribution, and the average reward per renewal is
$$\frac{1}{N_t}\sum_{i=1}^{N_t}\tau_i.$$
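When the uncertain variables τi degenerate to a constant c, the chance measure reduces to a probability measure and (B.171) reduces to the classical renewal limit Nt/t → 1/(E[η1] + c). The Monte Carlo sketch below (with hypothetical exponential ηi; genuinely uncertain τi cannot be simulated this way) checks that special case:

```python
import random

def renewal_rate(t, mean_eta, c, seed=0):
    """Simulate N_t / t for interarrival times η_i + c, where the η_i are
    iid exponential with mean mean_eta and the uncertain summand τ is
    degenerate at the constant c, so (B.171) predicts 1 / (mean_eta + c)."""
    rng = random.Random(seed)
    elapsed, n = 0.0, 0
    while True:
        elapsed += rng.expovariate(1 / mean_eta) + c
        if elapsed > t:
            return n / t
        n += 1

rate = renewal_rate(t=1e6, mean_eta=2.0, c=1.0)  # ≈ 1/3
```

For non-degenerate τ1 the limit in (B.171) is itself an uncertain variable, so a single simulated rate cannot capture it; only this degenerate case is checkable by sampling.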
Probabilistic risk analysis dates back to 1952 when Roy [199] proposed his safety-first criterion for portfolio selection. Another important contribution is the probabilistic value-at-risk methodology developed by Morgan [171] in 1996. On the other hand, uncertain risk analysis was proposed by Liu [128] in 2010 for evaluating the risk index, that is, the uncertain measure that an uncertain system is loss-positive. More generally, in order to quantify the risk of uncertain random systems, Liu and Ralescu [151] invented the tool of uncertain random risk analysis. Furthermore, the value-at-risk methodology was presented by Liu and Ralescu [153] and the expected loss was investigated by Liu and Ralescu [154] for dealing with uncertain random systems.

Probabilistic reliability analysis traces back to 1944 when Pugsley [187] first proposed structural accident rates for the aeronautics industry. Nowadays, probabilistic reliability analysis has become a widely used discipline. As a new methodology, uncertain reliability analysis was developed by Liu [128] in 2010 for evaluating the reliability index. More generally, for dealing with uncertain random systems, Wen and Kang [232] presented the tool of uncertain random reliability analysis.

The random graph was defined by Erdős and Rényi [38] in 1959 and independently by Gilbert [56] at nearly the same time. As an alternative, the uncertain graph was proposed by Gao and Gao [50] in 2013 via uncertainty theory. Assuming some edges exist with some degrees in probability measure and others exist with some degrees in uncertain measure, Liu [138] defined the concept of uncertain random graph in 2014.

The random network was first investigated by Frank and Hakimi [43] in 1965 for modeling communication networks with random capacities. Since then, random networks have been well developed and widely applied. As a breakthrough approach, the uncertain network was first explored by Liu [129] in 2010 for modeling project scheduling problems with uncertain duration times. More generally, assuming some weights are random variables and others are uncertain variables, Liu [138] initiated the concept of uncertain random network in 2014.

One of the earliest investigations of stochastic processes was made by Bachelier [3] in 1900, and the study of uncertain processes was started by Liu [123] in 2008. In order to deal with uncertain random phenomena evolving in time, Gao and Yao [47] presented the uncertain random process in the light of chance theory. Gao and Yao [47] also proposed the uncertain random renewal process. As extensions, Yao [255] discussed an uncertain random renewal reward process, and Yao [256] investigated an uncertain random alternating renewal process.
Appendix C

Frequently Asked Questions
This appendix will answer some frequently asked questions related to probability theory and uncertainty theory as well as their applications. It will also show why fuzzy set is a wrong model in both theory and practice. Finally, I will clarify what uncertainty is.
Figure C.1: Let A and B be two events from different probability spaces
(essentially they come from two different experiments). If A happens α times
and B happens β times, then the product A × B happens α × β times, where
α and β are understood as percentage numbers.
nothing otherwise. For example, if the domain expert thinks the belief degree
of an event A is α, then the price of the bet about A is α × 100¢. Here the
word “fair” means both the domain expert and the decision maker are willing
to either buy or sell this bet at this price.
Besides, Ramsey [196] suggested a Dutch book argument¹ that says the belief degree is irrational if there exists a book that guarantees either the domain expert or the decision maker a loss. For the moment, let us agree with it.
Let A1 be a bet that offers $1 if A1 happens, and let A2 be a bet that
offers $1 if A2 happens. Assume the belief degrees of A1 and A2 are α1
and α2 , respectively. This means the prices of A1 and A2 are $α1 and $α2 ,
respectively. Now we consider the bet A1 ∪ A2 that offers $1 if either A1 or
A2 happens, and write the belief degree of A1 ∪ A2 by α. This means the
price of A1 ∪ A2 is $α. If α > α1 + α2 , then you (i) sell A1 , (ii) sell A2 , and
(iii) buy A1 ∪ A2 . It is clear that you are guaranteed to lose α − α1 − α2 > 0.
Thus there exists a Dutch book and the assumption α > α1 + α2 is irrational.
If α < α1 + α2 , then you (i) buy A1 , (ii) buy A2 , and (iii) sell A1 ∪ A2 . It is
clear that you are guaranteed to lose α1 + α2 − α > 0. Thus there exists a
Dutch book and the assumption α < α1 + α2 is irrational. Hence we have to
assume α = α1 + α2 and the belief degree meets the additivity axiom (but
this assertion is questionable because you cannot reverse “buy” and “sell”
arbitrarily due to the unequal status of the decision maker and the domain
expert).
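The buy-and-sell reasoning above can be mechanized: for $1 bets on two disjoint events and on their union, enumerate every buy/sell/abstain portfolio and look for one whose profit is negative in every outcome. The Python sketch below is only an illustration of the argument, using the hypothetical prices 30¢, 40¢ and 80¢:

```python
from itertools import product

def dutch_book(prices):
    """Search for a Dutch book against the prices (α1, α2, α) of $1 bets
    on two disjoint events A1, A2 and on their union A1 ∪ A2.
    Position +1 means buy a bet, -1 means sell it, 0 means stay out."""
    payoffs = [(1, 0, 0),   # bet on A1: pays in outcome "A1 occurs"
               (0, 1, 0),   # bet on A2: pays in outcome "A2 occurs"
               (1, 1, 0)]   # bet on A1 ∪ A2: pays in either outcome
    for pos in product((-1, 0, 1), repeat=3):
        # profit of the portfolio in each of the three outcomes
        profits = [sum(p * (pay[w] - price)
                       for p, pay, price in zip(pos, payoffs, prices))
                   for w in range(3)]
        if max(profits) < 0:        # a guaranteed loss: a Dutch book exists
            return pos
    return None

dutch_book((0.3, 0.4, 0.8))  # → (-1, -1, 1): sell A1, sell A2, buy A1 ∪ A2
dutch_book((0.3, 0.4, 0.7))  # → None: additive prices admit no Dutch book
```

The first call recovers exactly the portfolio of the argument above, losing 10¢ in every outcome; additive prices admit no such portfolio.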
Until now we have verified that the belief degree meets the three axioms
of probability theory. Almost all subjectivists stop here and assert that belief
degree follows the laws of probability theory. Unfortunately, the evidence is
not enough for this conclusion because we have not verified whether the belief
degree meets the product probability theorem or not.
Recall the example of truck-cross-over-bridge on Page 6. Let Ai represent that the ith bridge strength is greater than 90 tons, i = 1, 2, · · · , 50, respectively. For each i, since your belief degree for Ai is 75%, you are willing
to pay 75¢ for the bet that offers $1 if Ai happens. If the belief degree did
follow the laws of probability theory, then it would be fair to pay
¹A Dutch book is a combination of bets that guarantees a loss regardless of the outcome of the gamble. For example, let A be a bet that offers $1 if A happens, let B be a bet that offers $1 if B happens, and let A ∨ B be a bet that offers $1 if either A or B happens. If the prices of A, B and A ∨ B are 30¢, 40¢ and 80¢, respectively, and you (i) sell A, (ii) sell B, and (iii) buy A ∨ B, then you are guaranteed to lose 10¢ no matter what happens. Thus there exists a Dutch book, and the prices are considered to be irrational.
of us will be happy to bet on it. But who is willing to offer such a bet? It seems that no one does, and then the belief degree of A1 × A2 × · · · × A50 is not the product of the individual belief degrees. Hence the belief degree does not follow the laws of probability theory.

It is thus concluded that the belief interpretation of probability is unacceptable. The main mistake of subjectivists is that they only justify that the belief degree meets the three axioms of probability theory, but do not check whether it meets the product probability theorem.
$$1-\alpha_1-\alpha_2-(1-\alpha)=\alpha-\alpha_1-\alpha_2>0. \tag{C.7}$$
It follows from the negative commission argument that the assumption α >
α1 + α2 is irrational. Hence we have to assume α ≤ α1 + α2 and the belief
degree meets the subadditivity axiom. Note that the decision maker cannot
sell the bets to the domain expert due to the unequal status of them.
Finally, regarding the product axiom, let us recall the example of truck-cross-over-bridge on Page 6. Suppose Ai represents that the ith bridge strength is greater than 90 tons, i = 1, 2, · · · , 50, respectively. For each i, since the belief degree of Ai is 75%, the price of the bet about Ai is 75¢. It is reasonable to pay
$$\underbrace{75¢\wedge 75¢\wedge\cdots\wedge 75¢}_{50}=75¢ \tag{C.8}$$
for a bet that offers $1 if A1 × A2 × · · · × A50 happens. Thus the belief degree meets the product axiom of uncertainty theory.
Hence the belief degree follows the laws of uncertainty theory. It is easy to
prove that if a set of belief degrees violate the laws of uncertainty theory, then
there exists a book that guarantees the domain expert a loss. It is also easy
to prove that if a set of belief degrees follow the laws of uncertainty theory,
then there does not exist any book that guarantees the domain expert a loss.
This difference implies that random variables and uncertain variables obey
different operational laws.
Probability theory and uncertainty theory are complementary mathemat-
ical systems that provide two acceptable mathematical models to deal with
the indeterminate world. Probability is interpreted as frequency, while un-
certainty is interpreted as personal belief degree.
$$T(P\wedge Q)=f(T(P),T(Q)) \tag{C.11}$$
and then excludes uncertain measure from its start because the function
f (x, y) = x ∧ y used in uncertainty theory is not differentiable with respect
to x and y. In fact, there does not exist any evidence that the truth value
of conjunction is completely determined by the truth values of individual
propositions, let alone a twice differentiable function.
On the one hand, it is recognized that probability theory is a legitimate approach to deal with frequencies. On the other hand, it is impossible that probability theory is the unique tool for modeling indeterminacy. In fact, it has been demonstrated in this book that uncertainty theory is successful in dealing with belief degrees.
for any events A and B no matter if they are independent or not, while the latter holds
$$\mathrm{M}\{A\cup B\}=\mathrm{M}\{A\}\vee\mathrm{M}\{B\} \tag{C.13}$$
only for independent events A and B. A lot of surveys showed that the measure of a union of events is usually greater than the maximum of the measures of the individual events when they are not independent. This fact indicates that human brains do not behave fuzzily.
Both uncertainty theory and possibility theory attempt to model belief
degrees, where the former uses the tool of uncertain measure and the latter
uses the tool of possibility measure. Thus they are complete competitors.
that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not
argue why I choose such a membership function because it is not important for
the focus of debate. Based on the membership function µ and the definition
of possibility measure
$$\mathrm{Pos}\{\xi\in B\}=\sup_{x\in B}\mu(x), \tag{C.15}$$
represents the grade of membership of x in the fuzzy set. This definition was
given by Zadeh [260] in 1965. Although I strongly respect Professor Lotfi
Zadeh’s achievements, I disagree with him on the topic of fuzzy set.
Up to now, fuzzy set theory has not evolved into a mathematical system because of its inconsistency. Theoretically, it is undeniable that there exist too many contradictions in fuzzy set theory. In practice, perhaps some people believe that fuzzy set is a suitable tool to model unsharp concepts. Unfortunately, it is not true. In order to convince the reader, let us examine the concept of “young”. Without loss of generality, assume “young” has a trapezoidal membership function (15, 20, 30, 40), i.e.,
$$\mu(x)=\begin{cases}0, & \text{if } x\le 15\\ (x-15)/5, & \text{if } 15\le x\le 20\\ 1, & \text{if } 20\le x\le 30\\ (40-x)/10, & \text{if } 30\le x\le 40\\ 0, & \text{if } x\ge 40.\end{cases} \tag{C.18}$$
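For concreteness, the membership function (C.18) and the possibility measure (C.15) can be evaluated directly; in the sketch below the supremum in (C.15) is approximated over a finite grid of ages:

```python
def mu(x):
    """Trapezoidal membership function (15, 20, 30, 40) of "young", per (C.18)."""
    if x <= 15 or x >= 40:
        return 0.0
    if x < 20:
        return (x - 15) / 5
    if x <= 30:
        return 1.0
    return (40 - x) / 10

def pos(grid):
    """Possibility measure Pos{ξ ∈ B} = sup over B of μ(x), per (C.15),
    with the set B approximated by a finite grid of ages."""
    return max(mu(x) for x in grid)

pos(range(20, 31))  # → 1.0: possibility 1 that "young" is between 20yr and 30yr
pos([16])           # → 0.2
```

The first value, possibility 1 on [20, 30], is exactly the consequence criticized in the discussion that follows.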
It follows from fuzzy set theory that “young” takes any value in an α-cut of µ, and then we infer that (i) “young” is between 15yr and 40yr, and (ii) “young” is between 20yr and 30yr with belief degree 1. The first proposition sounds good. However, the second proposition seems unacceptable because the belief degree that “young” is between 20yr and 30yr cannot reach 1 (in fact, the belief degree should be almost 0 due to the fact that 19yr and 31yr are also nearly sure to be “young”). This result says that “young” cannot be regarded as a fuzzy set.
Traditionally, stochastic finance theory presumes that the stock price (in-
cluding interest rate and currency exchange rate) follows Ito’s stochastic dif-
ferential equation. Is it really reasonable? In fact, this widely accepted
presumption was continuously challenged by many scholars.
As a paradox given by Liu [134], let us assume that the stock price Xt
follows the stochastic differential equation,
$$\underbrace{A,A,\cdots,A}_{9900},\ \underbrace{B,C,\cdots,Z}_{100}. \tag{C.26}$$
Nobody can believe that those 10000 samples follow a normal probability
distribution with expected value 0 and variance ∆t. This fact is in contra-
diction with the property of Wiener process that the increment ∆Wt is a
normal random variable. Therefore, the real stock price Xt does not follow
the stochastic differential equation.
Perhaps some people think that the stock price does behave like a geometric Wiener process (or Ornstein-Uhlenbeck process) macroscopically although they recognize the paradox microscopically. However, as the very core of stochastic finance theory, Ito's calculus is just built on the microscopic structure (i.e., the differential dWt) of the Wiener process rather than its macroscopic structure.
Figure C.2: There does not exist any continuous probability distribution (curve) that can approximate the frequency (histogram) of ∆Wt. Hence it is impossible that the real stock price Xt follows any Ito stochastic differential equation.
$$\mathrm{d}X_t=\frac{\partial h}{\partial t}(t,W_t)\,\mathrm{d}t+\frac{\partial h}{\partial w}(t,W_t)\,\mathrm{d}W_t+\frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t,W_t)\,\mathrm{d}t. \tag{C.27}$$
In fact, the increment of the stock price cannot follow any continuous probability distribution.

On the basis of the above paradox, personally I do not think Ito's calculus can serve as the essential tool of finance theory because Ito's stochastic differential equation cannot model stock prices. As a substitute, uncertain calculus may be a potential mathematical foundation of finance theory. We will have a theory of uncertain finance if the stock price, interest rate and exchange rate are assumed to follow uncertain differential equations.
recognized that Knight and Keynes made great progress in breaking the monopoly of probability theory.
However, a major retrogression arose from Cox (1946) with a theorem
that human’s belief degree is isomorphic to a probability measure. Many
people do not notice that Cox’s theorem is based on an unreasonable as-
sumption, and then mistakenly believe that uncertainty and probability are
synonymous. This idea remains alive today under the name of subjective
probability (de Finetti, 1937). Yet numerous experiments demonstrated that
the belief degree does not follow the laws of probability theory.
An influential exploration by Zadeh (1965) was the fuzzy set theory that
was widely said to be successfully applied in many areas of our life. However,
fuzzy set theory has neither evolved into a mathematical system nor become a suitable tool for rationally modeling belief degrees. The main mistake of fuzzy set theory is the wrong assumption that the belief degree of a union of events is the maximum of the belief degrees of the individual events no matter if they are independent or not. A lot of surveys showed that human brains do not behave fuzzily in the sense of Zadeh.
The latest development was uncertainty theory founded by Liu (2007).
Nowadays, uncertainty theory has become a branch of pure mathematics
that is not only a formal study of an abstract structure (i.e., uncertainty
space) but also applicable to modeling belief degrees. Perhaps some readers
may complain that I never clarify what uncertainty is. I think we can answer
it this way. Mathematically, uncertainty is anything that follows the laws of
uncertainty theory. Practically, uncertainty is anything that is described by
belief degrees. In this way, “uncertainty” becomes a scientific terminology on the basis of uncertainty theory.
Bibliography
[16] Chen XW, Kar S, and Ralescu DA, Cross-entropy measure of uncertain vari-
ables, Information Sciences, Vol.201, 53-60, 2012.
[17] Chen XW, Variation analysis of uncertain stationary independent increment
process, European Journal of Operational Research, Vol.222, No.2, 312-316,
2012.
[18] Chen XW, and Ralescu DA, B-spline method of uncertain statistics with
applications to estimate travel distance, Journal of Uncertain Systems, Vol.6,
No.4, 256-262, 2012.
[19] Chen XW, Liu YH, and Ralescu DA, Uncertain stock model with periodic
dividends, Fuzzy Optimization and Decision Making, Vol.12, No.1, 111-123,
2013.
[20] Chen XW, and Ralescu DA, Liu process and uncertain calculus, Journal of
Uncertainty Analysis and Applications, Vol.1, Article 3, 2013.
[21] Chen XW, and Gao J, Uncertain term structure model of interest rate, Soft
Computing, Vol.17, No.4, 597-604, 2013.
[22] Chen XW, Li XF, and Ralescu DA, A note on uncertain sequence, Inter-
national Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,
Vol.22, No.2, 305-314, 2014.
[23] Chen Y, Fung RYK, and Yang J, Fuzzy expected value modelling approach for
determining target values of engineering characteristics in QFD, International
Journal of Production Research, Vol.43, No.17, 3583-3604, 2005.
[24] Chen Y, Fung RYK, and Tang JF, Rating technical attributes in fuzzy QFD
by integrating fuzzy weighted average method and fuzzy expected value op-
erator, European Journal of Operational Research, Vol.174, No.3, 1553-1566,
2006.
[25] Choquet G, Theory of capacities, Annales de l'Institut Fourier, Vol.5, 131-295, 1954.
[26] Cox RT, Probability, frequency and reasonable expectation, American Jour-
nal of Physics, Vol.14, 1-13, 1946.
[27] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathe-
matical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[28] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[29] Das B, Maity K, and Maiti A, A two warehouse supply-chain model under possibility/necessity/credibility measures, Mathematical and Computer Modelling, Vol.46, Nos.3-4, 398-409, 2007.
[30] de Cooman G, Possibility theory I-III, International Journal of General Sys-
tems, Vol.25, 291-371, 1997.
[31] de Finetti B, La prévision: ses lois logiques, ses sources subjectives, Annales
de l’Institut Henri Poincaré, Vol.7, 1-68, 1937.
[32] de Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[33] Dempster AP, Upper and lower probabilities induced by a multivalued map-
ping, Annals of Mathematical Statistics, Vol.38, No.2, 325-339, 1967.
[34] Dijkstra EW, A note on two problems in connexion with graphs, Numerische Mathematik, Vol.1, No.1, 269-271, 1959.
[35] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[36] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[37] Elkan C, The paradoxical controversy over fuzzy logic, IEEE Expert, Vol.9,
No.4, 47-49, 1994.
[38] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6,
290-297, 1959.
[39] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[40] Fei WY, Optimal control of uncertain stochastic systems with Markovian
switching and its applications to portfolio decisions, Cybernetics and Systems,
Vol.45, 69-88, 2014.
[41] Feng Y, and Yang LX, A two-objective fuzzy k-cardinality assignment prob-
lem, Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-
244, 2006.
[42] Feng YQ, Wu WC, Zhang BM, and Li WY, Power system operation risk
assessment using credibility theory, IEEE Transactions on Power Systems,
Vol.23, No.3, 1309-1318, 2008.
[43] Frank H, and Hakimi SL, Probabilistic flows through a communication net-
work, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[44] Fung RYK, Chen YZ, and Chen L, A fuzzy expected value-based goal programming model for product planning using quality function deployment, Engineering Optimization, Vol.37, No.6, 633-647, 2005.
[45] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[46] Gao J, Uncertain bimatrix game with applications, Fuzzy Optimization and
Decision Making, Vol.12, No.1, 65-78, 2013.
[47] Gao J, and Yao K, Some concepts and theorems of uncertain random process,
International Journal of Intelligent Systems, to be published.
[48] Gao X, Some properties of continuous uncertain measure, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.17, No.3, 419-
426, 2009.
[49] Gao X, Gao Y, and Ralescu DA, On Liu’s inference rule for uncertain sys-
tems, International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, Vol.18, No.1, 1-11, 2010.
[50] Gao XL, and Gao Y, Connectedness index of uncertain graphs, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.21, No.1,
127-137, 2013.
[51] Gao Y, Shortest path problem with uncertain arc lengths, Computers and
Mathematics with Applications, Vol.62, No.6, 2591-2600, 2011.
[52] Gao Y, Uncertain inference control for balancing inverted pendulum, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 481-492, 2012.
[53] Gao Y, Existence and uniqueness theorem on uncertain differential equations
with local Lipschitz condition, Journal of Uncertain Systems, Vol.6, No.3,
223-232, 2012.
[54] Ge XT, and Zhu Y, Existence and uniqueness theorem for uncertain delay
differential equations, Journal of Computational Information Systems, Vol.8,
No.20, 8341-8347, 2012.
[55] Ge XT, and Zhu Y, A necessary condition of optimality for uncertain optimal
control problem, Fuzzy Optimization and Decision Making, Vol.12, No.1, 41-
51, 2013.
[56] Gilbert EN, Random graphs, Annals of Mathematical Statistics, Vol.30, No.4,
1141-1144, 1959.
[57] Guo HY, and Wang XS, Variance of uncertain random variables, Journal of
Uncertainty Analysis and Applications, Vol.2, Article 6, 2014.
[58] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[59] Ha MH, Li Y, and Wang XF, Fuzzy knowledge representation and reasoning
using a generalized fuzzy petri net and a similarity measure, Soft Computing,
Vol.11, No.4, 323-327, 2007.
[60] Han SW, Peng ZX, and Wang SQ, The maximum flow problem of uncertain
network, Information Sciences, Vol.265, 167-175, 2014.
[61] He Y, and Xu JP, A class of random fuzzy programming model and its ap-
plication to vehicle routing problem, World Journal of Modelling and Simu-
lation, Vol.1, No.1, 3-11, 2005.
[62] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[63] Hou YC, Subadditivity of chance measure, Journal of Uncertainty Analysis
and Applications, Vol.2, Article 14, 2014.
[64] Hou YC, Distance between uncertain random variables, http://orsc.edu.cn/online/130510.pdf.
[65] Inuiguchi M, and Ramík J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic pro-
gramming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[66] Ito K, Stochastic integral, Proceedings of the Japan Academy Series A, Vol.20,
No.8, 519-524, 1944.
[67] Ito K, On stochastic differential equations, Memoirs of the American Math-
ematical Society, No.4, 1-51, 1951.
[68] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[69] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[70] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[71] Jaynes ET, Probability Theory: The Logic of Science, Cambridge University
Press, 2003.
[72] Jeffreys H, Theory of Probability, Oxford University Press, 1961.
[73] Ji XY, and Shao Z, Model and algorithm for bilevel newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[74] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[75] Jiao DY, and Yao K, An interest rate model in uncertain environment, Soft
Computing, to be published.
[76] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main develop-
ments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[77] Kacprzyk J, and Yager RR, Linguistic summaries of data using fuzzy logic,
International Journal of General Systems, Vol.30, 133-154, 2001.
[78] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[79] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[80] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of ran-
domness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[81] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Applied Mathematical Modelling, Vol.34, No.2, 301-308, 2010.
[82] Ke H, Ma WM, Gao X, and Xu WH, New fuzzy models for time-cost trade-
off problem, Fuzzy Optimization and Decision Making, Vol.9, No.2, 219-231,
2010.
[83] Ke H, and Su TY, Uncertain random multilevel programming with applica-
tion to product control problem, Soft Computing, to be published.
[84] Keynes JM, The General Theory of Employment, Interest, and Money, Har-
court, New York, 1936.
[85] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-
182, 1986.
[86] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information,
Prentice-Hall, Englewood Cliffs, 1988.
[87] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[88] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[89] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[90] Kwakernaak H, Fuzzy random variables–I: Definitions and theorems, Infor-
mation Sciences, Vol.15, 1-29, 1978.
[91] Kwakernaak H, Fuzzy random variables–II: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[92] Li J, Xu JP, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[93] Li PK, and Liu B, Entropy of credibility distributions for fuzzy variables,
IEEE Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[94] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[95] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[96] Li X, and Liu B, Maximum entropy principle for fuzzy variables, Interna-
tional Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[97] Li X, and Liu B, On distance between fuzzy variables, Journal of Intelligent
& Fuzzy Systems, Vol.19, No.3, 197-204, 2008.
[98] Li X, and Liu B, Chance measure for hybrid events with fuzziness and ran-
domness, Soft Computing, Vol.13, No.2, 105-115, 2009.
[99] Li X, and Liu B, Foundation of credibilistic logic, Fuzzy Optimization and
Decision Making, Vol.8, No.1, 91-102, 2009.
[100] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[101] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[102] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[103] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[104] Liu B, and Iwamura K, Chance constrained programming with fuzzy param-
eters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[105] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[106] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[107] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transac-
tions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[108] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[109] Liu B, Uncertain Programming, Wiley, New York, 1999.
[110] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[111] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[112] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[113] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[114] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Hei-
delberg, 2002.
[115] Liu B, Toward fuzzy optimization without mathematical ambiguity, Fuzzy
Optimization and Decision Making, Vol.1, No.1, 43-63, 2002.
[116] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[117] Liu B, Random fuzzy dependent-chance programming and its hybrid intelli-
gent algorithm, Information Sciences, Vol.141, Nos.3-4, 259-271, 2002.
[118] Liu B, Inequalities and convergence concepts of fuzzy and rough variables,
Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.
[119] Liu B, Uncertainty Theory: An Introduction to its Axiomatic Foundations,
Springer-Verlag, Berlin, 2004.
[120] Liu B, A survey of credibility theory, Fuzzy Optimization and Decision Mak-
ing, Vol.5, No.4, 387-408, 2006.
[121] Liu B, A survey of entropy of fuzzy variables, Journal of Uncertain Systems,
Vol.1, No.1, 4-13, 2007.
[122] Liu B, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.
[123] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Un-
certain Systems, Vol.2, No.1, 3-16, 2008.
[124] Liu B, Theory and Practice of Uncertain Programming, 2nd edn, Springer-
Verlag, Berlin, 2009.
[125] Liu B, Some research problems in uncertainty theory, Journal of Uncertain
Systems, Vol.3, No.1, 3-10, 2009.
[126] Liu B, Uncertain entailment and modus ponens in the framework of uncertain
logic, Journal of Uncertain Systems, Vol.3, No.4, 243-251, 2009.
[127] Liu B, Uncertain set theory and uncertain inference rule with application to
uncertain control, Journal of Uncertain Systems, Vol.4, No.2, 83-98, 2010.
[128] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of
Uncertain Systems, Vol.4, No.3, 163-170, 2010.
[149] Liu YH, Uncertain random variables: A mixture of uncertainty and random-
ness, Soft Computing, Vol.17, No.4, 625-634, 2013.
[150] Liu YH, Uncertain random programming with applications, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.2, 153-169, 2013.
[151] Liu YH, and Ralescu DA, Risk index in uncertain random risk analysis, In-
ternational Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.22, 2014, to be published.
[152] Liu YH, Chen XW, and Ralescu DA, Uncertain currency model and currency
option pricing, International Journal of Intelligent Systems, to be published.
[153] Liu YH, and Ralescu DA, Value-at-risk in uncertain random risk analysis,
Technical Report, 2014.
[154] Liu YH, and Ralescu DA, Expected loss of uncertain random systems, Tech-
nical Report, 2014.
[155] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[156] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value opera-
tor, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[157] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[158] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[159] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[160] Liu YK, Fuzzy programming with recourse, International Journal of Uncer-
tainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[161] Liu YK, and Gao J, The independence of fuzzy variables with applications to
fuzzy random optimization, International Journal of Uncertainty, Fuzziness
& Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[162] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-
133, 2003.
[163] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[164] Maiti MK, and Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[165] Mamdani EH, Applications of fuzzy algorithms for control of a simple dynamic
plant, Proceedings of the Institution of Electrical Engineers, Vol.121, No.12,
1585-1588, 1974.
[166] Marano GC, and Quaranta G, A new possibilistic reliability index definition,
Acta Mechanica, Vol.210, 291-303, 2010.
[167] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[168] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[169] Möller B, and Beer M, Engineering computation under uncertainty, Comput-
ers and Structures, Vol.86, 1024-1041, 2008.
[170] Moore RE, Interval Analysis, Prentice-Hall, New Jersey, 1966.
[171] Morgan JP, RiskMetrics – Technical Document, 4th edn, Morgan Guaranty
Trust Companies, New York, 1996.
[172] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[173] Negoita CV, and Ralescu DA, Representation theorems for fuzzy concepts,
Kybernetes, Vol.4, 169-174, 1975.
[174] Negoita CV, and Ralescu DA, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[175] Nguyen HT, Nguyen NT, and Wang TH, On capacity functionals in interval
probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-
Based Systems, Vol.5, 359-377, 1997.
[176] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[177] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[178] Øksendal B, Stochastic Differential Equations, 6th edn, Springer-Verlag,
Berlin, 2005.
[179] Pawlak Z, Rough sets, International Journal of Information and Computer
Sciences, Vol.11, No.5, 341-356, 1982.
[180] Pawlak Z, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer,
Dordrecht, 1991.
[181] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[182] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[183] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[184] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[185] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[186] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
Journal of Uncertainty Analysis and Applications, Vol.2, Article 13, 2014.
[187] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[188] Puri ML, and Ralescu DA, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[189] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[190] Qin ZF, and Gao X, Fractional Liu process with application to finance, Math-
ematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[191] Qin ZF, Uncertain random goal programming, http://orsc.edu.cn/online/130323.pdf.
[192] Ralescu AL, and Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets
and Systems, Vol.86, No.3, 321-330, 1997.
[193] Ralescu DA, A generalization of representation theorem, Fuzzy Sets and Sys-
tems, Vol.51, 309-311, 1992.
[194] Ralescu DA, Cardinality, quantifiers, and the aggregation of fuzzy criteria,
Fuzzy Sets and Systems, Vol.69, No.3, 355-365, 1995.
[195] Ralescu DA, and Sugeno M, Fuzzy integral representation, Fuzzy Sets and
Systems, Vol.84, No.2, 127-133, 1996.
[196] Ramsey FP, Truth and probability, In Foundations of Mathematics and Other
Logical Essays, Humanities Press, New York, 1931.
[197] Reichenbach H, The Theory of Probability, University of California Press,
Berkeley, 1948.
[198] Robbins HE, On the measure of a random set, Annals of Mathematical Statis-
tics, Vol.15, No.1, 70-74, 1944.
[199] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449,
1952.
[200] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for
two-level linear fractional programming problems with fuzzy parameters, Fuzzy
Sets and Systems, Vol.115, 93-103, 2000.
[201] Samuelson PA, Rational theory of warrant pricing, Industrial Management
Review, Vol.6, 13-31, 1965.
[202] Savage LJ, The Foundations of Statistics, Wiley, New York, 1954.
[203] Savage LJ, The Foundations of Statistical Inference, Methuen, London, 1962.
[204] Shafer G, A Mathematical Theory of Evidence, Princeton University Press,
Princeton, 1976.
[205] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[206] Shao Z, and Ji XY, Fuzzy multi-product constraint newsboy problem, Applied
Mathematics and Computation, Vol.180, No.1, 7-15, 2006.
[207] Shen Q, and Zhao R, A credibilistic approach to assumption-based truth
maintenance, IEEE Transactions on Systems, Man, and Cybernetics Part
A, Vol.41, No.1, 85-96, 2011.
[208] Shen YY, and Yao K, A mean-reverting currency model in an uncertain
environment, http://orsc.edu.cn/online/131204.pdf.
[209] Shen YY, and Yao K, Runge-Kutta method for solving uncertain differential
equations, http://orsc.edu.cn/online/130502.pdf.
[210] Sheng YH, and Wang CG, Stability in the p-th moment for uncertain differen-
tial equation, Journal of Intelligent & Fuzzy Systems, Vol.26, No.3, 1263-1271,
2014.
[211] Sheng YH, and Yao K, Some formulas of variance of uncertain random vari-
able, Journal of Uncertainty Analysis and Applications, Vol.2, Article 12,
2014.
[212] Sheng YH, and Gao J, Chance distribution of the maximum flow of uncertain
random network, Journal of Uncertainty Analysis and Applications, Vol.2,
Article 15, 2014.
[213] Sheng YH, and Kar S, Some results of moments of uncertain variable through
inverse uncertainty distribution, Fuzzy Optimization and Decision Making, to
be published.
[214] Sheng YH, Exponential stability of uncertain differential equation,
http://orsc.edu.cn/online/130122.pdf.
[215] Shih HS, Lai YJ, and Lee ES, Fuzzy approach for multilevel programming
problems, Computers and Operations Research, Vol.23, 73-91, 1996.
[216] Slowinski R, and Teghem J, Fuzzy versus stochastic approaches to multicrite-
ria linear programming under uncertainty, Naval Research Logistics, Vol.35,
673-695, 1988.
[217] Sugeno M, Theory of Fuzzy Integrals and its Applications, Ph.D. Dissertation,
Tokyo Institute of Technology, 1974.
[218] Sun JJ, and Chen XW, Asian option pricing formula for uncertain financial
market, http://orsc.edu.cn/online/130511.pdf.
[219] Takagi T, and Sugeno M, Fuzzy identification of systems and its applications
to modeling and control, IEEE Transactions on Systems, Man and Cybernetics,
Vol.15, No.1, 116-132, 1985.
[220] Taleizadeh AA, Niaki STA, and Aryanezhad MB, A hybrid method of Pareto,
TOPSIS and genetic algorithm to optimize multi-product multi-constraint
inventory control systems with random fuzzy replenishments, Mathematical
and Computer Modelling, Vol.49, Nos.5-6, 1044-1057, 2009.
[221] Tian DZ, Wang L, Wu J, and Ha MH, Rough set model based on uncertain
measure, Journal of Uncertain Systems, Vol.3, No.4, 252-256, 2009.
[222] Tian JF, Inequalities and mathematical properties of uncertain variables,
Fuzzy Optimization and Decision Making, Vol.10, No.4, 357-368, 2011.
[223] Torabi H, Davvaz B, and Behboodian J, Fuzzy random events in incomplete
probability models, Journal of Intelligent & Fuzzy Systems, Vol.17, No.2,
183-188, 2006.
[224] Venn J, The Logic of Chance, MacMillan, London, 1866.
[225] von Mises R, Wahrscheinlichkeit, Statistik und Wahrheit, Springer, Berlin,
1928.
[226] von Mises R, Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statis-
tik und Theoretischen Physik, Leipzig and Wien, Franz Deuticke, 1931.
[227] Wang XS, Gao ZC, and Guo HY, Uncertain hypothesis testing for two ex-
perts’ empirical data, Mathematical and Computer Modelling, Vol.55, 1478-
1482, 2012.
[228] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncer-
tainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[229] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 99-109, 2013.
[230] Wang XS, and Peng ZX, Method of moments for estimating uncertainty dis-
tributions, Journal of Uncertainty Analysis and Applications, Vol.2, Article
5, 2014.
[231] Wang XS, and Wang LL, Delphi method for estimating membership function
of the uncertain set, http://orsc.edu.cn/online/130330.pdf.
[232] Wen ML, and Kang R, Reliability analysis in uncertain random system,
http://orsc.edu.cn/online/120419.pdf.
[233] Wiener N, Differential space, Journal of Mathematical Physics, Vol.2, 131-
174, 1923.
[234] Yager RR, A new approach to the summarization of data, Information Sci-
ences, Vol.28, 69-86, 1982.
[235] Yager RR, Quantified propositions in a linguistic logic, International Journal
of Man-Machine Studies, Vol.19, 195-227, 1983.
[236] Yang LX, and Liu B, On inequalities and critical values of fuzzy random
variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[237] Yang N, and Wen FS, A chance constrained programming approach to trans-
mission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[238] Yang XF, and Gao J, Uncertain differential games with application to capi-
talism, Journal of Uncertainty Analysis and Applications, Vol.1, Article 17,
2013.
[239] Yang XH, Moments and tails inequality within the framework of uncertainty
theory, Information: An International Interdisciplinary Journal, Vol.14,
No.8, 2599-2604, 2011.
[240] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 89-98, 2013.
[241] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[242] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[243] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[244] Yao K, Extreme values and integral of solution of uncertain differential equa-
tion, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[245] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[246] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[247] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, Article 8,
2013.
[248] Yao K, A no-arbitrage theorem for uncertain stock model, Fuzzy Optimization
and Decision Making, to be published.
[249] Yao K, Entropy operator for membership function of uncertain set, Applied
Mathematics and Computation, to be published.
[250] Yao K, Block replacement policy in uncertain environment,
http://orsc.edu.cn/online/110612.pdf.
[251] Yao K, and Gao J, Law of large numbers for uncertain random variables,
http://orsc.edu.cn/online/120401.pdf.
[252] Yao K, and Sheng YH, Stability in mean for uncertain differential equation,
http://orsc.edu.cn/online/120611.pdf.
[253] Yao K, Time integral of independent increment uncertain process,
http://orsc.edu.cn/online/130302.pdf.
[254] Yao K, A formula to calculate the variance of uncertain variable,
http://orsc.edu.cn/online/130831.pdf.
[255] Yao K, Uncertain random renewal reward process,
http://orsc.edu.cn/online/131019.pdf.
[256] Yao K, Uncertain random alternating renewal process,
http://orsc.edu.cn/online/131108.pdf.
[257] Yao K, On the ruin time of an uncertain insurance model,
http://orsc.edu.cn/online/140115.pdf.
[258] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[259] Yu XC, A stock model with jumps for uncertain markets, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.20, No.3, 421-
432, 2012.
[260] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[261] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[262] Zadeh LA, The concept of a linguistic variable and its application to approx-
imate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[263] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[280] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[281] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent & Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[282] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy vari-
ables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[283] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[284] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.
[285] Zimmermann HJ, Fuzzy Set Theory and its Applications, Kluwer Academic
Publishers, Boston, 1985.
List of Frequently Used Symbols
M uncertain measure
(Γ, L, M) uncertainty space
ξ, η, τ uncertain variables
Φ, Ψ, Υ uncertainty distributions
Φ⁻¹, Ψ⁻¹, Υ⁻¹ inverse uncertainty distributions
µ, ν, λ membership functions
µ⁻¹, ν⁻¹, λ⁻¹ inverse membership functions
L(a, b) linear uncertain variable
Z(a, b, c) zigzag uncertain variable
N(e, σ) normal uncertain variable
LOGN(e, σ) lognormal uncertain variable
(a, b, c) triangular uncertain set
(a, b, c, d) trapezoidal uncertain set
E expected value
V variance
H entropy
Xt , Yt , Zt uncertain processes
Ct Liu process
Nt renewal process
Q uncertain quantifier
(Q, S, P ) uncertain proposition
∨ maximum operator
∧ minimum operator
¬ negation symbol
∀ universal quantifier
∃ existential quantifier
Pr probability measure
(Ω, A, Pr) probability space
Ch chance measure
k-max the kth largest value
k-min the kth smallest value
∅ the empty set
ℜ the set of real numbers
iid independent and identically distributed