
Springer Uncertainty Research

Baoding Liu

Uncertainty
Theory
Fourth Edition
Springer Uncertainty Research

Springer Uncertainty Research is a book series that seeks to publish high-quality
monographs, texts, and edited volumes on a wide range of topics in both funda-
mental and applied research on uncertainty. New publications are always solicited.
This book series provides rapid publication and worldwide distribution.

Editor-in-Chief
Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
http://orsc.edu.cn/liu
Email: [email protected]

Executive Editor-in-Chief
Kai Yao
School of Management
University of Chinese Academy of Sciences
Beijing 100190, China
http://orsc.edu.cn/~kyao
Email: [email protected]

More information about this series at http://www.springer.com/series/13425


Baoding Liu

Uncertainty Theory
Fourth Edition

Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing
China

ISSN 2199-3807 ISSN 2199-3815 (electronic)


ISBN 978-3-662-44353-8 ISBN 978-3-662-44354-5 (eBook)
DOI 10.1007/978-3-662-44354-5

Library of Congress Control Number: 2014946387

Springer Heidelberg New York Dordrecht London

© Springer-Verlag Berlin Heidelberg 2015


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher’s location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Contents

Preface xi

0 Introduction 1
0.1 Indeterminacy . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
0.2 Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
0.3 Belief Degree . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
0.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1 Uncertain Measure 9
1.1 Measurable Space . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Uncertain Measure . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Uncertainty Space . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Product Uncertain Measure . . . . . . . . . . . . . . . . . . . 16
1.6 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Polyrectangular Theorem . . . . . . . . . . . . . . . . . . . . 23
1.8 Conditional Uncertain Measure . . . . . . . . . . . . . . . . . 25
1.9 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 28

2 Uncertain Variable 29
2.1 Uncertain Variable . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 Uncertainty Distribution . . . . . . . . . . . . . . . . . . . . . 31
2.3 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.4 Operational Law . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.6 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.7 Moment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.8 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.9 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.10 Conditional Uncertainty Distribution . . . . . . . . . . . . . . 90
2.11 Uncertain Sequence . . . . . . . . . . . . . . . . . . . . . . . . 93
2.12 Uncertain Vector . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 102

3 Uncertain Programming 105
3.1 Uncertain Programming . . . . . . . . . . . . . . . . . . . . . 105
3.2 Numerical Method . . . . . . . . . . . . . . . . . . . . . . . . 108
3.3 Machine Scheduling Problem . . . . . . . . . . . . . . . . . . 110
3.4 Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . 113
3.5 Project Scheduling Problem . . . . . . . . . . . . . . . . . . . 117
3.6 Uncertain Multiobjective Programming . . . . . . . . . . . . 121
3.7 Uncertain Goal Programming . . . . . . . . . . . . . . . . . . 122
3.8 Uncertain Multilevel Programming . . . . . . . . . . . . . . . 123
3.9 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 124

4 Uncertain Statistics 127
4.1 Expert’s Experimental Data . . . . . . . . . . . . . . . . . . . 127
4.2 Questionnaire Survey . . . . . . . . . . . . . . . . . . . . . . . 128
4.3 Empirical Uncertainty Distribution . . . . . . . . . . . . . . . 129
4.4 Principle of Least Squares . . . . . . . . . . . . . . . . . . . . 130
4.5 Method of Moments . . . . . . . . . . . . . . . . . . . . . . . 132
4.6 Multiple Domain Experts . . . . . . . . . . . . . . . . . . . . 133
4.7 Delphi Method . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.8 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 135

5 Uncertain Risk Analysis 137
5.1 Loss Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.2 Risk Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.3 Series System . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.4 Parallel System . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.5 k-out-of-n System . . . . . . . . . . . . . . . . . . . . . . . . 141
5.6 Standby System . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.7 Structural Risk Analysis . . . . . . . . . . . . . . . . . . . . . 142
5.8 Investment Risk Analysis . . . . . . . . . . . . . . . . . . . . 145
5.9 Value-at-Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.10 Expected Loss . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.11 Hazard Distribution . . . . . . . . . . . . . . . . . . . . . . . 148
5.12 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 149

6 Uncertain Reliability Analysis 151
6.1 Structure Function . . . . . . . . . . . . . . . . . . . . . . . . 151
6.2 Reliability Index . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.3 Series System . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.4 Parallel System . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.5 k-out-of-n System . . . . . . . . . . . . . . . . . . . . . . . . 154
6.6 General System . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.7 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 155

7 Uncertain Propositional Logic 157
7.1 Uncertain Proposition . . . . . . . . . . . . . . . . . . . . . . 157
7.2 Truth Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.3 Chen-Ralescu Theorem . . . . . . . . . . . . . . . . . . . . . . 161
7.4 Boolean System Calculator . . . . . . . . . . . . . . . . . . . 163
7.5 Uncertain Predicate Logic . . . . . . . . . . . . . . . . . . . . 163
7.6 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 167

8 Uncertain Entailment 169
8.1 Uncertain Entailment Model . . . . . . . . . . . . . . . . . . 169
8.2 Uncertain Modus Ponens . . . . . . . . . . . . . . . . . . . . 171
8.3 Uncertain Modus Tollens . . . . . . . . . . . . . . . . . . . . 172
8.4 Uncertain Hypothetical Syllogism . . . . . . . . . . . . . . . . 174
8.5 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 175

9 Uncertain Set 177
9.1 Uncertain Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
9.2 Membership Function . . . . . . . . . . . . . . . . . . . . . . 183
9.3 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.4 Set Operational Law . . . . . . . . . . . . . . . . . . . . . . . 196
9.5 Arithmetic Operational Law . . . . . . . . . . . . . . . . . . . 200
9.6 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . . 204
9.7 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
9.8 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
9.9 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.10 Conditional Membership Function . . . . . . . . . . . . . . . 216
9.11 Uncertain Statistics . . . . . . . . . . . . . . . . . . . . . . . 216
9.12 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 220

10 Uncertain Logic 221
10.1 Individual Feature Data . . . . . . . . . . . . . . . . . . . . . 221
10.2 Uncertain Quantifier . . . . . . . . . . . . . . . . . . . . . . . 222
10.3 Uncertain Subject . . . . . . . . . . . . . . . . . . . . . . . . 229
10.4 Uncertain Predicate . . . . . . . . . . . . . . . . . . . . . . . 232
10.5 Uncertain Proposition . . . . . . . . . . . . . . . . . . . . . . 235
10.6 Truth Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
10.7 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
10.8 Linguistic Summarizer . . . . . . . . . . . . . . . . . . . . . . 243
10.9 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 246

11 Uncertain Inference 247
11.1 Uncertain Inference Rule . . . . . . . . . . . . . . . . . . . . . 247
11.2 Uncertain System . . . . . . . . . . . . . . . . . . . . . . . . . 251
11.3 Uncertain Control . . . . . . . . . . . . . . . . . . . . . . . . 255
11.4 Inverted Pendulum . . . . . . . . . . . . . . . . . . . . . . . . 255

11.5 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 257

12 Uncertain Process 259
12.1 Uncertain Process . . . . . . . . . . . . . . . . . . . . . . . . 259
12.2 Uncertainty Distribution . . . . . . . . . . . . . . . . . . . . . 261
12.3 Independence and Operational Law . . . . . . . . . . . . . . . 265
12.4 Independent Increment Process . . . . . . . . . . . . . . . . . 266
12.5 Stationary Independent Increment Process . . . . . . . . . . . 268
12.6 Extreme Value Theorem . . . . . . . . . . . . . . . . . . . . . 273
12.7 First Hitting Time . . . . . . . . . . . . . . . . . . . . . . . . 277
12.8 Time Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
12.9 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 282

13 Uncertain Renewal Process 283
13.1 Uncertain Renewal Process . . . . . . . . . . . . . . . . . . . 283
13.2 Block Replacement Policy . . . . . . . . . . . . . . . . . . . . 287
13.3 Renewal Reward Process . . . . . . . . . . . . . . . . . . . . . 288
13.4 Uncertain Insurance Model . . . . . . . . . . . . . . . . . . . 290
13.5 Age Replacement Policy . . . . . . . . . . . . . . . . . . . . . 294
13.6 Alternating Renewal Process . . . . . . . . . . . . . . . . . . 298
13.7 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 302

14 Uncertain Calculus 303
14.1 Liu Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
14.2 Liu Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
14.3 Fundamental Theorem . . . . . . . . . . . . . . . . . . . . . . 313
14.4 Chain Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
14.5 Change of Variables . . . . . . . . . . . . . . . . . . . . . . . 315
14.6 Integration by Parts . . . . . . . . . . . . . . . . . . . . . . . 316
14.7 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 318

15 Uncertain Differential Equation 319
15.1 Uncertain Differential Equation . . . . . . . . . . . . . . . . . 319
15.2 Analytic Methods . . . . . . . . . . . . . . . . . . . . . . . . . 322
15.3 Existence and Uniqueness . . . . . . . . . . . . . . . . . . . . 327
15.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
15.5 α-Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
15.6 Yao-Chen Formula . . . . . . . . . . . . . . . . . . . . . . . . 332
15.7 Numerical Methods . . . . . . . . . . . . . . . . . . . . . . . . 343
15.8 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 345

16 Uncertain Finance 347
16.1 Uncertain Stock Model . . . . . . . . . . . . . . . . . . . . . . 347
16.2 Uncertain Interest Rate Model . . . . . . . . . . . . . . . . . 358
16.3 Uncertain Currency Model . . . . . . . . . . . . . . . . . . . . 359

16.4 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 363

A Probability Theory 365
A.1 Probability Measure . . . . . . . . . . . . . . . . . . . . . . . 365
A.2 Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . 369
A.3 Probability Distribution . . . . . . . . . . . . . . . . . . . . . 370
A.4 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.5 Operational Law . . . . . . . . . . . . . . . . . . . . . . . . . 373
A.6 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . . 376
A.7 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
A.8 Moment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
A.9 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
A.10 Random Sequence . . . . . . . . . . . . . . . . . . . . . . . . 390
A.11 Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . 395
A.12 Conditional Probability . . . . . . . . . . . . . . . . . . . . . 399
A.13 Random Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
A.14 Stochastic Process . . . . . . . . . . . . . . . . . . . . . . . . 403
A.15 Stochastic Calculus . . . . . . . . . . . . . . . . . . . . . . . . 405
A.16 Stochastic Differential Equation . . . . . . . . . . . . . . . . . 406

B Chance Theory 409
B.1 Chance Measure . . . . . . . . . . . . . . . . . . . . . . . . . 409
B.2 Uncertain Random Variable . . . . . . . . . . . . . . . . . . . 413
B.3 Chance Distribution . . . . . . . . . . . . . . . . . . . . . . . 415
B.4 Operational Law . . . . . . . . . . . . . . . . . . . . . . . . . 417
B.5 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . . 424
B.6 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
B.7 Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . 431
B.8 Uncertain Random Programming . . . . . . . . . . . . . . . . 433
B.9 Uncertain Random Risk Analysis . . . . . . . . . . . . . . . . 436
B.10 Uncertain Random Reliability Analysis . . . . . . . . . . . . . 439
B.11 Uncertain Random Graph . . . . . . . . . . . . . . . . . . . . 440
B.12 Uncertain Random Network . . . . . . . . . . . . . . . . . . . 444
B.13 Uncertain Random Process . . . . . . . . . . . . . . . . . . . 445
B.14 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 450

C Frequently Asked Questions 453
C.1 What is the meaning that an object follows the laws of probability theory? . . . . . . . . . . . . . . . . . . . . . . . . . . 453
C.2 Why does frequency follow the laws of probability theory? . . 454
C.3 Why is probability theory unable to model belief degree? . . 455
C.4 Why should belief degree be understood as an oddsmaker’s betting ratio rather than a fair one? . . . . . . . . . . . . . . 457
C.5 Why does belief degree follow the laws of uncertainty theory? 458

C.6 What is the difference between probability theory and uncertainty theory? . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
C.7 What goes wrong with Cox’s theorem? . . . . . . . . . . . . . 459
C.8 What is the difference between possibility theory and uncertainty theory? . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
C.9 Why is fuzzy variable unable to model indeterminate quantity? 460
C.10 Why is fuzzy set unable to model unsharp concept? . . . . . 461
C.11 Does the stock price follow stochastic differential equation or uncertain differential equation? . . . . . . . . . . . . . . . . . 462
C.12 How did “uncertainty” evolve over the past 100 years? . . . . 464

Bibliography 467

List of Frequently Used Symbols 483

Index 485
Preface

When no samples are available to estimate a probability distribution, we
have to invite some domain experts to evaluate the belief degree that each
event will happen. Perhaps some people think that the belief degree should
be modeled by subjective probability or fuzzy set theory. However, it is
usually inappropriate because both of them may lead to counterintuitive
results in this case. In order to rationally deal with belief degrees, uncertainty
theory was founded in 2007 and subsequently studied by many researchers.
Nowadays, uncertainty theory has become a branch of axiomatic mathematics
for modeling belief degrees.

Uncertain Measure

The most fundamental concept is uncertain measure, which is a type of set
function satisfying the axioms of uncertainty theory. It is used to indicate
the belief degree that an uncertain event may happen. Chapter 1 will intro-
duce normality, duality, subadditivity and product axioms. From those four
axioms, this chapter will also present uncertain measure, product uncertain
measure, and conditional uncertain measure.

Uncertain Variable

Uncertain variable is a measurable function from an uncertainty space to the
set of real numbers. It is used to represent quantities with uncertainty. Chap-
ter 2 is devoted to uncertain variable, uncertainty distribution, independence,
operational law, expected value, variance, moments, entropy, distance, con-
ditional uncertainty distribution, uncertain sequence, and uncertain vector.

Uncertain Programming

Uncertain programming is a type of mathematical programming involving
uncertain variables. Chapter 3 will provide a type of uncertain program-
ming model with applications to machine scheduling problem, vehicle routing
problem, and project scheduling problem. In addition, uncertain multiob-
jective programming, uncertain goal programming and uncertain multilevel
programming are also documented.

Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting expert’s
experimental data by uncertainty theory. Chapter 4 will present a question-
naire survey for collecting expert’s experimental data. In order to deter-
mine uncertainty distributions from those expert’s experimental data, Chap-
ter 4 will also introduce empirical uncertainty distribution, principle of least
squares, method of moments, and Delphi method.

Uncertain Risk Analysis


The term risk has been used in different ways in the literature. In this book
the risk is defined as the accidental loss plus the uncertain measure of such
loss, and a risk index is defined as the uncertain measure that some specified
loss occurs. Chapter 5 will introduce uncertain risk analysis that is a tool
to quantify risk via uncertainty theory. As applications of uncertain risk
analysis, Chapter 5 will also discuss structural risk analysis and investment
risk analysis.

Uncertain Reliability Analysis


Reliability index is defined as the uncertain measure that some system is
working. Chapter 6 will introduce uncertain reliability analysis that is a tool
to deal with system reliability via uncertainty theory.

Uncertain Propositional Logic


Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. Chapter 7 will present uncertain propositional logic and uncertain pred-
icate logic. In addition, uncertain entailment is a methodology for determin-
ing the truth value of an uncertain proposition via the maximum uncertainty
principle when the truth values of other uncertain propositions are given.
Chapter 8 will discuss an uncertain entailment model from which uncertain
modus ponens, uncertain modus tollens and uncertain hypothetical syllogism
are deduced.

Uncertain Set
Uncertain set is a set-valued function on an uncertainty space, and attempts
to model “unsharp concepts”. The main difference between uncertain set and
uncertain variable is that the former takes set values while the latter takes
point values. Uncertain set theory will be introduced in Chapter 9. In
order to determine membership functions, Chapter 9 will also provide some
methods of uncertain statistics.

Uncertain Logic
Some knowledge in the human brain is actually an uncertain set. This fact en-
courages us to design an uncertain logic that is a methodology for calculating
the truth values of uncertain propositions via uncertain set theory. Uncertain
logic may provide a flexible means for extracting linguistic summary from a
collection of raw data. Chapter 10 will be devoted to uncertain logic and
linguistic summarizer.

Uncertain Inference
Uncertain inference is a process of deriving consequences from human knowl-
edge via uncertain set theory. Chapter 11 will present a set of uncertain
inference rules, uncertain system, and uncertain control with application to
an inverted pendulum system.

Uncertain Process
An uncertain process is essentially a sequence of uncertain variables indexed
by time. Thus an uncertain process is usually used to model uncertain phe-
nomena that vary with time. Chapter 12 is devoted to basic concepts of
uncertain process and uncertainty distribution. In addition, extreme value
theorem, first hitting time and time integral of uncertain processes are also
introduced. Chapter 13 deals with uncertain renewal process, renewal reward
process, and alternating renewal process. Chapter 13 also provides block re-
placement policy, age replacement policy, and an uncertain insurance model.

Uncertain Calculus
Uncertain calculus is a branch of mathematics that deals with differentiation
and integration of uncertain processes. Chapter 14 will introduce Liu process
that is a stationary independent increment process whose increments are
normal uncertain variables, and discuss Liu integral that is a type of uncertain
integral with respect to Liu process. In addition, the fundamental theorem of
uncertain calculus will be proved in this chapter from which the techniques
of chain rule, change of variables, and integration by parts are also derived.

Uncertain Differential Equation


Uncertain differential equation is a type of differential equation involving
uncertain processes. Chapter 15 will discuss the existence, uniqueness and
stability of solutions of uncertain differential equations, and will introduce
Yao-Chen formula that represents the solution of an uncertain differential
equation by a family of solutions of ordinary differential equations. On the
basis of this formula, some formulas to calculate extreme value, first hitting
time, and time integral of solution are provided. Furthermore, some numeri-
cal methods for solving general uncertain differential equations are designed.

Uncertain Finance
As applications of uncertain differential equation, Chapter 16 will discuss
uncertain stock model, uncertain interest rate model, and uncertain currency
model.

Law of Truth Conservation


The law of excluded middle tells us that a proposition is either true or false,
and the law of contradiction tells us that a proposition cannot be both true
and false. In the state of indeterminacy, some people said, the law of excluded
middle and the law of contradiction are no longer valid because the truth
degree of a proposition is no longer 0 or 1. To a certain extent, I cannot
gainsay this viewpoint. But that does not mean you may “go as you please”.
The truth values of a proposition and its negation should sum to unity. This is
the law of truth conservation that is weaker than the law of excluded middle
and the law of contradiction. Furthermore, the law of truth conservation
agrees with the law of excluded middle and the law of contradiction when
the uncertainty vanishes.

Maximum Uncertainty Principle


An event has no uncertainty if its uncertain measure is 1 because we may
believe that the event happens. Likewise, an event has no uncertainty if its
uncertain measure is 0 because we may believe that the event does not happen. An
event is the most uncertain if its uncertain measure is 0.5 because the event
and its complement may be regarded as “equally likely”. In practice, if there
is no information about the uncertain measure of an event, we should assign
0.5 to it. Sometimes, only partial information is available. In this case, the
value of uncertain measure may be specified in some range. What value does
the uncertain measure take? For any event, if there are multiple reasonable
values that an uncertain measure may take, then the value as close to 0.5 as
possible is assigned to the event. This is the maximum uncertainty principle.
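To make the principle concrete, here is a minimal Python sketch (the function name and interface are our own illustration, not part of the original text):

    def max_uncertainty_choice(lo, hi):
        """Pick, from the admissible range [lo, hi] of values that an
        uncertain measure may take, the value closest to 0.5."""
        if lo > hi:
            raise ValueError("empty range of admissible values")
        if lo <= 0.5 <= hi:
            return 0.5                      # maximum uncertainty is attainable
        return lo if lo > 0.5 else hi       # otherwise the nearest endpoint

    print(max_uncertainty_choice(0.0, 1.0))  # no information: 0.5
    print(max_uncertainty_choice(0.6, 0.9))  # partial information: 0.6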

Matlab Uncertainty Toolbox


Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) is a col-
lection of functions built on Matlab for many methods of uncertainty theory,
including uncertain programming, uncertain statistics, uncertain risk anal-
ysis, uncertain reliability analysis, uncertain logic, uncertain inference, un-
certain differential equation, scheduling, logistics, data mining, control, and
finance.

Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.

Uncertainty Theory Online


If you want to read more papers related to uncertainty theory and applica-
tions, please visit the website at http://orsc.edu.cn/online.

Purpose
The purpose is to equip the readers with a branch of axiomatic mathematics
to deal with belief degrees. The textbook is suitable for researchers, engi-
neers, and students in the fields of mathematics, information science, opera-
tions research, industrial engineering, computer science, artificial intelligence,
automation, economics, and management science.

Acknowledgment
This work was supported by National Natural Science Foundation of China
Grant No.61273044.

Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
May 2014
To My Wife Jinlan
Chapter 0

Introduction

Real decisions are usually made in the state of indeterminacy. For model-
ing indeterminacy, there exist two mathematical systems, one is probability
theory (Kolmogorov, 1933) and the other is uncertainty theory (Liu, 2007).
Probability is interpreted as frequency, while uncertainty is interpreted as
personal belief degree.
What is indeterminacy? What is frequency? What is belief degree? This
chapter will answer these questions, and show in what situation we should use
probability theory and in what situation we should use uncertainty theory.
Finally, it is concluded that a rational man behaves as if he used uncertainty
theory.

0.1 Indeterminacy
By indeterminacy we mean the phenomena whose outcomes cannot be ex-
actly predicted in advance. For example, we cannot exactly predict which
face will appear before we toss dice. Thus “tossing dice” is a type of in-
determinate phenomenon. As another example, we cannot exactly predict
tomorrow’s stock price. That is, “stock price” is also a type of indetermi-
nate phenomenon. Some other instances of indeterminacy include “roulette
wheel”, “product lifetime”, “market demand”, “bridge strength”, “travel dis-
tance”, etc.
Indeterminacy is absolute, while determinacy is relative. This is the rea-
son why we say real decisions are usually made in the state of indeterminacy.
How to model indeterminacy is thus an important research subject in not
only mathematics but also science and engineering.
In order to describe an indeterminate quantity, personally I think there
exist only two ways: one is frequency generated by samples (i.e., historical
data), and the other is belief degree evaluated by domain experts. Could you
imagine a third way?


0.2 Frequency
Assume we have collected a set of samples for some indeterminate quantity
(e.g. stock price). By cumulative frequency we mean a function representing
the percentage of all samples that fall into the left side of the current point.
It is clear that the cumulative frequency looks like a step function in Figure 1,
and always takes bigger values as the current point moves from left to right.
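As a small illustration (ours, not the book's; the sample values are invented), the cumulative frequency can be computed from a data set as follows:

    def cumulative_frequency(samples, x):
        """Percentage of all samples falling to the left of the point x."""
        return sum(s <= x for s in samples) / len(samples)

    prices = [92, 95, 97, 97, 100, 103, 105, 110]   # hypothetical stock prices
    for x in (90, 97, 120):
        print(x, cumulative_frequency(prices, x))   # 0.0, 0.5, 1.0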
[Figure 1: Cumulative frequency histogram]

Frequency is a factual property of an indeterminate quantity, and does not
change with our state of knowledge and preference. In other words, the
frequency in the long run exists and is relatively invariant, whether or not
it is observed by us.

Probability theory is applicable when samples are available


The study of probability theory was started by Pascal and Fermat in the
17th century when they succeeded in deriving the exact probabilities for
certain gambling problems. After that, probability theory was studied by
many researchers. Particularly, a complete axiomatic foundation of proba-
bility theory was successfully given by Kolmogorov [88] in 1933. Since then,
probability theory has been developed steadily and widely applied in science
and engineering.
Keep in mind that a fundamental premise of applying probability theory
is that the estimated probability distribution is close enough to the long-run
cumulative frequency. Otherwise, the law of large numbers is no longer valid
and probability theory is no longer applicable.
When the sample size is large enough, it is possible for us to believe the
estimated probability distribution is close enough to the long-run cumulative
frequency. In this case, there is no doubt that probability theory is the only
legitimate approach to deal with our problems on the basis of the estimated
probability distributions.
However, in many cases, no samples are available to estimate a probability
distribution. What can we do in this situation? Perhaps we have no choice
but to invite some domain experts to evaluate the belief degree that each
event will happen.

0.3 Belief Degree

Belief degrees are familiar to all of us. The object of belief is an event (i.e.,
a proposition). For example, “the sun will rise tomorrow”, “it will be sunny
next week”, and “John is a young man” are all instances of object of belief.
A belief degree represents the strength with which we believe the event will
happen. If we completely believe the event will happen, then the belief degree
is 1 (complete belief). If we think it is completely impossible, then the belief
degree is 0 (complete disbelief). If the event and its complementary event
are equally likely, then the belief degree for the event is 0.5, and that for the
complementary event is also 0.5. Generally, we will assign a number between
0 and 1 to the belief degree for each event. The higher the belief degree is,
the more strongly we believe the event will happen.
Assume a box contains 100 balls, each of which is known to be either red
or black, but we do not know how many of the balls are red and how many
are black. In this case, it is impossible for us to determine the probability of
drawing a red ball. However, the belief degree can be evaluated by us. For
example, the belief degree for drawing a red ball is 0.5 because “drawing a
red ball” and “drawing a black ball” are equally likely. Besides, the belief
degree for drawing a black ball is also 0.5.
The belief degree depends heavily on the personal knowledge (even includ-
ing preference) concerning the event. When the personal knowledge changes,
the belief degree changes too.

Belief Degree Function

How do we describe an indeterminate quantity (e.g. bridge strength)? It is
clear that a single belief degree is absolutely not enough. Do we need to know
the belief degrees for all possible events? The answer is negative. In fact,
what we need is a belief degree function that represents the degree with which
we believe the indeterminate quantity falls into the left side of the current
point.
For example, if we believe the indeterminate quantity completely falls
into the left side of the current point, then the belief degree function takes
value 1; if we think it completely falls into the right side, then the belief
degree function takes value 0. Generally, a belief degree function takes values
between 0 and 1, and has bigger values as the current point moves from the
left to right. See Figure 2.

[Figure 2: Belief degree function]

How to obtain belief degrees


Consider a bridge and its strength. At first, we have to admit that no destruc-
tive experiment is allowed for the bridge. Thus we have no samples about
the bridge strength. In this case, there do not exist any statistical methods
to estimate its probability distribution. How do we deal with it? It seems
that we have no choice but to invite some bridge engineers to evaluate the
belief degrees about the bridge strength. In practice, it is almost impossible
for the bridge engineers to give a perfect description of the belief degrees of
all possible events. Instead, they can only provide some subjective judgments
about the bridge strength. As a simple example, we assume a consultation
process is as follows:
(Q) What do you think is the bridge strength?
(A) I think the bridge strength is between 80 and 120 tons.
What belief degrees can we derive from the answer of the bridge engineer?
First, we may have an inference:
(i) I am 100% sure that the bridge strength is less than 120 tons.
This means the belief degree of “the bridge strength being less than 120 tons”
is 1. Thus we have an expert’s experimental data (120, 1). Furthermore, we
may have another inference:
(ii) I am 100% sure that the bridge strength is greater than 80 tons.
This statement gives a belief degree that the bridge strength falls into the
right side of 80 tons. We need to translate it to a statement about the belief
degree that the bridge strength falls into the left side of 80 tons:
(ii′) I am 0% sure that the bridge strength is less than 80 tons.
Although the statement (ii′) sounds strange to us, it is indeed equivalent to
the statement (ii). Thus we have another expert’s experimental data (80, 0).
Until now we have acquired two expert’s experimental data (80, 0) and
(120, 1) about the bridge strength. Could we infer the belief degree Φ(x)
that the bridge strength falls into the left side of the point x? The answer is
affirmative. For example, a reasonable value is

               ⎧ 0,             if x < 80
        Φ(x) = ⎨ (x − 80)/40,   if 80 ≤ x ≤ 120                    (1)
               ⎩ 1,             if x > 120.

See Figure 3. From the function Φ(x), we may infer that the belief degree
of “the bridge strength being less than 90 tons” is 0.25. In other words, it is
reasonable to infer that “I am 25% sure that the bridge strength is less than
90 tons”, or equivalently “I am 75% sure that the bridge strength is greater
than 90 tons”.
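The belief degree function (1) and the inferences drawn from it can be reproduced in a few lines of Python (a sketch of ours, not the book's code):

    def Phi(x):
        """Belief degree function (1) fitted to the expert's
        experimental data (80, 0) and (120, 1)."""
        if x < 80:
            return 0.0
        if x <= 120:
            return (x - 80) / 40
        return 1.0

    print(Phi(90))      # 0.25: 25% sure the strength is below 90 tons
    print(1 - Phi(90))  # 0.75: 75% sure the strength exceeds 90 tons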
[Figure 3: Belief degree function of “the bridge strength”]

All belief degrees are wrong, but some are useful


Different people may produce different belief degrees. Perhaps some readers
may ask which belief degree is correct. I would like to answer it in this way:
All belief degrees are wrong, but some are useful. A belief degree becomes
“correct” only when it is close enough to the frequency of the indeterminate
quantity. However, we can rarely achieve that.
Numerous surveys showed that human beings usually estimate a much
wider range of values than the object actually takes. This conservatism of
human beings makes the belief degrees deviate far from the frequency. Thus
all belief degrees are wrong compared with the frequency. However, it cannot
be denied that those belief degrees are indeed helpful for decision making.

Belief degrees cannot be treated as subjective probability


Can we deal with belief degrees by probability theory? Some people do think
so and call it subjective probability. However, Liu [131] declared that it is
inappropriate to model belief degrees by probability theory because it may
lead to counterintuitive results.

Consider a counterexample presented by Liu [131]. Assume there is one
truck and 50 bridges in an experiment. Also assume the weight of the truck
is 90 tons and the 50 bridge strengths are iid uniform random variables on
[95, 110] in tons. For simplicity, suppose a bridge collapses whenever its real
strength is less than the weight of the truck. Now let us have the truck cross
over the 50 bridges one by one. It is easy to verify that
Pr{“the truck can cross over the 50 bridges”} = 1. (2)
That is to say, we are 100% sure that the truck can cross over the 50 bridges
successfully.
[Figure 4: Belief degree function, “true” probability distribution, and cumulative frequency histogram of “the bridge strength”]

However, when there do not exist any observed samples for the bridge
strength at the moment, we have to invite some bridge engineers to evaluate
the belief degrees about it. As we stated before, human beings usually esti-
mate a much wider range of values than the bridge strength actually takes
because of the conservatism. Assume the belief degree function is

               ⎧ 0,             if x < 80
        Φ(x) = ⎨ (x − 80)/40,   if 80 ≤ x ≤ 120                    (3)
               ⎩ 1,             if x > 120.

See Figure 4. Let us imagine what will happen if the belief degree function
is treated as a probability distribution. At first, we have to regard the 50
bridge strengths as iid uniform random variables on [80, 120] in tons. If we
have the truck cross over the 50 bridges one by one, then we immediately
have
Pr{“the truck can cross over the 50 bridges”} = 0.75^50 ≈ 0. (4)
Thus it is almost impossible that the truck crosses over the 50 bridges suc-
cessfully. Unfortunately, the results (2) and (4) are at opposite poles. This
example shows that, by inappropriately using probability theory, a sure event
becomes an impossible one. The error seems intolerable for us. Hence the
belief degrees cannot be treated as subjective probability.
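The contrast between (2) and (4) is easy to reproduce numerically; the following sketch (our own illustration) checks the misreading both by direct arithmetic and by simulation:

    import random

    # True model: 50 iid strengths uniform on [95, 110]; the truck weighs
    # 90 tons, so every bridge holds and the crossing surely succeeds, as in (2).

    # Misreading (3) as a probability distribution: each strength is uniform
    # on [80, 120], so a single bridge holds with probability 0.75.
    print(0.75 ** 50)  # about 5.7e-7, i.e. (4): practically impossible

    # Monte Carlo check of the misreading
    trials = 100_000
    ok = sum(all(random.uniform(80, 120) > 90 for _ in range(50))
             for _ in range(trials))
    print(ok / trials)  # essentially 0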
A possible proposition cannot be judged impossible


During information processing, we should follow such a basic principle that a
possible proposition cannot be judged impossible (Liu [131]). In other words,
if a proposition is possibly true, then its truth value should not be zero.
Likewise, if a proposition is possibly false, then its truth value should not be
unity.
In the example of truck-cross-over-bridge, a completely true proposition
is judged completely false. This means using probability theory violates the
above-mentioned principle, and therefore probability theory is not appropri-
ate to model belief degrees. In other words, belief degrees do not follow the
laws of probability theory.

Uncertainty theory is able to model belief degrees


In order to rationally deal with belief degrees, uncertainty theory was founded
by Liu [122] in 2007 and subsequently studied by many researchers. Nowa-
days, uncertainty theory has become a branch of axiomatic mathematics for
modeling belief degrees.
Liu [131] declared that uncertainty theory is the only legitimate approach
when only belief degrees are available. If we believe the estimated uncertainty
distribution is close enough to the belief degrees hidden in the mind of the
domain experts, then we may use uncertainty theory to deal with our own
problems on the basis of the estimated uncertainty distributions.
Let us reconsider the example of truck-cross-over-bridge by uncertainty
theory. If the belief degree function is regarded as a linear uncertainty dis-
tribution on [80, 120] in tons, then we immediately have

M{“the truck can cross over the 50 bridges”} = 0.75. (5)

That is to say, we are 75% sure that the truck can cross over the 50 bridges
successfully. Here the degree 75% does not reach the true value 100%.
But the error is caused by the difference between belief degree and frequency,
and is not further magnified by uncertainty theory.
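A sketch of the same computation (ours; it assumes the minimum operational law for independent events, a consequence of the product axiom introduced in Chapter 1):

    def Phi(x, a=80.0, b=120.0):
        """Linear uncertainty distribution on [a, b]."""
        return min(max((x - a) / (b - a), 0.0), 1.0)

    # M{strength_i > 90} = 1 - Phi(90) = 0.75 for each bridge. For
    # independent events, the product axiom yields the minimum of the
    # individual measures, which gives 0.75 as in (5).
    print(min(1 - Phi(90) for _ in range(50)))  # 0.75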

0.4 Summary
In order to model indeterminacy, many theories have been invented. What
theories are considered acceptable? Personally, I think that an acceptable
theory should be not only theoretically self-consistent but also the best among
others for solving at least one practical problem. On the basis of this principle,
I may conclude that there exist two mathematical systems, one is probability
theory and the other is uncertainty theory. It is emphasized that probability
theory is only applicable to modeling frequencies, and uncertainty theory
is only applicable to modeling belief degrees. In other words, frequency is
the empirical basis of probability theory, while belief degree is the empirical
basis of uncertainty theory. Keep in mind that using uncertainty theory to
model frequency may produce a crude result, while using probability theory
to model belief degree may produce a big disaster.

[Figure 5: two panels labeled “Probability” and “Uncertainty”]

Figure 5: When the sample size is large enough, the estimated probability
distribution (left curve) may be close enough to the cumulative frequency (left
histogram). In this case, probability theory is the only legitimate approach.
When the belief degrees are available (no samples), the estimated uncertainty
distribution (right curve) usually deviates far from the cumulative frequency
(right histogram but unknown). In this case, uncertainty theory is the only
legitimate approach.

However, a single-variable system is an exception. When there exists one
and only one variable in a system, probability theory and uncertainty theory
will produce the same result because product measure is not used. In this
case, frequency may be modeled by uncertainty theory while belief degree
may be modeled by probability theory. The two treatments are then equivalent.
Since belief degrees are usually wrong compared with frequency, the gap
between belief degree and frequency always exists. Such an error is likely to
be further magnified if the belief degree is regarded as subjective probability.
Fortunately, uncertainty theory can successfully avoid turning small errors
to large ones.
Savage [203] said a rational man behaves as if he used subjective proba-
bilities. However, usually we cannot achieve that. Personally, I think a
rational man behaves as if he used uncertainty theory. In other words, a ratio-
nal man is expected to hold belief degrees that follow the laws of uncertainty
theory rather than probability theory.
Chapter 1

Uncertain Measure

Uncertainty theory was founded by Liu [122] in 2007 and subsequently studied
by many researchers. Nowadays uncertainty theory has become a branch of
axiomatic mathematics for modeling belief degrees. This chapter will present
normality, duality, subadditivity and product axioms of uncertainty theory.
From those four axioms, this chapter will also introduce an uncertain measure
that is a fundamental concept in uncertainty theory. In addition, product
uncertain measure and conditional uncertain measure will be explored at the
end of this chapter.

1.1 Measurable Space

From the mathematical viewpoint, uncertainty theory is essentially an
alternative theory of measure. Thus uncertainty theory should begin with a
measurable space. In order to learn uncertainty theory, let us introduce al-
gebra, σ-algebra, measurable set, Borel algebra, Borel set, and measurable
function. The main results in this section are well-known. For this reason
the credit references are not provided. You may skip this section if you are
familiar with them.

Definition 1.1 Let Γ be a nonempty set (sometimes called universal set).
A collection L consisting of subsets of Γ is called an algebra over Γ if the
following three conditions hold: (a) Γ ∈ L; (b) if Λ ∈ L, then Λᶜ ∈ L; and
(c) if Λ1, Λ2, · · · , Λn ∈ L, then

    Λ1 ∪ Λ2 ∪ · · · ∪ Λn ∈ L.                                      (1.1)

The collection L is called a σ-algebra over Γ if the condition (c) is replaced

with closure under countable union, i.e., when Λ1, Λ2, · · · ∈ L, we have

    Λ1 ∪ Λ2 ∪ · · · ∈ L.                                           (1.2)

Example 1.1: The collection {∅, Γ} is the smallest σ-algebra over Γ, and
the power set (i.e., all subsets of Γ) is the largest σ-algebra.

Example 1.2: Let Λ be a proper nonempty subset of Γ. Then {∅, Λ, Λᶜ, Γ}
is a σ-algebra over Γ.

Example 1.3: Let L be the collection of all finite disjoint unions of all
intervals of the form

(−∞, a], (a, b], (b, ∞), ∅. (1.3)

Then L is an algebra over ℜ (the set of real numbers), but not a σ-algebra
because Λi = (0, (i − 1)/i] ∈ L for all i but

    Λ1 ∪ Λ2 ∪ · · · = (0, 1) ∉ L.                                  (1.4)

Example 1.4: A σ-algebra L is closed under countable union, countable
intersection, difference, and limit. That is, if Λ1, Λ2, · · · ∈ L, then

    Λ1 ∪ Λ2 ∪ · · · ∈ L;   Λ1 ∩ Λ2 ∩ · · · ∈ L;   Λ1 \ Λ2 ∈ L;   lim_{i→∞} Λi ∈ L.   (1.5)

Definition 1.2 Let Γ be a nonempty set, and let L be a σ-algebra over Γ.
Then (Γ, L) is called a measurable space, and any element in L is called a
measurable set.

Example 1.5: Let ℜ be the set of real numbers. Then L = {∅, ℜ} is a
σ-algebra over ℜ. Thus (ℜ, L) is a measurable space. Note that there exist
only two measurable sets in this space, one is ∅ and the other is ℜ. Keep in
mind that the intervals like [0, 1] and (0, +∞) are not measurable!

Example 1.6: Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra
over Γ. Thus (Γ, L) is a measurable space. Furthermore, {a} and {b, c} are
measurable sets in this space, but {b}, {c}, {a, b}, {a, c} are not.
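For a finite universal set, the conditions of Definition 1.1 can be checked by brute force. A small sketch (ours) verifying Example 1.6:

    from itertools import combinations

    Gamma = frozenset({'a', 'b', 'c'})
    L = {frozenset(), frozenset({'a'}), frozenset({'b', 'c'}), Gamma}

    def is_sigma_algebra(L, Gamma):
        """On a finite collection, closure under pairwise union already
        implies closure under countable union."""
        if Gamma not in L:
            return False                        # condition (a)
        if any(Gamma - A not in L for A in L):
            return False                        # condition (b): complements
        return all(A | B in L for A, B in combinations(L, 2))  # condition (c)

    print(is_sigma_algebra(L, Gamma))  # True: (Gamma, L) is a measurable space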

Definition 1.3 The smallest σ-algebra B containing all open intervals is
called the Borel algebra over the set of real numbers, and any element in B
is called a Borel set.

Example 1.7: It has been proved that intervals, open sets, closed sets,
rational numbers, and irrational numbers are all Borel sets.

Example 1.8: There exists a non-Borel set over ℜ. Let [a] represent the set
of all rational numbers plus a. Note that if a1 − a2 is not a rational number,
then [a1] and [a2] are disjoint sets. Thus ℜ is divided into an infinite number
of those disjoint sets. Let A be a new set containing precisely one element
from them. Then A is not a Borel set.

Definition 1.4 A function f from a measurable space (Γ, L) to the set of
real numbers is said to be measurable if

    f⁻¹(B) = {γ ∈ Γ | f(γ) ∈ B} ∈ L                                (1.6)

for any Borel set B of real numbers.

Continuous functions and monotone functions are instances of measurable
functions. Let f1, f2, · · · be a sequence of measurable functions. Then the
following functions are also measurable:

    sup_{1≤i<∞} fi(γ);   inf_{1≤i<∞} fi(γ);   limsup_{i→∞} fi(γ);   liminf_{i→∞} fi(γ).   (1.7)

Especially, if lim_{i→∞} fi(γ) exists for each γ, then the limit is also a
measurable function.

1.2 Event
Let (Γ, L) be a measurable space. Recall that each element Λ in L is called
a measurable set. The first action we take is to rename measurable set as
event in uncertainty theory.
How do we understand those terminologies? Let us illustrate them by an
indeterminate quantity (e.g. bridge strength). At first, the universal set Γ
consists of all possible outcomes of the indeterminate quantity. If we believe
that the possible bridge strengths range from 80 to 120 in tons, then the
universal set is
Γ = [80, 120]. (1.8)
Note that you may replace the universal set with an enlarged interval, and
it would have no impact.
The σ-algebra L should contain all events we are concerned about. Note
that event and proposition are synonymous although the former is a set and
the latter is a statement. Assume the first event we are concerned about
corresponds to the proposition “the bridge strength is less than or equal to
100 tons”. Then it may be represented by

Λ1 = [80, 100]. (1.9)

Also assume the second event we are concerned about corresponds to the
proposition “the bridge strength is more than 100 tons”. Then it may be
represented by
Λ2 = (100, 120]. (1.10)
If we are only concerned about the above two events, then we may construct
a σ-algebra L containing the two events Λ1 and Λ2 , for example,

L = {∅, Λ1 , Λ2 , Γ}. (1.11)

In this case, we have four events in total: ∅, Λ1, Λ2 and Γ. However, please


note that the subsets like [80, 90] and [110, 120] are not events because they
do not belong to L.
Keep in mind that different σ-algebras are used for different purposes.
The minimum requirement of a σ-algebra is that it contains all events we
are concerned about. It is suggested to take the minimum σ-algebra that
contains those events.

1.3 Uncertain Measure


Let us define an uncertain measure M on the σ-algebra L. That is, a number
M{Λ} will be assigned to each event Λ to indicate the belief degree with
which we believe Λ will happen. There is no doubt that the assignment is
not arbitrary, and the uncertain measure M must have certain mathematical
properties. In order to rationally deal with belief degrees, Liu [122] suggested
the following three axioms:
Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.
Axiom 2. (Duality Axiom) M{Λ} + M{Λᶜ} = 1 for any event Λ.
Axiom 3. (Subadditivity Axiom) For every countable sequence of events
Λ1, Λ2, · · · , we have

    M{Λ1 ∪ Λ2 ∪ · · · } ≤ M{Λ1} + M{Λ2} + · · · .                  (1.12)

Remark 1.1: Uncertain measure is interpreted as the personal belief degree


(not frequency) of an uncertain event that may happen. It depends on the
personal knowledge concerning the event. The uncertain measure will change
if the state of knowledge changes.

Remark 1.2: Duality axiom is in fact an application of the law of truth


conservation in uncertainty theory. The property ensures that the uncer-
tainty theory is consistent with the law of excluded middle and the law of
contradiction. In addition, the human thinking is always dominated by the
duality. For example, if someone says a proposition is true with belief degree
0.6, then all of us will think that the proposition is false with belief degree
0.4.

Remark 1.3: Given two events with known belief degrees, it is frequently asked how the belief degree of their union is generated from those of the individuals. Personally, I do not think there exists any rule to make it. A lot of surveys showed that, generally speaking, the belief degree of a union of events is neither the sum of the belief degrees of the individual events (e.g. probability measure) nor their maximum (e.g. possibility measure). Perhaps there is no explicit relation between the union and the individuals except for the subadditivity axiom.

Remark 1.4: Pathology occurs if subadditivity axiom is not assumed. For


example, suppose that a universal set contains 3 elements. We define a set
function that takes value 0 for each singleton, and 1 for each event with at
least 2 elements. Then such a set function satisfies all axioms but subaddi-
tivity. Do you think it is strange if such a set function serves as a measure?

Remark 1.5: Although probability measure satisfies the above three axioms,
probability theory is not a special case of uncertainty theory because the
product probability measure does not satisfy the fourth axiom, namely the
product axiom on Page 17.

Definition 1.5 (Liu [122]) The set function M is called an uncertain mea-
sure if it satisfies the normality, duality, and subadditivity axioms.

Exercise 1.1: Let Γ = {γ1 , γ2 , γ3 }. It is clear that there exist 8 events in


the σ-algebra

L = {∅, {γ1 }, {γ2 }, {γ3 }, {γ1 , γ2 }, {γ1 , γ3 }, {γ2 , γ3 }, Γ}. (1.13)

Assume c1, c2, c3 are nonnegative numbers satisfying the consistency condition

c_i + c_j \le 1 \le c_1 + c_2 + c_3, \quad \forall\, i \ne j.    (1.14)
Define
M{γ1 } = c1 , M{γ2 } = c2 , M{γ3 } = c3 ,
M{γ1 , γ2 } = 1 − c3 , M{γ1 , γ3 } = 1 − c2 , M{γ2 , γ3 } = 1 − c1 ,
M{∅} = 0, M{Γ} = 1.
Show that M is an uncertain measure.
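The claim in Exercise 1.1 can be checked mechanically. The following Python sketch is an illustration of mine, not part of the original text; the values c1 = 0.3, c2 = 0.4, c3 = 0.5 are an arbitrary choice satisfying the consistency condition (1.14). It enumerates all eight events of (1.13) and verifies the normality, duality, and subadditivity axioms directly.

    from itertools import chain, combinations

    # Arbitrary belief degrees satisfying ci + cj <= 1 <= c1 + c2 + c3, i.e. (1.14)
    c = {"g1": 0.3, "g2": 0.4, "g3": 0.5}
    universe = frozenset(c)

    def measure(event):
        """The set function of Exercise 1.1 on subsets of {g1, g2, g3}."""
        event = frozenset(event)
        if len(event) <= 1:              # M{empty} = 0 and M{gi} = ci
            return sum(c[g] for g in event)
        if len(event) == 2:              # M{gi, gj} = 1 - ck
            (missing,) = universe - event
            return 1.0 - c[missing]
        return 1.0                       # M{Gamma} = 1

    events = [frozenset(s) for s in chain.from_iterable(
        combinations(universe, r) for r in range(4))]

    assert measure(universe) == 1.0                                   # normality
    for ev in events:                                                 # duality
        assert abs(measure(ev) + measure(universe - ev) - 1.0) < 1e-12
    for a in events:                                                  # subadditivity
        for b in events:
            assert measure(a | b) <= measure(a) + measure(b) + 1e-12
    print("normality, duality and subadditivity all hold")

Since the space is finite, checking subadditivity for pairs of events suffices; the countable case then follows by induction.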

Exercise 1.2: Suppose that λ(x) is a nonnegative function on ℜ (the set of real numbers) such that

\sup_{x\in\Re} \lambda(x) = 0.5.    (1.15)

Define a set function

M\{\Lambda\} = \begin{cases} \displaystyle\sup_{x\in\Lambda} \lambda(x), & \text{if } \displaystyle\sup_{x\in\Lambda} \lambda(x) < 0.5 \\[1ex] 1 - \displaystyle\sup_{x\in\Lambda^c} \lambda(x), & \text{if } \displaystyle\sup_{x\in\Lambda} \lambda(x) = 0.5 \end{cases}    (1.16)

for each Borel set Λ. Show that M is an uncertain measure on ℜ.

Exercise 1.3: Suppose ρ(x) is a nonnegative and integrable function on ℜ (the set of real numbers) such that

\int_{\Re} \rho(x)\,dx \ge 1.    (1.17)

Define a set function

M\{\Lambda\} = \begin{cases} \displaystyle\int_{\Lambda} \rho(x)\,dx, & \text{if } \displaystyle\int_{\Lambda} \rho(x)\,dx < 0.5 \\[1ex] 1 - \displaystyle\int_{\Lambda^c} \rho(x)\,dx, & \text{if } \displaystyle\int_{\Lambda^c} \rho(x)\,dx < 0.5 \\[1ex] 0.5, & \text{otherwise} \end{cases}    (1.18)

for each Borel set Λ. Show that M is an uncertain measure on ℜ.
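As a numerical companion to Exercise 1.3, the sketch below (mine, not from the book; it assumes SciPy is available, and the density ρ(x) = e^{−x} on [0, ∞), whose total integral is 1, is an arbitrary choice) evaluates the set function (1.18) for interval events by numerical integration.

    import math
    from scipy.integrate import quad

    # Arbitrary nonnegative density with total integral 1, so (1.17) holds
    def rho(x):
        return math.exp(-x) if x >= 0 else 0.0

    def measure_interval(a, b):
        """Uncertain measure (1.18) of the event [a, b] under the density rho."""
        inside = quad(rho, a, b)[0]
        if inside < 0.5:
            return inside
        # The complement of [a, b] is (-inf, a) union (b, +inf)
        outside = quad(rho, -math.inf, a)[0] + quad(rho, b, math.inf)[0]
        if outside < 0.5:
            return 1.0 - outside
        return 0.5

    print(measure_interval(0.0, 0.2))   # a small event: its own integral
    print(measure_interval(0.0, 5.0))   # a large event: 1 minus the complement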

Theorem 1.1 (Monotonicity Theorem) Uncertain measure M is a monotone increasing set function. That is, for any events Λ1 ⊂ Λ2, we have

M{Λ1 } ≤ M{Λ2 }. (1.19)

Proof: The normality axiom says M{Γ} = 1, and the duality axiom says
M{Λc1 } = 1 − M{Λ1 }. Since Λ1 ⊂ Λ2 , we have Γ = Λc1 ∪ Λ2 . By using the
subadditivity axiom, we obtain

1 = M{Γ} ≤ M{Λc1 } + M{Λ2 } = 1 − M{Λ1 } + M{Λ2 }.

Thus M{Λ1 } ≤ M{Λ2 }.

Theorem 1.2 Suppose that M is an uncertain measure. Then the empty set
∅ has an uncertain measure zero, i.e.,

M{∅} = 0. (1.20)

Proof: Since ∅ = Γc and M{Γ} = 1, it follows from the duality axiom that

M{∅} = 1 − M{Γ} = 1 − 1 = 0.

Theorem 1.3 Suppose that M is an uncertain measure. Then for any event
Λ, we have
0 ≤ M{Λ} ≤ 1. (1.21)

Proof: It follows from the monotonicity theorem that 0 ≤ M{Λ} ≤ 1 because


∅ ⊂ Λ ⊂ Γ and M{∅} = 0, M{Γ} = 1.
Theorem 1.4 Let Λ1, Λ2, · · · be a sequence of events with M{Λi} → 0 as i → ∞. Then for any event Λ, we have

\lim_{i\to\infty} M\{\Lambda \cup \Lambda_i\} = \lim_{i\to\infty} M\{\Lambda \setminus \Lambda_i\} = M\{\Lambda\}.    (1.22)

Especially, an uncertain measure remains unchanged if the event is enlarged


or reduced by an event with uncertain measure zero.
Proof: It follows from the monotonicity theorem and subadditivity axiom
that
M{Λ} ≤ M{Λ ∪ Λi } ≤ M{Λ} + M{Λi }
for each i. Thus we get M{Λ ∪ Λi } → M{Λ} by using M{Λi } → 0. Since
(Λ\Λi ) ⊂ Λ ⊂ ((Λ\Λi ) ∪ Λi ), we have
M{Λ\Λi } ≤ M{Λ} ≤ M{Λ\Λi } + M{Λi }.
Hence M{Λ\Λi } → M{Λ} by using M{Λi } → 0.
Theorem 1.5 (Asymptotic Theorem) For any events Λ1, Λ2, · · · , we have

\lim_{i\to\infty} M\{\Lambda_i\} > 0, \quad \text{if } \Lambda_i \uparrow \Gamma,    (1.23)

\lim_{i\to\infty} M\{\Lambda_i\} < 1, \quad \text{if } \Lambda_i \downarrow \emptyset.    (1.24)

Proof: Assume Λi ↑ Γ. Since Γ = ∪i Λi, it follows from the subadditivity axiom that

1 = M\{\Gamma\} \le \sum_{i=1}^{\infty} M\{\Lambda_i\}.

Since M{Λi} is increasing with respect to i, we have lim_{i→∞} M{Λi} > 0. If Λi ↓ ∅, then Λi^c ↑ Γ. It follows from the first inequality and the duality axiom that

\lim_{i\to\infty} M\{\Lambda_i\} = 1 - \lim_{i\to\infty} M\{\Lambda_i^c\} < 1.

The theorem is proved.

Example 1.9: Assume Γ is the set of real numbers. Let α be a number with 0 < α ≤ 0.5. Define a set function as follows,

M\{\Lambda\} = \begin{cases} 0, & \text{if } \Lambda = \emptyset \\ \alpha, & \text{if } \Lambda \text{ is upper bounded} \\ 0.5, & \text{if both } \Lambda \text{ and } \Lambda^c \text{ are upper unbounded} \\ 1-\alpha, & \text{if } \Lambda^c \text{ is upper bounded} \\ 1, & \text{if } \Lambda = \Gamma. \end{cases}    (1.25)

It is easy to verify that M is an uncertain measure. Write Λi = (−∞, i] for i = 1, 2, · · · Then Λi ↑ Γ and lim_{i→∞} M{Λi} = α. Furthermore, we have Λi^c ↓ ∅ and lim_{i→∞} M{Λi^c} = 1 − α.

1.4 Uncertainty Space


Definition 1.6 (Liu [122]) Let Γ be a nonempty set, let L be a σ-algebra
over Γ, and let M be an uncertain measure. Then the triplet (Γ, L, M) is
called an uncertainty space.

For practical purposes, the study of uncertainty spaces is sometimes restricted to complete uncertainty spaces.

Definition 1.7 An uncertainty space (Γ, L, M) is called complete if for any


Λ1 , Λ2 ∈ L with M{Λ1 } = M{Λ2 } and any subset A with Λ1 ⊂ A ⊂ Λ2 , one
has A ∈ L. In this case, we also have

M{A} = M{Λ1 } = M{Λ2 }. (1.26)

Exercise 1.4: Let (Γ, L, M) be a complete uncertainty space, and let Λ be


an event with M{Λ} = 0. Show that A is an event and M{A} = 0 whenever
A ⊂ Λ.

Exercise 1.5: Let (Γ, L, M) be a complete uncertainty space, and let Λ be


an event with M{Λ} = 1. Show that A is an event and M{A} = 1 whenever
A ⊃ Λ.

Definition 1.8 (Gao [48]) An uncertainty space (Γ, L, M) is called continuous if for any events Λ1, Λ2, · · · , we have

M\left\{\lim_{i\to\infty} \Lambda_i\right\} = \lim_{i\to\infty} M\{\Lambda_i\}    (1.27)

provided that lim_{i→∞} Λi exists.

Exercise 1.6: Let (Γ, L, M) be a continuous uncertainty space. For any events Λ1, Λ2, · · · , show that

\lim_{i\to\infty} M\{\Lambda_i\} = 1, \quad \text{if } \Lambda_i \uparrow \Gamma,    (1.28)

\lim_{i\to\infty} M\{\Lambda_i\} = 0, \quad \text{if } \Lambda_i \downarrow \emptyset.    (1.29)

1.5 Product Uncertain Measure


Product uncertain measure was defined by Liu [125] in 2009, thus producing
the fourth axiom of uncertainty theory. Let (Γk , Lk , Mk ) be uncertainty
spaces for k = 1, 2, · · · Write

Γ = Γ1 × Γ2 × · · · (1.30)

that is the set of all ordered tuples of the form (γ1 , γ2 , · · · ), where γk ∈ Γk
for k = 1, 2, · · · A measurable rectangle in Γ is a set

Λ = Λ1 × Λ2 × · · · (1.31)

where Λk ∈ Lk for k = 1, 2, · · · The smallest σ-algebra containing all mea-


surable rectangles of Γ is called the product σ-algebra, denoted by

L = L1 × L2 × · · · (1.32)

Then the product uncertain measure M on the product σ-algebra L is defined by the following product axiom (Liu [125]).

Axiom 4. (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, · · · The product uncertain measure M is an uncertain measure satisfying

M\left\{\prod_{k=1}^{\infty} \Lambda_k\right\} = \bigwedge_{k=1}^{\infty} M_k\{\Lambda_k\}    (1.33)

where Λk are arbitrarily chosen events from Lk for k = 1, 2, · · · , respectively.

Remark 1.6: Note that (1.33) defines a product uncertain measure only for rectangles. How do we extend the uncertain measure M from the class of rectangles to the product σ-algebra L? For each event Λ ∈ L, we have

M\{\Lambda\} = \begin{cases} \displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\}, & \text{if } \displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5 \\[2ex] 1 - \displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\}, & \text{if } \displaystyle\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5 \\[2ex] 0.5, & \text{otherwise.} \end{cases}    (1.34)

Remark 1.7: Note that the sum of the uncertain measures of the maximum rectangles in Λ and Λc is always less than or equal to 1, i.e.,

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} + \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \le 1.

This means that at most one of the two suprema is greater than 0.5. Thus the expression (1.34) is reasonable.
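To make the extension (1.34) concrete, the following sketch (my illustration, not part of the original text; the two finite spaces and their measures are made-up numbers obeying duality) computes M{Λ} for an event Λ in the product of two finite uncertainty spaces by enumerating every rectangle contained in Λ and in Λc.

    from itertools import chain, combinations

    def subsets(universe):
        """All subsets of a finite set, as frozensets."""
        items = list(universe)
        return [frozenset(s) for s in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))]

    def best_rectangle(event, G1, G2, m1, m2):
        """sup of min(M1{A}, M2{B}) over nonempty rectangles A x B inside event."""
        best = 0.0
        for A in subsets(G1):
            for B in subsets(G2):
                if A and B and all((a, b) in event for a in A for b in B):
                    best = max(best, min(m1[A], m2[B]))
        return best

    def product_measure(event, G1, G2, m1, m2):
        """The extension (1.34), specialized to the product of two finite spaces."""
        inner = best_rectangle(event, G1, G2, m1, m2)
        if inner > 0.5:
            return inner
        complement = {(a, b) for a in G1 for b in G2} - set(event)
        inner_c = best_rectangle(complement, G1, G2, m1, m2)
        if inner_c > 0.5:
            return 1.0 - inner_c
        return 0.5

    G1, G2 = frozenset(["g1", "g2"]), frozenset(["d1", "d2"])
    m1 = {frozenset(): 0.0, frozenset(["g1"]): 0.7, frozenset(["g2"]): 0.3, G1: 1.0}
    m2 = {frozenset(): 0.0, frozenset(["d1"]): 0.6, frozenset(["d2"]): 0.4, G2: 1.0}
    L_shape = {("g1", "d1"), ("g1", "d2"), ("g2", "d1")}  # an L-shaped event
    print(product_measure(L_shape, G1, G2, m1, m2))       # 0.7

Here the maximum rectangle in the L-shaped event is {g1} × Γ2 with measure 0.7 > 0.5, while the complement {(g2, d2)} has maximum rectangle measure 0.3, so duality holds.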


Figure 1.1: Extension from Rectangles to Product σ-Algebra. The uncertain measure of Λ (the disk) is essentially the uncertain measure of its maximum inscribed rectangle Λ1 × Λ2 if that value is greater than 0.5. Otherwise, we have to examine the complement Λc: if the maximum inscribed rectangle of Λc has uncertain measure greater than 0.5, then M{Λc} is just that value and M{Λ} = 1 − M{Λc}. If neither Λ nor Λc contains an inscribed rectangle with uncertain measure greater than 0.5, then we set M{Λ} = 0.5. Reprinted from Liu [129].

Remark 1.8: If the sum of the uncertain measures of the maximum rectangles in Λ and Λc is exactly 1, i.e.,

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} + \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} = 1,

then the product uncertain measure (1.34) simplifies to

M\{\Lambda\} = \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\}.    (1.35)

Theorem 1.6 (Peng and Iwamura [185]) The product uncertain measure
defined by (1.34) is an uncertain measure.

Proof: In order to prove that the product uncertain measure (1.34) is indeed
an uncertain measure, we should verify that the product uncertain measure
satisfies the normality, duality and subadditivity axioms.
Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.
Step 2: We prove the duality, i.e., M{Λ} + M{Λc } = 1. The argument
breaks down into three cases. Case 1: Assume

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5.

Then we immediately have

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} < 0.5.

It follows from (1.34) that

M\{\Lambda\} = \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\},

M\{\Lambda^c\} = 1 - \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset(\Lambda^c)^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} = 1 - M\{\Lambda\}.

The duality is proved. Case 2: Assume

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5.

This case may be proved by a similar process. Case 3: Assume

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \le 0.5

and

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \le 0.5.

It follows from (1.34) that M{Λ} = M{Λc} = 0.5, which proves the duality.
Step 3: Let us prove that M is an increasing set function. Suppose Λ and ∆ are two events in L with Λ ⊂ ∆. The argument breaks down into three cases. Case 1: Assume

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5.

Then

\sup_{\Delta_1\times\Delta_2\times\cdots\subset\Delta}\ \min_{1\le k<\infty} M_k\{\Delta_k\} \ge \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} > 0.5.

It follows from (1.34) that M{Λ} ≤ M{∆}. Case 2: Assume

\sup_{\Delta_1\times\Delta_2\times\cdots\subset\Delta^c}\ \min_{1\le k<\infty} M_k\{\Delta_k\} > 0.5.

Then

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \ge \sup_{\Delta_1\times\Delta_2\times\cdots\subset\Delta^c}\ \min_{1\le k<\infty} M_k\{\Delta_k\} > 0.5.

Thus

M\{\Lambda\} = 1 - \sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda^c}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \le 1 - \sup_{\Delta_1\times\Delta_2\times\cdots\subset\Delta^c}\ \min_{1\le k<\infty} M_k\{\Delta_k\} = M\{\Delta\}.

Case 3: Assume

\sup_{\Lambda_1\times\Lambda_2\times\cdots\subset\Lambda}\ \min_{1\le k<\infty} M_k\{\Lambda_k\} \le 0.5

and

\sup_{\Delta_1\times\Delta_2\times\cdots\subset\Delta^c}\ \min_{1\le k<\infty} M_k\{\Delta_k\} \le 0.5.

Then

M\{\Lambda\} \le 0.5 \le 1 - M\{\Delta^c\} = M\{\Delta\}.

Step 4: Finally, we prove the subadditivity of M. For simplicity, we only prove the case of two events Λ and ∆. The argument breaks down into three cases. Case 1: Assume M{Λ} < 0.5 and M{∆} < 0.5. For any given ε > 0, there are two rectangles

\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c, \quad \Delta_1 \times \Delta_2 \times \cdots \subset \Delta^c

such that

1 - \min_{1\le k<\infty} M_k\{\Lambda_k\} \le M\{\Lambda\} + \varepsilon/2,

1 - \min_{1\le k<\infty} M_k\{\Delta_k\} \le M\{\Delta\} + \varepsilon/2.

Note that

(\Lambda_1 \cap \Delta_1) \times (\Lambda_2 \cap \Delta_2) \times \cdots \subset (\Lambda \cup \Delta)^c.

It follows from the duality and subadditivity axioms that

M_k\{\Lambda_k \cap \Delta_k\} = 1 - M_k\{(\Lambda_k \cap \Delta_k)^c\} = 1 - M_k\{\Lambda_k^c \cup \Delta_k^c\}
\ge 1 - (M_k\{\Lambda_k^c\} + M_k\{\Delta_k^c\})
= 1 - (1 - M_k\{\Lambda_k\}) - (1 - M_k\{\Delta_k\})
= M_k\{\Lambda_k\} + M_k\{\Delta_k\} - 1

for any k. Thus

M\{\Lambda \cup \Delta\} \le 1 - \min_{1\le k<\infty} M_k\{\Lambda_k \cap \Delta_k\}
\le 1 - \min_{1\le k<\infty} M_k\{\Lambda_k\} + 1 - \min_{1\le k<\infty} M_k\{\Delta_k\}
\le M\{\Lambda\} + M\{\Delta\} + \varepsilon.

Letting ε → 0, we obtain

M\{\Lambda \cup \Delta\} \le M\{\Lambda\} + M\{\Delta\}.

Case 2: Assume M{Λ} ≥ 0.5 and M{∆} < 0.5. When M{Λ ∪ ∆} = 0.5, the
subadditivity is obvious. Now we consider the case M{Λ ∪ ∆} > 0.5, i.e.,
M{Λc ∩ ∆c } < 0.5. By using Λc ∪ ∆ = (Λc ∩ ∆c ) ∪ ∆ and Case 1, we get

M{Λc ∪ ∆} ≤ M{Λc ∩ ∆c } + M{∆}.



Thus

M{Λ ∪ ∆} = 1 − M{Λc ∩ ∆c } ≤ 1 − M{Λc ∪ ∆} + M{∆}


≤ 1 − M{Λc } + M{∆} = M{Λ} + M{∆}.

Case 3: If both M{Λ} ≥ 0.5 and M{∆} ≥ 0.5, then the subadditivity is
obvious because M{Λ} + M{∆} ≥ 1. The theorem is proved.

Definition 1.9 Assume (Γk , Lk , Mk ) are uncertainty spaces for k = 1, 2, · · ·


Let Γ = Γ1 × Γ2 × · · · , L = L1 × L2 × · · · and M = M1 ∧ M2 ∧ · · · Then the
triplet (Γ, L, M) is called a product uncertainty space.

1.6 Independence

Definition 1.10 (Liu [129]) The events Λ1, Λ2, · · · , Λn are said to be independent if

M\left\{\bigcap_{i=1}^{n} \Lambda_i^*\right\} = \bigwedge_{i=1}^{n} M\{\Lambda_i^*\}    (1.36)

where Λi* are arbitrarily chosen from {Λi, Λi^c, Γ}, i = 1, 2, · · · , n, respectively, and Γ is the sure event.

Remark 1.9: Especially, two events Λ1 and Λ2 are independent if and only
if
M {Λ∗1 ∩ Λ∗2 } = M{Λ∗1 } ∧ M{Λ∗2 } (1.37)

where Λ∗i are arbitrarily chosen from {Λi , Λci }, i = 1, 2, respectively. That is,
the following four equations hold:

M{Λ1 ∩ Λ2 } = M{Λ1 } ∧ M{Λ2 },


M{Λc1 ∩ Λ2 } = M{Λc1 } ∧ M{Λ2 },
M{Λ1 ∩ Λc2 } = M{Λ1 } ∧ M{Λc2 },
M{Λc1 ∩ Λc2 } = M{Λc1 } ∧ M{Λc2 }.

Example 1.10: The impossible event ∅ is independent of any event Λ be-


cause ∅c = Γ and

M{∅ ∩ Λ} = M{∅} = M{∅} ∧ M{Λ},


M{∅c ∩ Λ} = M{Λ} = M{∅c } ∧ M{Λ},
M{∅ ∩ Λc } = M{∅} = M{∅} ∧ M{Λc },
M{∅c ∩ Λc } = M{Λc } = M{∅c } ∧ M{Λc }.

Example 1.11: The sure event Γ is independent of any event Λ because


Γc = ∅ and
M{Γ ∩ Λ} = M{Λ} = M{Γ} ∧ M{Λ},
M{Γc ∩ Λ} = M{Γc } = M{Γc } ∧ M{Λ},
M{Γ ∩ Λc } = M{Λc } = M{Γ} ∧ M{Λc },
M{Γc ∩ Λc } = M{Γc } = M{Γc } ∧ M{Λc }.

Example 1.12: Generally speaking, an event Λ is not independent of itself


because
M{Λ ∩ Λc} ≠ M{Λ} ∧ M{Λc}
whenever M{Λ} is neither 1 nor 0.

Theorem 1.7 (Liu [129]) The events Λ1, Λ2, · · · , Λn are independent if and only if

M\left\{\bigcup_{i=1}^{n} \Lambda_i^*\right\} = \bigvee_{i=1}^{n} M\{\Lambda_i^*\}    (1.38)

where Λi* are arbitrarily chosen from {Λi, Λi^c, ∅}, i = 1, 2, · · · , n, respectively, and ∅ is the impossible event.

Proof: Assume Λ1, Λ2, · · · , Λn are independent events. It follows from the duality of uncertain measure that

M\left\{\bigcup_{i=1}^{n} \Lambda_i^*\right\} = 1 - M\left\{\bigcap_{i=1}^{n} \Lambda_i^{*c}\right\} = 1 - \bigwedge_{i=1}^{n} M\{\Lambda_i^{*c}\} = \bigvee_{i=1}^{n} M\{\Lambda_i^*\}

where Λi* are arbitrarily chosen from {Λi, Λi^c, ∅}, i = 1, 2, · · · , n, respectively. The equation (1.38) is proved. Conversely, if the equation (1.38) holds, then

M\left\{\bigcap_{i=1}^{n} \Lambda_i^*\right\} = 1 - M\left\{\bigcup_{i=1}^{n} \Lambda_i^{*c}\right\} = 1 - \bigvee_{i=1}^{n} M\{\Lambda_i^{*c}\} = \bigwedge_{i=1}^{n} M\{\Lambda_i^*\}

where Λi* are arbitrarily chosen from {Λi, Λi^c, Γ}, i = 1, 2, · · · , n, respectively. The equation (1.36) is true. The theorem is proved.

Theorem 1.8 (Liu [137]) Let (Γk , Lk , Mk ) be uncertainty spaces and Λk ∈


Lk for k = 1, 2, · · · , n. Then the events

Γ1 × · · · × Γk−1 × Λk × Γk+1 × · · · × Γn , k = 1, 2, · · · , n (1.39)

are always independent in the product uncertainty space. That is, the events

Λ1 , Λ2 , · · · , Λn (1.40)

are always independent if they are from different uncertainty spaces.



Figure 1.2: (Λ1 × Γ2) ∩ (Γ1 × Λ2) = Λ1 × Λ2

Proof: For simplicity, we only prove the case of n = 2. It follows from the
product axiom that the product uncertain measure of the intersection is

M{(Λ1 × Γ2 ) ∩ (Γ1 × Λ2 )} = M{Λ1 × Λ2 } = M1 {Λ1 } ∧ M2 {Λ2 }.

By using M{Λ1 × Γ2 } = M1 {Λ1 } and M{Γ1 × Λ2 } = M2 {Λ2 }, we obtain

M{(Λ1 × Γ2 ) ∩ (Γ1 × Λ2 )} = M{Λ1 × Γ2 } ∧ M{Γ1 × Λ2 }.

Similarly, we may prove that

M{(Λ1 × Γ2 )c ∩ (Γ1 × Λ2 )} = M{(Λ1 × Γ2 )c } ∧ M{Γ1 × Λ2 },


M{(Λ1 × Γ2 ) ∩ (Γ1 × Λ2 )c } = M{Λ1 × Γ2 } ∧ M{(Γ1 × Λ2 )c },
M{(Λ1 × Γ2 )c ∩ (Γ1 × Λ2 )c } = M{(Λ1 × Γ2 )c } ∧ M{(Γ1 × Λ2 )c }.

Thus Λ1 × Γ2 and Γ1 × Λ2 are independent events. Furthermore, since Λ1


and Λ2 are understood as Λ1 × Γ2 and Γ1 × Λ2 in the product uncertainty
space, respectively, the two events Λ1 and Λ2 are also independent.

1.7 Polyrectangular Theorem


Let (Γ1 , L1 , M1 ) and (Γ2 , L2 , M2 ) be two uncertainty spaces, Λ1 ∈ L1 and
Λ2 ∈ L2 . It follows from the product axiom that the rectangle Λ1 × Λ2 has
an uncertain measure

M{Λ1 × Λ2 } = M1 {Λ1 } ∧ M2 {Λ2 }. (1.41)

This section will extend this result to a more general case.



Definition 1.11 (Liu [137]) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. A set on Γ1 × Γ2 is called a polyrectangle if it has the form

\Lambda = \bigcup_{i=1}^{m} (\Lambda_{1i} \times \Lambda_{2i})    (1.42)

where Λ1i ∈ L1 and Λ2i ∈ L2 for i = 1, 2, · · · , m, and

\Lambda_{11} \subset \Lambda_{12} \subset \cdots \subset \Lambda_{1m},    (1.43)

\Lambda_{21} \supset \Lambda_{22} \supset \cdots \supset \Lambda_{2m}.    (1.44)

A rectangle Λ1 × Λ2 is clearly a polyrectangle. In addition, a “cross”-like


set is also a polyrectangle. See Figure 1.3.

Figure 1.3: Three Polyrectangles

Theorem 1.9 (Liu [137], Polyrectangular Theorem) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. Then the polyrectangle

\Lambda = \bigcup_{i=1}^{m} (\Lambda_{1i} \times \Lambda_{2i})    (1.45)

on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2) has an uncertain measure

M\{\Lambda\} = \bigvee_{i=1}^{m} M_1\{\Lambda_{1i}\} \wedge M_2\{\Lambda_{2i}\}.    (1.46)

Proof: It is clear that the maximum rectangle in the polyrectangle Λ is one of Λ1i × Λ2i, i = 1, 2, · · · , m. Denote the maximum rectangle by Λ1k × Λ2k. Case I: If

M\{\Lambda_{1k} \times \Lambda_{2k}\} = M_1\{\Lambda_{1k}\},

then the maximum rectangle in Λc is Λ1k^c × Λ2,k+1^c, and

M\{\Lambda_{1k}^c \times \Lambda_{2,k+1}^c\} = M_1\{\Lambda_{1k}^c\} = 1 - M_1\{\Lambda_{1k}\}.



Thus
M{Λ1k × Λ2k } + M{Λc1k × Λc2,k+1 } = 1.
Case II: If
M{Λ1k × Λ2k } = M2 {Λ2k },
then the maximum rectangle in Λc is Λc1,k−1 × Λc2k , and
M{Λc1,k−1 × Λc2k } = M2 {Λc2k } = 1 − M2 {Λ2k }.
Thus
M{Λ1k × Λ2k } + M{Λc1,k−1 × Λc2k } = 1.
No matter what case happens, the sum of the uncertain measures of the
maximum rectangles in Λ and Λc is always 1. It follows from the product
axiom that (1.46) holds.
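Formula (1.46) reduces the measure of a polyrectangle to a maximum of minima, which is immediate to evaluate. A minimal Python sketch (mine, not part of the original text; the component measures are made-up numbers):

    def polyrectangle_measure(pairs):
        """Uncertain measure (1.46): the max over i of min(M1{L1i}, M2{L2i}).

        pairs lists (M1{Lambda_1i}, M2{Lambda_2i}) for a polyrectangle whose
        components satisfy the monotonicity conditions (1.43)-(1.44)."""
        return max(min(u, v) for u, v in pairs)

    # A "cross" built from a wide flat rectangle and a tall narrow one
    print(polyrectangle_measure([(0.3, 0.8), (0.9, 0.4)]))  # max(0.3, 0.4) = 0.4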

Remark 1.10: Note that the polyrectangular theorem is also applicable to


the polyrectangles that are unions of infinitely many rectangles. In this case,
the polyrectangles may become the shapes in Figure 1.4.

Figure 1.4: Three Deformed Polyrectangles

1.8 Conditional Uncertain Measure


We consider the uncertain measure of an event A after it has been learned
that some other event B has occurred. This new uncertain measure of A is
called the conditional uncertain measure of A given B.
In order to define a conditional uncertain measure M{A|B}, at first we have to enlarge M{A ∩ B} because M{A ∩ B} < 1 for all events whenever M{B} < 1. It seems that we have no alternative but to divide M{A ∩ B} by M{B}. Unfortunately, M{A ∩ B}/M{B} is not always an uncertain measure. However, the value M{A|B} should not be greater than M{A ∩ B}/M{B} (otherwise the normality will be lost), i.e.,

M\{A|B\} \le \frac{M\{A \cap B\}}{M\{B\}}.    (1.47)

On the other hand, in order to preserve the duality, we should have

M\{A|B\} = 1 - M\{A^c|B\} \ge 1 - \frac{M\{A^c \cap B\}}{M\{B\}}.    (1.48)

Furthermore, since (A ∩ B) ∪ (Ac ∩ B) = B, we have M{B} ≤ M{A ∩ B} + M{Ac ∩ B} by using the subadditivity axiom. Thus

0 \le 1 - \frac{M\{A^c \cap B\}}{M\{B\}} \le \frac{M\{A \cap B\}}{M\{B\}} \le 1.    (1.49)

Hence any numbers between 1 − M{Ac ∩ B}/M{B} and M{A ∩ B}/M{B} are reasonable values that the conditional uncertain measure may take. Based on the maximum uncertainty principle (Liu [122]), we have the following conditional uncertain measure.

Definition 1.12 (Liu [122]) Let (Γ, L, M) be an uncertainty space, and A, B ∈ L. Then the conditional uncertain measure of A given B is defined by

M\{A|B\} = \begin{cases} \dfrac{M\{A \cap B\}}{M\{B\}}, & \text{if } \dfrac{M\{A \cap B\}}{M\{B\}} < 0.5 \\[2ex] 1 - \dfrac{M\{A^c \cap B\}}{M\{B\}}, & \text{if } \dfrac{M\{A^c \cap B\}}{M\{B\}} < 0.5 \\[2ex] 0.5, & \text{otherwise} \end{cases}    (1.50)

provided that M{B} > 0.
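Definition 1.12 is a three-branch formula, so it translates directly into code. The Python sketch below (my illustration, not from the book; the inputs in the printed checks are arbitrary numbers consistent with subadditivity, i.e. M{B} ≤ M{A ∩ B} + M{Ac ∩ B}) evaluates M{A|B} from M{A ∩ B}, M{Ac ∩ B} and M{B}.

    def conditional_measure(m_a_and_b, m_ac_and_b, m_b):
        """Conditional uncertain measure M{A|B} by (1.50), for M{B} > 0."""
        if m_b <= 0:
            raise ValueError("M{B} must be positive")
        if m_a_and_b / m_b < 0.5:
            return m_a_and_b / m_b
        if m_ac_and_b / m_b < 0.5:
            return 1.0 - m_ac_and_b / m_b
        return 0.5

    print(conditional_measure(0.2, 0.6, 0.8))  # first branch: 0.25
    print(conditional_measure(0.6, 0.3, 0.8))  # second branch: 1 - 0.375 = 0.625
    print(conditional_measure(0.5, 0.5, 0.8))  # maximum uncertainty branch: 0.5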

Remark 1.11: It follows immediately from the definition of conditional uncertain measure that

1 - \frac{M\{A^c \cap B\}}{M\{B\}} \le M\{A|B\} \le \frac{M\{A \cap B\}}{M\{B\}}.    (1.51)

Furthermore, the conditional uncertain measure obeys the maximum uncertainty principle, and takes values as close to 0.5 as possible.

Remark 1.12: The conditional uncertain measure M{A|B} yields the pos-
terior uncertain measure of A after the occurrence of event B.

Theorem 1.10 Let (Γ, L, M) be an uncertainty space, and let B be an event


with M{B} > 0. Then M{·|B} defined by (1.50) is an uncertain measure,
and (Γ, L, M{·|B}) is an uncertainty space.

Proof: It is sufficient to prove that M{·|B} satisfies the normality, duality and subadditivity axioms. At first, it satisfies the normality axiom, i.e.,

M\{\Gamma|B\} = 1 - \frac{M\{\Gamma^c \cap B\}}{M\{B\}} = 1 - \frac{M\{\emptyset\}}{M\{B\}} = 1.

For any event A, if

\frac{M\{A \cap B\}}{M\{B\}} \ge 0.5 \quad \text{and} \quad \frac{M\{A^c \cap B\}}{M\{B\}} \ge 0.5,

then we have M{A|B} + M{Ac|B} = 0.5 + 0.5 = 1 immediately. Otherwise, without loss of generality, suppose

\frac{M\{A \cap B\}}{M\{B\}} < 0.5 < \frac{M\{A^c \cap B\}}{M\{B\}},

then we have

M\{A|B\} + M\{A^c|B\} = \frac{M\{A \cap B\}}{M\{B\}} + \left(1 - \frac{M\{A \cap B\}}{M\{B\}}\right) = 1.

That is, M{·|B} satisfies the duality axiom. Finally, for any countable sequence {Ai} of events, if M{Ai|B} < 0.5 for all i, it follows from (1.51) and the subadditivity axiom that

M\left\{\bigcup_{i=1}^{\infty} A_i \,\Big|\, B\right\} \le \frac{M\left\{\bigcup_{i=1}^{\infty} A_i \cap B\right\}}{M\{B\}} \le \frac{\sum_{i=1}^{\infty} M\{A_i \cap B\}}{M\{B\}} = \sum_{i=1}^{\infty} M\{A_i|B\}.

Suppose there is one term greater than 0.5, say

M\{A_1|B\} \ge 0.5, \quad M\{A_i|B\} < 0.5, \quad i = 2, 3, \cdots

If M{∪i Ai|B} = 0.5, then we immediately have

M\left\{\bigcup_{i=1}^{\infty} A_i \,\Big|\, B\right\} \le \sum_{i=1}^{\infty} M\{A_i|B\}.

If M{∪i Ai|B} > 0.5, we may prove the above inequality by the following facts:

A_1^c \cap B \subset \bigcup_{i=2}^{\infty} (A_i \cap B) \cup \left(\bigcap_{i=1}^{\infty} A_i^c \cap B\right),

M\{A_1^c \cap B\} \le \sum_{i=2}^{\infty} M\{A_i \cap B\} + M\left\{\bigcap_{i=1}^{\infty} A_i^c \cap B\right\},

M\left\{\bigcup_{i=1}^{\infty} A_i \,\Big|\, B\right\} = 1 - \frac{M\left\{\bigcap_{i=1}^{\infty} A_i^c \cap B\right\}}{M\{B\}},

\sum_{i=1}^{\infty} M\{A_i|B\} \ge 1 - \frac{M\{A_1^c \cap B\}}{M\{B\}} + \frac{\sum_{i=2}^{\infty} M\{A_i \cap B\}}{M\{B\}}.

If there are at least two terms greater than 0.5, then the subadditivity is clearly true. Thus M{·|B} satisfies the subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.

1.9 Bibliographic Notes


When no samples are available to estimate a probability distribution, we have
to invite some domain experts to evaluate the belief degree that each event
will happen. Perhaps some people think that the belief degree is subjective probability or a fuzzy concept. However, Liu [131] declared that this is usually inappropriate because both probability theory and fuzzy set theory may lead to counterintuitive results in this case.
In order to rationally deal with belief degrees, uncertainty theory was
founded by Liu [122] in 2007 and perfected by Liu [125] in 2009 with the
normality axiom, duality axiom, subadditivity axiom, and product axiom of
uncertain measure.
Furthermore, uncertain measure was also actively investigated by Gao
[48], Liu [129], Zhang [268], Peng and Iwamura [185], and Liu [137], among
others. Since then, the tool of uncertain measure was well developed and
became a rigorous footstone of uncertainty theory.
Chapter 2

Uncertain Variable

Uncertain variable is a fundamental concept in uncertainty theory. It is used


to represent quantities with uncertainty. The emphasis in this chapter is
mainly on uncertain variable, uncertainty distribution, independence, opera-
tional law, expected value, variance, moments, entropy, distance, conditional
uncertainty distribution, uncertain sequence, and uncertain vector.

2.1 Uncertain Variable

Roughly speaking, an uncertain variable is a measurable function on an un-


certainty space. A formal definition is given as follows.

Definition 2.1 (Liu [122]) An uncertain variable is a function ξ from an


uncertainty space (Γ, L, M) to the set of real numbers such that {ξ ∈ B} is
an event for any Borel set B.

Figure 2.1: An Uncertain Variable. Reprinted from Liu [129].


Example 2.1: Take (Γ, L, M) to be {γ1, γ2} with M{γ1} = M{γ2} = 0.5. Then the function

\xi(\gamma) = \begin{cases} 0, & \text{if } \gamma = \gamma_1 \\ 1, & \text{if } \gamma = \gamma_2 \end{cases}

is an uncertain variable.

Example 2.2: A crisp number b may be regarded as a special uncertain


variable. In fact, it is the constant function ξ(γ) ≡ b on the uncertainty
space (Γ, L, M).

Definition 2.2 An uncertain variable ξ on the uncertainty space (Γ, L, M) is


said to be (a) nonnegative if M{ξ < 0} = 0; and (b) positive if M{ξ ≤ 0} = 0.

Definition 2.3 Let ξ and η be uncertain variables defined on the uncertainty


space (Γ, L, M). We say ξ = η if ξ(γ) = η(γ) for almost all γ ∈ Γ.

Definition 2.4 Let ξ1 , ξ2 , · · · , ξn be uncertain variables, and let f be a real-


valued measurable function. Then ξ = f (ξ1 , ξ2 , · · · , ξn ) is an uncertain vari-
able defined by

ξ(γ) = f (ξ1 (γ), ξ2 (γ), · · · , ξn (γ)), ∀γ ∈ Γ. (2.1)

Example 2.3: Let ξ1 and ξ2 be two uncertain variables. Then the sum
ξ = ξ1 + ξ2 is an uncertain variable defined by

ξ(γ) = ξ1 (γ) + ξ2 (γ), ∀γ ∈ Γ.

The product ξ = ξ1 ξ2 is also an uncertain variable defined by

ξ(γ) = ξ1 (γ) · ξ2 (γ), ∀γ ∈ Γ.

The reader may wonder whether ξ(γ) defined by (2.1) is an uncertain


variable. The following theorem answers this question.

Theorem 2.1 Let ξ1 , ξ2 , · · · , ξn be uncertain variables, and let f be a real-


valued measurable function. Then f (ξ1 , ξ2 , · · · , ξn ) is an uncertain variable.

Proof: Since ξ1 , ξ2 , · · · , ξn are uncertain variables, they are measurable func-


tions from an uncertainty space (Γ, L, M) to the set of real numbers. Thus
f (ξ1 , ξ2 , · · · , ξn ) is also a measurable function from the uncertainty space
(Γ, L, M) to the set of real numbers. Hence f (ξ1 , ξ2 , · · · , ξn ) is an uncertain
variable.

2.2 Uncertainty Distribution


This section introduces the concept of uncertainty distribution in order to describe uncertain variables. Note that an uncertainty distribution is a carrier of incomplete information about an uncertain variable. However, in many cases, it is sufficient to know the uncertainty distribution rather than the uncertain variable itself.

Definition 2.5 (Liu [122]) The uncertainty distribution Φ of an uncertain


variable ξ is defined by
Φ(x) = M {ξ ≤ x} (2.2)
for any real number x.

Figure 2.2: An Uncertainty Distribution. Reprinted from Liu [129].

Exercise 2.1: A real number b is a special uncertain variable ξ(γ) ≡ b. Show that such an uncertain variable has an uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x < b \\ 1, & \text{if } x \ge b. \end{cases}

Exercise 2.2: Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with M{γ1} = 0.7 and M{γ2} = 0.3. Show that the uncertain variable

\xi(\gamma) = \begin{cases} 0, & \text{if } \gamma = \gamma_1 \\ 1, & \text{if } \gamma = \gamma_2 \end{cases}

has an uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ 0.7, & \text{if } 0 \le x < 1 \\ 1, & \text{if } 1 \le x. \end{cases}

Exercise 2.3: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3} with

M\{\gamma_1\} = 0.6, \quad M\{\gamma_2\} = 0.3, \quad M\{\gamma_3\} = 0.2.

Show that the uncertain variable

\xi(\gamma) = \begin{cases} 1, & \text{if } \gamma = \gamma_1 \\ 2, & \text{if } \gamma = \gamma_2 \\ 3, & \text{if } \gamma = \gamma_3 \end{cases}

has an uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x < 1 \\ 0.6, & \text{if } 1 \le x < 2 \\ 0.8, & \text{if } 2 \le x < 3 \\ 1, & \text{if } 3 \le x. \end{cases}

Exercise 2.4: Take an uncertainty space (Γ, L, M) to be the interval [0, 1] with Borel algebra and Lebesgue measure. Show that the uncertain variable ξ(γ) = γ² has an uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ \sqrt{x}, & \text{if } 0 \le x \le 1 \\ 1, & \text{if } x \ge 1. \end{cases}    (2.3)

Definition 2.6 Uncertain variables are said to be identically distributed if


they have the same uncertainty distribution.

It is clear that uncertain variables ξ and η are identically distributed if ξ = η. However, identical distribution does not imply ξ = η. For example, let (Γ, L, M) be {γ1, γ2} with M{γ1} = M{γ2} = 0.5. Define

\xi(\gamma) = \begin{cases} 1, & \text{if } \gamma = \gamma_1 \\ -1, & \text{if } \gamma = \gamma_2, \end{cases} \qquad \eta(\gamma) = \begin{cases} -1, & \text{if } \gamma = \gamma_1 \\ 1, & \text{if } \gamma = \gamma_2. \end{cases}

Then ξ and η have the same uncertainty distribution,

\Phi(x) = \begin{cases} 0, & \text{if } x < -1 \\ 0.5, & \text{if } -1 \le x < 1 \\ 1, & \text{if } x \ge 1. \end{cases}

Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.

Sufficient and Necessary Condition


Theorem 2.2 (Peng-Iwamura Theorem [184]) A function Φ(x) : ℜ → [0, 1] is an uncertainty distribution if and only if it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.

Proof: It is obvious that an uncertainty distribution Φ is a monotone increasing function. In addition, both Φ(x) ≢ 0 and Φ(x) ≢ 1 follow from the asymptotic theorem immediately. Conversely, suppose that Φ is a monotone increasing function but Φ(x) ≢ 0 and Φ(x) ≢ 1. We will prove that there is an uncertain variable whose uncertainty distribution is just Φ. Let C be a collection of all intervals of the form (−∞, a], (b, ∞), ∅ and ℜ. We define a set function on ℜ as follows,

M\{(-\infty, a]\} = \Phi(a), \quad M\{(b, +\infty)\} = 1 - \Phi(b), \quad M\{\emptyset\} = 0, \quad M\{\Re\} = 1.

For an arbitrary Borel set B, there exists a sequence {Ai} in C such that

B \subset \bigcup_{i=1}^{\infty} A_i.

Note that such a sequence is not unique. Thus the set function M{B} is defined by

M\{B\} = \begin{cases} \displaystyle\inf_{B \subset \cup_i A_i} \sum_{i=1}^{\infty} M\{A_i\}, & \text{if } \displaystyle\inf_{B \subset \cup_i A_i} \sum_{i=1}^{\infty} M\{A_i\} < 0.5 \\[2ex] 1 - \displaystyle\inf_{B^c \subset \cup_i A_i} \sum_{i=1}^{\infty} M\{A_i\}, & \text{if } \displaystyle\inf_{B^c \subset \cup_i A_i} \sum_{i=1}^{\infty} M\{A_i\} < 0.5 \\[2ex] 0.5, & \text{otherwise.} \end{cases}

We may prove that the set function M is indeed an uncertain measure on ℜ, and the uncertain variable defined by the identity function ξ(γ) = γ from the uncertainty space (ℜ, L, M) to ℜ has the uncertainty distribution Φ.

Example 2.4: Let c be a number with 0 < c < 1. Then Φ(x) ≡ c is an uncertainty distribution. When c ≤ 0.5, we define a set function over ℜ as follows,

M\{\Lambda\} = \begin{cases} 0, & \text{if } \Lambda = \emptyset \\ c, & \text{if } \Lambda \text{ is upper bounded} \\ 0.5, & \text{if both } \Lambda \text{ and } \Lambda^c \text{ are upper unbounded} \\ 1 - c, & \text{if } \Lambda^c \text{ is upper bounded} \\ 1, & \text{if } \Lambda = \Gamma. \end{cases}

Then (ℜ, L, M) is an uncertainty space. It is easy to verify that the identity function ξ(γ) = γ is an uncertain variable whose uncertainty distribution is just Φ(x) ≡ c. When c > 0.5, we define

M\{\Lambda\} = \begin{cases} 0, & \text{if } \Lambda = \emptyset \\ 1 - c, & \text{if } \Lambda \text{ is upper bounded} \\ 0.5, & \text{if both } \Lambda \text{ and } \Lambda^c \text{ are upper unbounded} \\ c, & \text{if } \Lambda^c \text{ is upper bounded} \\ 1, & \text{if } \Lambda = \Gamma. \end{cases}

Then the function ξ(γ) = −γ is an uncertain variable whose uncertainty distribution is just Φ(x) ≡ c.

What is a “completely unknown number”?


A “completely unknown number” may be regarded as an uncertain variable
whose uncertainty distribution is

Φ(x) ≡ 0.5 (2.4)

for any real number x.

What is a “large number”?

A “large number” may be regarded as an uncertain variable. A possible uncertainty distribution is

\Phi(x) = \frac{1}{2}\left(1 + \exp(1000 - x)\right)^{-1}    (2.5)

for any real number x.

Figure 2.3: Uncertainty Distribution of “Large Number”


What is a “small number”?

A “small number” may be regarded as an uncertain variable. A possible uncertainty distribution is

\Phi(x) = \begin{cases} 0, & \text{if } x \le 0 \\ \left(1 + \exp(-10x)\right)^{-1}, & \text{if } x > 0. \end{cases}    (2.6)

Figure 2.4: Uncertainty Distribution of “Small Number”

How old is John?

Someone thinks John is neither younger than 24 nor older than 28, and presents an uncertainty distribution of John’s age as follows,

\Phi(x) = \begin{cases} 0, & \text{if } x \le 24 \\ (x - 24)/4, & \text{if } 24 \le x \le 28 \\ 1, & \text{if } x \ge 28. \end{cases}    (2.7)

How tall is James?

Someone thinks James’ height is between 180 and 185 centimeters, and presents the following uncertainty distribution,

\Phi(x) = \begin{cases} 0, & \text{if } x \le 180 \\ (x - 180)/5, & \text{if } 180 \le x \le 185 \\ 1, & \text{if } x \ge 185. \end{cases}    (2.8)


Some Uncertainty Distributions


Definition 2.7 An uncertain variable ξ is called linear if it has a linear uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x \le a \\ (x - a)/(b - a), & \text{if } a \le x \le b \\ 1, & \text{if } x \ge b \end{cases}    (2.9)

denoted by L(a, b), where a and b are real numbers with a < b.

Figure 2.5: Linear Uncertainty Distribution. Reprinted from Liu [129].

Example 2.5: John’s age (2.7) is a linear uncertain variable L(24, 28), and
James’ height (2.8) is another linear uncertain variable L(180, 185).

Definition 2.8 An uncertain variable ξ is called zigzag if it has a zigzag uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x \le a \\ (x - a)/2(b - a), & \text{if } a \le x \le b \\ (x + c - 2b)/2(c - b), & \text{if } b \le x \le c \\ 1, & \text{if } x \ge c \end{cases}    (2.10)

denoted by Z(a, b, c), where a, b, c are real numbers with a < b < c.

Definition 2.9 An uncertain variable ξ is called normal if it has a normal uncertainty distribution

\Phi(x) = \left(1 + \exp\left(\frac{\pi(e - x)}{\sqrt{3}\,\sigma}\right)\right)^{-1}, \quad x \in \Re    (2.11)

denoted by N(e, σ), where e and σ are real numbers with σ > 0.

Figure 2.6: Zigzag Uncertainty Distribution. Reprinted from Liu [129].

Figure 2.7: Normal Uncertainty Distribution. Reprinted from Liu [129].

Definition 2.10 An uncertain variable ξ is called lognormal if ln ξ is a normal uncertain variable N(e, σ). In other words, a lognormal uncertain variable has an uncertainty distribution

\Phi(x) = \left(1 + \exp\left(\frac{\pi(e - \ln x)}{\sqrt{3}\,\sigma}\right)\right)^{-1}, \quad x \ge 0    (2.12)

denoted by LOGN(e, σ), where e and σ are real numbers with σ > 0.
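Both (2.11) and (2.12) are elementary to evaluate. The small Python sketch below (mine, not part of the original text; the parameter values in the printed checks are arbitrary) computes the normal and lognormal uncertainty distributions and confirms that each equals 0.5 at its center.

    import math

    def normal_cdf(x, e, sigma):
        """Normal uncertainty distribution N(e, sigma), equation (2.11)."""
        return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))

    def lognormal_cdf(x, e, sigma):
        """Lognormal uncertainty distribution LOGN(e, sigma), equation (2.12)."""
        if x <= 0:
            return 0.0
        return normal_cdf(math.log(x), e, sigma)

    print(normal_cdf(0.0, 0.0, 1.0))               # 0.5 at x = e
    print(lognormal_cdf(math.exp(2.0), 2.0, 1.0))  # 0.5 at x = exp(e)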

Definition 2.11 An uncertain variable ξ is called empirical if it has an empirical uncertainty distribution

\Phi(x) = \begin{cases} 0, & \text{if } x < x_1 \\ \alpha_i + \dfrac{(\alpha_{i+1} - \alpha_i)(x - x_i)}{x_{i+1} - x_i}, & \text{if } x_i \le x \le x_{i+1},\ 1 \le i < n \\ 1, & \text{if } x > x_n \end{cases}    (2.13)

where x1 < x2 < · · · < xn and 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1.
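The empirical distribution (2.13) is nothing but linear interpolation between the points (xi, αi). A Python sketch (my illustration, not from the book; the data points in the example are made up):

    def empirical_cdf(x, xs, alphas):
        """Empirical uncertainty distribution (2.13).

        xs: points x1 < ... < xn; alphas: levels 0 <= a1 <= ... <= an <= 1."""
        if x < xs[0]:
            return 0.0
        if x > xs[-1]:
            return 1.0
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return alphas[i] + (alphas[i + 1] - alphas[i]) * t
        return alphas[-1]

    print(empirical_cdf(1.5, [1, 2, 3], [0.2, 0.6, 0.9]))  # 0.4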



Figure 2.8: Lognormal Uncertainty Distribution. Reprinted from Liu [129].

Figure 2.9: Empirical Uncertainty Distribution

Measure Inversion Theorem


Theorem 2.3 (Liu [129], Measure Inversion Theorem) Let ξ be an uncertain
variable with uncertainty distribution Φ. Then for any real number x, we have
M{ξ ≤ x} = Φ(x), M{ξ > x} = 1 − Φ(x). (2.14)
Proof: The equation M{ξ ≤ x} = Φ(x) follows from the definition of uncer-
tainty distribution immediately. By using the duality of uncertain measure,
we get
M{ξ > x} = 1 − M{ξ ≤ x} = 1 − Φ(x).
The theorem is verified.

Remark 2.1: When the uncertainty distribution Φ is a continuous function,


we also have
M{ξ < x} = Φ(x), M{ξ ≥ x} = 1 − Φ(x). (2.15)

Theorem 2.4 Let ξ be an uncertain variable with continuous uncertainty


distribution Φ. Then for any interval [a, b], we have

Φ(b) − Φ(a) ≤ M{a ≤ ξ ≤ b} ≤ Φ(b) ∧ (1 − Φ(a)). (2.16)

Proof: It follows from the subadditivity of uncertain measure and the mea-
sure inversion theorem that

M{a ≤ ξ ≤ b} + M{ξ ≤ a} ≥ M{ξ ≤ b}.

That is,
M{a ≤ ξ ≤ b} + Φ(a) ≥ Φ(b).
Thus the inequality on the left hand side is verified. It follows from the
monotonicity of uncertain measure and the measure inversion theorem that

M{a ≤ ξ ≤ b} ≤ M{ξ ∈ (−∞, b]} = Φ(b).

On the other hand,

M\{a \le \xi \le b\} \le M\{\xi \in [a, +\infty)\} = 1 - \Phi(a).

Hence the inequality on the right hand side is proved.
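The bounds (2.16) are easy to evaluate once Φ is known. The following Python sketch (mine, not part of the original text; the variable ξ ∼ L(0, 10) and the interval [2, 6] are arbitrary choices) computes both sides for a linear uncertain variable.

    def linear_cdf(x, a, b):
        """Linear uncertainty distribution L(a, b), equation (2.9)."""
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)

    # Bounds of Theorem 2.4 for xi ~ L(0, 10) on the interval [2, 6]
    lo = linear_cdf(6, 0, 10) - linear_cdf(2, 0, 10)          # 0.4
    hi = min(linear_cdf(6, 0, 10), 1 - linear_cdf(2, 0, 10))  # min(0.6, 0.8) = 0.6
    print(lo, "<= M{2 <= xi <= 6} <=", hi)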

Remark 2.2: Perhaps some readers would like to get an exact scalar value of the uncertain measure M{a ≤ ξ ≤ b}. Generally speaking, this is an impossible job (except when a = −∞ or b = +∞) if only an uncertainty distribution is available. I would like to ask whether there is a need to know it. In fact, it is not necessary for practical purposes. Would you believe? I hope so!

Regular Uncertainty Distribution


Definition 2.12 (Liu [129]) An uncertainty distribution Φ(x) is said to be regular if it is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1, and

\lim_{x\to-\infty} \Phi(x) = 0, \quad \lim_{x\to+\infty} \Phi(x) = 1.    (2.17)

For example, linear uncertainty distribution, zigzag uncertainty distribu-


tion, normal uncertainty distribution, and lognormal uncertainty distribution
are all regular.

Stipulation 2.1 The uncertainty distribution of a crisp value c is regular. That is, we will say

\Phi(x) = \begin{cases} 1, & \text{if } x \ge c \\ 0, & \text{if } x < c \end{cases}    (2.18)

is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1 even though it is discontinuous at c.

Inverse Uncertainty Distribution


It is clear that a regular uncertainty distribution Φ(x) has an inverse function
on the range of x with 0 < Φ(x) < 1, and the inverse function Φ−1 (α) exists
on the open interval (0, 1).

Definition 2.13 (Liu [129]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ(x). Then the inverse function Φ−1 (α) is called the
inverse uncertainty distribution of ξ.

Note that the inverse uncertainty distribution Φ⁻¹(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

\Phi^{-1}(0) = \lim_{\alpha\downarrow 0} \Phi^{-1}(\alpha), \quad \Phi^{-1}(1) = \lim_{\alpha\uparrow 1} \Phi^{-1}(\alpha).    (2.19)

Example 2.6: The inverse uncertainty distribution of linear uncertain vari-


able L(a, b) is
Φ−1 (α) = (1 − α)a + αb. (2.20)

Figure 2.10: Inverse Linear Uncertainty Distribution. Reprinted from Liu [129].

Example 2.7: The inverse uncertainty distribution of zigzag uncertain variable Z(a, b, c) is

\Phi^{-1}(\alpha) = \begin{cases} (1 - 2\alpha)a + 2\alpha b, & \text{if } \alpha < 0.5 \\ (2 - 2\alpha)b + (2\alpha - 1)c, & \text{if } \alpha \ge 0.5. \end{cases}    (2.21)

Example 2.8: The inverse uncertainty distribution of normal uncertain variable N(e, σ) is

\Phi^{-1}(\alpha) = e + \frac{\sigma\sqrt{3}}{\pi} \ln \frac{\alpha}{1 - \alpha}.    (2.22)

Figure 2.11: Inverse Zigzag Uncertainty Distribution. Reprinted from Liu [129].

Figure 2.12: Inverse Normal Uncertainty Distribution. Reprinted from Liu [129].

Example 2.9: The inverse uncertainty distribution of lognormal uncertain variable LOGN(e, σ) is

\Phi^{-1}(\alpha) = \exp\left(e + \frac{\sigma\sqrt{3}}{\pi} \ln \frac{\alpha}{1 - \alpha}\right).    (2.23)

Theorem 2.5 A function Φ−1 is an inverse uncertainty distribution of an


uncertain variable ξ if and only if

M{ξ ≤ Φ−1 (α)} = α (2.24)

for all α ∈ [0, 1].

Proof: Suppose Φ−1 is the inverse uncertainty distribution of ξ. Then for


any α, we have
M{ξ ≤ Φ−1 (α)} = Φ(Φ−1 (α)) = α.
Figure 2.13: Inverse Lognormal Uncertainty Distribution. Reprinted from Liu [129].

Conversely, suppose Φ−1 meets (2.24). Write x = Φ−1 (α). Then α = Φ(x)
and
M{ξ ≤ x} = α = Φ(x).
That is, Φ is the uncertainty distribution of ξ and Φ−1 is its inverse uncer-
tainty distribution. The theorem is verified.
Theorem 2.6 (Liu [134], Sufficient and Necessary Condition) A function Φ⁻¹(α) : (0, 1) → ℜ is an inverse uncertainty distribution if and only if it is a continuous and strictly increasing function with respect to α.

Proof: Suppose Φ⁻¹(α) is an inverse uncertainty distribution. It follows from the definition of inverse uncertainty distribution that Φ⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1).

Conversely, suppose Φ⁻¹(α) is a continuous and strictly increasing function on (0, 1). Define

\Phi(x) = \begin{cases} 0, & \text{if } x \le \lim_{\alpha\downarrow 0} \Phi^{-1}(\alpha) \\ \alpha, & \text{if } x = \Phi^{-1}(\alpha) \\ 1, & \text{if } x \ge \lim_{\alpha\uparrow 1} \Phi^{-1}(\alpha). \end{cases}

It follows from the Peng-Iwamura theorem that Φ(x) is an uncertainty distribution of some uncertain variable ξ. Then for each α ∈ (0, 1), we have

M\{\xi \le \Phi^{-1}(\alpha)\} = \Phi(\Phi^{-1}(\alpha)) = \alpha.

Thus Φ⁻¹(α) is just the inverse uncertainty distribution of the uncertain variable ξ. The theorem is verified.
Stipulation 2.2 We say a crisp value c has an inverse uncertainty distribution

\Phi^{-1}(\alpha) \equiv c    (2.25)

and Φ⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1) even though it is not.

2.3 Independence

Independence has been explained in many ways. Personally, I think some


uncertain variables are independent if they can be separately defined on dif-
ferent uncertainty spaces. In order to ensure that we are able to do so, we
may define independence in the following mathematical form.

Definition 2.14 (Liu [125]) The uncertain variables ξ1, ξ2, · · · , ξn are said to be independent if

M\left\{\bigcap_{i=1}^{n} (\xi_i \in B_i)\right\} = \bigwedge_{i=1}^{n} M\{\xi_i \in B_i\}    (2.26)

for any Borel sets B1, B2, · · · , Bn.

Example 2.10: Let ξ1 (γ1 ) and ξ2 (γ2 ) be uncertain variables on the uncer-
tainty spaces (Γ1 , L1 , M1 ) and (Γ2 , L2 , M2 ), respectively. It is clear that they
are also uncertain variables on the product uncertainty space (Γ1 , L1 , M1 ) ×
(Γ2 , L2 , M2 ). Then for any Borel sets B1 and B2 , we have

M{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )}
= M {(γ1 , γ2 ) | ξ1 (γ1 ) ∈ B1 , ξ2 (γ2 ) ∈ B2 }
= M {(γ1 | ξ1 (γ1 ) ∈ B1 ) × (γ2 | ξ2 (γ2 ) ∈ B2 )}
= M1 {γ1 | ξ1 (γ1 ) ∈ B1 } ∧ M2 {γ2 | ξ2 (γ2 ) ∈ B2 }
= M {ξ1 ∈ B1 } ∧ M {ξ2 ∈ B2 } .

Thus ξ1 and ξ2 are independent in the product uncertainty space. In fact, it


is true that uncertain variables are always independent if they are defined on
different uncertainty spaces.

Theorem 2.7 The uncertain variables ξ1, ξ2, · · · , ξn are independent if and only if

M\left\{\bigcup_{i=1}^{n} (\xi_i \in B_i)\right\} = \bigvee_{i=1}^{n} M\{\xi_i \in B_i\}    (2.27)

for any Borel sets B1, B2, · · · , Bn.


Proof: It follows from the duality of uncertain measure that ξ1, ξ2, · · · , ξn are independent if and only if

M\left\{\bigcup_{i=1}^{n} (\xi_i \in B_i)\right\} = 1 - M\left\{\bigcap_{i=1}^{n} (\xi_i \in B_i^c)\right\} = 1 - \bigwedge_{i=1}^{n} M\{\xi_i \in B_i^c\} = \bigvee_{i=1}^{n} M\{\xi_i \in B_i\}.

Thus the proof is complete.

Theorem 2.8 Let ξ1, ξ2, · · · , ξn be independent uncertain variables, and let f1, f2, · · · , fn be measurable functions. Then f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent uncertain variables.

Proof: For any Borel sets B1, B2, · · · , Bn, it follows from the definition of independence that

M\left\{\bigcap_{i=1}^{n} (f_i(\xi_i) \in B_i)\right\} = M\left\{\bigcap_{i=1}^{n} (\xi_i \in f_i^{-1}(B_i))\right\} = \bigwedge_{i=1}^{n} M\{\xi_i \in f_i^{-1}(B_i)\} = \bigwedge_{i=1}^{n} M\{f_i(\xi_i) \in B_i\}.

Thus f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent uncertain variables.

Example 2.11: Let ξ1 and ξ2 be independent uncertain variables. Then


their functions ξ1 + 2 and ξ22 + 3ξ2 + 4 are also independent.

2.4 Operational Law


The operational law of independent uncertain variables was given by Liu
[129] for calculating the uncertainty distribution of strictly increasing func-
tion, strictly decreasing function, and strictly monotone function of uncer-
tain variables. This section will also discuss the uncertainty distribution of
Boolean function of Boolean uncertain variables.

Strictly Increasing Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly increasing if

f (x1 , x2 , · · · , xn ) ≤ f (y1 , y2 , · · · , yn ) (2.28)

whenever xi ≤ yi for i = 1, 2, · · · , n, and

f (x1 , x2 , · · · , xn ) < f (y1 , y2 , · · · , yn ) (2.29)



whenever xi < yi for i = 1, 2, · · · , n. The following are strictly increasing


functions,

f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn ,
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn ,
f (x1 , x2 , · · · , xn ) = x1 + x2 + · · · + xn ,
f (x1 , x2 , · · · , xn ) = x1 x2 · · · xn , x1 , x2 , · · · , xn ≥ 0.

Theorem 2.9 (Liu [129]) Let ξ1, ξ2, · · · , ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively. If f is a strictly increasing function, then the uncertain variable

\xi = f(\xi_1, \xi_2, \cdots, \xi_n)    (2.30)

has an inverse uncertainty distribution

\Psi^{-1}(\alpha) = f(\Phi_1^{-1}(\alpha), \Phi_2^{-1}(\alpha), \cdots, \Phi_n^{-1}(\alpha)).    (2.31)

Proof: For simplicity, we only prove the case n = 2. At first, we always have

{ξ ≤ Ψ−1 (α)} ≡ {f (ξ1 , ξ2 ) ≤ f (Φ−1 −1


1 (α), Φ2 (α))}.

On the one hand, since f is a strictly increasing function, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊃ {ξ1 ≤ Φ1⁻¹(α)} ∩ {ξ2 ≤ Φ2⁻¹(α)}.

By using the independence of ξ1 and ξ2 , we get

    M{ξ ≤ Ψ⁻¹(α)} ≥ M{ξ1 ≤ Φ1⁻¹(α)} ∧ M{ξ2 ≤ Φ2⁻¹(α)} = α ∧ α = α.

On the other hand, since f is a strictly increasing function, we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊂ {ξ1 ≤ Φ1⁻¹(α)} ∪ {ξ2 ≤ Φ2⁻¹(α)}.

By using the independence of ξ1 and ξ2 , we get

    M{ξ ≤ Ψ⁻¹(α)} ≤ M{ξ1 ≤ Φ1⁻¹(α)} ∨ M{ξ2 ≤ Φ2⁻¹(α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.

Exercise 2.5: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
sum
ξ = ξ1 + ξ2 + · · · + ξn (2.32)
has an inverse uncertainty distribution

    Ψ⁻¹(α) = Φ1⁻¹(α) + Φ2⁻¹(α) + · · · + Φn⁻¹(α).    (2.33)

Exercise 2.6: Let ξ1 , ξ2 , · · · , ξn be independent and positive uncertain


variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
Show that the product

ξ = ξ1 × ξ2 × · · · × ξn (2.34)

has an inverse uncertainty distribution

    Ψ⁻¹(α) = Φ1⁻¹(α) × Φ2⁻¹(α) × · · · × Φn⁻¹(α).    (2.35)

Exercise 2.7: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
minimum
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.36)
has an inverse uncertainty distribution

    Ψ⁻¹(α) = Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ · · · ∧ Φn⁻¹(α).    (2.37)

Exercise 2.8: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
maximum
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.38)
has an inverse uncertainty distribution

    Ψ⁻¹(α) = Φ1⁻¹(α) ∨ Φ2⁻¹(α) ∨ · · · ∨ Φn⁻¹(α).    (2.39)
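To make the recipe of Theorem 2.9 and Exercises 2.5–2.8 concrete, here is a minimal numerical sketch in Python (the helper names are ours, not part of the book's software), assuming linear uncertain variables:

    # Sketch of Theorem 2.9: for strictly increasing f, plug the inverse
    # distributions into f at the same confidence level alpha.

    def linear_inv(a, b):
        """Inverse uncertainty distribution of a linear variable L(a, b)."""
        return lambda alpha: (1 - alpha) * a + alpha * b

    def increasing_inv(f, invs):
        """Inverse distribution of f(xi_1, ..., xi_n), f strictly increasing."""
        return lambda alpha: f(*(inv(alpha) for inv in invs))

    phi1 = linear_inv(1, 3)    # xi1 ~ L(1, 3)
    phi2 = linear_inv(2, 6)    # xi2 ~ L(2, 6)
    psi = increasing_inv(lambda x, y: x + y, [phi1, phi2])
    print(psi(0.5))            # 6.0, the median of L(3, 9); cf. Theorem 2.10 below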

Theorem 2.10 Assume that ξ1 and ξ2 are independent linear uncertain


variables L(a1 , b1 ) and L(a2 , b2 ), respectively. Then the sum ξ1 + ξ2 is also a
linear uncertain variable L(a1 + a2 , b1 + b2 ), i.e.,

L(a1 , b1 ) + L(a2 , b2 ) = L(a1 + a2 , b1 + b2 ). (2.40)

The product of a linear uncertain variable L(a, b) and a scalar number k > 0
is also a linear uncertain variable L(ka, kb), i.e.,

k · L(a, b) = L(ka, kb). (2.41)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then

    Φ1⁻¹(α) = (1 − α)a1 + αb1,
    Φ2⁻¹(α) = (1 − α)a2 + αb2.

It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is

    Ψ⁻¹(α) = Φ1⁻¹(α) + Φ2⁻¹(α) = (1 − α)(a1 + a2) + α(b1 + b2).

Hence the sum is also a linear uncertain variable L(a1 + a2 , b1 + b2 ). The


first part is verified. Next, suppose that the uncertainty distribution of the
uncertain variable ξ ∼ L(a, b) is Φ. It follows from the operational law that
when k > 0, the inverse uncertainty distribution of kξ is

Ψ−1 (α) = kΦ−1 (α) = (1 − α)(ka) + α(kb).

Hence kξ is just a linear uncertain variable L(ka, kb).

Theorem 2.11 Assume that ξ1 and ξ2 are independent zigzag uncertain variables Z(a1, b1, c1) and Z(a2, b2, c2), respectively. Then the sum ξ1 + ξ2 is also a zigzag uncertain variable Z(a1 + a2, b1 + b2, c1 + c2), i.e.,

Z(a1 , b1 , c1 ) + Z(a2 , b2 , c2 ) = Z(a1 + a2 , b1 + b2 , c1 + c2 ). (2.42)

The product of a zigzag uncertain variable Z(a, b, c) and a scalar number


k > 0 is also a zigzag uncertain variable Z(ka, kb, kc), i.e.,

k · Z(a, b, c) = Z(ka, kb, kc). (2.43)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then
    Φ1⁻¹(α) = (1 − 2α)a1 + 2αb1,            if α < 0.5
              (2 − 2α)b1 + (2α − 1)c1,      if α ≥ 0.5,

    Φ2⁻¹(α) = (1 − 2α)a2 + 2αb2,            if α < 0.5
              (2 − 2α)b2 + (2α − 1)c2,      if α ≥ 0.5.

It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is
    Ψ⁻¹(α) = (1 − 2α)(a1 + a2) + 2α(b1 + b2),            if α < 0.5
             (2 − 2α)(b1 + b2) + (2α − 1)(c1 + c2),      if α ≥ 0.5.

Hence the sum is also a zigzag uncertain variable Z(a1 + a2 , b1 + b2 , c1 + c2 ).


The first part is verified. Next, suppose that the uncertainty distribution of
the uncertain variable ξ ∼ Z(a, b, c) is Φ. It follows from the operational law
that when k > 0, the inverse uncertainty distribution of kξ is
    Ψ⁻¹(α) = kΦ⁻¹(α) = (1 − 2α)(ka) + 2α(kb),            if α < 0.5
                       (2 − 2α)(kb) + (2α − 1)(kc),      if α ≥ 0.5.

Hence kξ is just a zigzag uncertain variable Z(ka, kb, kc).



Theorem 2.12 Let ξ1 and ξ2 be independent normal uncertain variables


N (e1 , σ1 ) and N (e2 , σ2 ), respectively. Then the sum ξ1 + ξ2 is also a normal
uncertain variable N (e1 + e2 , σ1 + σ2 ), i.e.,

N (e1 , σ1 ) + N (e2 , σ2 ) = N (e1 + e2 , σ1 + σ2 ). (2.44)

The product of a normal uncertain variable N (e, σ) and a scalar number


k > 0 is also a normal uncertain variable N (ke, kσ), i.e.,

k · N (e, σ) = N (ke, kσ). (2.45)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then

    Φ1⁻¹(α) = e1 + (σ1√3/π) ln(α/(1 − α)),
    Φ2⁻¹(α) = e2 + (σ2√3/π) ln(α/(1 − α)).
It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is

    Ψ⁻¹(α) = Φ1⁻¹(α) + Φ2⁻¹(α) = (e1 + e2) + ((σ1 + σ2)√3/π) ln(α/(1 − α)).

Hence the sum is also a normal uncertain variable N (e1 + e2 , σ1 + σ2 ). The


first part is verified. Next, suppose that the uncertainty distribution of the
uncertain variable ξ ∼ N (e, σ) is Φ. It follows from the operational law that,
when k > 0, the inverse uncertainty distribution of kξ is

    Ψ⁻¹(α) = kΦ⁻¹(α) = ke + (kσ√3/π) ln(α/(1 − α)).

Hence kξ is just a normal uncertain variable N (ke, kσ).
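As a quick numerical check of Theorem 2.12, the following sketch (our own code) verifies that the inverse distributions add level by level, using the formula for Φ⁻¹ of N(e, σ) derived above:

    import math

    # Inverse uncertainty distribution of a normal uncertain variable N(e, sigma).
    def normal_inv(e, sigma):
        return lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))

    phi1, phi2 = normal_inv(1.0, 2.0), normal_inv(3.0, 1.0)
    psi = normal_inv(4.0, 3.0)     # claimed distribution N(e1+e2, sigma1+sigma2)
    for a in (0.1, 0.5, 0.9):
        assert abs(phi1(a) + phi2(a) - psi(a)) < 1e-12   # sums agree at each level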

Theorem 2.13 Assume that ξ1 and ξ2 are independent lognormal uncertain


variables LOGN (e1 , σ1 ) and LOGN (e2 , σ2 ), respectively. Then the product
ξ1 · ξ2 is also a lognormal uncertain variable LOGN (e1 + e2 , σ1 + σ2 ), i.e.,

LOGN (e1 , σ1 ) · LOGN (e2 , σ2 ) = LOGN (e1 + e2 , σ1 + σ2 ). (2.46)

The product of a lognormal uncertain variable LOGN (e, σ) and a scalar num-
ber k > 0 is also a lognormal uncertain variable LOGN (e + ln k, σ), i.e.,

k · LOGN (e, σ) = LOGN (e + ln k, σ). (2.47)



Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then
    Φ1⁻¹(α) = exp(e1 + (σ1√3/π) ln(α/(1 − α))),
    Φ2⁻¹(α) = exp(e2 + (σ2√3/π) ln(α/(1 − α))).
It follows from the operational law that the inverse uncertainty distribution
of ξ1 · ξ2 is
    Ψ⁻¹(α) = Φ1⁻¹(α) · Φ2⁻¹(α) = exp((e1 + e2) + ((σ1 + σ2)√3/π) ln(α/(1 − α))).

Hence the product is a lognormal uncertain variable LOGN (e1 + e2 , σ1 + σ2 ).


The first part is verified. Next, suppose that the uncertainty distribution of
the uncertain variable ξ ∼ LOGN (e, σ) is Φ. It follows from the operational
law that, when k > 0, the inverse uncertainty distribution of kξ is
    Ψ⁻¹(α) = kΦ⁻¹(α) = exp((e + ln k) + (σ√3/π) ln(α/(1 − α))).

Hence kξ is just a lognormal uncertain variable LOGN (e + ln k, σ).

Theorem 2.14 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f is a
strictly increasing function, then the uncertain variable

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.48)

has an uncertainty distribution

    Ψ(x) = sup_{f(x1,x2,··· ,xn)=x} min_{1≤i≤n} Φi(xi).    (2.49)

Proof: For simplicity, we only prove the case n = 2. Since f is a strictly


increasing function, it holds that
    {f(ξ1, ξ2) ≤ x} = ∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≤ x2).

Thus the uncertainty distribution is


 
 [ 
Ψ(x) = M{f (ξ1 , ξ2 ) ≤ x} = M (ξ1 ≤ x1 ) ∩ (ξ2 ≤ x2 ) .
 
f (x1 ,x2 )=x

Note that for each given number x, the event

    ∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≤ x2)

is just a polyrectangle. It follows from the polyrectangular theorem that

    Ψ(x) = sup_{f(x1,x2)=x} M{(ξ1 ≤ x1) ∩ (ξ2 ≤ x2)}
         = sup_{f(x1,x2)=x} M{ξ1 ≤ x1} ∧ M{ξ2 ≤ x2}
         = sup_{f(x1,x2)=x} Φ1(x1) ∧ Φ2(x2).

The theorem is proved.

Exercise 2.9: Let ξ be an uncertain variable with uncertainty distribution Φ,


and let f be a strictly increasing function. Show that f (ξ) has an uncertainty
distribution
    Ψ(x) = Φ(f⁻¹(x)),  ∀x ∈ ℜ.    (2.50)

Exercise 2.10: Let ξ1 , ξ2 , · · · , ξn be iid uncertain variables with a common


uncertainty distribution Φ. Show that the sum

ξ = ξ1 + ξ2 + · · · + ξn (2.51)

has an uncertainty distribution


    Ψ(x) = Φ(x/n).    (2.52)

Exercise 2.11: Let ξ1 , ξ2 , · · · , ξn be iid and positive uncertain variables with


a common uncertainty distribution Φ. Show that the product

ξ = ξ1 ξ2 · · · ξn (2.53)

has an uncertainty distribution



    Ψ(x) = Φ(x^{1/n}).    (2.54)

Exercise 2.12: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the mini-
mum
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.55)
has an uncertainty distribution

Ψ(x) = Φ1 (x) ∨ Φ2 (x) ∨ · · · ∨ Φn (x). (2.56)



Exercise 2.13: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the maxi-
mum
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.57)
has an uncertainty distribution

Ψ(x) = Φ1 (x) ∧ Φ2 (x) ∧ · · · ∧ Φn (x). (2.58)

Theorem 2.15 (Liu [135], Extreme Value Theorem) Let ξ1 , ξ2 , · · · , ξn be


independent uncertain variables. Assume that

Si = ξ1 + ξ2 + · · · + ξi (2.59)

have uncertainty distributions Ψi for i = 1, 2, · · · , n, respectively. Then the


maximum
S = S1 ∨ S2 ∨ · · · ∨ Sn (2.60)
has an uncertainty distribution

Υ(x) = Ψ1 (x) ∧ Ψ2 (x) ∧ · · · ∧ Ψn (x); (2.61)

and the minimum


S = S1 ∧ S2 ∧ · · · ∧ Sn (2.62)
has an uncertainty distribution

Υ(x) = Ψ1 (x) ∨ Ψ2 (x) ∨ · · · ∨ Ψn (x). (2.63)

Proof: Assume that the uncertainty distributions of the uncertain variables


ξ1 , ξ2 , · · · , ξn are Φ1 , Φ2 , · · · , Φn , respectively. Define

f (x1 , x2 , · · · , xn ) = x1 ∨ (x1 + x2 ) ∨ · · · ∨ (x1 + x2 + · · · + xn ).

Then f is a strictly increasing function and

S = f (ξ1 , ξ2 , · · · , ξn ).

It follows from Theorem 2.14 that S has an uncertainty distribution

    Υ(x) = sup_{f(x1,x2,··· ,xn)=x} Φ1(x1) ∧ Φ2(x2) ∧ · · · ∧ Φn(xn)
         = min_{1≤i≤n} sup_{x1+x2+···+xi=x} Φ1(x1) ∧ Φ2(x2) ∧ · · · ∧ Φi(xi)
         = min_{1≤i≤n} Ψi(x).

Thus (2.61) is verified. Similarly, define

f (x1 , x2 , · · · , xn ) = x1 ∧ (x1 + x2 ) ∧ · · · ∧ (x1 + x2 + · · · + xn ).



Then f is a strictly increasing function and

S = f (ξ1 , ξ2 , · · · , ξn ).

It follows from Theorem 2.14 that S has an uncertainty distribution

    Υ(x) = sup_{f(x1,x2,··· ,xn)=x} Φ1(x1) ∧ Φ2(x2) ∧ · · · ∧ Φn(xn)
         = max_{1≤i≤n} sup_{x1+x2+···+xi=x} Φ1(x1) ∧ Φ2(x2) ∧ · · · ∧ Φi(xi)
         = max_{1≤i≤n} Ψi(x).

Thus (2.63) is verified.
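A small numerical illustration of the extreme value theorem (our own sketch, assuming each ξj ~ L(0, 2), so that Si ~ L(0, 2i)):

    # Uncertainty distribution of S_i ~ L(0, 2i), clipped to [0, 1].
    def psi(i):
        return lambda x: min(max(x / (2.0 * i), 0.0), 1.0)

    n = 3
    upsilon = lambda x: min(psi(i)(x) for i in range(1, n + 1))   # formula (2.61)
    print(upsilon(3.0))   # 0.5: Psi_3(3) = 3/6 is the smallest of the three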

Strictly Decreasing Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly decreasing if

f (x1 , x2 , · · · , xn ) ≥ f (y1 , y2 , · · · , yn ) (2.64)

whenever xi ≤ yi for i = 1, 2, · · · , n, and

f (x1 , x2 , · · · , xn ) > f (y1 , y2 , · · · , yn ) (2.65)

whenever xi < yi for i = 1, 2, · · · , n. If f (x1 , x2 , · · · , xn ) is a strictly increas-


ing function, then −f (x1 , x2 , · · · , xn ) is a strictly decreasing function. Fur-
thermore, 1/f (x1 , x2 , · · · , xn ) is also a strictly decreasing function provided
that f is positive. Especially, the following are strictly decreasing functions,

    f(x) = −x,
    f(x) = exp(−x),
    f(x) = 1/x, x > 0.
Theorem 2.16 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f
is a strictly decreasing function, then the uncertain variable

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.66)

has an inverse uncertainty distribution

    Ψ⁻¹(α) = f(Φ1⁻¹(1 − α), Φ2⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).    (2.67)

Proof: For simplicity, we only prove the case n = 2. At first, we always have

    {ξ ≤ Ψ⁻¹(α)} ≡ {f(ξ1, ξ2) ≤ f(Φ1⁻¹(1 − α), Φ2⁻¹(1 − α))}.

On the one hand, since f is a strictly decreasing function, we obtain


    {ξ ≤ Ψ⁻¹(α)} ⊃ {ξ1 ≥ Φ1⁻¹(1 − α)} ∩ {ξ2 ≥ Φ2⁻¹(1 − α)}.

By using the independence of ξ1 and ξ2 , we get


    M{ξ ≤ Ψ⁻¹(α)} ≥ M{ξ1 ≥ Φ1⁻¹(1 − α)} ∧ M{ξ2 ≥ Φ2⁻¹(1 − α)} = α ∧ α = α.

On the other hand, since f is a strictly decreasing function, we obtain


    {ξ ≤ Ψ⁻¹(α)} ⊂ {ξ1 ≥ Φ1⁻¹(1 − α)} ∪ {ξ2 ≥ Φ2⁻¹(1 − α)}.

By using the independence of ξ1 and ξ2 , we get


    M{ξ ≤ Ψ⁻¹(α)} ≤ M{ξ1 ≥ Φ1⁻¹(1 − α)} ∨ M{ξ2 ≥ Φ2⁻¹(1 − α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.

Exercise 2.14: Let ξ be a positive uncertain variable with regular uncer-


tainty distribution Φ. Show that the reciprocal 1/ξ has an inverse uncertainty
distribution
    Ψ⁻¹(α) = 1 / Φ⁻¹(1 − α).    (2.68)

Exercise 2.15: Let ξ be an uncertain variable with regular uncertainty


distribution Φ. Show that exp(−ξ) has an inverse uncertainty distribution
    Ψ⁻¹(α) = exp(−Φ⁻¹(1 − α)).    (2.69)
Theorem 2.17 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with continuous uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
If f is a strictly decreasing function, then the uncertain variable
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.70)
has an uncertainty distribution
    Ψ(x) = sup_{f(x1,x2,··· ,xn)=x} min_{1≤i≤n} (1 − Φi(xi)).    (2.71)

Proof: For simplicity, we only prove the case n = 2. Since f is a strictly


decreasing function, it holds that
    {f(ξ1, ξ2) ≤ x} = ∪_{f(x1,x2)=x} (ξ1 ≥ x1) ∩ (ξ2 ≥ x2).

Thus the uncertainty distribution is


 
 [ 
Ψ(x) = M{f (ξ1 , ξ2 ) ≤ x} = M (ξ1 ≥ x1 ) ∩ (ξ2 ≥ x2 ) .
 
f (x1 ,x2 )=x

Note that for each given number x, the event

    ∪_{f(x1,x2)=x} (ξ1 ≥ x1) ∩ (ξ2 ≥ x2)

is just a polyrectangle. It follows from the polyrectangular theorem that


    Ψ(x) = sup_{f(x1,x2)=x} M{(ξ1 ≥ x1) ∩ (ξ2 ≥ x2)}
         = sup_{f(x1,x2)=x} M{ξ1 ≥ x1} ∧ M{ξ2 ≥ x2}
         = sup_{f(x1,x2)=x} (1 − Φ1(x1)) ∧ (1 − Φ2(x2)).

The theorem is proved.

Exercise 2.16: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ, and let f be a strictly decreasing function. Show that f (ξ)
has an uncertainty distribution

    Ψ(x) = 1 − Φ(f⁻¹(x)),  ∀x ∈ ℜ.    (2.72)

Exercise 2.17: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ, and let a and b be real numbers with a < 0. Show that aξ + b
has an uncertainty distribution

    Ψ(x) = 1 − Φ((x − b)/a),  ∀x ∈ ℜ.    (2.73)

Exercise 2.18: Let ξ be a positive uncertain variable with continuous un-


certainty distribution Φ. Show that 1/ξ has an uncertainty distribution

    Ψ(x) = 1 − Φ(1/x),  ∀x > 0.    (2.74)

Exercise 2.19: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ. Show that exp(−ξ) has an uncertainty distribution

Ψ(x) = 1 − Φ(− ln(x)), ∀x > 0. (2.75)

Strictly Monotone Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly monotone if it
is strictly increasing with respect to x1 , x2 , · · · , xm and strictly decreasing
with respect to xm+1 , xm+2 , · · · , xn , that is,

f (x1 , · · · , xm , xm+1 , · · · , xn ) ≤ f (y1 , · · · , ym , ym+1 , · · · , yn ) (2.76)



whenever xi ≤ yi for i = 1, 2, · · · , m and xi ≥ yi for i = m + 1, m + 2, · · · , n,


and

f (x1 , · · · , xm , xm+1 , · · · , xn ) < f (y1 , · · · , ym , ym+1 , · · · , yn ) (2.77)

whenever xi < yi for i = 1, 2, · · · , m and xi > yi for i = m + 1, m + 2, · · · , n.


The following are strictly monotone functions,

f (x1 , x2 ) = x1 − x2 ,
f (x1 , x2 ) = x1 /x2 , x1 , x2 > 0,
f (x1 , x2 ) = x1 /(x1 + x2 ), x1 , x2 > 0.

Note that both strictly increasing function and strictly decreasing function
are special cases of strictly monotone function.

Theorem 2.18 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If
the function f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · ,
xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then the un-
certain variable
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.78)
has an inverse uncertainty distribution

    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).    (2.79)

Proof: We only prove the case of m = 1 and n = 2. At first, we always have

    {ξ ≤ Ψ⁻¹(α)} ≡ {f(ξ1, ξ2) ≤ f(Φ1⁻¹(α), Φ2⁻¹(1 − α))}.

On the one hand, since the function f (x1 , x2 ) is strictly increasing with re-
spect to x1 and strictly decreasing with x2 , we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊃ {ξ1 ≤ Φ1⁻¹(α)} ∩ {ξ2 ≥ Φ2⁻¹(1 − α)}.

By using the independence of ξ1 and ξ2 , we get

    M{ξ ≤ Ψ⁻¹(α)} ≥ M{ξ1 ≤ Φ1⁻¹(α)} ∧ M{ξ2 ≥ Φ2⁻¹(1 − α)} = α ∧ α = α.

On the other hand, since the function f (x1 , x2 ) is strictly increasing with
respect to x1 and strictly decreasing with x2 , we obtain

    {ξ ≤ Ψ⁻¹(α)} ⊂ {ξ1 ≤ Φ1⁻¹(α)} ∪ {ξ2 ≥ Φ2⁻¹(1 − α)}.

By using the independence of ξ1 and ξ2 , we get

    M{ξ ≤ Ψ⁻¹(α)} ≤ M{ξ1 ≤ Φ1⁻¹(α)} ∨ M{ξ2 ≥ Φ2⁻¹(1 − α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.

Exercise 2.20: Let ξ1 and ξ2 be independent uncertain variables with regu-


lar uncertainty distributions Φ1 and Φ2 , respectively. Show that the inverse
uncertainty distribution of the difference ξ1 − ξ2 is

    Ψ⁻¹(α) = Φ1⁻¹(α) − Φ2⁻¹(1 − α).    (2.80)
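For instance, a two-line numerical sketch of (2.80) with linear variables (our own example, reusing a linear_inv helper as above):

    def linear_inv(a, b):
        return lambda alpha: (1 - alpha) * a + alpha * b

    phi1, phi2 = linear_inv(1, 3), linear_inv(0, 2)
    psi = lambda a: phi1(a) - phi2(1 - a)     # equation (2.80)
    print(psi(0.5))   # 1.0; indeed xi1 - xi2 ~ L(-1, 3), whose median is 1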

Exercise 2.21: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with regular uncertainty distributions Φ1 and Φ2 , respectively. Show
that the inverse uncertainty distribution of the quotient ξ1 /ξ2 is

    Ψ⁻¹(α) = Φ1⁻¹(α) / Φ2⁻¹(1 − α).    (2.81)

Exercise 2.22: Assume ξ1 and ξ2 are independent and positive uncer-


tain variables with regular uncertainty distributions Φ1 and Φ2 , respectively.
Show that the inverse uncertainty distribution of ξ1 /(ξ1 + ξ2 ) is

    Ψ⁻¹(α) = Φ1⁻¹(α) / (Φ1⁻¹(α) + Φ2⁻¹(1 − α)).    (2.82)

Theorem 2.19 (Liu [129]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with continuous uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
If the function f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 ,
· · · , xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then the
uncertain variable
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.83)
has an uncertainty distribution

    Ψ(x) = sup_{f(x1,x2,··· ,xn)=x} [ min_{1≤i≤m} Φi(xi) ∧ min_{m+1≤i≤n} (1 − Φi(xi)) ].    (2.84)

Proof: For simplicity, we only prove the case of m = 1 and n = 2. Since


f (x1 , x2 ) is strictly increasing with respect to x1 and strictly decreasing with
respect to x2 , it holds that
    {f(ξ1, ξ2) ≤ x} = ∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≥ x2).

Thus the uncertainty distribution is


 
 [ 
Ψ(x) = M{f (ξ1 , ξ2 ) ≤ x} = M (ξ1 ≤ x1 ) ∩ (ξ2 ≥ x2 ) .
 
f (x1 ,x2 )=x

Note that for each given number x, the event

    ∪_{f(x1,x2)=x} (ξ1 ≤ x1) ∩ (ξ2 ≥ x2)

is just a polyrectangle. It follows from the polyrectangular theorem that

    Ψ(x) = sup_{f(x1,x2)=x} M{(ξ1 ≤ x1) ∩ (ξ2 ≥ x2)}
         = sup_{f(x1,x2)=x} M{ξ1 ≤ x1} ∧ M{ξ2 ≥ x2}
         = sup_{f(x1,x2)=x} Φ1(x1) ∧ (1 − Φ2(x2)).

The theorem is proved.

Exercise 2.23: Let ξ1 and ξ2 be independent uncertain variables with con-


tinuous uncertainty distributions Φ1 and Φ2 , respectively. Show that ξ1 − ξ2
has an uncertainty distribution

    Ψ(x) = sup_{y∈ℜ} Φ1(x + y) ∧ (1 − Φ2(y)).    (2.85)

Exercise 2.24: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with continuous uncertainty distributions Φ1 and Φ2 , respectively. Show
that ξ1 /ξ2 has an uncertainty distribution

    Ψ(x) = sup_{y>0} Φ1(xy) ∧ (1 − Φ2(y)).    (2.86)

Exercise 2.25: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with continuous uncertainty distributions Φ1 and Φ2 , respectively. Show
that ξ1 /(ξ1 + ξ2 ) has an uncertainty distribution

    Ψ(x) = sup_{y>0} Φ1(xy) ∧ (1 − Φ2(y − xy)).    (2.87)

Some Useful Theorems


In many cases, it is required to calculate M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}. Perhaps
the first idea is to produce the uncertainty distribution Ψ(x) of f (ξ1 , ξ2 , · · ·, ξn )
by the operational law, and then the uncertain measure is just Ψ(0). How-
ever, for convenience, we may use the following theorems.

Theorem 2.20 (Liu [128]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If

f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly


decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then
M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} (2.88)
is just the root α of the equation
    f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) = 0.    (2.89)
Proof: It follows from Theorem 2.18 that f (ξ1 , ξ2 , · · · , ξn ) is an uncertain
variable whose inverse uncertainty distribution is
    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

Since M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} = Ψ(0), it is the solution α of the equation


Ψ−1 (α) = 0. The theorem is proved.

Remark 2.3: Keep in mind that sometimes the equation (2.89) may not
have a root. In this case, if
    f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) < 0    (2.90)
for all α, then we set the root α = 1; and if
    f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) > 0    (2.91)
for all α, then we set the root α = 0.

Remark 2.4: Since f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to


ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , the
function f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) is a strictly
increasing function with respect to α. See Figure 2.14. Thus its root α may
be estimated by the bisection method:

Step 1. Set a = 0, b = 1 and c = (a + b)/2.


Step 2. If f(Φ1⁻¹(c), · · · , Φm⁻¹(c), Φm+1⁻¹(1 − c), · · · , Φn⁻¹(1 − c)) ≤ 0, then set a = c. Otherwise, set b = c.
Step 3. If |b − a| > ε (a predetermined precision), then set c = (a + b)/2 and go to Step 2. Otherwise, output b as the root.
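A minimal Python sketch of this bisection scheme (our own code, not the book's toolbox):

    # Root of a strictly increasing function g on (0, 1), as in Remark 2.4.
    def root_by_bisection(g, eps=1e-8):
        a, b = 0.0, 1.0
        while b - a > eps:
            c = (a + b) / 2
            if g(c) <= 0:
                a = c          # the root lies to the right of c
            else:
                b = c          # the root lies to the left of c
        return b

    # Example: xi1 ~ L(0, 2), xi2 ~ L(1, 3). Then M{xi1 - xi2 <= 0} is the root
    # of Phi1^{-1}(alpha) - Phi2^{-1}(1 - alpha) = 2*alpha - (3 - 2*alpha).
    print(root_by_bisection(lambda a: 4 * a - 3))   # about 0.75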

Theorem 2.21 (Liu [128]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If
f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly
decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then
M{f (ξ1 , ξ2 , · · · , ξn ) > 0} (2.92)
is just the root α of the equation
    f(Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)) = 0.    (2.93)

Figure 2.14: f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) plotted as a strictly increasing function of α. [plot omitted]

Proof: It follows from Theorem 2.18 that f (ξ1 , ξ2 , · · · , ξn ) is an uncertain


variable whose inverse uncertainty distribution is

    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

Since M{f (ξ1 , ξ2 , · · · , ξn ) > 0} = 1−Ψ(0), it is the solution α of the equation


Ψ−1 (1 − α) = 0. The theorem is proved.

Remark 2.5: Keep in mind that sometimes the equation (2.93) may not
have a root. In this case, if

    f(Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)) < 0    (2.94)

for all α, then we set the root α = 0; and if

    f(Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)) > 0    (2.95)

for all α, then we set the root α = 1.

Remark 2.6: Since f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to


ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , the
function f(Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)) is a strictly
decreasing function with respect to α. See Figure 2.15. Thus its root α may
be estimated by the bisection method:

Step 1. Set a = 0, b = 1 and c = (a + b)/2.


Step 2. If f(Φ1⁻¹(1 − c), · · · , Φm⁻¹(1 − c), Φm+1⁻¹(c), · · · , Φn⁻¹(c)) > 0, then set a = c. Otherwise, set b = c.
Step 3. If |b − a| > ε (a predetermined precision), then set c = (a + b)/2 and go to Step 2. Otherwise, output b as the root.

Theorem 2.22 Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If the func-
tion f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and

Figure 2.15: f(Φ1⁻¹(1 − α), · · · , Φm⁻¹(1 − α), Φm+1⁻¹(α), · · · , Φn⁻¹(α)) plotted as a strictly decreasing function of α. [plot omitted]

strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

M {f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} ≥ α (2.96)

if and only if

    f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) ≤ 0.    (2.97)

Proof: It follows from Theorem 2.18 that the inverse uncertainty distribution
of f (ξ1 , ξ2 , · · · , ξn ) is

    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

Thus (2.96) holds if and only if Ψ−1 (α) ≤ 0. The theorem is thus verified.

Boolean Function of Boolean Uncertain Variables


A function is said to be Boolean if it is a mapping from {0, 1}n to {0, 1}. For
example,
f (x1 , x2 , x3 ) = x1 ∨ x2 ∧ x3 (2.98)
is a Boolean function. An uncertain variable is said to be Boolean if it
takes values either 0 or 1. For example, the following is a Boolean uncertain
variable,

    ξ = 1 with uncertain measure a,
        0 with uncertain measure 1 − a    (2.99)
where a is a number between 0 and 1. This subsection introduces an opera-
tional law for Boolean system.

Theorem 2.23 (Liu [129]) Assume ξ1 , ξ2 , · · · , ξn are independent Boolean


uncertain variables, i.e.,
    ξi = 1 with uncertain measure ai,
         0 with uncertain measure 1 − ai    (2.100)

for i = 1, 2, · · · , n. If f is a Boolean function (not necessarily monotone),


then ξ = f (ξ1 , ξ2 , · · · , ξn ) is a Boolean uncertain variable such that

    M{ξ = 1} = sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi),
                   if sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) < 0.5;
               1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi),
                   if sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5    (2.101)
where xi take values either 0 or 1, and νi are defined by


(
ai , if xi = 1
νi (xi ) = (2.102)
1 − ai , if xi = 0

for i = 1, 2, · · · , n, respectively.

Proof: Let B1 , B2 , · · · , Bn be nonempty subsets of {0, 1}. In other words,


they take values of {0}, {1} or {0, 1}. Write

Λ = {ξ = 1}, Λc = {ξ = 0}, Λi = {ξi ∈ Bi }

for i = 1, 2, · · · , n. It is easy to verify that

Λ1 × Λ2 × · · · × Λn = Λ if and only if f (B1 , B2 , · · · , Bn ) = {1},

Λ1 × Λ2 × · · · × Λn = Λc if and only if f (B1 , B2 , · · · , Bn ) = {0}.


It follows from the product axiom that
    M{ξ = 1} = sup_{f(B1,B2,··· ,Bn)={1}} min_{1≤i≤n} M{ξi ∈ Bi},
                   if sup_{f(B1,B2,··· ,Bn)={1}} min_{1≤i≤n} M{ξi ∈ Bi} > 0.5;
               1 − sup_{f(B1,B2,··· ,Bn)={0}} min_{1≤i≤n} M{ξi ∈ Bi},
                   if sup_{f(B1,B2,··· ,Bn)={0}} min_{1≤i≤n} M{ξi ∈ Bi} > 0.5;
               0.5, otherwise.    (2.103)
Please note that

νi (1) = M{ξi = 1}, νi (0) = M{ξi = 0}

for i = 1, 2, · · · , n. The argument breaks down into four cases.

Case 1: Assume

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) < 0.5.

Then we have

    sup_{f(B1,B2,··· ,Bn)={0}} min_{1≤i≤n} M{ξi ∈ Bi} = 1 − sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) > 0.5.

It follows from (2.103) that

    M{ξ = 1} = sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi).

Case 2: Assume

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) > 0.5.

Then we have

    sup_{f(B1,B2,··· ,Bn)={1}} min_{1≤i≤n} M{ξi ∈ Bi} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) > 0.5.

It follows from (2.103) that

    M{ξ = 1} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi).

Case 3: Assume

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = 0.5,
    sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = 0.5.

Then we have

    sup_{f(B1,B2,··· ,Bn)={1}} min_{1≤i≤n} M{ξi ∈ Bi} = 0.5,
    sup_{f(B1,B2,··· ,Bn)={0}} min_{1≤i≤n} M{ξi ∈ Bi} = 0.5.

It follows from (2.103) that

    M{ξ = 1} = 0.5 = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi).

Case 4: Assume

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = 0.5,
    sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) < 0.5.

Then we have

    sup_{f(B1,B2,··· ,Bn)={1}} min_{1≤i≤n} M{ξi ∈ Bi} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) > 0.5.

It follows from (2.103) that

    M{ξ = 1} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi).

Hence the equation (2.101) is proved for the four cases.



Theorem 2.24 Assume that ξ1 , ξ2 , · · · , ξn are independent Boolean uncer-


tain variables, i.e.,
    ξi = 1 with uncertain measure ai,
         0 with uncertain measure 1 − ai    (2.104)

for i = 1, 2, · · · , n. Then the minimum

ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.105)

is a Boolean uncertain variable such that

M{ξ = 1} = a1 ∧ a2 ∧ · · · ∧ an , (2.106)

M{ξ = 0} = (1 − a1 ) ∨ (1 − a2 ) ∨ · · · ∨ (1 − an ). (2.107)

Proof: Since ξ is the minimum of Boolean uncertain variables, the corre-


sponding Boolean function is

f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (2.108)

Without loss of generality, we assume a1 ≥ a2 ≥ · · · ≥ an . Then we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = min_{1≤i≤n} νi(1) = an,
    sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = (1 − an) ∧ min_{1≤i<n} (ai ∨ (1 − ai))

where νi (xi ) are defined by (2.102) for i = 1, 2, · · · , n, respectively. When


an < 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = an < 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = an.

When an ≥ 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = an ≥ 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = 1 − (1 − an) = an.

Thus M{ξ = 1} is always an , i.e., the minimum value of a1 , a2 , · · · , an . Thus


the equation (2.106) is proved. The equation (2.107) may be verified by the
duality of uncertain measure.

Theorem 2.25 Assume that ξ1 , ξ2 , · · · , ξn are independent Boolean uncer-


tain variables, i.e.,
    ξi = 1 with uncertain measure ai,
         0 with uncertain measure 1 − ai    (2.109)

for i = 1, 2, · · · , n. Then the maximum

ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.110)

is a Boolean uncertain variable such that

M{ξ = 1} = a1 ∨ a2 ∨ · · · ∨ an , (2.111)

M{ξ = 0} = (1 − a1 ) ∧ (1 − a2 ) ∧ · · · ∧ (1 − an ). (2.112)

Proof: Since ξ is the maximum of Boolean uncertain variables, the corre-


sponding Boolean function is

f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (2.113)

Without loss of generality, we assume a1 ≥ a2 ≥ · · · ≥ an . Then we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = a1 ∧ min_{1<i≤n} (ai ∨ (1 − ai)),
    sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = min_{1≤i≤n} νi(0) = 1 − a1

where νi (xi ) are defined by (2.102) for i = 1, 2, · · · , n, respectively. When


a1 ≥ 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = 1 − (1 − a1) = a1.

When a1 < 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = a1 < 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = a1.

Thus M{ξ = 1} is always a1 , i.e., the maximum value of a1 , a2 , · · · , an . Thus


the equation (2.111) is proved. The equation (2.112) may be verified by the
duality of uncertain measure.

Theorem 2.26 Assume that ξ1 , ξ2 , · · · , ξn are independent Boolean uncer-


tain variables, i.e.,
    ξi = 1 with uncertain measure ai,
         0 with uncertain measure 1 − ai    (2.114)

for i = 1, 2, · · · , n. Then
    ξ = 1, if ξ1 + ξ2 + · · · + ξn ≥ k,
        0, if ξ1 + ξ2 + · · · + ξn < k    (2.115)

is a Boolean uncertain variable such that

M{ξ = 1} = k-max [a1 , a2 , · · · , an ] (2.116)

and
M{ξ = 0} = k-min [1 − a1 , 1 − a2 , · · · , 1 − an ] (2.117)
where k-max represents the kth largest value, and k-min represents the kth
smallest value.

Proof: This is the so-called k-out-of-n system. The corresponding Boolean


function is
    f(x1, x2, · · · , xn) = 1, if x1 + x2 + · · · + xn ≥ k,
                           0, if x1 + x2 + · · · + xn < k.    (2.118)

Without loss of generality, we assume a1 ≥ a2 ≥ · · · ≥ an . Then we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = ak ∧ min_{k<i≤n} (ai ∨ (1 − ai)),
    sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = (1 − ak) ∧ min_{k<i≤n} (ai ∨ (1 − ai))

where νi (xi ) are defined by (2.102) for i = 1, 2, · · · , n, respectively. When


ak ≥ 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = 1 − sup_{f(x1,x2,··· ,xn)=0} min_{1≤i≤n} νi(xi) = 1 − (1 − ak) = ak.

When ak < 0.5, we have

    sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = ak < 0.5.

It follows from Theorem 2.23 that

    M{ξ = 1} = sup_{f(x1,x2,··· ,xn)=1} min_{1≤i≤n} νi(xi) = ak.

Thus M{ξ = 1} is always ak , i.e., the kth largest value of a1 , a2 , · · · , an .


Thus the equation (2.116) is proved. The equation (2.117) may be verified
by the duality of uncertain measure.

Boolean System Calculator


Boolean System Calculator is a function in the Matlab Uncertainty Toolbox
(http://orsc.edu.cn/liu/resources.htm) for computing the uncertain measure
like
M{f (ξ1 , ξ2 , · · · , ξn ) = 1}, M{f (ξ1 , ξ2 , · · · , ξn ) = 0} (2.119)
where ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and f is a
Boolean function. For example, let ξ1 , ξ2 , ξ3 be independent Boolean uncer-
tain variables,

    ξ1 = 1 with uncertain measure 0.8,
         0 with uncertain measure 0.2,

    ξ2 = 1 with uncertain measure 0.7,
         0 with uncertain measure 0.3,

    ξ3 = 1 with uncertain measure 0.6,
         0 with uncertain measure 0.4.
We also assume the Boolean function is
    f(x1, x2, x3) = 1, if x1 + x2 + x3 = 0 or 2,
                    0, if x1 + x2 + x3 = 1 or 3.

The Boolean System Calculator yields M{f (ξ1 , ξ2 , ξ3 ) = 1} = 0.4.
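For readers without Matlab, the same value can be reproduced by brute-force enumeration of {0, 1}^n via Theorem 2.23. The following Python sketch is our own toy reimplementation, not the toolbox function itself:

    from itertools import product

    def boolean_measure(f, a):
        """M{f(xi_1, ..., xi_n) = 1} for independent Boolean uncertain
        variables with M{xi_i = 1} = a[i], by formula (2.101)."""
        nu = lambda i, x: a[i] if x == 1 else 1 - a[i]
        sup1 = max((min(nu(i, x) for i, x in enumerate(xs))
                    for xs in product((0, 1), repeat=len(a))
                    if f(*xs) == 1), default=0.0)
        if sup1 < 0.5:
            return sup1
        sup0 = max((min(nu(i, x) for i, x in enumerate(xs))
                    for xs in product((0, 1), repeat=len(a))
                    if f(*xs) == 0), default=0.0)
        return 1 - sup0

    f = lambda x1, x2, x3: 1 if x1 + x2 + x3 in (0, 2) else 0
    print(boolean_measure(f, [0.8, 0.7, 0.6]))   # 0.4, matching the text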

2.5 Expected Value


Expected value is the average value of uncertain variable in the sense of
uncertain measure, and represents the size of uncertain variable.

Definition 2.15 (Liu [122]) Let ξ be an uncertain variable. Then the ex-
pected value of ξ is defined by
    E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^{0} M{ξ ≤ x} dx    (2.120)

provided that at least one of the two integrals is finite.



Theorem 2.27 (Liu [122]) Let ξ be an uncertain variable with uncertainty


distribution Φ. Then
    E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx.    (2.121)

Proof: It follows from the measure inversion theorem that for almost all
numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
    E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^{0} M{ξ ≤ x} dx
         = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx.

See Figure 2.16. The theorem is proved.

Figure 2.16: E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx. Reprinted from Liu [129]. [plot omitted]

Theorem 2.28 (Liu [129]) Let ξ be an uncertain variable with uncertainty


distribution Φ. Then

    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).    (2.122)

Proof: It follows from the integration by parts and Theorem 2.27 that the
expected value is
    E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^{0} Φ(x) dx
         = ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^{0} x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).

Figure 2.17: E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ⁻¹(α) dα. Reprinted from Liu [129]. [plot omitted]

See Figure 2.17. The theorem is proved.

Remark 2.7: If the uncertainty distribution Φ(x) has a derivative φ(x),


then we immediately have
    E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.    (2.123)

However, it is inappropriate to regard φ(x) as an uncertainty density function


because uncertain measure is not additive, i.e., generally speaking,
    M{a ≤ ξ ≤ b} ≠ ∫_a^b φ(x) dx.    (2.124)

Theorem 2.29 (Liu [129]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ. Then
    E[ξ] = ∫_0^1 Φ⁻¹(α) dα.    (2.125)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.28 that the expected value is
    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ⁻¹(α) dα.

See Figure 2.17. The theorem is proved.
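Formula (2.125) also suggests a simple numerical scheme: average the inverse distribution over a grid of α values. A sketch (our own code, midpoint rule):

    # Approximate E[xi] = integral of Phi^{-1}(alpha) over (0, 1).
    def expected_value(inv, n=100000):
        return sum(inv((k + 0.5) / n) for k in range(n)) / n

    lin_inv = lambda a: (1 - a) * 2 + a * 6    # xi ~ L(2, 6)
    print(expected_value(lin_inv))             # about 4.0 = (2 + 6)/2; cf. Exercise 2.26 below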

Exercise 2.26: Show that the linear uncertain variable ξ ∼ L(a, b) has an
expected value
    E[ξ] = (a + b)/2.    (2.126)

Exercise 2.27: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an expected value
    E[ξ] = (a + 2b + c)/4.    (2.127)

Exercise 2.28: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an expected value e, i.e.,
E[ξ] = e. (2.128)

Exercise 2.29: Show that the lognormal uncertain variable ξ ∼ LOGN (e, σ)
has an expected value
    E[ξ] = σ√3 exp(e) csc(σ√3), if σ < π/√3;
           +∞,                  if σ ≥ π/√3.    (2.129)
This formula was first discovered by Dr. Zhongfeng Qin with the help of
Maple software, and was verified again by Dr. Kai Yao through a rigorous
mathematical derivation.

Exercise 2.30: Let ξ be an uncertain variable with empirical uncertainty


distribution


 0, if x < x1
(αi+1 − αi )(x − xi )


Φ(x) = αi + , if xi ≤ x ≤ xi+1 , 1 ≤ i < n

 xi+1 − xi

1, if x > xn

where x1 < x2 < · · · < xn and 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1. Show that


    E[ξ] = ((α1 + α2)/2) x1 + Σ_{i=2}^{n−1} ((αi+1 − αi−1)/2) xi + (1 − (αn−1 + αn)/2) xn.    (2.130)

Expected Value of Monotone Function of Uncertain Variables


Theorem 2.30 (Liu and Ha [147]) Assume ξ1 , ξ2 , · · · , ξn are independent
uncertain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , re-
spectively. If f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · ,
xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then the un-
certain variable ξ = f (ξ1 , ξ2 , · · · , ξn ) has an expected value
    E[ξ] = ∫_0^1 f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) dα.    (2.131)

Proof: Since the function f (x1 , x2 , · · · , xn ) is strictly increasing with respect


to x1 , x2 , · · · , xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn ,
it follows from Theorem 2.18 that the inverse uncertainty distribution of ξ is
    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

By using Theorem 2.29, we obtain (2.131). The theorem is proved.
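A numerical sketch of formula (2.131) (our own code), evaluating E[ξ1/ξ2] for positive linear variables, where the quotient is increasing in ξ1 and decreasing in ξ2:

    # Midpoint-rule evaluation of (2.131): f increasing in the arguments
    # listed in inc, decreasing in those listed in dec.
    def expected_monotone(f, inc, dec, n=100000):
        total = 0.0
        for k in range(n):
            a = (k + 0.5) / n
            total += f(*(g(a) for g in inc), *(g(1 - a) for g in dec))
        return total / n

    inv1 = lambda a: (1 - a) * 1 + a * 2    # xi1 ~ L(1, 2)
    inv2 = lambda a: (1 - a) * 2 + a * 4    # xi2 ~ L(2, 4)
    print(expected_monotone(lambda x, y: x / y, [inv1], [inv2]))
    # about 0.5397 = 1.5*ln(2) - 0.5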

Exercise 2.31: Let ξ be an uncertain variable with regular uncertainty


distribution Φ, and let f (x) be a strictly monotone (increasing or decreasing)
function. Show that
    E[f(ξ)] = ∫_0^1 f(Φ⁻¹(α)) dα.    (2.132)

Exercise 2.32: Let ξ be an uncertain variable with uncertainty distribution


Φ, and let f (x) be a strictly monotone (increasing or decreasing) function.
Show that

    E[f(ξ)] = ∫_{−∞}^{+∞} f(x) dΦ(x).    (2.133)

Exercise 2.33: Let ξ and η be independent and positive uncertain variables


with regular uncertainty distributions Φ and Ψ, respectively. Show that
    E[ξη] = ∫_0^1 Φ⁻¹(α) Ψ⁻¹(α) dα.    (2.134)

Exercise 2.34: Let ξ and η be independent and positive uncertain variables


with regular uncertainty distributions Φ and Ψ, respectively. Show that
    E[ξ/η] = ∫_0^1 Φ⁻¹(α) / Ψ⁻¹(1 − α) dα.    (2.135)

Exercise 2.35: Assume ξ and η are independent and positive uncertain


variables with regular uncertainty distributions Φ and Ψ, respectively. Show
that

    E[ξ/(ξ + η)] = ∫_0^1 Φ⁻¹(α) / (Φ⁻¹(α) + Ψ⁻¹(1 − α)) dα.    (2.136)

Linearity of Expected Value Operator


Theorem 2.31 (Liu [129]) Let ξ and η be independent uncertain variables
with finite expected values. Then for any real numbers a and b, we have
E[aξ + bη] = aE[ξ] + bE[η]. (2.137)
Proof: Without loss of generality, suppose ξ and η have regular uncertainty
distributions Φ and Ψ, respectively. Otherwise, we may give the uncertainty
distributions a small perturbation such that they become regular.
Step 1: We first prove E[aξ] = aE[ξ]. If a = 0, then the equation holds
trivially. If a > 0, then the inverse uncertainty distribution of aξ is
Υ−1 (α) = aΦ−1 (α).

It follows from Theorem 2.29 that


    E[aξ] = ∫_0^1 aΦ⁻¹(α) dα = a ∫_0^1 Φ⁻¹(α) dα = aE[ξ].

If a < 0, then the inverse uncertainty distribution of aξ is

Υ−1 (α) = aΦ−1 (1 − α).

It follows from Theorem 2.29 that


    E[aξ] = ∫_0^1 aΦ⁻¹(1 − α) dα = a ∫_0^1 Φ⁻¹(α) dα = aE[ξ].

Thus we always have E[aξ] = aE[ξ].


Step 2: We prove E[ξ + η] = E[ξ] + E[η]. The inverse uncertainty
distribution of the sum ξ + η is

Υ−1 (α) = Φ−1 (α) + Ψ−1 (α).

It follows from Theorem 2.29 that


    E[ξ + η] = ∫_0^1 Υ⁻¹(α) dα = ∫_0^1 Φ⁻¹(α) dα + ∫_0^1 Ψ⁻¹(α) dα = E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.

Example 2.12: Generally speaking, the expected value operator is not


necessarily linear if the independence is not assumed. For example, take
(Γ, L, M) to be {γ1 , γ2 , γ3 } with M{γ1 } = 0.7, M{γ2 } = 0.3 and M{γ3 } = 0.2.
It follows from the extension theorem that M{γ1 , γ2 } = 0.8, M{γ1 , γ3 } = 0.7,
M{γ2 , γ3 } = 0.3. Define two uncertain variables as follows,
 
    ξ(γ) = 1, if γ = γ1        η(γ) = 0, if γ = γ1
           0, if γ = γ2               2, if γ = γ2
           2, if γ = γ3,              3, if γ = γ3.
 

Note that ξ and η are not independent, and their sum is



    (ξ + η)(γ) = 1, if γ = γ1
                 2, if γ = γ2
                 5, if γ = γ3.


It is easy to verify that E[ξ] = 0.9, E[η] = 0.8, and E[ξ + η] = 1.9. Thus we
have
E[ξ + η] > E[ξ] + E[η].
If the uncertain variables are defined by
 
 0,
 if γ = γ1  0, if γ = γ1

ξ(γ) = 1, if γ = γ2 η(γ) = 3, if γ = γ2
 
2, if γ = γ3 , 1, if γ = γ3 .
 

Then

    (ξ + η)(γ) = 0, if γ = γ1
                 4, if γ = γ2
                 3, if γ = γ3.

It is easy to verify that E[ξ] = 0.5, E[η] = 0.9, and E[ξ + η] = 1.2. Thus we
have
E[ξ + η] < E[ξ] + E[η].

Comonotonic Functions of Uncertain Variable


Two real-valued functions f and g are said to be comonotonic if for any
numbers x and y, we always have

(f (x) − f (y))(g(x) − g(y)) ≥ 0. (2.138)

It is easy to verify that (i) any function is comonotonic with any positive
constant multiple of the function; (ii) any monotone increasing functions are
comonotonic with each other; and (iii) any monotone decreasing functions
are also comonotonic with each other.

Theorem 2.32 (Yang [240]) Let f and g be comonotonic functions. Then


for any uncertain variable ξ, we have

E[f (ξ) + g(ξ)] = E[f (ξ)] + E[g(ξ)]. (2.139)

Proof: Without loss of generality, suppose f (ξ) and g(ξ) have regular un-
certainty distributions Φ and Ψ, respectively. Otherwise, we may give the
uncertainty distributions a small perturbation such that they become regu-
lar. Since f and g are comonotonic functions, at least one of the following
relations is true,

{f (ξ) ≤ Φ−1 (α)} ⊂ {g(ξ) ≤ Ψ−1 (α)},

{f (ξ) ≤ Φ−1 (α)} ⊃ {g(ξ) ≤ Ψ−1 (α)}.



On the one hand, we have


M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)}
≥ M{(f (ξ) ≤ Φ−1 (α)) ∩ (g(ξ) ≤ Ψ−1 (α))}
= M{f (ξ) ≤ Φ−1 (α)} ∧ M{g(ξ) ≤ Ψ−1 (α)}
= α ∧ α = α.
On the other hand, we have
M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)}
≤ M{(f (ξ) ≤ Φ−1 (α)) ∪ (g(ξ) ≤ Ψ−1 (α))}
= M{f (ξ) ≤ Φ−1 (α)} ∨ M{g(ξ) ≤ Ψ−1 (α)}
= α ∨ α = α.
It follows that
M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)} = α
holds for each α. That is, Φ−1 (α) + Ψ−1 (α) is the inverse uncertainty distri-
bution of f (ξ) + g(ξ). By using Theorem 2.29, we obtain
    E[f(ξ) + g(ξ)] = ∫_0^1 (Φ⁻¹(α) + Ψ⁻¹(α)) dα
                   = ∫_0^1 Φ⁻¹(α) dα + ∫_0^1 Ψ⁻¹(α) dα
                   = E[f(ξ)] + E[g(ξ)].


The theorem is verified.

Exercise 2.36: Let ξ be a positive uncertain variable. Show that ln x and


exp(x) are comonotonic functions on (0, +∞), and
E[ln ξ + exp(ξ)] = E[ln ξ] + E[exp(ξ)]. (2.140)

Exercise 2.37: Let ξ be a positive uncertain variable. Show that x, x2 ,


· · · , xn are comonotonic functions on [0, +∞), and
E[ξ + ξ 2 + · · · + ξ n ] = E[ξ] + E[ξ 2 ] + · · · + E[ξ n ]. (2.141)

Some Inequalities
Theorem 2.33 (Liu [122]) Let ξ be an uncertain variable, and let f be a
nonnegative function. If f is even and increasing on [0, ∞), then for any
given number t > 0, we have
    M{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).    (2.142)

Proof: It is clear that M{|ξ| ≥ f −1 (r)} is a monotone decreasing function


of r on [0, ∞). It follows from the nonnegativity of f (ξ) that
    E[f(ξ)] = ∫_0^{+∞} M{f(ξ) ≥ x} dx = ∫_0^{+∞} M{|ξ| ≥ f⁻¹(x)} dx
            ≥ ∫_0^{f(t)} M{|ξ| ≥ f⁻¹(x)} dx ≥ ∫_0^{f(t)} M{|ξ| ≥ f⁻¹(f(t))} dx
            = ∫_0^{f(t)} M{|ξ| ≥ t} dx = f(t) · M{|ξ| ≥ t}

which proves the inequality.


Theorem 2.34 (Liu [122], Markov Inequality) Let ξ be an uncertain vari-
able. Then for any given numbers t > 0 and p > 0, we have
    M{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p.    (2.143)
Proof: It is a special case of Theorem 2.33 when f (x) = |x|p .

Example 2.13: For any given positive number t, we define an uncertain


variable as follows,
    ξ = 0 with uncertain measure 1/2,
        t with uncertain measure 1/2.

Then E[ξ^p] = t^p/2 and M{ξ ≥ t} = 1/2 = E[ξ^p]/t^p.


Theorem 2.35 (Liu [122], Hölder’s Inequality) Let p and q be positive num-
bers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables
with E[|ξ|p ] < ∞ and E[|η|q ] < ∞. Then we have
    E[|ξη|] ≤ (E[|ξ|^p])^{1/p} · (E[|η|^q])^{1/q}.    (2.144)
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that
Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
f (x, y) − f (x0 , y0 ) ≤ a(x − x0 ) + b(y − y0 ), ∀x ≥ 0, y ≥ 0.
Letting x0 = E[|ξ|p ], y0 = E[|η|q ], x = |ξ|p and y = |η|q , we have
f (|ξ|p , |η|q ) − f (E[|ξ|p ], E[|η|q ]) ≤ a(|ξ|p − E[|ξ|p ]) + b(|η|q − E[|η|q ]).
Taking the expected values on both sides, we obtain
E[f (|ξ|p , |η|q )] ≤ f (E[|ξ|p ], E[|η|q ]).
Hence the inequality (2.144) holds.

Theorem 2.36 (Liu [122], Minkowski Inequality) Let p be a real number


with p ≥ 1, and let ξ and η be independent uncertain variables with E[|ξ|p ] <
∞ and E[|η|p ] < ∞. Then we have
    (E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.    (2.145)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that
for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that

f (x, y) − f (x0 , y0 ) ≤ a(x − x0 ) + b(y − y0 ), ∀x ≥ 0, y ≥ 0.

Letting x0 = E[|ξ|p ], y0 = E[|η|p ], x = |ξ|p and y = |η|p , we have

f (|ξ|p , |η|p ) − f (E[|ξ|p ], E[|η|p ]) ≤ a(|ξ|p − E[|ξ|p ]) + b(|η|p − E[|η|p ]).

Taking the expected values on both sides, we obtain

E[f (|ξ|p , |η|p )] ≤ f (E[|ξ|p ], E[|η|p ]).

Hence the inequality (2.145) holds.

Theorem 2.37 (Liu [122], Jensen’s Inequality) Let ξ be an uncertain vari-


able, and let f be a convex function. If E[ξ] and E[f (ξ)] are finite, then

f (E[ξ]) ≤ E[f (ξ)]. (2.146)

Especially, when f (x) = |x|p and p ≥ 1, we have |E[ξ]|p ≤ E[|ξ|p ].

Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

f (ξ) − f (E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f (ξ)] − f (E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.

Exercise 2.38: (Zhang [268]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain


variables with finite expected values, and let f be a convex function. Show
that
f (E[ξ1 ], E[ξ2 ], · · · , E[ξn ]) ≤ E[f (ξ1 , ξ2 , · · · , ξn )]. (2.147)

2.6 Variance
The variance of uncertain variable provides a degree of the spread of the
distribution around its expected value. A small value of variance indicates
that the uncertain variable is tightly concentrated around its expected value;
and a large value of variance indicates that the uncertain variable has a wide
spread around its expected value.
Definition 2.16 (Liu [122]) Let ξ be an uncertain variable with finite ex-
pected value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (2.148)
This definition tells us that the variance is just the expected value of
(ξ − e)2 . Since (ξ − e)2 is a nonnegative uncertain variable, we also have
    V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x} dx.    (2.149)

Theorem 2.38 If ξ is an uncertain variable with finite expected value, a and


b are real numbers, then
V [aξ + b] = a2 V [ξ]. (2.150)
Proof: Let e be the expected value of ξ. Then aξ + b has an expected value
ae + b. It follows from the definition of variance that
    V[aξ + b] = E[(aξ + b − (ae + b))²] = a² E[(ξ − e)²] = a² V[ξ].

The theorem is thus verified.


Theorem 2.39 Let ξ be an uncertain variable with expected value e. Then
V [ξ] = 0 if and only if M{ξ = e} = 1. That is, the uncertain variable ξ is
essentially the constant e.
Proof: We first assume V [ξ] = 0. It follows from the equation (2.149) that
    ∫_0^{+∞} M{(ξ − e)² ≥ x} dx = 0

which implies M{(ξ − e)² ≥ x} = 0 for any x > 0. Hence we have

    M{(ξ − e)² = 0} = 1.
That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we
immediately have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ x} = 0 for any x > 0. Thus

    V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x} dx = 0.
The theorem is proved.

Theorem 2.40 (Yao [254]) Let ξ and η be independent uncertain variables


whose variances exist. Then
    √(V[ξ + η]) ≤ √(V[ξ]) + √(V[η]).    (2.151)

Proof: It is a special case of Theorem 2.36 when p = 2 and the uncertain


variables ξ and η are replaced with ξ − E[ξ] and η − E[η], respectively.

Theorem 2.41 (Liu [122], Chebyshev Inequality) Let ξ be an uncertain vari-


able whose variance exists. Then for any given number t > 0, we have
    M{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².    (2.152)
Proof: It is a special case of Theorem 2.33 when the uncertain variable ξ is
replaced with ξ − E[ξ], and f (x) = x2 .

Example 2.14: For any given positive number t, we define an uncertain


variable as follows,
(
−t with uncertain measure 1/2
ξ=
t with uncertain measure 1/2.

Then V [ξ] = t2 and M{|ξ − E[ξ]| ≥ t} = 1 = V [ξ]/t2 .

How to Obtain Variance from Uncertainty Distribution?


Let ξ be an uncertain variable with expected value e. If we only know its
uncertainty distribution Φ, then the variance
    V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x} dx
         = ∫_0^{+∞} M{(ξ ≥ e + √x) ∪ (ξ ≤ e − √x)} dx
         ≤ ∫_0^{+∞} (M{ξ ≥ e + √x} + M{ξ ≤ e − √x}) dx
         = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.

Thus we have the following stipulation.

Stipulation 2.3 Let ξ be an uncertain variable with uncertainty distribution


Φ and finite expected value e. Then
    V[ξ] = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.    (2.153)

Theorem 2.42 Let ξ be an uncertain variable with uncertainty distribution


Φ and finite expected value e. Then
    V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x).    (2.154)

Proof: This theorem is based on Stipulation 2.3 that says the variance of ξ
is

    V[ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.

Substituting e + √y with x and y with (x − e)², the change of variables and integration by parts produce

    ∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).
Similarly, substituting e − √y with x and y with (x − e)², we obtain

    ∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)² = ∫_{−∞}^{e} (x − e)² dΦ(x).

It follows that the variance is

    V[ξ] = ∫_e^{+∞} (x − e)² dΦ(x) + ∫_{−∞}^{e} (x − e)² dΦ(x) = ∫_{−∞}^{+∞} (x − e)² dΦ(x).

The theorem is verified.


Theorem 2.43 (Yao [254]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ and finite expected value e. Then
    V[ξ] = ∫_0^1 (Φ⁻¹(α) − e)² dα.    (2.155)

Proof: Substituting Φ(x) with α and x with Φ⁻¹(α), it follows from the change of variables of integral and Theorem 2.42 that the variance is

    V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫_0^1 (Φ⁻¹(α) − e)² dα.

The theorem is verified.
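A numerical sketch of (2.155) (our own code), checked against the linear-variance formula of Exercise 2.39 below:

    # Midpoint-rule value of V[xi] = integral of (Phi^{-1}(alpha) - e)^2.
    def variance(inv, e, n=100000):
        return sum((inv((k + 0.5) / n) - e) ** 2 for k in range(n)) / n

    a, b = 2.0, 6.0
    inv = lambda al: (1 - al) * a + al * b      # xi ~ L(2, 6)
    print(variance(inv, (a + b) / 2))           # about 1.333 = (b - a)**2 / 12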

Exercise 2.39: Show that the linear uncertain variable ξ ∼ L(a, b) has a
variance
    V[ξ] = (b − a)² / 12.    (2.156)

Exercise 2.40: Show that the normal uncertain variable ξ ∼ N (e, σ) has a
variance
    V[ξ] = σ².    (2.157)

Theorem 2.44 (Yao [254]) Assume ξ1 , ξ2 , · · · , ξn are independent uncertain


variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
If f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · , xm and
strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then the uncertain
variable ξ = f (ξ1 , ξ2 , · · · , ξn ) has a variance
    V[ξ] = ∫_0^1 (f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) − e)² dα

where e is the expected value of ξ.

Proof: Since the function f (x1 , x2 , · · · , xn ) is strictly increasing with respect


to x1 , x2 , · · · , xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn ,
the inverse uncertainty distribution of ξ is

    Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

It follows from Theorem 2.43 that the result holds.

Exercise 2.41: Let ξ and η be independent uncertain variables with regular


uncertainty distributions Φ and Ψ, respectively. Assume there exist two real
numbers a and b such that

Φ−1 (α) = aΨ−1 (α) + b (2.158)

for all α ∈ (0, 1). Show that


    √(V[ξ + η]) = √(V[ξ]) + √(V[η])    (2.159)

in the sense of Stipulation 2.3.

Remark 2.8: If ξ and η are independent linear uncertain variables, then the
condition (2.158) is met. If they are independent normal uncertain variables,
then the condition (2.158) is also met.

2.7 Moment
Definition 2.17 (Liu [122]) Let ξ be an uncertain variable and let k be a
positive integer. Then E[ξ k ] is called the k-th moment of ξ.

Theorem 2.45 Let ξ be an uncertain variable with uncertainty distribution
Φ, and let k be an odd number. Then the k-th moment of ξ is
$$
E[\xi^k] = \int_0^{+\infty} (1-\Phi(\sqrt[k]{x}))\,dx - \int_{-\infty}^{0} \Phi(\sqrt[k]{x})\,dx. \tag{2.160}
$$

Proof: Since k is an odd number, it follows from the definition of expected
value operator that
$$
\begin{aligned}
E[\xi^k] &= \int_0^{+\infty} M\{\xi^k \ge x\}\,dx - \int_{-\infty}^{0} M\{\xi^k \le x\}\,dx \\
&= \int_0^{+\infty} M\{\xi \ge \sqrt[k]{x}\}\,dx - \int_{-\infty}^{0} M\{\xi \le \sqrt[k]{x}\}\,dx \\
&= \int_0^{+\infty} (1-\Phi(\sqrt[k]{x}))\,dx - \int_{-\infty}^{0} \Phi(\sqrt[k]{x})\,dx.
\end{aligned}
$$
The theorem is proved.


However, when k is an even number, the k-th moment of ξ cannot be
uniquely determined by the uncertainty distribution Φ. In this case, we have
Z +∞
E[ξ k ] = M{ξ k ≥ x}dx
0
Z +∞ √ √
= M{(ξ ≥ k
x) ∪ (ξ ≤ − k x)}dx
0
Z +∞ √ √
≤ (M{ξ ≥ k
x} + M{ξ ≤ − k x})dx
0
Z +∞ √ √
= (1 − Φ( k x) + Φ(− k x))dx.
0

Thus for the even number k, we have the following stipulation.


Stipulation 2.4 Let ξ be an uncertain variable with uncertainty distribution
Φ, and let k be an even number. Then the k-th moment of ξ is
$$
E[\xi^k] = \int_0^{+\infty} \left(1-\Phi(\sqrt[k]{x})+\Phi(-\sqrt[k]{x})\right)dx. \tag{2.161}
$$

Theorem 2.46 Let ξ be an uncertain variable with uncertainty distribution
Φ, and let k be a positive integer. Then the k-th moment of ξ is
$$
E[\xi^k] = \int_{-\infty}^{+\infty} x^k\,d\Phi(x). \tag{2.162}
$$

Proof: When k is an odd number, Theorem 2.45 says that the k-th moment
is
$$
E[\xi^k] = \int_0^{+\infty} (1-\Phi(\sqrt[k]{y}))\,dy - \int_{-\infty}^{0} \Phi(\sqrt[k]{y})\,dy.
$$
Substituting √[k]{y} with x and y with x^k, the change of variables and
integration by parts produce
$$
\int_0^{+\infty} (1-\Phi(\sqrt[k]{y}))\,dy = \int_0^{+\infty} (1-\Phi(x))\,dx^k = \int_0^{+\infty} x^k\,d\Phi(x)
$$
and
$$
\int_{-\infty}^{0} \Phi(\sqrt[k]{y})\,dy = \int_{-\infty}^{0} \Phi(x)\,dx^k = -\int_{-\infty}^{0} x^k\,d\Phi(x).
$$
Thus we have
$$
E[\xi^k] = \int_0^{+\infty} x^k\,d\Phi(x) + \int_{-\infty}^{0} x^k\,d\Phi(x) = \int_{-\infty}^{+\infty} x^k\,d\Phi(x).
$$
When k is an even number, the theorem is based on Stipulation 2.4 that says
the k-th moment is
$$
E[\xi^k] = \int_0^{+\infty} \left(1-\Phi(\sqrt[k]{y})+\Phi(-\sqrt[k]{y})\right)dy.
$$
Substituting √[k]{y} with x and y with x^k, the change of variables and
integration by parts produce
$$
\int_0^{+\infty} (1-\Phi(\sqrt[k]{y}))\,dy = \int_0^{+\infty} (1-\Phi(x))\,dx^k = \int_0^{+\infty} x^k\,d\Phi(x).
$$
Similarly, substituting −√[k]{y} with x and y with x^k, we obtain
$$
\int_0^{+\infty} \Phi(-\sqrt[k]{y})\,dy = \int_{-\infty}^{0} \Phi(x)\,dx^k = \int_{-\infty}^{0} x^k\,d\Phi(x).
$$
It follows that the k-th moment is
$$
E[\xi^k] = \int_0^{+\infty} x^k\,d\Phi(x) + \int_{-\infty}^{0} x^k\,d\Phi(x) = \int_{-\infty}^{+\infty} x^k\,d\Phi(x).
$$
The theorem is thus verified for any positive integer k.


Theorem 2.47 (Sheng and Kar [213]) Let ξ be an uncertain variable with
regular uncertainty distribution Φ, and let k be a positive integer. Then the
k-th moment of ξ is Z 1
E[ξ k ] = (Φ−1 (α))k dα. (2.163)
0

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.46 that the k-th moment is
Z +∞ Z 1
k
E[ξ ] = k
x dΦ(x) = (Φ−1 (α))k dα.
−∞ 0

The theorem is verified.

Exercise 2.42: Show that the second moment of the linear uncertain variable
ξ ∼ L(a, b) is
$$
E[\xi^2] = \frac{a^2+ab+b^2}{3}. \tag{2.164}
$$

Exercise 2.43: Show that the second moment of the normal uncertain variable
ξ ∼ N(e, σ) is
$$
E[\xi^2] = e^2 + \sigma^2. \tag{2.165}
$$
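
As with the variance, formula (2.163) turns moment computation into a
one-dimensional integral over the inverse uncertainty distribution. The
following sketch (ours, assuming Python with SciPy) checks Exercise 2.42
numerically.

```python
from scipy import integrate

def kth_moment(inv_phi, k):
    # Theorem 2.47: E[xi^k] = int_0^1 (Phi^{-1}(alpha))^k dalpha
    value, _ = integrate.quad(lambda a: inv_phi(a) ** k, 0, 1)
    return value

# Linear uncertain variable L(a, b): Phi^{-1}(alpha) = a + alpha*(b - a)
a, b = 1.0, 4.0
m2 = kth_moment(lambda al: a + al * (b - a), 2)
print(m2, (a * a + a * b + b * b) / 3)   # both 7.0
```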

Theorem 2.48 (Sheng and Kar [213]) Assume ξ1, ξ2, · · · , ξn are indepen-
dent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn,
respectively, and k is a positive integer. If f(x1, x2, · · · , xn) is strictly in-
creasing with respect to x1, x2, · · · , xm and strictly decreasing with respect to
xm+1, xm+2, · · · , xn, then the k-th moment of ξ = f(ξ1, ξ2, · · · , ξn) is
$$
E[\xi^k] = \int_0^1 f^k(\Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha))\,d\alpha.
$$

Proof: Since the function f(x1, x2, · · · , xn) is strictly increasing with respect
to x1, x2, · · · , xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn,
the inverse uncertainty distribution of ξ is
$$
\Psi^{-1}(\alpha) = f(\Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha)).
$$
It follows from Theorem 2.47 that the result holds.

2.8 Entropy
This section provides a definition of entropy to characterize the uncertainty
of uncertain variables.

Definition 2.18 (Liu [125]) Suppose that ξ is an uncertain variable with
uncertainty distribution Φ. Then its entropy is defined by
$$
H[\xi] = \int_{-\infty}^{+\infty} S(\Phi(x))\,dx \tag{2.166}
$$
where S(t) = −t ln t − (1 − t) ln(1 − t).

Example 2.15: Let ξ be an uncertain variable with uncertainty distribution
$$
\Phi(x) = \begin{cases} 0, & \text{if } x < a \\ 1, & \text{if } x \ge a. \end{cases} \tag{2.167}
$$
Essentially, ξ is a constant a. It follows from the definition of entropy that
$$
H[\xi] = -\int_{-\infty}^{a} (0\ln 0 + 1\ln 1)\,dx - \int_a^{+\infty} (1\ln 1 + 0\ln 0)\,dx = 0.
$$
This means a constant has no uncertainty.



Figure 2.18: Function S(t) = −t ln t − (1 − t) ln(1 − t). It is easy to verify
that S(t) is a symmetric function about t = 0.5, strictly increasing on the
interval [0, 0.5], strictly decreasing on the interval [0.5, 1], and reaches its
unique maximum ln 2 at t = 0.5. Reprinted from Liu [129].

Example 2.16: Let ξ be a linear uncertain variable L(a, b). Then its entropy
is
$$
H[\xi] = -\int_a^b \left(\frac{x-a}{b-a}\ln\frac{x-a}{b-a} + \frac{b-x}{b-a}\ln\frac{b-x}{b-a}\right)dx = \frac{b-a}{2}. \tag{2.168}
$$

Exercise 2.44: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an entropy
$$
H[\xi] = \frac{c-a}{2}. \tag{2.169}
$$

Exercise 2.45: Show that the normal uncertain variable ξ ∼ N(e, σ) has
an entropy
$$
H[\xi] = \frac{\pi\sigma}{\sqrt{3}}. \tag{2.170}
$$
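
Definition 2.18 is directly computable once Φ is known. A quick numerical
sanity check (ours, assuming Python with NumPy and SciPy) of Example 2.16
and Exercise 2.45:

```python
import numpy as np
from scipy import integrate

def S(t):
    # S(t) = -t ln t - (1-t) ln(1-t), with S(0) = S(1) = 0 by convention
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def entropy(phi, lo, hi):
    # Definition 2.18: H[xi] = int S(Phi(x)) dx over a range where the
    # integrand is not negligible
    value, _ = integrate.quad(lambda x: S(phi(x)), lo, hi)
    return value

# Linear L(a, b): Phi(x) = (x - a)/(b - a) on [a, b]; entropy (b - a)/2
a, b = 1.0, 4.0
print(entropy(lambda x: (x - a) / (b - a), a, b), (b - a) / 2)

# Normal N(e, sigma): Phi(x) = (1 + exp(pi*(e - x)/(sqrt(3)*sigma)))^(-1)
e, sigma = 0.0, 1.0
phi = lambda x: 1.0 / (1.0 + np.exp(np.pi * (e - x) / (np.sqrt(3) * sigma)))
print(entropy(phi, -40, 40), np.pi * sigma / np.sqrt(3))
```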
Theorem 2.49 Let ξ be an uncertain variable. Then H[ξ] ≥ 0 and equality
holds if ξ is essentially a constant.

Proof: The nonnegativity is clear. In addition, when an uncertain variable


tends to a constant, its entropy tends to the minimum 0.

Theorem 2.50 Let ξ be an uncertain variable taking values on the interval


[a, b]. Then
H[ξ] ≤ (b − a) ln 2 (2.171)
and equality holds if ξ has an uncertainty distribution Φ(x) = 0.5 on [a, b].

Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.

Theorem 2.51 Let ξ be an uncertain variable, and let c be a real number.


Then
H[ξ + c] = H[ξ]. (2.172)
That is, the entropy is invariant under arbitrary translations.
Proof: Write the uncertainty distribution of ξ by Φ. Then the uncertain
variable ξ + c has an uncertainty distribution Φ(x − c). It follows from the
definition of entropy that
$$
H[\xi+c] = \int_{-\infty}^{+\infty} S(\Phi(x-c))\,dx = \int_{-\infty}^{+\infty} S(\Phi(x))\,dx = H[\xi].
$$
The theorem is proved.


Theorem 2.52 (Dai and Chen [27]) Let ξ be an uncertain variable with
regular uncertainty distribution Φ. Then
$$
H[\xi] = \int_0^1 \Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha. \tag{2.173}
$$

Proof: It is clear that S(α) is a derivable function with S′(α) = −ln(α/(1−α)).
Since
$$
S(\Phi(x)) = \int_0^{\Phi(x)} S'(\alpha)\,d\alpha = -\int_{\Phi(x)}^1 S'(\alpha)\,d\alpha,
$$
we have
$$
H[\xi] = \int_{-\infty}^{+\infty} S(\Phi(x))\,dx = \int_{-\infty}^{0}\int_0^{\Phi(x)} S'(\alpha)\,d\alpha\,dx - \int_0^{+\infty}\int_{\Phi(x)}^1 S'(\alpha)\,d\alpha\,dx.
$$
It follows from the Fubini theorem that
$$
\begin{aligned}
H[\xi] &= \int_0^{\Phi(0)}\int_{\Phi^{-1}(\alpha)}^{0} S'(\alpha)\,dx\,d\alpha - \int_{\Phi(0)}^1\int_0^{\Phi^{-1}(\alpha)} S'(\alpha)\,dx\,d\alpha \\
&= -\int_0^{\Phi(0)} \Phi^{-1}(\alpha)S'(\alpha)\,d\alpha - \int_{\Phi(0)}^1 \Phi^{-1}(\alpha)S'(\alpha)\,d\alpha \\
&= -\int_0^1 \Phi^{-1}(\alpha)S'(\alpha)\,d\alpha = \int_0^1 \Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha.
\end{aligned}
$$
The theorem is verified.
Theorem 2.53 (Dai and Chen [27]) Let ξ1, ξ2, · · · , ξn be independent uncer-
tain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respec-
tively. If f(x1, x2, · · · , xn) is strictly increasing with respect to x1, x2, · · · , xm
and strictly decreasing with respect to xm+1, xm+2, · · · , xn, then the uncertain
variable ξ = f(ξ1, ξ2, · · · , ξn) has an entropy
$$
H[\xi] = \int_0^1 f(\Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha))\ln\frac{\alpha}{1-\alpha}\,d\alpha.
$$

Proof: Since f(x1, x2, · · · , xn) is strictly increasing with respect to x1, x2, · · · ,
xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn, it follows from
Theorem 2.18 that the inverse uncertainty distribution of ξ is
$$
\Psi^{-1}(\alpha) = f(\Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha)).
$$
By using Theorem 2.52, we get the entropy formula.

Exercise 2.46: Let ξ and η be independent and positive uncertain variables
with regular uncertainty distributions Φ and Ψ, respectively. Show that
$$
H[\xi\eta] = \int_0^1 \Phi^{-1}(\alpha)\Psi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha.
$$

Exercise 2.47: Let ξ and η be independent and positive uncertain variables
with regular uncertainty distributions Φ and Ψ, respectively. Show that
$$
H\!\left[\frac{\xi}{\eta}\right] = \int_0^1 \frac{\Phi^{-1}(\alpha)}{\Psi^{-1}(1-\alpha)}\ln\frac{\alpha}{1-\alpha}\,d\alpha.
$$

Exercise 2.48: Let ξ and η be independent and positive uncertain variables
with regular uncertainty distributions Φ and Ψ, respectively. Show that
$$
H\!\left[\frac{\xi}{\xi+\eta}\right] = \int_0^1 \frac{\Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha)+\Psi^{-1}(1-\alpha)}\ln\frac{\alpha}{1-\alpha}\,d\alpha.
$$

Theorem 2.54 (Dai and Chen [27]) Let ξ and η be independent uncertain
variables. Then for any real numbers a and b, we have

H[aξ + bη] = |a|H[ξ] + |b|H[η]. (2.174)

Proof: Without loss of generality, suppose ξ and η have regular uncertainty
distributions Φ and Ψ, respectively. Otherwise, we may give the uncertainty
distributions a small perturbation such that they become regular.
Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the inverse uncertainty
distribution of aξ is
$$
\Upsilon^{-1}(\alpha) = a\Phi^{-1}(\alpha).
$$
It follows from Theorem 2.52 that
$$
H[a\xi] = \int_0^1 a\Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha = a\int_0^1 \Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha = |a|H[\xi].
$$
If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then the
inverse uncertainty distribution of aξ is
$$
\Upsilon^{-1}(\alpha) = a\Phi^{-1}(1-\alpha).
$$
It follows from Theorem 2.52 that
$$
H[a\xi] = \int_0^1 a\Phi^{-1}(1-\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha = (-a)\int_0^1 \Phi^{-1}(\alpha)\ln\frac{\alpha}{1-\alpha}\,d\alpha = |a|H[\xi].
$$
Thus we always have H[aξ] = |a|H[ξ].
Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the inverse uncer-
tainty distribution of ξ + η is
$$
\Upsilon^{-1}(\alpha) = \Phi^{-1}(\alpha) + \Psi^{-1}(\alpha).
$$
It follows from Theorem 2.52 that
$$
H[\xi+\eta] = \int_0^1 \left(\Phi^{-1}(\alpha)+\Psi^{-1}(\alpha)\right)\ln\frac{\alpha}{1-\alpha}\,d\alpha = H[\xi] + H[\eta].
$$
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
$$
H[a\xi+b\eta] = H[a\xi] + H[b\eta] = |a|H[\xi] + |b|H[\eta].
$$
The theorem is proved.

Maximum Entropy Principle


Given some constraints, for example, expected value and variance, there are
usually multiple compatible uncertainty distributions. Which uncertainty
distribution shall we take? The maximum entropy principle attempts to
select the uncertainty distribution that has maximum entropy and satisfies
the prescribed constraints.

Theorem 2.55 (Chen and Dai [15]) Let ξ be an uncertain variable whose
uncertainty distribution is arbitrary but whose expected value is e and
variance is σ^2. Then
$$
H[\xi] \le \frac{\pi\sigma}{\sqrt{3}} \tag{2.175}
$$
and the equality holds if ξ is a normal uncertain variable N(e, σ).

Proof: Let Φ(x) be the uncertainty distribution of ξ and write Ψ(x) =
Φ(2e − x) for x ≥ e. It follows from Stipulation 2.3 and the change of
variable of integral that the variance is
$$
V[\xi] = 2\int_e^{+\infty} (x-e)(1-\Phi(x))\,dx + 2\int_e^{+\infty} (x-e)\Psi(x)\,dx = \sigma^2.
$$
Thus there exists a real number κ such that
$$
2\int_e^{+\infty} (x-e)(1-\Phi(x))\,dx = \kappa\sigma^2,
$$
$$
2\int_e^{+\infty} (x-e)\Psi(x)\,dx = (1-\kappa)\sigma^2.
$$
The maximum entropy distribution Φ should maximize the entropy
$$
H[\xi] = \int_{-\infty}^{+\infty} S(\Phi(x))\,dx = \int_e^{+\infty} S(\Phi(x))\,dx + \int_e^{+\infty} S(\Psi(x))\,dx
$$
subject to the above two constraints. The Lagrangian is
$$
\begin{aligned}
L &= \int_e^{+\infty} S(\Phi(x))\,dx + \int_e^{+\infty} S(\Psi(x))\,dx \\
&\quad - \alpha\left(2\int_e^{+\infty} (x-e)(1-\Phi(x))\,dx - \kappa\sigma^2\right) \\
&\quad - \beta\left(2\int_e^{+\infty} (x-e)\Psi(x)\,dx - (1-\kappa)\sigma^2\right).
\end{aligned}
$$
The maximum entropy distribution meets the Euler-Lagrange equations
$$
\ln\Phi(x) - \ln(1-\Phi(x)) = 2\alpha(x-e),
$$
$$
\ln\Psi(x) - \ln(1-\Psi(x)) = 2\beta(e-x).
$$
Thus Φ and Ψ have the forms
$$
\Phi(x) = (1+\exp(2\alpha(e-x)))^{-1}, \qquad \Psi(x) = (1+\exp(2\beta(x-e)))^{-1}.
$$
Substituting them into the variance constraints, we get
$$
\Phi(x) = \left(1+\exp\left(\frac{\pi(e-x)}{\sqrt{6\kappa}\,\sigma}\right)\right)^{-1},
$$
$$
\Psi(x) = \left(1+\exp\left(\frac{\pi(x-e)}{\sqrt{6(1-\kappa)}\,\sigma}\right)\right)^{-1}.
$$
Then the entropy is
$$
H[\xi] = \frac{\pi\sigma\sqrt{\kappa}}{\sqrt{6}} + \frac{\pi\sigma\sqrt{1-\kappa}}{\sqrt{6}}
$$
which achieves the maximum when κ = 1/2. Thus the maximum entropy
distribution is just the normal uncertainty distribution N(e, σ).
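
One way to see the content of Theorem 2.55 concretely is to compare, at a
fixed variance, the entropy of the normal uncertainty distribution with that
of another distribution satisfying the same constraints. A small check (ours,
plain Python) using the closed forms above: a linear variable L(a, b) with
variance (b − a)^2/12 = σ^2 has entropy (b − a)/2 = √3 σ, which indeed
falls below the bound πσ/√3.

```python
import math

sigma = 1.0

# Normal N(e, sigma): entropy pi*sigma/sqrt(3)  (Exercise 2.45)
h_normal = math.pi * sigma / math.sqrt(3)

# Linear L(a, b) with the same variance: (b-a)^2/12 = sigma^2,
# hence b - a = 2*sqrt(3)*sigma and entropy (b-a)/2 = sqrt(3)*sigma
h_linear = math.sqrt(3) * sigma

print(h_normal)   # about 1.8138
print(h_linear)   # about 1.7321 < 1.8138, as Theorem 2.55 requires
```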

2.9 Distance
Definition 2.19 (Liu [122]) The distance between uncertain variables ξ and
η is defined as
$$
d(\xi,\eta) = E[|\xi-\eta|]. \tag{2.176}
$$
That is, the distance between ξ and η is just the expected value of |ξ − η|.
Since |ξ − η| is a nonnegative uncertain variable, we always have
$$
d(\xi,\eta) = \int_0^{+\infty} M\{|\xi-\eta| \ge x\}\,dx. \tag{2.177}
$$

Theorem 2.56 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the dis-
tance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the subadditivity axiom that
$$
\begin{aligned}
d(\xi,\eta) &= \int_0^{+\infty} M\{|\xi-\eta| \ge x\}\,dx \\
&\le \int_0^{+\infty} M\{|\xi-\tau| + |\tau-\eta| \ge x\}\,dx \\
&\le \int_0^{+\infty} M\{(|\xi-\tau| \ge x/2) \cup (|\tau-\eta| \ge x/2)\}\,dx \\
&\le \int_0^{+\infty} \left(M\{|\xi-\tau| \ge x/2\} + M\{|\tau-\eta| \ge x/2\}\right)dx \\
&= 2E[|\xi-\tau|] + 2E[|\tau-\eta|] = 2d(\xi,\tau) + 2d(\tau,\eta).
\end{aligned}
$$

Example 2.17: Let Γ = {γ1, γ2, γ3}. Define M{∅} = 0, M{Γ} = 1 and
M{Λ} = 1/2 for any subset Λ (excluding ∅ and Γ). We set uncertain variables
ξ, η and τ as follows,
$$
\xi(\gamma) = \begin{cases} 1, & \text{if } \gamma=\gamma_1 \\ 1, & \text{if } \gamma=\gamma_2 \\ 0, & \text{if } \gamma=\gamma_3, \end{cases}
\qquad
\eta(\gamma) = \begin{cases} 0, & \text{if } \gamma=\gamma_1 \\ -1, & \text{if } \gamma=\gamma_2 \\ -1, & \text{if } \gamma=\gamma_3, \end{cases}
\qquad
\tau(\gamma) \equiv 0.
$$
It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus
$$
d(\xi,\eta) = \frac{3}{2}\left(d(\xi,\tau) + d(\tau,\eta)\right).
$$
A conjecture is d(ξ, η) ≤ 1.5(d(ξ, τ) + d(τ, η)) for arbitrary uncertain variables
ξ, η and τ. This is an open problem.

How to Obtain Distance from Uncertainty Distributions?


Let ξ and η be independent uncertain variables. If ξ − η has an uncertainty
distribution Υ, then the distance is
$$
\begin{aligned}
d(\xi,\eta) &= \int_0^{+\infty} M\{|\xi-\eta| \ge x\}\,dx \\
&= \int_0^{+\infty} M\{(\xi-\eta \ge x) \cup (\xi-\eta \le -x)\}\,dx \\
&\le \int_0^{+\infty} \left(M\{\xi-\eta \ge x\} + M\{\xi-\eta \le -x\}\right)dx \\
&= \int_0^{+\infty} \left(1-\Upsilon(x)+\Upsilon(-x)\right)dx.
\end{aligned}
$$

Thus we have the following stipulation.

Stipulation 2.5 Let ξ and η be independent uncertain variables, and let Υ
be the uncertainty distribution of ξ − η. Then the distance between ξ and η is
$$
d(\xi,\eta) = \int_0^{+\infty} \left(1-\Upsilon(x)+\Upsilon(-x)\right)dx. \tag{2.178}
$$

Theorem 2.57 Let ξ and η be independent uncertain variables with regular
uncertainty distributions Φ and Ψ, respectively. Then the distance between ξ
and η is
$$
d(\xi,\eta) = \int_0^1 |\Phi^{-1}(\alpha) - \Psi^{-1}(1-\alpha)|\,d\alpha. \tag{2.179}
$$

Proof: Assume ξ − η has an uncertainty distribution Υ. Substituting Υ(x)
with α and x with Υ^{-1}(α), the change of variables and integration by parts
produce
$$
\int_0^{+\infty} (1-\Upsilon(x))\,dx = \int_{\Upsilon(0)}^1 (1-\alpha)\,d\Upsilon^{-1}(\alpha) = \int_{\Upsilon(0)}^1 \Upsilon^{-1}(\alpha)\,d\alpha.
$$
Similarly, substituting Υ(−x) with α and x with −Υ^{-1}(α), we obtain
$$
\int_0^{+\infty} \Upsilon(-x)\,dx = \int_{\Upsilon(0)}^{0} \alpha\,d(-\Upsilon^{-1}(\alpha)) = -\int_0^{\Upsilon(0)} \Upsilon^{-1}(\alpha)\,d\alpha.
$$
Based on the stipulation (2.178), we have
$$
d(\xi,\eta) = \int_{\Upsilon(0)}^1 \Upsilon^{-1}(\alpha)\,d\alpha - \int_0^{\Upsilon(0)} \Upsilon^{-1}(\alpha)\,d\alpha = \int_0^1 |\Upsilon^{-1}(\alpha)|\,d\alpha.
$$
Since Υ^{-1}(α) = Φ^{-1}(α) − Ψ^{-1}(1 − α), we immediately obtain the result.
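
Formula (2.179) reduces the distance to a one-dimensional integral over the
inverse uncertainty distributions. A brief numerical sketch (ours, assuming
Python with SciPy) for two linear uncertain variables:

```python
from scipy import integrate

def distance(inv_phi, inv_psi):
    # Theorem 2.57: d(xi, eta) = int_0^1 |Phi^{-1}(a) - Psi^{-1}(1-a)| da
    value, _ = integrate.quad(
        lambda a: abs(inv_phi(a) - inv_psi(1 - a)), 0, 1)
    return value

# xi ~ L(0, 2) and eta ~ L(1, 3): Phi^{-1}(a) = 2a, Psi^{-1}(a) = 1 + 2a
d = distance(lambda a: 2 * a, lambda a: 1 + 2 * a)
print(d)   # integrand |2a - (3 - 2a)| = |4a - 3|; integral = 5/4
```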

2.10 Conditional Uncertainty Distribution


Definition 2.20 (Liu [122]) The conditional uncertainty distribution Φ of
an uncertain variable ξ given B is defined by
$$
\Phi(x\,|\,B) = M\{\xi \le x\,|\,B\} \tag{2.180}
$$
provided that M{B} > 0.
Theorem 2.58 (Liu [129]) Let ξ be an uncertain variable with uncertainty
distribution Φ(x), and let t be a real number with Φ(t) < 1. Then the condi-
tional uncertainty distribution of ξ given ξ > t is
$$
\Phi(x\,|\,(t,+\infty)) = \begin{cases}
0, & \text{if } \Phi(x) \le \Phi(t) \\[1ex]
\dfrac{\Phi(x)}{1-\Phi(t)} \wedge 0.5, & \text{if } \Phi(t) < \Phi(x) \le (1+\Phi(t))/2 \\[1ex]
\dfrac{\Phi(x)-\Phi(t)}{1-\Phi(t)}, & \text{if } (1+\Phi(t))/2 \le \Phi(x).
\end{cases}
$$
Proof: It follows from Φ(x|(t, +∞)) = M{ξ ≤ x | ξ > t} and the definition of
conditional uncertainty that
$$
\Phi(x\,|\,(t,+\infty)) = \begin{cases}
\dfrac{M\{(\xi\le x)\cap(\xi>t)\}}{M\{\xi>t\}}, & \text{if } \dfrac{M\{(\xi\le x)\cap(\xi>t)\}}{M\{\xi>t\}} < 0.5 \\[2ex]
1-\dfrac{M\{(\xi>x)\cap(\xi>t)\}}{M\{\xi>t\}}, & \text{if } \dfrac{M\{(\xi>x)\cap(\xi>t)\}}{M\{\xi>t\}} < 0.5 \\[2ex]
0.5, & \text{otherwise.}
\end{cases}
$$
When Φ(x) ≤ Φ(t), we have x ≤ t, and
$$
\frac{M\{(\xi\le x)\cap(\xi>t)\}}{M\{\xi>t\}} = \frac{M\{\emptyset\}}{1-\Phi(t)} = 0 < 0.5.
$$
Thus
$$
\Phi(x\,|\,(t,+\infty)) = \frac{M\{(\xi\le x)\cap(\xi>t)\}}{M\{\xi>t\}} = 0.
$$
When Φ(t) < Φ(x) ≤ (1 + Φ(t))/2, we have x > t, and
$$
\frac{M\{(\xi>x)\cap(\xi>t)\}}{M\{\xi>t\}} = \frac{1-\Phi(x)}{1-\Phi(t)} \ge \frac{1-(1+\Phi(t))/2}{1-\Phi(t)} = 0.5
$$
and
$$
\frac{M\{(\xi\le x)\cap(\xi>t)\}}{M\{\xi>t\}} \le \frac{\Phi(x)}{1-\Phi(t)}.
$$
It follows from the maximum uncertainty principle that
$$
\Phi(x\,|\,(t,+\infty)) = \frac{\Phi(x)}{1-\Phi(t)} \wedge 0.5.
$$
When (1 + Φ(t))/2 ≤ Φ(x), we have x ≥ t, and
$$
\frac{M\{(\xi>x)\cap(\xi>t)\}}{M\{\xi>t\}} = \frac{1-\Phi(x)}{1-\Phi(t)} \le \frac{1-(1+\Phi(t))/2}{1-\Phi(t)} \le 0.5.
$$
Thus
$$
\Phi(x\,|\,(t,+\infty)) = 1 - \frac{M\{(\xi>x)\cap(\xi>t)\}}{M\{\xi>t\}} = 1 - \frac{1-\Phi(x)}{1-\Phi(t)} = \frac{\Phi(x)-\Phi(t)}{1-\Phi(t)}.
$$
The theorem is proved.

Exercise 2.49: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ > t is
$$
\Phi(x\,|\,(t,+\infty)) = \begin{cases}
0, & \text{if } x \le t \\[1ex]
\dfrac{x-a}{b-t} \wedge 0.5, & \text{if } t < x \le (b+t)/2 \\[1ex]
\dfrac{x-t}{b-t} \wedge 1, & \text{if } (b+t)/2 \le x.
\end{cases}
$$

Figure 2.19: Conditional Uncertainty Distribution Φ(x|(t, +∞))
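
The piecewise form in Theorem 2.58 is straightforward to implement. A
small sketch (ours, plain Python) evaluates Φ(x|(t, +∞)) for a linear
uncertain variable and reproduces the shape in Figure 2.19:

```python
def conditional_given_greater(phi, t):
    # Theorem 2.58: conditional uncertainty distribution of xi given xi > t,
    # assuming Phi(t) < 1
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

# Linear L(0, 4) given xi > 1
phi = lambda x: min(max(x / 4, 0.0), 1.0)
cond = conditional_given_greater(phi, 1.0)
for x in (0.5, 1.5, 2.5, 3.0, 4.0):
    print(x, cond(x))   # 0 below t, capped at 0.5 in the middle, rises to 1
```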

Theorem 2.59 (Liu [129]) Let ξ be an uncertain variable with uncertainty
distribution Φ(x), and let t be a real number with Φ(t) > 0. Then the condi-
tional uncertainty distribution of ξ given ξ ≤ t is
$$
\Phi(x\,|\,(-\infty,t]) = \begin{cases}
\dfrac{\Phi(x)}{\Phi(t)}, & \text{if } \Phi(x) \le \Phi(t)/2 \\[1ex]
\dfrac{\Phi(x)+\Phi(t)-1}{\Phi(t)} \vee 0.5, & \text{if } \Phi(t)/2 \le \Phi(x) < \Phi(t) \\[1ex]
1, & \text{if } \Phi(t) \le \Phi(x).
\end{cases}
$$

Proof: It follows from Φ(x|(−∞, t]) = M{ξ ≤ x | ξ ≤ t} and the definition of
conditional uncertainty that
$$
\Phi(x\,|\,(-\infty,t]) = \begin{cases}
\dfrac{M\{(\xi\le x)\cap(\xi\le t)\}}{M\{\xi\le t\}}, & \text{if } \dfrac{M\{(\xi\le x)\cap(\xi\le t)\}}{M\{\xi\le t\}} < 0.5 \\[2ex]
1-\dfrac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}}, & \text{if } \dfrac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}} < 0.5 \\[2ex]
0.5, & \text{otherwise.}
\end{cases}
$$
When Φ(x) ≤ Φ(t)/2, we have x < t, and
$$
\frac{M\{(\xi\le x)\cap(\xi\le t)\}}{M\{\xi\le t\}} = \frac{\Phi(x)}{\Phi(t)} \le \frac{\Phi(t)/2}{\Phi(t)} = 0.5.
$$
Thus
$$
\Phi(x\,|\,(-\infty,t]) = \frac{M\{(\xi\le x)\cap(\xi\le t)\}}{M\{\xi\le t\}} = \frac{\Phi(x)}{\Phi(t)}.
$$
When Φ(t)/2 ≤ Φ(x) < Φ(t), we have x < t, and
$$
\frac{M\{(\xi\le x)\cap(\xi\le t)\}}{M\{\xi\le t\}} = \frac{\Phi(x)}{\Phi(t)} \ge \frac{\Phi(t)/2}{\Phi(t)} = 0.5
$$
and
$$
\frac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}} \le \frac{1-\Phi(x)}{\Phi(t)},
$$
i.e.,
$$
1 - \frac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}} \ge \frac{\Phi(x)+\Phi(t)-1}{\Phi(t)}.
$$
It follows from the maximum uncertainty principle that
$$
\Phi(x\,|\,(-\infty,t]) = \frac{\Phi(x)+\Phi(t)-1}{\Phi(t)} \vee 0.5.
$$
When Φ(t) ≤ Φ(x), we have x ≥ t, and
$$
\frac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}} = \frac{M\{\emptyset\}}{\Phi(t)} = 0 < 0.5.
$$
Thus
$$
\Phi(x\,|\,(-\infty,t]) = 1 - \frac{M\{(\xi>x)\cap(\xi\le t)\}}{M\{\xi\le t\}} = 1 - 0 = 1.
$$
The theorem is proved.

Exercise 2.50: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ ≤ t is
$$
\Phi(x\,|\,(-\infty,t]) = \begin{cases}
\dfrac{x-a}{t-a} \vee 0, & \text{if } x \le (a+t)/2 \\[1ex]
\left(1-\dfrac{b-x}{t-a}\right) \vee 0.5, & \text{if } (a+t)/2 \le x < t \\[1ex]
1, & \text{if } x \ge t.
\end{cases}
$$

Figure 2.20: Conditional Uncertainty Distribution Φ(x|(−∞, t])

2.11 Uncertain Sequence


Uncertain sequence is a sequence of uncertain variables indexed by integers.
This section introduces four convergence concepts of uncertain sequence: con-
vergence almost surely (a.s.), convergence in measure, convergence in mean,
and convergence in distribution.

Table 2.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution

Convergence Almost Surely (no implication holds between it and the other
three concepts; see Examples 2.20-2.25)

Definition 2.21 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that
$$
\lim_{i\to\infty} |\xi_i(\gamma) - \xi(\gamma)| = 0 \tag{2.181}
$$
for every γ ∈ Λ. In that case we write ξi → ξ, a.s.

Definition 2.22 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent in measure to ξ if
$$
\lim_{i\to\infty} M\{|\xi_i-\xi| \ge \varepsilon\} = 0 \tag{2.182}
$$
for every ε > 0.

Definition 2.23 (Liu [122]) The uncertain sequence {ξi} is said to be con-
vergent in mean to ξ if
$$
\lim_{i\to\infty} E[|\xi_i-\xi|] = 0. \tag{2.183}
$$

Definition 2.24 (Liu [122]) Let Φ, Φ1, Φ2, · · · be the uncertainty distribu-
tions of uncertain variables ξ, ξ1, ξ2, · · · , respectively. We say the uncertain
sequence {ξi} converges in distribution to ξ if
$$
\lim_{i\to\infty} \Phi_i(x) = \Phi(x) \tag{2.184}
$$
for all x at which Φ(x) is continuous.

Convergence in Mean vs. Convergence in Measure


Theorem 2.60 (Liu [122]) If the uncertain sequence {ξi} converges in mean
to ξ, then {ξi} converges in measure to ξ.

Proof: It follows from the Markov inequality that for any given number
ε > 0, we have
$$
M\{|\xi_i-\xi| \ge \varepsilon\} \le \frac{E[|\xi_i-\xi|]}{\varepsilon} \to 0
$$
as i → ∞. Thus {ξi} converges in measure to ξ. The theorem is proved.

Example 2.18: Convergence in measure does not imply convergence in
mean. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with
$$
M\{\Lambda\} = \begin{cases}
\sup_{\gamma_i\in\Lambda} 1/i, & \text{if } \sup_{\gamma_i\in\Lambda} 1/i < 0.5 \\[1ex]
1-\sup_{\gamma_i\notin\Lambda} 1/i, & \text{if } \sup_{\gamma_i\notin\Lambda} 1/i < 0.5 \\[1ex]
0.5, & \text{otherwise.}
\end{cases}
$$
The uncertain variables are defined by
$$
\xi_i(\gamma_j) = \begin{cases} i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. For some small number ε > 0, we have
$$
M\{|\xi_i-\xi| \ge \varepsilon\} = M\{\gamma_i\} = \frac{1}{i} \to 0
$$
as i → ∞. That is, the sequence {ξi} converges in measure to ξ. However,
for each i, we have
$$
E[|\xi_i-\xi|] = 1.
$$
That is, the sequence {ξi} does not converge in mean to ξ.

Convergence in Measure vs. Convergence in Distribution


Theorem 2.61 (Liu [122]) If the uncertain sequence {ξi} converges in mea-
sure to ξ, then {ξi} converges in distribution to ξ.

Proof: Let x be a given continuity point of the uncertainty distribution Φ.
On the one hand, for any y > x, we have
$$
\{\xi_i \le x\} = \{\xi_i \le x, \xi \le y\} \cup \{\xi_i \le x, \xi > y\} \subset \{\xi \le y\} \cup \{|\xi_i-\xi| \ge y-x\}.
$$
It follows from the subadditivity axiom that
$$
\Phi_i(x) \le \Phi(y) + M\{|\xi_i-\xi| \ge y-x\}.
$$
Since {ξi} converges in measure to ξ, we have M{|ξi − ξ| ≥ y − x} → 0 as
i → ∞. Thus we obtain lim sup_{i→∞} Φi(x) ≤ Φ(y) for any y > x. Letting
y → x, we get
$$
\limsup_{i\to\infty} \Phi_i(x) \le \Phi(x). \tag{2.185}
$$
On the other hand, for any z < x, we have
$$
\{\xi \le z\} = \{\xi_i \le x, \xi \le z\} \cup \{\xi_i > x, \xi \le z\} \subset \{\xi_i \le x\} \cup \{|\xi_i-\xi| \ge x-z\}
$$
which implies that
$$
\Phi(z) \le \Phi_i(x) + M\{|\xi_i-\xi| \ge x-z\}.
$$
Since M{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φi(x) for any
z < x. Letting z → x, we get
$$
\Phi(x) \le \liminf_{i\to\infty} \Phi_i(x). \tag{2.186}
$$
It follows from (2.185) and (2.186) that Φi(x) → Φ(x) as i → ∞. The
theorem is proved.

Example 2.19: Convergence in distribution does not imply convergence in
measure. Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with M{γ1} =
M{γ2} = 1/2. We define an uncertain variable as
$$
\xi(\gamma) = \begin{cases} -1, & \text{if } \gamma=\gamma_1 \\ \;\;\, 1, & \text{if } \gamma=\gamma_2. \end{cases}
$$
We also define ξi = −ξ for i = 1, 2, · · · Then ξi and ξ have the same uncer-
tainty distribution. Thus {ξi} converges in distribution to ξ. However, for
some small number ε > 0, we have
$$
M\{|\xi_i-\xi| \ge \varepsilon\} = M\{\Gamma\} = 1.
$$
That is, the sequence {ξi} does not converge in measure to ξ.

Convergence Almost Surely vs. Convergence in Measure

Example 2.20: Convergence a.s. does not imply convergence in measure.
Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with
$$
M\{\Lambda\} = \begin{cases}
\sup_{\gamma_i\in\Lambda} i/(2i+1), & \text{if } \sup_{\gamma_i\in\Lambda} i/(2i+1) < 0.5 \\[1ex]
1-\sup_{\gamma_i\notin\Lambda} i/(2i+1), & \text{if } \sup_{\gamma_i\notin\Lambda} i/(2i+1) < 0.5 \\[1ex]
0.5, & \text{otherwise.}
\end{cases}
$$
Then we define uncertain variables as
$$
\xi_i(\gamma_j) = \begin{cases} i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. The sequence {ξi} converges a.s. to ξ. However,
for some small number ε > 0, we have
$$
M\{|\xi_i-\xi| \ge \varepsilon\} = M\{\gamma_i\} = \frac{i}{2i+1} \to \frac{1}{2}
$$
as i → ∞. That is, the sequence {ξi} does not converge in measure to ξ.

Example 2.21: Convergence in measure does not imply convergence a.s.
Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and
Lebesgue measure. For any positive integer i, there is an integer j such
that i = 2^j + k, where k is an integer between 0 and 2^j − 1. Then we define
uncertain variables as
$$
\xi_i(\gamma) = \begin{cases} 1, & \text{if } k/2^j \le \gamma \le (k+1)/2^j \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. For some small number ε > 0, we have
$$
M\{|\xi_i-\xi| \ge \varepsilon\} = M\{[k/2^j, (k+1)/2^j]\} = \frac{1}{2^j} \to 0
$$
as i → ∞. That is, the sequence {ξi} converges in measure to ξ. However, for
any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2^j, (k +
1)/2^j] containing γ. Thus ξi(γ) does not converge to 0. In other words, the
sequence {ξi} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Mean

Example 2.22: Convergence a.s. does not imply convergence in mean. Take
an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with
$$
M\{\Lambda\} = \sum_{\gamma_i\in\Lambda} \frac{1}{2^i}.
$$
The uncertain variables are defined by
$$
\xi_i(\gamma_j) = \begin{cases} 2^i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. Then ξi converges a.s. to ξ. However, the sequence
{ξi} does not converge in mean to ξ because E[|ξi − ξ|] ≡ 1 for each i.

Example 2.23: Convergence in mean does not imply convergence a.s. Take
an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue
measure. For any positive integer i, there is an integer j such that i = 2^j + k,
where k is an integer between 0 and 2^j − 1. The uncertain variables are
defined by
$$
\xi_i(\gamma) = \begin{cases} 1, & \text{if } k/2^j \le \gamma \le (k+1)/2^j \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. Then
$$
E[|\xi_i-\xi|] = \frac{1}{2^j} \to 0
$$
as i → ∞. That is, the sequence {ξi} converges in mean to ξ. However, for
any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2^j, (k +
1)/2^j] containing γ. Thus ξi(γ) does not converge to 0. In other words, the
sequence {ξi} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Distribution

Example 2.24: Convergence in distribution does not imply convergence a.s.
Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with M{γ1} = M{γ2} =
1/2. We define an uncertain variable ξ as
$$
\xi(\gamma) = \begin{cases} -1, & \text{if } \gamma=\gamma_1 \\ \;\;\, 1, & \text{if } \gamma=\gamma_2. \end{cases}
$$
We also define ξi = −ξ for i = 1, 2, · · · Then ξi and ξ have the same uncer-
tainty distribution. Thus {ξi} converges in distribution to ξ. However, the
sequence {ξi} does not converge a.s. to ξ.

Example 2.25: Convergence a.s. does not imply convergence in distribution.
Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with
$$
M\{\Lambda\} = \begin{cases}
\sup_{\gamma_i\in\Lambda} i/(2i+1), & \text{if } \sup_{\gamma_i\in\Lambda} i/(2i+1) < 0.5 \\[1ex]
1-\sup_{\gamma_i\notin\Lambda} i/(2i+1), & \text{if } \sup_{\gamma_i\notin\Lambda} i/(2i+1) < 0.5 \\[1ex]
0.5, & \text{otherwise.}
\end{cases}
$$
The uncertain variables are defined by
$$
\xi_i(\gamma_j) = \begin{cases} i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases}
$$
for i = 1, 2, · · · and ξ ≡ 0. Then the sequence {ξi} converges a.s. to ξ.
However, the uncertainty distributions of ξi are
$$
\Phi_i(x) = \begin{cases} 0, & \text{if } x < 0 \\ (i+1)/(2i+1), & \text{if } 0 \le x < i \\ 1, & \text{if } x \ge i \end{cases}
$$
for i = 1, 2, · · · , respectively. The uncertainty distribution of ξ is
$$
\Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1, & \text{if } x \ge 0. \end{cases}
$$
It is clear that Φi(x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξi} does not converge in distribution to ξ.

2.12 Uncertain Vector


As an extension of uncertain variable, this section introduces a concept of
uncertain vector whose components are uncertain variables.

Definition 2.25 (Liu [122]) A k-dimensional uncertain vector is a function


ξ from an uncertainty space (Γ, L, M) to the set of k-dimensional real vectors
such that {ξ ∈ B} is an event for any k-dimensional Borel set B.

Theorem 2.62 (Liu [122]) The vector (ξ1, ξ2, · · · , ξk) is an uncertain vector
if and only if ξ1, ξ2, · · · , ξk are uncertain variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξk). Suppose that ξ is an uncertain vector on
the uncertainty space (Γ, L, M). For any Borel set B over ℜ, the set B × ℜ^{k−1}
is a k-dimensional Borel set. Thus the set
$$
\{\xi_1 \in B\} = \{\xi_1 \in B, \xi_2 \in \Re, \cdots, \xi_k \in \Re\} = \{\xi \in B\times\Re^{k-1}\}
$$
is an event. Hence ξ1 is an uncertain variable. A similar process may prove
that ξ2, ξ3, · · · , ξk are uncertain variables.
Conversely, suppose that all ξ1, ξ2, · · · , ξk are uncertain variables on the
uncertainty space (Γ, L, M). We define
$$
\mathcal{B} = \left\{B \subset \Re^k \mid \{\xi \in B\} \text{ is an event}\right\}.
$$
The vector ξ = (ξ1, ξ2, · · · , ξk) is proved to be an uncertain vector if we can
prove that B contains all k-dimensional Borel sets. First, the class B contains
all open intervals of ℜ^k because
$$
\left\{\xi \in \prod_{i=1}^k (a_i,b_i)\right\} = \bigcap_{i=1}^k \{\xi_i \in (a_i,b_i)\}
$$
is an event. Next, the class B is a σ-algebra over ℜ^k because (i) we have
ℜ^k ∈ B since {ξ ∈ ℜ^k} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and
$$
\{\xi \in B^c\} = \{\xi \in B\}^c
$$
is an event. This means that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then
{ξ ∈ Bi} are events and
$$
\left\{\xi \in \bigcup_{i=1}^{\infty} B_i\right\} = \bigcup_{i=1}^{\infty} \{\xi \in B_i\}
$$
is an event. This means that ∪iBi ∈ B. Since the smallest σ-algebra con-
taining all open intervals of ℜ^k is just the Borel algebra over ℜ^k, the class B
contains all k-dimensional Borel sets. The theorem is proved.

Definition 2.26 (Liu [122]) The joint uncertainty distribution of an uncer-
tain vector (ξ1, ξ2, · · · , ξk) is defined by
$$
\Phi(x_1,x_2,\cdots,x_k) = M\{\xi_1\le x_1, \xi_2\le x_2, \cdots, \xi_k\le x_k\} \tag{2.187}
$$
for any real numbers x1, x2, · · · , xk.

Theorem 2.63 (Liu [122]) Let ξ1, ξ2, · · · , ξk be independent uncertain vari-
ables with uncertainty distributions Φ1, Φ2, · · · , Φk, respectively. Then the
uncertain vector (ξ1, ξ2, · · · , ξk) has a joint uncertainty distribution
$$
\Phi(x_1,x_2,\cdots,x_k) = \Phi_1(x_1) \wedge \Phi_2(x_2) \wedge \cdots \wedge \Phi_k(x_k) \tag{2.188}
$$
for any real numbers x1, x2, · · · , xk.

Proof: Since ξ1, ξ2, · · · , ξk are independent uncertain variables, we have
$$
\Phi(x_1,x_2,\cdots,x_k) = M\left\{\bigcap_{i=1}^k (\xi_i\le x_i)\right\} = \bigwedge_{i=1}^k M\{\xi_i\le x_i\} = \bigwedge_{i=1}^k \Phi_i(x_i)
$$
for any real numbers x1, x2, · · · , xk. The theorem is proved.

Remark 2.9: However, the equation (2.188) does not imply that the uncer-
tain variables are independent. For example, let ξ be an uncertain variable
with uncertainty distribution Φ. Then the joint uncertainty distribution Ψ
of the uncertain vector (ξ, ξ) is
$$
\Psi(x_1,x_2) = M\{(\xi\le x_1)\cap(\xi\le x_2)\} = \Phi(x_1)\wedge\Phi(x_2)
$$
for any real numbers x1 and x2. But, generally speaking, an uncertain vari-
able is not independent of itself.

Definition 2.27 (Liu [137]) The k-dimensional uncertain vectors ξ1, ξ2, · · · ,
ξn are said to be independent if for any k-dimensional Borel sets B1, B2, · · · ,
Bn, we have
$$
M\left\{\bigcap_{i=1}^n (\xi_i \in B_i)\right\} = \bigwedge_{i=1}^n M\{\xi_i \in B_i\}. \tag{2.189}
$$

Exercise 2.51: Let (ξ1 , ξ2 , · · · , ξk ) and (η1 , η2 , · · · , ηk ) be independent un-


certain vectors. Show that ξ1 and (η1 , ηk ) are independent.

Theorem 2.64 (Liu [137]) The k-dimensional uncertain vectors ξ1, ξ2, · · · ,
ξn are independent if and only if
$$
M\left\{\bigcup_{i=1}^n (\xi_i \in B_i)\right\} = \bigvee_{i=1}^n M\{\xi_i \in B_i\} \tag{2.190}
$$
for any k-dimensional Borel sets B1, B2, · · · , Bn.

Proof: It follows from the duality of uncertain measure that ξ1, ξ2, · · · , ξn
are independent if and only if
$$
M\left\{\bigcup_{i=1}^n (\xi_i \in B_i)\right\} = 1 - M\left\{\bigcap_{i=1}^n (\xi_i \in B_i^c)\right\}
= 1 - \bigwedge_{i=1}^n M\{\xi_i \in B_i^c\} = \bigvee_{i=1}^n M\{\xi_i \in B_i\}.
$$
The theorem is thus proved.

Theorem 2.65 Let ξ1, ξ2, · · · , ξn be independent uncertain vectors, and let
f1, f2, · · · , fn be vector-valued measurable functions. Then f1(ξ1), f2(ξ2), · · · ,
fn(ξn) are also independent uncertain vectors.

Proof: For any Borel sets B1, B2, · · · , Bn, it follows from the definition of
independence that
$$
M\left\{\bigcap_{i=1}^n (f_i(\xi_i) \in B_i)\right\} = M\left\{\bigcap_{i=1}^n (\xi_i \in f_i^{-1}(B_i))\right\}
= \bigwedge_{i=1}^n M\{\xi_i \in f_i^{-1}(B_i)\} = \bigwedge_{i=1}^n M\{f_i(\xi_i) \in B_i\}.
$$
Thus f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent uncertain vectors.

Normal Uncertain Vector


Definition 2.28 (Liu [137]) Let τ1, τ2, · · · , τm be independent normal un-
certain variables with expected value 0 and variance 1. Then
$$
\tau = (\tau_1, \tau_2, \cdots, \tau_m) \tag{2.191}
$$
is called a standard normal uncertain vector.

It is easy to verify that a standard normal uncertain vector (τ1, τ2, · · · , τm)
has a joint uncertainty distribution
$$
\Phi(x_1,x_2,\cdots,x_m) = \left(1+\exp\left(-\frac{\pi(x_1\wedge x_2\wedge\cdots\wedge x_m)}{\sqrt{3}}\right)\right)^{-1} \tag{2.192}
$$
for any real numbers x1, x2, · · · , xm. It is also easy to show that
$$
\lim_{x_i\to-\infty} \Phi(x_1,x_2,\cdots,x_m) = 0 \quad \text{for each } i, \tag{2.193}
$$
$$
\lim_{(x_1,x_2,\cdots,x_m)\to+\infty} \Phi(x_1,x_2,\cdots,x_m) = 1. \tag{2.194}
$$
Furthermore, the limit
$$
\lim_{(x_1,\cdots,x_{i-1},x_{i+1},\cdots,x_m)\to+\infty} \Phi(x_1,x_2,\cdots,x_m) \tag{2.195}
$$
is a standard normal distribution with respect to xi.

Definition 2.29 (Liu [137]) Let (τ1, τ2, · · · , τm) be a standard normal un-
certain vector, and let ei, σij, i = 1, 2, · · · , k, j = 1, 2, · · · , m be real numbers.
Define
$$
\xi_i = e_i + \sum_{j=1}^m \sigma_{ij}\tau_j \tag{2.196}
$$
for i = 1, 2, · · · , k. Then (ξ1, ξ2, · · · , ξk) is called a normal uncertain vector.



That is, an uncertain vector ξ has a multivariate normal distribution if it
can be represented in the form
$$
\xi = e + \sigma\tau \tag{2.197}
$$
for some real vector e and some real matrix σ, where τ is a standard normal
uncertain vector. Note that ξ, e and τ are understood as column vectors.
Please also note that for every index i, the component ξi is a normal uncertain
variable with expected value ei and standard deviation
$$
\sum_{j=1}^m |\sigma_{ij}|. \tag{2.198}
$$

Theorem 2.66 (Liu [137]) Assume ξ is a normal uncertain vector, c is a
real vector, and D is a real matrix. Then
$$
\eta = c + D\xi \tag{2.199}
$$
is another normal uncertain vector.

Proof: Since ξ is a normal uncertain vector, there exists a standard normal
uncertain vector τ, a real vector e and a real matrix σ such that ξ = e + στ.
It follows that
$$
\eta = c + D\xi = c + D(e+\sigma\tau) = (c+De) + (D\sigma)\tau.
$$
Hence η is a normal uncertain vector.

2.13 Bibliographic Notes


As a fundamental concept in uncertainty theory, the uncertain variable was
presented by Liu [122] in 2007. In order to describe uncertain variable, Liu
[122] also introduced the concept of uncertainty distribution. Later, Peng and
Iwamura [184] proved a sufficient and necessary condition for uncertainty dis-
tribution. In addition, Liu [129] proposed the concept of inverse uncertainty
distribution, and Liu [134] verified a sufficient and necessary condition for it.
More importantly, a measure inversion theorem was given by Liu [129] that
may yield uncertain measures from the uncertainty distribution of the corre-
sponding uncertain variable. Furthermore, Liu [122] proposed the concept of
conditional uncertainty distribution of uncertain variable, and derived some
formulas for calculating it.
Following the independence concept of uncertain variables proposed by
Liu [125], the operational law was given by Liu [129] for calculating the uncer-
tainty distribution and inverse uncertainty distribution of strictly monotone
function of independent uncertain variables.

In order to rank uncertain variables, Liu [122] proposed the concept of


expected value operator. In addition, the linearity of expected value operator
was verified by Liu [129]. As an important contribution, Liu and Ha [147]
derived a useful formula for calculating the expected values of strictly mono-
tone functions of independent uncertain variables. Based on the expected
value operator, Liu [122] presented the concepts of variance, moments and
distance of uncertain variables.
The concept of entropy was proposed by Liu [125] for characterizing the
uncertainty of uncertain variables. Dai and Chen [27] verified the positive
linearity of entropy and derived some formulas for calculating the entropy
of monotone function of uncertain variables. In addition, Chen and Dai
[15] discussed the maximum entropy principle in order to select the uncer-
tainty distribution that has maximum entropy and satisfies the prescribed
constraints. In particular, the normal uncertainty distribution is proved to
have maximum entropy when the expected value and variance are fixed in
advance. As an extension of entropy, Chen, Kar and Ralescu [16] proposed a
concept of cross entropy for comparing an uncertainty distribution against a
reference uncertainty distribution.
The concept of uncertain sequence was presented by Liu [122] with con-
vergence almost surely, convergence in measure, convergence in mean, and
convergence in distribution. Liu [122] also discussed the relationship among
those convergence concepts. Furthermore, Gao [48], You [258], Zhang [268],
and Chen, Li and Ralescu [22] developed some other concepts of convergence
and investigated their mathematical properties.
The concept of uncertain vector was defined by Liu [122]. In addition,
Liu [137] discussed the independence of uncertain vectors and proposed the
concept of normal uncertain vector.
Chapter 3

Uncertain Programming

Uncertain programming was founded by Liu [124] in 2009. This chapter will
provide a theory of uncertain programming, and present some uncertain pro-
gramming models for machine scheduling problem, vehicle routing problem,
and project scheduling problem.

3.1 Uncertain Programming


Uncertain programming is a type of mathematical programming involving
uncertain variables. Assume that x is a decision vector, and ξ is an uncer-
tain vector. Since an uncertain objective function f (x, ξ) cannot be directly
minimized, we may minimize its expected value, i.e.,
min E[f (x, ξ)]. (3.1)
x
In addition, since the uncertain constraints gj (x, ξ) ≤ 0, j = 1, 2, · · · , p do not
define a crisp feasible set, it is naturally desired that the uncertain constraints
hold with confidence levels α1 , α2 , · · · , αp . Then we have a set of chance
constraints,
M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p. (3.2)
In order to obtain a decision with minimum expected objective value subject
to a set of chance constraints, Liu [124] proposed the following uncertain
programming model,

 min
 x
E[f (x, ξ)]
subject to: (3.3)

M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p.

Definition 3.1 (Liu [124]) A vector x is called a feasible solution to the
uncertain programming model (3.3) if
$$
M\{g_j(x,\xi) \le 0\} \ge \alpha_j \tag{3.4}
$$


for j = 1, 2, · · · , p.

Definition 3.2 (Liu [124]) A feasible solution x* is called an optimal solu-
tion to the uncertain programming model (3.3) if
$$
E[f(x^*,\xi)] \le E[f(x,\xi)] \tag{3.5}
$$
for any feasible solution x.

Theorem 3.1 Assume the objective function f(x, ξ1, ξ2, · · · , ξn) is strictly
increasing with respect to ξ1, ξ2, · · · , ξm and strictly decreasing with respect
to ξm+1, ξm+2, · · · , ξn. If ξ1, ξ2, · · · , ξn are independent uncertain variables
with uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, then the expected
objective function E[f(x, ξ1, ξ2, · · · , ξn)] is equal to
$$
\int_0^1 f(x, \Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha))\,d\alpha. \tag{3.6}
$$

Proof: It follows from Theorem 2.30 immediately.

Exercise 3.1: Assume f(x, ξ) = h1(x)ξ1 + h2(x)ξ2 + · · · + hn(x)ξn + h0(x)
where h1(x), h2(x), · · · , hn(x), h0(x) are real-valued functions and ξ1, ξ2, · · · ,
ξn are independent uncertain variables. Show that
$$
E[f(x,\xi)] = h_1(x)E[\xi_1] + h_2(x)E[\xi_2] + \cdots + h_n(x)E[\xi_n] + h_0(x). \tag{3.7}
$$

Theorem 3.2 Assume the constraint function g(x, ξ1, ξ2, · · · , ξn) is strictly
increasing with respect to ξ1, ξ2, · · · , ξk and strictly decreasing with respect
to ξk+1, ξk+2, · · · , ξn. If ξ1, ξ2, · · · , ξn are independent uncertain variables
with uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, then the chance
constraint
$$
M\{g(x, \xi_1, \xi_2, \cdots, \xi_n) \le 0\} \ge \alpha \tag{3.8}
$$
holds if and only if
$$
g(x, \Phi_1^{-1}(\alpha),\cdots,\Phi_k^{-1}(\alpha),\Phi_{k+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha)) \le 0. \tag{3.9}
$$

Proof: It follows from Theorem 2.22 immediately.

Exercise 3.2: Assume x1, x2, · · · , xn are nonnegative decision variables, and
ξ1, ξ2, · · · , ξn, ξ are independent linear uncertain variables L(a1, b1), L(a2, b2),
· · · , L(an, bn), L(a, b), respectively. Show that for any confidence level α ∈
(0, 1), the chance constraint
$$
M\left\{\sum_{i=1}^n \xi_i x_i \le \xi\right\} \ge \alpha \tag{3.10}
$$
holds if and only if
$$
\sum_{i=1}^n ((1-\alpha)a_i + \alpha b_i)x_i \le \alpha a + (1-\alpha)b. \tag{3.11}
$$

Exercise 3.3: Assume x1, x2, · · · , xn are nonnegative decision variables,
and ξ1, ξ2, · · · , ξn, ξ are independent normal uncertain variables N(e1, σ1),
N(e2, σ2), · · · , N(en, σn), N(e, σ), respectively. Show that for any confidence
level α ∈ (0, 1), the chance constraint
$$
M\left\{\sum_{i=1}^n \xi_i x_i \le \xi\right\} \ge \alpha \tag{3.12}
$$
holds if and only if
$$
\sum_{i=1}^n \left(e_i + \frac{\sigma_i\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)x_i \le e - \frac{\sigma\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}. \tag{3.13}
$$

Exercise 3.4: Assume ξ1, ξ2, · · · , ξn are independent uncertain variables
with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, and h1(x),
h2(x), · · · , hn(x), h0(x) are real-valued functions. Show that
$$
M\left\{\sum_{i=1}^n h_i(x)\xi_i \le h_0(x)\right\} \ge \alpha \tag{3.14}
$$
holds if and only if
$$
\sum_{i=1}^n h_i^+(x)\Phi_i^{-1}(\alpha) - \sum_{i=1}^n h_i^-(x)\Phi_i^{-1}(1-\alpha) \le h_0(x) \tag{3.15}
$$
where
$$
h_i^+(x) = \begin{cases} h_i(x), & \text{if } h_i(x) > 0 \\ 0, & \text{if } h_i(x) \le 0, \end{cases} \tag{3.16}
$$
$$
h_i^-(x) = \begin{cases} -h_i(x), & \text{if } h_i(x) < 0 \\ 0, & \text{if } h_i(x) \ge 0 \end{cases} \tag{3.17}
$$
for i = 1, 2, · · · , n.
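
Formulas (3.15)-(3.17) make a chance constraint checkable without any
simulation. The following sketch (ours, plain Python) evaluates the crisp
form for the linear case of Exercise 3.2, where Φ_i^{-1}(α) = (1−α)a_i + αb_i:

```python
def linear_chance_constraint_holds(x, ab_list, ab_rhs, alpha):
    # Exercise 3.2: sum_i ((1-alpha)*a_i + alpha*b_i) * x_i
    #               <= alpha*a + (1-alpha)*b
    # for nonnegative x_i, independent linear variables L(a_i, b_i), L(a, b)
    lhs = sum(((1 - alpha) * a + alpha * b) * xi
              for xi, (a, b) in zip(x, ab_list))
    a, b = ab_rhs
    return lhs <= alpha * a + (1 - alpha) * b

# Two decision variables, xi_1 ~ L(1, 2), xi_2 ~ L(2, 3), xi ~ L(8, 12)
print(linear_chance_constraint_holds([1.0, 2.0], [(1, 2), (2, 3)],
                                     (8, 12), alpha=0.9))   # True: 7.7 <= 8.4
```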

Theorem 3.3 Assume f(x, ξ1, ξ2, · · · , ξn) is strictly increasing with respect
to ξ1, ξ2, · · · , ξm and strictly decreasing with respect to ξm+1, ξm+2, · · · , ξn,
and gj(x, ξ1, ξ2, · · · , ξn) are strictly increasing with respect to ξ1, ξ2, · · · , ξk
and strictly decreasing with respect to ξk+1, ξk+2, · · · , ξn for j = 1, 2, · · · , p.
If ξ1, ξ2, · · · , ξn are independent uncertain variables with uncertainty distri-
butions Φ1, Φ2, · · · , Φn, respectively, then the uncertain programming
$$
\begin{cases}
\min\limits_{x} E[f(x, \xi_1, \xi_2, \cdots, \xi_n)] \\
\text{subject to:} \\
\quad M\{g_j(x, \xi_1, \xi_2, \cdots, \xi_n) \le 0\} \ge \alpha_j, \quad j = 1, 2, \cdots, p
\end{cases} \tag{3.18}
$$
is equivalent to the crisp mathematical programming
$$
\begin{cases}
\min\limits_{x} \displaystyle\int_0^1 f(x, \Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha))\,d\alpha \\
\text{subject to:} \\
\quad g_j(x, \Phi_1^{-1}(\alpha_j),\cdots,\Phi_k^{-1}(\alpha_j),\Phi_{k+1}^{-1}(1-\alpha_j),\cdots,\Phi_n^{-1}(1-\alpha_j)) \le 0, \\
\quad j = 1, 2, \cdots, p.
\end{cases}
$$

Proof: It follows from Theorems 3.1 and 3.2 immediately.

3.2 Numerical Method


When the objective functions and constraint functions are monotone with
respect to the uncertain parameters, the uncertain programming model may
be converted to a crisp mathematical programming.
It is fortunate for us that almost all objective and constraint functions
in practical problems are indeed monotone with respect to the uncertain
parameters (not decision variables).
From the mathematical viewpoint, there is no difference between crisp
mathematical programming and classical mathematical programming except
for an integral. Thus we may solve it by simplex method, branch-and-bound
method, cutting plane method, implicit enumeration method, interior point
method, gradient method, genetic algorithm, particle swarm optimization,
neural networks, tabu search, and so on.

Example 3.1: Assume that x1, x2, x3 are nonnegative decision variables,
ξ1, ξ2, ξ3 are independent linear uncertain variables L(1, 2), L(2, 3), L(3, 4),
and η1, η2, η3 are independent zigzag uncertain variables Z(1, 2, 3), Z(2, 3, 4),
Z(3, 4, 5), respectively. Consider the uncertain programming,
$$
\begin{cases}
\max\limits_{x_1,x_2,x_3} E\left[\sqrt{x_1+\xi_1} + \sqrt{x_2+\xi_2} + \sqrt{x_3+\xi_3}\right] \\
\text{subject to:} \\
\quad M\{(x_1+\eta_1)^2 + (x_2+\eta_2)^2 + (x_3+\eta_3)^2 \le 100\} \ge 0.9 \\
\quad x_1, x_2, x_3 \ge 0.
\end{cases}
$$
Note that √(x1 + ξ1) + √(x2 + ξ2) + √(x3 + ξ3) is a strictly increasing function
with respect to ξ1, ξ2, ξ3, and (x1 + η1)^2 + (x2 + η2)^2 + (x3 + η3)^2 is a strictly
increasing function with respect to η1, η2, η3. It is easy to verify that the
uncertain programming model can be converted to the crisp model,
$$
\begin{cases}
\max\limits_{x_1,x_2,x_3} \displaystyle\int_0^1 \left(\sqrt{x_1+\Phi_1^{-1}(\alpha)} + \sqrt{x_2+\Phi_2^{-1}(\alpha)} + \sqrt{x_3+\Phi_3^{-1}(\alpha)}\right)d\alpha \\
\text{subject to:} \\
\quad (x_1+\Psi_1^{-1}(0.9))^2 + (x_2+\Psi_2^{-1}(0.9))^2 + (x_3+\Psi_3^{-1}(0.9))^2 \le 100 \\
\quad x_1, x_2, x_3 \ge 0
\end{cases}
$$
where Φ1^{-1}, Φ2^{-1}, Φ3^{-1}, Ψ1^{-1}, Ψ2^{-1}, Ψ3^{-1} are inverse uncertainty distributions of
uncertain variables ξ1, ξ2, ξ3, η1, η2, η3, respectively. The Matlab Uncertainty
Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and ob-
tain an optimal solution
$$
(x_1^*, x_2^*, x_3^*) = (2.9735, 1.9735, 0.9735)
$$
whose objective value is 6.3419.
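
The book solves this model with the Matlab Uncertainty Toolbox; the same
crisp model can equally be handed to a general-purpose nonlinear solver. The
following minimal sketch (ours, assuming Python with NumPy and SciPy,
and a coarse grid for the objective integral) reproduces the solution above:

```python
import numpy as np
from scipy.optimize import minimize

# Inverse distributions: xi_i ~ L(i, i+1) gives Phi_i^{-1}(al) = i + al;
# eta_i ~ Z(i, i+1, i+2) gives, for al >= 0.5,
# Psi^{-1}(al) = (2-2al)*b + (2al-1)*c, so Psi_i^{-1}(0.9) = 0.2*(i+1) + 0.8*(i+2)
psi09 = [0.2 * (i + 1) + 0.8 * (i + 2) for i in (1, 2, 3)]
alphas = (np.arange(1000) + 0.5) / 1000   # midpoint grid on (0, 1)

def neg_objective(x):
    # minus the expected objective, via Theorem 3.1
    vals = sum(np.sqrt(x[i] + (i + 1) + alphas) for i in range(3))
    return -vals.mean()

cons = ({'type': 'ineq',
         'fun': lambda x: 100 - sum((x[i] + psi09[i]) ** 2 for i in range(3))},)
res = minimize(neg_objective, x0=[1.0, 1.0, 1.0],
               bounds=[(0, None)] * 3, constraints=cons)
print(res.x, -res.fun)   # near (2.9735, 1.9735, 0.9735) and 6.3419
```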

Example 3.2: Assume that x1 and x2 are decision variables, ξ1 and ξ2 are iid
linear uncertain variables L(0, π/2). Consider the uncertain programming,
$$
\begin{cases}
\min\limits_{x_1,x_2} E[x_1\sin(x_1-\xi_1) - x_2\cos(x_2+\xi_2)] \\
\text{subject to:} \\
\quad 0 \le x_1 \le \dfrac{\pi}{2}, \quad 0 \le x_2 \le \dfrac{\pi}{2}.
\end{cases}
$$
It is clear that x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2) is strictly decreasing with
respect to ξ1 and strictly increasing with respect to ξ2. Thus the uncertain
programming is equivalent to the crisp model,
$$
\begin{cases}
\min\limits_{x_1,x_2} \displaystyle\int_0^1 \left(x_1\sin(x_1-\Phi_1^{-1}(1-\alpha)) - x_2\cos(x_2+\Phi_2^{-1}(\alpha))\right)d\alpha \\
\text{subject to:} \\
\quad 0 \le x_1 \le \dfrac{\pi}{2}, \quad 0 \le x_2 \le \dfrac{\pi}{2}
\end{cases}
$$
where Φ1^{-1}, Φ2^{-1} are inverse uncertainty distributions of ξ1, ξ2, respectively.
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may
solve this model and obtain an optimal solution
$$
(x_1^*, x_2^*) = (0.4026, 0.4026)
$$
whose objective value is −0.2708.



3.3 Machine Scheduling Problem


Machine scheduling problem is concerned with finding an efficient schedule
during an uninterrupted period of time for a set of machines to process a set
of jobs. A lot of research work has been done on this type of problem. The
study of machine scheduling problem with uncertain processing times was
started by Liu [129] in 2010.

Figure 3.1: A Machine Schedule with 3 Machines and 7 Jobs. Reprinted from
Liu [129].

In a machine scheduling problem, we assume that (a) each job can be


processed on any machine without interruption; (b) each machine can process
only one job at a time; and (c) the processing times are uncertain variables
with known uncertainty distributions. We also use the following indices and
parameters:
i = 1, 2, · · · , n: jobs;
k = 1, 2, · · · , m: machines;
ξik : uncertain processing time of job i on machine k;
Φik : uncertainty distribution of ξik .

How to Represent a Schedule?


Liu [114] suggested that a schedule should be represented by two decision
vectors x and y, where
x = (x1, x2, · · · , xn): integer decision vector representing n jobs with
1 ≤ xi ≤ n and xi ≠ xj for all i ≠ j, i, j = 1, 2, · · · , n. That is, the sequence
{x1 , x2 , · · · , xn } is a rearrangement of {1, 2, · · · , n};
y = (y1 , y2 , · · · , ym−1 ): integer decision vector with y0 ≡ 0 ≤ y1 ≤ y2 ≤
· · · ≤ ym−1 ≤ n ≡ ym .
We note that the schedule is fully determined by the decision vectors x
and y in the following way. For each k (1 ≤ k ≤ m), if yk = yk−1 , then the
machine k is not used; if yk > yk−1 , then the machine k is used and processes

jobs x_{y_{k-1}+1}, x_{y_{k-1}+2}, · · · , x_{y_k} in turn. Thus the schedule of all machines is
as follows,
$$
\begin{aligned}
&\text{Machine 1: } x_{y_0+1} \to x_{y_0+2} \to \cdots \to x_{y_1}; \\
&\text{Machine 2: } x_{y_1+1} \to x_{y_1+2} \to \cdots \to x_{y_2}; \\
&\cdots \\
&\text{Machine } m\text{: } x_{y_{m-1}+1} \to x_{y_{m-1}+2} \to \cdots \to x_{y_m}.
\end{aligned} \tag{3.19}
$$

Figure 3.2: Formulation of Schedule in which Machine 1 processes Jobs x1, x2,
Machine 2 processes Jobs x3, x4 and Machine 3 processes Jobs x5, x6, x7.
Reprinted from Liu [129].

Completion Times
Let Ci(x, y, ξ) be the completion times of jobs i, i = 1, 2, · · · , n, respectively.
For each k with 1 ≤ k ≤ m, if the machine k is used (i.e., yk > yk−1), then
we have
$$
C_{x_{y_{k-1}+1}}(x,y,\xi) = \xi_{x_{y_{k-1}+1}k} \tag{3.20}
$$
and
$$
C_{x_{y_{k-1}+j}}(x,y,\xi) = C_{x_{y_{k-1}+j-1}}(x,y,\xi) + \xi_{x_{y_{k-1}+j}k} \tag{3.21}
$$
for 2 ≤ j ≤ yk − yk−1.
If the machine k is used, then the completion time C_{x_{y_{k-1}+1}}(x, y, ξ) of
job x_{y_{k-1}+1} is an uncertain variable whose inverse uncertainty distribution is
$$
\Psi^{-1}_{x_{y_{k-1}+1}}(x,y,\alpha) = \Phi^{-1}_{x_{y_{k-1}+1}k}(\alpha). \tag{3.22}
$$
Generally, suppose the completion time C_{x_{y_{k-1}+j-1}}(x, y, ξ) has an inverse
uncertainty distribution Ψ^{-1}_{x_{y_{k-1}+j-1}}(x, y, α). Then the completion time
C_{x_{y_{k-1}+j}}(x, y, ξ) has an inverse uncertainty distribution
$$
\Psi^{-1}_{x_{y_{k-1}+j}}(x,y,\alpha) = \Psi^{-1}_{x_{y_{k-1}+j-1}}(x,y,\alpha) + \Phi^{-1}_{x_{y_{k-1}+j}k}(\alpha). \tag{3.23}
$$
This recursive process may produce all inverse uncertainty distributions of
completion times of jobs.

Makespan
Note that, for each k (1 ≤ k ≤ m), the value C_{x_{y_k}}(x, y, ξ) is just the time
that the machine k finishes all jobs assigned to it. Thus the makespan of the
schedule (x, y) is determined by
$$
f(x,y,\xi) = \max_{1\le k\le m} C_{x_{y_k}}(x,y,\xi) \tag{3.24}
$$
whose inverse uncertainty distribution is
$$
\Upsilon^{-1}(x,y,\alpha) = \max_{1\le k\le m} \Psi^{-1}_{x_{y_k}}(x,y,\alpha). \tag{3.25}
$$

Machine Scheduling Model

In order to minimize the expected makespan E[f(x, y, ξ)], we have the fol-
lowing machine scheduling model,
$$
\begin{cases}
\min\limits_{x,y} E[f(x,y,\xi)] \\
\text{subject to:} \\
\quad 1 \le x_i \le n, \quad i = 1, 2, \cdots, n \\
\quad x_i \ne x_j, \quad i \ne j, \quad i, j = 1, 2, \cdots, n \\
\quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\
\quad x_i, y_j, \quad i = 1, 2, \cdots, n, \; j = 1, 2, \cdots, m-1, \quad \text{integers.}
\end{cases} \tag{3.26}
$$
Since Υ^{-1}(x, y, α) is the inverse uncertainty distribution of f(x, y, ξ), the
machine scheduling model is simplified as follows,
$$
\begin{cases}
\min\limits_{x,y} \displaystyle\int_0^1 \Upsilon^{-1}(x,y,\alpha)\,d\alpha \\
\text{subject to:} \\
\quad 1 \le x_i \le n, \quad i = 1, 2, \cdots, n \\
\quad x_i \ne x_j, \quad i \ne j, \quad i, j = 1, 2, \cdots, n \\
\quad 0 \le y_1 \le y_2 \le \cdots \le y_{m-1} \le n \\
\quad x_i, y_j, \quad i = 1, 2, \cdots, n, \; j = 1, 2, \cdots, m-1, \quad \text{integers.}
\end{cases} \tag{3.27}
$$

Numerical Experiment
Assume that there are 3 machines and 7 jobs with the following linear un-
certain processing times
$$
\xi_{ik} \sim L(i, i+k), \quad i = 1, 2, \cdots, 7, \; k = 1, 2, 3
$$
where i is the index of jobs and k is the index of machines. The Matlab
Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the
optimal solution is
$$
x^* = (1, 4, 5, 3, 7, 2, 6), \quad y^* = (3, 5). \tag{3.28}
$$
In other words, the optimal machine schedule is
$$
\begin{aligned}
&\text{Machine 1: } 1 \to 4 \to 5 \\
&\text{Machine 2: } 3 \to 7 \\
&\text{Machine 3: } 2 \to 6
\end{aligned}
$$
whose expected makespan is 12.
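
The recursion (3.22)-(3.25) makes it cheap to evaluate the expected makespan
of any candidate schedule, which is what a search procedure needs in its
inner loop. A sketch (ours, assuming Python with NumPy) that decodes
(x, y), accumulates the inverse distributions, and reproduces the expected
makespan 12 of the schedule above:

```python
import numpy as np

def expected_makespan(x, y, inv_phi, m, grid=1000):
    # inv_phi(i, k, alpha): inverse distribution of job i's time on machine k.
    # Per (3.22)-(3.23), inverse distributions of completion times on one
    # machine simply add; per (3.25), the makespan's inverse distribution is
    # the max over machines; the expected value is its integral over (0, 1).
    alphas = (np.arange(grid) + 0.5) / grid
    bounds = [0] + list(y) + [len(x)]
    finish = np.zeros((m, grid))
    for k in range(m):                        # machine k processes jobs
        for i in x[bounds[k]:bounds[k + 1]]:  # x_{y_{k-1}+1}, ..., x_{y_k}
            finish[k] += inv_phi(i, k + 1, alphas)
    return np.max(finish, axis=0).mean()

# Processing times xi_{ik} ~ L(i, i+k): inverse distribution i + k*alpha
inv_phi = lambda i, k, al: i + k * al
print(expected_makespan([1, 4, 5, 3, 7, 2, 6], [3, 5], inv_phi, m=3))  # 12.0
```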

3.4 Vehicle Routing Problem


Vehicle routing problem (VRP) is concerned with finding efficient routes,
beginning and ending at a central depot, for a fleet of vehicles to serve a
number of customers.
Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers.
Reprinted from Liu [129].

Due to its wide applicability and economic importance, vehicle routing


problem has been extensively studied. Liu [129] first introduced uncertainty
theory into the research area of vehicle routing problem in 2010. In this
section, vehicle routing problem will be modelled by uncertain programming
in which the travel times are assumed to be uncertain variables with known
uncertainty distributions.
We assume that (a) a vehicle will be assigned for only one route on which
there may be more than one customer; (b) a customer will be visited by one
and only one vehicle; (c) each route begins and ends at the depot; and (d) each
customer specifies its time window within which the delivery is permitted or
preferred to start.
Let us first introduce the following indices and model parameters:
i = 0: depot;

i = 1, 2, · · · , n: customers;
k = 1, 2, · · · , m: vehicles;
Dij : travel distance from customers i to j, i, j = 0, 1, 2, · · · , n;
Tij : uncertain travel time from customers i to j, i, j = 0, 1, 2, · · · , n;
Φij : uncertainty distribution of Tij , i, j = 0, 1, 2, · · · , n;
[ai , bi ]: time window of customer i, i = 1, 2, · · · , n.

Operational Plan
Liu [114] suggested that an operational plan should be represented by three
decision vectors x, y and t, where
x = (x1 , x2 , · · · , xn ): integer decision vector representing n customers
with 1 ≤ xi ≤ n and xi ≠ xj for all i ≠ j, i, j = 1, 2, · · · , n. That is, the
sequence {x1 , x2 , · · · , xn } is a rearrangement of {1, 2, · · · , n};
y = (y1 , y2 , · · · , ym−1 ): integer decision vector with y0 ≡ 0 ≤ y1 ≤ y2 ≤
· · · ≤ ym−1 ≤ n ≡ ym ;
t = (t1 , t2 , · · · , tm ): each tk represents the starting time of vehicle k at
the depot, k = 1, 2, · · · , m.
We note that the operational plan is fully determined by the decision
vectors x, y and t in the following way. For each k (1 ≤ k ≤ m), if yk = yk−1 ,
then vehicle k is not used; if yk > yk−1 , then vehicle k is used and starts from
the depot at time tk , and the tour of vehicle k is 0 → x_{y_{k−1}+1} → x_{y_{k−1}+2} → · · · → x_{y_k} → 0. Thus the tours of all vehicles are as follows:

Vehicle 1: 0 → x_{y_0+1} → x_{y_0+2} → · · · → x_{y_1} → 0;
Vehicle 2: 0 → x_{y_1+1} → x_{y_1+2} → · · · → x_{y_2} → 0;
· · ·
Vehicle m: 0 → x_{y_{m−1}+1} → x_{y_{m−1}+2} → · · · → x_{y_m} → 0.

Figure 3.4: Formulation of Operational Plan in which Vehicle 1 visits Customers x1, x2, Vehicle 2 visits Customers x3, x4, and Vehicle 3 visits Customers x5, x6, x7. Reprinted from Liu [129].

It is clear that this type of representation is intuitive, and the total number
of decision variables is n + 2m − 1. We also note that the above decision
variables x, y and t ensure that: (a) each vehicle will be used at most one
time; (b) all tours begin and end at the depot; (c) each customer will be
visited by one and only one vehicle; and (d) there is no subtour.
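
As an illustration of this representation, the following Python sketch (illustrative code, not from the book) decodes a pair (x, y) into the vehicle tours, reproducing the assignment of Figure 3.4:

```python
# A minimal sketch of decoding the decision vectors x and y into tours.
# x is a permutation of the customers 1..n, and y cuts it into m consecutive
# blocks (with y_0 = 0 and y_m = n appended); an empty block means the
# corresponding vehicle is unused.  The depot is denoted by 0.

def decode(x, y, m):
    n = len(x)
    cuts = [0] + list(y) + [n]
    tours = {}
    for k in range(1, m + 1):
        block = x[cuts[k - 1]:cuts[k]]
        tours[k] = [0] + block + [0] if block else []
    return tours

# The plan of Figure 3.4: x = (x1, ..., x7) with y = (2, 4) and m = 3 vehicles.
print(decode([1, 2, 3, 4, 5, 6, 7], [2, 4], 3))
# {1: [0, 1, 2, 0], 2: [0, 3, 4, 0], 3: [0, 5, 6, 7, 0]}
```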

Arrival Times
Let fi (x, y, t) be the arrival time of the assigned vehicle at customer i
for i = 1, 2, · · · , n. We remind readers that fi (x, y, t) are determined by the
decision variables x, y and t, i = 1, 2, · · · , n. Since unloading can start either
immediately, or later, when a vehicle arrives at a customer, the calculation of
fi (x, y, t) is heavily dependent on the operational strategy. Here we assume
that the customer does not permit a delivery earlier than the time window.
That is, the vehicle will wait to unload until the beginning of the time window
if it arrives before the time window. If a vehicle arrives at a customer after
the beginning of the time window, unloading will start immediately. For each
k with 1 ≤ k ≤ m, if vehicle k is used (i.e., yk > yk−1 ), then we have

f_{x_{y_{k−1}+1}}(x, y, t) = tk + T_{0,x_{y_{k−1}+1}}

and

f_{x_{y_{k−1}+j}}(x, y, t) = (f_{x_{y_{k−1}+j−1}}(x, y, t) ∨ a_{x_{y_{k−1}+j−1}}) + T_{x_{y_{k−1}+j−1},x_{y_{k−1}+j}}

for 2 ≤ j ≤ yk − yk−1 . If the vehicle k is used, i.e., yk > yk−1 , then the arrival time f_{x_{y_{k−1}+1}}(x, y, t) at the customer x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is

Ψ_{x_{y_{k−1}+1}}^{-1}(x, y, t, α) = tk + Φ_{0,x_{y_{k−1}+1}}^{-1}(α).

Generally, suppose the arrival time f_{x_{y_{k−1}+j−1}}(x, y, t) has an inverse uncertainty distribution Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, t, α). Then f_{x_{y_{k−1}+j}}(x, y, t) has an inverse uncertainty distribution

Ψ_{x_{y_{k−1}+j}}^{-1}(x, y, t, α) = (Ψ_{x_{y_{k−1}+j−1}}^{-1}(x, y, t, α) ∨ a_{x_{y_{k−1}+j−1}}) + Φ_{x_{y_{k−1}+j−1},x_{y_{k−1}+j}}^{-1}(α)

for 2 ≤ j ≤ yk − yk−1 . This recursive process may produce all inverse


uncertainty distributions of arrival times at customers.
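
The recursion translates directly into code. Below is a minimal Python sketch (illustrative, not from the book) that evaluates these inverse distributions along one tour; the normal travel times N(2|i − j|, 1) are borrowed from the numerical experiment later in this section purely as an assumption for concreteness.

```python
# A minimal sketch of the arrival-time recursion along one tour.
from math import log, sqrt, pi

def inv_normal(e, s, alpha):
    # inverse uncertainty distribution of a normal uncertain variable N(e, s)
    return e + (s * sqrt(3) / pi) * log(alpha / (1 - alpha))

def inv_T(i, j, alpha):
    # assumed travel times T_ij ~ N(2|i - j|, 1), as in the experiment below
    return inv_normal(2 * abs(i - j), 1, alpha)

def arrival_inverses(tour, t_k, a, alpha):
    """tour: customers of vehicle k in visiting order, e.g. [2, 5, 7];
    t_k: starting time at the depot; a: dict of window openings a_i;
    returns {customer i: Psi_i^{-1}(alpha)}."""
    psi = {}
    value = t_k + inv_T(0, tour[0], alpha)      # first customer on the tour
    psi[tour[0]] = value
    for prev, cur in zip(tour, tour[1:]):
        # wait until the window opens, then travel to the next customer
        value = max(value, a[prev]) + inv_T(prev, cur, alpha)
        psi[cur] = value
    return psi
```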

Travel Distance
Let g(x, y) be the total travel distance of all vehicles. Then we have
g(x, y) = Σ_{k=1}^{m} gk (x, y)          (3.29)

where

gk (x, y) = D_{0,x_{y_{k−1}+1}} + Σ_{j=y_{k−1}+1}^{y_k−1} D_{x_j ,x_{j+1}} + D_{x_{y_k},0}   if yk > yk−1 ,
gk (x, y) = 0   if yk = yk−1 ,

for k = 1, 2, · · · , m.

Vehicle Routing Model


If we hope that each customer i (1 ≤ i ≤ n) is visited within its time window
[ai , bi ] with confidence level αi (i.e., the vehicle arrives at customer i before
time bi ), then we have the following chance constraint,

M{fi (x, y, t) ≤ bi } ≥ αi . (3.30)

If we want to minimize the total travel distance of all vehicles subject to the
time window constraint, then we have the following vehicle routing model,

min_{x,y,t} g(x, y)
subject to:
    M{fi (x, y, t) ≤ bi } ≥ αi , i = 1, 2, · · · , n
    1 ≤ xi ≤ n, i = 1, 2, · · · , n          (3.31)
    xi ≠ xj , i ≠ j, i, j = 1, 2, · · · , n
    0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
    xi , yj , i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers

which is equivalent to

min_{x,y,t} g(x, y)
subject to:
    Ψ_i^{-1}(x, y, t, αi ) ≤ bi , i = 1, 2, · · · , n
    1 ≤ xi ≤ n, i = 1, 2, · · · , n          (3.32)
    xi ≠ xj , i ≠ j, i, j = 1, 2, · · · , n
    0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
    xi , yj , i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers

where Ψ_i^{-1}(x, y, t, α) are the inverse uncertainty distributions of fi (x, y, t) for i = 1, 2, · · · , n, respectively.

Numerical Experiment
Assume that there are 3 vehicles and 7 customers with time windows shown in
Table 3.1, and each customer is visited within its time window with confidence
level 0.90.
We also assume that the distances are Dij = |i − j| for i, j = 0, 1, 2, · · · , 7,
and the travel times are normal uncertain variables

Tij ∼ N (2|i − j|, 1), i, j = 0, 1, 2, · · · , 7.

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may



Table 3.1: Time Windows of Customers

Node   Window            Node   Window
1      [7:00, 9:00]      5      [15:00, 17:00]
2      [7:00, 9:00]      6      [19:00, 21:00]
3      [15:00, 17:00]    7      [19:00, 21:00]
4      [15:00, 17:00]

yield that the optimal solution is

x∗ = (1, 3, 2, 5, 7, 4, 6),
y∗ = (2, 5),          (3.33)
t∗ = (6:18, 4:18, 8:18).

In other words, the optimal operational plan is


Vehicle 1: depot → 1 → 3 → depot (the latest starting time is 6:18)
Vehicle 2: depot → 2 → 5 → 7 → depot (the latest starting time is 4:18)
Vehicle 3: depot → 4 → 6 → depot (the latest starting time is 8:18)
whose total travel distance is 32.
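
The reported distance is easy to confirm, since Dij = |i − j|; the following lines (illustrative Python) sum the legs of the three tours:

```python
# A quick check of the total travel distance, assuming D_ij = |i - j|.
tours = [[0, 1, 3, 0], [0, 2, 5, 7, 0], [0, 4, 6, 0]]
print(sum(abs(i - j) for t in tours for i, j in zip(t, t[1:])))  # 32
```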

3.5 Project Scheduling Problem


Project scheduling problem is to determine the schedule of allocating re-
sources so as to balance the total cost and the completion time. The study
of project scheduling problem with uncertain factors was started by Liu [129]
in 2010. This section presents an uncertain programming model for project
scheduling problem in which the duration times are assumed to be uncertain
variables with known uncertainty distributions.
Project scheduling is usually represented by a directed acyclic network
where nodes correspond to milestones, and arcs to activities which are basi-
cally characterized by the times and costs consumed.
Let (V, A) be a directed acyclic graph, where V = {1, 2, · · · , n, n + 1} is
the set of nodes, A is the set of arcs, (i, j) ∈ A is the arc of the graph (V, A)
from nodes i to j. It is well-known that we can rearrange the indexes of the
nodes in V such that i < j for all (i, j) ∈ A.
Before we begin to study project scheduling problem with uncertain ac-
tivity duration times, we first make some assumptions: (a) all of the costs
needed are obtained via loans with some given interest rate; and (b) each
activity can be processed only if the loan needed is allocated and all the
foregoing activities are finished.
In order to model the project scheduling problem, we introduce the fol-
lowing indices and parameters:

Figure 3.5: A Project with 8 Milestones and 11 Activities. Reprinted from Liu [129].

ξij : uncertain duration time of activity (i, j) in A;


Φij : uncertainty distribution of ξij ;
cij : cost of activity (i, j) in A;
r: interest rate;
xi : integer decision variable representing the allocating time of all loans
needed for all activities (i, j) in A.

Starting Times
For simplicity, we write ξ = {ξij : (i, j) ∈ A} and x = (x1 , x2 , · · · , xn ). Let
Ti (x, ξ) denote the starting time of all activities (i, j) in A. According to the
assumptions, the starting time of the total project (i.e., the starting time
of all activities (1, j) in A) should be

T1 (x, ξ) = x1 (3.34)

whose inverse uncertainty distribution may be written as

Ψ_1^{-1}(x, α) = x1 .          (3.35)

From the starting time T1 (x, ξ), we deduce that the starting time of activity
(2, 5) is
T2 (x, ξ) = x2 ∨ (x1 + ξ12 ) (3.36)
whose inverse uncertainty distribution may be written as

Ψ_2^{-1}(x, α) = x2 ∨ (x1 + Φ_{12}^{-1}(α)).          (3.37)

Generally, suppose that the starting time Tk (x, ξ) of all activities (k, i) in A has an inverse uncertainty distribution Ψ_k^{-1}(x, α). Then the starting time Ti (x, ξ) of all activities (i, j) in A should be

Ti (x, ξ) = xi ∨ max_{(k,i)∈A} (Tk (x, ξ) + ξki )          (3.38)

whose inverse uncertainty distribution is

Ψ_i^{-1}(x, α) = xi ∨ max_{(k,i)∈A} (Ψ_k^{-1}(x, α) + Φ_{ki}^{-1}(α)).          (3.39)

This recursive process may produce all inverse uncertainty distributions of


starting times of activities.
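
Since the nodes can be indexed so that i < j for every arc, a single forward pass implements this recursion. The sketch below (illustrative Python; the linear durations L(3i, 3j) of the numerical experiment later in this section are assumed for concreteness) returns the inverse distribution of the starting time at every node:

```python
# A minimal sketch of the starting-time recursion (3.38)-(3.39).

def inv_xi(i, j, alpha):
    # assumed duration times L(3i, 3j), as in the numerical experiment below
    return 3 * i + (3 * j - 3 * i) * alpha

def starting_inverses(arcs, x, alpha):
    """arcs: list of (i, j) with i < j; x: dict of allocating times x_i;
    returns Psi_i^{-1}(x, alpha) for every node (the value at the terminal
    node n + 1 is the inverse distribution of the completion time)."""
    nodes = sorted({i for (i, j) in arcs} | {j for (i, j) in arcs})
    psi = {}
    for node in nodes:
        preds = [k for (k, j) in arcs if j == node]
        best = max((psi[k] + inv_xi(k, node, alpha) for k in preds),
                   default=float("-inf"))
        psi[node] = max(x.get(node, float("-inf")), best)
    return psi
```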

Completion Time
The completion time T (x, ξ) of the total project (i.e, the finish time of all
activities (k, n + 1) in A) is

T (x, ξ) = max_{(k,n+1)∈A} (Tk (x, ξ) + ξ_{k,n+1})          (3.40)

whose inverse uncertainty distribution is



Ψ^{-1}(x, α) = max_{(k,n+1)∈A} (Ψ_k^{-1}(x, α) + Φ_{k,n+1}^{-1}(α)).          (3.41)

Total Cost
Based on the completion time T (x, ξ), the total cost of the project can be
written as
C(x, ξ) = Σ_{(i,j)∈A} cij (1 + r)^{⌈T (x,ξ)−xi ⌉}          (3.42)

where ⌈a⌉ represents the minimal integer greater than or equal to a. Note that C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution is

Υ^{-1}(x, α) = Σ_{(i,j)∈A} cij (1 + r)^{⌈Ψ^{-1}(x,α)−xi ⌉}          (3.43)

for 0 < α < 1.

Project Scheduling Model


In order to minimize the expected cost of the project under the completion
time constraint, we may construct the following project scheduling model,

min_x E[C(x, ξ)]
subject to:
    M{T (x, ξ) ≤ T0 } ≥ α0          (3.44)
    x ≥ 0, integer vector

where T0 is a due date of the project, α0 is a predetermined confidence level,


T (x, ξ) is the completion time defined by (3.40), and C(x, ξ) is the total cost

defined by (3.42). This model is equivalent to

min_x ∫₀¹ Υ^{-1}(x, α) dα
subject to:          (3.45)
    Ψ^{-1}(x, α0 ) ≤ T0
    x ≥ 0, integer vector

where Ψ−1 (x, α) is the inverse uncertainty distribution of T (x, ξ) determined


by (3.41) and Υ−1 (x, α) is the inverse uncertainty distribution of C(x, ξ)
determined by (3.43).
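
For a fixed integer vector x, both parts of this model are straightforward to evaluate numerically: the constraint needs one evaluation of Ψ^{-1}(x, α0), and the objective integrates Υ^{-1}(x, α) over α. A minimal Python sketch (illustrative; psi_inv stands for any implementation of (3.41), such as the recursion sketched after (3.39)):

```python
# A minimal sketch of evaluating model (3.45) for a fixed x.
from math import ceil

def cost_inv(psi_inv, arcs, x, c, r, alpha):
    # Upsilon^{-1}(x, alpha) from (3.43); psi_inv(alpha) = Psi^{-1}(x, alpha)
    T = psi_inv(alpha)
    return sum(c[i, j] * (1 + r) ** ceil(T - x[i]) for (i, j) in arcs)

def expected_cost(psi_inv, arcs, x, c, r, N=999):
    # objective of (3.45) by the midpoint rule over alpha in (0, 1)
    return sum(cost_inv(psi_inv, arcs, x, c, r, (k + 0.5) / N)
               for k in range(N)) / N

def feasible(psi_inv, alpha0, T0):
    # chance constraint of (3.45)
    return psi_inv(alpha0) <= T0
```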

Numerical Experiment

Consider a project scheduling problem shown by Figure 3.5 in which there are
8 milestones and 11 activities. Assume that all duration times of activities
are linear uncertain variables,

ξij ∼ L(3i, 3j), ∀(i, j) ∈ A

and assume that the costs of activities are

cij = i + j, ∀(i, j) ∈ A.

In addition, we also suppose that the interest rate is r = 0.02, the due date is
T0 = 60, and the confidence level is α0 = 0.85. The Matlab Uncertainty Tool-
box (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution
is
x∗ = (7, 24, 17, 16, 35, 33, 30). (3.46)

In other words, the optimal allocating times of all loans needed for all activ-
ities are shown in Table 3.2 whose expected total cost is 190.6, and

M{T (x∗ , ξ) ≤ 60} = 0.88.

Table 3.2: Optimal Allocating Times of Loans

Date 7 16 17 24 30 33 35
Node 1 4 3 2 7 6 5
Loan 12 11 27 7 15 14 13

3.6 Uncertain Multiobjective Programming


It has been increasingly recognized that many real decision-making problems
involve multiple, noncommensurable, and conflicting objectives which should
be considered simultaneously. In order to optimize multiple objectives, mul-
tiobjective programming has been well developed and applied widely. For
modelling multiobjective decision-making problems with uncertain param-
eters, Liu and Chen [141] presented the following uncertain multiobjective programming,

min_x (E[f1 (x, ξ)], E[f2 (x, ξ)], · · · , E[fm (x, ξ)])
subject to:          (3.47)
    M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p

where fi (x, ξ) are objective functions for i = 1, 2, · · · , m, and gj (x, ξ) are


constraint functions for j = 1, 2, · · · , p.
Since the objectives are usually in conflict, there is no optimal solution
that simultaneously minimizes all the objective functions. In this case, we
have to introduce the concept of Pareto solution, which means that it is
impossible to improve any one objective without sacrificing one or more
of the other objectives.

Definition 3.3 A feasible solution x∗ is said to be Pareto to the uncertain


multiobjective programming (3.47) if there is no feasible solution x such that

E[fi (x, ξ)] ≤ E[fi (x∗ , ξ)], i = 1, 2, · · · , m (3.48)

and E[fj (x, ξ)] < E[fj (x∗ , ξ)] for at least one index j.

If the decision maker has a real-valued preference function aggregating


the m objective functions, then we may minimize the aggregating preference
function subject to the same set of chance constraints. This model is referred
to as a compromise model whose solution is called a compromise solution.
It has been proved that the compromise solution is Pareto to the original
multiobjective model.
The first well-known compromise model is set up by weighting the objective functions, i.e.,

min_x Σ_{i=1}^{m} λi E[fi (x, ξ)]
subject to:          (3.49)
    M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p

where the weights λ1 , λ2 , · · · , λm are nonnegative numbers with λ1 + λ2 +


· · · + λm = 1, for example, λi ≡ 1/m for i = 1, 2, · · · , m.

The second way is related to minimizing the distance function from a


solution
(E[f1 (x, ξ)], E[f2 (x, ξ)], · · · , E[fm (x, ξ)]) (3.50)

to an ideal vector (f1∗ , f2∗ , · · · , fm∗ ), where fi∗ are the optimal values of the ith objective functions without considering other objectives, i = 1, 2, · · · , m, respectively. That is,

min_x Σ_{i=1}^{m} λi (E[fi (x, ξ)] − fi∗ )²
subject to:          (3.51)
    M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p

where the weights λ1 , λ2 , · · · , λm are nonnegative numbers with λ1 + λ2 +


· · · + λm = 1, for example, λi ≡ 1/m for i = 1, 2, · · · , m.
By the third way a compromise solution can be found via an interactive
approach consisting of a sequence of decision phases and computation phases.
Various interactive approaches have been developed.
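
To make the two compromise models concrete, suppose the expected objective vector (E[f1(x, ξ)], · · · , E[fm(x, ξ)]) has already been evaluated for each candidate solution; the sketch below (illustrative Python with made-up numbers) shows that the weighting model (3.49) and the distance model (3.51) may select different compromise solutions:

```python
# A minimal sketch of the two compromise criteria (illustrative data only).

def weighted_sum(Ef, lam):                        # criterion of model (3.49)
    return sum(l * e for l, e in zip(lam, Ef))

def ideal_distance(Ef, lam, f_star):              # criterion of model (3.51)
    return sum(l * (e - fs) ** 2 for l, e, fs in zip(lam, Ef, f_star))

candidates = {"xA": [3.0, 5.0], "xB": [3.5, 4.6]}  # expected objective vectors
lam, f_star = [0.5, 0.5], [3.0, 4.0]               # weights and ideal vector
print(min(candidates, key=lambda s: weighted_sum(candidates[s], lam)))            # xA
print(min(candidates, key=lambda s: ideal_distance(candidates[s], lam, f_star)))  # xB
```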

3.7 Uncertain Goal Programming


The concept of goal programming was presented by Charnes and Cooper
[11] in 1961 and subsequently studied by many researchers. Goal program-
ming can be regarded as a special compromise model for multiobjective op-
timization and has been applied in a wide variety of real-world problems.
In multiobjective decision-making problems, we assume that the decision-
maker is able to assign a target level for each goal and the key idea is to
minimize the deviations (positive, negative, or both) from the target levels.
In the real-world situation, the goals are achievable only at the expense of
other goals and these goals are usually incompatible. In order to balance
multiple conflicting objectives, a decision-maker may establish a hierarchy of
importance among these incompatible goals so as to satisfy as many goals as
possible in the order specified. For multiobjective decision-making problems
with uncertain parameters, Liu and Chen [141] proposed an uncertain goal programming,

min_x Σ_{j=1}^{l} Pj Σ_{i=1}^{m} (uij d_i^+ + vij d_i^−)
subject to:
    E[fi (x, ξ)] + d_i^− − d_i^+ = bi , i = 1, 2, · · · , m          (3.52)
    M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p
    d_i^+ , d_i^− ≥ 0, i = 1, 2, · · · , m
where Pj is the preemptive priority factor which expresses the relative importance of various goals, Pj ≫ Pj+1 , for all j, uij is the weighting factor

corresponding to positive deviation for goal i with priority j assigned, vij is the weighting factor corresponding to negative deviation for goal i with priority j assigned, d_i^+ is the positive deviation from the target of goal i, d_i^− is the negative deviation from the target of goal i, fi is a function in goal constraints, gj is a function in real constraints, bi is the target value according to goal i, l is the number of priorities, m is the number of goal constraints, and p is the number of real constraints. Note that the positive and negative deviations are calculated by

d_i^+ = E[fi (x, ξ)] − bi if E[fi (x, ξ)] > bi , and d_i^+ = 0 otherwise          (3.53)

and

d_i^− = bi − E[fi (x, ξ)] if E[fi (x, ξ)] < bi , and d_i^− = 0 otherwise          (3.54)
for each i. Sometimes, the objective function in the goal programming model
is written as follows,
lexmin { Σ_{i=1}^{m} (ui1 d_i^+ + vi1 d_i^−), Σ_{i=1}^{m} (ui2 d_i^+ + vi2 d_i^−), · · · , Σ_{i=1}^{m} (uil d_i^+ + vil d_i^−) }

where lexmin represents lexicographically minimizing the objective vector.
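
The deviations (3.53)-(3.54) and the lexicographic objective are easy to compute once the expected values are known; a minimal Python sketch (illustrative names and data):

```python
# A minimal sketch of the deviations (3.53)-(3.54) and the lexmin objective.

def deviations(Ef, b):
    d_plus = [max(e - t, 0.0) for e, t in zip(Ef, b)]
    d_minus = [max(t - e, 0.0) for e, t in zip(Ef, b)]
    return d_plus, d_minus

def lex_objective(Ef, b, u, v):
    """u[j][i], v[j][i]: weights of goal i at priority level j; the returned
    tuple is minimized lexicographically (Python compares tuples that way)."""
    dp, dm = deviations(Ef, b)
    return tuple(sum(uj[i] * dp[i] + vj[i] * dm[i] for i in range(len(b)))
                 for uj, vj in zip(u, v))

# Two goals, two priorities: level 1 penalizes goal 1 only, level 2 goal 2.
Ef, b = [12.0, 7.0], [10.0, 9.0]
u = [[1, 0], [0, 1]]
v = [[1, 0], [0, 1]]
print(lex_objective(Ef, b, u, v))  # (2.0, 2.0)
```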

3.8 Uncertain Multilevel Programming


Multilevel programming offers a means of studying decentralized decision
systems in which we assume that the leader and followers may have their
own decision variables and objective functions, and the leader can only influ-
ence the reactions of followers through his own decision variables, while the
followers have full authority to decide how to optimize their own objective
functions in view of the decisions of the leader and other followers.
Assume that in a decentralized two-level decision system there is one
leader and m followers. Let x and y i be the control vectors of the leader
and the ith followers, i = 1, 2, · · · , m, respectively. We also assume that the
objective functions of the leader and ith followers are F (x, y 1 , · · · , y m , ξ) and
fi (x, y 1 , · · · , y m , ξ), i = 1, 2, · · · , m, respectively, where ξ is an uncertain
vector.
Let the feasible set of control vector x of the leader be defined by the
chance constraint
M{G(x, ξ) ≤ 0} ≥ α (3.55)
where G is a constraint function, and α is a predetermined confidence level.
Then for each decision x chosen by the leader, the feasibility of control vec-
tors y i of the ith followers should be dependent on not only x but also

y 1 , · · · , y i−1 , y i+1 , · · · , y m , and generally represented by the chance con-


straints,
M{gi (x, y 1 , y 2 , · · · , y m , ξ) ≤ 0} ≥ αi (3.56)
where gi are constraint functions, and αi are predetermined confidence levels,
i = 1, 2, · · · , m, respectively.
Assume that the leader first chooses his control vector x, and the fol-
lowers determine their control array (y 1 , y 2 , · · · , y m ) after that. In order to
minimize the expected objective of the leader, Liu and Yao [140] proposed
the following uncertain multilevel programming,

min_x E[F (x, y_1^∗ , y_2^∗ , · · · , y_m^∗ , ξ)]
subject to:
    M{G(x, ξ) ≤ 0} ≥ α
    (y_1^∗ , y_2^∗ , · · · , y_m^∗ ) solves problems (i = 1, 2, · · · , m)          (3.57)
        min_{y_i} E[fi (x, y_1 , y_2 , · · · , y_m , ξ)]
        subject to:
            M{gi (x, y_1 , y_2 , · · · , y_m , ξ) ≤ 0} ≥ αi .

Definition 3.4 Let x be a feasible control vector of the leader. A Nash equilibrium of followers is the feasible array (y_1^∗ , y_2^∗ , · · · , y_m^∗ ) with respect to x if

E[fi (x, y_1^∗ , · · · , y_{i−1}^∗ , y_i , y_{i+1}^∗ , · · · , y_m^∗ , ξ)] ≥ E[fi (x, y_1^∗ , · · · , y_{i−1}^∗ , y_i^∗ , y_{i+1}^∗ , · · · , y_m^∗ , ξ)]          (3.58)

for any feasible array (y_1^∗ , · · · , y_{i−1}^∗ , y_i , y_{i+1}^∗ , · · · , y_m^∗ ) and i = 1, 2, · · · , m.

Definition 3.5 Suppose that x∗ is a feasible control vector of the leader and (y_1^∗ , y_2^∗ , · · · , y_m^∗ ) is a Nash equilibrium of followers with respect to x∗ . We call the array (x∗ , y_1^∗ , y_2^∗ , · · · , y_m^∗ ) a Stackelberg-Nash equilibrium to the uncertain multilevel programming (3.57) if

E[F (x, y_1 , y_2 , · · · , y_m , ξ)] ≥ E[F (x∗ , y_1^∗ , y_2^∗ , · · · , y_m^∗ , ξ)]          (3.59)

for any feasible control vector x and the Nash equilibrium (y_1 , y_2 , · · · , y_m ) with respect to x.

3.9 Bibliographic Notes


Uncertain programming was founded by Liu [124] in 2009 and was applied to
machine scheduling problem, vehicle routing problem and project scheduling
problem by Liu [129] in 2010.
As extensions of uncertain programming theory, Liu and Chen [141] de-
veloped an uncertain multiobjective programming and an uncertain goal pro-
gramming. In addition, Liu and Yao [140] suggested an uncertain multilevel

programming for modeling decentralized decision systems with uncertain fac-


tors.
Since then, uncertain programming has produced fruitful results in
both theory and practice. For exploring more books and papers, the inter-
ested reader may visit the website at http://orsc.edu.cn/online.
Chapter 4

Uncertain Statistics

The study of uncertain statistics was started by Liu [129] in 2010. It is a


methodology for collecting and interpreting expert’s experimental data by un-
certainty theory. This chapter will design a questionnaire survey for collecting
expert’s experimental data, and introduce empirical uncertainty distribution
(i.e., linear interpolation method), principle of least squares, method of mo-
ments, and Delphi method for determining uncertainty distributions from
expert’s experimental data.

4.1 Expert’s Experimental Data


Uncertain statistics is based on expert’s experimental data rather than histor-
ical data. How do we obtain expert’s experimental data? Liu [129] proposed
a questionnaire survey for collecting expert’s experimental data. The start-
ing point is to invite one or more domain experts who are asked to complete
a questionnaire about the meaning of an uncertain variable ξ like “how far
from Beijing to Tianjin”.
We first ask the domain expert to choose a possible value x (say 110km)
that the uncertain variable ξ may take, and then quiz him
“How likely is ξ less than or equal to x?” (4.1)
Denote the expert’s belief degree by α (say 0.6). Note that the expert’s belief
degree of ξ greater than x must be 1 − α due to the self-duality of uncertain
measure. An expert’s experimental data
(x, α) = (110, 0.6) (4.2)
is thus acquired from the domain expert.
Repeating the above process, the following expert’s experimental data are
obtained by the questionnaire,
(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ). (4.3)


Figure 4.1: Expert's Experimental Data (x, α). Reprinted from Liu [129].

Remark 4.1: None of x, α and n could be assigned a value in the questionnaire before asking the domain expert. Otherwise, the domain expert may not have enough knowledge or experience to answer your questions.

4.2 Questionnaire Survey


Beijing is the capital of China, and Tianjin is a coastal city. Assume that
the real distance between them is not exactly known for us. It is more ac-
ceptable to regard such an unknown quantity as an uncertain variable than
a random variable or a fuzzy variable. Chen and Ralescu [18] employed un-
certain statistics to estimate the travel distance between Beijing and Tianjin.
The consultation process is as follows:

Q1: May I ask you how far it is from Beijing to Tianjin? What do you think
is the minimum distance?

A1: 100km. (an expert’s experimental data (100, 0) is acquired)

Q2: What do you think is the maximum distance?

A2: 150km. (an expert’s experimental data (150, 1) is acquired)

Q3: What do you think is a likely distance?

A3: 130km.

Q4: What is the belief degree that the real distance is less than 130km?

A4: 0.6. (an expert’s experimental data (130, 0.6) is acquired)

Q5: Is there another number this distance may be?

A5: 140km.

Q6: What is the belief degree that the real distance is less than 140km?

A6: 0.9. (an expert’s experimental data (140, 0.9) is acquired)

Q7: Is there another number this distance may be?

A7: 120km.
Section 4.4 - Principle of Least Squares 129

Q8: What is the belief degree that the real distance is less than 120km?
A8: 0.3. (an expert’s experimental data (120, 0.3) is acquired)
Q9: Is there another number this distance may be?
A9: No idea.
By using the questionnaire survey, five expert’s experimental data of the
travel distance between Beijing and Tianjin are acquired from the domain
expert,
(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1). (4.4)

4.3 Empirical Uncertainty Distribution


How do we determine the uncertainty distribution for an uncertain variable?
Assume that we have obtained a set of expert’s experimental data
(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (4.5)
that meet the following consistency condition (perhaps after a rearrangement)
x1 < x2 < · · · < xn , 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1. (4.6)
Based on those expert's experimental data, Liu [129] suggested an empirical uncertainty distribution,

Φ(x) =
    0,                                           if x < x1
    αi + (αi+1 − αi )(x − xi )/(xi+1 − xi ),     if xi ≤ x ≤ xi+1 , 1 ≤ i < n          (4.7)
    1,                                           if x > xn .

Essentially, it is a type of linear interpolation method.


The empirical uncertainty distribution Φ determined by (4.7) has an expected value

E[ξ] = (α1 + α2 )/2 · x1 + Σ_{i=2}^{n−1} (αi+1 − αi−1 )/2 · xi + (1 − (αn−1 + αn )/2) · xn .          (4.8)

If all xi ’s are nonnegative, then the k-th empirical moments are


E[ξ^k ] = α1 x1^k + 1/(k + 1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (αi+1 − αi ) xi^j x_{i+1}^{k−j} + (1 − αn ) xn^k .          (4.9)

Example 4.1: Recall that the five expert’s experimental data (100, 0),
(120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Bei-
jing and Tianjin have been acquired in Section 4.2. Based on those expert’s
experimental data, an empirical uncertainty distribution of travel distance is
shown in Figure 4.3.
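
The empirical distribution and its expected value (4.8) can be checked with a few lines of Python (illustrative code, not the Matlab Uncertainty Toolbox):

```python
# A minimal sketch of the empirical uncertainty distribution (4.7) and the
# expected value (4.8), checked on the Beijing-Tianjin data.

def empirical(xs, alphas):
    def Phi(x):
        if x < xs[0]:
            return 0.0
        if x > xs[-1]:
            return 1.0
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                return alphas[i] + (alphas[i + 1] - alphas[i]) * \
                       (x - xs[i]) / (xs[i + 1] - xs[i])
    return Phi

def expected(xs, alphas):
    n = len(xs)
    e = (alphas[0] + alphas[1]) / 2 * xs[0]
    e += sum((alphas[i + 1] - alphas[i - 1]) / 2 * xs[i] for i in range(1, n - 1))
    e += (1 - (alphas[-2] + alphas[-1]) / 2) * xs[-1]
    return e

xs, alphas = [100, 120, 130, 140, 150], [0, 0.3, 0.6, 0.9, 1]
print(round(expected(xs, alphas), 1))        # 125.5
print(round(empirical(xs, alphas)(125), 2))  # 0.45
```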

Figure 4.2: Empirical Uncertainty Distribution Φ(x). Reprinted from Liu [129].

4.4 Principle of Least Squares


Assume that an uncertainty distribution to be determined has a known func-
tional form Φ(x|θ) with an unknown parameter θ. In order to estimate the
parameter θ, Liu [129] employed the principle of least squares that minimizes
the sum of the squares of the distance of the expert’s experimental data to
the uncertainty distribution. This minimization can be performed in either
the vertical or horizontal direction. If the expert’s experimental data

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (4.10)

are obtained and the vertical direction is accepted, then we have


min_θ Σ_{i=1}^{n} (Φ(xi |θ) − αi )².          (4.11)

The optimal solution θ̂ of (4.11) is called the least squares estimate of θ, and then the least squares uncertainty distribution is Φ(x|θ̂).

Example 4.2: Assume that an uncertainty distribution has a linear form


with two unknown parameters a and b, i.e.,

Φ(x) =
    0,                   if x ≤ a
    (x − a)/(b − a),     if a ≤ x ≤ b          (4.12)
    1,                   if x ≥ b.

We also assume the following expert’s experimental data,

(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95). (4.13)

Figure 4.3: Empirical Uncertainty Distribution of Travel Distance between Beijing and Tianjin. Note that the empirical expected distance is 125.5km and the real distance is 127km in Google Earth.

Figure 4.4: Principle of Least Squares

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may


yield that a = 0.2273, b = 4.7727 and the least squares uncertainty distribution is

Φ(x) =
    0,                       if x ≤ 0.2273
    (x − 0.2273)/4.5454,     if 0.2273 ≤ x ≤ 4.7727          (4.14)
    1,                       if x ≥ 4.7727.
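
A general-purpose optimizer recovers essentially the same estimates. The sketch below (illustrative Python with scipy; the starting point is arbitrary) minimizes the vertical sum of squares (4.11) for the linear form (4.12) and should land close to the values reported above:

```python
# A minimal sketch of the least squares estimate (4.11) for the linear
# uncertainty distribution (4.12).
import numpy as np
from scipy.optimize import minimize

data = [(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95)]

def Phi(x, a, b):
    # linear uncertainty distribution, clipped to [0, 1]
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def sse(theta):
    a, b = theta
    return sum((Phi(x, a, b) - alpha) ** 2 for x, alpha in data)

res = minimize(sse, x0=[0.0, 6.0], method="Nelder-Mead")
print(res.x)  # approximately [0.2273, 4.7727]
```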

Example 4.3: Assume that an uncertainty distribution has a lognormal


form with two unknown parameters e and σ, i.e.,
Φ(x|e, σ) = (1 + exp(π(e − ln x)/(√3 σ)))^{-1} .          (4.15)


We also assume the following expert’s experimental data,

(0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9). (4.16)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that e = 0.4825, σ = 0.7852 and the least squares uncertainty distribution is

Φ(x) = (1 + exp((0.4825 − ln x)/0.4329))^{-1} .          (4.17)

4.5 Method of Moments


Assume that a nonnegative uncertain variable has an uncertainty distribution

Φ(x|θ1 , θ2 , · · · , θp ) (4.18)

with unknown parameters θ1 , θ2 , · · · , θp . Given a set of expert’s experimental


data
(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (4.19)
with

0 ≤ x1 < x2 < · · · < xn , 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1, (4.20)

Wang and Peng [230] proposed a method of moments to estimate the un-
known parameters of uncertainty distribution. At first, the kth empirical
moments of the expert’s experimental data are defined as that of the corre-
sponding empirical uncertainty distribution, i.e.,
ξ̄^k = α1 x1^k + 1/(k + 1) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (αi+1 − αi ) xi^j x_{i+1}^{k−j} + (1 − αn ) xn^k .          (4.21)

The moment estimates θ̂1 , θ̂2 , · · · , θ̂p are then obtained by equating the first p moments of Φ(x|θ1 , θ2 , · · · , θp ) to the corresponding first p empirical moments. In other words, the moment estimates θ̂1 , θ̂2 , · · · , θ̂p should solve the system of equations,

∫₀^{+∞} (1 − Φ( x^{1/k} | θ1 , θ2 , · · · , θp )) dx = ξ̄^k , k = 1, 2, · · · , p          (4.22)

where ξ̄^1 , ξ̄^2 , · · · , ξ̄^p are empirical moments determined by (4.21).

Example 4.4: Assume that a questionnaire survey has successfully acquired


the following expert’s experimental data,

(1.2, 0.1), (1.5, 0.3), (1.8, 0.4), (2.5, 0.6), (3.9, 0.8), (4.6, 0.9). (4.23)

Then the first three empirical moments are 2.5100, 7.7226 and 29.4936. We
also assume that the uncertainty distribution to be determined has a zigzag
form with three unknown parameters a, b and c, i.e.,

Φ(x|a, b, c) =
    0,                        if x ≤ a
    (x − a)/2(b − a),         if a ≤ x ≤ b          (4.24)
    (x + c − 2b)/2(c − b),    if b ≤ x ≤ c
    1,                        if x ≥ c.

From the expert’s experimental data, we may believe that the unknown pa-
rameters must be positive numbers. Thus the first three moments of the
zigzag uncertainty distribution Φ(x|a, b, c) are
(a + 2b + c)/4,   (a² + ab + 2b² + bc + c²)/6,   (a³ + a²b + ab² + 2b³ + b²c + bc² + c³)/8.
It follows from the method of moments that the unknown parameters a, b, c should solve the system of equations,

a + 2b + c = 4 × 2.5100
a² + ab + 2b² + bc + c² = 6 × 7.7226          (4.25)
a³ + a²b + ab² + 2b³ + b²c + bc² + c³ = 8 × 29.4936.

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the moment estimates are (a, b, c) = (0.9804, 2.0303, 4.9991) and the corresponding uncertainty distribution is

Φ(x) =
    0,                       if x ≤ 0.9804
    (x − 0.9804)/2.0998,     if 0.9804 ≤ x ≤ 2.0303          (4.26)
    (x + 0.9385)/5.9376,     if 2.0303 ≤ x ≤ 4.9991
    1,                       if x ≥ 4.9991.
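
The system (4.25) can equally be handed to a numerical root finder; the sketch below (illustrative Python with scipy; the starting point is a rough guess) should reproduce estimates close to (0.9804, 2.0303, 4.9991):

```python
# A minimal sketch of solving the moment equations (4.25) numerically.
from scipy.optimize import fsolve

m1, m2, m3 = 2.5100, 7.7226, 29.4936  # empirical moments

def equations(p):
    a, b, c = p
    return (a + 2*b + c - 4*m1,
            a*a + a*b + 2*b*b + b*c + c*c - 6*m2,
            a**3 + a*a*b + a*b*b + 2*b**3 + b*b*c + b*c*c + c**3 - 8*m3)

print(fsolve(equations, x0=(1.0, 2.0, 5.0)))  # approx [0.9804 2.0303 4.9991]
```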

4.6 Multiple Domain Experts


Assume there are m domain experts and each produces an uncertainty distri-
bution. Then we may get m uncertainty distributions Φ1 (x), Φ2 (x), · · ·, Φm (x).
It was suggested by Liu [129] that the m uncertainty distributions should be
aggregated to an uncertainty distribution

Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) (4.27)



where w1 , w2 , · · · , wm are convex combination coefficients (i.e., they are nonnegative numbers and w1 + w2 + · · · + wm = 1) representing weights of the domain experts. For example, we may set

wi = 1/m, ∀i = 1, 2, · · · , m.          (4.28)

Since Φ1 (x), Φ2 (x), · · ·, Φm (x) are uncertainty distributions, they are increasing functions taking values in [0, 1] and are not identical to either 0 or 1. It is easy to verify that their convex combination Φ(x) is also an increasing function taking values in [0, 1] and Φ(x) ≢ 0, Φ(x) ≢ 1. Hence Φ(x) is also an uncertainty distribution by Peng-Iwamura theorem.

4.7 Delphi Method


Delphi method was originally developed in the 1950s by the RAND Corpo-
ration based on the assumption that group experience is more valid than
individual experience. This method asks the domain experts to answer questionnaires in two or more rounds. After each round, a facilitator provides
an anonymous summary of the answers from the previous round as well as
the reasons that the domain experts provided for their opinions. Then the
domain experts are encouraged to revise their earlier answers in light of the
summary. It is believed that during this process the opinions of domain ex-
perts will converge to an appropriate answer. Wang, Gao and Guo [228]
recast Delphi method as a process to determine uncertainty distributions.
The main steps are listed as follows:

Step 1. The m domain experts provide their expert’s experimental data,

(xij , αij ), j = 1, 2, · · · , ni , i = 1, 2, · · · , m. (4.29)

Step 2. Use the i-th expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · ,
(xini , αini ) to generate the uncertainty distributions Φi of the i-
th domain experts, i = 1, 2, · · · , m, respectively.
Step 3. Compute Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) where
w1 , w2 , · · · , wm are convex combination coefficients representing
weights of the domain experts.
Step 4. If |αij − Φ(xij )| are less than a given level ε > 0 for all i and j, then
go to Step 5. Otherwise, the i-th domain experts receive the sum-
mary (for example, the function Φ obtained in the previous round
and the reasons of other experts), and then provide a set of revised
expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · , (xini , αini ) for
i = 1, 2, · · · , m. Go to Step 2.
Step 5. The last function Φ is the uncertainty distribution to be determined.
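
As a skeleton, the loop structure of these steps might look as follows (illustrative Python; collect_data stands for the interactive questionnaire of Step 1 and fit for any estimation method of this chapter, so both are placeholders):

```python
# A minimal skeleton of the Delphi loop; all names are illustrative.

def delphi(m, collect_data, fit, weights, eps=0.05, max_rounds=10):
    summary = None
    for _ in range(max_rounds):
        data = [collect_data(i, summary) for i in range(m)]    # Step 1
        Phis = [fit(d) for d in data]                          # Step 2
        def Phi(x, Phis=Phis):                                 # Step 3
            return sum(w * P(x) for w, P in zip(weights, Phis))
        if all(abs(alpha - Phi(x)) < eps                       # Step 4
               for d in data for (x, alpha) in d):
            return Phi                                         # Step 5
        summary = Phi
    return Phi
```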

4.8 Bibliographic Notes


The study of uncertain statistics was started by Liu [129] in 2010 in which a
questionnaire survey for collecting expert’s experimental data was designed.
It was shown among others by Chen and Ralescu [18] that the questionnaire
survey may successfully acquire the expert’s experimental data.
Parametric uncertain statistics assumes that the uncertainty distribution
to be determined has a known functional form but with unknown parame-
ters. In order to estimate the unknown parameters, Liu [129] suggested the
principle of least squares, and Wang and Peng [230] proposed the method of
moments.
Nonparametric uncertain statistics does not rely on the expert’s experi-
mental data belonging to any particular uncertainty distribution. In order to
determine the uncertainty distributions, Liu [129] introduced the linear in-
terpolation method (i.e., empirical uncertainty distribution), and Chen and
Ralescu [18] proposed a series of spline interpolation methods.
When multiple domain experts are available, Wang, Gao and Guo [228]
recast Delphi method as a process to determine uncertainty distributions.
Chapter 5

Uncertain Risk Analysis

The term risk has been used in different ways in literature. Here the risk
is defined as the “accidental loss” plus “uncertain measure of such loss”.
Uncertain risk analysis is a tool to quantify risk via uncertainty theory. One
main feature of this topic is to model events that almost never occur. This
chapter will introduce a definition of risk index and provide some useful
formulas for calculating risk index. This chapter will also discuss structural
risk analysis and investment risk analysis in uncertain environments.

5.1 Loss Function


A system usually contains some factors ξ1 , ξ2 , · · · , ξn that may be under-
stood as lifetime, strength, demand, production rate, cost, profit, and re-
source. Generally speaking, some specified loss is dependent on those factors.
Although loss is a problem-dependent concept, usually such a loss may be
represented by a loss function.

Definition 5.1 Consider a system with factors ξ1 , ξ2 , · · · , ξn . A function f


is called a loss function if some specified loss occurs if and only if

f (ξ1 , ξ2 , · · · , ξn ) > 0. (5.1)

Example 5.1: Consider a series system in which there are n elements whose
lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works whenever
all elements work. Thus the system lifetime is

ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (5.2)

If the loss is understood as the case that the system fails before the time T ,
then we have a loss function

f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (5.3)


Figure 5.1: A Series System. Reprinted from Liu [129].

Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.

Example 5.2: Consider a parallel system in which there are n elements


whose lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works
whenever at least one element works. Thus the system lifetime is

ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (5.4)

If the loss is understood as the case that the system fails before the time T ,
then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (5.5)

Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.


Figure 5.2: A Parallel System. Reprinted from Liu [129].

Example 5.3: Consider a k-out-of-n system in which there are n elements


whose lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works
whenever at least k of n elements work. Thus the system lifetime is

ξ = k-max [ξ1 , ξ2 , · · · , ξn ]. (5.6)

If the loss is understood as the case that the system fails before the time T ,
then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − k-max [ξ1 , ξ2 , · · · , ξn ]. (5.7)

Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0. Note that a series
system is an n-out-of-n system, and a parallel system is a 1-out-of-n system.

Example 5.4: Consider a standby system in which there are n redundant


elements whose lifetimes are ξ1 , ξ2 , · · · , ξn . For this system, only one element

is active, and one of the redundant elements begins to work only when the
active element fails. Thus the system lifetime is

ξ = ξ1 + ξ2 + · · · + ξn . (5.8)

If the loss is understood as the case that the system fails before the time T ,
then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − (ξ1 + ξ2 + · · · + ξn ). (5.9)

Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.


Figure 5.3: A Standby System

5.2 Risk Index


In practice, the factors ξ1 , ξ2 , · · · , ξn of a system are usually uncertain vari-
ables rather than known constants. Thus the risk index is defined as the
uncertain measure that some specified loss occurs.

Definition 5.2 (Liu [128]) Assume that a system contains uncertain factors
ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the risk index is the uncertain
measure that the system is loss-positive, i.e.,

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (5.10)

Theorem 5.1 (Liu [128], Risk Index Theorem) Assume a system contains
independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distri-
butions Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is
strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn , then the risk index is just the root α of the
equation

f (Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) = 0.          (5.11)

Proof: It follows from Definition 5.2 and Theorem 2.21 immediately.

Remark 5.1: Since f (Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) is a strictly decreasing function with respect to α, its root α may be estimated by the bisection method.

Remark 5.2: Keep in mind that sometimes the equation (5.11) may not have a root. In this case, if

f (Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) < 0          (5.12)

for all α, then we set the root α = 0; and if

f (Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) > 0          (5.13)

for all α, then we set the root α = 1.
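
Both the bisection of Remark 5.1 and the boundary cases of Remark 5.2 are easy to implement; a minimal Python sketch (illustrative, with h(α) standing for the left-hand side of (5.11)) follows, checked on a series system with two linear lifetimes, for which the answer can be verified directly as Φ1(4) ∨ Φ2(4) = 0.5:

```python
# A minimal sketch of the bisection for the root of (5.11); h must be
# strictly decreasing in alpha.

def risk_index(h, tol=1e-9):
    lo, hi = tol, 1 - tol
    if h(lo) < 0:
        return 0.0          # h < 0 for all alpha (Remark 5.2)
    if h(hi) > 0:
        return 1.0          # h > 0 for all alpha (Remark 5.2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: series system with lifetimes L(2, 6), L(3, 9) and T = 4, so that
# h(alpha) = T minus the minimum of the inverse distributions.
inv1 = lambda a: 2 + 4 * a
inv2 = lambda a: 3 + 6 * a
print(round(risk_index(lambda a: 4 - min(inv1(a), inv2(a))), 4))  # 0.5
```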

5.3 Series System


Consider a series system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case that the
system fails before the time T , then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn (5.14)
and the risk index is
Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (5.15)
Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk
index theorem says that the risk index is just the root α of the equation
Φ_1^{-1}(α) ∧ Φ_2^{-1}(α) ∧ · · · ∧ Φ_n^{-1}(α) = T.          (5.16)
It is easy to verify that
Risk = Φ1 (T ) ∨ Φ2 (T ) ∨ · · · ∨ Φn (T ). (5.17)

5.4 Parallel System


Consider a parallel system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case that the
system fails before the time T , then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn (5.18)
and the risk index is
Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (5.19)
Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk
index theorem says that the risk index is just the root α of the equation
Φ_1^{-1}(α) ∨ Φ_2^{-1}(α) ∨ · · · ∨ Φ_n^{-1}(α) = T.          (5.20)
It is easy to verify that
Risk = Φ1 (T ) ∧ Φ2 (T ) ∧ · · · ∧ Φn (T ). (5.21)

5.5 k-out-of-n System


Consider a k-out-of-n system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case that the
system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − k-max [ξ1 , ξ2 , · · · , ξn ] (5.22)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (5.23)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

k-max [Φ_1^{-1}(α), Φ_2^{-1}(α), · · · , Φ_n^{-1}(α)] = T.          (5.24)

It is easy to verify that

Risk = k-min [Φ1 (T ), Φ2 (T ), · · · , Φn (T )]. (5.25)

Note that a series system is essentially an n-out-of-n system. In this case,


the risk index formula (5.25) becomes (5.17). In addition, a parallel system
is essentially a 1-out-of-n system. In this case, the risk index formula (5.25)
becomes (5.21).

5.6 Standby System


Consider a standby system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case that the
system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − (ξ1 + ξ2 + · · · + ξn ) (5.26)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (5.27)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

Φ_1^{-1}(α) + Φ_2^{-1}(α) + · · · + Φ_n^{-1}(α) = T.          (5.28)
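
Unlike the previous systems, (5.28) has no closed form in general, but the bisection of Remark 5.1 applies directly; a minimal Python sketch with three illustrative linear lifetimes:

```python
# A minimal sketch for the standby system: solve (5.28) by bisection.

def inv_linear(a, b, alpha):
    return a + (b - a) * alpha

def standby_risk(invs, T, tol=1e-9):
    h = lambda a: T - sum(inv(a) for inv in invs)   # strictly decreasing
    lo, hi = tol, 1 - tol
    if h(lo) < 0:
        return 0.0
    if h(hi) > 0:
        return 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Illustrative lifetimes L(1,3), L(2,4), L(1,5): the sum of inverses is
# 4 + 8*alpha, so the root of (5.28) with T = 6 is 0.25.
invs = [lambda a: inv_linear(1, 3, a),
        lambda a: inv_linear(2, 4, a),
        lambda a: inv_linear(1, 5, a)]
print(round(standby_risk(invs, T=6), 4))  # 0.25
```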

5.7 Structural Risk Analysis


Consider a structural system in which the strengths and loads are assumed to be uncertain variables. We will suppose that a structural system fails whenever, for at least one rod, the load variable exceeds its strength variable. If the structural risk index is defined as the uncertain measure that the structural system fails, then

Risk = M{ ⋃_{i=1}^{n} (ξi < ηi ) }          (5.29)

where ξ1 , ξ2 , · · · , ξn are strength variables, and η1 , η2 , · · · , ηn are load vari-


ables of the n rods.

Example 5.5: (The Simplest Case) Assume there is only a single strength
variable ξ and a single load variable η with continuous uncertainty distribu-
tions Φ and Ψ, respectively. In this case, the structural risk index is

Risk = M{ξ < η}.

It follows from the risk index theorem that the risk index is just the root α
of the equation
Φ−1 (α) = Ψ−1 (1 − α). (5.30)
Especially, if the strength variable ξ has a normal uncertainty distribution
N (es , σs ) and the load variable η has a normal uncertainty distribution
N (el , σl ), then the structural risk index is
Risk = (1 + exp( π(es − el ) / (√3 (σs + σl )) ))^{-1} .          (5.31)
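
The closed form (5.31) can be verified numerically against the root of (5.30); a short Python check with illustrative parameters:

```python
# A quick numerical check of (5.31) against the root of (5.30).
from math import exp, log, sqrt, pi

def inv_normal(e, s, a):
    return e + (s * sqrt(3) / pi) * log(a / (1 - a))

es, ss, el, sl = 10.0, 1.0, 8.0, 0.5        # illustrative parameters
closed = 1 / (1 + exp(pi * (es - el) / (sqrt(3) * (ss + sl))))

lo, hi = 1e-12, 1 - 1e-12                   # bisection for (5.30)
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if inv_normal(es, ss, mid) < inv_normal(el, sl, 1 - mid):
        lo = mid
    else:
        hi = mid
print(round(closed, 4), round((lo + hi) / 2, 4))  # both about 0.0818
```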

Example 5.6: (Constant Loads) Assume the uncertain strength variables


ξ1 , ξ2 , · · · , ξn are independent and have continuous uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. In many cases, the load variables η1 , η2 , · · · , ηn
degenerate to crisp values c1 , c2 , · · · , cn (for example, weight limits allowed
by the legislation), respectively. In this case, it follows from (5.29) and inde-
pendence that the structural risk index is
Risk = M{ ⋃_{i=1}^{n} (ξi < ci ) } = ⋁_{i=1}^{n} M{ξi < ci }.

That is,
Risk = Φ1 (c1 ) ∨ Φ2 (c2 ) ∨ · · · ∨ Φn (cn ). (5.32)

Example 5.7: (Independent Load Variables) Assume the uncertain strength


variables ξ1 , ξ2 , · · · , ξn are independent and have continuous uncertainty dis-

tributions Φ1 , Φ2 , · · · , Φn , respectively. Also assume the uncertain load vari-


ables η1 , η2 , · · · , ηn are independent and have continuous uncertainty distri-
butions Ψ1 , Ψ2 , · · · , Ψn , respectively. In this case, it follows from (5.29) and
independence that the structural risk index is
Risk = M{ ⋃_{i=1}^{n} (ξi < ηi ) } = ⋁_{i=1}^{n} M{ξi < ηi }.

That is,
Risk = α1 ∨ α2 ∨ · · · ∨ αn (5.33)
where αi are the roots of the equations

Φ_i^{-1}(α) = Ψ_i^{-1}(1 − α)          (5.34)

for i = 1, 2, · · · , n, respectively.
However, generally speaking, the load variables η1 , η2 , · · · , ηn are neither
constants nor independent. For example, the load variables η1 , η2 , · · · , ηn
may be functions of independent uncertain variables τ1 , τ2 , · · · , τm . In this
case, the formula (5.33) is no longer valid. Thus we have to deal with those
structural systems case by case.

Example 5.8: (Series System) Consider a structural system shown in Fig-


ure 5.4 that consists of n rods in series and an object. Assume that the
strength variables of the n rods are uncertain variables ξ1 , ξ2 , · · · , ξn with
uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. We also assume that
the gravity of the object is an uncertain variable η with uncertainty distri-
bution Ψ. For each i (1 ≤ i ≤ n), the load variable of the rod i is just the
gravity η of the object. Thus the structural system fails whenever the load
variable η exceeds at least one of the strength variables ξ1 , ξ2 , · · · , ξn . Hence
the structural risk index is
Risk = M{ ⋃_{i=1}^{n} (ξi < η) } = M{ξ1 ∧ ξ2 ∧ · · · ∧ ξn < η}.

Define the loss function as

f (ξ1 , ξ2 , · · · , ξn , η) = η − ξ1 ∧ ξ2 ∧ · · · ∧ ξn .

Then
Risk = M{f (ξ1 , ξ2 , · · · , ξn , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , · · · , ξn , it follows from the risk index theo-
rem that the risk index is just the root α of the equation

Ψ^{-1}(1 − α) − Φ_1^{-1}(α) ∧ Φ_2^{-1}(α) ∧ · · · ∧ Φ_n^{-1}(α) = 0.          (5.35)

Or equivalently, let αi be the roots of the equations

Ψ^{-1}(1 − α) = Φ_i^{-1}(α)          (5.36)

for i = 1, 2, · · · , n, respectively. Then the structural risk index is

Risk = α1 ∨ α2 ∨ · · · ∨ αn . (5.37)

Figure 5.4: A Structural System with n Rods and an Object

Example 5.9: Consider a structural system shown in Figure 5.5 that consists
of 2 rods and an object. Assume that the strength variables of the left and
right rods are uncertain variables ξ1 and ξ2 with uncertainty distributions
Φ1 and Φ2 , respectively. We also assume that the gravity of the object is an
uncertain variable η with uncertainty distribution Ψ. In this case, the load
variables of left and right rods are respectively equal to

η sin θ2 / sin(θ1 + θ2 ),    η sin θ1 / sin(θ1 + θ2 ).

Thus the structural system fails whenever for any one rod, the load variable
exceeds its strength variable. Hence the structural risk index is

Risk = M{ (ξ1 < η sin θ2 / sin(θ1 + θ2 )) ∪ (ξ2 < η sin θ1 / sin(θ1 + θ2 )) }
     = M{ (ξ1 / sin θ2 < η / sin(θ1 + θ2 )) ∪ (ξ2 / sin θ1 < η / sin(θ1 + θ2 )) }
     = M{ (ξ1 / sin θ2 ) ∧ (ξ2 / sin θ1 ) < η / sin(θ1 + θ2 ) }.

Define the loss function as


f (ξ1 , ξ2 , η) = η / sin(θ1 + θ2 ) − (ξ1 / sin θ2 ) ∧ (ξ2 / sin θ1 ).
Then
Risk = M{f (ξ1 , ξ2 , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , it follows from the risk index theorem that
the risk index is just the root α of the equation

Ψ^{-1}(1 − α) / sin(θ1 + θ2 ) − (Φ_1^{-1}(α) / sin θ2 ) ∧ (Φ_2^{-1}(α) / sin θ1 ) = 0.          (5.38)
Or equivalently, let α1 be the root of the equation

Ψ^{-1}(1 − α) / sin(θ1 + θ2 ) = Φ_1^{-1}(α) / sin θ2          (5.39)
and let α2 be the root of the equation

Ψ^{-1}(1 − α) / sin(θ1 + θ2 ) = Φ_2^{-1}(α) / sin θ1 .          (5.40)
Then the structural risk index is

Risk = α1 ∨ α2 . (5.41)

Figure 5.5: A Structural System with 2 Rods and an Object

5.8 Investment Risk Analysis


Assume that an investor has n projects whose returns are uncertain variables
ξ1 , ξ2 , · · · , ξn . If the loss is understood as the case that total return ξ1 + ξ2 +

· · · + ξn is below a predetermined value c (e.g., the interest rate), then the


investment risk index is

Risk = M{ξ1 + ξ2 + · · · + ξn < c}. (5.42)

If ξ1 , ξ2 , · · · , ξn are independent uncertain variables with uncertainty distri-


butions Φ1 , Φ2 , · · · , Φn , respectively, then the investment risk index is just
the root α of the equation

Φ1^{-1}(α) + Φ2^{-1}(α) + · · · + Φn^{-1}(α) = c.    (5.43)
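
Since the left side of (5.43) is strictly increasing in α, the root can again be found by bisection. A minimal sketch; the linear return distributions and the threshold c = 3 are hypothetical.

def inv_linear(a, b):
    # Inverse uncertainty distribution of a linear uncertain variable L(a, b)
    return lambda alpha: a + alpha * (b - a)

phi_inv = [inv_linear(-1, 5), inv_linear(0, 4), inv_linear(-2, 6)]  # returns ξi (hypothetical)
c = 3.0

lo, hi = 1e-9, 1 - 1e-9
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    # Left side of (5.43) is strictly increasing in α
    lo, hi = (mid, hi) if sum(f(mid) for f in phi_inv) < c else (lo, mid)
print((lo + hi) / 2)  # investment risk index, about 1/3 for this data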

5.9 Value-at-Risk
As a substitute of risk index (5.10), a concept of value-at-risk is given by the
following definition.

Definition 5.3 (Peng [183]) Assume that a system contains uncertain fac-
tors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the value-at-risk is defined
as
VaR(α) = sup{x | M{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} ≥ α}. (5.44)

Note that VaR(α) represents the maximum possible loss when α percent of
the right tail distribution is ignored. In other words, the loss f (ξ1 , ξ2 , · · · , ξn )
will exceed VaR(α) with uncertain measure α. See Figure 5.6. If Φ(x) is the
uncertainty distribution of f (ξ1 , ξ2 , · · · , ξn ), then

VaR(α) = sup {x | Φ(x) ≤ 1 − α} . (5.45)

If its inverse uncertainty distribution Φ−1 (α) exists, then

VaR(α) = Φ−1 (1 − α). (5.46)
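
For instance, if the loss has a linear uncertainty distribution L(a, b) (a hypothetical choice), then Φ^{-1}(β) = a + β(b − a) and (5.46) becomes a one-line function:

def var_linear(a, b):
    # VaR(α) = Φ^{-1}(1 − α) for a loss with linear uncertainty distribution L(a, b)
    return lambda alpha: a + (1 - alpha) * (b - a)

var = var_linear(0, 10)      # hypothetical loss distribution L(0, 10)
print(var(0.05), var(0.5))   # 9.5 and 5.0: VaR decreases as α grows (Theorem 5.2)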

Figure 5.6: Value-at-Risk



Theorem 5.2 (Peng [183]) The value-at-risk VaR(α) is a monotone decreasing
function with respect to α.

Proof: Let α1 and α2 be two numbers with 0 < α1 < α2 ≤ 1. Then for any
number r < VaR(α2 ), we have

M {f (ξ1 , ξ2 , · · · , ξn ) ≥ r} ≥ α2 > α1 .

Thus, by the definition of value-at-risk, we obtain VaR(α1 ) ≥ r. Letting
r → VaR(α2 ), we get VaR(α1 ) ≥ VaR(α2 ). That is, VaR(α) is a monotone
decreasing function with respect to α.

Theorem 5.3 (Peng [183]) Assume a system contains independent uncer-


tain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distributions Φ1 , Φ2 , · · · ,
Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing
with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to ξm+1 , ξm+2 ,
· · · , ξn , then

VaR(α) = f(Φ1^{-1}(1 − α), · · · , Φm^{-1}(1 − α), Φm+1^{-1}(α), · · · , Φn^{-1}(α)).    (5.47)

Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution

Φ^{-1}(α) = f(Φ1^{-1}(α), · · · , Φm^{-1}(α), Φm+1^{-1}(1 − α), · · · , Φn^{-1}(1 − α)).

The theorem follows from (5.46) immediately.

5.10 Expected Loss


Liu and Ralescu [151] proposed a concept of expected loss that is the expected
value of the loss f (ξ1 , ξ2 , · · · , ξn ) given f (ξ1 , ξ2 , · · · , ξn ) > 0. A formal defi-
nition is given below.

Definition 5.4 (Liu and Ralescu [151]) Assume that a system contains un-
certain factors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the expected loss
is defined as

L = ∫_0^{+∞} M{f(ξ1, ξ2, · · · , ξn) ≥ x} dx.    (5.48)

If Φ(x) is the uncertainty distribution of the loss f (ξ1 , ξ2 , · · · , ξn ), then


we immediately have

L = ∫_0^{+∞} (1 − Φ(x)) dx.    (5.49)

If its inverse uncertainty distribution Φ^{-1}(α) exists, then the expected loss
is

L = ∫_0^1 (Φ^{-1}(α))^+ dα.    (5.50)
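
Formula (5.50) lends itself to numerical integration. A minimal sketch with a hypothetical linear loss distribution L(−2, 4), whose inverse is Φ^{-1}(α) = −2 + 6α:

def expected_loss(phi_inv, n=100000):
    # Midpoint Riemann sum for L = ∫_0^1 (Φ^{-1}(α))^+ dα, as in (5.50)
    return sum(max(phi_inv((k + 0.5) / n), 0.0) for k in range(n)) / n

print(expected_loss(lambda alpha: -2 + 6 * alpha))  # ≈ 4/3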

Theorem 5.4 (Liu and Ralescu [154]) Assume a system contains indepen-
dent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is strictly
increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect
to ξm+1 , ξm+2 , · · · , ξn , then the expected loss is
L = ∫_0^1 f^+(Φ1^{-1}(α), · · · , Φm^{-1}(α), Φm+1^{-1}(1 − α), · · · , Φn^{-1}(1 − α)) dα.    (5.51)

Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution

Φ^{-1}(α) = f(Φ1^{-1}(α), · · · , Φm^{-1}(α), Φm+1^{-1}(1 − α), · · · , Φn^{-1}(1 − α)).

The theorem follows from (5.50) immediately.

5.11 Hazard Distribution


Suppose that ξ is the lifetime of some element. Here we assume it is an
uncertain variable with a prior uncertainty distribution Φ. At some time t,
it is observed that the element is working. What is the residual lifetime of
the element? The following definition answers this question.

Definition 5.5 (Liu [128]) Let ξ be a nonnegative uncertain variable repre-


senting lifetime of some element. If ξ has a prior uncertainty distribution Φ,
then the hazard distribution at time t is
Φ(x|t) =
    0,                            if Φ(x) ≤ Φ(t)
    Φ(x)/(1 − Φ(t)) ∧ 0.5,        if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2
    (Φ(x) − Φ(t))/(1 − Φ(t)),     if (1 + Φ(t))/2 ≤ Φ(x)          (5.52)
that is just the conditional uncertainty distribution of ξ given ξ > t.

The hazard distribution is essentially the posterior uncertainty distribu-


tion just after time t given that it is working at time t.

Exercise 5.1: Let ξ be a linear uncertain variable L(a, b), and t a real
number with a < t < b. Show that the hazard distribution at time t is


Φ(x|t) =
    0,                          if x ≤ t
    ((x − a)/(b − t)) ∧ 0.5,    if t < x ≤ (b + t)/2
    ((x − t)/(b − t)) ∧ 1,      if (b + t)/2 ≤ x.
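
The three branches of (5.52) translate directly into code. In the sketch below, the prior distribution is a hypothetical linear lifetime L(0, 10) observed working at t = 4, so the output can be checked against Exercise 5.1:

def hazard(phi, t):
    # Hazard distribution Φ(x|t) of (5.52), built from any prior distribution Φ
    pt = phi(t)
    def phi_t(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return phi_t

phi = lambda x: min(max(x / 10, 0.0), 1.0)  # linear lifetime L(0, 10), hypothetical
h = hazard(phi, 4)
print(h(4), h(6), h(9))  # 0.0, 0.5, 0.8333..., matching Exercise 5.1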

Theorem 5.5 (Liu [128], Conditional Risk Index Theorem) Assume that a
system contains uncertain factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f .
Suppose ξ1 , ξ2 , · · · , ξn are independent uncertain variables with uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively, and f (ξ1 , ξ2 , · · · , ξn ) is strictly in-
creasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to
ξm+1 , ξm+2 , · · · , ξn . If it is observed that all elements are working at some
time t, then the risk index is just the root α of the equation

f(Φ1^{-1}(1 − α|t), · · · , Φm^{-1}(1 − α|t), Φm+1^{-1}(α|t), · · · , Φn^{-1}(α|t)) = 0    (5.53)

where Φi (x|t) are hazard distributions determined by

Φi(x|t) =
    0,                               if Φi(x) ≤ Φi(t)
    Φi(x)/(1 − Φi(t)) ∧ 0.5,         if Φi(t) < Φi(x) ≤ (1 + Φi(t))/2
    (Φi(x) − Φi(t))/(1 − Φi(t)),     if (1 + Φi(t))/2 ≤ Φi(x)          (5.54)

for i = 1, 2, · · · , n.

Proof: It follows from Definition 5.5 that each hazard distribution of ele-
ment is determined by (5.54). Thus the conditional risk index is obtained by
Theorem 5.1 immediately.

5.12 Bibliographic Notes


Uncertain risk analysis was proposed by Liu [128] in 2010 in which a risk
index was defined and a risk index theorem was proved. This tool has also been
applied successfully to, among other things, structural risk analysis and
investment risk analysis.
As a substitute of risk index, Peng [183] suggested a concept of value-
at-risk that is the maximum possible loss when the right tail distribution is
ignored. In addition, Liu and Ralescu [151, 154] investigated the concept of
expected loss that takes into account not only the chance of the loss but also
its severity.
Chapter 6

Uncertain Reliability
Analysis

Uncertain reliability analysis is a tool to deal with system reliability via


uncertainty theory. This chapter will introduce a definition of reliability
index and provide some useful formulas for calculating reliability index.

6.1 Structure Function


Many real systems may be simplified to a Boolean system in which each
element (including the system itself) has two states: working and failure.
Let Boolean variables xi denote the states of elements i for i = 1, 2, · · · , n,
and

xi =
    1, if element i works
    0, if element i fails.    (6.1)
We also suppose the Boolean variable X indicates the state of the system,
i.e.,

X =
    1, if the system works
    0, if the system fails.    (6.2)
Usually, the state of the system is completely determined by the states of its
elements via the so-called structure function.

Definition 6.1 Assume that X is a Boolean system containing elements


x1 , x2 , · · · , xn . A Boolean function f is called a structure function of X
if
X = 1 if and only if f (x1 , x2 , · · · , xn ) = 1. (6.3)

It is obvious that X = 0 if and only if f (x1 , x2 , · · · , xn ) = 0 whenever f is


indeed the structure function of the system.


Example 6.1: For a series system, the structure function is a mapping from
{0, 1}^n to {0, 1}, i.e.,

f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (6.4)

Figure 6.1: A Series System. Reprinted from Liu [129].

Example 6.2: For a parallel system, the structure function is a mapping


from {0, 1}^n to {0, 1}, i.e.,

f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (6.5)

Figure 6.2: A Parallel System. Reprinted from Liu [129].

Example 6.3: For a k-out-of-n system that works whenever at least k of the
n elements work, the structure function is a mapping from {0, 1}^n to {0, 1},
i.e.,

f(x1, x2, · · · , xn) =
    1, if x1 + x2 + · · · + xn ≥ k
    0, if x1 + x2 + · · · + xn < k.    (6.6)
In particular, when k = 1, it is a parallel system; when k = n, it is a series
system.

6.2 Reliability Index


The element in a Boolean system is usually represented by a Boolean
uncertain variable, i.e.,

ξ =
    1 with uncertain measure a
    0 with uncertain measure 1 − a.    (6.7)

In this case, we will say ξ is an uncertain element with reliability a. Reliability


index is defined as the uncertain measure that the system is working.

Definition 6.2 (Liu [128]) Assume a Boolean system has uncertain ele-
ments ξ1 , ξ2 , · · · , ξn and a structure function f . Then the reliability index
is the uncertain measure that the system is working, i.e.,
Reliability = M{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (6.8)
Theorem 6.1 (Liu [128], Reliability Index Theorem) Assume that a system
contains uncertain elements ξ1 , ξ2 , · · ·, ξn , and has a structure function f . If
ξ1 , ξ2 , · · · , ξn are independent uncertain elements with reliabilities a1 , a2 , · · · ,
an , respectively, then the reliability index is

Reliability =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5    (6.9)

where xi take values either 0 or 1, and νi are defined by

νi(xi) =
    ai, if xi = 1
    1 − ai, if xi = 0    (6.10)

for i = 1, 2, · · · , n, respectively.
Proof: Since ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and
f is a Boolean function, the equation (6.9) follows from Definition 6.2 and
Theorem 2.23 immediately.

6.3 Series System


Consider a series system having independent uncertain elements ξ1 , ξ2 , · · · , ξn
with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure function
is
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (6.11)
It follows from the reliability index theorem that the reliability index is
Reliability = M{ξ1 ∧ ξ2 ∧ · · · ∧ ξn = 1} = a1 ∧ a2 ∧ · · · ∧ an . (6.12)

6.4 Parallel System


Consider a parallel system having independent uncertain elements ξ1 , ξ2 , · · · ,
ξn with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure func-
tion is
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (6.13)

It follows from the reliability index theorem that the reliability index is

Reliability = M{ξ1 ∨ ξ2 ∨ · · · ∨ ξn = 1} = a1 ∨ a2 ∨ · · · ∨ an . (6.14)

6.5 k-out-of-n System


Consider a k-out-of-n system having independent uncertain elements ξ1 , ξ2 ,
· · · , ξn with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure
function has a Boolean form,
f(x1, x2, · · · , xn) =
    1, if x1 + x2 + · · · + xn ≥ k
    0, if x1 + x2 + · · · + xn < k.    (6.15)

It follows from the reliability index theorem that the reliability index is the
kth largest value of a1 , a2 , · · · , an , i.e.,

Reliability = k-max[a1 , a2 , · · · , an ]. (6.16)
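
In code, the k-th largest reliability in (6.16) is read off a sorted list; the reliabilities and k below are hypothetical:

a = [0.91, 0.92, 0.93, 0.94, 0.95]     # hypothetical element reliabilities
k = 3
print(sorted(a, reverse=True)[k - 1])  # k-max[a1, ..., an] = 0.93, as in (6.16)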

Note that a series system is essentially an n-out-of-n system. In this case,


the reliability index formula (6.16) becomes (6.12). In addition, a parallel
system is essentially a 1-out-of-n system. In this case, the reliability index
formula (6.16) becomes (6.14).

6.6 General System


It is almost impossible to find an analytic formula of the reliability index for
general systems. In this case, we have to employ numerical methods.
Figure 6.3: A Bridge System. Reprinted from Liu [129].

Consider a bridge system shown in Figure 6.3 that consists of 5 indepen-


dent uncertain elements whose states are denoted by ξ1 , ξ2 , ξ3 , ξ4 , ξ5 . Assume
that each path works if and only if all elements on it are working, and that
the system works if and only if there is a path of working elements. Then the
structure function of the bridge system is

f (x1 , x2 , x3 , x4 , x5 ) = (x1 ∧ x4 ) ∨ (x2 ∧ x5 ) ∨ (x1 ∧ x3 ∧ x5 ) ∨ (x2 ∧ x3 ∧ x4 ).



The Boolean System Calculator, a function in the Matlab Uncertainty Tool-


box (http://orsc.edu.cn/liu/resources.htm), may yield the reliability index.
Assume the 5 independent uncertain elements have reliabilities

0.91, 0.92, 0.93, 0.94, 0.95

in uncertain measure. A run of Boolean System Calculator shows that the


reliability index is

Reliability = M{f (ξ1 , ξ2 , · · · , ξ5 ) = 1} = 0.92

in uncertain measure.
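
Readers without the Matlab toolbox can reproduce this result directly; the sketch below is an independent brute-force evaluation of formula (6.9) over all 2^n state vectors (not the toolbox code), and it recovers the value 0.92 for the bridge system.

from itertools import product

def reliability_index(f, a):
    # Evaluate formula (6.9) by enumerating all 0-1 state vectors
    def nu(x):
        return min(ai if xi == 1 else 1 - ai for xi, ai in zip(x, a))
    states = list(product((0, 1), repeat=len(a)))
    sup1 = max(nu(x) for x in states if f(*x) == 1)
    sup0 = max(nu(x) for x in states if f(*x) == 0)
    return sup1 if sup1 < 0.5 else 1 - sup0

def bridge(x1, x2, x3, x4, x5):
    # Structure function of the bridge system in Figure 6.3
    return int((x1 and x4) or (x2 and x5) or (x1 and x3 and x5) or (x2 and x3 and x4))

print(reliability_index(bridge, [0.91, 0.92, 0.93, 0.94, 0.95]))  # 0.92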

6.7 Bibliographic Notes


Uncertain reliability analysis was proposed by Liu [128] in 2010 in which a
reliability index was defined and a reliability index theorem was proved.
Chapter 7

Uncertain Propositional
Logic

Propositional logic, which originated in the work of Aristotle (384-322 BC), is a


branch of logic that studies the properties of complex propositions composed
of simpler propositions and logical connectives. Note that the propositions
considered in propositional logic are not arbitrary statements but are the
ones that are either true or false and not both.
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. This chapter will deal with uncertain propositional logic, including
uncertain proposition, truth value definition, and truth value theorem. This
chapter will also introduce uncertain predicate logic.

7.1 Uncertain Proposition


Definition 7.1 (Li and Liu [100]) An uncertain proposition is a statement
whose truth value is quantified by an uncertain measure.
That is, if we use X to express an uncertain proposition and use α to express
its truth value in uncertain measure, then the uncertain proposition X is
essentially a Boolean uncertain variable
X =
    1 with uncertain measure α
    0 with uncertain measure 1 − α    (7.1)
where X = 1 means X is true and X = 0 means X is false.

Example 7.1: “Tom is tall with truth value 0.7” is an uncertain proposition,
where “Tom is tall” is a statement, and its truth value is 0.7 in uncertain
measure.


Example 7.2: “John is young with truth value 0.8” is an uncertain propo-
sition, where “John is young” is a statement, and its truth value is 0.8 in
uncertain measure.

Example 7.3: “Beijing is a big city with truth value 0.9” is an uncertain
proposition, where “Beijing is a big city” is a statement, and its truth value
is 0.9 in uncertain measure.

Connective Symbols
In addition to the proposition symbols X and Y , we also need the negation
symbol ¬, conjunction symbol ∧, disjunction symbol ∨, conditional symbol
→, and biconditional symbol ↔. Note that

¬X means “not X”; (7.2)

X ∧ Y means “X and Y ”; (7.3)


X ∨ Y means “X or Y ”; (7.4)
X → Y = (¬X) ∨ Y means “if X then Y ”, (7.5)
X ↔ Y = (X → Y ) ∧ (Y → X) means “X if and only if Y ”. (7.6)

Boolean Function of Uncertain Propositions


Assume X1 , X2 , · · · , Xn are uncertain propositions. Then their Boolean func-
tion
Z = f (X1 , X2 , · · · , Xn ) (7.7)
is a Boolean uncertain variable. Thus Z is also an uncertain proposition
provided that it makes sense. Usually, such a Boolean function is a finite
sequence of uncertain propositions and connective symbols. For example,

Z = ¬X1 , Z = X1 ∧ (¬X2 ), Z = X1 → X2 (7.8)

are all uncertain propositions.

Independence of Uncertain Propositions


Uncertain propositions are called independent if they are independent uncer-
tain variables. Assume X1 , X2 , · · · , Xn are independent uncertain proposi-
tions. Then
f1(X1), f2(X2), · · · , fn(Xn) (7.9)
are also independent uncertain propositions for any Boolean functions f1 , f2 ,
· · · , fn . For example, if X1 , X2 , · · · , X5 are independent uncertain proposi-
tions, then ¬X1 , X2 ∨ X3 , X4 → X5 are also independent.

7.2 Truth Value


Truth value is a key concept in uncertain propositional logic, and is defined
as the uncertain measure that the uncertain proposition is true.

Definition 7.2 (Li and Liu [100]) Let X be an uncertain proposition. Then
the truth value of X is defined as the uncertain measure that X is true, i.e.,

T (X) = M{X = 1}. (7.10)

Example 7.4: Let X be an uncertain proposition with truth value α. Then

T (¬X) = M{X = 0} = 1 − α. (7.11)

Example 7.5: Let X and Y be two independent uncertain propositions with


truth values α and β, respectively. Then

T (X ∧ Y ) = M{X ∧ Y = 1} = M{(X = 1) ∩ (Y = 1)} = α ∧ β, (7.12)

T (X ∨ Y ) = M{X ∨ Y = 1} = M{(X = 1) ∪ (Y = 1)} = α ∨ β, (7.13)


T (X → Y ) = T (¬X ∨ Y ) = (1 − α) ∨ β. (7.14)

Theorem 7.1 (Law of Excluded Middle) Let X be an uncertain proposition.


Then X ∨ ¬X is a tautology, i.e.,

T (X ∨ ¬X) = 1. (7.15)

Proof: It follows from the definition of truth value and the property of
uncertain measure that

T (X ∨ ¬X) = M{X ∨ ¬X = 1} = M{(X = 1) ∪ (X = 0)} = M{Γ} = 1.

The theorem is proved.

Theorem 7.2 (Law of Contradiction) Let X be an uncertain proposition.


Then X ∧ ¬X is a contradiction, i.e.,

T (X ∧ ¬X) = 0. (7.16)

Proof: It follows from the definition of truth value and the property of
uncertain measure that

T (X ∧ ¬X) = M{X ∧ ¬X = 1} = M{(X = 1) ∩ (X = 0)} = M{∅} = 0.

The theorem is proved.



Theorem 7.3 (Law of Truth Conservation) Let X be an uncertain proposi-


tion. Then we have
T (X) + T (¬X) = 1. (7.17)
Proof: It follows from the duality axiom of uncertain measure that
T (¬X) = M{¬X = 1} = M{X = 0} = 1 − M{X = 1} = 1 − T (X).
The theorem is proved.
Theorem 7.4 Let X be an uncertain proposition. Then X → X is a tau-
tology, i.e.,
T (X → X) = 1. (7.18)
Proof: It follows from the definition of conditional symbol and the law of
excluded middle that
T (X → X) = T (¬X ∨ X) = 1.
The theorem is proved.
Theorem 7.5 Let X be an uncertain proposition. Then we have
T (X → ¬X) = 1 − T (X). (7.19)
Proof: It follows from the definition of conditional symbol and the law of
truth conservation that
T (X → ¬X) = T (¬X ∨ ¬X) = T (¬X) = 1 − T (X).
The theorem is proved.
Theorem 7.6 (De Morgan’s Law) For any uncertain propositions X and Y ,
we have
T (¬(X ∧ Y )) = T ((¬X) ∨ (¬Y )), (7.20)
T (¬(X ∨ Y )) = T ((¬X) ∧ (¬Y )). (7.21)
Proof: It follows from the basic properties of uncertain measure that
T (¬(X ∧ Y )) = M{X ∧ Y = 0} = M{(X = 0) ∪ (Y = 0)}
= M{(¬X) ∨ (¬Y ) = 1} = T ((¬X) ∨ (¬Y ))
which proves the first equality. A similar way may verify the second equality.
Theorem 7.7 (Law of Contraposition) For any uncertain propositions X
and Y , we have
T (X → Y ) = T (¬Y → ¬X). (7.22)
Proof: It follows from the definition of conditional symbol and basic prop-
erties of uncertain measure that
T (X → Y ) = M{(¬X) ∨ Y = 1} = M{(X = 0) ∪ (Y = 1)}
= M{Y ∨ (¬X) = 1} = T (¬Y → ¬X).
The theorem is proved.

7.3 Chen-Ralescu Theorem


An important contribution to uncertain propositional logic is the Chen-
Ralescu theorem that provides a numerical method for calculating the truth
values of uncertain propositions.

Theorem 7.8 (Chen-Ralescu Theorem [14]) Assume that X1 , X2 , · · · , Xn


are independent uncertain propositions with truth values α1 , α2 , · · ·, αn , re-
spectively. Then for a Boolean function f , the uncertain proposition

Z = f(X1, X2, · · · , Xn)    (7.23)

has a truth value



T(Z) =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5    (7.24)

where xi take values either 0 or 1, and νi are defined by

νi(xi) =
    αi, if xi = 1
    1 − αi, if xi = 0    (7.25)

for i = 1, 2, · · · , n, respectively.

Proof: Since Z = 1 if and only if f (X1 , X2 , · · · , Xn ) = 1, we immediately


have
T (Z) = M{f (X1 , X2 , · · · , Xn ) = 1}.
Thus the equation (7.24) follows from Theorem 2.23 immediately.

Exercise 7.1: Let X1 , X2 , · · · , Xn be independent uncertain propositions


with truth values α1 , α2 , · · · , αn , respectively. Then

Z = X1 ∧ X2 ∧ · · · ∧ Xn (7.26)

is an uncertain proposition. Show that the truth value of Z is

T (Z) = α1 ∧ α2 ∧ · · · ∧ αn . (7.27)

Exercise 7.2: Let X1 , X2 , · · · , Xn be independent uncertain propositions


with truth values α1 , α2 , · · · , αn , respectively. Then

Z = X1 ∨ X2 ∨ · · · ∨ Xn (7.28)

is an uncertain proposition. Show that the truth value of Z is

T (Z) = α1 ∨ α2 ∨ · · · ∨ αn . (7.29)

Example 7.6: Let X1 and X2 be independent uncertain propositions with


truth values α1 and α2 , respectively. Then

Z = X1 ↔ X2 (7.30)

is an uncertain proposition. It is clear that Z = f (X1 , X2 ) if we define

f (1, 1) = 1, f (1, 0) = 0, f (0, 1) = 0, f (0, 0) = 1.

At first, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = max{α1 ∧ α2, (1 − α1) ∧ (1 − α2)},

sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = max{(1 − α1) ∧ α2, α1 ∧ (1 − α2)}.

When α1 ≥ 0.5 and α2 ≥ 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∧ α2 ≥ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = 1 − sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = 1 − (1 − α1) ∨ (1 − α2) = α1 ∧ α2.

When α1 ≥ 0.5 and α2 < 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1) ∨ α2 ≤ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1) ∨ α2.

When α1 < 0.5 and α2 ≥ 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∨ (1 − α2) ≤ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∨ (1 − α2).

When α1 < 0.5 and α2 < 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1) ∧ (1 − α2) > 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = 1 − sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = 1 − α1 ∨ α2 = (1 − α1) ∧ (1 − α2).

Thus we have

T(Z) =
    α1 ∧ α2,               if α1 ≥ 0.5 and α2 ≥ 0.5
    (1 − α1) ∨ α2,         if α1 ≥ 0.5 and α2 < 0.5
    α1 ∨ (1 − α2),         if α1 < 0.5 and α2 ≥ 0.5
    (1 − α1) ∧ (1 − α2),   if α1 < 0.5 and α2 < 0.5.    (7.31)

7.4 Boolean System Calculator


Boolean System Calculator is a software that may compute the truth value
of uncertain formula. This software may be downloaded from the website at
http://orsc.edu.cn/liu/resources.htm. For example, assume ξ1 , ξ2 , ξ3 , ξ4 , ξ5
are independent uncertain propositions with truth values 0.1, 0.3, 0.5, 0.7, 0.9,
respectively. Consider an uncertain formula,

X = (ξ1 ∧ ξ2 ) ∨ (ξ2 ∧ ξ3 ) ∨ (ξ3 ∧ ξ4 ) ∨ (ξ4 ∧ ξ5 ). (7.32)

It is clear that the corresponding Boolean function of X has the form

f(x1, x2, x3, x4, x5) =
    1, if x1 + x2 = 2
    1, if x2 + x3 = 2
    1, if x3 + x4 = 2
    1, if x4 + x5 = 2
    0, otherwise.

A run of Boolean System Calculator shows that the truth value of X is 0.7
in uncertain measure.
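
The same brute-force enumeration evaluates the Chen-Ralescu formula (7.24); the sketch below is an independent reimplementation (not the toolbox code) and recovers the value 0.7:

from itertools import product

def truth_value(f, alpha):
    # Evaluate the Chen-Ralescu formula (7.24) by enumerating all 0-1 vectors
    def nu(x):
        return min(a if xi == 1 else 1 - a for xi, a in zip(x, alpha))
    states = list(product((0, 1), repeat=len(alpha)))
    sup1 = max(nu(x) for x in states if f(*x) == 1)
    sup0 = max(nu(x) for x in states if f(*x) == 0)
    return sup1 if sup1 < 0.5 else 1 - sup0

X = lambda x1, x2, x3, x4, x5: int((x1 and x2) or (x2 and x3)
                                   or (x3 and x4) or (x4 and x5))
print(truth_value(X, [0.1, 0.3, 0.5, 0.7, 0.9]))  # 0.7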

7.5 Uncertain Predicate Logic


Consider the following propositions: “Beijing is a big city”, and “Tianjin is a
big city”. Uncertain propositional logic treats them as unrelated propositions.
However, uncertain predicate logic represents them by a predicate proposition
X(a). If a represents Beijing, then

X(a) = “Beijing is a big city”. (7.33)



If a represents Tianjin, then

X(a) = “Tianjin is a big city”. (7.34)

Definition 7.3 (Zhang and Li [267]) Uncertain predicate proposition is a


sequence of uncertain propositions indexed by one or more parameters.

In order to deal with uncertain predicate propositions, we need a universal


quantifier ∀ and an existential quantifier ∃. If X(a) is an uncertain predicate
proposition defined by (7.33) and (7.34), then

(∀a)X(a) = “Both Beijing and Tianjin are big cities”, (7.35)

(∃a)X(a) = “At least one of Beijing and Tianjin is a big city”. (7.36)

Theorem 7.9 (Zhang and Li [267], Law of Excluded Middle) Let X(a) be
an uncertain predicate proposition. Then

T ((∀a)X(a) ∨ (∃a)¬X(a)) = 1. (7.37)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∀a)X(a) ∨ (∃a)¬X(a)) = M{((∀a)X(a) = 1) ∪ ((∀a)X(a) = 0)} = 1.

The theorem is proved.

Theorem 7.10 (Zhang and Li [267], Law of Contradiction) Let X(a) be an


uncertain predicate proposition. Then

T ((∀a)X(a) ∧ (∃a)¬X(a)) = 0. (7.38)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∀a)X(a) ∧ (∃a)¬X(a)) = M{((∀a)X(a) = 1) ∩ ((∀a)X(a) = 0)} = 0.

The theorem is proved.

Theorem 7.11 (Zhang and Li [267], Law of Truth Conservation) Let X(a)
be an uncertain predicate proposition. Then

T ((∀a)X(a)) + T ((∃a)¬X(a)) = 1. (7.39)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∃a)¬X(a)) = 1 − M{(∀a)X(a) = 1} = 1 − T ((∀a)X(a)).

The theorem is proved.



Theorem 7.12 (Zhang and Li [267]) Let X(a) be an uncertain predicate


proposition. Then for any given b, we have

T ((∀a)X(a) → X(b)) = 1. (7.40)

Proof: The argument breaks into two cases. Case I: If X(b) = 0, then
(∀a)X(a) = 0 and ¬(∀a)X(a) = 1. Thus

(∀a)X(a) → X(b) = ¬(∀a)X(a) ∨ X(b) = 1.

Case II: If X(b) = 1, then we immediately have

(∀a)X(a) → X(b) = ¬(∀a)X(a) ∨ X(b) = 1.

Thus we always have (7.40). The theorem is proved.

Theorem 7.13 (Zhang and Li [267]) Let X(a) be an uncertain predicate


proposition. Then for any given b, we have

T (X(b) → (∃a)X(a)) = 1. (7.41)

Proof: The argument breaks into two cases. Case I: If X(b) = 0, then
¬X(b) = 1 and

X(b) → (∃a)X(a) = ¬X(b) ∨ (∃a)X(a) = 1.

Case II: If X(b) = 1, then (∃a)X(a) = 1 and

X(b) → (∃a)X(a) = ¬X(b) ∨ (∃a)X(a) = 1.

Thus we always have (7.41). The theorem is proved.

Theorem 7.14 (Zhang and Li [267]) Let X(a) be an uncertain predicate


proposition. Then
T ((∀a)X(a) → (∃a)X(a)) = 1. (7.42)

Proof: The argument breaks into two cases. Case I: If (∀a)X(a) = 0, then
¬(∀a)X(a) = 1 and

(∀a)X(a) → (∃a)X(a) = ¬(∀a)X(a) ∨ (∃a)X(a) = 1.

Case II: If (∀a)X(a) = 1, then (∃a)X(a) = 1 and

(∀a)X(a) → (∃a)X(a) = ¬(∀a)X(a) ∨ (∃a)X(a) = 1.

Thus we always have (7.42). The theorem is proved.



Theorem 7.15 (Zhang and Li [267]) Let X(a) be an uncertain predicate


proposition such that {X(a)|a ∈ A} is a class of independent uncertain propo-
sitions. Then
T((∀a)X(a)) = inf_{a∈A} T(X(a)),    (7.43)

T((∃a)X(a)) = sup_{a∈A} T(X(a)).    (7.44)

Proof: For each uncertain predicate proposition X(a), by the meaning of


universal quantifier, we obtain
T((∀a)X(a)) = M{(∀a)X(a) = 1} = M{∩_{a∈A} (X(a) = 1)}.

Since {X(a)|a ∈ A} is a class of independent uncertain propositions, we get


T((∀a)X(a)) = inf_{a∈A} M{X(a) = 1} = inf_{a∈A} T(X(a)).

The first equation is verified. Similarly, by the meaning of existential quan-


tifier, we obtain
T((∃a)X(a)) = M{(∃a)X(a) = 1} = M{∪_{a∈A} (X(a) = 1)}.

Since {X(a)|a ∈ A} is a class of independent uncertain propositions, we get


T((∃a)X(a)) = sup_{a∈A} M{X(a) = 1} = sup_{a∈A} T(X(a)).

The second equation is proved.


Theorem 7.16 (Zhang and Li [267]) Let X(a, b) be an uncertain predicate
proposition such that {X(a, b)|a ∈ A, b ∈ B} is a class of independent uncer-
tain propositions. Then
T((∀a)(∃b)X(a, b)) = inf_{a∈A} sup_{b∈B} T(X(a, b)),    (7.45)

T((∃a)(∀b)X(a, b)) = sup_{a∈A} inf_{b∈B} T(X(a, b)).    (7.46)

Proof: Since {X(a, b)|a ∈ A, b ∈ B} is a class of independent uncertain


propositions, both {(∃b)X(a, b)|a ∈ A} and {(∀b)X(a, b)|a ∈ A} are two
classes of independent uncertain propositions. It follows from Theorem 7.15
that
T((∀a)(∃b)X(a, b)) = inf_{a∈A} T((∃b)X(a, b)) = inf_{a∈A} sup_{b∈B} T(X(a, b)),

T((∃a)(∀b)X(a, b)) = sup_{a∈A} T((∀b)X(a, b)) = sup_{a∈A} inf_{b∈B} T(X(a, b)).
The theorem is proved.

7.6 Bibliographic Notes


Uncertain propositional logic was designed by Li and Liu [100] in which ev-
ery proposition is abstracted into a Boolean uncertain variable and the truth
value is defined as the uncertain measure that the proposition is true. An im-
portant contribution is Chen-Ralescu theorem [14] that provides a numerical
method for calculating the truth value of uncertain propositions.
Another topic is the uncertain predicate logic developed by Zhang and Li
[267] in which an uncertain predicate proposition is defined as a sequence of
uncertain propositions indexed by one or more parameters.
Chapter 8

Uncertain Entailment

Uncertain entailment is a methodology for calculating the truth value of an


uncertain formula via the maximum uncertainty principle when the truth
values of other uncertain formulas are given. In some sense, uncertain propo-
sitional logic and uncertain entailment are mutually inverse, the former at-
tempts to compose a complex proposition from simpler ones, while the latter
attempts to decompose a complex proposition into simpler ones.
This chapter will present an uncertain entailment model. In addition,
uncertain modus ponens, uncertain modus tollens and uncertain hypothetical
syllogism are deduced from the uncertain entailment model.

8.1 Uncertain Entailment Model


Assume X1 , X2 , · · · , Xn are independent uncertain propositions with un-
known truth values α1 , α2 , · · · , αn , respectively. Also assume that

Yj = fj (X1 , X2 , · · · , Xn ) (8.1)

are uncertain propositions with known truth values cj , j = 1, 2, · · · , m, re-


spectively. Now let
Z = f (X1 , X2 , · · · , Xn ) (8.2)
be an additional uncertain proposition. What is the truth value of Z? This
is just the uncertain entailment problem. In order to solve it, let us consider
what values α1 , α2 , · · · , αn may take. The first constraint is

0 ≤ αi ≤ 1, i = 1, 2, · · · , n. (8.3)

The second type of constraints is represented by

T (Yj ) = cj (8.4)


where T (Yj ) are determined by α1 , α2 , · · · , αn via



T(Yj) =
    sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{fj(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5    (8.5)

for j = 1, 2, · · · , m and
νi(xi) =
    αi, if xi = 1
    1 − αi, if xi = 0    (8.6)

for i = 1, 2, · · · , n. Please note that the additional uncertain proposition


Z = f (X1 , X2 , · · · , Xn ) has a truth value

T(Z) =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5.    (8.7)

Since the truth values α1, α2, · · · , αn are not uniquely determined, the truth
value T(Z) is not unique either. In this case, we have to use the maximum
uncertainty principle to determine the truth value T(Z). That is, T(Z)
should be assigned the value as close to 0.5 as possible. In other words,
we should minimize the value |T(Z) − 0.5| by choosing appropriate values of
α1, α2, · · · , αn. The uncertain entailment model is thus written by Liu [126]
as follows,


min |T(Z) − 0.5|
subject to:
    0 ≤ αi ≤ 1, i = 1, 2, · · · , n
    T(Yj) = cj, j = 1, 2, · · · , m          (8.8)
T (Yj ) = cj , j = 1, 2, · · · , m

where T (Z), T (Yj ), j = 1, 2, · · · , m are functions of unknown truth values


α1 , α2 , · · · , αn .

Example 8.1: Let A and B be independent uncertain propositions. It is


known that
T (A ∨ B) = a, T (A ∧ B) = b. (8.9)

What is the truth value of A → B? Denote the truth values of A and B by


α1 and α2 , respectively, and write

Y1 = A ∨ B, Y2 = A ∧ B, Z = A → B.

It is clear that
T (Y1 ) = α1 ∨ α2 = a,
T (Y2 ) = α1 ∧ α2 = b,
T (Z) = (1 − α1 ) ∨ α2 .
In this case, the uncertain entailment model (8.8) becomes


min |(1 − α1) ∨ α2 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1
    α1 ∨ α2 = a
    α1 ∧ α2 = b.          (8.10)

When a ≥ b, there are only two feasible solutions (α1 , α2 ) = (a, b) and
(α1 , α2 ) = (b, a). If a + b < 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = 1 − a;

if a + b = 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = a or b;

if a + b > 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = b.

When a < b, there is no feasible solution and the truth values are ill-assigned.
In summary, from T (A ∨ B) = a and T (A ∧ B) = b we entail


T(A → B) =
    1 − a,     if a ≥ b and a + b < 1
    a or b,    if a ≥ b and a + b = 1
    b,         if a ≥ b and a + b > 1
    illness,   if a < b.          (8.11)
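
The conclusion can be checked numerically by a crude grid search over (α1, α2); the step size, tolerance and the inputs a = 0.8, b = 0.1 below are hypothetical choices of this sketch (here a ≥ b and a + b < 1, so (8.11) predicts T(A → B) = 1 − a = 0.2).

def entail_example(a, b, step=0.001, tol=1e-6):
    # Grid search for model (8.10): minimize |T(Z) − 0.5| over feasible (α1, α2)
    best = None
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1):
            a1, a2 = i * step, j * step
            if abs(max(a1, a2) - a) > tol or abs(min(a1, a2) - b) > tol:
                continue  # violates T(A ∨ B) = a or T(A ∧ B) = b
            t = max(1 - a1, a2)  # T(A → B) = (1 − α1) ∨ α2
            if best is None or abs(t - 0.5) < abs(best - 0.5):
                best = t
    return best  # None signals ill-assigned truth values

print(entail_example(0.8, 0.1))  # 0.2, matching the first case of (8.11)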

8.2 Uncertain Modus Ponens


Uncertain modus ponens was presented by Liu [126]. Let A and B be inde-
pendent uncertain propositions. Assume A and A → B have truth values a

and b, respectively. What is the truth value of B? Denote the truth values
of A and B by α1 and α2 , respectively, and write

Y1 = A, Y2 = A → B, Z = B.

It is clear that
T (Y1 ) = α1 = a,
T (Y2 ) = (1 − α1 ) ∨ α2 = b,
T (Z) = α2 .
In this case, the uncertain entailment model (8.8) becomes


min |α2 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1
    α1 = a
    (1 − α1) ∨ α2 = b.          (8.12)

When a + b > 1, there is a unique feasible solution and then the optimal
solution is
α1∗ = a, α2∗ = b.
Thus T (B) = α2∗ = b. When a + b = 1, the feasible set is {a} × [0, b] and the
optimal solution is
α1∗ = a, α2∗ = 0.5 ∧ b.
Thus T (B) = α2∗ = 0.5 ∧ b. When a + b < 1, there is no feasible solution and
the truth values are ill-assigned. In summary, from

T (A) = a, T (A → B) = b (8.13)

we entail

T(B) =
    b,          if a + b > 1
    0.5 ∧ b,    if a + b = 1
    illness,    if a + b < 1.          (8.14)
illness, if a + b < 1.

This result coincides with the classical modus ponens that if both A and
A → B are true, then B is true.
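
A direct transcription of (8.14) into code; signalling the ill-assigned case with an exception is a choice of this sketch:

def modus_ponens(a, b):
    # Truth value of B entailed from T(A) = a and T(A → B) = b, as in (8.14)
    if a + b > 1:
        return b
    if a + b == 1:
        return min(0.5, b)
    raise ValueError("ill-assigned truth values: a + b < 1")

print(modus_ponens(1.0, 1.0))  # 1.0: the classical modus ponens is recovered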

8.3 Uncertain Modus Tollens


Uncertain modus tollens was presented by Liu [126]. Let A and B be inde-
pendent uncertain propositions. Assume A → B and B have truth values a

and b, respectively. What is the truth value of A? Denote the truth values
of A and B by α1 and α2 , respectively, and write

Y1 = A → B, Y2 = B, Z = A.

It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,

T (Y2 ) = α2 = b,

T (Z) = α1 .

In this case, the uncertain entailment model (8.8) becomes




min |α1 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1
    (1 − α1) ∨ α2 = a
    α2 = b.          (8.15)

When a > b, there is a unique feasible solution and then the optimal solution
is
α1∗ = 1 − a, α2∗ = b.

Thus T (A) = α1∗ = 1 − a. When a = b, the feasible set is [1 − a, 1] × {b} and


the optimal solution is

α1∗ = (1 − a) ∨ 0.5, α2∗ = b.

Thus T (A) = α1∗ = (1 − a) ∨ 0.5. When a < b, there is no feasible solution


and the truth values are ill-assigned. In summary, from

T (A → B) = a, T (B) = b (8.16)

we entail


T(A) =
    1 − a,            if a > b
    (1 − a) ∨ 0.5,    if a = b
    illness,          if a < b.          (8.17)

This result coincides with the classical modus tollens that if A → B is true
and B is false, then A is false.

8.4 Uncertain Hypothetical Syllogism


Uncertain hypothetical syllogism was presented by Liu [126]. Let A, B, C be
independent uncertain propositions. Assume A → B and B → C have truth
values a and b, respectively. What is the truth value of A → C? Denote the
truth values of A, B, C by α1 , α2 , α3 , respectively, and write

Y1 = A → B, Y2 = B → C, Z = A → C.

It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = (1 − α2 ) ∨ α3 = b,
T (Z) = (1 − α1 ) ∨ α3 .
In this case, the uncertain entailment model (8.8) becomes


min |(1 − α1) ∨ α3 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1
    0 ≤ α3 ≤ 1
    (1 − α1) ∨ α2 = a
    (1 − α2) ∨ α3 = b.          (8.18)

Write the optimal solution by (α1∗ , α2∗ , α3∗ ). When a ∧ b ≥ 0.5, we have

T (A → C) = (1 − α1∗ ) ∨ α3∗ = a ∧ b.

When a + b ≥ 1 and a ∧ b < 0.5, we have

T (A → C) = (1 − α1∗ ) ∨ α3∗ = 0.5.

When a + b < 1, there is no feasible solution and the truth values are ill-
assigned. In summary, from

T (A → B) = a, T (B → C) = b (8.19)

we entail


T(A → C) =
    a ∧ b,      if a ≥ 0.5 and b ≥ 0.5
    0.5,        if a + b ≥ 1 and a ∧ b < 0.5
    illness,    if a + b < 1.          (8.20)

This result coincides with the classical hypothetical syllogism that if both
A → B and B → C are true, then A → C is true.

8.5 Bibliographic Notes


Uncertain entailment was proposed by Liu [126] for determining the truth
value of an uncertain proposition via the maximum uncertainty principle
when the truth values of other uncertain propositions are given. From the
uncertain entailment model, Liu [126] also deduced uncertain modus ponens,
uncertain modus tollens, and uncertain hypothetical syllogism.
Chapter 9

Uncertain Set

Uncertain set was first proposed by Liu [127] in 2010 for modeling unsharp
concepts. This chapter will introduce the concepts of uncertain set, mem-
bership function, independence, expected value, variance, entropy, and dis-
tance. This chapter will also introduce the operational law for uncertain sets
via membership functions or inverse membership functions, and uncertain
statistics for determining membership functions.

9.1 Uncertain Set


Roughly speaking, an uncertain set is a set-valued function on an uncertainty
space, and attempts to model “unsharp concepts” that are essentially sets
whose boundaries are not sharply described (because of the ambiguity of
human language). Some typical examples include “young”, “tall”, “warm”,
and “most”. A formal definition is given as follows.

Definition 9.1 (Liu [127]) An uncertain set is a function ξ from an uncer-


tainty space (Γ, L, M) to a collection of sets of real numbers such that both
{B ⊂ ξ} and {ξ ⊂ B} are events for any Borel set B.

Remark 9.1: It is clear that uncertain set (Liu [127]) is very different from
random set (Robbins [198] and Matheron [167]) and fuzzy set (Zadeh [260]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.

Remark 9.2: What is the difference between uncertain variable and un-
certain set? Both of them belong to the same broad category of uncertain
concepts. However, they are differentiated by their mathematical definitions:
the former refers to one value, while the latter to a collection of values. Es-
sentially, the difference between uncertain variable and uncertain set focuses


on the property of exclusivity. If the concept has exclusivity, then it is an


uncertain variable. Otherwise, it is an uncertain set. Consider the statement
“John is a young man”. If we are interested in John’s real age, then “young”
is an uncertain variable because it is an exclusive concept (John’s age can-
not be more than one value). For example, if John is 20 years old, then it
is impossible that John is 25 years old. In other words, “John is 20 years
old” does exclude the possibility that “John is 25 years old”. By contrast,
if we are interested in what ages can be regarded “young”, then “young” is
an uncertain set because the concept now has no exclusivity. For example,
both 20-year-old and 25-year-old men can be considered “young”. In other
words, “a 20-year-old man is young” does not exclude the possibility that “a
25-year-old man is young”.

Example 9.1: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3} with
power set L. Then the set-valued function

ξ(γ) =
    [1, 3], if γ = γ1
    [2, 4], if γ = γ2
    [3, 5], if γ = γ3    (9.1)
is an uncertain set on (Γ, L, M). See Figure 9.1.

Figure 9.1: An Uncertain Set

Example 9.2: Take an uncertainty space (Γ, L, M) to be ℜ with Borel
algebra L. Then the set-valued function

ξ(γ) = [γ, γ + 1], ∀γ ∈ Γ (9.2)

is an uncertain set on (Γ, L, M).

Theorem 9.1 Let ξ be an uncertain set and let B be a Borel set. Then the
set
{B ⊄ ξ} = {γ ∈ Γ | B ⊄ ξ(γ)}    (9.3)
is an event.

Proof: Since ξ is an uncertain set and B is a Borel set, the set {B ⊂ ξ} is an


event. Thus {B ⊄ ξ} is an event by using the relation {B ⊄ ξ} = {B ⊂ ξ}^c.

Theorem 9.2 Let ξ be an uncertain set and let B be a Borel set. Then the
set
{ξ ⊄ B} = {γ ∈ Γ | ξ(γ) ⊄ B}    (9.4)
is an event.

Proof: Since ξ is an uncertain set and B is a Borel set, the set {ξ ⊂ B} is an


event. Thus {ξ ⊄ B} is an event by using the relation {ξ ⊄ B} = {ξ ⊂ B}^c.

Union, Intersection and Complement


Definition 9.2 Let ξ and η be two uncertain sets on the uncertainty space
(Γ, L, M). Then (i) the union ξ ∪ η of the uncertain sets ξ and η is

(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ), ∀γ ∈ Γ; (9.5)

(ii) the intersection ξ ∩ η of the uncertain sets ξ and η is

(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ), ∀γ ∈ Γ; (9.6)

(iii) the complement ξ^c of the uncertain set ξ is

ξ^c(γ) = ξ(γ)^c, ∀γ ∈ Γ.    (9.7)

Example 9.3: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3}. Let ξ
and η be two uncertain sets,

ξ(γ) =
    [1, 2], if γ = γ1
    [1, 3], if γ = γ2
    [1, 4], if γ = γ3,

η(γ) =
    (2, 3), if γ = γ1
    (2, 4), if γ = γ2
    (2, 5), if γ = γ3.

Then their union is

(ξ ∪ η)(γ) =
    [1, 3), if γ = γ1
    [1, 4), if γ = γ2
    [1, 5), if γ = γ3,

their intersection is

(ξ ∩ η)(γ) =
    ∅,      if γ = γ1
    (2, 3], if γ = γ2
    (2, 4], if γ = γ3,


and their complements are

ξ^c(γ) =
    (−∞, 1) ∪ (2, +∞), if γ = γ1
    (−∞, 1) ∪ (3, +∞), if γ = γ2
    (−∞, 1) ∪ (4, +∞), if γ = γ3,

η^c(γ) =
    (−∞, 2] ∪ [3, +∞), if γ = γ1
    (−∞, 2] ∪ [4, +∞), if γ = γ2
    (−∞, 2] ∪ [5, +∞), if γ = γ3.

Theorem 9.3 Let ξ be an uncertain set and let ℜ be the set of real numbers.
Then
ξ ∪ ℜ = ℜ, ξ ∩ ℜ = ξ.    (9.8)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ℜ)(γ) = ξ(γ) ∪ ℜ = ℜ.
Thus we have ξ ∪ ℜ = ℜ. In addition, the intersection is
(ξ ∩ ℜ)(γ) = ξ(γ) ∩ ℜ = ξ(γ).
Thus we have ξ ∩ ℜ = ξ.
Theorem 9.4 Let ξ be an uncertain set and let ∅ be the empty set. Then
ξ ∪ ∅ = ξ, ξ ∩ ∅ = ∅. (9.9)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ∅)(γ) = ξ(γ) ∪ ∅ = ξ(γ).
Thus we have ξ ∪ ∅ = ξ. In addition, the intersection is
(ξ ∩ ∅)(γ) = ξ(γ) ∩ ∅ = ∅.
Thus we have ξ ∩ ∅ = ∅.
Theorem 9.5 (Idempotent Law) Let ξ be an uncertain set. Then we have
ξ ∪ ξ = ξ, ξ ∩ ξ = ξ. (9.10)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).
Thus we have ξ ∪ ξ = ξ. In addition, the intersection is
(ξ ∩ ξ)(γ) = ξ(γ) ∩ ξ(γ) = ξ(γ).
Thus we have ξ ∩ ξ = ξ.

Theorem 9.6 (Double-Negation Law) Let ξ be an uncertain set. Then we


have
(ξ^c)^c = ξ.    (9.11)
Proof: For each γ ∈ Γ, it follows from the definition of complement that
(ξ^c)^c(γ) = (ξ^c(γ))^c = (ξ(γ)^c)^c = ξ(γ).
Thus we have (ξ^c)^c = ξ.
Theorem 9.7 (Law of Excluded Middle and Law of Contradiction) Let ξ be
an uncertain set and let ξ^c be its complement. Then
ξ ∪ ξ^c = ℜ, ξ ∩ ξ^c = ∅.    (9.12)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ξ^c)(γ) = ξ(γ) ∪ ξ^c(γ) = ξ(γ) ∪ ξ(γ)^c = ℜ.
Thus we have ξ ∪ ξ^c ≡ ℜ. In addition, the intersection is
(ξ ∩ ξ^c)(γ) = ξ(γ) ∩ ξ^c(γ) = ξ(γ) ∩ ξ(γ)^c = ∅.
Thus we have ξ ∩ ξ^c ≡ ∅.
Theorem 9.8 (Commutative Law) Let ξ and η be uncertain sets. Then we
have
ξ ∪ η = η ∪ ξ, ξ ∩ η = η ∩ ξ. (9.13)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ) = η(γ) ∪ ξ(γ) = (η ∪ ξ)(γ).
Thus we have ξ ∪ η = η ∪ ξ. In addition, it follows that
(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ) = η(γ) ∩ ξ(γ) = (η ∩ ξ)(γ).
Thus we have ξ ∩ η = η ∩ ξ.
Theorem 9.9 (Associative Law) Let ξ, η, τ be uncertain sets. Then we have
(ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ ), (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ ). (9.14)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
((ξ ∪ η) ∪ τ )(γ) = (ξ(γ) ∪ η(γ)) ∪ τ (γ)
= ξ(γ) ∪ (η(γ) ∪ τ (γ)) = (ξ ∪ (η ∪ τ ))(γ).
Thus we have (ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ ). In addition, it follows that
((ξ ∩ η) ∩ τ )(γ) = (ξ(γ) ∩ η(γ)) ∩ τ (γ)
= ξ(γ) ∩ (η(γ) ∩ τ (γ)) = (ξ ∩ (η ∩ τ ))(γ).
Thus we have (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ ).

Theorem 9.10 (Distributive Law) Let ξ, η, τ be uncertain sets. Then we


have

ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ), ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ). (9.15)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (η ∩ τ ))(γ) = ξ(γ) ∪ (η(γ) ∩ τ (γ))


= (ξ(γ) ∪ η(γ)) ∩ (ξ(γ) ∪ τ (γ))
= ((ξ ∪ η) ∩ (ξ ∪ τ ))(γ).

Thus we have ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ). In addition, it follows that

(ξ ∩ (η ∪ τ ))(γ) = ξ(γ) ∩ (η(γ) ∪ τ (γ))


= (ξ(γ) ∩ η(γ)) ∪ (ξ(γ) ∩ τ (γ))
= ((ξ ∩ η) ∪ (ξ ∩ τ ))(γ).

Thus we have ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ).

Theorem 9.11 (Absorption Law) Let ξ and η be uncertain sets. Then we


have
ξ ∪ (ξ ∩ η) = ξ, ξ ∩ (ξ ∪ η) = ξ. (9.16)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (ξ ∩ η))(γ) = ξ(γ) ∪ (ξ(γ) ∩ η(γ)) = ξ(γ).

Thus we have ξ ∪ (ξ ∩ η) = ξ. In addition, since

(ξ ∩ (ξ ∪ η))(γ) = ξ(γ) ∩ (ξ(γ) ∪ η(γ)) = ξ(γ),

we get ξ ∩ (ξ ∪ η) = ξ.

Theorem 9.12 (De Morgan’s Law) Let ξ and η be uncertain sets. Then

(ξ ∪ η)^c = ξ^c ∩ η^c, (ξ ∩ η)^c = ξ^c ∪ η^c.    (9.17)

Proof: For each γ ∈ Γ, it follows from the definition of complement that

(ξ ∪ η)^c(γ) = (ξ(γ) ∪ η(γ))^c = ξ(γ)^c ∩ η(γ)^c = (ξ^c ∩ η^c)(γ).

Thus we have (ξ ∪ η)^c = ξ^c ∩ η^c. In addition, since

(ξ ∩ η)^c(γ) = (ξ(γ) ∩ η(γ))^c = ξ(γ)^c ∪ η(γ)^c = (ξ^c ∪ η^c)(γ),

we get (ξ ∩ η)^c = ξ^c ∪ η^c.

Function of Uncertain Sets


Definition 9.3 Let ξ1 , ξ2 , · · · , ξn be uncertain sets on the uncertainty space
(Γ, L, M), and let f be a measurable function. Then ξ = f (ξ1 , ξ2 , · · · , ξn ) is
an uncertain set defined by

ξ(γ) = f (ξ1 (γ), ξ2 (γ), · · · , ξn (γ)), ∀γ ∈ Γ. (9.18)

Example 9.4: Let ξ be an uncertain set on the uncertainty space (Γ, L, M)


and let A be a crisp set. Then ξ + A is also an uncertain set determined by

(ξ + A)(γ) = ξ(γ) + A, ∀γ ∈ Γ. (9.19)

Example 9.5: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3}. Let ξ
and η be two uncertain sets,

ξ(γ) =
    [1, 2], if γ = γ1
    [1, 3], if γ = γ2
    [1, 4], if γ = γ3,

η(γ) =
    (2, 3), if γ = γ1
    (2, 4), if γ = γ2
    (2, 5), if γ = γ3.

Then their sum is

(ξ + η)(γ) =
    (3, 5), if γ = γ1
    (3, 7), if γ = γ2
    (3, 9), if γ = γ3,

and their product is

(ξ × η)(γ) =
    (2, 6),  if γ = γ1
    (2, 12), if γ = γ2
    (2, 20), if γ = γ3.

9.2 Membership Function


Definition 9.4 (Liu [133]) An uncertain set ξ is said to have a membership
function µ if for any Borel set B, we have

M{B ⊂ ξ} = inf_{x∈B} µ(x),    (9.20)

M{ξ ⊂ B} = 1 − sup_{x∈B^c} µ(x).    (9.21)

The above equations will be called measure inversion formulas.



Figure 9.2: M{B ⊂ ξ} = inf_{x∈B} µ(x) and M{ξ ⊂ B} = 1 − sup_{x∈B^c} µ(x). Reprinted from Liu [133].

Remark 9.3: When an uncertain set ξ does have a membership function µ,


it follows from the first measure inversion formula that

µ(x) = M{x ∈ ξ}. (9.22)

Remark 9.4: The value of µ(x) represents the membership degree that x
belongs to the uncertain set ξ. If µ(x) = 1, then x completely belongs to ξ;
if µ(x) = 0, then x does not belong to ξ at all. Thus the larger the value of
µ(x) is, the more true x belongs to ξ.

Remark 9.5: If an element x belongs to an uncertain set with membership


degree α, then x does not belong to the uncertain set with membership degree
1 − α. This fact follows from the duality property of uncertain measure. In
other words, if the uncertain set has a membership function µ, then for any
real number x, we have M{x ∉ ξ} = 1 − M{x ∈ ξ} = 1 − µ(x). That is,

M{x ∉ ξ} = 1 − µ(x).    (9.23)

Exercise 9.1: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ.
Show that such an uncertain set has a membership function

µ(x) ≡ 1, ∀x ∈ ℜ    (9.24)

that is just the characteristic function of ℜ.

Exercise 9.2: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Show
that such an uncertain set has a membership function

µ(x) ≡ 0, ∀x ∈ < (9.25)

that is just the characteristic function of ∅.



Exercise 9.3: A crisp set A of real numbers is a special uncertain set


ξ(γ) ≡ A. Show that such an uncertain set has a membership function
µ(x) =
    1, if x ∈ A
    0, if x ∉ A    (9.26)

that is just the characteristic function of A.

Exercise 9.4: Take an uncertainty space (Γ, L, M) to be the interval [0, 1]


with Borel algebra and Lebesgue measure. (i) Show that the uncertain set

ξ(γ) = [γ − 1, 1 − γ] (9.27)

has a membership function


µ(x) =
    1 − |x|, if x ∈ [−1, 1]
    0,       otherwise.    (9.28)

(ii) What is the membership function of ξ(γ) = (γ − 1, 1 − γ)? (iii) What


do those two uncertain sets make you think about?

Exercise 9.5: It is not true that every uncertain set has a membership
function. Show that the uncertain set
ξ =
    [2, 4] with uncertain measure 0.6
    [1, 3] with uncertain measure 0.4    (9.29)

has no membership function.

Definition 9.5 An uncertain set ξ is called triangular if it has a membership


function
µ(x) =
    (x − a)/(b − a), if a ≤ x ≤ b
    (x − c)/(b − c), if b ≤ x ≤ c    (9.30)
denoted by (a, b, c) where a, b, c are real numbers with a < b < c.

Definition 9.6 An uncertain set ξ is called trapezoidal if it has a member-


ship function
µ(x) =
    (x − a)/(b − a), if a ≤ x ≤ b
    1,               if b ≤ x ≤ c
    (x − d)/(c − d), if c ≤ x ≤ d    (9.31)
denoted by (a, b, c, d) where a, b, c, d are real numbers with a < b < c < d.
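
Both membership functions are straightforward to code. The sketch below implements (9.31), with the triangular set (9.30) obtained as the degenerate case b = c; the demo parameters anticipate the set “young” of (9.32) below.

def trapezoidal(a, b, c, d):
    # Membership function of the trapezoidal uncertain set (a, b, c, d), as in (9.31)
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if x < b:
            return (x - a) / (b - a)
        if x <= c:
            return 1.0
        return (x - d) / (c - d)
    return mu

young = trapezoidal(15, 20, 35, 45)     # the set "young" of (9.32) below
print(young(18), young(30), young(40))  # 0.6 1.0 0.5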

Figure 9.3: Triangular and Trapezoidal Membership Functions. Reprinted from Liu [133].

What is “young”?
Sometimes we say “those students are young”. What ages can be considered
“young”? In this case, “young” may be regarded as an uncertain set whose
membership function is

           { 0,            if x ≤ 15
           { (x − 15)/5,   if 15 ≤ x ≤ 20
    µ(x) = { 1,            if 20 ≤ x ≤ 35                 (9.32)
           { (45 − x)/10,  if 35 ≤ x ≤ 45
           { 0,            if x ≥ 45.

Note that we do not say “young” if the age is below 15.

[Figure 9.4: Membership Function of “young”]

What is “tall”?
Sometimes we say “those sportsmen are tall”. What heights (centimeters)
can be considered “tall”? In this case, “tall” may be regarded as an uncertain

set whose membership function is

           { 0,             if x ≤ 180
           { (x − 180)/5,   if 180 ≤ x ≤ 185
    µ(x) = { 1,             if 185 ≤ x ≤ 195              (9.33)
           { (200 − x)/5,   if 195 ≤ x ≤ 200
           { 0,             if x ≥ 200.

Note that we do not say “tall” if the height is over 200cm.

[Figure 9.5: Membership Function of “tall”]

What is “warm”?

Sometimes we say “those days are warm”. What temperatures can be con-
sidered “warm”? In this case, “warm” may be regarded as an uncertain set
whose membership function is

           { 0,            if x ≤ 15
           { (x − 15)/3,   if 15 ≤ x ≤ 18
    µ(x) = { 1,            if 18 ≤ x ≤ 24                 (9.34)
           { (28 − x)/4,   if 24 ≤ x ≤ 28
           { 0,            if x ≥ 28.

What is “most”?

Sometimes we say “most students are boys”. What percentages can be con-
sidered “most”? In this case, “most” may be regarded as an uncertain set
[Figure 9.6: Membership Function of “warm”]

whose membership function is

           { 0,             if 0 ≤ x ≤ 0.7
           { 20(x − 0.7),   if 0.7 ≤ x ≤ 0.75
    µ(x) = { 1,             if 0.75 ≤ x ≤ 0.85            (9.35)
           { 20(0.9 − x),   if 0.85 ≤ x ≤ 0.9
           { 0,             if 0.9 ≤ x ≤ 1.

[Figure 9.7: Membership Function of “most”]

What uncertain sets have membership functions?

It is known that some uncertain sets do not have membership functions.
What uncertain sets have membership functions?

Case I: If an uncertain set ξ degenerates to a crisp set A, then ξ has a
membership function that is just the characteristic function of A.

Case II: Let ξ be an uncertain set taking values in a nested class of sets.
That is, for any given γ1 and γ2 ∈ Γ, at least one of the following
alternatives holds:

    (i)  ξ(γ1) ⊂ ξ(γ2),                                   (9.36)
    (ii) ξ(γ2) ⊂ ξ(γ1).                                   (9.37)

Then the uncertain set ξ has a membership function.

Sufficient and Necessary Condition

Theorem 9.13 (Liu [130]) A real-valued function µ is a membership function
if and only if
    0 ≤ µ(x) ≤ 1.                                         (9.38)

Proof: If µ is a membership function of some uncertain set ξ, then
µ(x) = M{x ∈ ξ} and 0 ≤ µ(x) ≤ 1. Conversely, suppose µ is a function such
that 0 ≤ µ(x) ≤ 1. Take an uncertainty space (Γ, L, M) to be the interval
[0, 1] with Borel algebra and Lebesgue measure. Then the uncertain set

    ξ(γ) = {x ∈ ℜ | µ(x) ≥ γ}                             (9.39)

has the membership function µ. See Figure 9.8.


[Figure 9.8: Take (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue
measure. Then ξ(γ) = {x ∈ ℜ | µ(x) ≥ γ} has the membership function µ.
Keep in mind that ξ is not the unique uncertain set whose membership
function is µ.]

Membership Function of Nonempty Uncertain Set

An uncertain set ξ is said to be nonempty if ξ(γ) ≠ ∅ for almost all
γ ∈ Γ. That is,
    M{ξ = ∅} = 0.                                         (9.40)
Note that a nonempty uncertain set does not necessarily have a membership
function. However, when it does, the following theorem gives a necessary
and sufficient condition on the membership function.

Theorem 9.14 Let ξ be an uncertain set whose membership function µ exists.
Then ξ is nonempty if and only if

    sup_{x∈ℜ} µ(x) = 1.                                   (9.41)

Proof: Since the membership function µ exists, it follows from the measure
inversion formula that

    M{ξ = ∅} = 1 − sup_{x∈∅c} µ(x) = 1 − sup_{x∈ℜ} µ(x).

Thus ξ is a nonempty uncertain set if and only if (9.41) holds.

Inverse Membership Function

Definition 9.7 (Liu [133]) Let ξ be an uncertain set with membership
function µ. Then the set-valued function

    µ−1(α) = {x ∈ ℜ | µ(x) ≥ α},  ∀α ∈ [0, 1]             (9.42)

is called the inverse membership function of ξ. Sometimes, for each given
α, the set µ−1(α) is also called the α-cut of µ.

[Figure 9.9: Inverse Membership Function µ−1(α). Reprinted from Liu [133].]

Remark 9.6: It is clear that the inverse membership function always exists.
Please also note that µ−1(α) may take the value of the empty set ∅.

Example 9.6: The triangular uncertain set ξ = (a, b, c) has an inverse
membership function

    µ−1(α) = [(1 − α)a + αb, αb + (1 − α)c].              (9.43)

Example 9.7: The trapezoidal uncertain set ξ = (a, b, c, d) has an inverse
membership function

    µ−1(α) = [(1 − α)a + αb, αc + (1 − α)d].              (9.44)
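As a quick numeric illustration (our own sketch, with hypothetical helper
names), the α-cuts (9.43) and (9.44) are straightforward to compute:

```python
# A minimal sketch: α-cuts of triangular and trapezoidal uncertain sets,
# returned as (lower, upper) endpoint pairs per (9.43) and (9.44).

def alpha_cut_triangular(alpha, a, b, c):
    return ((1 - alpha) * a + alpha * b, alpha * b + (1 - alpha) * c)

def alpha_cut_trapezoidal(alpha, a, b, c, d):
    return ((1 - alpha) * a + alpha * b, alpha * c + (1 - alpha) * d)

print(alpha_cut_triangular(0.5, 1, 2, 3))      # (1.5, 2.5)
print(alpha_cut_trapezoidal(1.0, 1, 2, 3, 4))  # (2.0, 3.0), the top interval
```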
Theorem 9.15 Let ξ be an uncertain set with inverse membership function
µ−1(α). Then the membership function of ξ is determined by

    µ(x) = sup{α ∈ [0, 1] | x ∈ µ−1(α)}.                  (9.45)

Proof: It is easy to verify that µ−1 is the inverse membership function of
µ. Thus µ is the membership function of ξ.

Theorem 9.16 (Liu [133], Sufficient and Necessary Condition) A function
µ−1(α) is an inverse membership function if and only if it is a monotone
decreasing set-valued function with respect to α ∈ [0, 1]. That is,

    µ−1(α) ⊂ µ−1(β),  if α > β.                           (9.46)

Proof: Suppose µ−1(α) is an inverse membership function of some uncertain
set. For any x ∈ µ−1(α), we have µ(x) ≥ α. Since α > β, we have µ(x) > β
and then x ∈ µ−1(β). Hence µ−1(α) ⊂ µ−1(β). Conversely, suppose µ−1(α) is
a monotone decreasing set-valued function. Then

    µ(x) = sup{α ∈ [0, 1] | x ∈ µ−1(α)}

is a membership function of some uncertain set. It is easy to verify that
µ−1(α) is the inverse membership function of the uncertain set. The theorem
is proved.

An uncertain set does not necessarily take its α-cuts as values!

Please keep in mind that an uncertain set does not necessarily take its
α-cuts as values. In fact, an α-cut is included in the uncertain set with
uncertain measure at least α. Conversely, the uncertain set is included in
its α-cut with uncertain measure at least 1 − α. More precisely, we have
the following theorem.

Theorem 9.17 (Liu [133]) Let ξ be an uncertain set with inverse membership
function µ−1(α). Then for each α ∈ [0, 1], we have

    M{µ−1(α) ⊂ ξ} ≥ α,                                    (9.47)

    M{ξ ⊂ µ−1(α)} ≥ 1 − α.                                (9.48)

Proof: For each x ∈ µ−1(α), we have µ(x) ≥ α. It follows from the measure
inversion formula that

    M{µ−1(α) ⊂ ξ} = inf_{x∈µ−1(α)} µ(x) ≥ α.

For each x ∉ µ−1(α), we have µ(x) < α. It follows from the measure
inversion formula that

    M{ξ ⊂ µ−1(α)} = 1 − sup_{x∉µ−1(α)} µ(x) ≥ 1 − α.

Regular Membership Function

Definition 9.8 (Liu [133]) A membership function µ is said to be regular
if there exists a point x0 such that µ(x0) = 1 and µ(x) is unimodal about
the mode x0. That is, µ(x) is increasing on (−∞, x0] and decreasing on
[x0, +∞).

If µ is a regular membership function, then µ−1(α) is an interval for each
α. In this case, the function

    µl−1(α) = inf µ−1(α)                                  (9.49)

is called the left inverse membership function, and the function

    µr−1(α) = sup µ−1(α)                                  (9.50)

is called the right inverse membership function. It is clear that the left
inverse membership function µl−1(α) is increasing, and the right inverse
membership function µr−1(α) is decreasing with respect to α.

Conversely, suppose an uncertain set ξ has a left inverse membership
function µl−1(α) and right inverse membership function µr−1(α). Then the
membership function µ is determined by

           { 0,  if x ≤ µl−1(0)
           { α,  if µl−1(0) ≤ x ≤ µl−1(1) and µl−1(α) = x
    µ(x) = { 1,  if µl−1(1) ≤ x ≤ µr−1(1)                 (9.51)
           { β,  if µr−1(1) ≤ x ≤ µr−1(0) and µr−1(β) = x
           { 0,  if x ≥ µr−1(0).

Note that the values of α and β may not be unique. In this case, we will
take the maximum values.

9.3 Independence

Definition 9.9 (Liu [136]) The uncertain sets ξ1, ξ2, · · · , ξn are said
to be independent if for any Borel sets B1, B2, · · · , Bn, we have

    M{⋂_{i=1}^n (ξi∗ ⊂ Bi)} = ⋀_{i=1}^n M{ξi∗ ⊂ Bi}       (9.52)

and

    M{⋃_{i=1}^n (ξi∗ ⊂ Bi)} = ⋁_{i=1}^n M{ξi∗ ⊂ Bi}       (9.53)

where ξi∗ are arbitrarily chosen from {ξi, ξic}, i = 1, 2, · · · , n,
respectively.

Remark 9.7: Note that (9.52) represents 2^n equations. For example, when
n = 2, the four equations are

    M{(ξ1 ⊂ B1) ∩ (ξ2 ⊂ B2)} = M{ξ1 ⊂ B1} ∧ M{ξ2 ⊂ B2},
    M{(ξ1c ⊂ B1) ∩ (ξ2 ⊂ B2)} = M{ξ1c ⊂ B1} ∧ M{ξ2 ⊂ B2},
    M{(ξ1 ⊂ B1) ∩ (ξ2c ⊂ B2)} = M{ξ1 ⊂ B1} ∧ M{ξ2c ⊂ B2},
    M{(ξ1c ⊂ B1) ∩ (ξ2c ⊂ B2)} = M{ξ1c ⊂ B1} ∧ M{ξ2c ⊂ B2}.

Also note that (9.53) represents another 2^n equations. For example, when
n = 2, the four equations are

    M{(ξ1 ⊂ B1) ∪ (ξ2 ⊂ B2)} = M{ξ1 ⊂ B1} ∨ M{ξ2 ⊂ B2},
    M{(ξ1c ⊂ B1) ∪ (ξ2 ⊂ B2)} = M{ξ1c ⊂ B1} ∨ M{ξ2 ⊂ B2},
    M{(ξ1 ⊂ B1) ∪ (ξ2c ⊂ B2)} = M{ξ1 ⊂ B1} ∨ M{ξ2c ⊂ B2},
    M{(ξ1c ⊂ B1) ∪ (ξ2c ⊂ B2)} = M{ξ1c ⊂ B1} ∨ M{ξ2c ⊂ B2}.
Theorem 9.18 Let ξ1, ξ2, · · · , ξn be uncertain sets, and let ξi∗ be
arbitrarily chosen uncertain sets from {ξi, ξic}, i = 1, 2, · · · , n,
respectively. Then ξ1, ξ2, · · · , ξn are independent if and only if
ξ1∗, ξ2∗, · · · , ξn∗ are independent.

Proof: Let ξi∗∗ be arbitrarily chosen uncertain sets from {ξi∗, ξi∗c},
i = 1, 2, · · · , n, respectively. Then ξ1∗, ξ2∗, · · · , ξn∗ and
ξ1∗∗, ξ2∗∗, · · · , ξn∗∗ represent the same 2^n combinations. This fact
implies that (9.52) and (9.53) are equivalent to

    M{⋂_{i=1}^n (ξi∗∗ ⊂ Bi)} = ⋀_{i=1}^n M{ξi∗∗ ⊂ Bi},    (9.54)

    M{⋃_{i=1}^n (ξi∗∗ ⊂ Bi)} = ⋁_{i=1}^n M{ξi∗∗ ⊂ Bi}.    (9.55)

Hence ξ1, ξ2, · · · , ξn are independent if and only if
ξ1∗, ξ2∗, · · · , ξn∗ are independent.

Exercise 9.6: Show that the following four statements are equivalent: (i)
ξ1 and ξ2 are independent; (ii) ξ1c and ξ2 are independent; (iii) ξ1 and ξ2c are
independent; and (iv) ξ1c and ξ2c are independent.
Theorem 9.19 The uncertain sets ξ1, ξ2, · · · , ξn are independent if and
only if for any Borel sets B1, B2, · · · , Bn, we have

    M{⋂_{i=1}^n (ξi∗ ⊄ Bi)} = ⋀_{i=1}^n M{ξi∗ ⊄ Bi}       (9.56)

and

    M{⋃_{i=1}^n (ξi∗ ⊄ Bi)} = ⋁_{i=1}^n M{ξi∗ ⊄ Bi}       (9.57)

where ξi∗ are arbitrarily chosen from {ξi, ξic}, i = 1, 2, · · · , n,
respectively.

Proof: Since {ξi∗ ⊄ Bi}c = {ξi∗ ⊂ Bi} for i = 1, 2, · · · , n, it follows
from the duality of uncertain measure that

    M{⋂_{i=1}^n (ξi∗ ⊄ Bi)} = 1 − M{⋃_{i=1}^n (ξi∗ ⊂ Bi)},    (9.58)

    ⋀_{i=1}^n M{ξi∗ ⊄ Bi} = 1 − ⋁_{i=1}^n M{ξi∗ ⊂ Bi},        (9.59)

    M{⋃_{i=1}^n (ξi∗ ⊄ Bi)} = 1 − M{⋂_{i=1}^n (ξi∗ ⊂ Bi)},    (9.60)

    ⋁_{i=1}^n M{ξi∗ ⊄ Bi} = 1 − ⋀_{i=1}^n M{ξi∗ ⊂ Bi}.        (9.61)

It follows from (9.58), (9.59), (9.60) and (9.61) that (9.56) and (9.57)
are valid if and only if

    M{⋂_{i=1}^n (ξi∗ ⊂ Bi)} = ⋀_{i=1}^n M{ξi∗ ⊂ Bi},          (9.62)

    M{⋃_{i=1}^n (ξi∗ ⊂ Bi)} = ⋁_{i=1}^n M{ξi∗ ⊂ Bi}.          (9.63)

The above two equations are also equivalent to the independence of the
uncertain sets ξ1, ξ2, · · · , ξn. The theorem is thus proved.

Theorem 9.20 The uncertain sets ξ1, ξ2, · · · , ξn are independent if and
only if for any Borel sets B1, B2, · · · , Bn, we have

    M{⋂_{i=1}^n (Bi ⊂ ξi∗)} = ⋀_{i=1}^n M{Bi ⊂ ξi∗}       (9.64)

and

    M{⋃_{i=1}^n (Bi ⊂ ξi∗)} = ⋁_{i=1}^n M{Bi ⊂ ξi∗}       (9.65)

where ξi∗ are arbitrarily chosen from {ξi, ξic}, i = 1, 2, · · · , n,
respectively.

Proof: Since {Bi ⊂ ξi∗} = {ξi∗c ⊂ Bic} for i = 1, 2, · · · , n, we
immediately have

    M{⋂_{i=1}^n (Bi ⊂ ξi∗)} = M{⋂_{i=1}^n (ξi∗c ⊂ Bic)},      (9.66)

    ⋀_{i=1}^n M{Bi ⊂ ξi∗} = ⋀_{i=1}^n M{ξi∗c ⊂ Bic},          (9.67)

    M{⋃_{i=1}^n (Bi ⊂ ξi∗)} = M{⋃_{i=1}^n (ξi∗c ⊂ Bic)},      (9.68)

    ⋁_{i=1}^n M{Bi ⊂ ξi∗} = ⋁_{i=1}^n M{ξi∗c ⊂ Bic}.          (9.69)

It follows from (9.66), (9.67), (9.68) and (9.69) that (9.64) and (9.65)
are valid if and only if

    M{⋂_{i=1}^n (ξi∗c ⊂ Bic)} = ⋀_{i=1}^n M{ξi∗c ⊂ Bic},      (9.70)

    M{⋃_{i=1}^n (ξi∗c ⊂ Bic)} = ⋁_{i=1}^n M{ξi∗c ⊂ Bic}.      (9.71)

The above two equations are also equivalent to the independence of the
uncertain sets ξ1, ξ2, · · · , ξn. The theorem is thus proved.

Theorem 9.21 The uncertain sets ξ1, ξ2, · · · , ξn are independent if and
only if for any Borel sets B1, B2, · · · , Bn, we have

    M{⋂_{i=1}^n (Bi ⊄ ξi∗)} = ⋀_{i=1}^n M{Bi ⊄ ξi∗}       (9.72)

and

    M{⋃_{i=1}^n (Bi ⊄ ξi∗)} = ⋁_{i=1}^n M{Bi ⊄ ξi∗}       (9.73)

where ξi∗ are arbitrarily chosen from {ξi, ξic}, i = 1, 2, · · · , n,
respectively.

Proof: Since {Bi ⊄ ξi∗}c = {Bi ⊂ ξi∗} for i = 1, 2, · · · , n, it follows
from the duality of uncertain measure that

    M{⋂_{i=1}^n (Bi ⊄ ξi∗)} = 1 − M{⋃_{i=1}^n (Bi ⊂ ξi∗)},    (9.74)

    ⋀_{i=1}^n M{Bi ⊄ ξi∗} = 1 − ⋁_{i=1}^n M{Bi ⊂ ξi∗},        (9.75)

    M{⋃_{i=1}^n (Bi ⊄ ξi∗)} = 1 − M{⋂_{i=1}^n (Bi ⊂ ξi∗)},    (9.76)

    ⋁_{i=1}^n M{Bi ⊄ ξi∗} = 1 − ⋀_{i=1}^n M{Bi ⊂ ξi∗}.        (9.77)

It follows from (9.74), (9.75), (9.76) and (9.77) that (9.72) and (9.73)
are valid if and only if

    M{⋂_{i=1}^n (Bi ⊂ ξi∗)} = ⋀_{i=1}^n M{Bi ⊂ ξi∗},          (9.78)

    M{⋃_{i=1}^n (Bi ⊂ ξi∗)} = ⋁_{i=1}^n M{Bi ⊂ ξi∗}.          (9.79)

The above two equations are also equivalent to the independence of the
uncertain sets ξ1, ξ2, · · · , ξn. The theorem is thus proved.

9.4 Set Operational Law

This section will discuss the union, intersection and complement of
independent uncertain sets via membership functions.

Union of Uncertain Sets

Theorem 9.22 (Liu [133]) Let ξ and η be independent uncertain sets with
membership functions µ and ν, respectively. Then their union ξ ∪ η has a
membership function
    λ(x) = µ(x) ∨ ν(x).                                   (9.80)

Proof: In order to prove µ ∨ ν is a membership function of ξ ∪ η, we must
verify the two measure inversion formulas. Let B be any Borel set, and
write
    β = inf_{x∈B} µ(x) ∨ ν(x).

Then B ⊂ µ−1(β) ∪ ν−1(β). By the independence of ξ and η, we have

    M{B ⊂ (ξ ∪ η)} ≥ M{(µ−1(β) ∪ ν−1(β)) ⊂ (ξ ∪ η)}
                   ≥ M{(µ−1(β) ⊂ ξ) ∩ (ν−1(β) ⊂ η)}
                   = M{µ−1(β) ⊂ ξ} ∧ M{ν−1(β) ⊂ η}
                   ≥ β ∧ β = β.

Thus
    M{B ⊂ (ξ ∪ η)} ≥ inf_{x∈B} µ(x) ∨ ν(x).               (9.81)

On the other hand, for any x ∈ B, we have

    M{B ⊂ (ξ ∪ η)} ≤ M{x ∈ (ξ ∪ η)} = M{(x ∈ ξ) ∪ (x ∈ η)}
                   = M{x ∈ ξ} ∨ M{x ∈ η} = µ(x) ∨ ν(x).

Thus
    M{B ⊂ (ξ ∪ η)} ≤ inf_{x∈B} µ(x) ∨ ν(x).               (9.82)

It follows from (9.81) and (9.82) that

    M{B ⊂ (ξ ∪ η)} = inf_{x∈B} µ(x) ∨ ν(x).               (9.83)

The first measure inversion formula is verified. Next we prove the second
measure inversion formula. By the independence of ξ and η, we have

    M{(ξ ∪ η) ⊂ B} = M{(ξ ⊂ B) ∩ (η ⊂ B)} = M{ξ ⊂ B} ∧ M{η ⊂ B}
                   = (1 − sup_{x∈Bc} µ(x)) ∧ (1 − sup_{x∈Bc} ν(x))
                   = 1 − sup_{x∈Bc} µ(x) ∨ ν(x).

That is,
    M{(ξ ∪ η) ⊂ B} = 1 − sup_{x∈Bc} µ(x) ∨ ν(x).          (9.84)

The second measure inversion formula is verified. Therefore, the union
ξ ∪ η is proved to have the membership function µ ∨ ν by the measure
inversion formulas (9.83) and (9.84).

[Figure 9.10: Membership Function of Union of Uncertain Sets. Reprinted
from Liu [133].]

Intersection of Uncertain Sets

Theorem 9.23 (Liu [133]) Let ξ and η be independent uncertain sets with
membership functions µ and ν, respectively. Then their intersection ξ ∩ η
has a membership function
    λ(x) = µ(x) ∧ ν(x).                                   (9.85)

Proof: In order to prove µ ∧ ν is a membership function of ξ ∩ η, we must
verify the two measure inversion formulas. Let B be any Borel set. By the
independence of ξ and η, we have

    M{B ⊂ (ξ ∩ η)} = M{(B ⊂ ξ) ∩ (B ⊂ η)} = M{B ⊂ ξ} ∧ M{B ⊂ η}
                   = inf_{x∈B} µ(x) ∧ inf_{x∈B} ν(x) = inf_{x∈B} µ(x) ∧ ν(x).

That is,
    M{B ⊂ (ξ ∩ η)} = inf_{x∈B} µ(x) ∧ ν(x).               (9.86)

The first measure inversion formula is verified. In order to prove the
second measure inversion formula, we write

    β = sup_{x∈Bc} µ(x) ∧ ν(x).

Then for any given number ε > 0, we have µ−1(β + ε) ∩ ν−1(β + ε) ⊂ B. By
the independence of ξ and η, we obtain

    M{(ξ ∩ η) ⊂ B} ≥ M{(ξ ∩ η) ⊂ (µ−1(β + ε) ∩ ν−1(β + ε))}
                   ≥ M{(ξ ⊂ µ−1(β + ε)) ∩ (η ⊂ ν−1(β + ε))}
                   = M{ξ ⊂ µ−1(β + ε)} ∧ M{η ⊂ ν−1(β + ε)}
                   ≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.

Letting ε → 0, we get

    M{(ξ ∩ η) ⊂ B} ≥ 1 − sup_{x∈Bc} µ(x) ∧ ν(x).          (9.87)

On the other hand, for any x ∈ Bc, we have

    M{(ξ ∩ η) ⊂ B} ≤ M{x ∉ (ξ ∩ η)} = M{(x ∉ ξ) ∪ (x ∉ η)}
                   = M{x ∉ ξ} ∨ M{x ∉ η} = (1 − µ(x)) ∨ (1 − ν(x))
                   = 1 − µ(x) ∧ ν(x).

Thus
    M{(ξ ∩ η) ⊂ B} ≤ 1 − sup_{x∈Bc} µ(x) ∧ ν(x).          (9.88)

It follows from (9.87) and (9.88) that

    M{(ξ ∩ η) ⊂ B} = 1 − sup_{x∈Bc} µ(x) ∧ ν(x).          (9.89)

The second measure inversion formula is verified. Therefore, the
intersection ξ ∩ η is proved to have the membership function µ ∧ ν by the
measure inversion formulas (9.86) and (9.89).

[Figure 9.11: Membership Function of Intersection of Uncertain Sets.
Reprinted from Liu [133].]

Complement of Uncertain Set

Theorem 9.24 (Liu [133]) Let ξ be an uncertain set with membership
function µ. Then its complement ξc has a membership function

    λ(x) = 1 − µ(x).                                      (9.90)

Proof: In order to prove 1 − µ is a membership function of ξc, we must
verify the two measure inversion formulas. Let B be a Borel set. It follows
from the definition of membership function that

    M{B ⊂ ξc} = M{ξ ⊂ Bc} = 1 − sup_{x∈(Bc)c} µ(x) = inf_{x∈B} (1 − µ(x)),

    M{ξc ⊂ B} = M{Bc ⊂ ξ} = inf_{x∈Bc} µ(x) = 1 − sup_{x∈Bc} (1 − µ(x)).

Thus ξc has a membership function 1 − µ.
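Theorems 9.22–9.24 translate directly into point-wise operations on
membership functions, provided the sets are independent where the theorems
require it. The following Python sketch (our own illustration; the function
names are hypothetical) composes them for two triangular sets:

```python
# A minimal sketch: membership functions of union, intersection and
# complement, per (9.80), (9.85) and (9.90).

def union(mu, nu):
    return lambda x: max(mu(x), nu(x))   # λ = µ ∨ ν

def intersection(mu, nu):
    return lambda x: min(mu(x), nu(x))   # λ = µ ∧ ν

def complement(mu):
    return lambda x: 1.0 - mu(x)         # λ = 1 − µ

def tri(a, b, c):
    """Triangular membership function (a, b, c) as a callable."""
    return lambda x: max(0.0, min((x - a) / (b - a), (x - c) / (b - c)))

mu, nu = tri(0, 1, 2), tri(1, 2, 3)
print(union(mu, nu)(1.5), intersection(mu, nu)(1.5), complement(mu)(1.5))
# -> 0.5 0.5 0.5
```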

[Figure 9.12: Membership Function of Complement of Uncertain Set.
Reprinted from Liu [133].]

9.5 Arithmetic Operational Law

This section will present an arithmetic operational law of independent
uncertain sets via inverse membership functions, including addition,
subtraction, multiplication and division.

Theorem 9.25 (Liu [133]) Let ξ1, ξ2, · · · , ξn be independent uncertain
sets with inverse membership functions µ1−1, µ2−1, · · · , µn−1,
respectively. If f is a measurable function, then the uncertain set

    ξ = f(ξ1, ξ2, · · · , ξn)                             (9.91)

has an inverse membership function

    λ−1(α) = f(µ1−1(α), µ2−1(α), · · · , µn−1(α)).        (9.92)

Proof: For simplicity, we only prove the case n = 2. Let B be any Borel
set, and write
    β = inf_{x∈B} λ(x).

Then B ⊂ λ−1(β). Since λ−1(β) = f(µ1−1(β), µ2−1(β)), by the independence
of ξ1 and ξ2, we have

    M{B ⊂ ξ} ≥ M{λ−1(β) ⊂ ξ} = M{f(µ1−1(β), µ2−1(β)) ⊂ ξ}
             ≥ M{(µ1−1(β) ⊂ ξ1) ∩ (µ2−1(β) ⊂ ξ2)}
             = M{µ1−1(β) ⊂ ξ1} ∧ M{µ2−1(β) ⊂ ξ2}
             ≥ β ∧ β = β.

Thus
    M{B ⊂ ξ} ≥ inf_{x∈B} λ(x).                            (9.93)

On the other hand, for any given number ε > 0, we have B ⊄ λ−1(β + ε).
Since λ−1(β + ε) = f(µ1−1(β + ε), µ2−1(β + ε)), we obtain

    M{B ⊄ ξ} ≥ M{ξ ⊂ λ−1(β + ε)} = M{ξ ⊂ f(µ1−1(β + ε), µ2−1(β + ε))}
             ≥ M{(ξ1 ⊂ µ1−1(β + ε)) ∩ (ξ2 ⊂ µ2−1(β + ε))}
             = M{ξ1 ⊂ µ1−1(β + ε)} ∧ M{ξ2 ⊂ µ2−1(β + ε)}
             ≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε

and then
    M{B ⊂ ξ} = 1 − M{B ⊄ ξ} ≤ β + ε.

Letting ε → 0, we get

    M{B ⊂ ξ} ≤ β = inf_{x∈B} λ(x).                        (9.94)

It follows from (9.93) and (9.94) that

    M{B ⊂ ξ} = inf_{x∈B} λ(x).                            (9.95)

The first measure inversion formula is verified. In order to prove the
second measure inversion formula, we write

    β = sup_{x∈Bc} λ(x).

Then for any given number ε > 0, we have λ−1(β + ε) ⊂ B. Please note that
λ−1(β + ε) = f(µ1−1(β + ε), µ2−1(β + ε)). By the independence of ξ1 and
ξ2, we obtain

    M{ξ ⊂ B} ≥ M{ξ ⊂ λ−1(β + ε)} = M{ξ ⊂ f(µ1−1(β + ε), µ2−1(β + ε))}
             ≥ M{(ξ1 ⊂ µ1−1(β + ε)) ∩ (ξ2 ⊂ µ2−1(β + ε))}
             = M{ξ1 ⊂ µ1−1(β + ε)} ∧ M{ξ2 ⊂ µ2−1(β + ε)}
             ≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.

Letting ε → 0, we get

    M{ξ ⊂ B} ≥ 1 − sup_{x∈Bc} λ(x).                       (9.96)

On the other hand, for any given number ε > 0, we have λ−1(β − ε) ⊄ B.
Since λ−1(β − ε) = f(µ1−1(β − ε), µ2−1(β − ε)), we obtain

    M{ξ ⊄ B} ≥ M{λ−1(β − ε) ⊂ ξ} = M{f(µ1−1(β − ε), µ2−1(β − ε)) ⊂ ξ}
             ≥ M{(µ1−1(β − ε) ⊂ ξ1) ∩ (µ2−1(β − ε) ⊂ ξ2)}
             = M{µ1−1(β − ε) ⊂ ξ1} ∧ M{µ2−1(β − ε) ⊂ ξ2}
             ≥ (β − ε) ∧ (β − ε) = β − ε

and then
    M{ξ ⊂ B} = 1 − M{ξ ⊄ B} ≤ 1 − β + ε.

Letting ε → 0, we get

    M{ξ ⊂ B} ≤ 1 − β = 1 − sup_{x∈Bc} λ(x).               (9.97)

It follows from (9.96) and (9.97) that

    M{ξ ⊂ B} = 1 − sup_{x∈Bc} λ(x).                       (9.98)

The second measure inversion formula is verified. Therefore, ξ is proved
to have the membership function λ by the measure inversion formulas (9.95)
and (9.98).

Example 9.8: Let ξ = (a1, a2, a3) and η = (b1, b2, b3) be two independent
triangular uncertain sets. At first, ξ has an inverse membership function

    µ−1(α) = [(1 − α)a1 + αa2, αa2 + (1 − α)a3],          (9.99)

and η has an inverse membership function

    ν−1(α) = [(1 − α)b1 + αb2, αb2 + (1 − α)b3].          (9.100)

It follows from the operational law that the sum ξ + η has an inverse
membership function

    λ−1(α) = [(1 − α)(a1 + b1) + α(a2 + b2), α(a2 + b2) + (1 − α)(a3 + b3)].  (9.101)

In other words, the sum ξ + η is also a triangular uncertain set, and

    ξ + η = (a1 + b1, a2 + b2, a3 + b3).                  (9.102)

Example 9.9: Let ξ = (a1, a2, a3) and η = (b1, b2, b3) be two independent
triangular uncertain sets. It follows from the operational law that the
difference ξ − η has an inverse membership function

    λ−1(α) = [(1 − α)(a1 − b3) + α(a2 − b2), α(a2 − b2) + (1 − α)(a3 − b1)].  (9.103)

In other words, the difference ξ − η is also a triangular uncertain set,
and

    ξ − η = (a1 − b3, a2 − b2, a3 − b1).                  (9.104)

Example 9.10: Let ξ = (a1, a2, a3) be a triangular uncertain set, and k a
real number. When k ≥ 0, the product k · ξ has an inverse membership
function

    λ−1(α) = [(1 − α)(ka1) + α(ka2), α(ka2) + (1 − α)(ka3)].  (9.105)

That is, the product k · ξ is a triangular uncertain set (ka1, ka2, ka3).
When k < 0, the product k · ξ has an inverse membership function

    λ−1(α) = [(1 − α)(ka3) + α(ka2), α(ka2) + (1 − α)(ka1)].  (9.106)

That is, the product k · ξ is a triangular uncertain set (ka3, ka2, ka1).
In summary, we have

            { (ka1, ka2, ka3),  if k ≥ 0
    k · ξ = {                                             (9.107)
            { (ka3, ka2, ka1),  if k < 0.
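The computations in Examples 9.8 and 9.9 amount to interval arithmetic on
the α-cuts. A minimal Python sketch (our own, with hypothetical helper
names) for the sum and difference:

```python
# A minimal sketch: the operational law applied to interval-valued
# inverse membership functions, following Examples 9.8 and 9.9.

def tri_inverse(a, b, c):
    """Inverse membership function of the triangular set (a, b, c)."""
    return lambda alpha: ((1 - alpha) * a + alpha * b,
                          alpha * b + (1 - alpha) * c)

def add(inv1, inv2):
    return lambda alpha: (inv1(alpha)[0] + inv2(alpha)[0],
                          inv1(alpha)[1] + inv2(alpha)[1])

def sub(inv1, inv2):
    return lambda alpha: (inv1(alpha)[0] - inv2(alpha)[1],
                          inv1(alpha)[1] - inv2(alpha)[0])

xi, eta = tri_inverse(1, 2, 3), tri_inverse(0, 1, 2)
print(add(xi, eta)(0.5))  # (2.0, 4.0): the α-cut of (1, 3, 5) at α = 0.5
print(sub(xi, eta)(0.5))  # (0.0, 2.0): the α-cut of (-1, 1, 3) at α = 0.5
```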

Exercise 9.7: Let ξ = (a1, a2, a3, a4) and η = (b1, b2, b3, b4) be two
independent trapezoidal uncertain sets, and k a real number. Show that

    ξ + η = (a1 + b1, a2 + b2, a3 + b3, a4 + b4),         (9.108)

    ξ − η = (a1 − b4, a2 − b3, a3 − b2, a4 − b1),         (9.109)

            { (ka1, ka2, ka3, ka4),  if k ≥ 0
    k · ξ = {                                             (9.110)
            { (ka4, ka3, ka2, ka1),  if k < 0.

Monotone Function of Regular Uncertain Sets

In practice, it is usually required to deal with monotone functions of
regular uncertain sets. In this case, we have the following shortcut.

Theorem 9.26 (Liu [133]) Let ξ1, ξ2, · · · , ξn be independent uncertain
sets with regular membership functions µ1, µ2, · · · , µn, respectively.
If the function f(x1, x2, · · · , xn) is strictly increasing with respect
to x1, x2, · · · , xm and strictly decreasing with respect to
xm+1, xm+2, · · · , xn, then the uncertain set

    ξ = f(ξ1, ξ2, · · · , ξn)                             (9.111)

has a regular membership function, and

    λl−1(α) = f(µ1l−1(α), · · · , µml−1(α), µm+1,r−1(α), · · · , µnr−1(α)),  (9.112)

    λr−1(α) = f(µ1r−1(α), · · · , µmr−1(α), µm+1,l−1(α), · · · , µnl−1(α)),  (9.113)

where λl−1, µ1l−1, µ2l−1, · · · , µnl−1 are left inverse membership
functions, and λr−1, µ1r−1, µ2r−1, · · · , µnr−1 are right inverse
membership functions of ξ, ξ1, ξ2, · · · , ξn, respectively.

Proof: Note that µ1−1(α), µ2−1(α), · · · , µn−1(α) are intervals for each
α. Since f(x1, x2, · · · , xn) is strictly increasing with respect to
x1, x2, · · · , xm and strictly decreasing with respect to
xm+1, xm+2, · · · , xn, the value

    λ−1(α) = f(µ1−1(α), · · · , µm−1(α), µm+1−1(α), · · · , µn−1(α))

is also an interval. Thus ξ has a regular membership function, and its left
and right inverse membership functions are determined by (9.112) and
(9.113), respectively.
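The shortcut of Theorem 9.26 is easy to mechanize once the left and right
inverse membership functions are available as callables. The sketch below
(our own; the triangular data are an assumption for illustration)
reproduces the difference of Example 9.9:

```python
# A minimal sketch of the shortcut (9.112)-(9.113): f is increasing in its
# first m arguments and decreasing in the rest.

def monotone_inverse(f, lefts, rights, m, alpha):
    """Return (lambda_l^{-1}(alpha), lambda_r^{-1}(alpha))."""
    n = len(lefts)
    lo = [lefts[i](alpha) if i < m else rights[i](alpha) for i in range(n)]
    hi = [rights[i](alpha) if i < m else lefts[i](alpha) for i in range(n)]
    return f(*lo), f(*hi)

# xi = (0, 1, 2) and eta = (1, 2, 3); f(x, y) = x - y is increasing in x
# and decreasing in y, so m = 1. Then xi - eta = (-3, -1, 1) by (9.104).
xi_l  = lambda a: (1 - a) * 0 + a * 1
xi_r  = lambda a: a * 1 + (1 - a) * 2
eta_l = lambda a: (1 - a) * 1 + a * 2
eta_r = lambda a: a * 2 + (1 - a) * 3
f = lambda x, y: x - y

print(monotone_inverse(f, [xi_l, eta_l], [xi_r, eta_r], 1, 0.5))  # (-2.0, 0.0)
```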

Exercise 9.8: Let ξ and η be independent uncertain sets with left inverse
membership functions µl−1 and νl−1 and right inverse membership functions
µr−1 and νr−1, respectively. Show that the sum ξ + η has left and right
inverse membership functions

    λl−1(α) = µl−1(α) + νl−1(α),                          (9.114)

    λr−1(α) = µr−1(α) + νr−1(α).                          (9.115)

Exercise 9.9: Let ξ and η be independent uncertain sets with left inverse
membership functions µl−1 and νl−1 and right inverse membership functions
µr−1 and νr−1, respectively. Show that the difference ξ − η has left and
right inverse membership functions

    λl−1(α) = µl−1(α) − νr−1(α),                          (9.116)

    λr−1(α) = µr−1(α) − νl−1(α).                          (9.117)

Exercise 9.10: Let ξ and η be independent and positive uncertain sets with
left inverse membership functions µl−1 and νl−1 and right inverse
membership functions µr−1 and νr−1, respectively. Show that

    ξ / (ξ + η)                                           (9.118)

has left and right inverse membership functions

    λl−1(α) = µl−1(α) / (µl−1(α) + νr−1(α)),              (9.119)

    λr−1(α) = µr−1(α) / (µr−1(α) + νl−1(α)).              (9.120)

9.6 Expected Value

Recall that an uncertain set ξ is nonempty if ξ(γ) ≠ ∅ for almost all
γ ∈ Γ. This section will introduce a concept of expected value for
nonempty uncertain sets.

Definition 9.10 (Liu [127]) Let ξ be a nonempty uncertain set. Then the
expected value of ξ is defined by

    E[ξ] = ∫_0^{+∞} M{ξ ≽ x}dx − ∫_{−∞}^0 M{ξ ≼ x}dx      (9.121)

provided that at least one of the two integrals is finite.

Please note that ξ ≽ x represents “ξ is imaginarily included in [x, +∞)”,
and ξ ≼ x represents “ξ is imaginarily included in (−∞, x]”. What are the
appropriate values of M{ξ ≽ x} and M{ξ ≼ x}? Unfortunately, this problem
is not as simple as you think.
[Figure 9.13: {ξ ≥ x} ⊂ {ξ ≽ x} ⊂ {ξ ≮ x}]

Intuitively, for M{ξ ≽ x}, it is too conservative if we take the value
M{ξ ≥ x}, and it is too adventurous if we take the value 1 − M{ξ < x}.
Thus we assign M{ξ ≽ x} the middle value between M{ξ ≥ x} and
1 − M{ξ < x}. That is,

    M{ξ ≽ x} = (M{ξ ≥ x} + 1 − M{ξ < x}) / 2.             (9.122)

Similarly, we also define

    M{ξ ≼ x} = (M{ξ ≤ x} + 1 − M{ξ > x}) / 2.             (9.123)

Example 9.11: In order to illustrate the expected value operator, let us
consider an uncertain set

        { [1, 2]  with uncertain measure 0.6
    ξ = { [2, 3]  with uncertain measure 0.3
        { [3, 4]  with uncertain measure 0.2.

It follows from the definition of M{ξ ≽ x} and M{ξ ≼ x} that

               { 1,    if x ≤ 1
               { 0.7,  if 1 < x ≤ 2
    M{ξ ≽ x} = { 0.3,  if 2 < x ≤ 3
               { 0.1,  if 3 < x ≤ 4
               { 0,    if x > 4,

    M{ξ ≼ x} ≡ 0,  ∀x ≤ 0.

Thus
    E[ξ] = ∫_0^1 1 dx + ∫_1^2 0.7 dx + ∫_2^3 0.3 dx + ∫_3^4 0.1 dx = 2.1.
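As a sanity check (our own sketch), the expected value in Example 9.11 can
be recovered by numerically integrating M{ξ ≽ x} over [0, +∞) per (9.121);
here M{ξ ≼ x} vanishes for x ≤ 0, so only the first integral contributes:

```python
# A minimal numeric check of Example 9.11 via the midpoint rule.

def measure_succ(x):
    """M{ξ ≽ x} for the three-valued uncertain set of Example 9.11."""
    if x <= 1: return 1.0
    if x <= 2: return 0.7
    if x <= 3: return 0.3
    if x <= 4: return 0.1
    return 0.0

n, upper = 4000, 4.0          # the integrand vanishes beyond x = 4
dx = upper / n
print(sum(measure_succ((i + 0.5) * dx) for i in range(n)) * dx)  # 2.1
```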

How to Obtain Expected Value from Membership Function?

Let ξ be an uncertain set with membership function µ. In order to
calculate its expected value via (9.121), we must determine the values of
M{ξ ≽ x} and M{ξ ≼ x} from the membership function µ.

Theorem 9.27 Let ξ be an uncertain set with membership function µ. Then
for any real number x, we have

    M{ξ ≽ x} = (sup_{y≥x} µ(y) + 1 − sup_{y<x} µ(y)) / 2,  (9.124)

    M{ξ ≼ x} = (sup_{y≤x} µ(y) + 1 − sup_{y>x} µ(y)) / 2.  (9.125)

Proof: Since the uncertain set ξ has a membership function µ, the second
measure inversion formula tells us that

    M{ξ ≥ x} = 1 − sup_{y<x} µ(y),

    M{ξ < x} = 1 − sup_{y≥x} µ(y).

Thus (9.124) follows from (9.122) immediately. We may also prove (9.125)
similarly.
Theorem 9.28 (Liu [129]) Let ξ be an uncertain set with regular membership
function µ. Then

    E[ξ] = x0 + (1/2)∫_{x0}^{+∞} µ(x)dx − (1/2)∫_{−∞}^{x0} µ(x)dx  (9.126)

where x0 is a point such that µ(x0) = 1.

Proof: Since µ is increasing on (−∞, x0] and decreasing on [x0, +∞), it
follows from Theorem 9.27 that for almost all x, we have

               { 1 − µ(x)/2,  if x ≤ x0
    M{ξ ≽ x} = {                                          (9.127)
               { µ(x)/2,      if x ≥ x0

and
               { µ(x)/2,      if x ≤ x0
    M{ξ ≼ x} = {                                          (9.128)
               { 1 − µ(x)/2,  if x ≥ x0.

If x0 ≥ 0, then

    E[ξ] = ∫_0^{+∞} M{ξ ≽ x}dx − ∫_{−∞}^0 M{ξ ≼ x}dx
         = ∫_0^{x0} (1 − µ(x)/2)dx + ∫_{x0}^{+∞} (µ(x)/2)dx − ∫_{−∞}^0 (µ(x)/2)dx
         = x0 + (1/2)∫_{x0}^{+∞} µ(x)dx − (1/2)∫_{−∞}^{x0} µ(x)dx.

If x0 < 0, then

    E[ξ] = ∫_0^{+∞} M{ξ ≽ x}dx − ∫_{−∞}^0 M{ξ ≼ x}dx
         = ∫_0^{+∞} (µ(x)/2)dx − ∫_{−∞}^{x0} (µ(x)/2)dx − ∫_{x0}^0 (1 − µ(x)/2)dx
         = x0 + (1/2)∫_{x0}^{+∞} µ(x)dx − (1/2)∫_{−∞}^{x0} µ(x)dx.

The theorem is thus proved.

Remark 9.8: If the membership function of the uncertain set ξ is not
assumed to be regular, then

    E[ξ] = x0 + (1/2)∫_{x0}^{+∞} sup_{y≥x} µ(y)dx − (1/2)∫_{−∞}^{x0} sup_{y≤x} µ(y)dx.  (9.129)

Exercise 9.11: Show that the triangular uncertain set ξ = (a, b, c) has an
expected value

    E[ξ] = (a + 2b + c) / 4.                              (9.130)

Exercise 9.12: Show that the trapezoidal uncertain set ξ = (a, b, c, d)
has an expected value

    E[ξ] = (a + b + c + d) / 4.                           (9.131)

Theorem 9.29 (Liu [133]) Let ξ be a nonempty uncertain set with membership
function µ. Then

    E[ξ] = (1/2)∫_0^1 (inf µ−1(α) + sup µ−1(α)) dα        (9.132)

where inf µ−1(α) and sup µ−1(α) are the infimum and supremum of the α-cut,
respectively.

Proof: Since ξ is a nonempty uncertain set and has a finite expected
value, we may assume that there exists a point x0 such that µ(x0) = 1
(perhaps after a small perturbation). It is clear that the two integrals

    ∫_{x0}^{+∞} sup_{y≥x} µ(y)dx  and  ∫_0^1 (sup µ−1(α) − x0)dα

cover an identical area. Thus

    ∫_{x0}^{+∞} sup_{y≥x} µ(y)dx = ∫_0^1 (sup µ−1(α) − x0)dα = ∫_0^1 sup µ−1(α)dα − x0.

Similarly, we may prove

    ∫_{−∞}^{x0} sup_{y≤x} µ(y)dx = ∫_0^1 (x0 − inf µ−1(α))dα = x0 − ∫_0^1 inf µ−1(α)dα.

It follows from (9.129) that

    E[ξ] = x0 + (1/2)∫_{x0}^{+∞} sup_{y≥x} µ(y)dx − (1/2)∫_{−∞}^{x0} sup_{y≤x} µ(y)dx
         = x0 + (1/2)(∫_0^1 sup µ−1(α)dα − x0) − (1/2)(x0 − ∫_0^1 inf µ−1(α)dα)
         = (1/2)∫_0^1 (inf µ−1(α) + sup µ−1(α))dα.

The theorem is thus verified.
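Formula (9.132) also lends itself to direct numerical verification. The
sketch below (our own) averages the α-cut endpoints (9.43) of a triangular
set and compares with the closed form of Exercise 9.11:

```python
# A minimal numeric check of (9.132) for the triangular set (a, b, c):
# E[ξ] = (1/2) ∫ (inf µ^{-1}(α) + sup µ^{-1}(α)) dα = (a + 2b + c)/4.

a, b, c, n = 1.0, 2.0, 4.0, 100000
total = 0.0
for i in range(n):
    alpha = (i + 0.5) / n                 # midpoint rule on [0, 1]
    lo = (1 - alpha) * a + alpha * b      # inf of the α-cut, per (9.43)
    hi = alpha * b + (1 - alpha) * c      # sup of the α-cut, per (9.43)
    total += 0.5 * (lo + hi) / n
print(total, (a + 2 * b + c) / 4)         # both ≈ 2.25
```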
Theorem 9.30 (Liu [133]) Let ξ1, ξ2, · · · , ξn be independent uncertain
sets with regular membership functions µ1, µ2, · · · , µn, respectively.
If the function f(x1, x2, · · · , xn) is strictly increasing with respect
to x1, x2, · · · , xm and strictly decreasing with respect to
xm+1, xm+2, · · · , xn, then the uncertain set

    ξ = f(ξ1, ξ2, · · · , ξn)                             (9.133)

has an expected value

    E[ξ] = (1/2)∫_0^1 (µl−1(α) + µr−1(α)) dα              (9.134)

where µl−1(α) and µr−1(α) are determined by

    µl−1(α) = f(µ1l−1(α), · · · , µml−1(α), µm+1,r−1(α), · · · , µnr−1(α)),  (9.135)

    µr−1(α) = f(µ1r−1(α), · · · , µmr−1(α), µm+1,l−1(α), · · · , µnl−1(α)).  (9.136)

Proof: It follows from Theorems 9.26 and 9.29 immediately.

Exercise 9.13: Let ξ and η be independent and nonnegative uncertain sets
with regular membership functions µ and ν, respectively. Show that

    E[ξη] = (1/2)∫_0^1 (µl−1(α)νl−1(α) + µr−1(α)νr−1(α)) dα.  (9.137)

Exercise 9.14: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that

    E[ξ/η] = (1/2)∫_0^1 (µl−1(α)/νr−1(α) + µr−1(α)/νl−1(α)) dα.  (9.138)

Exercise 9.15: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that

    E[ξ/(ξ + η)] = (1/2)∫_0^1 (µl−1(α)/(µl−1(α) + νr−1(α)) + µr−1(α)/(µr−1(α) + νl−1(α))) dα.  (9.139)

Linearity of Expected Value Operator

Theorem 9.31 (Liu [133]) Let ξ and η be independent uncertain sets with
finite expected values. Then for any real numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].                           (9.140)

Proof: Denote the membership functions of ξ and η by µ and ν,
respectively. Then

    E[ξ] = (1/2)∫_0^1 (inf µ−1(α) + sup µ−1(α)) dα,

    E[η] = (1/2)∫_0^1 (inf ν−1(α) + sup ν−1(α)) dα.

Step 1: We first prove E[aξ] = aE[ξ]. The product aξ has an inverse
membership function
    λ−1(α) = aµ−1(α).

It follows from Theorem 9.29 that

    E[aξ] = (1/2)∫_0^1 (inf λ−1(α) + sup λ−1(α)) dα
          = (a/2)∫_0^1 (inf µ−1(α) + sup µ−1(α)) dα = aE[ξ].

Step 2: We then prove E[ξ + η] = E[ξ] + E[η]. The sum ξ + η has an inverse
membership function
    λ−1(α) = µ−1(α) + ν−1(α).

It follows from Theorem 9.29 that

    E[ξ + η] = (1/2)∫_0^1 (inf λ−1(α) + sup λ−1(α)) dα
             = (1/2)∫_0^1 (inf µ−1(α) + sup µ−1(α)) dα
               + (1/2)∫_0^1 (inf ν−1(α) + sup ν−1(α)) dα
             = E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
    E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].

The theorem is proved.

9.7 Variance

The variance of an uncertain set provides a degree of the spread of the
membership function around its expected value.

Definition 9.11 (Liu [130]) Let ξ be an uncertain set with finite expected
value e. Then the variance of ξ is defined by

    V[ξ] = E[(ξ − e)²].                                   (9.141)

This definition says that the variance is just the expected value of
(ξ − e)². Since (ξ − e)² is a nonnegative uncertain set, we also have

    V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x}dx.                    (9.142)

Please note that (ξ − e)² ≽ x represents “(ξ − e)² is imaginarily included
in [x, +∞)”. What is the appropriate value of M{(ξ − e)² ≽ x}?
Intuitively, it is too conservative if we take the value M{(ξ − e)² ≥ x},
and it is too adventurous if we take the value 1 − M{(ξ − e)² < x}. Thus
we assign M{(ξ − e)² ≽ x} the middle value between them. That is,

    M{(ξ − e)² ≽ x} = (M{(ξ − e)² ≥ x} + 1 − M{(ξ − e)² < x}) / 2.  (9.143)
Theorem 9.32 If ξ is an uncertain set with finite expected value, and a
and b are real numbers, then

    V[aξ + b] = a²V[ξ].                                   (9.144)

Proof: If ξ has an expected value e, then aξ + b has an expected value
ae + b. It follows from the definition of variance that

    V[aξ + b] = E[(aξ + b − ae − b)²] = a²E[(ξ − e)²] = a²V[ξ].
Theorem 9.33 Let ξ be an uncertain set with expected value e. Then
V[ξ] = 0 if and only if ξ = {e} almost surely.

Proof: We first assume V[ξ] = 0. It follows from the equation (9.142) that

    ∫_0^{+∞} M{(ξ − e)² ≽ x}dx = 0,

which implies M{(ξ − e)² ≽ x} = 0 for any x > 0. Hence M{ξ = {e}} = 1.
Conversely, assume M{ξ = {e}} = 1. Then we have M{(ξ − e)² ≽ x} = 0 for
any x > 0. Thus

    V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x}dx = 0.

The theorem is proved.

How to Obtain Variance from Membership Function?

Let ξ be an uncertain set with membership function µ. In order to
calculate its variance by (9.142), we must determine the value of
M{(ξ − e)² ≽ x} from the membership function µ.

Theorem 9.34 Let ξ be an uncertain set with membership function µ. Then
for any real numbers e and x, we have

    M{(ξ − e)² ≽ x} = (sup_{(y−e)²≥x} µ(y) + 1 − sup_{(y−e)²<x} µ(y)) / 2.  (9.145)

Proof: Since ξ is an uncertain set with membership function µ, it follows
from the measure inversion formula that for any real numbers e and x, we
have

    M{(ξ − e)² ≥ x} = 1 − sup_{(y−e)²<x} µ(y),

    M{(ξ − e)² < x} = 1 − sup_{(y−e)²≥x} µ(y).

The equation (9.145) is thus proved by (9.143).

Theorem 9.35 Let ξ be an uncertain set with membership function µ and
finite expected value e. Then

    V[ξ] = (1/2)∫_0^{+∞} (sup_{(y−e)²≥x} µ(y) + 1 − sup_{(y−e)²<x} µ(y)) dx.  (9.146)

Proof: This theorem follows from (9.142) and (9.145) immediately.
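Formula (9.146) can be evaluated numerically by taking the two suprema over
a sample grid. The following sketch (our own; the grid sizes are arbitrary
choices) treats the triangular set (0, 1, 2), for which (9.146) works out
to 1/6:

```python
# A minimal numeric sketch of (9.146) for the triangular set (0, 1, 2)
# with expected value e = 1 (by Exercise 9.11).

a, b, c, e = 0.0, 1.0, 2.0, 1.0
mu = lambda y: max(0.0, min((y - a) / (b - a), (y - c) / (b - c)))
ys = [a - 1 + i * 0.01 for i in range(401)]   # sample points for the sups

def integrand(x):
    s_ge = max((mu(y) for y in ys if (y - e) ** 2 >= x), default=0.0)
    s_lt = max((mu(y) for y in ys if (y - e) ** 2 < x), default=0.0)
    return 0.5 * (s_ge + 1.0 - s_lt)

n = 500
dx = (c - e) ** 2 / n                          # integrand vanishes beyond (c-e)^2
print(sum(integrand((i + 0.5) * dx) for i in range(n)) * dx)  # ≈ 0.1667
```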

9.8 Entropy

This section provides a definition of entropy to characterize the
uncertainty of uncertain sets.

Definition 9.12 (Liu [130]) Suppose that ξ is an uncertain set with
membership function µ. Then its entropy is defined by

    H[ξ] = ∫_{−∞}^{+∞} S(µ(x))dx                          (9.147)

where S(t) = −t ln t − (1 − t) ln(1 − t).

Remark 9.9: Note that the entropy (9.147) has the same form as de Luca
and Termini’s entropy for fuzzy set [32].

Remark 9.10: If ξ is a discrete uncertain set taking values in
{x1, x2, · · · }, then the entropy becomes

    H[ξ] = Σ_{i=1}^∞ S(µ(xi)).                            (9.148)

Example 9.12: A crisp set A of real numbers is a special uncertain set
ξ(γ) ≡ A. Then its membership function is

           { 1,  if x ∈ A
    µ(x) = {
           { 0,  if x ∉ A

and its entropy is

    H[ξ] = ∫_{−∞}^{+∞} S(µ(x))dx = ∫_{−∞}^{+∞} 0 dx = 0.

Exercise 9.16: Let ξ = (a, b, c) be a triangular uncertain set. Show that
its entropy is

    H[ξ] = (c − a) / 2.                                   (9.149)

Exercise 9.17: Let ξ = (a, b, c, d) be a trapezoidal uncertain set. Show
that its entropy is

    H[ξ] = (b − a + d − c) / 2.                           (9.150)
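Exercise 9.16 can be checked numerically by integrating S(µ(x)) directly,
as in the following sketch (ours; the grid size is arbitrary):

```python
# A minimal numeric check of Exercise 9.16: integrating
# S(µ(x)) = −µ ln µ − (1 − µ) ln(1 − µ) over the triangular set (0, 1, 3)
# should give (c − a)/2 = 1.5.

from math import log

def S(t):
    if t <= 0.0 or t >= 1.0:
        return 0.0        # S vanishes at the endpoints
    return -t * log(t) - (1 - t) * log(1 - t)

a, b, c, n = 0.0, 1.0, 3.0, 20000
dx = (c - a) / n
mu = lambda x: (x - a) / (b - a) if x <= b else (x - c) / (b - c)
print(sum(S(mu(a + (i + 0.5) * dx)) for i in range(n)) * dx)  # ≈ 1.5
```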
Theorem 9.36 Let ξ be an uncertain set. Then H[ξ] ≥ 0, and equality holds
if ξ is essentially a crisp set.

Proof: The nonnegativity is clear. In addition, when an uncertain set
tends to a crisp set, its entropy tends to the minimum value 0.

Theorem 9.37 Let ξ be an uncertain set on the interval [a, b]. Then

    H[ξ] ≤ (b − a) ln 2                                   (9.151)

and equality holds if ξ has a membership function µ(x) = 0.5 on [a, b].

Proof: The theorem follows from the fact that the function S(t) reaches
its maximum value ln 2 at t = 0.5.

Theorem 9.38 Let ξ be an uncertain set, and let ξc be its complement. Then

    H[ξc] = H[ξ].                                         (9.152)

Proof: Write the membership function of ξ by µ. Then its complement ξc
has a membership function 1 − µ(x). It follows from the definition of
entropy that

    H[ξc] = ∫_{−∞}^{+∞} S(1 − µ(x))dx = ∫_{−∞}^{+∞} S(µ(x))dx = H[ξ].

The theorem is proved.

Theorem 9.39 (Yao [249]) Let ξ be an uncertain set with regular membership
function µ. Then

    H[ξ] = ∫_0^1 (µl−1(α) − µr−1(α)) ln(α/(1 − α)) dα.    (9.153)

Proof: It is clear that S(α) = −α ln α − (1 − α) ln(1 − α) is a
differentiable function whose derivative is

    S′(α) = −ln(α/(1 − α)).

Let x0 be a point such that µ(x0) = 1. Then we have

    H[ξ] = ∫_{−∞}^{+∞} S(µ(x))dx = ∫_{−∞}^{x0} S(µ(x))dx + ∫_{x0}^{+∞} S(µ(x))dx
         = ∫_{−∞}^{x0} ∫_0^{µ(x)} S′(α)dαdx + ∫_{x0}^{+∞} ∫_0^{µ(x)} S′(α)dαdx.

It follows from the Fubini theorem that

    H[ξ] = ∫_0^1 ∫_{µl−1(α)}^{x0} S′(α)dxdα + ∫_0^1 ∫_{x0}^{µr−1(α)} S′(α)dxdα
         = ∫_0^1 (x0 − µl−1(α))S′(α)dα + ∫_0^1 (µr−1(α) − x0)S′(α)dα
         = ∫_0^1 (µr−1(α) − µl−1(α))S′(α)dα
         = ∫_0^1 (µl−1(α) − µr−1(α)) ln(α/(1 − α)) dα.

The theorem is verified.

Positive Linearity of Entropy

Theorem 9.40 (Yao [249]) Let ξ and η be independent uncertain sets. Then
for any real numbers a and b, we have

    H[aξ + bη] = |a|H[ξ] + |b|H[η].                       (9.154)

Proof: Without loss of generality, assume the uncertain sets ξ and η have
regular membership functions µ and ν, respectively.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right
inverse membership functions of aξ are

    λl−1(α) = aµl−1(α),  λr−1(α) = aµr−1(α).

It follows from Theorem 9.39 that

    H[aξ] = ∫_0^1 (aµl−1(α) − aµr−1(α)) ln(α/(1 − α)) dα = aH[ξ] = |a|H[ξ].

If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then we
have
    λl−1(α) = aµr−1(α),  λr−1(α) = aµl−1(α)

and
    H[aξ] = ∫_0^1 (aµr−1(α) − aµl−1(α)) ln(α/(1 − α)) dα = (−a)H[ξ] = |a|H[ξ].

Thus we always have H[aξ] = |a|H[ξ].

Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the left and right
inverse membership functions of ξ + η are

    λl−1(α) = µl−1(α) + νl−1(α),  λr−1(α) = µr−1(α) + νr−1(α).

It follows from Theorem 9.39 that

    H[ξ + η] = ∫_0^1 (λl−1(α) − λr−1(α)) ln(α/(1 − α)) dα
             = ∫_0^1 (µl−1(α) + νl−1(α) − µr−1(α) − νr−1(α)) ln(α/(1 − α)) dα
             = H[ξ] + H[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that

    H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].

The theorem is proved.

Exercise 9.18: Let ξ be an uncertain set, and let A be a crisp set. Show
that
H[ξ + A] = H[ξ]. (9.155)
That is, the entropy is invariant under arbitrary translations.

9.9 Distance

Definition 9.13 (Liu [130]) The distance between uncertain sets ξ and η
is defined as

    d(ξ, η) = E[|ξ − η|].                                 (9.156)

That is, the distance between ξ and η is just the expected value of
|ξ − η|. Since |ξ − η| is a nonnegative uncertain set, we have

    d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≽ x}dx.                  (9.157)

Please note that |ξ − η| ≽ x represents “|ξ − η| is imaginarily included
in [x, +∞)”. What is the appropriate value of M{|ξ − η| ≽ x}? Intuitively,
it is too conservative if we take the value M{|ξ − η| ≥ x}, and it is too
adventurous if we take the value 1 − M{|ξ − η| < x}. Thus we assign
M{|ξ − η| ≽ x} the middle value between them. That is,

    M{|ξ − η| ≽ x} = (M{|ξ − η| ≥ x} + 1 − M{|ξ − η| < x}) / 2.  (9.158)
Theorem 9.41 Let ξ and η be uncertain sets. Then for any real number x,
we have

    M{|ξ − η| ≽ x} = (sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y)) / 2  (9.159)

where λ is the membership function of ξ − η.

Proof: Since ξ − η is an uncertain set with membership function λ, it
follows from the measure inversion formula that for any real number x, we
have

    M{|ξ − η| ≥ x} = 1 − sup_{|y|<x} λ(y),

    M{|ξ − η| < x} = 1 − sup_{|y|≥x} λ(y).

The equation (9.159) is thus proved by (9.158).

Theorem 9.42 Let ξ and η be uncertain sets. Then the distance between ξ
and η is

    d(ξ, η) = (1/2)∫_0^{+∞} (sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y)) dx  (9.160)

where λ is the membership function of ξ − η.

Proof: The theorem follows from (9.157) and (9.159) immediately.

9.10 Conditional Membership Function

What is the conditional membership function of an uncertain set ξ after it
has been learned that some event A has occurred? This section will answer
this question. At first, it follows from the definition of conditional
uncertain measure that

                 { M{(B ⊂ ξ) ∩ (ξ ⊂ A)}/M{ξ ⊂ A},      if M{(B ⊂ ξ) ∩ (ξ ⊂ A)}/M{ξ ⊂ A} < 0.5
    M{B ⊂ ξ|A} = { 1 − M{(B ⊄ ξ) ∩ (ξ ⊂ A)}/M{ξ ⊂ A},  if M{(B ⊄ ξ) ∩ (ξ ⊂ A)}/M{ξ ⊂ A} < 0.5
                 { 0.5,                                otherwise,

                 { M{(ξ ⊂ B) ∩ (ξ ⊂ A)}/M{ξ ⊂ A},      if M{(ξ ⊂ B) ∩ (ξ ⊂ A)}/M{ξ ⊂ A} < 0.5
    M{ξ ⊂ B|A} = { 1 − M{(ξ ⊄ B) ∩ (ξ ⊂ A)}/M{ξ ⊂ A},  if M{(ξ ⊄ B) ∩ (ξ ⊂ A)}/M{ξ ⊂ A} < 0.5
                 { 0.5,                                otherwise.

Definition 9.14 Let ξ be an uncertain set, and let A be an event with
M{A} > 0. Then the conditional uncertain set ξ given A is said to have a
membership function µ(x|A) if for any Borel set B, we have

    M{B ⊂ ξ|A} = inf_{x∈B} µ(x|A),                        (9.161)

    M{ξ ⊂ B|A} = 1 − sup_{x∈Bc} µ(x|A).                   (9.162)

9.11 Uncertain Statistics

In order to determine the membership function of an uncertain set, Liu
[130] designed a questionnaire survey for collecting expert’s experimental
data, and introduced the empirical membership function (i.e., the linear
interpolation method) and the principle of least squares.

Expert’s Experimental Data

Expert’s experimental data were suggested by Liu [130] to represent
expert’s knowledge about the membership function to be determined. The
first step is to ask the domain expert to choose a possible point x that
the uncertain set ξ may contain, and then quiz him

    “How likely does x belong to ξ?”                      (9.163)

Assume the expert’s belief degree is α in uncertain measure. Note that the
expert’s belief degree of x not belonging to ξ must be 1 − α due to the
duality of uncertain measure. An expert’s experimental data (x, α) is thus
acquired from the domain expert. Repeating the above process, the
following expert’s experimental data are obtained by the questionnaire:

    (x1, α1), (x2, α2), · · · , (xn, αn).                 (9.164)

Empirical Membership Function

How do we determine the membership function for an uncertain set? The
first method is the linear interpolation method developed by Liu [130].
Assume that we have obtained a set of expert’s experimental data

    (x1, α1), (x2, α2), · · · , (xn, αn).                 (9.165)

Without loss of generality, we also assume x1 < x2 < · · · < xn. Based on
those expert’s experimental data, an empirical membership function is
determined as follows:

           { αi + (αi+1 − αi)(x − xi)/(xi+1 − xi),  if xi ≤ x ≤ xi+1, 1 ≤ i < n
    µ(x) = {
           { 0,  otherwise.
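A minimal Python sketch of this linear interpolation scheme (our own; the
helper name is hypothetical), exercised on the “about 100km” data acquired
later in this section:

```python
# A minimal sketch: the empirical membership function built from expert's
# experimental data (x_i, alpha_i) by linear interpolation.

def empirical_membership(data):
    """data: list of (x, alpha) pairs sorted by x."""
    def mu(x):
        for (x1, a1), (x2, a2) in zip(data, data[1:]):
            if x1 <= x <= x2:
                return a1 + (a2 - a1) * (x - x1) / (x2 - x1)
        return 0.0  # zero outside [x_1, x_n]
    return mu

data = [(80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0)]
mu = empirical_membership(data)
print(mu(85), mu(100), mu(115))  # 0.25 1.0 0.25
```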

[Figure 9.14: Empirical Membership Function µ(x)]

Principle of Least Squares

The principle of least squares was first employed to determine membership
functions by Liu [130]. Assume that a membership function to be determined
has a known functional form µ(x|θ) with an unknown parameter θ. In order
to estimate the parameter θ, we may employ the principle of least squares,
which minimizes the sum of the squares of the distances of the expert’s
experimental data to the membership function. If the expert’s experimental
data

    (x1, α1), (x2, α2), · · · , (xn, αn)                  (9.166)

are obtained, then we have

    min_θ Σ_{i=1}^n (µ(xi|θ) − αi)².                      (9.167)

The optimal solution θ̂ of (9.167) is called the least squares estimate of
θ, and then the least squares membership function is µ(x|θ̂).

Example 9.13: Assume that a membership function has a trapezoidal form
(a, b, c, d). We also assume the following expert’s experimental data:

    (1, 0.15), (2, 0.45), (3, 0.90), (6, 0.85), (7, 0.60), (8, 0.20). (9.168)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may
yield that the least squares membership function has a trapezoidal form
(0.6667, 3.3333, 5.6154, 8.6923).
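The Matlab toolbox is not required to reproduce this fit. A rough
cross-check (our own sketch, assuming SciPy is available; the starting
point and the absence of ordering constraints are simplifying assumptions)
minimizes (9.167) with a generic optimizer:

```python
# A minimal sketch: fitting trapezoidal parameters (a, b, c, d) to the
# expert's experimental data of Example 9.13 by least squares (9.167).

import numpy as np
from scipy.optimize import minimize

data = [(1, 0.15), (2, 0.45), (3, 0.90), (6, 0.85), (7, 0.60), (8, 0.20)]

def trapezoidal(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def loss(theta):
    a, b, c, d = theta
    return sum((trapezoidal(x, a, b, c, d) - alpha) ** 2 for x, alpha in data)

# No a < b < c < d constraints are imposed; fine for this data set,
# though not in general.
res = minimize(loss, x0=[0.0, 3.0, 6.0, 9.0], method="Nelder-Mead")
print(np.round(res.x, 4))  # should land near (0.6667, 3.3333, 5.6154, 8.6923)
```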

What is “about 100km”?


Let us pay attention to the concept of “about 100km”. When we are inter-
ested in what distances can be considered “about 100km”, it is reasonable to
regard such a concept as an uncertain set. In order to determine the mem-
bership function of “about 100km”, a questionnaire survey was made for
collecting expert’s experimental data. The consultation process is as follows:

Q1: May I ask you what distances belong to “about 100km”? What do you
think is the minimum distance?

A1: 80km. (an expert’s experimental data (80, 0) is acquired)

Q2: What do you think is the maximum distance?

A2: 120km. (an expert’s experimental data (120, 0) is acquired)

Q3: What distance do you think belongs to “about 100km”?

A3: 95km.

Q4: What is the belief degree that 95km belongs to “about 100km”?

A4: 1. (an expert’s experimental data (95, 1) is acquired)

Q5: Is there another distance that belongs to “about 100km”?

A5: 105km.

Q6: What is the belief degree that 105km belongs to “about 100km”?

A6: 1. (an expert’s experimental data (105, 1) is acquired)

Q7: Is there another distance that belongs to “about 100km”?

A7: 90km.

Q8: What is the belief degree that 90km belongs to “about 100km”?

A8: 0.5. (an expert’s experimental data (90, 0.5) is acquired)

Q9: Is there another distance that belongs to “about 100km”?

A9: 110km.

Q10: What is the belief degree that 110km belongs to “about 100km”?

A10: 0.5. (an expert’s experimental data (110, 0.5) is acquired)

Q11: Is there another distance that belongs to “about 100km”?

A11: No idea.

Until now six expert’s experimental data (80, 0), (90, 0.5), (95, 1), (105, 1),
(110, 0.5), (120, 0) are acquired from the domain expert. Based on those
expert’s experimental data, an empirical membership function of “about
100km” is produced and shown by Figure 9.15.

[Figure 9.15: Empirical Membership Function of “about 100km”]

9.12 Bibliographic Notes
In order to model unsharp concepts like “young”, “tall” and “most”, the
concept of uncertain set was proposed by Liu [127] in 2010, and the concepts
of membership function and inverse membership function were presented by
Liu [133] in 2012. Following that, Liu [136] defined the independence of un-
certain sets, and provided an operational law through membership functions
in 2013.
The expected value of uncertain set was defined by Liu [127]. Then Liu
[129] gave a formula for calculating the expected value by membership func-
tion, and Liu [133] provided a formula by inverse membership function. Based
on expected value operator, Liu [130] presented the concept of variance and
distance between uncertain sets.
The first concept of entropy was presented by Liu [130] for measuring the
uncertainty of uncertain set. As extensions of entropy, Wang and Ha [229]
suggested a quadratic entropy, and Yao [249] proposed a cross entropy for
comparing a membership function against a reference membership function.
In order to determine membership functions, a questionnaire survey for
collecting expert’s experimental data was designed by Liu [130]. Based on
expert’s experimental data, Liu [130] also suggested the linear interpolation
method and the principle of least squares to determine membership functions.
When multiple domain experts are available, the Delphi method was introduced
to uncertain statistics by Wang and Wang [231].
Chapter 10

Uncertain Logic

Uncertain logic is a methodology for calculating the truth values of uncertain


propositions via uncertain set theory. This chapter will introduce individual
feature data, uncertain quantifier, uncertain subject, uncertain predicate,
uncertain proposition, and truth value. Uncertain logic may provide a flexible
means for extracting linguistic summary from a collection of raw data.

10.1 Individual Feature Data


At first, we should have a universe A of individuals we are talking about.
Without loss of generality, we may assume that A consists of n individuals
and is represented by
A = {a1 , a2 , · · · , an }. (10.1)
In order to deal with the universe A, we should have feature data of all
individuals a1 , a2 , · · · , an . When we talk about “those days are warm”, we
should know the individual feature data of all days, for example,

A = {22, 23, 25, 28, 30, 32, 36} (10.2)

whose elements are temperatures in centigrades. When we talk about “those


students are young”, we should know the individual feature data of all stu-
dents, for example,

A = {21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40} (10.3)

whose elements are ages in years. When we talk about “those sportsmen
are tall”, we should know the individual feature data of all sportsmen, for
example,
A = {175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196}    (10.4)
whose elements are heights in centimeters.


Sometimes the individual feature data are represented by vectors rather
than scalar numbers. When we talk about “those young students are tall”, we
should know the individual feature data of all students, for example,

A = { (24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188),
      (28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188),
      (38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170) }    (10.5)
whose elements are ages and heights in years and centimeters, respectively.

10.2 Uncertain Quantifier


If we want to represent all individuals in the universe A, we use the universal
quantifier (∀) and
∀ = “for all”. (10.6)
If we want to represent some (at least one) individuals, we use the existential
quantifier (∃) and
∃ = “there exists at least one”. (10.7)
In addition to the two quantifiers, there are numerous imprecise quantifiers in
human language, for example, almost all, almost none, many, several, some,
most, a few, about half. This section will model them by the concept of
uncertain quantifier.

Definition 10.1 (Liu [130]) Uncertain quantifier is an uncertain set repre-


senting the number of individuals.

Example 10.1: The universal quantifier (∀) on the universe A is a special


uncertain quantifier,
∀ ≡ {n} (10.8)
whose membership function is

λ(x) =
  1,   if x = n
  0,   otherwise.          (10.9)

Example 10.2: The existential quantifier (∃) on the universe A is a special


uncertain quantifier,
∃ ≡ {1, 2, · · · , n} (10.10)
whose membership function is

λ(x) =
  0,   if x = 0
  1,   otherwise.          (10.11)

Example 10.3: The quantifier “there does not exist one” on the universe A
is a special uncertain quantifier

Q ≡ {0} (10.12)

whose membership function is

λ(x) =
  1,   if x = 0
  0,   otherwise.          (10.13)

Example 10.4: The quantifier “there exist exactly m” on the universe A is


a special uncertain quantifier
Q ≡ {m} (10.14)
whose membership function is

λ(x) =
  1,   if x = m
  0,   otherwise.          (10.15)

Example 10.5: The quantifier “there exist at least m” on the universe A is


a special uncertain quantifier

Q ≡ {m, m + 1, · · · , n} (10.16)

whose membership function is

λ(x) =
  1,   if m ≤ x ≤ n
  0,   if 0 ≤ x < m.          (10.17)

Example 10.6: The quantifier “there exist at most m” on the universe A is


a special uncertain quantifier

Q ≡ {0, 1, 2, · · · , m} (10.18)

whose membership function is

λ(x) =
  1,   if 0 ≤ x ≤ m
  0,   if m < x ≤ n.          (10.19)

Example 10.7: The uncertain quantifier Q of “almost all ” on the universe


A may have a membership function

λ(x) =
  0,               if 0 ≤ x ≤ n − 5
  (x − n + 5)/3,   if n − 5 ≤ x ≤ n − 2
  1,               if n − 2 ≤ x ≤ n.          (10.20)



Figure 10.1: Membership Function of Quantifier “almost all ”

Example 10.8: The uncertain quantifier Q of “almost none” on the universe


A may have a membership function

λ(x) =
  0,           if 0 ≤ x ≤ 2... wait


Figure 10.2: Membership Function of Quantifier “almost none”

Example 10.9: The uncertain quantifier Q of “about 10 ” on the universe A


may have a membership function

λ(x) =
  0,            if 0 ≤ x ≤ 7
  (x − 7)/2,    if 7 ≤ x ≤ 9
  1,            if 9 ≤ x ≤ 11
  (13 − x)/2,   if 11 ≤ x ≤ 13
  0,            if 13 ≤ x ≤ n.          (10.22)

Example 10.10: In many cases, it is more convenient for us to use a per-


centage than an absolute quantity. For example, we may use the uncertain


Figure 10.3: Membership Function of Quantifier “about 10 ”

quantifier Q of “about 70% ”. In this case, a possible membership function of


Q is

λ(x) =
  0,             if 0 ≤ x ≤ 0.6
  20(x − 0.6),   if 0.6 ≤ x ≤ 0.65
  1,             if 0.65 ≤ x ≤ 0.75
  20(0.8 − x),   if 0.75 ≤ x ≤ 0.8
  0,             if 0.8 ≤ x ≤ 1.          (10.23)


Figure 10.4: Membership Function of Quantifier “about 70% ”
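All of the imprecise quantifiers above have trapezoidal membership functions, so in experiments it is convenient to generate them from their four corner points. The following Python helper is a small sketch of this idea; the name trapezoid and the corner-point parametrization are our own, not the book’s:

    def trapezoid(a, b, c, d):
        # Trapezoidal membership function: 0 up to a, rising on [a, b],
        # equal to 1 on [b, c], falling on [c, d], and 0 after d. For
        # instance, trapezoid(7, 9, 11, 13) reproduces (10.22) for
        # "about 10", and trapezoid(0.6, 0.65, 0.75, 0.8) reproduces
        # (10.23) for "about 70%".
        def lam(x):
            if x <= a or x >= d:
                return 0.0
            if x < b:
                return (x - a) / (b - a)
            if x <= c:
                return 1.0
            return (d - x) / (d - c)
        return lam

One-sided quantifiers such as “almost all” in (10.20) can be coded similarly by dropping the falling edge.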

Definition 10.2 An uncertain quantifier is said to be unimodal if its mem-


bership function is unimodal.

Example 10.11: The uncertain quantifiers “almost all”, “almost none”,


“about 10” and “about 70%” are unimodal.

Definition 10.3 An uncertain quantifier is said to be monotone if its mem-


bership function is monotone. Especially, an uncertain quantifier is said to be
increasing if its membership function is increasing; and an uncertain quanti-
fier is said to be decreasing if its membership function is decreasing.

The uncertain quantifiers “almost all” and “almost none” are monotone,
but “about 10” and “about 70%” are not monotone. Note that both increas-
ing uncertain quantifiers and decreasing uncertain quantifiers are monotone.
In addition, any monotone uncertain quantifier is unimodal.

Negated Quantifier
What is the negation of an uncertain quantifier? The following definition
gives a formal answer.
Definition 10.4 Let Q be an uncertain quantifier. Then the negated quan-
tifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,
¬Q = Qc . (10.24)

Example 10.12: Let ∀ = {n} be the universal quantifier. Then its negated
quantifier is
¬∀ ≡ {0, 1, 2, · · · , n − 1}. (10.25)

Example 10.13: Let ∃ = {1, 2, · · · , n} be the existential quantifier. Then


its negated quantifier is
¬∃ ≡ {0}. (10.26)
Theorem 10.1 Let Q be an uncertain quantifier whose membership function
is λ. Then the negated quantifier ¬Q has a membership function
¬λ(x) = 1 − λ(x). (10.27)
Proof: This theorem follows from the operational law of uncertain set im-
mediately.

Example 10.14: Let Q be the uncertain quantifier “almost all ” defined by


(10.20). Then its negated quantifier ¬Q has a membership function

¬λ(x) =
  1,               if 0 ≤ x ≤ n − 5
  (n − x − 2)/3,   if n − 5 ≤ x ≤ n − 2
  0,               if n − 2 ≤ x ≤ n.          (10.28)

Example 10.15: Let Q be the uncertain quantifier “about 70% ” defined by


(10.23). Then its negated quantifier ¬Q has a membership function

¬λ(x) =
  1,              if 0 ≤ x ≤ 0.6
  20(0.65 − x),   if 0.6 ≤ x ≤ 0.65
  0,              if 0.65 ≤ x ≤ 0.75
  20(x − 0.75),   if 0.75 ≤ x ≤ 0.8
  1,              if 0.8 ≤ x ≤ 1.          (10.29)



Figure 10.5: Membership Function of Negated Quantifier of “almost all ”


Figure 10.6: Membership Function of Negated Quantifier of “about 70% ”

Theorem 10.2 Let Q be an uncertain quantifier. Then we have ¬¬Q = Q.

Proof: This theorem follows from ¬¬Q = ¬Qc = (Qc )c = Q.

Theorem 10.3 If Q is a monotone uncertain quantifier, then ¬Q is also


monotone. Especially, if Q is increasing, then ¬Q is decreasing; if Q is de-
creasing, then ¬Q is increasing.

Proof: Assume λ is the membership function of Q. Then ¬Q has a member-


ship function 1 − λ(x). The theorem follows from this fact immediately.

Dual Quantifier
Definition 10.5 Let Q be an uncertain quantifier. Then the dual quantifier
of Q is
Q∗ = ∀ − Q. (10.30)

Remark 10.1: Note that Q and Q∗ are dependent uncertain sets such that
Q + Q∗ ≡ ∀. Since the cardinality of the universe A is n, we also have

Q∗ = n − Q. (10.31)

Example 10.16: Since ∀ ≡ {n}, we immediately have ∀∗ = {0} = ¬∃. That


is
∀∗ ≡ ¬∃. (10.32)

Example 10.17: Since ¬∀ = {0, 1, 2, · · · , n − 1}, we immediately have


(¬∀)∗ = {1, 2, · · · , n} = ∃. That is,

(¬∀)∗ ≡ ∃. (10.33)

Example 10.18: Since ∃ ≡ {1, 2, · · · , n}, we have ∃∗ = {0, 1, 2, · · · , n−1} =


¬∀. That is,
∃∗ ≡ ¬∀. (10.34)

Example 10.19: Since ¬∃ = {0}, we immediately have (¬∃)∗ = {n} = ∀.


That is,
(¬∃)∗ = ∀. (10.35)

Theorem 10.4 Let Q be an uncertain quantifier whose membership function


is λ. Then the dual quantifier Q∗ has a membership function

λ∗ (x) = λ(n − x) (10.36)

where n is the cardinality of the universe A.

Proof: This theorem follows from the operational law of uncertain set im-
mediately.
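Theorems 10.1 and 10.4 translate directly into code: negation flips a membership function, while the dual reflects it about the cardinality of the universe. A minimal Python sketch (the function names are ours):

    def negated(lam):
        # Theorem 10.1: the negated quantifier has membership 1 - lam(x).
        return lambda x: 1.0 - lam(x)

    def dual(lam, n):
        # Theorem 10.4: the dual quantifier has membership lam(n - x),
        # where n is the cardinality of the universe A.
        return lambda x: lam(n - x)

For instance, applying dual to the “almost all” quantifier of (10.20) yields exactly the membership function (10.37) below.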

Example 10.20: Let Q be the uncertain quantifier “almost all ” defined by


(10.20). Then its dual quantifier Q∗ has a membership function

λ∗(x) =
  1,           if 0 ≤ x ≤ 2
  (5 − x)/3,   if 2 ≤ x ≤ 5
  0,           if 5 ≤ x ≤ n.          (10.37)

Example 10.21: Let Q be the uncertain quantifier “about 70% ” defined by


(10.23). Then its dual quantifier Q∗ has a membership function

λ∗(x) =
  0,             if 0 ≤ x ≤ 0.2
  20(x − 0.2),   if 0.2 ≤ x ≤ 0.25
  1,             if 0.25 ≤ x ≤ 0.35
  20(0.4 − x),   if 0.35 ≤ x ≤ 0.4
  0,             if 0.4 ≤ x ≤ 1.          (10.38)



Figure 10.7: Membership Function of Dual Quantifier of “almost all ”



Figure 10.8: Membership Function of Dual Quantifier of “about 70% ”

Theorem 10.5 Let Q be an uncertain quantifier. Then we have Q∗∗ = Q.

Proof: The theorem follows from Q∗∗ = ∀ − Q∗ = ∀ − (∀ − Q) = Q.

Theorem 10.6 If Q is a unimodal uncertain quantifier, then Q∗ is also uni-


modal. Especially, if Q is monotone, then Q∗ is monotone; if Q is increasing,
then Q∗ is decreasing; if Q is decreasing, then Q∗ is increasing.

Proof: Assume λ is the membership function of Q. Then Q∗ has a member-


ship function λ(n − x). The theorem follows from this fact immediately.

10.3 Uncertain Subject


Sometimes, we are interested in a subset of the universe of individuals, for
example, “warm days”, “young students” and “tall sportsmen”. This section
will model them by the concept of uncertain subject.

Definition 10.6 (Liu [130]) Uncertain subject is an uncertain set containing


some specified individuals in the universe.

Example 10.22: “Warm days are here again” is a statement in which “warm
days” is an uncertain subject that is an uncertain set on the universe of “all

days”, whose membership function may be defined by

ν(x) =
  0,            if x ≤ 15
  (x − 15)/3,   if 15 ≤ x ≤ 18
  1,            if 18 ≤ x ≤ 24
  (28 − x)/4,   if 24 ≤ x ≤ 28
  0,            if 28 ≤ x.          (10.39)


Figure 10.9: Membership Function of Subject “warm days”

Example 10.23: “Young students are tall” is a statement in which “young


students” is an uncertain subject that is an uncertain set on the universe of
“all students”, whose membership function may be defined by

ν(x) =
  0,             if x ≤ 15
  (x − 15)/5,    if 15 ≤ x ≤ 20
  1,             if 20 ≤ x ≤ 35
  (45 − x)/10,   if 35 ≤ x ≤ 45
  0,             if x ≥ 45.          (10.40)

Example 10.24: “Tall students are heavy” is a statement in which “tall


students” is an uncertain subject that is an uncertain set on the universe of
“all students”, whose membership function may be defined by

ν(x) =
  0,             if x ≤ 180
  (x − 180)/5,   if 180 ≤ x ≤ 185
  1,             if 185 ≤ x ≤ 195
  (200 − x)/5,   if 195 ≤ x ≤ 200
  0,             if x ≥ 200.          (10.41)

Let S be an uncertain subject with membership function ν on the universe


A = {a1 , a2 , · · · , an } of individuals. Then S is an uncertain set of A such


Figure 10.10: Membership Function of Subject “young students”


Figure 10.11: Membership Function of Subject “tall students”

that
M{ai ∈ S} = ν(ai ), i = 1, 2, · · · , n. (10.42)

In many cases, we are interested in some individuals a’s with ν(a) ≥ ω, where
ω is a confidence level. Thus we have a subuniverse,

Sω = {a ∈ A | ν(a) ≥ ω} (10.43)

that will play the role of a new universe of individuals we are talking about, and
the individuals outside Sω will be ignored at the confidence level ω.

Theorem 10.7 Let ω1 and ω2 be confidence levels with ω1 > ω2 , and let Sω1
and Sω2 be subuniverses with confidence levels ω1 and ω2, respectively. Then

Sω1 ⊂ Sω2 . (10.44)

That is, Sω is a decreasing sequence of sets with respect to ω.

Proof: If a ∈ Sω1 , then ν(a) ≥ ω1 > ω2 . Thus a ∈ Sω2 . It follows that


Sω1 ⊂ Sω2 . Note that Sω1 and Sω2 may be empty.
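In code, the subuniverse (10.43) is a one-line filter, and Theorem 10.7 says the filtered list can only shrink as ω grows. A minimal Python sketch (the names are ours); it is also the building block used by the truth value algorithm of Section 10.7:

    def subuniverse(universe, nu, omega):
        # S_omega = {a in A : nu(a) >= omega}, formula (10.43).
        return [a for a in universe if nu(a) >= omega]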

10.4 Uncertain Predicate

There are numerous imprecise predicates in human language, for example,


warm, cold, hot, young, old, tall, small, and big. This section will model them
by the concept of uncertain predicate.

Definition 10.7 (Liu [130]) Uncertain predicate is an uncertain set repre-


senting a property that the individuals have in common.

Example 10.25: “Today is warm” is a statement in which “warm” is an


uncertain predicate that may be represented by a membership function

µ(x) =
  0,            if x ≤ 15
  (x − 15)/3,   if 15 ≤ x ≤ 18
  1,            if 18 ≤ x ≤ 24
  (28 − x)/4,   if 24 ≤ x ≤ 28
  0,            if 28 ≤ x.          (10.45)


Figure 10.12: Membership Function of Predicate “warm”

Example 10.26: “John is young” is a statement in which “young” is an


uncertain predicate that may be represented by a membership function

µ(x) =
  0,             if x ≤ 15
  (x − 15)/5,    if 15 ≤ x ≤ 20
  1,             if 20 ≤ x ≤ 35
  (45 − x)/10,   if 35 ≤ x ≤ 45
  0,             if x ≥ 45.          (10.46)



Figure 10.13: Membership Function of Predicate “young”

Example 10.27: “Tom is tall” is a statement in which “tall” is an uncertain


predicate that may be represented by a membership function

µ(x) =
  0,             if x ≤ 180
  (x − 180)/5,   if 180 ≤ x ≤ 185
  1,             if 185 ≤ x ≤ 195
  (200 − x)/5,   if 195 ≤ x ≤ 200
  0,             if x ≥ 200.          (10.47)


Figure 10.14: Membership Function of Predicate “tall”

Negated Predicate
Definition 10.8 Let P be an uncertain predicate. Then its negated predicate
¬P is the complement of P in the sense of uncertain set, i.e.,
¬P = P c . (10.48)
Theorem 10.8 Let P be an uncertain predicate with membership function
µ. Then its negated predicate ¬P has a membership function
¬µ(x) = 1 − µ(x). (10.49)

Proof: The theorem follows from the definition of negated predicate and the
operational law of uncertain set immediately.

Example 10.28: Let P be the uncertain predicate “warm” defined by


(10.45). Then its negated predicate ¬P has a membership function

¬µ(x) =
  1,            if x ≤ 15
  (18 − x)/3,   if 15 ≤ x ≤ 18
  0,            if 18 ≤ x ≤ 24
  (x − 24)/4,   if 24 ≤ x ≤ 28
  1,            if 28 ≤ x.          (10.50)


Figure 10.15: Membership Function of Negated Predicate of “warm”

Example 10.29: Let P be the uncertain predicate “young” defined by (10.46).


Then its negated predicate ¬P has a membership function

¬µ(x) =
  1,             if x ≤ 15
  (20 − x)/5,    if 15 ≤ x ≤ 20
  0,             if 20 ≤ x ≤ 35
  (x − 35)/10,   if 35 ≤ x ≤ 45
  1,             if x ≥ 45.          (10.51)

Example 10.30: Let P be the uncertain predicate “tall ” defined by (10.47).


Then its negated predicate ¬P has a membership function

¬µ(x) =
  1,             if x ≤ 180
  (185 − x)/5,   if 180 ≤ x ≤ 185
  0,             if 185 ≤ x ≤ 195
  (x − 195)/5,   if 195 ≤ x ≤ 200
  1,             if x ≥ 200.          (10.52)



Figure 10.16: Membership Function of Negated Predicate of “young”


Figure 10.17: Membership Function of Negated Predicate of “tall ”

Theorem 10.9 Let P be an uncertain predicate. Then we have ¬¬P = P .

Proof: The theorem follows from ¬¬P = ¬P c = (P c )c = P.

10.5 Uncertain Proposition


Definition 10.9 (Liu [130]) Assume that Q is an uncertain quantifier, S is
an uncertain subject, and P is an uncertain predicate. Then the triplet

(Q, S, P ) =“ Q of S are P ” (10.53)

is called an uncertain proposition.

Remark 10.2: Let A be the universe of individuals. Then (Q, A, P ) is a


special uncertain proposition because A itself is a special uncertain subject.

Remark 10.3: Let ∀ be the universal quantifier. Then (∀, A, P ) is an


uncertain proposition representing “all of A are P ”.

Remark 10.4: Let ∃ be the existential quantifier. Then (∃, A, P ) is an


uncertain proposition representing “at least one of A is P ”.

Example 10.31: “Almost all students are young” is an uncertain proposi-


tion in which the uncertain quantifier Q is “almost all”, the uncertain subject
S is “students” (the universe itself) and the uncertain predicate P is “young”.

Example 10.32: “Most young students are tall” is an uncertain proposition


in which the uncertain quantifier Q is “most”, the uncertain subject S is
“young students” and the uncertain predicate P is “tall”.
Theorem 10.10 (Liu [130], Logical Equivalence Theorem) Let (Q, S, P ) be
an uncertain proposition. Then
(Q∗ , S, P ) = (Q, S, ¬P ) (10.54)
where Q∗ is the dual quantifier of Q and ¬P is the negated predicate of P .
Proof: Note that (Q∗ , S, P ) represents “Q∗ of S are P ”. In fact, the state-
ment “Q∗ of S are P ” implies “Q∗∗ of S are not P ”. Since Q∗∗ = Q, we obtain
(Q, S, ¬P ). Conversely, the statement “Q of S are not P ” implies “Q∗ of S
are P ”, i.e., (Q∗ , S, P ). Thus (10.54) is verified.

Example 10.33: When Q∗ = ¬∀, we have Q = ∃. If S = A, then (10.54)


becomes the classical equivalence
(¬∀, A, P ) = (∃, A, ¬P ). (10.55)

Example 10.34: When Q∗ = ¬∃, we have Q = ∀. If S = A, then (10.54)


becomes the classical equivalence
(¬∃, A, P ) = (∀, A, ¬P ). (10.56)

10.6 Truth Value


Let (Q, S, P ) be an uncertain proposition. The truth value of (Q, S, P ) should
be the uncertain measure that “Q of S are P ”. That is,
T (Q, S, P ) = M{Q of S are P }. (10.57)
However, it is impossible for us to deduce the value of M{Q of S are P } from
the information of Q, S and P within the framework of uncertain set theory.
Thus we need an additional formula to compose Q, S and P .
Definition 10.10 (Liu [130]) Let (Q, S, P ) be an uncertain proposition in
which Q is a unimodal uncertain quantifier with membership function λ, S
is an uncertain subject with membership function ν, and P is an uncertain
predicate with membership function µ. Then the truth value of (Q, S, P ) with
respect to the universe A is
T(Q, S, P) = sup_{0≤ω≤1} ( ω ∧ sup_{K∈Kω} inf_{a∈K} µ(a) ∧ sup_{K∈K∗ω} inf_{a∈K} ¬µ(a) )    (10.58)

where
Kω = {K ⊂ Sω | λ(|K|) ≥ ω} , (10.59)
K∗ω = {K ⊂ Sω | λ(|Sω | − |K|) ≥ ω} , (10.60)
Sω = {a ∈ A | ν(a) ≥ ω} . (10.61)

Remark 10.5: Keep in mind that the truth value formula (10.58) is vacuous
if the individual feature data of the universe A are not available.

Remark 10.6: The symbol |K| represents the cardinality of the set K. For
example, |∅| = 0 and |{2, 5, 6}| = 3.

Remark 10.7: Note that ¬µ is the membership function of the negated
predicate of P, and

¬µ(a) = 1 − µ(a).    (10.62)

Remark 10.8: When the subset K of individuals becomes an empty set ∅,


we will define
inf_{a∈∅} µ(a) = inf_{a∈∅} ¬µ(a) = 1.    (10.63)

Remark 10.9: If Q is an uncertain percentage rather than an absolute


quantity, then Kω and K∗ω are defined by

Kω = { K ⊂ Sω | λ(|K|/|Sω|) ≥ ω },    (10.64)

K∗ω = { K ⊂ Sω | λ(1 − |K|/|Sω|) ≥ ω }.    (10.65)

Remark 10.10: If the uncertain subject S degenerates to the universe A,


then the truth value of (Q, A, P ) is
T(Q, A, P) = sup_{0≤ω≤1} ( ω ∧ sup_{K∈Kω} inf_{a∈K} µ(a) ∧ sup_{K∈K∗ω} inf_{a∈K} ¬µ(a) )    (10.66)

where
Kω = {K ⊂ A | λ(|K|) ≥ ω} , (10.67)
K∗ω = {K ⊂ A | λ(|A| − |K|) ≥ ω} . (10.68)

Exercise 10.1: If the uncertain quantifier Q = ∀ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {A}, K∗ω = {∅}. (10.69)



Show that
T(∀, A, P) = inf_{a∈A} µ(a).    (10.70)

Exercise 10.2: If the uncertain quantifier Q = ∃ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {any nonempty subsets of A}, (10.71)

K∗ω = {any proper subsets of A}. (10.72)


Note that Kω contains A but K∗ω does not. Show that

T(∃, A, P) = sup_{a∈A} µ(a).    (10.73)

Exercise 10.3: If the uncertain quantifier Q = ¬∀ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {any proper subsets of A}, (10.74)

K∗ω = {any nonempty subsets of A}. (10.75)


Show that
T(¬∀, A, P) = 1 − inf_{a∈A} µ(a).    (10.76)

Exercise 10.4: If the uncertain quantifier Q = ¬∃ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {∅}, K∗ω = {A}. (10.77)

Show that
T(¬∃, A, P) = 1 − sup_{a∈A} µ(a).    (10.78)

Theorem 10.11 (Liu [130], Truth Value Theorem) Let (Q, S, P ) be an un-
certain proposition in which Q is a unimodal uncertain quantifier with mem-
bership function λ, S is an uncertain subject with membership function ν,
and P is an uncertain predicate with membership function µ. Then the truth
value of (Q, S, P ) is

T(Q, S, P) = sup_{0≤ω≤1} ( ω ∧ ∆(kω) ∧ ∆∗(kω∗) )    (10.79)

where

kω = min{ x | λ(x) ≥ ω },    (10.80)

∆(kω) = the kω-th largest value of {µ(ai) | ai ∈ Sω},    (10.81)

kω∗ = |Sω| − max{ x | λ(x) ≥ ω },    (10.82)

∆∗(kω∗) = the kω∗-th largest value of {1 − µ(ai) | ai ∈ Sω}.    (10.83)

Proof: Since the supremum is achieved at the subset with minimum cardi-
nality, we have

sup_{K∈Kω} inf_{a∈K} µ(a) = sup_{K⊂Sω, |K|=kω} inf_{a∈K} µ(a) = ∆(kω),

sup_{K∈K∗ω} inf_{a∈K} ¬µ(a) = sup_{K⊂Sω, |K|=kω∗} inf_{a∈K} ¬µ(a) = ∆∗(kω∗).

The theorem is thus verified. Please note that ∆(0) = ∆∗ (0) = 1.

Remark 10.11: If Q is an uncertain percentage, then kω and kω∗ are defined


by

kω = min{ x | λ(x/|Sω|) ≥ ω },    (10.84)

kω∗ = |Sω| − max{ x | λ(x/|Sω|) ≥ ω }.    (10.85)

Remark 10.12: If the uncertain subject S degenerates to the universe of


individuals A = {a1 , a2 , · · · , an }, then the truth value of (Q, A, P ) is

T(Q, A, P) = sup_{0≤ω≤1} ( ω ∧ ∆(kω) ∧ ∆∗(kω∗) )    (10.86)

where

kω = min{ x | λ(x) ≥ ω },    (10.87)

∆(kω) = the kω-th largest value of µ(a1), µ(a2), · · · , µ(an),    (10.88)

kω∗ = n − max{ x | λ(x) ≥ ω },    (10.89)

∆∗(kω∗) = the kω∗-th largest value of 1 − µ(a1), · · · , 1 − µ(an).    (10.90)

Exercise 10.5: If the uncertain quantifier Q = {m, m+1, · · · , n} (i.e., “there


exist at least m”) with m ≥ 1, then we have kω = m and kω∗ = 0. Show that

T (Q, A, P ) = the mth largest value of µ(a1 ), µ(a2 ), · · · , µ(an ). (10.91)

Exercise 10.6: If the uncertain quantifier Q = {0, 1, 2, . . . , m} (i.e., “there


exist at most m”) with m < n, then we have kω = 0 and kω∗ = n − m. Show
that

T (Q, A, P ) = the (n − m)th largest value of 1−µ(a1 ), 1−µ(a2 ), · · · , 1−µ(an ).



10.7 Algorithm
In order to calculate T (Q, S, P ) based on the truth value formula (10.58), a
truth value algorithm is given as follows:
Step 1. Set ω = 1 and ε = 0.01 (a predetermined precision).
Step 2. Calculate Sω = {a ∈ A | ν(a) ≥ ω} and k = min{x | λ(x) ≥ ω} as
well as k ∗ = |Sω | − max{x | λ(x) ≥ ω}.
Step 3. If ∆(k) ∧ ∆∗ (k ∗ ) < ω, then ω ← ω − ε and go to Step 2. Otherwise,
output the truth value ω and stop.

Remark 10.13: If Q is an uncertain percentage, then kω and kω∗ in the truth


value algorithm are replaced with (10.84) and (10.85), respectively.
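The steps above can be turned into a short program. The following Python sketch is our own reconstruction of the algorithm, not the program distributed at http://orsc.edu.cn/liu/resources.htm; the percent flag switches to the percentage case of Remark 10.13, and all names are ours:

    def truth_value(universe, mu, lam, nu=lambda a: 1.0, percent=False, eps=0.01):
        # Truth value of "Q of S are P": lam is the membership function of
        # the quantifier Q (over counts, or over proportions if percent is
        # True), nu that of the subject S, mu that of the predicate P.
        def kth_largest(values, k):
            # Delta(k); Delta(0) = Delta*(0) = 1 by Remark 10.8.
            return 1.0 if k == 0 else sorted(values, reverse=True)[k - 1]

        for i in range(int(round(1 / eps)), 0, -1):
            omega = i * eps                    # omega = 1, 1 - eps, ..., eps
            S = [a for a in universe if nu(a) >= omega]   # subuniverse
            if not S:
                continue
            scale = (lambda x: x / len(S)) if percent else (lambda x: x)
            sizes = [x for x in range(len(S) + 1) if lam(scale(x)) >= omega]
            if not sizes:
                continue
            k, k_star = min(sizes), len(S) - max(sizes)   # (10.80), (10.82)
            mus = [mu(a) for a in S]
            if min(kth_largest(mus, k),
                   kth_largest([1.0 - m for m in mus], k_star)) >= omega:
                return omega
        return 0.0

Applied to the data of Example 10.35 below, with the trapezoid helper sketched in Section 10.2:

    temps = [22, 23, 25, 28, 30, 32, 36]                   # daily temperatures
    warm = trapezoid(15, 18, 24, 28)                       # membership (10.94)
    two_or_three = lambda x: 1.0 if x in (2, 3) else 0.0   # quantifier Q = {2, 3}
    print(truth_value(temps, warm, two_or_three))          # 1.0, as in (10.95)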

Example 10.35: Assume that the daily temperatures of some week from
Monday to Sunday are

22, 23, 25, 28, 30, 32, 36 (10.92)

in centigrades, respectively. Consider an uncertain proposition

(Q, A, P ) = “two or three days are warm”. (10.93)

Note that the uncertain quantifier is Q = {2, 3}. We also suppose the uncer-
tain predicate P = “warm” has a membership function

µ(x) =
  0,            if x ≤ 15
  (x − 15)/3,   if 15 ≤ x ≤ 18
  1,            if 18 ≤ x ≤ 24
  (28 − x)/4,   if 24 ≤ x ≤ 28
  0,            if 28 ≤ x.          (10.94)

It is clear that Monday and Tuesday are warm with truth value 1, and
Wednesday is warm with truth value 0.75. But Thursday to Sunday are
not “warm” at all (in fact, they are “hot”). Intuitively, the uncertain propo-
sition “two or three days are warm” should be completely true. The truth
value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that the truth
value is
T (“two or three days are warm”) = 1. (10.95)
This is an intuitively expected result. In addition, we also have

T (“two days are warm”) = 0.25, (10.96)

T (“three days are warm”) = 0.75. (10.97)



Example 10.36: Assume that in a class there are 15 students whose ages
are
21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 (10.98)
in years. Consider an uncertain proposition

(Q, A, P ) = “almost all students are young”. (10.99)

Suppose the uncertain quantifier Q = “almost all” has a membership function

λ(x) =
  0,            if 0 ≤ x ≤ 10
  (x − 10)/3,   if 10 ≤ x ≤ 13
  1,            if 13 ≤ x ≤ 15,          (10.100)

and the uncertain predicate P = “young” has a membership function

µ(x) =
  0,             if x ≤ 15
  (x − 15)/5,    if 15 ≤ x ≤ 20
  1,             if 20 ≤ x ≤ 35
  (45 − x)/10,   if 35 ≤ x ≤ 45
  0,             if x ≥ 45.          (10.101)

The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that


the uncertain proposition has a truth value

T (“almost all students are young”) = 0.9. (10.102)

Example 10.37: Assume that in a team there are 16 sportsmen whose


heights are
175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196    (10.103)
in centimeters. Consider an uncertain proposition

(Q, A, P ) = “about 70% of sportsmen are tall”. (10.104)

Suppose the uncertain quantifier Q = “about 70%” has a membership function

λ(x) =
  0,             if 0 ≤ x ≤ 0.6
  20(x − 0.6),   if 0.6 ≤ x ≤ 0.65
  1,             if 0.65 ≤ x ≤ 0.75
  20(0.8 − x),   if 0.75 ≤ x ≤ 0.8
  0,             if 0.8 ≤ x ≤ 1          (10.105)


and the uncertain predicate P = “tall” has a membership function

µ(x) =
  0,             if x ≤ 180
  (x − 180)/5,   if 180 ≤ x ≤ 185
  1,             if 185 ≤ x ≤ 195
  (200 − x)/5,   if 195 ≤ x ≤ 200
  0,             if x ≥ 200.          (10.106)

The truth value algorithm (http://orsc.edu.cn/liu/resources.htm) yields that


the uncertain proposition has a truth value

T (“about 70% of sportsmen are tall”) = 0.8. (10.107)

Example 10.38: Assume that in a class there are 18 students whose ages
and heights are

(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (10.108)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)

in years and centimeters. Consider an uncertain proposition

(Q, S, P ) = “most young students are tall”. (10.109)

Suppose the uncertain quantifier (percentage) Q = “most” has a membership function

λ(x) =
  0,             if 0 ≤ x ≤ 0.7
  20(x − 0.7),   if 0.7 ≤ x ≤ 0.75
  1,             if 0.75 ≤ x ≤ 0.85
  20(0.9 − x),   if 0.85 ≤ x ≤ 0.9
  0,             if 0.9 ≤ x ≤ 1.          (10.110)

Note that each individual is described by feature data (y, z), where y represents
ages and z represents heights. In this case, the uncertain subject
S = “young students” has a membership function

ν(y) =
  0,             if y ≤ 15
  (y − 15)/5,    if 15 ≤ y ≤ 20
  1,             if 20 ≤ y ≤ 35
  (45 − y)/10,   if 35 ≤ y ≤ 45
  0,             if y ≥ 45          (10.111)


and the uncertain predicate P = “tall” has a membership function

µ(z) =
  0,             if z ≤ 180
  (z − 180)/5,   if 180 ≤ z ≤ 185
  1,             if 185 ≤ z ≤ 195
  (200 − z)/5,   if 195 ≤ z ≤ 200
  0,             if z ≥ 200.          (10.112)

The truth value algorithm yields that the uncertain proposition has a truth
value
T (“most young students are tall”) = 0.8. (10.113)
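This example can be reproduced numerically with the truth_value sketch of Section 10.7 and the trapezoid helper of Section 10.2; the percent=True flag selects the percentage case of Remark 10.13, and all names are our own:

    students = [(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188),
                (28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188),
                (38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)]
    most = trapezoid(0.70, 0.75, 0.85, 0.90)        # quantifier (10.110)
    young = trapezoid(15, 20, 35, 45)               # subject (10.111)
    tall = trapezoid(180, 185, 195, 200)            # predicate (10.112)
    t = truth_value(students,
                    mu=lambda a: tall(a[1]),        # predicate reads heights
                    lam=most,
                    nu=lambda a: young(a[0]),       # subject reads ages
                    percent=True)
    print(t)                                        # 0.8, as in (10.113)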

10.8 Linguistic Summarizer


Linguistic summary is a human language statement that is concise and easy
for humans to understand. For example, “most young students are tall” is
a linguistic summary of students’ ages and heights. Thus a linguistic sum-
mary is a special uncertain proposition whose uncertain quantifier, uncertain
subject and uncertain predicate are linguistic terms. Uncertain logic pro-
vides a flexible means that is capable of extracting linguistic summary from
a collection of raw data.
What inputs does the uncertain logic need? First, we should have some
raw data (i.e., the individual feature data),

A = {a1 , a2 , · · · , an }. (10.114)

Next, we should have some linguistic terms to represent quantifiers, for exam-
ple, “most” and “all”. Denote them by a collection of uncertain quantifiers,

Q = {Q1 , Q2 , · · · , Qm }. (10.115)

Then, we should have some linguistic terms to represent subjects, for exam-
ple, “young students” and “old students”. Denote them by a collection of
uncertain subjects,
S = {S1 , S2 , · · · , Sn }. (10.116)
Last, we should have some linguistic terms to represent predicates, for exam-
ple, “short” and “tall”. Denote them by a collection of uncertain predicates,

P = {P1 , P2 , · · · , Pk }. (10.117)

One problem of data mining is to choose an uncertain quantifier Q ∈ Q, an


uncertain subject S ∈ S and an uncertain predicate P ∈ P such that the
truth value of the linguistic summary “Q of S are P ” to be extracted is at
least β, i.e.,
T (Q, S, P ) ≥ β (10.118)

for the universe A = {a1 , a2 , · · · , an }, where β is a confidence level. In order


to solve this problem, Liu [130] proposed the following linguistic summarizer,

Find        Q, S and P
subject to:
    Q ∈ Q
    S ∈ S
    P ∈ P
    T(Q, S, P) ≥ β.          (10.119)

Each solution (Q, S, P ) of the linguistic summarizer (10.119) produces a lin-


guistic summary “Q of S are P ”.
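Since Q, S and P are small finite collections, the model (10.119) can be solved by brute-force enumeration. A minimal Python sketch on top of the truth_value function of Section 10.7 (the dictionary-based representation and names are our own conventions):

    def linguistic_summarizer(universe, quantifiers, subjects, predicates, beta):
        # Return every triple (Q, S, P) whose linguistic summary
        # "Q of S are P" has truth value at least beta, as in (10.119).
        # quantifiers maps a name to (lam, percent_flag); subjects and
        # predicates map names to membership functions.
        return [(q, s, p)
                for q, (lam, pct) in quantifiers.items()
                for s, nu in subjects.items()
                for p, mu in predicates.items()
                if truth_value(universe, mu, lam, nu, percent=pct) >= beta]

At β = 0.8 this enumeration reproduces the summary extracted in Example 10.39 below.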

Example 10.39: Assume that in a class there are 18 students whose ages
and heights are

(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (10.120)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)

in years and centimeters. Suppose we have three linguistic terms “about


half”, “most” and “all” as uncertain quantifiers whose membership functions are

λhalf(x) =
  0,             if 0 ≤ x ≤ 0.4
  20(x − 0.4),   if 0.4 ≤ x ≤ 0.45
  1,             if 0.45 ≤ x ≤ 0.55
  20(0.6 − x),   if 0.55 ≤ x ≤ 0.6
  0,             if 0.6 ≤ x ≤ 1,          (10.121)

λmost(x) =
  0,             if 0 ≤ x ≤ 0.7
  20(x − 0.7),   if 0.7 ≤ x ≤ 0.75
  1,             if 0.75 ≤ x ≤ 0.85
  20(0.9 − x),   if 0.85 ≤ x ≤ 0.9
  0,             if 0.9 ≤ x ≤ 1,          (10.122)

λall(x) =
  1,   if x = 1
  0,   if 0 ≤ x < 1,          (10.123)

respectively. Denote the collection of uncertain quantifiers by

Q = {“about half”, “most”, “all”}.    (10.124)



We also have three linguistic terms “young students”, “middle-aged students”


and “old students” as uncertain subjects whose membership functions are

νyoung(y) =
  0,             if y ≤ 15
  (y − 15)/5,    if 15 ≤ y ≤ 20
  1,             if 20 ≤ y ≤ 35
  (45 − y)/10,   if 35 ≤ y ≤ 45
  0,             if y ≥ 45,          (10.125)

νmiddle(y) =
  0,            if y ≤ 40
  (y − 40)/5,   if 40 ≤ y ≤ 45
  1,            if 45 ≤ y ≤ 55
  (60 − y)/5,   if 55 ≤ y ≤ 60
  0,            if y ≥ 60,          (10.126)

νold(y) =
  0,            if y ≤ 55
  (y − 55)/5,   if 55 ≤ y ≤ 60
  1,            if 60 ≤ y ≤ 80
  (85 − y)/5,   if 80 ≤ y ≤ 85
  0,            if y ≥ 85,          (10.127)

respectively. Denote the collection of uncertain subjects by

S = {“young students”, “middle-aged students”, “old students”}. (10.128)

Finally, we suppose that there are two linguistic terms “short” and “tall” as
uncertain predicates whose membership functions are

µshort(z) =
  0,             if z ≤ 145
  (z − 145)/5,   if 145 ≤ z ≤ 150
  1,             if 150 ≤ z ≤ 155
  (160 − z)/5,   if 155 ≤ z ≤ 160
  0,             if z ≥ 160,          (10.129)

µtall(z) =
  0,             if z ≤ 180
  (z − 180)/5,   if 180 ≤ z ≤ 185
  1,             if 185 ≤ z ≤ 195
  (200 − z)/5,   if 195 ≤ z ≤ 200
  0,             if z ≥ 200,          (10.130)

respectively. Denote the collection of uncertain predicates by

P = {“short”, “tall”}. (10.131)



We would like to extract an uncertain quantifier Q ∈ Q, an uncertain subject


S ∈ S and an uncertain predicate P ∈ P such that the truth value of the
linguistic summary “Q of S are P ” to be extracted is at least 0.8, i.e.,

T (Q, S, P ) ≥ 0.8 (10.132)

where 0.8 is a predetermined confidence level. The linguistic summarizer


(10.119) yields

Q = “most”, S = “young students”, P = “tall”

and then extracts a linguistic summary “most young students are tall”.

10.9 Bibliographic Notes


Based on uncertain set theory, uncertain logic was designed by Liu [130]
in 2011 for dealing with human language by using the truth value formula
for uncertain propositions. As an application of uncertain logic, Liu [130]
also proposed a linguistic summarizer that provides a means for extracting
linguistic summary from a collection of raw data.
Chapter 11

Uncertain Inference

Uncertain inference is a process of deriving consequences from human knowl-


edge via uncertain set theory. This chapter will introduce a family of uncer-
tain inference rules, uncertain system, and uncertain control with application
to an inverted pendulum system.

11.1 Uncertain Inference Rule

Let X and Y be two concepts. It is assumed that we only have a single if-then
rule,
“if X is ξ then Y is η” (11.1)

where ξ and η are two uncertain sets. We first introduce the following infer-
ence rule.

Inference Rule 11.1 (Liu [127]) Let X and Y be two concepts. Assume a
rule “if X is an uncertain set ξ then Y is an uncertain set η”. From X is a
constant a we infer that Y is an uncertain set

η ∗ = η|a∈ξ (11.2)

which is the conditional uncertain set of η given a ∈ ξ. The inference rule is


represented by
Rule: If X is ξ then Y is η
From: X is a constant a (11.3)
Infer: Y is η ∗ = η|a∈ξ

Theorem 11.1 Let ξ and η be independent uncertain sets with membership


functions µ and ν, respectively. If ξ ∗ is a constant a, then the inference rule


11.1 yields that η∗ has a membership function

ν∗(y) =
  ν(y)/µ(a),                if ν(y) < µ(a)/2
  (ν(y) + µ(a) − 1)/µ(a),   if ν(y) > 1 − µ(a)/2
  0.5,                      otherwise.          (11.4)
Proof: It follows from the inference rule 11.1 that η ∗ has a membership
function
ν ∗ (y) = M{y ∈ η|a ∈ ξ}.
By using the definition of conditional uncertainty, we have

M{y ∈ η | a ∈ ξ} =
  M{y ∈ η}/M{a ∈ ξ},        if M{y ∈ η}/M{a ∈ ξ} < 0.5
  1 − M{y ∉ η}/M{a ∈ ξ},    if M{y ∉ η}/M{a ∈ ξ} < 0.5
  0.5,                      otherwise.

The equation (11.4) follows from M{y ∈ η} = ν(y), M{y ∉ η} = 1 − ν(y)
and M{a ∈ ξ} = µ(a) immediately. The theorem is proved.
Figure 11.1: Graphical Illustration of Inference Rule. Reprinted from Liu [129].
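Formula (11.4) is easy to evaluate pointwise. The following Python sketch builds ν∗ from ν and the value µ(a); the names are ours, and µ(a) is assumed positive:

    def conditional_membership(nu, mu_a):
        # Membership function of eta* = eta given a in xi, formula (11.4);
        # nu is the membership function of eta and mu_a = mu(a) > 0.
        def nu_star(y):
            if nu(y) < mu_a / 2:
                return nu(y) / mu_a
            if nu(y) > 1 - mu_a / 2:
                return (nu(y) + mu_a - 1) / mu_a
            return 0.5
        return nu_star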

Inference Rule 11.2 (Gao, Gao and Ralescu [49]) Let X, Y and Z be three
concepts. Assume a rule “if X is an uncertain set ξ and Y is an uncertain set
η then Z is an uncertain set τ ”. From X is a constant a and Y is a constant
b we infer that Z is an uncertain set
τ ∗ = τ |(a∈ξ)∩(b∈η) (11.5)
which is the conditional uncertain set of τ given a ∈ ξ and b ∈ η. The
inference rule is represented by
Rule: If X is ξ and Y is η then Z is τ
From: X is a and Y is b (11.6)
Infer: Z is τ ∗ = τ |(a∈ξ)∩(b∈η)

Theorem 11.2 Let ξ, η, τ be independent uncertain sets with membership


functions µ, ν, λ, respectively. If ξ ∗ is a constant a and η ∗ is a constant b,
then the inference rule 11.2 yields that τ∗ has a membership function

λ∗(z) =
  λ(z)/(µ(a) ∧ ν(b)),                       if λ(z) < (µ(a) ∧ ν(b))/2
  (λ(z) + µ(a) ∧ ν(b) − 1)/(µ(a) ∧ ν(b)),   if λ(z) > 1 − (µ(a) ∧ ν(b))/2
  0.5,                                      otherwise.          (11.7)

Proof: It follows from the inference rule 11.2 that τ ∗ has a membership
function
λ∗ (z) = M{z ∈ τ |(a ∈ ξ) ∩ (b ∈ η)}.
By using the definition of conditional uncertainty, M{z ∈ τ | (a ∈ ξ) ∩ (b ∈ η)} is

  M{z ∈ τ}/M{(a ∈ ξ) ∩ (b ∈ η)},        if M{z ∈ τ}/M{(a ∈ ξ) ∩ (b ∈ η)} < 0.5
  1 − M{z ∉ τ}/M{(a ∈ ξ) ∩ (b ∈ η)},    if M{z ∉ τ}/M{(a ∈ ξ) ∩ (b ∈ η)} < 0.5
  0.5,                                  otherwise.

The theorem follows from M{z ∈ τ} = λ(z), M{z ∉ τ} = 1 − λ(z) and
M{(a ∈ ξ) ∩ (b ∈ η)} = µ(a) ∧ ν(b) immediately.

Inference Rule 11.3 (Gao, Gao and Ralescu [49]) Let X and Y be two
concepts. Assume two rules “if X is an uncertain set ξ1 then Y is an uncertain
set η1 ” and “if X is an uncertain set ξ2 then Y is an uncertain set η2 ”. From
X is a constant a we infer that Y is an uncertain set

η∗ = (M{a ∈ ξ1} · η1|a∈ξ1)/(M{a ∈ ξ1} + M{a ∈ ξ2}) + (M{a ∈ ξ2} · η2|a∈ξ2)/(M{a ∈ ξ1} + M{a ∈ ξ2}).    (11.8)

The inference rule is represented by

Rule 1: If X is ξ1 then Y is η1
Rule 2: If X is ξ2 then Y is η2
(11.9)
From: X is a constant a
Infer: Y is η ∗ determined by (11.8)

Theorem 11.3 Let ξ1 , ξ2 , η1 , η2 be independent uncertain sets with mem-


bership functions µ1 , µ2 , ν1 , ν2 , respectively. If ξ ∗ is a constant a, then the
inference rule 11.3 yields

η∗ = (µ1(a)/(µ1(a) + µ2(a))) · η1∗ + (µ2(a)/(µ1(a) + µ2(a))) · η2∗    (11.10)

where η1∗ and η2∗ are uncertain sets whose membership functions are respec-
tively given by

ν1∗(y) =
  ν1(y)/µ1(a),                 if ν1(y) < µ1(a)/2
  (ν1(y) + µ1(a) − 1)/µ1(a),   if ν1(y) > 1 − µ1(a)/2
  0.5,                         otherwise,          (11.11)

ν2∗(y) =
  ν2(y)/µ2(a),                 if ν2(y) < µ2(a)/2
  (ν2(y) + µ2(a) − 1)/µ2(a),   if ν2(y) > 1 − µ2(a)/2
  0.5,                         otherwise.          (11.12)

Proof: It follows from the inference rule 11.3 that the uncertain set η ∗ is
just
η∗ = (M{a ∈ ξ1} · η1|a∈ξ1)/(M{a ∈ ξ1} + M{a ∈ ξ2}) + (M{a ∈ ξ2} · η2|a∈ξ2)/(M{a ∈ ξ1} + M{a ∈ ξ2}).
The theorem follows from M{a ∈ ξ1 } = µ1 (a) and M{a ∈ ξ2 } = µ2 (a)
immediately.

Inference Rule 11.4 Let X1 , X2 , · · · , Xm be concepts. Assume rules “if X1


is ξi1 and · · · and Xm is ξim then Y is ηi ” for i = 1, 2, · · · , k. From X1 is a1
and · · · and Xm is am we infer that Y is an uncertain set
η∗ = Σ_{i=1}^{k} ci · ηi|(a1∈ξi1)∩(a2∈ξi2)∩···∩(am∈ξim) / (c1 + c2 + · · · + ck)    (11.13)

where the coefficients are determined by

ci = M {(a1 ∈ ξi1 ) ∩ (a2 ∈ ξi2 ) ∩ · · · ∩ (am ∈ ξim )} (11.14)

for i = 1, 2, · · · , k. The inference rule is represented by

Rule 1: If X1 is ξ11 and · · · and Xm is ξ1m then Y is η1


Rule 2: If X1 is ξ21 and · · · and Xm is ξ2m then Y is η2
···
(11.15)
Rule k: If X1 is ξk1 and · · · and Xm is ξkm then Y is ηk
From: X1 is a1 and · · · and Xm is am
Infer: Y is η ∗ determined by (11.13)

Theorem 11.4 Assume ξi1 , ξi2 , · · · , ξim , ηi are independent uncertain sets
with membership functions µi1 , µi2 , · · · , µim , νi , i = 1, 2, · · · , k, respectively.


If ξ1∗ , ξ2∗ , · · · , ξm are constants a1 , a2 , · · · , am , respectively, then the inference
rule 11.4 yields
k
X ci · ηi∗
η∗ = (11.16)
i=1 1
c + c2 + · · · + ck

where ηi∗ are uncertain sets whose membership functions are given by

νi∗(y) =
  νi(y)/ci,              if νi(y) < ci/2
  (νi(y) + ci − 1)/ci,   if νi(y) > 1 − ci/2
  0.5,                   otherwise          (11.17)

and ci are constants determined by

ci = min_{1≤l≤m} µil(al)    (11.18)

for i = 1, 2, · · · , k, respectively.

Proof: For each i, since a1 ∈ ξi1 , a2 ∈ ξi2 , · · · , am ∈ ξim are independent


events, we immediately have

M{ ∩_{j=1}^{m} (aj ∈ ξij) } = min_{1≤j≤m} M{aj ∈ ξij} = min_{1≤l≤m} µil(al)

for i = 1, 2, · · · , k. From those equations, we may prove the theorem by the


inference rule 11.4 immediately.

11.2 Uncertain System


Uncertain system, proposed by Liu [127], is a function from its inputs to
outputs based on the uncertain inference rule. Usually, an uncertain system
consists of 5 parts:

1. inputs that are crisp data to be fed into the uncertain system;

2. a rule-base that contains a set of if-then rules provided by the experts;

3. an uncertain inference rule that infers uncertain consequents from the


uncertain antecedents;

4. an expected value operator that converts the uncertain consequents to


crisp values;

5. outputs that are crisp data yielded from the expected value operator.

Now let us consider an uncertain system in which there are m crisp inputs
α1 , α2 , · · · , αm , and n crisp outputs β1 , β2 , · · · , βn . At first, we infer n un-
certain sets η1∗ , η2∗ , · · · , ηn∗ from the m crisp inputs by the rule-base (i.e., a set
of if-then rules),

If ξ11 and ξ12 and· · · and ξ1m then η11 and η12 and· · · and η1n
If ξ21 and ξ22 and· · · and ξ2m then η21 and η22 and· · · and η2n
(11.19)
···
If ξk1 and ξk2 and· · · and ξkm then ηk1 and ηk2 and· · · and ηkn

and the uncertain inference rule

ηj∗ = Σ_{i=1}^{k} ci · ηij|(α1∈ξi1)∩(α2∈ξi2)∩···∩(αm∈ξim) / (c1 + c2 + · · · + ck)    (11.20)

for j = 1, 2, · · · , n, where the coefficients are determined by

ci = M {(α1 ∈ ξi1 ) ∩ (α2 ∈ ξi2 ) ∩ · · · ∩ (αm ∈ ξim )} (11.21)

for i = 1, 2, · · · , k. Thus by using the expected value operator, we obtain

βj = E[ηj∗ ] (11.22)

for j = 1, 2, · · · , n. Until now we have constructed a function from inputs


α1 , α2 , · · · , αm to outputs β1 , β2 , · · · , βn . Write this function by f , i.e.,

(β1 , β2 , · · · , βn ) = f (α1 , α2 , · · · , αm ). (11.23)

Then we get an uncertain system f .


............................................................................................ ............................ ...........................................................................
. .
α1 ............................... ........................................................................................... .................................. ∗ ........................ ∗ ..........................
η 1 ...... ... ...... 1 β = E[η ] 1 ...... .. β1
... ...
Inference Rule... .... ... .
. . .
∗ ...........................
α2 ................................ ................................................................................................... ................................. η ∗ ....................
.
2 ...... .. ..... 2
.
.
β = E[η ] 2 ..... .. β2
...
.. .... .... ... .... .. ... ... .. ... ..
... ................................................................. ... ... ... ... ...
. ...
...
..
... ....
..
... ...
...
. ....
...
...
...
. ....
... .
. . Rule Base ... ... . . .
.
αm ............................. ............................................................... ................................
.
η ∗ ........................
. . ∗ ............................
.........................................................................................
n
.........................
... β = E[η ]
..
. n n
.........................................................................
.. βn

Figure 11.2: An Uncertain System. Reprinted from Liu [129].

Theorem 11.5 Assume ξi1, ξi2, · · · , ξim, ηi1, ηi2, · · · , ηin are independent uncertain sets with membership functions µi1, µi2, · · · , µim, νi1, νi2, · · · , νin, i = 1, 2, · · · , k, respectively. Then the uncertain system from (α1, α2, · · · , αm) to (β1, β2, · · · , βn) is

βj = Σ_{i=1}^{k} ci · E[ηij*] / (c1 + c2 + · · · + ck)        (11.24)
Section 11.2 - Uncertain System 253


for j = 1, 2, · · · , n, where ηij* are uncertain sets whose membership functions are given by

νij*(y) = νij(y)/ci,               if νij(y) < ci/2
        = (νij(y) + ci − 1)/ci,    if νij(y) > 1 − ci/2        (11.25)
        = 0.5,                     otherwise

and ci are constants determined by

ci = min_{1≤l≤m} µil(αl)        (11.26)

for i = 1, 2, · · · , k, j = 1, 2, · · · , n, respectively.

Proof: It follows from Inference Rule 11.4 that the uncertain sets ηj* are

ηj* = Σ_{i=1}^{k} ci · ηij* / (c1 + c2 + · · · + ck)

for j = 1, 2, · · · , n. Since ηij*, i = 1, 2, · · · , k, j = 1, 2, · · · , n are independent uncertain sets, we get the theorem immediately by the linearity of the expected value operator.

Remark 11.1: The uncertain system allows the uncertain sets ηij in the rule-base (11.19) to become constants bij, i.e.,

ηij = bij (11.27)

for i = 1, 2, · · · , k and j = 1, 2, · · · , n. In this case, the uncertain system (11.24) becomes

βj = Σ_{i=1}^{k} ci · bij / (c1 + c2 + · · · + ck)        (11.28)

for j = 1, 2, · · · , n.

Remark 11.2: The uncertain system allows the uncertain sets ηij in the rule-base (11.19) to become functions hij of the inputs α1, α2, · · · , αm, i.e.,

ηij = hij (α1 , α2 , · · · , αm ) (11.29)

for i = 1, 2, · · · , k and j = 1, 2, · · · , n. In this case, the uncertain system (11.24) becomes

βj = Σ_{i=1}^{k} ci · hij(α1, α2, · · · , αm) / (c1 + c2 + · · · + ck)        (11.30)

for j = 1, 2, · · · , n.
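To make these formulas concrete, here is a minimal Python sketch of an uncertain system of the form (11.28) (not from the book; the triangular membership functions, rule count, and consequent constants bij are illustrative assumptions):

    def triangular(center, width):
        # membership function of a triangular uncertain set (illustrative choice)
        return lambda x: max(0.0, 1.0 - abs(x - center) / width)

    # hypothetical rule-base: k = 3 rules, m = 2 inputs, n = 1 output with
    # constant consequents b_i (the special case of Remark 11.1)
    antecedents = [
        (triangular(-1.0, 1.0), triangular(-1.0, 1.0)),  # rule 1: both inputs "negative"
        (triangular(0.0, 1.0),  triangular(0.0, 1.0)),   # rule 2: both inputs "zero"
        (triangular(1.0, 1.0),  triangular(1.0, 1.0)),   # rule 3: both inputs "positive"
    ]
    b = [10.0, 0.0, -10.0]                               # constants b_1, b_2, b_3

    def uncertain_system(alpha1, alpha2):
        # c_i = min over inputs of mu_il(alpha_l), as in (11.21)/(11.26)
        c = [min(mu1(alpha1), mu2(alpha2)) for mu1, mu2 in antecedents]
        total = sum(c)
        if total == 0.0:
            raise ValueError("no rule fires at this input")
        # beta = sum_i c_i * b_i / (c_1 + ... + c_k), as in (11.28)
        return sum(ci * bi for ci, bi in zip(c, b)) / total

    print(uncertain_system(-0.5, -0.3))  # crisp output for crisp inputs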

Uncertain Systems are Universal Approximators


Uncertain systems are capable of approximating any continuous function on a compact set (i.e., a bounded and closed set) to arbitrary accuracy. This is the reason why uncertain systems may serve as controllers. The following theorem shows this fact.
Theorem 11.6 (Peng and Chen [186]) For any given continuous function g on a compact set D ⊂ ℜ^m and any given ε > 0, there exists an uncertain system f such that

‖f(α1, α2, · · · , αm) − g(α1, α2, · · · , αm)‖ < ε        (11.31)

for any (α1, α2, · · · , αm) ∈ D.
Proof: Without loss of generality, we assume that the function g is a real-valued function with only two variables α1 and α2, and the compact set is the unit rectangle D = [0, 1] × [0, 1]. Since g is continuous on D and then is uniformly continuous, for any given number ε > 0, there is a number δ > 0 such that

|g(α1, α2) − g(α1′, α2′)| < ε        (11.32)

whenever ‖(α1, α2) − (α1′, α2′)‖ < δ. Let k be an integer larger than √2/δ, and write

Dij = { (α1, α2) | (i−1)/k < α1 ≤ i/k, (j−1)/k < α2 ≤ j/k }        (11.33)
for i, j = 1, 2, · · · , k. Note that {Dij} is a sequence of disjoint rectangles whose “diameter” is less than δ. Define uncertain sets

ξi = ((i−1)/k, i/k],   i = 1, 2, · · · , k,        (11.34)

ηj = ((j−1)/k, j/k],   j = 1, 2, · · · , k.        (11.35)
Then we assume a rule-base with k × k if-then rules,
Rule ij: If ξi and ηj then g(i/k, j/k), i, j = 1, 2, · · · , k. (11.36)
According to the uncertain inference rule, the corresponding uncertain system
from D to < is
f (α1 , α2 ) = g(i/k, j/k), if (α1 , α2 ) ∈ Dij , i, j = 1, 2, · · · , k. (11.37)
It follows from (11.32) that for any (α1 , α2 ) ∈ Dij ⊂ D, we have
|f (α1 , α2 ) − g(α1 , α2 )| = |g(i/k, j/k) − g(α1 , α2 )| < ε. (11.38)
The theorem is thus verified. Hence uncertain systems are universal approx-
imators!
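The construction in the proof is easy to reproduce numerically. The following sketch (illustrative; the target g(α1, α2) = sin(πα1)cos(πα2) is an assumed example) implements the piecewise-constant system (11.37) on a k × k grid and shows the approximation error shrinking as k grows:

    import math

    def g(a1, a2):                       # assumed continuous target function on D
        return math.sin(math.pi * a1) * math.cos(math.pi * a2)

    def system(a1, a2, k):
        # locate the rectangle D_ij containing (a1, a2), as in (11.33),
        # and return the rule output g(i/k, j/k), as in (11.36)-(11.37)
        i = min(max(math.ceil(a1 * k), 1), k)
        j = min(max(math.ceil(a2 * k), 1), k)
        return g(i / k, j / k)

    for k in (5, 20, 80):
        err = max(abs(system(x / 100, y / 100, k) - g(x / 100, y / 100))
                  for x in range(101) for y in range(101))
        print(k, round(err, 4))          # maximum error decreases with k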

11.3 Uncertain Control


An uncertain controller, designed by Liu [127], is a special uncertain system that maps the state variables of a process under control to the action variables. Thus an uncertain controller consists of the same five parts as an uncertain system: inputs, a rule-base, an uncertain inference rule, an expected value operator, and outputs. The distinguishing point is that the inputs of an uncertain controller are the state variables of the process under control, and the outputs are the action variables.
Figure 11.3 shows an uncertain control system consisting of an uncertain
controller and a process. Note that t represents time, α1 (t), α2 (t), · · · , αm (t)
are not only the inputs of uncertain controller but also the outputs of process,
and β1 (t), β2 (t), · · · , βn (t) are not only the outputs of uncertain controller but
also the inputs of process.
Figure 11.3: An Uncertain Control System. Reprinted from Liu [129].

11.4 Inverted Pendulum


The inverted pendulum system is a nonlinear, unstable system that is widely used as a benchmark for testing control algorithms. Many good techniques already exist for balancing an inverted pendulum. Among others, Gao [52] successfully balanced an inverted pendulum by an uncertain controller with 5 × 5 if-then rules.
The uncertain controller has two inputs (“angle” and “angular velocity”) and one output (“force”). All three will be represented by uncertain sets labeled by
“negative large” NL
“negative small” NS
“zero” Z
“positive small” PS
“positive large” PL
The membership functions of those uncertain sets are shown in Figures 11.5,
11.6 and 11.7.

Figure 11.4: An Inverted Pendulum, in which A(t) represents the angular position and F(t) represents the force that moves the cart at time t. Reprinted from Liu [129].

Figure 11.5: Membership Functions of “Angle” (uncertain sets NL, NS, Z, PS, PL over [−π/2, π/2] rad)

Intuitively, when the inverted pendulum has a large clockwise angle and
a large clockwise angular velocity, we should give it a large force to the right.
Thus we have an if-then rule,

If the angle is negative large
and the angular velocity is negative large,
then the force is positive large.

Similarly, when the inverted pendulum has a large counterclockwise angle and a large counterclockwise angular velocity, we should give it a large force to the left. Thus we have an if-then rule,

If the angle is positive large
and the angular velocity is positive large,
then the force is negative large.

Note that each input or output has 5 states and each state is represented by
an uncertain set. This implies that the rule-base contains 5 × 5 if-then rules.
In order to balance the inverted pendulum, the 25 if-then rules in Table 11.1
are accepted.

Figure 11.6: Membership Functions of “Angular Velocity” (uncertain sets NL, NS, Z, PS, PL over [−π/4, π/4] rad/sec)

Figure 11.7: Membership Functions of “Force” (uncertain sets NL, NS, Z, PS, PL over [−60, 60] N)

Many simulation results show that the uncertain controller can balance the inverted pendulum successfully.
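A single inference step of such a controller can be sketched in Python as follows. This is an illustration only: the five states of each variable are modeled here as triangular membership functions matching the supports in Figures 11.5-11.7, the consequents are replaced by the crisp centers of the force sets (standing in for the expected values E[ηij*]), and the rule matrix is that of Table 11.1.

    import math

    def tri(center, half_width):
        return lambda x: max(0.0, 1.0 - abs(x - center) / half_width)

    def five_states(scale):
        # NL, NS, Z, PS, PL as triangular sets centered on a symmetric grid
        centers = (-scale, -scale / 2, 0.0, scale / 2, scale)
        return {s: tri(c, scale / 2) for s, c in zip(("NL", "NS", "Z", "PS", "PL"), centers)}

    angle_sets = five_states(math.pi / 2)     # Figure 11.5: [-pi/2, pi/2] rad
    velocity_sets = five_states(math.pi / 4)  # Figure 11.6: [-pi/4, pi/4] rad/sec
    force_center = {"NL": -40.0, "NS": -20.0, "Z": 0.0, "PS": 20.0, "PL": 40.0}  # N

    order = ("NL", "NS", "Z", "PS", "PL")
    rule = {"NL": ("PL", "PL", "PL", "PS", "Z"),   # Table 11.1: rows = angle,
            "NS": ("PL", "PL", "PS", "Z", "NS"),   # columns = angular velocity
            "Z":  ("PL", "PS", "Z", "NS", "NL"),
            "PS": ("PS", "Z", "NS", "NL", "NL"),
            "PL": ("Z", "NS", "NL", "NL", "NL")}

    def force(angle, velocity):
        num = den = 0.0
        for a in order:
            for j, v in enumerate(order):
                c = min(angle_sets[a](angle), velocity_sets[v](velocity))  # (11.26)
                num += c * force_center[rule[a][j]]
                den += c
        return num / den if den > 0.0 else 0.0

    print(force(0.2, 0.1))  # positive angle and velocity -> negative force (to the left)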

11.5 Bibliographic Notes


The basic uncertain inference rule was initialized by Liu [127] in 2010 by
the tool of conditional uncertain set. After that, Gao, Gao and Ralescu [49]
extended the uncertain inference rule to the case with multiple antecedents
and multiple if-then rules.
Based on the uncertain inference rules, Liu [127] suggested the concept of
uncertain system, and then presented the tool of uncertain controller. As an
important contribution, Peng and Chen [186] proved that uncertain systems

Table 11.1: Rule Base with 5 × 5 If-Then Rules

angle \ velocity   NL   NS   Z    PS   PL
NL                 PL   PL   PL   PS   Z
NS                 PL   PL   PS   Z    NS
Z                  PL   PS   Z    NS   NL
PS                 PS   Z    NS   NL   NL
PL                 Z    NS   NL   NL   NL

are universal approximators and then demonstrated that the uncertain con-
troller is a reasonable tool. As a successful application, Gao [52] balanced an
inverted pendulum by using the uncertain controller.
Chapter 12

Uncertain Process

The study of uncertain process was started by Liu [123] in 2008 for modeling
the evolution of uncertain phenomena. This chapter will give the concept of
uncertain process, and introduce sample path, uncertainty distribution, in-
dependent increment, stationary increment, extreme value, first hitting time,
and time integral of uncertain process.

12.1 Uncertain Process


An uncertain process is essentially a sequence of uncertain variables indexed
by time. A formal definition is given below.

Definition 12.1 (Liu [123]) Let (Γ, L, M) be an uncertainty space and let T
be a totally ordered set (e.g. time). An uncertain process is a function Xt (γ)
from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is an event
for any Borel set B at each time t.

Remark 12.1: If Xt is an uncertain process, then Xt is an uncertain variable


at each time t.

Example 12.1: Let a and b be real numbers with a < b. Assume Xt is a


linear uncertain variable, i.e.,

Xt ∼ L(at, bt) (12.1)

at each time t. Then Xt is an uncertain process.

Example 12.2: Let a, b, c be real numbers with a < b < c. Assume Xt is a


zigzag uncertain variable, i.e.,

Xt ∼ Z(at, bt, ct) (12.2)

at each time t. Then Xt is an uncertain process.


Example 12.3: Let e and σ be real numbers with σ > 0. Assume Xt is a


normal uncertain variable, i.e.,
Xt ∼ N (et, σt) (12.3)
at each time t. Then Xt is an uncertain process.

Example 12.4: Let e and σ be real numbers with σ > 0. Assume Xt is a


lognormal uncertain variable, i.e.,
Xt ∼ LOGN (et, σt) (12.4)
at each time t. Then Xt is an uncertain process.

Sample Path
Definition 12.2 (Liu [123]) Let Xt be an uncertain process. Then for each
γ ∈ Γ, the function Xt (γ) is called a sample path of Xt .
Note that each sample path is a real-valued function of time t. In addition,
an uncertain process may also be regarded as a function from an uncertainty
space to a collection of sample paths.

Figure 12.1: A Sample Path of Uncertain Process. Reprinted from Liu [129].

Definition 12.3 An uncertain process Xt is said to be sample-continuous if


almost all sample paths are continuous functions with respect to time t.

Uncertain Field
Uncertain field is a generalization of uncertain process when the index set T
becomes a partially ordered set (e.g. time × space, or a surface).
Definition 12.4 (Liu [139]) Let (Γ, L, M) be an uncertainty space and let T
be a partially ordered set (e.g. time × space). An uncertain field is a function
Xt (γ) from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is
an event for any Borel set B at each time t.

12.2 Uncertainty Distribution


An uncertainty distribution of uncertain process is a sequence of uncertainty
distributions of uncertain variables indexed by time. Thus an uncertainty
distribution of uncertain process is a surface rather than a curve. A formal
definition is given below.

Definition 12.5 (Liu [139]) An uncertain process Xt is said to have an


uncertainty distribution Φt (x) if at each time t, the uncertain variable Xt
has the uncertainty distribution Φt (x).

Example 12.5: The linear uncertain process Xt ∼ L(at, bt) has an uncertainty distribution,

Φt(x) = 0,                      if x ≤ at
      = (x − at)/((b − a)t),    if at ≤ x ≤ bt        (12.5)
      = 1,                      if x ≥ bt.

Example 12.6: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an uncertainty distribution,

Φt(x) = 0,                           if x ≤ at
      = (x − at)/(2(b − a)t),        if at ≤ x ≤ bt
      = (x + ct − 2bt)/(2(c − b)t),  if bt ≤ x ≤ ct        (12.6)
      = 1,                           if x ≥ ct.

Example 12.7: The normal uncertain process Xt ∼ N(et, σt) has an uncertainty distribution,

Φt(x) = (1 + exp(π(et − x)/(√3 σt)))^{−1}.        (12.7)

Example 12.8: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an uncertainty distribution,

Φt(x) = (1 + exp(π(et − ln x)/(√3 σt)))^{−1}.        (12.8)
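These distributions are straightforward to evaluate numerically. A small Python sketch (with illustrative parameter values) for the linear and normal cases:

    import math

    def linear_phi(x, t, a, b):
        # (12.5): uncertainty distribution of Xt ~ L(at, bt), t > 0
        if x <= a * t:
            return 0.0
        if x >= b * t:
            return 1.0
        return (x - a * t) / ((b - a) * t)

    def normal_phi(x, t, e, sigma):
        # (12.7): uncertainty distribution of Xt ~ N(et, sigma t), t > 0
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3.0) * sigma * t)))

    t = 2.0
    print(linear_phi(1.0, t, a=0.0, b=1.0))      # 0.5
    print(normal_phi(2.0, t, e=1.0, sigma=0.5))  # 0.5 exactly at x = et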

Theorem 12.1 (Liu [139], Sufficient and Necessary Condition) A function Φt(x) : T × ℜ → [0, 1] is an uncertainty distribution of uncertain process if and only if at each time t, it is a monotone increasing function with respect to x except Φt(x) ≡ 0 and Φt(x) ≡ 1.
Proof: If Φt(x) is an uncertainty distribution of some uncertain process Xt, then at each time t, Φt(x) is the uncertainty distribution of the uncertain variable Xt. It follows from the Peng-Iwamura theorem that Φt(x) is a monotone increasing function with respect to x and Φt(x) ≢ 0, Φt(x) ≢ 1. Conversely, if at each time t, Φt(x) is a monotone increasing function except Φt(x) ≡ 0 and Φt(x) ≡ 1, it follows from the Peng-Iwamura theorem that there exists an uncertain variable ξt whose uncertainty distribution is just Φt(x). Define

Xt = ξt, ∀t ∈ T.

Then Xt is an uncertain process and has the uncertainty distribution Φt(x). The theorem is verified.
Theorem 12.2 Let Xt be an uncertain process with uncertainty distribution Φt(x), and let f(x) be a measurable function. Then f(Xt) is also an uncertain process. Furthermore, (i) if f(x) is a strictly increasing function, then f(Xt) has an uncertainty distribution

Ψt(x) = Φt(f⁻¹(x));        (12.9)

and (ii) if f(x) is a strictly decreasing function and Φt(x) is continuous with respect to x, then f(Xt) has an uncertainty distribution

Ψt(x) = 1 − Φt(f⁻¹(x)).        (12.10)
Proof: At each time t, since Xt is an uncertain variable, it follows from
Theorem 2.1 that f (Xt ) is also an uncertain variable. Thus f (Xt ) is an
uncertain process. The equations (12.9) and (12.10) may be verified by the
operational law of uncertain variables immediately.

Example 12.9: Let Xt be an uncertain process with uncertainty distribution Φt(x). Show that the uncertain process aXt + b has an uncertainty distribution,

Ψt(x) = Φt((x − b)/a),        if a > 0        (12.11)
      = 1 − Φt((x − b)/a),    if a < 0.

Regular Uncertainty Distribution


Definition 12.6 (Liu [139]) An uncertainty distribution Φt(x) is said to be regular if at each time t, it is a continuous and strictly increasing function with respect to x at which 0 < Φt(x) < 1, and

lim_{x→−∞} Φt(x) = 0,    lim_{x→+∞} Φt(x) = 1.        (12.12)

It is clear that linear uncertainty distribution, zigzag uncertainty distribu-


tion, normal uncertainty distribution and lognormal uncertainty distribution
of uncertain process are all regular.
Note that we have stipulated that a crisp initial value X0 has a regu-
lar uncertainty distribution. That is, we allow the initial value of regular
uncertain process to be a constant whose uncertainty distribution is
Φ0(x) = 1,    if x ≥ X0        (12.13)
      = 0,    if x < X0

and say Φ0 (x) is a continuous and strictly increasing function with respect
to x at which 0 < Φ0 (x) < 1 even though it is discontinuous at X0 .

Inverse Uncertainty Distribution


Definition 12.7 (Liu [139]) Let Xt be an uncertain process with regular uncertainty distribution Φt(x). Then the inverse function Φt⁻¹(α) is called the inverse uncertainty distribution of Xt.

Note that at each time t, the inverse uncertainty distribution Φt⁻¹(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

Φt⁻¹(0) = lim_{α↓0} Φt⁻¹(α),    Φt⁻¹(1) = lim_{α↑1} Φt⁻¹(α).        (12.14)

Figure 12.2: Inverse Uncertainty Distribution of Uncertain Process

Example 12.10: The linear uncertain process Xt ∼ L(at, bt) has an inverse uncertainty distribution,

Φt⁻¹(α) = (1 − α)at + αbt.        (12.15)
264 Chapter 12 - Uncertain Process

Example 12.11: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an inverse uncertainty distribution,

Φt⁻¹(α) = (1 − 2α)at + 2αbt,          if α < 0.5        (12.16)
        = (2 − 2α)bt + (2α − 1)ct,    if α ≥ 0.5.

Example 12.12: The normal uncertain process Xt ∼ N(et, σt) has an inverse uncertainty distribution,

Φt⁻¹(α) = et + (σt√3/π) ln(α/(1 − α)).        (12.17)

Example 12.13: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an inverse uncertainty distribution,

Φt⁻¹(α) = exp(et + (σt√3/π) ln(α/(1 − α))).        (12.18)
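As a quick numerical check (illustrative parameters), the inverse distribution (12.17) inverts (12.7) exactly:

    import math

    def normal_phi(x, t, e, s):
        # (12.7)
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3.0) * s * t)))

    def normal_phi_inv(alpha, t, e, s):
        # (12.17)
        return e * t + (s * t * math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

    t, e, s = 2.0, 1.0, 0.5
    for alpha in (0.1, 0.5, 0.9):
        x = normal_phi_inv(alpha, t, e, s)
        print(alpha, round(normal_phi(x, t, e, s), 10))  # recovers alpha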

Theorem 12.3 (Liu [139], Sufficient and Necessary Condition) A function Φt⁻¹(α) : T × (0, 1) → ℜ is an inverse uncertainty distribution of uncertain process if and only if at each time t, it is a continuous and strictly increasing function with respect to α.

Proof: Suppose Φt⁻¹(α) is an inverse uncertainty distribution of uncertain process Xt. Then at each time t, Φt⁻¹(α) is an inverse uncertainty distribution of the uncertain variable Xt. It follows from Theorem 2.6 that Φt⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1). Conversely, if Φt⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1), it follows from Theorem 2.6 that there exists an uncertain variable ξt whose inverse uncertainty distribution is just Φt⁻¹(α). Define

Xt = ξt, ∀t ∈ T.

Then Xt is an uncertain process and has the inverse uncertainty distribution Φt⁻¹(α). The theorem is proved.

Remark 12.2: Note that we stipulate that a crisp initial value X0 has an inverse uncertainty distribution

Φ0⁻¹(α) ≡ X0        (12.19)

and say Φ0⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1) even though it is not.

12.3 Independence and Operational Law


Definition 12.8 (Liu [139]) Uncertain processes X1t, X2t, · · · , Xnt are said to be independent if for any positive integer k and any times t1, t2, · · · , tk, the uncertain vectors

ξi = (Xit1, Xit2, · · · , Xitk),   i = 1, 2, · · · , n        (12.20)

are independent, i.e., for any k-dimensional Borel sets B1, B2, · · · , Bn, we have

M{⋂_{i=1}^{n} (ξi ∈ Bi)} = ⋀_{i=1}^{n} M{ξi ∈ Bi}.        (12.21)

Exercise 12.1: Let X1t , X2t , · · · , Xnt be independent uncertain processes,


and let t1 , t2 , · · · , tn be any times. Show that
X1t1 , X2t2 , · · · , Xntn (12.22)
are independent uncertain variables.

Exercise 12.2: Let Xt and Yt be independent uncertain processes. For any


times t1 , t2 , · · · , tk and s1 , s2 , · · · , sm , show that
(Xt1 , Xt2 , · · · , Xtk ) and (Ys1 , Ys2 , · · · , Ysm ) (12.23)
are independent uncertain vectors.
Theorem 12.4 (Liu [139]) Uncertain processes X1t, X2t, · · · , Xnt are independent if and only if for any positive integer k, any times t1, t2, · · · , tk, and any k-dimensional Borel sets B1, B2, · · · , Bn, we have

M{⋃_{i=1}^{n} (ξi ∈ Bi)} = ⋁_{i=1}^{n} M{ξi ∈ Bi}        (12.24)

where ξi = (Xit1, Xit2, · · · , Xitk) for i = 1, 2, · · · , n.


Proof: It follows from Theorem 2.64 that ξ 1 , ξ 2 , · · · , ξ n are independent
uncertain vectors if and only if (12.24) holds. The theorem is thus verified.
Theorem 12.5 (Liu [139], Operational Law) Let X1t, X2t, · · · , Xnt be independent uncertain processes with regular uncertainty distributions Φ1t, Φ2t, · · · , Φnt, respectively. If the function f(x1, x2, · · · , xn) is strictly increasing with respect to x1, x2, · · · , xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn, then the uncertain process

Xt = f(X1t, X2t, · · · , Xnt)        (12.25)

has an inverse uncertainty distribution

Φt⁻¹(α) = f(Φ1t⁻¹(α), · · · , Φmt⁻¹(α), Φm+1,t⁻¹(1 − α), · · · , Φnt⁻¹(1 − α)).        (12.26)

Proof: At any time t, it is clear that X1t, X2t, · · · , Xnt are independent uncertain variables with inverse uncertainty distributions Φ1t⁻¹(α), Φ2t⁻¹(α), · · · , Φnt⁻¹(α), respectively. The theorem follows from the operational law of uncertain variables immediately.
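Formula (12.26) turns the operational law into a direct computation. The sketch below (illustrative parameters) obtains the inverse uncertainty distribution of Xt = X1t − X2t for two independent linear processes, where f(x1, x2) = x1 − x2 is increasing in x1 and decreasing in x2:

    def lin_inv(alpha, t, a, b):
        # (12.15): inverse distribution of a linear uncertain process L(at, bt)
        return (1.0 - alpha) * a * t + alpha * b * t

    def diff_inv(alpha, t):
        # (12.26): alpha for the increasing argument, 1 - alpha for the decreasing one
        return lin_inv(alpha, t, 1.0, 3.0) - lin_inv(1.0 - alpha, t, 0.0, 2.0)

    print([round(diff_inv(a, 1.0), 3) for a in (0.1, 0.5, 0.9)])  # increasing in alpha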

12.4 Independent Increment Process


An independent increment process is an uncertain process that has indepen-
dent increments. A formal definition is given below.

Definition 12.9 (Liu [123]) An uncertain process Xt is said to have inde-


pendent increments if

Xt0 , Xt1 − Xt0 , Xt2 − Xt1 , · · · , Xtk − Xtk−1 (12.27)

are independent uncertain variables where t0 is the initial time and t1 , t2 , · · ·, tk


are any times with t0 < t1 < · · · < tk .

That is, an independent increment process means that its increments are
independent uncertain variables whenever the time intervals do not overlap.
Please note that the increments are also independent of the initial state.

Theorem 12.6 Let Xt be an independent increment process. Then for any


real numbers a and b, the uncertain process

Yt = aXt + b (12.28)

is also an independent increment process.

Proof: Since Xt is an independent increment process, the uncertain variables

Xt0 , Xt1 − Xt0 , Xt2 − Xt1 , · · · , Xtk − Xtk−1

are independent. It follows from Yt = aXt + b and Theorem 2.8 that

Yt0 , Yt1 − Yt0 , Yt2 − Yt1 , · · · , Ytk − Ytk−1

are also independent. That is, Yt is an independent increment process.

Remark 12.3: Generally speaking, a nonlinear function of independent in-


crement process does not necessarily have independent increments. A typical
example is the square of independent increment process.

Theorem 12.7 (Liu [139]) Let Xt be an independent increment process.


Then for any times s < t, the uncertain variables Xs and Xt − Xs are
independent.

Proof: Since Xt is an independent increment process, the initial value and


increments
X0 , Xs − X0 , Xt − Xs
are independent. It follows from Xs = X0 + (Xs − X0 ) that Xs and Xt − Xs
are independent uncertain variables.

Theorem 12.8 (Liu [139], Sufficient and Necessary Condition) A function Φt⁻¹(α) : T × (0, 1) → ℜ is an inverse uncertainty distribution of independent increment process if and only if (i) at each time t, Φt⁻¹(α) is a continuous and strictly increasing function; and (ii) for any times s < t, Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α.

Proof: Let Xt be an independent increment process with inverse uncertainty distribution Φt⁻¹(α). First, it follows from Theorem 12.3 that Φt⁻¹(α) is a continuous and strictly increasing function with respect to α. Next, it follows from Theorem 12.7 that Xs and Xt − Xs are independent uncertain variables. Since Xs has an inverse uncertainty distribution Φs⁻¹(α) and Xt = Xs + (Xt − Xs), for any α < β, we immediately have

Φt⁻¹(β) − Φt⁻¹(α) ≥ Φs⁻¹(β) − Φs⁻¹(α).

That is,

Φt⁻¹(β) − Φs⁻¹(β) ≥ Φt⁻¹(α) − Φs⁻¹(α).

Hence Φt⁻¹(α) − Φs⁻¹(α) is a monotone (not strictly) increasing function with respect to α.
Conversely, let us prove that there exists an independent increment process whose inverse uncertainty distribution is just Φt⁻¹(α). Without loss of generality, we only consider the range of t ∈ [0, 1]. Let n be a positive integer. Since Φt⁻¹(α) is a continuous and strictly increasing function and Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α, there exist independent uncertain variables ξ0n, ξ1n, · · · , ξnn such that ξ0n has an inverse uncertainty distribution

Υ0n⁻¹(α) = Φ0⁻¹(α)

and ξin have uncertainty distributions

Υin(x) = sup{ α | Φ_{i/n}⁻¹(α) − Φ_{(i−1)/n}⁻¹(α) = x },

i = 1, 2, · · · , n, respectively. Define an uncertain process

Xtⁿ = Σ_{i=0}^{k} ξin,    if t = k/n (k = 0, 1, · · · , n)
    = linear,             otherwise.

It may be proved that Xtⁿ converges in distribution as n → ∞. Furthermore, we may verify that the limit is indeed an independent increment process and has the inverse uncertainty distribution Φt⁻¹(α). The theorem is verified.

Remark 12.4: It follows from Theorem 12.8 that the uncertainty distribution of independent increment process has a horn-like shape, i.e., for any s < t and α < β, we have

Φs⁻¹(β) − Φs⁻¹(α) < Φt⁻¹(β) − Φt⁻¹(α).        (12.29)

Figure 12.3: Inverse Uncertainty Distribution of Independent Increment Process: A Horn-like Family of Functions of t indexed by α

Exercise 12.3: Show that there exists an independent increment process


with linear uncertainty distribution.

Exercise 12.4: Show that there exists an independent increment process


with zigzag uncertainty distribution.

Exercise 12.5: Show that there exists an independent increment process


with normal uncertainty distribution.

Exercise 12.6: Show that there does not exist an independent increment
process with lognormal uncertainty distribution.

12.5 Stationary Independent Increment Process


An uncertain process Xt is said to have stationary increments if its increments
are identically distributed uncertain variables whenever the time intervals

have the same length, i.e., for any given t > 0, the increments Xs+t − Xs are
identically distributed uncertain variables for all s > 0.

Definition 12.10 (Liu [123]) An uncertain process is said to be a stationary


independent increment process if it has not only stationary increments but
also independent increments.

It is clear that a stationary independent increment process is a special


independent increment process.

Theorem 12.9 Let Xt be a stationary independent increment process. Then


for any real numbers a and b, the uncertain process

Yt = aXt + b (12.30)

is also a stationary independent increment process.

Proof: Since Xt is an independent increment process, it follows from The-


orem 12.6 that Yt is also an independent increment process. On the other
hand, since Xt is a stationary increment process, the increments Xs+t − Xs
are identically distributed uncertain variables for all s > 0. Thus

Ys+t − Ys = a(Xs+t − Xs )

are also identically distributed uncertain variables for all s > 0, and Yt is a
stationary increment process. Hence Yt is a stationary independent increment
process.

Theorem 12.10 (Chen [17]) Suppose Xt is a stationary independent in-


crement process. Then Xt and (1 − t)X0 + tX1 are identically distributed
uncertain variables for any time t ≥ 0.

Proof: We first prove the theorem when t is a rational number. Assume t =


q/p where p and q are irreducible integers. Let Φ be the common uncertainty
distribution of increments

X1/p − X0/p , X2/p − X1/p , X3/p − X2/p , · · ·

Then

Xt − X0 = (X1/p − X0/p ) + (X2/p − X1/p ) + · · · + (Xq/p − X(q−1)/p )

has an uncertainty distribution

Ψ(x) = Φ(x/q). (12.31)

In addition,

t(X1 − X0 ) = t((X1/p − X0/p ) + (X2/p − X1/p ) + · · · + (Xp/p − X(p−1)/p ))



has an uncertainty distribution

Υ(x) = Φ(x/p/t) = Φ(x/p/(q/p)) = Φ(x/q). (12.32)

It follows from (12.31) and (12.32) that Xt −X0 and t(X1 −X0 ) are identically
distributed, and so are Xt and (1 − t)X0 + tX1 .

Remark 12.5: If Xt is a stationary independent increment process with X0 = 0, then Xt/t and X1 are identically distributed uncertain variables. In other words, there is an uncertainty distribution Φ such that

Xt/t ∼ Φ(x)        (12.33)

or equivalently,

Xt ∼ Φ(x/t)        (12.34)

for any time t > 0. Note that Φ is just the uncertainty distribution of X1.

Theorem 12.11 (Liu [139]) Let Xt be a stationary independent increment process whose initial value and increments have inverse uncertainty distributions. Then there exist two continuous and strictly increasing functions µ(α) and ν(α) such that Xt has an inverse uncertainty distribution

Φt⁻¹(α) = µ(α) + ν(α)t.        (12.35)

Conversely, if there exist two continuous and strictly increasing functions µ(α) and ν(α) such that (12.35) holds, then there exists a stationary independent increment process Xt whose inverse uncertainty distribution is just Φt⁻¹(α). Furthermore, Xt has a Lipschitz continuous version.

Proof: Assume Xt is a stationary independent increment process whose initial value and increments have inverse uncertainty distributions. Then X0 and X1 − X0 are independent uncertain variables whose inverse uncertainty distributions exist and are denoted by µ(α) and ν(α), respectively. Then µ(α) and ν(α) are continuous and strictly increasing functions. Furthermore, it follows from Theorem 12.10 that Xt and X0 + (X1 − X0)t are identically distributed uncertain variables. Hence Xt has the inverse uncertainty distribution Φt⁻¹(α) = µ(α) + ν(α)t.
Conversely, let us prove that there exists a stationary independent increment process whose inverse uncertainty distribution is just Φt⁻¹(α). Without loss of generality, we only consider the range of t ∈ [0, 1]. Let

{ξ(r) : r represents rational numbers in [0, 1]}

be a countable sequence of independent uncertain variables, where ξ(0) has an inverse uncertainty distribution µ(α) and ξ(r) have a common inverse

uncertainty distribution ν(α) for all rational numbers r in (0, 1]. For each positive integer n, we define an uncertain process

Xtⁿ = ξ(0) + (1/n) Σ_{i=1}^{k} ξ(i/n),    if t = k/n (k = 1, 2, · · · , n)
    = linear,                              otherwise.

It may be proved that Xtⁿ converges in distribution as n → ∞. Furthermore, we may verify that the limit is a stationary independent increment process and has the inverse uncertainty distribution Φt⁻¹(α). The theorem is verified.

Figure 12.4: Inverse Uncertainty Distribution of Stationary Independent Increment Process: A Family of Linear Functions of t indexed by α

Exercise 12.7: Show that there exists a stationary independent increment


process with linear uncertainty distribution.

Exercise 12.8: Show that there exists a stationary independent increment


process with zigzag uncertainty distribution.

Exercise 12.9: Show that there exists a stationary independent increment


process with normal uncertainty distribution.

Exercise 12.10: Show that there does not exist a stationary independent
increment process with lognormal uncertainty distribution.

Theorem 12.12 (Liu [129]) Let Xt be a stationary independent increment


process. Then there exist two real numbers a and b such that

E[Xt ] = a + bt (12.36)

for any time t ≥ 0.



Proof: It follows from Theorem 12.10 that Xt and X0 + (X1 − X0 )t are


identically distributed uncertain variables. Thus we have

E[Xt ] = E[X0 + (X1 − X0 )t].

Since X0 and X1 − X0 are independent uncertain variables, we obtain

E[Xt ] = E[X0 ] + E[X1 − X0 ]t.

Hence (12.36) holds for a = E[X0 ] and b = E[X1 − X0 ].
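Numerically, the expected value of a regular process can be computed via the inverse-distribution integral E[ξ] = ∫₀¹ Φ⁻¹(α) dα from the chapters on uncertain variables. The sketch below (illustrative parameters) confirms that E[Xt] is linear in t for the linear process L(at, bt):

    def lin_inv(alpha, t, a=1.0, b=3.0):
        # (12.15)
        return (1.0 - alpha) * a * t + alpha * b * t

    def expected_value(t, n=10000):
        # E[Xt] as the integral of the inverse distribution over (0, 1), midpoint rule
        return sum(lin_inv((i + 0.5) / n, t) for i in range(n)) / n

    for t in (1.0, 2.0, 3.0):
        print(t, round(expected_value(t), 4))  # equals (a + b)t/2 = 2t, linear in t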

Theorem 12.13 (Liu [129]) Let Xt be a stationary independent increment


process with an initial value 0. Then for any times s and t, we have

E[Xs+t ] = E[Xs ] + E[Xt ]. (12.37)

Proof: It follows from Theorem 12.12 that there exists a real number b such
that E[Xt ] = bt for any time t ≥ 0. Hence

E[Xs+t ] = b(s + t) = bs + bt = E[Xs ] + E[Xt ].

Theorem 12.14 (Chen [17]) Let Xt be a stationary independent increment


process with a crisp initial value X0 . Then there exists a real number b such
that
V[Xt] = bt²        (12.38)

for any time t ≥ 0.

Proof: It follows from Theorem 12.10 that Xt and (1 − t)X0 + tX1 are
identically distributed uncertain variables. Since X0 is a constant, we have

V[Xt] = V[(1 − t)X0 + tX1] = t²V[X1].

Hence (12.38) holds for b = V [X1 ].

Theorem 12.15 (Chen [17]) Let Xt be a stationary independent increment process with a crisp initial value X0. Then for any times s and t, we have

√V[Xs+t] = √V[Xs] + √V[Xt].        (12.39)

Proof: It follows from Theorem 12.14 that there exists a real number b such that V[Xt] = bt² for any time t ≥ 0. Hence

√V[Xs+t] = √b (s + t) = √b s + √b t = √V[Xs] + √V[Xt].

12.6 Extreme Value Theorem


This section will present a series of extreme value theorems for sample-
continuous independent increment processes.

Theorem 12.16 (Liu [135], Extreme Value Theorem) Let Xt be a sample-continuous independent increment process with uncertainty distribution Φt(x). Then the supremum

sup_{0≤t≤s} Xt        (12.40)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(x);        (12.41)

and the infimum

inf_{0≤t≤s} Xt        (12.42)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(x).        (12.43)

Proof: Let 0 = t1 < t2 < · · · < tn = s be a partition of the closed interval [0, s]. It is clear that

Xti = Xt1 + (Xt2 − Xt1) + · · · + (Xti − Xti−1)

for i = 1, 2, · · · , n. Since the increments

Xt1, Xt2 − Xt1, · · · , Xtn − Xtn−1

are independent uncertain variables, it follows from Theorem 2.15 that the maximum

max_{1≤i≤n} Xti

has an uncertainty distribution

min_{1≤i≤n} Φti(x).

Since Xt is sample-continuous, we have

max_{1≤i≤n} Xti → sup_{0≤t≤s} Xt

and

min_{1≤i≤n} Φti(x) → inf_{0≤t≤s} Φt(x)

as n → ∞. Thus (12.41) is proved. Similarly, it follows from Theorem 2.15 that the minimum

min_{1≤i≤n} Xti

has an uncertainty distribution

max_{1≤i≤n} Φti(x).

Since Xt is sample-continuous, we have

min_{1≤i≤n} Xti → inf_{0≤t≤s} Xt

and

max_{1≤i≤n} Φti(x) → sup_{0≤t≤s} Φt(x)

as n → ∞. Thus (12.43) is verified.
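For instance (illustrative parameters), the distribution (12.41) of the supremum of a normal independent increment process can be evaluated on a time grid:

    import math

    def phi(x, t, e=1.0, s=0.5):
        # (12.7), with a crisp initial value X0 = 0 at t = 0
        if t == 0.0:
            return 0.0 if x < 0.0 else 1.0
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3.0) * s * t)))

    def psi_sup(x, horizon=2.0, n=2000):
        # (12.41): Psi(x) = inf over 0 <= t <= horizon of Phi_t(x), via a grid
        return min(phi(x, horizon * k / n) for k in range(n + 1))

    for x in (0.5, 1.0, 2.0, 4.0):
        print(x, round(psi_sup(x), 4))  # increasing in x, as a distribution must be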

Theorem 12.17 (Liu [135]) Let Xt be a sample-continuous independent increment process with uncertainty distribution Φt(x). If f is a strictly increasing function, then the supremum

sup_{0≤t≤s} f(Xt)        (12.44)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(f⁻¹(x));        (12.45)

and the infimum

inf_{0≤t≤s} f(Xt)        (12.46)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(f⁻¹(x)).        (12.47)

Proof: Since f is a strictly increasing function, f(Xt) ≤ x if and only if Xt ≤ f⁻¹(x). It follows from the extreme value theorem that

Ψ(x) = M{ sup_{0≤t≤s} f(Xt) ≤ x } = M{ sup_{0≤t≤s} Xt ≤ f⁻¹(x) } = inf_{0≤t≤s} Φt(f⁻¹(x)).

Similarly, we have

Ψ(x) = M{ inf_{0≤t≤s} f(Xt) ≤ x } = M{ inf_{0≤t≤s} Xt ≤ f⁻¹(x) } = sup_{0≤t≤s} Φt(f⁻¹(x)).

The theorem is proved.

Exercise 12.11: Let Xt be a sample-continuous independent increment process with uncertainty distribution Φt(x). Show that the supremum

sup_{0≤t≤s} exp(Xt)        (12.48)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(ln x);        (12.49)

and the infimum

inf_{0≤t≤s} exp(Xt)        (12.50)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(ln x).        (12.51)

Exercise 12.12: Let Xt be a sample-continuous and positive independent increment process with uncertainty distribution Φt(x). Show that the supremum

sup_{0≤t≤s} ln Xt        (12.52)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(exp(x));        (12.53)

and the infimum

inf_{0≤t≤s} ln Xt        (12.54)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(exp(x)).        (12.55)

Exercise 12.13: Let Xt be a sample-continuous and nonnegative independent increment process with uncertainty distribution Φt(x). Show that the supremum

sup_{0≤t≤s} Xt²        (12.56)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(√x);        (12.57)

and the infimum

inf_{0≤t≤s} Xt²        (12.58)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(√x).        (12.59)

Theorem 12.18 (Liu [135]) Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). If f is a strictly decreasing function, then the supremum

sup_{0≤t≤s} f(Xt)        (12.60)

has an uncertainty distribution

Ψ(x) = 1 − sup_{0≤t≤s} Φt(f⁻¹(x));        (12.61)

and the infimum

inf_{0≤t≤s} f(Xt)        (12.62)

has an uncertainty distribution

Ψ(x) = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).        (12.63)

Proof: Since f is a strictly decreasing function, f(Xt) ≤ x if and only if Xt ≥ f⁻¹(x). It follows from the extreme value theorem that

Ψ(x) = M{ sup_{0≤t≤s} f(Xt) ≤ x } = M{ inf_{0≤t≤s} Xt ≥ f⁻¹(x) }
     = 1 − M{ inf_{0≤t≤s} Xt < f⁻¹(x) } = 1 − sup_{0≤t≤s} Φt(f⁻¹(x)).

Similarly, we have

Ψ(x) = M{ inf_{0≤t≤s} f(Xt) ≤ x } = M{ sup_{0≤t≤s} Xt ≥ f⁻¹(x) }
     = 1 − M{ sup_{0≤t≤s} Xt < f⁻¹(x) } = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).

The theorem is proved.

Exercise 12.14: Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). Show that the supremum

sup_{0≤t≤s} exp(−Xt)        (12.64)

has an uncertainty distribution

Ψ(x) = 1 − sup_{0≤t≤s} Φt(− ln x);        (12.65)

and the infimum

inf_{0≤t≤s} exp(−Xt)        (12.66)

has an uncertainty distribution

Ψ(x) = 1 − inf_{0≤t≤s} Φt(− ln x).        (12.67)

Exercise 12.15: Let Xt be a sample-continuous and positive independent increment process with continuous uncertainty distribution Φt(x). Show that the supremum

sup_{0≤t≤s} 1/Xt        (12.68)

has an uncertainty distribution

Ψ(x) = 1 − sup_{0≤t≤s} Φt(1/x);        (12.69)

and the infimum

inf_{0≤t≤s} 1/Xt        (12.70)

has an uncertainty distribution

Ψ(x) = 1 − inf_{0≤t≤s} Φt(1/x).        (12.71)

12.7 First Hitting Time


Definition 12.11 Let Xt be an uncertain process and let z be a given level.
Then the uncertain variable

τz = inf{ t ≥ 0 | Xt = z }        (12.72)

is called the first hitting time that Xt reaches the level z.



Figure 12.5: First Hitting Time

Theorem 12.19 Let Xt be an uncertain process and let z be a given level. Then the first hitting time τz that Xt reaches the level z has an uncertainty distribution,

Υ(s) = M{ sup_{0≤t≤s} Xt ≥ z },    if X0 < z        (12.73)
     = M{ inf_{0≤t≤s} Xt ≤ z },    if X0 > z.

Proof: When X0 < z, it follows from the definition of first hitting time that

τz ≤ s if and only if sup_{0≤t≤s} Xt ≥ z.

Thus the uncertainty distribution of τz is

Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} Xt ≥ z }.

When X0 > z, it follows from the definition of first hitting time that

τz ≤ s if and only if inf_{0≤t≤s} Xt ≤ z.

Thus the uncertainty distribution of τz is

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} Xt ≤ z }.

The theorem is verified.
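Combining this with the extreme value theorem makes Υ(s) computable. The sketch below (illustrative: a normal independent increment process with X0 = 0 and a level z > 0, as in Theorem 12.20 with f the identity) evaluates Υ(s) = 1 − inf_{0≤t≤s} Φt(z) on a time grid:

    import math

    def phi(z, t, e=1.0, s=0.5):
        # (12.7), with a crisp initial value X0 = 0 at t = 0
        if t == 0.0:
            return 0.0 if z < 0.0 else 1.0
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - z) / (math.sqrt(3.0) * s * t)))

    def hitting(z, s, n=2000):
        # Theorem 12.20 with f(x) = x and z > X0: Upsilon(s) = 1 - inf Phi_t(z)
        return 1.0 - min(phi(z, s * k / n) for k in range(n + 1))

    z = 1.0
    for s in (0.5, 1.0, 2.0, 4.0):
        print(s, round(hitting(z, s), 4))  # nondecreasing in s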

Theorem 12.20 (Liu [135]) Let Xt be a sample-continuous independent in-


crement process with continuous uncertainty distribution Φt (x). If f is a
Section 12.8 - Time Integral 279

strictly increasing function and z is a given level, then the first hitting time
τz that f (Xt ) reaches the level z has an uncertainty distribution,

Υ(s) = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)),    if z > f(X0)        (12.74)
     = sup_{0≤t≤s} Φt(f⁻¹(z)),        if z < f(X0).

Proof: Note that Xt is a sample-continuous independent increment process and f is a strictly increasing function. When z > f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} f(Xt) ≥ z } = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)).

When z < f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = sup_{0≤t≤s} Φt(f⁻¹(z)).

The theorem is verified.

Theorem 12.21 (Liu [135]) Let Xt be a sample-continuous independent in-


crement process with continuous uncertainty distribution Φt (x). If f is a
strictly decreasing function and z is a given level, then the first hitting time
τz that f (Xt ) reaches the level z has an uncertainty distribution,


Υ(s) = sup_{0≤t≤s} Φt(f⁻¹(z)),        if z > f(X0)        (12.75)
     = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)),    if z < f(X0).

Proof: Note that Xt is an independent increment process and f is a strictly decreasing function. When z > f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} f(Xt) ≥ z } = sup_{0≤t≤s} Φt(f⁻¹(z)).

When z < f(X0), it follows from the extreme value theorem that

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} f(Xt) ≤ z } = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)).

The theorem is verified.



12.8 Time Integral


This section will give a definition of time integral that is an integral of un-
certain process with respect to time.

Definition 12.12 (Liu [123]) Let Xt be an uncertain process. For any partition of the closed interval [a, b] with a = t1 < t2 < · · · < tk+1 = b, the mesh is written as

∆ = max_{1≤i≤k} |ti+1 − ti|.        (12.76)

Then the time integral of Xt with respect to t is

∫_a^b Xt dt = lim_{∆→0} Σ_{i=1}^{k} Xti · (ti+1 − ti)        (12.77)

provided that the limit exists almost surely and is finite. In this case, the
uncertain process Xt is said to be time integrable.

Since Xt is an uncertain variable at each time t, the limit in (12.77) is


also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is time integrable if and only if the
limit in (12.77) is an uncertain variable.

Theorem 12.22 If Xt is a sample-continuous uncertain process on [a, b],


then it is time integrable on [a, b].

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. Since the uncertain process Xt is sample-continuous, almost all sample paths are continuous functions with respect to t. Hence the limit

lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti)

exists almost surely and is finite. On the other hand, since Xt is an uncertain
variable at each time t, the above limit is also a measurable function. Hence
the limit is an uncertain variable and then Xt is time integrable.
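On a single sample path the time integral is just a Riemann-sum limit, so it can be approximated directly. A minimal sketch (the sample path Xt(γ) = t² is an illustrative stand-in for one realization of a sample-continuous process):

    def time_integral(path, a, b, k=100000):
        # Riemann sum from Definition 12.12 on an even partition of [a, b]
        h = (b - a) / k
        return sum(path(a + i * h) * h for i in range(k))

    print(round(time_integral(lambda t: t * t, 0.0, 1.0), 4))  # ~ 1/3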

Theorem 12.23 If Xt is a time integrable uncertain process on [a, b], then it is time integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b], then

∫_a^b Xt dt = ∫_a^c Xt dt + ∫_c^b Xt dt.        (12.78)

Proof: Let [a′, b′] be a subinterval of [a, b]. Since Xt is a time integrable uncertain process on [a, b], for any partition

a = t1 < · · · < tm = a′ < tm+1 < · · · < tn = b′ < tn+1 < · · · < tk+1 = b,

the limit

lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti)

exists almost surely and is finite. Thus the limit

lim_{∆→0} Σ_{i=m}^{n−1} Xti (ti+1 − ti)

exists almost surely and is finite. Hence Xt is time integrable on the subinterval [a′, b′]. Next, for the partition

a = t1 < · · · < tm = c < tm+1 < · · · < tk+1 = b,

we have

Σ_{i=1}^{k} Xti (ti+1 − ti) = Σ_{i=1}^{m−1} Xti (ti+1 − ti) + Σ_{i=m}^{k} Xti (ti+1 − ti).

Note that

∫_a^b Xt dt = lim_{∆→0} Σ_{i=1}^{k} Xti (ti+1 − ti),

∫_a^c Xt dt = lim_{∆→0} Σ_{i=1}^{m−1} Xti (ti+1 − ti),

∫_c^b Xt dt = lim_{∆→0} Σ_{i=m}^{k} Xti (ti+1 − ti).

Hence the equation (12.78) is proved.
Theorem 12.24 (Linearity of Time Integral) Let Xt and Yt be time integrable uncertain processes on [a, b], and let α and β be real numbers. Then

∫_a^b (αXt + βYt) dt = α ∫_a^b Xt dt + β ∫_a^b Yt dt.        (12.79)

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. It follows from the definition of time integral that

∫_a^b (αXt + βYt) dt = lim_{∆→0} Σ_{i=1}^{k} (αXti + βYti)(ti+1 − ti)
                     = lim_{∆→0} α Σ_{i=1}^{k} Xti (ti+1 − ti) + lim_{∆→0} β Σ_{i=1}^{k} Yti (ti+1 − ti)
                     = α ∫_a^b Xt dt + β ∫_a^b Yt dt.

Hence the equation (12.79) is proved.



12.9 Bibliographic Notes


The study of uncertain process was started by Liu [123] in 2008 for modeling
the evolution of uncertain phenomena. In order to describe uncertain process,
Liu [139] proposed the concepts of uncertainty distribution and inverse uncer-
tainty distribution. In addition, independence concept of uncertain processes
was also introduced by Liu [139].
Independent increment process was initialized by Liu [123], and a suffi-
cient and necessary condition was proved by Liu [139] for its inverse uncer-
tainty distribution. In addition, Liu [135] presented an extreme value theorem
and obtained the uncertainty distribution of first hitting time of independent
increment process.
Stationary independent increment process was initialized by Liu [123],
and its inverse uncertainty distribution was investigated by Liu [139]. Fur-
thermore, Liu [129] showed that the expected value is a linear function of
time, and Chen [17] verified that the variance is proportional to the square
of time.
Chapter 13

Uncertain Renewal Process

Uncertain renewal process is an uncertain process in which events occur continuously and independently of one another at uncertain times. This chapter will introduce uncertain renewal process, renewal reward process, and alternating renewal process. This chapter will also provide block replacement policy, age replacement policy, and an uncertain insurance model.

13.1 Uncertain Renewal Process


Definition 13.1 (Liu [123]) Let ξ1 , ξ2 , · · · be iid uncertain interarrival times.
Define S0 = 0 and Sn = ξ1 + ξ2 + · · · + ξn for n ≥ 1. Then the uncertain
process
N_t = max_{n≥0} { n | S_n ≤ t }    (13.1)

is called an uncertain renewal process.

It is clear that S_n is a stationary independent increment process with respect to n. Since ξ_1, ξ_2, · · · denote the interarrival times of successive events,
Sn can be regarded as the waiting time until the occurrence of the nth event.
In this case, the renewal process Nt is the number of renewals in (0, t]. Note
that Nt is not sample-continuous, but each sample path of Nt is a right-
continuous and increasing step function taking only nonnegative integer val-
ues. Furthermore, since the interarrival times are always assumed to be
positive uncertain variables, the size of each jump of Nt is always 1. In other
words, Nt has at most one renewal at each time. In particular, Nt does not
jump at time 0.

Theorem 13.1 (Fundamental Relationship) Let Nt be a renewal process


with uncertain interarrival times ξ1 , ξ2 , · · · , and Sn = ξ1 + ξ2 + · · · + ξn .


Figure 13.1: A Sample Path of Renewal Process. Reprinted from Liu [129]. (The figure plots N_t against t as a right-continuous increasing step function that jumps by 1 at each arrival time S_1, S_2, S_3, S_4, the jumps being separated by the interarrival times ξ_1, ξ_2, ξ_3, ξ_4.)

Then we have
Nt ≥ n ⇔ Sn ≤ t (13.2)
for any time t and integer n. Furthermore, we also have
Nt ≤ n ⇔ Sn+1 > t. (13.3)
It follows from the fundamental relationship that Nt ≥ n is equivalent to
Sn ≤ t. Thus we immediately have
M{Nt ≥ n} = M{Sn ≤ t}. (13.4)
Since Nt ≤ n is equivalent to Sn+1 > t, by using the duality axiom, we also
have
M{Nt ≤ n} = 1 − M{Sn+1 ≤ t}. (13.5)
Theorem 13.2 (Liu [129]) Let N_t be a renewal process with uncertain interarrival times ξ_1, ξ_2, · · · If those interarrival times have a common uncertainty distribution Φ, then N_t has an uncertainty distribution

Υ_t(x) = 1 − Φ( t/(⌊x⌋ + 1) ), ∀x ≥ 0    (13.6)

where ⌊x⌋ represents the maximal integer less than or equal to x.

Proof: Note that S_{n+1} has an uncertainty distribution Φ(x/(n + 1)). It follows from (13.5) that

M{N_t ≤ n} = 1 − M{S_{n+1} ≤ t} = 1 − Φ( t/(n + 1) ).

Since N_t takes integer values, for any x ≥ 0, we have

Υ_t(x) = M{N_t ≤ x} = M{N_t ≤ ⌊x⌋} = 1 − Φ( t/(⌊x⌋ + 1) ).
The theorem is verified.
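For a quick feel of this step-function distribution, the following sketch evaluates Υ_t(x) = 1 − Φ(t/(⌊x⌋ + 1)) numerically. It assumes, purely for illustration, linear interarrival times L(a, b); the particular values t = 10, a = 1, b = 3 are invented.

    import math

    def linear_cdf(x, a, b):
        # uncertainty distribution of a linear uncertain variable L(a, b)
        return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

    def renewal_dist(x, t, a, b):
        # Upsilon_t(x) = 1 - Phi(t / (floor(x) + 1)) from Theorem 13.2, for x >= 0
        return 1.0 - linear_cdf(t / (math.floor(x) + 1), a, b)

    for x in range(8):
        print(x, round(renewal_dist(x, t=10.0, a=1.0, b=3.0), 4))
    # a step function of x that jumps only at the integers, as in Figure 13.2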

Figure 13.2: Uncertainty Distribution Υ_t(x) of Renewal Process N_t. Reprinted from Liu [129]. (The figure plots Υ_t as a step function of x with jumps at the integer points 0, 1, 2, 3, 4, 5, taking the values Υ_t(0), Υ_t(1), · · · , Υ_t(5).)

Theorem 13.3 (Liu [129]) Let N_t be a renewal process with uncertain interarrival times ξ_1, ξ_2, · · · Then the average renewal number

N_t / t → 1/ξ_1    (13.7)

in the sense of convergence in distribution as t → ∞.

Proof: The uncertainty distribution Υ_t of N_t has been given by Theorem 13.2 as follows,

Υ_t(x) = 1 − Φ( t/(⌊x⌋ + 1) )

where Φ is the uncertainty distribution of ξ_1. It follows from the operational law that the uncertainty distribution of N_t/t is

Ψ_t(x) = 1 − Φ( t/(⌊tx⌋ + 1) )

where ⌊tx⌋ represents the maximal integer less than or equal to tx. Thus at each continuity point x of 1 − Φ(1/x), we have

lim_{t→∞} Ψ_t(x) = 1 − Φ(1/x)

which is just the uncertainty distribution of 1/ξ_1. Hence N_t/t converges in distribution to 1/ξ_1 as t → ∞.
Theorem 13.4 (Liu [129], Elementary Renewal Theorem) Let N_t be a renewal process with uncertain interarrival times ξ_1, ξ_2, · · · If E[1/ξ_1] exists, then

lim_{t→∞} E[N_t]/t = E[1/ξ_1].    (13.8)

If those interarrival times have a common uncertainty distribution Φ, then

lim_{t→∞} E[N_t]/t = ∫_0^{+∞} Φ(1/x) dx.    (13.9)

If the uncertainty distribution Φ is regular, then

lim_{t→∞} E[N_t]/t = ∫_0^1 1/Φ⁻¹(α) dα.    (13.10)

Proof: Write the uncertainty distributions of N_t/t and 1/ξ_1 by Ψ_t(x) and G(x), respectively. Theorem 13.3 says that Ψ_t(x) → G(x) as t → ∞ at each continuity point x of G(x). Note that Ψ_t(x) ≥ G(x). It follows from the Lebesgue dominated convergence theorem and the existence of E[1/ξ_1] that

lim_{t→∞} E[N_t]/t = lim_{t→∞} ∫_0^{+∞} (1 − Ψ_t(x)) dx = ∫_0^{+∞} (1 − G(x)) dx = E[1/ξ_1].

Since 1/ξ_1 has an uncertainty distribution 1 − Φ(1/x), we have

lim_{t→∞} E[N_t]/t = E[1/ξ_1] = ∫_0^{+∞} Φ(1/x) dx.

Furthermore, since 1/ξ_1 has an inverse uncertainty distribution

G⁻¹(α) = 1/Φ⁻¹(1 − α),

we get

E[1/ξ_1] = ∫_0^1 1/Φ⁻¹(1 − α) dα = ∫_0^1 1/Φ⁻¹(α) dα.

The theorem is proved.

Exercise 13.1: A renewal process N_t is called linear if ξ_1, ξ_2, · · · are iid linear uncertain variables L(a, b) with a > 0. Show that

lim_{t→∞} E[N_t]/t = (ln b − ln a)/(b − a).    (13.11)
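The exercise can be checked numerically against (13.10). The sketch below assumes L(a, b) with inverse distribution Φ⁻¹(α) = a + α(b − a); the values a = 1, b = 3 are arbitrary.

    import math

    def average_renewal_rate(inv_phi, n=100000):
        # midpoint rule for the integral of 1/Phi^{-1}(alpha) over (0, 1), i.e. (13.10)
        return sum(1.0 / inv_phi((i + 0.5) / n) for i in range(n)) / n

    a, b = 1.0, 3.0
    numeric = average_renewal_rate(lambda alpha: a + alpha * (b - a))
    closed = (math.log(b) - math.log(a)) / (b - a)
    print(numeric, closed)   # both equal (ln 3)/2 = 0.5493... up to discretization error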

Exercise 13.2: A renewal process N_t is called zigzag if ξ_1, ξ_2, · · · are iid zigzag uncertain variables Z(a, b, c) with a > 0. Show that

lim_{t→∞} E[N_t]/t = (1/2)( (ln b − ln a)/(b − a) + (ln c − ln b)/(c − b) ).    (13.12)

Exercise 13.3: A renewal process N_t is called lognormal if ξ_1, ξ_2, · · · are iid lognormal uncertain variables LOGN(e, σ). Show that

lim_{t→∞} E[N_t]/t = √3 σ exp(−e) csc(√3 σ) if σ < π/√3, and +∞ if σ ≥ π/√3.    (13.13)

13.2 Block Replacement Policy


Block replacement policy means that an element is always replaced at failure or periodically with time s. Assume that the lifetimes of elements are iid uncertain variables ξ_1, ξ_2, · · · with a common uncertainty distribution Φ. Then the replacement times form an uncertain renewal process N_t. Let a denote the "failure replacement" cost of replacing an element when it fails earlier than s, and b the "planned replacement" cost of replacing an element at planned time s. Note that a > b > 0 is always assumed. It is clear that the cost of one period is aN_s + b and the average cost is

(aN_s + b)/s.    (13.14)
Theorem 13.5 (Yao [250]) Assume the lifetimes of elements are iid uncertain variables ξ_1, ξ_2, · · · with a common uncertainty distribution Φ, and N_t is the uncertain renewal process representing the replacement times. Then the average cost has an expected value

E[ (aN_s + b)/s ] = (1/s)( a Σ_{n=1}^∞ Φ(s/n) + b ).    (13.15)

Proof: Note that the uncertainty distribution of N_t is a step function. It follows from Theorem 13.2 that

E[N_s] = ∫_0^{+∞} Φ( s/(⌊x⌋ + 1) ) dx = Σ_{n=1}^∞ Φ(s/n).

Thus (13.15) is verified by

E[ (aN_s + b)/s ] = (a E[N_s] + b)/s.    (13.16)

Finally, please note that

lim_{s↓0} E[ (aN_s + b)/s ] = +∞,    (13.17)

lim_{s→∞} E[ (aN_s + b)/s ] = a ∫_0^{+∞} Φ(1/x) dx.    (13.18)

What is the optimal time s?

When the block replacement policy is accepted, one problem is concerned with finding an optimal time s in order to minimize the average cost, i.e.,

min_s (1/s)( a Σ_{n=1}^∞ Φ(s/n) + b ).    (13.19)
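A crude grid search over s makes the trade-off visible. The sketch below assumes linear lifetimes L(1, 3) with costs a = 5 and b = 1, all invented numbers; the infinite series in (13.19) truncates itself because Φ(s/n) vanishes once s/n ≤ 1.

    def linear_cdf(x, lo=1.0, hi=3.0):
        return 0.0 if x <= lo else 1.0 if x >= hi else (x - lo) / (hi - lo)

    def average_cost(s, a=5.0, b=1.0):
        # objective of (13.19); terms with n >= s contribute nothing since Phi(s/n) = 0
        total = sum(linear_cdf(s / n) for n in range(1, int(s) + 1))
        return (a * total + b) / s

    best = min((average_cost(k / 100.0), k / 100.0) for k in range(50, 1000))
    print(best)
    # with these numbers the search settles at s = 1, the largest time at which a
    # failure is still impossible: failures cost so much that one replaces just
    # before they can occur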

13.3 Renewal Reward Process


Let (ξ1 , η1 ), (ξ2 , η2 ), · · · be a sequence of pairs of uncertain variables. We
shall interpret ηi as the rewards (or costs) associated with the i-th interarrival
times ξi for i = 1, 2, · · · , respectively.

Definition 13.2 (Liu [129]) Let ξ1 , ξ2 , · · · be iid uncertain interarrival times,


and let η1 , η2 , · · · be iid uncertain rewards. Assume that (ξ1 , ξ2 , · · · ) and
(η1 , η2 , · · · ) are independent uncertain vectors. Then
R_t = Σ_{i=1}^{N_t} η_i    (13.20)

is called a renewal reward process, where N_t is the renewal process with uncertain interarrival times ξ_1, ξ_2, · · ·

A renewal reward process Rt denotes the total reward earned by time t.


In addition, if ηi ≡ 1, then Rt degenerates to a renewal process Nt . Please
also note that Rt = 0 whenever Nt = 0.

Theorem 13.6 (Liu [129]) Let R_t be a renewal reward process with uncertain interarrival times ξ_1, ξ_2, · · · and uncertain rewards η_1, η_2, · · · Assume those interarrival times and rewards have uncertainty distributions Φ and Ψ, respectively. Then R_t has an uncertainty distribution

Υ_t(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).    (13.21)

Here we set x/k = +∞ and Ψ(x/k) = 1 when k = 0.

Proof: It follows from the definition of renewal reward process that the renewal process N_t is independent of the uncertain rewards η_1, η_2, · · · , and R_t has an uncertainty distribution

Υ_t(x) = M{ Σ_{i=1}^{N_t} η_i ≤ x } = M{ ∪_{k=0}^∞ (N_t = k) ∩ ( Σ_{i=1}^k η_i ≤ x ) }

= M{ ∪_{k=0}^∞ (N_t = k) ∩ (η_1 ≤ x/k) }    (this is a polyrectangle)

= max_{k≥0} M{ (N_t ≤ k) ∩ (η_1 ≤ x/k) }    (polyrectangular theorem)

= max_{k≥0} M{N_t ≤ k} ∧ M{η_1 ≤ x/k}    (independence)

= max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).

The theorem is proved.

Figure 13.3: Uncertainty Distribution Υ_t(x) of Renewal Reward Process R_t, in which the dashed horizontal lines are 1 − Φ(t/(k + 1)) and the dashed curves are Ψ(x/k) for k = 0, 1, 2, · · · Reprinted from Liu [129].

Theorem 13.7 (Liu [129]) Assume that R_t is a renewal reward process with uncertain interarrival times ξ_1, ξ_2, · · · and uncertain rewards η_1, η_2, · · · Then the reward rate

R_t / t → η_1 / ξ_1    (13.22)

in the sense of convergence in distribution as t → ∞.

Proof: It follows from Theorem 13.6 that the uncertainty distribution of R_t is

Υ_t(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(x/k).

Then R_t/t has an uncertainty distribution

Ψ_t(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(tx/k).

When t → ∞, we have

Ψ_t(x) → sup_{y≥0} (1 − Φ(y)) ∧ Ψ(xy)

which is just the uncertainty distribution of η_1/ξ_1. Hence R_t/t converges in distribution to η_1/ξ_1 as t → ∞.
Theorem 13.8 (Liu [129], Renewal Reward Theorem) Assume that R_t is a renewal reward process with uncertain interarrival times ξ_1, ξ_2, · · · and uncertain rewards η_1, η_2, · · · If E[η_1/ξ_1] exists, then

lim_{t→∞} E[R_t]/t = E[η_1/ξ_1].    (13.23)

If those interarrival times and rewards have regular uncertainty distributions Φ and Ψ, respectively, then

lim_{t→∞} E[R_t]/t = ∫_0^1 Ψ⁻¹(α)/Φ⁻¹(1 − α) dα.    (13.24)

Proof: It follows from Theorem 13.6 that R_t/t has an uncertainty distribution

F_t(x) = max_{k≥0} ( 1 − Φ( t/(k + 1) ) ) ∧ Ψ(tx/k)

and η_1/ξ_1 has an uncertainty distribution

G(x) = sup_{y≥0} (1 − Φ(y)) ∧ Ψ(xy).

Note that F_t(x) → G(x) and F_t(x) ≥ G(x). It follows from the Lebesgue dominated convergence theorem and the existence of E[η_1/ξ_1] that

lim_{t→∞} E[R_t]/t = lim_{t→∞} ∫_0^{+∞} (1 − F_t(x)) dx = ∫_0^{+∞} (1 − G(x)) dx = E[η_1/ξ_1].

Finally, since η_1/ξ_1 has an inverse uncertainty distribution

G⁻¹(α) = Ψ⁻¹(α)/Φ⁻¹(1 − α),

the equation (13.24) is obtained.
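As a numerical illustration of (13.24), the sketch below assumes linear interarrival times L(1, 3) and linear rewards L(2, 4), whose inverse uncertainty distributions are affine in α; both choices are invented for the example.

    def inv_phi(alpha):      # interarrival times L(1, 3): Phi^{-1}(alpha) = 1 + 2*alpha
        return 1.0 + 2.0 * alpha

    def inv_psi(alpha):      # rewards L(2, 4): Psi^{-1}(alpha) = 2 + 2*alpha
        return 2.0 + 2.0 * alpha

    def reward_rate(n=100000):
        # midpoint rule for the integral of Psi^{-1}(alpha)/Phi^{-1}(1 - alpha)
        total = 0.0
        for i in range(n):
            alpha = (i + 0.5) / n
            total += inv_psi(alpha) / inv_phi(1.0 - alpha)
        return total / n

    print(reward_rate())   # (5 ln 3 - 2)/2 = 1.7465... for this pair of distributions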

13.4 Uncertain Insurance Model

Liu [135] assumed that a is the initial capital of an insurance company, b is the premium rate, bt is the total income up to time t, and the uncertain claim process is a renewal reward process

R_t = Σ_{i=1}^{N_t} η_i    (13.25)

with iid uncertain interarrival times ξ_1, ξ_2, · · · and iid uncertain claim amounts η_1, η_2, · · · Then the capital of the insurance company at time t is

Z_t = a + bt − R_t    (13.26)

and Z_t is called an insurance risk process.



Figure 13.4: An Insurance Risk Process. (The figure plots the capital Z_t = a + bt − R_t against t, starting from the initial capital a, rising at rate b between claims and dropping at the claim arrival times S_1, S_2, S_3, S_4.)

Ruin Index

Ruin index is the uncertain measure that the capital of the insurance company becomes negative.

Definition 13.3 (Liu [135]) Let Z_t be an insurance risk process. Then the ruin index is defined as the uncertain measure that Z_t eventually becomes negative, i.e.,

Ruin = M{ inf_{t≥0} Z_t < 0 }.    (13.27)

It is clear that the ruin index is a special case of the risk index in the sense of Liu [128].

Theorem 13.9 (Liu [135], Ruin Index Theorem) Let Z_t = a + bt − R_t be an insurance risk process where a and b are positive numbers, and R_t is a renewal reward process with iid uncertain interarrival times ξ_1, ξ_2, · · · and iid uncertain claim amounts η_1, η_2, · · · If ξ_1 and η_1 have continuous uncertainty distributions Φ and Ψ, respectively, then the ruin index is

Ruin = max_{k≥1} sup_{x≥0} Φ( (x − a)/(kb) ) ∧ ( 1 − Ψ(x/k) ).    (13.28)

Proof: For each positive integer k, it is clear that the arrival time of the kth claim is

S_k = ξ_1 + ξ_2 + · · · + ξ_k

whose uncertainty distribution is Φ(s/k). Define an uncertain process indexed by k as follows,

Y_k = a + bS_k − (η_1 + η_2 + · · · + η_k).

It is easy to verify that Y_k is an independent increment process with respect to k. In addition, Y_k is just the capital at the arrival time S_k and has an uncertainty distribution

F_k(z) = sup_{x≥0} Φ( (z + x − a)/(kb) ) ∧ ( 1 − Ψ(x/k) ).

Since a ruin occurs only at the arrival times, we have

Ruin = M{ inf_{t≥0} Z_t < 0 } = M{ min_{k≥1} Y_k < 0 }.

It follows from the extreme value theorem that

Ruin = max_{k≥1} F_k(0) = max_{k≥1} sup_{x≥0} Φ( (x − a)/(kb) ) ∧ ( 1 − Ψ(x/k) ).

The theorem is proved.
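The maximum over k and supremum over x in (13.28) are easy to approximate on finite grids. The sketch below assumes linear interarrival times L(1, 3), linear claim amounts L(2, 6), initial capital a = 10 and premium rate b = 2; all of these numbers, and the grid sizes, are invented for illustration.

    def phi(x, lo=1.0, hi=3.0):     # interarrival time distribution
        return 0.0 if x <= lo else 1.0 if x >= hi else (x - lo) / (hi - lo)

    def psi(x, lo=2.0, hi=6.0):     # claim amount distribution
        return 0.0 if x <= lo else 1.0 if x >= hi else (x - lo) / (hi - lo)

    def ruin_index(a=10.0, b=2.0, k_max=200, grid=2000, x_max=1000.0):
        # evaluate (13.28) on finite grids; k_max and x_max truncate the max and sup
        best = 0.0
        for k in range(1, k_max + 1):
            for i in range(grid + 1):
                x = x_max * i / grid
                best = max(best, min(phi((x - a) / (k * b)), 1.0 - psi(x / k)))
        return best

    print(ruin_index())
    # with these numbers the inner minimum peaks near x = 4k + 5, and the printed
    # value creeps toward 0.5 as k grows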

Ruin Time

Definition 13.4 (Liu [135]) Let Z_t be an insurance risk process. Then the ruin time is determined by

τ = inf { t ≥ 0 | Z_t < 0 }.    (13.29)

If Z_t ≥ 0 for all t ≥ 0, then we define τ = +∞. Note that the ruin time is just the first hitting time that the total capital Z_t becomes negative. Since inf_{t≥0} Z_t < 0 if and only if τ < +∞, the relation between ruin index and ruin time is

Ruin = M{ inf_{t≥0} Z_t < 0 } = M{τ < +∞}.

Theorem 13.10 (Yao [257]) Let Z_t = a + bt − R_t be an insurance risk process where a and b are positive numbers, and R_t is a renewal reward process with iid uncertain interarrival times ξ_1, ξ_2, · · · and iid uncertain claim amounts η_1, η_2, · · · If ξ_1 and η_1 have regular uncertainty distributions Φ and Ψ, respectively, then the ruin time has an uncertainty distribution

Υ(t) = max_{k≥1} sup_{x≤t} Φ(x/k) ∧ ( 1 − Ψ( (a + bx)/k ) ).    (13.30)

Proof: For each positive integer k, let us write S_k = ξ_1 + ξ_2 + · · · + ξ_k,

Y_k = a + bS_k − (η_1 + η_2 + · · · + η_k)

and

α_k = sup_{x≤t} Φ(x/k) ∧ ( 1 − Ψ( (a + bx)/k ) ).

Then

α_k = sup{ α | kΦ⁻¹(α) ≤ t } ∧ sup{ α | a + kbΦ⁻¹(α) − kΨ⁻¹(1 − α) < 0 }.

On the one hand, it follows from the definition of the ruin time τ that

M{τ ≤ t} = M{ inf_{0≤s≤t} Z_s < 0 } = M{ ∪_{k=1}^∞ (S_k ≤ t, Y_k < 0) }

= M{ ∪_{k=1}^∞ ( Σ_{i=1}^k ξ_i ≤ t, a + b Σ_{i=1}^k ξ_i − Σ_{i=1}^k η_i < 0 ) }

≥ M{ ∪_{k=1}^∞ ∩_{i=1}^k (ξ_i ≤ Φ⁻¹(α_k)) ∩ (η_i > Ψ⁻¹(1 − α_k)) }

≥ ∨_{k=1}^∞ M{ ∩_{i=1}^k (ξ_i ≤ Φ⁻¹(α_k)) ∩ (η_i > Ψ⁻¹(1 − α_k)) }

= ∨_{k=1}^∞ ∧_{i=1}^k M{ξ_i ≤ Φ⁻¹(α_k)} ∧ M{η_i > Ψ⁻¹(1 − α_k)}

= ∨_{k=1}^∞ ∧_{i=1}^k α_k ∧ α_k = ∨_{k=1}^∞ α_k.

On the other hand, we have

M{τ ≤ t} = M{ ∪_{k=1}^∞ ( Σ_{i=1}^k ξ_i ≤ t, a + b Σ_{i=1}^k ξ_i − Σ_{i=1}^k η_i < 0 ) }

≤ M{ ∪_{k=1}^∞ ∪_{i=1}^k (ξ_i ≤ Φ⁻¹(α_k)) ∪ (η_i > Ψ⁻¹(1 − α_k)) }

= M{ ∪_{i=1}^∞ ∪_{k=i}^∞ (ξ_i ≤ Φ⁻¹(α_k)) ∪ (η_i > Ψ⁻¹(1 − α_k)) }

≤ M{ ∪_{i=1}^∞ ( ξ_i ≤ ∨_{k=i}^∞ Φ⁻¹(α_k) ) ∪ ( η_i > ∧_{k=i}^∞ Ψ⁻¹(1 − α_k) ) }

= ∨_{i=1}^∞ M{ ξ_i ≤ ∨_{k=i}^∞ Φ⁻¹(α_k) } ∨ M{ η_i > ∧_{k=i}^∞ Ψ⁻¹(1 − α_k) }

= ∨_{i=1}^∞ ( ∨_{k=i}^∞ α_k ) ∨ ( 1 − ∧_{k=i}^∞ (1 − α_k) ) = ∨_{k=1}^∞ α_k.

Thus we obtain

M{τ ≤ t} = ∨_{k=1}^∞ α_k

and the theorem is verified.

13.5 Age Replacement Policy

Age replacement means that an element is always replaced at failure or at an age s. Assume that the lifetimes of the elements are iid uncertain variables ξ_1, ξ_2, · · · with a common uncertainty distribution Φ. Then the actual lifetimes of the elements are iid uncertain variables

ξ_1 ∧ s, ξ_2 ∧ s, · · ·    (13.31)

which may generate an uncertain renewal process

N_t = max_{n≥0} { n | Σ_{i=1}^n (ξ_i ∧ s) ≤ t }.    (13.32)

Let a denote the "failure replacement" cost of replacing an element when it fails earlier than s, and b the "planned replacement" cost of replacing an element at the age s. Note that a > b > 0 is always assumed. Define

f(x) = a if x < s, and f(x) = b if x = s.    (13.33)

Then f(ξ_i ∧ s) is just the cost of replacing the ith element, and the average replacement cost before the time t is

(1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s).    (13.34)

Theorem 13.11 (Yao and Ralescu [245]) Assume ξ_1, ξ_2, · · · are iid uncertain lifetimes and s is a positive number. Then

(1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) → f(ξ_1 ∧ s)/(ξ_1 ∧ s)    (13.35)

in the sense of convergence in distribution as t → ∞.

Proof: At first, the average replacement cost before time t may be rewritten as

(1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) = ( Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ) × ( Σ_{i=1}^{N_t} (ξ_i ∧ s) / t ).    (13.36)

For any real number x, on the one hand, we have

{ Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ≤ x }

= ∪_{n=1}^∞ (N_t = n) ∩ ( Σ_{i=1}^n f(ξ_i ∧ s) / Σ_{i=1}^n (ξ_i ∧ s) ≤ x )

⊃ ∪_{n=1}^∞ (N_t = n) ∩ ∩_{i=1}^n ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

⊃ ∪_{n=1}^∞ (N_t = n) ∩ ∩_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

⊃ ∩_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

and

M{ Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ≤ x } ≥ M{ ∩_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x ) } = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x }.

On the other hand, we have

{ Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ≤ x }

= ∪_{n=1}^∞ (N_t = n) ∩ ( Σ_{i=1}^n f(ξ_i ∧ s) / Σ_{i=1}^n (ξ_i ∧ s) ≤ x )

⊂ ∪_{n=1}^∞ (N_t = n) ∩ ∪_{i=1}^n ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

⊂ ∪_{n=1}^∞ (N_t = n) ∩ ∪_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

⊂ ∪_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x )

and

M{ Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ≤ x } ≤ M{ ∪_{i=1}^∞ ( f(ξ_i ∧ s)/(ξ_i ∧ s) ≤ x ) } = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x }.

Thus for any real number x, we have

M{ Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s) ≤ x } = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x }.

Hence

Σ_{i=1}^{N_t} f(ξ_i ∧ s) / Σ_{i=1}^{N_t} (ξ_i ∧ s)  and  f(ξ_1 ∧ s)/(ξ_1 ∧ s)

are identically distributed uncertain variables. Since

Σ_{i=1}^{N_t} (ξ_i ∧ s) / t → 1

as t → ∞, it follows from (13.36) that (13.35) holds. The theorem is verified.
Theorem 13.12 (Yao and Ralescu [245]) Assume ξ_1, ξ_2, · · · are iid uncertain lifetimes with a common continuous uncertainty distribution Φ, and s is a positive number. Then the long-run average replacement cost is

lim_{t→∞} E[ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ] = b/s + ((a − b)/s) Φ(s) + a ∫_0^s Φ(x)/x² dx.    (13.37)

Proof: Let Ψ(x) be the uncertainty distribution of f(ξ_1 ∧ s)/(ξ_1 ∧ s). It follows from (13.33) that f(ξ_1 ∧ s) ≥ b and ξ_1 ∧ s ≤ s. Thus we have

f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≥ b/s

almost surely. If x < b/s, then

Ψ(x) = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x } = 0.

If b/s ≤ x < a/s, then

Ψ(x) = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x } = M{ξ_1 ≥ s} = 1 − Φ(s).

If x ≥ a/s, then

Ψ(x) = M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x } = M{ a/ξ_1 ≤ x } = M{ ξ_1 ≥ a/x } = 1 − Φ(a/x).

Hence we have

Ψ(x) = 0 if x < b/s;  Ψ(x) = 1 − Φ(s) if b/s ≤ x < a/s;  Ψ(x) = 1 − Φ(a/x) if x ≥ a/s

and

E[ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ] = ∫_0^{+∞} (1 − Ψ(x)) dx = b/s + ((a − b)/s) Φ(s) + a ∫_0^s Φ(x)/x² dx.

Since

Σ_{i=1}^{N_t} (ξ_i ∧ s) / t ≤ 1,

it follows from (13.36) that

M{ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ≤ x } ≥ M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x }

for any real number x. By using the Lebesgue dominated convergence theorem, we get

lim_{t→∞} E[ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ] = lim_{t→∞} ∫_0^{+∞} ( 1 − M{ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ≤ x } ) dx

= ∫_0^{+∞} ( 1 − M{ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ≤ x } ) dx

= E[ f(ξ_1 ∧ s)/(ξ_1 ∧ s) ].
Hence the theorem is proved. Please also note that

lim_{s→0+} lim_{t→∞} E[ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ] = +∞,    (13.38)

lim_{s→+∞} lim_{t→∞} E[ (1/t) Σ_{i=1}^{N_t} f(ξ_i ∧ s) ] = a ∫_0^{+∞} Φ(x)/x² dx.    (13.39)

What is the optimal age s?

When the age replacement policy is accepted, one problem is to find the optimal age s such that the average replacement cost is minimized. That is, the optimal age s should solve

min_{s≥0} ( b/s + ((a − b)/s) Φ(s) + a ∫_0^s Φ(x)/x² dx ).    (13.40)

13.6 Alternating Renewal Process

Let (ξ_1, η_1), (ξ_2, η_2), · · · be a sequence of pairs of uncertain variables. We shall interpret ξ_i as the "on-times" and η_i as the "off-times" for i = 1, 2, · · · , respectively. In this case, the i-th cycle consists of an on-time ξ_i followed by an off-time η_i.

Definition 13.5 (Yao and Li [242]) Let ξ_1, ξ_2, · · · be iid uncertain on-times, and let η_1, η_2, · · · be iid uncertain off-times. Assume that (ξ_1, ξ_2, · · · ) and (η_1, η_2, · · · ) are independent uncertain vectors. Then

A_t = t − Σ_{i=1}^{N_t} η_i,  if Σ_{i=1}^{N_t} (ξ_i + η_i) ≤ t < Σ_{i=1}^{N_t} (ξ_i + η_i) + ξ_{N_t+1};

A_t = Σ_{i=1}^{N_t+1} ξ_i,  if Σ_{i=1}^{N_t} (ξ_i + η_i) + ξ_{N_t+1} ≤ t < Σ_{i=1}^{N_t+1} (ξ_i + η_i)    (13.41)

is called an alternating renewal process, where N_t is the renewal process with uncertain interarrival times ξ_1 + η_1, ξ_2 + η_2, · · ·

Note that the alternating renewal process A_t is just the total time at which the system is on up to time t. It is clear that

Σ_{i=1}^{N_t} ξ_i ≤ A_t ≤ Σ_{i=1}^{N_t+1} ξ_i    (13.42)

for each time t. We are interested in the limit property of the rate at which the system is on.

Theorem 13.13 (Yao and Li [242]) Assume A_t is an alternating renewal process with uncertain on-times ξ_1, ξ_2, · · · and uncertain off-times η_1, η_2, · · · Then the availability rate

A_t / t → ξ_1/(ξ_1 + η_1)    (13.43)

in the sense of convergence in distribution as t → ∞.

Proof: Write the uncertainty distributions of ξ_1 and η_1 by Φ and Ψ, respectively. Then the uncertainty distribution of ξ_1/(ξ_1 + η_1) is

Υ(x) = sup_{y>0} Φ(xy) ∧ (1 − Ψ(y − xy)).

On the one hand, we have

M{ (1/t) Σ_{i=1}^{N_t} ξ_i ≤ x }

= M{ ∪_{k=0}^∞ (N_t = k) ∩ ( (1/t) Σ_{i=1}^k ξ_i ≤ x ) }

≤ M{ ∪_{k=0}^∞ ( Σ_{i=1}^{k+1} (ξ_i + η_i) > t ) ∩ ( (1/t) Σ_{i=1}^k ξ_i ≤ x ) }

≤ M{ ∪_{k=0}^∞ ( tx + ξ_{k+1} + Σ_{i=1}^{k+1} η_i > t ) ∩ ( (1/t) Σ_{i=1}^k ξ_i ≤ x ) }

= M{ ∪_{k=0}^∞ ( ξ_{k+1}/t + (1/t) Σ_{i=1}^{k+1} η_i > 1 − x ) ∩ ( (1/t) Σ_{i=1}^k ξ_i ≤ x ) }.

Since

ξ_{k+1}/t → 0, as t → ∞

and

Σ_{i=1}^{k+1} η_i ∼ (k + 1)η_1,  Σ_{i=1}^k ξ_i ∼ kξ_1,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{N_t} ξ_i ≤ x }

≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( η_1 > t(1 − x)/(k + 1) ) ∩ ( ξ_1 ≤ tx/k ) }

= lim_{t→∞} sup_{k≥0} M{ η_1 > t(1 − x)/(k + 1) } ∧ M{ ξ_1 ≤ tx/k }

= lim_{t→∞} sup_{k≥0} ( 1 − Ψ( t(1 − x)/(k + 1) ) ) ∧ Φ(tx/k)

= sup_{y>0} Φ(xy) ∧ (1 − Ψ(y − xy)) = Υ(x).

That is,

lim_{t→∞} M{ (1/t) Σ_{i=1}^{N_t} ξ_i ≤ x } ≤ Υ(x).    (13.44)

On the other hand, we have

M{ (1/t) Σ_{i=1}^{N_t+1} ξ_i > x }

= M{ ∪_{k=0}^∞ (N_t = k) ∩ ( (1/t) Σ_{i=1}^{k+1} ξ_i > x ) }

≤ M{ ∪_{k=0}^∞ ( Σ_{i=1}^k (ξ_i + η_i) ≤ t ) ∩ ( (1/t) Σ_{i=1}^{k+1} ξ_i > x ) }

≤ M{ ∪_{k=0}^∞ ( tx − ξ_{k+1} + Σ_{i=1}^k η_i ≤ t ) ∩ ( (1/t) Σ_{i=1}^{k+1} ξ_i > x ) }

= M{ ∪_{k=0}^∞ ( (1/t) Σ_{i=1}^k η_i − ξ_{k+1}/t ≤ 1 − x ) ∩ ( (1/t) Σ_{i=1}^{k+1} ξ_i > x ) }.

Since

ξ_{k+1}/t → 0, as t → ∞

and

Σ_{i=1}^k η_i ∼ kη_1,  Σ_{i=1}^{k+1} ξ_i ∼ (k + 1)ξ_1,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{N_t+1} ξ_i > x }

≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( η_1 ≤ t(1 − x)/k ) ∩ ( ξ_1 > tx/(k + 1) ) }

= lim_{t→∞} sup_{k≥0} M{ η_1 ≤ t(1 − x)/k } ∧ M{ ξ_1 > tx/(k + 1) }

= lim_{t→∞} sup_{k≥0} Ψ( t(1 − x)/(k + 1) ) ∧ ( 1 − Φ( tx/(k + 1) ) )

= sup_{y>0} (1 − Φ(xy)) ∧ Ψ(y − xy).

By using the duality of uncertain measure, we get

lim_{t→∞} M{ (1/t) Σ_{i=1}^{N_t+1} ξ_i ≤ x } ≥ 1 − sup_{y>0} (1 − Φ(xy)) ∧ Ψ(y − xy)

= inf_{y>0} Φ(xy) ∨ (1 − Ψ(y − xy)) = Υ(x).

That is,

lim_{t→∞} M{ (1/t) Σ_{i=1}^{N_t+1} ξ_i ≤ x } ≥ Υ(x).    (13.45)

Since

(1/t) Σ_{i=1}^{N_t} ξ_i ≤ A_t/t ≤ (1/t) Σ_{i=1}^{N_t+1} ξ_i,

we obtain

M{ (1/t) Σ_{i=1}^{N_t} ξ_i ≤ x } ≥ M{ A_t/t ≤ x } ≥ M{ (1/t) Σ_{i=1}^{N_t+1} ξ_i ≤ x }.

It follows from (13.44) and (13.45) that for any real number x, we have

lim_{t→∞} M{ A_t/t ≤ x } = Υ(x).

Hence the availability rate A_t/t converges in distribution to ξ_1/(ξ_1 + η_1). The theorem is proved.
Theorem 13.14 (Yao and Li [242], Alternating Renewal Theorem) Assume A_t is an alternating renewal process with uncertain on-times ξ_1, ξ_2, · · · and uncertain off-times η_1, η_2, · · · If E[ξ_1/(ξ_1 + η_1)] exists, then

lim_{t→∞} E[A_t]/t = E[ ξ_1/(ξ_1 + η_1) ].    (13.46)

If those on-times and off-times have regular uncertainty distributions Φ and Ψ, respectively, then

lim_{t→∞} E[A_t]/t = ∫_0^1 Φ⁻¹(α)/( Φ⁻¹(α) + Ψ⁻¹(1 − α) ) dα.    (13.47)

Proof: Write the uncertainty distributions of A_t/t and ξ_1/(ξ_1 + η_1) by F_t(x) and G(x), respectively. Since A_t/t converges in distribution to ξ_1/(ξ_1 + η_1), we have F_t(x) → G(x) as t → ∞. It follows from the Lebesgue dominated convergence theorem that

lim_{t→∞} E[A_t]/t = lim_{t→∞} ∫_0^1 (1 − F_t(x)) dx = ∫_0^1 (1 − G(x)) dx = E[ ξ_1/(ξ_1 + η_1) ].

Finally, since the uncertain variable ξ_1/(ξ_1 + η_1) is strictly increasing with respect to ξ_1 and strictly decreasing with respect to η_1, it has an inverse uncertainty distribution

G⁻¹(α) = Φ⁻¹(α)/( Φ⁻¹(α) + Ψ⁻¹(1 − α) ).

The equation (13.47) is thus obtained.
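The availability integral in (13.47) is equally easy to evaluate numerically. The sketch below assumes linear on-times L(2, 4) and linear off-times L(1, 2); both choices are invented for illustration.

    def inv_phi(alpha):      # on-times L(2, 4): Phi^{-1}(alpha) = 2 + 2*alpha
        return 2.0 + 2.0 * alpha

    def inv_psi(alpha):      # off-times L(1, 2): Psi^{-1}(alpha) = 1 + alpha
        return 1.0 + alpha

    def availability(n=100000):
        # midpoint rule for the integral in (13.47)
        total = 0.0
        for i in range(n):
            alpha = (i + 0.5) / n
            on = inv_phi(alpha)
            total += on / (on + inv_psi(1.0 - alpha))
        return total / n

    print(availability())   # 2 - 6*ln(5/4) = 0.6611...: on about 66% of the time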

13.7 Bibliographic Notes


The concept of uncertain renewal process was first proposed by Liu [123] in
2008. Two years later, Liu [129] proved an uncertain elementary renewal the-
orem for determining the average renewal number. Liu [129] also provided
the concept of uncertain renewal reward process and verified an uncertain
renewal reward theorem for determining the long-run reward rate. In addi-
tion, Yao and Li [242] presented the concept of uncertain alternating renewal
process and proved an uncertain alternating renewal theorem for determining
the availability rate.
Based on the theory of uncertain renewal process, Liu [135] presented an
uncertain insurance model by assuming the claim is an uncertain renewal
reward process, and proved a formula for calculating ruin index. In addition,
Yao [257] derived an uncertainty distribution of ruin time. Furthermore, Yao
[250] discussed the uncertain block replacement policy, and Yao and Ralescu
[245] investigated the uncertain age replacement policy and obtained the
long-run average replacement cost.
Chapter 14

Uncertain Calculus

Uncertain calculus is a branch of mathematics that deals with differentiation and integration of uncertain processes. This chapter will introduce Liu process, Liu integral, fundamental theorem, chain rule, change of variables, and integration by parts.

14.1 Liu Process


In 2009, Liu [125] investigated a type of stationary independent increment
process whose increments are normal uncertain variables. Later, this process
was named by the academic community as Liu process due to its importance
and usefulness. A formal definition is given below.

Definition 14.1 (Liu [125]) An uncertain process C_t is said to be a canonical Liu process if
(i) C0 = 0 and almost all sample paths are Lipschitz continuous,
(ii) Ct has stationary and independent increments,
(iii) every increment Cs+t − Cs is a normal uncertain variable with expected
value 0 and variance t2 .

It is clear that a canonical Liu process C_t is a stationary independent increment process and has a normal uncertainty distribution with expected value 0 and variance t². The uncertainty distribution of C_t is

Φ_t(x) = ( 1 + exp( −πx/(√3 t) ) )⁻¹    (14.1)

and the inverse uncertainty distribution is

Φ_t⁻¹(α) = (t√3/π) ln( α/(1 − α) ).    (14.2)


Figure 14.1: Inverse Uncertainty Distribution of Canonical Liu Process. (The figure plots Φ_t⁻¹(α) against t for α = 0.1, 0.2, · · · , 0.9: a fan of straight lines through the origin, rising for α > 0.5, horizontal for α = 0.5, and falling for α < 0.5.)

For any given α, the inverse uncertainty distribution Φ_t⁻¹(α) is a homogeneous linear function of time t; see Figure 14.1.

A canonical Liu process is defined by three properties in the above definition. Does such an uncertain process exist? The following theorem answers this question.

Theorem 14.1 (Liu [129], Existence Theorem) There exists a canonical Liu process.

Proof: It follows from Theorem 12.11 that there exists a stationary independent increment process C_t whose inverse uncertainty distribution is

Φ_t⁻¹(α) = (t√3/π) ln( α/(1 − α) ).

Furthermore, C_t has a Lipschitz continuous version. It is also easy to verify that every increment C_{s+t} − C_s is a normal uncertain variable with expected value 0 and variance t². Hence there exists a canonical Liu process.

Theorem 14.2 Let C_t be a canonical Liu process. Then for each time t > 0, the ratio C_t/t is a normal uncertain variable with expected value 0 and variance 1. That is,

C_t / t ∼ N(0, 1)    (14.3)

for any t > 0.

Proof: Since C_t is a normal uncertain variable N(0, t), the operational law tells us that C_t/t has an uncertainty distribution

Ψ(x) = Φ_t(tx) = ( 1 + exp( −πx/√3 ) )⁻¹.

Hence Ct /t is a normal uncertain variable with expected value 0 and variance


1. The theorem is verified.
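The two distribution functions (14.1) and (14.2) are simple enough to code directly; the following sketch is merely a convenience for experimentation, with arbitrary inputs.

    import math

    def liu_cdf(x, t):
        # Phi_t(x) = (1 + exp(-pi*x/(sqrt(3)*t)))^{-1}, equation (14.1)
        return 1.0 / (1.0 + math.exp(-math.pi * x / (math.sqrt(3.0) * t)))

    def liu_inv(alpha, t):
        # Phi_t^{-1}(alpha) = (t*sqrt(3)/pi)*ln(alpha/(1 - alpha)), equation (14.2)
        return t * math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

    for alpha in (0.1, 0.5, 0.9):
        x = liu_inv(alpha, t=2.0)
        print(alpha, x, liu_cdf(x, t=2.0))   # the round trip recovers alpha
    # consistently with Theorem 14.2, liu_inv(alpha, t)/t does not depend on t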

Theorem 14.3 (Liu [129]) Let C_t be a canonical Liu process. Then for each time t, we have

t²/2 ≤ E[C_t²] ≤ t².    (14.4)

Proof: Note that C_t is a normal uncertain variable and has an uncertainty distribution Φ_t(x) in (14.1). It follows from the definition of expected value that

E[C_t²] = ∫_0^{+∞} M{C_t² ≥ x} dx = ∫_0^{+∞} M{ (C_t ≥ √x) ∪ (C_t ≤ −√x) } dx.

On the one hand, we have

E[C_t²] ≤ ∫_0^{+∞} ( M{C_t ≥ √x} + M{C_t ≤ −√x} ) dx = ∫_0^{+∞} ( 1 − Φ_t(√x) + Φ_t(−√x) ) dx = t².

On the other hand, we have

E[C_t²] ≥ ∫_0^{+∞} M{C_t ≥ √x} dx = ∫_0^{+∞} ( 1 − Φ_t(√x) ) dx = t²/2.

Hence (14.4) is proved.

Theorem 14.4 (Iwamura and Xu [69]) Let C_t be a canonical Liu process. Then for each time t, we have

1.24t⁴ < V[C_t²] < 4.31t⁴.    (14.5)

Proof: Let q be the expected value of C_t². On the one hand, it follows from the definition of variance that

V[C_t²] = ∫_0^{+∞} M{ (C_t² − q)² ≥ x } dx

≤ ∫_0^{+∞} M{ C_t ≥ √(q + √x) } dx + ∫_0^{+∞} M{ C_t ≤ −√(q + √x) } dx + ∫_0^{+∞} M{ −√(q − √x) ≤ C_t ≤ √(q − √x) } dx.

Since t²/2 ≤ q ≤ t², we have

First Term = ∫_0^{+∞} M{ C_t ≥ √(q + √x) } dx

≤ ∫_0^{+∞} M{ C_t ≥ √(t²/2 + √x) } dx

= ∫_0^{+∞} ( 1 − ( 1 + exp( −π√(t²/2 + √x) / (√3 t) ) )⁻¹ ) dx

≤ 1.725t⁴,

Second Term = ∫_0^{+∞} M{ C_t ≤ −√(q + √x) } dx

≤ ∫_0^{+∞} M{ C_t ≤ −√(t²/2 + √x) } dx

= ∫_0^{+∞} ( 1 + exp( π√(t²/2 + √x) / (√3 t) ) )⁻¹ dx

≤ 1.725t⁴,

Third Term = ∫_0^{+∞} M{ −√(q − √x) ≤ C_t ≤ √(q − √x) } dx  (the integrand vanishes once √x > q, for the event is then empty)

≤ ∫_0^{+∞} M{ C_t ≤ √(q − √x) } dx

≤ ∫_0^{t⁴} M{ C_t ≤ √(t² − √x) } dx

= ∫_0^{t⁴} ( 1 + exp( −π√(t² − √x) / (√3 t) ) )⁻¹ dx

< 0.86t⁴.

It follows from the above three upper bounds that

V[C_t²] < 1.725t⁴ + 1.725t⁴ + 0.86t⁴ = 4.31t⁴.



On the other hand, we have

V[C_t²] = ∫_0^{+∞} M{ (C_t² − q)² ≥ x } dx

≥ ∫_0^{+∞} M{ C_t ≥ √(q + √x) } dx

≥ ∫_0^{+∞} M{ C_t ≥ √(t² + √x) } dx

= ∫_0^{+∞} ( 1 − ( 1 + exp( −π√(t² + √x) / (√3 t) ) )⁻¹ ) dx

> 1.24t⁴.

The theorem is thus verified. An open problem is to improve the bounds of the variance of the square of canonical Liu process.

Definition 14.2 Let C_t be a canonical Liu process. Then for any real numbers e and σ > 0, the uncertain process

A_t = et + σC_t    (14.6)

is called an arithmetic Liu process, where e is called the drift and σ is called the diffusion.

It is clear that the arithmetic Liu process A_t is a type of stationary independent increment process. In addition, the arithmetic Liu process A_t has a normal uncertainty distribution with expected value et and variance σ²t², i.e.,

A_t ∼ N(et, σt)    (14.7)

whose uncertainty distribution is

Φ_t(x) = ( 1 + exp( π(et − x)/(√3 σt) ) )⁻¹    (14.8)

and inverse uncertainty distribution is

Φ_t⁻¹(α) = et + (σt√3/π) ln( α/(1 − α) ).    (14.9)

Definition 14.3 Let C_t be a canonical Liu process. Then for any real numbers e and σ > 0, the uncertain process

G_t = exp(et + σC_t)    (14.10)

is called a geometric Liu process, where e is called the log-drift and σ is called the log-diffusion.

Note that the geometric Liu process G_t has a lognormal uncertainty distribution, i.e.,

G_t ∼ LOGN(et, σt)    (14.11)

whose uncertainty distribution is

Φ_t(x) = ( 1 + exp( π(et − ln x)/(√3 σt) ) )⁻¹    (14.12)

and inverse uncertainty distribution is

Φ_t⁻¹(α) = exp( et + (σt√3/π) ln( α/(1 − α) ) ).    (14.13)

Furthermore, the geometric Liu process G_t has an expected value

E[G_t] = σt√3 exp(et) csc(σt√3) if t < π/(σ√3), and +∞ if t ≥ π/(σ√3).    (14.14)
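The finite/infinite threshold t < π/(σ√3) in (14.14) is worth making explicit in code. The sketch below does so; the parameter values are invented for illustration.

    import math

    def geometric_liu_mean(t, e, sigma):
        # E[G_t] from (14.14): finite only while t < pi/(sigma*sqrt(3))
        if t >= math.pi / (sigma * math.sqrt(3.0)):
            return math.inf
        u = sigma * t * math.sqrt(3.0)
        return u * math.exp(e * t) / math.sin(u)   # csc(u) = 1/sin(u)

    print(geometric_liu_mean(1.0, e=0.05, sigma=0.3))    # finite, about 1.10
    print(geometric_liu_mean(10.0, e=0.05, sigma=0.3))   # inf: 10 >= pi/(0.3*sqrt(3)) = 6.05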

14.2 Liu Integral

As the most popular topic of uncertain integral, Liu integral allows us to integrate an uncertain process (the integrand) with respect to Liu process (the integrator). The result of Liu integral is another uncertain process.

Definition 14.4 (Liu [125]) Let X_t be an uncertain process and let C_t be a canonical Liu process. For any partition of the closed interval [a, b] with a = t_1 < t_2 < · · · < t_{k+1} = b, the mesh is written as

∆ = max_{1≤i≤k} |t_{i+1} − t_i|.    (14.15)

Then the Liu integral of X_t with respect to C_t is defined as

∫_a^b X_t dC_t = lim_{∆→0} Σ_{i=1}^k X_{t_i} · (C_{t_{i+1}} − C_{t_i})    (14.16)

provided that the limit exists almost surely and is finite. In this case, the uncertain process X_t is said to be integrable.

Since X_t and C_t are uncertain variables at each time t, the limit in (14.16) is also an uncertain variable provided that the limit exists almost surely and is finite. Hence an uncertain process X_t is integrable with respect to C_t if and only if the limit in (14.16) is an uncertain variable.

Example 14.1: For any partition 0 = t_1 < t_2 < · · · < t_{k+1} = s, it follows from (14.16) that

∫_0^s dC_t = lim_{∆→0} Σ_{i=1}^k (C_{t_{i+1}} − C_{t_i}) ≡ C_s − C_0 = C_s.

That is,

∫_0^s dC_t = C_s.    (14.17)

Example 14.2: For any partition 0 = t_1 < t_2 < · · · < t_{k+1} = s, it follows from (14.16) that

C_s² = Σ_{i=1}^k ( C_{t_{i+1}}² − C_{t_i}² ) = Σ_{i=1}^k ( C_{t_{i+1}} − C_{t_i} )² + 2 Σ_{i=1}^k C_{t_i} ( C_{t_{i+1}} − C_{t_i} ) → 0 + 2 ∫_0^s C_t dC_t

as ∆ → 0. That is,

∫_0^s C_t dC_t = C_s²/2.    (14.18)

Example 14.3: For any partition 0 = t_1 < t_2 < · · · < t_{k+1} = s, it follows from (14.16) that

sC_s = Σ_{i=1}^k ( t_{i+1}C_{t_{i+1}} − t_iC_{t_i} ) = Σ_{i=1}^k C_{t_{i+1}} (t_{i+1} − t_i) + Σ_{i=1}^k t_i (C_{t_{i+1}} − C_{t_i}) → ∫_0^s C_t dt + ∫_0^s t dC_t

as ∆ → 0. That is,

∫_0^s C_t dt + ∫_0^s t dC_t = sC_s.    (14.19)

Theorem 14.5 If Xt is a sample-continuous uncertain process on [a, b], then


it is integrable with respect to Ct on [a, b].

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval
[a, b]. Since the uncertain process Xt is sample-continuous, almost all sample
paths are continuous functions with respect to t. Hence the limit
lim_{∆→0} Σ_{i=1}^k X_{t_i} (C_{t_{i+1}} − C_{t_i})

exists almost surely and is finite. On the other hand, since Xt and Ct are
uncertain variables at each time t, the above limit is also a measurable func-
tion. Hence the limit is an uncertain variable and then Xt is integrable with
respect to Ct .

Theorem 14.6 If X_t is an integrable uncertain process on [a, b], then it is integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b], then

∫_a^b X_t dC_t = ∫_a^c X_t dC_t + ∫_c^b X_t dC_t.    (14.20)

Proof: Let [a′, b′] be a subinterval of [a, b]. Since X_t is an integrable uncertain process on [a, b], for any partition

a = t_1 < · · · < t_m = a′ < t_{m+1} < · · · < t_n = b′ < t_{n+1} < · · · < t_{k+1} = b,

the limit

lim_{∆→0} Σ_{i=1}^k X_{t_i} (C_{t_{i+1}} − C_{t_i})

exists almost surely and is finite. Thus the limit

lim_{∆→0} Σ_{i=m}^{n−1} X_{t_i} (C_{t_{i+1}} − C_{t_i})

exists almost surely and is finite. Hence X_t is integrable on the subinterval [a′, b′]. Next, for the partition

a = t_1 < · · · < t_m = c < t_{m+1} < · · · < t_{k+1} = b,

we have

Σ_{i=1}^k X_{t_i} (C_{t_{i+1}} − C_{t_i}) = Σ_{i=1}^{m−1} X_{t_i} (C_{t_{i+1}} − C_{t_i}) + Σ_{i=m}^k X_{t_i} (C_{t_{i+1}} − C_{t_i}).

Note that

∫_a^b X_t dC_t = lim_{∆→0} Σ_{i=1}^k X_{t_i} (C_{t_{i+1}} − C_{t_i}),

∫_a^c X_t dC_t = lim_{∆→0} Σ_{i=1}^{m−1} X_{t_i} (C_{t_{i+1}} − C_{t_i}),

∫_c^b X_t dC_t = lim_{∆→0} Σ_{i=m}^k X_{t_i} (C_{t_{i+1}} − C_{t_i}).

Hence the equation (14.20) is proved.

Theorem 14.7 (Linearity of Liu Integral) Let X_t and Y_t be integrable uncertain processes on [a, b], and let α and β be real numbers. Then

∫_a^b (αX_t + βY_t) dC_t = α ∫_a^b X_t dC_t + β ∫_a^b Y_t dC_t.    (14.21)

Proof: Let a = t_1 < t_2 < · · · < t_{k+1} = b be a partition of the closed interval [a, b]. It follows from the definition of Liu integral that

∫_a^b (αX_t + βY_t) dC_t = lim_{∆→0} Σ_{i=1}^k (αX_{t_i} + βY_{t_i})(C_{t_{i+1}} − C_{t_i})

= lim_{∆→0} α Σ_{i=1}^k X_{t_i} (C_{t_{i+1}} − C_{t_i}) + lim_{∆→0} β Σ_{i=1}^k Y_{t_i} (C_{t_{i+1}} − C_{t_i})

= α ∫_a^b X_t dC_t + β ∫_a^b Y_t dC_t.

Hence the equation (14.21) is proved.


Theorem 14.8 Let f(t) be an integrable function with respect to t. Then the Liu integral

∫_0^s f(t) dC_t    (14.22)

is a normal uncertain variable at each time s, and

∫_0^s f(t) dC_t ∼ N( 0, ∫_0^s |f(t)| dt ).    (14.23)

Proof: Since the increments of C_t are stationary and independent normal uncertain variables, for any partition of the closed interval [0, s] with 0 = t_1 < t_2 < · · · < t_{k+1} = s, it follows from Theorem 2.12 that

Σ_{i=1}^k f(t_i)(C_{t_{i+1}} − C_{t_i}) ∼ N( 0, Σ_{i=1}^k |f(t_i)|(t_{i+1} − t_i) ).

That is, the sum is also a normal uncertain variable. Since f is an integrable function, we have

Σ_{i=1}^k |f(t_i)|(t_{i+1} − t_i) → ∫_0^s |f(t)| dt

as the mesh ∆ → 0. Hence we obtain

∫_0^s f(t) dC_t = lim_{∆→0} Σ_{i=1}^k f(t_i)(C_{t_{i+1}} − C_{t_i}) ∼ N( 0, ∫_0^s |f(t)| dt ).

The theorem is proved.

Exercise 14.1: Let s be a given time with s > 0. Show that the Liu integral

∫_0^s t dC_t    (14.24)

is a normal uncertain variable N(0, s²/2) and has an uncertainty distribution

Φ_s(x) = ( 1 + exp( −2πx/(√3 s²) ) )⁻¹.    (14.25)

Exercise 14.2: For any real number α with 0 < α < 1, the uncertain process

F_s = ∫_0^s (s − t)^{−α} dC_t    (14.26)

is called a fractional Liu process with index α. Show that F_s is a normal uncertain variable and

F_s ∼ N( 0, s^{1−α}/(1 − α) )    (14.27)

whose uncertainty distribution is

Φ_s(x) = ( 1 + exp( −π(1 − α)x/(√3 s^{1−α}) ) )⁻¹.    (14.28)
Definition 14.5 (Chen and Ralescu [20]) Let C_t be a canonical Liu process and let Z_t be an uncertain process. If there exist uncertain processes μ_t and σ_t such that

Z_t = Z_0 + ∫_0^t μ_s ds + ∫_0^t σ_s dC_s    (14.29)

for any t ≥ 0, then Z_t is called a Liu process with drift μ_t and diffusion σ_t. Furthermore, Z_t has an uncertain differential

dZ_t = μ_t dt + σ_t dC_t.    (14.30)

Example 14.4: It follows from the equation (14.17) that the canonical Liu process C_t can be written as

C_t = ∫_0^t dC_s.

Thus C_t is a Liu process with drift 0 and diffusion 1, and has an uncertain differential dC_t.

Example 14.5: It follows from the equation (14.18) that C_t² can be written as

C_t² = 2 ∫_0^t C_s dC_s.

Thus C_t² is a Liu process with drift 0 and diffusion 2C_t, and has an uncertain differential

d(C_t²) = 2C_t dC_t.

Example 14.6: It follows from the equation (14.19) that tC_t can be written as

tC_t = ∫_0^t C_s ds + ∫_0^t s dC_s.

Thus tC_t is a Liu process with drift C_t and diffusion t, and has an uncertain differential

d(tC_t) = C_t dt + t dC_t.

Theorem 14.9 (Chen and Ralescu [20]) Liu process is a sample-continuous uncertain process.

Proof: Let Z_t be a Liu process. Then there exist two uncertain processes μ_t and σ_t such that

Z_t = Z_0 + ∫_0^t μ_s ds + ∫_0^t σ_s dC_s.

For each γ ∈ Γ, we have

|Z_t(γ) − Z_r(γ)| = | ∫_r^t μ_s(γ) ds + ∫_r^t σ_s(γ) dC_s(γ) | → 0

as r → t. Thus Z_t is sample-continuous and the theorem is proved.

14.3 Fundamental Theorem

Theorem 14.10 (Liu [125], Fundamental Theorem of Uncertain Calculus) Let h(t, c) be a continuously differentiable function. Then Z_t = h(t, C_t) is a Liu process and has an uncertain differential

dZ_t = (∂h/∂t)(t, C_t) dt + (∂h/∂c)(t, C_t) dC_t.    (14.31)

Proof: Write ∆C_t = C_{t+∆t} − C_t = C_{∆t}. It follows from Theorems 14.3 and 14.4 that ∆t and ∆C_t are infinitesimals with the same order. Since the function h is continuously differentiable, by using Taylor series expansion, the infinitesimal increment of Z_t has a first-order approximation,

∆Z_t = (∂h/∂t)(t, C_t) ∆t + (∂h/∂c)(t, C_t) ∆C_t.

Hence we obtain the uncertain differential (14.31) because it makes

Z_s = Z_0 + ∫_0^s (∂h/∂t)(t, C_t) dt + ∫_0^s (∂h/∂c)(t, C_t) dC_t.    (14.32)

This formula is an integral form of the fundamental theorem.

Example 14.7: Let us calculate the uncertain differential of tC_t. In this case, we have h(t, c) = tc whose partial derivatives are

(∂h/∂t)(t, c) = c,  (∂h/∂c)(t, c) = t.

It follows from the fundamental theorem of uncertain calculus that

d(tC_t) = C_t dt + t dC_t.    (14.33)

Thus tC_t is a Liu process with drift C_t and diffusion t.

Example 14.8: Let us calculate the uncertain differential of the arithmetic Liu process A_t = et + σC_t. In this case, we have h(t, c) = et + σc whose partial derivatives are

(∂h/∂t)(t, c) = e,  (∂h/∂c)(t, c) = σ.

It follows from the fundamental theorem of uncertain calculus that

dA_t = e dt + σ dC_t.    (14.34)

Thus A_t is a Liu process with drift e and diffusion σ.

Example 14.9: Let us calculate the uncertain differential of the geometric Liu process G_t = exp(et + σC_t). In this case, we have h(t, c) = exp(et + σc) whose partial derivatives are

(∂h/∂t)(t, c) = e h(t, c),  (∂h/∂c)(t, c) = σ h(t, c).

It follows from the fundamental theorem of uncertain calculus that

dG_t = eG_t dt + σG_t dC_t.    (14.35)

Thus G_t is a Liu process with drift eG_t and diffusion σG_t.



14.4 Chain Rule

Chain rule is a special case of the fundamental theorem of uncertain calculus.

Theorem 14.11 (Liu [125], Chain Rule) Let f(c) be a continuously differentiable function. Then f(C_t) has an uncertain differential

df(C_t) = f′(C_t) dC_t.    (14.36)

Proof: Since f(c) is a continuously differentiable function, we immediately have

(∂/∂t) f(c) = 0,  (∂/∂c) f(c) = f′(c).

It follows from the fundamental theorem of uncertain calculus that the equation (14.36) holds.

Example 14.10: Let us calculate the uncertain differential of C_t². In this case, we have f(c) = c² and f′(c) = 2c. It follows from the chain rule that

dC_t² = 2C_t dC_t.    (14.37)

Example 14.11: Let us calculate the uncertain differential of sin(C_t). In this case, we have f(c) = sin(c) and f′(c) = cos(c). It follows from the chain rule that

d sin(C_t) = cos(C_t) dC_t.    (14.38)

Example 14.12: Let us calculate the uncertain differential of exp(C_t). In this case, we have f(c) = exp(c) and f′(c) = exp(c). It follows from the chain rule that

d exp(C_t) = exp(C_t) dC_t.    (14.39)

14.5 Change of Variables

Theorem 14.12 (Liu [125], Change of Variables) Let f be a continuously differentiable function. Then for any s > 0, we have

∫_0^s f′(C_t) dC_t = ∫_{C_0}^{C_s} f′(c) dc.    (14.40)

That is,

∫_0^s f′(C_t) dC_t = f(C_s) − f(C_0).    (14.41)

Proof: Since f is a continuously differentiable function, it follows from the chain rule that

df(C_t) = f′(C_t) dC_t.

By using the fundamental theorem of uncertain calculus, we get

f(C_s) = f(C_0) + ∫_0^s f′(C_t) dC_t.

Hence the theorem is verified.

Example 14.13: Since the function f(c) = c has an antiderivative c²/2, it follows from the change of variables of integral that

∫_0^s C_t dC_t = C_s²/2 − C_0²/2 = C_s²/2.

Example 14.14: Since the function f(c) = c² has an antiderivative c³/3, it follows from the change of variables of integral that

∫_0^s C_t² dC_t = C_s³/3 − C_0³/3 = C_s³/3.

Example 14.15: Since the function f(c) = exp(c) has an antiderivative exp(c), it follows from the change of variables of integral that

∫_0^s exp(C_t) dC_t = exp(C_s) − exp(C_0) = exp(C_s) − 1.

14.6 Integration by Parts

Theorem 14.13 (Liu [125], Integration by Parts) Suppose X_t and Y_t are Liu processes. Then

d(X_tY_t) = Y_t dX_t + X_t dY_t.    (14.42)

Proof: Note that ∆X_t and ∆Y_t are infinitesimals with the same order. Since the function xy is a continuously differentiable function with respect to x and y, by using Taylor series expansion, the infinitesimal increment of X_tY_t has a first-order approximation,

∆(X_tY_t) = Y_t ∆X_t + X_t ∆Y_t.

Hence we obtain the uncertain differential (14.42) because it makes

X_sY_s = X_0Y_0 + ∫_0^s Y_t dX_t + ∫_0^s X_t dY_t.    (14.43)

The theorem is thus proved.

Example 14.16: In order to illustrate the integration by parts, let us calculate the uncertain differential of

Z_t = exp(t)C_t².

In this case, we define

X_t = exp(t),  Y_t = C_t².

Then

dX_t = exp(t) dt,  dY_t = 2C_t dC_t.

It follows from the integration by parts that

dZ_t = exp(t)C_t² dt + 2 exp(t)C_t dC_t.

Example 14.17: The integration by parts may also calculate the uncertain differential of

Z_t = sin(t + 1) ∫_0^t s dC_s.

In this case, we define

X_t = sin(t + 1),  Y_t = ∫_0^t s dC_s.

Then

dX_t = cos(t + 1) dt,  dY_t = t dC_t.

It follows from the integration by parts that

dZ_t = ( ∫_0^t s dC_s ) cos(t + 1) dt + sin(t + 1) t dC_t.

Example 14.18: Let f and g be continuously differentiable functions. It is clear that

Z_t = f(t)g(C_t)

is an uncertain process. In order to calculate the uncertain differential of Z_t, we define

X_t = f(t),  Y_t = g(C_t).

Then

dX_t = f′(t) dt,  dY_t = g′(C_t) dC_t.

It follows from the integration by parts that

dZ_t = f′(t)g(C_t) dt + f(t)g′(C_t) dC_t.


14.7 Bibliographic Notes


The concept of uncertain integral was first proposed by Liu [123] in 2008 in
order to integrate uncertain processes with respect to Liu process. One year
later, Liu [125] recast his work via the fundamental theorem of uncertain
calculus from which the techniques of chain rule, change of variables, and
integration by parts were derived.
Note that uncertain integral may also be defined with respect to other
integrators. For example, Liu and Yao [132] suggested an uncertain integral
with respect to multiple Liu processes. In addition, Chen and Ralescu [20]
presented an uncertain integral with respect to general Liu process. In order
to deal with uncertain process with jumps, Yao integral [241] was defined as
a type of uncertain integral with respect to uncertain renewal process. Since
then, the theory of uncertain calculus was well developed.
Chapter 15

Uncertain Differential Equation

Uncertain differential equation is a type of differential equation involving uncertain processes. This chapter will discuss the existence, uniqueness and stability of solutions of uncertain differential equations, and introduce the Yao-Chen formula that represents the solution of an uncertain differential equation by a family of solutions of ordinary differential equations. On the basis of this formula, some formulas to calculate the extreme value, first hitting time, and time integral of the solution are provided. Furthermore, some numerical methods for solving general uncertain differential equations are designed.

15.1 Uncertain Differential Equation

Definition 15.1 (Liu [123]) Suppose C_t is a canonical Liu process, and f and g are two functions. Then

dX_t = f(t, X_t) dt + g(t, X_t) dC_t    (15.1)

is called an uncertain differential equation. A solution is a Liu process X_t that satisfies (15.1) identically in t.

Remark 15.1: The uncertain differential equation (15.1) is equivalent to the uncertain integral equation

X_s = X_0 + ∫_0^s f(t, X_t) dt + ∫_0^s g(t, X_t) dC_t.    (15.2)

Theorem 15.1 Let ut and vt be two integrable uncertain processes. Then


the uncertain differential equation
dXt = ut dt + vt dCt (15.3)


has a solution

X_t = X_0 + ∫_0^t u_s ds + ∫_0^t v_s dC_s.    (15.4)

Proof: This theorem is essentially the definition of uncertain differential or a direct deduction of the fundamental theorem of uncertain calculus.

Example 15.1: Let a and b be real numbers. Consider the uncertain differential equation

dX_t = a dt + b dC_t.    (15.5)

It follows from Theorem 15.1 that the solution is

X_t = X_0 + ∫_0^t a ds + ∫_0^t b dC_s.

That is,

X_t = X_0 + at + bC_t.    (15.6)
Theorem 15.2 Let u_t and v_t be two integrable uncertain processes. Then the uncertain differential equation

dX_t = u_tX_t dt + v_tX_t dC_t    (15.7)

has a solution

X_t = X_0 exp( ∫_0^t u_s ds + ∫_0^t v_s dC_s ).    (15.8)

Proof: At first, the original uncertain differential equation is equivalent to

dX_t / X_t = u_t dt + v_t dC_t.

It follows from the fundamental theorem of uncertain calculus that

d ln X_t = dX_t / X_t = u_t dt + v_t dC_t

and then

ln X_t = ln X_0 + ∫_0^t u_s ds + ∫_0^t v_s dC_s.

Therefore the uncertain differential equation has a solution (15.8).

Example 15.2: Let a and b be real numbers. Consider the uncertain differential equation

dX_t = aX_t dt + bX_t dC_t.    (15.9)

It follows from Theorem 15.2 that the solution is

X_t = X_0 exp( ∫_0^t a ds + ∫_0^t b dC_s ).

That is,

X_t = X_0 exp(at + bC_t).    (15.10)
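Because the solution (15.10) is a strictly increasing function of C_t when b > 0, its inverse uncertainty distribution follows from (14.2) by monotonicity. The sketch below evaluates it; the parameter values are invented for illustration.

    import math

    def solution_inv(alpha, t, x0, a, b):
        # inverse uncertainty distribution of X_t = x0*exp(a*t + b*C_t), assuming b > 0
        ct = t * math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))  # (14.2)
        return x0 * math.exp(a * t + b * ct)

    for alpha in (0.1, 0.5, 0.9):
        print(alpha, solution_inv(alpha, t=1.0, x0=1.0, a=0.1, b=0.2))
    # the alpha = 0.5 value is exp(a*t): the median path ignores the dC_t term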

Linear Uncertain Differential Equation

Theorem 15.3 (Chen and Liu [12]) Let u_{1t}, u_{2t}, v_{1t}, v_{2t} be integrable uncertain processes. Then the linear uncertain differential equation

dX_t = (u_{1t}X_t + u_{2t}) dt + (v_{1t}X_t + v_{2t}) dC_t    (15.11)

has a solution

X_t = U_t ( X_0 + ∫_0^t (u_{2s}/U_s) ds + ∫_0^t (v_{2s}/U_s) dC_s )    (15.12)

where

U_t = exp( ∫_0^t u_{1s} ds + ∫_0^t v_{1s} dC_s ).    (15.13)

Proof: At first, we define two uncertain processes U_t and V_t via uncertain differential equations,

dU_t = u_{1t}U_t dt + v_{1t}U_t dC_t,  dV_t = (u_{2t}/U_t) dt + (v_{2t}/U_t) dC_t.

It follows from the integration by parts that

d(U_tV_t) = V_t dU_t + U_t dV_t = (u_{1t}U_tV_t + u_{2t}) dt + (v_{1t}U_tV_t + v_{2t}) dC_t.

That is, the uncertain process X_t = U_tV_t is a solution of the uncertain differential equation (15.11). Note that

U_t = U_0 exp( ∫_0^t u_{1s} ds + ∫_0^t v_{1s} dC_s ),

V_t = V_0 + ∫_0^t (u_{2s}/U_s) ds + ∫_0^t (v_{2s}/U_s) dC_s.

Taking U_0 = 1 and V_0 = X_0, we get the solution (15.12). The theorem is proved.

Example 15.3: Let m, a, σ be real numbers. Consider a linear uncertain differential equation

dX_t = (m − aX_t) dt + σ dC_t.    (15.14)

At first, we have

U_t = exp( ∫_0^t (−a) ds + ∫_0^t 0 dC_s ) = exp(−at).

It follows from Theorem 15.3 that the solution is

X_t = exp(−at) ( X_0 + ∫_0^t m exp(as) ds + ∫_0^t σ exp(as) dC_s ).

That is,

X_t = m/a + exp(−at)(X_0 − m/a) + σ exp(−at) ∫_0^t exp(as) dC_s    (15.15)

provided that a ≠ 0. Note that X_t is a normal uncertain variable, i.e.,

X_t ∼ N( m/a + exp(−at)(X_0 − m/a), σ/a − (σ/a) exp(−at) ).    (15.16)

Example 15.4: Let m and σ be real numbers. Consider a linear uncertain differential equation

dX_t = m dt + σX_t dC_t.    (15.17)

At first, we have

U_t = exp( ∫_0^t 0 ds + ∫_0^t σ dC_s ) = exp(σC_t).

It follows from Theorem 15.3 that the solution is

X_t = exp(σC_t) ( X_0 + ∫_0^t m exp(−σC_s) ds + ∫_0^t 0 dC_s ).

That is,

X_t = exp(σC_t) ( X_0 + m ∫_0^t exp(−σC_s) ds ).    (15.18)

15.2 Analytic Methods

This section will provide two analytic methods for solving some nonlinear uncertain differential equations.

First Analytic Method

This subsection will introduce an analytic method for solving nonlinear uncertain differential equations like

dX_t = f(t, X_t) dt + σ_tX_t dC_t    (15.19)

and

dX_t = α_tX_t dt + g(t, X_t) dC_t.    (15.20)

Theorem 15.4 (Liu [148]) Let f be a function of two variables and let σ_t be an integrable uncertain process. Then the uncertain differential equation

dX_t = f(t, X_t) dt + σ_tX_t dC_t    (15.21)



has a solution
Xt = Yt⁻¹ Zt (15.22)
where
Yt = exp(−∫_0^t σs dCs) (15.23)
and Zt is the solution of the uncertain differential equation
dZt = Yt f(t, Yt⁻¹Zt) dt (15.24)
with initial value Z0 = X0.
Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential
dYt = −exp(−∫_0^t σs dCs) σt dCt = −Yt σt dCt.

It follows from the integration by parts that
d(Xt Yt) = Xt dYt + Yt dXt = −Xt Yt σt dCt + Yt f(t, Xt) dt + Yt σt Xt dCt.
That is,
d(Xt Yt) = Yt f(t, Xt) dt.
Defining Zt = Xt Yt, we obtain Xt = Yt⁻¹Zt and dZt = Yt f(t, Yt⁻¹Zt) dt.
Furthermore, since Y0 = 1, the initial value Z0 is just X0 . The theorem is
thus verified.

Example 15.5: Let α and σ be real numbers with α ≠ 1. Consider the uncertain differential equation
dXt = Xt^α dt + σXt dCt. (15.25)
At first, we have Yt = exp(−σCt) and Zt satisfies the uncertain differential equation
dZt = exp(−σCt)(exp(σCt)Zt)^α dt = exp((α − 1)σCt)Zt^α dt.
Since α ≠ 1, we have
dZt^(1−α) = (1 − α) exp((α − 1)σCt) dt.
It follows from the fundamental theorem of uncertain calculus that
Zt^(1−α) = Z0^(1−α) + (1 − α) ∫_0^t exp((α − 1)σCs) ds.
Since the initial value Z0 is just X0, we have
Zt = (X0^(1−α) + (1 − α) ∫_0^t exp((α − 1)σCs) ds)^(1/(1−α)).

Theorem 15.4 says the uncertain differential equation (15.25) has a solution Xt = Yt⁻¹Zt, i.e.,
Xt = exp(σCt) (X0^(1−α) + (1 − α) ∫_0^t exp((α − 1)σCs) ds)^(1/(1−α)).

Theorem 15.5 (Liu [148]) Let g be a function of two variables and let αt
be an integrable uncertain process. Then the uncertain differential equation

dXt = αt Xt dt + g(t, Xt )dCt (15.26)

has a solution
Xt = Yt⁻¹ Zt (15.27)
where
Yt = exp(−∫_0^t αs ds) (15.28)
and Zt is the solution of the uncertain differential equation
dZt = Yt g(t, Yt⁻¹Zt) dCt (15.29)
with initial value Z0 = X0.

Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential
dYt = −exp(−∫_0^t αs ds) αt dt = −Yt αt dt.

It follows from the integration by parts that
d(Xt Yt) = Xt dYt + Yt dXt = −Xt Yt αt dt + Yt αt Xt dt + Yt g(t, Xt) dCt.
That is,
d(Xt Yt) = Yt g(t, Xt) dCt.
Defining Zt = Xt Yt, we obtain Xt = Yt⁻¹Zt and dZt = Yt g(t, Yt⁻¹Zt) dCt.
Furthermore, since Y0 = 1, the initial value Z0 is just X0. The theorem is
thus verified.

Example 15.6: Let α and β be real numbers with β ≠ 1. Consider the uncertain differential equation
dXt = αXt dt + Xt^β dCt. (15.30)

At first, we have Yt = exp(−αt) and Zt satisfies the uncertain differential equation
dZt = exp(−αt)(exp(αt)Zt)^β dCt = exp((β − 1)αt)Zt^β dCt.



Since β ≠ 1, we have
dZt^(1−β) = (1 − β) exp((β − 1)αt) dCt.
It follows from the fundamental theorem of uncertain calculus that
Zt^(1−β) = Z0^(1−β) + (1 − β) ∫_0^t exp((β − 1)αs) dCs.
Since the initial value Z0 is just X0, we have
Zt = (X0^(1−β) + (1 − β) ∫_0^t exp((β − 1)αs) dCs)^(1/(1−β)).
Theorem 15.5 says the uncertain differential equation (15.30) has a solution Xt = Yt⁻¹Zt, i.e.,
Xt = exp(αt) (X0^(1−β) + (1 − β) ∫_0^t exp((β − 1)αs) dCs)^(1/(1−β)).

Second Analytic Method


This subsection will introduce an analytic method for solving nonlinear un-
certain differential equations like

dXt = f (t, Xt )dt + σt dCt (15.31)

and
dXt = αt dt + g(t, Xt )dCt . (15.32)

Theorem 15.6 (Yao [247]) Let f be a function of two variables and let σt
be an integrable uncertain process. Then the uncertain differential equation

dXt = f (t, Xt )dt + σt dCt (15.33)

has a solution
Xt = Yt + Zt (15.34)
where
Yt = ∫_0^t σs dCs (15.35)
and Zt is the solution of the uncertain differential equation
dZt = f(t, Yt + Zt) dt (15.36)
with initial value Z0 = X0.



Proof: At first, Yt has an uncertain differential dYt = σt dCt . It follows that


d(Xt − Yt ) = dXt − dYt = f (t, Xt )dt + σt dCt − σt dCt .
That is,
d(Xt − Yt ) = f (t, Xt )dt.
Defining Zt = Xt − Yt , we obtain Xt = Yt + Zt and dZt = f (t, Yt + Zt )dt.
Furthermore, since Y0 = 0, the initial value Z0 is just X0 . The theorem is
proved.

Example 15.7: Let α and σ be real numbers with α ≠ 0. Consider the uncertain differential equation
dXt = α exp(Xt) dt + σ dCt. (15.37)
At first, we have Yt = σCt and Zt satisfies the uncertain differential equation,
dZt = α exp(σCt + Zt )dt.
Since α ≠ 0, we have
d exp(−Zt) = −α exp(σCt) dt.
It follows from the fundamental theorem of uncertain calculus that
exp(−Zt) = exp(−Z0) − α ∫_0^t exp(σCs) ds.

Since the initial value Z0 is just X0, we have
Zt = X0 − ln(1 − α ∫_0^t exp(X0 + σCs) ds).
Hence
Xt = X0 + σCt − ln(1 − α ∫_0^t exp(X0 + σCs) ds).

Theorem 15.7 (Yao [247]) Let g be a function of two variables and let αt
be an integrable uncertain process. Then the uncertain differential equation
dXt = αt dt + g(t, Xt )dCt (15.38)
has a solution
Xt = Yt + Zt (15.39)
where
Yt = ∫_0^t αs ds (15.40)
and Zt is the solution of the uncertain differential equation
dZt = g(t, Yt + Zt) dCt (15.41)
with initial value Z0 = X0.

Proof: The uncertain process Yt has an uncertain differential dYt = αt dt. It follows that

d(Xt − Yt ) = dXt − dYt = αt dt + g(t, Xt )dCt − αt dt.

That is,
d(Xt − Yt ) = g(t, Xt )dCt .
Defining Zt = Xt − Yt , we obtain Xt = Yt + Zt and dZt = g(t, Yt + Zt )dCt .
Furthermore, since Y0 = 0, the initial value Z0 is just X0 . The theorem is
proved.

Example 15.8: Let α and σ be real numbers with σ ≠ 0. Consider the uncertain differential equation
dXt = α dt + σ exp(Xt) dCt. (15.42)

At first, we have Yt = αt and Zt satisfies the uncertain differential equation,

dZt = σ exp(αt + Zt )dCt .

Since σ ≠ 0, we have

d exp(−Zt) = −σ exp(αt) dCt.

It follows from the fundamental theorem of uncertain calculus that
exp(−Zt) = exp(−Z0) − σ ∫_0^t exp(αs) dCs.

Since the initial value Z0 is just X0, we have
Zt = X0 − ln(1 − σ ∫_0^t exp(X0 + αs) dCs).
Hence
Xt = X0 + αt − ln(1 − σ ∫_0^t exp(X0 + αs) dCs).

15.3 Existence and Uniqueness


Theorem 15.8 (Chen and Liu [12], Existence and Uniqueness Theorem)
The uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (15.43)

has a unique solution if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition
|f(t, x)| + |g(t, x)| ≤ L(1 + |x|), ∀x ∈ ℜ, t ≥ 0 (15.44)

and the Lipschitz condition
|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|, ∀x, y ∈ ℜ, t ≥ 0 (15.45)

for some constant L. Moreover, the solution is sample-continuous.

Proof: We first prove the existence of solution by a successive approximation method. Define Xt^(0) = X0, and
Xt^(n) = X0 + ∫_0^t f(s, Xs^(n−1)) ds + ∫_0^t g(s, Xs^(n−1)) dCs

for n = 1, 2, · · · and write
Dt^(n)(γ) = max_{0≤s≤t} |Xs^(n+1)(γ) − Xs^(n)(γ)|

for each γ ∈ Γ. It follows from the linear growth condition and Lipschitz condition that
Dt^(0)(γ) = max_{0≤s≤t} |∫_0^s f(v, X0) dv + ∫_0^s g(v, X0) dCv(γ)|
≤ ∫_0^t |f(v, X0)| dv + Kγ ∫_0^t |g(v, X0)| dv
≤ (1 + |X0|)L(1 + Kγ)t

where Kγ is the Lipschitz constant of the sample path Ct(γ). In fact, by using the induction method, we may verify
Dt^(n)(γ) ≤ (1 + |X0|) · L^(n+1)(1 + Kγ)^(n+1) t^(n+1) / (n + 1)!
for each n. This means that, for each γ ∈ Γ, the sequence of sample paths Xt^(k)(γ) converges uniformly on any given time interval. Write the limit as Xt(γ), which is just a solution of the uncertain differential equation because
Xt = X0 + ∫_0^t f(s, Xs) ds + ∫_0^t g(s, Xs) dCs.

Next we prove that the solution is unique. Assume that both Xt and Xt∗
are solutions of the uncertain differential equation. Then for each γ ∈ Γ, it
follows from linear growth condition and Lipschitz condition that
|Xt(γ) − Xt∗(γ)| ≤ L(1 + Kγ) ∫_0^t |Xv(γ) − Xv∗(γ)| dv.

By using Gronwall inequality, we obtain

|Xt (γ) − Xt∗ (γ)| ≤ 0 · exp(L(1 + Kγ )t) = 0.



Hence Xt = Xt∗. The uniqueness is verified. Finally, for each γ ∈ Γ, we have
|Xt(γ) − Xr(γ)| = |∫_r^t f(s, Xs(γ)) ds + ∫_r^t g(s, Xs(γ)) dCs(γ)| → 0
as r → t. Thus Xt is sample-continuous and the theorem is proved.

15.4 Stability
Definition 15.2 (Liu [125]) An uncertain differential equation is said to be
stable if for any two solutions Xt and Yt , we have
lim_{|X0−Y0|→0} M{|Xt − Yt| < ε for all t ≥ 0} = 1 (15.46)

for any given number ε > 0.

Example 15.9: In order to illustrate the concept of stability, let us consider the uncertain differential equation
dXt = adt + bdCt . (15.47)
It is clear that two solutions with initial values X0 and Y0 are
Xt = X0 + at + bCt ,
Yt = Y0 + at + bCt .
Then for any given number ε > 0, we have
lim_{|X0−Y0|→0} M{|Xt − Yt| < ε for all t ≥ 0} = lim_{|X0−Y0|→0} M{|X0 − Y0| < ε} = 1.

Hence the uncertain differential equation (15.47) is stable.

Example 15.10: Some uncertain differential equations are not stable. For
example, consider
dXt = Xt dt + bdCt . (15.48)
It is clear that two solutions with different initial values X0 and Y0 are
Xt = exp(t)X0 + b exp(t) ∫_0^t exp(−s) dCs,
Yt = exp(t)Y0 + b exp(t) ∫_0^t exp(−s) dCs.
Then for any given number ε > 0, we have
lim_{|X0−Y0|→0} M{|Xt − Yt| < ε for all t ≥ 0} = lim_{|X0−Y0|→0} M{exp(t)|X0 − Y0| < ε for all t ≥ 0} = 0.

Hence the uncertain differential equation (15.48) is unstable.



Theorem 15.9 (Yao, Gao and Gao [243], Stability Theorem) The uncertain
differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (15.49)

is stable if the coefficients f (t, x) and g(t, x) satisfy linear growth condition

|f(t, x)| + |g(t, x)| ≤ K(1 + |x|), ∀x ∈ ℜ, t ≥ 0 (15.50)

for some constant K and strong Lipschitz condition

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ L(t)|x − y|, ∀x, y ∈ ℜ, t ≥ 0 (15.51)

for some bounded and integrable function L(t) on [0, +∞).

Proof: Since L(t) is bounded on [0, +∞), there is a constant R such that
L(t) ≤ R for any t. Then the strong Lipschitz condition (15.51) implies the
following Lipschitz condition,

|f(t, x) − f(t, y)| + |g(t, x) − g(t, y)| ≤ R|x − y|, ∀x, y ∈ ℜ, t ≥ 0. (15.52)

It follows from linear growth condition (15.50), Lipschitz condition (15.52)


and the existence and uniqueness theorem that the uncertain differential
equation (15.49) has a unique solution. Let Xt and Yt be two solutions with
initial values X0 and Y0 , respectively. Then for each γ, we have
d|Xt(γ) − Yt(γ)| ≤ |f(t, Xt(γ)) − f(t, Yt(γ))| dt + |g(t, Xt(γ)) − g(t, Yt(γ))| dCt(γ)
≤ L(t)|Xt(γ) − Yt(γ)| dt + L(t)K(γ)|Xt(γ) − Yt(γ)| dt
= L(t)(1 + K(γ))|Xt(γ) − Yt(γ)| dt

where K(γ) is the Lipschitz constant of the sample path Ct(γ). It follows that
|Xt(γ) − Yt(γ)| ≤ |X0 − Y0| exp((1 + K(γ)) ∫_0^∞ L(s) ds).

Thus for any given ε > 0, we always have
M{|Xt − Yt| < ε for all t ≥ 0} ≥ M{|X0 − Y0| exp((1 + K(γ)) ∫_0^∞ L(s) ds) < ε}.

Since
M{|X0 − Y0| exp((1 + K(γ)) ∫_0^∞ L(s) ds) < ε} → 1
as |X0 − Y0| → 0, we obtain
lim_{|X0−Y0|→0} M{|Xt − Yt| < ε for all t ≥ 0} = 1.

Hence the uncertain differential equation is stable.

Exercise 15.1: Suppose u1t, u2t, v1t, v2t are bounded functions with respect to t such that
∫_0^∞ |u1t| dt < +∞,  ∫_0^∞ |v1t| dt < +∞. (15.53)

Show that the linear uncertain differential equation

dXt = (u1t Xt + u2t )dt + (v1t Xt + v2t )dCt (15.54)

is stable.

15.5 α-Path
Definition 15.3 (Yao and Chen [246]) Let α be a number with 0 < α < 1.
An uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (15.55)

is said to have an α-path Xtα if it solves the corresponding ordinary differential equation
dXtα = f(t, Xtα) dt + |g(t, Xtα)|Φ⁻¹(α) dt (15.56)
where Φ⁻¹(α) is the inverse standard normal uncertainty distribution, i.e.,
Φ⁻¹(α) = (√3/π) ln(α/(1 − α)). (15.57)

Remark 15.2: Note that each α-path Xtα is a real-valued function of time t,
but is not necessarily one of the sample paths. Furthermore, almost all α-paths
are continuous functions with respect to time t.

Example 15.11: The uncertain differential equation dXt = adt + bdCt with
X0 = 0 has an α-path
Xtα = at + |b|Φ−1 (α)t (15.58)
where Φ−1 is the inverse standard normal uncertainty distribution.

Example 15.12: The uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an α-path
Xtα = exp(at + |b|Φ⁻¹(α)t) (15.59)
where Φ⁻¹ is the inverse standard normal uncertainty distribution.
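Both α-paths above are available in closed form once Φ⁻¹ is coded. A short Python sketch of this (the parameter values below are illustrative only):

```python
import math

def phi_inv(alpha):
    # inverse standard normal uncertainty distribution (15.57)
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def alpha_path_linear(a, b, t, alpha):
    # alpha-path (15.58) of dXt = a dt + b dCt with X0 = 0
    return a * t + abs(b) * phi_inv(alpha) * t

def alpha_path_geometric(a, b, t, alpha):
    # alpha-path (15.59) of dXt = a Xt dt + b Xt dCt with X0 = 1
    return math.exp(a * t + abs(b) * phi_inv(alpha) * t)

print(alpha_path_geometric(a=1.0, b=1.0, t=1.0, alpha=0.9))
```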



Xtα
.....
.......
.... α = 0.9 .............
......
... .............
.............
..
... .. .. ....... .............
... .............
..............
... ..............
... ..............
...............
... .............................
..............................
...........................................................
... ............................................ .......................
... ............................................ ...............
................
... .................................. ............ ................
... ..................................... ........... ................
................
... .......................... ........ ........... .................
... ... ............. ...... ....... ........ .................
........ .......
...
...
... ... ........... ...... ........
... .... ..... ..... ...... .......
... ..... ..... ...... ....... ........
.............
.........
α = 0.8
... ... ..... ...... ...... ...... ....... .........
... ..... ..... ...... ....... ....... .........
... ... ..... ..... ...... ...... .... . ... ..........
... ... .... ..... ..... ....... .............. .........
..........
... ... ..... ...... ...... .......
... ..... ..... ...... .......
.......
....... ..........
... ... ..... ..... ...... .. ........ ..........
.....
... .... ..... ...... ...... ............ ........
........ α = 0.7
... .... ..... ..... ...... ... ... ...........
.... .... ..... .....
... ..... ..... ..... ...... .............. .........
... ..... ..... ....... ....... .......
.......
.........
.........
..... ...... ...... .......
... .. ........ .....
...
.....
.....
..
.
..
..... .......... ........... ..............
. .
α = 0.6
........
.........
... ..... .......... ............ .............. .........
..
.
... ..... ...... ....... ........... .........
...
...
.....
.....
......
......
....... .............
.......
α = 0.5
.......
........
.........
.........
...... ... ........ .....
... .
...
......
.......
.......
.........
........ α = 0.4
........
..........
.........
..........
... .........
...
...
.......
........
......... α = 0.3 .........
..........
..
......
..........
...
...
α = 0.2 ..........
...........
........
...
... α = 0.1
...
............................................................................................................................................................................................................................................
... t

Figure 15.1: A Spectrum of α-Paths of dXt = aXt dt + bXt dCt . Reprinted


from Liu [129].

15.6 Yao-Chen Formula


Yao-Chen formula relates uncertain differential equations and ordinary differential equations, just as the Feynman-Kac formula relates stochastic differential equations and partial differential equations.

Theorem 15.10 (Yao-Chen Formula [246]) Let Xt and Xtα be the solution
and α-path of the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.60)

respectively. Then
M{Xt ≤ Xtα , ∀t} = α, (15.61)
M{Xt > Xtα , ∀t} = 1 − α. (15.62)

Proof: At first, for each α-path Xtα, we divide the time interval into two parts,
T⁺ = {t | g(t, Xtα) ≥ 0},
T⁻ = {t | g(t, Xtα) < 0}.

It is obvious that T⁺ ∩ T⁻ = ∅ and T⁺ ∪ T⁻ = [0, +∞). Write
Λ₁⁺ = {γ | dCt(γ)/dt ≤ Φ⁻¹(α) for any t ∈ T⁺},
Λ₁⁻ = {γ | dCt(γ)/dt ≥ Φ⁻¹(1 − α) for any t ∈ T⁻}
where Φ⁻¹ is the inverse standard normal uncertainty distribution. Since T⁺ and T⁻ are disjoint sets and Ct has independent increments, we get
M{Λ₁⁺} = α,  M{Λ₁⁻} = α,  M{Λ₁⁺ ∩ Λ₁⁻} = α.


For any γ ∈ Λ₁⁺ ∩ Λ₁⁻, we always have
g(t, Xt(γ)) dCt(γ)/dt ≤ |g(t, Xtα)|Φ⁻¹(α), ∀t.
Hence Xt(γ) ≤ Xtα for all t and
M{Xt ≤ Xtα, ∀t} ≥ M{Λ₁⁺ ∩ Λ₁⁻} = α. (15.63)

On the other hand, let us define
Λ₂⁺ = {γ | dCt(γ)/dt > Φ⁻¹(α) for any t ∈ T⁺},
Λ₂⁻ = {γ | dCt(γ)/dt < Φ⁻¹(1 − α) for any t ∈ T⁻}.
Since T⁺ and T⁻ are disjoint sets and Ct has independent increments, we obtain
M{Λ₂⁺} = 1 − α,  M{Λ₂⁻} = 1 − α,  M{Λ₂⁺ ∩ Λ₂⁻} = 1 − α.


For any γ ∈ Λ₂⁺ ∩ Λ₂⁻, we always have
g(t, Xt(γ)) dCt(γ)/dt > |g(t, Xtα)|Φ⁻¹(α), ∀t.
Hence Xt(γ) > Xtα for all t and
M{Xt > Xtα, ∀t} ≥ M{Λ₂⁺ ∩ Λ₂⁻} = 1 − α. (15.64)

Note that {Xt ≤ Xtα, ∀t} and {Xt ≰ Xtα, ∀t} are opposite events with each other. By using the duality axiom, we obtain
M{Xt ≤ Xtα, ∀t} + M{Xt ≰ Xtα, ∀t} = 1.
It follows from {Xt > Xtα, ∀t} ⊂ {Xt ≰ Xtα, ∀t} and the monotonicity theorem that
M{Xt ≤ Xtα, ∀t} + M{Xt > Xtα, ∀t} ≤ 1. (15.65)

Thus (15.61) and (15.62) follow from (15.63), (15.64) and (15.65) immedi-
ately.

Remark 15.3: It is also shown that Yao-Chen formula may be written as
M{Xt < Xtα, ∀t} = α, (15.66)
M{Xt ≥ Xtα, ∀t} = 1 − α. (15.67)
Please note that {Xt < Xtα, ∀t} and {Xt ≥ Xtα, ∀t} are disjoint events but not opposite. Generally speaking, their union is not the universal set, and it is possible that
M{(Xt < Xtα, ∀t) ∪ (Xt ≥ Xtα, ∀t)} < 1. (15.68)
However, for any α, it is always true that
M{Xt < Xtα, ∀t} + M{Xt ≥ Xtα, ∀t} ≡ 1. (15.69)

Uncertainty Distribution of Solution


Theorem 15.11 (Yao and Chen [246]) Let Xt and Xtα be the solution and
α-path of the uncertain differential equation
dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.70)
respectively. Then the solution Xt has an inverse uncertainty distribution
Ψt⁻¹(α) = Xtα. (15.71)
Proof: Note that {Xt ≤ Xtα } ⊃ {Xs ≤ Xsα , ∀s} holds. By using the
monotonicity theorem and Yao-Chen formula, we obtain
M{Xt ≤ Xtα } ≥ M{Xs ≤ Xsα , ∀s} = α. (15.72)
Similarly, we also have
M{Xt > Xtα } ≥ M{Xs > Xsα , ∀s} = 1 − α. (15.73)
In addition, since {Xt ≤ Xtα } and {Xt > Xtα } are opposite events, the duality
axiom makes
M{Xt ≤ Xtα } + M{Xt > Xtα } = 1. (15.74)
It follows from (15.72), (15.73) and (15.74) that M{Xt ≤ Xtα } = α. The
theorem is thus verified.

Exercise 15.2: Show that the solution of the uncertain differential equation
dXt = adt + bdCt with X0 = 0 has an inverse uncertainty distribution
Ψt⁻¹(α) = at + |b|Φ⁻¹(α)t (15.75)
where Φ⁻¹ is the inverse standard normal uncertainty distribution.

Exercise 15.3: Show that the solution of the uncertain differential equation
dXt = aXt dt + bXt dCt with X0 = 1 has an inverse uncertainty distribution
Ψt⁻¹(α) = exp(at + |b|Φ⁻¹(α)t) (15.76)
where Φ⁻¹ is the inverse standard normal uncertainty distribution.

Expected Value of Solution


Theorem 15.12 (Yao and Chen [246]) Let Xt and Xtα be the solution and
α-path of the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.77)

respectively. Then for any monotone (increasing or decreasing) function J, we have
E[J(Xt)] = ∫_0^1 J(Xtα) dα. (15.78)

Proof: At first, it follows from Yao-Chen formula that Xt has an inverse uncertainty distribution Ψt⁻¹(α) = Xtα. Next, we may make a monotone function become a strictly monotone function by a small perturbation. When J is a strictly increasing function, it follows from Theorem 2.9 that J(Xt) has an inverse uncertainty distribution
Υt⁻¹(α) = J(Xtα).
Thus we have
E[J(Xt)] = ∫_0^1 Υt⁻¹(α) dα = ∫_0^1 J(Xtα) dα.

When J is a strictly decreasing function, it follows from Theorem 2.16 that J(Xt) has an inverse uncertainty distribution
Υt⁻¹(α) = J(Xt1−α).
Thus we have
E[J(Xt)] = ∫_0^1 Υt⁻¹(α) dα = ∫_0^1 J(Xt1−α) dα = ∫_0^1 J(Xtα) dα.

The theorem is thus proved.

Exercise 15.4: Let Xt and Xtα be the solution and α-path of some uncertain differential equation. Show that
E[Xt] = ∫_0^1 Xtα dα, (15.79)
E[(Xt − K)+] = ∫_0^1 (Xtα − K)+ dα, (15.80)
E[(K − Xt)+] = ∫_0^1 (K − Xtα)+ dα. (15.81)
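Formula (15.79) also suggests a direct numerical recipe: average the α-path over a grid of α values. A minimal sketch under that reading, reusing the closed-form α-path of Example 15.12 (grid size and parameter values are arbitrary illustrations):

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def expected_value(x_alpha, n=99):
    # Midpoint-rule approximation of E[Xt] = int_0^1 Xt^alpha d(alpha) in (15.79)
    return sum(x_alpha((k + 0.5) / n) for k in range(n)) / n

# E[X1] for dXt = Xt dt + Xt dCt with X0 = 1 (a = b = 1, illustrative values)
x1 = lambda alpha: math.exp(1.0 + phi_inv(alpha))
print(expected_value(x1))
```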

Extreme Value of Solution


Theorem 15.13 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.82)

respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum
sup_{0≤t≤s} J(Xt) (15.83)
has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} J(Xtα); (15.84)

and the infimum
inf_{0≤t≤s} J(Xt) (15.85)
has an inverse uncertainty distribution
Ψs⁻¹(α) = inf_{0≤t≤s} J(Xtα). (15.86)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that
{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xtα)} ⊃ {Xt ≤ Xtα, ∀t}.

By using Yao-Chen formula, we obtain
M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xtα)} ≥ M{Xt ≤ Xtα, ∀t} = α. (15.87)
Similarly, we have
M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xtα)} ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.88)

It follows from (15.87), (15.88) and the duality axiom that
M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xtα)} = α (15.89)
which proves (15.84). Next, it is easy to verify that
{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xtα)} ⊃ {Xt ≤ Xtα, ∀t}.

By using Yao-Chen formula, we obtain
M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xtα)} ≥ M{Xt ≤ Xtα, ∀t} = α. (15.90)
Similarly, we have
M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xtα)} ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.91)
It follows from (15.90), (15.91) and the duality axiom that
M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xtα)} = α (15.92)
which proves (15.86). The theorem is thus verified.

Exercise 15.5: Let r and K be real numbers. Show that the supremum
sup_{0≤t≤s} exp(−rt)(Xt − K)
has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} exp(−rt)(Xtα − K)
for any given time s > 0.

Theorem 15.14 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.93)

respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum
sup_{0≤t≤s} J(Xt) (15.94)
has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} J(Xt1−α); (15.95)
and the infimum
inf_{0≤t≤s} J(Xt) (15.96)
has an inverse uncertainty distribution
Ψs⁻¹(α) = inf_{0≤t≤s} J(Xt1−α). (15.97)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that
{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt1−α)} ⊃ {Xt ≥ Xt1−α, ∀t}.

By using Yao-Chen formula, we obtain
M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt1−α)} ≥ M{Xt ≥ Xt1−α, ∀t} = α. (15.98)
Similarly, we have
M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xt1−α)} ≥ M{Xt < Xt1−α, ∀t} = 1 − α. (15.99)
It follows from (15.98), (15.99) and the duality axiom that
M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt1−α)} = α (15.100)

which proves (15.95). Next, it is easy to verify that
{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt1−α)} ⊃ {Xt ≥ Xt1−α, ∀t}.
By using Yao-Chen formula, we obtain
M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt1−α)} ≥ M{Xt ≥ Xt1−α, ∀t} = α. (15.101)
Similarly, we have
M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xt1−α)} ≥ M{Xt < Xt1−α, ∀t} = 1 − α. (15.102)
It follows from (15.101), (15.102) and the duality axiom that
M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt1−α)} = α (15.103)
which proves (15.97). The theorem is thus verified.

Exercise 15.6: Let r and K be real numbers. Show that the supremum
sup_{0≤t≤s} exp(−rt)(K − Xt)
has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} exp(−rt)(K − Xt1−α)
for any given time s > 0.



First Hitting Time of Solution


Theorem 15.15 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (15.104)

with an initial value X0, respectively. Then for any given level z and strictly increasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution
Ψ(s) = 1 − inf{α | sup_{0≤t≤s} J(Xtα) ≥ z},  if z > J(X0);
Ψ(s) = sup{α | inf_{0≤t≤s} J(Xtα) ≤ z},  if z < J(X0). (15.105)

Proof: At first, assume z > J(X0) and write
α0 = inf{α | sup_{0≤t≤s} J(Xtα) ≥ z}.
Then we have
sup_{0≤t≤s} J(Xtα0) = z,
{τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≥ Xtα0, ∀t},
{τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt < Xtα0, ∀t}.
By using Yao-Chen formula, we obtain
M{τz ≤ s} ≥ M{Xt ≥ Xtα0, ∀t} = 1 − α0,
M{τz > s} ≥ M{Xt < Xtα0, ∀t} = α0.
It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0. Hence the first hitting time τz has an uncertainty distribution
Ψ(s) = M{τz ≤ s} = 1 − inf{α | sup_{0≤t≤s} J(Xtα) ≥ z}.

Similarly, assume z < J(X0) and write
α0 = sup{α | inf_{0≤t≤s} J(Xtα) ≤ z}.
Then we have
inf_{0≤t≤s} J(Xtα0) = z,
{τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≤ Xtα0, ∀t},
{τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt > Xtα0, ∀t}.
By using Yao-Chen formula, we obtain
M{τz ≤ s} ≥ M{Xt ≤ Xtα0, ∀t} = α0,
M{τz > s} ≥ M{Xt > Xtα0, ∀t} = 1 − α0.
It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0. Hence the first hitting time τz has an uncertainty distribution
Ψ(s) = M{τz ≤ s} = sup{α | inf_{0≤t≤s} J(Xtα) ≤ z}.
The theorem is verified.
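Since Xtα is increasing in α, the infimum in (15.105) can be located by bisection on α. The sketch below is an editorial illustration for the case z > J(X0), using the closed-form α-path of Example 15.12 with J(x) = x; the helper names and sample numbers are hypothetical.

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def hitting_cdf(z, s, sup_J_alpha, tol=1e-6):
    # Psi(s) = 1 - inf{alpha | sup_{0<=t<=s} J(Xt^alpha) >= z} for z > J(X0);
    # requires sup_J_alpha(alpha, s) to be increasing in alpha (J strictly increasing)
    lo, hi = tol, 1.0 - tol
    if sup_J_alpha(lo, s) >= z:
        return 1.0                       # even the lowest alpha-path reaches z
    if sup_J_alpha(hi, s) < z:
        return 0.0                       # no alpha-path reaches z before s
    while hi - lo > tol:                 # bisection for the critical alpha0
        mid = 0.5 * (lo + hi)
        if sup_J_alpha(mid, s) >= z:
            hi = mid
        else:
            lo = mid
    return 1.0 - 0.5 * (lo + hi)

def sup_path(alpha, s, a=0.5, b=1.0):
    # sup of Xt^alpha = exp((a + |b| phi_inv(alpha)) t) over [0, s]: attained
    # at t = s when the rate is positive and at t = 0 otherwise
    rate = a + abs(b) * phi_inv(alpha)
    return math.exp(max(rate, 0.0) * s)

print(hitting_cdf(z=2.0, s=1.0, sup_J_alpha=sup_path))   # M{tau_2 <= 1}
```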

Theorem 15.16 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (15.106)

with an initial value X0, respectively. Then for any given level z and strictly decreasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution
Ψ(s) = sup{α | sup_{0≤t≤s} J(Xtα) ≥ z},  if z > J(X0);
Ψ(s) = 1 − inf{α | inf_{0≤t≤s} J(Xtα) ≤ z},  if z < J(X0). (15.107)

Proof: At first, assume z > J(X0) and write
α0 = sup{α | sup_{0≤t≤s} J(Xtα) ≥ z}.
Then we have
sup_{0≤t≤s} J(Xtα0) = z,
{τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≤ Xtα0, ∀t},
{τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt > Xtα0, ∀t}.
By using Yao-Chen formula, we obtain
M{τz ≤ s} ≥ M{Xt ≤ Xtα0, ∀t} = α0,
M{τz > s} ≥ M{Xt > Xtα0, ∀t} = 1 − α0.
It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0. Hence the first hitting time τz has an uncertainty distribution
Ψ(s) = M{τz ≤ s} = sup{α | sup_{0≤t≤s} J(Xtα) ≥ z}.

Similarly, assume z < J(X0) and write
α0 = inf{α | inf_{0≤t≤s} J(Xtα) ≤ z}.
Then we have
inf_{0≤t≤s} J(Xtα0) = z,
{τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≥ Xtα0, ∀t},
{τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt < Xtα0, ∀t}.
By using Yao-Chen formula, we obtain
M{τz ≤ s} ≥ M{Xt ≥ Xtα0, ∀t} = 1 − α0,
M{τz > s} ≥ M{Xt < Xtα0, ∀t} = α0.
It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0. Hence the first hitting time τz has an uncertainty distribution
Ψ(s) = M{τz ≤ s} = 1 − inf{α | inf_{0≤t≤s} J(Xtα) ≤ z}.
The theorem is verified.

Time Integral of Solution


Theorem 15.17 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.108)

respectively. Then for any time s > 0 and strictly increasing function J(x), the time integral
∫_0^s J(Xt) dt (15.109)
has an inverse uncertainty distribution
Ψs⁻¹(α) = ∫_0^s J(Xtα) dt. (15.110)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that
{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xtα) dt} ⊃ {J(Xt) ≤ J(Xtα), ∀t} ⊃ {Xt ≤ Xtα, ∀t}.
By using Yao-Chen formula, we obtain
M{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xtα) dt} ≥ M{Xt ≤ Xtα, ∀t} = α. (15.111)
Similarly, we have
M{∫_0^s J(Xt) dt > ∫_0^s J(Xtα) dt} ≥ M{Xt > Xtα, ∀t} = 1 − α. (15.112)
It follows from (15.111), (15.112) and the duality axiom that
M{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xtα) dt} = α. (15.113)
The theorem is thus verified.

Exercise 15.7: Let r and K be real numbers. Show that the time integral
∫_0^s exp(−rt)(Xt − K) dt
has an inverse uncertainty distribution
Ψs⁻¹(α) = ∫_0^s exp(−rt)(Xtα − K) dt
for any given time s > 0.


Theorem 15.18 (Yao [244]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation
dXt = f (t, Xt )dt + g(t, Xt )dCt , (15.114)
respectively. Then for any time s > 0 and strictly decreasing function J(x), the time integral
∫_0^s J(Xt) dt (15.115)
has an inverse uncertainty distribution
Ψs⁻¹(α) = ∫_0^s J(Xt1−α) dt. (15.116)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that
{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xt1−α) dt} ⊃ {Xt ≥ Xt1−α, ∀t}.
By using Yao-Chen formula, we obtain
M{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xt1−α) dt} ≥ M{Xt ≥ Xt1−α, ∀t} = α. (15.117)
Similarly, we have
M{∫_0^s J(Xt) dt > ∫_0^s J(Xt1−α) dt} ≥ M{Xt < Xt1−α, ∀t} = 1 − α. (15.118)
It follows from (15.117), (15.118) and the duality axiom that
M{∫_0^s J(Xt) dt ≤ ∫_0^s J(Xt1−α) dt} = α. (15.119)
The theorem is thus verified.

Exercise 15.8: Let r and K be real numbers. Show that the time integral
∫_0^s exp(−rt)(K − Xt) dt
has an inverse uncertainty distribution
Ψs⁻¹(α) = ∫_0^s exp(−rt)(K − Xt1−α) dt
for any given time s > 0.

15.7 Numerical Methods


It is almost impossible to find analytic solutions for general uncertain differ-
ential equations. This fact provides a motivation to design some numerical
methods to solve the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt . (15.120)

In order to do so, a key point is to obtain a spectrum of α-paths of the uncertain differential equation. For this purpose, Yao and Chen [246] designed an Euler method:

Step 1. Fix α on (0, 1).



Step 2. Solve dXtα = f(t, Xtα) dt + |g(t, Xtα)|Φ⁻¹(α) dt by any method for ordinary differential equations and obtain the α-path Xtα, for example, by using the recursion formula
Xi+1α = Xiα + f(ti, Xiα)h + |g(ti, Xiα)|Φ⁻¹(α)h (15.121)
where Φ⁻¹ is the inverse standard normal uncertainty distribution and h is the step length.
Step 3. The α-path Xtα is obtained.
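The recursion (15.121) translates directly into a few lines of Python. The following is a sketch of the scheme (an editorial illustration, not the Matlab toolbox implementation); f, g and the grid are supplied by the caller.

```python
import math

def phi_inv(alpha):
    # inverse standard normal uncertainty distribution (15.57)
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def euler_alpha_path(f, g, x0, alpha, T, n):
    # alpha-path of dXt = f(t,Xt)dt + g(t,Xt)dCt on [0,T] via recursion (15.121)
    h = T / n                      # step length
    c = phi_inv(alpha)
    xs = [x0]
    for i in range(n):
        t, x = i * h, xs[-1]
        xs.append(x + f(t, x) * h + abs(g(t, x)) * c * h)
    return xs
```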

Remark 15.4: Shen and Yao [209] designed a Runge-Kutta method that replaces the recursion formula (15.121) with
Xi+1α = Xiα + (h/6)(k1 + 2k2 + 2k3 + k4) (15.122)
where
k1 = f(ti, Xiα) + |g(ti, Xiα)|Φ⁻¹(α), (15.123)
k2 = f(ti + h/2, Xiα + hk1/2) + |g(ti + h/2, Xiα + hk1/2)|Φ⁻¹(α), (15.124)
k3 = f(ti + h/2, Xiα + hk2/2) + |g(ti + h/2, Xiα + hk2/2)|Φ⁻¹(α), (15.125)
k4 = f(ti + h, Xiα + hk3) + |g(ti + h, Xiα + hk3)|Φ⁻¹(α). (15.126)

Example 15.13: In order to illustrate the numerical method, let us consider an uncertain differential equation
dXt = (t − Xt) dt + √(1 + Xt) dCt,  X0 = 1. (15.127)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this equation successfully and obtain all α-paths of the uncertain differential equation. Furthermore, we may get

E[X1 ] ≈ 0.870. (15.128)
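As a check on (15.128), the Euler recursion combined with a midpoint average over α reproduces this expected value approximately. A self-contained sketch with arbitrary grid sizes:

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def x1_alpha(alpha, n=1000):
    # Euler recursion (15.121) for (15.127) up to t = 1
    h, x = 1.0 / n, 1.0
    c = phi_inv(alpha)
    for i in range(n):
        t = i * h
        # max(...) guards the square root against tiny negative round-off
        x += (t - x) * h + math.sqrt(max(1.0 + x, 0.0)) * c * h
    return x

n_alpha = 99
print(sum(x1_alpha((k + 0.5) / n_alpha) for k in range(n_alpha)) / n_alpha)
# compare with E[X1] ~ 0.870 in (15.128)
```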

Example 15.14: Now we consider a nonlinear uncertain differential equation
dXt = √Xt dt + (1 − t)Xt dCt,  X0 = 1. (15.129)
Note that (1 − t)Xt takes not only positive values but also negative values. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may obtain all α-paths of the uncertain differential equation. Furthermore, we may get
E[(X2 − 3)+] ≈ 2.845. (15.130)

15.8 Bibliographic Notes


The study of uncertain differential equation was pioneered by Liu [123] in
2008. This work was immediately followed up by many researchers. Nowadays, the uncertain differential equation has achieved fruitful results in both theory and practice.
The existence and uniqueness theorem of solution of uncertain differential
equation was first proved by Chen and Liu [12] under linear growth condi-
tion and Lipschitz continuous condition. The theorem was verified again by
Gao [53] under local linear growth condition and local Lipschitz continuous
condition.
The first concept of stability of uncertain differential equation was pre-
sented by Liu [125], and some stability theorems were proved by Yao, Gao and
Gao [243]. Following that, different types of stability of uncertain differen-
tial equations were explored, for example, stability in mean (Yao and Sheng
[252]), stability in moment (Sheng and Wang [210]), almost sure stability
(Liu, Ke and Fei [143]), and exponential stability (Sheng [214]).
In order to solve uncertain differential equations, Chen and Liu [12] ob-
tained an analytic solution to linear uncertain differential equations. In ad-
dition, Liu [148] and Yao [247] presented a spectrum of analytic methods to
solve some special classes of nonlinear uncertain differential equations.
More importantly, Yao and Chen [246] showed that the solution of an
uncertain differential equation can be represented by a family of solutions of
ordinary differential equations, thus relating uncertain differential equations
and ordinary differential equations. On the basis of Yao-Chen formula, Yao
[244] presented some formulas to calculate extreme value, first hitting time,
and time integral of solution of uncertain differential equation. Furthermore,
some numerical methods for solving general uncertain differential equations
were designed among others by Yao and Chen [246] and Shen and Yao [209].
Uncertain differential equation was extended by many researchers. For
example, uncertain delay differential equation was studied among others by
Barbacioru [4], Ge and Zhu [54], and Liu and Fei [142]. In addition, uncertain
differential equation with jumps was suggested by Yao [241], and backward
uncertain differential equation was discussed by Ge and Zhu [55].
Uncertain differential equation has been widely applied in many fields
such as uncertain finance (Liu [134]), uncertain optimal control (Zhu [284]),
and uncertain differential game (Yang and Gao [238]).
Chapter 16

Uncertain Finance

This chapter will introduce uncertain stock model, uncertain interest rate
model, and uncertain currency model by using the tool of uncertain differen-
tial equation.

16.1 Uncertain Stock Model


Liu [125] supposed that the stock price follows an uncertain differential equa-
tion and presented an uncertain stock model in which the bond price Xt and
the stock price Yt are determined by
(
dXt = rXt dt
(16.1)
dYt = eYt dt + σYt dCt

where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and
Ct is a canonical Liu process. Note that the bond price is Xt = X0 exp(rt)
and the stock price is
Yt = Y0 exp(et + σCt ) (16.2)
whose inverse uncertainty distribution is
Φt⁻¹(α) = Y0 exp(et + (σt√3/π) ln(α/(1 − α))). (16.3)

European Option
Definition 16.1 A European call option is a contract that gives the holder
the right to buy a stock at an expiration time s for a strike price K.

The payoff from a European call option is (Ys −K)+ since the option is ra-
tionally exercised if and only if Ys > K. Considering the time value of money
resulted from the bond, the present value of the payoff is exp(−rs)(Ys − K)+ .


Hence the European call option price should be the expected present value
of the payoff.

Definition 16.2 Assume a European call option has a strike price K and
an expiration time s. Then the European call option price is

fc = exp(−rs)E[(Ys − K)+ ]. (16.4)


Figure 16.1: Payoff (Ys − K)+ from European Call Option

Theorem 16.1 (Liu [125]) Assume a European call option for the uncertain
stock model (16.1) has a strike price K and an expiration time s. Then the
European call option price is
fc = exp(−rs) ∫_0^1 (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+ dα. (16.5)

Proof: Since (Ys − K)+ is an increasing function with respect to Ys, it has an inverse uncertainty distribution
Ψs⁻¹(α) = (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+.

It follows from Definition 16.2 that the European call option price formula is
just (16.5).

Remark 16.1: It is clear that the European call option price is a decreasing
function of interest rate r. That is, the European call option will devaluate
if the interest rate is raised; and the European call option will appreciate in
value if the interest rate is reduced. In addition, the European call option
price is also a decreasing function of the strike price K.

Example 16.1: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European call option price
fc = 6.91.

Definition 16.3 A European put option is a contract that gives the holder
the right to sell a stock at an expiration time s for a strike price K.

The payoff from a European put option is (K −Ys )+ since the option is ra-
tionally exercised if and only if Ys < K. Considering the time value of money
resulted from the bond, the present value of this payoff is exp(−rs)(K −Ys )+ .
Hence the European put option price should be the expected present value
of the payoff.

Definition 16.4 Assume a European put option has a strike price K and
an expiration time s. Then the European put option price is

fp = exp(−rs)E[(K − Ys )+ ]. (16.6)

Theorem 16.2 (Liu [125]) Assume a European put option for the uncertain
stock model (16.1) has a strike price K and an expiration time s. Then the
European put option price is
fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα. (16.7)

Proof: Since (K − Ys)+ is a decreasing function with respect to Ys, it has an inverse uncertainty distribution
Ψs⁻¹(α) = (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+.
It follows from Definition 16.4 that the European put option price is
fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+ dα
   = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα.

The European put option price formula is verified.

Remark 16.2: It is easy to verify that the option price is a decreasing


function of the interest rate r, and is an increasing function of the strike
price K.

Example 16.2: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European put option price
fp = 4.40.
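Since (16.5) and (16.7) are one-dimensional integrals over α, the prices in Examples 16.1 and 16.2 can be checked with a short script. The following Python sketch uses a midpoint rule in α, which is an arbitrary discretization chosen for illustration, not the toolbox's own method:

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def european_options(r, e, sigma, y0, K, s, n=9999):
    # European call (16.5) and put (16.7) prices by a midpoint rule in alpha
    fc = fp = 0.0
    for k in range(n):
        ys = y0 * math.exp(e * s + sigma * s * phi_inv((k + 0.5) / n))
        fc += max(ys - K, 0.0)
        fp += max(K - ys, 0.0)
    d = math.exp(-r * s) / n
    return fc * d, fp * d

fc, fp = european_options(r=0.08, e=0.06, sigma=0.32, y0=20.0, K=25.0, s=2.0)
print(fc, fp)   # should be close to 6.91 and 4.40
```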

American Option
Definition 16.5 An American call option is a contract that gives the holder
the right to buy a stock at any time prior to an expiration time s for a strike
price K.

It is clear that the payoff from an American call option is the supremum
of (Yt −K)+ over the time interval [0, s]. Considering the time value of money
resulted from the bond, the present value of this payoff is

sup_{0≤t≤s} exp(−rt)(Yt − K)+. (16.8)

Hence the American call option price should be the expected present value
of the payoff.

Definition 16.6 Assume an American call option has a strike price K and
an expiration time s. Then the American call option price is

fc = E[sup_{0≤t≤s} exp(−rt)(Yt − K)+]. (16.9)

Theorem 16.3 (Chen [13]) Assume an American call option for the uncer-
tain stock model (16.1) has a strike price K and an expiration time s. Then
the American call option price is
fc = ∫_0^1 sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+ dα.
Proof: It follows from Theorem 15.13 that sup_{0≤t≤s} exp(−rt)(Yt − K)+ has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+.

Hence the American call option price formula follows from Definition 16.6
immediately.

Remark 16.3: It is easy to verify that the option price is a decreasing


function with respect to either the interest rate r or the strike price K.

Example 16.3: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the American call option price
fc = 19.8.

Definition 16.7 An American put option is a contract that gives the holder
the right to sell a stock at any time prior to an expiration time s for a strike
price K.

It is clear that the payoff from an American put option is the supremum
of (K −Yt )+ over the time interval [0, s]. Considering the time value of money
resulted from the bond, the present value of this payoff is

sup_{0≤t≤s} exp(−rt)(K − Yt)+. (16.10)

Hence the American put option price should be the expected present value
of the payoff.

Definition 16.8 Assume an American put option has a strike price K and
an expiration time s. Then the American put option price is

fp = E[sup_{0≤t≤s} exp(−rt)(K − Yt)+]. (16.11)

Theorem 16.4 (Chen [13]) Assume an American put option for the uncer-
tain stock model (16.1) has a strike price K and an expiration time s. Then
the American put option price is
fp = ∫_0^1 sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln(α/(1 − α))))+ dα.
Proof: It follows from Theorem 15.14 that sup_{0≤t≤s} exp(−rt)(K − Yt)+ has an inverse uncertainty distribution
Ψs⁻¹(α) = sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln((1 − α)/α)))+.

Hence the American put option price formula follows from Definition 16.8
immediately.

Remark 16.4: It is easy to verify that the option price is a decreasing


function of the interest rate r, and is an increasing function of the strike
price K.

Example 16.4: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the American put option price
fp = 3.90.
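The American prices in Examples 16.3 and 16.4 can be approximated the same way, with a discretized supremum over [0, s] inside the α-integral. A sketch with arbitrary grid sizes:

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def american_options(r, e, sigma, y0, K, s, n_alpha=999, n_t=200):
    # American call/put prices of Theorems 16.3 and 16.4: discretize both the
    # alpha-integral and the supremum over the time interval [0, s]
    fc = fp = 0.0
    for k in range(n_alpha):
        c = phi_inv((k + 0.5) / n_alpha)
        call = put = 0.0
        for i in range(n_t + 1):
            t = s * i / n_t
            yt = y0 * math.exp(e * t + sigma * t * c)
            disc = math.exp(-r * t)
            call = max(call, disc * (yt - K))
            put = max(put, disc * (K - yt))
        fc += max(call, 0.0)
        fp += max(put, 0.0)
    return fc / n_alpha, fp / n_alpha

fc, fp = american_options(r=0.08, e=0.06, sigma=0.32, y0=40.0, K=38.0, s=2.0)
print(fc, fp)   # should be close to 19.8 and 3.90
```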

Asian Option
Definition 16.9 An Asian call option is a contract whose payoff at the expiration time s is
((1/s) ∫_0^s Yt dt − K)+ (16.12)
where K is a strike price.
Considering the time value of money resulted from the bond, the present value of the payoff from an Asian call option is
exp(−rs)((1/s) ∫_0^s Yt dt − K)+. (16.13)
Hence the Asian call option price should be the expected present value of the
payoff.
Definition 16.10 Assume an Asian call option has a strike price K and an expiration time s. Then the Asian call option price is
fc = exp(−rs) E[((1/s) ∫_0^s Yt dt − K)+]. (16.14)

Theorem 16.5 (Sun and Chen [218]) Assume an Asian call option for the
uncertain stock model (16.1) has a strike price K and an expiration time s.
Then the Asian call option price is
fc = exp(−rs) ∫_0^1 ((Y0/s) ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α))) dt − K)+ dα.
Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral
∫_0^s Yt dt
is
Ψs⁻¹(α) = Y0 ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α))) dt.
Hence the Asian call option price formula follows from Definition 16.10 immediately.

Definition 16.11 An Asian put option is a contract whose payoff at the expiration time s is
(K − (1/s) ∫_0^s Yt dt)+ (16.15)
where K is a strike price.
Considering the time value of money resulted from the bond, the present value of the payoff from an Asian put option is
exp(−rs)(K − (1/s) ∫_0^s Yt dt)+. (16.16)
Hence the Asian put option price should be the expected present value of the payoff.
Definition 16.12 Assume an Asian put option has a strike price K and an expiration time s. Then the Asian put option price is
fp = exp(−rs) E[(K − (1/s) ∫_0^s Yt dt)+]. (16.17)

Theorem 16.6 (Sun and Chen [218]) Assume an Asian put option for the
uncertain stock model (16.1) has a strike price K and an expiration time s.
Then the Asian put option price is
fp = exp(−rs) ∫_0^1 (K − (Y0/s) ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α))) dt)+ dα.
Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral
∫_0^s Yt dt
is
Ψs⁻¹(α) = Y0 ∫_0^s exp(et + (σt√3/π) ln(α/(1 − α))) dt.
Hence the Asian put option price formula follows from Definition 16.12 immediately.

General Stock Model


Generally, we may assume the stock price follows a general uncertain differential equation and obtain a general stock model in which the bond price Xt and the stock price Yt are determined by
dXt = rXt dt,
dYt = F(t, Yt) dt + G(t, Yt) dCt (16.18)

where r is the riskless interest rate, F and G are two functions, and Ct is a
canonical Liu process.
Note that the α-path Ytα of the stock price Yt can be calculated by some
numerical methods. Assume the strike price is K and the expiration time is
s. It follows from Definition 16.2 and Theorem 15.12 that the European call
option price is Z 1
fc = exp(−rs) (Ysα − K)+ dα. (16.19)
0
It follows from Definition 16.4 and Theorem 15.12 that the European put
option price is Z 1
fp = exp(−rs) (K − Ysα )+ dα. (16.20)
0
It follows from Definition 16.6 and Theorem 15.13 that the American call
option price is
Z 1
α +
fc = sup exp(−rt)(Yt − K) dα. (16.21)
0 0≤t≤s

It follows from Definition 16.8 and Theorem 15.14 that the American put
option price is
fp = ∫_0^1 sup_{0≤t≤s} exp(−rt)(K − Ytα)+ dα. (16.22)

It follows from Definition 16.9 and Theorem 15.17 that the Asian call option
price is
fc = exp(−rs) ∫_0^1 ((1/s) ∫_0^s Ytα dt − K)+ dα. (16.23)
It follows from Definition 16.11 and Theorem 15.18 that the Asian put option
price is
fp = exp(−rs) ∫_0^1 (K − (1/s) ∫_0^s Ytα dt)+ dα. (16.24)

Multifactor Stock Model


Now we assume that there are multiple stocks whose prices are determined
by multiple Liu processes. In this case, we have a multifactor stock model in
which the bond price Xt and the stock prices Yit are determined by

dXt = rXt dt,
dYit = ei Yit dt + Σ_{j=1}^n σij Yit dCjt,  i = 1, 2, · · · , m (16.25)

where r is the riskless interest rate, ei are the log-drifts, σij are the log-
diffusions, Cjt are independent Liu processes, i = 1, 2, · · · , m, j = 1, 2, · · · , n.

Portfolio Selection
For the multifactor stock model (16.25), we have the choice of m + 1 different
investments. At each time t we may choose a portfolio (βt , β1t , · · · , βmt ) (i.e.,
the investment fractions meeting βt + β1t + · · · + βmt = 1). Then the wealth
Zt at time t should follow the uncertain differential equation
dZt = rβt Zt dt + Σ_{i=1}^m ei βit Zt dt + Σ_{i=1}^m Σ_{j=1}^n σij βit Zt dCjt. (16.26)
That is,
Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).

Portfolio selection problem is to find an optimal portfolio (βt , β1t , · · · , βmt )


such that the wealth Zs is maximized in the sense of expected value.

No-Arbitrage
The stock model (16.25) is said to be no-arbitrage if there is no portfolio
(βt , β1t , · · · , βmt ) such that for some time s > 0, we have

M{exp(−rs)Zs ≥ Z0 } = 1 (16.27)

and
M{exp(−rs)Zs > Z0 } > 0 (16.28)
where Zt is determined by (16.26) and represents the wealth at time t.

Theorem 16.7 (Yao’s No-Arbitrage Theorem [248]) The multifactor stock


model (16.25) is no-arbitrage if and only if the system of linear equations
    
σ11 σ12 · · · σ1n x1 e1 − r
 σ21 σ22 · · · σ2n    x2   e 2 − r 
   
= (16.29)

 .. .. .. . .
..   ..  
    .
.. 
 . . . 
σm1 σm2 ··· σmn xn em − r

has a solution, i.e., (e1 −r, e2 −r, · · · , em −r) is a linear combination of column
vectors (σ11 , σ21 , · · · , σm1 ), (σ12 , σ22 , · · · , σm2 ), · · · , (σ1n , σ2n , · · · , σmn ).

Proof: When the portfolio (βt, β1t, · · · , βmt) is accepted, the wealth at each time t is
Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).

Thus
ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs
is a normal uncertain variable with expected value
∫_0^t Σ_{i=1}^m (ei − r)βis ds
and variance
(Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis ds)².

Assume the system (16.29) has a solution. The argument breaks down into two cases. Case I: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose
Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis ds = 0.
Then
Σ_{i=1}^m σij βis = 0,  j = 1, 2, · · · , n,  s ∈ (0, t].

Since the system (16.29) has a solution, we have
Σ_{i=1}^m (ei − r)βis = 0,  s ∈ (0, t]
and
∫_0^t Σ_{i=1}^m (ei − r)βis ds = 0.

This fact implies that
ln(exp(−rt)Zt) − ln Z0 = 0
and
M{exp(−rt)Zt > Z0} = 0.
That is, the stock model (16.25) is no-arbitrage. Case II: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose
Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis ds ≠ 0.

Then ln(exp(−rt)Zt) − ln Z0 is a normal uncertain variable with nonzero variance and
M{ln(exp(−rt)Zt) − ln Z0 ≥ 0} < 1.
That is,
M{exp(−rt)Zt ≥ Z0} < 1
and the multifactor stock model (16.25) is no-arbitrage.
Conversely, assume the system (16.29) has no solution. Then there exist real numbers α1, α2, · · · , αm such that
Σ_{i=1}^m σij αi = 0,  j = 1, 2, · · · , n
and
Σ_{i=1}^m (ei − r)αi > 0.
Now we take a portfolio
(βt, β1t, · · · , βmt) ≡ (1 − (α1 + α2 + · · · + αm), α1, α2, · · · , αm).
Then
ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)αi ds > 0.
Thus we have
M{exp(−rt)Zt > Z0} = 1.
Hence the multifactor stock model (16.25) is arbitrage. The theorem is thus proved.
Theorem 16.8 The multifactor stock model (16.25) is no-arbitrage if its log-diffusion matrix
( σ11 σ12 · · · σ1n )
( σ21 σ22 · · · σ2n )
( · · ·             )        (16.30)
( σm1 σm2 · · · σmn )
has rank m, i.e., the row vectors are linearly independent.
Proof: If the log-diffusion matrix (16.30) has rank m, then the system of equations (16.29) has a solution. It follows from Theorem 16.7 that the multifactor stock model (16.25) is no-arbitrage.
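The solvability condition of Theorem 16.7 is a routine linear-algebra check: (e − r) must lie in the column space of the log-diffusion matrix. A sketch using numpy (the sample numbers are made up):

```python
import numpy as np

def is_no_arbitrage(sigma, e, r):
    # The system (16.29) has a solution iff appending (e - r) as a column
    # does not increase the rank of sigma (Theorem 16.7)
    sigma = np.asarray(sigma, dtype=float)
    b = np.asarray(e, dtype=float) - r
    return np.linalg.matrix_rank(np.column_stack([sigma, b])) == \
           np.linalg.matrix_rank(sigma)

# Two stocks driven by a single Liu process (illustrative numbers):
# (0.06, 0.10) is not a multiple of (0.3, 0.6), so arbitrage exists
print(is_no_arbitrage([[0.3], [0.6]], e=[0.10, 0.14], r=0.04))   # False
```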
Theorem 16.9 The multifactor stock model (16.25) is no-arbitrage if its
log-drifts are all equal to the interest rate r, i.e.,
ei = r, i = 1, 2, · · · , m. (16.31)

Proof: Since the log-drifts ei = r for any i = 1, 2, · · · , m, we immediately have
(e1 − r, e2 − r, · · · , em − r) ≡ (0, 0, · · · , 0),
which is a linear combination of (σ11, σ21, · · · , σm1), (σ12, σ22, · · · , σm2), · · · , (σ1n, σ2n, · · · , σmn). It follows from Theorem 16.7 that the multifactor stock model (16.25) is no-arbitrage.

16.2 Uncertain Interest Rate Model


Real interest rates do not remain unchanged. Chen and Gao [21] assumed
that the interest rate follows an uncertain differential equation and presented
an uncertain interest rate model,

dXt = (m − aXt )dt + σdCt (16.32)

where m, a, σ are positive numbers. Besides, Jiao and Yao [75] investigated the uncertain interest rate model,
dXt = (m − aXt) dt + σ√Xt dCt. (16.33)

More generally, we may assume the interest rate Xt follows a general uncer-
tain differential equation and obtain a general interest rate model,

dXt = F (t, Xt )dt + G(t, Xt )dCt (16.34)

where F and G are two functions, and Ct is a canonical Liu process.

Zero-Coupon Bond
A zero-coupon bond is a bond bought at a price lower than its face value
that is the amount it promises to pay at the maturity date. For simplicity,
we assume the face value is always 1 dollar. One problem is how to price a
zero-coupon bond.

Definition 16.13 Let Xt be the uncertain interest rate. Then the price of a
zero-coupon bond with a maturity date s is
f = E[exp(−∫_0^s Xt dt)]. (16.35)

Theorem 16.10 Let Xtα be the α-path of the uncertain interest rate Xt .
Then the price of a zero-coupon bond with maturity date s is
f = ∫_0^1 exp(−∫_0^s Xtα dt) dα. (16.36)

Proof: It follows from Theorem 15.17 that the inverse uncertainty distribution of the time integral
∫_0^s Xt dt
is
Ψs⁻¹(α) = ∫_0^s Xtα dt.
Hence the price formula of zero-coupon bond follows from Definition 16.13
immediately.
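Numerically, (16.36) combines an Euler approximation of Xtα, a time integral, and an average over α. A sketch for the interest rate model (16.32), with made-up parameter values:

```python
import math

def phi_inv(alpha):
    return math.sqrt(3.0) / math.pi * math.log(alpha / (1.0 - alpha))

def zero_coupon_price(m, a, sigma, x0, s, n_alpha=99, n_t=1000):
    # Bond price (16.36) under dXt = (m - a Xt)dt + sigma dCt
    h = s / n_t
    total = 0.0
    for k in range(n_alpha):
        c = phi_inv((k + 0.5) / n_alpha)
        x, integral = x0, 0.0
        for i in range(n_t):
            integral += x * h                          # left rectangle rule
            x += (m - a * x) * h + abs(sigma) * c * h  # Euler step (15.121)
        total += math.exp(-integral)
    return total / n_alpha

print(zero_coupon_price(m=0.04, a=0.5, sigma=0.02, x0=0.03, s=5.0))
```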

16.3 Uncertain Currency Model


Liu, Chen and Ralescu [152] assumed that the exchange rate follows an un-
certain differential equation and proposed an uncertain currency model,


dXt = uXt dt (Domestic Currency)
dYt = vYt dt (Foreign Currency) (16.37)
dZt = eZt dt + σZt dCt (Exchange Rate)

where Xt represents the domestic currency with domestic interest rate u, Yt


represents the foreign currency with foreign interest rate v, and Zt represents
the exchange rate that is domestic currency price of one unit of foreign cur-
rency at time t. Note that the domestic currency price is Xt = X0 exp(ut),
the foreign currency price is Yt = Y0 exp(vt), and the exchange rate is

Zt = Z0 exp(et + σCt ) (16.38)

whose inverse uncertainty distribution is
Φt⁻¹(α) = Z0 exp(et + (σt√3/π) ln(α/(1 − α))). (16.39)

European Currency Option


Definition 16.14 A European currency option is a contract that gives the
holder the right to exchange one unit of foreign currency at an expiration
time s for K units of domestic currency.

Suppose that the price of this contract is f in domestic currency. Then


the investor pays f for buying the contract at time 0, and receives (Zs − K)+
in domestic currency at the expiration time s. Thus the expected return of
the investor at time 0 is

− f + exp(−us)E[(Zs − K)+ ]. (16.40)



On the other hand, the bank receives f for selling the contract at time 0,
and pays (1 − K/Zs )+ in foreign currency at the expiration time s. Thus the
expected return of the bank at the time 0 is

f − exp(−vs)Z0 E[(1 − K/Zs )+ ]. (16.41)

The fair price of this contract should make the investor and the bank have
an identical expected return, i.e.,

− f + exp(−us)E[(Zs − K)+ ] = f − exp(−vs)Z0 E[(1 − K/Zs )+ ]. (16.42)

Thus the European currency option price is given by the definition below.

Definition 16.15 (Liu, Chen and Ralescu [152]) Assume a European cur-
rency option has a strike price K and an expiration time s. Then the Euro-
pean currency option price is
f = (1/2) exp(−us)E[(Zs − K)+] + (1/2) exp(−vs)Z0 E[(1 − K/Zs)+]. (16.43)
Theorem 16.11 (Liu, Chen and Ralescu [152]) Assume a European cur-
rency option for the uncertain currency model (16.37) has a strike price K
and an expiration time s. Then the European currency option price is
f = (1/2) exp(−us) ∫_0^1 (Z0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+ dα
  + (1/2) exp(−vs) ∫_0^1 (Z0 − K/exp(es + (σs√3/π) ln(α/(1 − α))))+ dα.

Proof: Since (Zs − K)+ and Z0(1 − K/Zs)+ are increasing functions with respect to Zs, they have inverse uncertainty distributions
Ψs⁻¹(α) = (Z0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+,
Υs⁻¹(α) = (Z0 − K/exp(es + (σs√3/π) ln(α/(1 − α))))+,
respectively. Thus the European currency option price formula follows from Definition 16.15 immediately.

Remark 16.5: The European currency option price of the uncertain cur-
rency model (16.37) is a decreasing function of K, u and v.

Example 16.5: Assume the domestic interest rate $u = 0.08$, the foreign interest rate $v = 0.07$, the log-drift $e = 0.06$, the log-diffusion $\sigma = 0.32$, the initial exchange rate $Z_0 = 5$, the strike price $K = 6$ and the expiration time $s = 2$. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European currency option price
$$f = 0.977.$$
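As an independent cross-check, the following minimal Python sketch evaluates the two α-integrals of Theorem 16.11 by the midpoint rule with the parameters of this example; its output can be compared with the toolbox value quoted above.

```python
import numpy as np

# Parameters of Example 16.5
u, v, e, sigma, Z0, K, s = 0.08, 0.07, 0.06, 0.32, 5.0, 6.0, 2.0

alpha = (np.arange(10**6) + 0.5) / 10**6      # midpoint rule on (0, 1)
Zs = Z0 * np.exp(e*s + sigma*s*np.sqrt(3)/np.pi * np.log(alpha/(1 - alpha)))

f = 0.5*np.exp(-u*s)*np.mean(np.maximum(Zs - K, 0.0)) \
  + 0.5*np.exp(-v*s)*Z0*np.mean(np.maximum(1.0 - K/Zs, 0.0))
print(f)   # compare with the toolbox value above
```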

American Currency Option


Definition 16.16 An American currency option is a contract that gives the
holder the right to exchange one unit of foreign currency at any time prior
to an expiration time s for K units of domestic currency.

Suppose that the price of this contract is $f$ in domestic currency. Then the investor pays $f$ for buying the contract, and receives
$$\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+ \qquad (16.44)$$
in domestic currency. Thus the expected return of the investor at time 0 is
$$-f + E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right]. \qquad (16.45)$$

On the other hand, the bank receives $f$ for selling the contract, and pays
$$\sup_{0\le t\le s}\exp(-vt)(1 - K/Z_t)^+ \qquad (16.46)$$
in foreign currency. Thus the expected return of the bank at time 0 is
$$f - E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right]. \qquad (16.47)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
$$-f + E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right] = f - E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right]. \qquad (16.48)$$

Thus the American currency option price is given by the definition below.

Definition 16.17 (Liu, Chen and Ralescu [152]) Assume an American currency option has a strike price $K$ and an expiration time $s$. Then the American currency option price is
$$f = \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+\right] + \frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+\right].$$

Theorem 16.12 (Liu, Chen and Ralescu [152]) Assume an American currency option for the uncertain currency model (16.37) has a strike price $K$ and an expiration time $s$. Then the American currency option price is
$$f = \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+ d\alpha$$
$$\quad + \frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+ d\alpha.$$

Proof: It follows from Theorem 15.13 that $\sup_{0\le t\le s}\exp(-ut)(Z_t - K)^+$ and $\sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t)^+$ have inverse uncertainty distributions
$$\Psi_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right) - K\right)^+,$$
$$\Upsilon_s^{-1}(\alpha) = \sup_{0\le t\le s}\exp(-vt)\left(Z_0 - K\Big/\exp\left(et + \frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the American currency option price formula follows from Definition 16.17 immediately.
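A minimal Python sketch of Theorem 16.12 follows; it discretizes the time interval for the suprema and the α-integral by the midpoint rule, reusing the parameters of Example 16.5 purely for illustration.

```python
import numpy as np

u, v, e, sigma, Z0, K, s = 0.08, 0.07, 0.06, 0.32, 5.0, 6.0, 2.0
alpha = (np.arange(9999) + 0.5) / 9999        # alpha grid, midpoint rule
t = np.linspace(0.0, s, 201)[:, None]         # time grid for the suprema

# alpha-path of the exchange rate under model (16.37)
Z = Z0 * np.exp(e*t + sigma*t*np.sqrt(3)/np.pi * np.log(alpha/(1 - alpha)))

investor = np.exp(-u*t) * np.maximum(Z - K, 0.0)           # discounted payoffs
bank     = np.exp(-v*t) * Z0 * np.maximum(1.0 - K/Z, 0.0)

f = 0.5*np.mean(investor.max(axis=0)) + 0.5*np.mean(bank.max(axis=0))
print(f)
```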

General Currency Model


If the exchange rate follows a general uncertain differential equation, then we have a general currency model,
$$\begin{cases} dX_t = uX_t\,dt & \text{(Domestic Currency)}\\ dY_t = vY_t\,dt & \text{(Foreign Currency)}\\ dZ_t = F(t, Z_t)\,dt + G(t, Z_t)\,dC_t & \text{(Exchange Rate)} \end{cases} \qquad (16.49)$$

where $u$ and $v$ are interest rates, $F$ and $G$ are two functions, and $C_t$ is a canonical Liu process.
Note that the $\alpha$-path $Z_t^\alpha$ of the exchange rate $Z_t$ can be calculated by some numerical methods. Assume the strike price is $K$ and the expiration time is $s$. It follows from Definition 16.15 and Theorem 15.12 that the European currency option price is
$$f = \frac{1}{2}\int_0^1\left(\exp(-us)(Z_s^\alpha - K)^+ + \exp(-vs)Z_0(1 - K/Z_s^\alpha)^+\right)d\alpha.$$
It follows from Definition 16.17 and Theorem 15.13 that the American currency option price is
$$f = \frac{1}{2}\int_0^1\left(\sup_{0\le t\le s}\exp(-ut)(Z_t^\alpha - K)^+ + \sup_{0\le t\le s}\exp(-vt)Z_0(1 - K/Z_t^\alpha)^+\right)d\alpha.$$
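A minimal Python sketch of such a numerical method follows; it computes $Z_s^\alpha$ by the Euler scheme, using the α-path characterization of uncertain differential equations from Chapter 15 (the α-path solves $dZ = F(t,Z)dt + |G(t,Z)|\,(\sqrt{3}/\pi)\ln(\alpha/(1-\alpha))\,dt$), and checks the result against the closed form (16.38).

```python
import numpy as np

def alpha_path_at_s(F, G, Z0, s, alpha, n=2000):
    """Euler scheme for the alpha-path of dZt = F(t,Zt)dt + G(t,Zt)dCt.
    Sketch based on the alpha-path characterization of Chapter 15."""
    alpha = np.asarray(alpha, dtype=float)
    c = np.sqrt(3)/np.pi * np.log(alpha/(1 - alpha))
    dt = s / n
    Z = np.full(alpha.shape, Z0, dtype=float)
    for i in range(n):
        t = i * dt
        Z = Z + F(t, Z)*dt + np.abs(G(t, Z))*c*dt
    return Z                                   # Z_s^alpha for each alpha

# Check against the closed form (16.38)-(16.39) with F = e*z, G = sigma*z:
e, sigma, Z0, s = 0.06, 0.32, 5.0, 2.0
a = np.array([0.2, 0.5, 0.8])
print(alpha_path_at_s(lambda t, z: e*z, lambda t, z: sigma*z, Z0, s, a))
print(Z0 * np.exp(e*s + sigma*s*np.sqrt(3)/np.pi * np.log(a/(1 - a))))
```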

16.4 Bibliographic Notes


The classical finance theory assumed that stock prices, interest rates, and exchange rates follow stochastic differential equations. However, this presupposition was challenged, among others, by Liu [134], in which a convincing paradox was presented to show why a real stock price cannot follow any stochastic differential equation. As an alternative, Liu [134] suggested developing a theory of uncertain finance.
Uncertain differential equations were first introduced into finance by Liu
[125] in 2009 in which an uncertain stock model was proposed and European
option price formulas were provided. Besides, Chen [13] derived American
option price formulas, Sun and Chen [218] verified Asian option price formu-
las, and Yao [248] proved a no-arbitrage theorem for this type of uncertain
stock model. It is emphasized that other uncertain stock models were also
actively investigated by Peng and Yao [182], Yu [259], and Chen, Liu and
Ralescu [19], among others.
Uncertain differential equations were used to model the interest rate by Chen and Gao [21] in 2013, and an uncertain interest rate model was presented. On the basis of this model, the price of the zero-coupon bond was also derived. Besides, Jiao and Yao [75] investigated another type of uncertain interest rate model.
Uncertain differential equations were employed to model currency ex-
change rate by Liu, Chen and Ralescu [152] in which an uncertain currency
model was proposed and some currency option price formulas were also de-
rived for the uncertain currency markets. In addition, Shen and Yao [208]
discussed another type of uncertain currency model.
Appendix A

Probability Theory

It is generally believed that the study of probability theory was started by Pascal and Fermat in the 17th century when they succeeded in deriving the exact probabilities for certain gambling problems. After that, probability theory was subsequently studied by many researchers. Great progress was achieved when von Mises [226] initialized the concept of sample space in 1931. A complete axiomatic foundation of probability theory was given by Kolmogorov [88] in 1933. Since then, probability theory has been developed steadily and widely applied in science and engineering.
The emphasis in this appendix is mainly on probability measure, ran-
dom variable, probability distribution, independence, operational law, ex-
pected value, variance, moment, entropy, law of large numbers, conditional
probability, stochastic process, stochastic calculus, and stochastic differential
equation.

A.1 Probability Measure


Let Ω be a nonempty set, and let A be a σ-algebra over Ω. Each element in A
is called an event. In order to present an axiomatic definition of probability,
the following three axioms are assumed:
Axiom 1. (Normality Axiom) Pr{Ω} = 1 for the universal set Ω.
Axiom 2. (Nonnegativity Axiom) Pr{A} ≥ 0 for any event A.
Axiom 3. (Additivity Axiom) For every countable sequence of mutually dis-
joint events A1 , A2 , · · · , we have
$$\Pr\left\{\bigcup_{i=1}^\infty A_i\right\} = \sum_{i=1}^\infty\Pr\{A_i\}. \qquad (A.1)$$


Definition A.1 The set function Pr is called a probability measure if it sat-


isfies the normality, nonnegativity, and additivity axioms.

Example A.1: Let Ω = {ω1 , ω2 , · · · }, and let A be the power set of Ω.


Assume that p1 , p2 , · · · are nonnegative numbers such that p1 + p2 + · · · = 1.
Define a set function on $\mathcal A$ as
$$\Pr\{A\} = \sum_{\omega_i\in A} p_i. \qquad (A.2)$$

Then Pr is a probability measure.
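A minimal Python sketch of this construction, with hypothetical weights:

```python
from fractions import Fraction

# The set function (A.2) on a countable space, with hypothetical weights.
p = {"w1": Fraction(1, 2), "w2": Fraction(1, 3), "w3": Fraction(1, 6)}

def pr(event):                       # event: a set of outcomes
    return sum(p[w] for w in event)

assert pr({"w1", "w2", "w3"}) == 1                     # normality
assert pr({"w1", "w2"}) == pr({"w1"}) + pr({"w2"})     # additivity
```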

Example A.2: Let $\phi$ be a nonnegative and integrable function on $\Re$ (the set of real numbers) such that
$$\int_{\Re}\phi(x)dx = 1. \qquad (A.3)$$
Define a set function on the Borel algebra as
$$\Pr\{A\} = \int_A\phi(x)dx. \qquad (A.4)$$

Then Pr is a probability measure.

Definition A.2 Let Ω be a nonempty set, let A be a σ-algebra over Ω, and let
Pr be a probability measure. Then the triplet (Ω, A, Pr) is called a probability
space.

Example A.3: Let Ω = {ω1 , ω2 , · · · }, let A be the power set of Ω, and let Pr
be a probability measure defined by (A.2). Then (Ω, A, Pr) is a probability
space.

Example A.4: Let Ω = [0, 1], let A be the Borel algebra over Ω, and let Pr
be the Lebesgue measure. Then (Ω, A, Pr) is a probability space. For many
purposes it is sufficient to use it as the basic probability space.

Theorem A.1 (Probability Continuity Theorem) Let $(\Omega, \mathcal A, \Pr)$ be a probability space. If $A_1, A_2, \cdots \in \mathcal A$ and $\lim_{i\to\infty} A_i$ exists, then
$$\lim_{i\to\infty}\Pr\{A_i\} = \Pr\left\{\lim_{i\to\infty} A_i\right\}. \qquad (A.5)$$

Proof: Step 1: Suppose $\{A_i\}$ is an increasing sequence of events. Write $A_i\to A$ and $A_0 = \emptyset$. Then $\{A_i\setminus A_{i-1}\}$ is a sequence of disjoint events and
$$\bigcup_{i=1}^{\infty}(A_i\setminus A_{i-1}) = A, \qquad \bigcup_{i=1}^{k}(A_i\setminus A_{i-1}) = A_k$$
for $k = 1, 2, \cdots$ Thus we have
$$\Pr\{A\} = \Pr\left\{\bigcup_{i=1}^{\infty}(A_i\setminus A_{i-1})\right\} = \sum_{i=1}^{\infty}\Pr\{A_i\setminus A_{i-1}\} = \lim_{k\to\infty}\sum_{i=1}^{k}\Pr\{A_i\setminus A_{i-1}\} = \lim_{k\to\infty}\Pr\left\{\bigcup_{i=1}^{k}(A_i\setminus A_{i-1})\right\} = \lim_{k\to\infty}\Pr\{A_k\}.$$

Step 2: If $\{A_i\}$ is a decreasing sequence of events, then the sequence $\{A_1\setminus A_i\}$ is clearly increasing. It follows that
$$\Pr\{A_1\} - \Pr\{A\} = \Pr\left\{\lim_{i\to\infty}(A_1\setminus A_i)\right\} = \lim_{i\to\infty}\Pr\{A_1\setminus A_i\} = \Pr\{A_1\} - \lim_{i\to\infty}\Pr\{A_i\}$$
which implies that $\Pr\{A_i\}\to\Pr\{A\}$.


Step 3: If {Ai } is a sequence of events such that Ai → A, then for each
k, we have

\ [∞
Ai ⊂ Ak ⊂ Ai .
i=k i=k

Since Pr is an increasing set function, we have


(∞ ) (∞ )
\ [
Pr Ai ≤ Pr{Ak } ≤ Pr Ai .
i=k i=k

Note that

\ ∞
[
Ai ↑ A, Ai ↓ A.
i=k i=k

It follows from Steps 1 and 2 that Pr{Ai } → Pr{A}.

Product Probability
Let (Ωk , Ak , Prk ), k = 1, 2, · · · be a sequence of probability spaces. Now we
write
$$\Omega = \Omega_1\times\Omega_2\times\cdots, \qquad \mathcal A = \mathcal A_1\times\mathcal A_2\times\cdots \qquad (A.6)$$
It has been proved that there is a unique probability measure Pr on the product σ-algebra $\mathcal A$ such that
$$\Pr\left\{\prod_{k=1}^{\infty} A_k\right\} = \prod_{k=1}^{\infty}\Pr_k\{A_k\} \qquad (A.7)$$

where Ak are arbitrarily chosen events from Ak for k = 1, 2, · · · , respectively.


This conclusion is called product probability theorem. Such a probability
measure is called product probability measure, denoted by

Pr = Pr1 × Pr2 × · · · (A.8)

Remark A.1: Please note that the product probability theorem cannot be deduced from the three axioms unless we presuppose that the product probability meets the three axioms. If I were allowed to reconstruct probability theory, I would like to replace the product probability theorem with Axiom 4: Let $(\Omega_k, \mathcal A_k, \Pr_k)$ be probability spaces for $k = 1, 2, \cdots$ The product probability measure Pr is a probability measure satisfying
$$\Pr\left\{\prod_{k=1}^{\infty} A_k\right\} = \prod_{k=1}^{\infty}\Pr_k\{A_k\} \qquad (A.9)$$
where $A_k$ are arbitrarily chosen events from $\mathcal A_k$ for $k = 1, 2, \cdots$, respectively.


One advantage is to force the practitioners to justify the product probability
for their own problems.

Definition A.3 Assume (Ωk , Ak , Prk ) are probability spaces for k = 1, 2, · · ·


Let Ω = Ω1 × Ω2 × · · · , A = A1 × A2 × · · · and Pr = Pr1 × Pr2 × · · · Then
the triplet (Ω, A, Pr) is called a product probability space.

Independence of Events
Definition A.4 The events $A_1, A_2, \cdots, A_n$ are said to be independent if
$$\Pr\left\{\bigcap_{i=1}^n A_i^*\right\} = \prod_{i=1}^n\Pr\{A_i^*\} \qquad (A.10)$$

where A∗i are arbitrarily chosen from {Ai , Ω}, i = 1, 2, · · · , n, respectively,


and Ω is the sure event.

Remark A.2: Especially, two events A1 and A2 are independent if and only
if
Pr {A1 ∩ A2 } = Pr{A1 } × Pr{A2 }. (A.11)

Example A.5: The impossible event ∅ is independent of any event A because

Pr{∅ ∩ A} = Pr{∅} = 0 = Pr{∅} × Pr{A}.

Example A.6: The sure event Ω is independent of any event A because

Pr{Ω ∩ A} = Pr{A} = Pr{Ω} × Pr{A}.



Theorem A.2 Let (Ωk , Ak , Prk ) be probability spaces and Ak ∈ Ak for k =


1, 2, · · · , n. Then the events
Ω1 × · · · × Ωk−1 × Ak × Ωk+1 × · · · × Ωn , k = 1, 2, · · · , n (A.12)
are always independent in the product probability space. That is, the events
A1 , A2 , · · · , An (A.13)
are always independent if they are from different probability spaces.
Proof: For simplicity, we only prove the case of n = 2. It follows from
the product probability theorem that the product probability measure of the
intersection is
Pr{(A1 × Ω2 ) ∩ (Ω1 × A2 )} = Pr{A1 × A2 } = Pr1 {A1 } × Pr2 {A2 }.
By using Pr{A1 × Ω2 } = Pr1 {A1 } and Pr{Ω1 × A2 } = Pr2 {A2 }, we obtain
Pr{(A1 × Ω2 ) ∩ (Ω1 × A2 )} = Pr{A1 × Ω2 } × Pr{Ω1 × A2 }.
Thus A1 × Ω2 and Ω1 × A2 are independent events. Furthermore, since A1
and A2 are understood as A1 × Ω2 and Ω1 × A2 in the product probability
space, respectively, the two events A1 and A2 are also independent.

A.2 Random Variable


Definition A.5 A random variable is a function from a probability space
(Ω, A, Pr) to the set of real numbers such that {ξ ∈ B} is an event for any
Borel set B.

Example A.7: Take $(\Omega, \mathcal A, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5$. Then the function
$$\xi(\omega) = \begin{cases} 0, & \text{if } \omega = \omega_1\\ 1, & \text{if } \omega = \omega_2 \end{cases}$$
is a random variable.

Example A.8: Take (Ω, A, Pr) to be the interval [0, 1] with Borel algebra
and Lebesgue measure. We define ξ as an identity function from [0, 1] to
[0, 1]. Since ξ is a measurable function, it is a random variable.
Definition A.6 Let $\xi_1, \xi_2, \cdots, \xi_n$ be random variables on the probability space $(\Omega, \mathcal A, \Pr)$, and let $f$ be a real-valued measurable function. Then
$$\xi = f(\xi_1, \xi_2, \cdots, \xi_n) \qquad (A.14)$$
is a random variable defined by
$$\xi(\omega) = f(\xi_1(\omega), \xi_2(\omega), \cdots, \xi_n(\omega)), \quad \forall\omega\in\Omega. \qquad (A.15)$$

Theorem A.3 Let ξ1 , ξ2 , · · · , ξn be random variables, and let f be a real-


valued measurable function. Then f (ξ1 , ξ2 , · · · , ξn ) is a random variable.
Proof: Since ξ1 , ξ2 , · · · , ξn are random variables, they are measurable func-
tions from a probability space (Ω, A, Pr) to the set of real numbers. Thus
f (ξ1 , ξ2 , · · · , ξn ) is also a measurable function from the probability space
(Ω, A, Pr) to the set of real numbers. Hence f (ξ1 , ξ2 , · · · , ξn ) is a random
variable.

A.3 Probability Distribution


Definition A.7 The probability distribution Φ of a random variable ξ is
defined by
Φ(x) = Pr {ξ ≤ x} (A.16)
for any real number x.
That is, $\Phi(x)$ is the probability that the random variable $\xi$ takes a value less than or equal to $x$. A function $\Phi: \Re\to[0, 1]$ is a probability distribution if and only if it is an increasing and right-continuous function with
$$\lim_{x\to-\infty}\Phi(x) = 0; \qquad \lim_{x\to+\infty}\Phi(x) = 1. \qquad (A.17)$$

Example A.9: Take $(\Omega, \mathcal A, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5$. We now define a random variable as follows,
$$\xi(\omega) = \begin{cases} 0, & \text{if } \omega = \omega_1\\ 1, & \text{if } \omega = \omega_2. \end{cases}$$
Then $\xi$ has a probability distribution
$$\Phi(x) = \begin{cases} 0, & \text{if } x < 0\\ 0.5, & \text{if } 0\le x < 1\\ 1, & \text{if } x\ge 1. \end{cases}$$

Definition A.8 The probability density function $\phi: \Re\to[0, +\infty)$ of a random variable $\xi$ is a function such that
$$\Phi(x) = \int_{-\infty}^x \phi(y)dy \qquad (A.18)$$

holds for any real number x, where Φ is the probability distribution of the
random variable ξ.
Theorem A.4 (Probability Inversion Theorem) Let $\xi$ be a random variable whose probability density function $\phi$ exists. Then for any Borel set $B$, we have
$$\Pr\{\xi\in B\} = \int_B \phi(y)dy. \qquad (A.19)$$

Proof: Assume that $\mathcal C$ is the class of all subsets $C$ of $\Re$ for which the relation
$$\Pr\{\xi\in C\} = \int_C \phi(y)dy \qquad (A.20)$$
holds. We will show that $\mathcal C$ contains all Borel sets. On the one hand, we may prove that $\mathcal C$ is a monotone class (if $A_i\in\mathcal C$ and $A_i\uparrow A$ or $A_i\downarrow A$, then $A\in\mathcal C$). On the other hand, we may verify that $\mathcal C$ contains all intervals of the form $(-\infty, a]$, $(a, b]$, $(b, \infty)$ and $\emptyset$ since
$$\Pr\{\xi\in(-\infty, a]\} = \Phi(a) = \int_{-\infty}^a \phi(y)dy,$$
$$\Pr\{\xi\in(b, +\infty)\} = \Phi(+\infty) - \Phi(b) = \int_b^{+\infty}\phi(y)dy,$$
$$\Pr\{\xi\in(a, b]\} = \Phi(b) - \Phi(a) = \int_a^b \phi(y)dy,$$
$$\Pr\{\xi\in\emptyset\} = 0 = \int_\emptyset \phi(y)dy$$
where $\Phi$ is the probability distribution of $\xi$. Let $\mathcal F$ be the algebra consisting of all finite unions of disjoint sets of the form $(-\infty, a]$, $(a, b]$, $(b, \infty)$ and $\emptyset$. Note that for any disjoint sets $C_1, C_2, \cdots, C_m$ of $\mathcal F$ and $C = C_1\cup C_2\cup\cdots\cup C_m$, we have
$$\Pr\{\xi\in C\} = \sum_{j=1}^m\Pr\{\xi\in C_j\} = \sum_{j=1}^m\int_{C_j}\phi(y)dy = \int_C\phi(y)dy.$$
That is, $C\in\mathcal C$. Hence we have $\mathcal F\subset\mathcal C$. Since the smallest σ-algebra containing $\mathcal F$ is just the Borel algebra, the monotone class theorem (if $\mathcal F\subset\mathcal C$ and $\sigma(\mathcal F)$ is the smallest σ-algebra containing $\mathcal F$, then $\sigma(\mathcal F)\subset\mathcal C$) implies that $\mathcal C$ contains all Borel sets.

Definition A.9 A random variable $\xi$ has a uniform distribution if its probability density function is
$$\phi(x) = \frac{1}{b-a}, \quad a\le x\le b \qquad (A.21)$$
where a and b are real numbers with a < b.

Definition A.10 A random variable $\xi$ has an exponential distribution if its probability density function is
$$\phi(x) = \frac{1}{\beta}\exp\left(-\frac{x}{\beta}\right), \quad x\ge 0 \qquad (A.22)$$
where β is a positive number.

Definition A.11 A random variable $\xi$ has a normal distribution if its probability density function is
$$\phi(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \quad -\infty < x < +\infty \qquad (A.23)$$

where µ and σ are real numbers with σ > 0.

Definition A.12 A random variable $\xi$ has a lognormal distribution if its logarithm is normally distributed, i.e., its probability density function is
$$\phi(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right), \quad x > 0 \qquad (A.24)$$

where µ and σ are real numbers with σ > 0.

A.4 Independence
Definition A.13 The random variables $\xi_1, \xi_2, \cdots, \xi_n$ are said to be independent if
$$\Pr\left\{\bigcap_{i=1}^n(\xi_i\in B_i)\right\} = \prod_{i=1}^n\Pr\{\xi_i\in B_i\} \qquad (A.25)$$

for any Borel sets B1 , B2 , · · · , Bn .

Example A.10: Let ξ1 (ω1 ) and ξ2 (ω2 ) be random variables on the probabil-
ity spaces (Ω1 , A1 , Pr1 ) and (Ω2 , A2 , Pr2 ), respectively. It is clear that they
are also random variables on the product probability space (Ω1 , A1 , Pr1 ) ×
(Ω2 , A2 , Pr2 ). Then for any Borel sets B1 and B2 , we have

Pr{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )}
= Pr {(ω1 , ω2 ) | ξ1 (ω1 ) ∈ B1 , ξ2 (ω2 ) ∈ B2 }
= Pr {(ω1 | ξ1 (ω1 ) ∈ B1 ) × (ω2 | ξ2 (ω2 ) ∈ B2 )}
= Pr1 {ω1 | ξ1 (ω1 ) ∈ B1 } × Pr2 {ω2 | ξ2 (ω2 ) ∈ B2 }
= Pr {ξ1 ∈ B1 } × Pr {ξ2 ∈ B2 } .

Thus ξ1 and ξ2 are independent in the product probability space. In fact, it


is true that random variables are always independent if they are defined on
different probability spaces.

Theorem A.5 Let ξ1 , ξ2 , · · · , ξn be independent random variables, and let


f1 , f2 , · · · , fn be measurable functions. Then f1 (ξ1 ), f2 (ξ2 ), · · · , fn (ξn ) are
independent random variables.

Proof: For any Borel sets $B_1, B_2, \cdots, B_n$, it follows from the definition of independence that
$$\Pr\left\{\bigcap_{i=1}^n(f_i(\xi_i)\in B_i)\right\} = \Pr\left\{\bigcap_{i=1}^n(\xi_i\in f_i^{-1}(B_i))\right\} = \prod_{i=1}^n\Pr\{\xi_i\in f_i^{-1}(B_i)\} = \prod_{i=1}^n\Pr\{f_i(\xi_i)\in B_i\}.$$

Thus f1 (ξ1 ), f2 (ξ2 ), · · · , fn (ξn ) are independent random variables.

A.5 Operational Law


Theorem A.6 Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \cdots, \Phi_n$, respectively, and let $f: \Re^n\to\Re$ be a measurable function. Then the random variable
$$\xi = f(\xi_1, \xi_2, \cdots, \xi_n) \qquad (A.26)$$

has a probability distribution
$$\Phi(x) = \int_{f(x_1,x_2,\cdots,x_n)\le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \qquad (A.27)$$

Proof: It follows from the additivity axiom of probability measure and the independence of the random variables $\xi_1, \xi_2, \cdots, \xi_n$ that
$$\Phi(x) = \Pr\{f(\xi_1, \xi_2, \cdots, \xi_n)\le x\} = \int_{f(x_1,x_2,\cdots,x_n)\le x}\Pr\left\{\bigcap_{i=1}^n(x_i < \xi_i\le x_i + dx_i)\right\}$$
$$= \int_{f(x_1,x_2,\cdots,x_n)\le x}\prod_{i=1}^n\Pr\{x_i < \xi_i\le x_i + dx_i\} = \int_{f(x_1,x_2,\cdots,x_n)\le x}\prod_{i=1}^n(\Phi_i(x_i + dx_i) - \Phi_i(x_i))$$
$$= \int_{f(x_1,x_2,\cdots,x_n)\le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n).$$
The theorem is proved.

Remark A.3: If $\xi_1, \xi_2, \cdots, \xi_n$ have probability density functions $\phi_1, \phi_2, \cdots, \phi_n$, respectively, then $\xi = f(\xi_1, \xi_2, \cdots, \xi_n)$ has a probability distribution
$$\Phi(x) = \int_{f(x_1,x_2,\cdots,x_n)\le x}\phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n)\,dx_1dx_2\cdots dx_n \qquad (A.28)$$

because dΦi (xi ) = φi (xi )dxi for i = 1, 2, · · · , n.

Exercise A.1: Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \cdots, \Phi_n$, respectively. Show that the sum
$$\xi = \xi_1 + \xi_2 + \cdots + \xi_n \qquad (A.29)$$
has a probability distribution
$$\Phi(x) = \int_{x_1+x_2+\cdots+x_n\le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \qquad (A.30)$$
Especially, let $\xi_1$ and $\xi_2$ be independent random variables with probability distributions $\Phi_1$ and $\Phi_2$, respectively. Then $\xi = \xi_1 + \xi_2$ has a probability distribution
$$\Phi(x) = \int_{-\infty}^{+\infty}\Phi_1(x-y)\,d\Phi_2(y) \qquad (A.31)$$
that is called the convolution of $\Phi_1$ and $\Phi_2$.
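A minimal Python sketch that approximates a convolution numerically, assuming for simplicity that both distributions have density functions (a standard normal and an exponential are used here purely for illustration):

```python
import numpy as np

# Numerical convolution corresponding to (A.31), via densities.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
phi1 = np.exp(-x**2 / 2) / np.sqrt(2*np.pi)    # density of xi1 (normal)
phi2 = np.where(x >= 0, np.exp(-x), 0.0)       # density of xi2 (exponential)

phi_sum = np.convolve(phi1, phi2, mode="same") * dx   # density of xi1 + xi2
print(phi_sum.sum() * dx)                             # total mass, close to 1
```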

Exercise A.2: Let ξ1 , ξ2 , · · · , ξn be independent random variables with


probability distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the max-
imum
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (A.32)
has a probability distribution
Φ(x) = Φ1 (x)Φ2 (x) · · · Φn (x). (A.33)

Exercise A.3: Let ξ1 , ξ2 , · · · , ξn be independent random variables with


probability distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the min-
imum
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (A.34)
has a probability distribution
Φ(x) = 1 − (1 − Φ1 (x))(1 − Φ2 (x)) · · · (1 − Φn (x)). (A.35)

Operational Law for Boolean System


Theorem A.7 Assume that $\xi_1, \xi_2, \cdots, \xi_n$ are independent Boolean random variables, i.e.,
$$\xi_i = \begin{cases} 1 & \text{with probability } a_i\\ 0 & \text{with probability } 1 - a_i \end{cases} \qquad (A.36)$$
for $i = 1, 2, \cdots, n$. If $f$ is a Boolean function, then $\xi = f(\xi_1, \xi_2, \cdots, \xi_n)$ is a Boolean random variable such that
$$\Pr\{\xi = 1\} = \sum_{(x_1,x_2,\cdots,x_n)\in\{0,1\}^n}\left(\prod_{i=1}^n\mu_i(x_i)\right)f(x_1, x_2, \cdots, x_n) \qquad (A.37)$$

where
$$\mu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1\\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \qquad (A.38)$$
for $i = 1, 2, \cdots, n$.
Proof: It follows from the additivity axiom of probability measure and the independence of the random variables $\xi_1, \xi_2, \cdots, \xi_n$ that
$$\Pr\{\xi = 1\} = \sum_{(x_1,x_2,\cdots,x_n)\in\{0,1\}^n}\Pr\left\{\bigcap_{i=1}^n(\xi_i = x_i)\right\}I(f(x_1, x_2, \cdots, x_n) = 1)$$
$$= \sum_{(x_1,x_2,\cdots,x_n)\in\{0,1\}^n}\left(\prod_{i=1}^n\Pr\{\xi_i = x_i\}\right)f(x_1, x_2, \cdots, x_n) = \sum_{(x_1,x_2,\cdots,x_n)\in\{0,1\}^n}\left(\prod_{i=1}^n\mu_i(x_i)\right)f(x_1, x_2, \cdots, x_n)$$
where $I(\cdot)$ is the indicator function. The theorem is proved.
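A minimal Python sketch of formula (A.37), enumerating $\{0,1\}^n$ directly; its results for $\wedge$ and $\vee$ agree with Exercises A.4 and A.5 below.

```python
from itertools import product

def pr_boolean(f, a):
    """Pr{f(xi_1,...,xi_n) = 1} for independent Boolean random variables
    with Pr{xi_i = 1} = a[i], by direct enumeration as in (A.37)."""
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        weight = 1.0
        for ai, xi in zip(a, x):
            weight *= ai if xi == 1 else 1 - ai   # mu_i(x_i) of (A.38)
        total += weight * f(*x)
    return total

a = [0.9, 0.8, 0.7]
print(pr_boolean(lambda x1, x2, x3: x1 & x2 & x3, a))  # 0.504 = 0.9*0.8*0.7
print(pr_boolean(lambda x1, x2, x3: x1 | x2 | x3, a))  # 0.994 = 1-0.1*0.2*0.3
```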

Exercise A.4: Let ξ1 , ξ2 , · · · , ξn be independent Boolean random variables


defined by (A.36). Show that
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (A.39)
is a Boolean random variable such that
Pr{ξ = 1} = a1 a2 · · · an . (A.40)

Exercise A.5: Let ξ1 , ξ2 , · · · , ξn be independent Boolean random variables


defined by (A.36). Show that
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (A.41)
is a Boolean random variable such that
Pr{ξ = 1} = 1 − (1 − a1 )(1 − a2 ) · · · (1 − an ). (A.42)

Exercise A.6: Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent Boolean random variables defined by (A.36). Show that
$$\xi = k\text{-max}\,[\xi_1, \xi_2, \cdots, \xi_n] \qquad (A.43)$$
is a Boolean random variable such that
$$\Pr\{\xi = 1\} = \sum_{(x_1,x_2,\cdots,x_n)\in\{0,1\}^n}\left(\prod_{i=1}^n\mu_i(x_i)\right)k\text{-max}\,[x_1, x_2, \cdots, x_n] \qquad (A.44)$$
where
$$\mu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1\\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \cdots, n). \qquad (A.45)$$

A.6 Expected Value


Definition A.14 Let $\xi$ be a random variable. Then the expected value of $\xi$ is defined by
$$E[\xi] = \int_0^{+\infty}\Pr\{\xi\ge x\}dx - \int_{-\infty}^0\Pr\{\xi\le x\}dx \qquad (A.46)$$
provided that at least one of the two integrals is finite.

Exercise A.7: Assume that $\xi$ is a discrete random variable taking values $x_i$ with probabilities $p_i$, $i = 1, 2, \cdots, m$, respectively. Show that
$$E[\xi] = \sum_{i=1}^m p_i x_i.$$

Theorem A.8 Let $\xi$ be a random variable with probability distribution $\Phi$. Then
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0\Phi(x)dx. \qquad (A.47)$$
Proof: It follows from the probability inversion theorem that for almost all numbers $x$, we have $\Pr\{\xi\ge x\} = 1 - \Phi(x)$ and $\Pr\{\xi\le x\} = \Phi(x)$. By using the definition of expected value operator, we obtain
$$E[\xi] = \int_0^{+\infty}\Pr\{\xi\ge x\}dx - \int_{-\infty}^0\Pr\{\xi\le x\}dx = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0\Phi(x)dx.$$
The theorem is proved.

Theorem A.9 Let $\xi$ be a random variable with probability distribution $\Phi$. Then
$$E[\xi] = \int_{-\infty}^{+\infty} x\,d\Phi(x). \qquad (A.48)$$
Proof: It follows from integration by parts and Theorem A.8 that the expected value is
$$E[\xi] = \int_0^{+\infty}(1 - \Phi(x))dx - \int_{-\infty}^0\Phi(x)dx = \int_0^{+\infty} x\,d\Phi(x) + \int_{-\infty}^0 x\,d\Phi(x) = \int_{-\infty}^{+\infty} x\,d\Phi(x).$$

The theorem is proved.

Remark A.4: Let $\phi(x)$ be the probability density function of $\xi$. Then we immediately have
$$E[\xi] = \int_{-\infty}^{+\infty} x\phi(x)dx \qquad (A.49)$$
because $d\Phi(x) = \phi(x)dx$.
Theorem A.10 Let $\xi$ be a random variable with regular probability distribution $\Phi$. Then
$$E[\xi] = \int_0^1\Phi^{-1}(\alpha)d\alpha. \qquad (A.50)$$
Proof: Substituting $\Phi(x)$ with $\alpha$ and $x$ with $\Phi^{-1}(\alpha)$, it follows from the change of variables of integral and Theorem A.9 that the expected value is
$$E[\xi] = \int_{-\infty}^{+\infty} x\,d\Phi(x) = \int_0^1\Phi^{-1}(\alpha)d\alpha.$$

The theorem is proved.
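A minimal Python sketch of formula (A.50), checked on an exponential distribution with $\beta = 2$ (so that $E[\xi] = \beta$):

```python
import numpy as np

# E[xi] = int_0^1 Phi^{-1}(alpha) d(alpha), Theorem A.10.
beta = 2.0
alpha = (np.arange(10**6) + 0.5) / 10**6   # midpoint rule on (0, 1)
inv = -beta * np.log(1 - alpha)            # Phi^{-1}(alpha) = -beta ln(1-alpha)
print(inv.mean())                          # approximately 2.0
```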


Theorem A.11 Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent random variables with probability distributions $\Phi_1, \Phi_2, \cdots, \Phi_n$, respectively, and let $f: \Re^n\to\Re$ be a measurable function. Then $\xi = f(\xi_1, \xi_2, \cdots, \xi_n)$ has an expected value
$$E[\xi] = \int_{\Re^n} f(x_1, x_2, \cdots, x_n)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n). \qquad (A.51)$$

Proof: It follows from the operational law of random variables that $\xi$ has a probability distribution
$$\Phi(x) = \int_{f(x_1,x_2,\cdots,x_n)\le x} d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n) = \int_{\Re^n} I(f(x_1, x_2, \cdots, x_n)\le x)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n)$$
where $I(\cdot)$ is the indicator function. Furthermore, we have
$$d\Phi(x) = \int_{\Re^n} dI(f(x_1, x_2, \cdots, x_n)\le x)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n).$$
It follows from Theorem A.9 that
$$E[f(\xi)] = \int_{-\infty}^{+\infty} x\int_{\Re^n} dI(f(x_1, x_2, \cdots, x_n)\le x)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n)$$
$$= \int_{\Re^n}\int_{-\infty}^{+\infty} x\,dI(f(x_1, x_2, \cdots, x_n)\le x)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n)$$
$$= \int_{\Re^n} f(x_1, x_2, \cdots, x_n)\,d\Phi_1(x_1)d\Phi_2(x_2)\cdots d\Phi_n(x_n).$$

The theorem is proved.

Theorem A.12 Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent random variables with probability density functions $\phi_1, \phi_2, \cdots, \phi_n$, respectively, and let $f: \Re^n\to\Re$ be a measurable function. Then $\xi = f(\xi_1, \xi_2, \cdots, \xi_n)$ has an expected value
$$E[\xi] = \int_{\Re^n} f(x_1, x_2, \cdots, x_n)\phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n)\,dx_1dx_2\cdots dx_n. \qquad (A.52)$$

Proof: It follows from $d\Phi_i(x_i) = \phi_i(x_i)dx_i$, $i = 1, 2, \cdots, n$ and Theorem A.11 immediately.

Theorem A.13 Let $\xi$ and $\eta$ be independent random variables with finite expected values. Then
$$E[\xi\eta] = E[\xi]E[\eta]. \qquad (A.53)$$
Proof: Let $\xi$ and $\eta$ have probability distributions $\Phi$ and $\Psi$, respectively. It follows from Theorem A.11 that
$$E[\xi\eta] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} xy\,d\Phi(x)d\Psi(y) = \int_{-\infty}^{+\infty} x\,d\Phi(x)\int_{-\infty}^{+\infty} y\,d\Psi(y) = E[\xi]E[\eta].$$
The theorem is verified.

Theorem A.14 Let ξ and η be random variables with finite expected values.
Then for any numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (A.54)

Proof: Step 1: We first prove that $E[\xi + b] = E[\xi] + b$ for any real number $b$. When $b\ge 0$, we have
$$E[\xi + b] = \int_0^\infty\Pr\{\xi + b\ge x\}dx - \int_{-\infty}^0\Pr\{\xi + b\le x\}dx$$
$$= \int_0^\infty\Pr\{\xi\ge x - b\}dx - \int_{-\infty}^0\Pr\{\xi\le x - b\}dx$$
$$= E[\xi] + \int_0^b(\Pr\{\xi\ge x - b\} + \Pr\{\xi < x - b\})dx = E[\xi] + b.$$
If $b < 0$, then we have
$$E[\xi + b] = E[\xi] - \int_b^0(\Pr\{\xi\ge x - b\} + \Pr\{\xi < x - b\})dx = E[\xi] + b.$$
Step 2: We prove that $E[a\xi] = aE[\xi]$ for any real number $a$. If $a = 0$, then the equation $E[a\xi] = aE[\xi]$ holds trivially. If $a > 0$, we have
$$E[a\xi] = \int_0^\infty\Pr\{a\xi\ge x\}dx - \int_{-\infty}^0\Pr\{a\xi\le x\}dx$$
$$= \int_0^\infty\Pr\left\{\xi\ge\frac{x}{a}\right\}dx - \int_{-\infty}^0\Pr\left\{\xi\le\frac{x}{a}\right\}dx$$
$$= a\int_0^\infty\Pr\left\{\xi\ge\frac{x}{a}\right\}d\frac{x}{a} - a\int_{-\infty}^0\Pr\left\{\xi\le\frac{x}{a}\right\}d\frac{x}{a} = aE[\xi].$$
If $a < 0$, we have
$$E[a\xi] = \int_0^\infty\Pr\{a\xi\ge x\}dx - \int_{-\infty}^0\Pr\{a\xi\le x\}dx$$
$$= \int_0^\infty\Pr\left\{\xi\le\frac{x}{a}\right\}dx - \int_{-\infty}^0\Pr\left\{\xi\ge\frac{x}{a}\right\}dx$$
$$= a\int_0^\infty\Pr\left\{\xi\ge\frac{x}{a}\right\}d\frac{x}{a} - a\int_{-\infty}^0\Pr\left\{\xi\le\frac{x}{a}\right\}d\frac{x}{a} = aE[\xi].$$

Step 3: We prove that $E[\xi + \eta] = E[\xi] + E[\eta]$ when both $\xi$ and $\eta$ are nonnegative simple random variables taking values $a_1, a_2, \cdots, a_m$ and $b_1, b_2, \cdots, b_n$, respectively. Then $\xi + \eta$ is also a nonnegative simple random variable taking values $a_i + b_j$, $i = 1, 2, \cdots, m$, $j = 1, 2, \cdots, n$. Thus we have
$$E[\xi + \eta] = \sum_{i=1}^m\sum_{j=1}^n(a_i + b_j)\Pr\{\xi = a_i, \eta = b_j\}$$
$$= \sum_{i=1}^m\sum_{j=1}^n a_i\Pr\{\xi = a_i, \eta = b_j\} + \sum_{i=1}^m\sum_{j=1}^n b_j\Pr\{\xi = a_i, \eta = b_j\}$$
$$= \sum_{i=1}^m a_i\Pr\{\xi = a_i\} + \sum_{j=1}^n b_j\Pr\{\eta = b_j\} = E[\xi] + E[\eta].$$

Step 4: We prove that $E[\xi + \eta] = E[\xi] + E[\eta]$ when both $\xi$ and $\eta$ are nonnegative random variables. For every $i\ge 1$ and every $\omega\in\Omega$, we define
$$\xi_i(\omega) = \begin{cases}\dfrac{k-1}{2^i}, & \text{if }\dfrac{k-1}{2^i}\le\xi(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \cdots, i2^i\\ i, & \text{if } i\le\xi(\omega),\end{cases}$$
$$\eta_i(\omega) = \begin{cases}\dfrac{k-1}{2^i}, & \text{if }\dfrac{k-1}{2^i}\le\eta(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \cdots, i2^i\\ i, & \text{if } i\le\eta(\omega).\end{cases}$$
Then $\{\xi_i\}$, $\{\eta_i\}$ and $\{\xi_i + \eta_i\}$ are three sequences of nonnegative simple random variables such that $\xi_i\uparrow\xi$, $\eta_i\uparrow\eta$ and $\xi_i + \eta_i\uparrow\xi + \eta$ as $i\to\infty$. Note that the functions $\Pr\{\xi_i > x\}$, $\Pr\{\eta_i > x\}$, $\Pr\{\xi_i + \eta_i > x\}$, $i = 1, 2, \cdots$ are also simple. It follows from the probability continuity theorem that
$$\Pr\{\xi_i > x\}\uparrow\Pr\{\xi > x\}, \quad \forall x\ge 0$$
as $i\to\infty$. Since the expected value $E[\xi]$ exists, we have
$$E[\xi_i] = \int_0^{+\infty}\Pr\{\xi_i > x\}dx\to\int_0^{+\infty}\Pr\{\xi > x\}dx = E[\xi]$$
as $i\to\infty$. Similarly, we may prove that $E[\eta_i]\to E[\eta]$ and $E[\xi_i + \eta_i]\to E[\xi + \eta]$ as $i\to\infty$. It follows from Step 3 that $E[\xi + \eta] = E[\xi] + E[\eta]$.
Step 5: We prove that $E[\xi + \eta] = E[\xi] + E[\eta]$ when $\xi$ and $\eta$ are arbitrary random variables. Define
$$\xi_i(\omega) = \begin{cases}\xi(\omega), & \text{if }\xi(\omega)\ge -i\\ -i, & \text{otherwise},\end{cases} \qquad \eta_i(\omega) = \begin{cases}\eta(\omega), & \text{if }\eta(\omega)\ge -i\\ -i, & \text{otherwise}.\end{cases}$$
Since the expected values $E[\xi]$ and $E[\eta]$ are finite, we have
$$\lim_{i\to\infty}E[\xi_i] = E[\xi], \quad \lim_{i\to\infty}E[\eta_i] = E[\eta], \quad \lim_{i\to\infty}E[\xi_i + \eta_i] = E[\xi + \eta].$$
Note that $(\xi_i + i)$ and $(\eta_i + i)$ are nonnegative random variables. It follows from Steps 1 and 4 that
$$E[\xi + \eta] = \lim_{i\to\infty}E[\xi_i + \eta_i] = \lim_{i\to\infty}(E[(\xi_i + i) + (\eta_i + i)] - 2i)$$
$$= \lim_{i\to\infty}(E[\xi_i + i] + E[\eta_i + i] - 2i) = \lim_{i\to\infty}(E[\xi_i] + i + E[\eta_i] + i - 2i)$$
$$= \lim_{i\to\infty}E[\xi_i] + \lim_{i\to\infty}E[\eta_i] = E[\xi] + E[\eta].$$
Step 6: The linearity $E[a\xi + b\eta] = aE[\xi] + bE[\eta]$ follows immediately from Steps 2 and 5. The theorem is proved.
Theorem A.15 Let $\xi$ be a random variable, and let $t$ be a positive number. If $E[|\xi|^t] < \infty$, then
$$\lim_{x\to\infty} x^t\Pr\{|\xi|\ge x\} = 0. \qquad (A.55)$$
Conversely, let $\xi$ be a random variable satisfying (A.55) for some $t > 0$. Then $E[|\xi|^s] < \infty$ for any $0\le s < t$.

Proof: It follows from the definition of expected value that
$$E[|\xi|^t] = \int_0^\infty\Pr\{|\xi|^t\ge r\}dr < \infty.$$
Thus we have
$$\lim_{x\to\infty}\int_{x^t/2}^\infty\Pr\{|\xi|^t\ge r\}dr = 0.$$
The equation (A.55) is proved by the following relation,
$$\int_{x^t/2}^\infty\Pr\{|\xi|^t\ge r\}dr\ge\int_{x^t/2}^{x^t}\Pr\{|\xi|^t\ge r\}dr\ge\frac{1}{2}x^t\Pr\{|\xi|\ge x\}.$$
Conversely, if (A.55) holds, then there exists a number $a$ such that
$$x^t\Pr\{|\xi|\ge x\}\le 1, \quad \forall x\ge a.$$
Thus we have
$$E[|\xi|^s] = \int_0^a\Pr\{|\xi|^s\ge r\}dr + \int_a^{+\infty}\Pr\{|\xi|^s\ge r\}dr$$
$$\le\int_0^a\Pr\{|\xi|^s\ge r\}dr + \int_a^{+\infty} sr^{s-1}\Pr\{|\xi|\ge r\}dr$$
$$\le\int_0^a\Pr\{|\xi|^s\ge r\}dr + s\int_a^{+\infty} r^{s-t-1}dr < +\infty$$
since $\int_a^\infty r^p\,dr < \infty$ for any $p < -1$. The theorem is proved.

Example A.11: The condition (A.55) does not ensure that $E[|\xi|^t] < \infty$. We consider the positive random variable
$$\xi = \sqrt[t]{\frac{2^i}{i}} \quad\text{with probability }\frac{1}{2^i},\ i = 1, 2, \cdots$$
It is clear that
$$\lim_{x\to\infty} x^t\Pr\{\xi\ge x\} = \lim_{n\to\infty}\left(\sqrt[t]{\frac{2^n}{n}}\right)^t\sum_{i=n}^\infty\frac{1}{2^i} = \lim_{n\to\infty}\frac{2}{n} = 0.$$
However, the expected value of $\xi^t$ is
$$E[\xi^t] = \sum_{i=1}^\infty\left(\sqrt[t]{\frac{2^i}{i}}\right)^t\cdot\frac{1}{2^i} = \sum_{i=1}^\infty\frac{1}{i} = \infty.$$

Theorem A.16 Let $\xi$ be a random variable, and let $f$ be a nonnegative function. If $f$ is even and increasing on $[0, \infty)$, then for any given number $t > 0$, we have
$$\Pr\{|\xi|\ge t\}\le\frac{E[f(\xi)]}{f(t)}. \qquad (A.56)$$
Proof: It is clear that $\Pr\{|\xi|\ge f^{-1}(r)\}$ is a monotone decreasing function of $r$ on $[0, \infty)$. It follows from the nonnegativity of $f(\xi)$ that
$$E[f(\xi)] = \int_0^{+\infty}\Pr\{f(\xi)\ge x\}dx = \int_0^{+\infty}\Pr\{|\xi|\ge f^{-1}(x)\}dx$$
$$\ge\int_0^{f(t)}\Pr\{|\xi|\ge f^{-1}(x)\}dx\ge\int_0^{f(t)}\Pr\{|\xi|\ge f^{-1}(f(t))\}dx$$
$$= \int_0^{f(t)}\Pr\{|\xi|\ge t\}dx = f(t)\cdot\Pr\{|\xi|\ge t\}$$
which proves the inequality.

Theorem A.17 (Markov Inequality) Let $\xi$ be a random variable. Then for any given numbers $t > 0$ and $p > 0$, we have
$$\Pr\{|\xi|\ge t\}\le\frac{E[|\xi|^p]}{t^p}. \qquad (A.57)$$
Proof: It is a special case of Theorem A.16 when f (x) = |x|p .
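A minimal Python sketch that checks the Markov inequality empirically on simulated lognormal data (an arbitrary choice for illustration):

```python
import numpy as np

# Empirical check of the Markov inequality (A.57) on simulated data.
rng = np.random.default_rng(0)
xi = rng.lognormal(mean=0.0, sigma=1.0, size=10**6)   # illustrative choice

for t in (1.0, 2.0, 5.0):
    for p in (1, 2):
        lhs = np.mean(np.abs(xi) >= t)          # Pr{|xi| >= t}
        rhs = np.mean(np.abs(xi)**p) / t**p     # E[|xi|^p] / t^p
        print(f"t={t}, p={p}: {lhs:.4f} <= {rhs:.4f}", lhs <= rhs)
```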

A.7 Variance

Definition A.15 Let $\xi$ be a random variable with finite expected value $e$. Then the variance of $\xi$ is defined by $V[\xi] = E[(\xi - e)^2]$.
Since $(\xi - e)^2$ is a nonnegative random variable, we also have
$$V[\xi] = \int_0^{+\infty}\Pr\{(\xi - e)^2\ge x\}dx. \qquad (A.58)$$

Theorem A.18 If $\xi$ is a random variable whose variance exists, and $a$ and $b$ are real numbers, then $V[a\xi + b] = a^2V[\xi]$.
Proof: Let $e$ be the expected value of $\xi$. Then $E[a\xi + b] = ae + b$. It follows from the definition of variance that
$$V[a\xi + b] = E\left[(a\xi + b - ae - b)^2\right] = a^2E[(\xi - e)^2] = a^2V[\xi].$$

Theorem A.19 Let $\xi$ be a random variable with expected value $e$. Then $V[\xi] = 0$ if and only if $\Pr\{\xi = e\} = 1$. That is, the random variable $\xi$ is essentially the constant $e$.
Proof: We first assume $V[\xi] = 0$. It follows from the equation (A.58) that
$$\int_0^{+\infty}\Pr\{(\xi - e)^2\ge x\}dx = 0$$
which implies $\Pr\{(\xi - e)^2\ge x\} = 0$ for any $x > 0$. Hence we have
$$\Pr\{(\xi - e)^2 = 0\} = 1.$$
That is, $\Pr\{\xi = e\} = 1$. Conversely, assume $\Pr\{\xi = e\} = 1$. Then we immediately have $\Pr\{(\xi - e)^2 = 0\} = 1$ and $\Pr\{(\xi - e)^2\ge x\} = 0$ for any $x > 0$. Thus
$$V[\xi] = \int_0^{+\infty}\Pr\{(\xi - e)^2\ge x\}dx = 0.$$
The theorem is proved.

Theorem A.20 If $\xi_1, \xi_2, \cdots, \xi_n$ are independent random variables with finite variances, then
$$V[\xi_1 + \xi_2 + \cdots + \xi_n] = V[\xi_1] + V[\xi_2] + \cdots + V[\xi_n]. \qquad (A.59)$$
Proof: Let $\xi_1, \xi_2, \cdots, \xi_n$ have expected values $e_1, e_2, \cdots, e_n$, respectively. Then we have
$$E[\xi_1 + \xi_2 + \cdots + \xi_n] = e_1 + e_2 + \cdots + e_n.$$
It follows from the definition of variance that
$$V\left[\sum_{i=1}^n\xi_i\right] = \sum_{i=1}^n E\left[(\xi_i - e_i)^2\right] + 2\sum_{i=1}^{n-1}\sum_{j=i+1}^n E[(\xi_i - e_i)(\xi_j - e_j)].$$
Since $\xi_1, \xi_2, \cdots, \xi_n$ are independent, $E[(\xi_i - e_i)(\xi_j - e_j)] = 0$ for all $i, j$ with $i\ne j$. Thus (A.59) holds.

Theorem A.21 (Chebyshev Inequality) Let $\xi$ be a random variable whose variance exists. Then for any given number $t > 0$, we have
$$\Pr\{|\xi - E[\xi]|\ge t\}\le\frac{V[\xi]}{t^2}. \qquad (A.60)$$
Proof: It is a special case of Theorem A.16 when the random variable $\xi$ is replaced with $\xi - E[\xi]$, and $f(x) = x^2$.

Theorem A.22 (Kolmogorov Inequality) Let $\xi_1, \xi_2, \cdots, \xi_n$ be independent random variables with finite expected values. Write $S_i = \xi_1 + \xi_2 + \cdots + \xi_i$ for each $i\ge 1$. Then for any given number $t > 0$, we have
$$\Pr\left\{\max_{1\le i\le n}|S_i - E[S_i]|\ge t\right\}\le\frac{V[S_n]}{t^2}. \qquad (A.61)$$
Proof: Without loss of generality, assume that $E[\xi_i] = 0$ for each $i$. We set
$$A_1 = \{|S_1|\ge t\}, \qquad A_i = \{|S_j| < t,\ j = 1, 2, \cdots, i-1, \text{ and } |S_i|\ge t\}$$
for $i = 2, 3, \cdots, n$. It is clear that
$$A = \left\{\max_{1\le i\le n}|S_i|\ge t\right\}$$
is the union of the disjoint sets $A_1, A_2, \cdots, A_n$. Since $E[S_n] = 0$, we have
$$V[S_n] = \int_0^{+\infty}\Pr\{S_n^2\ge r\}dr\ge\sum_{k=1}^n\int_0^{+\infty}\Pr\{(S_n^2\ge r)\cap A_k\}dr. \qquad (A.62)$$
Now for any $k$ with $1\le k\le n$, it follows from the independence that
$$\int_0^{+\infty}\Pr\{(S_n^2\ge r)\cap A_k\}dr = \int_0^{+\infty}\Pr\{((S_k + \xi_{k+1} + \cdots + \xi_n)^2\ge r)\cap A_k\}dr$$
$$= \int_0^{+\infty}\Pr\{(S_k^2 + \xi_{k+1}^2 + \cdots + \xi_n^2\ge r)\cap A_k\}dr + 2\sum_{j=k+1}^n E[I_{A_k}S_k]E[\xi_j] + \sum_{j\ne l;\,j,l=k+1}^n\Pr\{A_k\}E[\xi_j]E[\xi_l]$$
$$\ge\int_0^{+\infty}\Pr\{(S_k^2\ge r)\cap A_k\}dr\ge t^2\Pr\{A_k\}.$$
Using (A.62), we get
$$V[S_n]\ge t^2\sum_{i=1}^n\Pr\{A_i\} = t^2\Pr\{A\}$$
which implies that the Kolmogorov inequality holds.

Theorem A.23 Let $\xi$ be a random variable with probability distribution $\Phi$ and expected value $e$. Then
$$V[\xi] = \int_0^{+\infty}\left(1 - \Phi(e + \sqrt{x}) + \Phi(e - \sqrt{x})\right)dx. \qquad (A.63)$$
Proof: It follows from the additivity of probability measure that the variance is
$$V[\xi] = \int_0^{+\infty}\Pr\{(\xi - e)^2\ge x\}dx = \int_0^{+\infty}\Pr\{(\xi\ge e + \sqrt{x})\cup(\xi\le e - \sqrt{x})\}dx$$
$$= \int_0^{+\infty}\left(\Pr\{\xi\ge e + \sqrt{x}\} + \Pr\{\xi\le e - \sqrt{x}\}\right)dx = \int_0^{+\infty}\left(1 - \Phi(e + \sqrt{x}) + \Phi(e - \sqrt{x})\right)dx.$$
The theorem is proved.
Theorem A.24 Let $\xi$ be a random variable with probability distribution $\Phi$ and expected value $e$. Then
$$V[\xi] = \int_{-\infty}^{+\infty}(x - e)^2 d\Phi(x). \qquad (A.64)$$
Proof: For the equation (A.63), substituting $e + \sqrt{y}$ with $x$ and $y$ with $(x - e)^2$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1 - \Phi(e + \sqrt{y}))dy = \int_e^{+\infty}(1 - \Phi(x))d(x - e)^2 = \int_e^{+\infty}(x - e)^2 d\Phi(x).$$
Similarly, substituting $e - \sqrt{y}$ with $x$ and $y$ with $(x - e)^2$, we obtain
$$\int_0^{+\infty}\Phi(e - \sqrt{y})dy = \int_e^{-\infty}\Phi(x)d(x - e)^2 = \int_{-\infty}^e(x - e)^2 d\Phi(x).$$
It follows that the variance is
$$V[\xi] = \int_e^{+\infty}(x - e)^2 d\Phi(x) + \int_{-\infty}^e(x - e)^2 d\Phi(x) = \int_{-\infty}^{+\infty}(x - e)^2 d\Phi(x).$$
The theorem is verified.

Remark A.5: Let $\phi(x)$ be the probability density function of $\xi$. Then we immediately have
$$V[\xi] = \int_{-\infty}^{+\infty}(x - e)^2\phi(x)dx \qquad (A.65)$$
because $d\Phi(x) = \phi(x)dx$.
Theorem A.25 Let $\xi$ be a random variable with regular probability distribution $\Phi$ and expected value $e$. Then
$$V[\xi] = \int_0^1(\Phi^{-1}(\alpha) - e)^2 d\alpha. \qquad (A.66)$$
Proof: Substituting $\Phi(x)$ with $\alpha$ and $x$ with $\Phi^{-1}(\alpha)$, it follows from the change of variables of integral and Theorem A.24 that the variance is
$$V[\xi] = \int_{-\infty}^{+\infty}(x - e)^2 d\Phi(x) = \int_0^1(\Phi^{-1}(\alpha) - e)^2 d\alpha.$$
The theorem is verified.

A.8 Moment

Definition A.16 Let ξ be a random variable, and let k be a positive integer.


Then E[ξ k ] is called the kth moment of ξ.

Theorem A.26 Let $\xi$ be a random variable with probability distribution $\Phi$, and let $k$ be an odd number. Then the $k$-th moment of $\xi$ is
$$E[\xi^k] = \int_0^{+\infty}(1 - \Phi(\sqrt[k]{x}))dx - \int_{-\infty}^0\Phi(\sqrt[k]{x})dx. \qquad (A.67)$$
Proof: Since $k$ is an odd number, it follows from the definition of expected value operator that
$$E[\xi^k] = \int_0^{+\infty}\Pr\{\xi^k\ge x\}dx - \int_{-\infty}^0\Pr\{\xi^k\le x\}dx$$
$$= \int_0^{+\infty}\Pr\{\xi\ge\sqrt[k]{x}\}dx - \int_{-\infty}^0\Pr\{\xi\le\sqrt[k]{x}\}dx$$
$$= \int_0^{+\infty}(1 - \Phi(\sqrt[k]{x}))dx - \int_{-\infty}^0\Phi(\sqrt[k]{x})dx.$$
The theorem is proved.

Theorem A.27 Let $\xi$ be a random variable with probability distribution $\Phi$, and let $k$ be an even number. Then the $k$-th moment of $\xi$ is
$$E[\xi^k] = \int_0^{+\infty}\left(1 - \Phi(\sqrt[k]{x}) + \Phi(-\sqrt[k]{x})\right)dx. \qquad (A.68)$$
Proof: Since $k$ is an even number, $\xi^k$ is a nonnegative random variable. It follows from the definition of expected value operator that
$$E[\xi^k] = \int_0^{+\infty}\Pr\{\xi^k\ge x\}dx = \int_0^{+\infty}\Pr\{(\xi\ge\sqrt[k]{x})\cup(\xi\le-\sqrt[k]{x})\}dx$$
$$= \int_0^{+\infty}\left(\Pr\{\xi\ge\sqrt[k]{x}\} + \Pr\{\xi\le-\sqrt[k]{x}\}\right)dx = \int_0^{+\infty}\left(1 - \Phi(\sqrt[k]{x}) + \Phi(-\sqrt[k]{x})\right)dx.$$
The theorem is verified.


Theorem A.28 Let $\xi$ be a random variable with probability distribution $\Phi$, and let $k$ be a positive integer. Then the $k$-th moment of $\xi$ is
$$E[\xi^k] = \int_{-\infty}^{+\infty} x^k d\Phi(x). \qquad (A.69)$$
Proof: When $k$ is an odd number, Theorem A.26 says that the $k$-th moment is
$$E[\xi^k] = \int_0^{+\infty}(1 - \Phi(\sqrt[k]{y}))dy - \int_{-\infty}^0\Phi(\sqrt[k]{y})dy.$$
Substituting $\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1 - \Phi(\sqrt[k]{y}))dy = \int_0^{+\infty}(1 - \Phi(x))dx^k = \int_0^{+\infty} x^k d\Phi(x)$$
and
$$\int_{-\infty}^0\Phi(\sqrt[k]{y})dy = \int_{-\infty}^0\Phi(x)dx^k = -\int_{-\infty}^0 x^k d\Phi(x).$$
Thus we have
$$E[\xi^k] = \int_0^{+\infty} x^k d\Phi(x) + \int_{-\infty}^0 x^k d\Phi(x) = \int_{-\infty}^{+\infty} x^k d\Phi(x).$$
When $k$ is an even number, Theorem A.27 says that the $k$-th moment is
$$E[\xi^k] = \int_0^{+\infty}\left(1 - \Phi(\sqrt[k]{y}) + \Phi(-\sqrt[k]{y})\right)dy.$$
Substituting $\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, the change of variables and integration by parts produce
$$\int_0^{+\infty}(1 - \Phi(\sqrt[k]{y}))dy = \int_0^{+\infty}(1 - \Phi(x))dx^k = \int_0^{+\infty} x^k d\Phi(x).$$
Similarly, substituting $-\sqrt[k]{y}$ with $x$ and $y$ with $x^k$, we obtain
$$\int_0^{+\infty}\Phi(-\sqrt[k]{y})dy = \int_{-\infty}^0\Phi(x)dx^k = \int_{-\infty}^0 x^k d\Phi(x).$$
It follows that the $k$-th moment is
$$E[\xi^k] = \int_0^{+\infty} x^k d\Phi(x) + \int_{-\infty}^0 x^k d\Phi(x) = \int_{-\infty}^{+\infty} x^k d\Phi(x).$$
The theorem is thus verified for any positive integer $k$.


Theorem A.29 Let $\xi$ be a random variable with regular probability distribution $\Phi$, and let $k$ be a positive integer. Then the $k$-th moment of $\xi$ is
$$E[\xi^k] = \int_0^1(\Phi^{-1}(\alpha))^k d\alpha. \qquad (A.70)$$
Proof: Substituting $\Phi(x)$ with $\alpha$ and $x$ with $\Phi^{-1}(\alpha)$, it follows from the change of variables of integral and Theorem A.28 that the $k$-th moment is
$$E[\xi^k] = \int_{-\infty}^{+\infty} x^k d\Phi(x) = \int_0^1(\Phi^{-1}(\alpha))^k d\alpha.$$
The theorem is verified.

A.9 Entropy
Given a random variable, what is the degree of difficulty of predicting the
specified value that the random variable will take? In order to answer this
question, Shannon [205] defined a concept of entropy as a measure of uncer-
tainty.
Definition A.17 Let $\xi$ be a random variable with probability density function $\phi$. Then its entropy is defined by
$$H[\xi] = -\int_{-\infty}^{+\infty}\phi(x)\ln\phi(x)dx. \qquad (A.71)$$

Example A.12: Let ξ be a uniformly distributed random variable on [a, b].


Then its entropy is H[ξ] = ln(b − a). This example shows that the entropy
may assume both positive and negative values since ln(b − a) < 0 if b − a < 1;
and ln(b − a) > 0 if b − a > 1.

Example A.13: Let ξ be an exponentially distributed random variable with


expected value β. Then its entropy is H[ξ] = 1 + ln β.

Example A.14: Let $\xi$ be a normally distributed random variable with expected value $e$ and variance $\sigma^2$. Then its entropy is $H[\xi] = 1/2 + \ln\sqrt{2\pi}\sigma$.
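A minimal Python sketch that verifies the entropies of Examples A.12 and A.13 by numerical integration of (A.71):

```python
import numpy as np

# Numerical check of (A.71) for the uniform and exponential examples.
x = np.linspace(1e-9, 50.0, 10**6)
dx = x[1] - x[0]

def entropy(phi):
    m = phi > 0
    return -np.sum(phi[m] * np.log(phi[m])) * dx

a, b = 1.0, 3.0
phi_uniform = np.where((x >= a) & (x <= b), 1.0/(b - a), 0.0)
beta = 2.0
phi_exponential = np.exp(-x/beta) / beta

print(entropy(phi_uniform), np.log(b - a))           # both ~ ln 2
print(entropy(phi_exponential), 1 + np.log(beta))    # both ~ 1 + ln 2
```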

Maximum Entropy Principle

Given some constraints, for example, expected value and variance, there are
usually multiple compatible probability distributions. For this case, we would
like to select the distribution that maximizes the value of entropy and satisfies
the prescribed constraints. This method is often referred to as the maximum
entropy principle (Jaynes [70]).

Example A.15: Let $\xi$ be a random variable on $[a, b]$ whose probability density function exists. The maximum entropy principle attempts to find the probability density function $\phi(x)$ that maximizes the entropy
$$-\int_a^b\phi(x)\ln\phi(x)dx$$
subject to the natural constraint $\int_a^b\phi(x)dx = 1$. The Lagrangian is
$$L = -\int_a^b\phi(x)\ln\phi(x)dx - \lambda\left(\int_a^b\phi(x)dx - 1\right).$$
It follows from the Euler-Lagrange equation that the maximum entropy probability density function meets
$$\ln\phi(x) + 1 + \lambda = 0$$
and has the form $\phi(x) = \exp(-1 - \lambda)$. Substituting it into the natural constraint, we get
$$\phi^*(x) = \frac{1}{b-a}, \quad a\le x\le b$$
which is just a uniform probability density function, and the maximum entropy is $H[\xi^*] = \ln(b-a)$.

Example A.16: Let $\xi$ be a random variable on $(-\infty, +\infty)$ whose probability density function exists. Assume that the expected value and variance of $\xi$ are prescribed to be $\mu$ and $\sigma^2$, respectively. The maximum entropy probability density function $\phi(x)$ should maximize the entropy
$$-\int_{-\infty}^{+\infty}\phi(x)\ln\phi(x)dx$$
subject to the constraints
$$\int_{-\infty}^{+\infty}\phi(x)dx = 1, \quad \int_{-\infty}^{+\infty} x\phi(x)dx = \mu, \quad \int_{-\infty}^{+\infty}(x-\mu)^2\phi(x)dx = \sigma^2.$$
The Lagrangian is
$$L = -\int_{-\infty}^{+\infty}\phi(x)\ln\phi(x)dx - \lambda_1\left(\int_{-\infty}^{+\infty}\phi(x)dx - 1\right) - \lambda_2\left(\int_{-\infty}^{+\infty} x\phi(x)dx - \mu\right) - \lambda_3\left(\int_{-\infty}^{+\infty}(x-\mu)^2\phi(x)dx - \sigma^2\right).$$
The maximum entropy probability density function meets the Euler-Lagrange equation
$$\ln\phi(x) + 1 + \lambda_1 + \lambda_2 x + \lambda_3(x-\mu)^2 = 0$$
and has the form $\phi(x) = \exp(-1 - \lambda_1 - \lambda_2 x - \lambda_3(x-\mu)^2)$. Substituting it into the constraints, we get
$$\phi^*(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \quad x\in\Re$$
which is just a normal probability density function, and the maximum entropy is $H[\xi^*] = 1/2 + \ln\sqrt{2\pi}\sigma$.

A.10 Random Sequence


Random sequence is a sequence of random variables indexed by integers. This
section introduces four convergence concepts of random sequence: conver-
gence almost surely (a.s.), convergence in probability, convergence in mean,
and convergence in distribution.

Table A.1: Relations among Convergence Concepts

Convergence Almost Surely $\Rightarrow$ Convergence in Probability $\Rightarrow$ Convergence in Distribution

Convergence in Mean $\Rightarrow$ Convergence in Probability

Definition A.18 The random sequence $\{\xi_i\}$ is said to be convergent a.s. to $\xi$ if and only if there exists an event $A$ with $\Pr\{A\} = 1$ such that
$$\lim_{i\to\infty}|\xi_i(\omega) - \xi(\omega)| = 0 \qquad (A.72)$$
for every $\omega\in A$. In that case we write $\xi_i\to\xi$, a.s.



Definition A.19 The random sequence $\{\xi_i\}$ is said to be convergent in probability to $\xi$ if
$$\lim_{i\to\infty}\Pr\{|\xi_i - \xi|\ge\varepsilon\} = 0 \qquad (A.73)$$
for every $\varepsilon > 0$.

Definition A.20 The random sequence $\{\xi_i\}$ is said to be convergent in mean to $\xi$ if
$$\lim_{i\to\infty}E[|\xi_i - \xi|] = 0. \qquad (A.74)$$

Definition A.21 Let $\Phi, \Phi_1, \Phi_2, \cdots$ be the probability distributions of random variables $\xi, \xi_1, \xi_2, \cdots$, respectively. We say the random sequence $\{\xi_i\}$ converges in distribution to $\xi$ if
$$\lim_{i\to\infty}\Phi_i(x) = \Phi(x) \qquad (A.75)$$
for all $x$ at which $\Phi(x)$ is continuous.

Convergence Almost Surely vs. Convergence in Probability

Theorem A.30 The random sequence $\{\xi_i\}$ converges a.s. to $\xi$ if and only if for every $\varepsilon > 0$, we have
$$\lim_{n\to\infty}\Pr\left\{\bigcup_{i=n}^\infty\{|\xi_i - \xi|\ge\varepsilon\}\right\} = 0. \qquad (A.76)$$

Proof: For every $i\ge 1$ and $\varepsilon > 0$, we define
$$A = \left\{\omega\in\Omega\ \Big|\ \lim_{i\to\infty}\xi_i(\omega)\ne\xi(\omega)\right\}, \qquad A_i(\varepsilon) = \left\{\omega\in\Omega\ \big|\ |\xi_i(\omega) - \xi(\omega)|\ge\varepsilon\right\}.$$
It is clear that
$$A = \bigcup_{\varepsilon>0}\left(\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty A_i(\varepsilon)\right).$$
Note that $\xi_i\to\xi$, a.s. if and only if $\Pr\{A\} = 0$. That is, $\xi_i\to\xi$, a.s. if and only if
$$\Pr\left\{\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty A_i(\varepsilon)\right\} = 0$$
for every $\varepsilon > 0$. Since
$$\bigcup_{i=n}^\infty A_i(\varepsilon)\downarrow\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty A_i(\varepsilon),$$
it follows from the probability continuity theorem that
$$\lim_{n\to\infty}\Pr\left\{\bigcup_{i=n}^\infty A_i(\varepsilon)\right\} = \Pr\left\{\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty A_i(\varepsilon)\right\} = 0.$$
The theorem is proved.

Theorem A.31 If the random sequence $\{\xi_i\}$ converges a.s. to $\xi$, then $\{\xi_i\}$ converges in probability to $\xi$.
Proof: It follows from the convergence a.s. and Theorem A.30 that
$$\lim_{n\to\infty}\Pr\left\{\bigcup_{i=n}^\infty\{|\xi_i - \xi|\ge\varepsilon\}\right\} = 0$$
for each $\varepsilon > 0$. For every $n\ge 1$, since
$$\{|\xi_n - \xi|\ge\varepsilon\}\subset\bigcup_{i=n}^\infty\{|\xi_i - \xi|\ge\varepsilon\},$$
we have $\Pr\{|\xi_n - \xi|\ge\varepsilon\}\to 0$ as $n\to\infty$. Hence the theorem holds.

Example A.17: Convergence in probability does not imply convergence a.s. For example, take $(\Omega, \mathcal A, \Pr)$ to be the interval $[0, 1]$ with Borel algebra and Lebesgue measure. For any positive integer $i$, there is an integer $j$ such that $i = 2^j + k$, where $k$ is an integer between 0 and $2^j - 1$. We define a random variable by
$$\xi_i(\omega) = \begin{cases}1, & \text{if } k/2^j\le\omega\le(k+1)/2^j\\ 0, & \text{otherwise}\end{cases}$$
for $i = 1, 2, \cdots$ and $\xi = 0$. For any small number $\varepsilon > 0$, we have
$$\Pr\{|\xi_i - \xi|\ge\varepsilon\} = \frac{1}{2^j}\to 0$$
as $i\to\infty$. That is, the sequence $\{\xi_i\}$ converges in probability to $\xi$. However, for any $\omega\in[0, 1]$, there is an infinite number of intervals of the form $[k/2^j, (k+1)/2^j]$ containing $\omega$. Thus $\xi_i(\omega)\not\to 0$ as $i\to\infty$. In other words, the sequence $\{\xi_i\}$ does not converge a.s. to $\xi$.

Convergence in Probability vs. Convergence in Mean

Theorem A.32 If the random sequence $\{\xi_i\}$ converges in mean to $\xi$, then $\{\xi_i\}$ converges in probability to $\xi$.
Proof: It follows from the Markov inequality that, for any given number $\varepsilon > 0$,
$$\Pr\{|\xi_i - \xi|\ge\varepsilon\}\le\frac{E[|\xi_i - \xi|]}{\varepsilon}\to 0$$
as $i\to\infty$. Thus $\{\xi_i\}$ converges in probability to $\xi$.

Example A.18: Convergence in probability does not imply convergence in mean. For example, take $(\Omega, \mathcal A, \Pr)$ to be $\{\omega_1, \omega_2, \cdots\}$ with $\Pr\{\omega_j\} = 1/2^j$ for $j = 1, 2, \cdots$ The random variables are defined by
$$\xi_i(\omega_j) = \begin{cases}2^i, & \text{if } j = i\\ 0, & \text{otherwise}\end{cases}$$
for $i = 1, 2, \cdots$ and $\xi = 0$. For any small number $\varepsilon > 0$, we have
$$\Pr\{|\xi_i - \xi|\ge\varepsilon\} = \frac{1}{2^i}\to 0$$
as $i\to\infty$. That is, the sequence $\{\xi_i\}$ converges in probability to $\xi$. However, we have
$$E[|\xi_i - \xi|] = 2^i\cdot\frac{1}{2^i} = 1$$
for each $i$. That is, the sequence $\{\xi_i\}$ does not converge in mean to $\xi$.

Convergence Almost Surely vs. Convergence in Mean

Example A.19: Convergence a.s. does not imply convergence in mean. For example, take $(\Omega, \mathcal A, \Pr)$ to be $\{\omega_1, \omega_2, \cdots\}$ with $\Pr\{\omega_j\} = 1/2^j$ for $j = 1, 2, \cdots$ The random variables are defined by
$$\xi_i(\omega_j) = \begin{cases}2^i, & \text{if } j = i\\ 0, & \text{otherwise}\end{cases}$$
for $i = 1, 2, \cdots$ and $\xi = 0$. Then $\{\xi_i\}$ converges a.s. to $\xi$. However, the sequence $\{\xi_i\}$ does not converge in mean to $\xi$.

Example A.20: Convergence in mean does not imply convergence a.s. For example, take $(\Omega, \mathcal A, \Pr)$ to be the interval $[0, 1]$ with Borel algebra and Lebesgue measure. For any positive integer $i$, there is an integer $j$ such that $i = 2^j + k$, where $k$ is an integer between 0 and $2^j - 1$. We define a random variable by
$$\xi_i(\omega) = \begin{cases}1, & \text{if } k/2^j\le\omega\le(k+1)/2^j\\ 0, & \text{otherwise}\end{cases}$$
for $i = 1, 2, \cdots$ and $\xi = 0$. Then
$$E[|\xi_i - \xi|] = \frac{1}{2^j}\to 0$$
as $i\to\infty$. That is, the sequence $\{\xi_i\}$ converges in mean to $\xi$. However, $\{\xi_i\}$ does not converge a.s. to $\xi$.

Convergence in Probability vs. Convergence in Distribution

Theorem A.33 If the random sequence $\{\xi_i\}$ converges in probability to $\xi$, then $\{\xi_i\}$ converges in distribution to $\xi$.
Proof: Let $x$ be any given continuity point of the probability distribution $\Phi$. On the one hand, for any $y > x$, we have
$$\{\xi_i\le x\} = \{\xi_i\le x, \xi\le y\}\cup\{\xi_i\le x, \xi > y\}\subset\{\xi\le y\}\cup\{|\xi_i - \xi|\ge y - x\}$$
which implies that
$$\Phi_i(x)\le\Phi(y) + \Pr\{|\xi_i - \xi|\ge y - x\}.$$
Since $\{\xi_i\}$ converges in probability to $\xi$, we have $\Pr\{|\xi_i - \xi|\ge y - x\}\to 0$. Thus we obtain $\limsup_{i\to\infty}\Phi_i(x)\le\Phi(y)$ for any $y > x$. Letting $y\to x$, we get
$$\limsup_{i\to\infty}\Phi_i(x)\le\Phi(x). \qquad (A.77)$$

On the other hand, for any $z < x$, we have
$$\{\xi\le z\} = \{\xi\le z, \xi_i\le x\}\cup\{\xi\le z, \xi_i > x\}\subset\{\xi_i\le x\}\cup\{|\xi_i - \xi|\ge x - z\}$$
which implies that
$$\Phi(z)\le\Phi_i(x) + \Pr\{|\xi_i - \xi|\ge x - z\}.$$
Since $\Pr\{|\xi_i - \xi|\ge x - z\}\to 0$ as $i\to\infty$, we obtain $\Phi(z)\le\liminf_{i\to\infty}\Phi_i(x)$ for any $z < x$. Letting $z\to x$, we get
$$\Phi(x)\le\liminf_{i\to\infty}\Phi_i(x). \qquad (A.78)$$
It follows from (A.77) and (A.78) that $\Phi_i(x)\to\Phi(x)$ as $i\to\infty$. The theorem is proved.

Example A.21: Convergence in distribution does not imply convergence in probability. For example, take $(\Omega, \mathcal A, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5$, and
$$\xi(\omega) = \begin{cases}-1, & \text{if }\omega = \omega_1\\ 1, & \text{if }\omega = \omega_2.\end{cases}$$
We also define $\xi_i = -\xi$ for all $i$. Then $\xi_i$ and $\xi$ are identically distributed. Thus $\{\xi_i\}$ converges in distribution to $\xi$. But, for any small number $\varepsilon > 0$, we have $\Pr\{|\xi_i - \xi| > \varepsilon\} = \Pr\{\Omega\} = 1$. That is, the sequence $\{\xi_i\}$ does not converge in probability to $\xi$.

A.11 Law of Large Numbers


The laws of large numbers include two types: (a) the weak laws of large
numbers dealing with convergence in probability; (b) the strong laws of large
numbers dealing with convergence a.s. In order to introduce them, we will
denote
Sn = ξ1 + ξ2 + · · · + ξn (A.79)
for each n throughout this section.

Weak Laws of Large Numbers


Theorem A.34 (Chebyshev's Weak Law of Large Numbers) Let $\xi_1, \xi_2, \cdots$ be a sequence of independent but not necessarily identically distributed random variables with finite expected values. If there exists a number $a > 0$ such that $V[\xi_i] < a$ for all $i$, then $(S_n - E[S_n])/n$ converges in probability to 0 as $n\to\infty$.
Proof: For any given $\varepsilon > 0$, it follows from the Chebyshev inequality that
$$\Pr\left\{\left|\frac{S_n - E[S_n]}{n}\right|\ge\varepsilon\right\}\le\frac{1}{\varepsilon^2}V\left[\frac{S_n}{n}\right] = \frac{V[S_n]}{\varepsilon^2 n^2}\le\frac{a}{\varepsilon^2 n}\to 0$$
as $n\to\infty$. The theorem is proved. Especially, if those random variables have a common expected value $e$, then $S_n/n$ converges in probability to $e$.

Theorem A.35 Let $\xi_1, \xi_2, \cdots$ be a sequence of iid random variables with finite expected value $e$. Then $S_n/n$ converges in probability to $e$ as $n\to\infty$.
Proof: For each $i$, since the expected value of $\xi_i$ is finite, there exists $\beta > 0$ such that $E[|\xi_i|] < \beta < \infty$. Let $\alpha$ be an arbitrary positive number, and let $n$ be an arbitrary positive integer. We define
$$\xi_i^* = \begin{cases}\xi_i, & \text{if } |\xi_i| < n\alpha\\ 0, & \text{otherwise}\end{cases}$$
for $i = 1, 2, \cdots$ It is clear that $\{\xi_i^*\}$ is a sequence of iid random variables. Let $e_n^*$ be the common expected value of $\xi_i^*$, and $S_n^* = \xi_1^* + \xi_2^* + \cdots + \xi_n^*$. Then we have
$$V[\xi_i^*]\le E[\xi_i^{*2}]\le n\alpha E[|\xi_i^*|]\le n\alpha\beta,$$
$$E\left[\frac{S_n^*}{n}\right] = \frac{E[\xi_1^*] + E[\xi_2^*] + \cdots + E[\xi_n^*]}{n} = e_n^*,$$
$$V\left[\frac{S_n^*}{n}\right] = \frac{V[\xi_1^*] + V[\xi_2^*] + \cdots + V[\xi_n^*]}{n^2}\le\alpha\beta.$$
It follows from the Chebyshev inequality that
$$\Pr\left\{\left|\frac{S_n^*}{n} - e_n^*\right|\ge\varepsilon\right\}\le\frac{1}{\varepsilon^2}V\left[\frac{S_n^*}{n}\right]\le\frac{\alpha\beta}{\varepsilon^2} \qquad (A.80)$$
for every $\varepsilon > 0$. It is also clear that $e_n^*\to e$ as $n\to\infty$ by the Lebesgue dominated convergence theorem. Thus there exists an integer $N^*$ such that $|e_n^* - e| < \varepsilon$ whenever $n\ge N^*$. Applying (A.80), we get
$$\Pr\left\{\left|\frac{S_n^*}{n} - e\right|\ge 2\varepsilon\right\}\le\Pr\left\{\left|\frac{S_n^*}{n} - e_n^*\right|\ge\varepsilon\right\}\le\frac{\alpha\beta}{\varepsilon^2} \qquad (A.81)$$
for any $n\ge N^*$. It follows from the iid hypothesis and Theorem A.15 that
$$\Pr\{S_n^*\ne S_n\}\le\sum_{i=1}^n\Pr\{|\xi_i|\ge n\alpha\}\le n\Pr\{|\xi_1|\ge n\alpha\}\to 0$$
as $n\to\infty$. Thus there exists an integer $N^{**}$ such that
$$\Pr\{S_n^*\ne S_n\}\le\alpha, \quad \forall n\ge N^{**}.$$
Applying (A.81), for all $n\ge N^*\vee N^{**}$, we have
$$\Pr\left\{\left|\frac{S_n}{n} - e\right|\ge 2\varepsilon\right\}\le\frac{\alpha\beta}{\varepsilon^2} + \alpha\to 0$$
as $\alpha\to 0$. It follows that $S_n/n$ converges in probability to $e$ as $n\to\infty$.

Strong Laws of Large Numbers

Lemma A.1 (Toeplitz Lemma) Let $a, a_1, a_2, \cdots$ be a sequence of real numbers such that $a_i\to a$ as $i\to\infty$. Then
$$\lim_{n\to\infty}\frac{a_1 + a_2 + \cdots + a_n}{n} = a. \qquad (A.82)$$
Proof: Let $\varepsilon > 0$ be given. Since $a_i\to a$, there exists an integer $N$ such that
$$|a_i - a| < \frac{\varepsilon}{2}, \quad \forall i\ge N.$$
It is also possible to choose an integer $N^* > N$ such that
$$\frac{1}{N^*}\sum_{i=1}^N|a_i - a| < \frac{\varepsilon}{2}.$$
Thus for any $n > N^*$, we have
$$\left|\frac{1}{n}\sum_{i=1}^n a_i - a\right|\le\frac{1}{N^*}\sum_{i=1}^N|a_i - a| + \frac{1}{n}\sum_{i=N+1}^n|a_i - a| < \varepsilon.$$
It follows from the arbitrariness of $\varepsilon$ that the Toeplitz lemma holds.



Lemma A.2 (Kronecker Lemma) Let $a_1, a_2, \cdots$ be a sequence of real numbers such that $\sum_{i=1}^\infty a_i$ converges. Then
$$\lim_{n\to\infty}\frac{a_1 + 2a_2 + \cdots + na_n}{n} = 0. \qquad (A.83)$$
Proof: We set $s_0 = 0$ and $s_i = a_1 + a_2 + \cdots + a_i$ for $i = 1, 2, \cdots$ Then we have
$$\frac{1}{n}\sum_{i=1}^n ia_i = \frac{1}{n}\sum_{i=1}^n i(s_i - s_{i-1}) = s_n - \frac{1}{n}\sum_{i=1}^{n-1}s_i.$$
The sequence $\{s_i\}$ converges to a finite limit, say $s$. It follows from the Toeplitz lemma that $\sum_{i=1}^{n-1}s_i/n\to s$ as $n\to\infty$. Thus the Kronecker lemma is proved.
Theorem A.36 (Kolmogorov Strong Law of Large Numbers) Let $\xi_1, \xi_2, \cdots$ be a sequence of independent random variables with finite expected values. If
$$\sum_{i=1}^\infty\frac{V[\xi_i]}{i^2} < \infty, \qquad (A.84)$$
then
$$\frac{S_n - E[S_n]}{n}\to 0, \quad a.s. \qquad (A.85)$$
as $n\to\infty$.
Proof: Since $\xi_1, \xi_2, \cdots$ are independent random variables with finite expected values, for every given $\varepsilon > 0$, we have
$$\Pr\left\{\bigcup_{j=0}^\infty\left(\left|\sum_{i=n}^{n+j}\frac{\xi_i - E[\xi_i]}{i}\right|\ge\varepsilon\right)\right\} = \lim_{m\to\infty}\Pr\left\{\bigcup_{j=0}^m\left(\left|\sum_{i=n}^{n+j}\frac{\xi_i}{i} - E\left[\sum_{i=n}^{n+j}\frac{\xi_i}{i}\right]\right|\ge\varepsilon\right)\right\}$$
$$= \lim_{m\to\infty}\Pr\left\{\max_{0\le j\le m}\left|\sum_{i=n}^{n+j}\frac{\xi_i}{i} - E\left[\sum_{i=n}^{n+j}\frac{\xi_i}{i}\right]\right|\ge\varepsilon\right\}$$
$$\le\lim_{m\to\infty}\frac{1}{\varepsilon^2}V\left[\sum_{i=n}^{n+m}\frac{\xi_i}{i}\right] \quad\text{(by the Kolmogorov inequality)}$$
$$= \lim_{m\to\infty}\frac{1}{\varepsilon^2}\sum_{i=n}^{n+m}\frac{V[\xi_i]}{i^2} = \frac{1}{\varepsilon^2}\sum_{i=n}^\infty\frac{V[\xi_i]}{i^2}\to 0 \quad\text{as } n\to\infty.$$
Thus $\sum_{i=1}^\infty(\xi_i - E[\xi_i])/i$ converges a.s. Applying the Kronecker lemma, we obtain
$$\frac{S_n - E[S_n]}{n} = \frac{1}{n}\sum_{i=1}^n i\cdot\frac{\xi_i - E[\xi_i]}{i}\to 0, \quad a.s.$$
as $n\to\infty$. The theorem is proved.


Theorem A.37 (Strong Law of Large Numbers) Let $\xi_1, \xi_2, \cdots$ be a sequence of iid random variables with finite expected value $e$. Then
$$\frac{S_n}{n}\to e, \quad a.s. \qquad (A.86)$$
as $n\to\infty$.
Proof: For each $i\ge 1$, let $\xi_i^*$ be $\xi_i$ truncated at $i$, i.e.,
$$\xi_i^* = \begin{cases}\xi_i, & \text{if } |\xi_i| < i\\ 0, & \text{otherwise},\end{cases}$$
and write $S_n^* = \xi_1^* + \xi_2^* + \cdots + \xi_n^*$. Then we have
$$V[\xi_i^*]\le E[\xi_i^{*2}]\le\sum_{j=1}^i j^2\Pr\{j-1\le|\xi_1| < j\}$$
for all $i$. Thus
$$\sum_{i=1}^\infty\frac{V[\xi_i^*]}{i^2}\le\sum_{i=1}^\infty\sum_{j=1}^i\frac{j^2}{i^2}\Pr\{j-1\le|\xi_1| < j\} = \sum_{j=1}^\infty j^2\Pr\{j-1\le|\xi_1| < j\}\sum_{i=j}^\infty\frac{1}{i^2}$$
$$\le 2\sum_{j=1}^\infty j\Pr\{j-1\le|\xi_1| < j\} \quad\left(\text{by }\sum_{i=j}^\infty\frac{1}{i^2}\le\frac{2}{j}\right)$$
$$= 2 + 2\sum_{j=1}^\infty(j-1)\Pr\{j-1\le|\xi_1| < j\}\le 2 + 2E[|\xi_1|] < \infty.$$
It follows from Theorem A.36 that
$$\frac{S_n^* - E[S_n^*]}{n}\to 0, \quad a.s. \qquad (A.87)$$
as $n\to\infty$. Note that $\xi_i^*\uparrow\xi_i$ as $i\to\infty$. Using the Lebesgue dominated convergence theorem, we conclude that $E[\xi_i^*]\to e$. It follows from the Toeplitz lemma that
$$\frac{E[S_n^*]}{n} = \frac{E[\xi_1^*] + E[\xi_2^*] + \cdots + E[\xi_n^*]}{n}\to e. \qquad (A.88)$$
Since $(\xi_i - \xi_i^*)\to 0$, a.s. as $i\to\infty$, the Toeplitz lemma states that
$$\frac{S_n - S_n^*}{n} = \frac{1}{n}\sum_{i=1}^n(\xi_i - \xi_i^*)\to 0, \quad a.s. \qquad (A.89)$$
It follows from (A.87), (A.88) and (A.89) that $S_n/n\to e$ a.s. as $n\to\infty$.
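A minimal Python sketch illustrating the strong law of large numbers by simulation, using iid exponential variables with $e = 2$ (an arbitrary choice):

```python
import numpy as np

# Simulation of S_n/n -> e, a.s., with iid exponential variables (e = 2).
rng = np.random.default_rng(1)
xi = rng.exponential(scale=2.0, size=10**6)
S = np.cumsum(xi)
for n in (10**2, 10**4, 10**6):
    print(n, S[n-1] / n)          # approaches 2.0 as n grows
```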



A.12 Conditional Probability


We consider the probability of an event A after it has been learned that some
other event B has occurred. This new probability is called the conditional
probability of A given B.

Definition A.22 Let $(\Omega, \mathcal A, \Pr)$ be a probability space, and $A, B\in\mathcal A$. Then the conditional probability of $A$ given $B$ is defined by
$$\Pr\{A|B\} = \frac{\Pr\{A\cap B\}}{\Pr\{B\}} \qquad (A.90)$$
provided that $\Pr\{B\} > 0$.

Example A.22: Let ξ be an exponentially distributed random variable with


expected value β. Then for any real numbers a > 0 and x > 0, the conditional
probability of ξ ≥ a + x given ξ ≥ a is

Pr{ξ ≥ a + x|ξ ≥ a} = exp(−x/β) = Pr{ξ ≥ x}

which means that the conditional probability is identical to the original prob-
ability. This is the so-called memoryless property of exponential distribution.
In other words, it is as good as new if it is functioning on inspection.

Theorem A.38 (Bayes Formula) Let the events $A_1, A_2, \cdots, A_n$ form a partition of the space $\Omega$ such that $\Pr\{A_i\} > 0$ for $i = 1, 2, \cdots, n$, and let $B$ be an event with $\Pr\{B\} > 0$. Then we have
$$\Pr\{A_k|B\} = \frac{\Pr\{A_k\}\Pr\{B|A_k\}}{\displaystyle\sum_{i=1}^n\Pr\{A_i\}\Pr\{B|A_i\}} \qquad (A.91)$$
for $k = 1, 2, \cdots, n$.

Proof: Since $A_1, A_2, \cdots, A_n$ form a partition of the space $\Omega$, we immediately have
$$\Pr\{B\} = \sum_{i=1}^n\Pr\{A_i\cap B\} = \sum_{i=1}^n\Pr\{A_i\}\Pr\{B|A_i\}$$
which is also called the formula for total probability. Thus, for any $k$, we have
$$\Pr\{A_k|B\} = \frac{\Pr\{A_k\cap B\}}{\Pr\{B\}} = \frac{\Pr\{A_k\}\Pr\{B|A_k\}}{\displaystyle\sum_{i=1}^n\Pr\{A_i\}\Pr\{B|A_i\}}.$$
The theorem is proved.



Remark A.6: Especially, let $A$ and $B$ be two events with $\Pr\{A\} > 0$ and $\Pr\{B\} > 0$. Then $A$ and $A^c$ form a partition of the space $\Omega$, and the Bayes formula is
$$\Pr\{A|B\} = \frac{\Pr\{A\}\Pr\{B|A\}}{\Pr\{B\}}. \qquad (A.92)$$

Remark A.7: In statistical applications, the events A1 , A2 , · · · , An are often


called hypotheses. Furthermore, for each i, the Pr{Ai } is called a priori
probability of Ai , and Pr{Ai |B} is called a posteriori probability of Ai after
the occurrence of event B.
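A minimal Python sketch of the Bayes formula (A.91) with three hypotheses and illustrative (hypothetical) numbers:

```python
# Bayes formula (A.91) with three hypotheses A1, A2, A3.
prior = [0.5, 0.3, 0.2]           # a priori probabilities Pr{A_k}
likelihood = [0.1, 0.4, 0.8]      # Pr{B | A_k}

total = sum(p*l for p, l in zip(prior, likelihood))    # total probability
posterior = [p*l/total for p, l in zip(prior, likelihood)]
print(posterior, sum(posterior))  # a posteriori probabilities, summing to 1
```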
Definition A.23 The conditional probability distribution $\Phi: \Re\to[0, 1]$ of a random variable $\xi$ given $B$ is defined by
$$\Phi(x|B) = \Pr\{\xi\le x|B\} \qquad (A.93)$$
provided that $\Pr\{B\} > 0$.

Example A.23: Let $\xi$ and $\eta$ be random variables. Then the conditional probability distribution of $\xi$ given $\eta = y$ is
$$\Phi(x|\eta = y) = \Pr\{\xi\le x|\eta = y\} = \frac{\Pr\{\xi\le x, \eta = y\}}{\Pr\{\eta = y\}}$$
provided that $\Pr\{\eta = y\} > 0$.
Definition A.24 The conditional probability density function $\phi$ of a random variable $\xi$ given $B$ is a nonnegative function such that
$$\Phi(x|B) = \int_{-\infty}^x\phi(y|B)dy, \quad \forall x\in\Re \qquad (A.94)$$
where $\Phi(x|B)$ is the conditional probability distribution of $\xi$ given $B$.

Example A.24: Let $(\xi, \eta)$ be a random vector with joint probability density function $\psi$. Then the marginal probability density functions of $\xi$ and $\eta$ are
$$f(x) = \int_{-\infty}^{+\infty}\psi(x, y)dy, \qquad g(y) = \int_{-\infty}^{+\infty}\psi(x, y)dx,$$
respectively. Furthermore, we have
$$\Pr\{\xi\le x, \eta\le y\} = \int_{-\infty}^x\int_{-\infty}^y\psi(r, t)\,dt\,dr = \int_{-\infty}^y\left(\int_{-\infty}^x\frac{\psi(r, t)}{g(t)}dr\right)g(t)\,dt$$
which implies that the conditional probability distribution of $\xi$ given $\eta = y$ is
$$\Phi(x|\eta = y) = \int_{-\infty}^x\frac{\psi(r, y)}{g(y)}dr, \quad a.s. \qquad (A.95)$$
and the conditional probability density function of $\xi$ given $\eta = y$ is
$$\phi(x|\eta = y) = \frac{\psi(x, y)}{g(y)} = \frac{\psi(x, y)}{\displaystyle\int_{-\infty}^{+\infty}\psi(x, y)dx}, \quad a.s. \qquad (A.96)$$
Note that (A.95) and (A.96) are defined only for $g(y)\ne 0$. In fact, the set $\{y\,|\,g(y) = 0\}$ has probability 0. Especially, if $\xi$ and $\eta$ are independent random variables, then $\psi(x, y) = f(x)g(y)$ and $\phi(x|\eta = y) = f(x)$.

A.13 Random Set


It is believed that the earliest study of random set was Robbins [198] in
1944, and a rigorous definition was given by Matheron [167] in 1975. In this
book, let us redefine the concept of random set and propose a concept of
membership function for it.

Definition A.25 A random set is a function ξ from a probability space


(Ω, A, Pr) to a collection of sets such that both {B ⊂ ξ} and {ξ ⊂ B} are
events for any Borel set B.

Example A.25: Take a probability space (Ω, A, Pr) to be {ω1 , ω2 , ω3 }. Then


the set-valued function


ξ(ω) = [1, 3] if ω = ω1,  [2, 4] if ω = ω2,  [3, 5] if ω = ω3    (A.97)

is a random set on (Ω, A, Pr).

Definition A.26 A random set ξ is said to have a membership function µ


if for any Borel set B, we have

Pr{B ⊂ ξ} = inf_{x∈B} µ(x),    (A.98)

Pr{ξ ⊂ B} = 1 − sup_{x∈B^c} µ(x).    (A.99)

The above equations will be called probability inversion formulas.

Remark A.8: When a random set ξ does have a membership function µ,


we immediately have
µ(x) = Pr{x ∈ ξ}. (A.100)

Example A.26: A crisp set A of real numbers is a special random set


ξ(ω) ≡ A. Show that such a random set has a membership function
µ(x) = 1 if x ∈ A, and µ(x) = 0 if x ∉ A,    (A.101)

that is just the characteristic function of A.

Example A.27: Take a probability space (Ω, A, Pr) to be the interval [0, 1]
with Borel algebra and Lebesgue measure. Then the random set
ξ(ω) = [−√(1 − ω), √(1 − ω)]    (A.102)

has a membership function


µ(x) = 1 − x² if x ∈ [−1, 1], and µ(x) = 0 otherwise.    (A.103)

Theorem A.39 A real-valued function µ is a membership function if and


only if
0 ≤ µ(x) ≤ 1. (A.104)

Proof: If µ is a membership function of some random set ξ, then µ(x) =


Pr{x ∈ ξ} and 0 ≤ µ(x) ≤ 1. Conversely, suppose µ is a function such that
0 ≤ µ(x) ≤ 1. Take a probability space (Ω, A, Pr) to be the interval [0, 1]
with Borel algebra and Lebesgue measure. Then the random set

ξ(ω) = {x | µ(x) ≥ ω} (A.105)

has the membership function µ.
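The construction in this proof is easy to check numerically. The following sketch (an addition, not part of the book) takes µ(x) = 1 − x² from Example A.27 and estimates Pr{x ∈ ξ} for the random set ξ(ω) = {x | µ(x) ≥ ω} by sampling ω uniformly from [0, 1]; the estimates should match µ(x).

import numpy as np

# A sketch (not from the book): for mu(x) = 1 - x^2 on [-1, 1], the random
# set xi(omega) = {x | mu(x) >= omega} on ([0,1], Borel, Lebesgue) should
# satisfy Pr{x in xi} = mu(x); we estimate the left side by sampling omega.
def mu(x):
    return max(1.0 - x * x, 0.0)

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=100_000)   # Lebesgue measure on [0, 1]

for x in (0.0, 0.5, 0.9):
    # x belongs to xi(omega) if and only if omega <= mu(x)
    freq = np.mean(omega <= mu(x))
    print(f"x = {x}: estimated Pr = {freq:.4f}, mu(x) = {mu(x):.4f}")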

Theorem A.40 Let ξ be a random set with membership function µ. Then


its complement ξ c has a membership function

λ(x) = 1 − µ(x). (A.106)

Proof: In order to prove 1−µ is a membership function of ξ c , we must verify


the two probability inversion formulas. Let B be a Borel set. It follows from
the definition of membership function that

Pr{B ⊂ ξ^c} = Pr{ξ ⊂ B^c} = 1 − sup_{x∈(B^c)^c} µ(x) = inf_{x∈B} (1 − µ(x)),

Pr{ξ^c ⊂ B} = Pr{B^c ⊂ ξ} = inf_{x∈B^c} µ(x) = 1 − sup_{x∈B^c} (1 − µ(x)).

Thus ξ^c has a membership function 1 − µ.

Definition A.27 Let ξ be a random set with membership function µ. Then


the set-valued function

µ^{−1}(α) = {x ∈ < | µ(x) ≥ α},  ∀α ∈ [0, 1]    (A.107)

is called the inverse membership function of ξ. Sometimes, for each given α,


the set µ−1 (α) is also called the α-cut of µ.

Theorem A.41 (Sufficient and Necessary Condition) A function µ−1 (α) is


an inverse membership function if and only if it is a monotone decreasing
set-valued function with respect to α ∈ [0, 1]. That is,

µ−1 (α) ⊂ µ−1 (β), if α > β. (A.108)

Proof: Suppose µ−1 (α) is an inverse membership function of some random


set. For any x ∈ µ−1 (α), we have µ(x) ≥ α. Since α > β, we have µ(x) > β
and then x ∈ µ−1 (β). Hence µ−1 (α) ⊂ µ−1 (β). Conversely, suppose µ−1 (α)
is a monotone decreasing set-valued function. Then

µ(x) = sup{α ∈ [0, 1] | x ∈ µ^{−1}(α)}


is a membership function of some random set. It is easy to verify that µ−1 (α)
is the inverse membership function of the random set. The theorem is proved.

Theorem A.42 Let ξ be a random set with inverse membership function


µ−1 (α). Then for each α ∈ [0, 1], we have

Pr{µ−1 (α) ⊂ ξ} ≥ α, (A.109)

Pr{ξ ⊂ µ−1 (α)} ≥ 1 − α. (A.110)

Proof: For each x ∈ µ−1 (α), we have µ(x) ≥ α. It follows from the proba-
bility inversion formula that

Pr{µ^{−1}(α) ⊂ ξ} = inf_{x∈µ^{−1}(α)} µ(x) ≥ α.

For each x ∉ µ^{−1}(α), we have µ(x) < α. It follows from the probability
inversion formula that

Pr{ξ ⊂ µ^{−1}(α)} = 1 − sup_{x∉µ^{−1}(α)} µ(x) ≥ 1 − α.

A.14 Stochastic Process


A stochastic process is essentially a sequence of random variables indexed by
time.

Definition A.28 Let (Ω, A, Pr) be a probability space and let T be a totally
ordered set (e.g. time). A stochastic process is a function Xt (ω) from T ×
(Ω, A, Pr) to the set of real numbers such that {Xt ∈ B} is an event for any
Borel set B at each time t.
For each fixed ω, the function Xt (ω) is called a sample path of the stochas-
tic process Xt . A stochastic process Xt is said to be sample-continuous if
almost all sample paths are continuous with respect to t.
Definition A.29 A stochastic process Xt is said to have independent incre-
ments if
Xt0 , Xt1 − Xt0 , Xt2 − Xt1 , · · · , Xtk − Xtk−1 (A.111)
are independent random variables where t0 is the initial time and t1 , t2 , · · ·, tk
are any times with t0 < t1 < · · · < tk .
Definition A.30 A stochastic process Xt is said to have stationary incre-
ments if, for any given t > 0, the increments Xs+t − Xs are identically
distributed random variables for all s > 0.
A stationary independent increment process is a stochastic process that
has not only independent increments but also stationary increments. If Xt is
a stationary independent increment process, then
Yt = aXt + b
is also a stationary independent increment process for any numbers a and b.

Renewal Process
Let ξi denote the times between the (i − 1)th and the ith events, known as
the interarrival times, i = 1, 2, · · · , respectively. Define S0 = 0 and
Sn = ξ1 + ξ2 + · · · + ξn , ∀n ≥ 1. (A.112)
Then Sn can be regarded as the waiting time until the occurrence of the nth
event after time t = 0.
Definition A.31 Let ξ1 , ξ2 , · · · be iid positive interarrival times. Define
S0 = 0 and Sn = ξ1 + ξ2 + · · · + ξn for n ≥ 1. Then the stochastic pro-
cess
Nt = max_{n≥0} {n | Sn ≤ t}    (A.113)

is called a renewal process.


A renewal process is called a Poisson process with rate β if the interarrival
times are exponential random variables with a common probability density
function,
φ(x) = (1/β) exp(−x/β),  x ≥ 0.    (A.114)

Wiener Process
In 1827 Robert Brown observed the irregular movement of pollen grains suspended
in liquid. This movement is now known as Brownian motion. In 1923 Norbert
Wiener modeled Brownian motion by the following Wiener process.

Definition A.32 A stochastic process Wt is said to be a standard Wiener


process if
(i) W0 = 0 and almost all sample paths are continuous,
(ii) Wt has stationary and independent increments,
(iii) every increment Ws+t − Ws is a normal random variable with expected
value 0 and variance t.

Note that almost all sample paths of a Wiener process have infinite length on
any fixed time interval and are nowhere differentiable. Furthermore, the
squared variation of a Wiener process on [0, t] is equal to t both in mean
square and almost surely.
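As an illustration (an addition, not part of the book), the following sketch simulates a sample path of a standard Wiener process from its independent normal increments and checks that the squared variation on [0, t] is close to t.

import numpy as np

# Simulate a standard Wiener path on [0, t] with n steps: each increment
# W_{s+dt} - W_s is normal with mean 0 and variance dt, by Definition A.32.
rng = np.random.default_rng(1)
t, n = 1.0, 100_000
dt = t / n
increments = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(increments)))   # W_0 = 0

squared_variation = np.sum(np.diff(W) ** 2)
print(squared_variation)   # close to t = 1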

A.15 Stochastic Calculus


Ito calculus, named after Kiyoshi Ito, is the most popular topic of stochastic
calculus. The central concept is the Ito integral that allows one to integrate
a stochastic process with respect to Wiener process. This section provides a
brief introduction to Ito calculus.

Definition A.33 Let Xt be a stochastic process and let Wt be a standard


Wiener process. For any partition of closed interval [a, b] with a = t1 < t2 <
· · · < tk+1 = b, the mesh is written as

∆ = max_{1≤i≤k} |ti+1 − ti|.

Then the Ito integral of Xt with respect to Wt is


∫_a^b Xt dWt = lim_{∆→0} Σ_{i=1}^k X_{t_i} (W_{t_{i+1}} − W_{t_i})    (A.115)

provided that the limit exists in mean square and is a random variable.

Example A.28: Let Wt be a standard Wiener process. It follows from the


definition of Ito integral that
∫_0^s dWt = Ws,

∫_0^s Wt dWt = (1/2)Ws² − (1/2)s.
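These identities may be checked by simulation. The sketch below (an addition, not part of the book) approximates the Ito integral by the left-endpoint sum in (A.115) along one simulated path and compares it with (1/2)Ws² − (1/2)s.

import numpy as np

# Approximate the Ito integral of W_t with respect to W_t on [0, s] by the
# left-endpoint sum in (A.115), and compare with the closed form above.
rng = np.random.default_rng(2)
s, n = 1.0, 100_000
dt = s / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

ito_sum = np.sum(W[:-1] * dW)   # sum of W_{t_i} (W_{t_{i+1}} - W_{t_i})
print(ito_sum, 0.5 * W[-1] ** 2 - 0.5 * s)   # the two values nearly agree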

Definition A.34 Let Wt be a standard Wiener process and let Zt be a


stochastic process. If there exist two stochastic processes µt and σt such that
Zt = Z0 + ∫_0^t µs ds + ∫_0^t σs dWs    (A.116)

for any t ≥ 0, then Zt is called an Ito process with drift µt and diffusion σt .
Furthermore, Zt has a stochastic differential

dZt = µt dt + σt dWt . (A.117)

Theorem A.43 (Ito Formula) Let Wt be a standard Wiener process, and let
h(t, w) be a twice continuously differentiable function. Then Xt = h(t, Wt )
is an Ito process and has a stochastic differential

dXt = (∂h/∂t)(t, Wt) dt + (∂h/∂w)(t, Wt) dWt + (1/2)(∂²h/∂w²)(t, Wt) dt.    (A.118)
Proof: Since the function h is twice continuously differentiable, by using
Taylor series expansion, the infinitesimal increment of Xt has a second-order
approximation

∆Xt = (∂h/∂t)(t, Wt)∆t + (∂h/∂w)(t, Wt)∆Wt + (1/2)(∂²h/∂w²)(t, Wt)(∆Wt)²
    + (1/2)(∂²h/∂t²)(t, Wt)(∆t)² + (∂²h/∂t∂w)(t, Wt)∆t∆Wt.

Since we can ignore the terms (∆t)² and ∆t∆Wt and replace (∆Wt)² with
∆t, the Ito formula is obtained because it yields

Xs = X0 + ∫_0^s (∂h/∂t)(t, Wt) dt + ∫_0^s (∂h/∂w)(t, Wt) dWt + (1/2) ∫_0^s (∂²h/∂w²)(t, Wt) dt

for any s ≥ 0.

Example A.29: Ito formula is the fundamental theorem of stochastic cal-


culus. Applying Ito formula, we obtain

d(tWt ) = Wt dt + tdWt ,

d(Wt²) = 2Wt dWt + dt.

A.16 Stochastic Differential Equation


In the 1940s Kiyoshi Ito invented a type of stochastic differential equation,
that is, a differential equation driven by a Wiener process. This section provides a
brief introduction to stochastic differential equation.

Definition A.35 Suppose Wt is a standard Wiener process, and f and g


are two functions. Then
dXt = f (t, Xt )dt + g(t, Xt )dWt (A.119)
is called a stochastic differential equation. A solution is an Ito process Xt
that satisfies (A.119) identically in t.

Example A.30: Let Wt be a standard Wiener process. Then the stochastic


differential equation
dXt = adt + bdWt
has a solution
Xt = at + bWt .

Example A.31: Let Wt be a standard Wiener process. Then the stochastic


differential equation
dXt = aXt dt + bXt dWt
has a solution
Xt = exp((a − b²/2)t + bWt).
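The closed form may be checked numerically. The sketch below (an addition, not part of the book) integrates the equation with the Euler-Maruyama scheme, an illustrative choice of method and parameters, and compares the result with the solution above, which corresponds to X0 = 1.

import numpy as np

# Euler-Maruyama for dX_t = a X_t dt + b X_t dW_t (scheme chosen for this
# illustration), compared with X_t = exp((a - b^2/2) t + b W_t), i.e. X_0 = 1.
rng = np.random.default_rng(3)
a, b, T, n = 0.5, 0.3, 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

X = 1.0
for dw in dW:
    X += a * X * dt + b * X * dw        # one Euler-Maruyama step

exact = np.exp((a - b * b / 2.0) * T + b * dW.sum())
print(X, exact)                          # the two values should be close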
Theorem A.44 (Existence and Uniqueness Theorem) The stochastic differ-
ential equation
dXt = f (t, Xt )dt + g(t, Xt )dWt (A.120)
has a unique solution if the coefficients f (t, x) and g(t, x) satisfy linear growth
condition
|f (t, x)| + |g(t, x)| ≤ L(1 + |x|), ∀x ∈ <, t ≥ 0 (A.121)
and Lipschitz condition
|f (t, x) − f (t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|, ∀x, y ∈ <, t ≥ 0 (A.122)
for some constant L. Moreover, the solution is sample-continuous.
Theorem A.45 (Feynman-Kac Formula) Consider the stochastic differen-
tial equation
dXt = f (t, Xt )dt + g(t, Xt )dWt . (A.123)
For any measurable function h(x) and fixed T > 0, the function
"Z #
T
U (t, x) = E h(Xs )ds Xt = x (A.124)
t

is the solution of the partial differential equation


(∂U/∂t)(t, x) + f(t, x)(∂U/∂x)(t, x) + (1/2)g²(t, x)(∂²U/∂x²)(t, x) + h(x) = 0    (A.125)
with the terminal condition
U (T, x) = 0. (A.126)
Appendix B

Chance Theory

Uncertainty and randomness are two basic types of indeterminacy. Chance


theory was pioneered by Liu [149] in 2013 for modeling complex systems with
not only uncertainty but also randomness. This appendix will introduce the
concepts of chance measure, uncertain random variable, chance distribution,
operational law, expected value, variance, and law of large numbers. As ap-
plications of chance theory, this appendix will also provide uncertain random
programming, uncertain random risk analysis, uncertain random reliability
analysis, uncertain random graph, uncertain random network, and uncertain
random process.

B.1 Chance Measure


Let (Γ, L, M) be an uncertainty space and let (Ω, A, Pr) be a probability
space. Then the product (Γ, L, M) × (Ω, A, Pr) is called a chance space.
Essentially, it is another triplet,
(Γ × Ω, L × A, M × Pr) (B.1)
where Γ × Ω is the universal set, L × A is the product σ-algebra, and M × Pr
is the product measure.
The universal set Γ × Ω is clearly the set of all ordered pairs of the form
(γ, ω), where γ ∈ Γ and ω ∈ Ω. That is,
Γ × Ω = {(γ, ω) | γ ∈ Γ, ω ∈ Ω} . (B.2)
The product σ-algebra L × A is the smallest σ-algebra containing mea-
surable rectangles of the form Λ × A, where Λ ∈ L and A ∈ A. Any element
in L × A is called an event in the chance space.
What is the product measure M × Pr? In order to answer this question,
let us consider an event Θ in L × A. For each ω ∈ Ω, the set
Θω = {γ ∈ Γ | (γ, ω) ∈ Θ} (B.3)


is clearly an event in L. Thus the uncertain measure M{Θω } exists for each
ω ∈ Ω. However, unfortunately, M{Θω } is not necessarily a measurable
function with respect to ω. In other words, for a real number x, the set

Θ∗x = {ω ∈ Ω | M{Θω } ≥ x} (B.4)

is a subset of Ω but not necessarily an event in A. Thus the probability


measure Pr{Θ∗x } does not necessarily exist. In this case, we assign
Pr{Θ*_x} = inf_{A∈A, A⊃Θ*_x} Pr{A}   if inf_{A∈A, A⊃Θ*_x} Pr{A} < 0.5;
Pr{Θ*_x} = sup_{A∈A, A⊂Θ*_x} Pr{A}   if sup_{A∈A, A⊂Θ*_x} Pr{A} > 0.5;
Pr{Θ*_x} = 0.5   otherwise,    (B.5)
in the light of maximum uncertainty principle. This ensures the probability
measure Pr{Θ*_x} exists for any real number x. Now we are ready to define
M × Pr of Θ as the expected value of M{Θω} with respect to ω ∈ Ω, i.e.,

∫_0^1 Pr{Θ*_x} dx.    (B.6)

Note that the above-mentioned integral is neither an uncertain measure nor


a probability measure. We will call it chance measure and represent it by
Ch{Θ}.

Figure B.1: An Event Θ in L × A  [figure omitted]

Definition B.1 (Liu [149]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space, and
let Θ ∈ L × A be an event. Then the chance measure of Θ is defined as
Ch{Θ} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ} ≥ x} dx    (B.7)

where the inner set {γ ∈ Γ | (γ, ω) ∈ Θ} is Θω and the event {ω ∈ Ω | M{Θω} ≥ x} is Θ*_x.

Theorem B.1 (Liu [149]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space. Then

Ch{Λ × A} = M{Λ} × Pr{A} (B.8)

for any Λ ∈ L and any A ∈ A. In particular, we have

Ch{∅} = 0, Ch{Γ × Ω} = 1. (B.9)

Proof: Let us first prove the identity (B.8). For each ω ∈ Ω, we immediately
have
{γ ∈ Γ | (γ, ω) ∈ Λ × A} = Λ
and
M{γ ∈ Γ | (γ, ω) ∈ Λ × A} = M{Λ}.
For any real number x, if M{Λ} ≥ x, then

Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} = Pr{A}.

If M{Λ} < x, then

Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} = Pr{∅} = 0.

Thus
Ch{Λ × A} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} dx
          = ∫_0^{M{Λ}} Pr{A} dx + ∫_{M{Λ}}^1 0 dx = M{Λ} × Pr{A}.

Furthermore, it follows from (B.8) that

Ch{∅} = M{∅} × Pr{∅} = 0,

Ch{Γ × Ω} = M{Γ} × Pr{Ω} = 1.


The theorem is thus verified.

Theorem B.2 (Liu [149], Monotonicity Theorem) Let (Γ, L, M)×(Ω, A, Pr)
be a chance space. Then the chance measure Ch{Θ} is a monotone increasing
function with respect to Θ.

Proof: Let Θ1 and Θ2 be two events with Θ1 ⊂ Θ2 . Then for each ω, we


have
{γ ∈ Γ | (γ, ω) ∈ Θ1 } ⊂ {γ ∈ Γ | (γ, ω) ∈ Θ2 }
and
M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≤ M{γ ∈ Γ | (γ, ω) ∈ Θ2 }.

Thus for any real number x, we have


Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≥ x}
≤ Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ2 } ≥ x} .

By the definition of chance measure, we get


Ch{Θ1} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ1} ≥ x} dx
       ≤ ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ2} ≥ x} dx = Ch{Θ2}.

That is, Ch{Θ} is a monotone increasing function with respect to Θ. The


theorem is thus verified.

Theorem B.3 (Liu [149], Duality Theorem) The chance measure is self-
dual. That is, for any event Θ, we have

Ch{Θ} + Ch{Θc } = 1. (B.10)

Proof: Since both uncertain measure and probability measure are self-dual,
we have
Ch{Θ} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ} ≥ x} dx
      = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ^c} ≤ 1 − x} dx
      = ∫_0^1 (1 − Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ^c} > 1 − x}) dx
      = 1 − ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ^c} > x} dx
      = 1 − Ch{Θ^c}.

That is, Ch{Θ} + Ch{Θc } = 1, i.e., the chance measure is self-dual.

Theorem B.4 (Hou [63], Subadditivity Theorem) The chance measure is


subadditive. That is, for any countable sequence of events Θ1 , Θ2 , · · · , we
have

Ch{∪_{i=1}^∞ Θi} ≤ Σ_{i=1}^∞ Ch{Θi}.    (B.11)

Proof: For each ω, it follows from the subadditivity of uncertain measure


that
M{γ ∈ Γ | (γ, ω) ∈ ∪_{i=1}^∞ Θi} ≤ Σ_{i=1}^∞ M{γ ∈ Γ | (γ, ω) ∈ Θi}.

Thus for any real number x, we have



Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ ∪_{i=1}^∞ Θi} ≥ x}
  ≤ Pr{ω ∈ Ω | Σ_{i=1}^∞ M{γ ∈ Γ | (γ, ω) ∈ Θi} ≥ x}.

By the definition of chance measure, we get


Ch{∪_{i=1}^∞ Θi} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ ∪_{i=1}^∞ Θi} ≥ x} dx
  ≤ ∫_0^1 Pr{ω ∈ Ω | Σ_{i=1}^∞ M{γ ∈ Γ | (γ, ω) ∈ Θi} ≥ x} dx
  ≤ ∫_0^{+∞} Pr{ω ∈ Ω | Σ_{i=1}^∞ M{γ ∈ Γ | (γ, ω) ∈ Θi} ≥ x} dx
  = Σ_{i=1}^∞ ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θi} ≥ x} dx
  = Σ_{i=1}^∞ Ch{Θi}.

That is, the chance measure is subadditive.

B.2 Uncertain Random Variable


Theoretically, an uncertain random variable is a measurable function on the
chance space. It is usually used to deal with measurable functions of uncertain
variables and random variables.

Definition B.2 (Liu [149]) An uncertain random variable is a function ξ


from a chance space (Γ, L, M) × (Ω, A, Pr) to the set of real numbers such
that {ξ ∈ B} is an event in L × A for any Borel set B.

Remark B.1: An uncertain random variable ξ(γ, ω) degenerates to a ran-


dom variable if it does not vary with γ. Thus a random variable is a special
uncertain random variable.

Remark B.2: An uncertain random variable ξ(γ, ω) degenerates to an un-


certain variable if it does not vary with ω. Thus an uncertain variable is a
special uncertain random variable.

Theorem B.5 Let ξ1 , ξ2 , · · ·, ξn be uncertain random variables on the chance


space (Γ, L, M) × (Ω, A, Pr), and let f : <n → < be a measurable function.
Then ξ = f (ξ1 , ξ2 , · · · , ξn ) is an uncertain random variable determined by

ξ(γ, ω) = f (ξ1 (γ, ω), ξ2 (γ, ω), · · · , ξn (γ, ω)) (B.12)

for all (γ, ω) ∈ Γ × Ω.

Proof: Since ξ1 , ξ2 , · · · , ξn are uncertain random variables, we know that


they are measurable functions on the chance space, and ξ = f (ξ1 , ξ2 , · · · , ξn )
is also a measurable function. Hence ξ is an uncertain random variable.

Example B.1: A random variable η plus an uncertain variable τ makes an


uncertain random variable ξ, i.e.,

ξ(γ, ω) = η(ω) + τ (γ) (B.13)

for all (γ, ω) ∈ Γ × Ω.

Example B.2: Let η1 , η2 , · · · , ηm be random variables, and let τ1 , τ2 , · · · , τn


be uncertain variables. If f is a measurable function, then

ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (B.14)

is an uncertain random variable determined by

ξ(γ, ω) = f (η1 (ω), η2 (ω), · · · , ηm (ω), τ1 (γ), τ2 (γ), · · · , τn (γ)) (B.15)

for all (γ, ω) ∈ Γ × Ω.

Theorem B.6 (Liu [149]) Let ξ be an uncertain random variable on the


chance space (Γ, L, M) × (Ω, A, Pr), and let B be a Borel set. Then {ξ ∈ B}
is an uncertain random event with chance measure
Ch{ξ ∈ B} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ∈ B} ≥ x} dx.    (B.16)

Proof: Since {ξ ∈ B} is an event in the chance space, the equation (B.16)


follows from Definition B.1 immediately.

Remark B.3: If the uncertain random variable degenerates to a random


variable η, then Ch{η ∈ B} = Ch{Γ × (η ∈ B)} = M{Γ} × Pr{η ∈ B} =
Pr{η ∈ B}. That is,
Ch{η ∈ B} = Pr{η ∈ B}. (B.17)
If the uncertain random variable degenerates to an uncertain variable τ , then
Ch{τ ∈ B} = Ch{(τ ∈ B) × Ω} = M{τ ∈ B} × Pr{Ω} = M{τ ∈ B}. That is,

Ch{τ ∈ B} = M{τ ∈ B}. (B.18)



Theorem B.7 (Liu [149]) Let ξ be an uncertain random variable. Then the
chance measure Ch{ξ ∈ B} is a monotone increasing function of B and

Ch{ξ ∈ ∅} = 0, Ch{ξ ∈ <} = 1. (B.19)

Proof: Let B1 and B2 be Borel sets with B1 ⊂ B2 . Then we immediately


have {ξ ∈ B1 } ⊂ {ξ ∈ B2 }. It follows from the monotonicity of chance
measure that
Ch{ξ ∈ B1 } ≤ Ch{ξ ∈ B2 }.
Hence Ch{ξ ∈ B} is a monotone increasing function of B. Furthermore, we
have
Ch{ξ ∈ ∅} = Ch{∅} = 0,
Ch{ξ ∈ <} = Ch{Γ × Ω} = 1.
The theorem is verified.

Theorem B.8 (Liu [149]) Let ξ be an uncertain random variable. Then for
any Borel set B, we have

Ch{ξ ∈ B} + Ch{ξ ∈ B c } = 1. (B.20)

Proof: It follows from {ξ ∈ B}c = {ξ ∈ B c } and the duality of chance


measure immediately.

B.3 Chance Distribution


Definition B.3 (Liu [149]) Let ξ be an uncertain random variable. Then
its chance distribution is defined by

Φ(x) = Ch{ξ ≤ x} (B.21)

for any x ∈ <.

Example B.3: As a special uncertain random variable, the chance distri-


bution of a random variable η is just its probability distribution, that is,

Φ(x) = Ch{η ≤ x} = Pr{η ≤ x}. (B.22)

Example B.4: As a special uncertain random variable, the chance distri-


bution of an uncertain variable τ is just its uncertainty distribution, that
is,
Φ(x) = Ch{τ ≤ x} = M{τ ≤ x}. (B.23)

Theorem B.9 (Liu [149], Sufficient and Necessary Condition for Chance
Distribution) A function Φ : < → [0, 1] is a chance distribution if and only if
it is a monotone increasing function such that Φ(x) ≢ 0 and Φ(x) ≢ 1.

Proof: Assume Φ is a chance distribution of uncertain random variable ξ.


Let x1 and x2 be two real numbers with x1 < x2 . It follows from Theorem B.7
that
Φ(x1 ) = Ch{ξ ≤ x1 } ≤ Ch{ξ ≤ x2 } = Φ(x2 ).
Hence the chance distribution Φ is a monotone increasing function. Further-
more, if Φ(x) ≡ 0, then
∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≥ r} dr ≡ 0.

Thus for almost all ω ∈ Ω, we have

M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≡ 0, ∀x ∈ <

which is in contradiction to the asymptotic theorem, and then Φ(x) 6≡ 0 is


verified. Similarly, if Φ(x) ≡ 1, then
∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≥ r} dr ≡ 1.

Thus for almost all ω ∈ Ω, we have

M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≡ 1, ∀x ∈ <

which is also in contradiction to the asymptotic theorem, and then Φ(x) 6≡ 1


is proved.
Conversely, suppose Φ : < → [0, 1] is a monotone increasing function but
Φ(x) 6≡ 0 and Φ(x) 6≡ 1. It follows from Peng-Iwamura theorem that there is
an uncertain variable whose uncertainty distribution is just Φ(x). Since an
uncertain variable is a special uncertain random variable, we know that Φ is
a chance distribution.

Theorem B.10 (Liu [149], Chance Inversion Theorem) Let ξ be an uncer-


tain random variable with chance distribution Φ. Then for any real number
x, we have
Ch{ξ ≤ x} = Φ(x), Ch{ξ > x} = 1 − Φ(x). (B.24)

Proof: The equation Ch{ξ ≤ x} = Φ(x) follows from the definition of chance
distribution immediately. By using the duality of chance measure, we get

Ch{ξ > x} = 1 − Ch{ξ ≤ x} = 1 − Φ(x).

Remark B.4: When the chance distribution Φ is a continuous function, we


also have
Ch{ξ < x} = Φ(x), Ch{ξ ≥ x} = 1 − Φ(x). (B.25)

B.4 Operational Law


Assume η1 , η2 , · · · , ηm are independent random variables with probability
distributions Ψ1 , Ψ2 , · · · , Ψm , and τ1 , τ2 , · · · , τn are independent uncertain
variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. What
is the chance distribution of the uncertain random variable
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn )? (B.26)
This section will provide an operational law to answer this question.
Theorem B.11 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-
ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , respectively, and let τ1 , τ2 ,
· · · , τn be uncertain variables (not necessarily independent). Then the uncer-
tain random variable
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.27)
has a chance distribution
Φ(x) = ∫_{<^m} M{f(y1, · · ·, ym, τ1, · · ·, τn) ≤ x} dΨ1(y1) · · · dΨm(ym)    (B.28)

for any number x.


Proof: It follows from Theorem B.6 that the uncertain random variable ξ
has a chance distribution
Φ(x) = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≥ r} dr
     = ∫_0^1 Pr{ω ∈ Ω | M{f(η1(ω), · · ·, ηm(ω), τ1, · · ·, τn) ≤ x} ≥ r} dr
     = ∫_{<^m} M{f(y1, · · ·, ym, τ1, · · ·, τn) ≤ x} dΨ1(y1) · · · dΨm(ym).

The theorem is verified.


Theorem B.12 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-
ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , respectively, and let τ1 , τ2 ,
· · · , τn be uncertain variables (not necessarily independent). Then the uncer-
tain random variable
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.29)
has a chance distribution
Φ(x) = ∫_{<^m} F(x; y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym)    (B.30)

where F (x; y1 , · · · , ym ) is the uncertainty distribution of the uncertain vari-


able f (y1 , · · · , ym , τ1 , · · · , τn ) for any real numbers y1 , · · · , ym .

Proof: For any given numbers y1 , · · · , ym , it follows from the operational law
of uncertain variables that f (y1 , · · · , ym , τ1 , · · · , τn ) is an uncertain variable
with uncertainty distribution F (x; y1 , · · ·, ym ). By using (B.28), the chance
distribution of ξ is
Φ(x) = ∫_{<^m} M{f(y1, · · ·, ym, τ1, · · ·, τn) ≤ x} dΨ1(y1) · · · dΨm(ym)
     = ∫_{<^m} F(x; y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym)

that is just (B.30). The theorem is verified.

Remark B.5: Let τ1 , τ2 , · · · , τn be independent uncertain variables with


uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. If the function

f (η1 , · · · , ηm , τ1 , · · · , τn )

is strictly increasing with respect to τ1 , · · · , τk and strictly decreasing with


respect to τk+1 , · · · , τn , then F −1 (α; y1 , · · · , ym ) is equal to

f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α))

from which we may derive the uncertainty distribution F (x; y1 , · · · , ym ).

Exercise B.1: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the sum

ξ = η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn (B.31)

has a chance distribution


Φ(x) = ∫_{−∞}^{+∞} Υ(x − y) dΨ(y)    (B.32)

where
Ψ(y) = ∫_{y1+y2+···+ym ≤ y} dΨ1(y1) dΨ2(y2) · · · dΨm(ym)    (B.33)

is the probability distribution of η1 + η2 + · · · + ηm , and

Υ(z) = sup_{z1+z2+···+zn = z} Υ1(z1) ∧ Υ2(z2) ∧ · · · ∧ Υn(zn)    (B.34)

is the uncertainty distribution of τ1 + τ2 + · · · + τn .
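As a numerical illustration of (B.32) (an addition, not part of the book), take m = n = 1 with η standard normal and τ a linear uncertain variable on [0, 2]; both choices are assumptions of this example. The integral Φ(x) = ∫ Υ(x − y) dΨ(y) is then estimated by averaging Υ(x − y) over samples y drawn from Ψ.

import numpy as np

# Estimate Phi(x) = integral of Upsilon(x - y) dPsi(y) in (B.32), with the
# illustrative choices eta ~ N(0, 1) and tau linear uncertain on [0, 2].
def Upsilon(z):
    return np.clip(z / 2.0, 0.0, 1.0)

rng = np.random.default_rng(4)
eta_samples = rng.normal(0.0, 1.0, size=1_000_000)   # samples from Psi

def Phi(x):
    return float(np.mean(Upsilon(x - eta_samples)))

for x in (-1.0, 1.0, 3.0):
    print(x, Phi(x))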

Exercise B.2: Let η1 , η2 , · · · , ηm be independent positive random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn

be independent positive uncertain variables with uncertainty distributions


Υ1 , Υ2 , · · · , Υn , respectively. Show that the product

ξ = η1 η2 · · · ηm τ1 τ2 · · · τn (B.35)

has a chance distribution


Φ(x) = ∫_0^{+∞} Υ(x/y) dΨ(y)    (B.36)

where

Ψ(y) = ∫_{y1 y2 ···ym ≤ y} dΨ1(y1) dΨ2(y2) · · · dΨm(ym)    (B.37)

is the probability distribution of η1 η2 · · · ηm , and

Υ(z) = sup_{z1 z2 ···zn = z} Υ1(z1) ∧ Υ2(z2) ∧ · · · ∧ Υn(zn)    (B.38)

is the uncertainty distribution of τ1 τ2 · · · τn .

Exercise B.3: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the minimum

ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (B.39)

has a chance distribution

Φ(x) = Ψ(x) + Υ(x) − Ψ(x)Υ(x) (B.40)

where
Ψ(x) = 1 − (1 − Ψ1 (x))(1 − Ψ2 (x)) · · · (1 − Ψm (x)) (B.41)
is the probability distribution of η1 ∧ η2 ∧ · · · ∧ ηm , and

Υ(x) = Υ1 (x) ∨ Υ2 (x) ∨ · · · ∨ Υn (x) (B.42)

is the uncertainty distribution of τ1 ∧ τ2 ∧ · · · ∧ τn .

Exercise B.4: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the maximum

ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (B.43)

has a chance distribution

Φ(x) = Ψ(x)Υ(x) (B.44)



where
Ψ(x) = Ψ1 (x)Ψ2 (x) · · · Ψm (x) (B.45)
is the probability distribution of η1 ∨ η2 ∨ · · · ∨ ηm , and
Υ(x) = Υ1 (x) ∧ Υ2 (x) ∧ · · · ∧ Υn (x) (B.46)
is the uncertainty distribution of τ1 ∨ τ2 ∨ · · · ∨ τn .

Some Useful Theorems


In many cases, it is required to calculate Ch{f (η1 , · · · , ηm , τ1 , · · · , τn ) ≤ 0}.
We may produce the chance distribution Φ(x) of f (η1 , · · · , ηm , τ1 , · · · , τn ) by
the operational law, and then the chance measure is just Φ(0). However, for
convenience, we may use the following theorems.
Theorem B.13 (Liu [151]) Let η1 , η2 , · · · , ηm be independent random vari-
ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. If f (η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with
respect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn , then
Ch{f(η1, · · ·, ηm, τ1, · · ·, τn) ≤ 0} = ∫_{<^m} G(y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym)

where G(y1 , · · · , ym ) is the root α of the equation


f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) = 0.

Proof: It follows from the definition of chance measure that for any numbers
y1 , · · · , ym , the theorem is true if the function G is
G(y1 , · · · , ym ) = M{f (y1 , · · · , ym , τ1 , · · · , τn ) ≤ 0}.
Furthermore, by using Theorem 2.20, we know that G is just the root α. The
theorem is proved.

Remark B.6: Sometimes, the equation may not have a root. In this case,
if
f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) < 0

for all α, then we set the root α = 1; and if


f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) > 0

for all α, then we set the root α = 0.

Remark B.7: The root α may be estimated by the bisection method because
f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) is a strictly
increasing function with respect to α. See Figure B.2, and the code sketch after it.

Figure B.2: f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1−α), · · ·, Υn^{−1}(1−α))  [figure omitted]
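A minimal sketch of this bisection search (an addition, not part of the book) is given below. The function h stands for the map α ↦ f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) at fixed y1, · · ·, ym; the boundary conventions follow Remark B.6. When a regular uncertainty distribution makes Υ^{−1} unbounded at 0 or 1, h should in practice be evaluated slightly inside (0, 1).

# Bisection for the root alpha of a strictly increasing function h on [0, 1];
# h is a stand-in for the function plotted in Figure B.2.
def bisect_root(h, tol=1e-8):
    if h(1.0) < 0:      # h < 0 for all alpha: set the root to 1 (Remark B.6)
        return 1.0
    if h(0.0) > 0:      # h > 0 for all alpha: set the root to 0 (Remark B.6)
        return 0.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if h(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(bisect_root(lambda alpha: 3.0 * alpha - 1.0))   # root 1/3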

Theorem B.14 (Liu [151]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. If f (η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with
respect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn , then
Ch{f(η1, · · ·, ηm, τ1, · · ·, τn) > 0} = ∫_{<^m} G(y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym)

where G(y1 , · · · , ym ) is the root α of the equation

f(y1, · · ·, ym, Υ1^{−1}(1 − α), · · ·, Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · ·, Υn^{−1}(α)) = 0.

Proof: It follows from the definition of chance measure that for any numbers
y1 , · · · , ym , the theorem is true if the function G is

G(y1 , · · · , ym ) = M{f (y1 , · · · , ym , τ1 , · · · , τn ) > 0}.

Furthermore, by using Theorem 2.21, we know that G is just the root α. The
theorem is proved.

Remark B.8: Sometimes, the equation may not have a root. In this case,
if

f(y1, · · ·, ym, Υ1^{−1}(1 − α), · · ·, Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · ·, Υn^{−1}(α)) < 0

for all α, then we set the root α = 0; and if

f(y1, · · ·, ym, Υ1^{−1}(1 − α), · · ·, Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · ·, Υn^{−1}(α)) > 0

for all α, then we set the root α = 1.

Remark B.9: The root α may be estimated by the bisection method because
f(y1, · · ·, ym, Υ1^{−1}(1 − α), · · ·, Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · ·, Υn^{−1}(α)) is a strictly
decreasing function with respect to α. See Figure B.3.

Figure B.3: f(y1, · · ·, ym, Υ1^{−1}(1−α), · · ·, Υk^{−1}(1−α), Υ_{k+1}^{−1}(α), · · ·, Υn^{−1}(α))  [figure omitted]

Operational Law for Boolean System


Theorem B.15 (Liu [150]) Assume η1 , η2 , · · · , ηm are independent Boolean
random variables, i.e.,
ηi = 1 with probability measure ai, and ηi = 0 with probability measure 1 − ai    (B.47)

for i = 1, 2, · · · , m, and τ1 , τ2 , · · · , τn are independent Boolean uncertain


variables, i.e.,
τj = 1 with uncertain measure bj, and τj = 0 with uncertain measure 1 − bj    (B.48)

for j = 1, 2, · · · , n. If f is a Boolean function (not necessarily monotone),


then
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.49)
is a Boolean uncertain random variable such that
Ch{ξ = 1} = Σ_{(x1,··· ,xm)∈{0,1}^m} (∏_{i=1}^m µi(xi)) f*(x1, · · ·, xm)    (B.50)

where

 sup min νj (yj ),
f (x1 ,··· ,xm ,y1 ,··· ,yn )=1 1≤j≤n







 if sup min νj (yj ) < 0.5
f (x1 ,··· ,xm ,y1 ,··· ,yn )=1 1≤j≤n



f (x1 , · · · , xm ) = (B.51)

 1− sup min νj (yj ),
f (x1 ,··· ,xm ,y1 ,··· ,yn )=0 1≤j≤n





 if sup min νj (yj ) ≥ 0.5,



1≤j≤n
f (x1 ,··· ,xm ,y1 ,··· ,yn )=1

µi(xi) = ai if xi = 1, and 1 − ai if xi = 0  (i = 1, 2, · · ·, m),    (B.52)

νj(yj) = bj if yj = 1, and 1 − bj if yj = 0  (j = 1, 2, · · ·, n).    (B.53)

Proof: At first, when (x1 , · · · , xm ) is given, f (x1 , · · · , xm , τ1 , · · · , τn ) is es-


sentially a Boolean function of uncertain variables. It follows from the oper-
ational law of uncertain variables that
M{f (x1 , · · · , xm , τ1 , · · · , τn ) = 1} = f ∗ (x1 , · · · , xm )
that is determined by (B.51). On the other hand, it follows from the opera-
tional law of uncertain random variables that
Ch{ξ = 1} = Σ_{(x1,··· ,xm)∈{0,1}^m} (∏_{i=1}^m µi(xi)) M{f(x1, · · ·, xm, τ1, · · ·, τn) = 1}.

Thus (B.50) is verified.

Remark B.10: When the uncertain variables disappear, the operational


law becomes
Pr{ξ = 1} = Σ_{(x1,x2,··· ,xm)∈{0,1}^m} (∏_{i=1}^m µi(xi)) f(x1, x2, · · ·, xm).    (B.54)

Remark B.11: When the random variables disappear, the operational law
becomes

M{ξ = 1} = sup_{f(y1,y2,··· ,yn)=1} min_{1≤j≤n} νj(yj)
    if sup_{f(y1,y2,··· ,yn)=1} min_{1≤j≤n} νj(yj) < 0.5;

M{ξ = 1} = 1 − sup_{f(y1,y2,··· ,yn)=0} min_{1≤j≤n} νj(yj)
    if sup_{f(y1,y2,··· ,yn)=1} min_{1≤j≤n} νj(yj) ≥ 0.5.    (B.55)

Exercise B.5: Let η1 , η2 , · · · , ηm be independent Boolean random variables


defined by (B.47) and let τ1 , τ2 , · · · , τn be independent Boolean uncertain
variables defined by (B.48). Then the minimum
ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (B.56)
is a Boolean uncertain random variable. Show that
Ch{ξ = 1} = a1 a2 · · · am (b1 ∧ b2 ∧ · · · ∧ bn ). (B.57)
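The closed form (B.57) can be checked against the general operational law (B.50)-(B.53) by direct enumeration. The sketch below (an addition, not part of the book) does so for illustrative values of ai and bj.

import math
from itertools import product

a = [0.9, 0.8]          # Pr{eta_i = 1}
b = [0.7, 0.6]          # M{tau_j = 1}

def f(*z):
    return min(z)       # the Boolean function of Exercise B.5

def nu(y):
    return [bj if yj == 1 else 1.0 - bj for bj, yj in zip(b, y)]

def f_star(x):
    # (B.51): uncertain measure that f(x, tau_1, ..., tau_n) equals 1
    ys = list(product((0, 1), repeat=len(b)))
    s1 = max((min(nu(y)) for y in ys if f(*x, *y) == 1), default=0.0)
    if s1 < 0.5:
        return s1
    s0 = max((min(nu(y)) for y in ys if f(*x, *y) == 0), default=0.0)
    return 1.0 - s0

# (B.50): sum over all Boolean vectors x of prod(mu_i(x_i)) * f_star(x)
ch = 0.0
for x in product((0, 1), repeat=len(a)):
    weight = math.prod(ai if xi == 1 else 1.0 - ai for ai, xi in zip(a, x))
    ch += weight * f_star(x)

print(ch)                       # general operational law: 0.432
print(a[0] * a[1] * min(b))     # closed form (B.57): also 0.432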

Exercise B.6: Let η1 , η2 , · · · , ηm be independent Boolean random variables


defined by (B.47) and let τ1 , τ2 , · · · , τn be independent Boolean uncertain
variables defined by (B.48). Then the maximum

ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (B.58)

is a Boolean uncertain random variable. Show that

Ch{ξ = 1} = 1 − (1 − a1 )(1 − a2 ) · · · (1 − am )(1 − b1 ∨ b2 ∨ · · · ∨ bn ). (B.59)

Exercise B.7: Let η1 , η2 , · · · , ηm be independent Boolean random variables


defined by (B.47) and let τ1 , τ2 , · · · , τn be independent Boolean uncertain
variables defined by (B.48). Then the kth largest value

ξ = k-max [η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ] (B.60)

is a Boolean uncertain random variable. Show that


Ch{ξ = 1} = Σ_{(x1,x2,··· ,xm)∈{0,1}^m} (∏_{i=1}^m µi(xi)) f*(x1, x2, · · ·, xm)    (B.61)

where

f ∗ (x1 , x2 , · · · , xm ) = k-max [x1 , x2 , · · · , xm , b1 , b2 , · · · , bn ], (B.62)


µi(xi) = ai if xi = 1, and 1 − ai if xi = 0  (i = 1, 2, · · ·, m).    (B.63)

B.5 Expected Value


Definition B.4 (Liu [149]) Let ξ be an uncertain random variable. Then
its expected value is defined by
E[ξ] = ∫_0^{+∞} Ch{ξ ≥ x} dx − ∫_{−∞}^0 Ch{ξ ≤ x} dx    (B.64)

provided that at least one of the two integrals is finite.

Theorem B.16 (Liu [149]) Let ξ be an uncertain random variable with


chance distribution Φ. Then
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.    (B.65)

Proof: It follows from the chance inversion theorem that for almost all
numbers x, we have Ch{ξ ≥ x} = 1 − Φ(x) and Ch{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
E[ξ] = ∫_0^{+∞} Ch{ξ ≥ x} dx − ∫_{−∞}^0 Ch{ξ ≤ x} dx
     = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.

Thus we obtain the equation (B.65).

Theorem B.17 Let ξ be an uncertain random variable with chance distri-


bution Φ. Then

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).    (B.66)

Proof: It follows from the change of variables of integral and Theorem B.16
that the expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
     = ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).

The theorem is proved.

Theorem B.18 Let ξ be an uncertain random variable with regular chance


distribution Φ. Then

E[ξ] = ∫_0^1 Φ^{−1}(α) dα.    (B.67)

Proof: It follows from the change of variables of integral and Theorem B.16
that the expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
     = ∫_{Φ(0)}^1 Φ^{−1}(α) dα + ∫_0^{Φ(0)} Φ^{−1}(α) dα = ∫_0^1 Φ^{−1}(α) dα.

The theorem is proved.

Theorem B.19 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , respectively, and let τ1 , τ2 ,

· · · , τn be uncertain variables (not necessarily independent), then the uncer-


tain random variable

ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.68)

has an expected value


E[ξ] = ∫_{<^m} E[f(y1, · · ·, ym, τ1, · · ·, τn)] dΨ1(y1) · · · dΨm(ym)    (B.69)

where E[f (y1 , · · · , ym , τ1 , · · · , τn )] is the expected value of the uncertain vari-


able f (y1 , · · · , ym , τ1 , · · · , τn ) for any real numbers y1 , · · · , ym .

Proof: For simplicity, we only prove the case m = n = 2. Write the


uncertainty distribution of f (y1 , y2 , τ1 , τ2 ) by F (x; y1 , y2 ) for any real numbers
y1 and y2 . Then
E[f(y1, y2, τ1, τ2)] = ∫_0^{+∞} (1 − F(x; y1, y2)) dx − ∫_{−∞}^0 F(x; y1, y2) dx.

On the other hand, the uncertain random variable ξ = f (η1 , η2 , τ1 , τ2 ) has a


chance distribution
Φ(x) = ∫_{<^2} F(x; y1, y2) dΨ1(y1) dΨ2(y2).

It follows from Theorem B.16 that


E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
     = ∫_0^{+∞} (1 − ∫_{<^2} F(x; y1, y2) dΨ1(y1) dΨ2(y2)) dx − ∫_{−∞}^0 ∫_{<^2} F(x; y1, y2) dΨ1(y1) dΨ2(y2) dx
     = ∫_{<^2} (∫_0^{+∞} (1 − F(x; y1, y2)) dx − ∫_{−∞}^0 F(x; y1, y2) dx) dΨ1(y1) dΨ2(y2)
     = ∫_{<^2} E[f(y1, y2, τ1, τ2)] dΨ1(y1) dΨ2(y2).

Thus the theorem is proved.

Example B.5: Let η be a random variable and let τ be an uncertain variable.


Assume η has a probability distribution Ψ. It follows from Theorem B.19 that
the uncertain random variable η + τ has an expected value
E[η + τ] = ∫_< E[y + τ] dΨ(y) = ∫_< (y + E[τ]) dΨ(y) = E[η] + E[τ].

That is,
E[η + τ ] = E[η] + E[τ ]. (B.70)

Exercise B.8: Let η be a random variable and let τ be an uncertain variable.


Assume η has a probability distribution Ψ. Show that
E[ητ ] = E[η]E[τ ]. (B.71)
Theorem B.20 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-
ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn ,
respectively. If f (η1 , · · · , ηm , τ1 , · · · , τn ) is a strictly increasing function or
a strictly decreasing function with respect to τ1 , · · · , τn , then the uncertain
random variable
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (B.72)
has an expected value
E[ξ] = ∫_{<^m} ∫_0^1 f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) dα dΨ1(y1) · · · dΨm(ym).

Proof: Since f (y1 , · · · , ym , τ1 , · · · , τn ) is a strictly increasing function or a


strictly decreasing function with respect to τ1 , · · · , τn , we have
E[f(y1, · · ·, ym, τ1, · · ·, τn)] = ∫_0^1 f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) dα.

It follows from Theorem B.19 that the result holds.

Remark B.12: If f (η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with respect


to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn , then the
integrand in the formula of expected value E[ξ] should be replaced with
f(y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)).

Exercise B.9: Let η be a random variable with probability distribution Ψ,


and let τ be an uncertain variable with uncertainty distribution Υ. Show
that

E[η ∨ τ] = ∫_< ∫_0^1 (y ∨ Υ^{−1}(α)) dα dΨ(y)    (B.73)

and

E[η ∧ τ] = ∫_< ∫_0^1 (y ∧ Υ^{−1}(α)) dα dΨ(y).    (B.74)
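As a numerical illustration of (B.73) (an addition, not part of the book), take η standard normal and τ linear uncertain on [0, 1], so that Υ^{−1}(α) = α; both choices are assumptions of this example. The inner integral is evaluated on a grid of α values and the outer one by sampling from Ψ.

import numpy as np

# Estimate E[eta ∨ tau] via (B.73) with eta ~ N(0, 1) and Upsilon^{-1}(alpha)
# = alpha (a linear uncertain variable on [0, 1]); both choices illustrative.
rng = np.random.default_rng(5)
y = rng.normal(0.0, 1.0, size=10_000)       # outer integral: samples of eta
alpha = (np.arange(400) + 0.5) / 400.0      # inner integral: grid on (0, 1)

inner = np.maximum(y[:, None], alpha[None, :]).mean(axis=1)
print(inner.mean())                         # estimate of E[eta ∨ tau]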

Theorem B.21 (Liu [150], Linearity of Expected Value Operator) Assume


η1 and η2 are random variables (not necessarily independent), τ1 and τ2 are
independent uncertain variables, and f1 and f2 are measurable functions.
Then
E[f1 (η1 , τ1 ) + f2 (η2 , τ2 )] = E[f1 (η1 , τ1 )] + E[f2 (η2 , τ2 )]. (B.75)

Proof: Since τ1 and τ2 are independent uncertain variables, for any real
numbers y1 and y2 , the functions f1 (y1 , τ1 ) and f2 (y2 , τ2 ) are also independent
uncertain variables. Thus
E[f1 (y1 , τ1 ) + f2 (y2 , τ2 )] = E[f1 (y1 , τ1 )] + E[f2 (y2 , τ2 )].
Let Ψ1 and Ψ2 be the probability distributions of random variables η1 and
η2 , respectively. Then we have
E[f1(η1, τ1) + f2(η2, τ2)]
  = ∫_{<^2} E[f1(y1, τ1) + f2(y2, τ2)] dΨ1(y1) dΨ2(y2)
  = ∫_{<^2} (E[f1(y1, τ1)] + E[f2(y2, τ2)]) dΨ1(y1) dΨ2(y2)
  = ∫_< E[f1(y1, τ1)] dΨ1(y1) + ∫_< E[f2(y2, τ2)] dΨ2(y2)
  = E[f1(η1, τ1)] + E[f2(η2, τ2)].


The theorem is proved.

Exercise B.10: Assume η1 and η2 are random variables, and τ1 and τ2 are
independent uncertain variables. Show that
E[η1 ∨ τ1 + η2 ∧ τ2 ] = E[η1 ∨ τ1 ] + E[η2 ∧ τ2 ]. (B.76)

B.6 Variance
Definition B.5 (Liu [149]) Let ξ be an uncertain random variable with finite
expected value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (B.77)
Since (ξ − e)2 is a nonnegative uncertain random variable, we also have
V[ξ] = ∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx.    (B.78)

Theorem B.22 (Liu [149]) If ξ is an uncertain random variable with finite


expected value, a and b are real numbers, then
V [aξ + b] = a2 V [ξ]. (B.79)
Proof: Let e be the expected value of ξ. Then aξ + b has an expected value
ae + b. Thus the variance is
V [aξ + b] = E[(aξ + b − (ae + b))2 ] = E[a2 (ξ − e)2 ] = a2 V [ξ].
The theorem is verified.

Theorem B.23 (Liu [149]) Let ξ be an uncertain random variable with ex-
pected value e. Then V [ξ] = 0 if and only if Ch{ξ = e} = 1.

Proof: We first assume V [ξ] = 0. It follows from the equation (B.78) that
∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx = 0

which implies Ch{(ξ − e)2 ≥ x} = 0 for any x > 0. Hence we have

Ch{(ξ − e)2 = 0} = 1.

That is, Ch{ξ = e} = 1. Conversely, assume Ch{ξ = e} = 1. Then we


immediately have Ch{(ξ − e)2 = 0} = 1 and Ch{(ξ − e)2 ≥ x} = 0 for any
x > 0. Thus

V[ξ] = ∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx = 0.
The theorem is proved.

How to Obtain Variance from Distributions?


Let ξ be an uncertain random variable with expected value e. If we only
know its chance distribution Φ, then the variance
V[ξ] = ∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx
     = ∫_0^{+∞} Ch{(ξ ≥ e + √x) ∪ (ξ ≤ e − √x)} dx
     ≤ ∫_0^{+∞} (Ch{ξ ≥ e + √x} + Ch{ξ ≤ e − √x}) dx
     = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.

Thus we have the following stipulation.

Stipulation B.1 (Guo and Wang [57]) Let ξ be an uncertain random vari-
able with chance distribution Φ and finite expected value e. Then
V[ξ] = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.    (B.80)

Theorem B.24 (Sheng and Yao [211]) Let ξ be an uncertain random vari-
able with chance distribution Φ and finite expected value e. Then
V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x).    (B.81)

Proof: This theorem is based on Stipulation B.1 that says the variance of ξ
is

V[ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.

Substituting e + √y with x and y with (x − e)², the change of variables and
integration by parts produce

∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).

Similarly, substituting e − √y with x and y with (x − e)², we obtain

∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)² = ∫_{−∞}^e (x − e)² dΦ(x).

It follows that the variance is

V[ξ] = ∫_e^{+∞} (x − e)² dΦ(x) + ∫_{−∞}^e (x − e)² dΦ(x) = ∫_{−∞}^{+∞} (x − e)² dΦ(x).

The theorem is verified.


Theorem B.25 (Sheng and Yao [211]) Let ξ be an uncertain random vari-
able with regular chance distribution Φ and finite expected value e. Then
V[ξ] = ∫_0^1 (Φ^{−1}(α) − e)² dα.    (B.82)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem B.24 that the variance is
V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫_0^1 (Φ^{−1}(α) − e)² dα.

The theorem is verified.


Theorem B.26 (Guo and Wang [57]) Let η1 , η2 , · · · , ηm be independent
random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 ,
· · · , τn be independent uncertain variables with uncertainty distributions Υ1 ,
Υ2 , · · ·, Υn , respectively. Then
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (B.83)
has a variance
V[ξ] = ∫_{<^m} ∫_0^{+∞} (1 − F(e + √x; y1, · · ·, ym) + F(e − √x; y1, · · ·, ym)) dx dΨ1(y1) · · · dΨm(ym)    (B.84)
where F (x; y1 , · · ·, ym ) is the uncertainty distribution of the uncertain variable
f (y1 , · · ·, ym , τ1 , · · ·, τn ) and is determined by Υ1 , Υ2 , · · ·, Υn .

Proof: It follows from the operational law of uncertain random variables


that ξ has a chance distribution
Φ(x) = ∫_{<^m} F(x; y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym).

Thus the theorem follows Stipulation B.1 immediately.

Exercise B.11: Let η be a random variable with probability distribution


Ψ, and let τ be an uncertain variable with uncertainty distribution Υ. Show
that the sum
ξ =η+τ (B.85)
has a variance
V[ξ] = ∫_{−∞}^{+∞} ∫_0^{+∞} (1 − Υ(e + √x − y) + Υ(e − √x − y)) dx dΨ(y).    (B.86)

B.7 Law of Large Numbers


Theorem B.27 (Yao and Gao [251], Law of Large Numbers) Let η1 , η2 , · · ·
be iid random variables with a common probability distribution Ψ, and let
τ1 , τ2 , · · · be iid uncertain variables. Assume f is a strictly monotone func-
tion. Then
Sn = f (η1 , τ1 ) + f (η2 , τ2 ) + · · · + f (ηn , τn ) (B.87)
is a sequence of uncertain random variables and
Sn/n → ∫_{−∞}^{+∞} f(y, τ1) dΨ(y)    (B.88)

in the sense of convergence in distribution as n → ∞.

Proof: According to the definition of convergence in distribution, it suffices


to prove
lim_{n→∞} Ch{Sn/n ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y)} = M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y)}    (B.89)

for any real number z with


lim_{w→z} M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ ∫_{−∞}^{+∞} f(y, w) dΨ(y)}
  = M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y)}.

The argument breaks into two cases. Case 1: Assume f (y, z) is strictly
increasing with respect to z. Let Υ denote the common uncertainty distribution
of τ1, τ2, · · ·. It is clear that

M{f (y, τ1 ) ≤ f (y, z)} = Υ(z)

for any real numbers y and z. Thus we have


M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y)} = Υ(z).    (B.90)

In addition, since f (η1 , z), f (η2 , z), · · · are a sequence of iid random variables,
the law of large numbers for random variables tells us that
(f(η1, z) + f(η2, z) + · · · + f(ηn, z))/n → ∫_{−∞}^{+∞} f(y, z) dΨ(y),  a.s.

as n → ∞. Thus
lim_{n→∞} Ch{Sn/n ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y)} = Υ(z).    (B.91)

It follows from (B.90) and (B.91) that (B.89) holds. Case 2: Assume f (y, z)
is strictly decreasing with respect to z. Then −f (y, z) is strictly increasing
with respect to z. By using Case 1 we obtain
lim_{n→∞} Ch{−Sn/n < −z} = M{−∫_{−∞}^{+∞} f(y, τ1) dΨ(y) < −z}.

That is,
lim_{n→∞} Ch{Sn/n > z} = M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) > z}.

It follows from the duality property that


lim_{n→∞} Ch{Sn/n ≤ z} = M{∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ z}.

The theorem is thus proved.

Exercise B.12: Let η1 , η2 , · · · be iid random variables, and let τ1 , τ2 , · · · be


iid uncertain variables. Define

Sn = (η1 + τ1 ) + (η2 + τ2 ) + · · · + (ηn + τn ). (B.92)

Show that
Sn/n → E[η1] + τ1    (B.93)

in the sense of convergence in distribution as n → ∞.

Exercise B.13: Let η1 , η2 , · · · be iid positive random variables, and let


τ1 , τ2 , · · · be iid positive uncertain variables. Define

Sn = η1 τ1 + η2 τ2 + · · · + ηn τn . (B.94)

Show that
Sn/n → E[η1]τ1    (B.95)
in the sense of convergence in distribution as n → ∞.
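The convergence in Exercise B.12 can be illustrated numerically (an addition, not part of the book). Take ηi exponentially distributed with mean 1 and τi iid linear uncertain on [0, 2]; then (τ1 + · · · + τn)/n keeps the uncertainty distribution Υ, so by Exercise B.1 the chance distribution of Sn/n at a point x equals the expectation of Υ(x − η̄n) over the sample mean η̄n, which approaches Υ(x − E[η1]) as n grows.

import numpy as np

# Chance distribution of S_n/n at a fixed point x, estimated as the mean of
# Upsilon(x - mean(eta_1, ..., eta_n)) over Monte Carlo replications, and
# compared with the limit Upsilon(x - E[eta_1]) from (B.93).
def Upsilon(z):
    return np.clip(z / 2.0, 0.0, 1.0)   # linear uncertain variable on [0, 2]

rng = np.random.default_rng(6)
x = 1.8
for n in (1, 10, 100, 1000):
    means = rng.exponential(1.0, size=(5_000, n)).mean(axis=1)
    print(n, float(np.mean(Upsilon(x - means))))
print("limit:", float(Upsilon(x - 1.0)))   # E[eta_1] = 1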

B.8 Uncertain Random Programming


Assume that x is a decision vector, and ξ is an uncertain random vector.
Since an uncertain random objective function f (x, ξ) cannot be directly min-
imized, we may minimize its expected value, i.e.,

min_x E[f(x, ξ)].    (B.96)

Since the uncertain random constraints gj (x, ξ) ≤ 0, j = 1, 2, · · · , p do not


make a crisp feasible set, it is naturally desired that the uncertain random
constraints hold with confidence levels α1 , α2 , · · · , αp . Then we have a set of
chance constraints,

Ch{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p. (B.97)

In order to obtain a decision with minimum expected objective value subject


to a set of chance constraints, Liu [150] proposed the following uncertain
random programming model,

min_x E[f(x, ξ)]
subject to:
Ch{gj(x, ξ) ≤ 0} ≥ αj,  j = 1, 2, · · ·, p.    (B.98)

Definition B.6 (Liu [150]) A vector x is called a feasible solution to the


uncertain random programming model (B.98) if

Ch{gj (x, ξ) ≤ 0} ≥ αj (B.99)

for j = 1, 2, · · · , p.

Definition B.7 (Liu [150]) A feasible solution x∗ is called an optimal solu-


tion to the uncertain random programming model (B.98) if

E[f (x∗ , ξ)] ≤ E[f (x, ξ)] (B.100)

for any feasible solution x.



Theorem B.28 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn ,
respectively. If f (x, η1 , · · · , ηm , τ1 , · · · , τn ) is a strictly increasing function
or a strictly decreasing function with respect to τ1 , · · · , τn , then the expected
function
E[f (x, η1 , · · · , ηm , τ1 , · · · , τn )] (B.101)
is equal to
∫_{<^m} ∫_0^1 f(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) dα dΨ1(y1) · · · dΨm(ym).

Proof: It follows from Theorem B.20 immediately.

Remark B.13: If f (x, η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with re-


spect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn , then
the integrand in the formula of expected value E[f (x, η1 , · · · , ηm , τ1 , · · · , τn )]
should be replaced with

f(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)).

Theorem B.29 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn ,
respectively. If gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) is a strictly increasing function
with respect to τ1 , · · · , τn , then the chance constraint

Ch{gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) ≤ 0} ≥ αj (B.102)

holds if and only if


∫_{<^m} Gj(x, y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym) ≥ αj    (B.103)

where Gj (x, y1 , · · · , ym ) is the root α of the equation

gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) = 0.    (B.104)

Proof: Since Gj (x, y1 , · · · , ym ) is the root α of the equation (B.104), it


follows from Theorem B.13 that the chance measure

Ch{gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) ≤ 0}

is equal to the integral


∫_{<^m} Gj(x, y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym).

Hence the chance constraint (B.102) holds if and only if (B.103) is true. The
theorem is verified.

Remark B.14: Sometimes, the equation (B.104) may not have a root. In
this case, if
gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) < 0    (B.105)
for all α, then we set the root α = 1; and if

gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) > 0    (B.106)

for all α, then we set the root α = 0.

Remark B.15: The root α may be estimated by the bisection method be-
cause gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) is a strictly increasing function
with respect to α.

Remark B.16: If gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with


respect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn ,
then the equation (B.104) becomes

gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · ·, Υn^{−1}(1 − α)) = 0.

Theorem B.30 (Liu [150]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn ,
respectively. If f (x, η1 , · · · , ηm , τ1 , · · · , τn ) and gj (x, η1 , · · · , ηm , τ1 , · · · , τn )
are strictly increasing functions with respect to τ1 , · · · , τn for j = 1, 2, · · · , p,
then the uncertain random programming

min_x E[f(x, η1, · · ·, ηm, τ1, · · ·, τn)]
subject to:
Ch{gj(x, η1, · · ·, ηm, τ1, · · ·, τn) ≤ 0} ≥ αj,  j = 1, 2, · · ·, p

is equivalent to the crisp mathematical programming


min_x ∫_{<^m} ∫_0^1 f(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) dα dΨ1(y1) · · · dΨm(ym)
subject to:
∫_{<^m} Gj(x, y1, · · ·, ym) dΨ1(y1) · · · dΨm(ym) ≥ αj,  j = 1, 2, · · ·, p

where Gj (x, y1 , · · · , ym ) are the roots α of the equations

gj(x, y1, · · ·, ym, Υ1^{−1}(α), · · ·, Υn^{−1}(α)) = 0    (B.107)

for j = 1, 2, · · · , p, respectively.

Proof: It follows from Theorems B.28 and B.29 immediately.


After an uncertain random programming is converted into a crisp math-
ematical programming, we may solve it by any classical numerical methods
(e.g. iterative method) or intelligent algorithms (e.g. genetic algorithm).
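As an illustration (an addition, not part of the book; the model and data are assumptions), the following sketch feeds a small converted crisp program to a standard solver: minimize E[x1 η + x2 τ] subject to x1 + x2 = 1 and x ≥ 0, where E[η] = 1 and E[τ] = 2, so the expected objective is x1 + 2x2 by the linearity of the expected value operator (Theorem B.21).

import numpy as np
from scipy.optimize import minimize

# Crisp equivalent of: minimize E[x1*eta + x2*tau] subject to x1 + x2 = 1,
# x >= 0, where E[eta] = 1 and E[tau] = 2 (illustrative data).
def expected_objective(x):
    return x[0] * 1.0 + x[1] * 2.0

res = minimize(
    expected_objective,
    x0=np.array([0.5, 0.5]),
    bounds=[(0.0, None), (0.0, None)],
    constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}],
)
print(res.x)   # all weight on the cheaper component: close to [1, 0]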

B.9 Uncertain Random Risk Analysis


The study of uncertain random risk analysis was started by Liu and Ralescu
[151] with the concept of risk index.

Definition B.8 (Liu and Ralescu [151]) Assume that a system contains un-
certain random factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f . Then the risk
index is the chance measure that the system is loss-positive, i.e.,

Risk = Ch{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (B.108)

If all uncertain random factors degenerate to random ones, then the risk
index is the probability measure that the system is loss-positive (Roy [199]).
If all uncertain random factors degenerate to uncertain ones, then the risk
index is the uncertain measure that the system is loss-positive (Liu [128]).

Theorem B.31 (Liu and Ralescu [151], Risk Index Theorem) Assume a
system contains independent random variables η1 , η2 , · · · , ηm with probability
distributions Ψ1 , Ψ2 , · · ·, Ψm and independent uncertain variables τ1 , τ2 , · · ·, τn
with regular uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. If the loss
function f (η1 , · · ·, ηm , τ1 , · · ·, τn ) is strictly increasing with respect to τ1 , · · · , τk
and strictly decreasing with respect to τk+1 , · · · , τn , then the risk index is
$$\text{Risk} = \int_{\Re^m} G(y_1, \cdots, y_m)\, \mathrm{d}\Psi_1(y_1) \cdots \mathrm{d}\Psi_m(y_m) \tag{B.109}$$

where G(y1, · · · , ym) is the root α of the equation

$$f(y_1, \cdots, y_m, \Upsilon_1^{-1}(1-\alpha), \cdots, \Upsilon_k^{-1}(1-\alpha), \Upsilon_{k+1}^{-1}(\alpha), \cdots, \Upsilon_n^{-1}(\alpha)) = 0.$$

Proof: It follows from Definition B.8 and Theorem B.14 immediately.

Remark B.17: Sometimes, the equation may not have a root. In this case, if

$$f(y_1, \cdots, y_m, \Upsilon_1^{-1}(1-\alpha), \cdots, \Upsilon_k^{-1}(1-\alpha), \Upsilon_{k+1}^{-1}(\alpha), \cdots, \Upsilon_n^{-1}(\alpha)) < 0$$

for all α, then we set the root α = 0; and if

$$f(y_1, \cdots, y_m, \Upsilon_1^{-1}(1-\alpha), \cdots, \Upsilon_k^{-1}(1-\alpha), \Upsilon_{k+1}^{-1}(\alpha), \cdots, \Upsilon_n^{-1}(\alpha)) > 0$$

for all α, then we set the root α = 1.



Remark B.18: The root α may be estimated by the bisection method because $f(y_1, \cdots, y_m, \Upsilon_1^{-1}(1-\alpha), \cdots, \Upsilon_k^{-1}(1-\alpha), \Upsilon_{k+1}^{-1}(\alpha), \cdots, \Upsilon_n^{-1}(\alpha))$ is a strictly decreasing function with respect to α.

Exercise B.14: (Series System) Consider a series system in which there are
m elements whose lifetimes are independent random variables η1 , η2 , · · · , ηm
with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements whose lifetimes
are independent uncertain variables τ1 , τ2 , · · · , τn with uncertainty distribu-
tions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is understood as the case that
the system fails before the time T , then the loss function is

f = T − η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (B.110)

Show that the risk index is

Risk = a + b − ab (B.111)

where

a = 1 − (1 − Ψ1 (T ))(1 − Ψ2 (T )) · · · (1 − Ψm (T )), (B.112)

b = Υ1 (T ) ∨ Υ2 (T ) ∨ · · · ∨ Υn (T ). (B.113)
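A quick numerical check of (B.111)–(B.113) (not from the book), with hypothetical exponential probability distributions and linear uncertainty distributions for the lifetimes:

```python
# Series-system risk index per (B.111)-(B.113); the lifetime distributions
# below are hypothetical illustrations.
import math

def series_risk(T, psi_list, ups_list):
    """psi_list / ups_list hold the distribution functions of the lifetimes."""
    a = 1.0
    for psi in psi_list:
        a *= 1 - psi(T)
    a = 1 - a                             # probability that some random element fails by T
    b = max(ups(T) for ups in ups_list)   # uncertain measure that some uncertain element fails by T
    return a + b - a * b

exp_cdf = lambda rate: (lambda t: 1 - math.exp(-rate * t))
lin_cdf = lambda lo, hi: (lambda t: min(max((t - lo) / (hi - lo), 0.0), 1.0))

print(series_risk(T=1.0,
                  psi_list=[exp_cdf(0.1), exp_cdf(0.2)],
                  ups_list=[lin_cdf(0.5, 3.0), lin_cdf(1.0, 4.0)]))
```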

Exercise B.15: (Parallel System) Consider a parallel system in which there


are m elements whose lifetimes are independent random variables η1 , η2 , · · · ,
ηm with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements whose life-
times are independent uncertain variables τ1 , τ2 , · · · , τn with uncertainty dis-
tributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is understood as the case
that the system fails before the time T , then the loss function is

f = T − η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (B.114)

Show that the risk index is


Risk = ab (B.115)
where
a = Ψ1 (T )Ψ2 (T ) · · · Ψm (T ), (B.116)
b = Υ1 (T ) ∧ Υ2 (T ) ∧ · · · ∧ Υn (T ). (B.117)

Exercise B.16: (Standby System) Consider a standby system in which


there are m elements whose lifetimes are independent random variables η1 , η2 ,
· · · , ηm with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements whose
lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn with uncertainty
distributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is understood as the
case that the system fails before the time T , then the loss function is

f = T − (η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn ). (B.118)

Show that the risk index is

$$\text{Risk} = \int_{\Re^m} G(y_1, y_2, \cdots, y_m)\, \mathrm{d}\Psi_1(y_1)\, \mathrm{d}\Psi_2(y_2) \cdots \mathrm{d}\Psi_m(y_m) \tag{B.119}$$

where G(y1, y2, · · · , ym) is the root α of the equation

$$\Upsilon_1^{-1}(\alpha) + \Upsilon_2^{-1}(\alpha) + \cdots + \Upsilon_n^{-1}(\alpha) = T - (y_1 + y_2 + \cdots + y_m). \tag{B.120}$$
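In practice, (B.119) can be evaluated by combining Monte Carlo integration over the random lifetimes with the bisection of Remark B.18; the sketch below (with hypothetical distributions, not from the book) does exactly that.

```python
# Standby-system risk index per (B.119)-(B.120): Monte Carlo over the random
# lifetimes plus bisection on alpha. The distributions are hypothetical.
import random

def standby_risk(T, sample_etas, inv_ups, n_mc=20000, tol=1e-6):
    def G(ys):
        target = T - sum(ys)
        s = lambda a: sum(inv(a) for inv in inv_ups)   # increasing in alpha
        if s(1.0) < target:    # no root: the loss is sure, set alpha = 1
            return 1.0
        if s(0.0) > target:    # no root: the loss is impossible, set alpha = 0
            return 0.0
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if s(mid) < target else (lo, mid)
        return (lo + hi) / 2
    return sum(G(sample_etas()) for _ in range(n_mc)) / n_mc

# Example: one exponential random lifetime, one linear uncertain lifetime L(1, 3).
print(standby_risk(T=3.0,
                   sample_etas=lambda: [random.expovariate(1.0)],
                   inv_ups=[lambda a: 1 + 2 * a]))
```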

Remark B.19: As a substitute for the risk index, Liu and Ralescu [153] suggested a concept of value-at-risk,

$$\text{VaR}(\alpha) = \sup\{x \mid \text{Ch}\{f(\xi_1, \xi_2, \cdots, \xi_n) \ge x\} \ge \alpha\}. \tag{B.121}$$

Note that VaR(α) represents the maximum possible loss when α percent of the right tail distribution is ignored. In other words, the loss will exceed VaR(α) with chance measure α. If Φ(x) is the chance distribution of f(ξ1, ξ2, · · · , ξn), then

$$\text{VaR}(\alpha) = \sup\{x \mid \Phi(x) \le 1 - \alpha\}. \tag{B.122}$$

If the inverse chance distribution $\Phi^{-1}(\alpha)$ exists, then

$$\text{VaR}(\alpha) = \Phi^{-1}(1 - \alpha). \tag{B.123}$$

When the uncertain random variables degenerate to random variables, the


value-at-risk becomes the one in Morgan [171]. When the uncertain random
variables degenerate to uncertain variables, the value-at-risk becomes the one
in Peng [183].

Remark B.20: Liu and Ralescu [151] proposed a concept of expected loss that is the expected value of the loss f(ξ1, ξ2, · · · , ξn) given f(ξ1, ξ2, · · · , ξn) > 0. That is,

$$L = \int_0^{+\infty} \text{Ch}\{f(\xi_1, \xi_2, \cdots, \xi_n) \ge x\}\, \mathrm{d}x. \tag{B.124}$$

If Φ(x) is the chance distribution of the loss f(ξ1, ξ2, · · · , ξn), then we immediately have

$$L = \int_0^{+\infty} (1 - \Phi(x))\, \mathrm{d}x. \tag{B.125}$$

If the inverse chance distribution $\Phi^{-1}(\alpha)$ exists, then the expected loss is

$$L = \int_0^1 \left(\Phi^{-1}(\alpha)\right)^+ \mathrm{d}\alpha. \tag{B.126}$$
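For computation, (B.125) reduces the expected loss to a one-dimensional integral; here is a tiny numerical sketch (with a hypothetical chance distribution, not from the book).

```python
# Expected loss per (B.125) by the midpoint rule, truncating the integral.
# The chance distribution Phi below is a hypothetical example.
def expected_loss(Phi, upper=100.0, n=100000):
    h = upper / n
    return sum((1 - Phi((i + 0.5) * h)) * h for i in range(n))

# Example: Phi linear on [-1, 1]; (B.125) then gives 1/4.
Phi = lambda x: min(max((x + 1) / 2, 0.0), 1.0)
print(expected_loss(Phi, upper=2.0))  # about 0.25
```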

B.10 Uncertain Random Reliability Analysis


The study of uncertain random reliability analysis was started by Wen and
Kang [232] with the concept of reliability index.

Definition B.9 (Wen and Kang [232]) Assume a Boolean system has un-
certain random elements ξ1 , ξ2 , · · · , ξn and a structure function f . Then the
reliability index is the chance measure that the system is working, i.e.,

Reliability = Ch{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (B.127)

If all uncertain random elements degenerate to random ones, then the


reliability index is the probability measure that the system is working. If all
uncertain random elements degenerate to uncertain ones, then the reliability
index (Liu [128]) is the uncertain measure that the system is working.

Theorem B.32 (Wen and Kang [232], Reliability Index Theorem) Assume
that a system has a structure function f and contains independent random
elements η1 , η2 , · · · , ηm with reliabilities a1 , a2 , · · · , am , and independent un-
certain elements τ1 , τ2 , · · · , τn with reliabilities b1 , b2 , · · · , bn , respectively.
Then the reliability index is

$$\text{Reliability} = \sum_{(x_1, \cdots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \mu_i(x_i)\right) f^*(x_1, \cdots, x_m) \tag{B.128}$$

where

$$f^*(x_1, \cdots, x_m) = \begin{cases} \displaystyle\sup_{f(x_1, \cdots, x_m, y_1, \cdots, y_n)=1}\, \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(x_1, \cdots, x_m, y_1, \cdots, y_n)=1}\, \min_{1 \le j \le n} \nu_j(y_j) < 0.5 \\[3ex] 1 - \displaystyle\sup_{f(x_1, \cdots, x_m, y_1, \cdots, y_n)=0}\, \min_{1 \le j \le n} \nu_j(y_j), & \text{if } \displaystyle\sup_{f(x_1, \cdots, x_m, y_1, \cdots, y_n)=1}\, \min_{1 \le j \le n} \nu_j(y_j) \ge 0.5, \end{cases} \tag{B.129}$$

$$\mu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \cdots, m), \tag{B.130}$$

$$\nu_j(y_j) = \begin{cases} b_j, & \text{if } y_j = 1 \\ 1 - b_j, & \text{if } y_j = 0 \end{cases} \quad (j = 1, 2, \cdots, n). \tag{B.131}$$

Proof: It follows from Definition B.9 and Theorem B.15 immediately.
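Since m is usually small, (B.128)–(B.131) can be evaluated by direct enumeration of the 2^m states of the random elements. The Python sketch below (with a series structure function as a hypothetical example) reproduces (B.133).

```python
# Reliability index per (B.128)-(B.131) by enumerating the random states.
# The series structure function is a hypothetical example.
from itertools import product

def reliability_index(f, a, b):
    """a: reliabilities of the m random elements; b: of the n uncertain ones.
    f maps a tuple of m + n Boolean states to 0 or 1."""
    m, n = len(a), len(b)
    total = 0.0
    for xs in product((0, 1), repeat=m):
        weight = 1.0                                   # product of mu_i(x_i)
        for ai, xi in zip(a, xs):
            weight *= ai if xi else 1 - ai
        # f*(x) per (B.129): sup-min over the uncertain states, with duality
        sup1 = max((min(bj if yj else 1 - bj for bj, yj in zip(b, ys))
                    for ys in product((0, 1), repeat=n) if f(xs + ys) == 1),
                   default=0.0)
        if sup1 < 0.5:
            fstar = sup1
        else:
            sup0 = max((min(bj if yj else 1 - bj for bj, yj in zip(b, ys))
                        for ys in product((0, 1), repeat=n) if f(xs + ys) == 0),
                       default=0.0)
            fstar = 1 - sup0
        total += weight * fstar
    return total

series = lambda states: int(all(states))
print(reliability_index(series, a=[0.9, 0.8], b=[0.7, 0.6]))
# matches (B.133): 0.9 * 0.8 * min(0.7, 0.6) = 0.432
```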

Exercise B.17: (Series System) Consider a series system in which there are
m independent random elements η1 , η2 , · · ·, ηm with reliabilities a1 , a2 , · · ·, am ,

and n independent uncertain elements τ1 , τ2 , · · ·, τn with reliabilities b1 , b2 , · · · ,


bn , respectively. Note that the structure function is

f = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (B.132)

Show that the reliability index is

Reliability = a1 a2 · · · am (b1 ∧ b2 ∧ · · · ∧ bn ). (B.133)

Exercise B.18: (Parallel System) Consider a parallel system in which


there are m independent random elements η1 , η2 , · · · , ηm with reliabilities
a1 , a2 , · · · , am , and n independent uncertain elements τ1 , τ2 , · · · , τn with re-
liabilities b1 , b2 , · · · , bn , respectively. Note that the structure function is

f = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (B.134)

Show that the reliability index is

Reliability = 1 − (1 − a1 )(1 − a2 ) · · · (1 − am )(1 − b1 ∨ b2 ∨ · · · ∨ bn ). (B.135)

Exercise B.19: (k-out-of-(m + n) System) Consider a k-out-of-(m + n) sys-


tem in which there are m independent random elements η1 , η2 , · · · , ηm with
reliabilities a1 , a2 , · · ·, am , and n independent uncertain elements τ1 , τ2 , · · ·, τn
with reliabilities b1 , b2 , · · · , bn , respectively. Note that the structure function
is
f = k-max [η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ]. (B.136)
Show that the reliability index is
$$\text{Reliability} = \sum_{(x_1, x_2, \cdots, x_m) \in \{0,1\}^m} \left(\prod_{i=1}^m \mu_i(x_i)\right) f^*(x_1, x_2, \cdots, x_m) \tag{B.137}$$

where

$$f^*(x_1, x_2, \cdots, x_m) = k\text{-max}\, [x_1, x_2, \cdots, x_m, b_1, b_2, \cdots, b_n], \tag{B.138}$$

$$\mu_i(x_i) = \begin{cases} a_i, & \text{if } x_i = 1 \\ 1 - a_i, & \text{if } x_i = 0 \end{cases} \quad (i = 1, 2, \cdots, m). \tag{B.139}$$

B.11 Uncertain Random Graph


In classical graph theory, the edges and vertices are all deterministic: each either exists or does not. However, in practical applications, some indeterminate factors will no doubt appear in graphs. Thus it is reasonable to assume that in a graph some edges exist with some degrees in probability measure and others exist with some degrees in uncertain measure. In order to model this type of
graph, Liu [138] presented a concept of uncertain random graph.
We say a graph is of order n if it has n vertices labeled by 1, 2, · · · , n. In
this section, we assume the graph is always of order n, and has a collection
of vertices,
V = {1, 2, · · · , n}. (B.140)
Let us define two collections of edges,

U = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are uncertain edges}, (B.141)

R = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are random edges}. (B.142)


Note that all deterministic edges are regarded as special uncertain ones. Then U ∪ R = {(i, j) | 1 ≤ i < j ≤ n}, which contains n(n − 1)/2 edges. We will call

$$\mathbb{T} = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{pmatrix} \tag{B.143}$$

an uncertain random adjacency matrix if αij represent the truth values in


uncertain measure or probability measure that the edges between vertices
i and j exist, i, j = 1, 2, · · · , n, respectively. Note that αii = 0 for i =
1, 2, · · · , n, and T is a symmetric matrix, i.e., αij = αji for i, j = 1, 2, · · · , n.
[Figure drawing omitted: an order-4 uncertain random graph whose uncertain random adjacency matrix is

$$\begin{pmatrix} 0 & 0.8 & 0 & 0.5 \\ 0.8 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0.3 \\ 0.5 & 0 & 0.3 & 0 \end{pmatrix}$$]

Figure B.4: An Uncertain Random Graph

Definition B.10 (Liu [138]) Assume V is the collection of vertices, U is the


collection of uncertain edges, R is the collection of random edges, and T is
the uncertain random adjacency matrix. Then the quartette (V, U, R, T) is
said to be an uncertain random graph.

Please note that the uncertain random graph becomes a random graph
(Erdős and Rényi [38], Gilbert [56]) if the collection U of uncertain edges
vanishes; and becomes an uncertain graph (Gao and Gao [50]) if the collection
R of random edges vanishes.

In order to deal with uncertain random graphs, let us introduce some symbols. Write

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nn} \end{pmatrix} \tag{B.144}$$

and

$$\mathbb{X} = \left\{ X \,\middle|\, \begin{array}{l} x_{ij} = 0 \text{ or } 1, \text{ if } (i,j) \in R \\ x_{ij} = 0, \text{ if } (i,j) \in U \\ x_{ij} = x_{ji}, \ i, j = 1, 2, \cdots, n \\ x_{ii} = 0, \ i = 1, 2, \cdots, n \end{array} \right\}. \tag{B.145}$$

For each given matrix

$$Y = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nn} \end{pmatrix}, \tag{B.146}$$

the extension class of Y is defined by

$$Y^* = \left\{ X \,\middle|\, \begin{array}{l} x_{ij} = y_{ij}, \text{ if } (i,j) \in R \\ x_{ij} = 0 \text{ or } 1, \text{ if } (i,j) \in U \\ x_{ij} = x_{ji}, \ i, j = 1, 2, \cdots, n \\ x_{ii} = 0, \ i = 1, 2, \cdots, n \end{array} \right\}. \tag{B.147}$$

Example B.6: (Liu [138], Connectivity Index) An uncertain random graph is connected for some realizations of uncertain and random edges, and disconnected for some other realizations. In order to show how likely an uncertain random graph is connected, a connectivity index of an uncertain random graph is defined as the chance measure that the uncertain random graph is connected. Let (V, U, R, T) be an uncertain random graph. Liu [138] proved that the connectivity index is

$$\rho = \sum_{Y \in \mathbb{X}} \left(\prod_{(i,j) \in R} \nu_{ij}(Y)\right) f^*(Y) \tag{B.148}$$

where

$$f^*(Y) = \begin{cases} \displaystyle\sup_{X \in Y^*,\, f(X)=1}\, \min_{(i,j) \in U} \nu_{ij}(X), & \text{if } \displaystyle\sup_{X \in Y^*,\, f(X)=1}\, \min_{(i,j) \in U} \nu_{ij}(X) < 0.5 \\[3ex] 1 - \displaystyle\sup_{X \in Y^*,\, f(X)=0}\, \min_{(i,j) \in U} \nu_{ij}(X), & \text{if } \displaystyle\sup_{X \in Y^*,\, f(X)=1}\, \min_{(i,j) \in U} \nu_{ij}(X) \ge 0.5, \end{cases}$$

$$\nu_{ij}(X) = \begin{cases} \alpha_{ij}, & \text{if } x_{ij} = 1 \\ 1 - \alpha_{ij}, & \text{if } x_{ij} = 0 \end{cases} \quad (i,j) \in U \cup R, \tag{B.149}$$

$$f(X) = \begin{cases} 1, & \text{if } I + X + X^2 + \cdots + X^{n-1} > 0 \\ 0, & \text{otherwise}, \end{cases} \tag{B.150}$$

$\mathbb{X}$ is the class of matrices satisfying (B.145), and $Y^*$ is the extension class of Y satisfying (B.147).
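As an unofficial numerical illustration, the sketch below evaluates (B.148) for the order-4 graph of Figure B.4, arbitrarily taking the edges with truth values 0.8 and 0.5 as random and those with truth values 1 and 0.3 as uncertain (this split is an assumption, since the figure itself does not label the edge types).

```python
# A sketch of the connectivity index (B.148) on the order-4 graph of Figure B.4
# (0-based vertices). The random/uncertain split of the edges is an assumption.
from itertools import product

N = 4
U = [((1, 2), 1.0), ((2, 3), 0.3)]   # uncertain edges with truth values
R = [((0, 1), 0.8), ((0, 3), 0.5)]   # random edges with truth values

def connected(edges):
    """f(X): 1 if the realized edge set links all N vertices, else 0."""
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for i, j in edges:
            for a, b in ((i, j), (j, i)):
                if a == v and b not in seen:
                    seen.add(b)
                    stack.append(b)
    return int(len(seen) == N)

rho = 0.0
for rvals in product((0, 1), repeat=len(R)):          # realizations Y
    weight = 1.0                                      # product of nu_ij(Y) over R
    for ((_, alpha), x) in zip(R, rvals):
        weight *= alpha if x else 1 - alpha
    fixed = [e for ((e, _), x) in zip(R, rvals) if x]
    sup = {0: 0.0, 1: 0.0}                            # sup-min over Y*
    for uvals in product((0, 1), repeat=len(U)):
        truth = min(alpha if x else 1 - alpha
                    for ((_, alpha), x) in zip(U, uvals))
        f = connected(fixed + [e for ((e, _), x) in zip(U, uvals) if x])
        sup[f] = max(sup[f], truth)
    rho += weight * (sup[1] if sup[1] < 0.5 else 1 - sup[0])
print(rho)  # 0.55 for this graph
```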

Remark B.21: If the uncertain random graph becomes a random graph, then the connectivity index is

$$\rho = \sum_{X \in \mathbb{X}} \left(\prod_{1 \le i < j \le n} \nu_{ij}(X)\right) f(X) \tag{B.151}$$

where

$$\mathbb{X} = \left\{ X \,\middle|\, \begin{array}{l} x_{ij} = 0 \text{ or } 1, \ i, j = 1, 2, \cdots, n \\ x_{ij} = x_{ji}, \ i, j = 1, 2, \cdots, n \\ x_{ii} = 0, \ i = 1, 2, \cdots, n \end{array} \right\}. \tag{B.152}$$

Remark B.22: (Gao and Gao [50]) If the uncertain random graph becomes an uncertain graph, then the connectivity index is

$$\rho = \begin{cases} \displaystyle\sup_{X \in \mathbb{X},\, f(X)=1}\, \min_{1 \le i < j \le n} \nu_{ij}(X), & \text{if } \displaystyle\sup_{X \in \mathbb{X},\, f(X)=1}\, \min_{1 \le i < j \le n} \nu_{ij}(X) < 0.5 \\[3ex] 1 - \displaystyle\sup_{X \in \mathbb{X},\, f(X)=0}\, \min_{1 \le i < j \le n} \nu_{ij}(X), & \text{if } \displaystyle\sup_{X \in \mathbb{X},\, f(X)=1}\, \min_{1 \le i < j \le n} \nu_{ij}(X) \ge 0.5 \end{cases}$$

where $\mathbb{X}$ becomes

$$\mathbb{X} = \left\{ X \,\middle|\, \begin{array}{l} x_{ij} = 0 \text{ or } 1, \ i, j = 1, 2, \cdots, n \\ x_{ij} = x_{ji}, \ i, j = 1, 2, \cdots, n \\ x_{ii} = 0, \ i = 1, 2, \cdots, n \end{array} \right\}. \tag{B.153}$$

Exercise B.20: An Euler circuit in the graph is a circuit that passes through
each edge exactly once. In other words, a graph has an Euler circuit if it can
be drawn on paper without ever lifting the pencil and without retracing over
any edge. It has been proved that a graph has an Euler circuit if and only
if it is connected and each vertex has an even degree (i.e., the number of
edges that are adjacent to that vertex). In order to measure how likely an
uncertain random graph has an Euler circuit, an Euler index is defined as
the chance measure that the uncertain random graph has an Euler circuit.
Please give a formula for calculating Euler index.

B.12 Uncertain Random Network


The term network is a synonym for a weighted graph, where the weights may
be understood as cost, distance or time consumed. Assume that in a network
some weights are random variables and others are uncertain variables. In
order to model this type of network, Liu [138] presented a concept of uncertain
random network.
In this section, we assume the uncertain random network is always of
order n, and has a collection of nodes,

N = {1, 2, · · · , n} (B.154)

where “1” is always the source node, and “n” is always the destination node.
Let us define two collections of arcs,

U = {(i, j) | (i, j) are uncertain arcs}, (B.155)

R = {(i, j) | (i, j) are random arcs}. (B.156)


Note that all deterministic arcs are regarded as special uncertain ones. Let
wij denote the weights of arcs (i, j), (i, j) ∈ U ∪ R, respectively. Then wij
are uncertain variables if (i, j) ∈ U, and random variables if (i, j) ∈ R. Write

W = {wij | (i, j) ∈ U ∪ R}. (B.157)

Definition B.11 (Liu [138]) Assume N is the collection of nodes, U is the


collection of uncertain arcs, R is the collection of random arcs, and W is the
collection of uncertain and random weights. Then the quartette (N, U, R, W)
is said to be an uncertain random network.

Please note that the uncertain random network becomes a random net-
work (Frank and Hakimi [43]) if all weights are random variables; and be-
comes an uncertain network (Liu [129]) if all weights are uncertain variables.
[Figure drawing omitted: an order-6 uncertain random network whose nodes and arcs are listed in (B.158)–(B.161) below.]

Figure B.5: An Uncertain Random Network

Figure B.5 shows an uncertain random network (N, U, R, W) of order 6 in


which
N = {1, 2, 3, 4, 5, 6}, (B.158)

U = {(1, 2), (1, 3), (2, 4), (2, 5), (3, 4), (3, 5)}, (B.159)
R = {(4, 6), (5, 6)}, (B.160)
W = {w12 , w13 , w24 , w25 , w34 , w35 , w46 , w56 }. (B.161)

Example B.7: (Liu [138], Shortest Path Distribution) Consider an uncertain random network (N, U, R, W). Assume the uncertain weights wij have uncertainty distributions Υij for (i, j) ∈ U, and the random weights wij have probability distributions Ψij for (i, j) ∈ R, respectively. Then the shortest path distribution from a source node to a destination node is

$$\Phi(x) = \int_0^{+\infty} \cdots \int_0^{+\infty} F(x;\, y_{ij}, (i,j) \in R) \prod_{(i,j) \in R} \mathrm{d}\Psi_{ij}(y_{ij}) \tag{B.162}$$

where $F(x;\, y_{ij}, (i,j) \in R)$ is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_{ij}, (i,j) \in R) = f(c_{ij}, (i,j) \in U \cup R), \tag{B.163}$$

$$c_{ij} = \begin{cases} \Upsilon_{ij}^{-1}(\alpha), & \text{if } (i,j) \in U \\ y_{ij}, & \text{if } (i,j) \in R, \end{cases} \tag{B.164}$$

and f may be calculated by the Dijkstra algorithm (Dijkstra [34]) for each given α.
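As a rough numerical sketch (not from the book), one can estimate Φ(x) by Monte Carlo over the random arc weights, approximating F(x; y) by the fraction of an α-grid whose Dijkstra length does not exceed x; the three-node network and its distributions below are hypothetical.

```python
# A sketch of (B.162)-(B.164) on a hypothetical 3-node network:
# uncertain arcs (0,1), (1,2) and a random arc (0,2).
import heapq
import random

def dijkstra(n, weights):
    """Shortest 0 -> n-1 distance; weights maps directed arcs (i, j) to floats."""
    dist = [float("inf")] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for (i, j), w in weights.items():
            if i == v and d + w < dist[j]:
                dist[j] = d + w
                heapq.heappush(heap, (dist[j], j))
    return dist[n - 1]

U = {(0, 1): lambda a: 1 + a, (1, 2): lambda a: 1 + 2 * a}  # inverse distributions
R = {(0, 2): lambda: random.uniform(2.0, 5.0)}              # samplers

def Phi(x, n_mc=2000, n_alpha=101):
    total = 0.0
    for _ in range(n_mc):
        y = {e: sample() for e, sample in R.items()}
        hits = 0                          # F(x; y) via an alpha grid
        for k in range(n_alpha):
            a = k / (n_alpha - 1)
            w = {e: inv(a) for e, inv in U.items()}
            w.update(y)
            if dijkstra(3, w) <= x:
                hits += 1
        total += hits / n_alpha
    return total / n_mc

print(Phi(3.5))  # estimated chance that the shortest path is at most 3.5
```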

Remark B.23: If the uncertain random network becomes a random network, then the shortest path distribution is

$$\Phi(x) = \int_{f(y_{ij},\, (i,j) \in R) \le x} \prod_{(i,j) \in R} \mathrm{d}\Psi_{ij}(y_{ij}). \tag{B.165}$$

Remark B.24: (Gao [51]) If the uncertain random network becomes an uncertain network, then the inverse shortest path distribution is

$$\Phi^{-1}(\alpha) = f(\Upsilon_{ij}^{-1}(\alpha), (i,j) \in U). \tag{B.166}$$

Exercise B.21: (Sheng and Gao [212]) The maximum flow problem is to find a flow with maximum value from a source node to a destination node in an uncertain random network. What is the maximum flow distribution?

B.13 Uncertain Random Process


An uncertain random process is a sequence of uncertain random variables indexed by time. A formal definition is given below.

Definition B.12 (Gao and Yao [47]) Let (Γ, L, M) × (Ω, A, Pr) be a chance
space and let T be a totally ordered set (e.g. time). An uncertain random
process is a function Xt (γ, ω) from T × (Γ, L, M) × (Ω, A, Pr) to the set of
real numbers such that {Xt ∈ B} is an event in L × A for any Borel set B
at each time t.

Example B.8: A stochastic process is a sequence of random variables indexed by time, and is thus a special type of uncertain random process.

Example B.9: An uncertain process is a sequence of uncertain variables indexed by time, and is thus a special type of uncertain random process.

Example B.10: Let Yt be a stochastic process, and let Zt be an uncertain


process. If f is a measurable function, then

Xt = f (Yt , Zt ) (B.167)

is an uncertain random process.

Definition B.13 (Gao and Yao [47]) Let η1 , η2 , · · · be iid random variables,
let τ1 , τ2 , · · · be iid uncertain variables, and let f be a positive and strictly
monotone function. Define S0 = 0 and

Sn = f (η1 , τ1 ) + f (η2 , τ2 ) + · · · + f (ηn , τn ) (B.168)

for n ≥ 1. Then

$$N_t = \max_{n \ge 0}\left\{ n \,\middle|\, S_n \le t \right\} \tag{B.169}$$

is called an uncertain random renewal process with interarrival times f (η1 , τ1 ),


f (η2 , τ2 ), · · ·

Theorem B.33 (Gao and Yao [47]) Let η1 , η2 , · · · be iid random variables
with a common probability distribution Ψ, let τ1 , τ2 , · · · be iid uncertain vari-
ables, and let f be a positive and strictly monotone function. Assume Nt is an
uncertain random renewal process with interarrival times f (η1 , τ1 ), f (η2 , τ2 ),
· · ·. Then the average renewal number satisfies

$$\frac{N_t}{t} \to \left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1} \tag{B.170}$$

in the sense of convergence in distribution as t → ∞.

Proof: Write Sn = f(η1, τ1) + f(η2, τ2) + · · · + f(ηn, τn) for all n ≥ 1. Let x be a continuity point of the uncertainty distribution of

$$\left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1}.$$

It is clear that 1/x is a continuity point of the uncertainty distribution of

$$\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y).$$

At first, it follows from the definition of uncertain random renewal process that

$$\text{Ch}\left\{\frac{N_t}{t} \le x\right\} = \text{Ch}\left\{S_{\lfloor tx \rfloor + 1} > t\right\} = \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} > \frac{t}{\lfloor tx \rfloor + 1}\right\}$$

where ⌊tx⌋ represents the maximal integer less than or equal to tx. Since ⌊tx⌋ ≤ tx < ⌊tx⌋ + 1, we immediately have

$$\frac{\lfloor tx \rfloor}{\lfloor tx \rfloor + 1} \cdot \frac{1}{x} \le \frac{t}{\lfloor tx \rfloor + 1} < \frac{1}{x}$$

and then

$$\text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} > \frac{1}{x}\right\} \le \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} > \frac{t}{\lfloor tx \rfloor + 1}\right\} \le \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor} > \frac{1}{x}\right\}.$$

It follows from the law of large numbers for uncertain random variables that

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} > \frac{1}{x}\right\} = 1 - \lim_{t \to \infty} \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} \le \frac{1}{x}\right\} = 1 - \mathcal{M}\left\{\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y) \le \frac{1}{x}\right\} = \mathcal{M}\left\{\left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1} \le x\right\}$$

and

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor} > \frac{1}{x}\right\} = 1 - \lim_{t \to \infty} \text{Ch}\left\{\frac{\lfloor tx \rfloor + 1}{\lfloor tx \rfloor} \cdot \frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} \le \frac{1}{x}\right\} = 1 - \mathcal{M}\left\{\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y) \le \frac{1}{x}\right\} = \mathcal{M}\left\{\left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1} \le x\right\}.$$

From the above three relations we get

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{S_{\lfloor tx \rfloor + 1}}{\lfloor tx \rfloor + 1} > \frac{t}{\lfloor tx \rfloor + 1}\right\} = \mathcal{M}\left\{\left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1} \le x\right\}$$

and then

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{N_t}{t} \le x\right\} = \mathcal{M}\left\{\left(\int_{-\infty}^{+\infty} f(y, \tau_1)\, \mathrm{d}\Psi(y)\right)^{-1} \le x\right\}.$$

The theorem is thus verified.

Exercise B.22: Let η1, η2, · · · be iid positive random variables, and let τ1, τ2, · · · be iid positive uncertain variables. Assume Nt is an uncertain random renewal process with interarrival times η1 + τ1, η2 + τ2, · · ·. Show that

$$\frac{N_t}{t} \to \frac{1}{E[\eta_1] + \tau_1} \tag{B.171}$$

in the sense of convergence in distribution as t → ∞.

Exercise B.23: Let η1, η2, · · · be iid positive random variables, and let τ1, τ2, · · · be iid positive uncertain variables. Assume Nt is an uncertain random renewal process with interarrival times η1τ1, η2τ2, · · ·. Show that

$$\frac{N_t}{t} \to \frac{1}{E[\eta_1]\tau_1} \tag{B.172}$$

in the sense of convergence in distribution as t → ∞.

Theorem B.34 (Yao [255]) Let η1, η2, · · · be iid random interarrival times, and let τ1, τ2, · · · be iid uncertain rewards. Assume Nt is a stochastic renewal process with interarrival times η1, η2, · · ·. Then

$$R_t = \sum_{i=1}^{N_t} \tau_i \tag{B.173}$$

is an uncertain random renewal reward process, and

$$\frac{R_t}{t} \to \frac{\tau_1}{E[\eta_1]} \tag{B.174}$$

in the sense of convergence in distribution as t → ∞.

Proof: Let Υ denote the uncertainty distribution of τ1. Then for each realization of Nt, the uncertain variable

$$\frac{1}{N_t} \sum_{i=1}^{N_t} \tau_i$$

follows the uncertainty distribution Υ. In addition, by the definition of chance distribution, we have

$$\text{Ch}\left\{\frac{R_t}{t} \le x\right\} = \int_0^1 \Pr\left\{\mathcal{M}\left\{\frac{R_t}{t} \le x\right\} \ge r\right\} \mathrm{d}r = \int_0^1 \Pr\left\{\mathcal{M}\left\{\frac{1}{N_t} \sum_{i=1}^{N_t} \tau_i \le \frac{tx}{N_t}\right\} \ge r\right\} \mathrm{d}r = \int_0^1 \Pr\left\{\Upsilon\left(\frac{tx}{N_t}\right) \ge r\right\} \mathrm{d}r$$

for any real number x. Since Nt is a stochastic renewal process with iid interarrival times η1, η2, · · · , we have

$$\frac{t}{N_t} \to E[\eta_1], \quad \text{a.s.}$$

as t → ∞. It follows from the Lebesgue dominated convergence theorem that

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{R_t}{t} \le x\right\} = \lim_{t \to \infty} \int_0^1 \Pr\left\{\Upsilon\left(\frac{tx}{N_t}\right) \ge r\right\} \mathrm{d}r = \int_0^1 \Pr\left\{\Upsilon(E[\eta_1]x) \ge r\right\} \mathrm{d}r = \Upsilon(E[\eta_1]x)$$

which is just the uncertainty distribution of τ1/E[η1]. The theorem is thus proved.
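The first identity in this proof also suggests a direct way to approximate the chance distribution of Rt/t: it equals E[Υ(tx/Nt)], which can be estimated by simulating the stochastic renewal process alone. A sketch with hypothetical distributions (exponential interarrival times, a linear L(1, 3) reward) follows.

```python
# Approximating Ch{R_t/t <= x} = E[Upsilon(t*x/N_t)] from the proof of
# Theorem B.34 by simulating the stochastic renewal process N_t.
# The exponential interarrival times and L(1, 3) reward are hypothetical.
import random

def renewal_count(t, sample_eta):
    """N_t: the number of renewals up to time t."""
    s, n = 0.0, 0
    while True:
        s += sample_eta()
        if s > t:
            return n
        n += 1

def chance_dist(x, t=1000.0, n_mc=1000):
    Upsilon = lambda z: min(max((z - 1) / 2, 0.0), 1.0)   # L(1, 3)
    total = 0.0
    for _ in range(n_mc):
        n = renewal_count(t, lambda: random.expovariate(2.0))
        total += Upsilon(t * x / n) if n > 0 else 1.0
    return total / n_mc

# E[eta_1] = 0.5, so R_t/t tends to tau_1/0.5 ~ L(2, 6); Ch{R_t/t <= 4} ~ 0.5.
print(chance_dist(4.0))
```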
Theorem B.35 (Yao [255]) Let η1, η2, · · · be iid random rewards, and let τ1, τ2, · · · be iid uncertain interarrival times. Assume Nt is an uncertain renewal process with interarrival times τ1, τ2, · · ·. Then

$$R_t = \sum_{i=1}^{N_t} \eta_i \tag{B.175}$$

is an uncertain random renewal reward process, and

$$\frac{R_t}{t} \to \frac{E[\eta_1]}{\tau_1} \tag{B.176}$$

in the sense of convergence in distribution as t → ∞.
Proof: Let Υ denote the uncertainty distribution of τ1. It follows from the definition of chance distribution that for any real number x, we have

$$\text{Ch}\left\{\frac{R_t}{t} \le x\right\} = \int_0^1 \Pr\left\{\mathcal{M}\left\{\frac{R_t}{t} \le x\right\} \ge r\right\} \mathrm{d}r = \int_0^1 \Pr\left\{\mathcal{M}\left\{\frac{1}{x} \cdot \frac{1}{N_t} \sum_{i=1}^{N_t} \eta_i \le \frac{t}{N_t}\right\} \ge r\right\} \mathrm{d}r.$$

Since Nt is an uncertain renewal process with iid interarrival times τ1, τ2, · · · , by using Theorem 13.3, we have

$$\frac{t}{N_t} \to \tau_1$$

in the sense of convergence in distribution as t → ∞. In addition, for each realization of Nt, the law of large numbers for random variables says

$$\frac{1}{N_t} \sum_{i=1}^{N_t} \eta_i \to E[\eta_1], \quad \text{a.s.}$$

as t → ∞. It follows from the Lebesgue dominated convergence theorem that

$$\lim_{t \to \infty} \text{Ch}\left\{\frac{R_t}{t} \le x\right\} = \int_0^1 \Pr\left\{1 - \Upsilon\left(\frac{E[\eta_1]}{x}\right) \ge r\right\} \mathrm{d}r = 1 - \Upsilon\left(\frac{E[\eta_1]}{x}\right)$$

which is just the uncertainty distribution of E[η1]/τ1. The theorem is thus proved.

B.14 Bibliographic Notes


Probability theory was developed by Kolmogorov [88] in 1933 for modeling
frequencies, while uncertainty theory was founded by Liu [122] in 2007 for
modeling belief degrees. However, in many cases, uncertainty and random-
ness simultaneously appear in a complex system. In order to describe this
phenomenon, chance theory was pioneered by Liu [149] in 2013 with the con-
cepts of uncertain random variable, chance measure and chance distribution.
Liu [149] also proposed the concepts of expected value and variance of uncer-
tain random variables. As an important contribution to chance theory, Liu
[150] presented an operational law of uncertain random variables. In addi-
tion, Yao and Gao [251] verified a law of large numbers for uncertain random
variables, and Hou [64] investigated the distance between uncertain random
variables.
Stochastic programming was first studied by Dantzig [28] in 1965, while
uncertain programming was first proposed by Liu [124] in 2009. In order to
model optimization problems with not only uncertainty but also randomness,
uncertain random programming was founded by Liu [150] in 2013. As exten-
sions, Zhou, Yang and Wang [279] proposed uncertain random multiobjective
programming for optimizing multiple, noncommensurable and conflicting ob-
jectives, Qin [191] proposed uncertain random goal programming in order to
satisfy as many goals as possible in the order specified, and Ke [83] proposed
uncertain random multilevel programming for studying decentralized decision
systems in which the leader and followers may have their own decision vari-
ables and objective functions. After that, uncertain random programming
was developed steadily and applied widely.

Probabilistic risk analysis dates back to 1952, when Roy [199] proposed his safety-first criterion for portfolio selection. Another important con-
tribution is the probabilistic value-at-risk methodology developed by Morgan
[171] in 1996. On the other hand, uncertain risk analysis was proposed by Liu
[128] in 2010 for evaluating the risk index that is the uncertain measure of an
uncertain system being loss-positive. More generally, in order to quantify the
risk of uncertain random systems, Liu and Ralescu [151] invented the tool
of uncertain random risk analysis. Furthermore, value-at-risk methodology
was presented by Liu and Ralescu [153] and expected loss was investigated
by Liu and Ralescu [154] for dealing with uncertain random systems.
Probabilistic reliability analysis traces back to 1944, when Pugsley [187] first proposed structural accident rates for the aeronautics industry.
Nowadays, probabilistic reliability analysis has become a widely used disci-
pline. As a new methodology, uncertain reliability analysis was developed
by Liu [128] in 2010 for evaluating the reliability index. More generally, for
dealing with uncertain random systems, Wen and Kang [232] presented the
tool of uncertain random reliability analysis.
Random graph was defined by Erdős and Rényi [38] in 1959 and indepen-
dently by Gilbert [56] at nearly the same time. As an alternative, uncertain
graph was proposed by Gao and Gao [50] in 2013 via uncertainty theory.
Assuming some edges exist with some degrees in probability measure and
others exist with some degrees in uncertain measure, Liu [138] defined the
concept of uncertain random graph in 2014.
Random network was first investigated by Frank and Hakimi [43] in 1965
for modeling communication network with random capacities. From then on,
the random network was well developed and widely applied. As a break-
through approach, uncertain network was first explored by Liu [129] in 2010
for modeling project scheduling problem with uncertain duration times. More
generally, assuming some weights are random variables and others are uncer-
tain variables, Liu [138] initiated the concept of uncertain random network
in 2014.
One of the earliest investigations of stochastic processes was by Bachelier [3] in 1900, and the study of uncertain processes was started by Liu [123] in 2008. In
order to deal with uncertain random phenomenon evolving in time, Gao and
Yao [47] presented an uncertain random process in the light of chance theory.
Gao and Yao [47] also proposed an uncertain random renewal process. As
extensions, Yao [255] discussed an uncertain random renewal reward process,
and Yao [256] investigated an uncertain random alternating renewal process.
Appendix C

Frequently Asked Questions

This appendix will answer some frequently asked questions related to prob-
ability theory and uncertainty theory as well as their applications. This
appendix will also show why fuzzy set is a wrong model in both theory and
practice. Finally, I will clarify what uncertainty is.

C.1 What is the meaning that an object follows the laws of probability theory?
We say an object (e.g. frequency) follows the laws of probability theory if
it meets not only the three axioms (Kolmogorov [88]) but also the product
probability theorem of probability theory:
Axiom 1 (Normality Axiom) Pr{Ω} = 1 for the universal set Ω;
Axiom 2 (Nonnegativity Axiom) Pr{A} ≥ 0 for any event A;
Axiom 3 (Additivity Axiom) For every countable sequence of mutually disjoint events A1, A2, · · · , we have

$$\Pr\left\{\bigcup_{i=1}^{\infty} A_i\right\} = \sum_{i=1}^{\infty} \Pr\{A_i\}; \tag{C.1}$$

Theorem (Product Probability) Let (Ωk, Ak, Prk) be probability spaces for k = 1, 2, · · · Then there is a unique probability measure Pr such that

$$\Pr\left\{\prod_{k=1}^{\infty} A_k\right\} = \prod_{k=1}^{\infty} \Pr_k\{A_k\} \tag{C.2}$$

where Ak are arbitrarily chosen events from Ak for k = 1, 2, · · · , respectively.


It is easy for us to understand why we need to justify that the object


meets the three axioms. However, some readers may wonder why we need to
justify that the object meets the product probability theorem. The reason is
that the product probability theorem cannot be deduced from Kolmogorov's axioms unless we presuppose that the product probability meets the three axioms. Would that surprise you? In fact, the same probability theory may be derived if the product probability theorem were replaced with an axiom:

Axiom 4 (Product Probability) Let (Ωk, Ak, Prk) be probability spaces for k = 1, 2, · · · The product probability measure Pr is a probability measure satisfying

$$\Pr\left\{\prod_{k=1}^{\infty} A_k\right\} = \prod_{k=1}^{\infty} \Pr_k\{A_k\} \tag{C.3}$$

where Ak are arbitrarily chosen events from Ak for k = 1, 2, · · · , respectively.


One advantage of this revision is to force the practitioners to justify the
product probability for their own problems. Please keep in mind that “an
object follows the laws of probability theory” is equivalent to “it meets the
four axioms of probability theory”, or “it meets the three axioms plus the
product probability theorem”. This assertion is stronger than “an object
meets the three axioms of Kolmogorov”. In other words, the three axioms
do not ensure that an object follows the laws of probability theory.
There exist two broad categories of interpretations of probability, one is
frequency interpretation and the other is belief interpretation. The frequency
interpretation takes the probability to be the frequency with which an event
happens (Venn [224], Reichenbach [197], von Mises [225]), while the belief
interpretation takes the probability to be the degree to which we believe an
event will happen (Ramsey [196], de Finetti [31], Savage [202]).
The debate between different interpretations has been lasting from the
nineteenth century. Personally, I agree with the frequency interpretation,
but strongly oppose the belief interpretation of probability because frequency
follows the laws of probability theory but belief degree does not. The detailed
reasons will be given in the following few sections.

C.2 Why does frequency follow the laws of probability theory?
In order to show that the frequency follows the laws of probability theory, we
must verify that the frequency meets not only the three axioms of Kolmogorov
but also the product probability theorem.
First, the frequency of the universal set takes value 1 because the uni-
versal set always happens. Thus the frequency meets the normality axiom.
Second, it is obvious that the frequency is a number between 0 and 1. Thus
the frequency of any event is nonnegative, and the frequency meets the nonnegativity axiom. Third, for any disjoint events A and B, if A happens α


times and B happens β times, it is clear that the union A ∪ B happens α + β
times. This means the frequency is additive and then meets the additivity
axiom. Finally, numerous experiments showed that if A and B are two events
from different probability spaces (essentially they come from two different ex-
periments) and happen α and β times (in percentage), respectively, then the
product A × B happens α × β times. See Figure C.1. Thus the frequency
meets the product probability theorem. Hence the frequency does follow the
laws of probability theory. In fact, frequency is the only empirical basis for
probability theory.

Figure C.1: Let A and B be two events from different probability spaces
(essentially they come from two different experiments). If A happens α times
and B happens β times, then the product A × B happens α × β times, where
α and β are understood as percentage numbers.

C.3 Why is probability theory unable to model belief degree?
In order to obtain the belief degree of some event, the decision maker needs
to launch a consultation process with a domain expert. The decision maker is
the user of belief degree while the domain expert is the holder. For justifying
whether probability theory is able to model belief degree or not, we must
check if the belief degree follows the laws of probability theory.
First, “1” means “complete belief ” and we cannot be in more belief than
“complete belief ”. This means the belief degree of any event cannot exceed
1. In particular, the belief degree of the universal set takes value 1 because it
is completely believable. Hence the belief degree meets the normality axiom
of probability theory.
Second, the belief degree meets the nonnegativity axiom because “0”
means “complete disbelief ” and we cannot be in more disbelief than “com-
plete disbelief ”.
Third, de Finetti [31] interpreted the belief degree of an event as the fair
betting ratio (price/stake) of a bet that offers $1 if the event happens and

nothing otherwise. For example, if the domain expert thinks the belief degree
of an event A is α, then the price of the bet about A is α × 100¢. Here the
word “fair” means both the domain expert and the decision maker are willing
to either buy or sell this bet at this price.
Besides, Ramsey [196] suggested a Dutch book argument¹ that says the belief degree is irrational if there exists a book that guarantees either the domain expert or the decision maker a loss. For the moment, we are assumed to agree with it.
Let A1 be a bet that offers $1 if A1 happens, and let A2 be a bet that
offers $1 if A2 happens. Assume the belief degrees of A1 and A2 are α1
and α2 , respectively. This means the prices of A1 and A2 are $α1 and $α2 ,
respectively. Now we consider the bet A1 ∪ A2 that offers $1 if either A1 or
A2 happens, and write the belief degree of A1 ∪ A2 by α. This means the
price of A1 ∪ A2 is $α. If α > α1 + α2 , then you (i) sell A1 , (ii) sell A2 , and
(iii) buy A1 ∪ A2 . It is clear that you are guaranteed to lose α − α1 − α2 > 0.
Thus there exists a Dutch book and the assumption α > α1 + α2 is irrational.
If α < α1 + α2 , then you (i) buy A1 , (ii) buy A2 , and (iii) sell A1 ∪ A2 . It is
clear that you are guaranteed to lose α1 + α2 − α > 0. Thus there exists a
Dutch book and the assumption α < α1 + α2 is irrational. Hence we have to
assume α = α1 + α2 and the belief degree meets the additivity axiom (but
this assertion is questionable because you cannot reverse “buy” and “sell”
arbitrarily due to the unequal status of the decision maker and the domain
expert).
Until now we have verified that the belief degree meets the three axioms
of probability theory. Almost all subjectivists stop here and assert that belief
degree follows the laws of probability theory. Unfortunately, the evidence is
not enough for this conclusion because we have not verified whether the belief
degree meets the product probability theorem or not.
Recall the example of truck-cross-over-bridge on Page 6. Let Ai represent that the ith bridge strength is greater than 90 tons, i = 1, 2, · · · , 50, respectively. For each i, since your belief degree for Ai is 75%, you are willing
to pay 75¢ for the bet that offers $1 if Ai happens. If the belief degree did
follow the laws of probability theory, then it would be fair to pay

$$\underbrace{75\% \times 75\% \times \cdots \times 75\%}_{50} \times 100\text{¢} \approx 0.00006\text{¢} \tag{C.4}$$
for a bet that offers $1 if A1 × A2 × · · · × A50 happens. Notice that the odds are over a million and A1 × A2 × · · · × A50 definitely happens because the real
strengths of the 50 bridges are assumed to range from 95 to 110 tons. All
¹ A Dutch book in a betting market is a set of bets which guarantees a loss, regardless

of the outcome of the gamble. For example, let A be a bet that offers $1 if A happens, let
B be a bet that offers $1 if B happens, and let A ∨ B be a bet that offers $1 if either A or
B happens. If the prices of A, B and A ∨ B are 30¢, 40¢ and 80¢, respectively, and you (i)
sell A, (ii) sell B, and (iii) buy A ∨ B, then you are guaranteed to lose 10¢ no matter what
happens. Thus there exists a Dutch book, and the prices are considered to be irrational.

of us will be happy to bet on it. But who is willing to offer such a bet? It
seems that no one does, and then the belief degree of A1 × A2 × · · · × A50 is
not the product of the individual belief degrees. Hence the belief degree does not follow
the laws of probability theory.
It is thus concluded that the belief interpretation of probability is un-
acceptable. The main mistake of subjectivists is that they only justify that the belief degree meets the three axioms of probability theory, but do not check
if it meets the product probability theorem.

C.4 Why should belief degree be understood as an oddsmaker's betting ratio rather than a fair one?

There are many similarities between a betting market and a consultation


process. First, the oddsmaker and the bettor are two sides in the betting
market, while the domain expert and the decision maker are two sides in the
consultation process. Second, the oddsmaker is the maker of betting ratio
while the domain expert is the holder of belief degree. Third, the bettor
is the buyer of bets while the decision maker is the user of belief degrees.
Fourth, the oddsmaker wants to get a commission while the domain expert is conservative.
The status of the domain expert and the decision maker is unequal and
they cannot exchange roles with each other. Because of this conservatism, human beings usually overweight unlikely events. Thus the decision maker cannot expect the domain expert to provide a "fair" belief degree, just as the bettor cannot expect the oddsmaker to provide a "fair" bet-
ting ratio (“fair” implies the sum of the betting ratios of all outcomes is just
1, but the real sum is usually between 1.1 and 1.3). This is the reason why I
do not agree with de Finetti on fair betting ratio.
Instead, I think the belief degree should be understood as an oddsmaker’s
betting ratio that is not “fair” at all. The bettor (decision maker) can choose
to buy the bets from the oddsmaker (domain expert) but cannot sell them
at these prices. Meanwhile, the oddsmaker (domain expert) sells the bets to the bettor (decision maker) but never buys them. In other words, the bettor
(decision maker) is always a buyer while the oddsmaker (domain expert) is
always a seller.
The oddsmaker is never willing to accept a negative commission. There-
fore, I would like to suggest a negative commission argument that says the
belief degree is irrational if there exists a book that guarantees the oddsmaker
(domain expert) a loss. Keep in mind that the decision maker and the do-
main expert cannot exchange their roles due to the unequal status of them.
It is thus concluded that the belief degree is considered to be irrational if it
makes the domain expert accept a sure-loss book.

C.5 Why does belief degree follow the laws of uncertainty theory?
In order to justify that the belief degree follows the laws of uncertainty theory,
we must show that it meets the four axioms of uncertainty theory (Liu
[122][125]):
Axiom 1 (Normality Axiom) M{Γ} = 1 for the universal set Γ;
Axiom 2 (Duality Axiom) M{Λ} + M{Λc } = 1 for any event Λ;
Axiom 3 (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, · · · , we have

$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty} \Lambda_i\right\} \le \sum_{i=1}^{\infty} \mathcal{M}\{\Lambda_i\}; \tag{C.5}$$

Axiom 4 (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, · · · The product uncertain measure M is an uncertain measure satisfying

$$\mathcal{M}\left\{\prod_{k=1}^{\infty} \Lambda_k\right\} = \bigwedge_{k=1}^{\infty} \mathcal{M}_k\{\Lambda_k\} \tag{C.6}$$

where Λk are arbitrarily chosen events from Lk for k = 1, 2, · · · , respectively.


First, “1” means “complete belief ” and we cannot be in more belief than
“complete belief ”. This means the belief degree of any event cannot exceed
1. In particular, the belief degree of the universal set takes value 1 because it
is completely believable. Thus the belief degree meets the normality axiom
of uncertainty theory.
Second, the law of truth conservation says the belief degrees of an event
and its negation sum to unity. For example, if a domain expert says an event
is true with belief degree α, then all of us will think that the event is false
with belief degree 1 − α. The belief degree is considered to be irrational if
it violates the law of truth conservation. Thus the belief degree meets the
duality axiom. In practice, this law is easy for human beings to obey.
Third, let Λ1 be a bet that offers $1 if Λ1 happens, and let Λ2 be a bet
that offers $1 if Λ2 happens. Assume the belief degrees of Λ1 and Λ2 are α1
and α2 , respectively. This means the prices of Λ1 and Λ2 are $α1 and $α2 ,
respectively. Now we consider the bet Λ1 ∪ Λ2 that offers $1 if either Λ1 or
Λ2 happens, and write the belief degree of Λ1 ∪ Λ2 by α. It follows from the
duality axiom that the belief degree of (Λ1 ∪ Λ2 )c is 1 − α. This means the
price of the bet about (Λ1 ∪ Λ2 )c (i.e., a bet that offers $1 if neither Λ1 nor
Λ2 happens) is $(1 − α). If α > α1 + α2 , then the decision maker buys Λ1 ,
Λ2 and (Λ1 ∪ Λ2 )c from the domain expert. It is clear the domain expert is
guaranteed to lose

1 − α1 − α2 − (1 − α) = α − α1 − α2 > 0. (C.7)

It follows from the negative commission argument that the assumption α >
α1 + α2 is irrational. Hence we have to assume α ≤ α1 + α2 and the belief
degree meets the subadditivity axiom. Note that the decision maker cannot
sell the bets to the domain expert due to their unequal status.
Finally, regarding the product axiom, let us recall the example of truck-
cross-over-bridge on Page 6. Suppose Ai represents that the ith bridge strength is greater than 90 tons, i = 1, 2, · · · , 50, respectively. For each i, since the
belief degree of Ai is 75%, the price of the bet about Ai is 75¢. It is reasonable
to pay
$$\underbrace{75\text{¢} \wedge 75\text{¢} \wedge \cdots \wedge 75\text{¢}}_{50} = 75\text{¢} \tag{C.8}$$
for a bet that offers $1 if A1 × A2 × · · · × A50 happens. Thus the belief degree
meets the product axiom of uncertainty theory.
Hence the belief degree follows the laws of uncertainty theory. It is easy to
prove that if a set of belief degrees violate the laws of uncertainty theory, then
there exists a book that guarantees the domain expert a loss. It is also easy
to prove that if a set of belief degrees follow the laws of uncertainty theory,
then there does not exist any book that guarantees the domain expert a loss.

C.6 What is the difference between probability theory and uncertainty theory?
The main difference between probability theory (Kolmogorov [88]) and un-
certainty theory (Liu [122]) is that the probability measure of a product of
events is the product of the probability measures of the individual events,
i.e.,
Pr{A × B} = Pr{A} × Pr{B}, (C.9)
and the uncertain measure of a product of events is the minimum of the
uncertain measures of the individual events, i.e.,

M{A × B} = M{A} ∧ M{B}. (C.10)

This difference implies that random variables and uncertain variables obey
different operational laws.
Probability theory and uncertainty theory are complementary mathemat-
ical systems that provide two acceptable mathematical models to deal with
the indeterminate world. Probability is interpreted as frequency, while un-
certainty is interpreted as personal belief degree.

C.7 What goes wrong with Cox’s theorem?


Some people affirm that probability theory is the only legitimate approach.
Perhaps this misconception is rooted in Cox’s theorem [26] that any measure

of belief is “isomorphic” to a probability measure. However, uncertain mea-


sure is considered coherent but not isomorphic to any probability measure.
What goes wrong with Cox’s theorem? Personally I think that Cox’s theo-
rem presumes the truth value of conjunction P ∧ Q is a twice differentiable
function f of truth values of the two propositions P and Q, i.e.,

T (P ∧ Q) = f (T (P ), T (Q)) (C.11)

and then excludes uncertain measure from its start because the function
f (x, y) = x ∧ y used in uncertainty theory is not differentiable with respect
to x and y. In fact, there does not exist any evidence that the truth value
of conjunction is completely determined by the truth values of individual
propositions, let alone a twice differentiable function.
On the one hand, it is recognized that probability theory is a legitimate
approach to deal with the frequency. On the other hand, at any rate, it is
impossible that probability theory is the unique one for modeling indetermi-
nacy. In fact, it has been demonstrated in this book that uncertainty theory
is successful to deal with belief degrees.

C.8 What is the difference between possibility theory and uncertainty theory?
The essential difference between possibility theory (Zadeh [263]) and uncer-
tainty theory (Liu [122]) is that the former assumes

Pos{A ∪ B} = Pos{A} ∨ Pos{B} (C.12)

for any events A and B no matter if they are independent or not, and the
latter holds
M{A ∪ B} = M{A} ∨ M{B} (C.13)
only for independent events A and B. A lot of surveys showed that the
measure of a union of events is usually greater than the maximum of the
measures of individual events when they are not independent. This fact
states that human brains do not behave fuzziness.
Both uncertainty theory and possibility theory attempt to model belief
degrees, where the former uses the tool of uncertain measure and the latter
uses the tool of possibility measure. Thus they are complete competitors.

C.9 Why is fuzzy variable unable to model indeterminate quantity?
A fuzzy variable is a function from a possibility space to the set of real
numbers (Nahmias [172]). Some people think that fuzzy variable is a suitable
tool for modeling indeterminate quantity. Is it really true? Unfortunately,
the answer is negative.

Let us reconsider the counterexample of truck-cross-over-bridge (Liu [131]).


If the bridge strength is regarded as a fuzzy variable ξ, then we may assign
it a membership function, say

$$\mu(x) = \begin{cases} 0, & \text{if } x \le 80 \\ (x - 80)/10, & \text{if } 80 \le x \le 90 \\ 1, & \text{if } 90 \le x \le 110 \\ (120 - x)/10, & \text{if } 110 \le x \le 120 \\ 0, & \text{if } x \ge 120 \end{cases} \tag{C.14}$$

that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not
argue why I choose such a membership function because it is not important for
the focus of debate. Based on the membership function µ and the definition
of possibility measure
$$\text{Pos}\{\xi \in B\} = \sup_{x \in B} \mu(x), \tag{C.15}$$

it is easy for us to infer that


Pos{"bridge strength" = 100} = 1, (C.16)
Pos{"bridge strength" ≠ 100} = 1. (C.17)
Thus we immediately conclude the following three propositions:
(a) the bridge strength is “exactly 100 tons” with possibility measure 1,
(b) the bridge strength is “not 100 tons” with possibility measure 1,
(c) “exactly 100 tons” is as possible as “not 100 tons”.
The first proposition says we are 100% sure that the bridge strength is “ex-
actly 100 tons”, neither less nor more. What a coincidence it should be!
It is doubtless that the belief degree of “exactly 100 tons” is almost zero,
and nobody is so naive to expect that “exactly 100 tons” is the true bridge
strength. The second proposition sounds good. The third proposition says
“exactly 100 tons” and “not 100 tons” have the same possibility measure.
Thus we have to regard them “equally likely”. Consider a bet: you get $1 if
the bridge strength is “exactly 100 tons”, and pay $1 if the bridge strength
is "not 100 tons". Do you think the bet is fair? It seems that no one thinks
so. Hence the conclusion (c) is unacceptable because “exactly 100 tons” is
almost impossible compared with “not 100 tons”. This paradox shows that
those indeterminate quantities like the bridge strength cannot be quantified
by possibility measure and then they are not fuzzy concepts.

C.10 Why is fuzzy set unable to model unsharp concept?
A fuzzy set is defined by its membership function µ which assigns to each
element x a real number µ(x) in the interval [0, 1], where the value of µ(x)

represents the grade of membership of x in the fuzzy set. This definition was
given by Zadeh [260] in 1965. Although I strongly respect Professor Lotfi
Zadeh’s achievements, I disagree with him on the topic of fuzzy set.
Up to now, fuzzy set theory has not evolved as a mathematical system
because of its inconsistency. Theoretically, it is undeniable that there ex-
ist too many contradictions in fuzzy set theory. In practice, perhaps some
people believe that fuzzy set is a suitable tool to model unsharp concepts.
Unfortunately, it is not true. In order to convince the reader, let us examine
the concept of “young”. Without loss of generality, assume “young” has a
trapezoidal membership function (15, 20, 30, 40), i.e.,

$$\mu(x) = \begin{cases} 0, & \text{if } x \le 15 \\ (x - 15)/5, & \text{if } 15 \le x \le 20 \\ 1, & \text{if } 20 \le x \le 30 \\ (40 - x)/10, & \text{if } 30 \le x \le 40 \\ 0, & \text{if } x \ge 40. \end{cases} \tag{C.18}$$

It follows from the fuzzy set theory that “young” takes any values of α-cut
of µ, and then we infer that

Pos{[20yr, 30yr] ⊂ “young”} = 1, (C.19)

Pos{“young” ⊂ [20yr, 30yr]} = 1. (C.20)


Thus we immediately conclude two propositions:

(a) “young” includes [20yr, 30yr] with possibility measure 1,


(b) “young” is included in [20yr, 30yr] with possibility measure 1.

The first proposition sounds good. However, the second proposition seems
unacceptable because the belief degree that “young” is between 20yr to 30yr
is impossible to achieve up to 1 (in fact, the belief degree should be almost 0
due to the fact that 19yr and 31yr are also nearly sure to be “young”). This
result says that “young” cannot be regarded as a fuzzy set.

C.11 Does the stock price follow stochastic differential equation or uncertain differential equation?
The origin of stochastic finance theory can be traced to Louis Bachelier’s
doctoral dissertation Théorie de la Speculation in 1900. However, Bache-
lier’s work had little impact for more than a half century. After Kiyosi Ito
invented stochastic calculus [66] in 1944 and stochastic differential equation
[67] in 1951, stochastic finance theory was well developed among others by
Samuelson [201], Black and Scholes [8] and Merton [168] during the 1960s
and 1970s.

Traditionally, stochastic finance theory presumes that the stock price (in-
cluding interest rate and currency exchange rate) follows Ito’s stochastic dif-
ferential equation. Is it really reasonable? In fact, this widely accepted
presumption was continuously challenged by many scholars.
As a paradox given by Liu [134], let us assume that the stock price Xt
follows the stochastic differential equation,

dXt = eXt dt + σXt dWt (C.21)

where e is the log-drift, σ is the log-diffusion, and Wt is a Wiener process.


Let us see what will happen with such an assumption. It follows from the
stochastic differential equation (C.21) that Xt is a geometric Wiener process,
i.e.,
Xt = X0 exp((e − σ 2 /2)t + σWt ) (C.22)
from which we derive

$$W_t = \frac{\ln X_t - \ln X_0 - (e - \sigma^2/2)t}{\sigma} \tag{C.23}$$

whose increment is

$$\Delta W_t = \frac{\ln X_{t+\Delta t} - \ln X_t - (e - \sigma^2/2)\Delta t}{\sigma}. \tag{C.24}$$

Write

$$A = -\frac{(e - \sigma^2/2)\Delta t}{\sigma}. \tag{C.25}$$
Note that the stock price Xt is actually a step function of time with a finite
number of jumps although it looks like a curve. During a fixed period (e.g.
one week), without loss of generality, we assume that Xt is observed to have
100 jumps. Now we divide the period into 10000 equal intervals. Then we
may observe 10000 samples of Xt . It follows from (C.24) that ∆Wt has 10000
samples that consist of 9900 A's and 100 other numbers:

$$\underbrace{A, A, \cdots, A}_{9900},\ \underbrace{B, C, \cdots, Z}_{100}. \tag{C.26}$$

Nobody can believe that those 10000 samples follow a normal probability
distribution with expected value 0 and variance ∆t. This fact is in contra-
diction with the property of Wiener process that the increment ∆Wt is a
normal random variable. Therefore, the real stock price Xt does not follow
the stochastic differential equation.
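The counting argument is easy to reproduce in a few lines of code; the simulation below (with hypothetical parameters, not from the book) builds a step-function price path with 100 jumps and recovers the 10000 increments via (C.24), almost all of which collapse to the single value A of (C.25).

```python
# A small simulation of the paradox (hypothetical parameters): a step-function
# price path with 100 jumps yields 10000 increments of W_t via (C.24),
# 9900 of which are the identical number A of (C.25).
import random

e, sigma, dt = 0.0, 0.02, 1.0 / 10000        # log-drift, log-diffusion, grid
jump_times = set(random.sample(range(1, 10001), 100))
log_x = [0.0]
for k in range(1, 10001):
    step = random.gauss(0, 0.01) if k in jump_times else 0.0
    log_x.append(log_x[-1] + step)           # the price jumps only 100 times
A = -(e - sigma ** 2 / 2) * dt / sigma
dW = [(log_x[k + 1] - log_x[k] - (e - sigma ** 2 / 2) * dt) / sigma
      for k in range(10000)]
print(sum(1 for d in dW if abs(d - A) < 1e-12))  # prints 9900
```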
Perhaps some people think that the stock price does behave like a geomet-
ric Wiener process (or Ornstein-Uhlenbeck process) in macroscopy although
they recognize the paradox in microscopy. However, as the very core of
stochastic finance theory, Ito’s calculus is just built on the microscopic struc-
ture (i.e., the differential dWt ) of Wiener process rather than macroscopic


Figure C.2: There does not exist any continuous probability distribution
(curve) that can approximate to the frequency (histogram) of ∆Wt . Hence
it is impossible that the real stock price Xt follows any Ito’s stochastic dif-
ferential equation.

structure. More precisely, Ito’s calculus is dependent on the presumption


that dWt is a normal random variable with expected value 0 and variance
dt. This unreasonable presumption is what causes the second order term in
Ito’s formula,

∂h ∂h 1 ∂2h
dXt = (t, Wt )dt + (t, Wt )dWt + (t, Wt )dt. (C.27)
∂t ∂w 2 ∂w2
In fact, the increment of stock price is impossible to follow any continuous
probability distribution.
On the basis of the above paradox, personally I do not think Ito’s calculus can serve as the essential tool of finance theory, because Ito’s stochastic differential equation cannot model stock prices. As a substitute, uncertain calculus may be a potential mathematical foundation of finance theory. We will have a theory of uncertain finance if the stock price, interest rate, and exchange rate are assumed to follow uncertain differential equations.
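For comparison, the following is a sketch of the substitution being proposed, based on the uncertain stock model treated in the uncertain finance chapter of this book: the Wiener process $W_t$ is replaced by a Liu process $C_t$, giving

$$dX_t = eX_t\,dt + \sigma X_t\,dC_t, \qquad X_t = X_0 \exp(et + \sigma C_t),$$

where the solution contains no $\sigma^2/2$ correction because the chain rule of uncertain calculus has no second-order term analogous to that of (C.27).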

C.12 How did “uncertainty” evolve over the past 100 years?
After the word “randomness” came to represent probabilistic phenomena, Knight (1921) and Keynes (1936) began using the word “uncertainty” for non-probabilistic phenomena. The academic community also calls this Knightian uncertainty, Keynesian uncertainty, or true uncertainty. Unfortunately, it seems impossible to develop a mathematical theory for such a broad class of uncertainty, because “non-probability” covers too many things. This vagueness prevented “uncertainty” from becoming a scientific term. Despite that, it is recognized that Knight and Keynes made great progress in breaking the monopoly of probability theory.
However, a major retrogression arose from Cox (1946), whose theorem asserts that a human’s belief degree is isomorphic to a probability measure. Many people do not notice that Cox’s theorem rests on an unreasonable assumption, and therefore mistakenly believe that uncertainty and probability are synonymous. This idea remains alive today under the name of subjective probability (de Finetti, 1937). Yet numerous experiments have demonstrated that belief degrees do not follow the laws of probability theory.
An influential exploration by Zadeh (1965) was fuzzy set theory, which was widely said to have been successfully applied in many areas of our life. However, fuzzy set theory has neither evolved into a mathematical system nor become a suitable tool for rationally modeling belief degrees. The main mistake of fuzzy set theory is its assumption that the belief degree of a union of events is the maximum of the belief degrees of the individual events, no matter whether they are independent or not. Many surveys have shown that human brains do not exhibit fuzziness in the sense of Zadeh.
The latest development was uncertainty theory founded by Liu (2007).
Nowadays, uncertainty theory has become a branch of pure mathematics
that is not only a formal study of an abstract structure (i.e., uncertainty
space) but also applicable to modeling belief degrees. Perhaps some readers
may complain that I never clarify what uncertainty is. I think we can answer
it this way. Mathematically, uncertainty is anything that follows the laws of
uncertainty theory. Practically, uncertainty is anything that is described by
belief degrees. From then on, “uncertainty” became a scientific term on the basis of uncertainty theory.
Bibliography

[1] Alefeld G, Herzberger J, Introduction to Interval Computations, Academic Press, New York, 1983.
[2] Atanassov KT, Intuitionistic Fuzzy Sets: Theory and Applications, Physica-
Verlag, Heidelberg, 1999.
[3] Bachelier L, Théorie de la spéculation, Annales Scientifiques de L’École Nor-
male Supérieure, Vol.17, 21-86, 1900.
[4] Barbacioru IC, Uncertainty functional differential equations for finance, Sur-
veys in Mathematics and its Applications, Vol.5, 275-284, 2010.
[5] Bedford T, and Cooke MR, Probabilistic Risk Analysis, Cambridge University
Press, 2001.
[6] Bellman RE, Dynamic Programming, Princeton University Press, New Jersey,
1957.
[7] Bellman RE, and Zadeh LA, Decision making in a fuzzy environment, Man-
agement Science, Vol.17, 141-164, 1970.
[8] Black F, and Scholes M, The pricing of option and corporate liabilities, Jour-
nal of Political Economy, Vol.81, 637-654, 1973.
[9] Bouchon-Meunier B, Mesiar R, and Ralescu DA, Linear non-additive set-
functions, International Journal of General Systems, Vol.33, No.1, 89-98,
2004.
[10] Buckley JJ, Possibility and necessity in optimization, Fuzzy Sets and Systems,
Vol.25, 1-13, 1988.
[11] Charnes A, and Cooper WW, Management Models and Industrial Applica-
tions of Linear Programming, Wiley, New York, 1961.
[12] Chen XW, and Liu B, Existence and uniqueness theorem for uncertain dif-
ferential equations, Fuzzy Optimization and Decision Making, Vol.9, No.1,
69-81, 2010.
[13] Chen XW, American option pricing formula for uncertain financial market,
International Journal of Operations Research, Vol.8, No.2, 32-37, 2011.
[14] Chen XW, and Ralescu DA, A note on truth value in uncertain logic, Expert
Systems with Applications, Vol.38, No.12, 15582-15586, 2011.
[15] Chen XW, and Dai W, Maximum entropy principle for uncertain variables,
International Journal of Fuzzy Systems, Vol.13, No.3, 232-236, 2011.

[16] Chen XW, Kar S, and Ralescu DA, Cross-entropy measure of uncertain vari-
ables, Information Sciences, Vol.201, 53-60, 2012.
[17] Chen XW, Variation analysis of uncertain stationary independent increment
process, European Journal of Operational Research, Vol.222, No.2, 312-316,
2012.
[18] Chen XW, and Ralescu DA, B-spline method of uncertain statistics with
applications to estimate travel distance, Journal of Uncertain Systems, Vol.6,
No.4, 256-262, 2012.
[19] Chen XW, Liu YH, and Ralescu DA, Uncertain stock model with periodic
dividends, Fuzzy Optimization and Decision Making, Vol.12, No.1, 111-123,
2013.
[20] Chen XW, and Ralescu DA, Liu process and uncertain calculus, Journal of
Uncertainty Analysis and Applications, Vol.1, Article 3, 2013.
[21] Chen XW, and Gao J, Uncertain term structure model of interest rate, Soft
Computing, Vol.17, No.4, 597-604, 2013.
[22] Chen XW, Li XF, and Ralescu DA, A note on uncertain sequence, Inter-
national Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,
Vol.22, No.2, 305-314, 2014.
[23] Chen Y, Fung RYK, and Yang J, Fuzzy expected value modelling approach for
determining target values of engineering characteristics in QFD, International
Journal of Production Research, Vol.43, No.17, 3583-3604, 2005.
[24] Chen Y, Fung RYK, and Tang JF, Rating technical attributes in fuzzy QFD
by integrating fuzzy weighted average method and fuzzy expected value op-
erator, European Journal of Operational Research, Vol.174, No.3, 1553-1566,
2006.
[25] Choquet G, Theory of capacities, Annales de l’Institut Fourier, Vol.5, 131-295, 1954.
[26] Cox RT, Probability, frequency and reasonable expectation, American Jour-
nal of Physics, Vol.14, 1-13, 1946.
[27] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathe-
matical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[28] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[29] Das B, Maity K, Maiti A, A two warehouse supply-chain model under possi-
bility/necessity/credibility measures, Mathematical and Computer Modelling,
Vol.46, No.3-4, 398-409, 2007.
[30] de Cooman G, Possibility theory I-III, International Journal of General Sys-
tems, Vol.25, 291-371, 1997.
[31] de Finetti B, La prévision: ses lois logiques, ses sources subjectives, Annales
de l’Institut Henri Poincaré, Vol.7, 1-68, 1937.
[32] de Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[33] Dempster AP, Upper and lower probabilities induced by a multivalued map-
ping, Annals of Mathematical Statistics, Vol.38, No.2, 325-339, 1967.
[34] Dijkstra EW, A note on two problems in connection with graphs, Numerische Mathematik, Vol.1, No.1, 269-271, 1959.
[35] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[36] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[37] Elkan C, The paradoxical controversy over fuzzy logic, IEEE Expert, Vol.9,
No.4, 47-49, 1994.
[38] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6,
290-297, 1959.
[39] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[40] Fei WY, Optimal control of uncertain stochastic systems with Markovian
switching and its applications to portfolio decisions, Cybernetics and Systems,
Vol.45, 69-88, 2014.
[41] Feng Y, and Yang LX, A two-objective fuzzy k-cardinality assignment prob-
lem, Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-
244, 2006.
[42] Feng YQ, Wu WC, Zhang BM, and Li WY, Power system operation risk
assessment using credibility theory, IEEE Transactions on Power Systems,
Vol.23, No.3, 1309-1318, 2008.
[43] Frank H, and Hakimi SL, Probabilistic flows through a communication net-
work, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[44] Fung RYK, Chen YZ, and Chen L, A fuzzy expected value-based goal pro-
graming model for product planning using quality function deployment, En-
gineering Optimization, Vol.37, No.6, 633-647, 2005.
[45] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[46] Gao J, Uncertain bimatrix game with applications, Fuzzy Optimization and
Decision Making, Vol.12, No.1, 65-78, 2013.
[47] Gao J, and Yao K, Some concepts and theorems of uncertain random process,
International Journal of Intelligent Systems, to be published.
[48] Gao X, Some properties of continuous uncertain measure, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.17, No.3, 419-
426, 2009.
[49] Gao X, Gao Y, and Ralescu DA, On Liu’s inference rule for uncertain sys-
tems, International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, Vol.18, No.1, 1-11, 2010.
[50] Gao XL, and Gao Y, Connectedness index of uncertain graphs, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.21, No.1,
127-137, 2013.
[51] Gao Y, Shortest path problem with uncertain arc lengths, Computers and
Mathematics with Applications, Vol.62, No.6, 2591-2600, 2011.
[52] Gao Y, Uncertain inference control for balancing inverted pendulum, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 481-492, 2012.
[53] Gao Y, Existence and uniqueness theorem on uncertain differential equations
with local Lipschitz condition, Journal of Uncertain Systems, Vol.6, No.3,
223-232, 2012.
[54] Ge XT, and Zhu Y, Existence and uniqueness theorem for uncertain delay
differential equations, Journal of Computational Information Systems, Vol.8,
No.20, 8341-8347, 2012.
[55] Ge XT, and Zhu Y, A necessary condition of optimality for uncertain optimal
control problem, Fuzzy Optimization and Decision Making, Vol.12, No.1, 41-
51, 2013.
[56] Gilbert EN, Random graphs, Annals of Mathematical Statistics, Vol.30, No.4,
1141-1144, 1959.
[57] Guo HY, and Wang XS, Variance of uncertain random variables, Journal of
Uncertainty Analysis and Applications, Vol.2, Article 6, 2014.
[58] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[59] Ha MH, Li Y, and Wang XF, Fuzzy knowledge representation and reasoning
using a generalized fuzzy petri net and a similarity measure, Soft Computing,
Vol.11, No.4, 323-327, 2007.
[60] Han SW, Peng ZX, and Wang SQ, The maximum flow problem of uncertain
network, Information Sciences, Vol.265, 167-175, 2014.
[61] He Y, and Xu JP, A class of random fuzzy programming model and its ap-
plication to vehicle routing problem, World Journal of Modelling and Simu-
lation, Vol.1, No.1, 3-11, 2005.
[62] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[63] Hou YC, Subadditivity of chance measure, Journal of Uncertainty Analysis
and Applications, Vol.2, Article 14, 2014.
[64] Hou YC, Distance between uncertain random variables, http://orsc.edu.cn/
online/130510.pdf.
[65] Inuiguchi M, and Ramı́k J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic pro-
gramming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[66] Ito K, Stochastic integral, Proceedings of the Japan Academy Series A, Vol.20,
No.8, 519-524, 1944.
[67] Ito K, On stochastic differential equations, Memoirs of the American Math-
ematical Society, No.4, 1-51, 1951.
[68] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[69] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[70] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[71] Jaynes ET, Probability Theory: The Logic of Science, Cambridge University
Press, 2003.
[72] Jeffreys H, Theory of Probability, Oxford University Press, 1961.
[73] Ji XY, and Shao Z, Model and algorithm for bilevel newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[74] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[75] Jiao DY, and Yao K, An interest rate model in uncertain environment, Soft
Computing, to be published.
[76] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main develop-
ments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[77] Kacprzyk J, and Yager RR, Linguistic summaries of data using fuzzy logic,
International Journal of General Systems, Vol.30, 133-154, 2001.
[78] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[79] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[80] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of ran-
domness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[81] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Applied Mathematical Modelling, Vol.34, No.2, 301-308, 2010.
[82] Ke H, Ma WM, Gao X, and Xu WH, New fuzzy models for time-cost trade-
off problem, Fuzzy Optimization and Decision Making, Vol.9, No.2, 219-231,
2010.
[83] Ke H, and Su TY, Uncertain random multilevel programming with applica-
tion to product control problem, Soft Computing, to be published.
[84] Keynes JM, The General Theory of Employment, Interest, and Money, Har-
court, New York, 1936.
[85] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-
182, 1986.
[86] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, Prentice-Hall, Englewood Cliffs, 1988.
[87] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[88] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[89] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[90] Kwakernaak H, Fuzzy random variables–I: Definitions and theorems, Infor-
mation Sciences, Vol.15, 1-29, 1978.
[91] Kwakernaak H, Fuzzy random variables–II: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[92] Li J, Xu JP, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[93] Li PK, and Liu B, Entropy of credibility distributions for fuzzy variables,
IEEE Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[94] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[95] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[96] Li X, and Liu B, Maximum entropy principle for fuzzy variables, Interna-
tional Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[97] Li X, and Liu B, On distance between fuzzy variables, Journal of Intelligent
& Fuzzy Systems, Vol.19, No.3, 197-204, 2008.
[98] Li X, and Liu B, Chance measure for hybrid events with fuzziness and ran-
domness, Soft Computing, Vol.13, No.2, 105-115, 2009.
[99] Li X, and Liu B, Foundation of credibilistic logic, Fuzzy Optimization and
Decision Making, Vol.8, No.1, 91-102, 2009.
[100] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[101] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[102] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[103] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[104] Liu B, and Iwamura K, Chance constrained programming with fuzzy param-
eters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[105] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[106] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[107] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transac-
tions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[108] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[109] Liu B, Uncertain Programming, Wiley, New York, 1999.
[110] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[111] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[112] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[113] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[114] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Hei-
delberg, 2002.
[115] Liu B, Toward fuzzy optimization without mathematical ambiguity, Fuzzy
Optimization and Decision Making, Vol.1, No.1, 43-63, 2002.
[116] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[117] Liu B, Random fuzzy dependent-chance programming and its hybrid intelli-
gent algorithm, Information Sciences, Vol.141, Nos.3-4, 259-271, 2002.
[118] Liu B, Inequalities and convergence concepts of fuzzy and rough variables,
Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.
[119] Liu B, Uncertainty Theory: An Introduction to its Axiomatic Foundations,
Springer-Verlag, Berlin, 2004.
[120] Liu B, A survey of credibility theory, Fuzzy Optimization and Decision Mak-
ing, Vol.5, No.4, 387-408, 2006.
[121] Liu B, A survey of entropy of fuzzy variables, Journal of Uncertain Systems,
Vol.1, No.1, 4-13, 2007.
[122] Liu B, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.
[123] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Un-
certain Systems, Vol.2, No.1, 3-16, 2008.
[124] Liu B, Theory and Practice of Uncertain Programming, 2nd edn, Springer-
Verlag, Berlin, 2009.
[125] Liu B, Some research problems in uncertainty theory, Journal of Uncertain
Systems, Vol.3, No.1, 3-10, 2009.
[126] Liu B, Uncertain entailment and modus ponens in the framework of uncertain
logic, Journal of Uncertain Systems, Vol.3, No.4, 243-251, 2009.
[127] Liu B, Uncertain set theory and uncertain inference rule with application to
uncertain control, Journal of Uncertain Systems, Vol.4, No.2, 83-98, 2010.
[128] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of
Uncertain Systems, Vol.4, No.3, 163-170, 2010.
[129] Liu B, Uncertainty Theory: A Branch of Mathematics for Modeling Human Uncertainty, Springer-Verlag, Berlin, 2010.
[130] Liu B, Uncertain logic for modeling human language, Journal of Uncertain
Systems, Vol.5, No.1, 3-20, 2011.
[131] Liu B, Why is there a need for uncertainty theory? Journal of Uncertain
Systems, Vol.6, No.1, 3-10, 2012.
[132] Liu B, and Yao K, Uncertain integral with respect to multiple canonical
processes, Journal of Uncertain Systems, Vol.6, No.4, 250-255, 2012.
[133] Liu B, Membership functions and operational law of uncertain sets, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 387-410, 2012.
[134] Liu B, Toward uncertain finance theory, Journal of Uncertainty Analysis and
Applications, Vol.1, Article 1, 2013.
[135] Liu B, Extreme value theorems of uncertain process with application to in-
surance risk model, Soft Computing, Vol.17, No.4, 549-556, 2013.
[136] Liu B, A new definition of independence of uncertain sets, Fuzzy Optimization
and Decision Making, Vol.12, No.4, 451-461, 2013.
[137] Liu B, Polyrectangular theorem and independence of uncertain vectors, Jour-
nal of Uncertainty Analysis and Applications, Vol.1, Article 9, 2013.
[138] Liu B, Uncertain random graph and uncertain random network, Journal of
Uncertain Systems, Vol.8, No.1, 3-12, 2014.
[139] Liu B, Uncertainty distribution and independence of uncertain processes,
Fuzzy Optimization and Decision Making, to be published.
[140] Liu B, and Yao K, Uncertain multilevel programming: Algorithm and appli-
cations, Computers & Industrial Engineering, to be published.
[141] Liu B, and Chen XW, Uncertain multiobjective programming and uncertain
goal programming, http://orsc.edu.cn/online/131020.pdf.
[142] Liu HJ, and Fei WY, Neutral uncertain delay differential equations, Infor-
mation: An International Interdisciplinary Journal, Vol.16, No.2, 1225-1232,
2013.
[143] Liu HJ, Ke H, and Fei WY, Almost sure stability for uncertain differential
equation, Fuzzy Optimization and Decision Making, to be published.
[144] Liu JJ, Uncertain comprehensive evaluation method, Journal of Information
& Computational Science, Vol.8, No.2, 336-344, 2011.
[145] Liu LZ, and Li YZ, The fuzzy quadratic assignment problem with penalty:
New models and genetic algorithm, Applied Mathematics and Computation,
Vol.174, No.2, 1229-1244, 2006.
[146] Liu W, and Xu JP, Some properties on expected value operator for uncertain
variables, Information: An International Interdisciplinary Journal, Vol.13,
No.5, 1693-1699, 2010.
[147] Liu YH, and Ha MH, Expected value of function of uncertain variables, Jour-
nal of Uncertain Systems, Vol.4, No.3, 181-186, 2010.
[148] Liu YH, An analytic method for solving uncertain differential equations, Jour-
nal of Uncertain Systems, Vol.6, No.4, 244-249, 2012.
[149] Liu YH, Uncertain random variables: A mixture of uncertainty and random-
ness, Soft Computing, Vol.17, No.4, 625-634, 2013.
[150] Liu YH, Uncertain random programming with applications, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.2, 153-169, 2013.
[151] Liu YH, and Ralescu DA, Risk index in uncertain random risk analysis, In-
ternational Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.22, 2014, to be published.
[152] Liu YH, Chen XW, and Ralescu DA, Uncertain currency model and currency
option pricing, International Journal of Intelligent Systems, to be published.
[153] Liu YH, and Ralescu DA, Value-at-risk in uncertain random risk analysis,
Technical Report, 2014.
[154] Liu YH, and Ralescu DA, Expected loss of uncertain random systems, Tech-
nical Report, 2014.
[155] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[156] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value opera-
tor, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[157] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[158] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[159] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[160] Liu YK, Fuzzy programming with recourse, International Journal of Uncer-
tainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[161] Liu YK, and Gao J, The independence of fuzzy variables with applications to
fuzzy random optimization, International Journal of Uncertainty, Fuzziness
& Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[162] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-
133, 2003.
[163] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[164] Maiti MK, and Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[165] Mamdani EH, Applications of fuzzy algorithms for control of a simple dynamic plant, Proceedings of the IEE, Vol.121, No.12, 1585-1588, 1974.
[166] Marano GC, and Quaranta G, A new possibilistic reliability index definition,
Acta Mechanica, Vol.210, 291-303, 2010.
[167] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[168] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[169] Möller B, and Beer M, Engineering computation under uncertainty, Comput-
ers and Structures, Vol.86, 1024-1041, 2008.
[170] Moore RE, Interval Analysis, Prentice-Hall, New Jersey, 1966.
[171] Morgan JP, RiskMetrics – Technical Document, 4th edn, Morgan Guaranty Trust Companies, New York, 1996.
[172] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[173] Negoita CV, and Ralescu DA, Representation theorems for fuzzy concepts,
Kybernetes, Vol.4, 169-174, 1975.
[174] Negoita CV, and Ralescu DA, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[175] Nguyen HT, Nguyen NT, and Wang TH, On capacity functionals in interval
probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-
Based Systems, Vol.5, 359-377, 1997.
[176] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[177] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[178] Øksendal B, Stochastic Differential Equations, 6th edn, Springer-Verlag,
Berlin, 2005.
[179] Pawlak Z, Rough sets, International Journal of Information and Computer
Sciences, Vol.11, No.5, 341-356, 1982.
[180] Pawlak Z, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer,
Dordrecht, 1991.
[181] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[182] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[183] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[184] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[185] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[186] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
Journal of Uncertainty Analysis and Applications, Vol.2, Article 13, 2014.
[187] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[188] Puri ML, and Ralescu DA, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[189] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[190] Qin ZF, and Gao X, Fractional Liu process with application to finance, Math-
ematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[191] Qin ZF, Uncertain random goal programming, http://orsc.edu.cn/online/
130323.pdf.
[192] Ralescu AL, and Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets
and Systems, Vol.86, No.3, 321-330, 1997.
[193] Ralescu DA, A generalization of representation theorem, Fuzzy Sets and Sys-
tems, Vol.51, 309-311, 1992.
[194] Ralescu DA, Cardinality, quantifiers, and the aggregation of fuzzy criteria,
Fuzzy Sets and Systems, Vol.69, No.3, 355-365, 1995.
[195] Ralescu DA, and Sugeno M, Fuzzy integral representation, Fuzzy Sets and
Systems, Vol.84, No.2, 127-133, 1996.
[196] Ramsey FP, Truth and probability, In Foundations of Mathematics and Other
Logical Essays, Humanities Press, New York, 1931.
[197] Reichenbach H, The Theory of Probability, University of California Press,
Berkeley, 1948.
[198] Robbins HE, On the measure of a random set, Annals of Mathematical Statis-
tics, Vol.15, No.1, 70-74, 1944.
[199] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449, 1952.
[200] Sakawa M, Nishizaki I, Uemura Y, Interactive fuzzy programming for two-
level linear fractional programming problems with fuzzy parameters, Fuzzy
Sets and Systems, Vol.115, 93-103, 2000.
[201] Samuelson PA, Rational theory of warrant pricing, Industrial Management
Review, Vol.6, 13-31, 1965.
[202] Savage LJ, The Foundations of Statistics, Wiley, New York, 1954.
[203] Savage LJ, The Foundations of Statistical Inference, Methuen, London, 1962.
[204] Shafer G, A Mathematical Theory of Evidence, Princeton University Press,
Princeton, 1976.
[205] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[206] Shao Z, and Ji XY, Fuzzy multi-product constraint newsboy problem, Applied
Mathematics and Computation, Vol.180, No.1, 7-15, 2006.
[207] Shen Q and Zhao R, A credibilistic approach to assumption-based truth
maintenance, IEEE Transactions on Systems, Man, and Cybernetics Part
A, Vol.41, No.1, 85-96, 2011.
[208] Shen YY, and Yao K, A mean-reverting currency model in an uncertain
environment, http://orsc.edu.cn/online/131204.pdf.
[209] Shen YY, and Yao K, Runge-Kutta method for solving uncertain differential
equations, http://orsc.edu.cn/online/130502.pdf.
[210] Sheng YH, and Wang CG, Stability in the p-th moment for uncertain differen-
tial equation, Journal of Intelligent & Fuzzy Systems, Vol.26, No.3, 1263-1271,
2014.
[211] Sheng YH, and Yao K, Some formulas of variance of uncertain random vari-
able, Journal of Uncertainty Analysis and Applications, Vol.2, Article 12,
2014.
[212] Sheng YH, and Gao J, Chance distribution of the maximum flow of uncertain
random network, Journal of Uncertainty Analysis and Applications, Vol.2,
Article 15, 2014.
[213] Sheng YH, and Kar S, Some results of moments of uncertain variable through
inverse uncertainty distribution, Fuzzy Optimization and Decision Making, to
be published.
[214] Sheng YH, Exponential stability of uncertain differential equation, http://
orsc.edu.cn/online/130122.pdf.
[215] Shih HS, Lai YJ, and Lee ES, Fuzzy approach for multilevel programming
problems, Computers and Operations Research, Vol.23, 73-91, 1996.
[216] Slowinski R, and Teghem J, Fuzzy versus stochastic approaches to multicrite-
ria linear programming under uncertainty, Naval Research Logistics, Vol.35,
673-695, 1988.
[217] Sugeno M, Theory of Fuzzy Integrals and its Applications, Ph.D. Dissertation,
Tokyo Institute of Technology, 1974.
[218] Sun JJ, and Chen XW, Asian option pricing formula for uncertain financial
market, http://orsc.edu.cn/online/130511.pdf.
[219] Takagi T, and Sugeno M, Fuzzy identification of systems and its applications to modeling and control, IEEE Transactions on Systems, Man and Cybernetics, Vol.15, No.1, 116-132, 1985.
[220] Taleizadeh AA, Niaki STA, and Aryanezhad MB, A hybrid method of Pareto,
TOPSIS and genetic algorithm to optimize multi-product multi-constraint
inventory control systems with random fuzzy replenishments, Mathematical
and Computer Modelling, Vol.49, Nos.5-6, 1044-1057, 2009.
[221] Tian DZ, Wang L, Wu J, and Ha MH, Rough set model based on uncertain
measure, Journal of Uncertain Systems, Vol.3, No.4, 252-256, 2009.
[222] Tian JF, Inequalities and mathematical properties of uncertain variables,
Fuzzy Optimization and Decision Making, Vol.10, No.4, 357-368, 2011.
[223] Torabi H, Davvaz B, Behboodian J, Fuzzy random events in incomplete prob-
ability models, Journal of Intelligent & Fuzzy Systems, Vol.17, No.2, 183-188,
2006.
[224] Venn J, The Logic of Chance, MacMillan, London, 1866.
[225] von Mises R, Wahrscheinlichkeit, Statistik und Wahrheit, Springer, Berlin,
1928.
[226] von Mises R, Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statis-
tik und Theoretischen Physik, Leipzig and Wien, Franz Deuticke, 1931.
[227] Wang XS, Gao ZC, and Guo HY, Uncertain hypothesis testing for two ex-
perts’ empirical data, Mathematical and Computer Modelling, Vol.55, 1478-
1482, 2012.
[228] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncer-
tainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[229] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 99-109, 2013.
[230] Wang XS, and Peng ZX, Method of moments for estimating uncertainty dis-
tributions, Journal of Uncertainty Analysis and Applications, Vol.2, Article
5, 2014.
[231] Wang XS, and Wang LL, Delphi method for estimating membership function
of the uncertain set, http://orsc.edu.cn/online/130330.pdf.
[232] Wen ML, and Kang R, Reliability analysis in uncertain random system,
http://orsc.edu.cn/online/120419.pdf.
[233] Wiener N, Differential space, Journal of Mathematics and Physics, Vol.2, 131-174, 1923.
[234] Yager RR, A new approach to the summarization of data, Information Sci-
ences, Vol.28, 69-86, 1982.
[235] Yager RR, Quantified propositions in a linguistic logic, International Journal
of Man-Machine Studies, Vol.19, 195-227, 1983.
[236] Yang LX, and Liu B, On inequalities and critical values of fuzzy random
variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[237] Yang N, and Wen FS, A chance constrained programming approach to trans-
mission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[238] Yang XF, and Gao J, Uncertain differential games with application to capi-
talism, Journal of Uncertainty Analysis and Applications, Vol.1, Article 17,
2013.
[239] Yang XH, Moments and tails inequality within the framework of uncertainty
theory, Information: An International Interdisciplinary Journal, Vol.14,
No.8, 2599-2604, 2011.
[240] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 89-98, 2013.
[241] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[242] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[243] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[244] Yao K, Extreme values and integral of solution of uncertain differential equa-
tion, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[245] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[246] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[247] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, Article 8,
2013.
[248] Yao K, A no-arbitrage theorem for uncertain stock model, Fuzzy Optimization
and Decision Making, to be published.
[249] Yao K, Entropy operator for membership function of uncertain set, Applied
Mathematics and Computation, to be published.
[250] Yao K, Block replacement policy in uncertain environment, http://orsc.edu.
cn/online/110612.pdf.
[251] Yao K, and Gao J, Law of large numbers for uncertain random variables,
http://orsc.edu.cn/online/120401.pdf.
[252] Yao K, and Sheng YH, Stability in mean for uncertain differential equation,
http://orsc.edu.cn/online/120611.pdf.
[253] Yao K, Time integral of independent increment uncertain process, http://
orsc.edu.cn/online/130302.pdf.
[254] Yao K, A formula to calculate the variance of uncertain variable, http://orsc.
edu.cn/online/130831.pdf.
[255] Yao K, Uncertain random renewal reward process, http://orsc.edu.cn/online/
131019.pdf.
[256] Yao K, Uncertain random alternating renewal process, http://orsc.edu.cn/
online/131108.pdf.
[257] Yao K, On the ruin time of an uncertain insurance model, http://orsc.edu.cn/
online/140115.pdf.
[258] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[259] Yu XC, A stock model with jumps for uncertain markets, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.20, No.3, 421-
432, 2012.
[260] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[261] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[262] Zadeh LA, The concept of a linguistic variable and its application to approx-
imate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[263] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[264] Zadeh LA, A computational approach to fuzzy quantifiers in natural languages, Computers and Mathematics with Applications, Vol.9, No.1, 149-184, 1983.
[265] Zhang B, and Peng J, Euler index in uncertain graph, Applied Mathematics
and Computation, Vol.218, No.20, 10279-10288, 2012.
[266] Zhang XF, Ning YF, and Meng GW, Delayed renewal process with uncertain
interarrival times, Fuzzy Optimization and Decision Making, Vol.12, No.1,
79-87, 2013.
[267] Zhang XF, and Li X, A semantic study of the first-order predicate logic
with uncertainty involved, Fuzzy Optimization and Decision Making, to be
published.
[268] Zhang ZM, Some discussions on uncertain measure, Fuzzy Optimization and
Decision Making, Vol.10, No.1, 31-43, 2011.
[269] Zhao R and Liu B, Stochastic programming models for general redundancy
optimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-
191, 2003.
[270] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[271] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[272] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[273] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[274] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[275] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[276] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[277] Zhou J, and Liu B, Analysis and algorithms of bifuzzy systems, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.12, No.3,
357-376, 2004.
[278] Zhou J, and Liu B, Modeling capacitated location-allocation problem with
fuzzy demands, Computers & Industrial Engineering, Vol.53, No.3, 454-468,
2007.
[279] Zhou J, Yang F, and Wang K, Multi-objective optimization in uncertain
random environments, Fuzzy Optimization and Decision Making, to be pub-
lished.
[280] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[281] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent & Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[282] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy vari-
ables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[283] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[284] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.
[285] Zimmermann HJ, Fuzzy Set Theory and its Applications, Kluwer Academic
Publishers, Boston, 1985.
List of Frequently Used Symbols

M uncertain measure
(Γ, L, M) uncertainty space
ξ, η, τ uncertain variables
Φ, Ψ, Υ uncertainty distributions
Φ⁻¹, Ψ⁻¹, Υ⁻¹ inverse uncertainty distributions
µ, ν, λ membership functions
µ⁻¹, ν⁻¹, λ⁻¹ inverse membership functions
L(a, b) linear uncertain variable
Z(a, b, c) zigzag uncertain variable
N (e, σ) normal uncertain variable
LOGN (e, σ) lognormal uncertain variable
(a, b, c) triangular uncertain set
(a, b, c, d) trapezoidal uncertain set
E expected value
V variance
H entropy
Xt , Yt , Zt uncertain processes
Ct Liu process
Nt renewal process
Q uncertain quantifier
(Q, S, P ) uncertain proposition
∨ maximum operator
∧ minimum operator
¬ negation symbol
∀ universal quantifier
∃ existential quantifier
Pr probability measure
(Ω, A, Pr) probability space
Ch chance measure
k-max the kth largest value
k-min the kth smallest value
∅ the empty set
ℜ the set of real numbers
iid independent and identically distributed

Index

absorption law, 182
age replacement policy, 294
algebra, 9
α-path, 331
alternating renewal process, 298
American option, 350
Asian option, 352
associative law, 181
asymptotic theorem, 15
Bayes formula, 399
belief degree, 3
betting ratio, 455
bisection method, 58
block replacement policy, 287
Boolean function, 60
Boolean system calculator, 66
Boolean uncertain variable, 60
Borel algebra, 10
Borel set, 10
bridge system, 154
Brownian motion, 405
chain rule, 315
chance distribution, 415
chance inversion theorem, 416
chance measure, 410
change of variables, 315
Chebyshev inequality, 77, 383
Chen-Ralescu theorem, 161
commutative law, 181
comonotonic function, 72
complement of uncertain set, 179, 199
complete uncertainty space, 16
compromise model, 121
compromise solution, 121
conditional probability, 399
conditional uncertainty, 26, 90, 216
convergence almost surely, 93, 390
convergence in distribution, 94, 391
convergence in mean, 94, 391
convergence in measure, 94
convergence in probability, 391
Delphi method, 134
De Morgan’s law, 182
diffusion, 307, 312
distance, 88, 215
distributive law, 182
double-negation law, 181
drift, 307, 312
dual quantifier, 227
duality axiom, 12
Dutch book argument, 456
empirical membership function, 217
empirical uncertainty distribution, 37
entropy, 82, 211
Euler method, 343
European option, 347
event, 11
expected loss, 147, 438
expected value, 66, 204, 424
expert’s experimental data, 127, 216
exponential random variable, 371
extreme value theorem, 51, 273
feasible solution, 105
Feynman-Kac formula, 407
first hitting time, 277, 339
frequency, 2
fundamental theorem of calculus, 313
fuzzy set, 461
goal programming, 122
hazard distribution, 148
Hölder’s inequality, 74
hypothetical syllogism, 174
idempotent law, 180
imaginary inclusion, 204
independence, 21, 43, 192
independent increment, 266
indeterminacy, 1
individual feature data, 221

inference rule, 247
integration by parts, 316
intersection of uncertain sets, 179, 197
inverse membership function, 190, 403
inverse uncertainty distribution, 40
inverted pendulum, 255
investment risk analysis, 145
Ito formula, 406
Ito integral, 405
Ito process, 406
Jensen’s inequality, 75
Kolmogorov inequality, 383
k-out-of-n system, 138
law of contradiction, xiv, 181
law of excluded middle, xiv, 181
law of large numbers, 395, 431
law of truth conservation, xiv
linear uncertain variable, 36
linguistic summarizer, 243
Liu integral, 308
Liu process, 303, 312
logical equivalence theorem, 236
lognormal random variable, 372
lognormal uncertain variable, 37
loss function, 137
machine scheduling problem, 110
Markov inequality, 74, 382
maximum entropy principle, 86
maximum flow problem, 445
maximum uncertainty principle, xiv
measurable function, 29
measurable set, 10
measure inversion formula, 183
measure inversion theorem, 38
membership function, 183, 401
method of moments, 132
Minkowski inequality, 75
modus ponens, 171
modus tollens, 172
moment, 79, 386
monotone quantifier, 225
monotonicity theorem, 14
multilevel programming, 123
multiobjective programming, 121
multivariate normal distribution, 101
Nash equilibrium, 124
negated quantifier, 226
negative commission argument, 457
nonempty uncertain set, 189
normal random variable, 372
normal uncertain variable, 36
normal uncertain vector, 101
normality axiom, 12
operational law, 44, 196, 265, 417
optimal solution, 106
option pricing, 347
parallel system, 138
Pareto solution, 121
Peng-Iwamura theorem, 33
Poisson process, 404
polyrectangular theorem, 24
portfolio selection, 355
principle of least squares, 130, 217
probability continuity theorem, 366
probability density function, 370
probability distribution, 370
probability inversion theorem, 370
probability measure, 365
product axiom, 17
product probability, 367
product uncertain measure, 17
project scheduling problem, 117
random set, 401
random variable, 369
rational man, 8
regular membership function, 192
regular uncertainty distribution, 39
reliability index, 153, 439
renewal process, 283, 404, 446
renewal reward process, 288
risk index, 139, 436
ruin index, 291
ruin time, 292
rule-base, 252
Runge-Kutta method, 344
sample path, 260
series system, 137
shortest path problem, 445
σ-algebra, 9
stability, 329
Stackelberg-Nash equilibrium, 124
standby system, 138
stationary increment, 268
stochastic calculus, 405
stochastic differential equation, 406
stochastic process, 403
strictly decreasing function, 52
strictly increasing function, 44
strictly monotone function, 54
structural risk analysis, 142
structure function, 151
subadditivity axiom, 12
time integral, 280, 341
trapezoidal uncertain set, 185
triangular uncertain set, 185
truck-cross-over-bridge, 6
truth value, 159, 236
uncertain calculus, 303
uncertain control, 255
uncertain currency model, 359
uncertain differential equation, 319
uncertain entailment, 170
uncertain finance, 347
uncertain graph, 440
uncertain inference, 247
uncertain insurance model, 290
uncertain integral, 308
uncertain interest rate model, 358
uncertain logic, 221
uncertain measure, 13
uncertain network, 444
uncertain process, 259
uncertain programming, 105
uncertain proposition, 157, 235
uncertain quantifier, 222
uncertain random process, 445
uncertain random programming, 433
uncertain random variable, 413
uncertain reliability analysis, 152
uncertain renewal process, 283
uncertain risk analysis, 137
uncertain sequence, 93
uncertain set, 177
uncertain statistics, 127, 216
uncertain stock model, 347
uncertain system, 251
uncertain variable, 29
uncertain vector, 98
uncertainty, definition of, 465
uncertainty distribution, 31, 261
uncertainty space, 16
uniform random variable, 371
unimodal quantifier, 225
union of uncertain sets, 179, 196
value-at-risk, 146, 438
variance, 76, 210, 428
vehicle routing problem, 113
Wiener process, 405
Yao-Chen formula, 332
zigzag uncertain variable, 36
