
Actually, most of the 'interesting' sets of runs are Borel, although there also exist subsets of Run that are not in B.

Let A ∈ B, and let f : Run → {0, 1} be the function which to a given w ∈ Run assigns either 1 or 0, depending on whether w ∈ A or not, respectively. Then f is B-measurable, because for every interval I in ℝ we have that f⁻¹(I) is equal either to Run, A, Run \ A, or ∅, depending on whether I ∩ {0, 1} is equal to {0, 1}, {1}, {0}, or ∅, respectively.
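The four-way case analysis above can be sketched directly; this is an illustrative helper (the names preimage_of_indicator, "Run", "A" are ours, not the text's), classifying f⁻¹(I) purely by the intersection I ∩ {0, 1}:

```python
# Sketch of the measurability argument for the indicator function f of A.
# We cannot represent the (uncountable) set Run itself, so preimages are
# returned as symbolic labels; the point is that only four cases arise.

def preimage_of_indicator(contains_zero: bool, contains_one: bool) -> str:
    """Classify f^{-1}(I) by which of 0, 1 lie in the interval I."""
    if contains_zero and contains_one:
        return "Run"        # both values in I: every run is mapped into I
    if contains_one:
        return "A"          # only 1 in I: exactly the runs in A
    if contains_zero:
        return "Run \\ A"   # only 0 in I: exactly the runs outside A
    return "∅"              # neither value in I: empty preimage
```

Since each of the four possible preimages Run, A, Run \ A, ∅ lies in B, f is B-measurable.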

A probability measure over a measurable space (Ω, F) is a function P : F → [0, 1] such that, for each countable collection {A_i}_{i∈I} of pairwise disjoint elements of F, we have that P(⋃_{i∈I} A_i) = Σ_{i∈I} P(A_i), and moreover P(Ω) = 1. A probability space is a triple (Ω, F, P) where (Ω, F) is a measurable space and P is a probability measure over (Ω, F).
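For a finite Ω the two conditions are easy to check mechanically. The following is a minimal sketch with a made-up three-point space (the weights 0.5, 0.3, 0.2 are illustrative, not from the text), where F is the power set and P is induced by point masses:

```python
# A finite probability space (Ω, F, P): F is the full power set of Ω,
# and P(E) is the sum of the point masses of the elements of E.

omega = {"a", "b", "c"}
point_mass = {"a": 0.5, "b": 0.3, "c": 0.2}   # illustrative weights

def P(event: frozenset) -> float:
    return sum(point_mass[x] for x in event)

# Additivity on pairwise disjoint events, and P(Ω) = 1:
A1, A2 = frozenset({"a"}), frozenset({"b", "c"})
assert abs(P(A1 | A2) - (P(A1) + P(A2))) < 1e-12
assert abs(P(frozenset(omega)) - 1.0) < 1e-12
```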

A random variable over a probability space (Ω, F, P) is an F-measurable function X : Ω → ℝ. The expected value of X, denoted by E(X), is defined as the Lebesgue integral ∫_Ω X dP. A random variable X is discrete if there is a finite or countably infinite subset N of ℝ such that P(X⁻¹(N)) = 1. The expected value of a discrete random variable X is equal to Σ_{n∈N} n · P(X = n), where X = n denotes the set X⁻¹({n}).
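For a discrete random variable the Lebesgue integral collapses to the sum above. A minimal sketch, using a fair six-sided die as the distribution (our illustrative choice, not an example from the text):

```python
# E(X) = Σ_{n∈N} n · P(X = n) for a discrete random variable X.
# Here N = {1, ..., 6} and P(X = n) = 1/6 (a fair die).

dist = {n: 1 / 6 for n in range(1, 7)}

expected_value = sum(n * p for n, p in dist.items())
# Mathematically this equals 21/6 = 3.5, the mean of a fair die.
```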

Markov chains

A Markov chain is a tuple M = (S, →, Prob, μ) where (S, →) is a transition system, Prob is a function which to each s ∈ S assigns a positive probability distribution over the outgoing transitions of s, and μ : S → [0, 1] is an initial probability distribution. We write s →ˣ t to indicate that s → t and Prob(s)((s, t)) = x.
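A finite Markov chain can be sketched with plain dictionaries. The two-state chain below is a made-up example (it is not the chain of Figure 5.1); the checks mirror the definition — each state carries a positive distribution over its outgoing transitions, and μ is a distribution over S:

```python
# M = (S, →, Prob, μ) for a small illustrative chain. The transition
# relation → is implicit in the keys of Prob[s].

S = {"s", "t"}
Prob = {
    "s": {"s": 0.5, "t": 0.5},   # s →^0.5 s,  s →^0.5 t
    "t": {"t": 1.0},             # t →^1.0 t
}
mu = {"s": 1.0, "t": 0.0}        # initial distribution (Dirac at s)

def validate(S, Prob, mu):
    assert abs(sum(mu.values()) - 1.0) < 1e-12       # μ is a distribution
    for s in S:
        dist = Prob[s]
        assert dist, f"state {s} needs an outgoing transition"
        assert all(p > 0 for p in dist.values())     # positive distribution
        assert abs(sum(dist.values()) - 1.0) < 1e-12 # probabilities sum to 1

validate(S, Prob, mu)
```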

Consider the measurable space (Run, B) introduced in Example 5.1, i.e., B is the least σ-field containing all Run(w) such that w ∈ Fpath. By Caratheodory's extension theorem (see, e.g., Billingsley [1995], Kemeny et al. [1976]), there is a unique probability measure P_μ over (Run, B) such that for every basic cylinder Run(w) we have that P_μ(Run(w)) is defined in the 'natural' way, i.e., by multiplying μ(w(0)) with the probabilities of all transitions in w. Thus, we obtain the probability space (Run, B, P_μ).
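The 'natural' cylinder measure is easy to compute for a finite path. A sketch, reusing a made-up two-state chain (not Figure 5.1), where a finite path w is a list of states:

```python
# P_μ(Run(w)) = μ(w(0)) · Π probabilities of the transitions along w.

Prob = {"s": {"s": 0.5, "t": 0.5}, "t": {"t": 1.0}}   # illustrative chain
mu = {"s": 1.0, "t": 0.0}

def cylinder_measure(w: list) -> float:
    """P_μ(Run(w)) for a finite path w = w(0) w(1) ... w(n)."""
    p = mu[w[0]]
    for a, b in zip(w, w[1:]):
        p *= Prob[a].get(b, 0.0)   # probability of the transition a → b
    return p

print(cylinder_measure(["s", "s", "t"]))   # 1.0 · 0.5 · 0.5 = 0.25
```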

Example 5.2  Consider the finite-state Markov chain M of Figure 5.1 with the initial distribution μ_s (recall that μ_s is a Dirac distribution where μ_s(s) = 1). Let Reach(s, t) be the set of all runs initiated in s which visit t. Obviously, Reach(s, t) = ⋃_{i=1}^∞ Reach_i(s, t), where Reach_i(s, t) consists of all w ∈ Reach(s, t) that visit t for the first time after exactly i transitions. Further, every Reach_i(s, t) can be partitioned into 2^{i−1} pairwise disjoint basic cylinders according to the first i − 1 transitions. Each of these cylinders has
