
Further, note that if u ∈ V□ is a good vertex, then there is at least one optimal transition u → u′ where u′ is good. Similarly, if u ∈ V♦ is good, then for every optimal transition u → u′ we have that u′ is good, and if u ∈ V○ is good and u → u′, then u′ is good. Hence, we can define a game Ḡ, where the set of vertices V̄ consists of all good vertices of G, and for all u, u′ ∈ V̄ we have that (u, u′) is a transition of Ḡ iff u → u′ is an optimal transition of G. The transition probabilities in Ḡ are the same as in G. Now we prove the following three claims:

(a) For every u ∈ V̄ we have that val(u, Ḡ) = val(u, G).

(b) ∃σ ∈ Σ_Ḡ ∀π ∈ Π_Ḡ : P_v^{σ,π}(Reach(T, Ḡ)) ≥ val(v, G).

(c) ∃σ ∈ Σ_G ∀π ∈ Π_G : P_v^{σ,π}(Reach(T, G)) ≥ val(v, G).
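To make the construction of the restricted game concrete, here is a minimal sketch, assuming a finite game given as successor lists, an ownership map, and precomputed values val(u); all names (`optimal_transitions`, `restrict`, the string tags) are illustrative, not from the text. At a maximiser vertex the optimal transitions are taken to be those reaching a successor of maximal value, at a minimiser vertex those of minimal value, and at a stochastic vertex every transition is kept, so that the probabilities of Ḡ can stay the same as in G:

```python
# Illustrative encoding only: succ maps each vertex to its successor list,
# owner tags each vertex, val holds the (assumed precomputed) values.
MAX, MIN, RAND = "max", "min", "rand"

def optimal_transitions(succ, owner, val):
    """Keep, for each vertex, the transitions an optimal player may use:
    at a maximiser vertex those reaching a successor of maximal value,
    at a minimiser vertex those of minimal value, and every transition
    at a stochastic vertex (its probabilities stay unchanged in G-bar)."""
    opt = {}
    for u, vs in succ.items():
        if owner[u] == MAX and vs:
            best = max(val[w] for w in vs)
            opt[u] = [w for w in vs if val[w] == best]
        elif owner[u] == MIN and vs:
            best = min(val[w] for w in vs)
            opt[u] = [w for w in vs if val[w] == best]
        else:  # stochastic (or dead-end) vertex: keep all transitions
            opt[u] = list(vs)
    return opt

def restrict(succ, owner, val, good):
    """The game G-bar: good vertices only, optimal transitions only."""
    opt = optimal_transitions(succ, owner, val)
    return {u: [w for w in opt[u] if w in good] for u in succ if u in good}
```

The transition probabilities of Ḡ are then inherited unchanged from G, exactly as in the definition above.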

Note that Claim (c) is exactly (5.9). We start by proving Claim (a). Let u ∈ V̄. Due to Proposition 5.6, there is an MD strategy π⋆ ∈ Π_G which is optimal minimising in every vertex of G (particularly in u) and selects only the optimal transitions. Hence, the strategy π⋆ can also be used in the restricted game Ḡ, and thus we obtain val(u, Ḡ) ≤ val(u, G). Now suppose that val(u, Ḡ) < val(u, G). By applying Proposition 5.6 to Ḡ, there is an optimal minimising MD strategy π̄ ∈ Π_Ḡ. Further, for every vertex t of G which is not good there is a strategy π_t ∈ Π_G such that for every σ ∈ Σ_G we have that P_t^{σ,π_t}(Reach(T, G)) < val(t, G) (this follows immediately from (5.10)). Now consider a strategy π̂ ∈ Π_G which, for every play of G initiated in u, behaves in the following way:

• As long as player □ uses only the transitions of G that are preserved in Ḡ, the strategy π̂ behaves exactly like the strategy π̄.

• When player □ uses a transition r → r′ which is not a transition in Ḡ for the first time, the strategy π̂ starts to behave either like the optimal minimising strategy π⋆ or like the strategy π_{r′}, depending on whether r′ is good or not (observe that if r′ is good, then val(r′, G) < val(r, G), because r → r′ is not a transition of Ḡ).

Now it is easy to check that for every σ ∈ Σ_G we have that P_u^{σ,π̂}(Reach(T, G)) < val(u, G), which contradicts the assumption that u is good.
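The strategy π̂ described by the two bullets can be read as a small state machine: mimic π̄ while the play stays inside Ḡ, then switch permanently to π⋆ or to π_{r′} at the first deviating transition. Below is a hypothetical sketch, assuming all strategies are MD and represented as maps from vertices to chosen successors; every name here is illustrative, not from the text:

```python
class SwitchingStrategy:
    """Follow pi_bar while the maximiser only uses transitions preserved
    in the restricted game; after the first transition (r, r_prime) that
    leaves it, switch for good to pi_star if r_prime is a good vertex,
    and to the witness strategy pi_t[r_prime] otherwise."""

    def __init__(self, pi_bar, pi_star, pi_t, restricted_edges, good):
        self.pi_bar = pi_bar        # optimal minimising MD strategy in G-bar
        self.pi_star = pi_star      # optimal minimising MD strategy in G
        self.pi_t = pi_t            # for each non-good t, a strategy pi_t
        self.restricted = restricted_edges  # transitions of G-bar
        self.good = good
        self.mode = pi_bar          # the MD strategy currently mimicked

    def observe(self, r, r_prime):
        """Called for every transition of the play, in order; switches
        mode only once, at the first transition outside G-bar."""
        if self.mode is self.pi_bar and (r, r_prime) not in self.restricted:
            self.mode = self.pi_star if r_prime in self.good else self.pi_t[r_prime]

    def choose(self, u):
        """The successor pi-hat picks at minimiser vertex u."""
        return self.mode[u]
```

Because the switch is permanent, the play after the first deviation is governed entirely by a single MD strategy, which is what makes the probability bound in the contradiction easy to check.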

Now we prove Claim (b). Due to Proposition 5.5, for every u ∈ V̄ we can fix a strategy σ_u ∈ Σ_Ḡ and an n_u ≥ 1 such that for every π ∈ Π_Ḡ we have that P_u^{σ_u,π}(Reach^{n_u}(T, Ḡ)) > val(u, Ḡ)/2. For every k ≥ 0, let B(k) be the set of all vertices u reachable from v in Ḡ via a path of length exactly k which does not visit T. Observe that B(k) is finite because Ḡ is finitely branching. Further, for every i ≥ 0 we define a bound m_i inductively
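The sets B(k) are just the k-th layers of a breadth-first exploration of Ḡ from v that never enters T; a minimal sketch, under a hypothetical successor-list encoding of Ḡ:

```python
def levels(succ, v, T, k_max):
    """B(k) for k = 0..k_max: vertices reachable from v by a path of
    length exactly k that does not visit the target set T. Each layer
    is finite whenever the game is finitely branching. A vertex may
    appear in several layers, since path length is exact, not minimal."""
    B = [set() if v in T else {v}]
    for _ in range(k_max):
        nxt = {w for u in B[-1] for w in succ.get(u, []) if w not in T}
        B.append(nxt)
    return B
```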
