
u and v are the initial vertices of G_u and G_v, and T_u and T_v are the sets of target vertices of G_u and G_v, respectively. Consider a game G obtained by taking the disjoint union of G_u and G_v, extended with a fresh stochastic vertex s with two outgoing transitions s →^{0.5} u and s →^{0.5} v. The set of target vertices of G is T_u ∪ T_v, and the initial vertex is s. Since val(u) = 0 and val(v) = 1, we obtain that val(s) = 0.5 · val(u) + 0.5 · val(v) = 1/2. Now consider the temporal objective P_{≥0.5} F t
. First, assume that player □ has a winning strategy σ in s. Since player □ has no optimal maximising strategy in v, there is a constant P_σ < 1/2 such that P_s^{(σ,π)}(Reach(T_v)) ≤ P_σ for every π ∈ Π. Since val(u) = 0, there is a strategy π of player ♦ such that P_s^{(σ,π)}(Reach(T_u)) < 1/2 − P_σ for every σ ∈ Σ. Hence, P_s^{(σ,π)}(Reach(T_u ∪ T_v)) < 1/2, which contradicts the assumption that σ is a winning strategy for player □. Similarly, one can show that there is no winning strategy for player ♦ which would achieve the negated objective P_{<0.5} F t against every strategy of player □.
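The two bounds can be combined into one explicit derivation (the first equality holds because G_u and G_v are disjoint, so the events Reach(T_u) and Reach(T_v) are disjoint):

```latex
P^{(\sigma,\pi)}_s(\mathrm{Reach}(T_u \cup T_v))
  = P^{(\sigma,\pi)}_s(\mathrm{Reach}(T_u)) + P^{(\sigma,\pi)}_s(\mathrm{Reach}(T_v))
  < \Bigl(\tfrac{1}{2} - P_\sigma\Bigr) + P_\sigma
  = \tfrac{1}{2}
```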

Now let us assume that G is finitely branching, and let us fix a vertex v of G. For technical convenience, assume that every target vertex t has only one outgoing transition t → t. The second part of Proposition 5.9 is not completely trivial, because player □ does not necessarily have an optimal maximising strategy in v even if G is finitely branching. Observe that if ϱ < val(v), then player □ has a (P_{⊳ϱ} F t)-winning strategy in v (he may use, e.g., an ε-optimal maximising strategy where ε = (val(v) − ϱ)/2). Similarly, if ϱ > val(v), then player ♦ has a (¬P_{⊳ϱ} F t)-winning strategy in v. Now assume that ϱ = val(v). Obviously, it suffices to show that if player ♦ does not have a (¬P_{⊳ϱ} F t)-winning strategy in v, then player □ has a (P_{⊳ϱ} F t)-winning strategy in v. This means to show that

∀π ∈ Π ∃σ ∈ Σ : P_v^{(σ,π)}(Reach(T)) ⊳ ϱ        (5.8)

implies

∃σ ∈ Σ ∀π ∈ Π : P_v^{(σ,π)}(Reach(T)) ⊳ ϱ.        (5.9)

Observe that if ⊳ is >, then (5.8) does not hold: by Proposition 5.6, player ♦ has an optimal minimising strategy π, and against it no σ achieves a probability greater than val(v) = ϱ. For the constraint ≥ 0, the statement is trivial. Hence, it suffices to consider the case when ⊳ is ≥ and ϱ = val(v) > 0. Assume that (5.8) holds. We say that a vertex u ∈ V is good if

∀π ∈ Π ∃σ ∈ Σ : P_u^{(σ,π)}(Reach(T)) ≥ val(u).        (5.10)
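Note that v itself is good: since ⊳ is ≥ and ϱ = val(v), condition (5.10) instantiated with u = v is exactly (5.8), which we assumed to hold:

```latex
\forall \pi \in \Pi\ \exists \sigma \in \Sigma:\quad
P^{(\sigma,\pi)}_v(\mathrm{Reach}(T)) \;\ge\; \mathit{val}(v) \;=\; \varrho
```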

Further, we say that a transition u → u′ of G is optimal if either u ∈ V_○, or u ∈ V_□ ∪ V_♦ and val(u′) = val(u). Observe that for every u ∈ V_□ ∪ V_♦ there is at least one optimal transition u → u′, because G is finitely branching.
→
