[Figure: a game graph with vertices labelled u, d, v and v_r, transition probabilities 1/4 and 3/4, and a vertex satisfying a.]
Let ϕ ≡ P>0 G(¬a ∧ (P>0 F a)), and let σ be a HD strategy which in every wv selects either v_ℓ or v_r, depending on whether #_d(w) = #_u(w) or not, respectively. Here #_d(w) and #_u(w) denote the number of occurrences of d and u in w, respectively. Obviously, the strategy σ can be implemented by a one-counter automaton. The play G(σ, v) initiated in v closely resembles a one-way infinite random walk where the probability of going right is 3/4 and the probability of going left is 1/4. More precisely, the play G(σ, v) corresponds to the unfolding of the following infinite-state Markov chain (the initial state is grey):

[Figure: infinite-state Markov chain with states labelled v and v_r, transitions labelled u (probability 1/4) and d (probability 3/4), and a state satisfying a.]
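Since the figures are not reproduced above, the following small simulation may help to make the construction concrete. It is only an illustrative sketch, not part of the original text, and it assumes one natural reading of the game: whenever the counter #_d(w) − #_u(w) is zero the strategy takes the "risky" successor, which reaches a state satisfying a with probability 1/4 and otherwise performs a d-step, while at positive counter values it takes the "safe" successor, which performs a u-step with probability 1/4 and a d-step with probability 3/4. Under this reading the estimate printed below comes out close to 1/3, the value computed in the next paragraph.

import random

def visits_a(max_steps=10_000):
    """Simulate one play of the counter-based strategy sketched above.

    The variable counter tracks #_d(w) - #_u(w); this is exactly the
    information a one-counter automaton maintains.  Returns True iff
    the simulated play reaches a state satisfying a within max_steps
    steps (the walk drifts to the right, so the cut-off introduces
    only a negligible bias).
    """
    counter = 0
    for _ in range(max_steps):
        if counter == 0:
            # risky successor: reach a with probability 1/4,
            # otherwise a d-step (counter increases)
            if random.random() < 0.25:
                return True
            counter += 1
        else:
            # safe successor: u-step with probability 1/4,
            # d-step with probability 3/4
            if random.random() < 0.25:
                counter -= 1
            else:
                counter += 1
    return False

trials = 100_000
estimate = sum(visits_a() for _ in range(trials)) / trials
print(estimate)   # typically close to 1/3

The counter value is the only unbounded piece of information the strategy consults, which is what makes it implementable by a one-counter automaton.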
A standard calculation shows that the probability of all w ∈ Run(G(σ, v)) initiated in v such that w visits a state satisfying a is equal to 1/3. Note that for every w ∈ Run(G(σ, v)) initiated in v which does not visit a state satisfying a we have that w(i) |=ν ¬a ∧ (P>0 F a) for every i ≥ 0. Since the probability of all such runs is 2/3, we obtain that the formula ϕ is valid in the state v of G(σ, v). On the other hand, there is no finite-memory ϕ-winning strategy σ in v, because then the play G(σ, v) corresponds to an unfolding of a finite-state Markov chain, and the formula ϕ does not have a finite-state model (see, e.g., Brázdil et al. [2008]).
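For the record, the "standard calculation" can be carried out as the usual hitting-probability argument for a biased walk. The derivation below is only a sketch under the random-walk reading used in the simulation above and is not taken from the original text; here h denotes the probability of ever visiting a state satisfying a from the initial state, and r the probability that the walk, started one step to the right, ever returns.

% Hitting probability of the a-state under the assumed 1/4 vs. 3/4 walk.
% r: return probability from one step to the right of the start,
% h: probability of ever visiting a state satisfying a.
\[
  r = \tfrac{1}{4} + \tfrac{3}{4}\,r^{2}
  \quad\Longrightarrow\quad r \in \{\tfrac{1}{3}, 1\},
  \qquad r = \tfrac{1}{3}\ \text{(the least solution)},
\]
\[
  h = \tfrac{1}{4} + \tfrac{3}{4}\,r\,h
    = \tfrac{1}{4} + \tfrac{1}{4}\,h
  \quad\Longrightarrow\quad h = \tfrac{1}{3},
  \qquad 1 - h = \tfrac{2}{3}.
\]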
The memory requirements of ϕ -winning strategies for various fragments of
qualitative branching-time logics were analysed by Brázdil and Forejt [2007]
and Forejt [2009]. The decidability/complexity of the existence of HR (or
HD) ϕ -winning strategies in turn-based stochastic games with qualitative
branching-time objectives is still open.
 