Game Development Reference
"No user-serviceable parts inside." You can train them (repeatedly if necessary),
you can use them, and you can throw them away. For all of the other methods
that we have covered in prior chapters, there are places where the AI programmer
can intervene. Places where numbers can be altered or scripts can be injected
abound in those methods, but not inside these black boxes. They have to be
trained to do everything expected of them, and that is often no small task.
Supervised training is notorious for its appetite for computational resources. In
addition, every tweak to the desired outputs is essentially a "do-over" in terms of
training. The combination of these two issues makes for a rather non-agile development
cycle. My non-game industry experience with supervised training could be
summarized as, "Push the Start Learning button on Monday; come back on
Friday to test the results." When a game designer says, "Make it do this!" the AI
programmer starts the entire learning process over from scratch. Worse, there is
no guarantee that the system will learn the desired behavior. Nor is there any
guarantee that old behaviors will not degrade when the system is taught new ones.
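This degradation of previously learned behavior is often called catastrophic forgetting. A minimal sketch of the effect, using a hypothetical one-weight linear model and toy task values chosen purely for illustration: train on task A, retrain on a conflicting task B, and watch the task-A error climb back up.

```python
# Illustration of old behaviors degrading under new training:
# a one-weight linear model y = w * x, fit by gradient descent
# on squared error. Task A wants f(1.0) == 1.0; task B wants
# f(1.0) == 0.0. (Toy values; not from any particular game.)

def train(w, x, target, steps=200, lr=0.1):
    """Gradient descent on (w*x - target)^2 for a single example."""
    for _ in range(steps):
        error = w * x - target
        w -= lr * 2 * error * x   # derivative of error^2 w.r.t. w
    return w

def task_a_error(w):
    return abs(w * 1.0 - 1.0)

w = 0.0
w = train(w, x=1.0, target=1.0)   # learn task A
after_a = task_a_error(w)         # essentially zero

w = train(w, x=1.0, target=0.0)   # now learn conflicting task B
after_b = task_a_error(w)         # task-A performance has degraded

print(f"task-A error after learning A: {after_a:.4f}")
print(f"task-A error after learning B: {after_b:.4f}")
```

A real network shows the same effect across many weights at once, which is why teaching a trained system something new generally means re-presenting all of the old training data as well.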
Effectively training a learning system is a black art, and expertise comes mainly
through experience. Getting the desired outputs is not always intuitive or
obvious. This lack of certainty makes these methods a technical risk.
Just as building a learning system is an acquired skill, so is testing one. The
learning systems I built were never released for general availability until after they
had successfully passed a field trial in the real world. Learning systems in games
need their own particular test regimes [Barnes02]. The task is not insurmountable,
but it does create project schedule risk: the test team has to learn how to
do something new.
The issue of completeness also crops up. A neural network operates as an
integrated whole. All of the inputs and desired outputs need to be present to
conduct the final training. So the AI cannot be incrementally developed in
parallel with everything else. Prior versions may not work in the final game.
From a project-management perspective, this is usually unacceptable; it pushes
a risky technology out to the end of a project, when there is no time left to
develop an alternative if learning does not work out. Good project management
calls for mitigating risks as early as possible.
Neural networks loosely mimic the structure of the human brain. As shown in
Figure 10.3, a typical network has three layers: the input layer, a hidden layer, and