2003.03: Neural Net Game Player
|Controls:||Use the mouse to click where you would like the "boggle" to move, similar to the game Diablo by Blizzard Entertainment. The default controls are the arrow keys, but that interface proved too flaky for AI training.|
|Class:||EECS 498.002 - Artificial Intelligence in Video Games - Winter 2003 - Prof. John Laird|
This was the final project for a special topics class about AI in video games. Brian Walsh and I decided to use neural networks to help the game character learn how to play "Bogwars" successfully.
The simple explanation of neural nets is that they are a technique for generalizing from the data you give them. In this case, we gave the neural net several successful example plays of the game and told it these were desirable. Then we gave it some unsuccessful plays and told it those were undesirable. We gave it some time to process the data ("learn"), and then we let it try to play the game.
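To make the idea concrete, here is a hypothetical sketch of that kind of setup (not our actual code): each example play is reduced to a small feature vector, labeled +1 for a desirable play or -1 for an undesirable one, and a single perceptron learns a weighting over those features. The feature names and toy data are invented for illustration.

```python
def train_perceptron(examples, epochs=100, lr=0.1):
    """Train on a list of (features, label) pairs, label in {+1, -1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != label:
                # Misclassified: nudge the weights toward the correct label.
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias >= 0 else -1

# Toy data: plays where the first feature (say, "time spent near power-ups")
# was high were labeled desirable, the rest undesirable.
plays = [([1.0, 0.2], 1), ([0.9, 0.8], 1), ([0.1, 0.9], -1), ([0.2, 0.1], -1)]
w, b = train_perceptron(plays)
```

A single perceptron like this can only learn linearly separable distinctions, which is part of why a full network (and a good choice of features) matters for a game agent.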
This experiment was a failure. We did not implement the neuron structure in a way that allowed the agent to generalize. Instead, it stumbled and jittered around the map, succeeding about as often as it failed. We may not have taught the machine to play the game, but we did successfully implement a popular AI technique in a real game environment.
I wrote the new mouse control system and the training data generator. Brian wrote the basic neuron class, and we then worked together to design the kinds of interactions the training system would pay attention to. After settling on a training schema, we began work together on a network structure. We had planned to use a genetic algorithm to discover which structure would generalize best in this environment, but after some research it proved too difficult and time-consuming for just the two of us. We instead focused our efforts on watching the network to make sure it was training roughly correctly, and then began running full learning sessions, which took many hours at a time. After many simulation sessions, we took the statistics and concluded our agent couldn't find its way out of a paper bag.
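Brian's neuron class itself is not shown here, but a minimal neuron of the general kind described above might look like the following sketch (the structure and parameter choices are assumptions, not our original code): a weighted sum of inputs squashed through a sigmoid activation.

```python
import math
import random

class Neuron:
    """A minimal illustrative neuron: weighted sum plus bias,
    passed through a sigmoid so the output lands in (0, 1)."""

    def __init__(self, num_inputs):
        # Small random initial weights and bias, to be adjusted by training.
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(num_inputs)]
        self.bias = random.uniform(-0.5, 0.5)

    def activate(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A neuron with three inputs, fed one feature vector.
n = Neuron(3)
output = n.activate([0.5, -0.2, 0.8])
```

Layering neurons like this into a network, and choosing how many layers and connections to use, is exactly the "network structure" question the abandoned genetic-algorithm idea was meant to answer.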