Rule-Based AI
We’ll be looking at rule-based systems this week, with a focus on implementing them for controlling character behaviours in games.
Plan A
There are two possible routes for this week's workshop. The first focuses on the GVG-AI competition framework. This is a good opportunity to see a very influential piece of AI work that is directly relevant to the material covered in this week's online lecture.
However, as Java isn't one of the languages that the university usually uses, it's a bit of a risk and will require a little setup. If this doesn't work, we'll fall back to plan B.
You will need to set up both of these applications. You should be able to download them into the scratch drive.
Getting GVG-AI
GVG-AI stands for the General Video Game (GVG) Artificial Intelligence (AI) project. It ran for many years across many academic conferences and consisted of several different 'tracks'.
The 'single-player planning track' is the one we'll look at today. Download the GVG-AI framework from GitHub; I recommend the 2.3 release.
Running the games
Look under src/tracks/SinglePlayer/ for the Test class. This is the class that lets you run the single-player games.
Right-click on the class in the right-hand pane and select Run Test.Main(). The Space Invaders game should open and let you control the player.
Tip: you can use W, A, S, D and Space to control the character.
Comment out line 49 of Test.java, and uncomment line 52. This will enable one of the sample controllers to play the game instead. By default, this will be the Rolling-Horizon Evolutionary algorithm (RHEA).
Observe the agent playing the game. What do you notice about its approach to the level?
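To make the rolling-horizon idea concrete, here is a toy sketch in plain Java, not the framework's actual sampleRHEAController. It evolves a short action plan on an invented 1-D world (the GOAL, HORIZON and {-1, 0, +1} action set are made up for illustration), executes the plan's first action, then shifts the plan and repeats:

```java
import java.util.Random;

// Toy rolling-horizon evolution on a 1-D line world (illustrative only).
public class TinyRHEA {
    static final int GOAL = 5, HORIZON = 4, GENERATIONS = 50;
    static final Random RNG = new Random(42);

    // Forward model: apply the whole action sequence, score the end state.
    static int evaluate(int pos, int[] plan) {
        for (int a : plan) pos += a;
        return -Math.abs(pos - GOAL);   // closer to the goal is better
    }

    // Mutate one gene of the plan to a random action in {-1, 0, +1}.
    static int[] mutate(int[] plan) {
        int[] child = plan.clone();
        child[RNG.nextInt(child.length)] = RNG.nextInt(3) - 1;
        return child;
    }

    public static void main(String[] args) {
        int pos = 0;
        int[] plan = new int[HORIZON];              // start with an all-zeros plan
        for (int step = 0; step < 12; step++) {
            // (1+1) hill-climbing over the plan for a fixed budget.
            for (int g = 0; g < GENERATIONS; g++) {
                int[] child = mutate(plan);
                if (evaluate(pos, child) >= evaluate(pos, plan)) plan = child;
            }
            pos += plan[0];                          // execute only the first action...
            System.arraycopy(plan, 1, plan, 0, HORIZON - 1);  // ...then shift the horizon
            plan[HORIZON - 1] = 0;
        }
        System.out.println("final position: " + pos);
    }
}
```

The key design point to notice when watching the real agent: the evolved plan is never executed in full; only its first action is played before the plan is re-optimised, which is what makes the horizon "rolling".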
Replace sampleRHEAController on the line you uncommented with another agent, for example sampleMCTSController, and observe the agent's performance. Explore the other agents.
Repeating the Tests
Examining the agents like this works well for demonstration purposes, but the algorithms have a stochastic (randomised) element, so a single run tells us little. We can repeat the test multiple times: re-comment line 52, uncomment the block numbered 4, and re-run Test.
Tip: reduce M to 3 so the code runs quicker.
The output will look something like the following:
Result (1->win; 0->lose): Player0:1, Player0-Score:71.0, timesteps:419
Result (1->win; 0->lose): Player0:1, Player0-Score:77.0, timesteps:406
Result (1->win; 0->lose): Player0:1, Player0-Score:71.0, timesteps:430
Results in game examples/gridphysics/aliens.txt, 1.0 , 73.0
Result (1->win; 0->lose): Player0:0, Player0-Score:11.0, timesteps:292
Result (1->win; 0->lose): Player0:0, Player0-Score:1.0, timesteps:264
Result (1->win; 0->lose): Player0:0, Player0-Score:114.0, timesteps:472
Results in game examples/gridphysics/angelsdemons.txt, 0.0 , 42.0
Result (1->win; 0->lose): Player0:0, Player0-Score:0.0, timesteps:1500
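The per-game summary lines are just the mean over the repeated runs. A minimal sketch of that aggregation, using the three aliens.txt results above (this mirrors what the framework prints; it is not the framework's own reporting code):

```java
import java.util.Arrays;

// Average win flags and scores over M repeated stochastic runs.
public class RepeatRuns {
    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(0.0);
    }

    public static void main(String[] args) {
        double[] wins   = {1, 1, 1};            // Player0 win flags from the 3 aliens runs
        double[] scores = {71.0, 77.0, 71.0};   // the matching Player0 scores
        System.out.println(mean(wins) + " , " + mean(scores));  // → 1.0 , 73.0
    }
}
```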
Next, we'll have a look at the algorithms included in the framework.
First, the simple algorithms. Open the following file: tracks/singlePlayer/simple/sampleonesteplookahead/Agent. This is a simple search-based approach which tries every single action once and picks the best one.
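The idea can be sketched in a few lines of standalone Java. This is a toy version on an invented 1-D world, not the framework's Agent class, but the loop has the same shape: for each available action, advance a copy of the state, score the result with a heuristic, and keep the best:

```java
import java.util.List;

// Toy one-step lookahead: try every action once, pick the best outcome.
// (The real agent does the same with StateObservation copies.)
public class OneStepLookahead {
    static final int GOAL = 7;   // invented target position

    // Heuristic: higher is better (closer to the goal).
    static int score(int pos) {
        return -Math.abs(pos - GOAL);
    }

    static int act(int pos) {
        List<Integer> actions = List.of(-1, 0, 1);  // the toy action set
        int bestAction = 0;
        int bestScore = Integer.MIN_VALUE;
        for (int a : actions) {
            int next = pos + a;      // "advance" a copied state by one step
            int s = score(next);
            if (s > bestScore) {
                bestScore = s;
                bestAction = a;
            }
        }
        return bestAction;
    }

    public static void main(String[] args) {
        int pos = 0;
        for (int step = 0; step < 10; step++) {
            pos += act(pos);         // the agent greedily walks to the goal
        }
        System.out.println(pos);     // → 7
    }
}
```

Because it only looks one step ahead, an agent like this is easily trapped by anything the heuristic can't see beyond the next move, which is worth keeping in mind when you compare it to the advanced agents.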
Next, we'll have a look at the mcts agent. You can find it in singlePlayer/advanced/sampleMCTS. There are a few parameters defined in SingleTreeNode; find line 27 and modify ROLLOUT_DEPTH. This controls how many 'steps' each rollout consists of.
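To see what that parameter controls, here is a toy standalone sketch of a rollout, not the framework's SingleTreeNode code: a rollout advances a copied state with random actions for ROLLOUT_DEPTH steps, then scores the state it reached (the 1-D world and its heuristic are invented for illustration):

```java
import java.util.Random;

// Toy fixed-depth random rollout, illustrating the role of ROLLOUT_DEPTH.
public class RolloutDepthDemo {
    static final int ROLLOUT_DEPTH = 10;   // the parameter you are asked to modify
    static final Random RNG = new Random();

    static double rollout(int pos) {
        for (int step = 0; step < ROLLOUT_DEPTH; step++) {
            pos += RNG.nextInt(3) - 1;     // random action in {-1, 0, +1}
        }
        return -Math.abs(pos - 5);         // heuristic value of the final state
    }

    public static void main(String[] args) {
        System.out.println(rollout(0));
    }
}
```

Try a few values when you modify the real parameter: deeper rollouts let the agent credit rewards further in the future, but each simulated step costs time from the decision budget and random play makes long rollouts noisier.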
Last updated 2022-11-24