
Enhanced NPC Behaviour using Goal Oriented Action Planning - page 51 / 110






The method of calculating the heuristic and actual cost of the GOAP actions was designed from Orkin's explanation of the F.E.A.R. A* machine (Orkin, 2005). When the planner's A* search begins, the current and goal world states of the AStarPlannerNode are initially set up. The planner's A* goal maintains a pointer to the actual GOAP goal being searched towards. The GOAP goal's satisfaction conditions are applied to the AStarPlannerNode's goal state (step 2 in figure 7). The current state is then updated by adding each symbol that has been added to the goal world state and setting its value from the agent's current state (step 3 in figure 7). The heuristic cost returned is the number of symbols that differ between the current and goal states.
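As a rough sketch (not the thesis's actual code; the function names, the dict-based world-state representation, and field names such as `satisfaction_conditions` are invented for illustration), the root-node initialisation and the symbol-difference heuristic described above could look like:

```python
# Illustrative sketch of steps 2-3 and the heuristic. World states are
# modelled as dicts mapping symbol names to values; all names are invented.

def setup_root_states(agent_state, goal_conditions):
    """Step 2: copy the GOAP goal's satisfaction conditions into the goal state.
    Step 3: seed the current state from the agent for every goal symbol."""
    goal_state = dict(goal_conditions)
    current_state = {symbol: agent_state.get(symbol) for symbol in goal_state}
    return current_state, goal_state

def heuristic(current_state, goal_state):
    """h = the number of symbols whose values differ between the two states."""
    return sum(1 for symbol, value in goal_state.items()
               if current_state.get(symbol) != value)
```

Because only symbols named in the goal state are copied into the current state, the heuristic never counts symbols the goal does not care about.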

When determining the value of g for a neighbour node, the input A* node is first mapped to its matching GOAP action using the A* map. The action's cost is then returned as the value of g. The world states of the current AStarPlannerNode are copied to the neighbour AStarPlannerNode, so that the neighbour has the same current and goal world states as the current node before the heuristic cost is calculated (step 9 in figure 7).
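A minimal sketch of this g-cost lookup and state transfer, assuming an `Action` class with a `cost` field and dict-based nodes (all names invented, not the thesis's implementation):

```python
# Sketch of the g-cost lookup and the state transfer of step 9.

class Action:
    def __init__(self, name, cost):
        self.name = name
        self.cost = cost

def g_cost(astar_map, neighbour_id):
    """Map the neighbour A* node ID back to its GOAP action; g is the action's cost."""
    return astar_map[neighbour_id].cost

def transfer_states(current_node, neighbour_node):
    """Give the neighbour copies of the current node's world states (step 9)."""
    neighbour_node["current"] = dict(current_node["current"])
    neighbour_node["goal"] = dict(current_node["goal"])
```

Copying the dicts rather than sharing references matters here: the neighbour's states are about to be mutated by the action, and that must not corrupt the current node.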

When determining the heuristic cost, the A* goal obtains the GOAP action from the A* map by passing in the neighbour node's ID. This action is then executed, which involves solving whatever unsatisfied world state symbols the action can solve, applying the action's world state effects to the current state, and finally merging the agent's current world state with any new symbols added to the goal state (steps 5, 6 and 7 in figure 7). Again, the heuristic cost returned is the number of symbols that differ between the current and goal states of the AStarPlannerNode after applying the GOAP action.
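Steps 5 to 7 might be sketched as follows. This is only a loose approximation of the regressive search described above: it assumes the symbols an action "solves" are introduced via its preconditions and that its effects are a dict of symbol values, all of which are invented representations, not the thesis's actual data structures.

```python
# Sketch of steps 5-7 of figure 7 plus the post-action heuristic.

def apply_action(current_state, goal_state, preconditions, effects, agent_state):
    goal_state.update(preconditions)     # step 5: the action's unsolved symbols
    current_state.update(effects)        # step 6: apply the action's effects
    for symbol in goal_state:            # step 7: merge agent values for any
        if symbol not in current_state:  #         newly added goal symbols
            current_state[symbol] = agent_state.get(symbol)
    # heuristic after the action: the number of symbols still differing
    return sum(1 for s, v in goal_state.items() if current_state.get(s) != v)
```

Note how an action can reduce the heuristic by satisfying a goal symbol while simultaneously increasing it by introducing unsatisfied preconditions, which is what drives the search backwards towards actions the agent can actually perform.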

The planner's A* goal must also determine whether an A* search has finished. This is done by checking whether the current node's goal and current states are the same. If so, a copy of the agent's current world state is obtained. Each of the actions that form the plan so far is applied to this world state one by one. If the world state left after applying the
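The finished test and the start of this plan-validation replay could be sketched as below (the sentence above is cut off at the page boundary, so only the described parts are shown; names and representations are invented):

```python
# Sketch of the finished test and the plan replay over a copy of the
# agent's world state.

def is_finished(current_state, goal_state):
    """The search is done when every goal symbol is satisfied."""
    return all(current_state.get(s) == v for s, v in goal_state.items())

def replay_plan(agent_state, plan_effects):
    """Apply each plan action's effects, in order, to a copy of the agent state."""
    state = dict(agent_state)
    for effects in plan_effects:
        state.update(effects)
    return state
```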

