1 of 13

4 min presentation in DD2438

Group 13:3

Adrian Chroust and Elias Wetterwik

https://docs.google.com/presentation/d/1N6Eb4bulVC4Azb7hfQbIsBE6mSlItiu4lf-UlCs63iw/edit?usp=sharing

2 of 13

Our Approach: Summarized

  • Behaviour tree for managing action selection at every time step
  • A* for everything
  • No distinction between offensive and defensive agents; offense always comes first
  • Food clustering to avoid several agents going for the same food (see the sketch after this list)
  • Enemy chasing with collaboration
  • Combined observations of agents
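A minimal Python sketch of the clustering idea, for illustration only: pellets within a few tiles of each other are greedily grouped, and each agent then claims the nearest unclaimed cluster. The Manhattan metric, the gap threshold, and the greedy assignment are our simplifying assumptions, not necessarily the exact scheme.

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def cluster_food(food, max_gap=3):
        # Greedily merge each pellet into the first cluster that already
        # contains a pellet within max_gap tiles; otherwise start a new cluster.
        clusters = []
        for pos in food:
            for cluster in clusters:
                if any(manhattan(pos, p) <= max_gap for p in cluster):
                    cluster.append(pos)
                    break
            else:
                clusters.append([pos])
        return clusters

    def assign_clusters(agent_positions, clusters):
        # Each agent claims its nearest still-unclaimed cluster, so no two
        # agents head for the same group of food.
        unclaimed = set(range(len(clusters)))
        targets = {}
        for i, pos in enumerate(agent_positions):
            if not unclaimed:
                break
            best = min(unclaimed,
                       key=lambda c: min(manhattan(pos, p) for p in clusters[c]))
            unclaimed.discard(best)
            targets[i] = clusters[best]
        return targets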

3 of 13

4 of 13

Behaviour Tree

  • Evaluated every time step (FixedUpdate)
  • Distinction between 6 cases (see the dispatch sketch after this list):
    1. Player has the super pill and enemy does not
    2. Enemy has the super pill and player does not
    3. Nobody has the super pill and player is a ghost
    4. Nobody has the super pill and player is a pacman
    5. Both have the super pill and player is a ghost
    6. Both have the super pill and player is a pacman
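
The case split itself is a dispatch on three booleans; a minimal Python sketch, with subtree names that are our own placeholders:

    def select_subtree(we_have_pill, enemy_has_pill, is_ghost):
        # Cases 1-2: exactly one team has an active super pill.
        if we_have_pill and not enemy_has_pill:
            return "PlayerPillOnly"
        if enemy_has_pill and not we_have_pill:
            return "EnemyPillOnly"
        # Cases 3-4: nobody has one; cases 5-6: both have one.
        if not we_have_pill:
            return "NoPillGhost" if is_ghost else "NoPillPacman"
        return "BothPillsGhost" if is_ghost else "BothPillsPacman"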

5 of 13

Behaviour Tree: Super Pill Taken by One Team

6 of 13

Behaviour Tree: Super Pill Taken by Nobody

7 of 13

Behaviour Tree: Super Pill Taken by Both Teams

8 of 13

Behaviour Tree

  • We make use of a rule-based approach for our agents, defining different tasks under given conditions
    • We have defined these tasks, and the conditions under which they occur, ourselves
  • To describe our agents' behaviour we make use of a behaviour tree
    • The behaviour tree consists of a number of tasks
    • Tasks are then connected through sequences and fallbacks (see the sketch after this list)
    • We implemented the tree using Panda BT Free
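
For intuition, the semantics of sequences and fallbacks fit in a few lines of Python (Panda BT Free provides these composites in its own BT script; the task names in the example are illustrative):

    SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

    def sequence(*children):
        # Ticks children left to right; stops at the first child that does
        # not succeed, succeeding only if all of them do.
        def tick(state):
            for child in children:
                status = child(state)
                if status != SUCCESS:
                    return status
            return SUCCESS
        return tick

    def fallback(*children):
        # Ticks children left to right; returns the first non-failing result,
        # failing only if every child fails.
        def tick(state):
            for child in children:
                status = child(state)
                if status != FAILURE:
                    return status
            return FAILURE
        return tick

    # e.g. root = fallback(sequence(enemy_spotted, chase_enemy), guard_border)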

9 of 13

Behaviour Tree in More Detail

  • The tree consists of 6 subtrees
    • Each subtree describes a distinct case that requires a different strategy
    • First, we check whether we or the enemy have an active super pill (capsule)
    • If both teams or neither team has an active super pill, the choice of subtree depends on whether the agent is a ghost or a pacman
  • Our default “fallback” action is guarding the border
    • When there is no food left and no enemies to chase, this is what our agents do

10 of 13

Navigation

  • To navigate the maps we make use of A*
  • We can naturally adapt A* to our tasks by applying a wide range of heuristics, such as penalties and rewards, to achieve the desired behaviours (see the sketch after this list)
  • Examples of this are avoiding enemies and super pills, or pincering enemies
  • When two agents are chasing a hostile pacman, they try to pincer it
    • This occurs in narrow areas and pathways, which we refer to as “corridors”
    • If an agent notices it is taking the same path as the other chasing agent, it is induced to take an alternative route
    • Thus a pincer move is achieved
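
A minimal sketch of how such penalties plug into A*. Here extra_cost is a hypothetical callback that could, for instance, charge for tiles near an enemy or tiles on a path a teammate has already claimed, which is what pushes the second chaser into a different corridor; penalties must stay non-negative for the search to remain sound.

    import heapq

    def a_star(start, goal, neighbors, extra_cost):
        # Plain grid A* with a Manhattan heuristic; behaviour is shaped
        # entirely through extra_cost, a penalty >= 0 for entering a tile.
        def h(a):
            return abs(a[0] - goal[0]) + abs(a[1] - goal[1])
        frontier = [(h(start), 0.0, start, [start])]
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt in neighbors(node):
                if nxt not in seen:
                    g2 = g + 1.0 + extra_cost(nxt)
                    heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
        return None  # goal unreachable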

11 of 13

Improving Accuracy of Enemy Observations

  • To improve the accuracy of enemy observations, we combine the observations of all our agents
  • We estimate an enemy's position as a weighted average of these observations, with weights chosen to minimize dispersion (see the sketch after this list)
  • We also try to interpolate predictions using position and velocity
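
One standard reading of “weighted based on minimum dispersion” is inverse-variance weighting, which minimizes the spread of the combined estimate; the sketch below assumes each agent reports a position together with a noise estimate sigma, and the velocity step is equally simplified.

    def fuse_positions(observations):
        # observations: list of ((x, y), sigma) pairs, one per observing agent.
        # Inverse-variance weights give the fused estimate with the smallest
        # variance (dispersion).
        wx = wy = total = 0.0
        for (x, y), sigma in observations:
            w = 1.0 / (sigma * sigma)
            wx += w * x
            wy += w * y
            total += w
        return (wx / total, wy / total)

    def extrapolate(position, velocity, dt):
        # Push the fused position forward along the estimated velocity.
        return (position[0] + velocity[0] * dt,
                position[1] + velocity[1] * dt)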

12 of 13

Competition Recordings

Recordings of all games, blue is always our team:
https://1drv.ms/f/s!Ali9oqt2yxP0gp0Itlv8OFTKFXn97A?e=fxA4h8

13 of 13

Progress Status Week 17

  • Comment to customer paying 250 000 kr for the report:
    • We have a working solution; we have improved it further since last week and now believe it is quite good overall
  • Planned time spent: 90%
    • (Out of the combined 200h)
  • Actual time spent: 85%
    • (Out of the combined 200h)
  • Actual progress: 90%
    • (Estimated progress towards completing the assignment)
  • Risk of not completing assignment: 0%