1 of 17

Complexity and Generation Evaluation

2 of 17

Experimental Setup

  • Region count: From 10 to 50 in steps of 5
  • Item Count: From 5 to 30 in steps of 5
    • But not exceeding the region count
  • Generated 100 worlds for each region/item count combination
  • Calculated complexity score with each metric for each world
  • Averaged score for each metric
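The sweep above can be sketched as follows. This is a minimal illustration, not the real generator: `generate_world` is a hypothetical stand-in that reduces a world to a list of per-location complexity scores, and `metric` is any of the aggregation functions under evaluation.

```python
import random

def generate_world(regions, items, rng):
    # Hypothetical stand-in: the real generator builds a region graph;
    # here a "world" is just a list of per-location complexity scores.
    return [rng.random() * regions for _ in range(items)]

def run_sweep(metric, worlds_per_cell=100, seed=0):
    """Average `metric` over many generated worlds for each parameter cell."""
    rng = random.Random(seed)
    averages = {}
    for regions in range(10, 51, 5):      # region count: 10 to 50 in steps of 5
        for items in range(5, 31, 5):     # item count: 5 to 30 in steps of 5
            if items > regions:           # item count never exceeds region count
                continue
            total = sum(metric(generate_world(regions, items, rng))
                        for _ in range(worlds_per_cell))
            averages[(regions, items)] = total / worlds_per_cell
    return averages
```

For example, `run_sweep(max)` fills one grid of averaged Max scores, one cell per valid region/item combination.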

3 of 17

Results- Sum

4 of 17

Results- Average

5 of 17

Results- Max

6 of 17

Results- Sum of Squares

7 of 17

Result Consideration

  • Sum and Sum of Squares are directly affected by the number of locations, while Max and Average are not
    • But is this desirable?
  • More locations do not necessarily add complexity; in fact, they can reduce it by introducing many “easy” locations
    • The complexity increases under Sum and Sum of Squares are also very extreme, especially the latter
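A small numeric illustration of these points, with made-up per-location scores: padding a world with easy locations inflates Sum and Sum of Squares, leaves Max untouched, and dilutes Average.

```python
def metric_sum(scores):         return sum(scores)
def metric_average(scores):     return sum(scores) / len(scores)
def metric_max(scores):         return max(scores)
def metric_sum_squares(scores): return sum(s * s for s in scores)

# One hard location vs. the same world padded with twenty "easy" locations.
hard_only = [9.0]
padded    = [9.0] + [1.0] * 20

print(metric_sum(hard_only), metric_sum(padded))                  # 9.0 vs 29.0
print(metric_sum_squares(hard_only), metric_sum_squares(padded))  # 81.0 vs 101.0
print(metric_max(hard_only), metric_max(padded))                  # 9.0 vs 9.0
print(metric_average(hard_only), metric_average(padded))          # 9.0 vs ~1.38
```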

8 of 17

Result Consideration

  • Neither Max nor Average is strictly increasing along a row or column, but Max is much closer to strictly increasing than Average
  • However, Max also lets a single outlier location determine the complexity of the whole world

9 of 17

Result Consideration

  • I was not satisfied with any of these results, so I tried something different
    • Average’s results were the closest to the desired outcome
  • Take the average of the top x% of scores
    • Tried 50% and 75%
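The top-x% average can be sketched in a few lines (the function name is mine, not from the implementation):

```python
def top_fraction_average(scores, fraction=0.5):
    """Average of the top `fraction` of per-location complexity scores:
    a middle ground between the Average and Max metrics."""
    k = max(1, round(len(scores) * fraction))
    return sum(sorted(scores, reverse=True)[:k]) / k
```

For example, `top_fraction_average([9, 1, 1, 1], 0.5)` averages only the top two scores, giving 5.0, while the plain average of the same world is 3.0.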

10 of 17

Results- Average of Top 50%

11 of 17

Results- Average of Top 75%

12 of 17

Results Consideration

  • Both of these are closer to strictly increasing than the plain Average
  • Disregards many “easy” locations that may be available near the beginning of the game, giving a better view of the hard-to-reach areas that will likely need to be accessed to complete the game
    • When taking a pure average, these locations can obscure the true complexity of the world
  • The 50% variant gave the better result, so it was selected
    • It is somewhat of a middle ground between the Average and Max measuring methods

13 of 17

Test Worlds

  • Now we want to generate 5 worlds of increasing complexity to use as the test worlds for our evaluations
  • Chose the following highlighted values

14 of 17

Test World Generation

  • Similar to when creating the graphs, generate many worlds and find their average complexity
  • Once the average complexity is found, generate worlds until one within 10% of the average is produced
  • That world’s graph is then saved
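The two steps above can be sketched like this. All names are hypothetical stand-ins: `generate_world` abstracts the real generator, and `complexity` stands in for the selected top-50%-average metric.

```python
import random
from statistics import mean

def generate_world(regions, items, rng):
    # Hypothetical stand-in: a world reduced to per-location scores.
    return [rng.random() * regions for _ in range(items)]

def complexity(world):
    # Stand-in for the chosen metric: average of the top 50% of scores.
    top = sorted(world, reverse=True)[:max(1, len(world) // 2)]
    return sum(top) / len(top)

def representative_world(regions, items, samples=100, tolerance=0.10, seed=0):
    """Estimate the average complexity over many samples, then regenerate
    until a world lands within `tolerance` (10%) of that average."""
    rng = random.Random(seed)
    target = mean(complexity(generate_world(regions, items, rng))
                  for _ in range(samples))
    while True:
        world = generate_world(regions, items, rng)
        if abs(complexity(world) - target) <= tolerance * target:
            return world, target
```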

15 of 17

Generated World Graphs

Name    | Region Count | Item Count | Item Locations | Complexity
World 1 | 10           | 5          | 26             | 4.31
World 2 | 25           | 10         | 74             | 11.11
World 3 | 35           | 15         | 105            | 22.25
World 4 | 45           | 20         | 135            | 32.63
World 5 | 50           | 30         | 143            | 53.23

16 of 17

Next Steps- Bias

  • Now that we have some more sample worlds, we want to run fill algorithms on them and calculate the bias of each result
  • Current basic idea for calculating bias:
    • For each reachability sphere, compare the percentage of all item locations in that sphere to the percentage of all key items in that sphere
    • If the fill is unbiased, these percentages should be roughly equal
    • Still need to implement, test, and refine this idea
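The per-sphere comparison could look like this. This is a sketch of the idea only, assuming each sphere is summarized as a (location count, key item count) pair; the real implementation still needs to be written and tested.

```python
def sphere_bias(spheres):
    """spheres: per-reachability-sphere (item_location_count, key_item_count).
    Returns each sphere's share of all item locations minus its share of all
    key items; values near zero in every sphere suggest an unbiased fill."""
    total_locs = sum(locs for locs, _ in spheres)
    total_keys = sum(keys for _, keys in spheres)
    return [locs / total_locs - keys / total_keys for locs, keys in spheres]
```

For example, `sphere_bias([(10, 2), (10, 2)])` is `[0.0, 0.0]` (unbiased), while `sphere_bias([(10, 1), (10, 3)])` is `[0.25, -0.25]`: the later sphere holds a larger share of the key items than of the locations.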

17 of 17

Next Steps- Interestingness

  • Have an idea for a search that acts as a player
    • Collects all available locations in the current region
    • Decides which region to visit next using some heuristic based on the available item locations in that direction
  • Record a list of metrics as the search progresses
    • Regions traversed between finding a major or helpful item
    • Regions traversed between finding a major item
    • Locations searched per traversal
  • Once these metrics are calculated, they must somehow be combined into an interestingness score
    • This will be the purpose of the survey, which I will have a rough draft of next week
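The player-like search and its metric recording could be sketched as below. Everything here is an assumption about the eventual interfaces: `Region`, `simulate_player`, and `choose_next` (the direction heuristic) are hypothetical stand-ins, and the gap counters only advance on first visits.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    neighbors: list   # names of adjacent regions
    items: list       # items at this region's locations:
                      # "major", "helpful", or "filler"

@dataclass
class SearchMetrics:
    since_major_or_helpful: list = field(default_factory=list)
    since_major: list = field(default_factory=list)
    locations_per_traversal: list = field(default_factory=list)

def simulate_player(world, start, choose_next):
    """Walk the world like a player: collect everything in the current
    region, then let `choose_next` pick the next region to visit."""
    metrics = SearchMetrics()
    visited = set()
    current = start
    gap_any = gap_major = 0
    while len(visited) < len(world):
        region = world[current]
        if current not in visited:
            visited.add(current)
            metrics.locations_per_traversal.append(len(region.items))
            gap_any += 1
            gap_major += 1
            for item in region.items:
                if item in ("major", "helpful"):
                    metrics.since_major_or_helpful.append(gap_any)
                    gap_any = 0
                if item == "major":
                    metrics.since_major.append(gap_major)
                    gap_major = 0
        unvisited = [n for n in region.neighbors if n not in visited]
        current = choose_next(unvisited or region.neighbors)
    return metrics
```

On a three-region line A-B-C with a helpful item in B and a major item in C, a left-to-right walk records a gap of 3 regions before the first major item.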