
CMPM147

Generative Design

Markov Models

Lucas N. Ferreira

University of California, Santa Cruz

Summer 19


Announcements

Last lecture we talked about:

  • Genetic Algorithms
  • Representation (genotype vs phenotype)
  • Fitness Functions
  • Reproduction
  • Mutation

Upcoming deadlines:

  • Assignment 4: Due this Saturday


Markov Models

In probability theory, a Markov model is a stochastic model used to describe randomly changing systems.

Markov models can be applied to generate sequences of events that follow a learned probability distribution.


Brief History

Markov models are named after the Russian mathematician Andrey Markov. Markov studied such processes in the early 20th century, publishing his first paper on the topic in 1906.


Markov Models

There are four common Markov models:

  • Markov Chains
  • Hidden Markov Models
  • Markov Decision Processes
  • Partially Observable Markov Decision Processes


Markov Chains

A Markov Chain is a statistical model of a system that moves sequentially from one state to another. It is defined by:

  • A discrete set S = {s_1, s_2, …, s_n} of states.

  • A conditional probability distribution P(s_t | s_{t-1}) describing the probability of moving from state s_{t-1} to state s_t.

[Figure: a two-state diagram with states S1 and S2 and transition arrows labeled P(s1|s1), P(s2|s1), P(s1|s2), and P(s2|s2).]
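
A minimal sketch of this definition in Python (the states and probabilities below are made up for illustration, not taken from the slides):

    import random

    # A Markov chain as a transition table: transitions[current][next] = P(next | current).
    transitions = {
        "s1": {"s1": 0.7, "s2": 0.3},
        "s2": {"s1": 0.4, "s2": 0.6},
    }

    def step(state):
        """Sample the next state from P(s_t | s_{t-1})."""
        candidates = list(transitions[state])
        weights = list(transitions[state].values())
        return random.choices(candidates, weights=weights)[0]

    print(step("s1"))  # e.g. 's2'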


Markov Chains

For example, one can define a Markov chain over a baby's behavior (the random variable) as follows:

States

S = { Playing (P), Eating (E), Sleeping (S), Crying (C) }

Probability Distribution

         P     E     S     C
    P   0.5   0.3   0.2   0
    E   0.3   0.4   0.3   0.1
    S   0.2   0.3   0.3   0.2
    C   0     0.3   0.2   0.4
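
One hypothetical way to encode this table and follow the chain in Python (assuming each row gives the distribution over the next state; note random.choices accepts unnormalized weights, which tolerates the rounding in the slide's rows):

    import random

    # Transition table from the slide; assumed orientation: row = current state.
    baby = {
        "P": {"P": 0.5, "E": 0.3, "S": 0.2, "C": 0.0},
        "E": {"P": 0.3, "E": 0.4, "S": 0.3, "C": 0.1},
        "S": {"P": 0.2, "E": 0.3, "S": 0.3, "C": 0.2},
        "C": {"P": 0.0, "E": 0.3, "S": 0.2, "C": 0.4},
    }

    def simulate(start, n):
        """Follow the chain for n steps, starting from `start`."""
        state, path = start, [start]
        for _ in range(n):
            state = random.choices(list(baby[state]), weights=list(baby[state].values()))[0]
            path.append(state)
        return path

    print(simulate("P", 10))  # e.g. ['P', 'P', 'E', 'S', ...]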


Creating Markov Chains from Data

One can create a Markov Chain from a dataset of sequential data as follows:

  1. Create a state for each distinct event in the dataset (include a state for <start> and <end>).
  2. For each event s_t, estimate the probability P(s_t | s_{t-1}) of s_t appearing after the event s_{t-1}.

This process is called training.
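
A sketch of this procedure in Python, assuming the dataset is a list of event sequences (the train helper and its name are illustrative, not course code):

    from collections import defaultdict

    def train(sequences):
        """Estimate P(s_t | s_{t-1}) by counting consecutive pairs and normalizing."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            events = ["<start>"] + list(seq) + ["<end>"]
            for prev, cur in zip(events, events[1:]):
                counts[prev][cur] += 1
        # Turn counts into a conditional probability distribution per state.
        return {
            prev: {cur: n / sum(nexts.values()) for cur, n in nexts.items()}
            for prev, nexts in counts.items()
        }

    chain = train([["a", "b"], ["a", "c"]])
    print(chain["<start>"])  # {'a': 1.0}
    print(chain["a"])        # {'b': 0.5, 'c': 0.5}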


Sampling from a Markov Chain to Generate Content

After training a Markov chain, one can sample events from its probability distribution to generate content:

  1. Start from the <start> event.
  2. Sample the next event s_t according to P(s_t | s_{t-1}), beginning with P(s_t | <start>).
  3. Repeat until you generate <end>.

This process is called sampling.
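
A matching sampling sketch, reusing the chain dictionary built by the hypothetical train helper above (the max_len cap is an added safeguard, not part of the slides):

    import random

    def sample(chain, max_len=100):
        """Walk the chain from <start>, collecting events until <end>."""
        state, out = "<start>", []
        for _ in range(max_len):  # guards against walks that never reach <end>
            dist = chain[state]
            state = random.choices(list(dist), weights=list(dist.values()))[0]
            if state == "<end>":
                break
            out.append(state)
        return out

    print(sample(chain))  # e.g. ['a', 'c']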


Example: Name Generator

One can train a Markov Chain from a dataset of names (sequential data) to generate new random names:

Training

  1. Create a dataset of names (e.g. common Hobbit names):

<start>Frodo<end>

<start>Samwise<end>

<start>Bilbo<end>


2. Create a state for each character in the dataset (include a state for <start> and <end>).

States: a, b, d, f, i, l, m, o, …, <start>, <end>


3. For each character s_t, estimate the probability P(s_t | s_{t-1}) of s_t appearing after the character s_{t-1}:

          a    b    d    f    i    l    m    o    <start>  <end>
    a     0    0    0    0    0    0    1    0    0        0
    b     0    0    0    0    0.5  0    0    0.5  0        0
    ...

(row = previous character, column = next character)
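
Putting the hypothetical train and sample helpers from earlier together for this example (names are lowercased so 'B' and 'b' share one state, matching the table above):

    names = ["Frodo", "Samwise", "Bilbo"]

    # Each name is a sequence of characters; train on character transitions.
    chain = train([list(name.lower()) for name in names])

    for _ in range(5):
        print("".join(sample(chain)))  # e.g. 'bilbodo', 'fro', 'samwise'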


Example: Algorithmic Music Composition

The same idea can be applied to generate monophonic music:

Training

  1. Create a dataset of symbolic music (e.g. MIDI):

[C, quarter], [E, quarter], [F, quarter], [E, 8th], [D, 8th], [C, 8th], [C, quarter], [D, quarter], [C, 8th], [A, 8th], [C, quarter]


2. Create a state for each pair [note, duration] in the dataset (include a state for <start> and <end>).

States: [C, quarter], [E, quarter], [F, quarter], [E, 8th], [D, 8th], [C, 8th], [D, quarter], [A, 8th], <start>, <end>


3. For each state s, estimate the probability P(s | y) of s appearing after another state y:

                  [C, quarter]  [E, quarter]  [F, quarter]  [E, 8th]  [D, 8th]  [C, 8th]  <start>  <end>
    [C, quarter]       0             0             0           0         1         0         0       0
    [E, quarter]       0             0             1           0         0         0         0       0
    ...

(row = previous state, column = next state)
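
Because the hypothetical train and sample helpers only require hashable states, the same code works unchanged when each state is a (note, duration) tuple instead of a character:

    melody = [("C", "quarter"), ("E", "quarter"), ("F", "quarter"), ("E", "8th"),
              ("D", "8th"), ("C", "8th"), ("C", "quarter"), ("D", "quarter"),
              ("C", "8th"), ("A", "8th"), ("C", "quarter")]

    # Tuples are hashable, so they serve as dictionary keys just like characters.
    chain = train([melody])
    print(sample(chain))  # e.g. [('C', 'quarter'), ('E', 'quarter'), ...]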


Example: Algorithmic Music Composition

To model polyphonic music, we can divide the pieces into timesteps (e.g. 16th notes) and create a state <ts> that represents end-of-timestep.

All notes within the same timestep are played simultaneously.

[C, quarter], [D, 8th], [E, 8th], <ts>, <ts>, <ts>, [C, quarter], <ts>, <ts>, <ts>, <ts>, [E, quarter], <ts>, <ts>, <ts>, <ts>, ...
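
A sketch of this encoding, assuming the piece arrives as a list of per-timestep note lists (the input representation and names are assumptions):

    def encode_polyphonic(timesteps):
        """Flatten per-timestep note lists into one sequence with <ts> separators."""
        tokens = []
        for notes in timesteps:
            tokens.extend(notes)   # all notes in this timestep sound together
            tokens.append("<ts>")  # end-of-timestep marker
        return tokens

    piece = [["C4", "E4"], [], ["G4"]]
    print(encode_polyphonic(piece))  # ['C4', 'E4', '<ts>', '<ts>', 'G4', '<ts>']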


Higher-Order Markov Chains

Markov chains restrict the probability distribution to only depend on the previous state.

Higher-order Markov chains relax this condition by allowing the probability distribution P(s_t | s_{t-1}, …, s_{t-d}) to depend on the d previous states.

[Figure: a 2x2 grid of states s_{1,1}, s_{1,2}, s_{2,1}, s_{2,2}, with the transition probability P(s_{2,2} | s_{1,1}, s_{1,2}, s_{2,1}).]
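
A higher-order variant of the earlier training and sampling sketches, keyed on a tuple of the last d states (illustrative code; random.choices accepts raw counts as weights, so no normalization step is needed):

    from collections import defaultdict
    import random

    def train_order_d(sequences, d=2):
        """Count P(s_t | s_{t-1}, ..., s_{t-d}) with tuple contexts as keys."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            events = ["<start>"] * d + list(seq) + ["<end>"]
            for i in range(d, len(events)):
                counts[tuple(events[i - d:i])][events[i]] += 1
        return counts

    def sample_order_d(counts, d=2, max_len=100):
        """Walk the chain, sliding a window over the last d states."""
        context, out = ("<start>",) * d, []
        for _ in range(max_len):
            dist = counts[context]
            nxt = random.choices(list(dist), weights=list(dist.values()))[0]
            if nxt == "<end>":
                break
            out.append(nxt)
            context = context[1:] + (nxt,)
        return out

    counts = train_order_d([list("frodo"), list("bilbo")])
    print("".join(sample_order_d(counts)))  # 'frodo' or 'bilbo' for this tiny dataset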


Example: Linear Level Generators

One can train a Higher-Order Markov Chain from a dataset of linear levels (sequential data) to generate new random levels:

Training

  1. Create a dataset of levels (e.g. Super Mario Bros World 1-1).


2. Define a level representation:

A level can be represented as a W×H two-dimensional array m, where each cell m[x][y] takes a value from a finite set of tiles T.
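
A minimal illustration of this representation (the layout is made up; the tile characters follow the state table below):

    # A small W x H level as H rows of W tile characters from the tile set T.
    level = [
        "----?----",
        "---------",
        "----p----",
        "#########",
    ]
    H, W = len(level), len(level[0])
    print(W, H, level[3][4])  # 9 4 '#': a ground tile in the bottom row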


3. Create a state for each tile in the dataset (include a state for <start> and <end>).

States: -, #, [, ], ?, B, p, <S>, …


4. Calculate the probability P(s_{x,y} | s_{x-1,y}, s_{x,y-1}) of a tile s_{x,y} appearing after the tile s_{x-1,y} (to its left) and above the tile s_{x,y-1}:

              -           #           [           ]           ?           B           p           <S>          ...
    (-, -)    P(-|-,-)    P(#|-,-)    P([|-,-)    P(]|-,-)    P(?|-,-)    P(B|-,-)    P(p|-,-)    P(<S>|-,-)   ...
    (-, #)    P(-|-,#)    P(#|-,#)    P([|-,#)    P(]|-,#)    P(?|-,#)    P(B|-,#)    P(p|-,#)    P(<S>|-,#)   ...
    (-, [)    P(-|-,[)    P(#|-,[)    P([|-,[)    P(]|-,[)    P(?|-,[)    P(B|-,[)    P(p|-,[)    P(<S>|-,[)   ...
    ...       ...         ...         ...         ...         ...         ...         ...         ...          ...
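
A sketch of this counting step, assuming each level is stored bottom row first and <S> stands in for out-of-level neighbors (helper names are hypothetical):

    from collections import defaultdict

    def train_level(levels):
        """Count tile occurrences conditioned on (left neighbor, neighbor below)."""
        counts = defaultdict(lambda: defaultdict(int))
        for level in levels:                # level[y][x], with y = 0 the bottom row
            for y in range(len(level)):
                for x in range(len(level[y])):
                    left = level[y][x - 1] if x > 0 else "<S>"
                    below = level[y - 1][x] if y > 0 else "<S>"
                    counts[(left, below)][level[y][x]] += 1
        return counts

    counts = train_level([["#########", "----p----", "---------"]])
    print(counts[("<S>", "<S>")])  # tile counts seen at the bottom-left corner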


Sampling

After training, one can sample new levels from the learned probability distribution:

  1. Start from the bottom-left corner of the level.
  2. Sample tiles vertically (column by column) until you reach an <E> tile, as sketched below.
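
A matching sampling sketch; for simplicity it fills a fixed width × height grid rather than stopping at the <E> tile, which is a simplification of the slide's procedure:

    import random

    def sample_level(counts, width=20, height=4):
        """Fill the grid column by column, bottom row first, one tile at a time."""
        grid = [["-"] * width for _ in range(height)]  # grid[y][x], y = 0 is bottom
        for x in range(width):
            for y in range(height):
                left = grid[y][x - 1] if x > 0 else "<S>"
                below = grid[y - 1][x] if y > 0 else "<S>"
                dist = counts.get((left, below))
                if dist:  # sample a tile; raw counts work as weights
                    grid[y][x] = random.choices(list(dist), weights=list(dist.values()))[0]
                # unseen context: keep the '-' (empty) fallback tile
        return ["".join(row) for row in reversed(grid)]  # top row first for printing

    for row in sample_level(counts):
        print(row)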


How Much Training Data is Required?

In general, data-driven approaches (including Markov chains) require a "large" number of examples to learn a representative probability distribution.

The amount of data you need depends both on the complexity of your problem and on the complexity of your chosen algorithm.

UC Santa Cruz

24
