1 of 56

CSE 3521: Search

[Many slides are adapted from CS188: Introduction to Artificial Intelligence at UC Berkeley and from previous offerings of CSE 3521 at OSU.]

2 of 56

Recap: Agents & Environments

  • Types of agents
    • Reflex agents
    • Planning agents
    • Goal-based agents, learning agents, utility-based agents, logical agents, …
    • These types are not mutually exclusive!

  • Six dimensions of environments: fully vs. partially observable, single- vs. multi-agent, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, deterministic vs. stochastic

  • Appropriate agent design depends on the environment type

[Diagram: the agent perceives the environment and acts on it; environments (states) can change!]

3 of 56

Recap: Agents that plan ahead

  • Planning agents:
    • Ask “what if”
    • Decisions based on hypothesized consequences of actions
    • Must have a model of how the world evolves in response to actions
    • Must formulate a goal test
    • Consider how the world WOULD BE

4 of 56

Recap: Agents that plan ahead

  • Before taking real actions in the real environment, formulate an off-line problem in a simulated/imaginary environment (model).

  • The off-line problem:
    • Has a goal test/desired state(s): not necessarily the same as the real problem's
    • Aims to find a sequence of actions that achieves the goal, ideally with optimal cost/performance measure/utility

  • Execute the action sequence in the real environment, or
  • Execute only the first action in the sequence, observe how the state changes, and possibly re-plan!

5 of 56

Recap: Search problems

  • A systematic way of looking for the (optimal) action sequence to reach the goal
  • Not the only way

6 of 56

Today

  • Search Problem

  • State Space Graphs and Search Trees

  • Uninformed Search Methods

    • Depth-First Search

    • Breadth-First Search

    • Uniform-Cost Search

7 of 56

Search problems

  • A search problem consists of:

    • A state space

    • A successor function

(with actions, costs)

    • A start state and a goal test

  • A solution is a sequence of actions (a plan) which transforms the start state to a goal state

[Figure: Pacman grid example; actions such as “N” or “E”, each with cost 1.0, move between states; there is one start state and one or more goal states]
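The three components above can be sketched as a small class. This is a minimal sketch; the names (SearchProblem, successors, is_goal) are mine, not the slides'.

```python
# Illustrative sketch of the three components of a search problem.
class SearchProblem:
    def __init__(self, start, goal_states, successor_map):
        self.start = start                    # the start state
        self.goal_states = goal_states        # goal test = membership in this set
        self.successor_map = successor_map    # state -> [(action, next_state, cost), ...]

    def is_goal(self, state):
        return state in self.goal_states

    def successors(self, state):
        return self.successor_map.get(state, [])


# A 2x1 grid world: move East from (0, 0) to reach the goal (1, 0).
tiny = SearchProblem(
    start=(0, 0),
    goal_states={(1, 0)},
    successor_map={(0, 0): [("E", (1, 0), 1.0)]},
)
print(tiny.is_goal(tiny.start))   # False
```

A solution for this tiny problem is the one-action plan ["E"].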

8 of 56

Example: traveling in Romania

  • State space:
    • Cities
  • Successor function:
    • Roads: Go to adjacent city with cost = distance
  • Start state:
    • Arad
  • Goal test:
    • Is state == Bucharest?

  • Solution?
    • If there exists one, it is a sequence of cities from Arad to Bucharest
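The Romania problem can be written down directly as a successor map. The cities and distances below are a fragment of the standard AIMA version of this example; successors of a city are (neighbor, distance) pairs.

```python
# A fragment of the Romania road map (distances as in the standard AIMA example).
roads = {
    "Arad":           [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu":          [("Arad", 140), ("Oradea", 151), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras":        [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti":        [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
}

start = "Arad"

def is_goal(state):
    return state == "Bucharest"

# One solution: Arad -> Sibiu -> Fagaras -> Bucharest, cost 140 + 99 + 211 = 450
```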

9 of 56

What’s in a state space?

  • Problem: Pathing
    • States: (x, y) location
    • Actions: NSEW
    • Successor: update location only
    • Goal test: is (x, y) == END?
  • Problem: Eat-All-Dots
    • States: {(x, y), dot Booleans}
    • Actions: NSEW
    • Successor: update location and dot Booleans
    • Goal test: dots all false

The world state includes every detail of the environment

A search state keeps only the details needed for planning (abstraction)

10 of 56

State space sizes? How many possible states?

  • World state:
    • Agent positions: 120 (i.e., 10 x 12)
    • Food count: 30 (i.e., 5 x 6)
    • Ghost positions: 12
    • Agent facing: N, S, E, or W (4)
  • How many
    • World states?

120 x (2^30) x (12^2) x 4

    • States for pathing?

120

    • States for eat-all-dots?

120 x (2^30)
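The counts above are quick to verify: 2^30 possible food subsets, 12^2 ordered positions for two ghosts, and 4 facings.

```python
# Checking the state-count arithmetic above.
agent_positions = 120          # 10 x 12
food_dots       = 30           # 5 x 6
ghost_spots     = 12
facings         = 4            # N, S, E, W

world_states        = agent_positions * 2**food_dots * ghost_spots**2 * facings
pathing_states      = agent_positions
eat_all_dots_states = agent_positions * 2**food_dots

print(world_states)            # roughly 7.4e13: far too many to enumerate
```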

11 of 56

Questions to you

  • Do you think “how to search faster and accurately” has anything to do with intelligence?

12 of 56

Today

  • Search Problem

  • State Space Graphs and Search Trees

  • Uninformed Search Methods

    • Depth-First Search

    • Breadth-First Search

    • Uniform-Cost Search

13 of 56

State space graphs and search trees

To “systematically” find the solution, rather than by brute force

14 of 56

State space graphs

  • State space graph: a mathematical representation of a search problem
    • Nodes are states (i.e., abstracted world configurations)
    • Arcs represent successors (action and results)
    • The goal test is a set of goal nodes (maybe only one)

  • In a state space graph, each unique state occurs only once!


16 of 56

State space graphs

  • State space graph: a mathematical representation of a search problem
    • Nodes are states (i.e., abstracted world configurations)
    • Arcs represent successors (action and results)
    • The goal test is a set of goal nodes (maybe only one)

  • In a state space graph, each unique state occurs only once!

  • We can rarely build this full graph in memory (it’s too big), but it’s a useful idea

17 of 56

State space graphs

  • State space graph: a mathematical representation of a search problem
    • Nodes are states (i.e., abstracted world configurations)
    • Arcs represent successors (action and results)
    • The goal test is a set of goal nodes (maybe only one)

  • In a state space graph, each unique state occurs only once!

  • We can rarely build this full graph in memory (it’s too big), but it’s a useful idea

[Figure: a tiny state space graph for a tiny search problem, with states S, a, b, c, d, e, f, h, p, q, r and goal G]

18 of 56

Search trees (moves in state space graphs)

  • A search tree:
    • A “what if” tree of plans and their outcomes
    • The start state is the root node
    • Children correspond to successors
    • Nodes show states, but correspond to PLANS that achieve those states
    • For most problems, we can never actually build the whole tree

[Figure: the start state (“this is now”) is the root; actions such as “N”, 1.0 and “E”, 1.0 branch into possible futures]

19 of 56

Search trees: nodes are not states but partial plans

[Figures: a state space graph with states a, b, c and goal G, next to the corresponding search tree whose nodes are the plans (a), (a-b), (a-b-c), (a-b-c-a), (a-b-c-G), …]

  • Can we have “a” again? Or should that branch point back to the root (then it is not a tree anymore)?
  • Yes! The node is actually the plan (a-b-c-a), which is different from (a).

Different plans that achieve the same state will be different nodes in the tree!

20 of 56

State space graphs vs. search trees

[Figure: a state space graph (states S, a, b, c, d, e, f, h, p, q, r; goal G) shown next to its search tree, in which states such as a, e, p, q appear at many nodes]

Each NODE in the search tree is a PATH in the state space graph.

21 of 56

State space graphs vs. search trees

[Figure: the same state space graph and search tree as before]

Each NODE in the search tree is a PATH in the state space graph.

A unique STATE can show up at multiple NODES in a search tree; each NODE in the search tree is a unique plan.

22 of 56

State space graphs vs. search trees

[Figure: the same state space graph and search tree as before]

We construct both on demand – and we construct as little as possible.

Each NODE in the search tree is a PATH in the state space graph.

A unique STATE can show up at multiple NODES in a search tree; each NODE in the search tree is a unique plan.

23 of 56

Exercise: State space graphs vs. search trees

Consider this 4-state graph:

[Figure: a graph with states S, a, b and goal G]

How big/deep is its search tree (from S)?

24 of 56

Exercise: State space graphs vs. search trees

Consider this 4-state graph:

[Figure: the same graph, with states S, a, b and goal G]

Important: Lots of repeated structure in the search tree!

How big/deep is its search tree (from S)?

25 of 56

Questions?

26 of 56

Tree search

How to systematically build/explore a search tree to find the solution!

27 of 56

Search example: Romania

28 of 56

Searching with a search tree

  • Search:
    • Expand out potential plans (tree nodes)
    • Maintain a fringe of partial plans under consideration
    • Try to expand as few tree nodes as possible


30 of 56

Searching with a search tree

  • Search:
    • Expand out potential plans (tree nodes)
    • Maintain a fringe of partial plans under consideration
    • Try to expand as few tree nodes as possible
    • When to stop?

31 of 56

General tree search

Search problem

32 of 56

General tree search

  • Important components:

(1) Fringe (2) Expansion (3) Exploration strategy

  • Main question: which fringe nodes to explore?

Search problem (state space, successor function, start state, and goal test)

33 of 56

Questions?

34 of 56

Uninformed vs. informed search

  • Uninformed search
    • Given no information about the problem (other than its definition)
    • Find solutions by systematically visiting new states and testing for goal

  • Informed search
    • Given some ideas of where to look for solutions
    • Use problem-specific knowledge


36 of 56

Example: Tree search

[Figure: the example state space graph (states S, a, b, c, d, e, f, h, p, q, r; goal G)]

37 of 56

Depth-First Search (DFS)

38 of 56

Depth-First Search (DFS)

[Figure: DFS on the example graph; the deepest branch of the search tree is explored first]

Strategy: expand a deepest node first

Implementation: Fringe is a LIFO stack
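A minimal DFS tree search, where each fringe entry is a whole path (a plan). The example graph is shaped like the slides' running example, but its exact edges are an assumption.

```python
# DFS as tree search: the fringe is a LIFO stack of paths.
def depth_first_search(graph, start, goal):
    fringe = [[start]]                    # stack of paths (plans)
    while fringe:
        path = fringe.pop()               # deepest (most recently pushed) first
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in path:           # avoid cycles along this path
                fringe.append(path + [nxt])
    return None

graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "r": ["f"], "f": ["c", "G"], "b": ["a"], "c": ["a"], "h": ["p", "q"]}
print(depth_first_search(graph, "S", "G"))   # ['S', 'e', 'r', 'f', 'G']
```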

39 of 56

Search algorithm properties

40 of 56

Search algorithm properties

  • Complete: Guaranteed to find a solution if one exists?
  • Optimal: Guaranteed to find the least cost path?
  • Time complexity?
  • Space complexity (fringe)?

  • Cartoon of search tree:
    • b is the branching factor
    • m is the maximum depth
    • solutions at various depths

  • Number of nodes in entire tree?
    • 1 + b + b^2 + … + b^m = O(b^m)
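A quick numerical check of that node count: the geometric sum 1 + b + b^2 + … + b^m has the closed form (b^(m+1) - 1) / (b - 1), which is O(b^m).

```python
# Check the node-count formula for a small branching factor and depth.
b, m = 3, 6
total = sum(b**i for i in range(m + 1))          # 1 + b + b^2 + ... + b^m
assert total == (b**(m + 1) - 1) // (b - 1)      # geometric-series closed form
print(total)                                     # 1093; the last tier (3^6 = 729) dominates
```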

[Figure: search tree cartoon; 1 node at the root, then b nodes, b^2 nodes, …, b^m nodes at the deepest of the m tiers]

41 of 56

Depth-First Search (DFS) properties

  • What nodes does DFS expand?
    • Some left prefix of the tree
    • Could process the whole tree!
    • If m is finite, takes time O(b^m)

  • How much space does the fringe take?
    • Only has siblings on the path to the root, so O(b·m)

  • Is it complete?
    • m could be infinite, so only if we prevent cycles

  • Is it optimal?
    • No, it finds the “leftmost” solution, regardless of depth or cost

[Figure: the same search tree cartoon; branching factor b, m tiers]

42 of 56

Questions?

43 of 56

Breadth-First Search (BFS)

44 of 56

Breadth-First Search (BFS)

[Figure: BFS on the example graph; the search tree is expanded tier by tier (“search tiers”)]

Strategy: expand a shallowest node first

Implementation: Fringe is a FIFO queue
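Swapping the LIFO stack for a FIFO queue is the only change from DFS. Same assumed example graph as before.

```python
from collections import deque

# BFS as tree search: the fringe is a FIFO queue of paths.
def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()           # shallowest (oldest) first
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in path:           # avoid cycles along this path
                fringe.append(path + [nxt])
    return None

graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "r": ["f"], "f": ["c", "G"], "b": ["a"], "c": ["a"], "h": ["p", "q"]}
print(breadth_first_search(graph, "S", "G"))   # ['S', 'e', 'r', 'f', 'G']
```

Here DFS and BFS happen to return the same path, since the leftmost deep branch is also the shallowest solution; in general they differ.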

45 of 56

Breadth-First Search (BFS) properties

  • What nodes does BFS expand?
    • Processes all nodes above the shallowest solution
    • Let the depth of the shallowest solution be s
    • Search takes time O(b^s)

  • How much space does the fringe take?
    • Has roughly the last tier, so O(b^s)

  • Is it complete?
    • s must be finite if a solution exists, so yes!

  • Is it optimal?
    • Only if costs are all 1 (more on costs later)

[Figure: search tree cartoon; the shallowest solution lies at tier s, which holds b^s nodes]

46 of 56

DFS vs. BFS

                           DFS       BFS
  Time to find a solution  O(b^m)    O(b^s)
  Fringe space             O(b·m)    O(b^s)
  Complete                 No        Yes
  Optimal                  No        Yes, if costs are all 1

  • Time to find a solution IS NOT the cost of the solution (action sequence)

  • Example: The time to find a traveling plan between City A and City B IS NOT the time to travel from City A to City B

47 of 56

Iterative deepening

  • Idea: get DFS’s space advantage with BFS’s time/shallow-solution advantages
    • Run a DFS with depth limit 1. If no solution…
    • Run a DFS with depth limit 2. If no solution…
    • Run a DFS with depth limit 3. …..

  • Isn’t that wastefully redundant?
    • Generally, most of the work is in the deepest level searched, so not so bad!

  • A preferred method when the search space is large and the depth of the solution is not known
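The steps above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits. Same assumed example graph as before.

```python
# Depth-limited DFS: refuse to extend a path once the limit is exhausted.
def depth_limited(graph, path, goal, limit):
    if path[-1] == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph.get(path[-1], []):
        if nxt not in path:
            found = depth_limited(graph, path + [nxt], goal, limit - 1)
            if found is not None:
                return found
    return None

# Iterative deepening: DFS space usage, BFS-like shallowest-solution behavior.
def iterative_deepening(graph, start, goal, max_depth=50):
    for limit in range(1, max_depth + 1):
        found = depth_limited(graph, [start], goal, limit)
        if found is not None:
            return found
    return None

graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "r": ["f"], "f": ["c", "G"], "b": ["a"], "c": ["a"], "h": ["p", "q"]}
print(iterative_deepening(graph, "S", "G"))   # found at depth limit 4
```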


48 of 56

Questions?

49 of 56

Cost-sensitive search

  • BFS finds the shortest path (plan; solution) in terms of the number of actions.
  • It does not find the least-cost path.
  • We will now study a similar algorithm which does find the least-cost path.

[Figure: the example graph from START to GOAL, now with a cost on each edge]

50 of 56

Uniform Cost Search (UCS)

51 of 56

Uniform Cost Search (UCS)


Strategy: expand the cheapest node first

Implementation: Fringe is a priority queue (priority: cumulative cost)

[Figure: UCS on the example graph; tree nodes are labeled with their cumulative cost from S, graph edges with their step costs, and expansion proceeds through increasing “cost contours”]
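UCS is the same tree-search loop with a priority queue keyed on cumulative path cost. The graph shape follows the slides' example, but the edge costs below are illustrative, not the figure's exact numbers.

```python
import heapq
from itertools import count

def uniform_cost_search(graph, start, goal):
    """Fringe is a priority queue on cumulative cost; the counter breaks
    ties so that paths themselves never need to be compared."""
    tie = count()
    fringe = [(0, next(tie), [start])]
    while fringe:
        cost, _, path = heapq.heappop(fringe)     # cheapest node first
        state = path[-1]
        if state == goal:
            return cost, path
        for nxt, step in graph.get(state, []):
            if nxt not in path:                   # avoid cycles along this path
                heapq.heappush(fringe, (cost + step, next(tie), path + [nxt]))
    return None

graph = {"S": [("d", 3), ("e", 9), ("p", 1)],
         "d": [("b", 1), ("c", 8), ("e", 2)],
         "e": [("h", 8), ("r", 2)],
         "r": [("f", 2)],
         "f": [("c", 3), ("G", 2)],
         "b": [("a", 2)], "c": [("a", 2)], "h": [("p", 4), ("q", 4)]}
print(uniform_cost_search(graph, "S", "G"))   # (11, ['S', 'd', 'e', 'r', 'f', 'G'])
```

Note that the fewest-action path S-e-r-f-G costs 15 here, so UCS returns the longer but cheaper plan through d.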

52 of 56

Uniform Cost Search (UCS) properties

  • What nodes does UCS expand?
    • Processes all nodes with cost less than the cheapest solution!
    • If the cheapest solution costs C* and each arc costs at least ε, then the “effective depth” is roughly C*/ε
    • Takes time O(b^(C*/ε)) (exponential in effective depth)
    • Plus the additional overhead of the priority queue

  • How much space does the fringe take?
    • Has roughly the last tier, so O(b^(C*/ε))

  • Is it complete?
    • Assuming the least-cost solution has a finite cost and the minimum arc cost is positive, yes!

  • Is it optimal?
    • Yes!

[Figure: cost contours; expansion proceeds through tiers c ≤ 1, c ≤ 2, c ≤ 3, …, roughly C*/ε tiers in all]

53 of 56

The one queue

  • All the search algorithms are the same except for fringe/exploration strategies
    • Conceptually, all fringes are priority queues (i.e., collections of nodes with attached priorities)
    • Practically, for DFS and BFS, you can avoid the log(n) overhead of an actual priority queue by using a stack and a queue instead
    • Can even code one implementation that takes a variable queuing object
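That single implementation might look like the sketch below: one tree-search loop, with the exploration strategy living entirely in the queuing object (class names are my own). Same illustrative weighted graph as in the UCS example.

```python
from collections import deque
import heapq

class Stack:                                   # DFS: LIFO, ignores priority
    def __init__(self): self.data = []
    def push(self, item, priority): self.data.append(item)
    def pop(self): return self.data.pop()
    def __bool__(self): return bool(self.data)

class Queue:                                   # BFS: FIFO, ignores priority
    def __init__(self): self.data = deque()
    def push(self, item, priority): self.data.append(item)
    def pop(self): return self.data.popleft()
    def __bool__(self): return bool(self.data)

class PriorityQueue:                           # UCS: cheapest first
    def __init__(self): self.heap, self.n = [], 0
    def push(self, item, priority):
        heapq.heappush(self.heap, (priority, self.n, item)); self.n += 1
    def pop(self): return heapq.heappop(self.heap)[2]
    def __bool__(self): return bool(self.heap)

def tree_search(graph, start, goal, fringe):
    """One loop for DFS/BFS/UCS; the fringe object picks the strategy."""
    fringe.push(([start], 0), 0)
    while fringe:
        path, cost = fringe.pop()
        if path[-1] == goal:
            return path
        for nxt, step in graph.get(path[-1], []):
            if nxt not in path:
                fringe.push((path + [nxt], cost + step), cost + step)
    return None

graph = {"S": [("d", 3), ("e", 9), ("p", 1)], "d": [("b", 1), ("c", 8), ("e", 2)],
         "e": [("h", 8), ("r", 2)], "r": [("f", 2)], "f": [("c", 3), ("G", 2)],
         "b": [("a", 2)], "c": [("a", 2)], "h": [("p", 4), ("q", 4)]}
print(tree_search(graph, "S", "G", Queue()))          # shallowest solution
print(tree_search(graph, "S", "G", PriorityQueue()))  # cheapest solution
```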

54 of 56

Search and models

  • Search operates over the models of the world
    • The agent doesn’t actually try all the plans out in the real world!
    • Planning is all “in simulation”
    • Your search is only as good as your models …

55 of 56

Search gone wrong?

56 of 56

Summary: Search algorithms

  • Search tree:
    • Nodes: represent plans for reaching states
    • Plans have costs (sum of action costs)

  • Search algorithm:
    • Systematically builds/explores a search tree
    • Chooses an ordering of the fringe (unexplored nodes)
    • Optimal: finds least-cost plans