1 of 28

Simulated Annealing (Intelligent Search)

  • “Annealing” is a process in metal casting where the metal is first melted at a high temperature, beyond its melting point, and then allowed to cool down until it returns to solid form.
  • In the physical process of annealing, the hot material gradually loses energy and finally, at some point, reaches a state of minimum energy.
  • A common observation is that while most physical processes make transitions from higher to lower energy states, there still remains a small probability that the material may cross a valley of energy states and move to an energy state higher than that of the valley.

2 of 28

Simulated Annealing: Example of Rolling Ball

[Figure: a ball rolling on an energy landscape. From a high energy state it settles into the valley with the minimum energy state, occasionally passing through a state whose energy is a little higher than that of the valley.]

3 of 28

Simulated Annealing

  • If the next state has a higher energy (a worse value of f) than the current state, the transition is still allowed with a probability p’, commonly taken as p’ = exp(-ΔE/T), where ΔE is the increase in energy and T is a control parameter called temperature.

4 of 28

Simulated Annealing

  • A random number is chosen in the range [0,1] and p’ is compared with it.
  • If p’ is larger, the corresponding state is selected for the next transition.
  • The parameter T, also called temperature, is gradually reduced as the search proceeds.
  • The logic is that as T decreases, p’ also decreases, thereby allowing the algorithm to settle into a stable state (see the acceptance-test sketch below).
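
A minimal sketch of this acceptance test in Python, assuming the usual Metropolis form p’ = exp(-ΔE/T) for the transition probability (ΔE being the increase in f caused by the move); the function name and its arguments are illustrative, not from the slides:

```python
import math
import random

def accept_move(delta_e, T):
    """Metropolis-style acceptance test used in simulated annealing."""
    if delta_e <= 0:
        return True                      # the move improves (or keeps) f: always accept
    if T <= 0:
        return False                     # temperature exhausted: behave greedily
    p_prime = math.exp(-delta_e / T)     # p' shrinks as delta_e grows or T falls
    return p_prime > random.random()     # accept only if p' beats a random number in [0,1]
```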

5 of 28

Procedure Simulated Annealing

Begin

  • Identify the possible starting states and measure their distance (f) from the goal. Push them onto a stack in ascending order of their f.
  • Repeat

Pop stack-top element

If stack-top element is the goal, announce it and exit.

Else

  • Generate children of the stack-top element N and compute f for each of them
  • If the f measure of at least one child is improving (better than that of N), push those improving children onto the stack in ascending order of their f

6 of 28

Procedure Simulated Annealing

  • If none of the children of N has a better f, then
  • Select one of them at random, compute its p’, and test whether p’ exceeds a randomly generated number in the interval [0,1]. If yes, select that state as the next move; if no, generate another alternative legal state and test it in the same way until a move can be selected. Replace the stack-top element by the selected move.
  • Reduce T slightly. If the reduced value is negative, set it to zero.

End

Until the stack is empty

End
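
The procedure above is phrased in terms of a stack; below is a minimal Python sketch of the same idea in its more common single-state form. The evaluation function `f`, the neighbour generator `neighbors`, and the cooling schedule (`T`, `alpha`, `T_min`) are assumptions to be supplied by the problem:

```python
import math
import random

def simulated_annealing(start, f, neighbors, T=100.0, alpha=0.95, T_min=1e-3):
    """Minimal simulated-annealing loop (single-state form of the procedure above).

    f(state)         -> cost to minimise (distance from the goal)
    neighbors(state) -> list of legal next states
    T, alpha, T_min  -> initial temperature, cooling factor, stopping temperature
    """
    current = start
    while T > T_min:
        candidate = random.choice(neighbors(current))
        delta = f(candidate) - f(current)
        if delta <= 0:
            current = candidate                      # improving move: always take it
        elif math.exp(-delta / T) > random.random():
            current = candidate                      # worse move: take it with probability p'
        T *= alpha                                   # reduce T slightly on every iteration
    return current
```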

7 of 28

Assignment

  • Explain how Simulated Annealing is better than Hill Climbing.
  • Solve a Sudoku puzzle using the SA algorithm.

8 of 28

Heuristic Search

  • “Heuristics” stands for “rule of thumb”: rules that work successfully in many cases, but whose success is not guaranteed
  • We expand nodes by judiciously selecting the more promising ones, identified by measuring their strength relative to their competitive counterparts with the help of specialized intuitive functions, called heuristic functions.
  • Heuristic search is generally employed for solving two distinct types of problems: (i) forward reasoning, (ii) backward reasoning
  • Backward reasoning means we move from the goal towards the starting state
  • In forward reasoning, as discussed before, we move from the starting state towards the goal. When realized with heuristic functions, this class of search algorithms is called heuristic search for OR graphs, or the Best First Search algorithms.

9 of 28

Heuristic Search

  • Best First Search is a class of algorithms; depending on the variation of the performance-measuring function, it goes by different names, e.g., A*
  • Heuristic backward reasoning algorithms are generally called AND-OR graph search algorithms, e.g., AO*

10 of 28

Heuristic Search for OR Graphs

  • Most forward reasoning problems can be represented by an OR graph.
  • When a number of rules are applicable to the current state, we could select a better state among the children as the next state.
  • The Hill Climbing algorithm described previously has a depth-first flavor.
  • In best first search, we start with a promising state and generate all its offspring.
  • The performance (fitness) of each of these nodes is then examined and the most promising node, based on fitness, is selected for expansion.
  • This most promising node is then expanded and the fitness of all its newborn children is measured.
  • Now, instead of selecting only from the generated children, all the nodes generated so far that have no children (i.e., have not yet been expanded) are also examined, and the most promising of these nodes is selected for expansion.
  • Unlike hill climbing, best first search provides scope for correction in case a wrong step was selected earlier.

11 of 28

Procedure Best First Search

Begin

  • Identify the possible starting states and estimate their distance (f) from the goal. Put them in a list L
  • While L is not empty do
  • Identify the node n in L that has the minimum f. If more than one node has the minimum f, select any one of them arbitrarily.
  • If n is the goal

Return n along with the path from the starting node, and exit

Else

Remove n from L and add all children of n to the list L, with their labeled paths from the starting node.

End While

End

**How to define f is not specified by the algorithm; a suitable estimate must be chosen for the problem at hand.
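
A minimal Python sketch of this procedure; `successors` and `f` are assumptions supplied by the problem, and the `expanded` set is an implementation detail added to avoid revisiting states on graphs with cycles:

```python
import heapq
import itertools

def best_first_search(starts, goal, successors, f):
    """Minimal Best First Search: repeatedly expand the open node with minimum f.

    starts     -> iterable of possible starting states
    successors -> function returning the children of a node
    f          -> heuristic estimate of a node's distance from the goal
    Returns the path from a start node to the goal, or None.
    """
    counter = itertools.count()                       # tie-breaker for equal f values
    L = [(f(s), next(counter), [s]) for s in starts]  # the list L, kept as a min-heap on f
    heapq.heapify(L)
    expanded = set()                                  # not in the slides: avoids re-expanding states
    while L:
        _, _, path = heapq.heappop(L)                 # node n with the minimum f
        n = path[-1]
        if n == goal:
            return path                               # n plus its path from the starting node
        if n in expanded:
            continue
        expanded.add(n)
        for child in successors(n):                   # add all children of n to L
            heapq.heappush(L, (f(child), next(counter), path + [child]))
    return None                                       # L became empty without reaching the goal
```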

12 of 28

Reach G from A Using Best First Search

[Figure: a search graph over the nodes A, B, C, D, E, F, H and G, with edge costs 7, 11, 14, 25, 15, 8, 10, 20, 9 and 10 marked on the arcs; the assignment of costs to edges is shown only in the original figure.]

Heuristic estimates of the distance to the goal G:

A->G 40, B->G 32, C->G 25, D->G 35, E->G 19, F->G 17, H->G 10, G->G 0
13 of 28

A* Algorithm

  • A node is called open if it has been generated and the heuristic h’(x) has been applied to it, but it has not yet been expanded.
  • A node is called closed if it has already been expanded to generate its offspring.
  • To measure the goodness of a node in the A* algorithm, we require two cost functions: (i) the heuristic cost and (ii) the generation cost.
  • The heuristic cost measures the distance of the current node x from the goal and is denoted by h(x).
  • The cost of generating a node x, denoted by g(x), measures the distance of node x from the starting node of the graph.
  • The total cost function at a node, denoted by f(x), is the sum of g(x) and h(x).

14 of 28

A* Algorithm

  • The generation cost g(x) can be measured directly, since we generate node x through a known number of state transitions.
  • For instance, if node x was generated from the starting node through m transitions, then the cost g(x) will be m, or proportional to m.
  • h(x) denotes the cost that is yet to be spent to reach the goal from the current node x.
  • Any cost we assign to h(x) must therefore be a prediction, and we denote this predicted cost by h’(x). Consequently, the predicted total cost is f’(x) = g(x) + h’(x).
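
As a small sketch of how these quantities might be carried per node (the `Node` record, its field names, and the numbers in the usage line are illustrative assumptions, not taken from the slides):

```python
from dataclasses import dataclass

@dataclass
class Node:
    state: str
    g: float        # cost already spent: distance from the starting node
    h_prime: float  # predicted cost still to be spent to reach the goal

    @property
    def f_prime(self) -> float:
        # f'(x) = g(x) + h'(x): predicted total cost of a solution through this node
        return self.g + self.h_prime

# Illustrative numbers only: a node reached after 4 unit-cost transitions
# with a predicted remaining cost of 19 has f' = 23.
assert Node("x", g=4, h_prime=19).f_prime == 23
```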

15 of 28

Reach G from A Using A*

[Figure: the same search graph as in the Best First Search example: nodes A, B, C, D, E, F, H and G, with edge costs 7, 11, 14, 25, 15, 8, 10, 20, 9 and 10 marked on the arcs.]

Heuristic estimates h’(x) of the distance to the goal G:

A->G 40, B->G 32, C->G 25, D->G 35, E->G 19, F->G 17, H->G 10, G->G 0

16 of 28

Another Example

[Figure: a search graph over the nodes A, B, C, D, E, F and G, with edge costs 3, 4, 7, 10, 12, 16, 5, 5 and 2 marked on the arcs; the assignment of costs to edges is shown only in the original figure.]

Heuristic estimates h’(x) of the distance to the goal G:

A->G 14, B->G 12, C->G 11, D->G 6, E->G 4, F->G 11, G->G 0
17 of 28

Another Example

18 of 28

Procedure A*

Refer to GeeksforGeeks:

https://www.geeksforgeeks.org/a-search-algorithm/
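
Since the slides defer the procedure to the reference above, here is a minimal Python sketch of A* consistent with the cost functions g(x), h’(x) and f’(x) defined earlier; the graph and heuristic are passed in as plain dictionaries, and all names are illustrative:

```python
import heapq

def a_star(graph, h_prime, start, goal):
    """Minimal A* sketch.

    graph   -> dict: node -> list of (neighbour, edge_cost) pairs
    h_prime -> dict: node -> predicted cost-to-goal h'(x)
    Returns (path, g) for the goal, or None if it is unreachable.
    """
    open_list = [(h_prime[start], 0, start, [start])]   # ordered by f'(x) = g(x) + h'(x)
    best_g = {start: 0}                                 # cheapest g(x) found so far per node
    while open_list:
        f_prime, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                                    # stale entry: a cheaper route was found later
        for neighbour, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = g_new
                f_new = g_new + h_prime[neighbour]
                heapq.heappush(open_list, (f_new, g_new, neighbour, path + [neighbour]))
    return None
```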

19 of 28

A* Algorithm

  • The A* algorithm is complete, i.e., it terminates with a solution whenever one exists
  • The A* algorithm is admissible, i.e., it is guaranteed to return an optimal solution (if one exists), provided the heuristic function is chosen suitably; in particular, h’(x) must never overestimate the true remaining cost h(x)

20 of 28

AO* Search Algorithm

  • The AO* search algorithm is based on problem decomposition (breaking the problem down into smaller pieces)
  • Here, a problem can be divided or decomposed into a set of subproblems, where each subproblem can be solved separately
  • For each subproblem a sub-solution is evaluated, and a combination of these sub-solutions forms the whole solution.
  • AND-OR graphs or AND-OR trees are used for representing such solutions.

21 of 28

Problem Reduction in Artificial Intelligence

  • AO* is an informed search algorithm, i.e., it works based on a heuristic.
  • It is similar to the divide-and-conquer strategy, in which a solution to a problem is obtained by decomposing it into smaller sub-problems.
  • This method generates arcs called AND-OR arcs. One AND arc may point to any number of successor nodes, all of which must be solved for that arc to point to a solution.
  • Since the algorithm searches an AND-OR graph, it is called the AO* search algorithm.

22 of 28

Example: Goal: Acquire TV Set

Alternative 1: Steal a TV Set

or

Alternative 2: Earn some money and buy a TV Set

Carefully look at the arcs
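
The example on this slide can be written down as a tiny AND-OR structure. This is only one possible representation (the dictionary layout is an assumption; the node names come from the slide):

```python
# Each goal maps to a list of alternatives (OR choices); each alternative is the
# list of subgoals that must ALL be achieved (an AND arc).
and_or_graph = {
    "Acquire TV set": [
        ["Steal TV set"],                    # alternative 1: a single subgoal
        ["Earn some money", "Buy TV set"],   # alternative 2: an AND arc, both subgoals needed
    ],
}
```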

23 of 28

AO* Search Algorithm in AI

  • Unlike the A* algorithm, here we have to handle the AND arcs appropriately
  • A* algorithms cannot search AND-OR graphs efficiently
  • In Fig (a), node A has been expanded, producing two arcs: one leading to B, and another (an AND arc) leading to C and D
  • The numbers beside each node represent the value of f at that node (i.e., the estimated cost of reaching the goal state from that node).
  • For simplicity, let us assume that every operation (i.e., applying a rule) has unit cost, i.e., each arc has a cost of 1.
  • With the information available so far, C appears to be the most promising node to expand, since its f value of 3 is the lowest. However, going through B would be better: to use C we must also use D, so the cost would be 3+4+1+1 = 9, whereas through B it would be 5+1 = 6 (this arithmetic is reproduced below).
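
The comparison on this slide can be reproduced directly from the stated numbers (f(C) = 3, f(D) = 4, f(B) = 5 as implied by the 5+1 sum, and unit arc costs):

```python
# Unit arc cost assumed, as stated on the slide.
cost_through_B       = 5 + 1          # f(B) plus one arc from A            -> 6
cost_through_C_and_D = 3 + 4 + 1 + 1  # f(C) + f(D) plus two arcs (AND arc) -> 9
assert cost_through_B == 6 and cost_through_C_and_D == 9
# So although C alone has the lowest f, the alternative through B is cheaper overall.
```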

24 of 28

AO* Search Algorithm in AI

  • Thus the choice of the next node to expand depends not only on the f value, but also on whether that node is part of the current best path from the initial node.
  • Refer to Figure (b). In this figure the node G appears to be the most promising node, with the least f value. But G is not on the current best path: to use G we must also use H, giving a cost of 9 for that arc, and this in turn demands that the arcs above it be used, raising the estimated cost of that path to 27.
  • The path from A through B to E-F is better, with a total cost of 17+1 = 18.

25 of 28

Search an AND-OR Graph

  • Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded.
  • Pick one of these best unexpanded nodes and expand it. Add its successors to the graph and compute f (the cost of the remaining distance) for each of them.
  • Change the f estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph, and decide whether the current best path has changed (a sketch of this cost-revision step follows below).
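
A minimal sketch of the cost-revision step described in the last bullet, using the AND-OR representation sketched earlier (each goal maps to a list of AND arcs); the uniform `arc_cost` is an assumption matching the unit-cost figures above:

```python
def revised_f(node, and_or_graph, f_estimate, arc_cost=1):
    """Recompute f(node) from its successors: take the cheapest alternative (OR),
    where an AND arc costs the sum of its subgoals' f values plus one arc cost each."""
    arcs = and_or_graph.get(node)
    if not arcs:                         # terminal or unexpanded node: keep its current estimate
        return f_estimate[node]
    return min(sum(f_estimate[child] + arc_cost for child in arc) for arc in arcs)

# E.g., with f_estimate = {"B": 5, "C": 3, "D": 4}, an OR choice between the arc [B]
# and the AND arc [C, D] gives min(5 + 1, 3 + 4 + 1 + 1) = 6, as computed on slide 23.
```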

26 of 28

27 of 28

28 of 28

Try Yourself