Unit - 2
Introduction to Artificial
Intelligence [22AI002]
Dr. Aditi Moudgil
Assistant Professor
Search Algorithm Terminologies
1. Search: Searching is a step-by-step procedure to solve a
search problem in a given search space. The algorithms provide
search solutions through a sequence of actions that transform
the initial state to the goal state. A search problem can have
three main factors:
• Search Space: Search space represents a set of possible
solutions, which a system may have. It is typically structured as a
graph or a tree, where:
• Nodes represent states of the problem.
• Edges represent the possible actions or transitions between states.
• Start State: It is a state from where the agent begins the
search. For example, in a pathfinding problem, the start state might be the current
position of a robot in a warehouse.
1. Uninformed Search Algorithms
Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search.
Following are the various types of uninformed search algorithms:
• Breadth-first Search
• Depth-first Search
• Depth-limited Search
• Iterative deepening depth-first search
• Uniform cost search
• Bidirectional Search
Breadth-first Search
• Breadth-first search is the most common search strategy for
traversing a tree or graph. This algorithm searches
breadthwise in a tree or graph, so it is called breadth-first
search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.
• The breadth-first search algorithm is an example of a general-
graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Breadth-first Search
• A standard BFS implementation puts each vertex of the graph into one of
two categories:
– Visited
– Not Visited
• The purpose of the algorithm is to mark each vertex as visited while
avoiding cycles.
The algorithm works as follows:
1. Start by putting any one of the graph's vertices at the back of a queue.
2. Take the front item of the queue and add it to the visited list (if not
already visited).
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't
in the visited list to the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
5. Print visited list for BFS Traversal.
The graph might have two disconnected parts, so to make sure that we cover every vertex, we can also run the BFS algorithm from every node.
Breadth-first Search Pseudocode
• create a queue Q
• mark v as visited and put v into Q
• while Q is non-empty
1. remove the head u of Q
2. mark and enqueue all (unvisited) neighbors of u
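The pseudocode above can be turned into a short runnable sketch. Below is a minimal Python version, assuming the graph is given as an adjacency-list dictionary (the example graph is purely illustrative); collections.deque plays the role of the FIFO queue.

from collections import deque

def bfs(graph, start):
    # Breadth-first traversal of an adjacency-list graph from `start`.
    visited = [start]              # order in which vertices are visited
    queue = deque([start])         # FIFO queue of discovered vertices
    while queue:
        u = queue.popleft()        # remove the head of the queue
        for v in graph[u]:         # enqueue every unvisited neighbor of u
            if v not in visited:
                visited.append(v)
                queue.append(v)
    return visited

# Hypothetical adjacency list for illustration
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A'))             # ['A', 'B', 'C', 'D', 'E', 'F']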
Breadth-first Search
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one which requires the least number of steps.
Disadvantages:
• It requires lots of memory since each level of the
tree must be saved into memory to expand the
next level.
• BFS needs lots of time if the solution is far away
from the root node.
• Space and time complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Real-World Problem Solved using
BFS
Depth-first Search
• A standard DFS implementation puts each vertex of the graph into one of
two categories:
– Visited
– Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding
cycles.
• The DFS algorithm works as follows:
1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list (if not
already visited).
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't
in the visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
5. Print visited list for DFS Traversal.
Depth-first Search Pseudocode
DFS(G, u)
    u.visited = true
    for each v ∈ G.Adj[u]
        if v.visited == false
            DFS(G, v)

init() {
    for each u ∈ G
        u.visited = false
    for each u ∈ G
        if u.visited == false
            DFS(G, u)
}
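The same idea as a minimal runnable Python sketch, assuming an adjacency-list dictionary (the example graph is hypothetical); dfs mirrors DFS(G, u) above, and dfs_all plays the role of init(), covering disconnected parts.

def dfs(graph, u, visited):
    # Recursive depth-first visit starting at vertex u (mirrors DFS(G, u) above).
    visited.append(u)                      # mark u as visited
    for v in graph[u]:
        if v not in visited:
            dfs(graph, v, visited)
    return visited

def dfs_all(graph):
    # Run DFS from every still-unvisited vertex so disconnected parts are covered.
    visited = []
    for u in graph:
        if u not in visited:
            dfs(graph, u, visited)
    return visited

# Hypothetical adjacency list with two disconnected parts
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': [], 'X': ['Y'], 'Y': []}
print(dfs_all(graph))                      # ['A', 'B', 'D', 'C', 'X', 'Y']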
Depth-first Search
Advantage:
• DFS requires very little memory as it only needs
to store a stack of the nodes on the path from the
root node to the current node.
• It takes less time to reach the goal node than the
BFS algorithm (if it traverses in the right path).
Disadvantage:
• There is the possibility that many states keep re-
occurring, and there is no guarantee of finding
the solution.
• The DFS algorithm goes deep into the search space, and it may sometimes end up in an infinite loop.
Depth-Limited Search Algorithm
• A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
• Depth-limited search can be terminated with two
Conditions of failure:
– Standard failure value: It indicates that the
problem does not have any solution.
– Cutoff failure value: It defines no solution for
the problem within a given depth limit.
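A minimal Python sketch of depth-limited search under the same assumptions as before (an adjacency-list dictionary; the tree is hypothetical): it returns the path on success, "cutoff" when the depth limit was hit, and "failure" when no solution exists at all, matching the two failure conditions above.

def depth_limited_search(graph, node, goal, limit, path=None):
    # DFS that treats nodes at the depth limit as having no successors.
    path = (path or []) + [node]
    if node == goal:
        return path                        # success: return the path that was found
    if limit == 0:
        return "cutoff"                    # cutoff failure: the depth limit was reached
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result != "failure":
            return result
    return "cutoff" if cutoff_occurred else "failure"   # standard failure: no solution at all

# Hypothetical tree for illustration
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(depth_limited_search(tree, 'A', 'E', limit=2))     # ['A', 'B', 'E']
print(depth_limited_search(tree, 'A', 'E', limit=1))     # 'cutoff'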
Depth-Limited Search Algorithm
Advantages:
• Depth-limited search is memory efficient.
Disadvantages:
• Depth-limited search also has a disadvantage of
incompleteness.
• It may not be optimal if the problem has more than one
solution.
Uniform-Cost Search Algorithm
• Uniform-cost search is a searching algorithm used
for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is
available for each edge.
• The primary goal of the uniform-cost search is to
find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search
expands nodes according to their path costs from
the root node.
• It can be used to solve any graph/tree where an optimal-cost path is required. The uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the node with the lowest cumulative cost.
Uniform-Cost Search Algorithm
• Algorithm for uniform cost search:
• Insert the root node into the priority queue.
• Remove the element with the highest priority (the lowest cumulative cost).
• If the removed node is the destination, print the total cost and stop the algorithm.
• Else, if the node is already in the visited list, discard it.
• Else, enqueue all the children of the current node into the priority queue, with their cumulative cost from the root as the priority, and add the current node to the visited list.
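A minimal Python sketch of these steps, using heapq as the priority queue; the weighted graph is assumed to be a dictionary mapping each node to (neighbor, edge cost) pairs, and the example values are hypothetical.

import heapq

def uniform_cost_search(graph, start, goal):
    # Expand nodes in order of their cumulative path cost from the root.
    frontier = [(0, start, [start])]                # priority queue of (cost, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # element with the lowest cumulative cost
        if node == goal:
            return cost, path                       # destination reached: report the total cost
        if node in visited:
            continue                                # already expanded via a cheaper path
        visited.add(node)
        for child, edge_cost in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (cost + edge_cost, child, path + [child]))
    return None                                     # no path exists

# Hypothetical weighted graph: node -> list of (neighbor, edge cost)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))         # (4, ['S', 'A', 'B', 'G'])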
Uniform-Cost Search Algorithm
Advantages:
• Uniform cost search is optimal because at every
state the path with the least cost is chosen.
Disadvantages:
• It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.
Iterative deepening depth-first Search
• The iterative deepening algorithm is a
combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit
and does it by gradually increasing the limit until
a goal is found.
• This algorithm performs depth-first search up to a
certain "depth limit", and it keeps increasing the
depth limit after each iteration until the goal node
is found.
• This Search algorithm combines the benefits of
Breadth-first search's fast search and depth-first
search's memory efficiency.
Iterative deepening depth-first Search
• The iterative deepening search algorithm is useful for uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
• It combines the benefits of BFS and DFS search
algorithms in terms of fast search and memory
efficiency.
Disadvantages:
• The main drawback of IDDFS is that it repeats all
the work of the previous phase.
Iterative deepening depth-first Search
1st Iteration ------> A
2nd Iteration -----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
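A minimal Python sketch of iterative deepening: a depth-limited DFS is rerun with limit 0, 1, 2, ... until the goal is found. The tree below is an assumed reconstruction shaped like the iteration trace above (A with children B, C; B with D, E; C with F, G; D with H, I; F with K).

def depth_limited(graph, node, goal, limit):
    # Depth-limited DFS; returns the path to the goal or None.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iddfs(graph, start, goal, max_depth=20):
    # Iterative deepening: gradually increase the depth limit until the goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result, limit                    # path found at this depth limit
    return None

# Hypothetical tree shaped like the iteration trace above
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': [], 'F': ['K'], 'G': [], 'H': [], 'I': [], 'K': []}
print(iddfs(tree, 'A', 'K'))                        # (['A', 'C', 'F', 'K'], 3)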
Bidirectional Search Algorithm
• Bidirectional search algorithm runs two simultaneous
searches, one from the initial state called forward-search and
the other from the goal node called backward-search, to find
the goal node.
• Bidirectional search replaces one single search graph with two
small subgraphs in which one starts the search from an initial
vertex and the other starts from the goal vertex.
• The search stops when these two graphs intersect each
other.
• Bidirectional search can use search techniques such as BFS, DFS, etc.
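A minimal Python sketch of bidirectional search using BFS from both ends on an undirected adjacency-list graph (the example graph is hypothetical); the two searches alternate, and the search stops as soon as their visited sets intersect.

from collections import deque

def bidirectional_search(graph, start, goal):
    # Run BFS from both ends and stop when the two frontiers meet.
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # visited maps for each direction
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # expand one node in the forward direction, then one in the backward direction
        for frontier, parents, others in ((frontier_f, parents_f, parents_b),
                                          (frontier_b, parents_b, parents_f)):
            u = frontier.popleft()
            for v in graph[u]:
                if v not in parents:
                    parents[v] = u
                    frontier.append(v)
                if v in others:                          # the two searches intersect at v
                    return build_path(v, parents_f, parents_b)
    return None

def build_path(meet, parents_f, parents_b):
    # Stitch the forward and backward half-paths together at the meeting node.
    path, node = [], meet
    while node is not None:                              # walk back to the start
        path.append(node)
        node = parents_f[node]
    path.reverse()
    node = parents_b[meet]
    while node is not None:                              # walk on towards the goal
        path.append(node)
        node = parents_b[node]
    return path

# Hypothetical undirected graph for illustration
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search(graph, 'A', 'E'))             # ['A', 'B', 'C', 'D', 'E']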
Bidirectional Search Algorithm
Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory.
Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in
advance.
2. Informed Search Algorithms
• The informed search algorithm is more useful for large search
spaces. An informed search algorithm uses the idea of
heuristic, so it is also called Heuristic search.
• Heuristics function: Heuristic is a function that is used in
Informed Search, and it finds the most promising path. It takes
the current state of the agent as its input and produces the
estimation of how close the agent is to the goal.
• Heuristic function estimates how close a state is to the goal.
• The heuristic method, however, might not always give the best
solution, but it guarantees to find a good solution in a
reasonable time.
2. Informed Search Algorithms
• A heuristic function helps the search algorithm choose a branch
from the ones that are available. It helps with the decision
process by using some extra knowledge about the search space.
2. Informed Search Algorithms
• While playing tic tac toe, there are many placements from which one
player can start, and each placement has its own chances of winning.
However, if the first player starts from the centermost area, they have
the most chances of winning. Hence, chances of winning can be a
heuristic.
2. Informed Search Algorithms
• The heuristic function is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always positive.
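As a small illustration, a common heuristic for grid path-finding is the Manhattan distance from a state to the goal: it is always non-negative and estimates how close the state is to the goal. The coordinates below are hypothetical.

def manhattan_h(state, goal):
    # h(n): estimated number of grid moves from `state` to `goal` (always >= 0).
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Hypothetical robot position and goal cell
print(manhattan_h((1, 2), (4, 6)))   # 7 -> the state is estimated to be 7 moves from the goal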
2. Informed Search Algorithms
Pure Heuristic Search:
• Pure heuristic search is the simplest form of heuristic search
algorithms. It expands nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED. The CLOSED list holds the nodes which have already been expanded, and the OPEN list holds the nodes which have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded, all of its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.
2. Informed Search Algorithms
In the informed search we will discuss two main algorithms which
are given below:
1. Best First Search Algorithm (Greedy search)
2. A* Search Algorithm
Best-first Search Algorithm (Greedy Search)
• Greedy best-first search algorithm always selects the path
which appears best at that moment. It is the combination of
depth-first search and breadth-first search algorithms.
• It uses the heuristic function to guide the search. Best-first search allows us to take advantage of both algorithms: with its help, at each step we can choose the most promising node. The greedy best-first algorithm is implemented using a priority queue.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e. f(n) = h(n).
Best-first Search Algorithm (Greedy Search)
• Best-first search uses the concept of a priority queue and heuristic search. To search the graph space, it maintains two lists for tracking the traversal: an OPEN list that keeps track of the current 'immediate' nodes available for traversal, and a CLOSED list that keeps track of the nodes already traversed.
Best-first Search Algorithm (Greedy Search)
Best first search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n, from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal
node or not. If any successor node is the goal node, then return success and
terminate the search, else proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
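A minimal Python sketch of these steps, with the OPEN list kept as a priority queue ordered by h(n); the graph and heuristic values below are hypothetical illustrations, not the figure from the next slide.

import heapq

def greedy_best_first(graph, h, start, goal):
    # Always expand the OPEN node with the lowest heuristic value h(n).
    open_list = [(h[start], start, [start])]     # OPEN: priority queue ordered by h(n)
    closed = set()                               # CLOSED: nodes already expanded
    while open_list:                             # Step 2: failure if OPEN becomes empty
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path                          # goal reached
        closed.add(node)
        for child in graph.get(node, []):
            in_open = any(child == n for _, n, _ in open_list)
            if child not in closed and not in_open:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

# Hypothetical graph and heuristic values for illustration
graph = {'S': ['A', 'B'], 'A': [], 'B': ['E', 'F'], 'E': [], 'F': ['I', 'G'], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))     # ['S', 'B', 'F', 'G']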
Best-first Search Algorithm (Greedy Search)
• G is Goal State
Best-first Search Algorithm (Greedy Search)
In this search example, we are using two lists which are OPEN and CLOSED
Lists. Following are the iteration for traversing the above example.
Best-first Search Algorithm (Greedy Search)
Expand the nodes of S and put in the CLOSED list
Initialization: Open [B, A], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [F, A], Closed [S, B, E]
Iteration 3: Open [F, A], Closed [S, B, E]
: Open [A], Closed [S, B, E, F]
Iteration 4: Open [G, I, A], Closed [S, B, E, F]
: Open [I, A], Closed [S, B, E, F, G]
Hence the final solution path will be: S----> B----->F----> G
Example of Best-first Search Algorithm
A* Search Algorithm
Algorithm of A* search:
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not, if the list is empty
then return failure and stops.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.
A* Search Algorithm
Algorithm of A* search:
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back-pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
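A minimal Python sketch of the A* steps above, with the OPEN list kept as a priority queue ordered by f(n) = g(n) + h(n). For simplicity, a re-discovered node is pushed again instead of having its back-pointer updated in place (Step 5). The edge costs and heuristic values are an assumed reconstruction chosen to be consistent with the worked trace a few slides later.

import heapq

def a_star(graph, h, start, goal):
    # Expand the OPEN node with the smallest f(n) = g(n) + h(n).
    open_list = [(h[start], 0, start, [start])]       # entries are (f, g, node, path)
    closed = set()
    while open_list:                                  # Step 2: failure if OPEN is empty
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path                            # Step 3: goal reached, g is the path cost
        if node in closed:
            continue                                  # skip duplicates left in the queue
        closed.add(node)                              # Step 4: put n into the CLOSED list
        for child, cost in graph.get(node, []):
            if child not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None

# Assumed edge costs and heuristic values, consistent with the worked trace later on
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))                     # (6, ['S', 'A', 'C', 'G'])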
A* Search Algorithm
Advantages:
• The A* search algorithm performs better than other search algorithms.
• A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
Disadvantages:
• It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
• A* search algorithm has some complexity issues.
• The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
A* Search Algorithm
Initialization: {(S, 5)}
Iteration 1: {(S --> A, 4), (S --> G, 10)}
Iteration 2: {(S --> A --> C, 4), (S --> A --> B, 7), (S --> G, 10)}
Iteration 3: {(S --> A --> C --> G, 6), (S --> A --> C --> D, 11), (S --> A --> B, 7), (S --> G, 10)}
Iteration 4 gives the final result: S --> A --> C --> G provides the optimal path with cost 6.
Example of A* Search Algorithm
• Find the most cost-effective path to reach the final state from the initial state using the A* algorithm.
• Consider g(n) = Depth of node and h(n) = Number of misplaced
tiles.
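A small sketch of the evaluation used in this exercise: g(n) is the depth of the node and h(n) counts the misplaced tiles (the blank, written as 0, is not counted, which is the usual convention). The boards below are hypothetical.

def misplaced_tiles(state, goal):
    # h(n): number of tiles (ignoring the blank, 0) that are not in their goal position.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f_value(depth, state, goal):
    # f(n) = g(n) + h(n), with g(n) taken as the depth of the node.
    return depth + misplaced_tiles(state, goal)

# 8-puzzle boards written row by row; 0 is the blank (hypothetical instance)
start = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)
goal = (2, 8, 1,
        0, 4, 3,
        7, 6, 5)
print(misplaced_tiles(start, goal))   # 5 misplaced tiles
print(f_value(0, start, goal))        # f(root) = 0 + 5 = 5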
Hill Climbing Algorithm
• It is also called greedy local search as it only looks to its good
immediate neighbor state and not beyond that.
• A node of hill climbing algorithm has two components which
are state and value.
• Hill Climbing is mostly used when a good heuristic is
available.
• In this algorithm, we don't need to maintain and handle the
search tree or graph as it only keeps a single current state.
Hill Climbing Algorithm
Features of Hill Climbing
Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
Greedy approach: Hill-climbing algorithm search moves in the
direction which optimizes the cost.
No backtracking: It does not backtrack the search space, as it
does not remember the previous states.
Hill Climbing Algorithm
State-space Diagram for Hill Climbing
• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis.
• If the function on the Y-axis is cost, then the goal of the search is to find the global minimum.
• If the function on the Y-axis is the objective function, then the goal of the search is to find the global maximum.
Hill Climbing Algorithm
State-space Diagram for Hill Climbing
Local Maximum: Local maximum is a state which is better than its neighbor
states, but there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of the objective function.
Current state: It is a state in a landscape diagram where an agent is currently
present.
Flat local maximum: It is a flat region in the landscape where all the neighbor states of the current state have the same value.
Shoulder: It is a plateau region that has an uphill edge.
Hill Climbing Algorithm
• Simple hill climbing is the simplest way to implement a hill
climbing algorithm.
• It evaluates one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state.
• It checks only one successor state at a time; if that state is better than the current state, it moves to it, otherwise it stays in the same state.
Hill Climbing Algorithm
Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
Step 2: Loop Until a solution is found or there is no new operator
left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
a. If it is goal state, then return success and quit.
Hill Climbing Algorithm
Algorithm for Simple Hill Climbing:
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
Step 5: Exit.
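A minimal Python sketch of simple hill climbing over the steps above, maximizing a hypothetical objective function on integer states; the operators are simply +1 and -1, and the first neighbor that improves the objective becomes the current state.

def simple_hill_climbing(objective, start, max_steps=1000):
    # Move to the first neighbor that is better than the current state.
    current = start
    for _ in range(max_steps):               # Step 2: loop until nothing improves
        moved = False
        for op in (+1, -1):                  # Step 3: apply an operator to the current state
            candidate = current + op
            if objective(candidate) > objective(current):
                current = candidate          # Step 4b: a better neighbor becomes current
                moved = True
                break
        if not moved:
            return current                   # no better neighbor: a local (or global) maximum
    return current

# Hypothetical objective with a single peak at x = 3
objective = lambda x: -(x - 3) ** 2
print(simple_hill_climbing(objective, start=-5))   # 3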
Example of Hill Climbing Algorithm
Means-Ends Analysis
How Means-Ends Analysis works:
• The means-ends analysis process can be applied recursively for a
problem. It is a strategy to control search in problem-solving.
• Following are the main steps that describe the working of MEA
technique for solving a problem.
– First, evaluate the difference between Initial State and final
State.
– Select the various operators which can be applied for each
difference.
– Apply the operator at each difference, which reduces the
difference between the current state and goal state.
Means-Ends Analysis
Operator Subgoaling
• In the MEA process, we detect the differences between the
current state and the goal state. Once these differences occur,
then we can apply an operator to reduce the differences.
• But sometimes it is possible that an operator cannot be applied
to the current state.
• So we create a subproblem of the current state in which the operator can be applied. Such backward chaining, in which operators are selected and then sub-goals are set up to establish the preconditions of the operator, is called Operator Subgoaling.
Algorithm for Means-Ends Analysis
Let's take the Current state as CURRENT and Goal State as GOAL, then
following are the steps for the MEA algorithm.
Step 1: Compare CURRENT to GOAL, if there are no differences
between both then return Success and Exit.
Step 2: Else, select the most significant difference and reduce it by
doing the following steps until the success or failure occurs.
• Select a new operator O which is applicable for the current
difference, and if there is no such operator, then signal failure.
• Attempt to apply operator O to CURRENT. Make a description of two states:
i) O-Start, a state in which O's preconditions are satisfied.
ii) O-Result, the state that would result if O were applied in O-Start.
Algorithm for Means-Ends Analysis
• If
FIRST-PART <------ MEA(CURRENT, O-START)
and
LAST-PART <------ MEA(O-RESULT, GOAL)
are both successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-PART.
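A minimal Python sketch of the MEA recursion under strong simplifying assumptions: a state is a set of facts, each operator has preconditions, an add list and a delete list, and the "most significant difference" is simply the first missing goal fact. All operator names and facts below are hypothetical.

def mea(current, goal, operators, depth=10):
    # Return a list of operator names that transform `current` into a superset of `goal`.
    if goal <= current:
        return []                                    # Step 1: no differences, success
    if depth == 0:
        return None                                  # give up: signal failure
    difference = next(iter(goal - current))          # pick one outstanding difference
    for op in operators:
        if difference in op["adds"]:                 # operator relevant to this difference
            # FIRST-PART: reach a state where O's preconditions hold (O-Start)
            first = mea(current, op["pre"], operators, depth - 1)
            if first is None:
                continue
            o_start = apply_plan(current, first, operators)
            o_result = (o_start - op["dels"]) | op["adds"]   # O-Result
            # LAST-PART: go from O-Result to the goal
            last = mea(o_result, goal, operators, depth - 1)
            if last is not None:
                return first + [op["name"]] + last
    return None                                      # no applicable operator: failure

def apply_plan(state, plan, operators):
    # Replay a plan of operator names on a state made of facts.
    by_name = {op["name"]: op for op in operators}
    for name in plan:
        op = by_name[name]
        state = (state - op["dels"]) | op["adds"]
    return state

# Hypothetical operators: a door must be sanded before it can be painted
operators = [
    {"name": "sand", "pre": set(), "adds": {"sanded"}, "dels": set()},
    {"name": "paint", "pre": {"sanded"}, "adds": {"painted"}, "dels": set()},
]
print(mea(set(), {"painted"}, operators))            # ['sand', 'paint']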
• In Means-Ends Analysis you try to reduce the difference between the initial state and the goal state by creating sub-goals until a sub-goal can be reached directly (you probably know several examples of recursion that work on this basis).
Example of Means-Ends Analysis
Solution: To solve the problem, we will first find the differences between the initial state and the goal state, and for each difference we will generate a new state and apply the operators. The operators we have for this problem are:
• Move
• Delete
• Expand
1. Evaluating the initial state: In the first step, we will evaluate the
initial state and will compare the initial and Goal state to find the
differences between both states.
Example of Means-Ends Analysis
2. Applying the Delete operator: The first difference is that the goal state does not contain the dot symbol which is present in the initial state, so we first apply the Delete operator to remove this dot.
Example of Means-Ends Analysis
THANK YOU