
Chapter 3 Artificial Intelligence notes

The document discusses various search techniques used in problem-solving, particularly in artificial intelligence. It categorizes search strategies into uninformed (blind) and informed (heuristic) searches, detailing specific algorithms like Breadth-First Search, Depth-First Search, and A* Search, along with their properties, advantages, and disadvantages. Additionally, it emphasizes the importance of search terminology such as problem space, depth, and admissibility in evaluating the effectiveness of these search strategies.

Uploaded by

prabinchaudha07
Copyright
© All Rights Reserved

3. Search Techniques
Searching
 Searching is the process of finding the required states or nodes.
 Searching is to be performed through the state space.
 Search process is carried out by constructing a search tree.
 Search is a universal problem-solving technique.
 Search involves systematic trial and error exploration of alternative solutions.
 Many problems don’t have a simple algorithmic solution. Casting these problems as search
problems is often the easiest way of solving them.
Example: Route Planning
 State: being in any one city
• Initial State: being in Arad
• Goal State: being in Bucharest
• Operators: actions of driving from city X to city Y along the connecting roads, e.g., driving from Arad to Sibiu
States vs. Nodes
 A state is a (representation of) a physical configuration
 Nodes in the search tree are data structures maintained by a search procedure representing
paths to a particular state
 The same state can appear in several nodes if there is more than one path to that state
 The path can be reconstructed by following edges back to the root of the tree
Search Terminology
 Problem Space: It is the environment in which the search takes place. (A set of states and set
of operators to change those states)
 Problem Instance: It is Initial state + Goal state
 Problem Space Graph: It represents the problem states. States are shown by nodes and operators
are shown by edges.
 Depth of a problem: Length of the shortest path, or shortest sequence of operators, from the
initial state to a goal state.
 Space Complexity: The maximum number of nodes that are stored in memory.
 Time Complexity: The maximum number of nodes that are created.
 Admissibility: The property of an algorithm to always find an optimal solution.
 Branching Factor: The average number of child nodes in the problem space graph.
Search strategies
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
- Completeness: is it guaranteed to find a solution if one exists?
- Optimality: does it always find the highest quality (least-cost) solution?
- Time complexity: How long does it take to find a solution?
- Space complexity: How much memory does it need to perform the search?
 Time and space complexity are measured in terms of
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be ∞)
The searching process in AI is broadly classified into two categories:
- Uninformed search or Blind search / Brute-force search
 Breadth-First Search
 Depth-First Search
 Depth-Limited Search
 Iterative Deepening Depth-First Search
 Bidirectional Search
- Informed search or Heuristic search
 Hill climbing Search
 Best first Search
 Greedy Search
 A* Search
 Simulated Annealing
Uninformed search strategies
Search algorithms that do not use any extra information about the problem are called blind,
brute-force, or uninformed searches. They can only expand the current state to obtain a new set of
states and distinguish a goal state from a non-goal state. These searches typically visit all of the
nodes of a tree in a fixed order, looking for a pre-specified goal, and are generally less
effective than informed searches.
The different types of uninformed search strategies are:
A. Breadth-First Search (BFS)
 Proceeds level by level down the search tree.
 Starting from the root node (initial state), it explores all children of the root node, left to right.
 If no solution is found, it expands the first (leftmost) child of the root node, then the second
node at depth 1, and so on.
 Algorithm
- Place the start node in the queue
- Examine the node at the front of
the queue
a. If the queue is empty, stop
b. If the node is the goal, stop
c. Otherwise, add the children of the node to the end of the queue
 Properties
 Completeness: Complete if the branching factor is finite and the goal is at a finite depth
 Optimality: Guaranteed to find the shallowest goal (the shortest path when all step costs are equal)
 Time Complexity: O(b^(d+1))
 Space Complexity: O(b^(d+1))
 BFS example (Find path from A to D)
Put the start node in the queue and examine the first element of the queue: if it is the goal,
stop; otherwise put its children in the queue.
 Advantages
- Guaranteed to find a solution (if one exists);
- Depending on the problem, can be guaranteed to find an optimal solution.
 Disadvantages
- More complex to implement;
- Needs a lot of memory for storing the state space if the search space has a high
branching factor.
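The queue-based procedure above can be sketched in Python. The graph below is a hypothetical example (the figure from the notes is not reproduced), and `bfs` stores whole paths in the queue so the route can be returned directly:

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search: explore the tree level by level with a FIFO queue.

    Returns a path from start to goal, or None if no path exists."""
    queue = deque([[start]])          # the queue holds whole paths, not bare nodes
    visited = {start}
    while queue:
        path = queue.popleft()        # examine the node at the front of the queue
        node = path[-1]
        if node == goal:
            return path               # goal found: fewest-edges path
        for child in successors(node):
            if child not in visited:  # skip states already reached
                visited.add(child)
                queue.append(path + [child])
    return None                       # queue exhausted: no solution

# Usage: find a path from A to D in a small hypothetical graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", "D", lambda n: graph[n]))  # ['A', 'B', 'D']
```

Keeping one path per frontier node is exactly what gives BFS its O(b^(d+1)) memory appetite on high-branching-factor problems.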
B. Depth First Search (DFS)
 DFS proceeds down a single branch of the tree at a time.
 It expands the root node, then the leftmost child of the root node.
 It always expands a node at the deepest level of the tree.
 Only when the search hits a dead end (a partial solution which can't be extended) does it
backtrack (retracing its steps) and expand nodes at higher levels.
 Algorithm
- Put the start node on the stack
- While the stack is not empty
a. Pop the stack
b. If the popped node is the goal, stop
c. Otherwise, push the nodes connected to the popped node onto the stack
(provided they are not already on the stack)
 Properties
 Completeness: Incomplete, as it may get stuck going down an infinite branch that
doesn't lead to a solution.
 Optimality: The first solution found by DFS may not be the shortest.
 Space Complexity: O(bd), with b the branching factor and d the maximum tree depth
(only the current path plus siblings are stored).
 Time Complexity: O(b^d)
 DFS example (find path from A to E)

 Advantages
- Simple to implement;
- Needs relatively small memory for storing the state-space.
 Disadvantages
- Can sometimes fail to find a solution;
- Can take a lot longer to find a solution.
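The stack-based procedure above can be sketched in Python; the graph is a hypothetical stand-in for the omitted figure:

```python
def dfs(start, goal, successors):
    """Depth-first search with an explicit stack; returns a path or None."""
    stack = [[start]]                 # stack of paths; the top is expanded first
    visited = set()
    while stack:
        path = stack.pop()            # pop the deepest unexplored path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is expanded first
        for child in reversed(successors(node)):
            if child not in visited:
                stack.append(path + [child])
    return None                       # dead end everywhere: no solution

# Usage: find a path from A to E in a small hypothetical graph.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
print(dfs("A", "E", lambda n: graph[n]))  # ['A', 'B', 'E']
```

Note how the search dives to D (a dead end) before backtracking to E, matching the behavior described above.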
C. Depth-Limited Search
 Perform depth first search but only to a pre-specified depth limit L.
 No node on a path that is more than L steps from the initial state.
 Advantage: Cut off level is introduced in DFS Technique.
 Disadvantage: No guarantee to find the optimal solution.
 Properties of depth-limited search
- Completeness: Incomplete, as the solution may lie beyond the specified depth limit.
- Optimality: Not optimal.
- Space Complexity: O(bL), with b the branching factor and L the depth limit
- Time Complexity: O(b^L)
D. Iterative deepening depth-first search
 Take the idea of depth limited search one step further.
 Starting at depth limit L = 0, we iteratively increase the depth limit, performing a depth
limited search for each depth limit.
 Stop when a solution is found, or when the depth-limited search fails without having cut off
any nodes because of the depth limit (the entire tree has been searched).
 Iterative deepening search uses only linear space and not much more time than the other
uninformed algorithms.
 The search finds the solution once the depth limit reaches the depth of the shallowest goal.
 An example of Iterative deepening DFS (depth level 3)

First iteration: search at limit L = 0

Second iteration: search at limit L = 1

Third iteration: search at limit L = 2
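The iterations above can be sketched as a recursive depth-limited search wrapped in a loop over increasing limits; the graph below is a hypothetical stand-in for the omitted figure:

```python
def depth_limited(node, goal, successors, limit):
    """Depth-limited DFS: never recurse more than `limit` steps below `node`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                   # cut off: the depth limit is exhausted
    for child in successors(node):
        found = depth_limited(child, goal, successors, limit - 1)
        if found is not None:
            return [node] + found
    return None

def iddfs(start, goal, successors, max_depth=10):
    """Iterative deepening: run depth-limited search with L = 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result             # found at the shallowest possible limit
    return None

# Usage on a small hypothetical tree (no cycle checking, as in blind DFS):
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
print(iddfs("A", "E", lambda n: graph[n]))  # ['A', 'B', 'E']
```

Because the shallow levels are re-searched on every iteration, the time overhead is modest (the deepest level dominates), while the space stays linear in the depth.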


E. Bidirectional Search
It searches forward from initial state and backward from goal state till both meet to identify a
common state.
The path from initial state is concatenated with the inverse path from the goal state. Each search is
done only up to half of the total path.
Informed Search (Heuristic Search)
Informed searches use problem-specific knowledge beyond the problem definition. They rely on heuristic
methods that improve the efficiency of the search process; the idea is to develop a domain-specific
heuristic function h(n), where h(n) estimates the cost of getting to the goal from node n.
 Heuristic Technique: A heuristic technique, simply called heuristic, is any approach to problem
solving, learning or discovery that employs a practical method not guaranteed to be optimal or
perfect, but sufficient for immediate goals.
 Genetic Algorithm: In the field of AI, a genetic algorithm is a search heuristic that mimics the
process of natural selection. The heuristic (also called a metaheuristic) is routinely used to generate
useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of
evolutionary algorithms, which generate solutions to optimization problems using techniques inspired
by natural evolution such as inheritance, mutation, selection and crossover.
The different types of informed search strategies are:
A. Best-First Search
Best-First search is a graph-based heuristic search algorithm. The graph is the search space that
can be represented as a series of nodes connected by paths. The name “best-first” refers to the
method of exploring the node with the best “score” first. An evaluation function is used to assign
a score to each candidate node. The evaluation function must represent some estimate of the cost
of the path from the current state to the closest goal state.
Judea Pearl described best-first search as estimating the promise of node n by a “heuristic
evaluation function f(n) which, in general, may depend on the description of n, the description of
the goal, the information gathered by the search up to that point, and most important, on any extra
knowledge about the problem domain.”
Algorithm
1. Put the initial node on a list START.
2. If START is empty or START = GOAL, terminate the search.
3. Remove the first node from START and call it A.
4. If A = GOAL, terminate the search with success.
5. Otherwise, generate all of A's successors and estimate how far each is from the
GOAL node.
6. Sort all the nodes generated so far by remaining distance from the goal; call this list
START-1 and replace START with START-1.
7. Go to step 2.
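The numbered steps above amount to keeping all generated nodes ordered by their evaluation score. A minimal Python sketch, using a heap instead of re-sorting a list (the graph and heuristic scores are hypothetical):

```python
import heapq

def best_first(start, goal, successors, f):
    """Best-first search: always expand the open node with the lowest f(n)."""
    frontier = [(f(start), start, [start])]       # heap ordered by the score f
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # step 3: take the best node
        if node == goal:
            return path                           # step 4: success
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):            # step 5: generate successors
            if child not in visited:
                # step 6: the heap keeps every generated node ordered by f
                heapq.heappush(frontier, (f(child), child, path + [child]))
    return None                                   # step 2: the list is empty

# Usage with a hypothetical graph and heuristic scores:
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(best_first("S", "G", lambda n: graph[n], lambda n: h[n]))  # ['S', 'A', 'G']
```

Passing a different evaluation function `f` turns this same loop into greedy best-first search (f = h) or A* (f = g + h).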

Applications
Best-first search and its more advanced variants have been used in such applications as games and
web crawlers.
 In a web crawler, each web page is treated as a node, and all the hyperlinks on the page
are treated as unvisited successor nodes. A crawler that uses best-first search generally
uses an evaluation function that assigns priority to links based on how closely the contents
of their parent page resemble the search query.
 In games, best-first search may be used as a path-finding algorithm for game characters.
For example, it could be used by an enemy agent to find the location of the player in the
game world.
 Types
- Greedy Best First Search
- A* search
B. Greedy Best First Search
 It tries to get as close as it can to the goal.
 It expands the node that appears to be closest to the goal
 It evaluates the node by using heuristic function only.
 Evaluation function: f(n) = h(n)
- h(n) = estimated cost of the cheapest path from n to the goal
- h(n) = 0 for a goal state
 Properties
 Completeness: No – it can get stuck in loops
 Time Complexity: O(b^m), but a good heuristic can give dramatic improvement
 Space Complexity: O(b^m), as it keeps all nodes in memory
 Optimality: No
 Advantages:
The advantage of a greedy algorithm is that solutions to smaller instances of the
problem can be straightforward and easy to understand.
 Disadvantages:
The disadvantage is that locally optimal short-term choices may lead to a poor
long-term outcome.
 Applications
This algorithm is used in Huffman encoding, minimum spanning tree, Dijkstra’s
algorithm etc.
 Given the following graph of cities and starting at Arad, the problem is to reach
Bucharest

Solution using greedy best first can be as below:


Step 1: Initial State

Step 2: After expanding A


Step 3: After expanding S

Step 4: After expanding F

C. A* Search
 It finds a minimal-cost path joining the start node and a goal node.
 Evaluation function: f(n) = g(n) + h(n)
Where,
g(n) = cost so far to reach n from the root
h(n) = estimated cost to the goal from n
Thus, f(n) estimates the lowest total cost of any solution path going through
node n.
 Avoid expanding paths that are already expensive
 The main drawback of A* algorithm and indeed of any best-first search is its memory
requirement.
 A* search example (Find path from A to B)
Here, evaluate the nodes connected to the source: compute f(n) = g(n) + h(n) for each node and
select the node with the lowest f(n) value.
Step 1:

Step 2:

Step 3:

Step 4:

Step 5:
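A minimal A* sketch in Python. The weighted graph and heuristic values below are hypothetical (the figure's actual numbers are not reproduced); the heuristic is chosen so that it never overestimates the remaining cost:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: expand the node with the lowest f(n) = g(n) + h(n).

    `neighbors(n)` yields (successor, step_cost) pairs; `h(n)` estimates the
    remaining cost and should never overestimate it (admissibility)."""
    frontier = [(h(start), 0, start, [start])]    # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # a cheaper way to reach nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Usage on a hypothetical weighted graph (not the one in the omitted figure):
edges = {"A": [("C", 1), ("D", 2)], "C": [("B", 6)], "D": [("B", 3)], "B": []}
h_est = {"A": 4, "C": 5, "D": 2, "B": 0}
print(a_star("A", "B", lambda n: edges[n], lambda n: h_est[n]))
# (['A', 'D', 'B'], 5)
```

Keeping `best_g` prevents re-expanding a node along a more expensive path, the "avoid expanding paths that are already expensive" idea above.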
 Admissible heuristics
- A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n),
where h*(n) is the true cost to reach the goal state from n.
- Example: hSLD(n) (never overestimates the actual road distance)
- An admissible heuristic never overestimates the cost to reach the goal, i.e., it
is optimistic
- Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
- Space Complexity: O(b^d) (all generated nodes are kept in memory)
- Time Complexity: O(b^d) (exponential in the worst case)
 Consistent heuristics
- A heuristic is consistent if for every node n, every successor n' of n generated
by any action a,
h(n) ≤ c(n ,a, n') + h(n')
- If h is consistent, we have
f (n') = g(n') + h(n')
= g(n) + c(n, a, n') + h(n')
≥ g(n) + h(n)
= f(n)
i.e., f(n) is non-decreasing along any path.
- Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
Iterative Improvement Algorithms
A. Hill Climbing Search
 Hill climbing is an extension of depth-first search which uses some knowledge such as
estimates of the distance of each node from the goal to improve the search.
 It is simply a loop that continually moves in the direction of increasing value—that is,
uphill. It terminates when it reaches a “peak” where no neighbor has a higher value.
 The algorithm does not maintain a search tree, so the data structure for the current node
need only record the state and the value of the objective function.
 Hill climbing does not look ahead beyond the immediate neighbors of the current state.
 Hill climbing is sometimes called greedy local search because it grabs a good neighbor
state without thinking ahead about where to go next.
 Algorithm
- Push the root node onto the stack
- Pop the stack and examine the node
 If the popped node is the goal, stop
 If the stack is empty, stop; there is no path
 Otherwise, sort the children of the current node by their heuristic
estimates and push them onto the stack (best child on top)
 Drawbacks
- Local maxima: A local maximum is a peak
that is higher than each of its neighboring
states but lower than the global maximum.
Hill-climbing algorithms that reach the
vicinity of a local maximum will be drawn
upward toward the peak but will then be stuck
with nowhere else to go.
- Plateaus: A plateau is an area of the search space where the evaluation
function is flat, so the search degenerates into a random walk.
- Ridges: Where there are steep slopes and the search direction is not towards the top
but towards the side.
 Remedies
- Backtracking for local maxima: Backtracking helps undo what has been done
so far and permits trying a totally different path to attain the global peak.
- Big jump: A big jump is the way to escape from a plateau, because all
neighboring points have the same value under the greedy approach.
- Random restart: Keep restarting the search from random locations until a goal is
found. This is the remedy for ridges.
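The basic uphill loop can be sketched independently of the stack formulation above. This is a minimal sketch on a toy one-dimensional objective (the objective and neighborhood are assumptions, not from the notes):

```python
def hill_climb(start, neighbors, value):
    """Greedy local search: move to the best neighbor while it improves the
    objective; stop at a peak, which may only be a local maximum."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current            # no uphill neighbor: a peak is reached
        current = best

# Usage: maximize f(x) = -(x - 3)^2 over integers, stepping by +/- 1 (a toy
# objective chosen for illustration; it has a single peak at x = 3).
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, value))  # 3
```

Note that only the current state and its objective value are kept, matching the point above that no search tree is maintained.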
B. Simulated Annealing
 Simulated annealing escapes local maxima by allowing some "bad" moves but
gradually decreasing their frequency.
 Instead of restarting from a random point, we allow the search to take some downhill
steps to try to escape local maxima.
 Algorithm:
- A random pick is made for the move
- If it improves the situation, it is accepted straight away
- If it worsens the situation, it is accepted with some probability less than 1. i.e.
for bad moves the probability is low and for comparatively less bad moves, it
is higher.
 Probability of downward steps is controlled by temperature parameter.
 Applications of Simulated Annealing Search
- VLSI layout problem
- Factory scheduling
- Travelling salesman problem
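A minimal simulated-annealing sketch. The acceptance rule follows the algorithm above: uphill moves are always taken, downhill moves are taken with probability exp(Δ/T), and the temperature T is lowered gradually. The objective, neighborhood, and cooling schedule are all toy assumptions:

```python
import math
import random

def simulated_annealing(start, neighbor, value, t0=10.0, cooling=0.95, steps=1000):
    """Hill climbing that sometimes accepts bad moves.

    A downhill move of size delta is accepted with probability exp(delta / T),
    so bad moves become rarer as the temperature T falls."""
    current, t = start, t0
    for _ in range(steps):
        candidate = neighbor(current)             # a random pick for the move
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                   # good move, or an accepted bad one
        t = max(t * cooling, 1e-9)                # gradually lower the temperature
    return current

# Usage on a toy objective with a single peak at x = 20 (an assumption):
random.seed(0)
value = lambda x: -abs(x - 20)
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, neighbor, value))
```

Early on (large T), even strongly worsening moves are often accepted; near the end (small T), the loop behaves like ordinary hill climbing.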
Adversarial Search
 Competitive environments, in which the agents' goals are in conflict, give rise to adversarial
(oppositional) search problems, often known as games.
 In AI, "games" usually means deterministic, fully observable environments in which two
agents act alternately and in which the utility values at the end of the game are always equal
and opposite.
E.g., if the first player wins, the other player necessarily loses.
A. Minimax Algorithm
 It is a recursive algorithm for choosing the next move in an n-player game, usually a
two-player game
 A value is associated with each position or state of the game
 The value is computed by means of a position evaluation function and it indicates how
good it would be for a player to reach the position.
 One player, the maximizing player (MAX), makes the move that maximizes the minimum
value of the resulting position over the opponent's possible replies; the other player, the
minimizing player (MIN), chooses the move that minimizes the maximum value of the position.
 Properties
- Complete? Yes (if tree is finite)
- Optimal? Yes (against an optimal opponent)
- Time complexity? O(b^m)
- Space complexity? O(bm) (depth-first exploration)
 MiniMax Game Search
a. It is a Depth-first search with limited depth.
b. Assume the opponent will make the best move possible.
c. Algorithm
minimax(player, board)
    if game over in current board position
        return winner
    if it is MAX's turn
        return the maximal score of calling minimax on each successor position
    else (MIN's turn)
        return the minimal score of calling minimax on each successor position
 Example
We first consider games with two players, MAX and MIN. MAX moves first, and then they
take turns moving until the game is over. Each level of the tree alternates: MAX is trying to
maximize her score, and MIN is trying to minimize MAX's score in order to undermine her
success. At the end of the game, points are awarded to the winning player and penalties are
given to the loser.
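The recursive definition above can be sketched directly; the two-ply tree and leaf payoffs below are hypothetical stand-ins for the omitted figure:

```python
def minimax(state, maximizing, children, score):
    """Evaluate a game tree recursively: MAX takes the highest child value,
    MIN takes the lowest; leaves are scored by `score`."""
    kids = children(state)
    if not kids:                      # game over in the current position
        return score(state)
    values = [minimax(k, not maximizing, children, score) for k in kids]
    return max(values) if maximizing else min(values)

# Usage: MAX to move at the root, MIN replies; leaves hold payoffs for MAX.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"],
        "L1": [], "L2": [], "R1": [], "R2": []}
payoffs = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
print(minimax("root", True, lambda s: tree[s], lambda s: payoffs[s]))  # 3
```

MIN would answer the move to L with L1 (value 3) and the move to R with R1 (value 2), so MAX chooses L and the backed-up value of the root is 3.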
B. Alpha-Beta Pruning (Clipping)
 The Minimax algorithm explores some parts of the tree that it does not have to.
 The exact implementation of alpha-beta keeps track of the best move for each side as it
moves throughout the tree.
 Alpha–beta pruning gets its name from the following two parameters that describe
bounds on the backed up values that appear anywhere along the path:
- α = the value of the best (i.e., highest-value) choice we have found so far at any
choice point along the path for MAX.
- β = the value of the best (i.e., lowest-value) choice we have found so far at any
choice point along the path for MIN.
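A minimal alpha-beta sketch on a hypothetical two-ply tree. The instrumented `score` function records which leaves are actually evaluated, making the pruning visible:

```python
def alphabeta(state, maximizing, children, score,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon as
    alpha (best found so far for MAX) meets or exceeds beta (best for MIN)."""
    kids = children(state)
    if not kids:
        return score(state)           # leaf: game over at this position
    if maximizing:
        value = float("-inf")
        for k in kids:
            value = max(value, alphabeta(k, False, children, score, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # beta cutoff: MIN will never allow this line
        return value
    value = float("inf")
    for k in kids:
        value = min(value, alphabeta(k, True, children, score, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                     # alpha cutoff: MAX already has a better option
    return value

# Usage: MAX at the root, MIN at L and R; leaves hold payoffs for MAX.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"],
        "L1": [], "L2": [], "R1": [], "R2": []}
payoffs = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
examined = []
def score(leaf):
    examined.append(leaf)             # record every leaf actually evaluated
    return payoffs[leaf]

print(alphabeta("root", True, lambda s: tree[s], score))  # 3
print(examined)  # ['L1', 'L2', 'R1'] -- R2 is pruned, never examined
```

After the L subtree gives MAX a guarantee of 3 (alpha = 3), seeing R1 = 2 at the R node drives beta down to 2, so alpha ≥ beta and R2 is cut off. The result is the same as plain minimax.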

Q. Consider the following game tree (drawn from the point of view of the Maximizing player)

a) Use the mini-max procedure and show what moves should be chosen by the
two players.
b) Use the alpha-beta pruning procedure and show what nodes would not need to be
examined.
Solution:
a.

b. The pruned branches need not be examined:
