
Unit 2

Problem Solving
By,

Mrs. Madhuri A. Giri


Search Algorithms in Artificial Intelligence:
Terminologies, Properties of Search Algorithms, Types
of search algorithms: uninformed search and informed
search, State Space Search
Heuristic Search Techniques: Generate-and-Test; Hill
Climbing; Properties of A* algorithm; Best-first Search;
Problem Reduction.
Constraint Satisfaction Problems: Inference in CSPs;
Backtracking search for CSPs; Local search for CSPs;
Structure of CSP problems.
Beyond Classical Search: Local search algorithms and
optimization problems, local search in continuous
spaces, searching with nondeterministic actions and
partial observation, online search agents and unknown
environments.
Search Algorithms in AI
• In Artificial Intelligence, search techniques are universal
problem-solving methods. Rational agents, or problem-solving
agents, in AI mostly use these search strategies or algorithms
to solve a specific problem and provide the best result.
Problem-solving agents are goal-based agents that use an
atomic representation of states. In this topic, we will learn
various problem-solving search algorithms.
Search Algorithm Terminologies:
• Search: Searching is a step-by-step procedure for solving a
search problem in a given search space. A search problem can
have three main factors:
– Search Space: The set of possible solutions which a system may have.
– Start State: The state from which the agent begins the search.
– Goal Test: A function that observes the current state and returns
whether the goal state has been achieved or not.

• Search tree: A tree representation of the search problem is called a
search tree. The root of the search tree is the root node, which
corresponds to the initial state.
• Actions: A description of all the actions available
to the agent.
• Transition model: A description of what each action
does, represented as a transition model.
• Path Cost: A function which assigns a numeric cost
to each path.
• Solution: An action sequence which leads from the
start node to the goal node.
• Optimal Solution: A solution that has the lowest cost
among all solutions.
Properties of Search Algorithms:
• Following are the four essential properties of search algorithms,
used to compare the efficiency of these algorithms:
• Completeness: A search algorithm is said to be complete if it guarantees
to return a solution whenever at least one solution exists for any random input.
• Optimality: If the solution found by an algorithm is guaranteed to be the
best solution (lowest path cost) among all other solutions, then such a
solution is said to be an optimal solution.
• Time Complexity: Time complexity is a measure of the time an algorithm
takes to complete its task.
• Space Complexity: It is the maximum storage space required at any point
during the search.


Types of search algorithms:
Uninformed/Blind Search:
• Uninformed search does not use any domain
knowledge, such as the closeness or location
of the goal.
• It operates in a brute-force way, as it only includes
information about how to traverse the tree and
how to identify leaf and goal nodes.
• Uninformed search explores the search tree without any
information about the search space beyond the initial state,
the operators, and the goal test, so it is also called
blind search.
• It examines each node of the tree until it reaches
the goal node.
It can be divided into six main types:

• Breadth-first search
• Uniform cost search
• Depth-first search
• Depth limited search
• Iterative deepening depth-first search
• Bidirectional Search
Informed Search
• Informed search algorithms use domain knowledge. In an
informed search, problem information is available which can
guide the search. Informed search strategies can find a
solution more efficiently than an uninformed search strategy.
Informed search is also called a Heuristic search.
• A heuristic is a technique which is not always guaranteed to find
the best solution but is guaranteed to find a good solution in
reasonable time.
• Informed search can solve much more complex problems which could
not be solved in any other way.
• An example of a problem tackled with informed search algorithms is
the travelling salesman problem.
– Greedy Search
– A* Search
1. Breadth-first Search
• Breadth-first search is the most common search
strategy for traversing a tree or graph. This algorithm
searches breadthwise in a tree or graph, so it is called
breadth-first search.
• The BFS algorithm starts searching from the root node of
the tree and expands all successor nodes at the current
level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a
general graph-search algorithm.
• Breadth-first search is implemented using a FIFO queue
data structure.
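The following is a minimal Python sketch of BFS over a graph stored as a
dictionary of adjacency lists; the names graph, start and goal are
illustrative assumptions, not part of the slides.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Minimal BFS sketch: 'graph' maps each node to a list of successors."""
    frontier = deque([start])       # FIFO queue of nodes waiting to be expanded
    parents = {start: None}         # parent pointers; also serves as the explored set
    while frontier:
        node = frontier.popleft()   # expand the shallowest unexpanded node first
        if node == goal:
            path = []               # reconstruct the path by following parent pointers
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for child in graph.get(node, []):
            if child not in parents:
                parents[child] = node
                frontier.append(child)
    return None                     # the goal is not reachable from the start node

# Illustrative use on a tiny made-up graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search(graph, "A", "D"))   # ['A', 'B', 'D']
```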
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given
problem, then BFS will provide the minimal solution,
i.e. the one that requires the least number of steps.
• It also helps in finding the shortest path to the goal
state, since it expands all nodes at the same
hierarchical level before moving to nodes at
deeper levels.
• It is also easy to comprehend, and its level-by-level
expansion makes it straightforward to compare and rank
alternative paths.
Disadvantages
• It requires a lot of memory, since each level of
the tree must be saved into memory in order to
expand the next level.
• BFS needs a lot of time if the solution is far
away from the root node.
• It can be a very inefficient approach for
searching deeply layered spaces, as it
needs to thoroughly explore all nodes at each
level before moving on to the next.
• Time Complexity: The time complexity of the BFS algorithm
is given by the number of nodes traversed by BFS up to the
shallowest goal node:
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
where d is the depth of the shallowest solution and b is the
branching factor (the number of successors of each node).
• Space Complexity: The space complexity of the BFS algorithm
is given by the memory size of the frontier, which is O(b^d).
• Completeness: BFS is complete, which means that if the
shallowest goal node is at some finite depth, then BFS
will find a solution.
• Optimality: BFS is optimal if the path cost is a
non-decreasing function of the depth of the node.
2. Depth-first Search

• Depth-first search is a recursive algorithm for
traversing a tree or graph data structure.
• It is called depth-first search because it starts
from the root node and follows each path to its
deepest node before moving to the next
path.
• DFS uses a stack data structure for its
implementation.
• The process of the DFS algorithm is similar to the
BFS algorithm.
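Below is a minimal recursive DFS sketch in Python, using the same
illustrative dictionary-of-lists graph representation as the BFS example;
the parameter names are assumptions made for illustration.

```python
def depth_first_search(graph, node, goal, visited=None):
    """Recursive DFS sketch: follows one path as deep as possible before backtracking."""
    if visited is None:
        visited = set()
    visited.add(node)                  # mark the node so cycles are not revisited
    if node == goal:
        return [node]
    for child in graph.get(node, []):
        if child not in visited:
            path = depth_first_search(graph, child, goal, visited)
            if path is not None:
                return [node] + path   # prepend the current node to the found path
    return None                        # the goal is not reachable along this branch

# Illustrative use on a tiny made-up graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_first_search(graph, "A", "D"))   # ['A', 'B', 'D']
```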
Advantage
• DFS requires much less memory, as it only
needs to store a stack of the nodes on the
path from the root node to the current node.
• It takes less time to reach the goal node
than the BFS algorithm (if it traverses along the
right path).
• Only the route currently being followed needs to be
kept in memory, which saves space, since DFS stores
a single path at any particular time.
Disadvantage
• There is a possibility that many states keep
re-occurring, and there is no guarantee of
finding the solution.
• The DFS algorithm goes deep down in its search,
and sometimes it may run into an infinite loop.
• The depth-first search (DFS) algorithm does
not always find the shortest path to a solution.
• Completeness: The DFS search algorithm is complete within a
finite state space, as it will expand every node within a
limited search tree.
• Time Complexity: The time complexity of DFS is equivalent
to the number of nodes traversed by the algorithm. It is given by:
T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
where m is the maximum depth of any node, and this can
be much larger than d (the depth of the shallowest solution).
• Space Complexity: The DFS algorithm needs to store only a single
path from the root node, hence the space complexity of DFS is
equivalent to the size of the fringe set, which is O(bm).
• Optimality: The DFS search algorithm is non-optimal, as it may
take a large number of steps or incur a high cost to reach
the goal node.
3. Depth-Limited Search Algorithm
• A depth-limited search algorithm is similar to
depth-first search with a predetermined depth limit ℓ.
Depth-limited search overcomes the drawback of
infinite paths in depth-first search. In this
algorithm, a node at the depth limit is treated as
if it has no further successor nodes.
• Depth-limited search can terminate with
two conditions of failure:
– Standard failure value: It indicates that the problem does
not have any solution.
– Cutoff failure value: It indicates that there is no solution for
the problem within the given depth limit.
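A minimal recursive sketch of depth-limited search in Python, returning
either a path, the string "cutoff" when the limit was reached, or None for
standard failure; the graph format and names follow the earlier
illustrative examples.

```python
def depth_limited_search(graph, node, goal, limit):
    """DLS sketch: DFS that treats nodes at depth 'limit' as having no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                    # cutoff failure: the depth limit stopped the search
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    # standard failure (None) only if no branch was cut off
    return "cutoff" if cutoff_occurred else None
```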
Advantages
• Depth-limited search restricts the search depth of the
tree, so the algorithm requires fewer memory
resources than plain BFS (Breadth-First Search) and
IDDFS (Iterative Deepening Depth-First Search). Because of
the depth restriction, DLS avoids having to hold the entire
search tree in memory, which makes it a more
memory-efficient choice for certain kinds of problems.
• When a node lies at the maximum allowed depth, its children
are not generated, and the node is simply discarded from the
stack.
• Depth-limited search avoids the infinite paths that can
arise in classical depth-first search when the graph contains
cycles.
Disadvantages
• Depth-limited search also has the disadvantage
of incompleteness.
• It may not be optimal if the problem has more
than one solution.
• The effectiveness of the Depth-Limited Search
(DLS) algorithm is largely dependent on the
depth limit specified. If the depth limit is set
too low, the algorithm may fail to find the
solution altogether.
• Completeness: The DLS search algorithm is complete
if the shallowest solution lies within the depth limit.
• Time Complexity: The time complexity of the DLS
algorithm is O(b^ℓ), where b is the branching factor
of the search tree and ℓ is the depth limit.
• Space Complexity: The space complexity of the DLS
algorithm is O(b×ℓ), where b is the branching
factor of the search tree and ℓ is the depth limit.
• Optimality: Depth-limited search can be viewed as a
special case of DFS, and it is also not optimal, even
when ℓ > d.
4. Uniform-cost Search Algorithm
• Uniform-cost search is a searching algorithm used for
traversing a weighted tree or graph. This algorithm
comes into play when a different cost is available for
each edge.
• The primary goal of the uniform-cost search is to find a
path to the goal node which has the lowest cumulative
cost.
• Uniform-cost search expands nodes according to their
path costs from the root node. It can be used to solve
any graph/tree where the optimal cost is in demand.
• A uniform-cost search algorithm is implemented with a
priority queue. It gives maximum priority to the
lowest cumulative cost. Uniform-cost search is
equivalent to the BFS algorithm if the path cost of all
edges is the same.
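A minimal UCS sketch using Python's heapq module as the priority queue;
here the assumed graph format differs from the earlier examples in that
graph[node] yields (successor, step_cost) pairs, an assumption made for
illustration.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS sketch: always expand the frontier node with the lowest cumulative path cost."""
    frontier = [(0, start, [start])]           # priority queue ordered by path cost g
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                  # first time the goal is popped, the cost is optimal
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost    # remember the cheapest known cost to 'child'
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None                                # the goal is unreachable
```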
Advantages
• Uniform-cost search is optimal because at every step
the path with the least cost is chosen.
• It is efficient when the edge weights are small, as it
explores the paths in an order that ensures that the
shortest path is found early.
• It is a fundamental search method that is not overly
complex, making it accessible for many users.
• It is a complete algorithm: it will find a
solution whenever a viable one exists.
Disadvantages
• It does not care about the number of steps involved in the
search and is only concerned with path cost. Because of this,
the algorithm may get stuck in an infinite loop.
• UCS must know all the edge weights before it can
start the search.
• The search keeps the list of nodes it has already
discovered in a priority queue. This becomes expensive
for a large graph: the algorithm stores the path of every
queued node, which can be memory intensive as the graph
gets larger. Problems can also arise if the graph contains
cycles whose cost is smaller than that of the shortest path.
• Because the priority queue must store all the paths explored,
a very large graph can eventually result in too much memory
being used.
• Completeness:
Uniform-cost search is complete: if there is a
solution, UCS will find it.
• Time Complexity:
Let C* be the cost of the optimal solution and ε the smallest
step cost towards the goal node. Then the number of steps
along the optimal path is at most 1 + ⌊C*/ε⌋ (the +1 is needed
because we start from state 0 and end at C*/ε).
Hence, the worst-case time complexity of uniform-cost
search is O(b^(1 + ⌊C*/ε⌋)).
• Space Complexity:
By the same logic, the worst-case space complexity of
uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
• Optimality:
Uniform-cost search is always optimal, as it only selects a
path with the lowest path cost.
5. Iterative deepening depth-first Search
• The iterative deepening algorithm is a combination of
DFS and BFS algorithms. This search algorithm finds
out the best depth limit and does it by gradually
increasing the limit until a goal is found.
• This algorithm performs depth-first search up to a
certain "depth limit", and it keeps increasing the depth
limit after each iteration until the goal node is found.
• This Search algorithm combines the benefits of
Breadth-first search's fast search and depth-first
search's memory efficiency.
• The iterative deepening algorithm is a useful uninformed
search when the search space is large and the depth of the
goal node is unknown.
Here are the steps of the iterative deepening depth-first
search algorithm (a code sketch follows the list):
1. Set the depth limit to 0.
2. Perform DFS to the depth limit.
3. If the goal state is found, return it.
4. If the goal state is not found and the maximum
depth has not been reached, increment the depth
limit and repeat steps 2-4.
5. If the goal state is not found and the maximum
depth has been reached, terminate the search
and return failure.
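A self-contained Python sketch of the steps above; the nested dls helper
repeats the depth-limited search idea, and max_depth is an assumed safety
cap rather than something stated in the slides.

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """IDDFS sketch: run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""

    def dls(node, limit):
        # returns a path, "cutoff" if the limit stopped us, or None if this subtree is exhausted
        if node == goal:
            return [node]
        if limit == 0:
            return "cutoff"
        cutoff = False
        for child in graph.get(node, []):
            result = dls(child, limit - 1)
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return [node] + result
        return "cutoff" if cutoff else None

    for limit in range(max_depth + 1):
        result = dls(start, limit)
        if result != "cutoff":
            return result          # either a path or None (no solution exists at any depth)
    return None                    # gave up at max_depth
```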
Advantages:
• It combines the benefits of the BFS and DFS search algorithms in
terms of fast search and memory efficiency.
• It is straightforward to put into practice, since it builds upon
the conventional depth-first search algorithm.
• It is guaranteed to find the optimal solution, as long as the cost
of each edge in the search space is the same.
• It is a complete algorithm, meaning it will always find a
solution if one exists.
• The Iterative Deepening Depth-First Search (IDDFS)
algorithm uses less memory compared to Breadth-First
Search (BFS) because it only stores the current path in
memory, rather than the entire search tree.
Disadvantages:
• The main drawback of IDDFS is that it repeats all the work
of the previous phase.
Completeness:
• This algorithm is complete if the branching
factor is finite.
Time Complexity:
• If b is the branching factor and d is the depth of the
shallowest goal, the worst-case time complexity is O(b^d).
Space Complexity:
• The space complexity of IDDFS is O(bd), i.e. linear in the
depth for a given branching factor.
Optimality:
• The IDDFS algorithm is optimal if the path cost is a
non-decreasing function of the depth of the node.
6. Bidirectional Search Algorithm
• The bidirectional search algorithm runs two
simultaneous searches, one from the initial state,
called the forward search, and the other from the goal
node, called the backward search, to find the
goal node. Bidirectional search replaces one
single search graph with two small subgraphs,
in which one starts the search from the initial
vertex and the other starts from the goal vertex. The
search stops when these two graphs intersect
each other.
• Bidirectional search can use search techniques
such as BFS, DFS, DLS, etc.
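A sketch of bidirectional search built from two BFS frontiers; it assumes
an undirected graph (so successors and predecessors coincide), which is an
assumption made for illustration rather than something stated in the slides.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional BFS sketch on an undirected dict-of-lists graph."""
    if start == goal:
        return [start]
    parents_f = {start: None}                  # forward search parent pointers
    parents_b = {goal: None}                   # backward search parent pointers
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in parents:
                parents[child] = node
                if child in other_parents:     # the two searches have met
                    return child
                frontier.append(child)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            front, n = [], meet
            while n is not None:
                front.append(n)
                n = parents_f[n]
            back, n = [], parents_b[meet]
            while n is not None:
                back.append(n)
                n = parents_b[n]
            return list(reversed(front)) + back
    return None                                # the frontiers never met: no path exists
```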
Advantages
• Bidirectional search is fast.
• Bidirectional search requires less memory.
• It is particularly helpful when the graph is
very large in size and there is no way to make
it smaller.
• The cost of expanding nodes can be high in
certain cases. In such scenarios, this
approach can help reduce the number of
nodes that need to be expanded.
Disadvantages
• Implementation of the bidirectional search
tree is difficult.
• In bidirectional search, one should know the
goal state in advance.
• Finding an efficient way to check whether a match
exists between the search trees can be tricky,
which can increase the time it takes to
complete the task.
• Completeness: Bidirectional search is
complete if we use BFS in both searches.
• Time Complexity: The time complexity of
bidirectional search using BFS is O(b^(d/2)),
since each search only needs to go half way.
• Space Complexity: The space complexity of
bidirectional search is O(b^(d/2)).
• Optimality: Bidirectional search is optimal.

1. A* Search Algorithm
• A* is a powerful graph traversal and
pathfinding algorithm widely used in artificial
intelligence and computer science.
• It is mainly used to find the shortest path
between two nodes in a graph, given the
estimated cost of getting from the current
node to the destination node.
• The main advantage of the algorithm is its
ability to provide an optimal path by exploring
the graph in a more informed way compared
to traditional search algorithms such as
Dijkstra's algorithm.
• Algorithm A* combines the advantages of two
other search algorithms: Dijkstra's algorithm
and Greedy Best-First Search.
• Like Dijkstra's algorithm, A* ensures that the
path found is as short as possible but does so
more efficiently by directing its search
through a heuristic similar to Greedy Best-First
Search.
• A heuristic function, denoted h(n), estimates
the cost of getting from any given node n to
the destination node.
f(n) = g(n) + h(n)
where f(n) is the estimated cost of the cheapest solution through n,
g(n) is the cost to reach node n from the start state, and
h(n) is the estimated cost to reach the goal from node n.
A* Algorithm
Input: a search graph with costs on the arcs
Output: the minimal-cost path from the start node to a goal node.
1. Put the start node s on OPEN.
2. If OPEN is empty, exit with failure.
3. Remove from OPEN and place on CLOSED a node n having minimum f.
4. If n is a goal node, exit successfully with the solution path obtained by
tracing back the pointers from n to s.
5. Otherwise, expand n, generating its children and directing pointers
from each child node to n.
For every child node n' do:
– evaluate h(n') and compute f(n') = g(n') + h(n') = g(n) + c(n, n') + h(n')
– if n' is already on OPEN or CLOSED, compare its new f with the old f and attach the
lowest f to n'
– put n' with its f value in the right order in OPEN
6. Go to step 2.
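A compact Python sketch of A*, where the heap plays the role of the OPEN
list and best_g implicitly plays the role of CLOSED; graph[node] is assumed
to yield (successor, step_cost) pairs and h is the heuristic function, both
illustrative names rather than anything from the slides.

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A* sketch: expand the node with the smallest f = g + h first."""
    open_list = [(h(start), 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}                                # cheapest known cost to each node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)    # node with minimum f on OPEN
        if node == goal:
            return g, path                             # optimal if h never overestimates
        for child, cost in graph.get(node, []):
            new_g = g + cost                           # g(n') = g(n) + c(n, n')
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(open_list, (new_g + h(child), new_g, child, path + [child]))
    return None                                        # OPEN is empty: failure
```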
Advantages of A* Search Algorithm
• Optimal solution
• Completeness
• Efficiency
• Versatility
• Optimized search
• Memory efficiency
• Web search
• Tunable Heuristics
• Extensively researched
Disadvantages of A* Search Algorithm
• Heuristic accuracy
• Memory usage
• Time complexity
• Bottleneck at the destination
• Cost Binding
• Complexity in dynamic environments
• Perfection in infinite space
Applications of the A* Search Algorithm
• Pathfinding in Games
• Robotics and Autonomous Vehicles
• Maze solving
• Route planning and navigation
• Puzzle-solving
• Network Routing
• Natural Language Processing (NLP)
• Path planning in robotics
• Game AI
Greedy Best First Search
• Best First Search (often abbreviated BFS in this context, not to
be confused with breadth-first search) is a type of search in
artificial intelligence that aims to choose the best path toward
the target by evaluating the nodes using a pre-stipulated
evaluation function.
• This algorithm takes the best features of the depth-first and
breadth-first strategies to perform an efficient search in a
search space.
• Heuristic information assists the algorithm in
choosing the next appropriate node for expansion, so
as to minimize the overall search effort and get
closer to the goal state.
• The evaluation function, normally a heuristic
function, measures the cost of the steps needed to reach
the goal from the current node.
Best First Search Algorithm
• Create 2 empty lists: OPEN and CLOSED
• Start from the initial node (say N) and put it in the
'ordered' OPEN list
• Repeat the next steps until the GOAL node is reached
OR OPEN list is empty
– Select the best node (say N) in the OPEN list and move it to
the CLOSED list. Also, capture the information of the parent
node
– If N is a GOAL node, then exit the loop returning 'True'. The
solution can be found by backtracking the path.
– If N is not the GOAL node, expand node N to generate the
'immediate' next nodes linked to node N and add all those
to the OPEN list
– Reorder the nodes in the OPEN list in ascending order
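A minimal sketch of greedy best-first search with a priority queue ordered
by h(n) alone; the dictionary-of-lists graph format and the names graph,
h, start and goal are the same illustrative assumptions used in the
earlier examples.

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first sketch: always expand the OPEN node with the smallest h(n)."""
    open_list = [(h(start), start, [start])]   # OPEN list, ordered by heuristic value only
    closed = set()                             # CLOSED list of already-expanded nodes
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):      # generate the immediate successors of node
            if child not in closed:
                heapq.heappush(open_list, (h(child), child, path + [child]))
    return None                                # OPEN exhausted without reaching the goal
```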
Advantages
• Reduction in Human Error
• Risk-Taking
• Efficiency and Productivity
• Enhanced Decision Making
• Personalization and Customer Experience
Disadvantages
• Reduces Employment
• Lacks Creative Ability
• Absence of Emotional Range
• Ethical Dilemmas
• High Costs
Applications of Best First Search
• Pathfinding in Games
• Robotics
• Network Routing
• Artificial Intelligence
• Navigation Systems
State Space Search
To locate a solution, state space search entails methodically
going through the potential states of a problem. This approach
can be used to solve a variety of AI problems, including
pathfinding, solving puzzles, playing games, and more. The
fundamental concept is to model the problem as a graph with
nodes standing in for states and edges for transitions.

The important ideas are:
• State: A specific configuration of the problem.
• Initial State: The starting point of the search.
• Goal State: The desired end configuration.
• Transition: An action that changes one state to another.
• Path: A sequence of states connected by transitions.
• Search Strategy: The method used to explore the state
space.
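To make these ideas concrete, here is a small illustrative state-space
formulation in Python of the classic two-jug (water-jug) problem; the data
and function names are made up for illustration only.

```python
# State: (litres in the 4-litre jug, litres in the 3-litre jug)
initial_state = (0, 0)

def goal_test(state):
    return state[0] == 2                    # goal state: exactly 2 litres in the 4-litre jug

def transitions(state):
    """Return a dict mapping action names to the successor state each action produces."""
    x, y = state
    return {
        "fill-4":  (4, y),
        "fill-3":  (x, 3),
        "empty-4": (0, y),
        "empty-3": (x, 0),
        "pour-3-into-4": (min(4, x + y), y - (min(4, x + y) - x)),
        "pour-4-into-3": (x - (min(3, x + y) - y), min(3, x + y)),
    }

# Any of the uninformed searches above can explore this state space,
# starting from initial_state and using goal_test to recognise the goal.
```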
Applications of State Space Search
State space search is extensively employed in many
different fields, such as:
• Pathfinding: Finding the best pathways using
algorithms such as A* in robotics and GPS.
• Puzzle solving: resolving puzzles like Rubik's Cube,
Sudoku, and the 8-puzzle.
• AI for gaming: To assess potential moves in board
games like chess, checkers, and others.
• Planning: The automated scheduling of tasks in
logistics and robotics to achieve a specific objective.
• Natural Language Processing: machine
translation and sentence parsing by examining many
possible interpretations.
• Theorem Proving: Examining logical proofs by looking
for potential logical inference sequences.
Challenges in State Space Search
• Complexity: High branching factors can cause
an exponential growth in the number of states
to be explored.
• Resource Limitations: Memory and processing
power limit the size of the state space that can
be practically searched.
• Quality of Heuristics: The effectiveness of the
search is often limited by the quality of the
heuristic function.
Generate and Test Search
• Generate and Test Search is a heuristic search
technique based on depth-first search with
backtracking, which guarantees to find a solution, if
one exists, when done systematically.
• In this technique, all the solutions are generated
and tested for the best solution. It ensures that
the best solution is checked against all possible
generated solutions.
• It is also known as British Museum Search
Algorithm as it’s like looking for an exhibit at
random or finding an object in the British
Museum by wandering randomly.
Algorithm
• Generate a possible solution. For example,
generate a particular point in the problem
space or a path from the start state.
• Test to see whether this is an actual solution by
comparing the chosen point, or the endpoint of
the chosen path, to the set of acceptable goal
states.
• If a solution is found, quit. Otherwise go to
Step 1.
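A tiny Python sketch of the generate-and-test loop; the generator, the
goal test, and the example search for a three-digit code are all
hypothetical illustrations.

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Generate-and-test sketch: try each generated candidate until one passes the goal test."""
    for candidate in candidates:       # step 1: generate a possible solution
        if is_goal(candidate):         # step 2: test it against the goal condition
            return candidate           # step 3: a solution is found, quit
    return None                        # generator exhausted without finding a solution

# Hypothetical use: find a 3-digit code whose digits sum to 15.
codes = itertools.product(range(10), repeat=3)
print(generate_and_test(codes, lambda code: sum(code) == 15))   # (0, 6, 9)
```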
Properties of Good Generators
• Complete: Good generators need to be complete, i.e. they
should generate all the possible solutions and cover all the
possible states. In this way, we can guarantee that our algorithm
converges to the correct solution at some point in time.
• Non Redundant: Good Generators should not yield a
duplicate solution at any point of time as it reduces the
efficiency of algorithm thereby increasing the time of search
and making the time complexity exponential. In fact, it is
often said that if solutions appear several times in the
depth-first search then it is better to modify the procedure to
traverse a graph rather than a tree.
• Informed: Good Generators have the knowledge about the
search space which they maintain in the form of an array of
knowledge. This can be used to search how far the agent is
from the goal, calculate the path cost and even find a way to
reach the goal.
Hill Climbing Algorithm
• The hill climbing algorithm is a local search algorithm which continuously moves
in the direction of increasing elevation/value to find the peak of the
mountain or the best solution to the problem. It terminates when it reaches a
peak value where no neighbour has a higher value.
• The hill climbing algorithm is a technique used for optimizing
mathematical problems. One of the widely discussed examples of the hill
climbing algorithm is the Travelling Salesman Problem, in which we need to
minimize the distance travelled by the salesman.
• It is also called greedy local search, as it only looks to its good immediate
neighbour state and not beyond that.
• A node of the hill climbing algorithm has two components: state and
value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we do not need to maintain and handle the search tree or
graph, as it only keeps a single current state.
Features of Hill Climbing
• Generate and Test variant: Hill climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback which
helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing search moves in the direction
which optimizes the cost.
• No backtracking: It does not backtrack through the search space, as it does
not remember the previous states.
• Deterministic Nature: Hill climbing is a deterministic optimization
algorithm, which means that, given the same initial conditions and the
same problem, it will always produce the same result. There is no
randomness or uncertainty in its operation.
• Local Neighborhood: Hill climbing is a technique that operates within a
small area around the current solution. It explores solutions that are
closely related to the current state by making small, gradual changes. This
approach allows it to find a solution that is better than the current one,
although it may not be the global optimum.
State-space Diagram for Hill Climbing
• The state-space landscape is a graphical
representation of the hill-climbing algorithm,
showing a graph between the various states
of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can
be an objective function or a cost function, and
the state space on the X-axis. If the function on the
Y-axis is cost, then the goal of the search is to find the
global minimum and local minimum. If the function on the
Y-axis is an objective function, then the goal of the
search is to find the global maximum and local
maximum.
Different regions in the state space
landscape
• Local Maximum: Local maximum is a state which is better
than its neighbor states, but there is also another state which
is higher than it.
• Global Maximum: Global maximum is the best possible state
of state space landscape. It has the highest value of objective
function.
• Current state: It is a state in a landscape diagram where an
agent is currently present.
• Flat local maximum: It is a flat space in the landscape where
all the neighbor states of current states have the same value.
• Shoulder: It is a plateau region which has an uphill edge.
• Ridge: A ridge is a higher region with a slope, which can look
like a peak. This may cause the algorithm to stop prematurely,
missing better solutions nearby.
Types of Hill Climbing Algorithm
• Simple hill climbing
• Steepest-ascent hill climbing
• Stochastic hill climbing
Simple Hill Climbing
Simple hill climbing is the simplest way to
implement a hill climbing algorithm. It evaluates one
neighbouring state at a time and selects the first one
that improves the current cost, setting it as the
current state. It checks only one successor state, and
if that successor is better than the current state, it
moves to it; otherwise it stays in the same state.
This algorithm has the following features:
• Less time consuming
• Less optimal solution, and the solution is not
guaranteed
Algorithm for Simple Hill Climbing
• Evaluate the initial state. If it is a goal state, return
success.
• Make the initial state the current state.
• Loop until a solution is found or no operators can be
applied:
– Select an operator that has not yet been applied to the
current state and apply it to generate a new state.
– Evaluate the new state.
– If the new state is the goal, return success.
– If the new state improves upon the current state, make it
the current state and continue.
– If it doesn't improve, continue searching neighboring
states.
• Exit the function if no better state is found.
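A minimal Python sketch of simple hill climbing; neighbors (a function
returning the successor states of a state) and value (the objective
function) are hypothetical names used only for illustration.

```python
def simple_hill_climbing(initial, neighbors, value):
    """Simple hill climbing sketch: accept the first neighbour that improves on the current state."""
    current = initial
    while True:
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):   # the first better neighbour wins
                current = candidate
                improved = True
                break
        if not improved:
            return current      # no better neighbour: a local (possibly global) maximum
```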
Steepest-Ascent hill climbing
The steepest-ascent algorithm is a variation of the simple hill climbing
algorithm. This algorithm examines all the neighbouring nodes of the
current state and selects the neighbour node which is closest to the
goal state. This algorithm consumes more time, as it searches through
multiple neighbours.

Algorithm for Steepest-Ascent Hill Climbing
• Evaluate the initial state. If it is a goal state, return success.
• Make the initial state the current state.
• Repeat until the solution is found or the current state remains
unchanged:
– Apply each operator not yet applied to the current state,
generating its neighbouring states.
– Initialize a 'best state' variable and evaluate all the neighbouring states.
– If a better state is found, update the best state.
– If the best state is the goal, return success.
– If the best state improves upon the current state, make it the new
current state and repeat.
• Exit the function if no better state is found.
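A sketch of steepest-ascent hill climbing under the same illustrative
neighbors/value assumptions as the previous example.

```python
def steepest_ascent_hill_climbing(initial, neighbors, value):
    """Steepest-ascent sketch: examine every neighbour and move to the best one, if it improves."""
    current = initial
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current                      # no neighbours at all
        best = max(candidates, key=value)       # best neighbour of the current state
        if value(best) <= value(current):
            return current                      # no uphill move left: local maximum
        current = best
```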
Stochastic hill climbing
• Stochastic Hill Climbing introduces randomness into the search
process. Instead of evaluating all neighbors or selecting the first
improvement, it selects a random neighboring node and decides
whether to move based on its improvement over the current state.

Algorithm for Stochastic Hill Climbing:

• Evaluate the initial state. If it is a goal state, return success.


• Make the initial state the current state.
• Repeat until a solution is found or the current state does not change:
– Apply the successor function to the current state and generate all
neighboring states.
– Choose a random neighboring state based on a probability
function.
– If the chosen state is better than the current state, make it the new
current state.
– If the selected neighbor is the goal state, return success.
• Exit the function if no better state is found.
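A sketch of stochastic hill climbing; the random neighbour selection
follows the description above, and max_steps is an assumed cap so the
illustrative walk always terminates.

```python
import random

def stochastic_hill_climbing(initial, neighbors, value, max_steps=1000):
    """Stochastic hill climbing sketch: pick a random neighbour and move only if it improves."""
    current = initial
    for _ in range(max_steps):                  # cap the number of moves so the walk terminates
        candidates = list(neighbors(current))
        if not candidates:
            break
        candidate = random.choice(candidates)   # a random neighbour instead of the best one
        if value(candidate) > value(current):
            current = candidate                 # accept only improving moves
    return current
```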
Applications of Hill Climbing in AI
• Pathfinding: Hill climbing is used in AI systems that
need to navigate or find the shortest path between
points, such as in robotics or game development.
• Optimization: Hill climbing can be used for solving
optimization problems where the goal is to maximize
or minimize a particular objective function, such as
scheduling or resource allocation problems.
• Game AI: In certain games, AI uses hill climbing to
evaluate and improve its position relative to an
opponent’s.
• Machine Learning: Hill climbing is sometimes used for
hyperparameter tuning, where the algorithm iterates
over different sets of hyperparameters to find the
best configuration for a machine learning model.
