Module II (Part 2)

Problem-solving Methods

Search Algorithms in Artificial Intelligence


 Search algorithms are one of the most important areas of Artificial Intelligence.
Problem-solving agents:
 Search techniques are common approaches to solving problems in artificial intelligence. In AI, rational agents or problem-solving agents frequently use these search techniques or algorithms to solve a particular problem and deliver the best outcome.
 Problem-solving agents are goal-based agents that use an atomic representation of states. In this topic we will study different search algorithms for solving problems.
Search Algorithm Terminologies:
 Search: Searching is a step-by-step process of solving a search problem in a given search space. A search problem has three major components:
◦ Search Space: The set of possible solutions a system may have.
◦ Start State: The state from which the agent begins the search.
◦ Goal test: A function that observes the current state and returns whether the goal state has been reached.
 Search tree: A tree representation of the search problem. The root node at the top of the search tree corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition model: A description of what each action accomplishes.
 Path Cost: A function that assigns a numeric cost to each path.
 Solution: An action sequence that leads from the start node to the goal node.
 Optimal Solution: A solution with the lowest cost among all possible solutions.
Properties of Search Algorithms:

The four crucial properties used to compare the effectiveness of search algorithms are as follows:
Completeness: A search algorithm is complete if it guarantees to return a solution whenever at least one solution exists for any random input.
Optimality: A solution found by an algorithm is optimal if it is guaranteed to be the best solution (lowest path cost) among all solutions.
Time Complexity: A measure of the time an algorithm takes to complete its task.
Space Complexity: The maximum amount of storage space required at any point during the search.
Types of search algorithms
 Based on the search problem, we can classify search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.
Uninformed/Blind Search:
 An uninformed search has no domain knowledge, such as closeness to or the location of the goal.
 It operates by brute force, since it only contains information on how to traverse the tree and how to recognise leaf and goal nodes.
 Uninformed search, also known as blind search, searches a search tree without any prior knowledge of the search space, such as initial-state operators and goal tests.
 It examines every node of the tree until it reaches the goal node.
It can be divided into six main types:
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening depth-first search
 Bidirectional search
Informed Search

 Informed search algorithms use domain knowledge.
 In an informed search, problem information is available and can guide the search.
 Informed search strategies find a solution more efficiently.
 Informed search is also called heuristic search.
 A heuristic is a way of finding a solution that may not always be optimal, but is guaranteed to find a good solution in a reasonable amount of time.
 Informed search can handle many complex problems that cannot be solved in any other way.
 An example of an informed-search problem is the travelling salesman problem.
 Greedy Search
 A* Search
Uninformed Search Algorithms

 Uninformed search is a class of general-purpose search algorithms that operate by brute force.
 It is also known as blind search, because uninformed search algorithms only know how to traverse the tree and have no additional knowledge of the state or the search space.
 The different types of uninformed search algorithms are as follows:
 Breadth-first Search
 Depth-first Search
 Depth-limited Search
 Iterative deepening depth-first search
 Uniform cost search
 Bidirectional Search
1. Breadth-first Search:

 Breadth-first search is the most common search strategy for traversing a tree or graph. The breadth-first search algorithm searches breadthwise in a tree or graph.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving on to nodes of the next level.
 The breadth-first search algorithm is a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
Advantages: If a solution exists, BFS will find it.
If there is more than one solution for a given problem, BFS will find the minimal solution, i.e. the one requiring the fewest steps.
Disadvantages: It requires a lot of memory, since each level of the tree must be stored in memory before expanding to the next level.
If the solution is far from the root node, BFS takes a long time.
Example:
 In the below tree structure, we show the traversal of the tree using the BFS algorithm from root node S to goal node K.
 The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
 Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution and b = branching factor:
 T(b) = 1 + b + b^2 + ... + b^d = O(b^d)
 Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
 Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, BFS will find a solution.
 Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
2. Depth-first Search

 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called depth-first search because it starts at the root node, follows each path down to its deepest node, and then moves on to the next path.
 DFS is implemented using a stack data structure.
 The process of the DFS algorithm is similar to that of the BFS algorithm.
Advantage: DFS requires very little memory, because it only needs to store a stack of the nodes on the path from the root node to the current node.
It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantage: There is no guarantee of finding a solution, and many states may be revisited.
The DFS algorithm searches deep and may occasionally enter an infinite loop.
Example:
 In the below search tree, we show the flow of depth-first search, which follows the order:
 Root node ---> left node ---> right node.
 It starts searching from root node S and traverses A, then B, then D and E. After traversing E it backtracks, as E has no other successor and the goal node has not yet been found. After backtracking it traverses node C and then G, where it terminates, having found the goal node.
 Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
 Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm:
 T(b) = 1 + b + b^2 + ... + b^m = O(b^m)
 where m = maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
 Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
 Optimality: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
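The stack-based traversal described above can be sketched as follows; the adjacency list is a hypothetical reconstruction of the S, A, B, D, E, C, G example (the original figure is not shown), and `dfs` is an illustrative name:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit LIFO stack; returns a path or None."""
    stack = [[start]]                # stack of paths
    visited = set()
    while stack:
        path = stack.pop()           # deepest (most recently pushed) path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None

# Hypothetical tree matching the example: A is explored fully before C
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))  # -> ['S', 'C', 'G']
```

Note the memory behaviour claimed above: only the stack of paths from the root is kept, not whole levels of the tree.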
3. Depth-Limited Search Algorithm:

 An method for depth-limited searches is a depth-first search with a


cap. The limitation of the infinite route in depth-first search can be
overcome by depth-limited search. The node at the depth limit will
be treated as having no further successor nodes in this method.
 Two conditions of failure can result in the termination of a depth-
limited search:
 Standard failure value: This means the issue cannot be solved.
 Cutoff failure value: It indicates that there isn't an answer to the
issue within a certain depth range.
 Advantages:
 Depth-limited search is Memory efficient.
 Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.
Example:
 Completeness: DLS search algorithm is complete if the solution is
above the depth-limit.
 Time Complexity: Time complexity of DLS algorithm is O(bℓ).
 Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).
 Optimal: Depth-limited search can be viewed as a special case of
DFS, and it is also not optimal even if ℓ>d.
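The two failure values described above can be made concrete in a short recursive sketch; the distinct return values for standard failure (`None`) and cutoff failure (`'cutoff'`) follow the text, while the graph and names are hypothetical:

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS: returns a path, 'cutoff', or None (standard failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'              # depth limit reached: cutoff failure
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    # 'cutoff' means a solution might exist deeper; None means none exists at all
    return 'cutoff' if cutoff_occurred else None

# Hypothetical tree: G sits at depth 2, so a limit of 1 causes a cutoff
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dls(graph, 'S', 'G', 2))  # -> ['S', 'C', 'G']
print(dls(graph, 'S', 'G', 1))  # -> 'cutoff'
```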
4. Uniform-cost Search Algorithm:

 An algorithm for navigating a weighted tree or network is known as


uniform-cost search. When a different cost is provided for each
edge, this algorithm is used. Finding the path with the lowest
cumulative cost to the target node is the main objective of the
uniform-cost search. According to the path expenses from the root
node, uniform-cost search expands nodes. Any graph or tree that
requires the optimum cost can be solved using it. The priority
queue uses an algorithm with a uniform expense. It provides the
lowest total cost the highest priority. In the event that the path
cost of every edge is the same, uniform cost search is identical to
the BFS algorithm.
 Benefits: Uniform cost search is the best option because it selects
the least expensive route at each state.
 Cons: It only considers the expense of the path rather than how
many steps there are in the search process. This algorithm might
become trapped in an endless cycle as a result.
Example:
 Completeness:
 Uniform-cost search is complete, such as if there is a solution, UCS will find
it.
 Time Complexity:
 Let C* is Cost of the optimal solution, and ε is each step to get closer to
the goal node. Then the number of steps is = C*/ε+1. Here we have taken
+1, as we start from state 0 and end to C*/ε.
 Hence, the worst-case time complexity of Uniform-cost search isO(b1 +
[C*/ε]
)/.
 Space Complexity:
 The same logic is for space complexity so, the worst-case space complexity
of Uniform-cost search is O(b1 + [C*/ε]).
 Optimal:
 Uniform-cost search is always optimal as it only selects a path with the
lowest path cost.
5. Iterative Deepening Depth-first Search:

 The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
 It performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.
 This search algorithm combines the memory efficiency of depth-first search with the fast search of breadth-first search.
 The iterative deepening search algorithm is useful when the search space is large and the depth of the goal node is unknown.
 Advantages:
 It combines the fast search and memory efficiency of the BFS and DFS search algorithms.
 Disadvantages:
 The main drawback of IDDFS is that it repeats all of the work of the previous phase.
 Example: The tree structure below illustrates iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations of the algorithm are as follows:
 1st Iteration -----> A
2nd Iteration ----> A, B, C
3rd Iteration ------> A, B, D, E, C, F, G
4th Iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
 Completeness:
 This algorithm is complete if the branching factor is finite.
 Time Complexity:
 If b is the branching factor and d is the depth of the goal, the worst-case time complexity is O(b^d).
 Space Complexity:
 The space complexity of IDDFS is O(bd).
 Optimality:
 The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
6. Bidirectional Search Algorithm:

 To locate the goal node, the bidirectional search algorithm


conducts two simultaneous searches, one from the initial state
(forward search) and the other from the goal node (backward
search).
 One single search graph is replaced by two small subgraphs in
bidirectional search, one of which begins the search from the initial
vertex and the other from the goal vertex.
 When these two graphs cross, the hunt comes to an end.
 BFS, DFS, DLS, and other search methodologies can be used for
bidirectional search.
 Advantages:
 Bidirectional search is fast.
 Bidirectional search requires less memory
 Disadvantages:
 Implementation of the bidirectional search tree is difficult.
 In bidirectional search, one should know the goal state in
advance.
 Example:
 In the below search tree, bidirectional search algorithm is applied.
This algorithm divides one graph/tree into two sub-graphs. It starts
traversing from node 1 in the forward direction and starts from
goal node 16 in the backward direction.
 The algorithm terminates at node 9 where two searches meet.
 Completeness: Bidirectional Search is complete if we use BFS in
both searches.
 Time Complexity: Time complexity of bidirectional search using
BFS is O(bd).
 Space Complexity: Space complexity of bidirectional search
is O(bd).
 Optimal: Bidirectional search is Optimal.
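The two-frontier idea above can be sketched with two BFS waves over an undirected graph; the graph, node numbering, and helper names below are hypothetical, not the example from the original figure:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run BFS from both ends; stop when the frontiers meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_q, bwd_q = deque([start]), deque([goal])
    while fwd_q and bwd_q:
        # advance one node on each frontier per round
        for q, parent, other in ((fwd_q, fwd_parent, bwd_parent),
                                 (bwd_q, bwd_parent, fwd_parent)):
            if not q:
                continue
            node = q.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parent:
                    parent[nbr] = node
                    q.append(nbr)
                    if nbr in other:          # the two searches intersect here
                        return _join(nbr, fwd_parent, bwd_parent)
    return None

def _join(meet, fwd_parent, bwd_parent):
    """Stitch the forward and backward half-paths together at the meeting node."""
    path, node = [], meet
    while node is not None:
        path.append(node)
        node = fwd_parent[node]
    path.reverse()
    node = bwd_parent[meet]
    while node is not None:
        path.append(node)
        node = bwd_parent[node]
    return path

# Hypothetical undirected graph; the frontiers meet between 1 and 6
graph = {1: [2, 3], 2: [1, 4], 3: [1, 5], 4: [2, 6], 5: [3, 6], 6: [4, 5]}
print(bidirectional_search(graph, 1, 6))  # -> [1, 2, 4, 6]
```

Note that the backward search here relies on the adjacency list being undirected; with directed edges, the backward frontier must follow reversed edges.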
Informed Search Algorithms

 So far we have discussed uninformed search algorithms, which searched the search space for all possible solutions to the problem without any additional knowledge of the search space. An informed search algorithm, by contrast, uses various kinds of knowledge, such as the distance to the goal, the cost of the path, how to reach the goal, etc. Using this knowledge, agents can explore less of the search space and find the goal node more quickly and efficiently.
 The informed search algorithm is more useful for large search spaces. Informed search is also called heuristic search, because it uses the idea of a heuristic.
 Heuristic function: Informed search uses a heuristic function, which finds the most promising path. It takes the agent's current state as input and produces an estimate of how close the agent is to the goal. The heuristic method may not always give the best solution, but it guarantees a good solution in a reasonable amount of time. A heuristic function estimates how close a state is to the goal. It is denoted by h(n), and it calculates the cost of an optimal path between the two states. The value of the heuristic function is always positive.
 The admissibility of the heuristic function is given as:
 h(n) <= h*(n)
 Here h(n) is the heuristic cost and h*(n) is the actual cost. Hence, for an admissible heuristic, the heuristic cost must be less than or equal to the actual cost.
 Pure Heuristic Search:
 Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN list and a CLOSED list. In the CLOSED list it places the nodes that have already been expanded, and in the OPEN list the nodes that have not yet been expanded.
 On each iteration, the node n with the lowest heuristic value is expanded; all its successors are generated, and n is placed on the closed list. The algorithm continues until a goal state is found.
 In informed search we will discuss the two main algorithms given below:
 Best First Search Algorithm (Greedy search)
 A* Search Algorithm
 1.) Best-first Search Algorithm (Greedy Search):
 The greedy best-first search algorithm always selects the path that appears best at the moment. It is a combination of depth-first search and breadth-first search. It uses the heuristic function to guide the search. With best-first search we can take advantage of both algorithms: at each step, we can choose the most promising node. The best-first search algorithm expands the node that is closest to the goal node, where closeness is estimated by a heuristic function, i.e.
 f(n) = h(n)
 where h(n) = estimated cost from node n to the goal.
 The greedy best-first algorithm is implemented using a priority queue.
 Best-first search algorithm:
 Step 1: Place the starting node in the OPEN list.
 Step 2: If the OPEN list is empty, stop and return failure.
 Step 3: Remove from the OPEN list the node n with the lowest value of h(n) and place it in the CLOSED list.
 Step 4: Expand node n and generate its successors.
 Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
 Step 6: For each successor node, the algorithm computes the evaluation function f(n), then checks whether the node is already on the OPEN or CLOSED list. If the node is on neither list, add it to the OPEN list.
 Step 7: Return to Step 2.
 Advantages:
 Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
 This algorithm is more efficient than the BFS and DFS algorithms.
 Disadvantages:
 In the worst case it can behave like an unguided depth-first search.
 Like DFS, it can get stuck in a loop.
 This algorithm is not optimal.
 Example:
 Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.
 In this search example we use two lists, the OPEN and CLOSED lists. The iterations for traversing the example are as follows:
 Expand the successors of S and put them in the OPEN list, and S in the CLOSED list
 Initialization: Open [A, B], Closed [S]
 Iteration 1: Open [A], Closed [S, B]
 Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
 Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
 Hence the final solution path will be: S----> B----->F----> G
 Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
 Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
 Completeness: Greedy best-first search is incomplete, even if the given state space is finite.
 Optimality: The greedy best-first search algorithm is not optimal.
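The OPEN/CLOSED bookkeeping above can be sketched with a priority queue ordered by h(n). The graph structure mirrors the S, B, F, G trace, but the heuristic values in the `h` table are hypothetical (the original table is not reproduced):

```python
import heapq

def greedy_bfs(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue keyed on h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

# Hypothetical graph and heuristic values consistent with the S->B->F->G trace
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_bfs(graph, h, 'S', 'G'))  # -> ['S', 'B', 'F', 'G']
```

Because only h(n) is consulted and path cost g(n) is ignored, the path returned need not be the cheapest, which is why the algorithm is not optimal.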
2.) A* Search Algorithm:

 A* search is the best-known form of best-first search. It uses the heuristic function h(n) and the cost g(n) to reach node n from the start state. It combines the features of UCS and greedy best-first search, which lets it solve problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
 In the A* search algorithm, we use both the search heuristic and the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
 f(n) = g(n) + h(n)
Algorithm of A* search:
 Step 1: Place the starting node in the OPEN list.
 Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
 Step 3: Select the node from the OPEN list with the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise:
 Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it in the OPEN list.
 Step 5: If node n' is already in OPEN or CLOSED, attach it to the back pointer that reflects the lowest g(n') value.
 Step 6: Return to Step 2.
Advantages:
 The A* search algorithm performs better than other search algorithms.
 The A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.
Disadvantages:
 It does not always produce the shortest path, as it relies on heuristics and approximation.
 The A* search algorithm has some complexity issues.
 The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Example:
 In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the below table, so we will calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach any node from the start state.
 Here we will use the OPEN and CLOSED lists.
Initialization: {(S, 5)}
 Iteration 1: {(S--> A, 4), (S-->G, 10)}
 Iteration 2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
 Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
 Iteration 4 gives the final result: S--->A--->C--->G provides the optimal path with cost 6.
Points to remember:
 The A* algorithm returns the path that is found first, and does not search all remaining paths.
 The efficiency of the A* algorithm depends on the quality of the heuristic.
 The A* algorithm expands all nodes that satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
Complete: The A* algorithm is complete as long as:
 the branching factor is finite, and
 the cost of every action is fixed.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

 Admissibility: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
 Consistency: the second required condition is consistency, which applies only to A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of the A* search algorithm is O(b^d).
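The f(n) = g(n) + h(n) ordering can be sketched as follows. The weighted graph and heuristic table below are hypothetical but chosen to be consistent with the iteration trace above (f(S)=5, f(S→A)=4, ..., optimal path S→A→C→G with cost 6); `a_star` is an illustrative name:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                  # a cheaper route to this node was already expanded
        best_g[node] = g
        for child, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None

# Hypothetical graph and (admissible) heuristic matching the trace above
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # -> (6, ['S', 'A', 'C', 'G'])
```

Setting h to zero everywhere turns this into UCS; the heuristic term is what lets A* skip the direct but expensive S→G edge without expanding it.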
Heuristic Method

 The heuristic method refers to finding the best possible solution to a problem quickly, effectively, and efficiently. The word "heuristic" is derived from the Greek word "eurisko", which means to search, to find, or to discover. It is an effective mental shortcut for problem-solving and decision-making that reduces cognitive load and does not demand perfection. The approach helps to find a satisfactory solution to a much larger problem in a limited amount of time.
 The most basic heuristic is trial and error. It can be used in every circumstance, from matching screws and bolts to solving an algebraic problem. Some prevalent heuristics for solving mathematical problems are the use of visual representation, forward/backward reasoning, additional assumptions, and simplification.
 Advantages of the Heuristic Method
 The heuristic approach is one of the finest methods for problem-solving and decision-making. With the aid of mental heuristics, the technique offers a speedy resolution. Below are a few benefits of the heuristic approach:
 Attribute Substitution: A difficult and complex question can be replaced by a simpler question that is linked to the original one. The attribute-substitution technique improves the approach.
 Effort Savings: The heuristic technique lowers the mental effort needed to make a variety of decisions. As a result, it becomes one of the best techniques for solving a variety of time-consuming problems.
 Quick and Cheap: Problems can be resolved in a short amount of time by using a heuristic technique.
 Disadvantages of the Heuristic Method
 As we all know, heuristic methods help us make quick, informed choices that solve problems, but in certain circumstances they can also lead to mistakes and biased judgements. The following are some of the heuristic method's drawbacks:
 Unreliable Decisions: The heuristic approach does not always yield a reliable solution or judgement. The technique occasionally gives an incorrect conclusion or assessment of how frequently and how representatively certain things may occur. The instances of decision-making manipulation make this easy to understand.
 Biased Decisions: A choice or a solution that worked well in one situation will not necessarily work in another, or even in the same situation again. If someone consistently uses the same heuristic, it can be hard to see alternative, better answers.
 Reduced Creativity: If a person consistently makes the same choice, this can also lessen their capacity for creative decision-making and problem-solving. It prevents the individual from forming fresh opinions and judgements.
 Stereotypes and prejudice are two additional things that heuristic techniques affect. A person risks missing more relevant and instructive information when using mental shortcuts to classify and categorise other people. Such circumstances may lead to biased and stereotypical classifications of people and actions that do not correspond to the actual circumstances.
 Four Principles of the Heuristic Method
 György (George) Pólya outlined the four principles of the heuristic technique in his book How to Solve It, published in 1945. Finding the answer to a problem may be difficult if these principles are not applied in the order in which they are presented; for this reason, they are also known as the method's four stages.
 First Principle - Understanding the Problem: This is the initial stage in problem solving. It is the most crucial principle, because it is necessary to comprehend the actual problem at hand before seeking a solution, yet many people disregard it as the first appropriate course of action. The guiding idea is to understand the problem and consider it from various perspectives. This principle covers a variety of questions: what the problem is, what is happening, whether there is another way to describe the problem, whether all the necessary information is available, etc. These all aid in comprehending the actual problem and all of its facets.
 Second Principle - Making a Plan: A problem can be solved in a variety of ways. The second principle states that the best approach for finding a solution to the problem at hand must be identified. Finding the requirement first is the best strategy for this; working backward can be useful here, in which one assumes one already has a solution and works back from it toward the problem.

Additionally, it helps to make a summary of the possibilities, eliminate the less effective options right away, compare all of the surviving options, or use symmetry. This enhances a person's capacity for inventiveness and judgement.
 Third Principle - Carrying Out the Plan: The plan can be carried out once the appropriate approach has been developed. Doing so, however, requires patience and giving the problem the necessary amount of time to be resolved, because carrying out a plan is harder than creating one. If the plan does not provide a solution or performs worse than expected, it is advised to repeat the second principle in a better way.
 Fourth Principle - Evaluation and Adaptation: This principle assesses whether everything is going according to plan. In other words, it instructs us to compare the intended method with the one actually followed. If everything is proceeding according to plan, the best possible problem-solving outcome can be reached. While some schemes might succeed, others might not; therefore, the approach to solving the primary problem can be modified after a correct evaluation.
 Types of Heuristic Methods
 Several heuristic methods were also used by Pólya. Some of the
most popular methods are discussed below:
 Dividing Technique: In order to find the solution more quickly, this
method breaks the original problem into smaller chunks or sub-
problems. Once these subproblems have been resolved
independently, they can be combined to yield the complete answer
to the initial problem's solution.
 Inductive Method: The inductive technique uses a much smaller
problem than the one that needed to be solved in the beginning.
By extrapolating the generalisation from the smaller issue or by
using the same approach as in the prior problem, the larger
original problem can be resolved.
 Reduction Method: This technique establishes various boundaries
for the primary problem in advance because, as we all know, the
problem is solved by various factors and causes. It aids in limiting
the scope of the initial issue and making the answer simple to find.
 Constructive Method: In this approach, each stage of the problem-
solving process is celebrated as a success once it has been
completed. Following that, successive steps are done to get to the
last one. It aids in finding the ideal solution to the issue and
achieving a fruitful outcome.
 Local Search Method: With this approach, the most practical
solution to an issue is sought out and employed. During the
problem-solving process, the method is continuously improved.
When there is no more room for improvement, the method reaches
its conclusion, and the outcome is the solution to the issue.
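The Dividing Technique above can be illustrated with merge sort, which splits a list into halves, solves each half independently, and then combines the partial answers. This example is an illustration added here, not part of Pólya's original text.

```python
def merge_sort(items):
    """Dividing technique: split the problem into smaller chunks, solve
    each sub-problem independently, then combine the partial answers."""
    if len(items) <= 1:               # a trivially small sub-problem
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve each chunk on its own
    right = merge_sort(items[mid:])
    # combine the sub-solutions into the answer to the original problem
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The same divide, solve, combine pattern underlies many heuristic decompositions of large problems.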
 Heuristic Method vs. Exact Solution
 The features that make the heuristic method different and superior
from the exact solution method are as under:
Heuristic Method                              Exact Solution Method

A mathematical method that provides a         Focuses on finding the provably
good solution to a particular problem,        optimal solution to a problem.
without a guarantee (proof) of optimality.

This method consumes less time.               This method consumes more time.

It provides a good, immediate, short-term,    It provides an optimal, perfect,
or approximate solution or decision.          or rational solution or decision.

It is more flexible.                          It is less flexible.
Local Search Algorithms
Hill Climbing Algorithm in Artificial Intelligence
 The hill climbing algorithm is a local search algorithm that continuously moves
in the direction of increasing elevation or value in order to find the mountain's
peak, i.e. the best solution to the problem. It terminates when it reaches a peak
where no neighbour has a higher value.
 The hill climbing algorithm is a technique for solving mathematical optimisation
problems. The travelling-salesman problem, in which we need to minimise the
distance travelled by the salesman, is one of the most frequently cited examples.
 Because it searches only among its immediate neighbour states and not further
afield, it is also known as greedy local search.
 A node in a hill climbing algorithm has two components: state and value.
 Hill climbing is typically used when a reliable heuristic is available.
 This algorithm maintains only a single current state, so we do not need to build
or maintain a search tree or graph.
 Features of Hill Climbing:
 These are a few of the key characteristics of the hill climbing
algorithm:
 Generate and Test variant: Hill climbing is an extension of the
Generate and Test technique. Feedback from the Generate and Test
method helps decide which direction to move in the search space.
 Greedy strategy: The search of a hill-climbing algorithm advances
in the direction where the expense is optimised.
 No backtracking: Since it cannot recall prior states, it does not go
back in the search space.
 State-space Diagram for Hill Climbing:
 The hill-climbing algorithm is graphically represented by the state-
space landscape, which displays a graph between different
algorithmic stages and Objective function/Cost.
 State space is on the x-axis, and the function, which can be either
an objective function or a cost function, is on the y-axis. Finding
the global minimum and local minimum is the aim of the search if
the Y-axis function is cost. Finding the global maximum and local
maximum is the aim of the search if the Y-axis' function is an
objective function.
 Different regions in the state space landscape:
 Local Maximum: Local maximum is a state which is better than its
neighbor states, but there is also another state which is higher
than it.
 Global Maximum: Global maximum is the best possible state of
state space landscape. It has the highest value of objective
function.
 Current state: It is a state in a landscape diagram where an agent is
currently present.
 Flat local maximum: It is a flat space in the landscape where all the
neighbor states of current states have the same value.
 Shoulder: It is a plateau region which has an uphill edge.
 Types of Hill Climbing Algorithm:
 Simple hill Climbing:
 Steepest-Ascent hill-climbing:
 Stochastic hill Climbing:
 1. Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill climbing
algorithm. It evaluates only one neighbour node state at a time; the
first one that improves on the current state is chosen and made the
current state. It examines a single successor state, and if that
successor is better than the present state, it moves there and
continues from it. Characteristics of this approach:
 it requires less effort and computation
 the solution is usually less than ideal, and there is no guarantee of success
 Algorithm for Simple Hill Climbing:
 Step 1: Evaluate the initial state, if it is goal state then
return success and Stop.
 Step 2: Loop Until a solution is found or there is no new
operator left to apply.
 Step 3: Select and apply an operator to the current state.
 Step 4: Check new state:
◦ If it is goal state, then return success and quit.
◦ Else if it is better than the current state then assign new state
as a current state.
◦ Else if not better than the current state, then return to step2.
 Step 5: Exit.
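The steps above can be sketched in code as follows. The toy objective function and the neighbour generator are illustrative assumptions, not part of the original text.

```python
def simple_hill_climbing(initial, neighbours, value):
    """Simple hill climbing: examine successors one at a time and move to
    the FIRST one that is better than the current state (Steps 1-5 above)."""
    current = initial
    while True:
        improved = False
        for candidate in neighbours(current):      # Step 3: apply an operator
            if value(candidate) > value(current):  # Step 4: better state?
                current = candidate                # yes: make it current
                improved = True
                break
        if not improved:      # no operator improves the state: stop (Step 5)
            return current

# Toy problem (illustrative): maximise f(x) = -(x - 3)^2 over the integers.
best = simple_hill_climbing(0,
                            neighbours=lambda x: [x - 1, x + 1],
                            value=lambda x: -(x - 3) ** 2)
```

On this unimodal toy landscape the search climbs from 0 up to the peak at x = 3 and stops there.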
 2. Steepest-Ascent hill climbing:
 A variant of the straightforward hill-climbing algorithm is the steepest-Ascent
algorithm. This algorithm looks at every node that borders the current state
and chooses the one that is most near the target state. This method takes
longer because it looks for more neighbours.
 Algorithm for Steepest-Ascent hill climbing:
 Step 1: Evaluate the initial state, if it is goal state then return success and stop,
else make current state as initial state.
 Step 2: Loop until a solution is found or the current state does not change.
◦ Let SUCC be a state such that any successor of the current state will be better than it.
◦ For each operator that applies to the current state:
 Apply the new operator and generate a new state.
 Evaluate the new state.
 If it is goal state, then return it and quit, else compare it to the SUCC.
 If it is better than SUCC, then set new state as SUCC.
 If the SUCC is better than the current state, then set current state to SUCC.
 Step 3: Exit.
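A minimal sketch of the loop above, which evaluates all successors and keeps the best one (SUCC). The toy problem is an illustrative assumption.

```python
def steepest_ascent(initial, neighbours, value):
    """Steepest-ascent hill climbing: evaluate ALL successors, keep the
    best one (SUCC), and move only if SUCC beats the current state."""
    current = initial
    while True:
        successors = neighbours(current)
        if not successors:                    # nowhere to go
            return current
        succ = max(successors, key=value)     # best of all successors
        if value(succ) <= value(current):     # SUCC not better: stop
            return current
        current = succ

# Toy problem (illustrative): maximise f(x) = -(x - 3)^2 over the integers.
peak = steepest_ascent(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

Compared with simple hill climbing, each iteration does more work (it scores every neighbour) but always takes the locally best step.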
 3. Stochastic hill climbing:
 Stochastic hill climbing does not examine all of its neighbours
before moving. Instead, this search algorithm randomly selects one
neighbour node and then decides, based on that neighbour's
improvement, whether to move to it or to examine another state.
 Problems in Hill Climbing Algorithm:
 1. Local Maximum: A local maximum is a state that is better than all of
its neighbouring states, but there is another state elsewhere in the
landscape that is higher still.
 Solution: Backtracking methodology is a potential remedy for the
local maximum in the state space environment. Make a list of the
potential paths so that the algorithm can go back and investigate
additional routes.
 2. Plateau: A plateau is a flat region of the search space where all
neighbour states of the present state have the same value. As a
result, the algorithm cannot determine the best direction in which
to move, and a hill-climbing search may get lost in the plateau
region.
 Solution: Moving past a plateau involves making either large or
small movements while looking for an answer. Choose a state at
random that is far from the one you are in now, giving the
algorithm a chance to discover a non-plateau area.
 3. Ridges: The local maximum can take unique forms, such as
ridges. It has a portion that is higher than the regions around it,
but because of its own slope, it cannot be reached in a single
motion.
 Solution: We can solve this issue by moving in multiple ways or by
using bidirectional search.
 Simulated Annealing:
 A hill-climbing algorithm that never makes downhill moves is
guaranteed to be incomplete, because it may become trapped at a
local maximum. Conversely, a pure random walk, which always moves to
a randomly chosen successor, is complete but extremely inefficient.
Simulated annealing is an algorithm that provides both efficiency
and completeness.
 In metallurgy, annealing is the process of heating a metal or glass
to a high temperature and then cooling it gradually, allowing the
material to reach a low-energy crystalline state. Simulated
annealing employs the same idea, except that the algorithm chooses
a random move rather than the best move. If the random move improves
the state, it proceeds along that path; otherwise the algorithm may
still move downhill, accepting the move with a probability of less
than 1.
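A minimal sketch of this idea. The one-dimensional toy objective and the geometric cooling schedule are illustrative assumptions; real applications tune both to the problem.

```python
import math
import random

def simulated_annealing(initial, neighbour, value, t0=10.0, cooling=0.95):
    """Simulated annealing sketch: always accept an uphill move; accept a
    downhill move with probability exp(delta / T), where T cools over time."""
    current = initial
    t = t0
    while t > 1e-6:                      # stop once the system is "frozen"
        nxt = neighbour(current)         # a RANDOM move, not the best move
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                # sometimes takes a downhill path
        t *= cooling                     # gradual cooling
    return current

random.seed(42)  # illustrative run on f(x) = -(x - 3)^2
result = simulated_annealing(0, lambda x: x + random.choice([-1, 1]),
                             lambda x: -(x - 3) ** 2)
```

Early on, when T is high, almost any move is accepted (a random walk); as T cools, downhill moves become rare and the search behaves like hill climbing.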
Constraint Satisfaction Problems in Artificial Intelligence

 To handle different problems we have come across a wide range of
techniques, such as adversarial search and local search.
 Every such approach has a single goal in mind: to identify a
solution that makes it possible to accomplish the goal.
 In adversarial search and local search, however, there were no
restrictions on the values a solution's variables could take.
 In this section we examine the constraint satisfaction technique.
 As its name suggests, "constraint satisfaction" means that a
problem of this kind must be solved while abiding by a set of
limitations or rules.
 A problem is said to be solved by constraint satisfaction when its
variables satisfy all of the given constraints.
 Such a formulation leads to a deeper understanding of both the
structure and the complexity of the problem.
A constraint satisfaction problem is defined by three components:
 X: a set of variables.
 D: a set of domains, one for each variable. Every variable has its
own domain of possible values.
 C: a set of constraints that the variables must satisfy.
The domains are the sets of values the variables may take, subject to
the problem-specific constraints. A constraint satisfaction problem
is made up entirely of these three parts. Each constraint is a
combination "scope, rel": the scope is the tuple of variables that
participate in the constraint, and rel is a relation that lists the
combinations of values those variables may take in order to satisfy
the constraint.
Solving Constraint Satisfaction Problems
For a constraint satisfaction problem (CSP), two things must be defined:
 a state space, and
 the notion of what constitutes a solution.
A state in this state space is defined by assigning values to some or
all of the variables, such as
 X1 = v1, X2 = v2, etc.

Values can be assigned to variables in three ways:
 Consistent or Legal Assignment: an assignment that does not violate
any constraint is said to be consistent or legal.
 Complete Assignment: an assignment in which every variable is given
a value; if such an assignment is also consistent, it is a solution
of the CSP.
 Partial Assignment: an assignment that gives values to only some of
the variables; such assignments are known as incomplete assignments.
Domain Categories within CSP
The variables use one of the two categories of domains mentioned
below:
Infinite (discrete) domain: a domain with infinitely many values;
for example, each variable could be given any of an unlimited
number of starting values.
Finite domain: a domain containing a finite set of values for a
single variable; it is also called a continuous domain.
Types of Constraints in CSP
With respect to the variables, there are essentially three distinct
categories of constraints:
Unary constraints: the simplest kind of constraint, as they
restrict the value of only one variable.
Binary constraints: these link two variables; for example, a
constraint that variable x2 must lie between the values of x1
and x3.
Global constraints: constraints that involve an arbitrary number of
variables.
The following resolution methodologies are used to address the
major categories of constraints:
Linear constraints: commonly used in linear programming, where
every integer-valued variable appears only in linear form.
Nonlinear constraints: used in non-linear programming, where each
variable (an integer value) may appear in non-linear form.
It should be noted that the preference constraint is a special kind
of constraint that is common in real-world problems, indicating
which solutions are preferred rather than which are allowed.
 Imagine a Sudoku puzzle in which some of the squares are initially
filled with specific integers.
 You must fill in the blank squares with values between 1 and 9 so
that no integer repeats in any row, column, or block. This
constraint satisfaction problem is fairly simple to state: a
solution must be found while the given restrictions are respected.
 The empty squares are the variables, while the integer range 1-9
that can fill them is the domain. The values of the variables are
drawn from this domain, and the rules that restrict which value a
variable may take are the constraints.
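The three components X, D, C can be made concrete with a tiny map-colouring CSP. The region names, the colours, and the little backtracking solver below are all illustrative assumptions, not from the text.

```python
# X: variables; D: a domain for each variable; C: binary "must differ" constraints.
X = ["WA", "NT", "SA"]
D = {v: ["red", "green", "blue"] for v in X}
C = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]

def consistent(assignment):
    """A (partial) assignment is consistent if no constraint is violated."""
    return all(assignment[a] != assignment[b]
               for a, b in C if a in assignment and b in assignment)

def solve(assignment=None):
    """Search over partial assignments; a complete, consistent
    assignment is a solution of the CSP."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(X):
        return assignment                     # complete + consistent
    var = next(v for v in X if v not in assignment)
    for value in D[var]:
        trial = {**assignment, var: value}
        if consistent(trial):
            result = solve(trial)
            if result is not None:
                return result
    return None                               # forces backtracking above

solution = solve()
```

Each recursive call extends a partial (incomplete) assignment; only consistent (legal) extensions are explored, and a complete consistent assignment is returned as the solution.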
Backtracking

 In this topic we will learn about backtracking, an essential
technique for dealing with recursive solutions. Functions that
repeatedly invoke themselves are said to be recursive. Here is an
illustration using a palindrome check:
 When first invoked as Palindrome(S, 0, 8), the function makes the
recursive call Palindrome(S, 1, 7), which in turn calls
Palindrome(S, 2, 6), and so on, comparing characters from both ends
and moving inwards.
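A runnable sketch of this recursive check; the 9-character sample string is an illustrative assumption.

```python
def palindrome(s, i, j):
    """True if s[i..j] is a palindrome: compare the outer characters,
    then recurse inward, e.g. palindrome(S, 0, 8) -> palindrome(S, 1, 7)."""
    if i >= j:                      # base case: pointers met or crossed
        return True
    if s[i] != s[j]:                # mismatch: not a palindrome
        return False
    return palindrome(s, i + 1, j - 1)

ok = palindrome("rotavator", 0, 8)  # a 9-character palindrome, indices 0..8
```

Each call shrinks the problem by two characters, so the recursion depth is at most half the string length.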
 Backtracking is a method that can be applied to address such
problems, and it gives us a way to formulate the algorithm.
 It employs the brute-force approach to problem solving: we attempt
to generate all feasible answers for the given problem and then
select the best one from all the candidates. Dynamic programming,
which is used for optimisation problems, also adheres to this
principle.
 Backtracking, however, is not used for solving optimisation
problems. We backtrack when we need all of the possible solutions
and there are numerous ones.
 Backtracking implies that we are moving back and forward; if
the condition is satisfied, we return with success; otherwise, we
go back.
 It is used to solve a problem where a series of objects must be
selected from a given set in order for the sequence to meet
certain requirements.
When to use a Backtracking algorithm?
 When faced with a variety of options, we choose based on
those options. The backtracking method must be used in the
following circumstances:
 When there isn't enough knowledge at hand to make the
best decision, we use the backtracking strategy to explore
every option.
 New choices are presented by each pick. On the other hand,
we go back and make fresh choices. We must apply the
backtracking approach in this situation.
How does Backtracking work?
 Backtracking is a methodical way of trying out different
decision-making processes until you find one that works.
Let's use an illustration to clarify.
 With a start node, we begin. We start by going to point A. As
a result of it being an impractical solution, we continue on to
node B. We turn back to node A from node B because it is
also not a workable answer and is a dead end.
 Suppose another path exists from node A to node C. So, we
move from node A to node C. It is also a dead-end, so again
backtrack from node C to node A. We move from node A to
the starting node.
 Now we will check any other path exists from the starting
node. So, we move from start node to the node D. Since it is
not a feasible solution so we move from node D to node E.
The node E is also not a feasible solution. It is a dead end so
we backtrack from node E to node D.
 Suppose another path exists from node D to node F. So, we
move from node D to node F. Since it is not a feasible
solution and it's a dead-end, we check for another path from
node F.
 Suppose there is another path exists from the node F to node
G so move from node F to node G. The node G is a success
node.
The terms related to the backtracking are:
 Live node: The nodes that can be further generated are known as live
nodes.
 E node: the node whose children are currently being generated, i.e.
the node being expanded.
 Success node: The node is said to be a success node if it provides a
feasible solution.
 Dead node: The node which cannot be further generated and also does
not provide a feasible solution is known as a dead node.
Many problems can be solved by backtracking strategy, and that problems
satisfy complex set of constraints, and these constraints are of two types:
 Implicit constraint: It is a rule in which how each element in a tuple is
related.
 Explicit constraint: The rules that restrict each element to be chosen from
the given set.
Applications of Backtracking
 N-queen problem
 Sum of subset problem
 Graph coloring
 Hamiltonian cycle
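A compact sketch of the first application, the N-queen problem, using the live node / dead node / success node vocabulary from this section. The representation, one column index per row, is an implementation choice.

```python
def solve_n_queens(n):
    """Backtracking for the N-queen problem: place one queen per row and
    backtrack from dead nodes (rows where no column is safe)."""
    solutions = []

    def safe(cols, col):
        """Check the new queen against every queen already placed."""
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:                  # success node: feasible solution
            solutions.append(tuple(cols))
            return
        for col in range(n):                # live node: children to generate
            if safe(cols, col):
                place(cols + [col])         # dead ends simply return (backtrack)

    place([])
    return solutions

four_queens = solve_n_queens(4)             # the classic 4-queens instance
```

The implicit constraint is "no two queens attack each other"; the explicit constraint is that each queen's column is chosen from range(n).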

Difference between the Backtracking and Recursion
 Recursion is a method where the same function is called
repeatedly until the base case is reached. Backtracking is an
algorithm that uncovers every potential answer and picks the
ideal one from the available options.
Game Playing in Artificial Intelligence

 An essential area of artificial intelligence is game playing.
 Games don't take a lot of knowledge; all we need to know is
the game's rules, acceptable moves, and circumstances under
which we can win or lose. In an effort to succeed, both players
compete. So each turn, they both attempt to make the greatest
move they can.
 Because of the high branching factor, searching will take a long
time using uninformed methods like BFS (breadth-first search), which
are therefore not suitable here. We require improved search
techniques, namely:
 a Generate procedure, so that only good moves are generated;
 a Test procedure, so that the best move can be explored first.
 The creation of computer programmes to play games like chess,
checkers, or go is one of the more well-known applications of
artificial intelligence. Artificial intelligence game play aims to create
programmes that can learn how to play games and take actions that
will produce favourable results.
 1. The IBM chess programme Deep Blue, which defeated Garry
Kasparov, the world champion, in 1997, is one of the earliest
instances of a successful game-playing AI. Since then, a variety of
games, including two-player, multiplayer, and computer games, have
incorporated AI.
2. AI game playing typically takes one of two different approaches:
rule-based systems or machine learning-based systems. While
machine learning-based systems use algorithms to learn from
experience and make choices based on that experience, rule-
based systems use a set of fixed rules to play the game.
3. Machine learning-based systems have gained popularity recently
because they can learn from experience and get better with time,
which makes them ideal for challenging games like Go. For
instance, DeepMind's AlphaGo was the first machine learning-
based Go system to triumph over a world champion.
Research on game playing in AI is ongoing, and it has a wide range of
real-world uses, such as in game creation, instruction, and
combat training. AI algorithms can be used to create more
efficient decision-making systems for use in real-world
applications by simulating game-playing situations.
 Advantages of Game Playing in Artificial Intelligence:
 AI development: The creation of new algorithms and techniques that can
be used in other areas of AI is a result of the game industry, which has
been a major factor in the development of AI.
 Education and training: Playing video games can be used to instruct
professionals and students in AI concepts and algorithms, as well as to
educate the military and first responders.
 Research: AI researchers are actively studying and creating new methods
for decision-making and problem-solving through the use of games.
 Applications in the actual world: Techniques and algorithms created for
game play can be used in decision support systems, robotics, and
autonomous systems.
 Disadvantages of Game Playing in Artificial Intelligence:
 Limited scope: It's possible that the methods and algorithms
created for playing games won't work well for other kinds of apps
and will need to be adjusted or changed for various domains.
 Cost of computation: Playing games can be costly in terms of
computation, particularly for more complex games like chess or go,
which may call for powerful computers to achieve real-time
performance.
Mini-Max Algorithm in Artificial Intelligence
 In game theory and decision-making, the mini-max algorithm is a recursive or
backtracking method. Assuming the opponent also plays optimally, it provides
the player with an optimal move.
 Recursion is used by the Mini-Max algorithm to explore through the game-tree.
 AI game playing typically uses the Min-Max algorithm. such as Go, tic-tac-toe, chess,
checkers, and numerous two-player games. The minimax decision for the present
state is computed by this algorithm.
 In this algorithm one player is referred to as MAX and the other as MIN.
 Each player tries to maximise its own benefit while leaving the opponent with
as little benefit as possible.
 The game pits the two players against one another, with MAX choosing the
maximal value and MIN choosing the minimal value.
 For the study of the entire game tree, the minimax algorithm uses a depth-first
search algorithm.
 The minimax method descends all the way to the tree's terminal node before
recursively going back up the tree.
Working of Min-Max Algorithm:
 It is simple to explain how the minimax algorithm functions using an illustration.
Below is an illustration of a game-tree that represents a two-player game.
 There are two players in this case; one is referred to as Maximizer, and the other as
Minimizer.
 Both the Maximizer and the Minimizer will strive for the highest and lowest scores,
respectively.
 Since this method uses DFS, we must traverse all of the leaves in the game-tree to
get to the terminal nodes.
 Since the terminal values are provided at the terminal node, we can compare them
and backtrack the tree until the original state is reached. The primary stages in
solving the two-player game tree are as follows:
Step 1: In the first step, the algorithm generates the entire game-tree and
applies the utility function to get the utility values for the terminal states.
In the tree diagram below, let A be the initial state of the tree. Suppose the
maximizer takes the first turn, with worst-case initial value = -∞, and the
minimizer takes the next turn, with worst-case initial value = +∞.
Step 2: Now we first find the utility values for the Maximizer. Its initial
value is -∞, so we compare each terminal value with the Maximizer's initial
value and take the higher one, finding the maximum among them all.
 For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
 For node E: max(2, -∞) = 2, then max(2, 6) = 6
 For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
 For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step, it's a turn for minimizer, so it will compare all
nodes value with +∞, and will find the 3rd layer node values.
 For node B= min(4,6) = 4
 For node C= min (-3, 7) = -3
Step 4: Now it's a turn for Maximizer, and it will again choose the
maximum of all nodes value and find the maximum value for the
root node. In this game tree, there are only 4 layers, hence we
reach immediately to the root node, but in real games, there will
be more than 4 layers.
 For node A max(4, -3)= 4
Properties of Mini-Max algorithm:
 Complete- Min-Max algorithm is Complete. It will definitely find a solution (if
exist), in the finite search tree.
 Optimal- Min-Max algorithm is optimal if both opponents are playing optimally.
 Time complexity- As it performs DFS over the game-tree, the time complexity of
the Min-Max algorithm is O(b^m), where b is the branching factor of the
game-tree and m is the maximum depth of the tree.
 Space Complexity- The space complexity of the Mini-Max algorithm is similar to
that of DFS, which is O(bm).
Limitation of the minimax Algorithm:
 The primary disadvantage of the minimax algorithm is that it becomes extremely
slow for challenging games like Go and Chess. The player has a tonne of options
to choose from in this genre of game due to its high branching component. The
alpha-beta pruning technique that we will discuss in the following subject can
help to address this shortcoming of the minimax algorithm.
Alpha-Beta Pruning
 Alpha-beta pruning is a modified variant of the minimax algorithm, and an
optimisation technique for it.
 As we have seen, the number of game states the minimax search
algorithm must examine grows exponentially with the depth of the
tree. Since we cannot eliminate the exponent, we can effectively cut
it in half. There is a technique called pruning by which we can
compute the correct minimax decision without checking every node of
the game tree. Because it involves the two threshold parameters
alpha and beta for future expansion, it is called alpha-beta
pruning. It is also known as the Alpha-Beta algorithm.
 Any level of a tree can be subjected to alpha-beta pruning, and
occasionally it doesn't just remove individual leaves but also whole
sub-trees.
 The two-parameter can be defined as:
◦ Alpha: The best (highest-value) choice we have found so
far at any point along the path of Maximizer. The initial
value of alpha is -∞.
◦ Beta: The best (lowest-value) choice we have found so far
at any point along the path of Minimizer. The initial value
of beta is +∞.
 Applying alpha-beta pruning to a standard minimax algorithm yields
the same move as the standard algorithm, but it eliminates the
nodes that merely slow the algorithm down without having any real
bearing on the final choice. By pruning these nodes, the method
becomes fast.
Condition for Alpha-beta pruning:
 The main condition which required for alpha-beta pruning is:
 α>=β

Key points about alpha-beta pruning:
 The Max player will only update the value of alpha.
 The Min player will only update the value of beta.
 While backtracking the tree, the node values will be passed
to upper nodes instead of values of alpha and beta.
 We will only pass the alpha, beta values to the child nodes.
Working of Alpha-Beta Pruning:
 Let's take an example of two-player search tree to
understand the working of Alpha-beta pruning
Step 1: In the first step, the Max player starts with the first move
from node A, where α = -∞ and β = +∞. These values of alpha and beta
are passed down to node B, where again α = -∞ and β = +∞, and node B
passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn.
The value of α is compared first with 2 and then with 3; max(2, 3) = 3
becomes the value of α at node D, and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β
changes, as it is Min's turn. β = +∞ is compared with the available
successor node value: min(+∞, 3) = 3, so at node B we now have
α = -∞ and β = 3.
 In the next step, the algorithm traverses the next successor of node B, which
is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max will take its turn, and the value of alpha will change.
The current value of alpha will be compared with 5, so max (-∞, 5) = 5,
hence at node E α= 5 and β= 3, where α>=β, so the right successor of E will
be pruned, and algorithm will not traverse it, and the value at node E will
be 5.
Step 5: In the next step, the algorithm again backtracks the tree,
from node B to node A. At node A, the value of alpha changes: the
maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These
two values are now passed to the right successor of A, which is
node C.
At node C, α = 3 and β = +∞, and the same values are passed on to
node F.
Step 6: At node F, the value of α is again compared, first with the
left child, 0: max(3, 0) = 3; and then with the right child, 1:
max(3, 1) = 3. So α remains 3, but the node value of F becomes 1,
the maximum of its children.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here
the value of beta changes, as it is compared with 1: min(+∞, 1) = 1. Now at C,
α = 3 and β = 1, which again satisfies the condition α >= β, so the next child
of C, which is G, is pruned, and the algorithm does not compute the entire
sub-tree of G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3,
1) = 3. The final game tree shows which nodes were computed and which were
never computed. Hence the optimal value for the maximizer is 3 for this
example.
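The steps above can be traced with a short implementation. The tree below uses the example's terminal values where the text gives them (D: 2, 3; E's left child 5; F: 0, 1); the leaves that the example prunes and never visits are filled with placeholder values, which is an assumption, since pruned leaves never influence the answer.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: same result as plain minimax, but
    remaining siblings are skipped as soon as alpha >= beta."""
    if not isinstance(node, list):              # terminal node: its utility
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)            # only Max updates alpha
            if alpha >= beta:
                break                           # prune remaining siblings
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)                  # only Min updates beta
        if alpha >= beta:
            break
    return best

# A -> (B, C), B -> (D, E), C -> (F, G); 9 and (7, 5) stand in for the
# pruned leaves, whose values never affect the result.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
best_for_max = alphabeta(tree, maximizing=True)
```

Running it reproduces the trace: E's right child is cut after 5 raises alpha to 5 >= β = 3, the whole sub-tree G is cut at node C, and the root value is 3, exactly as in the worked example.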
Move Ordering in Alpha-Beta pruning:
The order in which nodes are evaluated has a significant impact on the
effectiveness of alpha-beta pruning. Move ordering is therefore an essential
component of alpha-beta pruning.
It comes in two varieties:
Worst ordering: In some circumstances the alpha-beta pruning algorithm behaves
exactly like the minimax algorithm and prunes no leaves of the tree; it then
also takes more time because of the bookkeeping for the alpha and beta
variables. This occurs when the best moves lie on the right side of the tree,
and it is known as worst ordering. The time complexity in this case is O(b^m).
Ideal ordering: The most pruning occurs when the best moves lie on the left
side of the tree, which is the optimal ordering for alpha-beta pruning. Since
we use DFS, the algorithm searches the left of the tree first and reaches
twice the depth of the minimax algorithm in the same amount of time. The
complexity under ideal ordering is O(b^(m/2)).
Rules to find good ordering:
Following are some rules to find good ordering in alpha-beta
pruning:
 Try the best move from the shallowest node first.
 Order the nodes in the tree such that the best nodes are
checked first.
 Use domain knowledge while finding the best move. Ex: for
Chess, try order: captures first, then threats, then forward
moves, backward moves.
 We can bookkeep the states, as there is a possibility that
states may repeat.