AI 3

The document compares various concepts in artificial intelligence, including propositional logic versus first-order logic, partial order planning versus total order planning, and informed versus uninformed search methods. It also explains algorithms like forward-chaining, backward-chaining, reinforcement learning, hill climbing, and local search algorithms, along with their applications in robotics. Additionally, it covers supervised and unsupervised learning, conditional probability, parse trees, and game-playing algorithms.

1. Compare propositional logic and first order logic


Ans:
Aspect            | Propositional Logic             | First Order Logic
Variables         | Simple true/false propositions  | Objects, predicates, relations
Quantifiers       | Not supported                   | Supports ∀ (forall), ∃ (exists)
Expressiveness    | Less expressive                 | More expressive, handles complex relations
Syntax Complexity | Simple syntax                   | More complex syntax with functions and variables
Domain            | No notion of objects or domains | Can represent objects, domains, and properties
Use Case          | Simple logic problems           | Complex knowledge representation

2. Compare partial order planning and total order planning


Ans:
Aspect             | Partial Order Planning                | Total Order Planning
Action Ordering    | Partially ordered (some unordered)    | Totally ordered (strict sequence)
Flexibility        | High, allows concurrent actions       | Low, strict linear execution
Efficiency         | More efficient for complex tasks      | Simpler but less efficient for concurrency
Handling Conflicts | Can resolve conflicts flexibly        | Conflicts resolved by strict ordering
Plan Execution     | Allows parallel or flexible execution | Requires sequential execution
Use Case           | Complex, parallel tasks               | Simple, linear tasks

3. Explain informed and uninformed search methods. Also give the difference between
informed and uninformed search algorithms.
Ans: In Artificial Intelligence, search algorithms are used to find solutions or paths from a
start state to a goal state. These algorithms are broadly classified into Uninformed (Blind)
Search and Informed (Heuristic) Search based on the information they use during the search
process.

1. Uninformed Search Methods:


These algorithms do not have any additional information about the goal other than the
problem definition.
They explore the search space blindly until they find the goal.
Examples:
Breadth-First Search (BFS): Explores all nodes level by level.
Depth-First Search (DFS): Explores as deep as possible before backtracking.
Uniform Cost Search: Explores the least-cost path first.
Used when no heuristic is available.
2. Informed Search Methods:
These algorithms use heuristics or additional knowledge about the goal to guide the search.
Heuristics help estimate how close a state is to the goal, improving efficiency.
Examples:
A* Search: Uses path cost and heuristic to find optimal path.
Greedy Best-First Search: Uses heuristic only to select nodes.
More efficient than uninformed search if a good heuristic is available.

Aspect           | Uninformed Search                          | Informed Search
Knowledge        | No extra info (blind search)               | Uses heuristic info (guided)
Efficiency       | Less efficient                             | More efficient
Search Direction | Explores blindly                           | Directed towards goal
Examples         | BFS, DFS, Uniform Cost Search              | A*, Greedy Best-First Search
Optimality       | Some guarantee optimality (e.g., BFS, UCS) | A* optimal with an admissible heuristic
Memory Usage     | Usually less or moderate                   | Can require more memory
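The informed search described above can be sketched in a few lines of Python. This is a minimal A* implementation over a small hypothetical graph; the node names, edge costs, and heuristic values are made up for illustration only:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expands nodes in order of f(n) = g(n) + h(n).
    graph maps node -> list of (neighbor, edge_cost); h maps node -> heuristic."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {}                                 # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                            # already reached more cheaply
        best_g[node] = g
        for nbr, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical graph and admissible heuristic estimates to goal G
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 5)]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
path, cost = a_star(graph, h, "S", "G")
```

Replacing the priority `g + cost + h[nbr]` with plain `g + cost` turns this into Uniform Cost Search, an uninformed method, which illustrates exactly what the heuristic adds.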

4. Illustrate forward-chaining and backward-chaining algorithm with suitable example.


Ans: Forward-chaining and backward-chaining are two common inference methods used
in rule-based expert systems for reasoning and problem-solving.

1. Forward Chaining (Data-Driven Reasoning):


Starts with known facts and applies inference rules to extract more data until the goal is
reached.
Works from facts to conclusion.
Useful when all data is available and we want to find conclusions.
Example:
Rules:
R1: If it is raining, then the ground is wet.
R2: If the ground is wet, then the grass is slippery.
Known fact: It is raining.
Process:
Start with fact: "It is raining."
Apply R1: "Ground is wet."
Apply R2: "Grass is slippery."
Conclusion: Grass is slippery.
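The forward-chaining process above can be sketched as a small Python loop. The rule and fact names are the illustrative ones from the example, encoded as strings:

```python
def forward_chain(facts, rules):
    """Data-driven inference: fire every applicable rule (premise -> conclusion)
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, adding a new fact
                changed = True
    return facts

rules = [("raining", "ground_wet"),        # R1: raining -> ground is wet
         ("ground_wet", "grass_slippery")] # R2: ground wet -> grass is slippery
derived = forward_chain({"raining"}, rules)
```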

2. Backward Chaining (Goal-Driven Reasoning):


Starts with a goal and works backwards to see if known facts support the goal.
Works from goal to facts.
Useful when we want to check if a conclusion is true based on facts.
Example:
Goal: Is the grass slippery?
Rules:
R2: If the ground is wet, then the grass is slippery.
R1: If it is raining, then the ground is wet.
Known fact: It is raining.
Process:
Goal: "Is grass slippery?"
Check rule R2: Is the ground wet?
To confirm ground wet, check rule R1: Is it raining?
Since it is raining (known fact), ground is wet, so grass is slippery.
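The backward-chaining process above can be sketched the same way, this time recursing from the goal toward the facts (a sketch for single-premise rules without cycles):

```python
def backward_chain(goal, facts, rules):
    """Goal-driven inference: the goal holds if it is a known fact, or if some
    rule concludes it and that rule's premise can itself be proved."""
    if goal in facts:
        return True
    for premise, conclusion in rules:
        if conclusion == goal and backward_chain(premise, facts, rules):
            return True
    return False

rules = [("raining", "ground_wet"),        # R1
         ("ground_wet", "grass_slippery")] # R2
result = backward_chain("grass_slippery", {"raining"}, rules)
```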

5. Explain reinforcement learning


Ans: Reinforcement Learning (RL) is a type of machine learning where an agent learns to
make decisions by interacting with an environment. The goal is to learn a policy that
maximizes the total reward over time through trial and error.

Key Concepts of Reinforcement Learning:


Agent: The learner or decision-maker (e.g., robot, AI program).
Environment: The external system the agent interacts with.
State (S): A specific situation or condition in the environment.
Action (A): The decision or move made by the agent.
Reward (R): Feedback from the environment after taking an action. It can be positive or
negative.
Policy (π): A strategy that defines the agent's behavior (which action to take in each state).
Value Function: Predicts future rewards and helps the agent evaluate the quality of a state.

How It Works:
The agent observes the current state.
Takes an action based on the policy.
Receives a reward and the next state from the environment.
Updates its policy to improve future rewards.

Example:
In a game, the AI (agent) learns to play by receiving positive rewards for winning and
negative rewards for losing. Over time, it learns which moves lead to better outcomes.

Real-World Applications:
Game playing (e.g., AlphaGo, Chess)
Robotics (autonomous walking or gripping)
Self-driving cars
Recommendation systems
Stock trading

6. Explain the Hill Climbing algorithm and the problems that occur in hill climbing.
Ans: Hill Climbing is a local search algorithm used in Artificial Intelligence to solve
optimization problems. It starts with an arbitrary solution and iteratively moves to a better
neighboring state based on a given evaluation function until no further improvements can
be found.
It is called "hill climbing" because the algorithm attempts to reach the peak (maximum) of
a solution landscape by moving step-by-step towards better solutions.
Steps of the Algorithm:
Start with an initial solution (state).
Evaluate all neighboring states.
Move to the neighbor with the highest value (i.e., better than the current).
Repeat steps 2–3 until no better neighbor is found.

Types of Hill Climbing:


Simple Hill Climbing – Chooses the first better neighbor it finds.
Steepest-Ascent Hill Climbing – Looks at all neighbors and chooses the best one.
Stochastic Hill Climbing – Chooses a random better neighbor.
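The steps above, in the steepest-ascent variant, can be sketched in Python. The objective function and neighbor rule are a hypothetical toy landscape (maximize -(x-3)^2 over the integers, stepping ±1):

```python
def hill_climb(f, neighbors, start):
    """Steepest-ascent hill climbing: evaluate all neighbors and move to the
    best one; stop when no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):
            return current          # local maximum (no improving neighbor)
        current = best

# Toy landscape: single peak at x = 3, so hill climbing finds it from x = 0
f = lambda x: -(x - 3) ** 2
peak = hill_climb(f, lambda x: [x - 1, x + 1], 0)
```

On a landscape with several peaks, the same code would stop at whichever local maximum is uphill from the start, which is exactly the local-maxima problem described below.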

Problems in Hill Climbing Algorithm:


Local Maxima: The algorithm may stop at a local maximum, which is better than
neighboring states but not the global best.

Plateau:
A flat area where neighboring states have equal values.
The algorithm may wander or stop due to lack of improvement.

Ridges:
Narrow paths with higher values not directly accessible from current state.
Hill climbing may fail because it only moves in one direction at a time.

Greedy Nature:
The algorithm only considers immediate gains, not long-term benefits.
It doesn’t backtrack or explore enough to escape poor areas.

7. Applications of AI in Robotics
Ans:
Autonomous Navigation:
AI enables robots to move and navigate without human intervention using sensors, GPS,
and SLAM (Simultaneous Localization and Mapping).
Used in self-driving cars, delivery robots, and drones.

Object Detection and Recognition:


Robots use computer vision and AI algorithms to recognize and classify objects in real time.
Applied in manufacturing, warehouse sorting, and surveillance.

Path Planning:
AI helps robots plan optimal paths by avoiding obstacles and choosing efficient routes.
Used in warehouse robots and robotic vacuum cleaners.

Human-Robot Interaction (HRI):


AI enables robots to understand speech, gestures, and emotions to interact naturally with
humans.
Examples include social robots and assistive robots for the elderly or disabled.
Industrial Automation:
AI-powered robots are used in factories for assembly, inspection, welding, and packaging.
Increases productivity and reduces human error.

8. What are local search algorithms? Explain any one.


Ans: Local Search Algorithms are optimization algorithms used in Artificial Intelligence to
find solutions by exploring the neighboring states of a current state, rather than searching
through the entire problem space.
They are especially useful for problems where:
The state space is large or infinite.
The goal is to find an optimal or near-optimal solution, not necessarily the path to it.
Complete search methods (like BFS or DFS) are not feasible due to memory or time
constraints.

Key Characteristics:
Start with an initial solution (state).
Move to a neighbor state based on a defined rule or evaluation.
Continue until a solution is found or a stopping condition is met.

Common Local Search Algorithms:


Hill Climbing
Simulated Annealing
Genetic Algorithms
Local Beam Search
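To explain one: Simulated Annealing behaves like hill climbing but occasionally accepts a worse neighbor, with probability exp(Δ/T) where Δ is the change in value and T is a "temperature" that cools over time; this lets the search escape local maxima early on and become greedy later. A minimal sketch on a hypothetical toy objective (the function, cooling schedule, and parameters are illustrative, not canonical):

```python
import math, random

def simulated_annealing(f, neighbor, x, temp=10.0, cooling=0.95, steps=500):
    """Accept uphill moves always; accept downhill moves with probability
    exp(delta / temp). Track the best state seen so the result is never
    worse than the starting point."""
    best = x
    for _ in range(steps):
        x2 = neighbor(x)
        delta = f(x2) - f(x)                 # > 0 means x2 is uphill
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = x2
        if f(x) > f(best):
            best = x
        temp *= cooling                      # cool down: search turns greedy
    return best

# Toy objective: maximize f over the integers, stepping +-1 from x = 0
random.seed(0)
f = lambda x: -(x - 5) ** 2
best = simulated_annealing(f, lambda x: x + random.choice((-1, 1)), 0)
```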

9. Write short note on conditional probability


Ans: Conditional Probability is the probability of an event occurring given that another
event has already occurred. It is a fundamental concept in probability theory and is widely
used in Artificial Intelligence, especially in Bayesian networks, decision making, and
machine learning.
If A and B are two events, then the conditional probability of A given B is:

P(A | B) = P(A ∩ B) / P(B),  provided P(B) > 0

Example:
Suppose a card is drawn from a standard 52-card deck.
Let:
A = Event that the card is a king
B = Event that the card is a face card (Jack, Queen, King)
There are 12 face cards and 4 kings in a deck, and every king is a face card, so A ∩ B = A.
So,
P(A | B) = P(A ∩ B) / P(B) = (4/52) / (12/52) = 4/12 = 1/3
This means: if a face card is drawn, the probability that it is a king is 1/3.
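The card calculation above can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# P(A | B) = P(A ∩ B) / P(B); every king is a face card, so A ∩ B = A
p_king_and_face = Fraction(4, 52)    # 4 kings out of 52 cards
p_face = Fraction(12, 52)            # 12 face cards out of 52 cards
p_king_given_face = p_king_and_face / p_face
```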

10. Explain supervised and unsupervised learning


Ans: In Artificial Intelligence (AI) and Machine Learning (ML), learning refers to how
machines gain the ability to make predictions or decisions based on data. There are two
major types of learning: Supervised Learning and Unsupervised Learning.

1. Supervised Learning:
Definition:
Supervised learning is a type of machine learning where the model is trained on a labeled
dataset, meaning each input has a corresponding correct output.
How it works:
The algorithm learns from input-output pairs.
It maps input data (X) to the correct output (Y).
It uses this training data to predict outputs for new, unseen inputs.
Examples:
Spam detection (email → spam or not spam)
House price prediction
Image classification (e.g., cat vs. dog)
Algorithms Used:
Linear Regression
Decision Trees
Support Vector Machines
Neural Networks

2. Unsupervised Learning:
Definition:
Unsupervised learning is a type of machine learning where the model is trained on
unlabeled data. It tries to find hidden patterns or structures in the input data.
How it works:
There are no output labels.
The algorithm groups or organizes data based on similarity or structure.
Examples:
Customer segmentation
Market basket analysis
Document or image clustering
Algorithms Used:
K-Means Clustering
Hierarchical Clustering
Principal Component Analysis (PCA)
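One of the unsupervised algorithms listed above, K-Means, can be sketched in pure Python. The one-dimensional data points are hypothetical; real uses would rely on a library implementation:

```python
import random

def k_means(points, k, iters=20):
    """Minimal 1-D k-means: alternately assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster."""
    centroids = random.sample(points, k)            # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)                   # assignment step
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]  # update step
    return sorted(centroids)

random.seed(1)
data = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]   # two obvious groups, but no labels
centers = k_means(data, 2)
```

Note that no labels are given anywhere: the algorithm discovers the two groups purely from the structure of the data, which is the defining trait of unsupervised learning.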
11. Generate parse tree for “The cat ate the fish”
Ans: A parse tree (or syntax tree) is a tree representation that shows the syntactic structure
of a sentence according to a given grammar, usually a Context-Free Grammar (CFG).
The sentence:
“The cat ate the fish”
is a simple declarative sentence following the Subject-Verb-Object (SVO) pattern.
Step-by-Step Structure:
S → Sentence
NP → Noun Phrase (The cat)
VP → Verb Phrase (ate the fish)
Det → Determiner
N → Noun
V → Verb
Parse Tree:

                    S
                 ___|___
                NP      VP
               /  \    /  \
            Det    N  V    NP
             |     |  |   /  \
            The  cat ate Det  N
                          |   |
                         the fish

Explanation:
The sentence starts with S (Sentence).
The Noun Phrase (NP) is "The cat":
Det: The
N: cat
The Verb Phrase (VP) is "ate the fish":
V: ate
NP: "the fish"
Det: the
N: fish

12. What is Game Playing Algorithm? Draw a game tree for Tic-Tac-Toe problem
Ans: A Game Playing Algorithm in Artificial Intelligence refers to a method used by
computers to make optimal decisions in strategic situations, particularly in two-player
games such as chess, tic-tac-toe, or checkers. These algorithms enable machines to simulate
possible moves and determine the best course of action to win or avoid losing the game.

Key Features:
Adversarial Search – Involves two opponents; one tries to maximize score, the other to
minimize it.
Game Tree – Represents all possible moves as a tree of game states.
Minimax Algorithm – Chooses the move that maximizes the player's minimum gain.
Alpha-Beta Pruning – Optimizes Minimax by cutting off unneeded branches.
Evaluation Function – Estimates the value of non-terminal states using heuristics.
Example:
In tic-tac-toe, the algorithm evaluates each possible move and predicts the outcome. If it
detects that a move leads to a win or blocks the opponent from winning, it selects that move.
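The Minimax evaluation described above can be sketched for tic-tac-toe directly. The board is a 9-element list read row by row; X maximizes and O minimizes (score +1 for an X win, -1 for an O win, 0 for a draw):

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (best score, best move index) for `player` on board `b`."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                    # board full: draw
    best = None
    for m in moves:
        b[m] = player                     # try the move
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "                        # undo it
        if best is None or (player == "X" and score > best[0]) \
                        or (player == "O" and score < best[0]):
            best = (score, m)
    return best

# X has two in a row on top; minimax should pick the winning square (index 2).
board = list("XX O O   ")
score, move = minimax(board, "X")
```

Alpha-beta pruning would add two bounds to this recursion and cut off branches that cannot affect the result, without changing the move chosen.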

Applications:
Board games (e.g., Chess, Go, Checkers)
Strategy-based video games
AI bots in gaming applications
