AI 2 MARKS

The document outlines various concepts in artificial intelligence, including types of reasoning (deductive, inductive, abductive), heuristic functions, and comparisons between different reasoning methods. It discusses learning paradigms such as supervised, unsupervised, and reinforcement learning, as well as the importance of decision trees and neural networks in AI. Additionally, it covers statistical reasoning, Bayesian probability, and natural language processing components and tasks.


1. List out the types of reasoning in AI.

• Deductive reasoning

• Inductive reasoning

• Abductive reasoning

• Common-sense reasoning

• Probabilistic reasoning

• Non-monotonic reasoning

2. Define heuristic function.


A heuristic function, denoted as h(n), estimates the cost of the cheapest path from node n
to the goal in informed search algorithms. It guides search algorithms like A* and Greedy
Best-First Search.
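
For illustration (not part of the original answer), a common heuristic in grid path-finding is the straight-line distance to the goal, which A* combines with the path cost so far; a minimal Python sketch:

import math

def h(node, goal):
    # Heuristic h(n): straight-line (Euclidean) distance from node to goal;
    # on a grid it never overestimates the true cost, so it is admissible.
    (x1, y1), (x2, y2) = node, goal
    return math.hypot(x2 - x1, y2 - y1)

def f(node, goal, g_cost):
    # A* orders nodes by f(n) = g(n) + h(n), where g(n) is the cost so far.
    return g_cost + h(node, goal)

print(h((0, 0), (3, 4)))  # 5.0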

3. Compare inductive and deductive reasoning in AI.

• Deductive reasoning: Derives specific conclusions from general facts or rules (top-
down).

• Inductive reasoning: Infers general rules from specific examples or data (bottom-
up).
Deductive reasoning is certain if the premises are true; inductive reasoning yields probable conclusions.

4. What are the types of heuristic methods?

• Greedy Best-First Search

• A* Search

• Hill Climbing

• Genetic Algorithms

• Simulated Annealing
5. Show the uses of statistical reasoning.
Statistical reasoning is used in:

• Handling uncertainty in AI systems

• Bayesian networks for probabilistic inference

• Decision making under risk

• Pattern recognition and classification

6. Define Bayes rule.


Bayes’ Rule is:
P(A|B) = [P(B|A) · P(A)] / P(B)
It is used to update the probability of a hypothesis given new evidence.
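
A short worked example with invented numbers (spam filtering) shows how the rule updates a prior into a posterior:

# Illustrative numbers only: P(spam), P(keyword | spam), P(keyword | not spam).
p_spam = 0.01
p_kw_given_spam = 0.90
p_kw_given_ham = 0.05

# Marginal likelihood P(keyword) via the law of total probability.
p_kw = p_kw_given_spam * p_spam + p_kw_given_ham * (1 - p_spam)

# Bayes' Rule: P(spam | keyword) = P(keyword | spam) * P(spam) / P(keyword)
p_spam_given_kw = p_kw_given_spam * p_spam / p_kw
print(round(p_spam_given_kw, 3))  # roughly 0.154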

7. Define the term Uncertain data and Uncertain knowledge.

• Uncertain data: Information with measurement errors or ambiguity.

• Uncertain knowledge: Knowledge derived from incomplete, vague, or probabilistic sources.

8. List out the components of Bayes’ Theorem.

• Prior Probability P(H)

• Likelihood P(E|H)

• Marginal Likelihood P(E)

• Posterior Probability P(H|E)

9. What is deductive reasoning in AI?


It is the process of deriving logically certain conclusions from known facts or premises
using inference rules. It ensures that if premises are true, the conclusion must be true.

10. What is Bayesian Probability?


Bayesian probability interprets probability as a measure of belief or certainty rather than
frequency. It updates the probability of a hypothesis based on new evidence using Bayes'
Theorem.

11. Compare monotonic and non-monotonic reasoning.

• Monotonic reasoning: Once a conclusion is derived, it cannot be withdrawn even if new knowledge is added.

• Non-monotonic reasoning: Allows retraction of conclusions if new, conflicting information is introduced.

12. Why is Probabilistic Reasoning Necessary in AI?


Probabilistic reasoning is essential to handle uncertainty in real-world AI applications. It
allows intelligent systems to make decisions with incomplete, noisy, or uncertain data
using probabilities.

13. What do you understand by monotonic reasoning?


Monotonic reasoning refers to a system where adding new knowledge does not invalidate
previous conclusions. It is deterministic and used in classical logic systems.

14. State Bayesian Belief Networks.


Bayesian Belief Networks (BBNs) are graphical models representing probabilistic
relationships among variables using nodes (variables) and edges (dependencies), enabling
reasoning under uncertainty.
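
A minimal sketch of a two-node network (Rain → WetGrass, with invented probability tables) shows how the joint probability factorises along the edges and supports a simple query:

# Hypothetical CPTs for a two-node network: Rain -> WetGrass.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    # The joint probability factorises along the edges:
    # P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# Inference by enumeration: P(Rain = True | WetGrass = True)
p_wet = joint(True, True) + joint(False, True)
print(joint(True, True) / p_wet)  # about 0.53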

15. Outline the uses of statistical reasoning.


Statistical reasoning is used in:

• Predictive analytics

• Medical diagnosis

• Speech and image recognition

• Natural language processing

• Robotics and autonomous systems


16. Define Dempster-Shafer Theory.
It is a mathematical theory of evidence that allows combining evidence from different
sources to calculate the probability of an event. It handles uncertainty more flexibly than
Bayesian probability.
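
A rough sketch with invented masses shows the belief and plausibility functions over a frame of discernment {A, B}:

# Frame of discernment {A, B}; basic probability assignment (masses sum to 1).
# The mass on the whole frame represents "don't know".
bpa = {frozenset({"A"}): 0.5,
       frozenset({"B"}): 0.2,
       frozenset({"A", "B"}): 0.3}

def belief(hypothesis):
    # Bel(H): total mass of focal elements fully contained in H.
    return sum(m for s, m in bpa.items() if s <= hypothesis)

def plausibility(hypothesis):
    # Pl(H): total mass of focal elements that intersect H.
    return sum(m for s, m in bpa.items() if s & hypothesis)

h = frozenset({"A"})
print(belief(h), plausibility(h))  # 0.5 0.8, and Bel(H) <= Pl(H) always holds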

17. List out the key elements of Dempster-Shafer Theory.

• Frame of discernment

• Basic probability assignment (BPA)

• Belief function

• Plausibility function

18. Outline the application of Bayes Theorem in AI.


Bayes’ Theorem is used in:

• Spam filtering

• Medical diagnosis

• Machine learning classification

• Speech recognition

• Decision-making systems

19. What do you mean by heuristic methods?


Heuristic methods are problem-solving techniques that use practical approaches or
shortcuts to produce good-enough solutions efficiently, especially when exact methods
are computationally expensive.

20. Define fuzzy reasoning.


Fuzzy reasoning is an approach based on fuzzy logic that deals with reasoning that is
approximate rather than fixed and exact. It is useful in systems where human-like
reasoning is needed.
21. List out the uses of predicate logic.
Predicate logic is used for:

• Representing facts and rules in AI

• Defining relationships between entities

• Querying knowledge bases

• Supporting inference engines in expert systems

22. Define Knowledge Representation.


Knowledge representation is a field in AI that focuses on how to formally describe
information about the world so that a computer system can utilize it to solve complex
tasks.

23. Define meta knowledge.


Meta knowledge is knowledge about knowledge. It involves information about how, when,
and why to use specific knowledge or rules in a given situation.

24. Outline the kinds of knowledge represented in AI systems.

• Declarative Knowledge

• Procedural Knowledge

• Meta-Knowledge

• Heuristic Knowledge

• Structural Knowledge

25. List out the types of knowledge.

• Factual knowledge

• Procedural knowledge

• Semantic knowledge
• Meta knowledge

• Heuristic knowledge

26. What is predicate logic?


Predicate logic extends propositional logic by including quantifiers and predicates,
allowing for the representation of more complex statements involving objects and their
relationships.

27. Define the term Existential Quantifier and Universal Quantifier in predicate logic.

• Existential quantifier (∃): There exists at least one element that satisfies the
condition.

• Universal quantifier (∀): All elements satisfy the condition.
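
As a loose illustration, Python's any() and all() behave like the existential and universal quantifiers over a finite domain:

domain = [2, 4, 6, 7]

# Existential quantifier: ∃x odd(x) -- at least one element is odd.
exists_odd = any(x % 2 == 1 for x in domain)

# Universal quantifier: ∀x even(x) -- every element is even.
all_even = all(x % 2 == 0 for x in domain)

print(exists_odd, all_even)  # True False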

28. Compare procedural knowledge and declarative knowledge.

• Procedural knowledge: Describes how to perform tasks (algorithms, procedures).

• Declarative knowledge: Describes facts and information (what is true).


Declarative is "what", procedural is "how".

29. Define Semantic networks.


A semantic network is a graph of concepts and the relationships between them. It is used
for representing structured knowledge in a network form.

30. List out the features of logic programming.

• Based on formal logic (e.g., Prolog)

• Uses facts and rules

• Employs automatic reasoning

• Supports symbolic computation

• Enables knowledge-based system development


31. Define forward chaining.
Forward chaining is a data-driven reasoning method that starts with known facts and
applies inference rules to derive new facts until a goal is reached.

32. Define backward chaining.


Backward chaining is a goal-driven reasoning method that starts with a goal and works
backward to determine which facts must be true to support the goal.
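
A minimal sketch of both strategies over invented if-then rules (the facts and rule names are illustrative only):

# Hypothetical rules: each maps a set of premises to a conclusion.
rules = [({"croaks", "eats_flies"}, "frog"),
         ({"frog"}, "green")]
facts = {"croaks", "eats_flies"}

def forward_chain(facts, rules):
    # Data-driven: keep firing rules whose premises hold until nothing new appears.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    # Goal-driven: a goal holds if it is a known fact or some rule proves it
    # (assumes the rule set has no cycles, which holds for this toy example).
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

print(forward_chain(facts, rules))            # derives 'frog' and then 'green'
print(backward_chain("green", facts, rules))  # True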

33. State the importance of learning.


Learning enables an AI system to improve performance over time, adapt to new
environments, and make decisions in complex, dynamic scenarios.

34. List the types of learning.

• Supervised learning

• Unsupervised learning

• Reinforcement learning

• Semi-supervised learning

• Self-supervised learning

35. Define learning in AI.


Learning in AI refers to the ability of machines to acquire knowledge or skills from data,
experience, or environment and improve performance automatically.

36. List out the components of learning system.

• Learning element

• Performance element

• Critic
• Problem generator

37. Define decision tree.


A decision tree is a tree-like model of decisions and their possible consequences,
including chance event outcomes, used for classification and regression.
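
A brief sketch, assuming scikit-learn is available and using an invented toy dataset:

from sklearn.tree import DecisionTreeClassifier

# Toy data (illustrative): [age, income] -> approve loan (1) or not (0).
X = [[25, 30], [40, 80], [35, 60], [22, 20], [50, 90], [28, 40]]
y = [0, 1, 1, 0, 1, 0]

# max_depth limits the tree, which helps reduce overfitting.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

print(tree.predict([[30, 70]]))  # predicted class for a new applicant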

38. Outline the types of learning models.

• Concept learning

• Classification

• Regression

• Clustering

• Reinforcement models

39. List out the features of decision tree.

• Tree-structured model

• Easy to interpret

• Handles both categorical and numerical data

• Performs feature selection

• Prone to overfitting if not pruned

40. State the uses of decision tree.

• Classification tasks

• Medical diagnosis

• Credit risk assessment

• Customer segmentation

• Predictive analytics
41. Define training examples.
Training examples are input-output pairs provided to a learning algorithm to train the model
and allow it to learn underlying patterns.

42. What is the use of the error correction rule?


The error correction rule adjusts the model weights in learning algorithms (e.g., perceptrons) to
minimize prediction errors and improve accuracy.
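
A small sketch of the perceptron's error-correction update on invented data (learning the logical AND function):

# Perceptron error-correction rule: w <- w + lr * (target - prediction) * x
def train_perceptron(samples, lr=0.1, epochs=10):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            prediction = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - prediction           # 0 if correct, +1/-1 if wrong
            w = [w[0] + lr * error * x[0], w[1] + lr * error * x[1]]
            b += lr * error
    return w, b

# Invented data: learn the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train_perceptron(data))  # learned weights and bias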

43. What is reinforcement learning?


Reinforcement learning is a type of learning where an agent learns by interacting with an
environment and receiving feedback in the form of rewards or penalties.

44. Define the term ‘learn by advice’.


Learn by advice involves providing direct instructions or guidance to the learning system,
which it uses to improve its knowledge or behavior.

45. Define the term ‘Learning by analogy’.


Learning by analogy is a method where a system solves new problems by finding
similarities to previously solved problems and adapting past solutions.

46. What do you mean by Genetic Algorithm?


A Genetic Algorithm is a search heuristic inspired by natural selection that uses operations
like selection, crossover, and mutation to solve optimization problems.

47. List out the operators used in Genetic Algorithm.

• Selection

• Crossover (recombination)

• Mutation

• Elitism (optional)
48. What do you understand by crossover and mutation?

• Crossover: Combines parts of two parent solutions to create offspring.

• Mutation: Randomly alters a part of the solution to maintain diversity.
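
An abridged sketch of these operators on bit-string individuals, with an invented objective (maximising the number of 1-bits):

import random

random.seed(0)

def fitness(ind):
    return sum(ind)                      # toy objective: count of 1-bits

def select(pop):
    # Tournament selection: the fitter of two random individuals wins.
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover: combine the head of one parent with the tail of the other.
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(ind, rate=0.05):
    # Mutation: flip each bit with a small probability to keep diversity.
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for _ in range(30):                      # evolve for 30 generations
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

print(max(fitness(ind) for ind in pop))  # best fitness found (at most 10)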

49. What is meant by classifier system?


A classifier system is a machine learning system that learns to make decisions by evolving
a set of rules (classifiers) using reinforcement learning and genetic algorithms.

50. What are learning classifier systems?


Learning classifier systems are rule-based systems that combine reinforcement learning
with evolutionary algorithms to adaptively solve problems.

51. Define syntactic analysis.


Syntactic analysis (parsing) is the process of analyzing a sequence of words to determine
its grammatical structure according to a given formal grammar.

52. Define semantic analysis.


Semantic analysis is the process of understanding the meaning and interpretation of
words, phrases, and sentences in context.

53. Define discourse integration.


Discourse integration is the process of linking and interpreting the meaning of successive
sentences in a conversation or document for coherent understanding.

54. Define pragmatics.


Pragmatics involves understanding the intended meaning, context, and implications of
language beyond literal definitions.

55. List out the major tasks of NLP.

• Tokenization
• Part-of-speech tagging

• Named entity recognition

• Parsing

• Sentiment analysis

• Machine translation

• Question answering

56. Define Natural Language Generation.


Natural Language Generation (NLG) is the process of producing coherent, meaningful text
from non-linguistic data by a computer.

57. What is Natural Language Understanding?


Natural Language Understanding (NLU) is a subfield of NLP that focuses on machine
reading comprehension, intent recognition, and extracting meaning from text.

58. What is machine translation?


Machine translation is the automatic conversion of text or speech from one language to
another using AI algorithms.

59. List out the components of Natural Language Processing.

• Lexical analysis

• Syntactic analysis

• Semantic analysis

• Discourse integration

• Pragmatic analysis

60. List out the applications of NLP.

• Chatbots
• Virtual assistants

• Sentiment analysis

• Language translation

• Speech recognition

• Text summarization

• Spam filtering

61. What is a Decision Tree in AI learning?


A decision tree is a flowchart-like structure where internal nodes represent decisions or
tests, branches represent outcomes, and leaf nodes represent class labels. It's used for
classification and regression.

62. List out the types of Neural Networks in AI.

• Feedforward Neural Network (FNN)

• Convolutional Neural Network (CNN)

• Recurrent Neural Network (RNN)

• Modular Neural Network (MNN)

• Radial Basis Function Network (RBFN)

63. Outline the characteristics of Discovery-based learning.

• Learner-driven exploration

• Encourages hypothesis formation

• Promotes critical thinking

• Involves problem-solving

• Active learning process

64. What do you mean by Situation Calculus in AI?


It is a logical language used to represent and reason about dynamic domains, actions, and
their effects over time in AI planning systems.
65. Define Explanation-based learning.
A learning approach where the system uses prior knowledge to understand and generalize
from a single example by forming explanations.

66. List out the principles of Discovery learning model.

• Learner engagement

• Encouraging autonomy

• Problem-solving orientation

• Active exploration

• Use of past knowledge

67. Define Mutation.


A genetic algorithm operator that introduces random changes in offspring to maintain
genetic diversity in the population.

68. What do you mean by Learning by Analogy?


It is solving new problems by adapting solutions that were used to solve similar, previous
problems.

69. List out the advantages of Neural Networks.

• Ability to learn complex patterns

• Robustness to noise

• Generalization to unseen data

• Adaptive learning

• Parallel processing capability

70. What is Discovery Learning?


A learning model where learners construct knowledge themselves by exploring and
problem-solving, rather than receiving direct instruction.

71. Outline the basic components of Genetic Algorithms in AI.

• Initial population

• Fitness function

• Selection
• Crossover (recombination)

• Mutation

• Termination condition

72. Define Genetic Algorithm.


A search heuristic inspired by natural evolution that is used to find optimal or near-optimal
solutions by evolving a population of candidate solutions.

73. Define Machine Learning.


A subset of AI that enables systems to learn and improve from experience without being
explicitly programmed.

74. What is Learning in AI?


The process by which an AI system improves its performance by acquiring knowledge or
skills from data or experience.

75. What is Reinforcement Learning?


A learning paradigm where an agent learns to take actions by receiving rewards or penalties
based on outcomes.

76. Define Supervised Learning.


A type of machine learning where the model is trained on labeled data to predict outcomes
or classify data.

77. Define Unsupervised Learning.


A machine learning approach that finds hidden patterns or groupings in unlabeled data.

78. What is Semi-Supervised Learning?


A method that uses a small amount of labeled data with a large amount of unlabeled data
to improve learning accuracy.

79. What is Active Learning?


A technique where the learning algorithm chooses specific data from which it learns, often
asking queries to an oracle (e.g., human annotator).

80. Define Deep Learning.


A subset of machine learning that uses multi-layered neural networks to model complex
patterns in large datasets.

81. What is Overfitting in AI?


When a model learns the training data too well, including noise, resulting in poor
performance on new, unseen data.
82. What is Underfitting in AI?
When a model is too simple to capture the underlying pattern of the data, resulting in poor
training and test performance.

83. What is a Hypothesis in AI?


A candidate model or function that the learning algorithm generates to explain observed
data.

84. Define Bias in Machine Learning.


An error due to incorrect assumptions in the learning algorithm, leading to a model that is
consistently wrong in one direction.

85. Define Variance in Machine Learning.


A measure of how much the model's prediction changes when using different training data;
high variance indicates overfitting.

86. Define Training Set.


A dataset used to train a machine learning model by allowing it to learn patterns or
relationships.

87. Define Test Set.


A separate dataset used to evaluate the performance of a trained model on new, unseen
data.

88. What is Validation Set?


A dataset used during model training to tune hyperparameters and avoid overfitting.

89. What is Cross-validation?


A technique for assessing model performance by splitting the data into multiple parts and
rotating training and validation sets.
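
A compact sketch of k-fold cross-validation, assuming scikit-learn and its bundled iris dataset:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on 4 folds, validate on the 5th, then rotate.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores.mean())  # average validation accuracy across the folds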

90. What is Bootstrapping in AI?


A statistical technique involving random sampling with replacement used to estimate
model accuracy or variance.

91. Define Concept Learning.


Learning the definition of a general category or concept from examples and
counterexamples.

92. What is Version Space in AI?


A framework representing all hypotheses consistent with the observed training examples.
93. Define Hypothesis Space.
The set of all possible hypotheses that a learning algorithm can choose from to explain
data.

94. What is a Target Function?


The actual function or pattern the model tries to learn and approximate from the data.

95. Define Concept Drift.


The change in data distribution over time that affects the performance of a learning model.

96. What is Online Learning?


A model that learns continuously from new data arriving over time, instead of training all at
once.

97. Define Batch Learning.


A learning method where the model is trained using the entire dataset at once.

98. Define Lazy Learning.


A type of learning that delays generalization until a query is made, e.g., k-nearest
neighbors.
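
A tiny sketch of lazy learning with k-nearest neighbours on invented 2-D points; no model is built until a query arrives:

import math
from collections import Counter

def knn_predict(train, query, k=3):
    # Lazy learning: just store the examples; all work happens at query time.
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Invented 2-D points labelled 'a' or 'b'.
train = [((1, 1), "a"), ((1, 2), "a"), ((5, 5), "b"), ((6, 5), "b"), ((2, 1), "a")]
print(knn_predict(train, (1.5, 1.5)))  # 'a'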

99. Define Eager Learning.


Learning that builds a general model immediately during training, such as decision trees or
neural networks.

100. What is Transfer Learning?


The process of improving a model’s performance on a target task by leveraging knowledge
from a related task previously learned.
