AIML

The document provides an overview of Artificial Intelligence (AI) and its applications across various fields such as healthcare, finance, and transportation. It covers key concepts including AI characteristics, search algorithms, types of agents, uncertainty, inference, and machine learning techniques. Additionally, it discusses neural networks, ensemble learning, and the differences between shallow and deep networks.


Unit 1

AI & Its Applications

Artificial Intelligence (AI) is the simulation of human intelligence in machines programmed to think, reason, and learn.

Applications of AI include:

 Healthcare: Diagnosis, robotic surgery, drug discovery.

 Finance: Fraud detection, algorithmic trading.

 Transportation: Autonomous vehicles, traffic management.

 Customer Service: Chatbots, virtual assistants.

 Manufacturing: Predictive maintenance, robotics.

 Education: Personalized learning, automated grading.

Characteristics of AI & AI Agents

Characteristics of AI:

 Learning: Ability to improve performance over time.

 Reasoning: Drawing conclusions from data or rules.

 Problem Solving: Finding solutions to complex tasks.

 Perception: Processing sensory input (e.g., vision, sound).

 Autonomy: Acting without human intervention.

AI Agent: An autonomous entity that perceives its environment and takes actions to maximize its performance.

Environment & Task Environment

 Environment: The external world in which the AI agent operates.

 Task Environment: The specific setting and conditions for a task, defined by:

o Performance measure
o Environment
o Actuators
o Sensors

(Often called the PEAS framework)

Adversarial Search / Hill Climbing Algorithm

Adversarial Search:

 Used in competitive environments, such as games.

 Involves the minimax algorithm and alpha-beta pruning (see the sketch below).

 Assumes a rational opponent.

 Example: Chess, Tic-Tac-Toe.
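A minimal Python sketch of minimax with alpha-beta pruning over a hand-made game tree (nested lists whose leaves are utility values, chosen purely for illustration):

```python
# Minimax with alpha-beta pruning over an explicit game tree given as nested
# lists; leaves are utility values from the maximizing player's point of view.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):          # leaf node: return its static utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: MIN will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:                   # prune: MAX already has a better option
            break
    return value

# Depth-2 toy game: MAX picks a branch, then MIN picks the worst leaf for MAX.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```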

Hill Climbing Algorithm:

 A local search algorithm.

 Starts with a solution and iteratively moves to a neighbor with a better value.

 Simple but can get stuck in local maxima.
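A minimal Python sketch of hill climbing, assuming the problem supplies hypothetical neighbours() and score() functions (here a toy one-dimensional objective):

```python
import random

def hill_climb(initial, neighbours, score, max_steps=1000):
    """Steepest-ascent hill climbing: keep moving to the best neighbour."""
    current = initial
    for _ in range(max_steps):
        best = max(neighbours(current), key=score)
        if score(best) <= score(current):   # no better neighbour: local maximum
            break
        current = best
    return current

# Toy usage: maximise f(x) = -(x - 3)^2 over integer steps.
result = hill_climb(
    initial=random.randint(-10, 10),
    neighbours=lambda x: [x - 1, x + 1],
    score=lambda x: -(x - 3) ** 2,
)
print(result)  # converges to 3 on this unimodal toy objective
```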

Informed / Uninformed Search Algorithm

Uninformed Search:

 No additional information about goal direction.

 Examples:

o Breadth-First Search (BFS)

o Depth-First Search (DFS)

o Uniform Cost Search

Informed Search:

 Uses heuristics to guide search.

 Typically more efficient than uninformed search (an A* sketch follows the examples below).


 Examples:

o A* Search

o Greedy Best-First Search
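A minimal Python sketch of A* search, assuming the graph is given as a dict of step costs and the heuristic is supplied by the problem (a zero heuristic reduces it to Uniform Cost Search):

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search over {node: {neighbour: step_cost}}; returns (path, cost)."""
    frontier = [(heuristic(start, goal), 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for neighbour, cost in graph[node].items():
            if neighbour not in visited:
                g_new = g + cost
                f_new = g_new + heuristic(neighbour, goal)   # f(n) = g(n) + h(n)
                heapq.heappush(frontier, (f_new, g_new, neighbour, path + [neighbour]))
    return None, float("inf")

# Toy usage with a zero heuristic (admissible, so the returned path is optimal).
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(a_star(graph, "A", "D", heuristic=lambda n, goal: 0))  # (['A', 'B', 'C', 'D'], 3)
```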

Agents & Their Types / Ideal Rational Agent

Types of Agents:

1. Simple Reflex Agents – React to current percept.

2. Model-Based Reflex Agents – Track the world state.

3. Goal-Based Agents – Act to achieve goals.

4. Utility-Based Agents – Maximize utility or satisfaction.

5. Learning Agents – Improve performance over time.

Ideal Rational Agent:

 One that acts to maximize its expected performance given the knowledge and percepts it has.

Unit 2

1. ✅ Uncertainty / Inference

Uncertainty refers to situations where the outcome is not known with certainty — common in real-world AI problems.

Inference is the process of using available evidence to reason about unknowns (e.g., predicting hidden variables).

🔹 Example: If we know the grass is wet, can we infer it rained?

2. ✅ Bayes Rule / Bayesian Inference

Bayes' Rule: P(A|B) = P(B|A) · P(A) / P(B). It updates the probability of a hypothesis A after observing evidence B.

Bayesian Inference: Applying Bayes' Rule to revise prior beliefs into posterior beliefs as new evidence arrives (e.g., diagnosis, spam filtering).
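A small worked example of Bayes' Rule for the wet-grass question above, using made-up illustrative probabilities:

```python
# Posterior P(Rain | Wet) via Bayes' Rule; all numbers are illustrative only.
p_rain = 0.2                      # prior P(Rain)
p_wet_given_rain = 0.9            # likelihood P(Wet | Rain)
p_wet_given_no_rain = 0.3         # P(Wet | No Rain), e.g. sprinklers

# Total probability of the evidence: P(Wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Bayes' Rule: P(Rain | Wet) = P(Wet | Rain) * P(Rain) / P(Wet)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.429
```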
3. ✅ Differentiate Logical & Probabilistic Reasoning

Feature           | Logical Reasoning                | Probabilistic Reasoning
Type of reasoning | Deterministic (true/false)       | Uncertain (uses probabilities)
Outcome           | Exact conclusion                 | Likely outcomes
Example           | "If A is true, then B is true."  | "If A happens, B has 80% chance."
Used in           | Rule-based systems               | AI, robotics, decision-making

4. ✅ Why is Hybrid Bayesian Network Called So?

A Hybrid Bayesian Network includes both:

 Discrete variables (like Yes/No),

 and Continuous variables (like temperature, speed).

It’s called "hybrid" because it combines both types of variables in one model.

🔹 Useful in real-world systems where data is a mix (e.g., medical or engineering systems).
Unit 3
Here are 2-mark answers (brief and to the point) for each of the Machine Learning topics:

1. Niche of Machine Learning (ML)

Machine Learning specializes in developing algorithms that learn patterns from data and make predictions or decisions without being explicitly programmed, applicable in areas like image recognition, NLP, and recommendation systems.

2. Difference b/w Supervised & Unsupervised Learning

 Supervised Learning: Learns from labeled data (input-output pairs). Example: regression, classification.

 Unsupervised Learning: Learns from unlabeled data to find hidden patterns. Example: clustering, dimensionality reduction.

3. Logic Behind Gaussian Processes / Naïve Bayes

 Gaussian Processes: A non-parametric model that defines a distribution over functions, used in regression; predictions are made using mean and covariance functions based on observed data.

 Naïve Bayes: A probabilistic classifier based on Bayes’ theorem, assuming independence among features, making it simple yet effective for text classification and spam filtering.
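A minimal scikit-learn sketch of a Gaussian Naïve Bayes classifier on a tiny made-up dataset (assumes scikit-learn is installed):

```python
from sklearn.naive_bayes import GaussianNB

X = [[1.0, 2.1], [1.2, 1.9], [3.8, 4.0], [4.1, 3.9]]  # two features per sample
y = [0, 0, 1, 1]                                       # class labels

model = GaussianNB()
model.fit(X, y)                     # estimates per-class feature means and variances
print(model.predict([[1.1, 2.0]]))  # -> [0], the class whose Gaussians fit best
```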

4. Random Forest / Classification Rule

 Random Forest: An ensemble of decision trees trained on different data subsets and features; uses majority voting to improve accuracy and reduce overfitting (a usage sketch follows this list).

 Classification Rule: A condition-based rule (e.g., IF-THEN) that maps input features to specific classes, used in rule-based classifiers like RIPPER or decision trees.
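A minimal scikit-learn sketch of a Random Forest classifier on toy data (in practice a proper train/test split would be used):

```python
from sklearn.ensemble import RandomForestClassifier

X = [[0, 0], [1, 1], [0, 1], [1, 0]]   # toy feature vectors
y = [0, 1, 1, 0]                        # class labels

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)                 # each tree is trained on a bootstrap sample
print(forest.predict([[1, 1]]))  # prediction is the majority vote of the trees
```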
5. Gradient Descent / Single & Multiple Variables

 Gradient Descent: An optimization algorithm that updates model parameters by moving in the direction of the steepest descent of the loss function to minimize it (see the sketch after this list).

 Single vs. Multiple Variables:

o Single-variable GD optimizes a model with a single feature weight (θ).

o Multiple-variable GD updates a whole vector of feature weights simultaneously.
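A minimal NumPy sketch of batch gradient descent with multiple feature weights, using synthetic linear-regression data for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                   # 100 samples, 3 features
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=100)

theta = np.zeros(3)                             # all weights updated simultaneously
lr = 0.1                                        # learning rate
for _ in range(500):
    grad = (2 / len(y)) * X.T @ (X @ theta - y)  # gradient of the mean squared error
    theta -= lr * grad                           # step in the steepest-descent direction
print(theta.round(2))                            # close to [ 2.  -1.   0.5]
```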

6. Difference b/w Probabilistic Discriminative / Generative Model

 Discriminative Model: Models the decision boundary directly by learning P(Y|X). Example: Logistic Regression.

 Generative Model: Models how data is generated, learning P(X|Y) and P(Y). Example: Naïve Bayes.
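A minimal scikit-learn sketch contrasting the two kinds of model on toy data (Logistic Regression as the discriminative model, Gaussian Naïve Bayes as the generative one):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = [0, 0, 1, 1]

discriminative = LogisticRegression().fit(X, y)  # learns the decision boundary P(Y|X)
generative = GaussianNB().fit(X, y)              # learns P(X|Y) and P(Y) per class

print(discriminative.predict_proba([[0.5, 0.5]]))  # P(Y|X) estimated directly
print(generative.predict_proba([[0.5, 0.5]]))      # P(Y|X) obtained via Bayes' rule
```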

Unit 4

1. Bagging / Boosting / Stacking / Voting

 Bagging (Bootstrap Aggregating): A technique that trains multiple models independently on different bootstrapped subsets of the data and averages (or votes) their predictions to reduce variance and prevent overfitting (see the sketch after this list).

 Boosting: An ensemble method that trains models sequentially, where each new model focuses on correcting the errors made by previous models, improving performance by reducing bias.

 Stacking: Combines predictions from multiple models (base learners) using a meta-model that learns how best to combine them for improved accuracy.

 Voting: An ensemble approach where multiple models vote on the final prediction; can be hard voting (majority vote) or soft voting (average predicted probabilities).
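A minimal scikit-learn sketch of bagging and soft voting on toy data (boosting and stacking follow the same fit/predict pattern with their own classes):

```python
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Bagging: many trees on bootstrapped subsets, predictions aggregated by voting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
bagging.fit(X, y)

# Soft voting: average the predicted probabilities of different model types.
voting = VotingClassifier(
    estimators=[("lr", LogisticRegression()), ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier())],
    voting="soft",
)
voting.fit(X, y)
print(bagging.predict([[2, 2]]), voting.predict([[0, 1]]))
```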

2. Ensemble Learning / Gaussian Mixture Model

 Ensemble Learning: A method that combines multiple models (often weak learners) to produce a stronger, more accurate model, increasing robustness and reducing overfitting.

 Gaussian Mixture Model (GMM): A probabilistic model that assumes data is generated from a mixture of several Gaussian distributions, used in clustering and density estimation.
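A minimal scikit-learn sketch of clustering with a Gaussian Mixture Model on synthetic two-cluster data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)),   # cluster around (0, 0)
               rng.normal(5, 1, size=(50, 2))])  # cluster around (5, 5)

gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(X)     # EM fit, then assign each point to its likeliest component
print(gmm.means_.round(1))      # estimated component means near (0, 0) and (5, 5)
```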

3. When Does an Algorithm Become Unstable?

An algorithm becomes unstable when small changes in the training data lead to large changes in the model's predictions. Examples include high-variance models like decision trees or k-nearest neighbors.

Unit 5

Here are 2-mark answers (concise and clear) for the neural network-related topics:

1. MultiLayer Perceptron / Unit Saturation

 MultiLayer Perceptron (MLP): A feedforward neural network with one or more hidden layers, used for classification and regression tasks.

 Unit Saturation: Happens when the neuron’s output gets stuck at extreme values (like 0 or 1 in sigmoid), causing very small gradients and slowing training.

2. Neural Networks / Activation Function


 Neural Networks: Computing systems inspired by the human brain, made up of layers of interconnected nodes (neurons) that learn to map inputs to outputs.

 Activation Function: Introduces non-linearity into the network, allowing it to learn complex patterns. Common examples: ReLU, sigmoid, tanh.
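A minimal NumPy sketch of the activation functions mentioned above:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)       # 0 for negative inputs, identity otherwise

def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # squashes input into (0, 1)

def tanh(x):
    return np.tanh(x)             # squashes input into (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), sigmoid(x).round(2), tanh(x).round(2))
```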

3. Differentiate Human Brain & Computer / Dropout

 Human Brain vs. Computer:

o Brain: Massively parallel, adaptive, uses biological neurons.

o Computer: Sequential or parallel, digital, uses logical circuits and binary data.

 Dropout: A regularization technique where random neurons are turned off during training to prevent overfitting and encourage generalization.

4. Network Training / Error Backpropagation

 Network Training: The process of adjusting weights using input data and a loss function to minimize prediction error through iterative optimization (e.g., gradient descent).

 Error Backpropagation: An algorithm that computes gradients of the loss function w.r.t. each weight by propagating the error backward through the network.
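A minimal NumPy sketch of training a one-hidden-layer network on XOR with error backpropagation (sigmoid activations and squared-error loss chosen purely for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule pushes the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates of weights and biases
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0] as training converges
```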

5. Deep Feedforward Network

A type of neural network where information flows only in one direction, from input to output through multiple hidden layers, with no cycles or feedback loops.

6. Shallow Networks to Deep Networks


 Shallow Networks: Networks with 1 or 2 hidden layers, limited in learning complex patterns.

 Deep Networks: Contain many hidden layers, enabling them to learn hierarchical and abstract representations from data.

